\section{Introduction}\label{intro} For quantum affine algebras, there are two types of loop realizations: 1) the FRTS realization and 2) the Drinfeld realization. For the case of $U_q(\hat{gl}(n))$ \cite{DF}, the connection between these two realizations is established via the Gauss decomposition of L-operators. This method has recently been used to derive Drinfeld realizations for quantum affine superalgebras \cite{GZ}\cite{Z}\cite{CWWZ}. Although such a connection was established for the case of $U_q(\hat{osp}(1,2))$ \cite{GZ}, two important aspects of the theory still need to be addressed. The first aspect is the hidden symmetry of the FRTS realization: the generating L-operators carry a hidden symmetry implied by the hidden symmetry of the R-matrix used in the FRTS realization. This hidden symmetry implies that the Drinfeld realization is a quotient of the FRTS realization. The other aspect is the q-Serre relation. Let $E(z)$, $F(z)$ and $H(z)$ be the generating current operators of the affine Lie superalgebra $\hat{osp}(1,2)$. We know that the defining relations include the Serre relations: $$ E(z)(\{E(w), E(x)\})= (\{E(w), E(x)\})E(z), $$ $$ F(z)(\{F(w), F(x)\})= (\{F(w), F(x)\})F(z), $$ which are clearly not implied by the relations: $$(z-w)E(z)E(w)=(w-z)E(w)E(z),$$ $$(z-w)F(z)F(w)=(w-z)F(w)F(z).$$ In this respect, the Drinfeld realization of $U_q(\hat{osp}(1,2))$ in \cite{GZ} is incomplete, in the sense that the Serre relation is missing. With the help of the hidden symmetry, we derive a q-Serre relation for the generating current operators of $U_q(\hat{osp}(1,2))$. This paper is organized as follows: in Section 2, we recall the main results about $U_q(\hat{osp}(1,2))$ in \cite{GZ}; we present our main results in Section 3. 
\section{The FRTS realization and the Drinfeld realization of $U_q(\hat{osp}(1,2))$} In this section, we recall the main results and the notation of \cite{GZ}. The FRTS realization of affine superalgebras starts with a super R-matrix $R(z)\in End(V\otimes V)$, where $V$ is a ${\bf Z}_2$ graded vector space, and $R(z)$ satisfies the condition that $R(z)_{\alpha\beta,\alpha'\beta'}\neq 0$ only when $[\alpha']+[\beta']+[\alpha]+[\beta]=0$ mod $2$, together with the ${\bf Z}_2$ graded Yang-Baxter equation (YBE) \begin{equation}\label{rrr} R_{12}(z)R_{13}(zw)R_{23}(w)=R_{23}(w)R_{13}(zw)R_{12}(z). \end{equation} The multiplication for the tensor product is defined for homogeneous elements $a,~ b,~ a'$, $b'$ by \begin{equation} (a\otimes b)(a'\otimes b')=(-1)^{[b][a']}\,(aa'\otimes bb'), \end{equation} where $[a]\in{\bf Z}_2$ denotes the grading of the element $a$. The FRTS realization of quantum affine superalgebras is given as follows. \begin{Definition}\label{rs} Let $R(\frac{z}{w})$ be an R-matrix satisfying the ${\bf Z}_2$ graded YBE (\ref{rrr}). The FRTS superalgebra $U({R})$ is generated by invertible $L^\pm(z)$, satisfying \begin{eqnarray} R({z\over w})L_1^\pm(z)L_2^\pm(w)&=&L_2^\pm(w)L_1^\pm(z)R({z\over w}),\nonumber\\ R({z_+\over w_-})L_1^+(z)L_2^-(w)&=&L_2^-(w)L_1^+(z)R({z_-\over w_+}),\label{super-rs} \end{eqnarray} where $L_1^\pm(z)=L^\pm(z)\otimes 1$, $L_2^\pm(z)=1\otimes L^\pm(z)$ and $z_\pm=zq^{\pm {c\over 2}}$. For the first formula of (\ref{super-rs}), the expansion direction of $R({z\over w})$ can be chosen in $z\over w$ or $w\over z$; for the second formula, the expansion direction is in $z\over w$. \end{Definition} The algebra $U({R})$ is a graded Hopf algebra: its coproduct is defined by \begin{equation} \Delta(L^\pm(z))=L^\pm(zq^{\pm 1\otimes {c\over 2}})\stackrel{.}{\otimes} L^\pm(zq^{\mp {c\over 2}\otimes 1}), \end{equation} and its antipode is \begin{equation} S(L^\pm(z))=L^\pm(z)^{-1}. 
\end{equation} For the case of $U_q(\hat{osp}(1,2))$, the FRTS realization is given with the super R-matrix $R({z\over w})\in End(V\otimes V)$, where $V$ is the 3-dimensional vector representation of $U_q({osp}(1,2))$. In $V$, we fix a set of basis vectors $v_1,v_2$ and $v_3$, where $v_1,\;v_3$ are graded 0 (mod 2) and $v_2$ is graded 1 (mod 2). With this basis, the R-matrix is given as: \begin{equation} R({z\over w})=\left( \begin{array}{ccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & a & 0 & b & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & d & 0 & c & 0 & r & 0 & 0\\ 0 & f & 0 & a & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & g & 0 & e & 0 & c & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & a & 0 & b & 0\\ 0 & 0 & s & 0 & g & 0 & d & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & f & 0 & a & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right), \label{r12} \end{equation} where \begin{eqnarray} &&a=\frac{q(z-w)}{zq^2-w},~~~~b=\frac{w(q^2-1)}{zq^2-w},~~~~ c=\frac{q^{1/2}w(q^2-1)(z-w)}{(zq^2-w)(zq^3-w)},\nonumber\\ &&d=\frac{q^2(z-w)(zq-w)}{(zq^2-w)(zq^3-w)},~~~~ e=a-\frac{zw(q^2-1)(q^3-1)}{(zq^2-w)(zq^3-w)},\nonumber\\ &&f=\frac{z(q^2-1)}{zq^2-w},~~~~ g=-\frac{q^{5/2}z(q^2-1)(z-w)}{(zq^2-w)(zq^3-w)},\nonumber\\ &&r=\frac{w(q^2-1)[q^3z+q(z-w)-w]}{(zq^2-w)(zq^3-w)},~~~~ s=\frac{z(q^2-1)[q^3z+q^2(z-w)-w]}{(zq^2-w)(zq^3-w)}. \end{eqnarray} We also have that $R_{21}(\frac{z}{w})=R(\frac{w}{z})^{-1}$. 
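The unitarity property $R_{21}(\frac{z}{w})=R(\frac{w}{z})^{-1}$ can be checked numerically. The following is a small self-contained sketch (plain Python; the sample values of $q$ and $x=z/w$ are arbitrary) that builds the matrix (\ref{r12}) with $w=1$, forms $R_{21}(x)=PR(x)P$ using the graded permutation $P(v_\alpha\otimes v_\beta)=(-1)^{[\alpha][\beta]}v_\beta\otimes v_\alpha$, and verifies that $R_{21}(x)\,R(1/x)$ is the identity:

```python
import math

def R(u, q):
    # entries of the R-matrix; they depend only on x = z/w, so set w = 1, z = u
    D1 = u*q**2 - 1
    D2 = u*q**3 - 1
    a = q*(u - 1)/D1
    b = (q**2 - 1)/D1
    f = u*(q**2 - 1)/D1
    c = math.sqrt(q)*(q**2 - 1)*(u - 1)/(D1*D2)
    d = q**2*(u - 1)*(u*q - 1)/(D1*D2)
    e = a - u*(q**2 - 1)*(q**3 - 1)/(D1*D2)
    g = -q**2*math.sqrt(q)*u*(q**2 - 1)*(u - 1)/(D1*D2)
    r = (q**2 - 1)*(q**3*u + q*(u - 1) - 1)/(D1*D2)
    s = u*(q**2 - 1)*(q**3*u + q**2*(u - 1) - 1)/(D1*D2)
    return [[1,0,0,0,0,0,0,0,0],
            [0,a,0,b,0,0,0,0,0],
            [0,0,d,0,c,0,r,0,0],
            [0,f,0,a,0,0,0,0,0],
            [0,0,g,0,e,0,c,0,0],
            [0,0,0,0,0,a,0,b,0],
            [0,0,s,0,g,0,d,0,0],
            [0,0,0,0,0,f,0,a,0],
            [0,0,0,0,0,0,0,0,1]]

# graded permutation P(v_a x v_b) = (-1)^{[a][b]} v_b x v_a, with [v_1]=[v_3]=0, [v_2]=1;
# rows/columns are indexed by pairs (a,b) in the order 11,12,13,21,22,23,31,32,33
par = [0, 1, 0]
P = [[0.0]*9 for _ in range(9)]
for A in range(3):
    for B in range(3):
        P[3*B + A][3*A + B] = float((-1)**(par[A]*par[B]))

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(9)) for j in range(9)] for i in range(9)]

q0, x0 = 1.3, 2.7                        # generic sample point, away from poles
R21 = matmul(matmul(P, R(x0, q0)), P)    # R_21(x) = P R(x) P
prod = matmul(R21, R(1/x0, q0))          # should equal the identity matrix
err = max(abs(prod[i][j] - (i == j)) for i in range(9) for j in range(9))
assert err < 1e-10, err
```

The check passes for other generic values of $q$ and $x$ away from the poles $x=q^{-2},q^{-3}$; note that the sign of the graded permutation enters only through the odd-odd entry $v_2\otimes v_2$.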
To derive the Drinfeld current realization, we need the Gauss decomposition of the L-operators $L^\pm(z)$, which is given as \begin{eqnarray} L^\pm(z)&=&\left ( \begin{array}{ccc} 1 & 0 & 0\\ e^\pm_1(z) & 1 & 0\\ e^\pm_{3,1}(z) & e^\pm_2(z) & 1 \end{array} \right ) \left ( \begin{array}{ccc} k^\pm_1(z) & 0 & 0\\ 0 & k^\pm_2(z) & 0\\ 0 & 0 & k^\pm_3(z) \end{array} \right ) \left ( \begin{array}{ccc} 1 & f^\pm_1(z) & f^\pm_{1,3}(z) \\ 0 & 1 & f^\pm_2(z)\\ 0 & 0 & 1 \end{array} \right ) .\label{l+-} \end{eqnarray} Let $\{X,Y\}\equiv XY+YX$ denote the anti-commutator, and define the formal series \begin{equation} \delta(z)=\sum_{l\in {\bf Z}}\,z^l. \end{equation} Let $X^\pm_i(z)$ be defined as \begin{eqnarray} X^+_i(z)&=&f^+_{i,i+1}(z_+)-f^-_{i,i+1}(z_-),\nonumber\\ X^-_i(z)&=&e^-_{i+1,i}(z_+)-e^+_{i+1,i}(z_-), \end{eqnarray} where $z_\pm=zq^{\pm{c\over 2}}$. Let \begin{equation} X^\pm(z)=(q-q^{-1})\left[X^\pm_1(z)+X^\pm_2(zq)\right].\label{x=x1+x2} \end{equation} In \cite{GZ}, the following commutation relations are derived. 
\begin{Theorem} \begin{eqnarray} k^\pm_1(z)k^\pm_1(w)&=&k^\pm_1(w)k^\pm_1(z),\nonumber\\ k^+_1(z)k^-_1(w)&=&k^-_1(w)k^+_1(z),\nonumber\\ k^\pm_2(z)k^\pm_2(w)&=&k^\pm_2(w)k^\pm_2(z),\nonumber\\ k^\pm_3(z)k^\pm_3(w)&=&k^\pm_3(w)k^\pm_3(z),\nonumber\\ k^+_3(z)k^-_3(w)&=&k^-_3(w)k^+_3(z),\nonumber\\ k^\pm_1(z)k^\pm_2(w)&=&k^\pm_2(w)k^\pm_1(z),\nonumber\\ \frac{z_\pm-w_\mp}{z_\pm q^2-w_\mp} k^\pm_1(z)k^\mp_2(w)&=& \frac{z_\mp-w_\pm}{z_\mp q^2-w_\pm} k^\mp_2(w)k^\pm_1(z),\nonumber\\ k^\pm_1(z)k^\pm_3(w)^{-1}&=&k^\pm_3(w)^{-1}k^\pm_1(z),\nonumber\\ \frac{(z_\mp-w_\pm)(z_\mp q-w_\pm)}{(z_\mp q^2-w_\pm)(z_\mp q^3-w_\pm)} k^\pm_1(z)k^\mp_3(w)^{-1}&=& \frac{(z_\pm-w_\mp)(z_\pm q-w_\mp)}{(z_\pm q^2-w_\mp)(z_\pm q^3-w_\mp)} k^\mp_3(w)^{-1}k^\pm_1(z),\nonumber\\ \frac{z_\pm-w_\mp q}{z_\pm q-w_\mp}k^\pm_2(z)k^\mp_2(w)&=& \frac{z_\mp-w_\pm q}{z_\mp q-w_\pm}k^\mp_2(w)k^\pm_2(z),\nonumber\\ k^\pm_2(z)^{-1}k^\pm_3(w)^{-1}&=&k^\pm_3(w)^{-1}k^\pm_2(z)^{-1},\nonumber\\ \frac{z_\pm-w_\mp}{z_\pm q^2-w_\mp} k^\pm_2(z)^{-1}k^\mp_3(w)^{-1}&=& \frac{z_\mp-w_\pm}{z_\mp q^2-w_\pm} k^\mp_3(w)^{-1}k^\pm_2(z)^{-1},\label{k1k2k3} \end{eqnarray} \begin{eqnarray} k^\pm_1(z)X^-(w)k^\pm_1(z)^{-1}&=&\frac{z_\pm q^2-w}{q(z_\pm-w)} X^-(w),\nonumber\\ k^\pm_1(z)^{-1}X^+(w)k^\pm_1(z)&=&\frac{z_\mp q^2-w}{q(z_\mp-w)} X^+(w),\nonumber\\ k^\pm_2(z)X^-(w)k^\pm_2(z)^{-1}&=&\frac{(z_\pm-w q^2)(z_\pm q-w)}{q(z_\pm-w) (z_\pm-wq)}X^-(w),\nonumber\\ k^\pm_2(z)^{-1}X^+(w)k^\pm_2(z)&=&\frac{(z_\mp-w q^2)(z_\mp q-w)}{q(z_\mp-w) (z_\mp-wq)}X^+(w),\nonumber\\ k^\pm_3(z)X^-(w)k^\pm_3(z)^{-1}&=&\frac{z_\pm-wq^3}{q(z_\pm-wq)} X^-(w),\nonumber\\ k^\pm_3(z)^{-1}X^+(w)k^\pm_3(z)&=&\frac{z_\mp-wq^3}{q(z_\mp-wq)} X^+(w),\label{x+-k1k2k3} \end{eqnarray} \begin{eqnarray} \frac{z-wq}{zq-w}X^-(z)X^-(w)+\frac{z-wq^2}{zq^2-w}X^-(w)X^-(z) &=&0,\nonumber\\ \frac{z-wq^2}{zq^2-w}X^+(z)X^+(w)+\frac{z-wq}{zq-w}X^+(w)X^+(z) &=&0,\label{x++x--} \end{eqnarray} \begin{eqnarray} \{X^-(w), 
X^+(z)\}&=&\frac{-1}{q-q^{-1}}\left[\delta(\frac{z}{w}q^c)\left( (1+q^{-{1\over 2}}-q^{1\over 2}) k^+_2(z_+) k^+_1(z_+)^{-1}-k^+_3(z_+q)k^+_2(z_+q)^{-1}\right)\right.\nonumber\\ & &\left.-\delta(\frac{z}{w}q^{-c})\left(k^-_2(w_+)k^-_1(w_+)^{-1} -(1+q^{-{1\over 2}}-q^{1\over 2})k^-_3(w_+q)k^-_2(w_+q)^{-1}\right)\right].\nonumber\\ \label{x+x-} \end{eqnarray} \end{Theorem} In \cite{GZ}, the following definition is proposed: \begin{eqnarray} &&\phi_i(z)=k^+_{i+1}(z)k^+_i(z)^{-1},\nonumber\\ &&\psi_i(z)=k^-_{i+1}(z)k^-_i(z)^{-1},~~i=1,2,\nonumber\\ &&\phi(z)=(1+q^{-{1\over 2}}-q^{1\over 2})\phi_1(z)-\phi_2(zq),\nonumber\\ &&\psi(z)=\psi_1(z)-(1+q^{-{1\over 2}}-q^{1\over 2})\psi_2(zq).\label{phi-psi} \end{eqnarray} Then we have the following \cite{GZ}. \begin{Theorem} $q^{\pm{c\over 2}},\; X^\pm(z),\;\phi(z),\;\psi(z)$ give the defining relations of $U_q(\hat{osp}(1,2))$. More precisely, $U_q(\hat{osp}(1,2))$ is an associative algebra with unit 1 and the Drinfeld generators: $X^\pm(z),~\phi(z)$ and $\psi(z)$, a central element $c$ and a nonzero complex parameter $q$. $\phi(z)$ and $\psi(z)$ are invertible. The gradings of the generators are: $[X^\pm(z)]=1$ and $[\phi(z)]=[\psi(z)]=[c]=0$. 
The relations are given by \begin{eqnarray*} \phi(z)\phi(w)&=&\phi(w)\phi(z),\nonumber\\ \psi(z)\psi(w)&=&\psi(w)\psi(z),\nonumber\\ \phi(z)\psi(w)\phi(z)^{-1}\psi(w)^{-1}&=&\frac{(z_+q-w_-)(z_--w_+q) (z_+-w_-q^2)(z_-q^2-w_+)}{(z_+-w_-q)(z_-q-w_+)(z_+q^2-w_-) (z_--w_+q^2)}, \end{eqnarray*} \begin{eqnarray*} \phi(z)X^-(w)\phi(z)^{-1}&=&\frac{(z_+-w q^2)(z_+ q-w)}{(z_+ q^2-w) (z_+-wq)}X^-(w),\nonumber\\ \phi(z)^{-1}X^+(w)\phi(z)&=&\frac{(z_--w q^2)(z_- q-w)}{(z_- q^2-w) (z_--wq)}X^+(w),\nonumber\\ \psi(z)X^-(w)\psi(z)^{-1}&=&\frac{(z_--w q^2)(z_- q-w)}{(z_- q^2-w) (z_--wq)}X^-(w),\nonumber\\ \psi(z)^{-1}X^+(w)\psi(z)&=&\frac{(z_+-w q^2)(z_+ q-w)}{(z_+ q^2-w) (z_+-wq)}X^+(w),\label{x-phipsi} \end{eqnarray*} \begin{equation} \{X^+(z), X^-(w)\}=\frac{1}{q-q^{-1}}\left[ \delta(\frac{w}{z}q^{c})\psi(w_+) -\delta(\frac{w}{z}q^{-c})\phi(z_+)\right].\label{x+x-=psiphi} \end{equation} \begin{eqnarray} \frac{z-wq}{zq-w}X^-(z)X^-(w)+\frac{z-wq^2}{zq^2-w}X^-(w)X^-(z) &=&0,\nonumber\\ \frac{z-wq^2}{zq^2-w}X^+(z)X^+(w)+\frac{z-wq}{zq-w}X^+(w)X^+(z) &=&0.\label{Fx++x--} \end{eqnarray} \end{Theorem} \section{The hidden symmetry of the FRTS realization and the q-Serre relation of $U_q(\hat {osp}(1,2))$} As we know, in the classical case there is a hidden symmetry on the three-dimensional representation $V$ of $\hat{osp}(1,2)$, which comes from the fact that $V$ is self-dual. It is, therefore, natural to expect the same in the quantum case. One observation about the results of \cite{GZ} recalled in the previous section is that the FRTS realization has far more generating current operators than the Drinfeld realization. This also clearly indicates that there should be a hidden symmetry of the L-operators, which can help us to resolve this problem. Let us start with the Heisenberg subalgebra of the FRTS realization. 
Through calculation (see also the formulas in the previous section), it is not difficult to derive that \begin{Proposition} $ k_1^{\pm}(z) k_3^{\pm}(zq^3)$ and $ k_2^{\pm}(z)( k_1^{\pm}(zq^{-1})(k_1^{\pm} (zq^{-2}))^{-1})^{-1} $ commute with $X^{\pm}_i(z)$. \end{Proposition} Clearly, from the point of view of the theory of the universal R-matrix \cite{D}\cite{FR}\cite{GZ}\cite{KT}, the image of the universal R-matrix of $U_q(\hat {osp}(1,2))$ on $V\otimes V$ is actually $f(z)R(z)$, where $f(z)$ is an analytic function. Therefore, the $L^{\pm}(z)$ in the definition of $U(R)$ already differ by a multiple of an extra copy of a Heisenberg algebra, which is the reason why in \cite{DF} we deal with $U_q(\hat {gl}(n))$ rather than $U_q(\hat {sl}(n))$. Thus, we know that $U(R)$ is actually bigger. The difference comes from the Heisenberg subalgebra generated by $ k_1^{\pm}(z) k_3^{\pm}(zq^3)$. We can also derive the following. \begin{Proposition} $k_2^{\pm}(z)(k_1^{\pm}(z))^{-1}( k_3^{\pm}(zq) (k_2^{\pm}(zq))^{-1})^{-1}$ are central in $U(R)$. \end{Proposition} From the point of view of the universal R-matrix, we can see that these two central current operators are nothing but identity operators. This shows that we can take either $k_2^{\pm}(z)(k_1^{\pm}(z))^{-1}$ or $ k_3^{\pm}(zq) (k_2^{\pm}(zq))^{-1}$ as generating operators for the subalgebra generated by $X^{\pm}_i(z)$. From now on, we impose on the FRTS realization the condition {\bf Condition I} $$k_2^{\pm}(z)(k_1^{\pm}(z))^{-1}( k_3^{\pm}(zq) (k_2^{\pm}(zq))^{-1})^{-1}=1.$$ From this and by calculation, we can deduce the following. \begin{Proposition} Let $$Y(z)= X_2^{\pm}(z)-({\mp}q^{\mp \frac 1 2}X_1^{\pm}(zq^{-1}))=\Sigma_{n\in {\bf Z}} Y(n)z^{-n}. $$ Let $Y$ be the subalgebra generated by $Y(n), n\in {\bf Z}$. Then $YU(R)$ is a two-sided ideal in $U(R)$. \end{Proposition} We can check that $YU(R)$ has no intersection with the subalgebra generated by $k^\pm_i(z)$, $i=1,2,3$. 
If we look at the image of $Y(z)$ on $V$, we can see that it is actually zero. Looking from the point of view of the universal R-matrix \cite{KT}, we should have {\bf Condition II} \begin{eqnarray} Y(z)=0, \end{eqnarray} which, from now on, we impose on the FRTS realization. From the above two propositions, we can derive that \begin{Proposition} In the FRTS realization, $X^{\pm}_2(z)$, $k_2^+(z)(k_1^+(z))^{-1}$ and $k_2^-(z)(k_1^-(z))^{-1}$, $ k_1^{\pm}(z) k_3^{\pm}(zq^3)$ and $ k_2^{\pm}(z)( k_1^{\pm}(zq^{-1})(k_1^{\pm} (zq^{-2}))^{-1})^{-1} $ generate the whole algebra. \end{Proposition} This follows from a calculation similar to the one for the case of $U_q(\hat{sl}(3))$ \cite{DF}, which shows that $f^{\pm}_{1,3}(z)$ and $e^{\pm}_{3,1}(z)$ are generated by $X^{\pm}_2(z)$. The three propositions and two conditions above give us the hidden symmetry of the L-operators of the FRTS realization of $U_q(\hat {osp}(1,2))$. On the other hand, we should understand that this hidden symmetry comes from the fact that the representation of $U_q(\hat {osp}(1,2))$ on $V$ is self-dual. Therefore $R(z)$ has a hidden symmetry like that of $U_q(o(n))$ and $U_q(sp(2n))$ in \cite{FRT}, which basically determines the hidden symmetry of the L-operators \cite{KT}. This should automatically lead us to Conditions (I) and (II). As we explained in the introduction, from the classical theory we know that the Drinfeld realization given in Theorem 2 in the section above is incomplete, in the sense that the q-Serre relation is missing. Before we deal with the q-Serre relation, we would like to point out that the relation (2.14) and the similar ones in (2.18) are not completely right. The correct ones are given as follows: \begin{Proposition} \begin{eqnarray} (zq^2-w)({z-wq})X^-(z)X^-(w)+(zq-w)({z-wq^2})X^-(w)X^-(z) &=&0,\nonumber\\ (zq-w)({z-wq^2})X^+(z)X^+(w)+(zq^2-w)({z-wq})X^+(w)X^+(z) &=&0. 
\label{++} \end{eqnarray} \end{Proposition} Here, the point is that the relations given in (2.14) and (2.18) are much stronger than (3.20) in the proposition above, and they actually imply the relations (3.20). However, (3.20) does not imply (2.14) and (2.18). The reason is that they impose different pole conditions on the products $X^{\pm}(z)X^{\pm}(w)$. Similarly, the relations given in \cite{GZ} about $X_i^{\pm}(z)X_i^{\pm}(w)$ should be corrected correspondingly, as in the proposition above. With a rather involved calculation (similar to that in \cite{DF} for the case of $U_q(\hat{sl}(3))$), we can derive that \begin{Proposition} $$ \frac {(z_3-z_1q^{-1})(z_3-z_1q^{3})(z_1-z_2q^2)}{z_3-z_1q}X_i^+(z_3)X_i^+(z_1)X_i^+(z_2) +$$ $$ \frac { (z_2-z_1q^2) ({z_2-z_1q^{-1}}) (z_3-z_1q^{-1})(z_3-z_1q^{3})}{ (z_1-z_2q^{-1}) (z_3-z_1q) }X_i^+(z_3) X_i^+(z_2)X_i^+(z_1) -$$ $$ \frac{(z_1-z_3q^2)(z_1-z_3q)(z_1q-z_3q^{-1})(z_1-z_2q^2)} {(z_1-z_3)(z_3-z_1q^2)} X_i^+(z_1)X_i^+(z_2)X_i^+(z_3)- $$ \begin{eqnarray} \frac{(z_1-z_3q^2)(z_1-z_3q)(z_1q-z_3q^{-1})(z_2-z_1q^2)(z_2-z_1q^{-1})} {(z_1-z_3)(z_3-z_1q^2)(z_1-z_2q^{-1})} X_i^+(z_2)X_i^+(z_1)X_i^+(z_3) = 0 ,\nonumber \\ \end{eqnarray} $$ \frac {(z_3-z_1q)(z_3-z_1q^{-3})(z_1-z_2q^{-2})}{z_3-z_1q^{-1}}X_i^-(z_3)X_i^-(z_1)X_i^-(z_2) + $$ $$\frac { (z_2-z_1q^{-2}) ({z_2-z_1q}) (z_3-z_1q)(z_3-z_1q^{-3})}{ (z_1-z_2q) (z_3-z_1q^{-1}) }X_i^-(z_3) X_i^-(z_2)X_i^-(z_1) -$$ $$ \frac{(z_1-z_3q^{-2})(z_1-z_3q^{-1})(z_1q-z_3q)(z_1-z_2q^{-2})} {(z_1-z_3)(z_3-z_1q^{-2})} X_i^-(z_1)X_i^-(z_2)X_i^-(z_3)- $$ \begin{eqnarray} \frac{(z_1-z_3q^{-2})(z_1-z_3q^{-1})(z_1q-z_3q)(z_2-z_1q^{-2})(z_2-z_1q)} {(z_1-z_3)(z_3-z_1q^{-2})(z_1-z_2q)} X_i^-(z_2)X_i^-(z_1)X_i^-(z_3) = 0,\nonumber\\ \end{eqnarray} for $i=1,2,\emptyset$ (where $i=\emptyset$ means the unindexed currents $X^{\pm}(z)$), and where the coefficient functions of the relations above are expanded in the expansion region of the corresponding monomial of the product of $X_i^{\pm}(z_j)$. 
\end{Proposition} We call the two relations above the q-Serre relations. It is not very difficult to show that these relations degenerate into the classical Serre relations, but we still do not know how to write a simple formulation like that of the Drinfeld realization of $U_q(\hat{sl}(3))$. \begin{Definition} $U_q(\hat {osp}(1,2))$ is a ${\bf Z}_2$ graded associative algebra generated by a central element $c$ and $$\phi (z)=\sum_{m\in {\bf Z}_{\geq 0}}\phi (m)z^{-m};$$ $$\psi (z)=\sum_{m\in {\bf Z}_{\geq 0}}\psi (-m)z^{m};$$ $$ X^\pm(z)=\sum_{n\in {\bf Z}} X^\pm(n)z^{-n};$$ where $\phi(z), \psi(z)$ are invertible and $\phi(0)\psi(0)=1=\psi(0)\phi(0).$ The gradings of the generators are: $[X^\pm(z)]=1$ and $[\phi(z)]=[\psi(z)]=[c]=0$. The relations are given by (2.17), (3.20), (3.22) and (3.23). \end{Definition} \begin{Theorem} $U_q(\hat {osp}(1,2))$ is isomorphic to a quotient of the subalgebra of the FRTS algebra generated by $X^{\pm}(z)$ and $k_3^\pm(z)k_2^\pm(z)^{-1}$, where the quotient ideal is generated by Conditions (I) and (II), and the map is given by $$ \frac 1 {q-q^{-1}} X^{\pm}_2(zq) \rightarrow X^\pm(z) ,$$ $$k_3^+(z)k_2^+(z)^{-1}\rightarrow \psi(z), $$ $$k_3^-(z)k_2^-(z)^{-1}\rightarrow \phi(z). $$ \end{Theorem} This theorem also tells us that the FRTS realization $U(R)$ is nothing but $U_q(\hat{osp}(1,2))$ tensored with another copy of a Heisenberg algebra. Starting from a completely different point of view, we also arrive at the same definition of $U_q(\hat {osp}(1,2))$ \cite{DFe}, where the q-Serre relation was derived in a different way. \vskip.3in \noindent {\bf Acknowledgments.} I would like to thank B. Feigin and S. Khoroshkin for useful discussions. \vskip.3in
\section{Introduction} \textbf{Motivation and literature survey:} The increasing penetration of renewable sources of generation is expected to cause more frequent generation-demand imbalances within the power network, which may harm power quality and even cause blackouts \cite{ipakchi2009grid}. Controllable demand is considered to be a means to address this issue, since loads may provide a fast response to counterbalance intermittent generation \cite{kamyab2015demand}. However, the increasing number of such active units makes traditionally implemented centralized control schemes expensive and inefficient, motivating the adoption of distributed schemes. Such schemes offer many advantages, such as scalability, reduced expenses associated with the necessary communication infrastructure and enhanced reliability due to the absence of a single point of failure. The introduction of controllable loads and local renewable generation raises an issue of economic optimality in the power allocation. In addition, the introduction of smart meters for the monitoring of generation and demand units poses a privacy threat for citizens, since readings may be used to expose customers' daily lives and habits, by inferring the users' energy consumption patterns and types of appliances \cite{zeifman2011nonintrusive}. For example, this issue led the Dutch Parliament to prohibit the deployment of smart meters until the privacy concerns are resolved \cite{erkin2013privacy}, and several counties and cities in California to vote for making smart meters illegal in their jurisdictions \cite{hess2014wireless}. These concerns motivate the design of distributed schemes that simultaneously achieve an optimal power allocation and preserve the privacy of local prosumption profiles. 
In recent years, various studies considered the use of decentralized/distributed control schemes for generation and controllable demand, with applications to both primary \cite{devane2016primary}, \cite{kasis2017stability}, \cite{zhao2014design}, \cite{kasis2021primary} and secondary \cite{kasis2016primary}, \cite{trip2016internal}, \cite{li2015connecting}, \cite{chen2020distributed} frequency regulation, whose respective objectives are to ensure generation-demand balance and that the frequency attains its nominal value at steady state. In addition, the problem of obtaining an optimal power allocation within the secondary frequency control timeframe has received broad attention in the literature \cite{mallada2017optimal}, \cite{zhao2015distributed}, \cite{zhao2018distributed}. These studies considered suitably constructed optimization problems and designed the system equilibria to coincide with the solutions to these problems. In many studies, the control dynamics were inspired by the dual of the considered optimization problems \cite{li2015connecting}, \cite{low2014distributed}, \cite{kasis_TCST}. Such schemes, usually referred to in the literature as \textit{Primal-Dual schemes}, yield an optimal power allocation and at the same time allow operational constraints to be satisfied. Alternative distributed schemes, which ensure that the frequency attains its nominal value at steady state by using the generation outputs, have also been proposed \cite{kasis2020distributed}, \cite{trip2017distributed}. However, the use of real-time knowledge of the generation and controllable demand in the proposed schemes may compromise the privacy of prosumers. The topic of preserving the privacy of generation and demand units has recently attracted wide attention in the literature. Different types of privacy concerns, resulting from the integration of information and communication technologies in the smart grid, are mentioned in \cite{zeadally2013towards}. 
In addition, \cite{siddiqui2012smart} analyzes various smart grid privacy issues and discusses recently proposed solutions for enhanced privacy, while \cite{yu2013privacy} proposes a privacy-preserving power request scheme. Moreover, \cite{fioretto2019differential} uses the differential privacy framework to provide privacy guarantees and \cite{eibl2017differential} studies the effect of differential privacy on smart metering data. Homomorphic encryption has been used in \cite{marmol2012not} to enable the direct connection and exchange of data between electricity suppliers and final users, while preserving privacy in the smart grid. A privacy-preserving aggregation scheme that considers various security threats is proposed in \cite{lu2012eppa}. The use of energy storage units to preserve the privacy of user consumption has been considered in \cite{yang2014cost} and \cite{zhang2016cost}. Furthermore, \cite{wu2021privacy} and \cite{dvorkin2020differentially} aim to simultaneously preserve the privacy of individual agents and enable an optimal power allocation, using homomorphic encryption and differential privacy respectively. Both approaches result in suboptimal allocations, which suggests a trade-off between optimality and privacy. Several existing techniques that aim at preventing the disclosure of private data are also discussed in \cite{souri2014smart}. Although the problems of preserving the privacy of power prosumption and obtaining an optimal power allocation in power networks have been independently studied, the problem of simultaneously achieving these goals has not been adequately investigated. In addition, to the authors' best knowledge, no study has considered the impact of such schemes on the stability and dynamic performance of the power grid. This study aims to jointly consider these objectives within the secondary frequency control timeframe. 
\textbf{Contribution:} This paper studies the problem of providing optimal frequency regulation within the secondary frequency control timeframe while preserving the privacy of generation and controllable demand profiles. We first propose an optimization problem that ensures that the secondary frequency regulation objectives, i.e. achieving generation-demand balance and the frequency attaining its nominal value at steady state, are satisfied. In addition, to facilitate the interpretation of our privacy results, we define two types of eavesdroppers: (i) \textit{naive eavesdroppers}, who do not possess or make use of knowledge of the system dynamics to analyze the intercepted information, and (ii) \textit{informed or intelligent eavesdroppers}, who use knowledge of the underlying system dynamics to infer the prosumption profiles. We consider a distributed scheme that has been extensively studied in the literature, usually referred to as the \textit{Primal-Dual scheme}, that enables an optimal power allocation and the satisfaction of system constraints, and explain why it causes privacy issues. Inspired by the \textit{Primal-Dual scheme}, we propose the \textit{Extended Primal-Dual scheme}, which incorporates a distributed controller at each privacy-seeking unit of the power grid. The latter replaces the communication of prosumption profiles with a consensus signal, providing privacy against naive eavesdroppers. However, we explain how intelligent eavesdroppers may infer the prosumption profiles using the communicated signal trajectories and knowledge of the underlying system dynamics. To resolve this, we propose the \textit{Privacy-Preserving scheme}, which incorporates two important features into the \textit{Extended Primal-Dual scheme}, such that privacy against intelligent eavesdroppers is achieved. In particular, the proposed scheme continuously alters the speed of response of each controller, making model based inference inaccurate. 
Moreover, it adds bounded noise to the prosumption information within each controller, with a maximum magnitude proportional to the local frequency deviation. The latter yields changes in all controllers when a disturbance occurs, making it hard to detect the origin of the disturbance. These properties ensure that the \textit{Privacy-Preserving scheme} guarantees the privacy of the prosumption units against intelligent eavesdroppers. On the other hand, due to its additional features, the \textit{Privacy-Preserving scheme} could potentially result in slower convergence, since the controllers' response speed is reduced. For both proposed schemes, we provide analytic stability guarantees and show that an optimal power allocation is achieved at steady state. In addition, the proposed schemes are distributed and applicable to arbitrary network topologies, while the proposed conditions are locally verifiable. Our analytic results are illustrated with numerical simulations on the NPCC 140-bus system, which validate that the proposed schemes enable an optimal power allocation and satisfy the secondary frequency regulation objectives. In addition, we demonstrate how the \textit{Extended Primal-Dual} and the \textit{Privacy-Preserving} schemes offer privacy of the prosumption profiles against naive and intelligent eavesdroppers respectively. To the authors' best knowledge, this is the first study that: \begin{enumerate}[(i)] \item Jointly studies the privacy, optimality and stability properties of distributed schemes within the secondary frequency control timeframe. \item Proposes distributed schemes that yield an optimal power allocation and simultaneously preserve the privacy of the prosumption profiles. In particular, the proposed schemes offer privacy guarantees against naive (\textit{Extended Primal-Dual scheme}) and informed (\textit{Privacy-Preserving scheme}) eavesdroppers respectively. 
For the proposed schemes, we show that stability is guaranteed and that the secondary frequency control objectives are satisfied. \end{enumerate} \textbf{Paper structure:} In Section \ref{II} we present the dynamics of the power network, the considered optimization problem and the problem statement. In Sections \ref{III} and \ref{IV} we present the proposed \textit{Extended Primal-Dual} and \textit{Privacy-Preserving} schemes respectively and provide our main analytic results. In Section \ref{V} we validate our main results through numerical simulations on the NPCC 140-bus system. Finally, conclusions are drawn in Section \ref{VI}. The proofs of all analytic results are provided in the Appendix. \textbf{Notation:} Real numbers and the set of n-dimensional vectors with real entries are denoted by $\mathbb{R}$ and $\mathbb{R}^n$ respectively. The $p$-norm of a vector $x \in \mathbb{R}^n$ is given by $\norm{x}_p = (|x_1|^p + \dots + |x_n|^p)^{1/p}, 1 \leq p < \infty$. A function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ is said to be locally Lipschitz continuous at $x$ if there exists some neighbourhood $X$ of $x$ and some constant $L$ such that $\norm{f(x) - f(y)} \leq L \norm{x - y}$ for all $y \in X$, where $\norm{.}$ denotes any $p$-norm. A matrix $A \in \mathbb{R}^{n \times n}$ is called diagonal if $A_{ij} = 0$ for all $i \neq j$. In addition, $A \preceq 0$ indicates that the matrix $A$ is negative semi-definite. The image of a vector $x$ is denoted by $\Ima(x)$. The cardinality of a discrete set $\mathcal{S}$ is denoted by $|\mathcal{S}|$. 
A set $\mathcal{B}$ is a proper subset of a set $\mathcal{A}$ if $\mathcal{B} \subset \mathcal{A}$ and $\mathcal{B} \neq \mathcal{A}$. For a graph with sets of nodes and edges denoted by $\mathcal{A}$ and $\mathcal{B}$ respectively, we define the incidence matrix $H \in \mathbb{R}^{|{\mathcal{A}}| \times |{\mathcal{B}}|}$ as follows \begin{gather*} H_{ij} = \begin{cases} +1, \text{ if } i \text{ is the positive end of edge } j \in \mathcal{B}, \\ -1, \text{ if } i \text{ is the negative end of edge } j \in \mathcal{B}, \\ 0, \text{ otherwise.} \end{cases} \end{gather*} We use $\vect{0}_n$, $\vect{1}_n$ and $\mathds{1}_{n,m}$ to denote the $n$-dimensional vectors with all elements equal to $0$, all elements equal to $1$ and element $m$ equal to $1$ and all remaining elements equal to $0$ respectively. Finally, for a state $x \in \mathbb{R}^n$, we let $x^*$ denote its equilibrium value. \section{Problem Formulation}\label{II} \subsection{Power network model} We describe the power network by a connected graph ${\mathcal{(N,E)}}$ where ${\mathcal{N}}=\{1,2,\ldots,|{\mathcal{N}}|\}$ is the set of buses and ${\mathcal{E}}\subseteq {\mathcal{N}}\times {\mathcal{N}}$ the set of transmission lines connecting the buses. The term $(i,j)$ denotes the link connecting buses $i$ and $j$. The graph ${\mathcal{(N,E)}}$ is assumed to be directed with an arbitrary direction, so that if $(i,j)\in{{\mathcal{E}}}$ then $(j,i)\notin {{\mathcal{E}}}$. For each $j \in \mathcal{N}$, we define the sets of predecessor and successor buses by $\mathcal{N}^p_{j} = \{k : (k,j) \in \mathcal{E}\}$ and $\mathcal{N}^s_{j} = \{k : (j,k) \in \mathcal{E}\}$ respectively. It should be noted that the form of the considered dynamics is unaffected by changes in the graph ordering and the results presented in this paper are independent of the choice of direction. The following assumptions are made for the network: \newline 1) Bus voltage magnitudes are $|V_j| = 1$ per unit for all $j \in \mathcal{N}$. 
\newline 2) Lines $(i,j) \in \mathcal{E}$ are lossless and characterized by the magnitudes of their susceptances $B_{ij} = B_{ji} > 0$. \newline 3) Reactive power flows do not affect bus voltage phase angles and frequencies. \newline 4) The relative phase angles are sufficiently small such that the approximation $\sin \eta_{ij} \approx \eta_{ij}$ is valid. \newline The first three assumptions have been frequently used in the literature in frequency regulation studies \cite{kasis2017stability, zhao2014design, li2015connecting, kasis2015stability}. They are valid in medium to high voltage transmission systems since transmission lines are dominantly inductive and voltage variations are small. In addition, they are valid in distribution networks with tight voltage control. The fourth assumption is valid when the network operates in nominal conditions, where relative phase angles are small\footnote{It should be noted that the results presented in this paper can be extended by considering sinusoidal phase angles, see e.g. the approach in \cite{kasis2017stability}. We have opted not to consider this case for simplicity and to keep the main focus of the paper on the privacy aspects of the proposed schemes.}. It should be noted that the theoretical results presented in this paper are validated with numerical simulations in Section \ref{V}, on a comprehensive power network model. We use the swing equations to describe the rate of change of frequency at buses \cite{machowski2020power}. In particular, at each bus we consider a set of generation and controllable and uncontrollable demand units.
This motivates the following system dynamics: \begin{subequations}\label{eq1} \begin{align} \dot{\eta}_{ij} &=\omega_{i}-\omega_{j}, (i,j)\in{\mathcal{E}}, \label{eq1a} \\ \begin{split} M_{j}\dot{\omega}_{j} & = \sum_{k \in \mathcal{N}^G_j} p^M_{k,j} - \sum_{k \in \mathcal{N}^L_j} d^c_{k,j} - \sum_{k \in \mathcal{N}_j} p^L_{k,j} - D_{j}\omega_{j}\\ & - \sum_{i \in \mathcal{N}^s_j} p_{ji} + \sum_{i \in \mathcal{N}^p_j} p_{ij}, j\in{\mathcal{N}}, \label{eq1b} \end{split} \\ p_{ij} &=B_{ij}\eta_{ij},(i,j)\in{\mathcal{E}}. \label{eq1c} \end{align} \end{subequations} In system \eqref{eq1}, variable $\omega_{j}$ represents the deviation of the frequency at bus $j$ from its nominal value, namely 50 Hz (or 60 Hz). Variable $p^M_{k,j}$ represents the mechanical power injection associated with the $k$th generation unit at bus $j$. Moreover, $d^c_{k,j}$ denotes the demand associated with the $k$th controllable load at bus $j$. $\mathcal{N}^G_j$ and $\mathcal{N}^L_j$ represent the sets of generation units and controllable loads, which are jointly referred to as active elements or active units, at bus $j$ respectively. Each of these units is associated with a privacy-seeking user or entity. The set of active units at bus $j$ is given by $\mathcal{N}_j = \mathcal{N}^G_j \cup \mathcal{N}^L_j$. The variable $p^L_{k,j}$ represents the uncontrollable demand associated with the $k$th active unit at bus $j$. Furthermore, the time-dependent variables $\eta_{ij}$ and $p_{ij}$ represent, respectively, the power angle difference and the power transmitted from bus ${i}$ to bus ${j}$. The quantities $B_{ij}$ represent the line susceptances between buses $i$ and $j$. Finally, the positive constants $D_{j}$ and $M_j$ represent the generation damping and inertia at bus $j$ respectively. The generation and consumption will be jointly referred to as prosumption.
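For intuition on \eqref{eq1}, the following minimal sketch integrates the swing dynamics on a two-bus network with a single aggregated prosumption unit per bus and constant net injections. All numerical values are illustrative assumptions of ours, not parameters from the paper; with constant injections the frequencies synchronise to the total net injection divided by the total damping, consistent with the steady-state arguments used later.

```python
# Forward-Euler sketch of the swing dynamics (1a)-(1c) on a two-bus network
# with one aggregated prosumption unit per bus.  All values are illustrative.
M = [0.2, 0.3]       # inertia M_j
D = [1.0, 1.5]       # damping D_j
p_net = [0.5, -0.2]  # net injection p^M - d^c - p^L at each bus (held constant)
B = 5.0              # susceptance of line (1,2)

eta, omega = 0.0, [0.0, 0.0]
dt = 1e-3
for _ in range(200_000):
    p12 = B * eta                                    # line flow, eq. (1c)
    d_eta = omega[0] - omega[1]                      # eq. (1a)
    d_om0 = (p_net[0] - D[0]*omega[0] - p12) / M[0]  # eq. (1b), bus 1
    d_om1 = (p_net[1] - D[1]*omega[1] + p12) / M[1]  # eq. (1b), bus 2
    eta += dt * d_eta
    omega = [omega[0] + dt*d_om0, omega[1] + dt*d_om1]

# with constant injections, the synchronised frequency deviation equals
# (sum of net injections) / (sum of damping coefficients)
omega_star = sum(p_net) / sum(D)
```

Note that $\omega^* \neq 0$ here because the injections are held constant: restoring the frequency to its nominal value is precisely the task of the secondary control schemes introduced below.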
\begin{Remark} An alternative, but equivalent, representation of \eqref{eq1} could include a single variable at each bus representing the aggregation of uncontrollable demand. We opted to associate uncontrollable loads with active units to facilitate the study of their privacy properties. The benefits of this representation are evident in Sections \ref{III} and \ref{IV}. Note that when no uncontrollable load is associated with some generation or controllable demand unit, then $p^L_{k,j} = 0$. \ka{In addition, note that the results presented in this paper can be extended to the case where uncontrollable loads are not associated with prosumption units.} \end{Remark} \subsection{Generation and controllable demand dynamics} We will study the behavior of the power system under the following dynamics for generation and controllable loads, \begin{subequations}\label{eq2} \begin{align} \tau_{k,j} \dot{x}_{k,j} &= -x_{k,j} + m_{k,j}(u_{k,j} - \omega_j), k \in \mathcal{N}^G_j, j\in{\mathcal{N}}, \label{eq2a} \\ p^M_{k,j} &= x_{k,j} + h_{k,j} (u_{k,j} - \omega_j), k \in \mathcal{N}^G_j, j\in{\mathcal{N}}, \label{eq2b} \\ d^c_{k,j} &= - h_{k,j} (u_{k,j} - \omega_j), k \in \mathcal{N}^L_j, j \in{\mathcal{N}}, \label{eq2c} \end{align} \end{subequations} where $x_{k,j} \in \mathbb{R}$ represents the internal state, and $\tau_{k,j} > 0$ and $m_{k,j} > 0$ the time and droop constants associated with generation unit $k$ at bus $j$ respectively. The positive constant $h_{k,j}$ represents the damping associated with active unit $k$ (generation or controllable load) at bus $j$. In addition, $u_{k,j}$ represents the control input to the $k$th active unit at bus $j$, the dynamics of which are discussed in the following sections. We consider first-order generation dynamics and static controllable demand for simplicity and to keep the focus of the paper on developing a privacy-preserving scheme. 
More involved generation and demand dynamics could be considered by applying existing results (e.g. \cite{kasis2017stability}, \cite{kasis2016primary}, \cite{kasis_TCST}). For convenience, we define the vectors $p^M_j = [p^M_{k,j}]_{k \in \mathcal{N}^G_j}$, $d^c_j = [d^c_{k,j}]_{k \in \mathcal{N}^L_j}$, $p^L_j = [p^L_{k,j}]_{k \in \mathcal{N}_j}$, $p^M = [p^M_j]_{j \in \mathcal{N}}$, $d^c = [d^c_j]_{j \in \mathcal{N}}$ and $p^L = [p^L_j]_{j \in \mathcal{N}}$. \subsection{\ak{Prosumption cost minimization} problem} In this section we form an optimization problem that aims to minimize the costs associated with generation and controllable demand and simultaneously achieve generation-demand balance. A cost $\frac{1}{2}q_{k,j}(p^M_{k,j})^2$ is incurred when the generation unit $k$ at bus $j$ produces a power output of $p^M_{k,j}$. In addition, a cost $\frac{1}{2}q_{k,j}(d^c_{k,j})^2$ is incurred when controllable load $k$ at bus $j$ adjusts its demand to $d^c_{k,j}$. The problem is to obtain the vectors $p^M$ and $d^c$ that minimize the aggregate cost while achieving power balance, as presented below. \begin{equation}\label{eq3} \begin{aligned} \min_{p^M,d^c} & \sum_{j \in \mathcal{N}} ( \sum_{k \in \mathcal{N}^G_j} \frac{1}{2}q_{k,j}(p^M_{k,j})^2 + \sum_{k \in \mathcal{N}^L_j} \frac{1}{2}q_{k,j}(d^c_{k,j})^2)\\ \text{ subject to } & \sum_{j \in \mathcal N} (\sum_{k \in \mathcal{N}^G_j} p^M_{k,j} - \sum_{k \in \mathcal{N}^L_j} d^c_{k,j} - \sum_{k \in \mathcal{N}_j} p^L_{k,j}) = 0. \end{aligned} \end{equation} The equality constraint in \eqref{eq3} requires all the uncontrollable loads to be matched by the generation and controllable demand, such that generation-demand balance is achieved.
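Problem \eqref{eq3} admits a closed-form solution: with quadratic costs and a single balance constraint, the KKT conditions equalize the marginal costs of all units at a common value $\lambda$. The sketch below computes this solution for an illustrative instance (the cost coefficients and uncontrollable load are our own assumptions).

```python
# Closed-form KKT solution sketch for problem (3): all units settle at a
# common marginal price lam, giving p^M = lam/q for generation units and
# d^c = -lam/q for controllable loads (deviations may be negative).
q_gen  = [2.0, 4.0]   # cost coefficients of generation units (illustrative)
q_load = [1.0, 5.0]   # cost coefficients of controllable loads (illustrative)
p_L    = 3.0          # total uncontrollable demand

# stationarity: q*p = lam, q*d = -lam; substituting into the balance
# constraint yields lam = p_L / (sum of 1/q over all units)
lam = p_L / (sum(1/q for q in q_gen) + sum(1/q for q in q_load))
p_M = [lam / q for q in q_gen]
d_c = [-lam / q for q in q_load]

# the balance constraint of (3) is met exactly
balance = sum(p_M) - sum(d_c) - p_L
```

Cheaper units (smaller $q_{k,j}$) take a proportionally larger share of the imbalance, while the marginal cost $q_{k,j}p^{M}_{k,j}$ is identical across units.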
The equality constraint also guarantees that the frequency attains its nominal value at equilibrium, which is a main objective of secondary frequency control. The latter follows by summing \eqref{eq1b} at steady state over all buses, which yields $\sum_{j \in \mathcal{N}} D_j \omega^*_j = 0$, and noting that frequency synchronizes at equilibrium from \eqref{eq1a}. \subsection{\ka{Eavesdropper and privacy definitions}} \ka{In this section, we define the two considered eavesdropper types, inspired by \cite{parsaeefard2015improving}, and present two notions of privacy to facilitate the interpretation and intuition of our results.} \begin{definition}\label{Def_eavesdropper} \ka{An eavesdropper is a person or entity that aims to extract private information by intercepting the signals communicated to and from generation and controllable demand units.} Eavesdroppers are classified as follows: \begin{enumerate}[(i)] \item Naive eavesdroppers, who \ka{possess knowledge of: \newline (K1) All signals communicated to and from a given unit, for which they aim to obtain private information.} \item \ka{Informed or intelligent eavesdroppers, who possess knowledge of K1 and: \newline (K2) The underlying control dynamics of the system.} \end{enumerate} \end{definition} Definition \ref{Def_eavesdropper} presents two types of eavesdroppers, based on \ak{whether they make use of knowledge of the underlying system dynamics to infer private information.} In particular, naive eavesdroppers \ak{have no knowledge of the system model that may allow them to analyze the} intercepted signals. They only try to overhear sensitive information. Informed eavesdroppers \ak{analyze the intercepted signals using knowledge of} the underlying dynamics. It is intuitive to note that privacy against intelligent eavesdroppers implies privacy against naive eavesdroppers but not vice versa.
\ka{ Below we provide a definition of a private prosumption trajectory and profile, used throughout the rest of the manuscript. Recall that $s^*$ denotes the equilibrium value of $s$, i.e. $s^* = \lim_{t \rightarrow \infty} s(t)$. \begin{definition}\label{dfn_privacy} The following two notions of prosumption privacy are considered: \newline (i) A prosumption trajectory is called private against an eavesdropper type if the knowledge available to the eavesdropper does not allow the estimation of $s(t)$ for any $t \geq 0$ where $s(t) \neq s^*$. \newline (ii) A prosumption profile is called private against an eavesdropper type if the knowledge available to the eavesdropper does not allow the estimation of its trajectory and steady state values, i.e. of $s(t), t \geq 0$. \end{definition} The considered privacy definition implies that a prosumption trajectory is private when an eavesdropper cannot accurately estimate its initial condition and its values when not at steady state. The privacy of a prosumption profile requires in addition the privacy of its steady state value. The distinction between the two notions allows privacy guarantees based on different conditions (see Section \ref{sec_privacy_analysis}). } \subsection{Problem Statement} This paper aims to design control schemes that enable stability and optimality guarantees and at the same time preserve the privacy of all active units. The problem is stated below. \begin{problem}\label{problem_definition} Design a control scheme that: \begin{enumerate}[(i)] \item Preserves the privacy \ka{of the prosumption profiles} against intelligent eavesdroppers. \item Enables asymptotic stability guarantees. \item Uses local information and locally verifiable conditions. \item Yields an optimal steady-state power allocation. \item Applies to arbitrary connected network configurations.
\end{enumerate} \end{problem} Problem \ref{problem_definition} aims to design a control scheme that enables stability guarantees, ensures an optimal power allocation at steady state, and guarantees the privacy of the generation/demand profiles against informed eavesdroppers, \ka{following Definitions \ref{Def_eavesdropper} and \ref{dfn_privacy}.} In addition, we aim to design a scheme that relies on locally available information and locally verifiable conditions, to enable scalable designs. Finally, it is desired that the proposed scheme is applicable to general network topologies. \section{Extended Primal-Dual scheme}\label{III} In this section we consider the problem of determining the generation/demand inputs with the aim of steering the system trajectories to a global minimum of the \ak{prosumption cost minimization} problem \eqref{eq3}. We first examine a distributed scheme that has been widely studied in the literature \cite{kasis2017stability}, \cite{li2015connecting}, \cite{zhao2015distributed}, \cite{low2014distributed}, usually referred to as the \textit{Primal-Dual scheme}, that enables an optimal power allocation, and discuss its resulting privacy issues. To resolve these issues, we propose the \textit{Extended Primal-Dual scheme}, which enables privacy of the prosumption profiles against naive eavesdroppers. \subsection{Primal-Dual scheme} To describe the \textit{Primal-Dual scheme}, we consider a connected communication graph ($\mathcal{N},\hat{\mathcal{E}}$), where $ \hat{\mathcal{E}}$ represents the set of communication lines among the buses, i.e. $(i,j) \in \hat{\mathcal{E}}$ if buses $i$ and $j$ communicate. In addition, we let $\hat{H}$ be the incidence matrix of ($\mathcal{N},\hat{\mathcal{E}}$) and define the variable $\zeta_j = \vect{1}^T_{|\mathcal{N}_j|}p^L_{j} + \vect{1}^T_{|\mathcal{N}^L_j|}d^c_{j} -\vect{1}^T_{|\mathcal{N}^G_j|}p^M_{j}$ for all $j \in \mathcal{N}$.
The prosumption input dynamics are given by \begin{subequations}\label{eq4} \begin{align} \hat{\Gamma} \dot{\psi} &= \hat{H}^T p^c, \label{eq4a} \\ \bar{\Gamma} \dot{p}^c &= \zeta - \hat{H} \psi,\label{eq4b} \\ u_{k,j} &= p^c_j, k \in \mathcal{N}_j, j \in{\mathcal{N}},\label{eq4c} \end{align} \end{subequations} where the diagonal matrices $\hat{\Gamma} \in \mathbb{R}^{|\hat{\mathcal{E}}| \times |{\hat{\mathcal{E}}|}}$ and $\bar{\Gamma} \in \mathbb{R}^{|{\mathcal{N}}| \times |{\mathcal{N}}|}$ contain the positive time constants associated with \eqref{eq4a} and \eqref{eq4b} respectively and $p^c_j$ is a power command variable associated with bus $j$ and shared with communicating buses. In addition, variable $\psi$ is a state of the \textit{Primal-Dual scheme} that integrates the difference in power command variables between communicating buses. The input for all active elements at bus $j$ is given by the local power command value $p^c_j$, via \eqref{eq4c}. The dynamics in \eqref{eq4a} enable the synchronization of the power command variables at steady state. This property is useful to provide an optimality interpretation of the system's equilibria. In addition, \eqref{eq4b} ensures that the secondary frequency control objectives, i.e. ensuring generation/demand balance and the frequency attaining its nominal value, are satisfied at steady state. The latter follows by summing \eqref{eq1b} and \eqref{eq4b} at steady state over all $j \in \mathcal{N}$, which yields $\sum_{j \in \mathcal{N}} D_j \omega^*_j = 0$, which in turn implies that $\omega^* = \vect{0}_{|\mathcal{N}|}$ from the synchronization of frequency at equilibrium, as follows from \eqref{eq1a}. It should be noted that the stability and optimality of the \textit{Primal-Dual scheme} \eqref{eq4} for a wide class of generation/demand dynamics, including those in \eqref{eq2}, have been analytically shown in the literature (e.g. \cite{kasis2017stability}). 
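A reduced numerical sketch of the \textit{Primal-Dual scheme} \eqref{eq4} is given below. To keep the example self-contained we replace, for illustration only, the prosumption at each bus by its quasi-steady droop response obtained from \eqref{eq2} with $\omega_j = 0$ and $x_{k,j}$ at steady state, i.e. $\zeta_j = p^L_j - k_j p^c_j$ with an aggregate droop gain $k_j$; the network, gains and loads are our own illustrative assumptions, not the paper's test system.

```python
# Reduced sketch of the Primal-Dual scheme (4) on a 3-bus line network.
# Each bus's prosumption is replaced by a quasi-steady droop response
# zeta_j = pL_j - k_j * pc_j (a simplifying assumption for this example).
H = [[1, 0], [-1, 1], [0, -1]]   # incidence matrix of edges (1,2), (2,3)
pL = [1.0, 0.5, 1.5]             # uncontrollable loads
k  = [2.0, 1.0, 3.0]             # aggregate droop gains
gamma_psi, gamma_pc = 0.5, 0.2   # time constants (entries of Gamma-hat, Gamma-bar)

psi, pc = [0.0, 0.0], [0.0, 0.0, 0.0]
dt = 1e-3
for _ in range(200_000):
    Hpsi = [sum(H[j][e] * psi[e] for e in range(2)) for j in range(3)]
    HTpc = [sum(H[j][e] * pc[j] for j in range(3)) for e in range(2)]
    zeta = [pL[j] - k[j] * pc[j] for j in range(3)]
    psi = [psi[e] + dt * HTpc[e] / gamma_psi for e in range(2)]    # eq. (4a)
    pc  = [pc[j] + dt * (zeta[j] - Hpsi[j]) / gamma_pc for j in range(3)]  # eq. (4b)

# at equilibrium the power commands synchronise to the value that balances
# the total prosumption: pc* = sum(pL) / sum(k)
pc_star = sum(pL) / sum(k)
```

The run illustrates the two equilibrium properties discussed above: the power command variables synchronise, and the common value is exactly the one that restores generation/demand balance.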
\begin{Remark}\label{remark_privacy} A shortcoming of the \textit{Primal-Dual scheme} \eqref{eq4} is the requirement for real-time knowledge of the generation and demand from all active units in the network. In practice, this would require the transmission of this information to a central controller at each bus, in order to calculate $\zeta_j$, exposing the local generation/demand profiles to a naive eavesdropper who intercepts these signals. The latter compromises the privacy of the prosumption profiles. \end{Remark} \subsection{Extended Primal-Dual scheme}\label{sec_extended_PD} \begin{figure*}[htb!] \centering\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{eavesdropper_n2.eps} \caption{Schematic representation of system \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5} on a simple 3-bus network. Privacy-seeking users are associated with prosumption units. Blue lines represent power transfers whereas red lines represent information flows. \ak{Users monitor the local frequency and communicate their respective power command values to neighbouring users.} Both naive and intelligent eavesdroppers intercept the communicated signals between users, but only intelligent eavesdroppers \ak{possess knowledge of the underlying dynamics, that may be used to analyze the intercepted information.}} \label{Fig1} \vspace{-2mm} \end{figure*} In this section we present a scheme that aims to improve the privacy properties of the generation/demand profiles. In contrast to \eqref{eq4}, which includes a controller at each bus, the \ak{proposed} scheme employs a controller at each privacy-seeking unit (generator or controllable load). We demonstrate that the presented scheme offers privacy against naive eavesdroppers and simultaneously enables an optimal power allocation. 
To describe the new scheme, we consider a communication network characterized by a connected graph ($\widetilde{\mathcal{N}}, \widetilde{\mathcal{E}}$), where $\widetilde{\mathcal{N}} = \cup_{j \in \mathcal{N}}\mathcal{N}_j$ represents the set of active units within the power network and $\widetilde{\mathcal{E}} \subseteq \widetilde{\mathcal{N}} \times \widetilde{\mathcal{N}}$ the set of connections. Moreover, we let $H \in \mathbb{R}^{|\widetilde{\mathcal{N}}| \times |\widetilde{\mathcal{E}}|}$ be the incidence matrix of ($\widetilde{\mathcal{N}}, \widetilde{\mathcal{E}}$). In addition, the following variables are defined for compactness in presentation, \begin{subequations}\label{eq_s} \begin{align} s_j^T &= [(-p^M_j)^T \;,\; (d^c_j)^T], j \in \mathcal{N}, \\ \widetilde{s}_j &= s_j + p^L_j, j \in \mathcal{N}, \end{align} \end{subequations} where $\widetilde{s} \in \mathbb{R}^{|\widetilde{\mathcal{N}}|}$ is a vector collecting the net prosumption of all active units, i.e. their generation, controllable demand and associated uncontrollable demand. The proposed \textit{Extended Primal-Dual scheme} is presented below \begin{subequations}\label{eq5} \begin{align} \widetilde{\Gamma} \dot{\psi} &= H^T p^c \label{5a}, \\ \Gamma \dot{p}^c &= \widetilde{s} - H \psi, \label{5b} \\ u &= p^c, \label{5c} \end{align} \end{subequations} where $\widetilde{\Gamma} \in \mathbb{R}^{|\widetilde{\mathcal{E}}| \times |{\widetilde{\mathcal{E}}|}}$ and $\Gamma \in \mathbb{R}^{|\widetilde{\mathcal{N}}| \times |\widetilde{\mathcal{N}}|}$ are diagonal matrices containing the positive time constants associated with \eqref{5a} and \eqref{5b} respectively, and $p^c_{k,j}$ corresponds to the power command variable associated with active unit $k$ at bus $j$, which is also used as the input to \eqref{eq2} following \eqref{5c}. A schematic representation of the system \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5} is provided in Fig. \ref{Fig1}.
\ka{ \begin{Remark}\label{rem_comm_network} The proposed \textit{Extended Primal-Dual scheme} assumes communication among prosumption units by considering the connected graph ($\widetilde{\mathcal{N}}, \widetilde{\mathcal{E}}$). Note that when the communication of prosumption is considered for the Primal-Dual scheme, then its communication topology (i.e. a meshed network at bus level with a star structure within each bus to allow communication from prosumption units towards the bus controller) is a special case of that of the \textit{Extended Primal-Dual scheme}. It should also be noted that, apart from connectivity, no assumption is made on the topology of the communication network. The latter allows practical considerations to be taken into account in the design of the communication network (e.g. communication among buses could be at bus level only). \end{Remark} } Following the \textit{Extended Primal-Dual scheme}, privacy-seeking users share power command signals instead of their generation and demand values. Hence, the prosumption profiles are not communicated towards local controllers. The latter suffices to ensure privacy against naive eavesdroppers, \ka{since inferring the prosumption profiles from the power command variables would require knowledge of the underlying dynamics. The privacy of prosumption profiles against naive eavesdroppers under the \textit{Extended Primal-Dual scheme} is demonstrated in the following proposition, proven in the Appendix. \begin{proposition}\label{prop_privacy_naive} Consider any supply unit $k,j$ implementing the \textit{Extended Primal-Dual scheme} \eqref{eq5}. Then, its prosumption profile $\widetilde{s}_{k,j}$ is private against eavesdroppers with knowledge of K1. \end{proposition} } \subsection{Equilibrium Analysis} We now provide a definition of an equilibrium point of the interconnected dynamical system \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5}.
\begin{definition} The point $\alpha^*$ = $(\eta^*,\psi^*,\omega^*,x^*, p^{c,*})$ defines an equilibrium of the system \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5} if all time derivatives of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5} are equal to zero at this point. \end{definition} We will make use of the following equilibrium equations for \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5}. \begin{subequations}\label{eq7} \begin{align} &0=\omega^*_{i}-\omega^*_{j}, (i,j)\in{\mathcal{E}}, \label{eq7a} \\ \begin{split} &0 = \vect{1}^T_{|N^G_j|} p^{M,*}_{j} - \vect{1}^T_{|N^L_j|} d^{c,*}_{j} -\vect{1}^T_{|N_j|} p^L_{j} \\ & \hspace{2mm} - \sum_{i \in \mathcal{N}^s_j} p^*_{ji} + \sum_{i \in \mathcal{N}^p_j} p^*_{ij}, j \in \mathcal{N} \label{eq7b} \end{split} \\ &0 = -x^*_{k,j} + m_{k,j}(u^*_{k,j} - \omega^*_j), k \in \mathcal{N}^G_j, j\in{\mathcal{N}}, \label{eq7c} \\ &0 = H^T p^{c,*}, \label{7f}\\ &0 = \widetilde{s}^* - H \psi^*, \label{7g} \\ \text{where} & \text{ the variables $p^*, p^{M,*}, d^{c,*}, u^*, \widetilde{s}^*$ satisfy} \nonumber\\ p^*_{ij} &=B_{ij}\eta^*_{ij}, (i,j)\in{\mathcal{E}}, \label{eq7h} \\ p^{M,*}_{k,j} &= x^*_{k,j} + h_{k,j} (u^*_{k,j} - \omega^*_j), k \in \mathcal{N}^G_j, j\in{\mathcal{N}}, \label{eq7d} \\ d^{c,*}_{k,j} &= - h_{k,j} (u^*_{k,j} - \omega^*_j), k \in \mathcal{N}^L_j, j \in{\mathcal{N}}, \label{eq7e} \\ u^*_{k,j} &= p^{c,*}_{k,j}, k \in \mathcal{N}_j, j \in{\mathcal{N}} \label{eq7i}, \\ (s_j^*)^T &= [(-p^{M,*}_j)^T \;,\; (d^{c,*}_j)^T], j \in \mathcal{N}, \\ \widetilde{s}^* &= s^* + p^L. \end{align} \end{subequations} The following lemma, proven in the Appendix, characterizes the equilibria of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5}. \begin{lemma}\label{Lemma1} The equilibria of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5} satisfy $\omega^*=\vect{0}_{|{\mathcal{N}}|}$ and $p^{c,*} \in \Ima(\vect{1}_{|\widetilde{\mathcal{N}}|})$. 
\end{lemma} Lemma \ref{Lemma1} demonstrates that the presented scheme ensures that the frequency attains its nominal value at equilibrium, which is a main objective of secondary frequency control. In addition, it shows that the power command variables share the same value at steady state. The latter can be used to enable an optimal power allocation, as demonstrated in the following section. \subsection{Optimality and Stability Analysis} The following proposition, proven in the Appendix, provides conditions that ensure that the equilibrium values of $p^{M}$ and $d^{c}$ are global solutions to the optimization problem \eqref{eq3}. \begin{proposition}\label{proposition1} Let $q_{k,j}(m_{k,j}+h_{k,j})=1, k \in \mathcal{N}^G_j, j \in \mathcal{N}$ and $q_{k,j}h_{k,j}=1, k\in \mathcal{N}^L_j, j \in \mathcal{N}$. Then, the equilibrium values $p^{M,*}$ and $d^{c,*}$ of system \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5} globally solve \ka{the} optimization problem \eqref{eq3}. \end{proposition} Proposition \ref{proposition1} follows directly from the KKT conditions \cite{boyd2004convex}. It demonstrates how the controller gains of generation and controllable load units should be designed such that an optimal power allocation is ensured. Hence, we deduce that the \textit{Extended Primal-Dual scheme} \eqref{eq5} enables an optimal power allocation. The following theorem, proven in the Appendix, provides global asymptotic stability guarantees for \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5}. \begin{theorem}\label{theorem1} Solutions to \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5} globally asymptotically converge to the set of its equilibria, where $\omega^*= \vect{0}_{|\mathcal{N}|}$. \end{theorem} Theorem \ref{theorem1} guarantees the convergence of solutions to \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5} to the set of its equilibria.
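For intuition, the gain conditions of Proposition \ref{proposition1} can be recovered in a few lines from the equilibrium equations \eqref{eq7}; the sketch below, with scalar multipliers $\lambda$, $\mu$ introduced here for illustration, is not a substitute for the proof in the Appendix.

```latex
% At equilibrium, Lemma \ref{Lemma1} gives \omega^* = \vect{0}_{|\mathcal{N}|}
% and p^{c,*} = \lambda \vect{1}_{|\widetilde{\mathcal{N}}|} for some scalar
% \lambda, so \eqref{eq7c}, \eqref{eq7d}, \eqref{eq7e} and \eqref{eq7i} yield
\begin{align*}
p^{M,*}_{k,j} = (m_{k,j} + h_{k,j})\lambda, \ k \in \mathcal{N}^G_j,
\qquad
d^{c,*}_{k,j} = - h_{k,j}\lambda, \ k \in \mathcal{N}^L_j.
\end{align*}
% The KKT conditions of \eqref{eq3}, with multiplier \mu on the balance
% constraint, read q_{k,j} p^{M,*}_{k,j} = \mu and q_{k,j} d^{c,*}_{k,j} = -\mu.
% Substituting the equilibrium values gives
\begin{align*}
q_{k,j}(m_{k,j} + h_{k,j})\lambda = \mu, \qquad q_{k,j} h_{k,j}\lambda = \mu,
\end{align*}
% which hold with the common value \mu = \lambda precisely when
% q_{k,j}(m_{k,j} + h_{k,j}) = 1 and q_{k,j} h_{k,j} = 1.
```

The synchronized power command value thus plays the role of the marginal price in \eqref{eq3}.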
In addition, the \textit{Extended Primal-Dual scheme} is \ak{locally} verifiable and applicable to arbitrary network configurations. Furthermore, the presented scheme guarantees the privacy of the prosumption profiles against naive eavesdroppers. Noting also that Proposition \ref{proposition1} demonstrates how optimality may be achieved at steady state, it follows that the \textit{Extended Primal-Dual scheme} satisfies all objectives of Problem \ref{problem_definition}, except for ensuring privacy against intelligent eavesdroppers. \subsection{Discussion} The scheme presented in this section extends the \textit{Primal-Dual scheme} \eqref{eq4} by including a controller at each unit contributing to secondary frequency control. The \textit{Extended Primal-Dual scheme} results in the transmission of power command signals instead of prosumption signals, which enables privacy against naive eavesdroppers, \ka{as demonstrated by Proposition \ref{prop_privacy_naive}}. \ak{On the other hand, the interaction between an increased number of controllers may result in slower convergence.} The proposed scheme yields an optimal power allocation, ensures that the frequency attains its nominal value at steady state and guarantees the global stability of the power network, as follows from Proposition \ref{proposition1}, Lemma \ref{Lemma1} and Theorem \ref{theorem1} respectively. However, the \textit{Extended Primal-Dual scheme} \eqref{eq5} does not ensure the privacy of generation and demand profiles against intelligent eavesdroppers. In particular, an intelligent eavesdropper may use the communicated power command trajectories and knowledge of the underlying power command dynamics to infer the prosumption profiles using \eqref{5b}, e.g. by $\widetilde{s} = \Gamma \dot{p}^c + H \psi$. In the next section, we present \ak{a} scheme that aims to resolve this issue.
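To make this vulnerability concrete, the following self-contained sketch plays the role of an intelligent eavesdropper on a two-unit, one-edge instance of \eqref{eq5}. The gains and the "secret" prosumption values are illustrative assumptions of ours; the eavesdropper records only the communicated $p^c$ samples and, knowing the dynamics and the initial $\psi$, inverts \eqref{5b} by re-integrating $\psi$ and differentiating $p^c$.

```python
# Intelligent-eavesdropper sketch against the Extended Primal-Dual scheme (5):
# the hidden prosumption is recovered via s = Gamma * dpc/dt + H * psi.
gamma, gamma_t = 0.2, 0.5   # entries of Gamma and Gamma-tilde (illustrative)
s_secret = [0.7, -0.7]      # prosumption, unknown to the eavesdropper
psi, pc = 0.0, [0.0, 0.0]   # incidence matrix H = [[1], [-1]]
dt = 1e-4
log = []                    # intercepted samples of p^c
for step in range(20_000):
    log.append(list(pc))
    dpsi = (pc[0] - pc[1]) / gamma_t                 # eq. (5a)
    pc = [pc[0] + dt * (s_secret[0] - psi) / gamma,  # eq. (5b), unit 1
          pc[1] + dt * (s_secret[1] + psi) / gamma]  # eq. (5b), unit 2
    psi += dt * dpsi

# Eavesdropper: re-integrate psi from the intercepted samples (initial psi
# known), then differentiate p^c to invert (5b) at an interior sample k.
psi_hat, k = 0.0, 10_000
for step in range(k):
    psi_hat += dt * (log[step][0] - log[step][1]) / gamma_t
dpc0 = (log[k + 1][0] - log[k][0]) / dt
s_hat = gamma * dpc0 + psi_hat   # estimate of unit 1's prosumption
```

The estimate matches the secret value up to numerical precision, which is exactly the leak that the privacy-enhancing signal of the next section is designed to prevent.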
\section{Privacy-Preserving scheme}\label{IV} In this section we present \ak{a} scheme that aims to preserve the beneficial properties of the \textit{Extended Primal-Dual scheme} described in the previous section and simultaneously \ka{guarantee the} privacy of the generation/demand profiles against intelligent eavesdroppers. \subsection{Privacy-Preserving scheme} The proposed scheme, which shall be referred \ak{to} as the \textit{Privacy-Preserving scheme}, incorporates a privacy-enhancing signal $n$ in the power command dynamics, as follows \begin{subequations}\label{eq8} \begin{align} \widetilde{\Gamma} \dot{\psi} &= H^T p^c \label{eq8a}, \\ \Gamma \dot{p}^c &= \widetilde{s} - H \psi + n \label{eq8b}, \\ u &= p^c. \label{eq8c} \end{align} \end{subequations} In \eqref{eq8} above, the locally Lipschitz, privacy-enhancing signal $n = [n_i]_{i \in {\mathcal{N}}}$, where $n_i = [n_{k,i}]_{{k \in {\mathcal{N}_i}}}$, adapts the derivative of the power command variables to enable enhanced privacy properties. The design of the signal $n$ is crucial in providing enhanced privacy properties and simultaneously enabling stability and optimality guarantees for the \textit{Privacy-Preserving scheme} \eqref{eq8}. Some desired properties of the privacy-enhancing signal $n$ are: (i) to permit the existence of equilibria, by taking a constant value when the states of the system are at equilibrium, and (ii)~to allow an optimality interpretation of the resulting equilibria. Both objectives can be achieved if $n$ is zero at steady state since in this case the equilibria of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8}, and \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5} are identical. The following design condition is imposed on the privacy-enhancing signal $n$. As demonstrated below, this condition ensures the privacy of the prosumption profiles against intelligent eavesdroppers and allows stability and optimality to be deduced. 
\an{It should be noted that the trajectories of $n$ are in general non-unique.} \begin{design}\label{assumption1} \an{The privacy-enhancing signals satisfy $n_{k,j} = n^d_{k,j} + n^f_{k,j}, k \in \mathcal{N}_j, j \in \mathcal{N}$, where: \begin{enumerate}[(i)] \item $n^d_{k,j}(t) = - \xi_{k,j}(t) \dot{p}^c_{k,j}(t)$, where the non-negative signal $\xi_{k,j}(t)$ satisfies $\dot{\xi}_{k,j}(t) < \hat{\beta}_{k,j}$ for all $t \geq 0$, \item $|n^f_{k,j}(t)| < \beta_{k,j}|\omega_j(t)|$, for all $t \geq 0$. \end{enumerate} Moreover, the positive design constants $\beta_{k,j}, \hat{\beta}_{k,j}$ satisfy $\begin{bmatrix} -h_{k,j} -D_j/|\mathcal{N}_j| & h_{k,j} + \beta_{k,j}/2\\ h_{k,j} + \beta_{k,j}/2 & -h_{k,j} + \hat{\beta}_{k,j}/2 \end{bmatrix} \preceq 0, k \in \mathcal{N}_j, j \in \mathcal{N}$.} \end{design} Design Condition \ref{assumption1} splits the privacy-enhancing signal $n$ into two components, $n^d$ and $n^f$, that serve different purposes. The signal $n^d_{k,j}$ is proportional to the power command derivative $\dot{p}^c_{k,j}$ with \an{a non-negative, time-varying} gain $\xi_{k,j}$ \an{designed such that $\dot{\xi}_{k,j}(t) < \hat{\beta}_{k,j}$ is satisfied at all times}. The latter adjusts the rate at which the power command variables respond to external signals and makes any prior estimates of the power command model inaccurate. Hence, a potential eavesdropper utilizing model-based observations will produce inaccurate results. The component $n^f$ introduces a noise signal\footnote{It should be noted that $n^f$ \an{(and similarly $\xi$)} are treated as time-dependent variables rather than random variables, following the assumption that $n$ is locally Lipschitz. The latter is made for simplicity and to avoid a diversion of the paper focus from the privacy properties of the proposed schemes.} that is mixed with the generation/demand values.
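To make the matrix inequality in Design Condition \ref{assumption1} concrete, the sketch below checks it numerically for a single active unit; the constants $h_{k,j}$, $D_j$, $|\mathcal{N}_j|$, $\beta_{k,j}$, $\hat{\beta}_{k,j}$ are illustrative assumptions chosen for the example, not values from the paper. A symmetric $2 \times 2$ matrix with non-positive diagonal is negative semi-definite if and only if its trace is non-positive and its determinant non-negative.

```python
import math

# Numerical check of the 2x2 inequality in Design Condition 1 for one unit.
h, D, n_units = 1.0, 2.0, 1   # damping h_{k,j}, bus damping D_j, |N_j|
beta, beta_hat = 0.8, 0.5     # bounds on n^f and on the gain derivative

a11 = -h - D / n_units        # matrix entries as in Design Condition 1
a22 = -h + beta_hat / 2
a12 = h + beta / 2

trace = a11 + a22
det = a11 * a22 - a12 ** 2
# the larger eigenvalue of the symmetric matrix must be non-positive
eig_max = (trace + math.sqrt(trace**2 - 4 * det)) / 2
```

Increasing $\beta$ or $\hat{\beta}$ eventually makes the determinant negative, illustrating the trade-off between the two bounds discussed in the remark below.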
The latter offers improved privacy properties since: (i) the generation/demand profile information in the controller is distorted, and (ii) it perturbs the communicated signals of all controllers when a disturbance occurs, making it harder to detect the origin of the disturbance from a change in the transmitted signal. Design Condition \ref{assumption1}(ii) restricts the magnitude of $n^f$ in relation to the magnitude of the local frequency. The \an{values of $\beta_{k,j}, \hat{\beta}_{k,j}$ are selected to satisfy the linear matrix inequality (LMI) in Design Condition \ref{assumption1}} such that convergence is guaranteed, as demonstrated in Theorem \ref{theorem2} later on. These properties enable the privacy of prosumption against intelligent eavesdroppers since the same power command trajectories result from a (wide) class of prosumption profiles due to different potential trajectories of the privacy-enhancing signal $n$. \ka{The latter is analytically demonstrated in Section \ref{sec_privacy_analysis} below.} In addition, note that since all communicated power command signals synchronize at steady state, \ka{their equilibrium values do not convey any information about local generation/demand.} \an{ \begin{Remark} The bounds $\beta_{k,j}$ and $\hat{\beta}_{k,j}$ associated with $n^f_{k,j}$ and $n^d_{k,j}$ respectively are interdependent through the LMI in Design Condition \ref{assumption1}. Hence, there is a trade-off between the maximum allowed derivative of the gain $\xi_{k,j}$ and the maximum magnitude ratio between the signal $n^f_{k,j}$ and the local frequency $\omega_j$. The latter can be used for design purposes by placing different weights on the associated bounds, and hence on the effect, of the signals $n^f_{k,j}$ and $n^d_{k,j}$. \end{Remark} } \ka{ \subsection{Privacy analysis}\label{sec_privacy_analysis} In this section, we present our main privacy results regarding the proposed \textit{Privacy-Preserving scheme}.
First, we clarify that for an intelligent eavesdropper, K1 implies knowledge of all power command signals communicated to and from a considered unit. In addition, K2 implies knowledge of the \emph{Privacy-Preserving scheme} dynamics \eqref{eq8}. The following proposition, proven in the Appendix, demonstrates that the proposed scheme preserves the privacy of the prosumption profiles against intelligent eavesdroppers. \begin{proposition}\label{prop_privacy} Consider any supply unit $k,j$ implementing the \textit{Privacy-Preserving scheme} \eqref{eq8}. Then, its prosumption profile $\widetilde{s}_{k,j}$ is private against intelligent eavesdroppers with knowledge of K1 and K2. \end{proposition} Proposition \ref{prop_privacy} provides privacy guarantees for the prosumption profiles when the \textit{Privacy-Preserving} scheme is implemented. The latter demonstrates that the proposed scheme satisfies objective (i) within Problem \ref{problem_definition}. A reasonable case to be considered is when intelligent eavesdroppers gain knowledge of the steady-state value of the privacy-enhancing signal $n$, i.e. have the following knowledge: \newline (K3) The steady state value of the privacy-enhancing signal $n$, i.e. that $\lim_{t \rightarrow \infty} n(t) = \vect{0}_{|\widetilde{\mathcal{N}}|}$. The following proposition, proven in the Appendix, shows that the prosumption trajectories are private against eavesdroppers with knowledge of K1, K2 and K3. Recall that the definition of a private trajectory is provided in Definition \ref{dfn_privacy}(i). \begin{proposition}\label{prop_privacy_trajectory} Consider any supply unit $k,j$ implementing the \textit{Privacy-Preserving scheme} \eqref{eq8}. Then, its prosumption trajectory is private against intelligent eavesdroppers with knowledge of K1, K2 and K3.
\end{proposition} Note that Proposition \ref{prop_privacy_trajectory} does not guarantee the privacy of the prosumption at steady state, since knowledge of the variable $\psi$ may yield the equilibrium values of $\widetilde{s}$ from \eqref{7g}. However, since $\psi$ results from integrating the differences between communicated power command variables, any inaccuracy in determining these variables will lead to growing deviations between the estimated and true values of $\psi$, compromising the reliability of such an estimate. Stronger privacy guarantees, such that the prosumption profiles are kept private when eavesdroppers have knowledge of K3, may be obtained by relaxing K1. In particular, we consider the case where an eavesdropper does not have full knowledge of the information communicated to a considered unit, i.e. has knowledge of: \newline (K4) A proper subset of the power command signals communicated to and from a given unit, for which it aims to obtain private information. The following proposition, proven in the Appendix, guarantees the privacy of prosumption profiles when intelligent eavesdroppers have knowledge of K2, K3 and K4. \begin{proposition}\label{lemma_privacy} Consider any supply unit $k,j$ implementing the \textit{Privacy-Preserving scheme} \eqref{eq8}. Then, its prosumption profile $\widetilde{s}_{k,j}$ is private against intelligent eavesdroppers with knowledge of K2, K3 and K4. \end{proposition} Proposition \ref{lemma_privacy} enables privacy guarantees for the prosumption profile when knowledge of the steady-state value of $n$ is available. However, it assumes that the intelligent eavesdropper does not possess full knowledge of the information communicated to and from the considered privacy-seeking unit. Note that the latter case might describe eavesdroppers associated with some prosumption unit that communicates with the considered privacy-seeking unit, under specific conditions on the communication network topology such that K4 is satisfied.
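To make the reversal argument behind these propositions concrete, the following sketch is a toy one-unit simulation in Python; the scalar dynamics, gain $\gamma$ and all trajectories are illustrative stand-ins for \eqref{eq8}, not the paper's model or values. It shows that an eavesdropper with K1 and K2 who reverses the power command dynamics without knowing $n$ recovers $\widetilde{s} + n$ rather than $\widetilde{s}$: the estimation error equals the unknown privacy-enhancing signal.

```python
import numpy as np

# Toy one-unit illustration: gamma, the psi-feedback and the trajectories
# below are illustrative stand-ins for the Privacy-Preserving dynamics (8),
# not the structure or values used in the paper's simulations.
T, dt = 2000, 0.01
t = np.arange(T) * dt

s_true = 0.2 * np.ones(T)                    # constant prosumption profile
n = 0.05 * np.sin(3 * t) * np.exp(-0.5 * t)  # privacy signal, decays to zero
gamma = 2.0

# Forward-Euler integration of gamma * dp_c/dt = s - psi + n, dpsi/dt = p_c
p_c, psi = np.zeros(T), np.zeros(T)
for k in range(T - 1):
    p_c[k + 1] = p_c[k] + dt * (s_true[k] - psi[k] + n[k]) / gamma
    psi[k + 1] = psi[k] + dt * p_c[k]

# Eavesdropper with K1 (signals) and K2 (dynamics) but without n reverses
# the dynamics: s_est = gamma * dp_c/dt + psi
dp_c = np.gradient(p_c, dt)
s_est = gamma * dp_c + psi

# The estimate is distorted by exactly the unknown privacy signal n
err = s_est - s_true
assert np.allclose(err[1:-1], n[1:-1], atol=5e-3)
assert np.max(np.abs(err)) > 0.01
```

In other words, the model-based estimate remains consistent with a whole class of prosumption profiles, one for each admissible trajectory of $n$.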
\begin{Remark}\label{rem_noise_signals} The presented privacy results hold when either of $n^f$ or $n^d$ is neglected in $n$. However, their combined impact keeps additional information associated with the prosumption profiles private. In particular, the presence of $n^f$ results in a change in all controllers after a power disturbance, making it difficult to infer its origin from the power command signals. In addition, $n^d$ makes model-based inference inaccurate, and hence makes it difficult to obtain a reasonable range estimate of the prosumption magnitude, i.e. an estimate with a margin of error analogous to the magnitude of $n^f_{k,j}$. \end{Remark} } \subsection{Optimality and Stability Analysis} In this section, we provide analytic optimality and stability guarantees for system \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8}. The following proposition, proven in the Appendix, extends Proposition \ref{proposition1} by demonstrating that Design Condition \ref{assumption1} enables an optimal steady state power allocation. \begin{proposition}\label{proposition2} Let Design Condition \ref{assumption1}, $q_{k,j}(m_{k,j}+h_{k,j})=1, k \in \mathcal{N}^G_j, j \in \mathcal{N}$ and $q_{k,j}h_{k,j}=1, k\in \mathcal{N}^L_j, j \in \mathcal{N}$ hold. Then, the equilibrium values $p^{M,*}$ and $d^{c,*}$ of system \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8}, globally minimize the optimization problem \eqref{eq3}. \end{proposition} Proposition \ref{proposition2} demonstrates that when Design Condition \ref{assumption1} and the gain conditions provided in Proposition \ref{proposition1} hold, the \textit{Privacy-Preserving scheme} yields an optimal power allocation.
The latter follows trivially from Proposition \ref{proposition1}, since the privacy-enhancing signal $n_{k,j}$ is zero at steady state from Design Condition \ref{assumption1}, which results in identical equilibrium points for \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8} and \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5}. The following theorem, proven in the Appendix, demonstrates that when Design Condition \ref{assumption1} holds, the set of equilibria of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8} is attracting. The latter shows that the proposed \textit{Privacy-Preserving scheme} does not compromise the stability of the power network. \begin{theorem}\label{theorem2} Let Design Condition \ref{assumption1} hold. Then, the solutions of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8}, globally asymptotically converge to the set of its equilibria, where $\omega^*= \vect{0}_{\mathcal{|N|}}$. \end{theorem} Theorem \ref{theorem2} guarantees the convergence of solutions of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8}, to the set of its equilibria. In addition, the dynamics of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8}, are distributed, applicable to arbitrary network configurations and locally verifiable. Moreover, \ka{as demonstrated in Section \ref{sec_privacy_analysis}, the} \textit{Privacy-Preserving scheme} enables the privacy of prosumption profiles against intelligent eavesdroppers. Finally, as demonstrated in Proposition \ref{proposition2}, the presented scheme allows an optimal power allocation among generation and controllable demand. Hence, all objectives of Problem \ref{problem_definition} are satisfied. \section{Simulation on the NPCC 140-bus system}\label{V} In this section, we \ak{illustrate} our analytic results with simulations using the Power System Toolbox \cite{chow1992toolbox} on Matlab. For our simulations, we use the Northeast Power Coordinating Council (NPCC) 140-bus interconnection system.
This model is more detailed and realistic than the considered analytical model, including voltage dynamics, line resistances, and a transient reactance generator model\footnote{\ak{The details of the simulation model can be found in the Power System Toolbox data file datanp48.}}. The test system consists of 93 load buses and 47 generation buses and has a total real power of 28.55 GW. Controllable demand was considered in $20$ load buses, where at each bus the number of controllable loads was randomly selected from an integer uniform distribution with range $[90, 180]$. A single generation unit was added at each of $20$ generation buses. In addition, quadratic cost functions were considered for generation and controllable demand following the description in \eqref{eq3}. The values for $q_{k,j}, k \in \mathcal{N}_j, j \in \mathcal{N}$ were selected from a uniform distribution with range $[50, 250]$. For the simulation, a step change in demand of magnitude $0.2$ per unit (100 MW) at $10$ randomly selected loads at each of buses $2$ and $3$ was considered at $t = 1$ second. The time step for the simulations, \ak{denoted by $\Delta T$,} was set at 10 ms. \begin{figure}[t!] 
\centering \includegraphics[scale = 0.57]{Frequency.eps} \caption{Frequency at bus $18$ when the following schemes are implemented: (i) Integral action scheme, (ii) Primal-Dual scheme, (iii) Extended Primal-Dual scheme, and (iv) Privacy-Preserving scheme.} \label{Fig_frequency} \vspace{-2mm} \end{figure} \begin{figure*}[!htb] \begin{minipage}{0.9\textwidth} \begin{subfigure}{0.23\textwidth} \centering \includegraphics[scale=0.52]{Marginal_1_3.eps} \end{subfigure} \hspace{0.24\textwidth} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[scale=0.52]{Marginal_2_3.eps} \end{subfigure} \\ \begin{subfigure}{0.23\textwidth} \includegraphics[trim = 0 0 0 2mm, clip,scale=0.52]{Marginal_3_3.eps} \end{subfigure} \hspace{0.24\textwidth} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[trim = 0 0 0 2mm, clip,scale=0.52]{Marginal_4_3.eps} \end{subfigure} \end{minipage} \hspace{-7.5mm} \begin{minipage}{0.08\textwidth} {\includegraphics[trim = 77mm 15mm 17mm 12mm, clip,scale=0.65]{Schemes.eps}} \end{minipage} \caption{Marginal costs for all generation and controllable demand units contributing to secondary frequency control when the following control schemes are implemented: (i) Integral action scheme, (ii) Primal-Dual scheme, (iii) Extended Primal-Dual scheme, and (iv) Privacy-Preserving scheme.} \label{Fig_marginal} \vspace{-2mm} \end{figure*} The system was tested \ak{under} the four control schemes described below: \begin{enumerate}[(i)] \item An Integral action scheme, where generation units and controllable loads integrate the local frequency with the controller gains selected to be inversely proportional to their respective cost coefficients. \item The \textit{Primal-Dual scheme}, described by \eqref{eq4}. \item The \textit{Extended Primal-Dual scheme} that we proposed, described by \eqref{eq5}. \item The \textit{Privacy-Preserving scheme} that we proposed, described by \eqref{eq8} and Design Condition \ref{assumption1}. 
\an{First, suitable values for $\beta_{k,j}, \hat{\beta}_{k,j}, k \in \mathcal{N}_j, j \in \mathcal{N}$ were selected in accordance with the LMI in Design Condition \ref{assumption1}. The values of $\xi_{k,j}(t)$ were then randomly selected at each time step such that $(\xi_{k,j}(t) - \xi_{k,j}(t- \Delta T))/\Delta T$ lay in $[- \hat{\beta}_{k,j}, \hat{\beta}_{k,j}]$, following Design Condition \ref{assumption1}(i). In addition, the values of $n^f_{k,j}(t)$ were randomly selected at each time step from the uniform distribution on $[-\beta_{k,j}|\omega_j|,\beta_{k,j}|\omega_j|]$ such that Design Condition \ref{assumption1}(ii) was satisfied}. \end{enumerate} In schemes (ii)-(iv), the dynamics of the implemented generation and controllable demand units followed from \eqref{eq2} and the controller gains were selected such that the optimality conditions presented in Propositions \ref{proposition1} and \ref{proposition2} were satisfied. The communication network associated with scheme (ii) had the same structure as the power network. A random connected communication network was generated when schemes (iii) and (iv) were implemented. For consistency, the same sets of randomly selected parameters were considered in all simulations.
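A minimal sketch of how such randomized signals can be generated is given below (Python; the bounds, time step and frequency trace are illustrative placeholders assumed to satisfy the LMI in Design Condition \ref{assumption1}, not the NPCC values, and clipping $\xi_{k,j}$ at zero is our assumption, made to keep $\Gamma + \Xi$ positive definite).

```python
import numpy as np

# Sketch of the randomized selection used for scheme (iv). The bounds,
# time step and frequency trace are illustrative placeholders (assumed
# to satisfy the LMI in Design Condition 1), not the NPCC values;
# clipping xi at zero is our assumption (keeps Gamma + Xi positive definite).
rng = np.random.default_rng(1)
dT = 0.01                      # 10 ms simulation time step
beta, beta_hat = 0.3, 0.5      # bounds for n^f and the derivative of xi
steps = 1000
omega = 0.02 * np.sin(0.5 * np.arange(steps) * dT)  # stand-in frequency

xi = np.zeros(steps)
n_f = np.zeros(steps)
for k in range(1, steps):
    # Design Condition 1(i): |(xi(t) - xi(t - dT)) / dT| <= beta_hat
    xi[k] = max(0.0, xi[k - 1] + dT * rng.uniform(-beta_hat, beta_hat))
    # Design Condition 1(ii): |n^f| <= beta * |omega_j|
    n_f[k] = rng.uniform(-beta * abs(omega[k]), beta * abs(omega[k]))

assert np.all(np.abs(np.diff(xi)) <= beta_hat * dT + 1e-12)
assert np.all(np.abs(n_f) <= beta * np.abs(omega) + 1e-12)
```

Note that $n^f_{k,j}$ vanishes whenever the local frequency is at its nominal value, consistent with $n^{f,*} = \vect{0}$ at steady state.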
\begin{figure*}[ht] \begin{subfigure}{0.23\textwidth} \centering \includegraphics[scale=0.4]{Demand_1_3.eps} \end{subfigure} \hspace{0.09\textwidth} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[scale=0.4]{Demand_2_3.eps} \end{subfigure} \hspace{0.09\textwidth} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[scale=0.4]{Demand_3_3.eps} \end{subfigure} \\[1mm] \begin{subfigure}{0.23\textwidth} \includegraphics[trim = 0 0 0 2mm, clip,scale=0.4]{Comm_signal_1_3.eps} \end{subfigure} \hspace{0.09\textwidth} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[trim = 0 0 0 2mm, clip,scale=0.4]{Comm_signal_2_3.eps} \end{subfigure} \hspace{0.09\textwidth} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[trim = 0 0 0 2mm, clip,scale=0.4]{Comm_signal_3_3.eps} \end{subfigure} \caption{Controllable demand (top) and communicated signals (bottom) for loads $9, 18$ and $27$ at bus $2$ for the following control schemes: (left) Primal-Dual scheme, (middle) Extended Primal-Dual scheme, and (right) Privacy-Preserving scheme.} \label{Privacy_A} \vspace{-2mm} \end{figure*} The frequency response at a randomly selected bus (bus $18$) is depicted in Fig. \ref{Fig_frequency}. From Fig. \ref{Fig_frequency}, it follows that the frequency converges to its nominal value at all simulated cases. The latter suggests that the proposed \textit{Extended Primal-Dual} and \textit{Privacy-Preserving} schemes yield a stable response. Note also that the frequency returns to within $0.01$ Hz from its nominal value in less than two minutes, which is well within the secondary frequency control timeframe. Nevertheless, the \textit{Extended Primal-Dual} and \textit{Privacy-Preserving} schemes result in slower convergence of frequency to its nominal value. \ak{This is due to} a larger number of controllers \ak{that} need to synchronize for convergence. 
In addition, the implementation of the \textit{Privacy-Preserving scheme}, and particularly Design Condition \ref{assumption1}(i), results in slower convergence compared with the \textit{Extended Primal-Dual scheme}. To demonstrate the optimality properties of the proposed schemes, we consider the marginal costs of each active unit, defined as the absolute value of the derivative of the local cost function. The marginal costs for all controllable loads and local generators are depicted in Fig. \ref{Fig_marginal}. From Fig. \ref{Fig_marginal}, it follows that for schemes (ii), (iii) and (iv), the marginal costs for all units converge to the same value. The latter suggests that an optimal power allocation is attained at steady state and validates the presented optimality analysis. By contrast, the marginal costs differ at equilibrium in scheme (i), which suggests that a suboptimal response is obtained. To validate the enhanced privacy properties associated with the \textit{Extended Primal-Dual} and the \textit{Privacy-Preserving} schemes, compared with the \textit{Primal-Dual scheme}, we considered the communicated signals from three randomly selected loads (loads $9, 18$ and $27$ at bus $2$). The results are shown in Fig. \ref{Privacy_A}, which demonstrates that when the \textit{Primal-Dual scheme} is implemented (scheme (ii)), the privacy of the controllable demand units is compromised \ka{since the demand values} are communicated. By contrast, when the \textit{Extended Primal-Dual} and \textit{Privacy-Preserving} schemes are implemented (schemes (iii) and (iv) respectively), the privacy of the controllable load profiles against naive eavesdroppers is preserved.
\begin{figure} \hspace{-1.5mm} \includegraphics[scale=0.6]{Extracted_3_3.eps} \\ \includegraphics[scale=0.6]{Extracted_4_3.eps} \caption{Inferred demand information on loads $9, 18$ and $27$ at bus $2$ using the power command trajectories for the following two control schemes: (top) Extended Primal-Dual scheme, and (bottom) Privacy-Preserving scheme.} \label{Privacy_B} \vspace{-2mm} \end{figure} To demonstrate that the \textit{Privacy-Preserving scheme} ensures the privacy of the prosumption profiles against intelligent eavesdroppers, we considered an observer scheme that aims to infer the controllable demand using a model of the power command dynamics and knowledge of the power command signals. In particular, by evaluating the power command derivative and the value of $\psi$, an eavesdropper may attempt to observe the generation and controllable demand profiles by reversing \eqref{5b}, i.e. using $ \widetilde{s} = \Gamma \dot{p}^c + H \psi$. Figure \ref{Privacy_B} demonstrates the result of such an observer scheme for the same three loads considered in Fig. \ref{Privacy_A}, when the \textit{Extended Primal-Dual} and \textit{Privacy-Preserving} schemes are implemented. From Fig. \ref{Privacy_B}, it follows that an intelligent eavesdropper may obtain the controllable demand profiles when the \textit{Extended Primal-Dual scheme} is applied. By contrast, the application of the \textit{Privacy-Preserving scheme} ensures that the demand is private against intelligent eavesdroppers, since the retrieved information is distorted by the signal $n_{k,j}$. \section{Conclusion}\label{VI} We have considered the problem of enabling an optimal power allocation and simultaneously preserving the privacy of generation and controllable demand profiles within the secondary frequency control timeframe.
To enhance the intuition on our results, two types of eavesdroppers were defined: naive eavesdroppers that \ak{do not use knowledge of the internal \ka{system} dynamics to} analyze the intercepted signals, and intelligent eavesdroppers that use knowledge of the underlying dynamics to infer the privacy-sensitive prosumption profiles. We proposed the \textit{Extended Primal-Dual scheme}, which implements a controller at each privacy-seeking unit in the power grid to provide improved privacy properties. The proposed scheme enables privacy guarantees against naive eavesdroppers. However, the generation/demand profiles may be inferred by intelligent eavesdroppers using the communicated signal trajectories and information on the underlying dynamics. To resolve this issue, we proposed the \textit{Privacy-Preserving scheme}, which shares the structure of the \textit{Extended Primal-Dual scheme} but also incorporates a privacy-enhancing signal at each controller. The latter \an{continuously} adjusts the response speed of the controllers, making model-based observations inaccurate, and disturbs the generation/demand profile information within the controllers, enabling privacy against intelligent eavesdroppers. For both proposed schemes, we provide analytic stability, optimality and privacy guarantees. Our presented results are distributed, locally verifiable and applicable to general network configurations. The applicability of the proposed schemes is demonstrated with simulations on the NPCC 140-bus system, where we show that stability is preserved, and improved privacy properties and an optimal power allocation are attained. \section*{Appendix} In this appendix, we prove our main results, Theorems \ref{theorem1} and \ref{theorem2}, Lemma \ref{Lemma1} and Propositions \ref{prop_privacy_naive}-\ref{proposition2}.
\ka{\textbf{\emph{Proof of Proposition \ref{prop_privacy_naive}}:} The proof follows from the fact that naive eavesdroppers do not possess knowledge of \eqref{eq5}. In particular, inferring the trajectory of $\widetilde{s}_{k,j}$, i.e. either by reversing \eqref{5b} or by implementing an observer, requires knowledge of \eqref{eq5}. In addition, using the equilibrium value of $\psi$ to infer the steady state of $\widetilde{s}_{k,j}$ requires knowledge of \eqref{5b}. Hence, the prosumption profile and steady state of $\widetilde{s}_{k,j}$ are private. \hfill $\blacksquare$} \textbf{\emph{Proof of Lemma \ref{Lemma1}}:} To show that $\omega^*=\vect{0}_{|\mathcal{N}|}$, we sum equations \eqref{5b} at equilibrium over all $j \in \mathcal{N}$, resulting in $\sum_{j \in \mathcal{N}}(\sum_{k \in \mathcal{N}^G_j} p^M_{k,j} - \sum_{k \in \mathcal{N}^L_j} d^c_{k,j})=\sum_{j \in \mathcal{N}} \sum_{k \in \mathcal{N}_j} p^L_{k,j}$. Then, summing \eqref{eq7b} over all $j \in \mathcal{N}$ results in $\sum_{j \in \mathcal{N}} D_j \omega^*_j = 0$, which suggests that $\omega^* = \vect{0}_{|\mathcal{N}|}$ from the frequency synchronization at equilibrium and $D_j > 0, j \in \mathcal{N}$, as follows from \eqref{eq7a}. Moreover, $p^{c,*} \in \Ima(\vect{1}_{\widetilde{\mathcal{N}}})$ follows directly from \eqref{7f}. \hfill $\blacksquare$ \textbf{\emph{Proof of Proposition \ref{proposition1}}:} The optimization problem \eqref{eq3} includes a strictly convex, continuously differentiable cost function and a linear equality constraint. 
Thus, a point $(\overline{p}^M,\overline{d}^c)$ is a global minimum of \eqref{eq3} if and only if it satisfies the KKT conditions \cite{boyd2004convex}, \begin{subequations}\label{eq10_KKT} \begin{align} q_{k,j} \overline{p}^M_{k,j}&=-\lambda, k\in \mathcal{N}^G_j, j\in\mathcal{N}, \label{10a} \\ q_{k,j} \overline{d}^c_{k,j}&= \lambda, k\in \mathcal{N}^L_j, j\in\mathcal{N}, \label{10b} \\ 0 &= \sum_{j\in \mathcal{N}}(\sum_{k\in \mathcal{N}^G_j}\overline{p}^M_{k,j}-\sum_{k\in \mathcal{N}^L_j}\overline{d}^c_{k,j}-\sum_{k \in \mathcal{N}_j} p^L_{k,j}), \label{10c} \end{align} \end{subequations} for some constant $\lambda$. It will be shown below that, when the conditions in the proposition statement hold, \eqref{eq10_KKT} is satisfied when $(\overline{p}^M,\overline{d}^c) = (p^{M,*}, d^{c,*})$, where the equilibrium values follow from \eqref{eq7c}, \eqref{eq7d} and \eqref{eq7e}. First, note that from \eqref{7f} it follows that $p^{c,*}_{k,i} = p^{c,*}_{l,j}$ for all $k \in \mathcal{N}_i$, $l \in \mathcal{N}_j$ and all $i,j \in\widetilde{\mathcal{N}}$ and let their common value be $\overline{p}^{c,*}$. Then, let $\lambda = -\overline{p}^{c,*}$. In addition, note that at equilibrium $\omega^* = \vect{0}_{|\mathcal{N}|}$ from Lemma \ref{Lemma1}. Then from \eqref{eq7e}, \eqref{eq7i}, \eqref{10b} and $q_{k,j}h_{k,j}=1$, it follows that $d^{c,*}_{k,j}=-h_{k,j}\overline{p}^{c,*} = \lambda/q_{k,j} = \overline{d}^c_{k,j}$. Similarly, from \eqref{eq7c}, \eqref{eq7d}, \eqref{eq7i}, \eqref{10a} and $q_{k,j}(m_{k,j}+h_{k,j})=1$, it follows that $p^{M,*}_{k,j}= \overline{p}^{c,*} (m_{k,j}+h_{k,j}) = -\lambda/q_{k,j} = \overline{p}^M_{k,j}$. Furthermore, \eqref{10c} follows by multiplying \eqref{7g} by $\vect{1}_{|\widetilde{\mathcal{N}}|}$. Hence, the values $(\overline{p}^M,\overline{d}^c)=(p^{M,*},d^{c,*})$ satisfy the KKT conditions \eqref{eq10_KKT}. Therefore, the equilibrium values $p^{M,*}$ and $d^{c,*}$ define a global minimum for \eqref{eq3}.
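The KKT argument can be checked numerically. The sketch below (Python; the randomly drawn cost coefficients and load are illustrative, not the NPCC values) solves \eqref{eq10_KKT} in closed form for the quadratic costs of \eqref{eq3} and verifies that the resulting allocation balances power and equalizes the marginal costs $|q_{k,j}\overline{p}^M_{k,j}|$ and $|q_{k,j}\overline{d}^c_{k,j}|$ across all units.

```python
import numpy as np

# Numerical check of the KKT argument: for quadratic costs, the optimum
# of (3) equalizes marginal costs across all units. The coefficients and
# load below are illustrative, not the NPCC simulation values.
rng = np.random.default_rng(2)
q_gen = rng.uniform(50, 250, size=5)   # generation cost coefficients
q_dem = rng.uniform(50, 250, size=3)   # controllable-demand coefficients
P_L = 1.0                              # uncontrollable net load

# Stationarity (10a)-(10b): p = -lam / q_gen, d = lam / q_dem;
# the balance constraint (10c) then fixes the multiplier lam
lam = -P_L / (np.sum(1 / q_gen) + np.sum(1 / q_dem))
p = -lam / q_gen
d = lam / q_dem

assert np.isclose(np.sum(p) - np.sum(d), P_L)       # power balance (10c)
marginal = np.abs(np.concatenate([q_gen * p, q_dem * d]))
assert np.allclose(marginal, abs(lam))              # equal marginal costs
```

This equalization of marginal costs is exactly the behavior observed for schemes (ii)-(iv) in Fig. \ref{Fig_marginal}.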
\hfill $\blacksquare$ \textbf{\emph{Proof of Theorem \ref{theorem1}}:} We will use the dynamics in \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5}, to define a Lyapunov candidate function for \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5}. Firstly, we consider an equilibrium point $\alpha^*$ = $(\eta^*,\psi^*,\omega^*,x^*,p^{c,*})$ that satisfies \eqref{eq7}. Then, we let $V_F(\omega)=1/2\sum_{j\in{\mathcal{N}}}M_{j}(\omega_{j}-\omega^*_{j})^2$. The time derivative of $V_F$ along the trajectories of \eqref{eq1b} is given by \begin{multline}\label{eq10} \dot{V}_F=\sum_{j\in{\mathcal{N}}}(\omega_{j}-\omega^*_{j})(-\sum_{k \in \mathcal{N}_j} p^L_{k,j}+\sum_{k \in \mathcal{N}^G_j} p^M_{k,j} - \sum_{k \in \mathcal{N}^L_j} d^c_{k,j} \\- D_{j}\omega_{j} - \sum_{i \in \mathcal{N}^s_j} p_{ji} + \sum_{i \in \mathcal{N}^p_j} p_{ij}). \end{multline} Subtracting the product of $(\omega_{j}-\omega^*_{j})$ with each term in \eqref{eq7b} it follows that \begin{multline}\label{eq11} \dot{V}_F=\sum_{j\in{\mathcal{N}}}(\omega_{j}-\omega^*_{j})(\sum_{k \in \mathcal{N}^G_j}(p^M_{k,j}-p^{M,*}_{k,j}) + \sum_{k \in \mathcal{N}^L_j} (-d^c_{k,j}+d^{c,*}_{k,j}) \\-D_{j}(\omega_{j} - \omega^*_{j})) + \sum_{(i:j) \in \mathcal{E}}(p_{ij} - p^*_{ij})(\omega_{j}-\omega_i). \end{multline} Moreover, we let $V_C(p^c)=1/2 (p^c-p^{c,*})^T \Gamma (p^c-p^{c,*})$. Using \eqref{5b} and \eqref{7g} the time-derivative of $V_C$ is given by \begin{multline}\label{eq12} \dot{V}_C= (p^c-p^{c,*})^T((\widetilde{s}-\widetilde{s}^*)-(H \psi- H \psi^*)). \end{multline} Additionally, we define $V_P(\eta)=1/2\sum_{(i,j)\in{\mathcal{E}}}B_{ij}(\eta_{ij}-\eta^*_{ij})^2$. Using \eqref{eq1a} and \eqref{eq1c}, the time-derivative is given by \begin{equation}\label{eq13} \dot{V}_P \hspace{-1mm}=\hspace{-1.5mm}\sum_{(i,j)\in{\mathcal{E}}} \hspace{-1.5mm} B_{ij}(\eta_{ij}-\eta^*_{ij})(\omega_{i}-\omega_{j}) =\hspace{-1.5mm}\sum_{(i:j)\in{\mathcal{E}}}\hspace{-1.5mm}(p_{ij}-p^*_{ij})(\omega_{i}-\omega_{j}). 
\end{equation} Furthermore, consider $V_\psi(\psi)=1/2(\psi-\psi^{*})^T\widetilde{\Gamma}(\psi-\psi^{*})$ with time-derivative given by \eqref{5a} and \eqref{7f} as \begin{equation}\label{eq14} \dot{V}_\psi=( \psi - \psi^*)^T H^T (p^c - p^{c,*}). \end{equation} Finally, we let $V_M(x)= \sum_{k\in{\mathcal{N}^G_j}}\sum_{j\in{\mathcal{N}}}\tau_{k,j}/2m_{k,j}(x_{k,j}-x^{*}_{k,j})^2$ with time-derivative given by \eqref{eq2a}, \eqref{5c}, \eqref{eq7c} and \eqref{eq7i} as follows \begin{multline}\label{eq15} \dot{V}_M =\sum_{k\in{\mathcal{N}^G_j}}\sum_{j\in{\mathcal{N}}}[(x_{k,j}-x^{*}_{k,j})(-x_{k,j}+x^*_{k,j})]/m_{k,j}+ \\(x_{k,j}-x^{*}_{k,j})((p^c_{k,j}-p^{c,*}_{k,j})-(\omega_{j}-\omega^*_{j})). \end{multline} Based on the above, we define the function \begin{equation}\label{eq16} V(\omega,p^c,\eta,\psi,x)=V_F+V_P+V_C+V_\psi+V_M, \end{equation} which we aim to use as a Lyapunov candidate. Using \eqref{eq11} - \eqref{eq15} and substituting the values of $p^M, d^c$ and $\widetilde{s}$ from \eqref{eq2b}, \eqref{eq2c} and \eqref{eq_s}, it follows that the time derivative of $V$ satisfies \begin{multline} \label{eq18} \dot{V} \leq \sum_{j \in \mathcal{N}} (-D_j(\omega_{j}-\omega^*_{j})^2 - \sum_{k\in{\mathcal{N}^G_j}}(x_{k,j}-x^{*}_{k,j})^2/m_{k,j}) \\- \sum_{j \in \mathcal{N}} \sum_{k \in \mathcal{N}_j} h_{k,j}((p^c_{k,j} - p^{c,*}_{k,j}) - (\omega_j - \omega^*_j))^2 \leq 0. \end{multline} It is straightforward to verify that $V$ has a global minimum at $\alpha^*$ = $(\eta^*,\psi^*,\omega^*,x^*, p^{c,*})$. Hence, we consider the connected component $\Xi$ of the sublevel set $\{\alpha : V \leq \epsilon\}$ that contains $\alpha^*$, for some $\epsilon > 0$. From \eqref{eq18}, it follows that $\Xi$ is an invariant set for \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5}. We are hence in a position to apply LaSalle's Invariance principle \cite{khalil2002nonlinear} to \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5}, using $\Xi$ as an invariant set.
From LaSalle's Invariance principle, we deduce that solutions to \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5}, converge to the largest invariant set within $\Xi$ where $\dot{V} = 0$. Within this set, $\omega = \omega^* = \vect{0}_{|\mathcal{N}|}$, as follows from Lemma \ref{Lemma1}, and $(p^c, x) = (p^{c,*}, x^*)$. In addition, convergence of $(\omega, p^c)$ to $(\omega^*, p^{c,*})$ implies the convergence of $(\eta, \psi)$ to some constant values $(\bar{\eta}, \bar{\psi})$. Hence, solutions initiated within $\Xi$ converge to the set of equilibria of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5}. Noting that $\epsilon$ in the definition of $\Xi$ can be selected to be arbitrarily large demonstrates global convergence and completes the proof. \hfill $\blacksquare$ \ka{ \textbf{\emph{Proof of Proposition \ref{prop_privacy}}:} Consider an intelligent eavesdropper aiming to estimate $\widetilde{s}_{k,j}$ based on K1 and K2. There are two ways to proceed. The first is to try reversing the dynamics in \eqref{eq8}, i.e. use the relation $\widetilde{s} = \Gamma \dot{p}^c + H \psi - n$ associated with a particular unit. This approach cannot be applied since the value of $n$ is unknown to the eavesdropper. The second approach would be to use the equilibrium values and trajectories of the power command and $\psi$ to estimate $\widetilde{s}_{k,j}$, i.e. use $\widetilde{s}_{k,j} (t) = \widetilde{s}^*_{k,j} - \int_{t}^{\infty} \gamma_{k,j} \dot{p}^c_{k,j}(\tau) - \mathds{1}^T_{|\tilde{\mathcal{N}}|,f} H^T \psi(\tau) - n_{k,j}(\tau) d\tau$ where $f$ corresponds to the row associated with unit $k,j$ in $H^T \psi$. However, estimating the equilibrium value of $\widetilde{s}^*_{k,j}$ using $\widetilde{s}^* + n^* = H \psi^*$ requires knowledge of the equilibrium values of both $\psi$ and $n$, with the latter being unknown.
Hence, the \textit{Privacy-Preserving scheme} \eqref{eq8} guarantees the privacy of the prosumption profiles against eavesdroppers with knowledge of K1 and K2. \hfill $\blacksquare$ \textbf{\emph{Proof of Proposition \ref{prop_privacy_trajectory}}:} Consider an intelligent eavesdropper, aiming to estimate $\widetilde{s}_{k,j}$ based on K1, K2 and K3. Similarly to the proof of Proposition \ref{prop_privacy}, there are two ways to proceed for this. The first is to try reversing the dynamics in \eqref{eq8}, i.e. use $\widetilde{s} = \Gamma \dot{p}^c + H \psi - n$ associated with a particular unit. However, when the system is not at steady state, the value of $n$ is unknown to the eavesdropper. The second approach would be to use that $n = \vect{0}_{|\widetilde{\mathcal{N}}|}$ at steady state from K3 and calculate the vector $\psi$ using $\psi(t) = \psi(0) - \int_{0}^t \overline{\Gamma}^{-1}H^Tp^c(\tau) d \tau$. Although the value of $\psi(0)$ might be difficult to obtain, an eavesdropper could be interested in the deviation of supply so $\psi(0) = \vect{0}_{|\widetilde{\mathcal{E}}|}$ could be used. Then, the equilibrium value $\widetilde{s}^* = H^T \psi^*$ could be used to estimate $\widetilde{s}_{k,j}$, i.e. use $\widetilde{s}_{k,j} (t) = \widetilde{s}^*_{k,j} - \int_{t}^{\infty} \gamma_{k,j} \dot{p}^c_{k,j}(\tau) - \mathds{1}^T_{|\tilde{\mathcal{N}}|,f} H^T \psi(\tau) - n_{k,j}(\tau) d\tau$ where $f$ corresponds to the row associated with unit $k,j$ in $H^T \psi$. The implementation of the latter is impossible since the trajectory of $n_{k,j}$ is unknown. Hence, the \textit{Privacy-Preserving scheme} \eqref{eq8} guarantees the privacy of the prosumption trajectories. \hfill $\blacksquare$ \textbf{\emph{Proof of Proposition \ref{lemma_privacy}}:} The proof follows using similar arguments as in the proof of Proposition \ref{prop_privacy_trajectory}. 
In particular, since K2, K3, K4 include less information than K1, K2, K3, then the prosumption trajectory $\widetilde{s}_{k,j}$ is private as a special case of Proposition \ref{prop_privacy_trajectory}. In addition, $\widetilde{s}^* = H^T \psi^*$ cannot be used to estimate the steady state value of $\widetilde{s}_{k,j}$, since $\psi$ cannot be estimated from K2, K3, K4. Hence, the privacy of the prosumption profile $\widetilde{s}_{k,j}$ is guaranteed. \hfill $\blacksquare$} \textbf{\emph{Proof of Proposition \ref{proposition2}}:} The proof follows by noting that the equilibria of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8}, are identical to those of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5}. The latter follows from Design Condition \ref{assumption1}, which ensures that at steady state $n^* = \vect{0}_{|\widetilde{\mathcal{N}}|}$ since at equilibrium the frequency attains its nominal value and $\dot{p}^c = \vect{0}_{|\widetilde{\mathcal{N}}|}$. Therefore, the proof of Proposition \ref{proposition2} follows directly from the proof of Proposition \ref{proposition1}. 
\hfill $\blacksquare$ \textbf{\emph{Proof of Theorem \ref{theorem2}:}} First, we consider the function $V_n(p^c)=(p^c-p^{c,*})^T (\Gamma + \Xi) (p^c-p^{c,*})/2$, where $\Xi \in \mathbb{R}^{|\widetilde{\mathcal{N}}| \times |\widetilde{\mathcal{N}}|}$ is a diagonal, \an{positive-semidefinite} matrix containing the elements $\xi_{k,j}$ associated with Design Condition \ref{assumption1}(i), and note that its time derivative along the trajectories of \eqref{eq8b} is given by \begin{align} \dot{V}_n = &(p^c-p^{c,*})^T((\widetilde{s}-\widetilde{s}^*)-(H \psi- H \psi^*) \nonumber\\ &\an{+ (n^f - n^{f,*}) + \dot{\Xi}(p^c-p^{c,*})/2),} \nonumber \end{align} \an{where $\dot{\Xi} = \frac{d \Xi}{dt}$,} noting that $n^{f,*} = \vect{0}_{|\widetilde{\mathcal{N}}|}$ from Design Condition \ref{assumption1}(ii) and $\omega^* = \vect{0}_{|{\mathcal{N}}|}$, as follows from Lemma \ref{Lemma1} and the fact that the equilibria of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8} and \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq5} are identical. Then, we consider the function $\hat{V} = V_F+V_P+V_n+V_\psi+V_M$, where the terms $V_F, V_P, V_\psi, V_M$ follow from the proof of Theorem \ref{theorem1}. Compared with the time-derivative of $V$, given in \eqref{eq18}, $\dot{\hat{V}}$ has only \an{two extra terms given by $(p^c-p^{c,*})^T n^f$ and $(p^c-p^{c,*})^T \dot{\Xi} (p^c-p^{c,*})/2$.} Hence, it follows that the time derivative of $\hat{V}$ satisfies \begin{multline} \label{eq20} \dot{\hat{V}} \leq \sum_{j \in \mathcal{N}} \big(\sum_{k \in \mathcal{N}_j} [-h_{k,j}((p^c_{k,j} - p^{c,*}_{k,j}) - (\omega_j - \omega^*_j))^2 \\+ (p^c_{k,j}-p^{c,*}_{k,j}) n^f_{k,j} + \an{\dot{\xi}_{k,j} (p^c_{k,j}-p^{c,*}_{k,j})^2/2}] - D_j(\omega_{j}-\omega^*_{j})^2 \big). 
\end{multline} From \eqref{eq20} \an{and Design Condition \ref{assumption1}} it follows that \begin{multline} \label{eq21} \dot{\hat{V}} \leq \sum_{j \in \mathcal{N}} \sum_{k \in \mathcal{N}_j}[(2 h_{k,j} + \beta_{k,j}) |(p^c_{k,j}-p^{c,*}_{k,j})( \omega_j-\omega^*_j)| \\-(h_{k,j} \an{- \hat{\beta}_{k,j}/2})(p^c_{k,j}-p^{c,*}_{k,j})^2 -(h_{k,j}+D_j/|\mathcal{N}_j|)(\omega_{j}-\omega^*_{j})^2] \\ \leq 0, \end{multline} where the last inequality follows from the \an{LMI condition in} Design Condition \ref{assumption1}. The rest of the proof follows by arguments similar to those in the proof of Theorem \ref{theorem1}. In particular, $\hat{V}$ has a global minimum at $\alpha^*$ = $(\eta^*,\psi^*,\omega^*,x^*, p^{c,*})$. Hence, the connected component $\Xi$ of the sublevel set $\{\alpha : \hat{V} \leq \epsilon\}$ that contains $\alpha^*$ is positively invariant for \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8}, for some $\epsilon > 0$, as follows from \eqref{eq21}. By applying \cite[Theorem 4.2]{haddad2011nonlinear} to \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8}, using $\Xi$ as an invariant set, we deduce that solutions to \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8}, converge to the largest invariant set within $\Xi$ where $\dot{\hat{V}} = 0$. The characterisation of this set follows in analogy to the proof of Theorem \ref{theorem1}. Hence, solutions initiated within $\Xi$ converge to the set of equilibria of \eqref{eq1}, \eqref{eq2}, \eqref{eq_s}, \eqref{eq8}. Noting that $\epsilon$ in the definition of $\Xi$ can be selected to be arbitrarily large demonstrates global convergence and completes the proof. \hfill $\blacksquare$ \balance
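As a sanity check of the final step, note that for each unit the bracketed expression in \eqref{eq21} is a quadratic form in $(|p^c_{k,j}-p^{c,*}_{k,j}|, |\omega_j-\omega^*_j|)$, which is nonpositive exactly when the associated $2\times 2$ matrix is negative semidefinite. The sketch below (Python; the parameter values are illustrative and assumed to satisfy Design Condition \ref{assumption1}) verifies this numerically.

```python
import numpy as np

# Sanity check of the final inequality in (21): per unit, the bracketed
# terms form a quadratic in (|p^c - p^{c,*}|, |omega - omega^*|), which
# is nonpositive iff the associated 2x2 matrix is negative semidefinite.
# Parameter values are illustrative, assumed to satisfy Design Condition 1.
h, D_over_N = 1.0, 0.5
beta, beta_hat = 0.1, 0.1

M = np.array([[-(h - beta_hat / 2), (2 * h + beta) / 2],
              [(2 * h + beta) / 2, -(h + D_over_N)]])
lmi_holds = bool(np.all(np.linalg.eigvalsh(M) <= 1e-12))

# Monte-Carlo confirmation that the bracketed expression is <= 0
rng = np.random.default_rng(3)
x, y = rng.normal(size=10000), rng.normal(size=10000)
form = (-(h - beta_hat / 2) * x**2 + (2 * h + beta) * np.abs(x * y)
        - (h + D_over_N) * y**2)

assert lmi_holds
assert np.all(form <= 1e-9)
```

The check makes explicit the trade-off noted in Remark 1: larger $\beta_{k,j}$ or $\hat{\beta}_{k,j}$ enlarge the off-diagonal term or shrink the diagonal one, eventually violating negative semidefiniteness.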
\section{Introduction} At the crossroads of general relativity, elementary particle theory, phase transitions in the early Universe and cosmology, many interesting developments emerged in the early eighties of the past century in which topological defects were studied in connection with gravitational fields, see e.g. Kibble and Vilenkin \cite{Kibble,VilShel,Vachaspati2006}. At first, attention was mainly devoted to considering the topological defects as sources of gravitational fields, and later the backreaction of the gravitational field on the source was also investigated, although this is a much more difficult problem. As an attempt to address both issues in the clearest possible terms, we choose here to work in low dimensional universes with low dimensional kinks. Observational constraints on the evolution of the cosmic microwave background (CMB) seem to give importance mainly to strings, either primordial or solitonic. For example, convolutional neural networks are used in recent references, see \cite{Ciuca2020}, to study the cosmic string formation pattern in the CMB. Large-scale simulations of an expanding-universe string network are analyzed in \cite{JJ2015}. On the theoretical side, different types of theories containing strings can be considered and, in particular, the sigma model versions of the Abelian Higgs model studied in \cite{AGG15} offer a rich phenomenology. We postpone for a later investigation the analysis of self-gravitating cosmic strings of this type in $(2+1)$-pseudo-Riemannian Universes. In the meantime, we address this issue in linear gravity coupled to a real scalar field on a line. As is well known, general relativity in lower dimensions is quite peculiar \cite{Br88, Co77}. In three dimensions, Einstein equations give a viable theory, although spacetime turns out to be flat around the sources and these do not create a gravitational field, only conical singularities \cite{DJtH84}.
The situation is worse in 1+1 dimensions, where the Einstein-Hilbert action, being a topological invariant, the Euler class, has identically vanishing variations with respect to the metric. Hence, two-dimensional Einstein equations only make sense if the energy-momentum tensor of matter is zero everywhere. A sensible theory of gravity in 1+1 dimensions thus requires a different dynamics. Given that in two dimensions the only independent component of the Riemann curvature tensor is directly determined by the curvature scalar $R$, the simplest vacuum gravitational equation in sight is \beq R=\Lambda\label{eq:jtvac} \eeq where $\Lambda$ is a cosmological constant. Despite its simplicity, this equation can be derived from a variational principle only at the price of putting aside some of the tenets of Einstein general relativity. Thus, Jackiw \cite{Ja8485} found an action for (\ref{eq:jtvac}) which, although generally covariant, sacrifices the equivalence principle by depending not only on the metric but also on an auxiliary Lagrange multiplier field. Alternatively, Teitelboim \cite{Te8384} was able to find an action leading to (\ref{eq:jtvac}) which depends only on the metric, but this time giving up general covariance. In any case, Jackiw-Teitelboim gravity is a very remarkable theory with many interesting properties, see \cite{Br88} for a review. It is, for instance, directly related to bosonic string theory. This comes about because, although the classical Einstein-Hilbert action is trivial in two dimensions, when the path integral is performed in conformal gauge $g_{\mu\nu}=\eta_{\mu\nu} e^\phi$ there is a contribution from the conformal anomaly. This generates a quantum field theory for $\phi$ with Liouville action \cite{Po81}, but it turns out that, in this gauge, the Liouville equation is exactly (\ref{eq:jtvac}) \cite{Ja8485}.
Another noteworthy aspect of JT gravity is that it is equivalent to a gauge field theory with gauge group $SO(2,1)$: if $P_0, P_1$ and $J$ are the generators of the Lie algebra $so(2,1)$, one defines this theory by introducing a gauge field $A_\mu=e^a_\mu P_a+\omega_\mu J$, with $e_\mu^a$ the zweibein and $\omega_\mu$ the spin connection, and using a gauge invariant Lagrangian ${\cal L}=\eta_a F^a$ where $\eta_a$ are auxiliary fields in the co-adjoint representation. Once the Euler-Lagrange equations for these fields are taken into account, the action of the gauge theory coincides with the action for (\ref{eq:jtvac}) proposed by Jackiw, see \cite{FuKa87,Ja92}. On the other hand, (\ref{eq:jtvac}) can be generalized to take care of the presence of matter. This leads to the equation \beq R-\Lambda=8\pi G T\label{eq:jtmat}, \eeq where $T$ is the trace of the energy-momentum tensor, this new theory also being derivable from a local action \cite{To88,MaMoSiSt91}. It is to be noted that (\ref{eq:jtmat}) is the trace of Einstein equations in four dimensions (although with the wrong sign for the coupling) and, in fact, it can be argued in various ways that JT with matter is the closest relative to Einstein theory in 1+1 dimensions. For instance, conventional general relativity in higher dimensions can be thought of as a limit version of Brans-Dicke theory in which the kinetic term of the scalar field is multiplied by a constant which approaches infinity. As it has been shown in \cite{LeSa94}, performing the same limit in 1+1 dimensions leads to JT theory with sources. Another argument is that the action for (\ref{eq:jtmat}) can be recovered from the Einstein-Hilbert action in $D$ dimensions by taking the limit $D\rightarrow 2$ if, at the same time, the gravitational coupling $G_D$ is made to decrease at the same pace as the Einstein tensor does, see \cite{MaRo93}.
Apart from its relation to general relativity, JT gravity is also interesting because it is one of the simplest examples of the so-called dilaton gravity theories in two dimensions, see \cite{GrKuVa02} for a comprehensive review, and furthermore it is singular among these in that, as we have said, it admits a Lie algebra valued gauge version, while the gauge realization of more general dilatonic theories requires the use of central extensions of Lie algebras or W-algebras \cite{GrKuVa02,IkIz9394}. At any rate, the gravity theory of Jackiw and Teitelboim has attracted considerable attention and its phenomenology has been widely studied over the years, especially from the point of view of black hole physics, but also in other contexts such as stellar structure, gravitational waves, cosmology, etc; some references are \cite{BrHeTe86,MaShTa90,MaMoSiSt91,SiMa91,MoMa91,CaMi02} and others listed in \cite{GrKuVa02}. Scalar field theories also display some special features when they are formulated in two dimensions. If the potential has degenerate minima, these theories can harbor topological kinks, stable static solutions of the Euler-Lagrange equations whose energy is localized and finite, and which interpolate between different classical vacua. Due to Derrick's theorem \cite{De64}, for a relativistic theory with the canonical form of the Lagrangian, this is not possible in higher dimensions, where the existence of regular solitons requires the presence of gauge fields (exceptions occur, however, if the potential is allowed to include Lorentz invariant terms depending on the coordinates \cite{BaMeMe03}).
Thus, as was the case for JT gravity, topological kink solutions are another characteristic two-dimensional paradigm, one that has been intensely studied both by taking the kinks as classical field configurations and by treating them as quantum particles, see \cite{Ra82} for a pedagogical exposition, or \cite{nuestro1,nuestro2} and references therein for more recent results on the quantum aspect of kinks. Also, kink solutions have been found useful for a variety of physical applications in fields like cosmology, supersymmetric gauge theory or solid state physics, see for instance \cite{BaInLo04}, which includes a collection of references about these topics. It is thus interesting to merge these two facets of two-dimensional physics and investigate the coupling of JT gravity to scalar field theories allowing for the presence of kinks. In principle, a phenomenon to be considered is the formation of a black hole due to gravitational collapse of a lump of scalar matter which is subject to topologically non-trivial boundary conditions at infinity. Nevertheless, it seems that a kind of no-hair theorem operates here: once the black hole forms, the scalar field is forced by gravitational attraction to settle to a classical vacuum outside the horizon, and we end up with a profile in which the field takes two different constant values, corresponding to minima of the potential, at each side of the black hole. However, it is clear that for field configurations sitting asymptotically on two different vacua, the gravitational pull towards the center of the lump also produces an increase of the gradient potential energy of the scalar matter. This suggests that there should be some static solutions in which the gravitational attraction and the repulsive gradient energy are in equilibrium and a self-gravitating kink with a non-trivial profile forms. The purpose of this paper is to study several static configurations of this type.
This is in contrast with former research in a similar scenario, see References \cite{Stoet} and \cite{Gegen}, where the general structure of two-dimensional dilatonic gravity coupled to the sine-Gordon scalar field model is discussed. The organization of the paper is as follows. In Section 2 we describe the general setting and examine the interaction between JT gravity and the most prototypical system allowing for topological kinks, the $\phi^4$ theory with broken discrete ${\mathbb Z}_2$-symmetry, both for general gravitational coupling and in the weak coupling regime. Then, we present in Section 3 the analysis of another important case, this time with a non-polynomial potential, the sine-Gordon theory. The treatment given for the $\phi^4$ and sine-Gordon kinks reveals that the underlying supersymmetry present in both models has an important role in the procedure to find the gravitating solutions. Thus, in Section 4 we extend the approach to deal with general kinks related to unbroken supersymmetry and apply it to a hierarchy of scalar field theories of this type, which includes the $\phi^4$ and sine-Gordon ones as the first two members. Finally, we offer in Section 5 some concluding remarks. \section{Self-gravitating kinks in the $\phi^4$ theory} \subsection{Scalar kink configurations coupled to Jackiw-Teitelboim gravity} The purpose of this paper is to describe some classical static solutions which arise within the theory of a 1+1 dimensional real scalar field $\phi(t,x)$ when the dynamics is governed not only by the presence of a non-linear potential allowing for topologically non-trivial boundary conditions, but also by the existence of gravitational forces of the Jackiw-Teitelboim type. It turns out that, in order to define the action of such a system, one needs to introduce, apart from the $\phi(t,x)$ field itself and the metric tensor $g_{\mu\nu}(t,x)$, another auxiliary scalar field \cite{MaMoSiSt91}.
This extra field can take the form of a mere Lagrange multiplier $N(t,x)$ or, alternatively, of a field $\Psi(t,x)$ which couples to the curvature scalar in a way which closely resembles the analogous coupling of the dilatonic field in string theory. Here, we will choose this second possibility. Thus, the action is \bdm S=\frac{1}{16\pi G}\int d^2x \sqrt{-g}\left( \Psi R+\frac{1}{2} g^{\mu\nu}\partial_\mu\Psi\partial_\nu\Psi+\Lambda\right)+S_M \edm where $\Psi=e^{-2 \Phi}$ with $\Phi$ the dilaton field, and the matter action is \bdm S_M=-\int d^2x \sqrt{-g}\left(\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi+V(\phi)\right). \edm Upon variation of $S$, one arrives at two field equations \beqr \nabla_\mu \nabla^\mu \phi&=&\frac{dV}{d\phi}\label{eqjt1}\\ R-\Lambda&=&8\pi G T\label{eqjt2} \eeqr in which $\Psi$ decouples. Here $\nabla_\mu$ is the standard covariant derivative and $T$ is the trace of the energy-momentum tensor for the scalar field \bdm T_{\mu\nu}=\partial_\mu\phi\partial_\nu\phi-g_{\mu\nu}\left(\frac{1}{2} \partial_\sigma\phi\partial^\sigma\phi+V(\phi)\right). \edm With suitable boundary conditions, these equations are enough to completely determine the field $\phi$ and the metric. Once these are known, the auxiliary field is obtained from the remaining Euler-Lagrange equation \bdm \nabla_\mu \nabla^\mu \Psi=R . \edm In 1+1 dimensions the metric has three independent components, which are subject to arbitrary reparametrizations of the two coordinates. Thus, the metric can be put locally in a form which depends only on a single function of the coordinates. We will choose a gauge which is commonly used in problems related to black holes, see for instance \cite{MaMoSiSt91,MaShTa90}, although in our case the metric will be regular \bdm ds^2=-\alpha(t,x) dt^2+\frac{1}{\alpha(t,x)} dx^2.
\edm In this gauge, the Christoffel symbols are \bdm \Gamma^0_{00}=-\Gamma^1_{01}=\frac{1}{2\alpha}\frac{\partial\alpha}{\partial t},\hspace{0.8cm}\Gamma^0_{01}=-\Gamma^1_{11}=\frac{1}{2\alpha}\frac{\partial\alpha}{\partial x},\hspace{0.8cm}\Gamma^0_{11}=-\frac{1}{2\alpha^3}\frac{\partial\alpha}{\partial t},\hspace{0.8cm}\Gamma^1_{00}=\frac{1}{2}\alpha\frac{\partial\alpha}{\partial x} , \edm the only independent component of the Riemann tensor and the curvature scalar are \bdm 2\alpha R^0_{\;101}=R=\frac{\partial^2}{\partial t^2}\left(\frac{1}{\alpha}\right)-\frac{\partial^2\alpha}{\partial x^2} \edm and the diagonal elements of the energy-momentum tensor of matter are \beqr T_{00}&=&\frac{1}{2}\left(\frac{\partial\phi}{\partial t}\right)^2+\frac{1}{2} \alpha^2\left(\frac{\partial\phi}{\partial x}\right)^2+\alpha V(\phi)\label{t00}\\ T_{11}&=&\frac{1}{2\alpha^2}\left(\frac{\partial\phi}{\partial t}\right)^2+\frac{1}{2} \left(\frac{\partial\phi}{\partial x}\right)^2-\frac{1}{\alpha} V(\phi).\label{t11} \eeqr Thus the field equations (\ref{eqjt1})-(\ref{eqjt2}) take the form \beqr -\frac{\partial}{\partial t}\left(\frac{1}{\alpha}\frac{\partial\phi}{\partial t}\right)+\frac{\partial}{\partial x}\left(\alpha\frac{\partial\phi}{\partial x}\right)&=&\frac{dV}{d\phi}\label{eqjt1gauge}\\ \frac{\partial^2}{\partial t^2}\left(\frac{1}{\alpha}\right)-\frac{\partial^2\alpha}{\partial x^2}-\Lambda&=&-16\pi G V(\phi).\label{eqjt2gauge} \eeqr Henceforth, we will concentrate on static configurations where both $\alpha$ and $\phi$ depend only on the space-like coordinate $x$. Also, for simplicity, we will be concerned only with the attractive JT gravitational force and will set the cosmological constant to zero.
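As a consistency check of the curvature expression quoted above (our check, not part of the original derivation), $R$ can be recomputed symbolically from the Christoffel symbols of the metric $ds^2=-\alpha\,dt^2+dx^2/\alpha$, for instance with sympy:

```python
# Symbolic consistency check (ours) of the curvature scalar in the gauge
# ds^2 = -alpha dt^2 + dx^2/alpha:  R = d^2/dt^2 (1/alpha) - d^2 alpha/dx^2.
import sympy as sp

t, x = sp.symbols('t x')
alpha = sp.Function('alpha')(t, x)
coords = [t, x]
g = sp.diag(-alpha, 1 / alpha)
ginv = g.inv()

def Gamma(i, j, k):
    """Christoffel symbol Gamma^i_{jk} of the Levi-Civita connection."""
    return sum(ginv[i, l] * (sp.diff(g[l, j], coords[k])
                             + sp.diff(g[l, k], coords[j])
                             - sp.diff(g[j, k], coords[l])) for l in range(2)) / 2

def Ricci(j, k):
    """Ricci tensor R_{jk} from the standard contraction of the Riemann tensor."""
    return sum(sp.diff(Gamma(i, j, k), coords[i]) - sp.diff(Gamma(i, j, i), coords[k])
               + sum(Gamma(i, i, l) * Gamma(l, j, k)
                     - Gamma(i, k, l) * Gamma(l, j, i) for l in range(2))
               for i in range(2))

R = sum(ginv[i, j] * Ricci(i, j) for i in range(2) for j in range(2))
residual = sp.simplify(R - (sp.diff(1 / alpha, t, 2) - sp.diff(alpha, x, 2)))
print(residual)   # 0
```

Note that in this gauge $\sqrt{-g}=1$, so the contracted Christoffel symbols vanish and $R$ is linear in $\alpha$ and $1/\alpha$, which is precisely what makes this gauge convenient.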
The reason is that our main interest is to explore the behavior, once gravitation is put on, of some standard kink solutions that are usually studied on a 1+1 dimensional Minkowski setting, instead of its de Sitter or anti-de Sitter counterparts, which are by themselves non-static geometric backgrounds. Also, for concreteness, in the rest of this section we will focus on the simplest and most paradigmatic model exhibiting spontaneous symmetry breaking, the $\phi^4$ theory with potential \beq V(\phi)=\frac{\lambda}{4}(\phi^2-v^2)^2.\label{potencialphi4} \eeq Let us mention that in the static case another gauge for the metric is also useful, namely $ds^2=-\gamma(z) dt^2+dz^2$, a gauge that has been used, for instance, to study stellar structure in two dimensions in \cite{SiMa91}. Of course, we can recover this gauge from our present conventions by defining a spatial coordinate $z$ by $dz=\frac{dx}{\sqrt{\alpha(x)}}$ and then $\gamma(z)=\alpha(x(z))$. The theory with potential (\ref{potencialphi4}) has been much investigated, both from the point of view of elementary particle physics, where renormalizability in four dimensions makes it especially interesting, and from a condensed matter perspective, where it is the typical interaction appearing in Ginzburg-Landau functionals. At the level of perturbation theory, the model encodes the interaction of real scalar quanta of mass $M=\sqrt{2\lambda} v$ through third and fourth order vertices, but the dynamics has also room for interesting non-perturbative phenomena. In fact, as it is well known, the space of finite-energy configurations splits into four sectors, which are classified by the topological charge $Q~=~\frac{1}{2 v}\int_{-\infty}^\infty dx\, \partial_x\phi$ and are both classically and quantum mechanically disconnected.
There are two vacuum sectors in which the asymptotic values of the field are the same for positive and negative $x$, i.e., $\phi(t,\pm\infty)=v$ or $\phi(t,\pm\infty)=-v$, along with two other non-trivial topological sectors with mixed asymptotics given by $\phi(t,\pm\infty)=\pm v$ or $\phi(t,\pm\infty)=\mp v$. The decay of a configuration with non-vanishing topological charge to one of the two constant classical vacua $\phi=-v$ or $\phi=v$ is forbidden by the presence of infinite potential barriers among the sectors. Thus, in the topological sectors the energy is minimized by a kink or antikink, a static solution of the Euler-Lagrange equations which continuously interpolates between different vacua. Indeed, we see from (\ref{t00}) that in Minkowski spacetime the energy of a static configuration can be written in the form \bdm E[\phi]=\frac{1}{2}\int_{-\infty}^\infty\left[\frac{d\phi}{dx}\pm\sqrt{\frac{\lambda}{2}}(\phi^2-v^2)\right]^2\mp\sqrt{\frac{\lambda}{2}}\int_{\phi(-\infty)}^{\phi(\infty)} d\phi (\phi^2-v^2) \edm and this expression attains its minimum when the Bogomolny equation is satisfied \bdm \frac{d\phi}{dx}=\pm\sqrt{\frac{\lambda}{2}}(v^2-\phi^2). \edm The solutions corresponding to the topologically non-trivial asymptotic conditions are \bdm \phi(x)=\pm v\tanh\left(v\sqrt{\frac{\lambda}{2}}(x-x_0)\right), \edm where the plus and minus signs correspond, respectively, to the kink $\phi_K(x)$ and antikink $\phi_{AK}(x)$ configurations. Notice that the Bogomolny equation implies the Euler-Lagrange equation \bdm \frac{d^2\phi}{dx^2}=\mp \sqrt{2\lambda}\;\phi\;\frac{d\phi}{dx}=\lambda\phi(\phi^2-v^2) \edm and the kink and antikink are thus true solutions of the theory. Their energy is \bdm E[\phi_K]=E[\phi_{AK}]=\frac{2}{3}\sqrt{2\lambda}\;v^3. \edm For definiteness, to study the effect of Jackiw-Teitelboim gravity on configurations of this type we will focus on kinks rather than on antikinks, which are analogous.
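As a quick sanity check (ours, not part of the original analysis), the kink profile, the first-order Bogomolny equation, the second-order Euler-Lagrange equation and the quoted energy can all be verified symbolically:

```python
# Symbolic verification (ours) that phi_K(x) = v tanh(v sqrt(lam/2) x) obeys the
# Bogomolny and Euler-Lagrange equations and carries energy (2/3) sqrt(2 lam) v^3.
import sympy as sp

x, v, lam = sp.symbols('x v lam', positive=True)
phi = v * sp.tanh(v * sp.sqrt(lam / 2) * x)   # kink centered at x_0 = 0

# Bogomolny equation: dphi/dx = sqrt(lam/2) (v^2 - phi^2)
bog = sp.simplify(sp.diff(phi, x) - sp.sqrt(lam / 2) * (v**2 - phi**2))

# Euler-Lagrange equation: d^2 phi/dx^2 = lam phi (phi^2 - v^2)
el = sp.simplify(sp.diff(phi, x, 2) - lam * phi * (phi**2 - v**2))

# Rest energy for lam = 2, v = 1: the exact value (2/3) sqrt(2 lam) v^3 is 4/3;
# the integrand decays like sech^4, so truncating at |x| = 10 is harmless.
integrand = sp.diff(phi, x)**2 / 2 + lam / 4 * (phi**2 - v**2)**2
E = sp.Integral(integrand.subs({lam: 2, v: 1}), (x, -10, 10)).evalf()
print(bog, el, E)   # 0 0 1.33333...
```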
Thus, we have to solve the field equations (\ref{eqjt1gauge})-(\ref{eqjt2gauge}) in the static limit, with $\Lambda$ set to zero and boundary conditions $\phi(-\infty)=-v$, $\phi(+\infty)=v$. In two dimensions, the scalar field $\phi$ and the constants $v$ and $G$ are non-dimensional quantities, while the coupling $\lambda$ has mass dimension two. It is convenient to shift to a set of non-dimensional variables by redefining \bdm \phi=v\psi,\hspace{1cm}x=\sqrt{\frac{2}{\lambda}}\frac{y}{v},\hspace{1cm}8\pi G v^2=g, \edm so that the equations become \beqr \frac{d}{d y}\left(\alpha\frac{d\psi}{dy}\right)&=&2\psi(\psi^2-1)\label{eqk1a}\\ \frac{d^2\alpha}{dy^2}&=&g(\psi^2-1)^2.\label{eqk1b} \eeqr By translational invariance we can take the center of the kink, i.e., the point at which $\psi$ is zero, at the origin of the $y$ coordinate. Of course, in this case, given the $\mathbb{Z}_2$ symmetry of the potential and the form of the kink boundary conditions, the kink profile has to be an odd function of $y$. Then the equations imply that the metric coefficient $\alpha$ is even and, in order to completely fix the setup, we can define the timelike variable $t$ in such a way that it measures the proper time at the core of the kink. It thus follows that it is enough to solve (\ref{eqk1a})-(\ref{eqk1b}) for $y\geq 0$ and with boundary conditions at $y=0$ given by \beqr &&\alpha(0)=1,\hspace{2cm}\left.\frac{d\alpha}{dy}\right|_{y=0}=0,\label{boundk1a}\\ &&\psi(0)=0,\hspace{2cm}\left.\frac{d\psi}{dy}\right|_{y=0}=p,\label{boundk1b} \eeqr where the slope $p$ of the kink profile $\psi$ at the origin has to be chosen in such a way that the boundary condition at infinity \beq \psi(+\infty)=1\label{asympk1} \eeq is satisfied. Once the equations are solved, the solution can be used to compute several physical quantities characterizing the kink. Among these, the most relevant ones are the total rest energy, the energy density and the pressure distribution inside the kink.
The total energy is \bdm E[\phi]=\int_{-\infty}^{\infty} dx \sqrt{\alpha(x)}\; T^{00}(x)=v^3\sqrt{\frac{\lambda}{2}} E_{\rm norm}[\psi] \edm where the ``normalized'' non-dimensional rest energy $E_{\rm norm}[\psi]$ is \beq E_{\rm norm}[\psi]=\int_0^\infty dy \left\{\sqrt{\alpha}\left(\frac{d\psi}{dy}\right)^2+\frac{1}{\sqrt{\alpha}}(\psi^2-1)^2\right\}. \label{enerk1} \eeq Thus, the energy density of the kink, also in normalized form, is \bdm {\cal E}_{\rm norm}(y)=\sqrt{\alpha(y)}\left(\frac{d\psi}{dy}\right)^2+\frac{1}{\sqrt{\alpha(y)}}\left[\psi^2(y)-1\right]^2, \edm where, since we integrate only from zero to infinity, our normalization includes in this case a factor of two with respect to the true energy density. The pressure, on the other hand, is given by ${\cal P}=\frac{T^{11}}{\alpha}=\frac{\lambda v^4}{4} {\cal P}_{\rm norm}$, with the normalized pressure distribution being \beq {\cal P}_{\rm norm}(y)=\alpha(y) \left(\frac{d\psi}{dy}\right)^2-\left[\psi^2(y)-1\right]^2 \label{prek1}. \eeq \subsection{Some numerical results} If we solve the system (\ref{eqk1a})-(\ref{asympk1}) with vanishing $g$, the gravitational field decouples, the background geometry is 1+1 dimensional Minkowski spacetime and we recover the standard kink of $\phi^4$ theory, which in rescaled variables reads \bdm \alpha(y)=1\hspace{2cm} \psi(y)=\psi_K(y)=\tanh(y). \edm In particular, the slope $p$ at the origin is unity. Also, we can compute the normalized energy, energy density and pressure to find the results \bdm E_{\rm norm}[\psi_K]=\frac{4}{3}\hspace{2cm} {\cal E}_{\rm norm}^{\psi_K}(y)= 2\;\sech^4 (y)\hspace{2cm}{\cal P}_{\rm norm}^{\psi_K} (y)=0. \edm Now, we turn the gravitational interaction on. Since the system (\ref{eqk1a})-(\ref{eqk1b}) cannot be analytically solved for arbitrary values of $g>0$, we have to resort to approximate methods.
Thus, after analyzing the behavior of the fields both inside the kink core and at large distances, we have to carry out a numerical integration of the field equations, seeking a consistent interpolation between these regions. The boundary conditions (\ref{boundk1a})-(\ref{boundk1b}) imply that the metric coefficient near the origin has the form $\alpha(y)\simeq 1+\frac{1}{2} g y^2$ and substitution in (\ref{eqk1a}) gives, to dominant order, the differential equation \bdm \frac{d^2\psi}{dy^2}+g y \frac{d\psi}{dy} +2\psi=0\hspace{2cm}y\simeq 0. \edm With the change of variables $2 z=-g y^2$, this is a Kummer equation $z\frac{d^2\psi}{dz^2}+(\frac{1}{2}-z)\frac{d\psi}{dz}-\frac{1}{g}\psi=0$, which, along with (\ref{boundk1b}), determines the form of $\psi$ near the center of the kink as \bdm \psi(y)= p y\, {}_1 \! F_1(\frac{1}{2}+\frac{1}{g}; \frac{3}{2}; -\frac{g y ^2}{2})\simeq p y-\frac{1}{6} p(g+2) y^3+\ldots\hspace{2cm} y\simeq 0 \edm where ${}_1 \! F_1(a;b;z)$ is the confluent hypergeometric function of the first kind. In the asymptotic region, on the other hand, we write $\psi(y)=1-\xi(y)$ and work at leading order in $\xi$. Equation (\ref{eqk1b}) and boundary condition (\ref{asympk1}) demand that \bdm \alpha(y)=q y+r\hspace{2cm}y\rightarrow\infty \edm for some coefficients $q$ and $r$. Using this expression in (\ref{eqk1a}) leads to the differential equation \[ qy \frac{d^2 \xi}{dy^2}+q\frac{d\xi}{dy}-4\xi=0 \hspace{0.5cm} y\rightarrow \infty \] Therefore, the solution is \[ \xi(y)=a K_0 \Big( \frac{4\sqrt{y}}{\sqrt{q}} \Big) + b I_0 \Big( \frac{4\sqrt{y}}{\sqrt{q}} \Big) \approx \frac{1}{2\sqrt{2}} \Big( \frac{q}{y} \Big)^{\frac{1}{4}} \Big( a \sqrt{\pi} e^{-\frac{4\sqrt{y}}{\sqrt{q}}} + \frac{b}{\sqrt{\pi}} e^{\frac{4\sqrt{y}}{\sqrt{q}}} \Big) \hspace{0.5cm} y\rightarrow \infty .
\] Hence, there should be a solution to (\ref{eqk1a})-(\ref{eqk1b}) on the half-line $[0,+\infty)$ with this asymptotic behavior, and a critical value $p=p_{\rm crit}$ of the slope at the origin such that the coefficient $b$ vanishes, thus fulfilling the proper boundary condition (\ref{asympk1}). The task at hand is to integrate numerically the system (\ref{eqk1a})-(\ref{eqk1b}) while varying the value of $p$ until the critical value is attained. To this end, we use a shooting method, starting from $y=0$, integrating the equations for positive values of $y$ and then extending the solution to negative values by means of the known parities of $\psi(y)$ and $\alpha(y)$. We look for the value of the slope at the origin which gives a monotonically increasing $\psi(y)$ that asymptotically reaches $\psi=1$. For all values of the coupling, we obtain convergence to a convincing kink profile in which $\psi$ takes a constant value very close to one for an ample interval, with width of order $\Delta y\simeq 10$ for small $g$ up to $\Delta y\simeq 40$ for higher $g$, much larger than the size of the core of the kink. We thus expect that the $b$ coefficient multiplying the term which makes $\psi$ diverge exponentially from the correct asymptotic value is very small and the solutions found are accurate. We have performed this procedure for different values of the coupling $g$ and we show the results in Figure \ref{fig1}, where the profiles for $\psi(y)$ and $\alpha(y)$ are displayed, and in Figure \ref{fig2}, which shows the normalized energy density and pressure distributions of the kinks.
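The shooting procedure can be sketched with a minimal script (our illustration, assuming scipy is available; the integration range, tolerances and overshoot criterion are our own choices, not the authors' actual code): a trial slope $p$ is classified by whether $\psi$ overshoots $\psi=1$, and one bisects on $p$.

```python
# Minimal shooting-method sketch (ours) for the system
#   (alpha psi')' = 2 psi (psi^2 - 1),   alpha'' = g (psi^2 - 1)^2,
# with alpha(0) = 1, alpha'(0) = 0, psi(0) = 0, psi'(0) = p.
from scipy.integrate import solve_ivp

def rhs(y, u, g):
    psi, dpsi, alpha, dalpha = u
    # (alpha psi')' = alpha psi'' + alpha' psi'
    return [dpsi, (2.0 * psi * (psi**2 - 1.0) - dalpha * dpsi) / alpha,
            dalpha, g * (psi**2 - 1.0)**2]

def overshoots(p, g, y_max=25.0):
    """True if the trial slope p makes psi cross psi = 1 before y_max."""
    hit = lambda y, u, g: u[0] - 1.0
    hit.terminal, hit.direction = True, 1
    sol = solve_ivp(rhs, (0.0, y_max), [0.0, p, 1.0, 0.0], args=(g,),
                    events=hit, rtol=1e-8, atol=1e-10)
    return sol.t_events[0].size > 0

def critical_slope(g, lo=0.5, hi=5.0, iters=40):
    """Bisect on p: below p_crit psi turns back, above it psi overshoots."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if overshoots(mid, g) else (mid, hi)
    return 0.5 * (lo + hi)

p0 = critical_slope(0.0)   # flat space: the exact kink has slope p = 1
p1 = critical_slope(1.0)   # should land close to the tabulated p_crit = 1.099
print(p0, p1)
```

For $g=0$ the bisection reproduces the exact flat-space slope $p=1$, and for $g=1$ it approaches the value $p_{\rm crit}\simeq 1.099$ reported in Table \ref{table1}.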
\begin{figure} \centering \includegraphics[width=8cm]{figura1.eps} \hspace{1cm} \includegraphics[width=8cm]{figura2.eps} \caption{$\psi$ and $\alpha$ profiles of the $\phi^4$ kink for several strengths of the gravitational interaction.} \label{fig1} \end{figure} \begin{figure} \centering \includegraphics[width=8cm]{figura3.eps} \hspace{1cm} \includegraphics[width=8cm]{figura4.eps} \caption{The normalized energy density and pressure distribution of the $\phi^4$ kink for several strengths of the gravitational interaction.} \label{fig2} \end{figure} Also, we list in Table \ref{table1} the values of $p_{\rm crit}$; the normalized energy; the maxima of the energy density and pressure distributions, which, of course, are located at the center of the kink; the width of the kink, which we have conventionally defined as the value of the $y$ coordinate containing 95\% of the total energy; the slope $q$ and intercept $r$ of the asymptotic $\alpha$ profile and, finally, a parameter $\kappa$ whose meaning is explained below.
\begin{table}[t] \begin{center} \begin{tabular}{||c|c|c|c|c|c|c|c|c||} \hline \multicolumn{9}{||c||}{Numerical results for $\phi^4$ kinks}\\ \hline $g$&$p_{\rm crit}$&$E_{\rm norm}$&${\cal E}_{\rm norm}(0)$&${\cal P}_{\rm norm}(0)$&width&$q$&$r$&$\kappa$\\ \hline 0.00&1.000&1.333&2.000&0.000&1.131&0.000&1.000&1.000\\ \hline 0.10&1.011&1.333&2.022&0.022&1.129&0.066&0.971&0.996\\ \hline 0.50&1.052&1.335&2.107&0.107&1.123&0.327&0.854&0.980\\ \hline 1.00&1.099&1.338&2.208&0.208&1.117&0.645&0.710&0.963\\ \hline 3.00&1.254&1.361&2.574&0.574&1.099&1.855&0.132&0.909\\ \hline 5.00&1.381&1.389&2.907&0.907&1.084&3.011&-0.472&0.867\\ \hline 10.00&1.634&1.460&3.670&1.670&1.050&5.780&-2.143&0.792\\ \hline 20.00&2.016&1.592&5.066&3.066&0.987&11.074&-6.233&0.696\\ \hline 30.00&2.320&1.709&6.382&4.382&0.932&16.214&-11.334&0.633\\ \hline 40.00&2.580&1.815&7.656&5.656&0.884&21.269&-17.425&0.586\\ \hline 50.00&2.811&1.912&8.903&6.903&0.842&26.267&-24.460&0.549\\ \hline \end{tabular} \end{center} \caption{Some parameters characterizing the $\phi^4$ kink for different values of $g$} \label{table1} \end{table} As one can see from the figures, the rate at which the scalar field varies behaves differently near the origin and away from it. In the central region, the kink profile is steeper for higher $g$ but, as $y$ increases, the kinks of high gravitational coupling approach the minimum of the potential more slowly than those with small $g$. The energy density and the pressure decrease quickly to zero from their maxima at the core of the kink, and both the values of these maxima and the rate of decrease are greater as the parameter $g$ increases. In fact, as can be checked from the table, the total energy of the kink increases with $g$, while the trend for its size variation goes in the opposite sense: the more the coupling $g$ grows, the more the energy is concentrated around the origin, as is to be expected under the influence of a gravitational field.
Indeed, as is shown in the figures, the metric distortion created by the kink at its core is very pronounced, except for very small $g$. On the other hand, although as $y$ increases and $\alpha(y)$ approaches its linear regime the curvature tends to vanish, there is, however, a residual gravitational force at great distances. Mechanical energy conservation for a particle of rescaled non-dimensional mass $m$ falling from rest at $y=y_0$ under the gravity of the kink implies that velocity and acceleration are \bdm v(y)=\alpha(y)\sqrt{1-\frac{\alpha(y)}{\alpha(y_0)}},\hspace{2cm}a(y)=\alpha(y)\left(1-\frac{3\alpha^2(y)}{2\alpha(y_0)}\right)\left.\frac{d\alpha}{dy}\right|_{y}, \edm and thus the gravitational force on a particle at rest at position $y$ is \bdm F\equiv m a=-\frac{1}{2} m\alpha(y)\left.\frac{d\alpha}{dy}\right|_y\rightarrow -\frac{1}{2} m q y\hspace{1cm} {\rm for}\ y\rightarrow\infty, \edm where the values of $q$ can be read from the table. Regarding the interpretation of the last column in Table \ref{table1}, let us recall that for a point particle of mass $M$ located at the origin, with the stipulation $\alpha(0)=1$ that we are sticking to, the metric coefficient, in terms of the dimensional $x$ coordinate, is $\alpha(x)=4\pi G M|x|+1$ \cite{SiMa91}. Thus, transforming this formula to non-dimensional variables, we can write the relationship between the point-particle mass and the slope of the metric coefficient at infinity as \bdm \left.\frac{d\alpha}{dy}\right|_{y=\infty}=\frac{4\sqrt{2}\pi G M}{v\sqrt{\lambda}}. \edm For the kink, on the other hand, from (\ref{eqk1b}) and (\ref{boundk1a}), we have \bdm \left.\frac{d\alpha}{dy}\right|_{y=\infty}=8\pi G v^2 I[\psi]\hspace{2cm} I[\psi]=\int_0^\infty dy (\psi^2-1)^2. \edm Thus, from the point of view of the long distance analysis, in which the kink appears as a point particle, it makes sense to assign to the kink a gravitational mass \bdm M_{\rm GRAV}=2 v^3 \sqrt{\frac{\lambda}{2}} I[\psi].
\edm Nevertheless, looking at (\ref{enerk1}) we see that the inertial mass is different: \bdm M_{\rm INER}=E[\psi]=v^3\sqrt{\frac{\lambda}{2}} E_{\rm norm}[\psi]. \edm The last column in Table \ref{table1} accounts for this difference by means of the parameter $\kappa$, which is, precisely, the quotient between $M_{\rm GRAV}$ and $M_{\rm INER}$, i.e., $\kappa=2 \frac{I[\psi]}{E_{\rm norm}[\psi]}$. The discrepancy between both mass values stems from the fact that, in contrast with the case of a true point particle, for an extended object such as the kink the two diagonal elements $T_{00}$ and $T_{11}$ of the energy-momentum tensor, i.e. not only energy density but also pressure, source JT gravity, having thus an effect on the long distance metric. The situation is analogous to what happens in general relativity in 3+1 dimensions, where the exterior metric of a ball of perfect fluid in equilibrium is the Schwarzschild solution with a mass parameter $m$ which differs from the mass $m^\prime$ obtained by integrating the energy density of the fluid in the ball, see for instance \cite{Carr04}, an effect which is interpreted as due to the existence of a binding energy originated by the attractive gravitational forces in the interior of the fluid, which are, in fact, compensated by the pressure. For the standard kink, $\frac{d\psi_K}{dy}=1-\psi_K^2$ and $\alpha(y)=1$, thus $E_{\rm norm}[\psi_K]=2 I[\psi_K]$ and $\kappa=1$, consistently with the fact that for $g\rightarrow 0$ internal gravitational self-energy and pressure disappear. For other values of $g$, the numerical computation of the integrals gives the results collected in the table. \subsection{The limit of small gravitational coupling: perturbative analysis} We have demonstrated the action of JT gravity on the kinks of the $\phi^4$ theory, allowing for the possibility that the gravitational coupling can be, in principle, arbitrarily high.
Of course, this is in contrast with the usual point of view adopted when this theory, or other quantum field theories, are investigated, namely that, compared with the other interactions present in the system, gravity is so weak that its effects can be completely disregarded. This approach has brought out a wealth of remarkable results on kinks and other topological defects. Thus, in the event that one is interested in reintroducing gravity in the picture, it is quite reasonable to shift the gravitational coupling only from zero to a tiny value. In this subsection, we will adopt this perspective by taking $g$ small enough to make a linear perturbative analysis of the kink equations feasible. As we know, for $g=0$ the solution is the standard kink and the spacetime is Minkowski. Hence, we shall assume that \beqr \psi(y)&=&\psi_K(y)+g\varphi(y)+O(g^2)\label{expa}\\ \alpha(y)&=&1+g \beta(y)+O(g^2)\label{expb} \eeqr and we will substitute these expressions in (\ref{eqk1a})-(\ref{eqk1b}) keeping only the linear terms in the coupling. Given that $\alpha(y)$ increases monotonically as $y\rightarrow\infty$, perturbation theory breaks down at great distances, but we see from Figure 1 that if $g$ is small enough there is a wide interval around the core of the kink in which $\alpha(y)\simeq 1$ and, at least in this region, the perturbative treatment should give a good approximation of the complete solution. With this point in mind and plugging the expansions (\ref{expa}) and (\ref{expb}) into equations (\ref{eqk1a}) and (\ref{eqk1b}), we obtain the following linear equations \beqr {\cal H}_y \varphi&=&\frac{d}{dy}\left(\beta\frac{d\psi_K}{dy}\right)\label{eqk1perta}\\ \frac{d^2\beta}{dy^2}&=&\left(\psi_K^2-1\right)^2\label{eqk1pertb} \eeqr where ${\cal H}_y$ is the Hessian operator in the background of the standard kink solution, whose form is \bdm {\cal H}_y=-\frac{d^2}{dy^2}+U(y)\hspace{2cm}U(y)=2\left(3\psi_K^2(y)-1\right).
\edm The boundary conditions at the origin (\ref{boundk1a})-(\ref{boundk1b}), on the other hand, become in this regime \beqr \beta(0)=0\hspace{2cm}\left.\frac{d\beta}{dy}\right|_{y=0}=0\label{boundk1perta}\\ \varphi(0)=0\hspace{2cm}\left.\frac{d\varphi}{dy}\right|_{y=0}=s\label{boundk1pertb} \eeqr with the functions $\beta(y)$ and $\varphi(y)$ being, respectively, even and odd in $y$. The value of $s$ has to be chosen in such a way that the asymptotic behavior of $\varphi(y)$ is consistent with (\ref{asympk1}), which implies \beq \varphi(\infty)=0\label{asympk1pert}. \eeq Equation (\ref{eqk1pertb}) combined with the boundary condition (\ref{boundk1perta}) allows for a direct computation of the perturbation of the metric coefficient \beq \beta(y)=\int_0^y dz\int_0^z du\; {\rm sech}^4(u)=\frac{1}{6}\left\{1+4\log\left(\cosh(y)\right)-\sech^2(y)\right\}. \eeq As it should be, this is a function interpolating between a parabola at the core of the kink and a straight line at large distances, in fact \beqrn \beta(y)&\simeq&\frac{y^2}{2}\hspace{5cm}y\simeq 0\\ \beta(y)&\simeq& \frac{2}{3} y+\frac{1-4\log 2}{6}\hspace{2.3cm}y\rightarrow \infty \eeqrn Consequently, this computation provides us with exact values, in the limit of small $g$, for the $q$ and $r$ coefficients that we had computed numerically in the previous subsection. The other equation, (\ref{eqk1perta}), is a Schr\"{o}dinger equation of non-homogeneous type. The potential $U(y)$ of the Schr\"{o}dinger operator is a symmetric well \bdm U(y)=4-6\;\sech^2(y) \edm which displays a minimum at the origin $U(0)=-2$ and attains a flat profile $U(y)\rightarrow 4$ when $|y|\rightarrow\infty$, see Figure \ref{fig3}. 
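The closed form of $\beta(y)$ just obtained can be cross-checked numerically against the double integral of ${\rm sech}^4$ from which it was derived. The following sketch is purely illustrative (not the code used for the paper's numerics) and uses a simple trapezoidal rule:

```python
import math

def sech(u):
    return 1.0 / math.cosh(u)

def beta_closed(y):
    # beta(y) = (1/6) * (1 + 4 log cosh(y) - sech^2(y))
    return (1.0 + 4.0 * math.log(math.cosh(y)) - sech(y)**2) / 6.0

def beta_numeric(y, n=4000):
    # double integral int_0^y dz int_0^z du sech^4(u), trapezoidal rule
    h = y / n
    inner = 0.0          # running value of int_0^z sech^4(u) du
    total = 0.0          # running value of int_0^y inner(z) dz
    prev_f, prev_inner = sech(0.0)**4, 0.0
    for k in range(1, n + 1):
        z = k * h
        f = sech(z)**4
        inner += 0.5 * (prev_f + f) * h
        total += 0.5 * (prev_inner + inner) * h
        prev_f, prev_inner = f, inner
    return total

print(abs(beta_numeric(4.0) - beta_closed(4.0)))   # small: the two expressions agree
print(beta_closed(10.0) - beta_closed(9.0))        # slope ~ 2/3 in the linear regime
```

The second printed quantity confirms the asymptotic slope $2/3$ quoted above for $y\rightarrow\infty$.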
The source term, on the other hand, is \beq {\cal R}(y)=\frac{d}{dy}\left(\beta\frac{d\psi_K}{dy}\right)=\frac{1}{3}\left\{1-4\log\left(\cosh(y)\right)+2 \sech^2(y)\right\}\sech^2(y)\tanh(y),\label{rk1} \eeq an odd function whose behavior near the origin and at infinity is of the form \beqr {\cal R}(y)&\simeq&y\hspace{3.8cm}y\simeq 0\\ {\cal R}(y)&\simeq&-\frac{16}{3} y e^{-2|y|}\hspace{2cm}|y|\rightarrow \infty\label{rinfty} \eeqr and whose full profile is also shown in Figure \ref{fig3}. \begin{figure}[t] \centering \includegraphics[height=5cm]{figura5.eps} \hspace{0.5cm} \includegraphics[height=5cm]{figura6.eps} \caption{The potential well $U(y)$ and the source ${\cal R}(y)$ of the inhomogeneous Schr\"{o}dinger equation for the $\phi^4$ kink.} \label{fig3} \end{figure} \begin{figure}[b] \centering \includegraphics[height=5cm]{figura6a.eps} \hspace{1cm} \includegraphics[height=5cm]{figura6b.eps} \caption{The metric perturbation $\beta(y)$ and the response $\varphi(y)$ for the $\phi^4$ kink, this last one found by numerical methods.} \label{fig3b} \end{figure} A numerical integration of the non-homogeneous Schr\"{o}dinger equation gives the graph of $\varphi(y)$ shown in Figure \ref{fig3b}, and it turns out that the value of $s$ leading to a good convergence at infinity is $s\simeq 0.111$. However, the perturbative approach allows us to go beyond the numerical methods and to solve exactly (\ref{eqk1perta}) in addition to (\ref{eqk1pertb}). Indeed, given the odd parity of $\varphi(y)$, it is enough to work out the solution for $y>0$. Thus, we can write \beq \varphi(y)=\int_0^\infty dz G(y,z) {\cal R}(z)\label{greenphi} \eeq where $G(y,z)$ is a Green function in $[0,+\infty)$ of the Schr\"{o}dinger operator, \beq \left[-\frac{d^2}{d y^2}+U(y)\right]G(y,z)=\delta(y-z)\label{eqgreenk1}, \eeq with suitable boundary conditions to ensure that $\varphi(y)$ behaves at the origin and infinity as required by (\ref{boundk1pertb}) and (\ref{asympk1pert}).
Notice that, although the Hessian operator of the standard non-gravitational kink has a normalizable zero mode due to translational invariance, this mode, being proportional to $\frac{d\psi_K}{dy}$, has no nodes. Thus, it satisfies the boundary condition at infinity, but not (\ref{boundk1pertb}). This fact guarantees that the Schr\"{o}dinger operator is invertible within the space of functions we are interested in, and therefore that the Green function we are seeking exists. On the other hand, it is precisely by means of zero modes that this Green function can be constructed, see for instance \cite{Col77}. In fact, it is not difficult to check that the general solution of (\ref{eqgreenk1}) is \beq G(y,z)=a(z)\rho(y)+b(z) \chi(y)+\frac{1}{W} \theta(y-z) \left[\chi(z)\rho(y)-\rho(z)\chi(y)\right]\label{gengreen} \eeq where $\rho(y)$ and $\chi(y)$ are, respectively, the even, normalizable, and odd, non-normalizable, zero modes of the Schr\"{o}dinger operator, $a(z)$ and $b(z)$ are arbitrary functions, $\theta(z)$ is the Heaviside step function and $W$ is the Wronskian of the zero modes: \bdm W=\rho \frac{d\chi}{dy}-\frac{d\rho}{dy}\chi. \edm The existence of two eigenmodes with zero eigenvalue and with the said features of normalizability and parity can be easily verified by taking advantage of the factorization of the Hessian operator, \bdm {\cal H}_y=D^\dagger_y D_y\hspace{2cm} D_y=\frac{d}{dy}+2 \tanh(y), \edm which implies that the normalizable mode comes about as the solution of \bdm D_y\rho(y)=0\Rightarrow \rho(y)=C_\rho\exp\left[-2\int_0^y du \tanh(u)\right], \edm while the non-normalizable one follows from \bdm D_y^\dagger (D_y\chi)=0\Rightarrow D_y\chi=C_\chi\exp\left[2\int_0^y du \tanh(u)\right] \edm with $C_\rho$ and $C_\chi$ arbitrary constants. Thus, the results are \beqr \rho(y)&=&\sech^2(y)\label{mod1k1}\\ \chi(y)&=&3y\;\sech^2(y)+\left(3+2 \cosh^2(y)\right) \tanh(y)\label{mod2k1}, \eeqr where we have picked out a normalization such that $W=8$.
Now, since $\rho(0)\neq 0$ and $\chi(0)=0$, we choose $a(z)=0$ in (\ref{gengreen}) in order that $G(0,z)=0$ for positive $z$, thus ensuring that $\varphi(0)=0$ in accordance with (\ref{boundk1pertb}). As one can see, the remaining condition, (\ref{asympk1pert}), is satisfied by choosing $b(z)=\frac{1}{W}\rho(z)$. This leaves us with the integral formula \beq \varphi(y)=\frac{1}{W}\rho(y)\int_0^y dz \chi(z) {\cal R}(z)+\frac{1}{W}\chi(y)\int_y^\infty dz \rho(z) {\cal R}(z).\label{phi} \eeq In this expression, although $\chi(y)$ is non-normalizable, the second term vanishes by construction when $y$ approaches infinity, whereas by virtue of the asymptotic behavior of the two zero modes of the kink \bdm \rho(y)\simeq 4 e^{-2y},\hspace{1cm}\chi(y)\simeq\frac{1}{2} e^{2y}\hspace{1cm}{\rm for}\hspace{1cm} y\rightarrow \infty, \edm and of ${\cal R}(y)$, written in (\ref{rinfty}) above, the first one goes as $y^2 e^{-2y}$ in the same limit. Thus $\varphi(y)$ vanishes for $y\rightarrow\infty$, as it should. It turns out that, after plugging (\ref{rk1}) and (\ref{mod1k1})-(\ref{mod2k1}) into (\ref{phi}), the integrals can be done exactly, and the result is \bdm \varphi(y)=\frac{1}{6}\left(\frac{2}{3} \tanh(y)-{\rm Li}_2(-e^{-2y})-y(y-2\log 2)-\frac{\pi^2}{12}\right)\sech^2(y). \edm Here ${\rm Li}_n(y)$ is the polylogarithm function, which can be written as ${\rm Li}_n(y)=y \Phi(y,n,1)$ where $\Phi(y,n,v)$ is the Lerch transcendent function, see \cite{GrRy80} for more details about these functions. It is possible to check that the solution given coincides with the function drawn in Figure \ref{fig3b} and, in particular, the derivative at $y=0$ gives exactly $s=\frac{1}{9}$ as we had found numerically.
Apart from that, other traits of the solution are that, as we can see in the figure, for $y\rightarrow +\infty$ the function reaches zero from below after an oscillatory regime near the origin, with a maximum value $\varphi=0.0351$ at $y=0.5176$ and a subsequent minimum at $y=1.976$ at which $\varphi=-0.0164$. The zero between these two extrema lies at $y=1.244$. We have thus developed a successful perturbative approach for $g\rightarrow 0$. Let us now briefly comment on the opposite regime $g\rightarrow \infty$. In principle, one can proceed in the same way, defining $g=\frac{1}{g^\prime}$, obtaining the solution $\tilde{\psi}_K(y), \tilde{\alpha}_K(y)$ valid for $g^\prime=0$, and then perturbing this solution for small $g^\prime$, i.e. large $g$, as \bdm \psi(y)=\tilde{\psi}_K(y)+g^\prime \varphi(y)+\ldots,\hspace{1.5cm}\alpha(y)=\tilde{\alpha}_K(y)+g^\prime \beta(y)+\ldots . \edm Now, in order that the left hand side of equation (\ref{eqk1b}) remains finite, the kink solution for $g^\prime=0$ has to be singular, $\tilde{\psi}_K(y)={\rm sgn}(y)$, and, given that the kink core is now concentrated at $y=0$, the metric should be that of a point particle, $\tilde{\alpha}_K(y)=1+ D |y|$, where $D$ is a constant which reflects the gravitational effect of the singular kink. Substituting this in (\ref{eqk1a})-(\ref{eqk1b}) and working at leading order in $g^\prime$, we obtain a system of equations similar to (\ref{eqk1perta})-(\ref{eqk1pertb}) \beqrn -\frac{d}{dy}\left(\tilde{\alpha}_K\frac{d\varphi}{dy}\right)+3\left(2\tilde{\psi}_K^2-1\right)\varphi&=&\frac{d}{dy}\left(\beta \frac{d \tilde{\psi}_K}{dy}\right)\\ \frac{d^2\beta}{dy^2}&=&4\tilde{\psi}_K^2\varphi^2, \eeqrn but now involving Dirac delta singularities due to the non-smooth character of both the unperturbed kink and the background metric. 
Also, from the figure \ref{fig1}, one should expect that the $D$ constant is divergent, and some regularization procedure, possibly involving a certain degree of arbitrariness, should accompany the present approach. We see thus that the case $g\rightarrow\infty$ is not so neatly defined as the situation for $g\rightarrow 0$ and its detailed investigation is the subject of a different work. \section{Self-gravitating sine-Gordon solitons} Besides the $\phi^4$ kink, the other most prominent example of a localized solution in a scalar relativistic field theory in 1+1 dimensions is the sine-Gordon soliton, which deserves this name, instead of the simple denomination of kink, due to its absolute stability after collisions; the $\phi^4$ kink is not a soliton in this sense, but only a solitary wave \cite{Ra82}. Like the $\phi^4$ theory, the sine-Gordon model has been intensely studied and applications for it have been found in a variety of fields ranging from the mechanics of coupled torsion pendula to the study of dislocations in crystals, Josephson junctions, DNA molecules or black holes, see for instance \cite{CuKeWi14,ZdSaDa13}. In what follows, we shall proceed along the lines developed in the previous section to transform the standard flat space sine-Gordon soliton into a self-gravitating object by coupling the field theory to JT gravity. The potential of the sine-Gordon Lagrangian is \bdm V(\phi)=\frac{\lambda}{\gamma^2}\left(\cos(\gamma\phi)+1\right) \edm and an important difference in the space of classical vacua with respect to the $\phi^4$ theory arises: now the degenerate vacua form an infinite set, $V(\phi)=0$ for $\phi=(2 n+1) \frac{\pi}{\gamma}$, $n\in\mathbb{Z}$, and the discrete symmetry is thus $\mathbb{Z}$ instead of $\mathbb{Z}_2$.
Notwithstanding this, since finite energy static solutions of a real scalar field theory can interpolate only between consecutive vacua, for our purposes we can limit ourselves to considering configurations which approach asymptotically one of the two vacua $\phi=\pm \frac{\pi}{\gamma}$. In this way, we come back to a situation analogous to that of $\phi^4$ theory. When gravity is neglected, we have a Bogomolny splitting, with Bogomolny equation \bdm \frac{d\phi}{dx}=\pm\sqrt{\frac{2\lambda}{\gamma^2}\left(\cos(\gamma\phi)+1\right)}, \edm and kink-like and antikink-like solutions interpolating between these two vacua appear. The kink is the sine-Gordon soliton \cite{Vachaspati2006}, with profile \bdm \phi_S(x)=\frac{4}{\gamma} \arctan\left(\tanh\left(\frac{\sqrt{\lambda} x}{2}\right)\right) \edm such that $\phi_S(\pm\infty)=\pm\frac{\pi}{\gamma}$. To study its generalization in the presence of the JT gravitational field, we rescale to non-dimensional variables \bdm \phi=\frac{\psi}{\gamma},\hspace{1cm}x=\frac{y}{\sqrt{\lambda}},\hspace{1cm}16\pi \frac{G}{\gamma^2}=g \edm and, proceeding as we did in the $\phi^4$ theory, obtain field equations of the form \beqr \frac{d}{d y}\left(\alpha\frac{d\psi}{dy}\right)&=&-\sin\psi\label{eqk2a}\\ \frac{d^2\alpha}{dy^2}&=&g(\cos\psi+1).\label{eqk2b} \eeqr Due to the well-defined parities in the variable $y$ of both $\psi$ and $\alpha$, the equations can be solved in $[0,+\infty)$ with the same boundary conditions (\ref{boundk1a}) and (\ref{boundk1b}) used before, whilst the asymptotic condition at infinity changes now to \beq \psi(+\infty)=\pi\label{asympk2}. \eeq The energy and pressure of the static solutions are $E[\phi]=\frac{\sqrt{\lambda}}{\gamma^2} E_{\rm norm}[\psi]$ and ${\cal P}=\frac{\lambda}{2\gamma^2}{\cal P}_{\rm norm}$, where the normalized quantities are those written in (\ref{enerk1}) and (\ref{prek1}) with $(\psi^2-1)^2$ replaced by $2(\cos\psi+1)$.
In particular, the standard flat-space sine-Gordon soliton is \bdm \psi_S(y)=4\arctan(e^y)-\pi \edm and we find \bdm E_{\rm norm}[\psi_S]=8\hspace{1.7cm} {\cal E}_{\rm norm}^{\psi_S}(y)=8\,\sech^2(y)\hspace{1.7cm}{\cal P}_{\rm norm}^{\psi_S}(y)=0. \edm As in Section 2, in order to look for numerical solutions of (\ref{eqk2a})-(\ref{eqk2b}) with the conditions (\ref{boundk1a}), (\ref{boundk1b}), (\ref{asympk2}), we first solve the system near the origin, to obtain \beqrn \alpha(y)&=&1+ g y^2\\ \psi(y)&=&p y\, {}_1 \! F_1(\frac{1}{2}+\frac{1}{4g}; \frac{3}{2}; -g y ^2)\simeq p y-\frac{1}{6} p(2g+1) y^3+\ldots\hspace{2cm} y\simeq 0 \eeqrn and also near infinity, where $\psi(y)=\pi-\xi(y)$ and the $\alpha$ profile is linear, $\alpha(y)=qy+r$. The behavior of $\xi$ turns out to be \bdm \xi(y)=a K_0 \Big( \frac{2\sqrt{y}}{\sqrt{q}} \Big) + b I_0 \Big( \frac{2\sqrt{y}}{\sqrt{q}} \Big) \approx \frac{1}{2} \Big( \frac{q}{y} \Big)^{\frac{1}{4}} \Big( a \sqrt{\pi} e^{-\frac{2\sqrt{y}}{\sqrt{q}}} + \frac{b}{\sqrt{\pi}} e^{\frac{2\sqrt{y}}{\sqrt{q}}} \Big) \hspace{0.5cm} y\rightarrow \infty \edm and we have to integrate the ODEs numerically to figure out the critical value of $p$ yielding $b=0$. Through this approach, we have found the results shown in the figures \ref{fig4} and \ref{fig5} and in Table \ref{table2}. The parameter $\kappa$ appearing in the table relates the gravitational and inertial masses of the soliton as in the previous section, and is now given by \bdm \kappa=4\frac{I[\psi]}{E_{\rm norm}[\psi]}\hspace{2cm}I[\psi]=\int_0^\infty dy [\cos(\psi)+1]. \edm\\\\ As the figures and table reflect, the qualitative features of the sine-Gordon solitons are similar to those already discussed for the $\phi^4$ kinks. The most notable difference is that the values of energy, pressure and gravitational distortion are, with the natural normalization and non-dimensional variables that we are using, much greater for the former than for the latter.
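The shooting procedure just described can be sketched in a few lines. The following is an illustrative implementation (not the authors' code) using a fixed-step RK4 integrator and bisection on $p=\psi'(0)$: a trajectory overshooting $\pi$ signals $p$ too large, while $\psi'$ turning negative before reaching $\pi$ signals $p$ too small. For $g=0$ it recovers the exact flat-space value $p_{\rm crit}=\psi_S'(0)=2$:

```python
import math

def classify(p, g, ymax=12.0, n=4000):
    # psi'' = (-sin psi - alpha' psi')/alpha, alpha'' = g (cos psi + 1),
    # with psi(0)=0, psi'(0)=p, alpha(0)=1, alpha'(0)=0
    def rhs(s):
        psi, dpsi, alpha, dalpha = s
        return (dpsi, (-math.sin(psi) - dalpha*dpsi)/alpha,
                dalpha, g*(math.cos(psi) + 1.0))
    h = ymax / n
    s = (0.0, p, 1.0, 0.0)
    for _ in range(n):
        k1 = rhs(s)
        k2 = rhs(tuple(x + 0.5*h*k for x, k in zip(s, k1)))
        k3 = rhs(tuple(x + 0.5*h*k for x, k in zip(s, k2)))
        k4 = rhs(tuple(x + h*k for x, k in zip(s, k3)))
        s = tuple(x + h/6.0*(a + 2*b + 2*c + d)
                  for x, a, b, c, d in zip(s, k1, k2, k3, k4))
        if s[0] > math.pi:
            return +1          # overshoot: p too large
        if s[1] < 0.0:
            return -1          # falls back before reaching pi: p too small
    return -1

def p_crit(g, lo=0.1, hi=10.0, iters=30):
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        if classify(mid, g) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

print(p_crit(0.0))   # ~2.000, the flat-space soliton slope
print(p_crit(0.1))   # ~2.094, as in Table 2
```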
\begin{figure}[t] \centering \includegraphics[width=8cm]{figura7.eps} \hspace{1cm} \includegraphics[width=8cm]{figura8.eps} \caption{$\psi$ and $\alpha$ profiles of the sine-Gordon soliton for several strengths of the gravitational interaction.} \label{fig4} \end{figure} \begin{figure}[b] \centering \includegraphics[width=8cm]{figura9.eps} \hspace{1cm} \includegraphics[width=8cm]{figura10.eps} \caption{The normalized energy density and pressure distribution of the sine-Gordon soliton for several strengths of the gravitational interaction.} \label{fig5} \end{figure} \begin{figure}[b] \centering \includegraphics[height=5cm]{figura11.eps} \hspace{0.5cm} \includegraphics[height=5cm]{figura12.eps} \caption{The potential well $U(y)$ and the source ${\cal R}(y)$ of the inhomogeneous Schr\"{o}dinger equation for the sine-Gordon soliton.} \label{fig6} \end{figure} \begin{table}[t] \begin{center} \begin{tabular}{||c|c|c|c|c|c|c|c|c||} \hline \multicolumn{9}{||c||}{Numerical results for sine-Gordon solitons}\\ \hline $g$&$p_{\rm crit}$&$E_{\rm norm}$&${\cal E}_{\rm norm}(0)$&${\cal P}_{\rm norm}(0)$&width&$q$&$r$&$\kappa$\\ \hline 0.00 &2.000 &8.000 &8.000 &0.000 &1.832&0.000&1.000&1.000\\ \hline 0.10 &2.094 &8.008 &8.386 &0.386 &1.823&0.197&0.862&0.982\\ \hline 0.50 &2.401 &8.119 &9.766 &1.766 &1.800&0.940&0.307&0.926\\ \hline 1.00 &2.702 &8.309 &11.300 &3.302 &1.777&1.817&-0.437&0.875\\ \hline 3.00 &3.565 &9.097 &16.710 &8.707 &1.684&5.109&-4.155&0.749\\ \hline 5.00 &4.201 &9.808 &21.650 &13.650 &1.594&8.252&-9.075&0.673\\ \hline 7.00 &4.731 &10.450 &26.380 &18.380 &1.514&11.341&-15.507&0.620\\ \hline 9.00 &5.196 &11.040 &31.000 &23.000 &1.442&14.377&-22.929&0.579\\ \hline 11.00 &5.615 &11.580 &35.530 &27.530 &1.378&17.395&-31.867&0.546\\ \hline 15.00 &6.358 &12.580 &44.420 &36.420 &1.269&23.378&-53.842&0.496\\ \hline 18.00 &6.856 &13.260 &51.000 &43.000 &1.201&27.789&-71.581&0.466\\ \hline \end{tabular} \end{center} \caption{Some parameters
characterizing the solitons for different values of $g$.} \label{table2} \end{table} The perturbative approach for the case of small $g$ can also be worked out as we did in the case of $\phi^4$ theory. At first order in $g$, the equations for the perturbations $\beta$ and $\varphi$ of the flat metric and standard soliton are \beqr {\cal H}_y \varphi&=&\frac{d}{dy}\left(\beta\frac{d\psi_S}{dy}\right)\label{eqk2perta}\\ \frac{d^2\beta}{dy^2}&=&\cos(\psi_S)+1,\label{eqk2pertb} \eeqr where the Hessian operator is now \bdm {\cal H}_y=-\frac{d^2}{dy^2}+U(y)\hspace{2cm}U(y)=-\cos\left[\psi_S(y)\right]=1-2\, \sech^2(y), \edm whereas the boundary conditions are again (\ref{boundk1perta})-(\ref{asympk1pert}). Thus, integration of (\ref{eqk2pertb}) gives directly the answer for $\beta$ \bdm \beta(y)=2 \log\left(\cosh(y)\right), \edm a function which interpolates consistently between a parabola and a straight line \beqrn \beta(y)&\simeq&y^2\hspace{4.2cm}y\simeq 0\\ \beta(y)&\simeq& 2 y-2\log 2\hspace{2.3cm}y\rightarrow \infty. \eeqrn The Hessian is a Schr\"{o}dinger operator with the potential drawn in Figure \ref{fig6}, a symmetric well showing a dip $U(0)=-1$ in the center and reaching asymptotic values $U(y)\rightarrow 1$ for $|y|\rightarrow \infty$. This operator is sourced by the function \bdm {\cal R}(y)=\frac{d}{dy}\left(\beta\frac{d\psi_S}{dy}\right)=4\left[1-\log\left(\cosh(y)\right)\right]\sech(y)\tanh(y), \edm which displays a linear behavior near the soliton center and decays exponentially at infinity \beqrn {\cal R}(y)&\simeq& 4y\hspace{3.3cm} y\simeq 0\\ {\cal R}(y)&\simeq&-8 y e^{-|y|}\hspace{2cm}|y|\rightarrow \infty, \eeqrn as also shown in the figure.
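The integration of (\ref{eqk2pertb}) is easy to verify, since for the flat soliton $\cos(\psi_S)+1=2\,{\rm sech}^2(y)$ and $\beta(y)=2\log(\cosh(y))$ indeed satisfies $\beta''=2\,{\rm sech}^2(y)$ with $\beta(0)=\beta'(0)=0$. A minimal illustrative check:

```python
import math

def psi_S(y):
    # flat-space sine-Gordon soliton
    return 4.0*math.atan(math.exp(y)) - math.pi

def beta(y):
    # metric perturbation beta(y) = 2 log cosh(y)
    return 2.0*math.log(math.cosh(y))

def d2(f, y, h=1e-4):
    return (f(y+h) - 2.0*f(y) + f(y-h)) / h**2

for y in (0.0, 0.7, 2.0):
    lhs = d2(beta, y)                       # beta''
    rhs = math.cos(psi_S(y)) + 1.0          # source of (eqk2pertb)
    print(lhs, rhs, 2.0/math.cosh(y)**2)    # the three quantities coincide

print(beta(10.0) - (2.0*10.0 - 2.0*math.log(2.0)))  # ~0: asymptote 2y - 2 log 2
```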
\begin{figure}[t] \centering \includegraphics[height=5cm]{figura12a.eps} \hspace{1cm} \includegraphics[height=5cm]{figura12b.eps} \caption{The metric perturbation $\beta(y)$ and the response $\varphi(y)$ for the sine-Gordon soliton, this last one found by numerical methods.} \label{fig6b} \end{figure} To compute $\varphi(y)$ we need to know the kernel of the Hessian. Again, this is facilitated by the factorization \bdm {\cal H}_y=D^\dagger_y D_y\hspace{2cm} D_y=\frac{d}{dy}+\tanh(y), \edm which allows for the computation of the normalizable and non-normalizable zero modes. They turn out to be \beqrn \rho(y)&=&\sech(y)\\ \chi(y)&=&y\;\sech(y)+\sinh(y), \eeqrn with Wronskian $W=2$. Substituting all that in (\ref{phi}) yields the final answer for the scalar perturbation of the soliton \bdm \varphi(y)=-\left({\rm Li}_2(-e^{-2y})+y(y-1-2\log 2)+\frac{\pi^2}{12}\right)\sech(y). \edm The shape of this function is portrayed in Figure \ref{fig6b}. In particular, taking the derivative at $y=0$ we learn that the parameter $s$ in the boundary condition (\ref{boundk1pertb}) is exactly one in the sine-Gordon theory. We can also obtain the upper and lower limits of the oscillation of the function as $y$ varies. There is a maximum at $y=0.797$ in which $\varphi=0.478$, and a minimum at $y=3.463$ in which $\varphi=-0.285$, with a zero in between at $y=1.981$. Thus, as it happened with the numerical results, in non-dimensional variables the scale of the perturbative solution is considerably greater for the sine-Gordon case than for the $\phi^4$ one. \section{Self-gravitating kinks related to transparent P\"{o}schl-Teller potentials} \subsection{Kink Hessian and supersymmetry} In the two previous sections we have studied the coupling to JT gravity of the kink solutions of the $\phi^4$ and sine-Gordon models, both for arbitrarily high gravitational coupling and in the regime of small $g$.
Now, we will concentrate on the perturbative domain only and will extend the treatment to deal with the self-gravitating kink solutions arising within an infinite hierarchy of field theoretical models, the P\"{o}schl-Teller hierarchy, which encompasses as particular cases the two theories considered so far. An important subject which shapes the hierarchy is unbroken supersymmetry. As we have seen, a common feature of the $\phi^4$ and sine-Gordon models is that the Hessian operator of the standard, non-gravitational, kink admits a factorization \beq {\cal H}_y=D^\dagger_y D_y\hspace{2cm} D_y=\frac{d}{dy}+W(y)\label{factor} \eeq which, indeed, is the key instrument to obtain the perturbation of the scalar field once gravity is turned on. As is well known, this factorization has a supersymmetric origin. In fact, Witten $N=2$ supersymmetric quantum mechanics, see \cite{Ju96} for a review, admits a realization on the real line in which the two supersymmetric generators can be assembled into a non-Hermitian supersymmetric charge of the form $Q=\frac{1}{2}\left(\sigma_1-i\sigma_2\right) D_y$, with $\sigma_k$ the Pauli matrices. The Hamiltonian is then a $2\times 2$ diagonal matrix $H=\frac{1}{2}\left\{ Q,Q^\dagger\right\}$ whose upper element is, apart from the factor $1\over 2$, precisely the Hessian ${\cal H}_y$. In the context of supersymmetric quantum mechanics, the existence of a normalizable zero mode of the Hessian, which was used in previous sections to solve the gravitational equations to linear order in $g$, is tantamount to the statement that supersymmetry is not spontaneously broken, while the standard kink solutions have the status of BPS objects preserving half of the original $N=2$ supersymmetry.
Models of supersymmetric quantum mechanics have important applications in physics \cite{Ju96}, and very remarkable ones in mathematics, such as Witten's proof of the Morse inequalities in \cite{Wi82} or the physicist's proof of the Atiyah-Singer index theorem presented in \cite{AG83},\cite{FrWi84}. All other members of the P\"{o}schl-Teller hierarchy enjoy, like the $\phi^4$ and sine-Gordon models, unbroken supersymmetry. Thus, in order to describe them, let us begin by formulating the perturbative approach to JT gravity for a model with this property. In general, the static limit of the field equations (\ref{eqjt1gauge})-(\ref{eqjt2gauge}) can be recast in terms of non-dimensional variables $y$ and $\psi$ by means of some convenient parameters $a$ and $b$ with mass dimensions one and zero through a rescaling \beq x=\frac{y}{a}\hspace{1.5cm} \phi=b\psi\hspace{1.5cm}V(\phi)=a^2 b^2 {\cal U}(\psi).\label{reparam} \eeq This makes ${\cal U}(\psi)$ also non-dimensional, and gives the equations the form \beqr \frac{d}{dy}\left(\alpha\frac{d\psi}{dy}\right)&=&\frac{d{\cal U}}{d\psi}\label{eqjty1}\\ \frac{d^2\alpha}{dy^2}&=&g {\cal U}(\psi)\label{eqjty2} \eeqr where $g=16\pi G b^2$. This reparametrization matches the changes applied in previous sections: for the sine-Gordon model $a=\sqrt{\lambda}$, $b=\frac{1}{\gamma}$ and for the $\phi^4$ theory $a=v\sqrt{\frac{\lambda}{2}}$, $b=v$, although the coupling denoted $g$ in Section 2 is half the coupling $g$ in our current notation. Let us also assume that, as was the case for the $\phi^4$ and sine-Gordon models, the potential ${\cal U}(\psi)$ is a positive semidefinite, even function of $\psi$, with at least two consecutive symmetric vacua $\pm\psi_{\rm vac}$ such that ${\cal U}(\pm\psi_{\rm vac})=0$.
In these circumstances there is, for zero gravitational coupling, a kink $\psi_K(y)$ of finite energy which lives in Minkowski spacetime, is an odd function of the coordinate $y$, and satisfies the Bogomolny equation \beq \frac{d\psi_K}{dy}=\sqrt{2{\cal U}(\psi_K)}\hspace{2cm} \psi_K(\pm\infty)=\pm\psi_{\rm vac}.\label{boggen} \eeq In fact, since \bdm \frac{d^2\psi_K}{dy^2}=\frac{1}{\sqrt{2{\cal U}(\psi_K)}}\left.\frac{d{\cal U}}{d\psi}\right|_{\psi=\psi_K}\frac{d\psi_K}{dy}=\left.\frac{d{\cal U}}{d\psi}\right|_{\psi=\psi_K} \edm the functions $\alpha(y)=1$ and $\psi(y)=\psi_K(y)$ solve the equations (\ref{eqjty1})-(\ref{eqjty2}) for $g=0$. Expanding around the zero-coupling kink as we did in (\ref{expa})-(\ref{expb}), we find the equations at linear order in $g$ \beqr {\cal H}_y\varphi&=&\frac{d}{dy}\left(\beta\frac{d\psi_K}{dy}\right)\label{eqpertjt1}\\ \frac{d^2\beta}{dy^2}&=&{\cal U}(\psi_K)\label{eqpertjt2}, \eeqr where the Hessian operator is \bdm {\cal H}_y=-\frac{d^2}{dy^2}+\left.\frac{d^2{\cal U}}{d\psi^2}\right|_{\psi=\psi_K}. \edm Now, let us assume that the supersymmetric factorization (\ref{factor}) is valid in this model and that supersymmetry is unbroken, with the superpotential $W(y)$ being such that $W(+\infty)>0$ and $W(-\infty)<0$, as it happens in the $\phi^4$ and sine-Gordon theories. In such a case, we can proceed backwards, i.e., we can reconstruct the kink and, to some extent, even the potential of the scalar field theory starting from the Hessian \cite{BoYu03, BaBe17}. The reason is translational invariance, because this symmetry implies that ${\cal H}_y$ always has a normalizable zero mode, the translational mode of the kink, given by $f_0=\frac{d\psi_K}{dy}$ which, by virtue of unbroken supersymmetry, satisfies $D_y f_0=0$.
Thus, for a kink centered at the origin, the translational mode is determined by the superpotential as \beq f_0(y)=A \exp\left(-\int_0^y dz W(z)\right)\label{zeromode} \eeq and, once we know the zero mode, we can integrate it to recover the kink profile \beq \psi_K(y)=\int_0^y f_0(z) dz,\label{kink} \eeq where the constant $A$ is determined in terms of the vacuum expectation value $\psi_{\rm vac}$ through the asymptotic condition \beq \int_0^\infty f_0(z) dz=\psi_{\rm vac}\label{condi}. \eeq Also, from the zero mode of the kink it is possible to work out the potential energy of the model. The Bogomolny equation (\ref{boggen}) is tantamount to \bdm {\cal U}(\psi_K(y))=\frac{1}{2} f_0^2(y) \edm and this, together with (\ref{kink}), makes it feasible, at least in principle, to solve for ${\cal U}(\psi)$ for field values in the interval $[-\psi_{\rm vac},\psi_{\rm vac}]$. Finally, the energy of the kink also follows directly from the zero mode: the reparametrization (\ref{reparam}) gives $E[\phi_K]=a b^2 E_{\rm norm}[\psi_K]$, with the normalized energy and energy density given by \beq E_{\rm norm}[\psi_K]=\int_0^\infty dy\left\{\left(\frac{d\psi_K}{dy}\right)^2+ 2 {\cal U}(\psi_K)\right\}=2\int_0^\infty dy f_0^2(y),\hspace{2cm}{\cal E}_{\rm norm}^{\psi_K}(y)=2 f_0^2(y).\label{enernorm} \eeq The Bogomolny equation, on the other hand, implies that the pressure ${\cal P}_{\rm norm}^{\psi_K}(y)$ of the kink with $g=0$ vanishes. \subsection{The P\"{o}schl-Teller hierarchy of reflectionless potentials} The P\"{o}schl-Teller hierarchy, reviewed for instance in \cite{Ma15}, provides a concrete realization of the previous scheme. This hierarchy is built by choosing a sequence of superpotentials of the form $W_\ell(y)=\ell \tanh(y)$ with $\ell$ a natural number. Thus, in particular, the superpotentials with $\ell=1$ and $\ell=2$ are those associated to the sine-Gordon and $\phi^4$ kinks.
The Hessian operators are given by \beq ({\cal H}_y)_\ell=(D^\dagger_y)_\ell(D_y)_\ell=-\frac{d^2}{d y^2}+U_\ell(y)\hspace{2cm}U_\ell(y)=\ell^2-\ell(\ell+1) {\rm sech}^2(y)\label{hesspt} \eeq and exhibit potential wells which are deeper for increasing $\ell$. Incidentally, the free Hamiltonian can be included in the series by extending it to the case $\ell=0$. Each member in the sequence is related to the previous one by an interchange of the order of the first order differential operators, namely $(D_y)_\ell(D^\dagger_y)_\ell=({\cal H}_y)_{\ell-1}+\ell^2-(\ell-1)^2$. This property, called shape invariance, provides the basis of an algebraic method for solving the spectrum of $({\cal H}_y)_\ell$ which generalizes the factorization method originally developed by Schr\"{o}dinger, Infeld, Hull and others, see \cite{CaRa00}. Other remarkable features of the P\"{o}schl-Teller potentials are reflectionless scattering, the occurrence of a half-bound state at the threshold of the continuous spectrum and the fact that the functional determinant can be computed exactly \cite{Ma15}. As demanded by unbroken supersymmetry, all the models in the hierarchy have a normalizable zero mode, which using (\ref{zeromode}) turns out to be \bdm (f_0(y))_\ell=A_\ell\, {\rm sech}^\ell(y). \edm In order to match the usual conventions for the sine-Gordon and $\phi^4$ cases, it is convenient to choose different vacuum expectation values $\psi_{\rm vac}$ for the cases of even and odd $\ell$, according to \bdm \psi_{\rm vac}=\pi\ (\ell\ {\rm odd})\hspace{2cm}\psi_{\rm vac}=1\ (\ell\ {\rm even}), \edm and with this choice, the $A_\ell$ factor coming from (\ref{condi}) is given by \bdm A_\ell=2 \pi^{\frac{(-1)^{\ell+1}}{2}}\frac{\Gamma(\frac{\ell+1}{2})}{\Gamma(\frac{\ell}{2})}.
\edm On the other hand, we can use (\ref{kink}) to determine the profile of the kink leading to the Hessian $({\cal H}_y)_\ell$, and we arrive at the result \bdm (\psi_K(y))_\ell=A_\ell \int_0^y dz\;{\rm sech}^\ell(z)=A_\ell F\left(\frac{1}{2},\frac{\ell+1}{2};\frac{3}{2};-\sinh^2(y)\right)\sinh(y) \edm with $F$ the hypergeometric function. The next step is to figure out the scalar field theory which accommodates such a kink configuration. While in the cases of the sine-Gordon and $\phi^4$ models we have at our disposal an explicit expression of the potential ${\cal U}$ in terms of the field $\psi$, for higher $\ell$ the best that we can do is to proceed as was done in \cite{BoYu03} (see also \cite{AlJu12}) and to give both $\psi$ and ${\cal U}$ parametrically. For odd $\ell$, it is convenient to choose the parameter in the form $\tau={\rm sech}(y)$. As $\tau$ goes from $\tau=1$ to $\tau=0$, the field $\psi_K$ interpolates continuously between its values at the origin, $\psi_K(0)=0$ and at infinity, $\psi_K(\infty)=\pi$. The parametric expressions for the field and potential for odd $\ell$ are thus \beqrn \psi&=&A_\ell F\left(\frac{1}{2},\frac{\ell+1}{2};\frac{3}{2};\frac{\tau^2-1}{\tau^2}\right)\frac{\sqrt{1-\tau^2}}{\tau}\\ {\cal U}&=&\frac{1}{2} A_\ell^2 \tau^{2 \ell} \eeqrn and some particular cases are \beqrn \ell=1:&&\hspace{1cm} \psi=2 \arccos(\tau)\hspace{5cm} {\cal U}=2 \tau^2\\ \ell=3:&&\hspace{1cm} \psi=2\tau\sqrt{1-\tau^2}+2\arccos(\tau)\hspace{2.7cm} {\cal U}=8 \tau^6\\ \ell=5:&&\hspace{1cm} \psi=2\tau\sqrt{1-\tau^2}(1+\frac{2}{3} \tau^2)+2\arccos(\tau)\hspace{1.05cm} {\cal U}=\frac{128}{9} \tau^{10}. \eeqrn For even values of $\ell$ the explicit expressions are slightly simpler by defining the parameter as $\tau=\tanh(y)$. In this case, as $\tau$ interpolates between $\tau=0$ and $\tau=1$ the kink field varies from $\psi_K(0)=0$ to $\psi_K(\infty)=1$.
The parametric expressions are \beqrn \psi&=&A_\ell F\left(\frac{1}{2},\frac{\ell+1}{2};\frac{3}{2};\frac{\tau^2}{\tau^2-1}\right)\frac{\tau}{\sqrt{1-\tau^2}}\\ {\cal U}&=&\frac{1}{2} A_\ell^2 (1-\tau^2)^\ell \eeqrn and some low-$\ell$ cases are \beqrn \ell=2:&&\hspace{1cm} \psi=\tau\hspace{4.5cm} {\cal U}=\frac{1}{2}(1-\tau^2)^2\\ \ell=4:&&\hspace{1cm} \psi=\frac{\tau}{2}(3-\tau^2)\hspace{3cm} {\cal U}=\frac{9}{8}(1-\tau^2)^4\\ \ell=6:&&\hspace{1cm} \psi=\frac{\tau}{8}(15-10 \tau^2+3\tau^4)\hspace{1.3cm} {\cal U}=\frac{225}{128}(1-\tau^2)^6. \eeqrn We show in figures \ref{fig7} and \ref{fig8} the kink profiles and the field theory potential for a few examples. Notice that the existence of the kink requires only that $\psi_{\rm vac}$ is a minimum of ${\cal U}$ with ${\cal U}(\psi_{\rm vac})=0$. Thus, as long as this condition is met, the potential ${\cal U}(\psi)$ for $\psi>\psi_{\rm vac}$ can be chosen arbitrarily. Since it is not needed for our current purposes, we have made no attempt to fix this arbitrariness, and in the figures we simply have chosen ${\cal U}(\psi)$ symmetric around $\psi=\pi$ or $\psi=1$, at least near these points. Note, however, that well defined procedures to extend the potential beyond these limits, based on the single-valuedness of ${\cal U}(\psi)$ for complex values of the parametrization, have been developed \cite{BoYu03}, \cite{AlJu12}. Another interesting point is that if vacua with different absolute values of the vev are allowed, the reconstruction can result in a non-univocal answer, and different field theories can be recovered from the same Hessian \cite{BaBe17}. 
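These particular cases, and the vacuum values reached as $y\to\infty$, can be cross-checked against the defining integral $\psi_K=A_\ell\int_0^y {\rm sech}^\ell(z)\,dz$. The following sketch (trapezoid quadrature, helper names our own) is not part of the original derivation; it only verifies the quoted closed forms numerically:

```python
import math

def sech(x):
    return 1.0 / math.cosh(x)

def psi_numeric(l, y, n=100000):
    # psi_K(y) = A_l * integral_0^y sech^l(z) dz, trapezoid rule
    A = 2.0 * math.pi ** ((-1) ** (l + 1) / 2) * math.gamma((l + 1) / 2) / math.gamma(l / 2)
    h = y / n
    s = 0.5 * (1.0 + sech(y) ** l)
    for k in range(1, n):
        s += sech(k * h) ** l
    return A * h * s

def psi_parametric(l, y):
    # the particular cases quoted in the text; tau = sech(y) (odd l) or tanh(y) (even l)
    if l % 2:
        t, root = sech(y), math.tanh(y)     # root = sqrt(1 - tau^2) for y > 0
        return {1: 2 * math.acos(t),
                3: 2 * t * root + 2 * math.acos(t),
                5: 2 * t * root * (1 + 2 * t ** 2 / 3) + 2 * math.acos(t)}[l]
    t = math.tanh(y)
    return {2: t,
            4: t * (3 - t ** 2) / 2,
            6: t * (15 - 10 * t ** 2 + 3 * t ** 4) / 8}[l]

for l in (1, 2, 3, 4, 5, 6):
    for y in (0.4, 1.0, 2.5):
        assert abs(psi_numeric(l, y) - psi_parametric(l, y)) < 1e-6
    # vacuum value at infinity: pi for odd l, 1 for even l
    assert abs(psi_numeric(l, 40.0) - (math.pi if l % 2 else 1.0)) < 1e-5
print("parametric kink profiles and vacuum values verified")
```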
\begin{figure} \centering \includegraphics[width=7cm]{figura13.eps} \hspace{1.5cm} \includegraphics[width=7cm]{figura14.eps} \caption{The kink profile for zero gravitational coupling and the scalar potential for several odd values of $\ell$.} \label{fig7} \end{figure} \begin{figure} \centering \includegraphics[width=7cm]{figura15.eps} \hspace{1.5cm} \includegraphics[width=7cm]{figura16.eps} \caption{The kink profile for zero gravitational coupling and the scalar potential for several even values of $\ell$.} \label{fig8} \end{figure} Finally, we can use (\ref{enernorm}) to compute the normalized energy of the kink \bdm (E_{\rm norm}[\psi_K])_\ell=\ell\,\pi^{(-1)^{\ell+1}} \frac{2^\ell \Gamma^3\left(\frac{\ell+1}{2}\right)}{\Gamma(\ell+\frac{1}{2})\Gamma(\frac{\ell}{2}+1)} \edm and the normalized energy density, which has in all cases a maximum at the origin and decays with $y$ according to the expression \bdm ({\cal E}_{\rm norm}^{\psi_K}(y))_\ell=2 A_\ell^2 {\rm sech^{2 \ell}}(y). \edm \subsection{Self-gravitating P\"{o}schl-Teller kinks} Having reviewed the main features of the kinks and potentials of the P\"{o}schl-Teller hierarchy without gravity, we now come back to the case where $g$ is small but different from zero to look for solutions of the perturbative equations (\ref{eqpertjt1})-(\ref{eqpertjt2}) for these theories. Equation (\ref{eqpertjt2}), in conjunction with the usual boundary conditions (\ref{boundk1perta}) at the origin, can be integrated directly to give \bdm \beta_\ell(y)=\frac{A_\ell^2}{2}\int_0^y dz \int_0^z du\; {\rm sech}^{2\ell}(u)=\frac{A_\ell^2}{2}\int_0^y dz \int_0^z d \tanh(u)\, \left(1-\tanh^2(u)\right)^{(\ell-1)}. \edm The inner integral in $u$ thus leads to a sum of odd powers of the hyperbolic tangent \bdm \int_0^z du\; {\rm sech}^{2\ell}(u)=\sum_{j=0}^{\ell-1} (-1)^j\left(\begin{array}{c}\ell-1\\j\end{array}\right)\frac{\tanh^{2j+1}(z)}{2j+1}, \edm while the subsequent integration of these powers gives rise to hypergeometric functions.
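As a numerical sanity check (our own sketch, not part of the original derivation), the finite tanh-power sum above can be compared with direct quadrature; the same loop also verifies the alternating binomial sum $\sum_j(-1)^j\binom{\ell-1}{j}/(2j+1)=\sqrt{\pi}\,\Gamma(\ell)/(2\Gamma(\ell+\frac{1}{2}))$, which is simply the $z\to\infty$ limit of the expansion:

```python
import math

def sech_int(l, z, n=100000):
    # integral_0^z sech^(2l)(u) du by the trapezoid rule
    h = z / n
    f = lambda u: (1.0 / math.cosh(u)) ** (2 * l)
    s = 0.5 * (f(0.0) + f(z))
    for k in range(1, n):
        s += f(k * h)
    return h * s

def tanh_sum(l, z):
    # quoted finite sum of odd powers of tanh(z)
    t = math.tanh(z)
    return sum((-1) ** j * math.comb(l - 1, j) * t ** (2 * j + 1) / (2 * j + 1)
               for j in range(l))

for l in (1, 2, 3, 5):
    for z in (0.3, 0.9, 2.0):
        assert abs(sech_int(l, z) - tanh_sum(l, z)) < 1e-7
    # z -> infinity (tanh -> 1): the sum reproduces sqrt(pi)*Gamma(l)/(2*Gamma(l+1/2))
    closed = math.sqrt(math.pi) * math.gamma(l) / (2.0 * math.gamma(l + 0.5))
    partial = sum((-1) ** j * math.comb(l - 1, j) / (2 * j + 1) for j in range(l))
    assert abs(partial - closed) < 1e-12
print("tanh-power expansion and binomial sum identity verified")
```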
All in all, the result for the perturbation of the metric is \bdm \beta_\ell(y)=\frac{A_\ell^2}{2}\sum_{j=0}^{\ell-1} (-1)^j\left(\begin{array}{c}\ell-1\\j\end{array}\right)\frac{\tanh^{2j+2}(y)}{(2j+1)(2j+2)} F\left(1,1+j;2+j;\tanh^2(y)\right). \edm Nevertheless, for later use of $\beta_\ell(y)$ into the integrals needed to compute the perturbation of the scalar field profile, it is more convenient to get rid of the hypergeometric functions. This can be done by means of formula 2.424-2 in \cite{GrRy80}, which gives an alternative expression for the integral of odd powers of the hyperbolic tangent as sums of even powers of hyperbolic cosines or tangents plus a logarithm. This formula, along with the identity \bdm \sum_{j=0}^{\ell-1}\frac{(-1)^j}{2j+1}\left(\begin{array}{c}\ell-1\\j\end{array}\right)=\frac{\sqrt{\pi}\Gamma(\ell)}{2 \Gamma(\ell+\frac{1}{2})} \edm enables us to trade the previous expression for $\beta_\ell(y)$ for another one in which only elementary functions appear \beq \beta_\ell(y)=\frac{A_\ell^2}{2}\left\{\frac{\sqrt{\pi}\Gamma(\ell)}{2 \Gamma(\ell+\frac{1}{2})}\log(\cosh(y))-\sum_{i=1}^{\ell-1} C_{i,\ell} \tanh^{2i}(y)\right\},\label{altbeta} \eeq where the coefficients entering in the second term are \bdm C_{i,\ell}=\frac{1}{2i}\sum_{j=i}^{\ell-1} \frac{(-1)^j}{2j+1}\left(\begin{array}{c}\ell-1\\j\end{array}\right). \edm Let us now turn to the inhomogeneous Schr\"{o}dinger equation (\ref{eqpertjt1}). 
The potential well in the Hessian is given in (\ref{hesspt}), while the source \bdm R_\ell(y)=\frac{d}{dy}\left[ A_\ell \beta_\ell(y) {\rm sech}^\ell(y)\right] \edm can be written by means of the incomplete Euler beta function $B_z(a,b)=\int_0^z u^{a-1}(1-u)^{b-1} du$ as \bdm R_\ell(y)=\frac{A_\ell^3}{4}\sum_{j=0}^{\ell-1} (-1)^j\left(\begin{array}{c}\ell-1\\j\end{array}\right)\frac{2 \tanh^{2j}(y)-\ell B_{\tanh^2(y)}(j+1,0)}{1+2j}{\rm sech}^{\ell+1}(y) \sinh(y) \edm or, if one makes use of the version (\ref{altbeta}) of $\beta_\ell(y)$, as an expression with only hyperbolic functions \bdm R_\ell(y)=\frac{A_\ell^3}{2}\left\{\frac{\sqrt{\pi}\Gamma(\ell)}{2 \Gamma(\ell+\frac{1}{2})}\left[1-\ell \log(\cosh(y))\right]\tanh(y)+\sum_{i=1}^{\ell-1} D_{i,\ell}(y) \tanh^{2i-1}(y){\rm sech}^2(y)\right\}{\rm sech}^\ell(y), \edm but with the occurrence of some awkward coefficients: \bdm D_{i,\ell}(y)=\frac{1}{2i}\sum_{j=i}^{\ell-1} \frac{(-1)^j}{2j+1}\left(\begin{array}{c}\ell-1\\j\end{array}\right)(\ell \sinh^2(y)-2 i). \edm Once the source is given in explicit form, we can solve the inhomogeneous Schr\"{o}dinger equation by the procedure developed in Subsection 2.3.
For the even normalizable zero mode we can take directly the translational mode \bdm \rho_\ell(y)=(f_0(y))_\ell={\rm sech}^\ell(y), \edm while the odd non-normalizable one $\chi_\ell(y)$ is obtained by solving \bdm \frac{d\chi_\ell}{dy}+\ell\tanh(y) \chi_\ell=\cosh^\ell(y) \edm with the result \beqrn \chi_\ell(y)&=&{\rm sech}^\ell(y)\int_0^y dz \cosh^{2\ell}(z)=\frac{i y}{(2\ell+1)|y|}\cosh^{\ell+1}(y) F\left(\frac{1}{2},\frac{1}{2}+\ell;\frac{3}{2}+\ell;\cosh^2(y)\right)\\&-&\frac{i\sqrt{\pi}\Gamma(\ell+\frac{1}{2})}{2\Gamma(\ell+1)}{\rm sech}^\ell(y), \eeqrn which can, once again, be more conveniently given as a sum of hyperbolic functions \bdm \chi_\ell(y)=\frac{1}{2^{2\ell}}\left[\left(\begin{array}{c}2\ell\\\ell\end{array}\right)y+\sum_{j=0}^{\ell-1}\left(\begin{array}{c}2\ell\\j\end{array}\right)\frac{\sinh(2(\ell-j)y)}{\ell-j}\right]{\rm sech}^\ell(y). \edm The normalization of zero modes has been chosen so that the Wronskian is unity. Now, to compute $\varphi_\ell(y)$ we have to perform the integral (\ref{phi}). The integrand can be decomposed as a sum with terms made of products of powers of hyperbolic functions which, in some cases, also include a factor $\log(\cosh(z))$, or $z$, or both. All these expressions can be integrated exactly, but since both $\chi_\ell(y)$ and $R_\ell(y)$ involve sums with rather unwieldy coefficients, obtaining a general expression valid for all $\ell$ appears to be quite cumbersome. Thus, here we content ourselves with giving the results for some low-$\ell$ members of the hierarchy, $3\leq\ell\leq 8$.
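Before quoting those results, the hyperbolic-function form of $\chi_\ell$ can be checked against its defining integral, and the unit Wronskian $\rho_\ell\chi'_\ell-\rho'_\ell\chi_\ell=1$ verified by finite differences. A minimal sketch (helper names our own):

```python
import math

def sech(x):
    return 1.0 / math.cosh(x)

def chi_integral(l, y, n=100000):
    # chi_l(y) = sech^l(y) * integral_0^y cosh^(2l)(z) dz, trapezoid rule
    h = y / n
    s = 0.5 * (1.0 + math.cosh(y) ** (2 * l))
    for k in range(1, n):
        s += math.cosh(k * h) ** (2 * l)
    return sech(y) ** l * h * s

def chi_closed(l, y):
    # quoted sum of hyperbolic functions
    s = math.comb(2 * l, l) * y
    for j in range(l):
        s += math.comb(2 * l, j) * math.sinh(2 * (l - j) * y) / (l - j)
    return s / 4 ** l * sech(y) ** l

for l in (1, 2, 3):
    for y in (0.5, 1.0, 1.5):
        assert abs(chi_integral(l, y) - chi_closed(l, y)) < 1e-6

# unit Wronskian rho*chi' - rho'*chi = 1, checked by central differences
l, y, h = 3, 0.8, 1e-5
rho = lambda x: sech(x) ** l
dchi = (chi_closed(l, y + h) - chi_closed(l, y - h)) / (2 * h)
drho = (rho(y + h) - rho(y - h)) / (2 * h)
W = rho(y) * dchi - drho * chi_closed(l, y)
assert abs(W - 1.0) < 1e-5
print("closed form of chi_l and unit Wronskian verified")
```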
In all these cases, the perturbation of the scalar field can be put in the form \bdm \varphi_\ell(y)=-M_\ell \left[\gamma(y)+ N_\ell\;y - {\cal P}_\ell\left({\rm sech}^2(y)\right) \tanh(y)\right] {\rm sech}^\ell(y), \edm where $\gamma(y)$ is the function \bdm \gamma(y)=\frac{\pi^2}{12} + y (y - 2\log 2) + {\rm Li}_2(-e^{-2y}), \edm the factors $M_\ell$ and $N_\ell$ are rational numbers, and ${\cal P}_\ell\left(t\right) $ is a polynomial of degree $\ell-2$ with rational coefficients. The explicit values of the factors appear in the table \begin{table}[H] \renewcommand*{\arraystretch}{1.2} \begin{center} \begin{tabular}{||c|c|c|c|c|c|c||} \hline \multicolumn{7}{||c||}{\small Factors entering in $\varphi_\ell(y)$}\\ \hline $\ell$&{\small 3}&{\small 4}&{\small 5}&{\small 6}&{\small 7}&{\small 8}\\ \hline $M_\ell$&${{64\over 15}}$&${27\over140}$&${65536\over8505}$&${375\over1232}$&${4194304\over375375}$&${8575\over20592}$\\ \hline $N_\ell$&${17\over48}$&${13\over24}$&${5069\over7680}$&${1901\over2560}$&${172889\over215040}$&${45791\over53760}$\\ \hline \end{tabular} \end{center} \end{table} \noindent and the polynomials are given in the following list: \beqrn {\cal P}_3(t)&=&{4\over 5} + {7 \over 80}t\\ {\cal P}_4(t)&=& {533\over630} + {31\over252} t+{5\over168} t^2\\ {\cal P}_5(t)&=&{1745\over2016} + {569 \over 4032}t+ {35 \over 768}t^2 + {65 \over 4608}t^3 \\ \\ {\cal P}_6(t)&=& {45457\over 51975} + {2251 \over 14850}t+ {34907 \over 633600}t^2 + {641 \over 28160}t^3 + {7 \over 880}t^4\\ {\cal P}_7(t)&=& {904757\over 1029600} + {325607 \over 2059200}t+ {168307 \over 2745600}t^2 + {93947 \over 3294720}t^3 + {3983 \over 299520}t^4 + {133 \over 26624}t^5\\ {\cal P}_8(t)&=& {1233833\over 1401400} + {4097497 \over 25225200}t + {2205607 \over 33633600}t^2 + {119437 \over 3669120}t^3 + {35831 \over 2096640}t^4 +{307 \over 35840}t^5 + {121 \over 35840}t^6. \eeqrn We present also some graphics in figures \ref{fig9} and \ref{fig10}. 
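The function $\gamma(y)$ defined above vanishes at the origin, since ${\rm Li}_2(-1)=-\pi^2/12$ cancels the constant term, and it grows like $y(y-2\log 2)+\pi^2/12$ at large $y$ as the dilogarithm dies off. A quick check using the defining series of the dilogarithm (our own minimal implementation):

```python
import math

def Li2(x, terms=4000):
    # dilogarithm Li_2(x) = sum_{k>=1} x^k / k^2, convergent for |x| <= 1
    return sum(x ** k / k ** 2 for k in range(1, terms + 1))

def gamma_fn(y):
    # gamma(y) = pi^2/12 + y*(y - 2 log 2) + Li_2(-exp(-2y)), as in the text
    return math.pi ** 2 / 12 + y * (y - 2.0 * math.log(2.0)) + Li2(-math.exp(-2.0 * y))

assert abs(gamma_fn(0.0)) < 1e-6        # Li_2(-1) = -pi^2/12 cancels the constant
y = 7.0                                 # dilogarithm term is O(e^{-2y}) here
assert abs(gamma_fn(y) - (math.pi ** 2 / 12 + y * (y - 2.0 * math.log(2.0)))) < 1e-5
print("gamma(0) = 0 and quadratic growth at large y confirmed")
```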
As one can see, the general features of the metric and scalar field perturbations are the same irrespective of the value of $\ell$, the differences being only quantitative. For each parity, the function $\beta_\ell(y)$ increases with $|y|$ at a rate which is rising as $\ell$ becomes higher. As for the function $\varphi_\ell(y)$, it displays a sort of damped oscillations around the origin before decaying for high $|y|$. For $y>0$, it first reaches a maximum which is higher and closer to the origin as $\ell$ increases, and then a minimum, this time shallower and also closer to the origin for increasing $\ell$. \begin{figure}[t] \centering \includegraphics[width=7cm]{figura17.eps} \hspace{1cm} \includegraphics[width=9cm]{figura18.eps} \caption{The perturbations of the metric and scalar kink profile for several odd values of $\ell$.} \label{fig9} \end{figure} \begin{figure}[b] \centering \includegraphics[width=7cm]{figura19.eps} \hspace{1cm} \includegraphics[width=9cm]{figura20.eps} \caption{The perturbations of the metric and scalar kink profile for several even values of $\ell$.} \label{fig10} \end{figure} \section{Outlook} Throughout this paper we have investigated the interplay between Jackiw-Teitelboim gravity and travelling kink solutions in several scalar field theories. According to the classification given by Bazeia in \cite{Ba02}, all these theories are of type I, models with a single scalar field supporting structureless kinks. In fact, they all belong to a subclass in which the energy density is symmetric around the center of the kink, a condition which does not apply to other models within type I such as the $\phi^6$ kink \cite{Lo79}. There is also a second type of models which, although they contain a single field, are able to embrace kinks of two different species, for instance the double sine-Gordon model introduced in \cite{MaKu76}.
Finally, type III comprises a variety of models with several scalar fields and where the interactions among different kink components lend them internal structure. In models of this type, when the potential has non-collinear minima, it is possible to engineer junction configurations of kinks, see some examples in \cite{Ba02}, or to consider a non-flat target manifold to uncover the presence of kinks in nonlinear sigma models \cite{Juanetal1,Juanetal2}. It would be interesting to put JT gravity into models of types II and III and to explore the consequences of gravitational physics on the rich dynamics enjoyed by these theories. The scope of the treatment that we have given here is limited to the presentation of the self-gravitating kink solutions arising in the models considered, but without a detailed analysis of their stability or possible quantization, which can be the subject of further work. For zero gravitational coupling, the stability of kinks is very robust for topological reasons. Since, in the presence of gravity, finiteness of the energy imposes the same constraints on the asymptotic behavior of the scalar field as in the non-gravitational case, we expect the self-gravitating kinks to be stable against scalar field fluctuations, at least for small coupling $g$. However, fluctuations of the metric fields also play a role, and they could give rise to instability modes leading to gravitational collapse into a black hole with topologically non-trivial boundary conditions. This could be especially significant when $g$ is larger and the kink energy increases. A criterion for gravitational stability for pressureless matter has been worked out in \cite{SiMa91}, but the situation here is more complicated and further study is required to elucidate this point. In any case, the computation of the spectrum of kink fluctuations and the study of scalar and gravitational waves in a kink background are interesting issues which deserve careful examination.
As we have mentioned in the Introduction, the theory of Jackiw and Teitelboim is only a particular case, although a fairly interesting one, within the category of two-dimensional dilaton gravity theories, which can be formulated in quite a variety of ways. The presence of black holes in these theories and the diverse gravitational phenomenology springing up in them has been a theme of considerable research \cite{GrKuVa02}. Thus, to deal with other dilaton theories is, along with the consideration of the wide diversity of scalar models alluded to above, another natural direction in which the results reported in the present work can be extended. Finally, although setting the cosmological constant to zero, as we have done, is the most natural option for obtaining kink solutions, taking into account the effect of a non-vanishing $\Lambda$ on field configurations like those described in the paper could be another problem to be addressed. \section*{Acknowledgments} The authors acknowledge the Junta de Castilla y Le\'on for financial help under grants SA067G19 and BU229P18.
\section{Introduction} New heavy neutral fermions could affect low-energy measurements, made well below their production thresholds. In general, these new states mix with the three light neutrinos, thus modifying their neutral current (NC) and charged current (CC) couplings. Such effects depend on the light--heavy neutrino mixing angles, which can then be constrained by the set of NC and CC precision data at ``low'' energy \cite{fermionmix}--\cite{mix94}. If the explanation for the smallness of the known neutrino masses is given by the see-saw mechanism \cite{see-saw}, these mixing angles are proportional to the square roots of the ratios of the light and the heavy mass eigenvalues, so that their effects on the light neutrino couplings to the standard gauge bosons vanish when the heavy neutrino masses go to infinity. In this limit, the heavy neutrinos completely decouple from the low-energy physics. In fact, not only the tree-level light--heavy mixing effects but also the loop diagrams involving the heavy neutrinos that contribute to physical observables turn out to be suppressed by inverse powers of the heavy mass scale \cite{lim-inami,cheng-li}. However, the see-saw mechanism is not the only possible scenario to explain the lightness of the known neutrinos. In particular, a viable alternative involving heavy neutral states has been considered \cite{numix}--\cite{unconventional}, where vanishingly small masses for the known neutrinos are predicted by a symmetry argument, and at the same time large light--heavy mixing angles are allowed. In this case, due to the mixing effects in the light neutrino interactions, the new neutral fermions affect the low-energy physics even if their masses are very large. In the present paper, we discuss the conditions under which the mixing angles between the standard and new neutral fermions heavier than $M_Z$ can be large, keeping at the same time the masses of the known neutrinos below the laboratory limits.
We will then study the limits that can be set on the mixing parameters, concentrating on the ones which would induce Lepton Flavour Violation (LFV). In particular, we will consider a class of models that allow total lepton number conservation, and we will show that loop contributions involving virtual heavy neutrinos to the decays $\mu\to e\gamma$, $\mu\to ee^+e^-$, and to the process of $\mu$--$e$ conversion in nuclei, can be large and thus provide significant constraints on the mixing parameters. In particular, for a heavy mass scale around the electroweak scale, the induced $\mu\to e e^+ e^-$ and $\mu$--$e$ conversion rates {\it increase} with the masses of the heavy neutrinos, showing a non-decoupling behaviour, and the present limits on these processes put significant constraints on the space of the light--heavy mixing parameters and new neutrino masses. The planned experiments looking for muon--electron conversion are especially well suited to finding signals arising from this kind of physics. The paper is organized as follows. In section 2, we discuss the class of models for neutrino mass that predict significant light--heavy mixing. In section 3, we review the direct constraints on the flavour-diagonal, and update the resulting indirect limits on the flavour-changing, mixing parameters. In section 4, we compute the non-decoupling contributions to the decays $\mu\to e\gamma$, $\mu\to ee^+e^-$, and to the process of $\mu$--$e$ conversion in nuclei. We discuss their importance and their interplay in constraining the models, and we show how the non-decoupling behaviour is strictly related to the breaking of the electroweak symmetry. Finally, section 5 summarizes our conclusions. \section{Models for light--heavy neutrino mixing} Let us first consider a generic extension of the Standard Model (SM), including right-handed neutrinos (singlets under the SM group) as the only new neutral fermions.
We can arrange all the independent neutrino degrees of freedom in two vectors of {\it left-handed} fields, $\nu$ and $N\equiv C\bar\nu_R^T$, where $C$ is the charge-conjugation matrix and a family index is understood. In the basis $(\nu,N)$ the mass matrix can be written in a block form as \begin{equation} {\mathbold M}=\pmatrix{m_{\nu \nu} & m_{\nu N}\cr m_{\nu N}^T & M_{N N}\cr}. \label{nondec1} \end{equation} The entry $m_{\nu\nu}$ can be due to a possible lepton-number-violating vacuum expectation value (VEV) of a triplet Higgs field, as in left--right models. Although in principle the singlet neutrinos can be light, we are interested in the case when all the new (i.e. non-SM) states are heavier than $\sim M_Z$. This is also the theoretical expectation in most models, which generally predict large masses for the heavy states. In the one-family case, as long as the entry $m_{\nu\nu}$ can be neglected, we have the usual see-saw mechanism \cite{see-saw} for the generation of a small neutrino mass. In this case, the light--heavy mixing angle $\theta$ depends on the ratio of the light and heavy mass scales, as $\sin^2\theta \sim m/M$ ($m\sim m_{\nu N}^2/M_{N N}$, $M\sim M_{N N}$). For $M\ \rlap{\raise 2pt\hbox{$>$}}{\lower 2pt \hbox{$\sim$}}\ M_Z$, taking the laboratory limits on the $\nu_e$, $\nu_\mu$ and $\nu_\tau$ masses \cite{pdg94}, we get the bounds $\sin^2\theta_{\nu_e}\ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ 10^{-10}$, $\sin^2\theta_{\nu_\mu}\ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ 10^{-6}$, $\sin^2\theta_{\nu_\tau}\ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ 10^{-3}$, which are too small to have any phenomenological interest. To have significant light--heavy mixing in the one-family case, we are left with only one solution, requiring that the entry $m_{\nu\nu}$ be non-vanishing and satisfy the relation $m_{\nu\nu}=m_{\nu N}^2/M_{N N}$. This would ensure that the mass matrix of Eq.
(\ref{nondec1}) be singular, so that the mixing angle $\sin\theta\sim m_{\nu N}/M_{N N}$ would no longer be related to the ratio of the light to the heavy eigenvalues, and would be allowed to be as large as $O(1)$. However, it seems hard to find a reasonable motivation for the underlying fine-tuning of the parameters in the mass matrix. The model with three families of left- and right-handed neutrinos ($\nu\equiv(\nu_e,\nu_\mu,\nu_\tau)$, $N\equiv (N_e,N_\mu,N_\tau)$) allows a solution even in the case $m_{\nu\nu}=0$ \cite{buchmuller-greub}. In this case, the fine-tuning conditions, which allow for finite light--heavy mixing and vanishing mass of the known neutrinos, are the following: 1) the Dirac mass matrix $m_{\nu N}$ is of rank 1, that is, all three rows are proportional; 2) the trace $Tr(m_{\nu N}^T M_{NN}^{-1} m_{\nu N})=0$ (assuming that $M_{NN}$ is not singular). These considerations can be generalized. The neutrino mass matrix should be singular with a three times degenerate zero eigenvalue, in the limit in which the masses of the three known neutrinos are neglected. In the see-saw mechanism, this is ensured by letting the heavy scale go to infinity, which implies that the light--heavy mixing angles go to zero. However, if for some reason the mass matrix is (three times) singular even for {\it finite} values of the heavy scale, the light--heavy mixing angle can be substantial. Any model realizing this idea in a natural way, e.g. due to a symmetry argument, is a viable alternative to the see-saw mechanism to explain the lightness of the known neutrinos. For instance, let us assume that pairs $N$, $N'$ of (left-handed) new neutrinos exist, with the lepton-number assignments $L(N)=-L(N')=L(\nu)=1$, and that $L$ is conserved. We understand a family index, i.e. $\nu\equiv(\nu_e,\nu_\mu,\nu_\tau)$, $N\equiv (N_1,...,N_{n-3}),$ $N'\equiv(N'_1,...,N'_{n-3}),$ where $n-3$ is the number of new pairs of neutral fermions.
Then, in the basis $(\nu, N, N')$, the mass matrix is \begin{equation} {\mathbold M}=\pmatrix{0 & 0 & M_{\nu N'}\cr 0 & 0 & M_{N N'}\cr M_{\nu N'}^T & M_{NN'}^T & 0\cr}, \label{nondec2} \end{equation} which is singular and ensures that three eigenstates form massless Weyl neutrinos. In fact, as in the SM, the light states remain with no chirality partners and hence massless.\footnote{Small $L$-violating Majorana mass terms for the states $\nu$ and $N$ could also be allowed \cite{ll2}, and could be relevant for explaining the solar neutrino deficit via neutrino oscillations.} On the contrary, the heavy states form Dirac neutrinos, whose left-handed components are mainly the $N$ and whose right-handed parts are given by $C\overline{N'}^{\,T}$. Mass matrices of the form (\ref{nondec2}) have been considered in Ref. \cite{numix}. They can arise in generalized E$_6$ models \cite{mohapatra-valle}--\cite{unconventional}, as well as in models predicting other kinds of vector multiplets (singlets, triplets, \dots) or new mirror multiplets of leptons \cite{maalampi-roos} with neutral components $N$, $N'$. The mass matrix (\ref{nondec2}) can be put in a ``Dirac diagonal'' form by an ``orthogonal'' transformation, \begin{equation} {\mathbold U}^T {\mathbold M} {\mathbold U}=\pmatrix{0 & 0 & 0\cr 0 & 0 & M\cr 0 & M & 0\cr}, \label{nondec3} \end{equation} where the block $M$ is diagonal. The {\it unitary} matrix ${\mathbold U}$ in Eq. (\ref{nondec3}) can be chosen in the form \begin{equation} {\mathbold U}=\pmatrix{A&G&0\cr F&H&0\cr 0&0&K\cr} .\label{nondec4} \end{equation} Several relations amongst the blocks in (\ref{nondec4}) can also be deduced from the unitarity condition ${\mathbold U}^\dagger{\mathbold U}={\mathbold U}{\mathbold U}^\dagger= {\mathbold 1}$.
Equation (\ref{nondec4}) describes the mixing between the $\nu$ and $N$ states, the mixing parameters being the elements of the matrix \begin{equation} G H^{-1}=-(F A^{-1})^\dagger = [M_{\nu N'}M_{N'N'}^{-1}]^* .\label{nondec4b} \end{equation} Clearly, if for the relevant matrix elements $M_{\nu N'}\sim M_{N N'}$, the mixings between $\nu$ and $N$ can be arbitrarily large. We will consider in the following two particular cases: i) The new neutrinos $N$ are {\it ordinary}, i.e. they belong to a weak (left-handed) doublet. Then $M_{\nu N'}$ and $M_{N N'}$ could be generated by vacuum expectation values of Higgs fields transforming in the same way under SU(2) so that the $\nu$--$N$ mixing could be naturally close to maximal. In particular, we can consider the SM with $n>3$ families, and with $n-3$ right-handed neutrinos. Then $N$ would describe the neutrinos of the new $n-3$ families, appearing with right-handed partners $\overline{N'}$. In this picture, the three known neutrinos remain strictly massless since they have no right-handed component. The same scenario arises in some lepton-number-conserving E$_6$ models \cite{unconventional}, predicting three new ordinary $N$ states (one per standard family) and six isosinglets, three of which can play the role of our $N'$ in such a way that the relevant part of the mass matrix assumes the form of Eq. (\ref{nondec2}). ii) Another case that has been considered in the literature \cite{mohapatra-valle} corresponds to both $N$ and $N'$ singlets. In this case, a significant light--heavy mixing can be expected only if the SU(2)-invariant mass term $M_{NN'}$ is generated not far from the electroweak scale. In both the above cases, the $N'$ states are assumed to be isosinglets. This implies that the block $M_{\nu N'}$ violates SU(2), and can be generated by the VEV of a doublet Higgs field at the electroweak scale. 
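The triple degeneracy of the zero eigenvalue in Eq. (\ref{nondec2}) can be illustrated numerically: for generic blocks $M_{\nu N'}$ and $M_{NN'}$ the matrix has nullity exactly three. A small pure-Python sketch (the helper names and the choice $n-3=2$ are our own, for illustration only):

```python
import random

def rank(mat, tol=1e-9):
    # matrix rank via Gaussian elimination with partial pivoting
    M = [row[:] for row in mat]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        if r == nrows:
            break
        piv = max(range(r, nrows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, nrows):
            f = M[i][c] / M[r][c]
            for j in range(c, ncols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

random.seed(7)
k = 2                                   # number of N, N' pairs (n - 3 = 2, our choice)
MnuNp = [[random.uniform(-1, 1) for _ in range(k)] for _ in range(3)]   # M_{nu N'}
MNNp = [[random.uniform(-1, 1) for _ in range(k)] for _ in range(k)]    # M_{N N'}

dim = 3 + 2 * k                         # basis ordering: (nu, N, N')
M = [[0.0] * dim for _ in range(dim)]
for i in range(3):
    for j in range(k):
        M[i][3 + k + j] = M[3 + k + j][i] = MnuNp[i][j]
for i in range(k):
    for j in range(k):
        M[3 + i][3 + k + j] = M[3 + k + j][3 + i] = MNNp[i][j]

nullity = dim - rank(M)
assert nullity == 3                     # three massless Weyl neutrinos
print("nullity:", nullity)
```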
\section{Constraints from flavour-diagonal processes} The mixing with heavy states would affect the observables at energies below the threshold for their production. Following e.g. Ref. \cite{fitnu}, we introduce a vector ${N}$ to describe all the new independent neutral fermionic degrees of freedom which mix with the three known left-handed neutrinos $\nu\equiv(\nu_e,\nu_\mu,\nu_\tau)$. We will use only left-handed fields, without distinguishing between neutrinos and antineutrinos. Then the light $n$ and heavy ${\cal N}$ mass eigenstates can be obtained by a unitary transformation \begin{equation} \pmatrix{\nu\cr {N}\cr}= \pmatrix{A&G\cr F&H\cr} \pmatrix{n\cr {\cal N}\cr} .\label{nondec5} \end{equation} This general formalism covers in particular the cases discussed in the previous section. In the case of Eqs. (\ref{nondec2})--(\ref{nondec4}), this means that we are now focusing on the submatrix involving the light--heavy mixing, given by Eq. (\ref{nondec5}), which is unitary. The light--heavy mixing is described by the matrix $G$, and is reflected also in the non-unitarity of the block $A$ ($AA^\dagger+GG^\dagger=1$). Notice that $A$ also describes the leptonic Cabibbo--Kobayashi--Maskawa mixing, in the basis where the mass matrix for the (light) charged leptons is diagonal. In processes occurring at energies below the threshold for the production of the heavy states, the standard gauge eigenstate $\nu_a$ ($a=e,\mu,\tau$) is effectively replaced by its (normalized) projection $\vert \nu_a^{light}\rangle$ onto the subspace of the light neutrinos $\vert n_i\rangle$ ($i=1,2,3$), \begin{equation} \vert \nu_a^{light}\rangle \equiv {1\over c_{\nu_a}}\sum_{i=1}^{3} A^\dagger_{ia} \vert n_i\rangle, \label{nondec6} \end{equation} where $c_{\nu_a}^2\equiv\cos^2\theta_{\nu_a}\equiv (AA^\dagger)_{aa}$. 
The state $\vert \nu_a^{light}\rangle$ has non-trivial projections on the subspace of the standard neutrinos $\vert \nu_b\rangle$ as well as on the subspaces of the new neutrinos $\vert {N}_B\rangle$. In fact we have \begin{equation} \begin{array}{ll} {\displaystyle \sum_b}\vert\langle\nu_b\vert\nu_a^{light}\rangle\vert^2= &{(AA^\dagger)_{aa}^2\over c_{\nu_a}^2} =c_{\nu_a}^2 ,\\ {\displaystyle \sum_B}\vert\langle{N}_B\vert\nu_a^{light}\rangle\vert^2= &{(AF^\dagger FA^\dagger)_{aa}\over c_{\nu_a}^2}=s_{\nu_a}^2, \end{array} \label{nondec7} \end{equation} with $s_{\nu_a}^2\equiv 1-c_{\nu_a}^2=\sin^2\theta_{\nu_a}$. The parameter $\theta_{\nu_a}$ measures the total amount of mixing of the known state of flavour $a=e,\mu,\tau$ with the new states. These three mixing angles are sufficient to describe the {\it tree-level} effects of the light--heavy mixing in the CC and NC processes at energies below the threshold for the production of the heavy states \cite{fitnu}. The entries of the matrix $GG^\dagger$, describing the mixing with the new neutrinos, are limited by the constraints on CC universality and, if the heavy states do not belong to SU(2) doublets, by the measurement of the $Z$ boson invisible width at LEP \cite{fermionmix}--\cite{mix94}. For the diagonal elements $(GG^\dagger)_{aa}\equiv s^2_{\nu_a}$, the 90\% C.L. bounds are \cite{fitnu,mix94} \begin{equation} s^2_{\nu_e}<0.007(0.005),\qquad s^2_{\nu_\mu}<0.002 ,\qquad s^2_{\nu_\tau}<0.03(0.01), \label{nondec8} \end{equation} where the more conservative limits are due to the CC constraints and apply to any kind of heavy neutrinos, while the limits in parentheses correspond to the mixing with SU(2) singlets and take into account also the LEP \cite{lep94} and SLC \cite{alr-slc} data. For a complete discussion, we refer to \cite{fitnu,mix94}. 
These limits can be somewhat relaxed if the cancellations with the effects due to different fermion-mixing parameters which might be present in some extended models are taken into account \cite{fermionmix}--\cite{mix94}. Nevertheless, we will not try to allow for the corresponding fine-tunings and we will consider the stringent bounds of Eq. (\ref{nondec8}) to be reliable. For the off-diagonal elements of the matrix $GG^\dagger$, indirect limits can be obtained from Eq. (\ref{nondec8}) and the relation \cite{ll2} \begin{equation} \vert (GG^\dagger)_{ab}\vert<\vert s_{\nu_a}s_{\nu_b}\vert, \label{nondec9} \end{equation} which can be deduced from the unitarity of the full mixing matrix by applying the Schwarz inequality. Using the bounds of Eq. (\ref{nondec8}), we find that all the elements of the matrix $GG^\dagger$ can be constrained as in Table 1. Loop diagrams involving virtual heavy neutrinos can contribute to flavour-diagonal observables \cite{lim-inami,nuloopfcons}, such as the leptonic widths $Z\to l\bar l$ or the polarization asymmetries measured at the $Z$ peak \cite{lep94,alr-slc}. However, taking into account the updated limits of Table 1, the predictions for these observables turn out to be below the attainable experimental limits. Heavy neutrinos in general also affect the electroweak radiative corrections which are tested e.g. in the LEP experiments. For instance, if the new neutral states $N$ belong to SU(2) doublets $\pmatrix{N\cr E}$, they will then contribute to the $\rho$ parameter \cite{rho-mt,rhoparam} through loop diagrams, resulting in a (top-like) non-decoupling dependence, \begin{equation} \delta\rho\simeq\sum {G_F\over 8\sqrt2\pi^2}\Delta M^2, \label{nondec10} \end{equation} where $\Delta M^2\equiv M_E^2+M_N^2 - {4 M_E^2 M_N^2\over M_E^2 - M_N^2} \ln {M_E\over M_N}\ge (M_N-M_E)^2$; the sum runs over all the new doublets and we have neglected the effects of the light--heavy neutrino mixing.
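Returning to the off-diagonal bounds: the kind of entries collected in Table 1 follow immediately from the Schwarz inequality of Eq. (\ref{nondec9}) and the diagonal limits of Eq. (\ref{nondec8}); a short sketch, using the values quoted for mixing with SU(2) singlets:

```python
import math

# diagonal 90% C.L. limits s^2_{nu_a} for mixing with SU(2) singlets, from the text
s2 = {"e": 0.005, "mu": 0.002, "tau": 0.01}

flavours = ("e", "mu", "tau")
bounds = {}
for i, a in enumerate(flavours):
    for b in flavours[i + 1:]:
        # |(GG^dagger)_ab| < s_a * s_b  (Schwarz inequality)
        bounds[(a, b)] = math.sqrt(s2[a] * s2[b])
        print("|(GG+)_{%s %s}| < %.1e" % (a, b, bounds[(a, b)]))
```

For instance, the $e$--$\mu$ entry comes out at the few-per-mille level, which is why the loop-induced $\mu\to e$ transitions discussed in section 4 can compete with it.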
The value of the $\rho$ parameter is constrained by the electroweak data. For $m_t=174\pm16$ GeV, as suggested by the CDF measurement \cite{cdf-mtop}, the result is $\delta\rho=0.0004\pm0.0022\pm0.002$ \cite{rhoparam} (the second error is from the uncertainty in the Higgs mass $m_H$) for the ($m_t$-independent) corrections due to possible new physics. {}From Eq. (\ref{nondec10}) and for $m_H<1$ TeV, we then find that any new lepton doublet should be degenerate within $\vert M_N-M_E\vert \lesssim 220$ GeV at 90\% C.L. This constraint is significant if the new neutrinos are close to the perturbative limit $M_N\lesssim 1$ TeV, which holds for $N$ belonging to an SU(2) doublet, but in any case within this region it does not require an important fine-tuning. A second phenomenological constraint on heavy ordinary neutrinos, belonging to weak doublets, comes from the limit on the Peskin--Takeuchi \cite{heavyloops} $S$ parameter. The contribution from a multiplet of heavy degenerate fermions is $\Delta S = C{\sum_f}(t_{3L}(f)-t_{3R}(f))^2/3\pi$, where $t_{3L,R}(f)$ is the weak isospin of the left- (right-) handed component of fermion $f$, and $C$ is the number of colours \cite{langacker-erler}. Then the contribution from the set of all the particles in each new ordinary family of heavy fermions is $\Delta S\simeq 2/3\pi>0$. {}From the analysis of the electroweak data, one gets the 95\% C.L. bound $S<0.2$, which can be relaxed to $S<0.4$ if only positive contributions to $S$ \cite{langacker-erler} are allowed. As a consequence, only a single, or very marginally two, new families are allowed to exist by the present data. This is the maximum number of pairs $N$, $N'$, for $N$ belonging to a new family and $N'$ isosinglet, and in this case $M_{\nu N'}$ and $M_{NN'}$ can be $3\times1$ (much less likely $3\times2$) matrices. However, in the following we will retain the general notation, allowing for an arbitrary number of $N$, $N'$ pairs.
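The size of the mass-splitting bound quoted above can be reproduced with a two-line estimate from Eq. (\ref{nondec10}); the numerical inputs below ($G_F$ and an effective 90\% C.L. allowance $\delta\rho\lesssim 4\times10^{-3}$, obtained by loosely combining the quoted errors) are our own rough choices, not taken verbatim from the text:

```python
import math

G_F = 1.166e-5            # Fermi constant in GeV^-2
delta_rho_max = 4.0e-3    # assumed 90% C.L. allowance for new-physics contributions

# delta_rho = G_F/(8*sqrt(2)*pi^2) * DeltaM^2 with DeltaM^2 >= (M_N - M_E)^2,
# so the doublet mass splitting obeys |M_N - M_E| <= sqrt(8*sqrt(2)*pi^2*delta_rho/G_F)
split_max = math.sqrt(8.0 * math.sqrt(2.0) * math.pi ** 2 * delta_rho_max / G_F)
print("maximal splitting ~ %.0f GeV" % split_max)
assert 150.0 < split_max < 250.0    # same ballpark as the ~220 GeV quoted in the text
```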
In fact, this number is not important for our discussion (provided it is non-zero), and in addition one can also consider new pairs $N$, $N'$ from (respectively) an isodoublet and an isosinglet which do not belong to new ordinary families. An example of the latter situation is given by E$_6$ models themselves \cite{unconventional} that contain new leptonic doublets and isosinglets in the ${\bf 27}$ representation, so that in the three-family models there are also three such pairs. In this case, the contribution to the $S$ parameter is zero, since the non-isosinglet new states appear in vector doublets ($t_{3L}(f)=t_{3R}(f)$). In the case when both $N$ and $N'$ are isosinglets, as in the model \cite{mohapatra-valle}, the $S$ parameter is not affected, while the contribution to $\delta\rho$ is suppressed by the fourth power in the mixing angles and is negligible \cite{cheng-li,nuloopfcons}. \section{Constraints from Flavour-Changing (FC) processes} The indirect limits presented in Table 1 are more stringent than any direct bound on the tree-level effects of the off-diagonal mixings, such as the constraints from the search for neutrino oscillations \cite{ll2}. However, loop diagrams involving heavy neutrinos give rise to unobserved rare processes such as $\mu\to e \gamma$, $\mu\to ee^+e^-$, $\tau\to l_al_b^+l_c^-$ ($a,b,c=e,\mu$), $Z\to l_a^-l_b^+$ ($a,b=e,\mu,\tau$), etc. \cite{lim-inami,concha-valle}. Taking into account the stringent constraints in Table 1, the rates for all the processes involving the violation of the $\tau$ lepton number turn out to be below the experimental sensitivity even for extreme values of the heavy neutrino masses \cite{concha-valle}. In other words, it is not possible to improve the limits in Table 1 on the parameters $(GG^\dagger)_{a\tau}$ ($a=e,\mu$). 
However, the extraordinary sensitivity of the experiments looking for FC processes involving the first two families implies that the constraints from $\mu\to e\gamma$, from $\mu\to ee^+e^-$, or from $\mu$--$e$ conversion in nuclei, are significant and turn out to be stronger than those from Table 1. The diagrams contributing to these processes are proportional to factors involving the light--heavy mixing angles, and in the cases of the processes $\mu\to ee^+e^-$ and $\mu$--$e$ conversion in nuclei, they depend up to quadratically on the heavy neutrino mass scale $M$. In see-saw models, where the mixing angles are suppressed by inverse powers of the heavy masses, the resulting dependence is $\sim M^{-2}$ \cite{lim-inami,cheng-li}, in agreement with the decoupling theorem \cite{decoupling}. However, the models discussed in section 2 predict a finite light--heavy mixing independent of the light-to-heavy mass ratio, so that, assuming that the $N'$ states are isosinglets, a genuine $\sim M^2$ dependence is obtained. This non-decoupling behaviour is comparable to the top mass dependence of the $\rho$ parameter \cite{rho-mt} and of the $Z\to b\bar b$ vertex \cite{zbb-mt}. Since the effects of any SU(2)-invariant mass term should decouple when the mass term goes to infinity \cite{decoupling}, in all these cases the relevant combinations of the mass and mixing parameters entering the graphs are connected to the electroweak breaking scale and cannot exceed $\sim1$ TeV. This consideration is obvious for the top-dependent non-decoupling effects, and will be explicitly verified in the following in the case of the heavy-neutrino contributions. \bigskip \noindent {\it 4.1 $\mu\to e\gamma$ } Let us first consider the decay $\mu\to e \gamma$, induced by one-loop graphs involving virtual heavy neutrinos \cite{ma-pramudita,ll2}.
The corresponding branching ratio is given by \begin{equation} B(\mu\to e \gamma)={3\alpha\over 8\pi}\left|\sum_i G_{ei}G^\dagger_{i\mu} \phi\left(M_i^2\over M_W^2\right)\right|^2 ,\label{nondec11} \end{equation} where $M_i$ is the mass of the heavy eigenstate ${\cal N}_{i}$, and the function \begin{equation} \phi(x)={x(1-6x+3x^2+2x^3-6x^2\ln x)\over 2(1-x)^4} \label{nondec12} \end{equation} varies slowly from $0$ to $1$ as $x$ ranges from $0$ to $\infty$. Taking into account the 90\% C.L. limit $B(\mu\to e \gamma)<4.9\times10^{-11}$ \cite{muegamma-exp}, and assuming $M_i\ \rlap{\raise 2pt\hbox{$>$}}{\lower 2pt \hbox{$\sim$}}\ M_W$, one gets the estimate $\vert (GG^\dagger)_{e\mu}\vert \ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ 0.95\times10^{-3}$ \cite{ll2}. This bound holds under the assumption that no important fine-tuning operates in the sum $\sum_i G_{ei}G^\dagger_{i\mu}\phi\left(M_i^2\over M_W^2\right)$, and is independent of the weak isospin of the new states. We will be interested in the case of very heavy neutrinos, $M_i\gg M_W$. In this case $\phi\simeq 1$ and we get the stringent bound \begin{equation} \vert (GG^\dagger)_{e\mu}\vert \ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ 0.24\times 10^{-3}. \label{nondec13} \end{equation} In the models discussed in Section 2, allowing for large light--heavy mixings not suppressed by see-saw relations, the loop contribution of Eq. (\ref{nondec11}) does not vanish for large heavy-neutrino masses; however, it does not have a hard non-decoupling dependence on the heavy mass scale, unlike the $Ze\mu$ vertex and the box diagrams that we will discuss in the next paragraphs.
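The behaviour of $\phi$ and the inversion of the experimental limit can be verified numerically; the value $\phi(1)=1/4$ at the removable singularity follows from a series expansion. This sketch is ours, with $\alpha\simeq1/137$:

```python
import math

alpha = 1 / 137.036  # fine-structure constant

def phi(x):
    """Loop function of Eq. (nondec12); rises from 0 to 1 as x goes to infinity."""
    if abs(x - 1) < 1e-6:  # removable singularity at x = 1, where phi = 1/4
        return 0.25
    return x * (1 - 6*x + 3*x**2 + 2*x**3 - 6*x**2 * math.log(x)) / (2 * (1 - x)**4)

def branching_mu_e_gamma(GGdag_emu, x):
    """B(mu -> e gamma) of Eq. (nondec11) for a single dominant heavy state."""
    return 3 * alpha / (8 * math.pi) * (GGdag_emu * phi(x))**2

# Inverting the limit B < 4.9e-11 for M_i >> M_W (phi -> 1):
B_exp = 4.9e-11
bound = math.sqrt(B_exp * 8 * math.pi / (3 * alpha))
print(bound)  # ~0.24e-3, reproducing Eq. (nondec13)
```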
\bigskip \noindent {\it 4.2 $Ze\mu$ vertex } Let us consider now the FC $Z\bar e\mu$ current, parameterized in the form \begin{equation} J^\mu_{Z\bar e\mu}= g \bar e \gamma^\mu(k_LP_L+k_RP_R)\mu= {g\over2}\bar e \gamma^\mu(k_V-k_A\gamma^5)\mu, \label{nondec14} \end{equation} where $g=(4\sqrt2 G_F M_Z^2)^{1/2}$ is the weak coupling constant to the $Z$ boson, and $P_{R,L}=(1\pm\gamma_5)/2$. The leading contribution from heavy neutrinos ${\cal N}_i$ of mass $M_{i}\gg M_W$ arises from the (convergent) loop diagram in Fig. 1 involving the exchange of {\it longitudinal} $W$ bosons (more precisely, of the would-be Goldstone bosons in the Feynman gauge). Notice that, since we are interested only in the quadratic term in the mass of the heavy states, it is consistent to ignore all the other loop contributions to the vertex, since their leading quadratic terms sum to zero when $M_i\gg M_W$. We also remark that the corrections to our approximation, which would become the main contribution for $M_i\lessim100$ GeV, would give a phenomenologically small result due to the constraint coming from $\mu\to e\gamma$, Eq. (\ref{nondec11}), which would be dominant for such relatively light new states. A more formal justification for considering only the graphs in Fig. 1 can be given in the effective lagrangian approach \cite{efflagr}. Neglecting the external momenta, and for $M_i\gg M_W$, the contribution in Fig. 1 reads \begin{equation} k_L=k_V=k_A=-{ g^2 \over 128 \pi^2 M_Z^2} {\cal F}_{e\mu}, \label{nondec15} \end{equation} where the dependence on the new-physics parameters is given by the factor \begin{equation} {\cal F}_{e\mu}\equiv \sum_{i,j=heavy} S_{ij} M_i M_j f(M_i,M_j), \qquad f(M_i,M_j) = {M_i M_j \ln (M_i^2/M_j^2) \over M_i^2-M_j^2}.
\label{nondec16} \end{equation} The dependence on the light--heavy mixing angles is contained in the term \begin{equation} S_{ij}\equiv G^*_{\mu i}G_{ej} [(G^\dagger G)_{ji} + 2 t_3^{N} (H^\dagger H)_{ji}] = G^*_{\mu i}G_{ej} [(G^\dagger G)_{ji}(1 - 2 t_3^{N}) + 2 t_3^{N}\delta_{ji}] ,\label{nondec17} \end{equation} where $t_3^{N}$ is the weak isospin of the (left-handed) $N$ field. The dependence is quadratic in the light--heavy mixing matrix $G$, unless the new neutrinos $N$ are weak isosinglets, in which case it is quartic. The best limit on the FC current $J^\mu_{Z\bar e\mu}$ arises from the search for $\mu$--$e$ conversion in nuclei \cite{mue-exp,mue-new}. For general FC couplings $k_V$ and $k_A$ and for nuclei with mass number $A\lessim100$, the branching ratio normalized to the total nuclear muon capture rate is \cite{mueconv} \begin{equation} R\simeq {G_F^2\alpha^3\over\pi^2} m_\mu^3p_eE_e {Z_{eff}^4\over Z} \vert F(q)\vert^2{1\over \Gamma_{capture}} (k_V^2+k_A^2)Q_W^2 , \label{nondec18} \end{equation} where $p_e$ ($E_e$) is the electron momentum (energy), $E_e\simeq p_e\simeq m_\mu$ for this process, and $F(q)$ is the nuclear form factor, as measured for example from electron scattering \cite{escatt}. Here $Q_W = (2Z+N)v_u + (Z+2N)v_d$ is the coherent nuclear charge associated with the vector current of the nucleon, as a function of the quark couplings to the $Z$ boson and of the nuclear charge ($Z$) and mass ($A=Z+N$) numbers, and $Z_{eff}$ has been determined in the literature \cite{zeff}. For $\Gamma_{capture}$ in $^{48}_{22}$Ti we will use the experimental determinations $\Gamma_{capture}\simeq (2.590\pm0.012)\times10^6 {\rm s}^{-1}$ \cite{mue-exp}, $F(q^2\simeq -m_\mu^2)\simeq0.54$ and $Z_{eff}\simeq17.6$.
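Plugging the numbers above into Eq. (\ref{nondec18}), with $s_w^2\simeq0.23$ and standard values of $G_F$, $\alpha$ and $m_\mu$ (our assumed inputs for this illustrative sketch), one can check the coefficient relating $R$ to $k_V^2+k_A^2$ for titanium:

```python
import math

G_F, alpha, m_mu = 1.16637e-5, 1 / 137.036, 0.10566  # GeV units
hbar = 6.5821e-25   # GeV s, converts the capture rate to GeV
sw2 = 0.23
v_u, v_d = 0.5 - 4 * sw2 / 3, -0.5 + 2 * sw2 / 3     # quark couplings to the Z

Z, N = 22, 26                                        # titanium-48
Q_W = (2 * Z + N) * v_u + (Z + 2 * N) * v_d
F_q, Z_eff = 0.54, 17.6
Gamma_capture = 2.590e6 * hbar                       # in GeV

# R = C * (k_V^2 + k_A^2) from Eq. (nondec18), with p_e ~ E_e ~ m_mu:
C = (G_F**2 * alpha**3 / math.pi**2) * m_mu**5 * (Z_eff**4 / Z) \
    * F_q**2 * Q_W**2 / Gamma_capture
print(4e-12 / C)  # ~5.2e-13: limit on k_V^2 + k_A^2 for R < 4e-12
```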
The resulting limit for the FC couplings is then \cite{mueconv} \begin{equation} (k_V^2+k_A^2) < 5.2\times10^{-13} \left(B\over 4\times10^{-12}\right), \label{nondec19} \end{equation} where $B$ is the value of the experimental bound on $R$, $B=4\times10^{-12}$ at present \cite{mue-exp}. Comparing with the prediction of our model, Eqs. (\ref{nondec15})--(\ref{nondec17}), we find that the non-observation of $\mu$--$e$ conversion in nuclei results in the bound \begin{equation} {\vert {\cal F}_{e\mu}\vert \over({\rm 100 GeV})^2}\lessim0.97\times10^{-3} \left(B\over 4\times10^{-12}\right)^{1/2}. \label{nondec20} \end{equation} In order to find out the impact of this constraint, we have to specify the value of the weak isospin of the new states involved in the mixing. Let us consider first the case i) of Section 2, when the new states $N$ are {\it ordinary}, that is $t_3^{N}=1/2$. In this case, a substantial light--heavy mixing can be expected, since $M_{\nu N'}$ and $M_{N N'}$ can be generated both by the VEVs of SU(2)-doublet Higgs fields (we are assuming that $N'$ are singlets). Then $S_{ij}=G_{\mu i}^*G_{e i} \delta_{ij}$, and only the diagonal terms contribute in Eq. (\ref{nondec17}). It is easy to show that $f(M,M)=1$, so that Eq. (\ref{nondec16}) simplifies to \begin{equation} {\cal F}_{e\mu}=(G M^2 G^\dagger)_{e\mu}= (M_{\nu N'}M^\dagger_{\nu N'})_{e\mu}, \label{nondec21} \end{equation} where $M$ is the diagonal Dirac mass matrix for the heavy states appearing in Eq. (\ref{nondec3}), and we have used Eqs. (\ref{nondec2})--(\ref{nondec4}). Since $M_{\nu N'}$ and $M_{N N'}$ arise from the breaking of the SU(2) symmetry, their entries are expected to be generated at the electroweak scale. In particular, the heavy masses should be $M_i\lessim1$ TeV, assuming perturbation theory not to be spoiled. We see explicitly in this case that the non-decoupling behaviour of the $Ze\mu$ vertex is due to the breaking of the SU(2) symmetry.
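Since $g^2=4\sqrt2 G_F M_Z^2$, the $Z$ mass cancels in Eq. (\ref{nondec15}), and the translation from the coupling limit of Eq. (\ref{nondec19}) to the bound of Eq. (\ref{nondec20}) is a one-line computation; a small sketch of ours:

```python
import math

G_F = 1.166e-5  # Fermi constant in GeV^-2

def kV_of_F(F_emu):
    """|k_V| from Eq. (nondec15) with g^2 = 4 sqrt(2) G_F M_Z^2 (M_Z cancels);
    F_emu in GeV^2."""
    return math.sqrt(2) * G_F / (32 * math.pi**2) * F_emu

# Invert (k_V^2 + k_A^2) < 5.2e-13 with k_V = k_A:
k_max = math.sqrt(5.2e-13 / 2)
F_max = k_max * 32 * math.pi**2 / (math.sqrt(2) * G_F)
print(F_max / 100**2)  # ~0.97e-3, reproducing Eq. (nondec20)
```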
On the other hand, since ${\cal F}_{e\mu}=(M_{\nu N'}M^\dagger_{\nu N'})_{e\mu}$ is naturally expected to be $\sim (100$ GeV$)^2$, we see that Eq. (\ref{nondec20}) indeed represents a strong constraint on the model, like the bounds on the mixing matrix $GG^\dagger$ discussed in the previous section and the limit from $\mu\to e \gamma$ of the previous paragraph. To compare these different bounds, let us assume for simplicity that the mass differences amongst the heavy states are smaller than their common scale $M$, namely $\vert M_i^2-M_j^2\vert/(M_i^2+M_j^2)\ll1$. The allowed region in the plane of the LFV mixing parameter $S\equiv (GG^\dagger)_{e\mu}^{1/2}$ and the heavy mass scale $M$ is then shown in Fig. 2. For $200$ GeV$\ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ M$, the constraint on $S$ from $\mu$--$e$ conversion in nuclei, given by Eq. (\ref{nondec20}) and represented by the full line in Fig. 2\footnote{For $M\to 100$ GeV, towards the left margin of the figure, the non-decoupling contribution to the $Ze\mu$ vertex is not more important than the others we have neglected \cite{efflagr}, but in this regime the bound from $\mu\to e \gamma$ dominates \cite{lim-inami,concha-valle}.}, is more stringent than the bound from $\mu\to e\gamma$ (dashed line), resulting from Eqs. (\ref{nondec11}), (\ref{nondec12}) and Ref. \cite{muegamma-exp}. If the new states are lighter than $\sim200$ GeV, the constraint from $\mu\to e \gamma$ is the most stringent one. The indirect limit from Table 1, $S=(GG^\dagger)_{e\mu}^{1/2} <\sqrt{0.004}=0.063$, is much worse and would be represented by a horizontal line above the figure. Let us now consider the case ii) of Section 2, in which the new states $N$ mixing with the light neutrinos are singlets under SU(2). Then from Eq. (\ref{nondec17}) we see that the mixing factor depends on the fourth power of the light--heavy mixings. One could expect that in this case no significant constraint can be obtained from Eq.
(\ref{nondec20}); however the mass eigenvalues $M_i$ are no longer limited by 1 TeV. In fact, since the states $N'$ are also assumed to be isosinglets, the entries $M_{NN'}$ in Eq. (\ref{nondec2}) are not related to the electroweak scale and can be expected to be generated at a higher scale. To have an idea of the impact of the constraint of Eq. (\ref{nondec20}) in this case, let us assume again that the mass differences amongst the heavy states are smaller than their common scale $M$. In this case, $f(M_i,M_j)\simeq1$, and using the identity $MG^\dagger=K^TM^T_{\nu N'}$ (which can be deduced from Eqs. (\ref{nondec3}) and (\ref{nondec4})), we find that ${\cal F}_{e\mu}\simeq (GMG^\dagger)^2_{e\mu}= (G_{ej}G^\dagger_{i\mu })(K^\dagger M_{\nu N'}^\dagger M_{\nu N'}K)_{ij}$. Since $K$ is unitary, we can expect that $\vert (K^\dagger M_{\nu N'}^\dagger M_{\nu N'}K)_{ij}\vert\sim \bar M_{\nu N'}^2$, where $\bar M_{\nu N'}$ is an average scale for the entries of the SU(2)-breaking mass matrix $M_{\nu N'}$, which is generated at the electroweak scale. Again, we see explicitly that the non-decoupling behaviour is due to the spontaneous breaking of the SU(2) symmetry. A more drastic approximation, ${\cal F}_{e\mu}\sim(GG^\dagger)_{e\mu} \bar M_{\nu N'}^2$, gives an expression that is similar to Eq. (\ref{nondec21}), corresponding to the mixing with doublet neutrinos. In fact, $\bar M_{\nu N'}\lessim1$ TeV since it breaks SU(2), so that the considerations given in the previous case of doublet new neutrinos can be repeated here. The constraint from $\mu$--$e$ conversion in nuclei can be represented again by the full line in Fig. 2, after the substitution $M\to \bar M_{\nu N'}$. In particular, the constraint of Eq. (\ref{nondec20}) is more stringent than the constraint from $\mu\to e\gamma$ if $\bar M_{\nu N'}\gtrsim200$ GeV. 
If $\bar M_{\nu N'}$ and $\bar M_{N' N'}\simeq M$ are the typical orders of magnitude of the entries of the matrices $ M_{\nu N'}$ and $M_{N' N'}$, from Eq. (\ref{nondec4b}) the typical value of the mixing angles is \begin{equation} S\equiv(GG^\dagger)_{e\mu}^{1/2} \sim \bar M_{\nu N'}/M, \label{nondec23} \end{equation} which is of course similar to the see-saw formula \cite{see-saw,cheng-li}, though no longer related to the ratio of the physical mass eigenvalues. Then the range $200$ GeV $\ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ \bar M_{\nu N'}\ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ 1$ TeV and the bound of Eq. (\ref{nondec20}) correspond to a heavy scale $M \ \rlap{\raise 2pt\hbox{$>$}}{\lower 2pt \hbox{$\sim$}}\ (10-300)$ TeV. This last constraint is not problematic, since in the present case we are considering singlet new states which can originate from SU(2)-invariant VEVs. In fact, the model {\it predicts} observable LFV effects if the latter inequality is (almost) an equality, corresponding to the existence of an intermediate scale $\ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ 300$ TeV for the heavy states. However, for the mixing with singlet neutrinos this assumption on the heavy scale would be somewhat arbitrary (unless a justification is given by fully specifying the model), so that in general significant LFV effects are not necessarily {\it predicted} in this case, contrary to the case of the mixing with new {\it ordinary} states that we have considered above. Moreover, Eq. (\ref{nondec23}) allows us to express the dependence of the leading contribution to the $\mu$--$e$ vertex in terms of the mass parameters alone. In fact, for the mixing with singlet neutrinos we get ${\cal F}_{e\mu}\sim (\bar M_{\nu N'}^2/M)^2$, which becomes vanishingly small when the invariant mass term $M\to \infty$, since $\bar M_{\nu N'}\ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ 1$ TeV.
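The correspondence between the SU(2)-breaking scale $\bar M_{\nu N'}$ and the lower limit on the singlet scale $M$ can be made explicit; this is a rough sketch of ours, combining ${\cal F}_{e\mu}\sim(\bar M_{\nu N'}^2/M)^2$ with the bound of Eq. (\ref{nondec20}):

```python
import math

# mu-e conversion bound: |F_emu| < 0.97e-3 * (100 GeV)^2
F_max = 0.97e-3 * 100**2  # GeV^2

def M_min(Mbar):
    """Lower limit on the heavy singlet scale, for F_emu ~ (Mbar^2 / M)^2;
    Mbar and the result are in GeV."""
    return Mbar**2 / math.sqrt(F_max)

print(M_min(200) / 1e3, M_min(1000) / 1e3)  # roughly 13 and 320 TeV
```

This reproduces the order of magnitude of the $(10$--$300)$ TeV range quoted above for $200$ GeV $\lesssim \bar M_{\nu N'}\lesssim 1$ TeV.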
This can be considered as a generalization of the decoupling theorem \cite{decoupling,cheng-li} to the case of the mixing with singlet neutrinos $N$ in the class of models characterized by Eq. (\ref{nondec2}). \bigskip \noindent {\it 4.3 $\mu\to ee^+e^-$ } The leading contribution from heavy neutrinos ${\cal N}_i$ of mass $M_{i}\gg M_W$ arises from the (convergent) loop diagrams in Fig. 3 involving the exchange of {\it longitudinal} $W$ bosons (more precisely of the would-be Goldstone bosons in the Feynman gauge). Again, since we are interested only in the coefficient of the quadratic term in the masses $M_i$ of the heavy states, it is consistent to ignore all the other loop contributions to the process \cite{lim-inami,cheng-li,concha-valle}. Neglecting the external momenta, and for $M_i\gg M_W$, we obtain for the branching ratio, compared to the main channel $\mu\to e\nu\bar \nu$: \begin{equation} {B(\mu\to e e^-e^+)\over B(\mu\to e \nu\bar\nu)}= 8 \left( g^2 \over 16^2 \pi^2 M_Z^2 \right)^2 \left(\vert {\cal B}_{e\mu} - 2 \epsilon_L {\cal F}_{e\mu} \vert^2 + {1\over2} \vert 2 \epsilon_R {\cal F}_{e\mu} \vert^2 \right), \label{nondec24} \end{equation} where $\epsilon_L=-1/2+s_w^2\simeq-0.27$ and $\epsilon_R=s_w^2\simeq0.23$ are the SM left- and right- handed current couplings of the electron to the $Z$. The mixing factor entering the box diagram is \begin{equation} {\cal B}_{e\mu}\equiv \sum_{i,j=heavy} G^*_{\mu i}G_{ei} G^*_{e j}G_{ej} M_i M_j f(M_i,M_j), \label{nondec25} \end{equation} while ${\cal F}_{e\mu}$ and the loop integral $f(M_i,M_j)$, entering also in the $Ze\mu$ vertex, are given by Eqs. (\ref{nondec16}) and (\ref{nondec17}). The 90\% C.L. 
experimental bound, $B(\mu\to e e^-e^+)<1.0 \times 10^{-12}$ \cite{mueee-exp}, then results in the limit \begin{equation} {\left(\vert {\cal B}_{e\mu} + 0.54 {\cal F}_{e\mu} \vert^2 + {1\over2} \vert 0.46 {\cal F}_{e\mu} \vert^2 \right)^{1/2} \over (100 \, {\rm GeV})^2} < 1.4\times 10^{-3} \left( B\over 10^{-12}\right)^{1/2} .\label{nondec26} \end{equation} This constraint is complementary to the limits from $\mu\to e\gamma$ and from $\mu$--$e$ conversion in nuclei, Eqs. (\ref{nondec13}) and (\ref{nondec20}), since it constrains a different combination of the mixing parameters, namely ${\cal B}_{e\mu}$. As a general result, we see that the contribution to the amplitude for $\mu\to e e^+e^-$ presents a leading quadratic dependence on the heavy mass scale, similar to that of the $Ze\mu$ vertex. In the case i) of section 2, when the mixing is with new ordinary neutrinos, the vertex contribution of Fig. 3.a depends quadratically on the light--heavy mixing angles, so that we can neglect the box diagram 3.b, which depends on the fourth power of the mixings. Then the constraint of Eq. (\ref{nondec26}) becomes \begin{equation} {\vert {\cal F}_{e\mu}\vert \over({\rm 100\, GeV})^2} \lessim2.3\times 10^{-3} \left(B\over 10^{-12}\right)^{1/2}. \label{nondec28} \end{equation} We find that in this case the limit of Eq. (\ref{nondec20}) from $\mu$--$e$ conversion in nuclei is stronger by a factor $\sim2$, as could be expected from Ref. \cite{mueconv}. For this reason, the limit from $\mu\to e e^+e^-$ can be important only in the case ii) of section 2, when the new states $N$ involved in the mixing are singlets, since in the opposite case the contribution to the $Ze\mu$ vertex is quadratic in the mixing angles and the bound from $\mu$--$e$ conversion in nuclei results in a stronger constraint. When the mixing is with new singlets $N$, the two general constraints, Eqs. (\ref{nondec20}) and (\ref{nondec26}), appear to be of similar strength.
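The numerical prefactor of Eq. (\ref{nondec26}) can be cross-checked by inverting Eq. (\ref{nondec24}); this sketch of ours neglects the small $\epsilon_R$ piece:

```python
import math

G_F = 1.166e-5                   # Fermi constant in GeV^-2
sw2 = 0.23
eps_L, eps_R = -0.5 + sw2, sw2   # ~ -0.27 and 0.23

# Prefactor of Eq. (nondec24); g^2 = 4 sqrt(2) G_F M_Z^2, so M_Z cancels:
pref = 8 * (math.sqrt(2) * G_F / (64 * math.pi**2))**2  # GeV^-4

# Invert B(mu -> e e+ e-) < 1.0e-12 for |B_emu - 2 eps_L F_emu|:
comb_max = math.sqrt(1.0e-12 / pref)
print(comb_max / 100**2)  # ~1.4e-3, consistent with Eq. (nondec26)
```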
For a more quantitative comparison, let us consider again a particular case, when the mass differences amongst the heavy states are smaller than their common scale, so that $f(M_i,M_j)\simeq1$. In this case, we have ${\cal B}_{e\mu}\simeq (GMG^\dagger)_{ee}(GMG^\dagger)_{e\mu}$, while ${\cal F}_{e\mu}\simeq (GMG^\dagger)^2_{e\mu}= (GMG^\dagger)_{ee}(GMG^\dagger)_{e\mu}+ (GMG^\dagger)_{e\mu}(GMG^\dagger)_{\mu\mu}+ (GMG^\dagger)_{e\tau}(GMG^\dagger)_{\tau\mu}$. If $(GMG^\dagger)_{ee}(GMG^\dagger)_{e\mu}$ is the largest contribution to ${\cal F}_{e\mu}$, then ${\cal F}_{e\mu}\simeq{\cal B}_{e\mu}\simeq (GMG^\dagger)_{ee}(GMG^\dagger)_{e\mu}$, and the constraint of Eq. (\ref{nondec26}) becomes \begin{equation} {\vert {\cal F}_{e\mu}\vert \over({\rm 100\, GeV})^2} \lessim0.93\times 10^{-3} \left(B\over 10^{-12}\right)^{1/2}, \label{nondec27} \end{equation} which is as stringent as the bound from $\mu$--$e$ conversion in nuclei, Eq. (\ref{nondec20}). On the other hand, if $(GMG^\dagger)_{ee}(GMG^\dagger)_{e\mu}$ is not the main part of ${\cal F}_{e\mu}$, e.g. $(GMG^\dagger)_{ee}(GMG^\dagger)_{e\mu}< (GMG^\dagger)_{e\tau}(GMG^\dagger)_{\tau\mu}$, then ${\cal F}_{e\mu}\ \rlap{\raise 2pt\hbox{$>$}}{\lower 2pt \hbox{$\sim$}}\ {\cal B}_{e\mu}$. If ${\cal B}_{e\mu}$ can be neglected, the rate for $\mu\to ee^+e^-$ is given mainly by the $Ze\mu$ vertex contribution, and the constraint of Eq. (\ref{nondec26}) is given again by Eq. (\ref{nondec28}) and is less important by a factor $\sim2$ than the limit of Eq. (\ref{nondec20}) from $\mu$--$e$ conversion in nuclei. \section{Conclusions} We have considered a class of models predicting new heavy neutral fermionic states, whose mixing with the light neutrinos can be naturally significant. In contrast with the see-saw models, the known neutrino masses are predicted to vanish due to a symmetry, such as lepton number.
Possible non-vanishing masses for the light neutrinos could then be attributed to small violations of such a symmetry. We have then reviewed the bounds on the flavour-diagonal light--heavy mixing parameters, arising mainly from the constraints on Charged Current Universality and the LEP data, and we have updated and collected in Table 1 the indirect limits on the flavour non-diagonal mixing parameters. In spite of these stringent constraints, the one-loop-induced rare processes due to the exchange of virtual heavy neutrinos, involving the violation of the muon and electron lepton numbers, which are tested with impressive experimental precision, have been shown to be potentially significant. In particular, the $Ze\mu$ vertex, constrained by the non-observation of $\mu$--$e$ conversion in nuclei, and the amplitude for $\mu\to ee^+e^-$, show a non-decoupling quadratic dependence on the heavy neutrino mass $M$, while $\mu\to e\gamma$ is almost independent of the heavy mass above the electroweak scale. These three processes are then used to set constraints on the LFV parameters entering the corresponding loop diagrams, which turn out to be stronger than the indirect limits of Table 1. If the mass scale $M$ of the heavy states is in the range $M_Z\ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ M\ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ 200$ GeV, the best constraint on the light--heavy LFV mixing comes from $\mu\to e\gamma$. If $M\gtrsim200$ GeV, and the heavy neutrinos involved in the mixing are not singlets under SU(2), the best constraint is given in most cases by the present data on the search for $\mu$--$e$ conversion in nuclei, and the planned experiments looking for this process have an opportunity to find signals from this kind of physics. In fact, we have pointed out that in this case a significant rate is naturally {\it expected} in the class of models considered here.
In contrast, if the heavy neutrinos mixing with the light states are singlets under SU(2), then the leading contribution to the $Ze\mu$ vertex is suppressed by two more powers of the light--heavy mixing angles, and the constraint from $\mu$--$e$ conversion in nuclei is comparable to that from $\mu\to ee^+e^-$. These two bounds turn out to be important in a region of the parameter space corresponding to SU(2)-breaking mass entries in the range $200\, {\rm GeV}\ \rlap{\raise 2pt\hbox{$<$}}{\lower 2pt \hbox{$\sim$}}\ M_{\nu N'}\lessim1\, {\rm TeV}$. In contrast to the model with non-singlet heavy states involved in the mixing, in this latter case of the mixing with isosinglet neutrinos no general prediction can be made on the heavy mass scale, and our constraints are significant only if it lies at an `intermediate' scale $M\lessim300$ TeV. Moreover, in this particular case the prediction for the LFV observables decreases for increasing values of the heavy mass scale $M\to\infty$, resulting in a generalization of the decoupling theorem to this class of `non-see-saw' models. In all the cases considered, we have explicitly discussed how the non-decoupling behaviour is strictly related to the spontaneous breaking of the SU(2) symmetry. This result could be expected, since the Appelquist--Carazzone theorem \cite{decoupling} applies to the unbroken gauge theory. \vskip 1.truecm \begin{center} {\bf ACKNOWLEDGMENTS} \end{center} We are grateful to A. Santamaria, M.C. Gonzalez-Garcia, E. Nardi, S. Peris, N. Rius, E. Roulet and J. Peltoniemi for several very useful discussions and for critically reading the preliminary versions of the paper. \newpage
\section{Introduction} The class of linear quasi-cyclic codes over finite fields is a generalization of cyclic codes and is known to be asymptotically good (see, e.g., Chen--Peterson--Weldon~\cite{chen_results_1969}). Many of the best known linear codes belong to this class (see, e.g., Gulliver--Bhargava~\cite{gulliver_best_1991} and Chen's database~\cite{chen_database_2014}). Several good LDPC codes are quasi-cyclic and the connection to convolutional codes was investigated, among others, in~\cite{solomon_connection_1979,esmaeili_link_1998,lally_algebraic_2006}. The algebraic structure of quasi-cyclic codes was exploited in various ways (see, e.g., Lally--Fitzpatrick~\cite{lally_algebraic_2001}, Ling--Solé~\cite{ling_algebraic_2001, ling_algebraic_2003, ling_algebraic_2005}, Barbier~\textit{et al.}~\cite{barbier_quasi-cyclic_2012,barbier_decoding_2013}), but the resulting estimates of the minimum distance are often far from the true minimum distance, and hence so is the guaranteed decoding radius. Recently, Semenov and Trifonov~\cite{semenov_spectral_2012} developed a spectral analysis of quasi-cyclic codes based on the work of Lally and Fitzpatrick~\cite{lally_construction_1999, lally_algebraic_2001} and formulated a BCH-like lower bound on the minimum distance of quasi-cyclic codes. We generalize the Semenov--Trifonov~\cite{semenov_spectral_2012} bound on the minimum distance of quasi-cyclic codes. Our new approach is similar to the Hartmann--Tzeng (HT,~\cite{hartmann_decoding_1972, hartmann_generalizations_1972}) bound, which generalizes the BCH~\cite{bose_class_1960,hocquenghem_codes_1959} bound for cyclic codes. Moreover, we present a quadratic-time syndrome-based algebraic decoding algorithm that decodes up to the new bound, and we show that it is advantageous in the case of burst errors. This paper is organized as follows.
In Section~\ref{sec_Preliminaries}, we recall the Gröbner basis representation of quasi-cyclic codes of Lally--Fitzpatrick~\cite{lally_construction_1999,lally_algebraic_2001} and the definitions of the spectral method of Semenov--Trifonov~\cite{semenov_spectral_2012}. The new HT-like bound on the minimum distance is formulated and proven in Section~\ref{sec_HartmannTzeng}. Section~\ref{sec_Decoding} describes a syndrome-based decoding algorithm up to our bound and shows that in the case of burst errors more symbol errors can be corrected. We draw some conclusions in Section~\ref{sec_conclusion}. \section{Preliminaries} \label{sec_Preliminaries} \subsection{Reduced Gröbner Basis} Let $\F{q}$ denote the finite field of order $q$ and $\Fxsub{q}$ the polynomial ring over $\F{q}$ with indeterminate $X$. Let $z$ be a positive integer and denote by $\interval{z}$ the set of integers $\{0,1,\dots,z-1\}$. A vector of length $n$ is denoted by a lowercase bold letter as $\vec{v} = (v_0 \, v_1 \, \dots \, v_{n-1})$ and $\vec{v} \circ \vec{w}$ denotes the scalar product $\sum_{i=0}^{n-1} v_i w_i$ of two vectors $\vec{v}, \vec{w}$ of length $n$. An $m \times n$ matrix is denoted by a capital bold letter as $\M{M}=(m_{i,j})_{i \in \interval{m}}^{j \in \interval{n}}$. 
A linear \LINQCC{\ensuremath{m}}{\ensuremath{\ell}}{\QCCk}{\QCCd}{q} code $\ensuremath{\mathcal{C}}$ of length $\ensuremath{m} \ensuremath{\ell}$, dimension $\QCCk$ and minimum Hamming distance $\QCCd$ over $\F{q}$ is $\ensuremath{\ell}$-quasi-cyclic if every cyclic shift by $\ensuremath{\ell}$ of a codeword is again a codeword of $\ensuremath{\mathcal{C}}$, more explicitly if: \begin{align*} &(c_{0,0} \dots c_{\ensuremath{\ell}-1,0} \ c_{0,1} \dots c_{\ensuremath{\ell}-1,1} \ \dots \ c_{\ensuremath{\ell}-1,\ensuremath{m}-1}) \in \ensuremath{\mathcal{C}} \Rightarrow \\ & (c_{0,\ensuremath{m}-1} \dots c_{\ensuremath{\ell}-1,\ensuremath{m}-1} \ c_{0,0} \dots c_{\ensuremath{\ell}-1,0} \ \dots c_{\ensuremath{\ell}-1,\ensuremath{m}-2}) \in \ensuremath{\mathcal{C}}. \end{align*} We can represent a codeword of an \LINQCC{\ensuremath{m}}{\ensuremath{\ell}}{\QCCk}{\QCCd}{q} $\ensuremath{\ell}$-quasi-cyclic code as $\mathbf{c}(X) = (c_0(X) \ c_1(X) \ \dots \ c_{\ensuremath{\ell}-1}(X)) \in \Fxsub{q}^{\ell} $, where \begin{equation*} c_i(X) \overset{\defi}{=} \sum_{j=0}^{\ensuremath{m}-1} c_{i,j} X^{j}, \quad \forall i \in \interval{\ensuremath{\ell}}. \end{equation*} Then, the defining property of $\ensuremath{\mathcal{C}}$ is that it is closed under componentwise multiplication by $X$ followed by reduction modulo $X^{\ensuremath{m}}-1$. Lally and Fitzpatrick~\cite{lally_construction_1999, lally_algebraic_2001} showed that this enables us to see a quasi-cyclic code as an $\Fxsub{q}/\langle X^{\ensuremath{m}}-1 \rangle$-submodule of the algebra $(\Fxsub{q}/\langle X^{\ensuremath{m}}-1 \rangle)^{\ensuremath{\ell}}$ and they proved that every quasi-cyclic code has a generating set in the form of a reduced Gröbner basis with respect to the position-over-term order in $\Fxsub{q}^{\ensuremath{\ell}}$.
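As a small illustration of the defining shift property above (a sketch of ours, with symbolic entries $c_{i,j}$):

```python
def qc_shift(c, ell):
    """Shift a codeword of an ell-quasi-cyclic code by ell positions:
    the last block of ell symbols moves to the front."""
    assert len(c) % ell == 0
    return c[-ell:] + c[:-ell]

# Example with ell = 2, m = 3 (entries written c_{i,j} as strings):
c = ['c00', 'c10', 'c01', 'c11', 'c02', 'c12']
print(qc_shift(c, 2))  # ['c02', 'c12', 'c00', 'c10', 'c01', 'c11']
```

Applying the shift $\ensuremath{m}$ times returns the original codeword, as expected.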
This basis can be represented in the form of an upper-triangular $\ell \times \ell$ matrix with entries in $\Fxsub{q}$ as follows: \begin{equation} \label{def_GroebBasisMatrix} \mathbf{\tilde{G}}(X) = \begin{pmatrix} g_{0,0}(X) & g_{0,1}(X) & \cdots & g_{0,\ensuremath{\ell}-1}(X) \\ & g_{1,1}(X) & \cdots & g_{1,\ensuremath{\ell}-1}(X) \\ \multicolumn{2}{c}{\bigzero}& \ddots & \vdots \\ & & & g_{\ensuremath{\ell}-1,\ensuremath{\ell}-1}(X) \end{pmatrix}, \end{equation} where the following conditions must be fulfilled: \begin{tabular}[htb]{lll} 1) & $g_{i,j}(X) = 0,$ & $\forall 0 \leq j < i < \ensuremath{\ell}$, \\ 2) & $\deg g_{j,i}(X) < \deg g_{i,i}(X),$ & $ \forall j < i, i \in \interval{\ensuremath{\ell}}$,\\ 3) & $g_{i,i}(X) | (X^{\ensuremath{m}}-1),$ & $\forall i \in \interval{\ensuremath{\ell}}$,\\ 4) & if $g_{i,i}(X)=X^{\ensuremath{m}}-1$ then \\ & $g_{i,j}(X)=0,$ & $ \forall j=i+1,\dots,\ensuremath{\ell}-1$. \end{tabular} A codeword of $\ensuremath{\mathcal{C}}$ can be represented as $\mathbf{c}(X) = \mathbf{a}(X) \mathbf{\tilde{G}}(X)$ and it follows that $\QCCk = \ensuremath{m} \ensuremath{\ell} - \sum_{i=0}^{\ensuremath{\ell}-1} \deg g_{i,i}(X)$. For $\ensuremath{\ell}=1$, the generator matrix $\mathbf{\tilde{G}}(X)$ becomes the well-known generator polynomial of a cyclic code of degree $\ensuremath{m}-\QCCk$. We restrict ourselves throughout this paper to the single-root case, i.e., $\gcd(\ensuremath{m}, \text{char}(\F{q}))=1$. \subsection{Spectral Analysis of Quasi-Cyclic Codes} Let $\mathbf{\tilde{G}}(X)$ be the upper-triangular generator matrix of a given \LINQCC{\ensuremath{m}}{\ensuremath{\ell}}{\QCCk}{\QCCd}{q} $\ensuremath{\ell}$-quasi-cyclic code $\ensuremath{\mathcal{C}}$ in reduced Gröbner basis form as in~\eqref{def_GroebBasisMatrix}. Let $\alpha \in \F{q^{\ensuremath{r}}}$ be an $\ensuremath{m}$-th root of unity. 
An eigenvalue $\eigenvalue{i} = \alpha^{j_i}$ of $\ensuremath{\mathcal{C}}$ is defined to be a root of $\det(\mathbf{\tilde{G}}(X))$, i.e., a root of $\prod_{i=0}^{\ensuremath{\ell}-1} g_{i,i}(X)$. The \textit{algebraic} multiplicity of $\eigenvalue{i}$ is the largest integer $ \mult{i} $ such that $(X-\eigenvalue{i})^{\mult{i}} \mid \det(\mathbf{\tilde{G}}(X)).$ Semenov and Trifonov~\cite{semenov_spectral_2012} defined the \textit{geometric} multiplicity of an eigenvalue $\eigenvalue{i}$ as the dimension of the right kernel of the matrix $\mathbf{\tilde{G}}(\eigenvalue{i})$, i.e., the dimension of the solution space of the homogeneous linear system of equations: \begin{equation} \label{eq_eigenvectors} \mathbf{\tilde{G}}(\eigenvalue{i}) \eigenvector = \textbf{0}. \end{equation} The solution space of~\eqref{eq_eigenvectors} is called the right kernel eigenspace and it is denoted by $\eigenspace[i]$. Furthermore, it was shown that, for a matrix $\mathbf{\tilde{G}}(X) \in \Fxsub{q}^{\ensuremath{\ell} \times \ensuremath{\ell}}$ in the reduced Gröbner basis representation, the algebraic multiplicity $\mult{i}$ of an eigenvalue $\eigenvalue{i}$ equals the geometric multiplicity (see~\cite[Lemma 1]{semenov_spectral_2012}). Moreover, they gave in~\cite{semenov_spectral_2012} an explicit construction of the parity-check matrix of an \LINQCC{\ensuremath{m}}{\ensuremath{\ell}}{\QCCk}{\QCCd}{q} $\ensuremath{\ell}$-quasi-cyclic code $\ensuremath{\mathcal{C}}$ and proved a BCH-like~\cite{bose_class_1960, hocquenghem_codes_1959} lower bound on $\QCCd$ using the parity-check matrix and the so-called eigencode. We generalize their approach; our proof does not explicitly require the parity-check matrix, although the eigencode is still needed. \begin{definition}[Eigencode] \label{def_eigencode} Let $\eigenspace{} \subseteq \F{q^{\ensuremath{r}}}^{\ensuremath{\ell}}$ be an eigenspace.
Define the $\LINq{\ensuremath{n^{ec}}=\ensuremath{\ell}}{\ensuremath{k^{ec}}}{\ensuremath{d^{ec}}}$ eigencode corresponding to $\eigenspace{}$ by \begin{equation} \label{def_eq_eigencode} \eigencode{\eigenspace{}} \overset{\defi}{=} \left \lbrace (c_0 \dots c_{\ensuremath{\ell}-1}) \in \F{q}^{\ensuremath{\ell}} \mid \forall \eigenvector \in \eigenspace : \sum_{i=0}^{\ensuremath{\ell}-1} \eigenvector[i] c_i = 0 \right\rbrace. \end{equation} \end{definition} If there exists $\eigenvector = (\eigenvector[0] \ \eigenvector[1] \ \dots \ \eigenvector[\ensuremath{\ell}-1]) \in \eigenspace$ such that the elements $\eigenvector[0], \eigenvector[1], \dots, \eigenvector[\ensuremath{\ell}-1] $ are linearly independent over $\F{q}$, then $\eigencode{\eigenspace{}} = \{ (0 \ 0 \ \dots \ 0) \} $ and $\ensuremath{d^{ec}}$ is infinity. To describe quasi-cyclic codes explicitly, we need to recall the following facts about cyclic codes. A $q$-cyclotomic coset $M_i$ is defined as: \begin{equation} \label{eq_cyclotomiccoset} M_i \overset{\defi}{=} \Big\{ iq^j \mod \ensuremath{m} \, \vert \, j \in \interval{a} \Big\}, \end{equation} where $a$ is the smallest positive integer such that $iq^{a} \equiv i \bmod \ensuremath{m}$. The minimal polynomial in $\Fxsub{q}$ of the element $\alpha^i \in \F{q^{\ensuremath{r}}}$ is given by $m_i(X) = \prod_{j \in M_i} (X-\alpha^j)$. \section{Improved Lower Bound} \label{sec_HartmannTzeng} In this section, we generalize the lower bound on the minimum distance of quasi-cyclic codes given in~\cite[Thm.~2]{semenov_spectral_2012} in a similar way as the Hartmann--Tzeng bound~\cite{hartmann_decoding_1972, hartmann_generalizations_1972} generalizes the BCH bound~\cite{bose_class_1960, hocquenghem_codes_1959} for cyclic codes. 
\begin{theorem}[New Lower Bound] \label{theo_HTBound} Let $\ensuremath{\mathcal{C}}$ be an \LINQCC{\ensuremath{m}}{\ensuremath{\ell}}{\QCCk}{\QCCd}{q} $\ensuremath{\ell}$-quasi-cyclic code and let $\alpha \in \F{q^{\ensuremath{r}}}$ denote an element of order $\ensuremath{m}$. Define the set \begin{align*} D \overset{\defi}{=} \big\{ \ensuremath{f}, \ensuremath{f}+\ensuremath{z},\dots, & \ensuremath{f}+(\ensuremath{\delta}-2)\ensuremath{z}, \\ \ensuremath{f}+1,\ensuremath{f}+&1+\ensuremath{z},\dots,\ensuremath{f}+ 1+(\ensuremath{\delta} -2)\ensuremath{z}, \\ \ddots & \qquad \ddots \qquad \cdots \qquad \ddots \\ \ensuremath{f} & + \ensuremath{\nu},\ensuremath{f}+\ensuremath{\nu}+\ensuremath{z} ,\dots,\ensuremath{f} +\ensuremath{\nu}+(\ensuremath{\delta}-2)\ensuremath{z} \big\}, \end{align*} for some integers $\ensuremath{f}$, $\ensuremath{\delta} > 2 $ and $\ensuremath{z} > 0$ with $\gcdab{\ensuremath{m}}{\ensuremath{z}} = 1$. Let the eigenvalues $\eigenvalue{i} = \alpha^i, \forall i \in D$, their corresponding eigenspaces $\eigenspace[i], \forall i \in D$, be given, and let their intersection be $\eigenspace \overset{\defi}{=} \bigcap_{i \in D} \eigenspace[i]$. Let $\ensuremath{d^{ec}}$ denote the distance of the eigencode $\eigencode{\eigenspace{}}$ and let $\eigenvector= (\eigenvector[0] \ \eigenvector[1] \ \dots \ \eigenvector[\ensuremath{\ell}-1]) \in \eigenspace$ be an eigenvector where $\eigenvector[0], \eigenvector[1], \dots, \eigenvector[\ensuremath{\ell}-1]$ are linearly independent over $\F{q}$. 
If \begin{align} \label{eq_HTBound} \sum_{i=0}^{\infty} \mathbf{c}(\alpha^{\ensuremath{f}+\ensuremath{z} i + j }) \circ \eigenvector X^i \equiv 0 \mod X^{\ensuremath{\delta}-1}, \; \forall j \in \interval{\ensuremath{\nu}+1}, \end{align} holds for all $\mathbf{c}(X) = \big( c_0(X) \ c_1(X) \ \dots \ c_{\ensuremath{\ell}-1}(X) \big) \in \ensuremath{\mathcal{C}}$, then, $d \geq \ensuremath{d^{\ast}} \overset{\defi}{=} \min(\ensuremath{\delta}+\ensuremath{\nu}, \ensuremath{d^{ec}})$. \end{theorem} \begin{IEEEproof} Let $c_i(X) = \sum_{j \in \supp[i]}c_{i,j}X^j, \forall i \in \interval{\ensuremath{\ell}}$, where $c_{i,j} \in \F{q}$. We can write the LHS of~\eqref{eq_HTBound} more explicitly: \begin{align} \label{eq_LHSb} & \sum_{i=0}^{\infty} \Bigg(\sum_{t=0}^{\ensuremath{\ell}-1} c_{t}(\alpha^{\ensuremath{f}+\ensuremath{z} i + j }) \eigenvector[t] \Bigg) X^i \equiv 0 \bmod X^{\ensuremath{\delta}-1}, \forall j \in \interval{\ensuremath{\nu}+1}. \end{align} Now, define: \begin{align} \label{eq_UnionOfWeights} \supp & = \{i_0,i_1,\dots,i_{y-1} \} \overset{\defi}{=} \bigcup_{i=0}^{\ensuremath{\ell}-1} \supp[i] \subseteq \interval{\ensuremath{m}}. \end{align} We obtain from~\eqref{eq_LHSb} with~\eqref{eq_UnionOfWeights} : \begin{align} \label{eq_ExplCodewords} \sum_{i=0}^{\infty} \Bigg( \sum_{s \in \supp} \Bigg( \sum_{t=0}^{\ensuremath{\ell}-1} c_{t,s} \eigenvector[t] \Bigg) & \alpha^{(\ensuremath{f}+\ensuremath{z} i + j)s } \Bigg) X^i \nonumber \\ & \equiv 0 \bmod X^{\ensuremath{\delta}-1}, \; \forall j \in \interval{\ensuremath{\nu}+1}. \end{align} We define $\ensuremath{m}$ elements in $\F{q^{\ensuremath{r}}}$ as follows: \begin{equation} \label{eq_BigElement} C_s \overset{\defi}{=} \sum_{t=0}^{\ensuremath{\ell}-1} c_{t,s} \eigenvector[t], \quad \forall s \in \interval{\ensuremath{m}}. 
\end{equation} With~\eqref{eq_BigElement}, we can simplify~\eqref{eq_ExplCodewords} to \begin{align} \label{eq_ExplCodewordsLarge} \sum_{i=0}^{\infty} \Bigg( \sum_{s \in \supp} C_s & \alpha^{(\ensuremath{f}+\ensuremath{z} i + j)s } \Bigg) X^i \equiv 0 \bmod X^{\ensuremath{\delta}-1}, \forall j \in \interval{\ensuremath{\nu}+1}. \end{align} We linearly combine the $\ensuremath{\nu}+1$ sequences of~\eqref{eq_ExplCodewordsLarge}, multiply each of them by an element $\omega_j \in \F{q^{\ensuremath{r}}} \backslash \{0\}$ and obtain: \begin{equation} \label{eq_LinerCombineda} \sum_{j = 0}^{\ensuremath{\nu}} \omega_j \sum_{i=0}^{\infty} \Bigg( \sum_{s \in \supp} C_{s}\alpha^{(\ensuremath{f}+\ensuremath{z} i+j)s} \Bigg) X^i \equiv 0 \bmod X^{\ensuremath{\delta}-1}. \end{equation} Interchanging the sums in~\eqref{eq_LinerCombineda} leads to: \begin{equation} \label{eq_ExpressionBeforeVandermonde} \sum_{i=0}^{\infty} \sum_{s \in \supp} \Big( C_{s}\alpha^{(\ensuremath{f}+\ensuremath{z} i)s} \sum_{j = 0}^{\ensuremath{\nu}} \omega_j \alpha^{j s} \Big) X^i \equiv 0 \bmod X^{\ensuremath{\delta}-1}. \end{equation} We choose $\omega_0,\omega_1, \dots, \omega_{\ensuremath{\nu}}$ such that the first $\ensuremath{\nu}$ terms with coefficients $C_{i_0}, C_{i_1}, \dots, C_{i_{\ensuremath{\nu}-1}}$ are annihilated. 
We obtain the following linear $(\ensuremath{\nu}+1) \times (\ensuremath{\nu}+1)$ system of equations: \begin{align} \label{eq_SystemForCoefficients} \begin{pmatrix} 1 & \alpha^{i_0} & \alpha^{i_0 2} & \cdots & \alpha^{i_0 \ensuremath{\nu}} \\ 1 & \alpha^{i_1} & \alpha^{i_1 2} & \cdots & \alpha^{i_1 \ensuremath{\nu} } \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & \alpha^{i_{\ensuremath{\nu}}} & \alpha^{i_{\ensuremath{\nu}} 2} & \cdots & \alpha^{i_{\ensuremath{\nu}} \ensuremath{\nu}} \\ \end{pmatrix} \begin{pmatrix} \omega_0 \\ \omega_1 \\ \vdots \\ \omega_{\ensuremath{\nu}} \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}, \end{align} whose coefficient matrix has Vandermonde structure and is therefore invertible, so the solution is unique. Let $\tilde{\mathcal{Y}} \overset{\defi}{=} \mathcal{Y} \setminus \{ i_0,i_1,\dots, i_{\ensuremath{\nu}-1}\}$. Then we can rewrite~\eqref{eq_ExpressionBeforeVandermonde}: \begin{equation} \label{eq_BeforeGeometric} \sum_{i=0}^{\infty} \sum_{s \in \tilde{\mathcal{Y}}} \Big( C_{s} \alpha^{(\ensuremath{f}+\ensuremath{z} i)s} \sum_{j = 0}^{\ensuremath{\nu}} \omega_j \alpha^{j s} \Big) X^{i} \equiv 0 \bmod X^{\ensuremath{\delta}-1}.
\end{equation} With the geometric series we get from~\eqref{eq_BeforeGeometric}: \begin{align*} \sum_{s \in \tilde{\mathcal{Y}}} \frac{C_{s} \alpha^{s \ensuremath{f}} \Big(\sum\limits_{j = 0}^{\ensuremath{\nu}} \omega_j \alpha^{j s} \Big) }{1-\alpha^{\ensuremath{z} s} X} & \equiv 0 \mod X^{\ensuremath{\delta}-1}, \end{align*} and writing each fraction as an equivalent fraction with the least common denominator leads to: \begin{equation} \label{eq_FinalExpressionHT} \frac{\sum\limits_{s \in \tilde{\mathcal{Y}}} \Big( C_{s} \alpha^{s \ensuremath{f}} \big( \sum\limits_{j = 0}^{\ensuremath{\nu}} \omega_j \alpha^{j s} \big) \prod\limits_{\substack{h \in \tilde{\mathcal{Y}}\\ h \neq s}} (1-\alpha^{\ensuremath{z} h} X) \Big) }{\prod\limits_{s \in \tilde{\mathcal{Y}}} (1-\alpha^{\ensuremath{z} s} X)} \equiv 0 \bmod X^{\ensuremath{\delta}-1}, \end{equation} where the degree of the numerator is at most $|\tilde{\mathcal{Y}}|-1 = y-\ensuremath{\nu}-1$ and has to be at least $\ensuremath{\delta}-1$. To bound the distance $\QCCd$ we distinguish two cases. For the first case where $\ensuremath{d^{ec}} > \ensuremath{\delta} + \ensuremath{\nu}$, at least $y-\ensuremath{\nu}$ elements $C_i \in \F{q^{\ensuremath{r}}}$ have to be non-zero such that \eqref{eq_FinalExpressionHT} holds, i.e., at least $y-\ensuremath{\nu}$ elements $c_{t_0,i_0},c_{t_1,i_1}, \dots, c_{t_{y-1-\ensuremath{\nu}},i_{y-1-\ensuremath{\nu}}} \in \F{q}$ for $t_0, \dots, t_{y-1-\ensuremath{\nu}} $ distinct, have to be non-zero and therefore $\QCCd-\ensuremath{\nu}-1 \geq \ensuremath{\delta} -1 \Longleftrightarrow \QCCd \geq \ensuremath{\delta}+\ensuremath{\nu}$. 
For the second case where $\ensuremath{d^{ec}} < \ensuremath{\delta} + \ensuremath{\nu}$, at least $\ensuremath{d^{ec}}$ of the elements $c_{0,j}, c_{1,j}, \dots, c_{\ensuremath{\ell}-1,j}$ have to be non-zero (see~\eqref{eq_BigElement}) for $C_j = 0$ to hold with some non-zero $c_{t,j}$, and if all the other $C_s, s \in \tilde{\mathcal{Y}} \backslash \{j\}$, are zero, then the LHS of \eqref{eq_FinalExpressionHT} becomes zero. In this case $\QCCd \geq \ensuremath{d^{ec}}$. \end{IEEEproof} For $\ensuremath{\nu} = 0$, the bound of Theorem~\ref{theo_HTBound} becomes the bound of Semenov--Trifonov (see~\cite[Thm. 2]{semenov_spectral_2012}). We chose to state Thm.~\ref{theo_HTBound} in terms of all $\mathbf{c}(X) \in \ensuremath{\mathcal{C}}$ (see~\eqref{eq_HTBound}) to easily obtain a syndrome expression (see Section~\ref{sec_Decoding}). In practice, from the spectral analysis of $\tilde{{\mathbf G}}(X)$, one can search for eigenvalues of the form $\alpha^i$, for $i$ in some $D$ of the form in Thm.~\ref{theo_HTBound}, and determine the corresponding eigencode with its minimum distance. The condition~\eqref{eq_HTBound} is then automatically satisfied for all codewords $\mathbf{c}(X) \in \ensuremath{\mathcal{C}}$, with the corresponding $\ensuremath{f}$, $\ensuremath{z}$ and $\ensuremath{\delta}$. \begin{example}[HT-like Bound for Quasi-Cyclic Code] \label{ex_HTbound} Let $\ensuremath{\mathcal{C}}$ be the binary $\LINQCC{63}{2}{100}{6}{2}$ $2$-quasi-cyclic code with $2 \times 2$ generator matrix in reduced Gröbner form as defined in~\eqref{def_GroebBasisMatrix}: \begin{equation*} \mathbf{\tilde{G}}(X) = \begin{pmatrix} g_{0,0}(X) & g_{0,1}(X) \\ 0 & g_{1,1}(X) \end{pmatrix}, \end{equation*} where: \begin{align*} g_{0,0}(X) & = m_0(X)m_1(X)m_9(X),\\ g_{0,1}(X) & = g_{0,0}(X) a_{0,1}(X), \quad g_{1,1}(X) = g_{0,0}(X) m_5(X), \end{align*} and $a_{0,1}(X)= X^4+X^3+X^2+X+1$ with $\deg a_{0,1}(X) < \deg m_5(X)$ and $ a_{0,1}(X) \nmid (X^{63}-1)$.
Let $\alpha \in \F{2^6} \cong \Fxsub{2}/(X^6 + X^4 + X^3 + X + 1)$ be an element of order $63$. The eigenvalues $\eigenvalue{i} = \alpha^i, i \in \{0, 1, 2, 4, 8, 9, 16, 18, 32, 36 \} = M_0 \cup M_1 \cup M_9$ are the roots of $g_{0,0}(X)$, $g_{0,1}(X)$, $g_{1,1}(X)$ and have (algebraic and geometric) multiplicity two. Therefore, the corresponding eigenvectors span the full space $\F{2^{6}}^{2}$. The distinct eigenvectors $\eigenvector^{(i)}, \forall i \in M_5$, are in $\F{2^{6}}^{2}$ and $\eigenvector[0]^{(i)}, \eigenvector[1]^{(i)} \in \F{2^{6}}$ are linearly independent over $\F{2}$ for each $i \in M_5$. With $\ensuremath{f} = 0$, $\ensuremath{z} = 4$, $\ensuremath{\delta} = 4$, $\ensuremath{\nu} = 1 $, we obtain two consecutive sequences of eigenvalues $\alpha^0,\alpha^4,\alpha^8$ and $\alpha^1,\alpha^5,\alpha^9$ of length three, where $\eigenvector[0]^{(5)}=1$ and $\eigenvector[1]^{(5)}=\alpha^4+1$ are linearly independent over $\F{2}$ and $\eigenvector^{(5)}$ is contained in the intersection of the eigenspaces $\eigenspace[i], i \in D \overset{\defi}{=} \{0,4,8,1,5,9\}$, and therefore the eigencode $\eigencode{\cap_{i \in D} \eigenspace[i]}$ has $\ensuremath{d^{ec}} = \infty$. With Theorem~\ref{theo_HTBound}, we can bound $\QCCd$ to be at least $\ensuremath{\delta}+\ensuremath{\nu} = 5$, which is one less than the actual minimum distance for the $\LINQCC{63}{2}{100}{6}{2}$ $2$-quasi-cyclic code. The bound of Semenov--Trifonov gives $\QCCd \geq 4$. \end{example} \section{Syndrome-Based Decoding of Quasi-Cyclic Codes} \label{sec_Decoding} In this section, we develop a syndrome-based decoding algorithm, which is guaranteed to correct up to $\lfloor (\ensuremath{d^{\ast}}-1)/2 \rfloor$ symbol errors in $\F{q}$.
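The decoder uses the same set of eigenvalue exponents as Thm.~\ref{theo_HTBound}: $\ensuremath{\nu}+1$ shifted arithmetic progressions of step $\ensuremath{z}$. A small illustrative Python sketch generating this set, with the parameters $\ensuremath{f}=0$, $\ensuremath{z}=4$, $\ensuremath{\delta}=4$, $\ensuremath{\nu}=1$ of the example above:

```python
def exponent_set(f, z, delta, nu, m):
    """Exponent set D of the HT-like bound: for each j = 0..nu the
    progression f+j, f+j+z, ..., f+j+(delta-2)z, taken modulo m."""
    return {(f + j + z * i) % m for j in range(nu + 1) for i in range(delta - 1)}

# parameters of the example: two length-3 sequences with step z = 4
print(sorted(exponent_set(0, 4, 4, 1, 63)))  # [0, 1, 4, 5, 8, 9]
```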
Let the received word of a given $\LINQCC{\ensuremath{m}}{\ensuremath{\ell}}{\QCCk}{\QCCd}{q}$ $\ensuremath{\ell}$-quasi-cyclic code be: \begin{align*} \mathbf{r}(X) & = \big( r_0(X) \ \dots \ r_{\ensuremath{\ell}-1}(X) \big) \\ & = \big( c_0(X) + e_0(X) \ \dots \ c_{\ensuremath{\ell}-1}(X) + e_{\ensuremath{\ell}-1}(X) \big), \end{align*} where \begin{equation} \label{eq_errorword} e_i(X) = \sum_{j \in \errorsup[i]} e_{i,j} X^j, \quad i \in \interval{\ensuremath{\ell}}, \end{equation} are $\ensuremath{\ell}$ error polynomials in $\Fxsub{q}$ with $\noerrors[i] \overset{\defi}{=} |\errorsup[i]|$ and degree less than $\ensuremath{m}$. The number of errors in $\F{q}$ is $\tilde{\noerrors} \overset{\defi}{=} \sum_{i=0}^{\ensuremath{\ell}-1} \noerrors[i]$. Define the following set of burst errors: \begin{equation} \label{eq_ModError} \errorsup \overset{\defi}{=} \bigcup_{i=0}^{\ensuremath{\ell}-1} \errorsup[i] \subseteq \interval{\ensuremath{m}}, \end{equation} with cardinality $\noerrors \overset{\defi}{=} |\errorsup| \leq \tilde{\noerrors}$. In the following, we describe a decoding procedure that is able to decode up to $\noerrors \leq \tau$ errors, where: \begin{equation} \label{eq_DecRadius} \tau \leq \frac{\ensuremath{d^{\ast}}-1}{2}. \end{equation} Let $\alpha \in \F{q^{\ensuremath{r}}}$ denote an $\ensuremath{m}$-th root of unity and let the $(\ensuremath{\nu}+1)(\ensuremath{\delta}-1)$ eigenvalues $\eigenvalue{i} = \alpha^{\ensuremath{f}+i \ensuremath{z}+j}, \forall i \in \interval{\ensuremath{\delta}-1}, j \in \interval{\ensuremath{\nu}+1}$, the integer $\ensuremath{f}$ and the integer $\ensuremath{z} > 0$ with $\gcd(\ensuremath{z},\ensuremath{m})=1$ be given as stated in Thm.~\ref{theo_HTBound}.
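Evaluating the received polynomials at powers of $\alpha$ requires arithmetic in $\F{q^{\ensuremath{r}}}$. A minimal toy implementation of $\F{2^6}$ with the modulus $X^6+X^4+X^3+X+1$ from the example (elements as 6-bit integers; for this particular modulus the residue class of $X$ has order $63$, checked numerically) could look as follows:

```python
MOD = 0b1011011  # X^6 + X^4 + X^3 + X + 1 as a bit mask

def gf_mul(a, b, mod=MOD, deg=6):
    """Multiply two elements of F_2[X]/(mod), reducing after each shift."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        a <<= 1
        if (a >> deg) & 1:
            a ^= mod
        b >>= 1
    return res

def gf_pow(a, n):
    """Square-and-multiply exponentiation in the field."""
    res = 1
    while n:
        if n & 1:
            res = gf_mul(res, a)
        a = gf_mul(a, a)
        n >>= 1
    return res

alpha = 0b10              # the residue class of X
print(gf_pow(alpha, 63))  # 1  (alpha lies in the multiplicative group of order 63)
```

Syndromes are then sums of products $r_{i,j}\,\alpha^{(\ensuremath{f}+i\ensuremath{z}+t)j}$ computed with `gf_mul`; in a production setting one would of course use a library with finite-field support instead.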
Furthermore, let $\eigenspace = \bigcap_{i \in \interval{\ensuremath{\delta}-1}, j \in \interval{\ensuremath{\nu}+1}} \eigenspace[\ensuremath{f} + i \ensuremath{z} + j]$ and let one eigenvector $\eigenvector = (\eigenvector[0] \ \eigenvector[1] \ \dots \ \eigenvector[\ensuremath{\ell}-1]) \in \eigenspace$, where $\eigenvector[0], \eigenvector[1], \dots, \eigenvector[\ensuremath{\ell}-1] $ are linearly independent over $\F{q}$, be given. We assume that the minimum distance of the corresponding eigencode $\eigencode{\eigenspace}$ is greater than $\ensuremath{\delta}+\ensuremath{\nu}$. Then, we define the following $\ensuremath{\nu}+1$ syndrome polynomials in $\Fxsub{q^{\ensuremath{r}}}$: \begin{align} \label{eq_DefSyndromes} S_{t}(X) & \overset{\defi}{\equiv} \sum_{i=0}^{\infty} \Bigg( \sum_{j=0}^{\ensuremath{\ell}-1} r_j(\alpha^{\ensuremath{f} +i \ensuremath{z}+t}) \eigenvector[j] \Bigg) X^i \mod X^{\ensuremath{\delta}-1} \nonumber \\ & \overset{\hphantom{\defi}}{=} \sum_{i=0}^{\ensuremath{\delta}-2} \Bigg( \sum_{j=0}^{\ensuremath{\ell}-1} r_j(\alpha^{\ensuremath{f} +i \ensuremath{z}+t}) \eigenvector[j] \Bigg) X^i, \: \forall t \in \interval{\ensuremath{\nu}+1}. \end{align} From Thm.~\ref{theo_HTBound} it follows that the syndrome polynomials as defined in~\eqref{eq_DefSyndromes} depend only on the error and therefore: \begin{align*} S_{t}(X) & = \sum_{i=0}^{\ensuremath{\delta}-2} \Bigg( \sum_{j=0}^{\ensuremath{\ell}-1} e_j(\alpha^{\ensuremath{f} +i \ensuremath{z}+t}) \eigenvector[j] \Bigg) X^i , \quad \forall t \in \interval{\ensuremath{\nu}+1}. \end{align*} Define an error-locator polynomial in $\Fxsub{q^{\ensuremath{r}}}$: \begin{equation} \label{eq_ELP} \Lambda(X) = \sum_{i=0}^{\noerrors} \Lambda_i X^i \overset{\defi}{=} \prod_{i \in \errorsup}(1-X\alpha^{i \ensuremath{z}}). 
\end{equation} As in the classical case of cyclic codes, we obtain $\ensuremath{\nu}+1$ \textit{Key Equations} with a common error-locator polynomial $\Lambda(X)$ as defined in~\eqref{eq_ELP}: \begin{equation} \label{eq_KeyEquation} \Lambda(X) \cdot S_t(X) \equiv \Omega_t(X) \mod X^{\ensuremath{\delta}-1}, \quad \forall t \in \interval{\ensuremath{\nu}+1}, \end{equation} where the degree of each of $\Omega_0(X), \Omega_1(X), \dots, \Omega_{\ensuremath{\nu}}(X) $ is smaller than $\noerrors$. Solving these $\ensuremath{\nu}+1$ Key Equations~\eqref{eq_KeyEquation} jointly can be done by multi-sequence shift-register synthesis, for which several efficient algorithms exist~\cite{feng_generalized_1989, feng_generalization_1991, zeh_fast_2011}. Solving~\eqref{eq_KeyEquation} jointly is equivalent to solving the following heterogeneous system of equations: \begin{equation} \label{eq_DecodingSystem} \begin{pmatrix} \mathbf{S}^{\langle 0 \rangle} \\ \mathbf{S}^{\langle 1 \rangle} \\ \vdots \\ \mathbf{S}^{\langle \ensuremath{\nu} \rangle} \end{pmatrix} \begin{pmatrix} \Lambda_{\noerrors} \\ \Lambda_{\noerrors-1} \\ \vdots \\ \Lambda_{1} \end{pmatrix} = \begin{pmatrix} \mathbf{T}^{\langle 0 \rangle} \\ \mathbf{T}^{\langle 1 \rangle} \\ \vdots \\ \mathbf{T}^{\langle \ensuremath{\nu} \rangle} \end{pmatrix}, \end{equation} where each $(\ensuremath{\delta}-1-\noerrors) \times \noerrors$ submatrix is a Hankel matrix: \begin{equation} \label{eq_SyndromeMatrix} \mathbf{S}^{\langle t \rangle} = \big ( S^{\langle t \rangle}_{i+j} \big)_{i \in \interval{\ensuremath{\delta}-1-\noerrors}}^{j \in \interval{\noerrors}} , \quad \forall t \in \interval{\ensuremath{\nu}+1}, \end{equation} and each $\mathbf{T}^{\langle t \rangle} = (S_{\noerrors}^{\langle t \rangle} \ S_{\noerrors+1}^{\langle t \rangle} \ \dots \ S_{\ensuremath{\delta}-2}^{\langle t \rangle})^T$ with: \begin{align*} S_i^{\langle t \rangle} & = \sum_{j=0}^{\ensuremath{\ell}-1} r_j(\alpha^{\ensuremath{f} + i \ensuremath{z} + t})
\eigenvector[j], \quad \forall i \in \interval{\ensuremath{\delta}-1}, t \in \interval{\ensuremath{\nu}+1}. \end{align*} \begin{theorem}[Decoding up to New Bound] \label{theo_DecodingHTLikeBound} Let $\ensuremath{\mathcal{C}}$ be an $\ensuremath{\ell}$-quasi-cyclic code and let the conditions of Thm.~\ref{theo_HTBound} hold. Let \eqref{eq_DecRadius} be fulfilled, let the $\ensuremath{\nu} +1$ syndrome polynomials $S_0(X), S_1(X), \dots, S_{\ensuremath{\nu}}(X)$ be defined as in~\eqref{eq_DefSyndromes}, and let the set of burst errors $\errorsup = \{j_0,j_1,\dots,j_{\noerrors-1} \}$ be as defined in~\eqref{eq_ModError}. Then, the syndrome matrix $\mathbf{S} = (\mathbf{S}^{\langle 0 \rangle} \ \mathbf{S}^{\langle 1 \rangle} \ \dots \ \mathbf{S}^{\langle \ensuremath{\nu} \rangle} )^T$ with the submatrices from~\eqref{eq_SyndromeMatrix} has $\rank(\mathbf{S})=\noerrors$. \end{theorem} \begin{IEEEproof} Assume w.l.o.g. that $\ensuremath{f} = 0$. Similar to \cite[Section~VI]{feng_generalization_1991}, we can decompose the syndrome matrix into three matrices as follows: $\M{S} = (\M{S}^{\langle 0 \rangle} \ \M{S}^{\langle 1 \rangle} \ \cdots \ \M{S}^{\langle \ensuremath{\nu} \rangle} )^T = \M{X} \cdot \M{Y} \cdot \overline{\M{X}} = ( \M{X}^{\langle 0 \rangle} \ \M{X}^{\langle 1 \rangle} \ \cdots \ \M{X}^{\langle \ensuremath{\nu} \rangle})^T \cdot \M{Y} \cdot \overline{\M{X}}$, where $\M{X}$ is a $(\ensuremath{\nu}+1)(\ensuremath{\delta}-1-\noerrors) \times \noerrors$ matrix over $\F{q^{\ensuremath{r}}}$ and $\M{Y}$ and $\overline{\M{X}}$ are $\noerrors \times \noerrors$ matrices over $\F{q^{\ensuremath{r}}}$.
Explicitly, the decomposition yields the following matrices: \begin{align*} \M{X}^{\langle t \rangle} & = \big( \alpha^{(t+\ensuremath{z} i )j} \big)_{i \in \interval{\ensuremath{\delta} - 1 - \noerrors}}^{j \in \errorsup}, \quad t \in \interval{\ensuremath{\nu}+1},\\ \overline{\M{X}} & = \big( \alpha^{i \ensuremath{z} j } \big)_{i \in \errorsup}^{j \in \interval{\noerrors}}, \quad \M{Y} = \diag(E_{i_0},E_{i_1},\dots,E_{i_{\noerrors-1}}), \end{align*} where $E_{i} \overset{\defi}{=} \sum_{t=0}^{\ensuremath{\ell}-1} e_{i,t} \eigenvector[t]$ for all $i \in \errorsup$. Since $\M{Y}$ is a diagonal matrix with non-zero entries ($\eigenvector[0], \dots, \eigenvector[\ensuremath{\ell}-1]$ are linearly independent over $\F{q}$ and, for each $i \in \errorsup$, not all $e_{i,t}$ are zero), it is non-singular. From $\gcd(\ensuremath{m}, \ensuremath{z}) = 1$, we know that $\overline{\M{X}}$ is a Vandermonde matrix and has full rank. Hence, $\M{Y} \cdot \overline{\M{X}}$ is a non-singular $\noerrors \times \noerrors$ matrix and therefore $\rank(\M{S}) = \rank(\M{X})$. In order to analyze the rank of $\M{X}$, we proceed similarly as in \cite[Sec.~VI]{feng_generalization_1991}. We use the matrix operation from \cite{van_lint_minimum_1986} to rewrite $\M{X} = \M{A} * \M{B}$, where \begin{equation*} \M{A} = \big( \alpha^{ij} \big)_{i \in \interval{\ensuremath{\nu}+1}}^{j \in \errorsup} \quad \text{and} \quad \M{B} = \M{X}^{\langle 0 \rangle}. \end{equation*} We know from~\cite{van_lint_minimum_1986} that, if $\rank(\M{A}) + \rank(\M{B}) > \noerrors$, then $ \rank(\M{A} * \M{B}) = \noerrors$. Since $\gcd(\ensuremath{m}, \ensuremath{z}) =1$, both matrices $\M{A}$ and $\M{B}$ are Vandermonde matrices with $\rank(\M{A}) = \min\{\ensuremath{\nu}+1, \noerrors \}$ and $ \rank(\M{B}) = \min\{\ensuremath{\delta}-1-\noerrors, \noerrors \}$. Assume w.l.o.g. that $(\ensuremath{\delta} -1) > \ensuremath{\nu}$ (else we can interchange the roles of $\ensuremath{\delta}$ and $\ensuremath{\nu}$ in Thm.~\ref{theo_HTBound}). Therefore, from~\eqref{eq_DecRadius} we obtain $ \noerrors \leq (d^{\ast}-1)/2 = (\ensuremath{\delta} + \ensuremath{\nu} -1)/2 < \ensuremath{\delta}-1$.
Hence, investigating all four possible cases of $\rank(\M{A}) + \rank(\M{B})$ gives: \begin{align*} & \ensuremath{\nu}+ 1 + \ensuremath{\delta} - 1 - \noerrors \geq 2 \noerrors - \noerrors +1 = \noerrors +1 > \noerrors,\\ & \ensuremath{\nu} + 1 + \noerrors > \noerrors, \\ & \noerrors + \ensuremath{\delta}-1-\noerrors = \ensuremath{\delta} -1 > \noerrors,\\ & \noerrors + \noerrors = 2\noerrors > \noerrors. \end{align*} Thus, $\rank(\M{A}) + \rank(\M{B}) > \noerrors$. \end{IEEEproof} Algorithm~\ref{algo:decalgo} summarizes the whole decoding procedure, where the complexity is dominated by the operation in Line~\ref{algo_KeyEquation}. After the syndrome calculation (in Line~\ref{algo_SyndCalc} of Algorithm~\ref{algo:decalgo}), the $\ensuremath{\nu}+1$ Key Equations~\eqref{eq_KeyEquation} are solved jointly (here in Line~\ref{algo_KeyEquation} with a Generalized Extended Euclidean Algorithm, GEEA~\cite{feng_generalized_1989}). Various other algorithms with sub-quadratic time complexity exist for solving the Key Equations jointly as in Line~\ref{algo_KeyEquation}. Afterwards, the roots of $\Lambda(X)$ as defined in~\eqref{eq_ELP} correspond to the positions of the burst errors as defined in~\eqref{eq_ModError} (see Line~\ref{algo_RootFinding}). The error values $E_{i_0}, E_{i_1}, \dots, E_{i_{\noerrors-1}}$ can be obtained from one of the $\ensuremath{\nu}+1$ polynomials $\Omega_j(X)$ as given from the Key Equations~\eqref{eq_KeyEquation} (see Line~\ref{algo_BigErrorValues} in Algorithm~\ref{algo:decalgo}). In Line~\ref{algo_SmallErrorValues}, each error value $E_{i_j} \in \F{q^{\ensuremath{r}}}$ is mapped back to the $\ensuremath{\ell}$ error symbols $e_{i_j,0}, e_{i_j,1}, \dots, e_{i_j,\ensuremath{\ell}-1} \in \F{q}$ and the codeword $\mathbf{c}(X) = (c_0(X) \ c_1(X) \ \dots \ c_{\ensuremath{\ell}-1}(X) )$ can be reconstructed.
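The Hankel structure of the submatrices $\mathbf{S}^{\langle t \rangle}$ in~\eqref{eq_SyndromeMatrix} is plain index bookkeeping; a small illustrative sketch (with toy syndrome values rather than field elements):

```python
def hankel_submatrix(syndromes, eps):
    """(delta-1-eps) x eps Hankel matrix (S_{i+j}) built from the
    delta-1 coefficients of one syndrome polynomial S^(t)(X)."""
    rows = len(syndromes) - eps  # delta - 1 - eps
    return [[syndromes[i + j] for j in range(eps)] for i in range(rows)]

# toy syndrome coefficients for delta = 6 (so delta-1 = 5), eps = 2 errors
S = [3, 1, 4, 1, 5]
print(hankel_submatrix(S, 2))  # [[3, 1], [1, 4], [4, 1]]
print(S[2:])                   # right-hand side T^<t> = (S_eps ... S_{delta-2})
```

Entries are constant along anti-diagonals, i.e., `M[i][j] == M[i+1][j-1]`, which is exactly the Hankel property exploited by the shift-register synthesis algorithms.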
\printalgo{\renewcommand{\algorithmcfname}{Algo} \caption{\textsc{Decoding an $\LINQCC{\ensuremath{m}}{\ensuremath{\ell}}{\QCCk}{\QCCd}{q}$ Quasi-Cyclic Code}} \label{algo:decalgo} \DontPrintSemicolon \SetAlgoVlined \LinesNumbered \SetKwInput{KwIn}{Input} \SetKwInput{KwOut}{Output} \BlankLine \KwIn{Parameters $\ensuremath{m}, \ensuremath{\ell},k, q, \ensuremath{r}$ of the quasi-cyclic code\\ Received word $\mathbf{r}(X) = (r_0(X) \ \dots \ r_{\ensuremath{\ell}-1}(X)) \in \Fxsub{q}^{\ensuremath{\ell}}$\\ Integers $\ensuremath{f}, \ensuremath{\delta} > 2, \ensuremath{\nu} \geq 0$ and $\ensuremath{z} > 0$ with $\gcd(\ensuremath{z},\ensuremath{m})=1$\\ Eigenvalues $\eigenvalue{i} = \alpha^{\ensuremath{f}+i \ensuremath{z}+j}, \; \forall i \in \interval{\ensuremath{\delta}-1}, j \in \interval{\ensuremath{\nu}+1} $\\ Eigenvector $(\eigenvector[0] \ \eigenvector[1] \ \dots \ \eigenvector[\ensuremath{\ell}-1]) \in \F{q^{\ensuremath{r}}}^{\ensuremath{\ell}}$} \KwOut{Estimated codeword\\ $ \mathbf{c}(X)=(c_0(X) \ c_1(X) \ \dots \ c_{\ensuremath{\ell}-1}(X) ) $ \\ or \textsc{Decoding Failure}} \BlankLine \BlankLine Calculate $S_0(X),S_1(X),\dots, S_{\ensuremath{\nu}}(X)$ as in~\eqref{eq_DefSyndromes} \nllabel{algo_SyndCalc} \; \BlankLine Solving Key Equations jointly $(\Lambda(X), \Omega_0(X), \Omega_1(X), \dots, \Omega_{\ensuremath{\nu}}(X))$ = \texttt{GEEA$\big( X^{\ensuremath{\delta}-1}, S_0(X), S_{1}(X), \dots, S_{\ensuremath{\nu}}(X) \big)$} \nllabel{algo_KeyEquation}\; \BlankLine Find all $i$: $\Lambda(\alpha^{-i \ensuremath{z}})=0$ $\Rightarrow$ ${\mathcal E}=\lbrace i_0,i_1,\dots,i_{\noerrors-1}\rbrace$ \nllabel{algo_RootFinding}\; \BlankLine \If{$\noerrors < \deg \Lambda(X)$} {Declare \textsc{Decoding Failure}} \Else{Determine error values $E_{i_0},E_{i_1},\dots,E_{i_{\noerrors-1}} \in \F{q^{\ensuremath{r}}}$ \nllabel{algo_BigErrorValues} \; Determine $e_{i_j,0}, e_{i_j,1}, \dots, e_{i_j,\ensuremath{\ell}-1} \in \F{q}$, s.t. 
$\sum_{t=0}^{\ensuremath{\ell}-1} e_{i_j,t}v_t = E_{i_j}, \quad \forall i_j \in \mathcal E$ \nllabel{algo_SmallErrorValues}\; \BlankLine $e_i(X) \leftarrow \sum_{j \in \mathcal{E}_i} e_{j,i} X^{j}, \quad \forall i \in \interval{\ensuremath{\ell}}$ \; \BlankLine $c_i(X) \leftarrow r_i(X) - e_i(X), \quad \forall i \in \interval{\ensuremath{\ell}}$\;} } \begin{example}[Decoding up to HT-like New Bound] \label{ex_DecodingBound} Suppose the all-zero codeword of the $\LINQCC{63}{2}{100}{6}{2}$ $2$-quasi-cyclic code from Example~\ref{ex_HTbound} was transmitted. Let the two received polynomials in $\Fxsub{2}$ be: \begin{equation*} r_0(X) = e_0(X) = 1+X^{32}, \quad \quad r_1(X) = e_1(X) = X^{32}. \end{equation*} We have $\tilde{\noerrors} = 3$, but $\noerrors = 2$ (see~\eqref{eq_ModError}). The eigenvector $\eigenvector^{(5)} = (1 \ \alpha^4+1) \in \F{2^6}^2$ is contained in the intersection of the eigenspaces $\cap_{i \in D} \eigenspace[i]$, where $D \overset{\defi}{=} \{0,4,8,1,5,9\} $, and is used for decoding. The system of two equations as in~\eqref{eq_DecodingSystem} becomes here: \begin{equation*} \begin{pmatrix} \alpha^{35} & \alpha^{26} \\ \alpha^{45} & \alpha^{33} \end{pmatrix} \begin{pmatrix} \Lambda_2\\ \Lambda_1 \end{pmatrix} = \begin{pmatrix} \alpha^{7}\\ \alpha^{51} \end{pmatrix}, \end{equation*} and the corresponding error-locator polynomial is $\sum_{i=0}^{2} \Lambda_iX^i =1+\alpha^{49}X+\alpha^{2}X^2 = (1-X)(1-X \alpha^{128}) $. The error-evaluation gives the two error values in $\F{2^6}$: $E_0 = 1$ and $E_{32} = \alpha^4$. Therefore we can reconstruct the $\tilde{\noerrors} = 3$ error values $e_{0,0} = 1$, $e_{32,0}=1$ and $e_{32,1}=1$ in $\F{2}$. \end{example} \section{Conclusion and Outlook} \label{sec_conclusion} We proved a new lower bound on the minimum distance of quasi-cyclic codes based on the spectral analysis introduced by Semenov and Trifonov. Moreover, a syndrome-based decoding algorithm was developed and its correctness proven. 
\section*{Acknowledgments} This work was initiated when S. Ling was visiting the CS Department of the Technion. He thanks this institution for its hospitality. \vspace{-.3cm} \printbibliography \end{document}
\section*{Acknowledgements} The author is grateful to C. Ahn, P. Dorey, F. G\"{o}hmann, A. Kl\"{u}mper, and J. Suzuki for helpful discussions and comments. We acknowledge JSPS Research Fellowship for Young Scientists for supporting the beginning of this work. This research is also partially supported by the Aihara Innovative Mathematical Modelling Project, the Japan Society for the Promotion of Science (JSPS) through the ``Funding Program for World-Leading Innovative R\&D on Science and Technology (FIRST Program)'', initiated by the Council for Science and Technology Policy (CSTP). \section{Concluding remarks} We have been discussed subspaces of the SSG model with Dirichelt boundaries obtained through a light-cone lattice, on which we derived NLIEs from the corresponding Zamolodchikov-Fateev spin-$1$ XXZ chain with boundary magnetic field. The most important result in this paper is the fact that Dirichlet boundaries allows us to obtain the R sector, which cannot be obtained from the periodic system. According to UV analysis, it is the NLIE for $y(\theta)$ that determines which sector is realized from the LSSG model. On the other hand, a winding number $m$ is determined by $b(\theta)$ in such a way that takes an integer for the NS sector and a half integer for the R sector. At a separation point of these two sectors, an energy gap has been obtained (Figure \ref{fig:boundary_energy}). Counting equations (\ref{counting-a}) and (\ref{counting-b*}) also show existence of sector separations with respect to boundary parameters; Either an even number or an odd number of particles are allowed to exist depending on boundary parameters. In principle, a light-cone regularized quantum field theory consists of an even number of sites in order to generate a pair of a right-mover and a left-mover, on which only an even number of excitation particles are allowed to exist. 
However, in connection with allowed excitations in the boundary SG model \cite{bib:ABPR08}, strong enough boundary field arrests a particle, making a system effectively consisting of an odd number of sites, and then an excitation state with an odd number of particles is also obtained. In the IR limit, we have analyzed difference of boundary terms in NLIEs in a realm of boundary bootstrap principle. Mathematically, difference in boundary terms originates in change of analyticity structure of $T$-functions. Boundary bootstrap approach tells that this change occurs due to emergence of a boundary bound state. According to discussion in Section \ref{sec:IRlim}, the symmetries obtained in a corresponding spin chain are preserved in this limit. However, the given interpretation is incomplete, since it is the SUSY part which brings the phase separation and we did not discuss it yet. If both statements for the UV limit and the IR limit are correct, there seems to be hidden symmetry between the ground state and the boundary excitation state. In order to make it clear how phase separations vary from the UV limit to the IR limit, analysis on intermediate volume would be important. Another interesting future problem is to construct full regime of the SSG model from the spin chain. Recently, a supercharge defined on a spin chain is studied in connection with integrability of a system \cite{bib:HF12, bib:H13}. This supercharge adds one site to a system, {\it i.e.} it makes a system consisting of an even number of sites to that of an odd number of sites. If a supercharge defined on a lattice is correctly identified with that originally defined on a continuum theory, we may obtain subspaces of quantum field theories which cannot be obtained by a conventional method. \ifx10 \end{document} \fi \section{Dilogarithm identities} \label{sec:dilog} Dilogarithm functions appear widely in integrable systems and have been intesively studied. 
The dilogarithm function defined by (\ref{dilog}) is connected to the Rogers dilogarithm: \begin{equation} L(x) = -\frac{1}{2} \int_0^x dy\; \Big( \frac{\ln (1-y)}{y} + \frac{\ln y}{1-y} \Big) \end{equation} via the relation $L_+(x) = L(\frac{x}{1+x})$. Remarkable relations among dilogarithms have been found in mathematical physics problems \cite{bib:Z91, bib:RVT93, bib:K89, bib:BR90, bib:K93, bib:K95}. Here we list some of those which appear in the expression of the eigenenergy (\ref{UVenergy}): \begin{equation} \begin{split} &L(0) = 0, \qquad L(\tfrac{1}{2}) = \tfrac{\pi^2}{12}, \qquad L(-\infty) = -\tfrac{\pi^2}{6}, \\ &L(1) = L(x) + L(1 - x) = \tfrac{\pi^2}{6} \qquad x \in [0,1], \\ &2L(1) = 2L(\tfrac{1}{n+1}) + \sum_{j=0}^{n-1} L(\tfrac{1}{(1 + j)^2}) \qquad n \in \mathbb{Z}_{\geq 0}, \\ &L(1) \tfrac{3n}{n+2} = \sum_{j=0}^{n-1} L(\tfrac{\sin^2\frac{\pi}{n+2}}{\sin^2\frac{\pi(j+1)}{n+2}}) \qquad n \in \mathbb{Z}_{\geq 0}. \end{split} \end{equation} Moreover, it was shown in \cite{bib:S04} that the following relation holds: \begin{equation} 2 \sum_{p=1}^{k-1}\Big[ L(\tfrac{p(p+2)}{(p+1)^2}) - L(\tfrac{\sin\frac{\pi p}{k+2} \sin\frac{\pi(p+2)}{k+2}} {\sin^2\frac{\pi(p+1)}{k+2}}) \Big] + 4L(\tfrac{k}{k+1}) = \tfrac{\pi^2 k}{k+2}. \end{equation} Thus, we obtain the following relation for small $\gamma$: \begin{equation} \begin{split} &L_+(b^+(-\infty)) + L_+(\bar{b}^+(-\infty)) + L_+(y^+(-\infty)) \\ &- L_+( (\tfrac{1}{2}({\rm sgn}(1-H_+) + {\rm sgn}(1-H_-) + {\rm sgn}(1+H_+) + {\rm sgn}(1+H_-)))_{{\rm mod}\,2} ) \\ &= \begin{cases} \frac{\pi^2}{4} & (\frac{1}{2}({\rm sgn}(1-H_+) + {\rm sgn}(1-H_-) + {\rm sgn}(1+H_+) + {\rm sgn}(1+H_-)))_{{\rm mod}\,2} = 0 \\ \tfrac{\pi^2}{2} & (\frac{1}{2}({\rm sgn}(1-H_+) + {\rm sgn}(1-H_-) + {\rm sgn}(1+H_+) + {\rm sgn}(1+H_-)))_{{\rm mod}\,2} = 1.
\end{cases} \end{split} \end{equation} \ifx10 \end{document} \fi \section{Asymptotic behaviors of NLIEs} \label{sec:int_const} The integration constants in (\ref{NLIE_b}) and (\ref{NLIE_y}) are determined from the asymptotic behaviors of the NLIEs. From the definitions, the auxiliary functions $b(\theta)$ and $y(\theta)$ behave as \begin{align} &b(\infty) = e^{-2i\omega} + e^{-4i\omega}, \label{asym_b} \\ &B(\infty) = 1 + e^{-2i\omega} + e^{-4i\omega}, \label{asym_B} \\ &y(\infty) = e^{2i\omega} + 1 + e^{-2i\omega}, \label{asym_y} \\ &Y(\infty) = e^{2i\omega} + 2 + e^{-2i\omega}. \label{asym_Y} \end{align} Here we set $\omega = \gamma(2S^{\rm tot} + H + 1)$ by defining the total spin $S^{\rm tot} = N - M$ and the averaged boundary parameter $H = \frac{H_+ + H_-}{2}$. The left-hand sides of the NLIEs remain finite, while the linear terms $C_b^{(1)} \theta$ and $C_y^{(1)} \theta$ diverge as $\theta \to \infty$, which forces $C_b^{(1)} = C_y^{(1)} = 0$. The right-hand sides of the NLIEs as $\theta \to \infty$ are evaluated from the following asymptotic behaviors: \begin{equation} \label{asym} \begin{split} &g(\infty) = \pi G(\infty) = \frac{\pi}{2} \frac{\pi - 3\gamma}{\pi - 2\gamma}, \qquad g_K(\infty) = \pi G_K(\infty) = \frac{\pi}{2}, \\ &J(\infty) = \pi \frac{\pi - 3\gamma}{\pi - 2\gamma}, \hspace{23mm} J_K(\infty) = \pi. \end{split} \end{equation} The boundary terms $F(\theta;H)$ and $F_y(\theta;H)$ in the regime (a) behave as \begin{equation} \label{basym-a} F(\infty;H) = \frac{\pi}{2} \frac{\pi - \gamma H}{\pi - 2\gamma}, \qquad F_y(\infty;H) = 0, \end{equation} while for the regime (b): \begin{equation} \label{basym-b} F(\infty;H) = \frac{\pi}{2} \frac{-\pi - \gamma H + 2 \gamma}{\pi - 2\gamma}, \qquad F_y(\infty;H) = \pi, \end{equation} and for the regime (c) we have \begin{equation} \label{basym-c} F(\infty;H) = \frac{\pi}{2} \frac{-\pi - \gamma H}{\pi - 2\gamma}, \qquad F_y(\infty;H) = 0.
\end{equation} Substituting (\ref{asym_B}), (\ref{asym_y}), (\ref{asym}), and (\ref{basym-a})-(\ref{basym-c}) into the NLIE for $\ln y(\theta)$, the integration constant $C_y^{(2)}$ is determined as \begin{equation} C_y^{(2)} = i\pi \widetilde{C}_y^{(2)} = -i\pi [N_H - 2 (N_S + N_V) - M_C + 1 + n_y(H_+) + n_y(H_-)]_{{\rm mod}\,2}, \end{equation} where \begin{equation} n_y(H) = \begin{cases} 0 & |H| > 1, \\ 1 & |H| < 1. \end{cases} \end{equation} The integration constant $C_b^{(2)}$ is obtained from (\ref{asym_b}), (\ref{asym_B}), (\ref{asym_Y}), (\ref{asym}), and (\ref{basym-a})-(\ref{basym-c}): \begin{equation} \label{Cb_const} C_b^{(2)} = i\pi \widetilde{C}_b^{(2)} = -i\pi [2S^{\rm tot} + N + N_1 + 1 - \delta_b + n_b(H_+) + n_b(H_-)]_{{\rm mod}\,2}, \end{equation} where \begin{equation} n_b(H) = \begin{cases} \frac{3}{2} & H > 1, \\ -\frac{1}{2} & |H| < 1, \\ -\frac{3}{2} & -1 > H \end{cases} \end{equation} and \begin{equation} \delta_b = \begin{cases} 0 & \cos\omega > 0, \\ 1 & \cos\omega < 0. \end{cases} \end{equation} Here the notation $\lfloor*\rfloor$ denotes the integer part of $*$. Besides (\ref{Cb_const}), we obtain a counting equation for holes: \begin{equation} \begin{split} N_H - 2(N_S + N_V) &= 2S^{\rm tot} + M_C + 2M_W -\delta_B \\ &+ \tfrac{1}{2}({\rm sgn}(1-H_+) + {\rm sgn}(1 + H_+) + {\rm sgn}(1-H_-) + {\rm sgn}(1+H_-)), \end{split} \end{equation} where \begin{equation} \label{def_deltaB} \delta_B = \begin{cases} 0 & 1 + 2\cos2\omega > 0 \\ 1 & 1 + 2\cos2\omega < 0. \end{cases} \end{equation} In order to derive a counting equation for type-$1$ holes, we need an NLIE for the auxiliary function $a(\theta)$.
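As a quick consistency check of the asymptotic values quoted above, note that $B(\infty) = e^{-2i\omega}\, y(\infty)$ and $Y(\infty) = 2 + 2\cos 2\omega$ follow algebraically from (\ref{asym_b})-(\ref{asym_Y}). A minimal numerical sketch (ours):

```python
import cmath
import math

def asymptotics(omega):
    # Asymptotic values of the auxiliary functions, omega = gamma (2 S_tot + H + 1)
    e = cmath.exp
    b_inf = e(-2j * omega) + e(-4j * omega)      # b(+infinity)
    y_inf = e(2j * omega) + 1 + e(-2j * omega)   # y(+infinity)
    return b_inf, 1 + b_inf, y_inf, 1 + y_inf    # b, B, y, Y at +infinity

for omega in (0.17, 0.83, 2.4):
    b, B, y, Y = asymptotics(omega)
    # B(inf) = exp(-2 i omega) * y(inf)
    assert abs(B - cmath.exp(-2j * omega) * y) < 1e-12
    # Y(inf) = 2 + 2 cos(2 omega) is real
    assert abs(Y - (2 + 2 * math.cos(2 * omega))) < 1e-12
print("asymptotics are mutually consistent")
```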
\ifx10 Keeping in mind that the real zeros of $T_1(\theta)$ consist of type-$1$ holes, we obtain \begin{equation} \label{NLIEa} \begin{split} \ln a(\theta) &= \int_{-\infty}^{\infty} d\theta'\; G_a(\theta - \theta' + i\epsilon) \ln A(\theta' - i\epsilon) - \int_{-\infty}^{\infty} d\theta'\; G_a(\theta - \theta' - i\epsilon) \ln \bar{A}(\theta' + i\epsilon) \\ &+ iD^{(a)}_{\rm bulk}(\theta) + iD^{(a)}_{\rm B}(\theta) + iD_{\rm a}(\theta) + C_a, \end{split} \end{equation} where \begin{equation} G_a(\theta) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \frac{e^{-ik\theta} \sinh(\frac{\pi}{\gamma} - 2)\frac{\pi k}{2}}{2\cosh\frac{\pi k}{2} \sinh(\frac{\pi}{\gamma} - 1)\frac{\pi k}{2}}. \end{equation} The boundary terms $iD^{(a)}_{\rm B}(\theta) = F_a(\theta;H_+) + F_a(\theta;H_-) + J_a(\theta)$ depend on the boundary parameters and take different forms for positive $H$: \begin{equation} F_a(\theta;H) = \int d\theta\; \int_{-\infty}^{\infty} dk\; e^{-ik\theta} \frac{\sinh(\frac{\pi}{\gamma} - H) \frac{\pi k}{2}}{2\cosh\frac{\pi k}{2} \sinh(\frac{\pi}{\gamma} - 1)\frac{\pi k}{2}} \end{equation} and negative $H$: \begin{equation} F_a(\theta;H) = -\int d\theta\; \int_{-\infty}^{\infty} dk\; e^{-ik\theta} \frac{\sinh(\frac{\pi}{\gamma} + H) \frac{\pi k}{2}}{2\cosh\frac{\pi k}{2} \sinh(\frac{\pi}{\gamma} - 1)\frac{\pi k}{2}}. \end{equation} The boundary-independent term $J_a(\theta)$ is given by \begin{equation} J_a(\theta) = \int d\theta\; \int_{-\infty}^{\infty} dk\; e^{-ik\theta} \frac{\cosh\frac{\pi k}{4} \sinh(\frac{\pi}{\gamma} - 2)\frac{\pi k}{4}}{\cosh \frac{\pi k}{2} \sinh(\frac{\pi}{\gamma} - 1)\frac{\pi k}{4}}.
\end{equation} The particle source terms are given by \begin{equation} \label{source-a} \begin{split} &D_{\rm a}(\theta) = \sum_j c^{(a)}_j \{ g_{(j)}^{(a)}(\theta - \theta_j) + g_{(j)}^{(a)}(\theta + \theta_j) \}, \\ &c_j^{(a)} = \begin{cases} 1 & \text{for type-$1$ holes} \\ -1 & \text{otherwise}, \end{cases} \\ &g_{(j)}^{(a)}(\theta) = \begin{cases} (g_a)_{\rm II}(\theta) = g_a(\theta) + g_a(\theta - i\pi {\rm sgn}({\rm Im}\,\theta)) & \text{for roots satisfying $|{\rm Im}\,\theta_j| > \pi$} \\ g_a(\theta + i\epsilon) + g_A(\theta - i\epsilon) & \text{for specials} \\ g_a(\theta) & \text{otherwise}, \end{cases} \end{split} \end{equation} where \begin{equation} g_a(\theta) = 2\gamma \int d\theta\; G_a(\theta). \end{equation} The bulk term is similar in form to the source terms: \begin{equation} D^{(a)}_{\rm bulk}(\theta) = N \{g_a(\theta - i\Theta) + g_a(\theta + i\Theta)\}. \end{equation} The summation in (\ref{source-a}) is taken over $j$ such that $\theta_j$ is a type-$1$ hole, a real special object, or a complex root. \fi Since the auxiliary function $a(\theta)$ asymptotically behaves as $a(\infty) = e^{-2i\omega}$, we obtain the following counting equation for type-$1$ holes by comparing both sides of the NLIE (\ref{NLIE_a}): \begin{equation} \label{counting_n1} N_1 - 2(N_S^R + N_V^R) = S^{\rm tot} - M_R + M_{C>\pi} + M_W + \tfrac{1}{2}({\rm sgn}(H_+) + {\rm sgn}(H_-)), \end{equation} where $M_{C>\pi}$ represents the number of roots satisfying $|{\rm Im}\,\theta_j| > \pi$. The integration constant $C_a$ is determined as \begin{equation} \label{int_const_a} C_a = i\pi\widetilde{C}_a = -i\pi[ 2S^{\rm tot} + 1 + {\rm sgn}(H_+) + {\rm sgn}(H_-) ]_{{\rm mod}\, 2}.
\end{equation} In the derivation of (\ref{counting_n1}) and (\ref{int_const_a}), we used the following asymptotic behaviors: \begin{align} &g_a(\infty) = \pi G_a(\infty) = \frac{\pi}{2} \frac{\pi - 2\gamma}{\pi - \gamma}, \\ &J_a(\infty) = \pi \frac{\pi - 2\gamma}{\pi - \gamma}, \qquad F_a(\infty;H) = \frac{\pi}{2} \frac{{\rm sgn}(H)\, \pi - \gamma H}{\pi - \gamma} \end{align} and the relation $M = M_R + M_C + M_W$. \ifx10 \end{document} \fi \section{Introduction} \label{sec:introduction} Physical systems on a finite volume show interesting features, such as edge states and boundary critical exponents, whose importance has been recognized for years. Since any real material is a finite-size system, it is also important to know how boundaries affect physical quantities. Nevertheless, the presence of boundaries often destroys the symmetries available in periodic systems, which makes a system with boundaries more difficult to study. For this reason, it is desirable to work on systems which retain good symmetries even after non-trivial boundary conditions are added, as such symmetries allow exact calculation of physical quantities. Although adding boundaries breaks the symmetry of an integrable system, whose integrability is ensured by the Yang-Baxter equation, there exist boundary conditions that preserve the integrability of the system, namely those satisfying the reflection relation \cite{bib:C84, bib:S88} at the boundaries. Due to the Yang-Baxter equation and the reflection relation, a many-body scattering process can be decomposed into a sequence of two-body scatterings, which allows us to find exact scattering and reflection matrices. An example possessing these symmetries is the spin-$\frac{1}{2}$ XXZ spin chain with boundary magnetic fields, whose $R$ and $K$-matrices are obtained as solutions of the Yang-Baxter equation and the reflection relation.
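Both relations can be checked numerically for the standard trigonometric solutions. The sketch below uses our own conventions (symmetric 6-vertex $R$-matrix with entries $\sinh(u+\eta)$, $\sinh u$, $\sinh\eta$, and the diagonal $K(u)=\mathrm{diag}(\sinh(\xi+u),\sinh(\xi-u))$), which are not necessarily the normalizations used later in this paper:

```python
import numpy as np

def R(w, eta):
    # Symmetric trigonometric 6-vertex R-matrix on C^2 (x) C^2
    a, b, c = np.sinh(w + eta), np.sinh(w), np.sinh(eta)
    return np.array([[a, 0, 0, 0],
                     [0, b, c, 0],
                     [0, c, b, 0],
                     [0, 0, 0, a]])

def K(w, xi):
    # Diagonal K-matrix (boundary magnetic field along the z-axis)
    return np.diag([np.sinh(xi + w), np.sinh(xi - w)])

I2 = np.eye(2)
u, v, eta, xi = 0.37, 0.11, 0.5, 0.9

# Yang-Baxter equation on C^2 (x) C^2 (x) C^2:
#   R12(u-v) R13(u) R23(v) = R23(v) R13(u) R12(u-v)
P = np.eye(4)[[0, 2, 1, 3]]                       # swap of two C^2 factors
R12 = lambda w: np.kron(R(w, eta), I2)
R23 = lambda w: np.kron(I2, R(w, eta))
R13 = lambda w: np.kron(I2, P) @ R12(w) @ np.kron(I2, P)
ybe = R12(u - v) @ R13(u) @ R23(v) - R23(v) @ R13(u) @ R12(u - v)
assert np.allclose(ybe, 0)

# Reflection relation on C^2 (x) C^2 (R21 = R12 for this symmetric R):
#   R(u-v) K1(u) R(u+v) K2(v) = K2(v) R(u+v) K1(u) R(u-v)
K1, K2 = np.kron(K(u, xi), I2), np.kron(I2, K(v, xi))
lhs = R(u - v, eta) @ K1 @ R(u + v, eta) @ K2
rhs = K2 @ R(u + v, eta) @ K1 @ R(u - v, eta)
assert np.allclose(lhs, rhs)
print("Yang-Baxter and reflection relations hold")
```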
Another example is the sine-Gordon (SG) model with Dirichlet boundary conditions, which is obtained through bosonization of the spin-$\frac{1}{2}$ XXZ spin chain with boundary magnetic fields. Both models have characteristic $R$-matrices associated with the $U_q(sl_2)$ algebra \cite{bib:KRS81, bib:J85}. \bigskip Different methods have been developed for spin chains and quantum field theories, since the former are discrete systems, while the latter are continuum ones. For spin chains, the transfer matrix method is often used, regarding a two-dimensional lattice as a time sequence of transfer matrices. The Bethe-ansatz method is one of the most successful methods for diagonalizing a transfer matrix \cite{bib:B31}. This method can also be applied to a system with non-trivial boundaries, as long as they satisfy the reflection relation. For instance, the XXZ model with boundary fields was first solved by the coordinate Bethe-ansatz method \cite{bib:ABBBQ87}, and the method was algebraically formulated for the diagonal boundary case by introducing the double-row transfer matrix \cite{bib:S88}. In the presence of boundary magnetic fields, the existence of boundary bound states was found through a $q$-deformed vertex operator \cite{bib:JKKKM95} and later also by the Bethe-ansatz method \cite{bib:SS95,bib:KS96}. In the realm of the Bethe-ansatz method, boundary bound states are obtained as imaginary solutions of the Bethe-ansatz equations \cite{bib:SS95,bib:KS96}. One needs the exact distribution of Bethe roots to compute physical quantities by the Bethe-ansatz method. The existence of imaginary roots slightly deforms the bulk root density and, as a result, deforms the root distribution of the ground state as well. This fact leads to the question of whether boundary bound states are to be included in the ground state or not.
The answer to this question was given for the repulsive regime \cite{bib:SS95} and for the attractive regime \cite{bib:KS96} by calculating the energy shift coming from the emergence of the imaginary roots themselves and from the shift of the root density driven by the imaginary roots. On the other hand, analytical treatment of a continuum theory has been achieved by the bootstrap approach \cite{bib:ZZ79}. This method allows one to compute the scattering matrix between any two particles, starting from the soliton-soliton $S$-matrix obtained as a solution of the Yang-Baxter equation. Similarly, the boundary bootstrap principle was developed, which successively gives the reflection amplitudes of excitation particles on a boundary. In the context of a quantum field theory, boundary bound states appear as poles of a reflection matrix. The existence of boundary bound states in the SG model with Dirichlet boundary conditions was discussed in \cite{bib:GZ94, bib:G94}, together with explicit forms of the reflection matrices. The spectrum of boundary states was then calculated in \cite{bib:MD00, bib:BPT01, bib:BPTT02, bib:BPT02}. However, it is hard to know whether boundary bound states are included in the ground state or not, since in the quantum-field-theory realm the ground state is always regarded as a vacuum. \bigskip If one correctly knows the lattice model corresponding to a quantum field theory, one can apply methods valid only for discretized systems to the analysis of an originally continuum system. Therefore, our main aim in this paper is to establish the correct correspondence between a lattice system and a quantum field theory. The notion of discretizing an integrable quantum field theory was first introduced in \cite{bib:STF80}. Among various types of discretization, we employ the light-cone regularization \cite{bib:DV87, bib:V89, bib:V90}. The light-cone regularization is achieved by discretizing the light cone while fixing a mass parameter \cite{bib:DV88}.
This treatment is called ``scaling'', and we call the continuum limit that reproduces the original theory the scaling limit. A discretized light cone looks like a two-dimensional lattice rotated by $45$ degrees. Each line is the trajectory of a particle: a right-mover runs along a line from bottom-left to top-right, while a left-mover runs along a line from bottom-right to top-left. A scattering occurs only at a vertex, with the scattering amplitude of the original theory. This scattering matrix coincides with the $R$-matrix of the spin-$\frac{1}{2}$ XXZ chain with alternating inhomogeneities, which algebraically connects the two models. In order to derive characteristic quantities of quantum field theories, such as $S$-matrices and conformal dimensions, from a light-cone regularized model, two different approaches have been developed to describe only the excitation particles. The first is based on the physical Bethe-ansatz equations, obtained by assuming string solutions and deriving equations for the densities of those strings on an infinite system \cite{bib:K79, bib:DL82, bib:JNW83, bib:AD84, bib:R91}. The second is based on the nonlinear integral equations (NLIEs) for counting functions or auxiliary functions defined from the eigenvalues of transfer matrices \cite{bib:DV92, bib:FMQR97, bib:DV97, bib:KBP91}. This method, which allows us to deal with a finite-size system, is more algebraic in the sense that the equations are obtained from $T$-systems and $Y$-systems, whose concept was first introduced in \cite{bib:Z91}; the link with Dynkin diagrams was explored in \cite{bib:RVT93}. \bigskip The correspondence between the SG model and the spin-$\frac{1}{2}$ XXZ model has been closely examined through the light-cone lattice approach. Under a periodic boundary, only NLIEs for excitation states with an even number of particles have been accessible, as the corresponding spin chain consists of an even number of sites.
Consequently, in the ultraviolet (UV) limit, the obtained conformal dimensions have even winding numbers. Later, it was suggested in \cite{bib:FRT98} that the subspace characterized by odd winding numbers is obtained from a spin chain consisting of an odd number of sites, although it has not yet been found how to define a scaling limit on an odd-site system. On the other hand, the correspondence of these two models under Dirichlet boundaries was discussed in \cite{bib:ABR04, bib:ABPR08}, where it was found that the subsector of odd winding numbers is also obtained for certain values of the boundary parameters. Our interest is in how boundary fields affect the continuum-discrete correspondence in the more complicated case with supersymmetry. For this aim, we discuss the supersymmetric sine-Gordon (SSG) model \cite{bib:FGS78} and the corresponding spin chain, the Zamolodchikov-Fateev spin-$1$ XXZ chain \cite{bib:ZF80}. The correspondence of these two models was discussed in \cite{bib:IO92, bib:A94} and, under Dirichlet boundary conditions, in \cite{bib:ANS07}. The periodic case was discussed from the light-cone point of view in \cite{bib:D03, bib:BDPTW04, bib:HRS07}, and only the Neveu-Schwarz (NS) sector, {\it i.e.} one of the two sectors in which the SSG model results in the UV limit, was obtained \cite{bib:HRS07}. In the analysis of the SSG model via light-cone regularization, NLIEs were used instead of a method based on the string hypothesis, since the Bethe roots of a higher-spin system are subject to deviations of $\mathcal{O}(N^{-1})$ from string solutions. The higher-spin extension of the spin chain was first advocated in \cite{bib:KRS81}. Using the good properties of the $U_q(sl_2)$ $R$-matrix, the fusion method has been developed: applying a projection operator, the $R$-matrix of the spin-$1$ XXZ model is constructed. The $R$-matrix constructed in this way again satisfies the Yang-Baxter equation, which ensures the integrability of the system associated with it.
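The fusion mechanism rests on the fact that the $R$-matrix degenerates into a one-dimensional projector at a special value of the spectral parameter, so that the complementary projector extracts the three-dimensional spin-$1$ subspace. A minimal numerical illustration, in a symmetric 6-vertex gauge of our own choosing (entries $\sinh(u+\eta)$, $\sinh u$, $\sinh\eta$; not necessarily the gauge of \cite{bib:KRS81}):

```python
import numpy as np

def R(w, eta):
    # Symmetric trigonometric 6-vertex R-matrix on C^2 (x) C^2
    a, b, c = np.sinh(w + eta), np.sinh(w), np.sinh(eta)
    return np.array([[a, 0, 0, 0],
                     [0, b, c, 0],
                     [0, c, b, 0],
                     [0, 0, 0, a]])

eta = 0.5
# At w = -eta the R-matrix degenerates: its rank drops from 4 to 1
assert np.linalg.matrix_rank(R(-eta, eta)) == 1

# In this gauge it is proportional to the projector onto the singlet (0,1,-1,0)/sqrt(2)
s = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
P_sing = np.outer(s, s)
assert np.allclose(R(-eta, eta), -2.0 * np.sinh(eta) * P_sing)

# The complementary projector singles out the 3-dim (spin-1) subspace used in fusion
P_plus = np.eye(4) - P_sing
assert np.allclose(P_plus @ P_plus, P_plus)   # idempotent
assert round(np.trace(P_plus)) == 3           # three-dimensional image
print("degeneration-point checks pass")
```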
The diagonal solution of the reflection relation for the spin-$1$ $R$-matrix leads to Dirichlet boundaries in the SSG model \cite{bib:FLU02}. The BSSG model was first introduced in \cite{bib:IOZ95}. Boundary bound states and mass spectra were discussed by the boundary bootstrap approach in \cite{bib:BPT02*}. The light-cone regularization was then applied in \cite{bib:ANS07}, where the correspondence between the spin chain and the original theory was intensively discussed in relation to the renormalization flow from the infrared (IR) limit to the UV limit. While that discussion was limited to the regime where no boundary bound state appears, we are more interested in how the physics changes with the boundary parameters. Indeed, the $T$-functions change their analytical structure in accordance with the boundary parameters, which is physically interpreted as the appearance of boundary bound states. Performing analytic continuation, we obtained different NLIEs in three regimes of boundary parameters. From each set of NLIEs, different counting equations were derived, which allow different types of excitations. We found that, for certain values of the boundary parameters, an odd number of particles is obtained in a system consisting of an even number of sites. Thus, we expect a sector separation similar to the one obtained for the SG model under Dirichlet boundary conditions \cite{bib:ABPR08}, but a more complicated one due to supersymmetry. \bigskip Let us now outline the plan of this paper. Throughout this paper, we analyze the SSG model with Dirichlet boundaries on a finite volume. We focus on the repulsive regime, where no breather, {\it i.e.} no bound state of solitons, exists in the system. Although two types of Dirichlet boundary conditions are allowed due to the supersymmetry of the Majorana fermions, we choose the condition referred to as BSSG$^+$ in \cite{bib:BPT02}.
In Section \ref{sec:ssg}, we first introduce the SSG model and review known results from the viewpoint of an integrable quantum field theory, including scattering and reflection matrices and the corresponding conformal field theory. The method of light-cone regularization is also explained in this section, and properties of the corresponding spin chain are reviewed. We use the method of NLIEs, since the spin-$1$ chain, the lattice model corresponding to the BSSG$^+$ model, is subject to string deviations of $\mathcal{O}(N^{-1})$, which hinder the calculation of physical quantities sensitive to the system size. This method also resolves the problem of how to define a counting function for string solutions \cite{bib:ANS07}, a crucial issue since the ground state of the SSG model is given by two-string roots. In Section \ref{sec:nlie}, we derive the NLIEs of an arbitrary excitation state for the whole regime of boundary parameters. The derivation of NLIEs associated with a higher-spin representation of the $U_q(sl_2)$ algebra is based on $T$-$Q$ relations \cite{bib:S88, bib:B82, bib:S99}, together with the analyticity structure of the $T$-functions given by the eigenvalues of transfer matrices. From the asymptotic behaviors of the NLIEs, counting equations are also derived. The counting equations relate the numbers of excitation particles, through which we discuss the allowed excitations in each regime of boundary parameters in connection with the eigenenergy computed from the NLIEs. In the next section, scattering and reflection amplitudes are discussed by taking the IR limit. The different NLIEs of the three boundary regimes are connected via the boundary bootstrap method, by interpreting a change of analyticity structure as due to the emergence of a boundary bound state. From the symmetries obtained in the reflection amplitudes, we also discuss how the lattice symmetries survive the scaling limit. Then, in Section \ref{sec:UVlim}, the UV limit is considered.
Conformal dimensions are computed for states obtained from the light-cone regularized BSSG$^+$ model, and we show that both the NS and Ramond (R) sectors are obtained. A restriction on the winding number similar to that of the Dirichlet SG case is also obtained, which strongly motivates us to construct a spin chain corresponding to the subspace of the BSSG$^+$ model that cannot be obtained from the conventional light-cone regularization. The last section is devoted to concluding remarks and future works. \section{Infrared limit} \label{sec:IRlim} Scattering and reflection amplitudes can be read off from the IR limit of the NLIEs, given by $m_0 L \rightarrow \infty$. In this limit, the rapidities of bound states are given by the positions of poles of the $S$-matrix, while those of boundary bound states are located at the positions of poles of the reflection matrix. At the same time, the ground-state configuration of Bethe roots forms pure two-strings in the absence of Dirichlet boundaries. However, less is known about the ground state and excitation states under Dirichlet boundaries, besides the emergence of boundary bound states \cite{bib:BPT02}. Therefore, we discuss how the presence of boundary bound states affects the ground-state configurations of Bethe roots from the viewpoint of NLIEs. In the IR limit, a simplification occurs in the NLIEs, since the terms involving $\ln B(\theta)$ become negligibly small \cite{bib:HRS07}. However, the third term in (\ref{NLIE_b}) remains finite, and we obtain a set of NLIEs as follows: \begin{align} \ln b(\theta) = &\int_{-\infty}^{\infty} d\theta'\; G_K(\theta - \theta' - \textstyle\frac{i\pi}{2} + i\epsilon) \ln Y(\theta' - i\epsilon) + 2im_0L \sinh \theta \nonumber \\ &+ iD_{\rm B}(\theta) + iD(\theta) + i \pi C^{(2)}_b, \label{NLIE_IR1} \\ \ln y(\theta) = &iD_{\rm SB}(\theta) + iD_K(\theta) + i\pi C^{(2)}_y.
\label{NLIE_IR2} \end{align} From the quantization condition for holes (\ref{quant_cond_h}), the following equation holds for any hole rapidity $h_j$: \begin{equation} \label{quant_nlie} b(h_j) = -1. \end{equation} On the other hand, a quantization condition in the realm of quantum field theory has been obtained from the boundary condition for phase shifts \cite{bib:K79, bib:AD84}: \begin{equation} \label{quant_qft} e^{2im_0L \sinh \theta_j} R(\theta_j;\xi_+) \cdot \prod_{l=1 \atop l \neq j}^n S(\theta_j - \theta_l) S(\theta_j + \theta_l) \cdot R(\theta_j;\xi_-) = 1, \end{equation} where $\theta_j$ is the rapidity of an SSG soliton. Comparing (\ref{quant_nlie}) with (\ref{quant_qft}), we obtain the following relation between scattering amplitudes and NLIEs: \begin{equation} \begin{split} &\int_{-\infty}^{\infty} d\theta' G_K(h_j - \theta' - \tfrac{i\pi}{2} + i\epsilon) \ln Y(\theta' - i\epsilon) + iD_{\rm B}(h_j) + iD(h_j) + i\pi (C_b^{(2)} + 1) \\ & = \ln R(h_j;\xi_+) + \sum_{l=1 \atop l\neq j}^n \left(\ln S(h_j - h_l) + \ln S(h_j + h_l)\right) + \ln R(h_j;\xi_-), \end{split} \end{equation} which allows us to express scattering and reflection amplitudes in terms of lattice parameters. The counting equations (\ref{counting-a}) and (\ref{counting-b*}) admit $N_H^{\rm eff} = N_1 = 0$ for $H_{\pm} > 1$, which means that no particle is obtained in the ground state. In this regime, both boundary terms belong to the regime (a) of the NLIEs. From now on, we focus on the SG part of the reflection amplitude, as the RSOS part is not concerned with boundary bound states. The reflection amplitudes in the regime (a) were derived in \cite{bib:ANS07}: \begin{align} &\ln R_1^+(\theta) = iF^{(a)}(\theta;H), \label{sg-ref1} \\ &\ln R_2(\theta) = iJ(\theta), \label{susy-ref1} \end{align} which result in the relations given in Table \ref{tab:para-rel}. We denote the boundary-dependent terms of each regime by $F^{(x)}$ and $F_y^{(x)}$ ($x\in\{a,b,c\}$).
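To see concretely how the quantization condition $b(h_j)=-1$ fixes hole rapidities, consider a toy IR limit in which the convolution, boundary, and source terms of (\ref{NLIE_IR1}) are dropped, leaving $\ln b(\theta) \approx 2im_0L\sinh\theta + i\pi \widetilde{C}_b^{(2)}$; then $b(h_j)=-1$ gives $2m_0L\sinh h_j = \pi(2I_j+1-\widetilde{C}_b^{(2)})$ for integers $I_j$. A small sketch of this drastically simplified situation (illustrative values only, not the full NLIE):

```python
import cmath
import math

m0L = 5.0   # dimensionless volume m_0 L (illustrative value)
C_b = 1     # integer playing the role of the constant in the toy phase

def ln_b_over_i(theta):
    # Toy IR phase: convolution, boundary, and source terms dropped
    return 2.0 * m0L * math.sinh(theta) + math.pi * C_b

def hole_rapidity(I):
    # Solve 2 m0L sinh(h) + pi C_b = pi (2 I + 1), i.e. b(h) = -1
    return math.asinh(math.pi * (2 * I + 1 - C_b) / (2.0 * m0L))

for I in (1, 2, 3):
    h = hole_rapidity(I)
    b = cmath.exp(1j * ln_b_over_i(h))
    assert abs(b + 1.0) < 1e-12   # quantization condition b(h_j) = -1
print("holes sit at quantized rapidities")
```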
A factor $2^{-\theta/2\pi}$ appearing in (\ref{ref_susy_int}) does not show up here, since it is removed by a similarity transformation. When $H_+$ reaches $1$ while $H_-$ is kept greater than $1$, the counting equations are solved by $M_C=1$, showing that a non-paired close root emerges. The boundary term of the NLIEs for $H_+$ is given by the regime (b) expressions (\ref{regimeb_b}) and (\ref{regimeb_y}), although the terms for $H_-$ are still given by the regime (a) expressions (\ref{regimea_b}) and (\ref{regimea_y}). This change occurs due to the emergence of a boundary bound state; the boundary-dependent parts of the regime (b) are expressed through those of the regime (a): \begin{equation} \label{Fb} \begin{split} iF^{(b)}(\theta;H) &= \ln R_1^+(\theta) + ig(\theta - \tfrac{i\pi}{2}(1-H)) + ig(\theta + \tfrac{i\pi}{2}(1-H)), \\ iF_y^{(b)}(\theta;H) &= iF_y^{(a)}(\theta;H) + i\widetilde{g}_K(\theta - \tfrac{i\pi}{2}(1-H)) + i\widetilde{g}_K(\theta + \tfrac{i\pi}{2}(1-H)). \end{split} \end{equation} This state is interpreted as a pure two-string state with an imaginary hole at $\theta = \frac{i\pi}{2}(1-H)$, which is a pole of the reflection amplitude (\ref{sg-ref1}). Indeed, the boundary bootstrap principle leads to the relations (\ref{Fb}) \cite{bib:ZZ79}. The boundary energy in Figure \ref{fig:boundary_energy} also supports this interpretation, showing an energy gap $E_{\rm B}^{(b)\to(a)} = m_0 \cos\frac{\pi}{2}(1-H)$ at $H=1$. For $H_+<0$, we still obtain the solution $M_C=1$ from the counting equations, whose imaginary part is, however, greater than $\pi$. Such a root contributes to the counting equation for type-$1$ holes (\ref{counting-a}). The ground-state reflection amplitude in this regime is obtained as \begin{align} &\ln R_1^+(\theta) = iF^{(b)}(\theta;H) - ig_K(\theta - \tfrac{i\pi H}{2}) - ig_K(\theta + \tfrac{i\pi H}{2}), \label{sg-ref2} \end{align} which requires the different parameter relations shown in Table \ref{tab:para-rel}.
Subsequently, the boundary energy becomes negative (Figure \ref{fig:boundary_energy}), giving the ground-state energy. Thus, the ground state in this regime includes a close root at $\tilde{\theta} = \frac{i\pi}{2}(1+H)$. Besides this close root, the state includes a type-$1$ hole, implying the existence of a non-paired root which affects the RSOS indices. Thus, the state for $H_+<0$ has RSOS indices different from those of the ground state obtained for $H_+>0$, and we expect that a soliton state constructed through a light-cone lattice in this regime belongs to a different sector of the superconformal field theory from that for $H_+>0$. When $H_+$ reaches $-1$, the boundary terms for $H_+$ belong to the regime (c), and are expressed by adding two holes and one type-$1$ hole to the two-string state: \begin{equation} \label{Fc} \begin{split} iF^{(c)}(\theta;H) &= iF^{(a)}(\theta;H) + ig(\theta - \tfrac{i\pi}{2}(1-H)) + ig(\theta + \tfrac{i\pi}{2}(1-H)) \\ &+ ig(\theta + \tfrac{i\pi}{2}(1+H)) + ig(\theta - \tfrac{i\pi}{2}(1+H)) + ig_K(\theta - \tfrac{i\pi H}{2}) + ig_K(\theta + \tfrac{i\pi H}{2}), \\ iF_y^{(c)}(\theta;H) &= iF_y^{(a)}(\theta;H). \end{split} \end{equation} Since the boundary terms (\ref{Fb}) describe the ground state for $H_+<0$, we write the boundary-dependent terms as \begin{equation} \label{Fc*} \begin{split} iF^{(c)}(\theta;H) &= iF^{(b)}(\theta;H) + ig(\theta - \tfrac{i\pi}{2}(1+H)) + ig(\theta + \tfrac{i\pi}{2}(1+H)), \\ iF_y^{(c)}(\theta;H) &= iF_y^{(b)}(\theta;H) + i\widetilde{g}_K(\theta + \tfrac{i\pi}{2}(1+H)) + i\widetilde{g}_K(\theta - \tfrac{i\pi}{2}(1+H)). \end{split} \end{equation} Thus, the boundary-dependent terms of the regime (c) describe a one-particle excitation above the ground state that includes a close root. The reflection amplitude (\ref{sg-ref2}) indeed has a pole at $\theta = -\frac{i\pi}{2}(1+H)$. Figure \ref{fig:boundary_energy} also supports this interpretation, showing an energy gap $E_{\rm B}^{(c) \to (b)} = m_0 \cos\frac{\pi}{2}(1+H)$ at $H=-1$.
Finally, $H_+$ reaches $-2$, and we again obtain a ground state with neither particles nor holes. The reflection amplitude on this ground state is then obtained as \begin{align} &\ln R_1^+(\theta) = iF^{(c)}(\theta;H), \label{sg-ref3} \end{align} by resetting the parameter relations as in Table \ref{tab:para-rel}. Besides the solution $N_H^{\rm eff} = N_1 = 0$, the counting equations admit a solution $M_W = 1$ together with $S = -1$. Taking into account that a wide root enters the second determination, one can also write (\ref{Fc}) as \begin{equation} \begin{split} iF^{(c)}(\theta;H) &= \ln R_1^+(\theta) + ig_K(\theta - \tfrac{i\pi H}{2}) + ig_K(\theta + \tfrac{i\pi H}{2}) \nonumber \\ &- ig_{\rm II}(\theta - \tfrac{i\pi}{2}(1+H)) - ig_{\rm II}(\theta + \tfrac{i\pi}{2}(1+H)), \\ iF_y^{(c)}(\theta;H) &= iF_y^{(a)}(\theta;H), \end{split} \end{equation} by regarding the contribution of the two holes in (\ref{Fc}) as that of a wide root. It is natural to consider that a state in this regime is obtained just by a soliton-antisoliton translation from the regime $H_+>1$, and therefore belongs to the same sector of the superconformal field theory. \begin{table} \caption{Relations between the lattice boundary parameter and the QFT boundary parameter. } \label{tab:para-rel} \begin{center} \begin{tabular}{ccc} \hline\hline $H>0$ & $0>H>-2$ & $-2>H$ \\ \hline \\[-2mm] $H = -\frac{2\xi}{\pi\lambda} + \frac{1}{\lambda} + 1$ & $H = -\frac{2\xi}{\pi\lambda} + \frac{1}{\lambda} - 1$ & $H = -\frac{2\xi}{\pi\lambda} - \frac{1}{\lambda} - 3$ \\[2mm] \hline\hline \end{tabular} \end{center} \end{table} Now let us discuss how the symmetries of the discretized system survive after taking the scaling limit. By replacing $H$ in the reflection amplitude on the ground state for $|H+1|>1$ by $-H-\frac{2\pi}{\gamma}-2$, one obtains the antisoliton reflection amplitude (\ref{SG-antiref}): \begin{equation} iF^{(a)}(\theta;-H-\tfrac{2\pi}{\gamma}-2) = iF^{(c)}(\theta;-H-\tfrac{2\pi}{\gamma}-2) = \ln R_1^-(\theta).
\end{equation} The same symmetry is also obtained for $|H+1|<1$ by substituting $-H-2$ for $H$: \begin{equation} \begin{split} &iF^{(b)}(\theta;-H-2) - ig_K(\theta - \tfrac{i\pi}{2}(-H-2)) - ig_K(\theta + \tfrac{i\pi}{2}(-H-2)) \\ &=iF^{(c)}(\theta;-H-2) - ig_K(\theta - \tfrac{i\pi}{2}(-H-2)) - ig_K(\theta + \tfrac{i\pi}{2}(-H-2)) \\ &= \ln R_1^-(\theta). \end{split} \end{equation} This implies that the $S^z \leftrightarrow -S^z$ symmetry, {\it i.e.} the soliton-antisoliton symmetry, survives the continuum limit. \ifx10 \end{document} \fi \section{Nonlinear integral equations} \label{sec:nlie} \subsection{$T$-functions and auxiliary functions} The eigenvalues of the transfer matrix of the LSSG model with Dirichlet boundary conditions (\ref{T-for-SSG}) were found to be written in terms of Bethe roots \cite{bib:ANS07}: \begin{equation} \label{TQ2} T_2(\theta) = \lambda_1(\theta) + \lambda_2(\theta) + \lambda_3(\theta), \end{equation} where \begin{equation} \begin{split} &\lambda_1(\theta) = \sinh\tfrac{\gamma}{\pi}(2\theta - 2i\pi) B_-(\theta - \tfrac{i\pi}{2}) B_-(\theta + \tfrac{i\pi}{2}) \phi(\theta - \tfrac{3i\pi}{2}) \phi(\theta - \tfrac{i\pi}{2}) \frac{Q(\theta + \frac{3i\pi}{2})}{Q(\theta - \frac{i\pi}{2})}, \\ &\lambda_2(\theta) = \sinh\tfrac{\gamma}{\pi}(2\theta) B_+(\theta - \tfrac{i\pi}{2}) B_-(\theta + \tfrac{i\pi}{2}) \phi(\theta - \tfrac{i\pi}{2}) \phi(\theta + \tfrac{i\pi}{2}) \frac{Q(\theta + \frac{3i\pi}{2}) Q(\theta - \frac{3i\pi}{2})}{Q(\theta - \frac{i\pi}{2})Q(\theta + \frac{i\pi}{2})}, \\ &\lambda_3(\theta) = \sinh\tfrac{\gamma}{\pi}(2\theta + 2i\pi) B_+(\theta - \tfrac{i\pi}{2}) B_+(\theta + \tfrac{i\pi}{2}) \phi(\theta + \tfrac{3i\pi}{2}) \phi(\theta + \tfrac{i\pi}{2}) \frac{Q(\theta - \frac{3i\pi}{2})}{Q(\theta + \frac{i\pi}{2})}.
\end{split} \end{equation} The function $\phi(\theta)$ gives a phase shift and is defined by \begin{equation} \phi(\theta) = \sinh^N\tfrac{\gamma}{\pi}(\theta - \Theta) \sinh^N\tfrac{\gamma}{\pi}(\theta + \Theta), \end{equation} while the functions $B_{\pm}(\theta)$ come from boundary effects and depend on the boundary parameters as \begin{equation} B_{\pm}(\theta) = \sinh\tfrac{\gamma}{\pi}(\theta \pm \tfrac{i\pi H_+}{2}) \sinh\tfrac{\gamma}{\pi}(\theta \pm \tfrac{i\pi H_-}{2}). \end{equation} The Bethe-root dependence shows up through the function $Q(\theta)$: \begin{equation} Q(\theta) = \prod_{j=1}^M \sinh\tfrac{\gamma}{\pi}(\theta - \theta_j) \sinh\tfrac{\gamma}{\pi}(\theta + \theta_j), \end{equation} where $\theta_j$ is a Bethe root. Another transfer matrix is defined for the LSSG model, whose eigenvalue is given by \begin{equation} \label{TQ1} T_1(\theta) = l_1(\theta) + l_2(\theta), \end{equation} where \begin{equation} \begin{split} &l_1(\theta) = \sinh\tfrac{\gamma}{\pi}(2\theta + i\pi) B_+(\theta) \phi(\theta + i\pi) \frac{Q(\theta - i\pi)}{Q(\theta)}, \\ &l_2(\theta) = \sinh\tfrac{\gamma}{\pi}(2\theta - i\pi) B_-(\theta) \phi(\theta - i\pi) \frac{Q(\theta + i\pi)}{Q(\theta)}. \end{split} \end{equation} Both $T_1(\theta)$ and $T_2(\theta)$ are symmetric under a sign flip of a Bethe root, $\theta_j \leftrightarrow -\theta_j$, {\it i.e.} Bethe roots are located symmetrically with respect to the origin of the complex plane. Auxiliary functions are defined from $T_2(\theta)$ as \begin{equation} \begin{split} &b(\theta) = \frac{\lambda_1(\theta) + \lambda_2(\theta)}{\lambda_3(\theta)}, \qquad \bar{b}(\theta) = \frac{\lambda_3(\theta) + \lambda_2(\theta)}{\lambda_1(\theta)} = b(-\theta), \\ &B(\theta) = 1 + b(\theta), \qquad \bar{B}(\theta) = 1 + \bar{b}(\theta).
\end{split} \end{equation} Similarly, we define \begin{equation} \begin{split} &a(\theta) = \frac{l_2(\theta)}{l_1(\theta)}, \qquad \bar{a}(\theta) = \frac{l_1(\theta)}{l_2(\theta)} = a(-\theta), \\ &A(\theta) = 1 + a(\theta), \qquad \bar{A}(\theta) = 1 + \bar{a}(\theta). \end{split} \end{equation} The function $A(\theta)$ has zeros at the positions of roots $\theta = \theta_k$, while $B(\theta)$ has zeros at positions which become string centers $\theta = \theta_k \pm \frac{i\pi}{2}$ in the large-volume limit \cite{bib:T82}. Therefore, $\ln a(\theta)$ and $\ln b(\theta)$ are interpreted as ``counting functions'' of real roots and two-string roots, respectively. Based on the algebraic structure of integrable scattering theories, the $T$-system and the $Y$-system have been developed \cite{bib:Z91, bib:RVT93}. These systems provide a systematic way to connect different types of $T$-functions, {\it e.g.} \begin{equation} \label{fusion} \begin{split} &T_1(\theta - \tfrac{i\pi}{2}) T_1(\theta + \tfrac{i\pi}{2}) = f(\theta) + T_0(\theta) T_2(\theta), \\ &T_0(\theta) = \sinh \tfrac{\gamma}{\pi}(2\theta), \end{split} \end{equation} and $Y$-functions, {\it e.g.} \begin{align} &y(\theta) = \frac{T_0(\theta) T_2(\theta)}{f(\theta)}, \label{T-Yrel} \\ &T_1(\theta - \tfrac{i\pi}{2}) T_1(\theta + \tfrac{i\pi}{2}) = f(\theta) Y(\theta), \label{T-Yrel2} \end{align} where $Y(\theta) = 1 + y(\theta)$ and $f(\theta) = l_2(\theta - \tfrac{i\pi}{2}) l_1(\theta + \tfrac{i\pi}{2})$. \subsection{Classification of roots and holes} The logarithm of each factor in the auxiliary functions belongs to a different branch of the Riemann surface depending on the imaginary part of a root or a hole, and therefore we classify roots in the following way: \begin{itemize} \item Inner roots $c^{\rm IN}_j$ ($j \in \{1,\dots,M_{C^{\rm IN}}\}$) s.t. $|{\rm Im}\,c^{\rm IN}_j| \le \frac{\pi}{2} + \epsilon$ \item Close roots $c_j$ ($j \in \{1,\dots,M_C\}$) s.t.
$\frac{\pi}{2} + \epsilon <|{\rm Im}\,c_j| < \frac{3\pi}{2}$ \item Wide roots $w_j$ ($j \in \{1,\dots,M_W\}$) s.t. $\frac{3\pi}{2} < |{\rm Im}\,w_j| \le \frac{\pi^2}{2\gamma}$ \end{itemize} An infinitesimal $\epsilon$ is introduced so that two-string roots are classified as inner roots, {\it i.e.} it is chosen to be greater than the deviations of roots from exact two-string positions. Wide roots s.t. $|{\rm Im}\,w_j| = \frac{\pi^2}{2\gamma}$ are called self-conjugate roots, since they are their own complex conjugates. Note that roots appear in pairs with their complex conjugates, except for real and self-conjugate ones. Quantization conditions are given by the following relations: \begin{align} &{\rm Im}\,\ln b(c_j^{{\rm IN}\uparrow} - \tfrac{i\pi}{2}) = 2\pi (I_{c_j^{{\rm IN}\uparrow}} - \tfrac{1}{2}), \\ &{\rm Im}\,\ln b(c_j^{\uparrow} - \tfrac{i\pi}{2}) = 2\pi (I_{c_j^{\uparrow}} - \tfrac{1}{2}), \\ &{\rm Im}\,\ln b(w_j^{\uparrow} - \tfrac{i\pi}{2}) = 2\pi (I_{w_j^{\uparrow}} - \tfrac{1}{2}), \end{align} where $I_{\theta_j^{\uparrow}}$ ($\theta \in \{c^{\rm IN}, c, w\}$) is an integer-valued quantum number. Here we introduced the notation $\theta_j^{\uparrow}$ for a root with a positive imaginary part. For simplicity, we call a shifted root $\tilde{\theta}^{\uparrow}_j = \theta^{\uparrow}_j - \frac{i\pi}{2}$ an effective root. Similarly, quantization conditions for holes and type-$1$ holes are respectively given by \begin{align} &{\rm Im}\,\ln b(h_j) = 2\pi (I_{h_j} - \tfrac{1}{2}), \hspace{12mm} j \in \{1,\dots,N_H\}, \label{quant_cond_h} \\ &{\rm Im}\,\ln a(h_j) = 2\pi (I_{h_j} - \tfrac{1}{2}), \qquad j \in \{1,\dots,N_1\}. \end{align} When quantum numbers are arranged in increasing order with respect to $j$, a root or a hole which violates monotonicity, {\it i.e.} $I_{\theta_j} < I_{\theta_{j-1}}$, is called a special object.
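For concreteness, this classification can be sketched in a few lines of code. The following is our own illustration (the function name and interface are ours, not from the lattice literature), taking the anisotropy $\gamma$ and the infinitesimal $\epsilon$ as inputs:

```python
import math

def classify_root(theta, gamma, eps=1e-6):
    """Classify a Bethe root by the size of its imaginary part:
    inner:  |Im| <= pi/2 + eps
    close:  pi/2 + eps < |Im| < 3 pi/2
    wide:   3 pi/2 < |Im| <= pi^2 / (2 gamma),
    where wide roots on the line |Im| = pi^2 / (2 gamma)
    are self-conjugate."""
    im = abs(theta.imag)
    strip_edge = math.pi ** 2 / (2.0 * gamma)
    if im <= math.pi / 2.0 + eps:
        return "inner"
    if im < 3.0 * math.pi / 2.0:
        return "close"
    if abs(im - strip_edge) < eps:
        return "self-conjugate"
    if im <= strip_edge:
        return "wide"
    raise ValueError("root outside the physical strip")
```

For example, at the anisotropy $\gamma = \frac{\pi}{5}$ used in the figures, the self-conjugate line sits at $|{\rm Im}\,\theta| = \frac{5\pi}{2}$.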
Here we denote special roots by $s_j$ and $s_j^R$, whose quantization conditions are given by \begin{equation} \begin{split} &{\rm Im} \ln b(\tilde{s}_j^{\uparrow}) = 2\pi (I_{s_j^{\uparrow}} - \tfrac{1}{2}), \qquad j \in \{1,\dots,N_S\}, \\ &{\rm Im} \ln a(s_j^{R \uparrow}) = 2\pi (I_{s_j^{R \uparrow}} - \tfrac{1}{2}), \qquad j \in \{1,\dots,N^{R}_S\}, \end{split} \end{equation} while a special hole and a type-$1$ special hole are denoted by $v_j$ and $v_j^{R}$, respectively, whose quantization conditions are given by \begin{equation} \begin{split} &{\rm Im} \ln b(v_j) = 2\pi (I_{v_j} - \tfrac{1}{2}), \qquad j \in \{1,\dots,N_V\}, \\ &{\rm Im} \ln a(v^R_j) = 2\pi (I_{v^R_j} - \tfrac{1}{2}), \qquad j \in \{1,\dots,N^{R}_V\}. \end{split} \end{equation} \subsection{Cauchy theorem for $T$-functions} The derivation of NLIEs for the ground state of the LSSG model with Dirichlet boundaries was discussed in detail in \cite{bib:ANS07}. Here we derive NLIEs for an arbitrary excited state of the LSSG model, without assuming a string-like distribution of Bethe roots. \begin{figure} \begin{center} \includegraphics[scale=0.75]{contour1} \caption{A contour $\mathcal{C}_1$ is taken to surround the real axis.} \label{fig:contour1} \end{center} \end{figure} Nontrivial equations can be derived from the analyticity structure of the $T$-functions. Since the function $T_2(\theta)$ is analytic and nonzero (ANZ) around the real axis of the complex plane except for the origin and the positions of holes, the Cauchy theorem yields \begin{equation} \label{CauchyT2} \oint_{\mathcal{C}_1} d\theta\; e^{ik\theta} [\ln T_2(\theta)]'' = \frac{2\pi k}{1 - e^{-\pi k}} \Big(1 + \sum_{h_j\in \mathbb{R}} e^{ik h_j}\Big), \end{equation} where the contour $\mathcal{C}_1$ is taken as in Figure \ref{fig:contour1}.
This is an equation for $B(\theta)$, $\bar{B}(\theta)$, and $y(\theta)$, since $T_2(\theta)$ can be expressed in the following two forms besides (\ref{T-Yrel}): \begin{align} T_2(\theta) &= t_+(\theta) \frac{Q(\theta - \frac{3i\pi}{2})}{Q(\theta + \frac{i\pi}{2})} B(\theta) \label{tplus} \\ &= t_-(\theta) \frac{Q(\theta + \frac{3i\pi}{2})}{Q(\theta - \frac{i\pi}{2})} \bar{B}(\theta), \label{tminus} \end{align} where \begin{equation} \label{tpm} t_{\pm}(\theta)=\sinh\tfrac{\gamma}{\pi}(2\theta\pm 2i\pi) B_{\pm} \left(\theta - \tfrac{i\pi}{2}\right) B_{\pm} \left(\theta + \tfrac{i\pi}{2}\right) \phi\left(\theta \pm \tfrac{3i\pi}{2}\right) \phi\left(\theta \pm \tfrac{i\pi}{2}\right). \end{equation} Another nontrivial equation is derived from the ANZ property of $T_1(\theta)$ in ${\rm Im}\theta \in [-\frac{\pi}{2}, \frac{\pi}{2})$ except for the origin and the positions of type-$1$ holes. The function $T_1(\theta)$ shows up in the auxiliary function $b(\theta)$ through \begin{equation} b(\theta) = \frac{T_1(\theta - \frac{i\pi}{2})}{\sinh\frac{\gamma}{\pi}(2\theta + 2i\pi)} \frac{\phi(\theta - \frac{i\pi}{2})}{\phi(\theta + \frac{i\pi}{2}) \phi(\theta + \frac{3i\pi}{2})} \frac{B_-(\theta + \frac{i\pi}{2})}{B_+(\theta - \frac{i\pi}{2}) B_+(\theta + \frac{i\pi}{2})} \frac{Q(\theta + \frac{3i\pi}{2})}{Q(\theta - \frac{3i\pi}{2})}. \end{equation} Applying the Cauchy theorem, we obtain the following equation (Figure \ref{fig:contour2}): \begin{equation} \label{cauchy_t1} \oint_{\mathcal{C}_2} d\theta\; e^{ik\theta} [\ln T_1(\theta)]'' = \frac{2\pi k}{1 - e^{-\pi k}} \Big(1 + \sum_{{\rm Im}h^{(1)}_j\in [-\frac{\pi}{2}, \frac{\pi}{2})} e^{ik h^{(1)}_j}\Big), \end{equation} which gives an NLIE for $b(\theta)$.
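The structure of the right-hand sides of (\ref{CauchyT2}) and (\ref{cauchy_t1}) --- one plane-wave term $e^{ikh_j}$ per zero of a $T$-function inside the contour --- is a direct consequence of the residue theorem. The following toy check (our own illustration, not part of the derivation) verifies this for a function with a single zero at a hypothetical point $z_0$, using $[\ln(\theta - z_0)]'' = -(\theta - z_0)^{-2}$ and a circular contour:

```python
import numpy as np

k = 1.7                       # Fourier variable (arbitrary test value)
z0 = 0.3 + 0.1j               # hypothetical zero inside the contour

n = 20000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
theta = 2.0 * np.exp(1j * t)  # circle of radius 2 around the origin
dtheta = 2.0j * np.exp(1j * t)

# e^{ik theta} [ln(theta - z0)]'' = -e^{ik theta} / (theta - z0)^2
integrand = -np.exp(1j * k * theta) / (theta - z0) ** 2
integral = np.sum(integrand * dtheta) * (2.0 * np.pi / n)

# residue theorem: a double pole at z0 contributes 2 pi k e^{i k z0}
expected = 2.0 * np.pi * k * np.exp(1j * k * z0)
assert abs(integral - expected) < 1e-8
```

Each additional zero inside the contour adds one more such term, which is precisely the origin of the hole sums above.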
\begin{figure} \begin{center} \includegraphics[scale=0.75]{contour2} \caption{A contour $\mathcal{C}_2$.} \label{fig:contour2} \end{center} \end{figure} Thus a set of NLIEs is derived for the LSSG model with Dirichlet boundaries as follows: \begin{align} \ln b(\theta) = &\int_{-\infty}^{\infty} d\theta'\; G(\theta - \theta' - i\epsilon) \ln B(\theta' + i\epsilon) - \int_{-\infty}^{\infty} d\theta'\; G(\theta -\theta' + i\epsilon) \ln \bar{B}(\theta' - i\epsilon) \nonumber \\ &+ \int_{-\infty}^{\infty} d\theta'\; G_K(\theta - \theta' - \textstyle\frac{i\pi}{2} + i\epsilon) \ln Y(\theta' - i\epsilon) + i D_{\rm bulk}(\theta) + i D_{\rm B}(\theta) + i D(\theta) \nonumber \\ &+ C_b^{(1)} \theta + C_b^{(2)} \label{NLIE_b}\\ \ln y(\theta) = &\int_{-\infty}^{\infty} d\theta'\; G_K(\theta - \theta' + {\textstyle\frac{i\pi}{2}} - i\epsilon) \ln B(\theta' + i\epsilon) + \int_{-\infty}^{\infty} d\theta'\; G_K(\theta - \theta' -{\textstyle\frac{i\pi}{2}} + i\epsilon) \ln \bar{B}(\theta' - i\epsilon) \nonumber \\ &+ i D_{\rm SB}(\theta) + i D_K(\theta) + C_y^{(1)} \theta + C_y^{(2)}, \label{NLIE_y} \end{align} where $C_b^{(i)}$ and $C_y^{(i)}$ ($i \in \{1,2\}$) are integration constants which are determined by asymptotic analysis of NLIEs (Appendix \ref{sec:int_const}). Functions $G(\theta)$ and $G_K(\theta)$ are given by \begin{align} G(\theta) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \frac{e^{-ik\theta} \sinh(\frac{\pi}{\gamma} - 3)\frac{\pi k}{2}}{2 \cosh\frac{\pi k}{2} \sinh(\frac{\pi}{\gamma} - 2)\frac{\pi k}{2}}, \qquad G_K(\theta) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \frac{e^{-ik\theta}}{2 \cosh\frac{\pi k}{2}}, \end{align} which correspond to soliton-soliton scattering factors. Indeed, $G(\theta)$ is nothing but the bulk scattering amplitude of the SG model (\ref{SGamp}). A bulk phase shift shows up in $D_{\rm bulk}(\theta)$ as \begin{equation} D_{\rm bulk}(\theta) = 2N \arctan\frac{\sinh \theta}{\cosh \Theta}. 
\end{equation} The particle source term $D(\theta)$ is given by \begin{equation} \begin{split} &D(\theta) = \sum_j c_j \{g_{(j)}(\theta - \tilde{\theta}_j) + g_{(j)}(\theta + \tilde{\theta}_j)\}, \\ &g(\theta) = 2\pi \int_0^{\theta} d\theta'\; G(\theta'), \qquad g_K(\theta) = 2\pi \int_0^{\theta} d\theta'\; G_K(\theta'), \end{split} \end{equation} where $\tilde{\theta}_j$ denotes the effective position of the corresponding root or hole. The function $g_{(j)}$ is defined differently for each type of object: \begin{equation} g_{(j)}(\theta) = \begin{cases} g_{\rm II}(\theta) = g(\theta) + g(\theta - i\pi\, {\rm sign}({\rm Im}\, \theta)) & \text{for wide roots} \\ g(\theta + i\epsilon) + g(\theta - i\epsilon) & \text{for specials} \\ g_K(\theta) & \text{for type-$1$ holes} \\ g(\theta) & \text{otherwise}, \end{cases} \end{equation} together with the choice of $c_j$: \begin{equation} c_{j} = \begin{cases} +1 & \text{for holes} \\ -1 & \text{otherwise}. \end{cases} \end{equation} The kink source term $D_K(\theta)$ is given by \begin{equation} \begin{split} &D_K(\theta) = \mathop{\lim}_{\epsilon \to +0} \widetilde{D}_K(\theta + \textstyle\frac{i\pi}{2} - i\epsilon) \\ &\widetilde{D}_K(\theta) = \sum_j c_j \{g^{(1)}_{(j)}(\theta - \tilde{\theta}_j) + g^{(1)}_{(j)}(\theta + \tilde{\theta}_j)\}, \end{split} \end{equation} where the functions $g^{(1)}_{(j)}(\theta)$ are given by \begin{equation} g^{(1)}_{(j)}(\theta) = \begin{cases} (g_K)_{\rm II}(\theta) = g_K(\theta) + g_K(\theta - i\pi\,{\rm sign}({\rm Im}\, \theta)\,) = 0 & \text{for wide roots} \\ g_K(\theta + i\epsilon) + g_K(\theta - i\epsilon) & \text{for specials} \\ g_K(\theta) & \text{otherwise}, \end{cases} \end{equation} which means that wide roots do not contribute to the kink source term. The functions $D_{\rm B}(\theta)$ and $D_{\rm SB}(\theta)$ are boundary terms which we discuss in the next subsection. For later use, we also derive an NLIE for the auxiliary function $a(\theta)$.
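The kernels entering these NLIEs are given by Fourier integrals, and the simplest of them has an elementary closed form, $G_K(\theta) = \frac{1}{2\pi\cosh\theta}$. This can be checked numerically by direct quadrature (our own illustration; the function names are ours):

```python
import numpy as np

def g_k_numeric(theta, kmax=60.0, n=120001):
    """G_K(theta) = int dk/(2 pi) e^{-ik theta} / (2 cosh(pi k / 2)),
    evaluated by quadrature on a uniform k-grid."""
    k = np.linspace(-kmax, kmax, n)
    integrand = np.exp(-1j * k * theta) / (2.0 * np.cosh(np.pi * k / 2.0))
    # the integrand decays like e^{-pi |k| / 2}, so truncation at kmax
    # and a plain Riemann sum are extremely accurate here
    return (np.sum(integrand) * (k[1] - k[0])).real / (2.0 * np.pi)

def g_k_closed(theta):
    return 1.0 / (2.0 * np.pi * np.cosh(theta))

for th in (0.0, 0.7, 2.3):
    assert abs(g_k_numeric(th) - g_k_closed(th)) < 1e-10
```

The kernel $G(\theta)$ has no equally simple closed form but can be tabulated by the same quadrature.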
Keeping in mind that the real zeros of $T_1(\theta)$ consist of type-$1$ holes, which results in (\ref{cauchy_t1}), we obtain \begin{equation} \label{NLIE_a} \begin{split} \ln a(\theta) &= \int_{-\infty}^{\infty} d\theta'\; G_a(\theta - \theta' + i\epsilon) \ln A(\theta' - i\epsilon) - \int_{-\infty}^{\infty} d\theta'\; G_a(\theta - \theta' - i\epsilon) \ln \bar{A}(\theta' + i\epsilon) \\ &+ iD^{(a)}_{\rm bulk}(\theta) + iD^{(a)}_{\rm B}(\theta) + iD_{\rm a}(\theta) + C_a, \end{split} \end{equation} where $C_a$ is an integration constant derived in Appendix \ref{sec:int_const}. The function $G_a(\theta)$ is given by \begin{equation} G_a(\theta) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \frac{e^{-ik\theta} \sinh(\frac{\pi}{\gamma} - 2)\frac{\pi k}{2}}{2\cosh\frac{\pi k}{2} \sinh(\frac{\pi}{\gamma} - 1)\frac{\pi k}{2}}. \end{equation} The boundary term $iD^{(a)}_{\rm B}(\theta) = F_a(\theta;H_+) + F_a(\theta;H_-) + J_a(\theta)$ depends on the boundary parameters: \begin{equation} F_a(\theta;H) = \int_0^{\theta} d\theta'\; \int_{-\infty}^{\infty} dk\; e^{-ik\theta'} \frac{\sinh({\rm sgn}(H)\,\frac{\pi}{\gamma} - H) \frac{\pi k}{2}}{2\cosh\frac{\pi k}{2} \sinh(\frac{\pi}{\gamma} - 1)\frac{\pi k}{2}}, \end{equation} while the boundary-parameter-independent term $J_a(\theta)$ is given by \begin{equation} J_a(\theta) = \int_0^{\theta} d\theta'\; \int_{-\infty}^{\infty} dk\; e^{-ik\theta'} \frac{\cosh\frac{\pi k}{4} \sinh(\frac{\pi}{\gamma} - 2)\frac{\pi k}{4}}{\cosh \frac{\pi k}{2} \sinh(\frac{\pi}{\gamma} - 1)\frac{\pi k}{4}}. \end{equation} Particle source terms are written as \begin{equation} \label{source-a} D_{\rm a}(\theta) = \sum_j c^{(a)}_j \{ g_{(j)}^{(a)}(\theta - \theta_j) + g_{(j)}^{(a)}(\theta + \theta_j) \}, \end{equation} where $c_j^{(a)}$ is defined by \begin{equation} c_j^{(a)} = \begin{cases} 1 & \text{for type-$1$ holes} \\ -1 & \text{otherwise}.
\end{cases} \end{equation} The function $g_{(j)}^{(a)}(\theta)$ is defined differently for each root or hole: \begin{equation} g_{(j)}^{(a)}(\theta) = \begin{cases} (g_a)_{\rm II}(\theta) = g_a(\theta) + g_a(\theta - i\pi {\rm sgn}({\rm Im}\,\theta)) & \text{for roots satisfying $|{\rm Im}\,\theta_j| > \pi$} \\ g_a(\theta + i\epsilon) + g_a(\theta - i\epsilon) & \text{for specials} \\ g_a(\theta) & \text{otherwise}, \end{cases} \end{equation} where \begin{equation} g_a(\theta) = 2\pi \int_0^{\theta} d\theta'\; G_a(\theta'). \end{equation} The summation in (\ref{source-a}) is taken over $j$ such that $\theta_j$ is a type-$1$ hole, a real special object, or a complex root. The bulk term is obtained as \begin{equation} D^{(a)}_{\rm bulk}(\theta) = N \{g_a(\theta - i\Theta) + g_a(\theta + i\Theta)\}. \end{equation} \subsection{Boundary dependence of NLIEs} Boundary dependence of NLIEs emerges through branch cuts of logarithms. The following integral often appears in NLIEs: \begin{equation} \int_{-\infty}^{\infty} d\theta\; e^{ik\theta} [\ln \sinh\tfrac{\gamma}{\pi}(\theta- i\alpha)]'' = \frac{2\pi k}{1 - e^{-\frac{\pi^2}{\gamma}k}} e^{-(\alpha - \frac{\pi^2}{\gamma}n)k}, \end{equation} in which a branch cut is characterized by an integer $n$ s.t. $0 < {\rm Re}(\alpha - \frac{\pi^2}{\gamma} n ) \le \frac{\pi^2}{\gamma}$. Boundary parameters $H_{\pm}$ appear through the functions $B_{\pm}(\theta)$ in the forms $B_{\pm}(\theta \pm \frac{i\pi}{2})$, and then we need to distinguish three regimes for each of $H_{\pm}$: (a) $1 < H_{\pm} \leq \frac{2\pi}{\gamma} - 1$; (b) $-1 < H_{\pm} \leq 1$; (c) $-\frac{2\pi}{\gamma} + 1 < H_{\pm} \leq -1$.
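As a small bookkeeping aid, the regime assignment can be coded directly from these inequalities (our own sketch; the function name is ours):

```python
import math

def boundary_regime(H, gamma):
    """Return the regime label for a boundary parameter H:
    (a)  1 < H <= 2 pi / gamma - 1
    (b) -1 < H <= 1
    (c) -2 pi / gamma + 1 < H <= -1."""
    top = 2.0 * math.pi / gamma
    if 1.0 < H <= top - 1.0:
        return "a"
    if -1.0 < H <= 1.0:
        return "b"
    if -top + 1.0 < H <= -1.0:
        return "c"
    raise ValueError("H outside the allowed window")
```

For $\gamma = \frac{\pi}{5}$, the values $H = 2.2$, $0.3$, and $-1.8$ used in Figure \ref{fig:zeros} fall in regimes (a), (b), and (c), respectively.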
Boundary terms in the NLIEs, $D_{\rm B}(\theta)$ and $D_{\rm SB}(\theta)$, consist of right-boundary parts and left-boundary parts: \begin{align} &D_{\rm B}(\theta) = F(\theta;H_+) + F(\theta;H_-) + J(\theta) \label{boundary_term_sg} \\ &D_{\rm SB}(\theta) = F_y(\theta;H_+) + F_y(\theta;H_-) + J_K(\theta), \label{boundary_term_susy} \end{align} where $J(\theta)$ and $J_K(\theta)$ do not depend on the boundary parameters: \begin{align} &J(\theta) = \int_0^{\theta} d\theta' \int_{-\infty}^{\infty} dk\; e^{-ik\theta'} \frac{\cosh\frac{\pi k}{4} \sinh (\frac{\pi}{\gamma} - 3) \frac{\pi k}{4}}{\cosh\frac{\pi k}{2} \sinh(\frac{\pi}{\gamma} - 2) \frac{\pi k}{2}}, \\ &J_K(\theta) = 2 \widetilde{g}_K(\theta) = \underset{\epsilon \rightarrow +0}{\lim} 2g_K(\theta + \tfrac{i\pi}{2} - i\epsilon). \end{align} The boundary-parameter-dependent parts $F(\theta;H)$ and $F_y(\theta;H)$ have different forms in the three regimes (a), (b), and (c). \vspace{2mm} \par\noindent \underline{\bf{Regime (a)}} \begin{align} &F(\theta;H) = \int_0^{\theta} d\theta' \int_{-\infty}^{\infty} dk\; e^{-ik\theta'} \frac{\sinh (\frac{\pi}{\gamma} - H) \frac{\pi k}{2}}{2\cosh\frac{\pi k}{2} \sinh (\frac{\pi}{\gamma} - 2) \frac{\pi k}{2}}, \label{regimea_b} \\ &F_y(\theta;H) = 0. \label{regimea_y} \end{align} \underline{\bf{Regime (b)}} \begin{align} &F(\theta;H) = -\int_0^{\theta} d\theta' \int_{-\infty}^{\infty} dk\; e^{-ik\theta'} \frac{\sinh(\frac{\pi}{\gamma} + H - 2) \frac{\pi k}{2}} {2\cosh\frac{\pi k}{2} \sinh (\frac{\pi}{\gamma} - 2) \frac{\pi k}{2}}, \label{regimeb_b} \\ &F_y(\theta;H) = \widetilde{g}_K(\theta - \tfrac{i\pi(1-H)}{2}) + \widetilde{g}_K(\theta + \tfrac{i\pi(1-H)}{2}).
\label{regimeb_y} \end{align} \underline{\bf{Regime (c)}} \begin{align} &F(\theta;H) = -\int_0^{\theta} d\theta' \int_{-\infty}^{\infty} dk\; e^{-ik\theta'} \frac{\sinh (\frac{\pi}{\gamma} + H) \frac{\pi k}{2}}{2\cosh\frac{\pi k}{2} \sinh (\frac{\pi}{\gamma} - 2) \frac{\pi k}{2}}, \label{regimec_b} \\ &F_y(\theta;H) = 0. \label{regimec_y} \end{align} NLIEs for the BSSG$^+$ model are obtained through the light-cone lattice regularization. The original continuum theory is recovered in the scaling limit \cite{bib:IO95}. Since the parameters concerning the scaling limit appear only through the bulk term, NLIEs for the BSSG$^+$ model are obtained just by the following replacement: \begin{equation} 2 N \arctan\frac{\sinh \theta}{\cosh \Theta} \to 2i m_0 L \sinh\theta. \end{equation} \subsection{Eigenenergies} By definition, an eigenenergy of the lattice-regularized BSSG$^+$ model is obtained from the function $T_2(\theta)$ \cite{bib:MNR90}: \begin{equation} E = \frac{1}{4ia} \left( \frac{d}{d\theta} \ln T_2(\theta)\Big|_{\theta = \Theta + \frac{i\pi}{2}} - \frac{d}{d\theta} \ln T_2(\theta)\Big|_{\theta = \Theta - \frac{i\pi}{2}} \right). \end{equation} Using the relation (\ref{T-Yrel}) and the NLIE (\ref{NLIE_y}), we obtain an eigenenergy of the BSSG$^+$ model in the scaling limit: \begin{equation} \label{energy} E = E_{\rm bulk} + E_{\rm B} + E_{\rm ex} + E_{\rm C}, \end{equation} where the bulk energy $E_{\rm bulk}$, the excitation energy $E_{\rm ex}$, and the Casimir energy $E_{\rm C}$ are given by \begin{align} &E_{\rm bulk} = 0, \\ &E_{\rm ex} = m_0 \sum_{j=1}^{N_H} \cosh h_{j} - m_0 \sum_{j=1}^{M_C} \cosh \tilde{c}_j, \\ &E_{\rm C} = \frac{m_0}{2\pi}\, {\rm Im} \int_{-\infty - i\epsilon}^{\infty - i\epsilon} d\theta\, e^{-\theta} \ln \bar{B}(\theta).
\end{align} The boundary energy is given as a function of the boundary parameters: \begin{align} &E_{\rm B} = m_0 + E_{\rm b}(H_+) + E_{\rm b}(H_-), \label{boundary-e} \\ &E_{\rm b}(H) = \begin{cases} 0 & |H| > 1, \\ m_0 \cos\tfrac{\pi(1-H)}{2} & |H| < 1, \end{cases} \end{align} whose behavior is shown in Figure \ref{fig:boundary_energy}. We expect the appearance of a boundary bound state at $H = \pm1$, which causes a gap in the boundary energy function. \begin{figure} \begin{center} \includegraphics[scale=0.5]{boundary_energy} \caption{The boundary energy is depicted by a bold dotted line as a function of the boundary parameter $H$ for anisotropy $\gamma = \frac{\pi}{5}$. The mass parameter $m_0$ is set to $1$. The upper panel shows the behavior of the boundary magnetic fields.} \label{fig:boundary_energy} \end{center} \end{figure} \subsection{Restriction on excitations} Allowed excitations of the BSSG$^+$ model can be discussed through counting equations derived from the NLIEs. Counting equations relate the numbers of different types of particles, {\it i.e.} the numbers of excited particles are not arbitrary but are restricted by the counting equations. \subsubsection{Counting equations} A counting equation for holes is derived by comparing the asymptotic behaviors of both sides of the NLIE (\ref{NLIE_b}). As discussed in Appendix \ref{sec:int_const}, we obtain \begin{equation} \label{counting-b} \begin{split} N_H - 2(N_S + N_V) &= 2S^{\rm tot} + M_C + 2M_W - \delta_B \\ &+ \tfrac{1}{2} ({\rm sgn}(1-H_+) + {\rm sgn}(1-H_-) + {\rm sgn}(1+H_+) + {\rm sgn}(1+H_-)), \end{split} \end{equation} where $\delta_B$ is defined by (\ref{def_deltaB}). There exists another counting equation, for type-$1$ holes, which is derived from the NLIE for the auxiliary function $a(\theta)$ (\ref{NLIE_a}): \begin{equation} \label{counting-a} N_1 - 2(N_S^R + N_V^R) = S^{\rm tot} - M_R + M_{C>\pi} + M_W + \tfrac{1}{2}({\rm sgn}(H_+) + {\rm sgn}(H_-)).
\end{equation} \ifx10 \subsubsection{Counting functions} We first introduce counting functions. Here we use two types of counting functions; Remind that the auxiliary function $B(x)$ has zeros at positions which becomes real string centers in a large volume limit and holes on positive half of the real axis. Then a function $\lfloor\frac{1}{2\pi} {\rm Im}\, \ln b(x) + \frac{1}{2}\rfloor$ counts, in a large volume limit, quantum numbers of two-strings and holes whose maximum value is given in $x \to \infty$: \begin{equation} \label{basym} \left\lfloor \tfrac{1}{2\pi} {\rm Im}\, \ln b(\infty) + \tfrac{1}{2} \right\rfloor = I_{\rm max} = M_2 + N_H - 2(N_S + N_V). \end{equation} Here we denoted the number of two-string centers by $M_2$. The auxiliary function $A(x)$ has zeros at positions which becomes real roots in a large volume limit and type-$1$ holes on a positive half of the real axis. Similarly, we introduce a counting function of real roots and type-$1$ holes by $\lfloor \frac{1}{2\pi} {\rm Im}\, \ln a(x) + \frac{1}{2} \rfloor$ whose maximum value is given by \begin{equation} \label{aasym} \left\lfloor \tfrac{1}{2\pi} {\rm Im}\, \ln a(\infty) + \tfrac{1}{2} \right\rfloor = I_{\rm max}^{(1)} = M_R + N_1 - 2(N_S^R + N_V^R). \end{equation} A parameter $M_R$ denotes the number of real roots, while $N_S^R$ and $N_V^R$ are the numbers of real special objects. \subsubsection{Counting equations} Counting equations are derived by evaluating left-hand sides of (\ref{basym}) and (\ref{aasym}). By choosing a branch cut as in Figure \ref{}, the definition of $a(x)$ leads to \begin{equation} {\rm Im}\, \ln a(\infty) = 2\pi (N -M_I - M_{C<\pi} + \tfrac{1}{2} + \tfrac{1}{2}\,{\rm sgn}(H_+) + \tfrac{1}{2}\,{\rm sgn}(H_-)) - 2\omega, \end{equation} where $M_{C<\pi}$ is the number of close roots whose imaginary parts are less than $\pi$. A parameter $\omega$ is introduced in \ref{}. 
Thus we obtain a counting equation for type-$1$ holes: \begin{equation} \label{sum1} N_1 - 2 (N_S^R + N_V^R) = S - M_R + M_{C > \pi} + M_W + \tfrac{1}{2}\{{\rm sgn}(H_+) + {\rm sgn}(H_-)\} - \left\lfloor \tfrac{\omega}{\pi} \right\rfloor, \end{equation} where $M_{C>\pi}$ is the number of close roots whose imaginary parts are greater than $\pi$. On the other hand, the definition of $b(x)$ leads to \begin{equation} \begin{split} {\rm Im} \ln b(\infty) &= 2\pi( 2N - 2M_I - M_2 - M_C + 1 + \tfrac{\delta_a}{2} - \lfloor \tfrac{\omega}{2\pi} + \tfrac{\delta_a}{2} \rfloor\\ &+ \tfrac{1}{2}\,{\rm sgn}(H_+-1) + \tfrac{1}{2}\,{\rm sgn}(H_++1) + \tfrac{1}{2}{\rm sgn}(H_--1) + \tfrac{1}{2}\,{\rm sgn}(H_-+1) ) - 3\omega, \end{split} \end{equation} where $\delta_a$ is given by \begin{equation} \delta_a = \begin{cases} 0 & \cos\omega > 0, \\ 1 & \cos\omega < 0. \end{cases} \end{equation} Thus we obtain a counting equation for holes: \begin{equation} \label{sum2} \begin{split} N_H - 2(N_S + N_V) &= 2S + M_C + 2M_W - \lfloor \tfrac{\omega}{2\pi} + \tfrac{\delta_a}{2} \rfloor - \lfloor \tfrac{3\omega}{2\pi} - \tfrac{\delta_a}{2} - \tfrac{1}{2} \rfloor\\ &+ \tfrac{1}{2}\{{\rm sgn}(H_+ - 1) + {\rm sgn}(H_+ + 1) + {\rm sgn}(H_- - 1) + {\rm sgn}(H_- + 1)\}. \end{split} \end{equation} \fi In the scaling limit, a special object or a self-conjugate root emerges exactly when $\gamma$-dependent terms raise their values by $1$ \cite{bib:DV97}. Therefore, one can regard $N_H^{\rm eff}$ defined by the following as the number of SSG solitons: \begin{equation} N_H^{\rm eff} = N_H - 2(N_S + N_V) - 2M_{SC} -\delta_B. \end{equation} Then the counting equation for holes is written in terms of $N_H^{\rm eff}$: \begin{align} N_H^{\rm eff} &= 2S^{\rm tot} + M_C + 2(M_W - M_{SC}) \nonumber \\ &+ \tfrac{1}{2} \{{\rm sgn}(H_+ - 1) + {\rm sgn}(H_+ + 1) + {\rm sgn}(H_- - 1) + {\rm sgn}(H_- + 1)\}.
\label{counting-b*} \end{align} \subsubsection{Allowed excitations} Using (\ref{counting-a}) and (\ref{counting-b*}), we discuss allowed excitations. As we chose a lattice system consisting of an even number of sites, it is obvious that $S^{\rm tot}$ takes only integer values. For $H_+>1$ and $H_-<-1$, $N_H^{\rm eff}$ takes only even integers, since close roots always appear in complex conjugate pairs. The counting equations admit a state with no particles and holes under $S^{\rm tot}=0$, which implies that the ground state is given by a pure two-string state. When $H_+$ crosses $1$ while keeping $H_-<-1$, $N_H^{\rm eff}$ takes odd integers. In particular, the solution $N_H^{\rm eff}=1$ describes the state obtained in Figure \ref{fig:boundary_energy}, which corresponds to a one-particle excitation at the boundary. A numerical study (Figure \ref{fig:zeros}) shows that the rapidity of this boundary bound state is fixed at $\theta = \frac{i\pi(1-H_+)}{2}$, up to a small deviation which vanishes exponentially as the system length increases. At $H_+=0$, the energy function is analytically continued in Figure \ref{fig:boundary_energy}. However, for $H_+<0$ the energy function takes a negative value, implying that it contributes to the ground-state energy. Together with the fact that $N_1$ takes different values for positive and negative $H_+$, we expect that states obtained in this regime belong to a different sector of a superconformal field theory from that for $H_+>0$. Finally, when $H_+$ reaches $-1$, the counting equations again admit only even values of $N_H^{\rm eff}$, meaning that only excitations with an even number of particles are allowed. If one starts the discussion from $H_{\pm}>1$, the ground state is characterized by two-string roots but with $S^{\rm tot}=-1$. In the spin-chain picture, this is understood as follows: by polarizing the outermost spins at both ends in the same direction, spins interact freely with their neighbors only on the bulk $N-2$ sites.
This gives rise to the emergence of a spinon, which results in a ground state with $S^{\rm tot}=-1$. Except for the value of $S^{\rm tot}$, the discussion for $H_->1$ proceeds quite similarly to the case $H_-<-1$, due to the symmetry $H_- \leftrightarrow -H_- -2$. For $1>H_{\pm}>0$, the outermost spins are trapped at both boundaries. Each of them can be released separately, resulting in odd values of $N_H^{\rm eff}$. Thus, the non-analyticity of the boundary energy shows up due to a change in the root configuration of the ground state, which directly affects the analyticity structure of the $T$-functions and consequently the NLIEs. We support this statement later in the discussion of reflection amplitudes obtained in the IR limit. \begin{figure} \begin{center} \subfigure[Analyticity structure of $T_1(\theta)$ (left) and $T_2(\theta)$ (right) for $H_+=1.5$ and $H_-=2.2$.] {\includegraphics[scale=0.65]{Hm15_Hp22}} \label{fig:zeros-a} \subfigure[Analyticity structure of $T_1(\theta)$ (left) and $T_2(\theta)$ (right) for $H_+=1.5$ and $H_-=0.3$.] {\includegraphics[scale=0.65]{Hm15_Hp03}} \label{fig:zeros-b} \subfigure[Analyticity structure of $T_1(\theta)$ (left) and $T_2(\theta)$ (right) for $H_+=1.5$ and $H_-=-1.8$.] {\includegraphics[scale=0.65]{Hm15_Hp-18}} \label{fig:zeros-d} \end{center} \caption{The analyticity structure of $T_1(\theta)$ and $T_2(\theta)$ is plotted for the three regimes of boundary parameters. One of the boundary parameters is fixed at $H_+ = 1.5$. Zeros of the $T$-functions are depicted by black dots and roots by gray dots. These plots are calculated for a system of length $N=8$ with $4$ pairs of two-string roots in the homogeneous and isotropic limit. } \label{fig:zeros} \end{figure} \ifx10 \end{document} \fi \section{SSG model with Dirichlet boundary conditions}\label{sec:ssg} The SSG model is an integrable one-dimensional quantum field theory consisting of a real scalar field $\Phi$ and a Majorana fermion $\Psi$.
On a finite system of size $L$, the action of the SSG model is given by \begin{equation}\label{SSG_action} \begin{split} &\mathcal{A}_{\rm SSG} = \int_{-\infty}^{\infty} dt \int_0^L dx\; \mathcal{L}_{\rm SSG}(x;t), \\ &\mathcal{L}_{\rm SSG} = \frac{1}{2} \partial_{\mu} \Phi \partial^{\mu} \Phi + \frac{i}{2} \bar{\Psi} \gamma^{\mu} \partial_{\mu} \Psi - \frac{m_0}{2} \cos(\beta \Phi) \bar{\Psi} \Psi + \frac{m_0^2}{2\beta^2} \cos^2(\beta \Phi), \end{split} \end{equation} where \begin{equation} \Psi = \begin{pmatrix} \psi \\ \bar{\psi} \end{pmatrix}, \quad \gamma^0 = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}, \quad \gamma^1 = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}. \end{equation} The mass parameter $m_0$, determined in such a way as to realize a proper scaling limit \cite{bib:DV88}, is related to the physical soliton mass via the relation found in \cite{bib:BF98}. The theory behaves differently depending on the value of the coupling constant $\beta$: in the attractive regime ($0<\beta^2<\frac{4\pi}{3}$), solitons form bound states called breathers, while the repulsive regime ($\frac{4\pi}{3}<\beta^2<4\pi$) does not admit breathers. Throughout this paper, we concentrate on the repulsive regime. In addition, we impose the Dirichlet boundary conditions \cite{bib:IOZ95}: \begin{equation} \label{dirichlet} \begin{split} &\Phi (0;t) = \Phi_-, \hspace{8mm} \Psi(0;t) \mp \bar{\Psi}(0;t) = 0, \\ &\Phi (L;t) = \Phi_+, \qquad \Psi(L;t) \mp \bar{\Psi}(L;t) = 0. \end{split} \end{equation} Following the notation used in \cite{bib:BPT02*}, we call the models with the conditions given by (\ref{dirichlet}) the BSSG$^{\pm}$ models, respectively. \subsection{SSG model as a perturbed CFT} From the viewpoint of renormalization group theory, the SSG model is considered as a perturbation of an $\mathcal{N}=1$ superconformal field theory consisting of free bosons and free fermions compactified on a cylinder with radius $R = \frac{4\sqrt{\pi}}{\beta}$.
The third term in the Lagrangian (\ref{SSG_action}) is a perturbation which is irrelevant in the UV limit, given as the small-volume limit ($L \rightarrow 0$). The bosonic part is also obtained from the SG model, while the fermionic part comes from the tricritical Ising model \cite{bib:AK96, bib:NA02}. \subsubsection{Free boson} A free boson theory compactified at radius $R$ is defined by the following action: \begin{equation} \mathcal{A}_{\text{FB}} = \frac{1}{8\pi} \int_{-\infty}^{\infty}d\tau \int_0^{2\pi} d\sigma\; \partial_{\mu} \varphi \partial^{\mu} \varphi, \end{equation} which is identified with the first term of the SSG action (\ref{SSG_action}) by the relation $\Phi = \frac{1}{\sqrt{4\pi}}\varphi$. The conformal boson has a $\widehat{U}(1) \times \widehat{U}(1)$ symmetry. Applying a conformal map from a cylinder onto a complex plane (Figure \ref{fig:CFT_map}): \begin{equation} \sigma = \frac{1}{2i} (\ln z - \ln \bar{z}), \qquad \tau = \frac{1}{2i} (\ln z + \ln \bar{z}), \end{equation} the boson field is decomposed into a holomorphic part and an anti-holomorphic part: \begin{equation} \varphi(z, \bar{z}) = \frac{1}{2}(\phi(z) + \bar{\phi}(\bar{z})). \end{equation} \begin{figure} \begin{center} \includegraphics[scale=0.65]{CFT_mapping} \caption{A conformal map from a cylinder onto a complex plane.} \label{fig:CFT_map} \end{center} \end{figure} Subsequently, the mode expansion of the boson field is obtained as \begin{equation} \phi(z) = Q - ia_0 \ln z + i \sum_{n\neq 0} \frac{1}{n} z^{-n} a_n, \qquad \bar{\phi}(\bar{z}) = Q - i\bar{a}_0 \ln \bar{z} + i \sum_{n\neq 0} \frac{1}{n} \bar{z}^{-n} \bar{a}_n, \end{equation} where $Q$ is the zero mode of $\varphi(z,\bar{z})$. The bosonic modes satisfy the commutation relations \begin{equation} [a_k,\, a_l] = k \delta_{k+l}, \qquad [a_k,\, \bar{a}_l] = 0, \qquad [\bar{a}_k,\, \bar{a}_l] = k \delta_{k+l}.
\end{equation} The space of states of a free boson theory is spanned by highest weight vectors and their descendants created by bosonic modes with negative labels: \begin{equation} \begin{split} \underset{m,n \in \mathbb{Z}}{\oplus} \underset{p_i, q_j > 0}{\oplus} \prod_i \bar{a}_{-p_i} \prod_j a_{-q_j} |m, n \rangle, \end{split} \end{equation} where a highest weight vector $|m, n \rangle$ is created from the vacuum state $|0, 0 \rangle$ by applying a vertex operator: \begin{align} &|m, n \rangle = V_{(m, n)}(z, \bar{z}) |0, 0 \rangle, \\ &V_{(m, n)} = :e^{i(mR + \frac{n}{R}) \phi(z) + i(mR - \frac{n}{R}) \bar{\phi}(\bar{z})}:. \end{align} Here we use the notation $:*:$ for normal ordering. Therefore, infinitely many highest weight vectors are obtained in a free boson theory. A characteristic quantity of a conformal state $|m, n \rangle$ is its conformal dimension, which shows up in the energy as a function of the system size: \begin{align} &E(L) = -\tfrac{\pi}{6L} (1- 12(\Delta^+_{\rm FB} + \Delta^-_{\rm FB})) + \mathcal{O}(L^{-2}), \\ &\Delta_{\text{FB}}^{\pm} = \tfrac{1}{2} \left(mR \pm \tfrac{n}{R}\right)^2. \end{align} If one imposes the Dirichlet boundary conditions (\ref{dirichlet}), the boson field should satisfy the following conditions: \begin{equation} \tfrac{1}{\sqrt{4\pi}} \varphi(z,\bar{z})|_{\sigma=0} = \Phi_-, \qquad \tfrac{1}{\sqrt{4\pi}} \varphi(z,\bar{z})|_{\sigma=\frac{L}{R}} = \Phi_+, \end{equation} which lead to $\bar{a}_n = -a_n$. As a result, the theory is described only by a holomorphic part, and the momentum part of the conformal dimension vanishes: \begin{equation} \Delta_{\rm BFB} = \tfrac{1}{2} \left(\tfrac{1}{\sqrt{\pi}}(\Phi_+ - \Phi_-) + mR\right)^2. \end{equation} Consequently, the energy is obtained as \begin{equation} \label{energy_BFB} E(L) = -\tfrac{\pi}{24L} (1 - 24\Delta_{\rm BFB}) + \mathcal{O}(L^{-2}).
\end{equation} \subsubsection{Free fermion} A free fermion theory appears in the SSG theory, coupled to the boson through the second term of (\ref{SSG_action}); the free action is given by \begin{equation} \mathcal{A}_{\text{FF}} = \iint\frac{dz d\bar{z}}{2\pi}\Big(\psi \frac{\partial}{\partial\bar{z}} \psi + \bar{\psi} \frac{\partial}{\partial z} \bar{\psi}\Big). \end{equation} The mode expansion of the fermion fields is given by \begin{equation} \psi(z) = \sum_{n \in \mathbb{Z} + r} b_n z^{-n -1/2}, \qquad \bar{\psi}(\bar{z}) = \sum_{n \in \mathbb{Z} + r} \bar{b}_n \bar{z}^{-n -1/2}, \end{equation} where $r$ is in principle a free parameter but takes only the values $0$ or $\frac{1}{2}$ under compactification with an arbitrary radius. In the case of $r=\frac{1}{2}$, the theory results in the NS sector, {\it i.e.} a periodic boundary condition for the fermion part of the superconformal field theory, while $r=0$ leads to the R sector, {\it i.e.} an anti-periodic boundary condition. The fermionic modes satisfy the anti-commutation relations \begin{equation} \{b_s,\, b_t\} = \delta_{s+t}, \qquad \{b_s,\, \bar{b}_t\} = 0, \qquad \{\bar{b}_s,\, \bar{b}_t\} = \delta_{s+t}. \end{equation} The space of states of a free fermion theory is spanned by \begin{equation} \begin{split} \underset{\hat{f} \in \mathcal{V}}{\oplus} \underset{p_i, q_j > 0}{\oplus} \prod_i \bar{b}_{-p_i} \prod_j b_{-q_j} \hat{f}(z, \bar{z}) |0,0 \rangle, \end{split} \end{equation} where the highest weight vectors are constructed from the vacuum by applying an operator $\hat{f}(z,\bar{z}) \in \mathcal{V} = \{\mathbb{I}, \psi(z)\bar{\psi}(\bar{z}), \sigma(z,\bar{z})\}$.
One may notice that, unlike the free boson theory, there are only three highest weight vectors, whose conformal dimensions are given by \begin{equation} \begin{split} (\Delta_{\mathbb{I}}^+, \Delta_{\mathbb{I}}^-) = (0,0), \qquad (\Delta_{\psi \bar{\psi}}^+, \Delta_{\psi \bar{\psi}}^-) = (\tfrac{1}{2}, \tfrac{1}{2}), \qquad (\Delta_{\sigma}^+, \Delta_{\sigma}^-) = (\tfrac{1}{16}, \tfrac{1}{16}), \end{split} \end{equation} where the first two belong to the NS sector, while the last belongs to the R sector. The energy is then expressed in terms of the conformal dimensions: \begin{equation} E(L) = -\tfrac{\pi}{6L}\left(\tfrac{1}{2} - 12(\Delta^+ + \Delta^-)\right) + \mathcal{O}(L^{-2}), \end{equation} from which the two sectors of the superconformal field theory are distinguished. If one imposes the Dirichlet boundary conditions (\ref{dirichlet}), we obtain $b_n = \pm \bar{b}_n$. As a result, the free fermion theory is also described only by a holomorphic part, and the energy is given by \begin{equation} \label{energy_BFF} E(L) = -\tfrac{\pi}{24L} \left(\tfrac{1}{2} - 24 \Delta\right) + \mathcal{O}(L^{-2}), \end{equation} where the conformal dimension $\Delta$ takes either $0$ or $\frac{1}{2}$ in the NS sector and $\frac{1}{16}$ in the R sector. \subsection{Scattering theory of the SSG model} Supersymmetric solitons are described by non-commuting symbols $A_{a_j a_{j+1}}^{\epsilon_j}$, where the superscript represents a soliton charge $\epsilon_j \in \{\pm\}$ and the subscripts represent RSOS indices $a_j, a_{j+1} \in \{0,\pm 1\}$ obeying the adjacency condition $|a_j - a_{j+1}| = 1$.
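As a quick numerical illustration, the boundary energies (\ref{energy_BFB}) and (\ref{energy_BFF}) are elementary functions of the conformal dimensions; the following Python sketch evaluates them (the function names and the sample values of $\Phi_\pm$, $m$ and $R$ are hypothetical choices for illustration):

```python
import math

def energy_boson(L, phi_p, phi_m, m, R):
    """Boundary free-boson energy (energy_BFB) to O(1/L),
    with Delta_BFB = ((Phi_+ - Phi_-)/sqrt(pi) + m*R)^2 / 2."""
    delta = 0.5 * ((phi_p - phi_m) / math.sqrt(math.pi) + m * R) ** 2
    return -math.pi / (24 * L) * (1 - 24 * delta)

def energy_fermion(L, delta):
    """Boundary free-fermion energy (energy_BFF) to O(1/L);
    delta is 0 or 1/2 in the NS sector and 1/16 in the R sector."""
    return -math.pi / (24 * L) * (0.5 - 24 * delta)

L = 1.0
# zero winding and equal boundary values: Delta_BFB = 0, so E = -pi/(24 L)
print(energy_boson(L, 0.2, 0.2, 0, 1.5))
# R sector: E = -pi/(24 L) * (1/2 - 3/2) = +pi/(24 L)
print(energy_fermion(L, 1 / 16))
```

In this convention the NS ground state ($\Delta = 0$) lowers the Casimir energy, while the R-sector dimension $\frac{1}{16}$ raises it.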
\subsubsection{Bulk $S$-matrix} Corresponding to soliton-soliton scattering, the following commutation relations are obtained: \begin{equation} A_{ab}^{\epsilon_1}(\theta_1) A_{bc}^{\epsilon_2}(\theta_2) = \sum_{\epsilon_1', \epsilon_2'} \sum_{d} S^{\epsilon_1 \epsilon_2}_{\epsilon_1' \epsilon_2'}|^{ac}_{bd} (\theta_1 - \theta_2) A_{ad}^{\epsilon_2'}(\theta_2) A_{dc}^{\epsilon_1'}(\theta_1) \end{equation} with a parameter $\theta_j$ as rapidity of a supersymmetric soliton. As a known fact, the $S$-matrix of the SSG model is decomposed into a tensor product of the SG part and the RSOS part: \begin{equation} S^{\epsilon_1 \epsilon_2}_{\epsilon_1' \epsilon_2'}|^{ac}_{bd} (\theta) = S^{\epsilon_1 \epsilon_2}_{\epsilon_1' \epsilon_2'}(\theta) \times S^{ac}_{bd}(\theta). \end{equation} As a result of integrability, each $S$-matrix satisfies the Yang-Baxter equation: \begin{align} &S^{\epsilon_1 \epsilon_2}_{\epsilon_1' \epsilon_2'}(\theta_1 - \theta_2) S^{\epsilon_1' \epsilon_3}_{\epsilon_1'' \epsilon_3'}(\theta_1 - \theta_3) S^{\epsilon_2' \epsilon_3'}_{\epsilon_2'' \epsilon_3''}(\theta_2 - \theta_3) = S^{\epsilon_2 \epsilon_3}_{\epsilon_2' \epsilon_3'}(\theta_2 - \theta_3) S^{\epsilon_1 \epsilon_3'}_{\epsilon_1' \epsilon_3''}(\theta_1 - \theta_3) S^{\epsilon_1' \epsilon_2'}_{\epsilon_1'' \epsilon_2''}(\theta_1 - \theta_2), \\ &S^{ac}_{bg}(\theta_1 - \theta_2) S^{gd}_{ce}(\theta_1 - \theta_3) S^{ae}_{gf}(\theta_2 - \theta_3) = S^{bd}_{cg'}(\theta_2 - \theta_3) S^{ag'}_{bf}(\theta_1 - \theta_3) S^{fd}_{g'e}(\theta_1 - \theta_2), \end{align} from which the exact $S$-matrix is derived. 
A solution to the SG part has been obtained in \cite{bib:ZZ79, bib:Z77}: \begin{align} &S_{\epsilon \epsilon}^{\epsilon \epsilon}(\theta) = S(\theta), \\ &S_{\epsilon -\epsilon}^{\epsilon -\epsilon}(\theta) = \frac{\sinh \lambda\theta}{\sinh \lambda(i\pi - \theta)} S(\theta), \quad S_{\epsilon -\epsilon}^{-\epsilon \epsilon}(\theta) = i \frac{\sin \pi\lambda}{\sinh \lambda(i\pi - \theta)} S(\theta), \end{align} where $\epsilon \in \{\pm\}$; this solution is closely related to the $R$-matrix of the six-vertex model. The overall factor $S(\theta)$ is obtained, by setting $u=i\theta$, as \begin{align} S(\theta) &= -\prod_{l=1}^{\infty} \frac{\Gamma(2(l-1)\lambda - \frac{\lambda u}{\pi}) \Gamma (2l\lambda +1 -\frac{\lambda u}{\pi})} {\Gamma ((2l-1)\lambda - \frac{\lambda u}{\pi}) \Gamma ((2l-1)\lambda + 1 - \frac{\lambda u}{\pi})}/(u\rightarrow -u) \label{SGamp} \\ &= \exp\left[ i \int_0^{\infty} \frac{dt}{t} \frac{\sin \frac{\theta t}{\pi} \sinh(\frac{1}{\lambda} - 1)\frac{t}{2}}{\cosh\frac{t}{2} \sinh\frac{t}{2\lambda}} \right]. \label{SGamp_int} \end{align} The parameter $\lambda$ is determined by the coupling constant $\beta$ via $\lambda = \frac{2\pi}{\beta^2} - \frac{1}{2}$ \cite{bib:ANS07}. A solution to the RSOS part is also obtained in \cite{bib:GZ94, bib:A91, bib:AK96, bib:NA02} as \begin{equation} \label{RSOSamp*} S^{ac}_{bd}(\theta) = X^{ac}_{bd}(\theta) K(\theta), \end{equation} where \begin{equation} \begin{split} &X^{\sigma \sigma}_{0 0} (\theta) = 2^{(i\pi - \theta)/2\pi i} \cos\left(\frac{\theta}{4i} - \frac{\pi}{4}\right), \qquad X^{0 0}_{\sigma \sigma} (\theta) = 2^{\theta/2\pi i} \cos\left(\frac{\theta}{4i}\right), \\ &X^{\sigma -\sigma}_{0 0} (\theta) = 2^{(i\pi - \theta)/2\pi i} \cos\left(\frac{\theta}{4i} + \frac{\pi}{4}\right), \qquad X^{0 0}_{\sigma -\sigma} (\theta) = 2^{\theta/2\pi i} \cos\left(\frac{\theta}{4i} - \frac{\pi}{2}\right), \end{split} \end{equation} with $\sigma\in\{\pm 1\}$.
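One can check the Yang-Baxter equation and the matrix unitarity of the SG amplitudes numerically; a minimal sketch (the value of $\lambda$ is an arbitrary sample in the repulsive regime, and the overall factor $S(\theta)$, which cancels in the Yang-Baxter equation, is stripped):

```python
import numpy as np

lam = 0.35                      # sample value of lambda in the repulsive regime
I2 = np.eye(2, dtype=complex)
# swap operator on C^2 (x) C^2
P = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)

def smat(theta):
    """Matrix part of the SG soliton S-matrix; the scalar factor S(theta) is omitted."""
    d = np.sinh(lam * (1j * np.pi - theta))
    b = np.sinh(lam * theta) / d            # transmission amplitude
    c = 1j * np.sin(np.pi * lam) / d        # reflection amplitude
    return np.array([[1, 0, 0, 0], [0, b, c, 0], [0, c, b, 0], [0, 0, 0, 1]])

def R12(t): return np.kron(smat(t), I2)
def R23(t): return np.kron(I2, smat(t))
def R13(t):
    swap23 = np.kron(I2, P)                 # exchange tensor factors 2 and 3
    return swap23 @ np.kron(smat(t), I2) @ swap23

t1, t2 = 0.7, -0.4                          # sample rapidity differences
ybe = R12(t1) @ R13(t1 + t2) @ R23(t2) - R23(t2) @ R13(t1 + t2) @ R12(t1)
uni = smat(0.9) @ smat(-0.9) - np.eye(4)
print(np.max(np.abs(ybe)), np.max(np.abs(uni)))   # both of order machine precision
```

The residuals vanish up to rounding error, reflecting that the matrix part is the symmetric six-vertex $R$-matrix with an additive spectral parameter.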
The overall factor $K(\theta)$ is given by \begin{align} K(\theta) &= \frac{1}{\sqrt{\pi}} \prod_{k=1}^{\infty} \frac{\Gamma (k-\frac{1}{2}+\frac{\theta}{2\pi i}) \Gamma (k-\frac{\theta}{2\pi i})} {\Gamma(k+\frac{1}{2}-\frac{\theta}{2\pi i}) \Gamma (k+\frac{\theta}{2\pi i})} \label{RSOSamp} \\ &= \frac{-i}{\sqrt{2} \sinh\frac{\theta - i\pi}{4}} \exp\left[ i \int_{0}^{\infty} \frac{dt}{t} \frac{\sin \frac{\theta t}{\pi} \sinh \frac{3t}{2}}{\sinh 2t \cosh\frac{t}{2}} \right]. \label{RSOSamp_int} \end{align} \subsubsection{Boundary $S$-matrix} In a finite and non-periodic system, a soliton is reflected at a boundary with a reflection amplitude obtained from the following algebraic relation: \begin{equation} A_{ab}^{\epsilon}(\theta) B = \sum_c \sum_{\epsilon'} R_{\epsilon'}^{\epsilon}|_{ac}^b A_{bc}^{\epsilon'}(-\theta) B. \end{equation} Here $B$ denotes a boundary creation operator. As in the case of the bulk $S$-matrix, the reflection matrix of the SSG model is also written as a tensor product of the SG part and the RSOS part: \begin{equation} R_{\epsilon'}^{\epsilon}|_{ab}^c(\theta) = R_{\epsilon'}^{\epsilon}(\theta) \times R_{ab}^c(\theta).
\end{equation} For integrable boundaries such as the Dirichlet boundary conditions, each reflection matrix independently satisfies the reflection relation: \begin{align} &S^{\epsilon_1 \epsilon_2}_{\epsilon_2' \epsilon_1'} (\theta_1 - \theta_2) R^{\epsilon_2'}_{\epsilon_2''} (\theta_2) S^{\epsilon_2'' \epsilon_1'}_{\epsilon_1'' \epsilon_2'''} (\theta_1 + \theta_2) R^{\epsilon_1''}_{\epsilon_1'''} (\theta_1) = R^{\epsilon_1}_{\epsilon_1'} (\theta_1) S^{\epsilon_1' \epsilon_2}_{\epsilon_2'' \epsilon_1'} (-\theta_1 - \theta_2) R^{\epsilon_2'}_{\epsilon_2''} (\theta_2) S^{\epsilon_2' \epsilon_1''}_{\epsilon_1''' \epsilon_2'''} (-\theta_1 + \theta_2), \label{refrel_sg} \\ &S^{ac}_{bf} (\theta_1 - \theta_2) R_{ag}^f (\theta_2) S^{gc}_{fd} (\theta_1 + \theta_2) R_{ge}^d (\theta_1) = R_{af'}^b (\theta_1) S^{f'c}_{bg'} (-\theta_1 - \theta_2) R_{f'e}^{g'} (\theta_2) S^{ec}_{g'd} (-\theta_1 + \theta_2). \label{refrel_susy} \end{align} Under Dirichlet boundaries, the solution of the reflection relation obtained in \cite{bib:GZ94} has only diagonal elements: \begin{equation} R_{\pm}^{\pm}(\theta) = \cos(\xi \pm \lambda u) R_0(u) \frac{\sigma(\theta, \xi)}{\cos \xi}, \end{equation} where $R_0(u)$ is given by \begin{equation} R_0(u)= \prod_{l=1}^{\infty} \left[ \frac{\Gamma(4l\lambda - \frac{2\lambda u}{\pi}) \Gamma (4\lambda (l-1) + 1 -\frac{2\lambda u}{\pi})} {\Gamma((4l-3) \lambda - \frac{2\lambda u}{\pi}) \Gamma((4l-1)\lambda +1 -\frac{2\lambda u}{\pi})} / (u\rightarrow -u) \right].
\end{equation} The overall factor $\sigma(\theta,\xi)$ is written in terms of $\Gamma$-functions: \begin{equation} \sigma(\theta, \xi) = \frac{\cos\xi}{\cos(\xi+\lambda u)} \prod_{l=1}^{\infty}\left[ \frac{\Gamma(\frac{1}{2} + \frac{\xi}{\pi} + (2l-1)\lambda -\frac{\lambda u}{\pi}) \Gamma (\frac{1}{2} - \frac{\xi}{\pi} + (2l-1)\lambda -\frac{\lambda u}{\pi})} {\Gamma(\frac{1}{2} - \frac{\xi}{\pi} + (2l-2)\lambda - \frac{\lambda u}{\pi}) \Gamma(\frac{1}{2} + \frac{\xi}{\pi} + 2l\lambda - \frac{\lambda u}{\pi})} /(u\rightarrow -u) \right]. \end{equation} Treating the $\Gamma$-functions with negative real parts separately, one can write the soliton reflection $R_{+}^{+}(\theta)$ in integral form \cite{bib:SS95, bib:FS94}: \begin{equation} \begin{split} &\frac{R_{+}^{+}(\theta)}{R_0(\theta)} = R^+_1(\theta)\, R_2(\theta), \\ &R^+_1(\theta) = \exp\left[ i \int_0^{\infty} \frac{dt}{t} \left( \frac{\sinh(1 - \frac{2\xi}{\pi\lambda})\frac{t}{2}}{2 \sinh\frac{t}{2\lambda} \cosh\frac{t}{2}} + \frac{\sinh(\frac{\xi}{\pi} - \lfloor \frac{\xi}{\pi} - \frac{1}{2} \rfloor - 1) \frac{t}{\lambda}}{\sinh\frac{t}{2\lambda}} \right) \sin \frac{\theta t}{\pi} \right], \\ &R_2(\theta) = \exp\left[ i \int_{0}^{\infty} \frac{dt}{t} \frac{\sin \frac{\theta t}{\pi} \sinh\frac{3t}{4} \sinh(\frac{1}{\lambda} - 1)\frac{t}{4}}{\sinh t \sinh\frac{t}{4\lambda}} \right]. \end{split} \end{equation} On the other hand, the anti-soliton reflection is given by \cite{bib:SS95} \begin{equation} \label{SG-antiref} \begin{split} &\frac{R_{-}^{-}(\theta)}{R_0(\theta)} = R^-_1(\theta)\, R_2(\theta), \\ &R^-_1(\theta) = \exp\left[ i \int_0^{\infty} \frac{dt}{t} \frac{\sinh(1 - \frac{2\xi}{\pi\lambda})\frac{t}{2}}{2 \sinh\frac{t}{2\lambda} \cosh\frac{t}{2}} \sin \frac{\theta t}{\pi} \right]. \end{split} \end{equation} The boundary parameter $\xi$ is connected to the field values at the boundaries through $\xi_{\pm} = \frac{2\pi}{\beta} \Phi_{\pm}$ \cite{bib:ANS07}. The RSOS part of the reflection relation has also been solved.
Different solutions were obtained for the two sectors of the superconformal field theory \cite{bib:AK96, bib:NA02}. For the NS sector, the solution is given by \begin{align} &R_{\sigma \sigma}^0(\theta; \xi) = P(\theta; \xi), \label{ramp_NS1} \\ &R_{00}^{\pm 1}(\theta; \xi) = \left(\cos\frac{\xi}{2} \pm i \sinh\frac{\theta}{2}\right) 2^{i\theta/\pi} K(\theta - i\xi) K(\theta + i\xi) P(\theta; \xi), \label{ramp_NS2} \end{align} where \begin{align} &P(\theta; \xi) = \frac{\sin\xi - i\sinh\theta}{\sin\xi + i\sinh\theta} P_0(\theta), \\ &P_0(\theta) = \prod_{k=1}^{\infty} \left[ \frac{\Gamma (k - \frac{\theta}{2\pi i}) \Gamma(k - \frac{\theta}{2\pi i})} {\Gamma(k-\frac{1}{4} - \frac{\theta}{2\pi i}) \Gamma(k+\frac{1}{4} - \frac{\theta}{2\pi i})} /(\theta \rightarrow -\theta) \right] \\ &\hspace{8mm}= \exp\left( -\frac{\theta}{2\pi} \ln 2 + \frac{1}{8} \int_0^\infty \frac{dt}{t} \frac{\sin\frac{2\theta t}{\pi}}{\cosh^2 t \cosh^2 \frac{t}{2}} \right). \label{ref_susy_int} \end{align} Thus only diagonal matrix elements are non-zero in the reflection matrix of the NS sector. On the other hand, the solution for the R sector is obtained as \begin{align} &R_{\sigma \sigma}^0(\theta; \xi) = \cos\frac{\xi}{2} K(\theta - i\xi) K(\theta + i\xi) P(\theta; \xi), \label{ramp_R1} \\ &R_{-\sigma \sigma}^0(\theta; \xi) = -ir^{\sigma} \sinh\frac{\theta}{2} K(\theta - i\xi) K(\theta + i\xi) P(\theta; \xi), \label{ramp_R2} \\ &R_{00}^{\sigma}(\theta; \xi) = 2^{i\theta/\pi} P(\theta; \xi). \label{ramp_R3} \end{align} Unlike in the NS sector, the reflection matrix of the R sector has non-diagonal elements $R^0_{-\sigma \sigma}(\theta; \xi)$. The matrix (\ref{ramp_R1})-(\ref{ramp_R3}) is block diagonal, and its non-diagonal block is diagonalized with eigenvalues $\cos\frac{\xi}{2} \pm i\sinh\frac{\theta}{2}$, which clearly coincide with (\ref{ramp_NS2}) up to a factor $2^{i\theta/\pi}$ that can be removed by a similarity transformation.
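As a consistency check of the bulk RSOS amplitudes, the elements with both outer indices $0$ satisfy unitarity, $\sum_{d} X^{00}_{\sigma d}(\theta)\, X^{00}_{d \sigma}(-\theta)\, K(\theta) K(-\theta) = 1$. A numerical sketch using the integral representation (\ref{RSOSamp_int}) (the quadrature grid is an ad hoc choice):

```python
import numpy as np

def K(theta, tmax=60.0, n=120001):
    """RSOS overall factor from the integral representation (RSOSamp_int),
    evaluated by a simple trapezoidal rule."""
    t = np.linspace(1e-9, tmax, n)
    f = np.sin(theta * t / np.pi) * np.sinh(1.5 * t) / (t * np.sinh(2 * t) * np.cosh(0.5 * t))
    integral = np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2
    pref = -1j / (np.sqrt(2) * np.sinh((theta - 1j * np.pi) / 4))
    return pref * np.exp(1j * integral)

def X00(theta, same):
    """X^{00} amplitudes: `same=True` for equal kink indices, else the -pi/2-shifted one."""
    shift = 0.0 if same else -np.pi / 2
    return 2 ** (theta / (2j * np.pi)) * np.cos(theta / (4j) + shift)

theta = 1.3
u = (X00(theta, True) * X00(-theta, True)
     + X00(theta, False) * X00(-theta, False)) * K(theta) * K(-theta)
print(u)   # -> 1 up to numerical error
```

The exponential factors cancel between $K(\theta)$ and $K(-\theta)$, so the check essentially probes the prefactor and the $X^{00}$ amplitudes.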
\subsection{Light-cone regularization} The light-cone regularization of a quantum field theory is achieved by discretizing the light-cone with a lattice spacing $a$ \cite{bib:DV87, bib:V89, bib:V90}. The trajectory of each particle then forms a two-dimensional lattice. Particle scattering occurs only at a vertex, with an amplitude properly scaled from the original quantum field theory. If one works on an integrable quantum field theory in which an exact $S$-matrix can be derived, one may expect that the amplitude assigned to each vertex of a regularized light-cone can be identified with a Boltzmann weight of an integrable lattice model. Indeed, it was found that the light-cone lattice of the lattice-regularized SG (LSG) model is obtained as a $90$-degree rotation of the six-vertex model. In the case of the SSG model, the light-cone regularization leads to the $19$-vertex model \cite{bib:ANS07, bib:HRS07}. This fact leads us to discuss the SSG model on an integrable lattice system, for which the transfer matrix method has been developed intensively. It is well known that the transfer matrix of the spin-$1$ Zamolodchikov-Fateev model is defined on the $19$-vertex model \cite{bib:ZF80}. Since the time development of an SSG state is also defined on the $19$-vertex model with inhomogeneities $\pm \Theta$ corresponding to the rapidities of right and left movers, the transfer matrix of the Zamolodchikov-Fateev spin chain with inhomogeneities describes the time development of an SSG state. The spin-$1$ Zamolodchikov-Fateev model is defined by the following Hamiltonian: \begin{equation} \label{hamiltonian} \mathcal{H}=\sum_{j=1}^{N-1} \left[ T_j - (T_j)^2 - 2\sin^2\gamma \;( T_j^z + (S_j^z)^2 + (S_{j+1}^z)^2 - (T_j^z)^2 ) + 4\sin^2\tfrac{\gamma}{2} \; (T_j^{\bot} T_j^z + T_j^z T_j^{\bot}) \right] + \mathcal{H}_{\rm B}, \end{equation} where \begin{equation} T_j = \vec{S}_j \cdot \vec{S}_{j+1}, \qquad T^{\bot}_j = S^x_j S^x_{j+1} + S^y_j S^y_{j+1}, \qquad T^z_j = S^z_j S^z_{j+1}.
\end{equation} The operators $S_j^{\alpha}$ ($\alpha \in \{x,y,z\}$) are spin-$1$ $SU(2)$ operators which act nontrivially on the $j$th space of the $N$-fold tensor product of three-dimensional vector spaces. The anisotropy parameter $\gamma$ determines the coupling constant of the SSG model in the scaling limit through $\beta^2 = 4(\pi - 2\gamma)$. Since $\beta^2$ in the SSG model takes a positive value, the allowed values of $\gamma$ are less than $\frac{\pi}{2}$. In the spin-chain language, this range of $\gamma$ makes the system gapless. Corresponding to Dirichlet boundaries, which do not change the soliton charge, the boundary Hamiltonian $\mathcal{H}_{\rm B}$ is given by diagonal operators: \begin{equation} \label{bhamiltonian} \mathcal{H}_{\rm B} = h_1(H_-) S_1^z + h_2(H_-)(S_1^z)^2 + h_1(H_+) S_N^z + h_2(H_+) (S_N^z)^2, \end{equation} where the two types of boundary fields are connected by a common parameter $H$ as \begin{align} &h_1(H) = \tfrac{1}{2} \sin 2\gamma \left(\cot\tfrac{\gamma H}{2} + \cot\tfrac{\gamma(H+2)}{2}\right), \\ &h_2(H) = \tfrac{1}{2} \sin 2\gamma \left(-\cot\tfrac{\gamma H}{2} + \cot\tfrac{\gamma (H+2)}{2}\right). \end{align} \begin{figure} \begin{center} \includegraphics[scale=0.75]{boundary_fields} \caption{Boundary magnetic fields as functions of a boundary parameter $H$. Anisotropy is taken to be $\gamma = \frac{\pi}{5}$.} \label{fig:boundary_fields} \end{center} \end{figure} These boundary fields are $\frac{2\pi}{\gamma}$-periodic functions of $H$ (Figure \ref{fig:boundary_fields}). Each periodicity cell consists of two domains, $[-2+\frac{2\pi n}{\gamma}, \frac{2\pi n}{\gamma}]$ (domain NS) and $[\frac{2\pi (n-1)}{\gamma}, -2+\frac{2\pi n}{\gamma}]$ ($n \in \mathbb{Z}$) (domain R), and therefore we expect different behaviors for the corresponding quantum field theory obtained after taking the scaling limit.
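The $\frac{2\pi}{\gamma}$-periodicity of the boundary fields, as well as the $S^z \leftrightarrow -S^z$ symmetry under $H \leftrightarrow -H-2$ (the map sends $h_1 \to -h_1$ and $h_2 \to h_2$), can be checked directly from the definitions of $h_1$ and $h_2$; a small Python sketch (the values of $\gamma$ and $H$ are samples, with $\gamma = \frac{\pi}{5}$ as in the figure):

```python
import math

gamma = math.pi / 5                      # anisotropy as in the figure

def cot(x):
    return math.cos(x) / math.sin(x)

def h1(H):
    return 0.5 * math.sin(2 * gamma) * ( cot(gamma * H / 2) + cot(gamma * (H + 2) / 2))

def h2(H):
    return 0.5 * math.sin(2 * gamma) * (-cot(gamma * H / 2) + cot(gamma * (H + 2) / 2))

H, period = 0.7, 2 * math.pi / gamma
print(abs(h1(H + period) - h1(H)) < 1e-9)    # True: 2*pi/gamma periodicity
# H <-> -H-2 maps (h1, h2) -> (-h1, h2), i.e. an S^z <-> -S^z symmetry
print(abs(h1(-H - 2) + h1(H)) < 1e-12, abs(h2(-H - 2) - h2(H)) < 1e-12)
```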
For instance, this system has symmetries with respect to the boundary magnetic fields, and they have different meanings in each domain: both domains NS and R exhibit the $\frac{2\pi}{\gamma}$-periodicity, while the symmetry $H \leftrightarrow -H-\frac{2\pi}{\gamma}-2$ is understood as an $S^z \leftrightarrow -S^z$ symmetry in domain NS, and $H \leftrightarrow -H-2$ gives the same symmetry in domain R. The transfer matrix of the Zamolodchikov-Fateev spin chain is obtained from the $19$-vertex model by taking a trace over the auxiliary space. If one inserts inhomogeneities corresponding to the rapidities of right and left movers, the transfer matrix of the LSSG model is given by the following set of operators: \begin{equation} \label{T-for-SSG} T_{\rm R} = {\rm tr}_0 [K_+(\theta) T(\theta) K_-(\theta) \widehat{T}(\theta)]_{\theta = \Theta}, \qquad T_{\rm L} = {\rm tr}_0 [K_+(\theta) T(\theta) K_-(\theta) \widehat{T}(\theta)]_{\theta = -\Theta}, \end{equation} where \begin{equation} \begin{split} &T(\theta) = R_{0,2N}(\tfrac{\gamma}{\pi}(\theta-\Theta)) R_{0,2N-1}(\tfrac{\gamma}{\pi}(\theta+\Theta)) \dots R_{02}(\tfrac{\gamma}{\pi}(\theta-\Theta)) R_{01}(\tfrac{\gamma}{\pi}(\theta+\Theta)), \\ &\widehat{T}(\theta) = R_{10}(\tfrac{\gamma}{\pi}(\theta+i\pi+\Theta)) R_{20}(\tfrac{\gamma}{\pi}(\theta+i\pi-\Theta)) \dots R_{2N-1,0}(\tfrac{\gamma}{\pi}(\theta+i\pi+\Theta)) R_{2N,0}(\tfrac{\gamma}{\pi}(\theta+i\pi-\Theta)) \end{split} \end{equation} and $R_{ij}(\theta)$ is the $R$-matrix of the $19$-vertex model \cite{bib:ZF80} constructed from that of the six-vertex model through the fusion procedure \cite{bib:KRS81}. Boundary reflection is described by the reflection matrices $K_{\pm}(\theta)$ \cite{bib:S88, bib:FLU02}, obtained as diagonal solutions of the reflection relations (\ref{refrel_sg}) and (\ref{refrel_susy}).
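As a sketch, the Hamiltonian (\ref{hamiltonian}) with the boundary term (\ref{bhamiltonian}) can be constructed explicitly for a short chain and checked to be hermitian and to commute with the total spin $\sum_j S_j^z$, as expected for charge-preserving Dirichlet boundaries (the chain length and parameter values below are arbitrary):

```python
import numpy as np

# spin-1 SU(2) generators
Sz = np.diag([1., 0., -1.]).astype(complex)
Sp = (np.sqrt(2) * np.diag([1., 1.], 1)).astype(complex)   # raising operator
Sx, Sy = (Sp + Sp.T) / 2, (Sp - Sp.T) / (2j)

N, gamma, Hm, Hp = 3, np.pi / 5, 0.7, 1.9    # sample chain length and parameters

def op(A, j):
    """Embed the one-site operator A at site j of the N-site chain."""
    out = np.eye(1, dtype=complex)
    for k in range(N):
        out = np.kron(out, A if k == j else np.eye(3, dtype=complex))
    return out

def cot(x): return np.cos(x) / np.sin(x)
def h1(H): return 0.5 * np.sin(2 * gamma) * ( cot(gamma * H / 2) + cot(gamma * (H + 2) / 2))
def h2(H): return 0.5 * np.sin(2 * gamma) * (-cot(gamma * H / 2) + cot(gamma * (H + 2) / 2))

Ham = np.zeros((3 ** N, 3 ** N), dtype=complex)
for j in range(N - 1):
    T  = sum(op(S, j) @ op(S, j + 1) for S in (Sx, Sy, Sz))
    Tp = op(Sx, j) @ op(Sx, j + 1) + op(Sy, j) @ op(Sy, j + 1)
    Tz = op(Sz, j) @ op(Sz, j + 1)
    Ham += T - T @ T \
        - 2 * np.sin(gamma) ** 2 * (Tz + op(Sz @ Sz, j) + op(Sz @ Sz, j + 1) - Tz @ Tz) \
        + 4 * np.sin(gamma / 2) ** 2 * (Tp @ Tz + Tz @ Tp)
Ham += h1(Hm) * op(Sz, 0) + h2(Hm) * op(Sz @ Sz, 0) \
     + h1(Hp) * op(Sz, N - 1) + h2(Hp) * op(Sz @ Sz, N - 1)

Sztot = sum(op(Sz, j) for j in range(N))
print(np.allclose(Ham, Ham.conj().T))             # hermitian
print(np.allclose(Ham @ Sztot, Sztot @ Ham))      # U(1) symmetry
```

Each bond term is built from the two-site operators $T_j$, $T_j^{\bot}$, $T_j^z$ exactly as in (\ref{hamiltonian}); the diagonal boundary fields preserve the $U(1)$ symmetry.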
Let us note that the Hamiltonian and the total momentum are obtained from the transfer matrices: \begin{equation} \mathcal{H} = \frac{i\gamma}{2\pi a}[\ln T_{\rm R} + \ln T_{\rm L}], \qquad \mathcal{P} = \frac{i\gamma}{2\pi a}[\ln T_{\rm R} - \ln T_{\rm L}]. \end{equation} \section{Evaluation of eigenenergy from NLIEs} \label{sec:uv_der} The eigenenergy can be evaluated from the NLIEs without solving them. This technique was first introduced in \cite{bib:KBP91} and has been widely used for the analytical calculation of $\mathcal{O}(N^{-1})$-corrections. Let us rewrite the NLIEs (\ref{NLIEb-UV}) and (\ref{NLIEy-UV}) in vector form: \begin{equation} \label{nlie-int} \bm{lb}^+(\hat{\theta}) = \mathcal{G}*\bm{lB}^+(\hat{\theta}) + i\bm{g}(\hat{\theta}), \end{equation} where \begin{align} &\bm{lb}^+(\hat{\theta}) = \begin{pmatrix} \ln b^+(\hat{\theta}) \\ \ln \bar{b}^+(\hat{\theta}) \\ \ln y^+(\hat{\theta}) \end{pmatrix}, \qquad \bm{lB}^+(\hat{\theta}) = \begin{pmatrix} \ln B^+(\hat{\theta}) \\ \ln \bar{B}^+(\hat{\theta}) \\ \ln Y^+(\hat{\theta}) \end{pmatrix}, \\ &\bm{g}(\hat{\theta}) = \begin{pmatrix} e^{\hat{\theta}} + \sum_j c_j g_{(j)} (\hat{\theta} - \hat{\theta}_j) + \pi \hat{C}_b \\ -e^{\hat{\theta}} - \sum_j c_j g_{(j)} (\hat{\theta} - \hat{\theta}_j) - \pi \hat{C}_b \\ \sum_j c_j g_{(j)}^{(1)} (\hat{\theta} - \hat{\theta}_j) + \pi \hat{C}_y \end{pmatrix}. \end{align} $\mathcal{G}$ is a matrix given by \begin{equation} \mathcal{G}(\hat{\theta}) = \begin{pmatrix} G(\hat{\theta} - i\epsilon) & -G(\hat{\theta} + i\epsilon) & G_K(\hat{\theta} - \frac{i\pi}{2} + i\epsilon) \\ -\bar{G}(\hat{\theta} - i\epsilon) & \bar{G}(\hat{\theta} - i\epsilon) & G_K(\hat{\theta} + \frac{i\pi}{2} - i\epsilon) \\ G_K(\hat{\theta} + \frac{i\pi}{2} - i\epsilon) & G_K(\hat{\theta} - \frac{i\pi}{2} + i\epsilon) & 0 \end{pmatrix}.
\end{equation} Using $G(\hat{\theta}) = \bar{G}(-\hat{\theta})$ and $G_K(\hat{\theta}) = \bar{G}_K(-\hat{\theta}) = G_K(\hat{\theta})$, one finds that $\mathcal{G}(\hat{\theta})$ satisfies \begin{equation} \label{prop-G} \mathcal{G}_{ij}(\hat{\theta}) = \mathcal{G}_{ji}(-\hat{\theta}), \qquad i \neq j. \end{equation} Consider the integral of $\bm{lb}^{+'} \cdot \bm{lB}^+ - \bm{lb}^+ \cdot \bm{lB}^{+'}$, which is written in terms of dilogarithm functions: \begin{equation} \label{lhs} \begin{split} &\frac{1}{2} \int_{-\infty}^{\infty} d\hat{\theta}\ \Big(\bm{lb}^{+'}(\hat{\theta}) \cdot \bm{lB}^+(\hat{\theta}) - \bm{lb}^+(\hat{\theta}) \cdot \bm{lB}^{+'}(\hat{\theta})\Big) \\&= L_+(b^+(\infty)) - L_+(b^+(-\infty)) + L_+(\bar{b}^+(\infty)) - L_+(\bar{b}^+(-\infty)) + L_+(y^+(\infty)) - L_+(y^+(-\infty)). \end{split} \end{equation} On the other hand, substituting (\ref{nlie-int}) into $\bm{lb}^{+'}$ and $\bm{lb}^+$, we observe that the terms involving $\mathcal{G}$ cancel due to (\ref{prop-G}).
Remaining terms are obtained as \begin{equation} \label{rhs} \begin{split} &\frac{1}{2} \int_{-\infty}^{\infty} d\hat{\theta}\ \Big(\bm{lb}^{+'}(\hat{\theta}) \cdot \bm{lB}^+(\hat{\theta}) - \bm{lb}^+(\hat{\theta}) \cdot \bm{lB}^{+'}(\hat{\theta})\Big) \\ &= 2{\rm Im} \int_{-\infty}^{\infty} d\hat{\theta}\; e^{\hat{\theta}} \ln \bar{B}^+(\hat{\theta}) + 2\pi \sum_j c_j e_{(j)}^{\hat{\theta}_j} \\ &- \frac{i}{2} \Big[ \Big(e^{\hat{\theta}} + \sum_j c_j g_{(j)}(\hat{\theta} - \hat{\theta}_j) + \pi \hat{C}_b\Big) (\ln B^+(\hat{\theta}) - \ln\bar{B}^+(\hat{\theta})) \Big]_{-\infty}^{\infty} \\ &- \frac{i}{2} \Big[ \Big(\sum_j c_j g^{(1)}_{(j)}(\hat{\theta} - \hat{\theta}_j) + \pi \hat{C}_y\Big) \ln Y^+(\hat{\theta}) \Big]_{-\infty}^{\infty} \\ &+ 2\pi i \sum_{j;\hat{\theta}_j \neq \hat{h}^{(1)}_j} c_j \ln b^+_{(j)}(\hat{\theta}_j) + 2\pi i \sum_{j = 1}^{N_1^+} \ln y^+(\hat{h}^{(1)}_j) \\ &+ 2\pi^2 \hat{C}_b (N_H^+ - 2N_S^+ - M_C^+ - M_W^+ - M_{SC}^+) + 2\pi^2 \hat{C}_y N_1^+, \end{split} \end{equation} where \begin{align} &\ln b_{(j)}^+(\hat{\theta}) = \begin{cases} \ln b^+(\hat{\theta} - i\epsilon) + \ln b^+(\hat{\theta} + i\epsilon) & \text{for specials} \\ \ln b^+(\hat{\theta}) & \text{otherwise} \end{cases} \\ &e_{(j)}^{\hat{\theta}} = \begin{cases} e^{\hat{\theta} - i\epsilon} + e^{\hat{\theta} + i\epsilon} & \text{for specials} \\ e_{\rm II}^{\hat{\theta}} = 0 & \text{for wide roots} \\ e^{\hat{\theta}} & \text{otherwise}. 
\end{cases} \end{align} From the definitions of the auxiliary functions, we have the following relations: \begin{equation} \begin{split} &\textstyle\sum_{j;\hat{\theta}_j \neq h^{(1)}_j} c_j \ln b^+_{(j)}(\hat{\theta}_j) = 2 \pi i (I_{N_H^+} - 2(I_{N_S^+} + I_{N_V^+}) - I_{M_C^+} - I_{M_W^+}), \\ &\textstyle\sum _{j=1}^{N_1^+} \ln y^+_{(j)}(\hat{h}_j^{(1)}) = 2\pi i I_{N_1^+} \end{split} \end{equation} where we introduced integers $I_{A^+}$ ($A \in \{N_H, N_S, N_V, M_C, M_W\}$) which give sums of quantum numbers, {\it i.e.} $I_{A^+} = \sum_{j =1}^{A^+} I_{A,j}^+ = \frac{1}{2\pi i} \sum_{j=1}^{A^+} \ln b^+(\hat{\theta}_j)$. Using (\ref{lhs}), (\ref{rhs}), and the energy formula (\ref{cft-energy}), we finally obtain (\ref{UVenergy}). \section{Ultraviolet limit} \label{sec:UVlim} As discussed in Section \ref{sec:ssg}, the SSG model is known to be a perturbed theory of an $\mathcal{N}=1$ superconformal field theory. Conformal invariance is recovered in the UV limit realized by $m_0L \to 0$. From the original SSG model, one obtains the complete space of states of an $\mathcal{N}=1$ superconformal field theory, while it is known that the R sector cannot be realized through a lattice regularization under the periodic boundary condition. In this section, we discuss the UV limit of the SSG model using the NLIEs, which are derived via a lattice regularization. Under Dirichlet boundary conditions, the NLIEs depend on the boundary parameters, resulting in different forms. Consequently, we obtain first evidence that the subsectors inaccessible under the periodic boundary condition are realized through the lattice regularization. In order to support this statement, we calculate the conformal dimensions of eigenstates in each regime of boundary parameters and show that both the NS and R sectors are obtained. \subsection{UV behavior of Bethe roots and scaling functions} In the UV limit, there exist Bethe roots which tend to infinity.
Such roots behave as $\theta \sim \hat{\theta} - \ln m_0L$, diverging as $m_0 L \to 0$ \cite{bib:DV92}. Thus, a scaling function defined by $f^+(\hat{\theta}) = f(\hat{\theta} - \ln m_0L)$ shows step-function-like behavior at $\hat{\theta} \sim \ln m_0L$ \cite{bib:DV92, bib:KBP91, bib:Z90}. Using these functions, one can rewrite the NLIEs as follows: \begin{align} \ln b^+(\hat{\theta}) &= \int_{-\infty}^{\infty} d\hat{\theta}'\; G(\hat{\theta} - \hat{\theta}' - i\epsilon) \ln B^+(\hat{\theta}' + i\epsilon) - \int_{-\infty}^{\infty} d\hat{\theta}'\; G(\hat{\theta} - \hat{\theta}' + i\epsilon) \ln \bar{B}^+(\hat{\theta}' - i\epsilon) \nonumber \\ &+ \int_{-\infty}^{\infty} d\hat{\theta}'\; G_K(\hat{\theta} - \hat{\theta}' - \tfrac{i\pi}{2} + i\epsilon) \ln Y^+(\hat{\theta}' - i\epsilon) + ie^{\hat{\theta}} + i\sum_{j} c_j g_{(j)}(\hat{\theta} - \hat{\theta}_j) + i\pi \hat{C}_b, \label{NLIEb-UV} \\ \ln y^+(\hat{\theta}) &= \int_{-\infty}^{\infty} d\hat{\theta}'\; G_K(\hat{\theta} - \hat{\theta}' + \tfrac{i\pi}{2} - i\epsilon) \ln B^+(\hat{\theta}' + i\epsilon) \nonumber \\ &+ \int_{-\infty}^{\infty} d\hat{\theta}'\; G_K(\hat{\theta} - \hat{\theta}' - \tfrac{i\pi}{2} + i\epsilon) \ln\bar{B}^+(\hat{\theta}' - i\epsilon) + i\sum_{j} c_{j} g^{(1)}_{(j)}(\hat{\theta} - \hat{\theta}_j) + i\pi \hat{C}_y.
\label{NLIEy-UV} \end{align} The constants $\hat{C}_b$ and $\hat{C}_y$, which are not necessarily integers, include integration constants and asymptotic values obtained in Appendix \ref{sec:int_const}: \begin{align} i\pi \hat{C}_b &= i\pi \widetilde{C}_b^{(2)} + iF(\infty; H_+) + iF(\infty; H_-) + iJ(\infty) \nonumber \\ &+ ig(\infty) (N_H^+ - 2(N_S^+ + N_V^+) - M_C^+ - 2M_W^+) + 2ig(\infty) (N_H^0 - 2(N_S^0 + N_V^0) - M_C^0 - 2M_W^0) \nonumber \\ &+ ig_K(\infty) N_1^+ + 2ig_K(\infty) N_1^0, \\ i\pi \hat{C}_y &= i\pi \widetilde{C}_y^{(2)} + iF_y(\infty; H_+) + iF_y(\infty; H_-) + 2ig_K(\infty) \nonumber \\ &+ ig_K(\infty) (N_H^+ - 2 (N_S^+ + N_V^+) - M_C^+) + 2ig_K(\infty) (N_H^0 - 2 (N_S^0 + N_V^0) - M_C^0), \end{align} where $A^+$ ($A \in \{N_H, N_1, N_S, N_V, M_C, M_W\}$) denotes the number of roots/holes which tend to infinity, while we denote those which remain finite in the UV limit by $A^0$. Since the energy in a conformal field theory is written as a function of the system length $L$, we express the eigenenergy obtained in the UV limit of (\ref{energy}) as a function of the system length: \begin{equation} \label{cft-energy} \begin{split} &E_{\rm CFT}(L) = E(L) - (E_{\rm bulk} + E_{\rm B}) = E_{\rm ex}(L) + E_C(L), \\ &E_{\rm ex}(L) = \frac{1}{2L} \sum_{j = 1}^{N_H^+} e^{\hat{h}_j} - \frac{1}{2L} \sum_{j = 1}^{M_C^+} e^{\hat{c}_{j}}, \\ &E_C(L) = \frac{1}{2\pi L} {\rm Im} \int_{-\infty}^{\infty} d\hat{\theta}\; e^{\hat{\theta}} \ln\bar{B}^+(\hat{\theta}).
\end{split} \end{equation} Although it is cumbersome to calculate these quantities directly, a trick used in \cite{bib:S99} allows us to write eigenenergy in a form which does not depend on Bethe roots (Appendix \ref{sec:uv_der}): \begin{equation} \label{UVenergy} \begin{split} E_{\rm CFT}(L) &= \frac{1}{4\pi L} \{ L_+(b^+(\infty)) - L_+(b^+(-\infty)) + L_+(\bar{b}^+(\infty)) - L_+(\bar{b}^+(-\infty)) \\ &\hspace{13mm}+ L_+(y^+(\infty)) - L_+(y^+(-\infty)) \} \\ &+ \frac{i}{8\pi L} \Big[ \{ e^{\hat{\theta}} + \sum_j c_j g_{(j)}(\hat{\theta} - \hat{\theta}_j) + \pi \hat{C}_b \} (\ln B^+(\hat{\theta}) - \ln\bar{B}^+(\hat{\theta})) \Big]_{-\infty}^{\infty} \\ &+ \frac{i}{8\pi L} \Big[ \{ \sum_j c_j g^{(1)}_{(j)}(\hat{\theta} - \hat{\theta}_j) + \pi \hat{C}_y \} \ln Y^+(\hat{\theta}) \Big]_{-\infty}^{\infty} \\ &+ \frac{\pi}{L} (I_{N_H^+} - 2(I_{N_S^+} + I_{N_V^+}) - I_{M_C^+} - I_{M_W^+} + I_{N_1^+}) \\ &- \frac{\pi}{2L} \{ \hat{C}_b (N_H^+ - 2(N_S^+ + N_V^+) - M_C^+ - M_W^+) + \hat{C}_y N_1^+ \}, \end{split} \end{equation} where $L_+(x)$ is a dilogarithm function defined by \begin{equation} \label{dilog} L_+(x) = \frac{1}{2} \int_0^x dy\; \left( \frac{\ln (1 + y)}{y} - \frac{\ln y}{1 + y} \right). 
\end{equation} The asymptotic values in (\ref{UVenergy}) are directly calculated from the NLIEs (\ref{NLIEb-UV}) and (\ref{NLIEy-UV}): \begin{equation} \label{asym-uv} \begin{split} &b^+(\infty) = 0, \hspace{27mm} y^+(\infty) = (-1)^{\frac{1}{2}({\rm sgn}(1-H_+) + {\rm sgn}(1-H_-) + {\rm sgn}(1+H_+) + {\rm sgn}(1+H_-))_{{\rm mod}\,2}}, \\ &b^+(-\infty) = 2e^{3i\rho_+} \cos\rho_+, \qquad y^+(-\infty) = \frac{\sin3\rho_+}{\sin\rho_+}, \end{split} \end{equation} where \begin{equation} \begin{split} \rho_+ &= \pi\{ N_H^0 - 2(N_S^0 + N_V^0) - M_C^0 - 2M_W^0 + N_1^0 + 1 \\ &\hspace{8mm} + \tfrac{1}{3}(n_b(H_+) - n_y(H_+) + n_b(H_-) - n_y(H_-)) + \widetilde{C}_b^{(2)} \} \\ &- \gamma\{ 3(N_H^0 - 2(N_S^0 + N_V^0) - M_C^0 - 2M_W^0) + 2N_1^0 + 3 + H - n_y(H_+) - n_y(H_-) + 2\widetilde{C}_b^{(2)} \}. \end{split} \end{equation} In addition, we obtain the following condition: \begin{equation} N_H^0 - 2(N_S^0 + N_V^0) - M_C^0 + 1 + \widetilde{C}_y^{(2)} + n_y(H_+) + n_y(H_-) = 0. \end{equation} The asymptotic values of $\bar{b}(\hat{\theta})$ are obtained by taking the complex conjugate of those of $b(\hat{\theta})$. Using (\ref{asym-uv}) and properties of dilogarithm functions (Appendix \ref{sec:dilog}), we finally obtain \begin{equation} \label{UVenergy*} \begin{split} E_{\rm CFT}(L) &= \frac{\pi}{2L} \Bigg(\frac{1}{\sqrt{\pi}} (\Phi_+ - \Phi_-) + \sqrt{\frac{\pi - 2\gamma}{\pi}} \Big(\widetilde{C}_b^{(2)} + N_1^0 + 3S^0 + 1 - \frac{1}{2}(n_y(H_+) + n_y(H_-)) \Big) \\ &- \sqrt{\frac{\pi}{\pi - 2\gamma}} \Big(S - \frac{1}{4}({\rm sgn}(1-H_+) + {\rm sgn}(1-H_-) + {\rm sgn}(1+H_+) + {\rm sgn}(1+H_-)) \Big) \Bigg)^2 \\ &+ \frac{\pi}{16 L} \left(\frac{1}{2}({\rm sgn}(1-H_+) + {\rm sgn}(1-H_-) + {\rm sgn}(1+H_+) + {\rm sgn}(1+H_-))\right)_{{\rm mod}\, 2} - \frac{\pi}{16 L} \\ &+ \frac{\pi}{L} (I_{N_H^+} - 2(I_{N_S^+} + I_{N_V^+}) - I_{M_C^+} - I_{M_W^+}) - \frac{\pi}{2L} \left((3S^+ + 2N_1^+) S^+ + (S^+ + M_W^+) N_1^+ \right),
\end{split} \end{equation} where \begin{align} &\Phi_{\pm} = \mp \frac{\gamma (H_{\pm} + 1)}{2 \sqrt{\pi - 2\gamma}}, \\ &2S^\alpha = N_H^\alpha - 2(N_S^\alpha + N_V^\alpha) - M_C^\alpha - 2M_W^\alpha, \qquad \alpha = \{0,+\}, \\ &S = S^+ + S^0. \end{align} By comparing (\ref{UVenergy*}) with the energy obtained in the context of conformal field theory, (\ref{energy_BFB}) and (\ref{energy_BFF}), one obtains the central charge and conformal dimension as \begin{align} &c = \frac{3}{2}, \\ &\Delta = \frac{1}{2}\left( \frac{\Phi_+ - \Phi_-}{\sqrt{\pi}} + mR + \frac{n}{R} \right)^2 \\ &\hspace{7mm}+ \frac{1}{16} \left(\frac{1}{2}({\rm sgn}(1-H_+) + {\rm sgn}(1-H_-) + {\rm sgn}(1+H_+) + {\rm sgn}(1+H_-))\right)_{{\rm mod}\, 2}, \end{align} where \begin{align} &m = -S + \tfrac{1}{4}({\rm sgn}(1-H_+) + {\rm sgn}(1-H_-) + {\rm sgn}(1+H_+) + {\rm sgn}(1+H_-)), \label{conf-dim-bm} \\ &n = \widetilde{C}_b^{(2)} + N_1^0 + 3S^0 + 1 - \tfrac{1}{2}(n_y(H_+) + n_y(H_-)), \label{conf-dim-bn} \end{align} under the choice of compactification radius $R = \sqrt{\frac{\pi}{\pi - 2\gamma}}$. Thus, the theory belongs to the NS sector when $(\frac{1}{2}({\rm sgn}(1-H_+) + {\rm sgn}(1-H_-) + {\rm sgn}(1+H_+) + {\rm sgn}(1+H_-)))_{{\rm mod}\, 2} = 0$, giving $\Delta_{\rm F} = 0$, while it belongs to the R sector when this quantity equals $1$, giving $\Delta_{\rm F} = \frac{1}{16}$. This sector separation with respect to the boundary parameters is shown in Figure \ref{fig:UV_phase_transition}. The bosonic part of the conformal dimension is labeled by the two indices $(m,n)$. The momentum part $n$, given by (\ref{conf-dim-bn}), must vanish due to the Dirichlet boundary conditions. On the other hand, $m$, given by (\ref{conf-dim-bm}), takes either an integer or a half-integer value depending on the boundary parameters, since $S$ takes only integer values. 
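As an illustrative numerical cross-check (not part of the original analysis; the function name is ours), the dilogarithm $L_+(x)$ of (\ref{dilog}) can be evaluated by direct adaptive quadrature, since the $\ln y$ singularity of the integrand at $y=0$ is integrable; for instance, one recovers the special value $L_+(1)=\pi^2/12$:

```python
import numpy as np
from scipy.integrate import quad

def L_plus(x):
    """Evaluate L_+(x) = (1/2) * int_0^x [ln(1+y)/y - ln(y)/(1+y)] dy.

    The ln(y) endpoint singularity at y = 0 is integrable, so adaptive
    Gauss-Kronrod quadrature (which never samples the endpoint itself)
    handles it directly.
    """
    integrand = lambda y: np.log1p(y) / y - np.log(y) / (1.0 + y)
    val, _err = quad(integrand, 0.0, x)
    return 0.5 * val

# Known special value: L_plus(1) = pi^2/12, from
# int_0^1 ln(1+y)/y dy = pi^2/12 and int_0^1 ln(y)/(1+y) dy = -pi^2/12.
```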
\begin{figure} \begin{center} \includegraphics[scale=0.4]{UV_phase_transition} \caption{Separation between the NS sector and the R sector with respect to the boundary parameters in the UV limit.} \label{fig:UV_phase_transition} \end{center} \end{figure}
\section{Introduction} The relation between the properties of galaxies and their parent dark matter halos, as well as the physical processes that regulate such properties, is a long-standing puzzle. In a cosmological framework, the angular momentum of a dark matter halo is initially acquired through tidal torques from neighbouring perturbations. The classical tidal torque theory \citep{Hoyle1949, Peebles1969} predicts that the specific angular momentum $j_{\rm h}$ (sAM hereafter) of halos follows $j_{\rm h}\propto M_{\rm h}^{2/3}$, where $M_{\rm h}$ is the halo mass \citep[e.g.][]{Peebles1969, White1984}. If angular momentum is conserved throughout the formation of galaxies by the accretion of gas that decouples from their host dark matter halos, a similar relation should also apply to galaxies. Observations show that the stellar masses $M_{\rm s}$ and sAM $j_{\rm s}$ of disk galaxies are correlated as a power law with index $0.52-0.64$ \citep[e.g.,][]{Fall1983, Romanowsky&Fall2012, Fall&Romanowsky2013, Posti2018b, DiTeodoro2021, Hardwick2022, Pina2021}. This empirical trend is often called the ``Fall relation''. The model $j_{\rm s} \propto f_j f_{\rm m}^{-2/3} M_{\rm s}^{2/3} \lambda$ \citep{Romanowsky&Fall2012} has been widely used to explain the $j_{\rm s}$-$M_{\rm s}$ relation. It requires that the retention factor of angular momentum $f_j\equiv j_{\rm s}/j_{\rm h}$, the stellar-to-halo mass ratio $f_{\rm m} \equiv M_{\rm s}/M_{\rm h}$, and the spin parameter $\lambda$ be independent of stellar mass; if these conditions are not met, their dependences on stellar mass must conspire to cancel in order to generate a correlation of the form $j_{\rm s} \propto M_{\rm s}^{2/3}$. One of the key ingredients of the galaxy-halo connection is $f_{\rm m}$, or the stellar-to-halo mass relation \citep[SHMR; reviewed by][]{Wechsler&Tinker2018}. 
Within the general framework of abundance matching, the stellar-to-halo mass ratio peaks in halos around $10^{12} M_\odot$, assuming there is little or no dependence on galaxy morphology. This result directly leads to a non-linear $j_{\rm s}$-$M_{\rm s}$ relation in logarithmic space, which is inconsistent with observations, as discussed by \citet{Posti2018a}. Yet, several works suggest that the exact shape of the SHMR is not independent of galaxy morphology \citep[e.g.,][]{Mandelbaum2006, Dutton2010, Rodriguez-Gomez2015, Posti&Fall2021}. Recently, \citet{Posti2019a, Posti2019b}, using a sample of isolated disk galaxies with presumably more accurate halo masses measured dynamically, found that the SHMR follows a nearly linear relation in logarithmic space \citep[see][for some extremely massive cases]{DiTeodoro2021, DiTeodoro2022}. \citet{ZhangZhiwen2022} reached a similar result for more than 20000 star-forming galaxies whose dynamical masses were measured via galaxy-galaxy lensing and satellite kinematics. Another key ingredient is the retention factor of angular momentum $f_j$. Disky structures grow at $z \lesssim 2$ by accreting cold gas from the vast reservoir of their circumgalactic medium \citep[e.g.,][]{Tacchella2019, DeFelippis2020, Renzini2020, Du2021}. During this phase, the angular momentum of the gas should be conserved (up to a certain factor) to form disk galaxies with angular momenta tightly correlated with those of their parent dark matter halos. However, no full agreement has been reached among studies that examined the link between the angular momentum amplitudes of halos and galaxies. \citet{Zavala2016} and \citet{Lagos2017} found a remarkable connection between the sAM evolution of the dark and baryonic components of galaxies in the EAGLE simulations. A similar correlation is suggested by \citet{Teklu2015} using the Magneticum Pathfinder simulation. 
\citet{Grand2017} and \citet{Rodriguez-Gomez2022} showed that disk sizes and scale lengths are closely related to the angular momentum of halos in the Auriga and IllustrisTNG-100 simulations. However, \citet{Jiang2019} found little to no correlation using the NIHAO zoom-in simulations. A similar conclusion was drawn by \citet{Scannapieco2009} using eight Milky Way analogs. \citet{Danovich2015} argued that cold gas inflows cannot conserve angular momentum when they move into the inner regions of halos. In this paper, we use IllustrisTNG\ \citep{Marinacci2018, Nelson2018a, Nelson2019a, Naiman2018, Pillepich2018b, Pillepich2019, Springel2018} to revisit the longstanding open question of how the $j$-$M$ relation develops in disk galaxies. We aim to address: (1) whether or not there is a connection between the angular momentum of dark halos and that of the galaxies they host; (2) how the $j_{\rm s}$-$M_{\rm s}$ relation evolves in disk galaxies; and (3) how the $j_{\rm s}$-$M_{\rm s}$ relation can be explained by a simple theoretical model. \section{TNG50 Simulation and data reduction} IllustrisTNG\ is a suite of cosmological simulations run with gravo-magnetohydrodynamics that incorporates a comprehensive galaxy formation model \citep[][]{Weinberger2017, Pillepich2018a}. The TNG50-1 run of IllustrisTNG\ has the highest resolution. It includes $2 \times 2160^3$ initial resolution elements in a $\sim 50$ comoving Mpc box, corresponding to a baryon mass resolution of $8.5 \times 10^4 M_\odot$ with a gravitational softening length for stars of about $0.3$ kpc at $z = 0$. Dark matter is resolved with particles of mass $4.5 \times 10^5 M_\odot$. Meanwhile, the minimum gas softening length reaches 74 comoving pc. This resolution is able to reproduce the kinematic properties of galaxies with stellar mass $\gtrsim 10^9 M_\odot$ \citep{Pillepich2019}. 
The galaxies are identified and characterized with the friends-of-friends \citep{Davis1985} and {\tt SUBFIND} \citep{Springel2001} algorithms. Resolution elements (gas, stars, dark matter, and black holes) belonging to an individual galaxy are gravitationally bound to its host subhalo. In this work, we mainly focus on how the $j_{\rm s}$-$M_{\rm s}$ relation develops in central galaxies dominated by disks. In such cases, neither mergers nor environmental effects have played an important role. We use galaxies over the stellar mass range $10^{9}-10^{11.5} M_\odot$ from the TNG50-1 run. Disk-dominated galaxies are identified by $\kappa_{\rm rot} \geq 0.5$, where $\kappa_{\rm rot} = K_{\rm rot}/K$ \citep{Sales2012} denotes the relative importance of the cylindrical rotational energy $K_{\rm rot}$ over the total kinetic energy $K$ measured for a given snapshot. \citet{Du2021} showed that $\kappa_{\rm rot} \geq 0.5$ selects galaxies whose mass fractions of kinematically derived spheroidal structures are $\lesssim 0.5$. The other galaxies are classified as spheroid-dominated galaxies, which correspond to elliptical galaxies or slow rotators in observations. We further divide the disk-dominated galaxies into two subgroups with $\kappa_{\rm rot} \geq 0.7$ and $0.5 \leq \kappa_{\rm rot} <0.7$, corresponding to strong rotation and relatively moderate rotation, respectively, for a given snapshot. The former are likely to have a more disky morphology. All quantities in this paper are calculated using all gravitationally bound particles of each galaxy/subhalo identified with the {\tt SUBFIND} algorithm \citep{Springel2001}. We use only central galaxies, i.e., the primary subhalos of their parent halos. The specific angular momentum vector is thus ${\boldsymbol j} = \sum_{i} {\boldsymbol J_{i}}/\sum_{i} m_{i}$, where ${\boldsymbol J_{i}}$ and $m_{i}$ are the angular momentum and mass of particle $i$, respectively. 
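As an illustrative sketch (not the analysis pipeline of this work; function names are ours, and we adopt one common implementation of the \citealt{Sales2012} definition, with the ordered rotational energy computed from $v_\phi = j_z/R$ about the total angular-momentum axis), the sAM vector ${\boldsymbol j}$ and $\kappa_{\rm rot}$ can be computed from particle data, assuming positions and velocities are already relative to the galaxy center:

```python
import numpy as np

def specific_angular_momentum(pos, vel, mass):
    # j = sum_i m_i (r_i x v_i) / sum_i m_i, with pos/vel (N,3) arrays
    # given relative to the galaxy's potential-minimum center
    J = np.sum(mass[:, None] * np.cross(pos, vel), axis=0)
    return J / mass.sum()

def kappa_rot(pos, vel, mass):
    # kappa_rot = K_rot / K: fraction of kinetic energy in ordered
    # cylindrical rotation v_phi = j_z / R about the total j axis
    K = 0.5 * np.sum(mass * np.einsum('ij,ij->i', vel, vel))
    zhat = specific_angular_momentum(pos, vel, mass)
    zhat = zhat / np.linalg.norm(zhat)
    jz = np.cross(pos, vel) @ zhat                          # per-particle specific j_z
    R = np.linalg.norm(pos - np.outer(pos @ zhat, zhat), axis=1)
    good = R > 0                                            # exclude on-axis particles
    K_rot = 0.5 * np.sum(mass[good] * (jz[good] / R[good])**2)
    return K_rot / K
```

For a thin disk on perfectly circular orbits this returns $\kappa_{\rm rot}=1$, while pressure-supported systems give much smaller values.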
Galaxies are centered at the position of the minimum gravitational potential energy. No limitation on the radial extent is imposed when computing the overall properties. The radial variation is ignored to simplify our discussion. \section{The generation of the \lowercase{$j_{\rm s}$}-$M_{\rm \lowercase{s}}$ relation of disk galaxies at \lowercase{$z=0$}} In the left-most panel of \reffig{fig:Msjs_evo}, we show the $j_{\rm s}$-$M_{\rm s}$ relation of galaxies at $z=0$ from TNG50, in comparison with those measured in observations. The shaded region encloses the fitting results for disk galaxies measured in the local Universe \citep{Romanowsky&Fall2012, Fall&Romanowsky2013, Posti2018b, Hardwick2022, Pina2021}. These studies concluded that $j_{\rm s}$-$M_{\rm s}$ follows a well-defined linear scaling relation in logarithmic space with slope $0.52-0.64$ and a root-mean-square scatter of $\sim 0.2$ dex. It is clear that TNG50 reproduces well the observed $j_{\rm s}$-$M_{\rm s}$ relation for disk-dominated central galaxies (small blue and cyan dots). A linear fit (blue line) to all disk-dominated galaxies of TNG50 gives \begin{equation}\label{eqjsMs} \begin{aligned} {\rm log}\ j_{\rm s} = (0.55\pm 0.01)\ {\rm log}\ M_{\rm s} - (2.77\pm 0.11), \end{aligned} \end{equation} with a scatter ($0.3$ dex) similar to that observed. In this study, we adopt linear regression with the least-squares method for all fits. The median trend (large blue dots) matches the linear fit well. We focus on the general trend and physical origin of the $j_{\rm s}$-$M_{\rm s}$ relation. Figure \ref{fig:Msjs_evo} further shows that the $j_{\rm s}$-$M_{\rm s}$ relation in the local Universe develops at $z\lesssim 1$, which coincides well with the epoch of the formation and growth of disk galaxies. 
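The fit in Equation \ref{eqjsMs} is an ordinary least-squares regression in logarithmic space; a minimal sketch with mock data (the slope, intercept, and scatter values below simply reuse the fitted numbers for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
# mock disk sample drawn around the fitted relation, with 0.3 dex scatter
log_Ms = rng.uniform(9.0, 11.5, 2000)                       # log10 stellar mass [Msun]
log_js = 0.55 * log_Ms - 2.77 + rng.normal(0.0, 0.3, 2000)  # log10 sAM
slope, intercept = np.polyfit(log_Ms, log_js, 1)            # least-squares linear fit
scatter = np.std(log_js - (slope * log_Ms + intercept))     # rms residual [dex]
```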
Its slope becomes shallower at higher redshifts, thus deviating from the $j_{\rm s}$-$M_{\rm s}$ relation at $z=0$ (shaded regions); for example, ${\rm log}\ j_{\rm s} = 0.34\ {\rm log}\ M_{\rm s} - 0.90$ at $z=1.5$. In the third panel of \reffig{fig:Msjs_evo}, we can see that the disk galaxies at $z=0.5-1.5$ measured by \citet{Swinbank2017} follow a distribution consistent with that of the TNG50 disk galaxies. It is worth mentioning again that the galaxies at each redshift are kinematically classified by $\kappa_{\rm rot}$. At high redshifts, the spheroid-dominated galaxies follow a similar $j_{\rm s}$-$M_{\rm s}$ relation to the disk-dominated cases, but with a larger scatter. As fewer disk-dominated galaxies form at higher redshifts, this result suggests that the growth of disky structures at late times ($z < 1$) is the key to establishing the locally observed $j_{\rm s}$-$M_{\rm s}$ relation in disk galaxies. The decrease of $j_{\rm s}$ toward high redshifts is most likely due to the effect of biased collapse \citep[e.g.,][]{vandenBosch1998}, which predicts that gas with less angular momentum collapses earlier. The formation of galaxies at high redshifts is thus largely dominated by the assembly of spheroidal components whose angular momentum correlates weakly with that of their parent halos. The effect of biased collapse is gradually weakened toward low redshifts due to the assembly of disks by accretion of gas with high angular momentum. In this study, we focus mainly on the generation of the $j_{\rm s}$-$M_{\rm s}$ relation in disk-dominated galaxies at low redshifts. The effect of biased collapse is examined later in the paper. \section{A physical model of the \lowercase{$j_{\rm s}$}-$M_{\rm \lowercase{s}}$ relation} \begin{figure*} \begin{center} \includegraphics[width=0.98\textwidth]{./images_submit/Relation_onflow_Ms__js__TNG50_nosat_krot2_sm9.pdf} \caption{The evolution of the $j_{\rm s}$-$M_{\rm s}$ relation in TNG50 from $z=1.5$ to $z=0$. 
The red, green, and blue symbols are central galaxies that correspond to spheroid-dominated galaxies with $\kappa_{\rm rot} < 0.5$, disk-dominated galaxies with $0.5 \leq \kappa_{\rm rot} < 0.7$, and disk-dominated galaxies with $\kappa_{\rm rot} \geq 0.7$, respectively. The redshift is given at the top-left corner of each panel. The blue lines with error bars are the linear fitting results for the disk-dominated galaxies. The error bars represent the standard deviation from the linear fit, which is 0.3 dex at $z=0$. The large blue dots show the trend of the median values. The shaded region shows the variance of the $j_{\rm s}$-$M_{\rm s}$ relation suggested by observations at $z=0$, where we combine the fitting results given by \citet{Romanowsky&Fall2012}, \citet{Fall&Romanowsky2013}, \citet{Posti2018b}, \citet{DiTeodoro2021}, and \citet{Pina2021}. In the third panel, the black squares show the $j_{\rm s}$-$M_{\rm s}$ relation measured for disk galaxies at $z=0.5-1.5$ \citep{Swinbank2017}.} \label{fig:Msjs_evo} \end{center} \end{figure*} The existence of the $j_{\rm s}$-$M_{\rm s}$ relation suggests that, despite the complexity of galaxy formation in a cosmological context, a fundamental regularity still exists. This relation can be obtained directly from three simple equations: \begin{equation}\label{eqjM} \begin{aligned} {\rm log}\ j_{\rm tot} = \alpha\ {\rm log}\ M_{\rm tot} + a, \end{aligned} \end{equation} \begin{equation}\label{eqMM} \begin{aligned} {\rm log}\ M_{\rm tot} = \beta\ {\rm log}\ M_{\rm s} + f_m^{'}, \end{aligned} \end{equation} \begin{equation}\label{eqjj} \begin{aligned} {\rm log}\ j_{\rm s} = \gamma\ {\rm log}\ j_{\rm tot} + f_j^{'}. \end{aligned} \end{equation} Here $M_{\rm tot}$ and $j_{\rm tot}$ are the total mass and angular momentum of a halo system, including both baryonic and dark matter. 
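Explicitly, substituting equation \ref{eqMM} into equation \ref{eqjM}, and the result into equation \ref{eqjj}, gives
\begin{equation*}
\begin{aligned}
{\rm log}\ j_{\rm s} = \gamma\,(\alpha\ {\rm log}\ M_{\rm tot} + a) + f_j^{'} = \alpha\gamma\,(\beta\ {\rm log}\ M_{\rm s} + f_{\rm m}^{'}) + a\gamma + f_j^{'},
\end{aligned}
\end{equation*}
and collecting terms in ${\rm log}\ M_{\rm s}$ yields the combined relation.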
This model yields \begin{equation}\label{eqmain} \begin{aligned} {\rm log}\ j_{\rm s} = \alpha \beta \gamma\ {\rm log}\ M_{\rm s} + a\gamma + \alpha \gamma f_{\rm m}^{'} + f_j^{'}. \end{aligned} \end{equation} Equation \ref{eqjM} is a general form of the theoretical prediction for the halo $j$-$M$ relation. Tidal torque theory suggests $\alpha=2/3$, but if we allow for potential corrections to the theory, $\alpha$ may deviate from $2/3$. As suggested by \citet[][PFM19 hereafter]{Posti2019a}, we assume that the stellar-to-halo mass ratio follows a single power-law relation (i.e., equation \ref{eqMM}) for disk galaxies. We apply \refeq{eqjj} to describe the retention of angular momentum; the retention factor is $f_j^{'}={\rm log}\ f_j$ when $\gamma=1$. In this section, we apply this simple model to the TNG50 data to show that it provides a good interpretation of the $j_{\rm s}$-$M_{\rm s}$ relation. \refsec{sec:fj} shows the angular momentum correlation between halos and stars and then examines whether the effect of biased collapse is important. In \refsec{sec:SHMR}, we show the SHMR and the $j$-$M$ relation of halos, which play important roles in establishing the $j_{\rm s}$-$M_{\rm s}$ relation. It is worth emphasizing that our results are based on a semi-quantitative analysis that is not sensitive to minor deviations from the scaling relations. All linear fitting results are roughly consistent with the trends of the median values over the mass range considered. We further discuss how our results challenge the SHMR obtained by the abundance matching method and the halo $j$-$M$ relation predicted by tidal torque theory. \subsection{Angular momentum conservation during disk assembly} \label{sec:fj} \begin{figure*} \begin{center} \includegraphics[width=0.98\textwidth]{./images_submit/Relation_onflow_SubhaloSpin__js__TNG50_nosat_krot2_sm9.pdf} \caption{Evolution of the $j_{\rm s}$-$j_{\rm tot}$ relation of central galaxies in TNG50. 
The red, green, and blue symbols are central galaxies that correspond to spheroid-dominated galaxies with $\kappa_{\rm rot} < 0.5$, disk-dominated galaxies with $0.5 \leq \kappa_{\rm rot} < 0.7$, and disk-dominated galaxies with $\kappa_{\rm rot} \geq 0.7$, respectively. The dotted lines highlight the ${\rm log}\ j_{\rm s}={\rm log}\ j_{\rm tot} + f_j^{'}$ scaling relation in intervals of $\Delta f_j^{'} = 0.5$.} \label{fig:fj} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{./images_submit/Relation_onflow_jdm__jg__TNG50_nosat_krot2_sm9.pdf} \includegraphics[width=0.5\textwidth]{./images_submit/Relation_onflow_j_dm__j_ys__TNG50_nosat_krot2_sm9.pdf} \caption{The evolution of the $j$-$j_{\rm h}$ relation for gas (top) and young stars (bottom) in TNG50 at $z=0$ (left) and $z=1.0$ (right). The red, green, and blue symbols are central galaxies that correspond to spheroid-dominated galaxies with $\kappa_{\rm rot} < 0.5$, disk-dominated galaxies with $0.5 \leq \kappa_{\rm rot} < 0.7$, and disk-dominated galaxies with $\kappa_{\rm rot} \geq 0.7$, respectively. The dotted lines highlight the scaling relation in intervals of $0.5$ dex. We exclude the cases with star formation rates lower than $0.1\ M_\odot\ {\rm yr}^{-1}$ in the last 1 Gyr (i.e., quenched galaxies), which contribute only a small fraction ($\sim 1/4$ at $z=0$) of even the spheroid-dominated galaxies.} \label{fig:jysjdm} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{./images_submit/theta_s_dm_cls_acc.pdf} \caption{The number distribution of the misalignment angle $\theta$ (top) and its cumulative fraction (bottom) at $z=0$. The misalignment angle $\theta$ measures the angle between the vectors ${\boldsymbol j_{\rm s}}$ and ${\boldsymbol j_{\rm h}}$. The three groups of galaxies are shown in blue, cyan, and red. 
The black curve corresponds to the distribution of all central galaxies.} \label{fig:misangle} \end{center} \end{figure} Previous works have suggested many phenomena that may induce angular momentum losses or gains, including dynamical friction, hydrodynamical viscosity, galactic winds \citep[e.g.,][]{Governato2007, Brook2011}, and galactic fountains \citep[e.g.,][]{Brook2012, DeFelippis2017}. These processes, in conjunction with gas cooling and subsequent star formation, drive the circulation of gas in the circumgalactic medium. The tight correlation between $j_{\rm s}$ and $j_{\rm tot}$ in central disk-dominated galaxies at $z=0$ (\reffig{fig:fj}) verifies that the overall angular momentum is retained at a nearly constant ratio during star formation and gas circulation. This result supports the long-standing assumption from theory \citep[e.g.,][]{Fall&Efstathiou1980, Mo1998} and recent cosmological simulations \citep{Teklu2015, Zavala2016, Lagos2017} that angular momentum is approximately conserved during galaxy formation. The galaxies with $\kappa_{\rm rot} \geq 0.7$ match the $y=x$ line (thick dotted line), which suggests that angular momentum is conserved in galaxies with strong rotation, giving $j_{\rm s} \sim j_{\rm tot}$, namely $\gamma \sim 1$ and $f_j^{'}\sim 0$. The disk-dominated galaxies with $0.5 \leq \kappa_{\rm rot} < 0.7$ are slightly offset parallel to $y=x$. Equation \ref{eqjj} can thus be written as ${\rm log}\ j_{\rm s} \simeq {\rm log}\ j_{\rm tot} + f_j^{'}$, where the offset $f_j^{'}$ decreases with decreasing $\kappa_{\rm rot}$ along a nearly parallel sequence. A direct calculation of the median retention factors gives $f_j^{'} = -0.07_{-0.17}^{+0.15}$ and $-0.21_{-0.24}^{+0.20}$ for disk-dominated galaxies with $\kappa_{\rm rot} \geq 0.7$ and $\kappa_{\rm rot} \geq 0.5$, respectively, which is consistent with the observational estimates for disk galaxies \citep{Fall&Romanowsky2013, Fall&Romanowsky2018, Posti2019b, DiTeodoro2021}. 
For comparison, the spheroid-dominated galaxies (median $f_j^{'} = -0.63_{-0.36}^{+0.31}$) follow a weak correlation with a rather large scatter. They thus cannot be described by a linear relation. It is worth emphasizing that the evolution of $j_{\rm s}$ is a cumulative effect that quantifies the overall conservation of angular momentum over the past evolution. In \reffig{fig:jysjdm}, we further show the relation between the sAM of the dark matter halo ($j_{\rm h}$) and that of the gas ($j_{\rm g}$) and young stars. It is clear that $j_{\rm g}$ (upper panels) correlates linearly with $j_{\rm h}$, but is offset toward higher sAM by about $0$-$0.5$ dex, in qualitative agreement with observations \citep{ManceraPina2021b}. In the lower panels of \reffig{fig:jysjdm}, we can see that $j_{\rm s}$ of the young stars and $j_{\rm h}$ of disk-dominated galaxies (especially the cases with $\kappa_{\rm rot} \geq 0.7$) follow roughly the same linear scaling relation as the $j_{\rm g}$-$j_{\rm h}$ relation at $z=0$, albeit with a larger scatter and relatively lower sAM. Here $j_{\rm s}$ of young stars is approximated using stars that formed within the last 1 Gyr in each galaxy for a given snapshot. This result suggests that the sAM of the gas and the assembly of disks are largely determined by the sAM of their parent halos. The fact that gas and young stars have higher sAM than the dark matter can be partially explained by the biased collapse scenario \citep[e.g.,][]{vandenBosch1998}. In this scenario, gas with lower angular momentum collapses earlier, whereupon the remaining gas, and consequently the young stars that form from it at lower redshifts, are left with somewhat higher angular momentum. The conservation of sAM evidenced by \reffig{fig:fj} suggests, however, that the overall effect of biased collapse has been quite modest in disk-dominated galaxies after a sufficiently long period of gas accumulation. 
A dramatic loss of angular momentum occurs only in spheroid-dominated galaxies, probably due to dry major mergers that can destroy the global rotation of their initial disks. We further verify that the angular momentum vectors of the dark matter halo and the stars are roughly aligned. Defining the misalignment angle $\theta$ as the angle between the vectors ${\boldsymbol j_{\rm s}}$ and ${\boldsymbol j_{\rm h}}$, the lower panel of Figure \ref{fig:misangle} shows that $\sim 60\%$ of disk-dominated galaxies have $\theta < 30^{\circ}$ at $z=0$. This may induce a scatter in ${\boldsymbol j_{\rm h}}$ by a factor of $< 1-{\rm cos}\ 30^{\circ}=0.13$, which is negligible. This result is consistent with previous studies \citep[e.g.,][]{Bailin2005, Bett2010, Teklu2015, ShaoShi2016}. \citet{MotlochYu2021} further find a correlation between the galaxy spin direction and the halo spin reconstructed from cosmic initial conditions \citep{YuHaoRan2020, WuQiaoYa2021}. We thus ignore the effect of orientation misalignment in this study of disk-dominated galaxies. We conclude that angular momentum is roughly conserved, with a median factor $f_j^{'}\approx -0.21$ (corresponding to $j_{\rm s}/j_{\rm tot} \approx 0.62$), for disk-dominated central galaxies. A similar result is obtained in an independent analysis using the TNG100 run of IllustrisTNG\ \citep{Rodriguez-Gomez2022}. The overall correlation between galaxies and halos is maintained during the formation of disk-dominated galaxies. It is clear that the accretion of gas with high angular momentum has dominated the growth of disk galaxies since $z=1.5$. Without experiencing violent mergers, the assembly of disky structures is able to conserve angular momentum as stars form from the cold gas. While biased collapse has been considered to play an important role in interpreting the observed $j_{\rm s}$-$M_{\rm s}$ relation \citep[][]{ShiJingjing2017, Posti2018a}, our results indicate that its effect has been largely erased in the local Universe. 
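The misalignment statistic is straightforward to compute; a minimal sketch (illustrative only; the function name is ours):

```python
import numpy as np

def misalignment_angle(j1, j2):
    """Angle in degrees between two angular momentum vectors."""
    cosang = np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2))
    # clip guards against round-off pushing |cosang| slightly above 1
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# A 30-degree tilt reduces the projected component by 1 - cos(30 deg) ~ 0.13,
# the (negligible) scatter quoted in the text.
loss = 1.0 - np.cos(np.radians(30.0))
```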
\begin{figure*}[htbp] \begin{center} % \includegraphics[width=\textwidth]{./images_submit/Relation_onflow_SubhaloMass__SubhaloMassType_star_TNG50_nosat_krot2_sm9.pdf} \includegraphics[width=\textwidth]{./images_submit/Relation_onflow_SubhaloMass__SubhaloSpin__TNG50_nosat_krot2_sm9_normedM23_odark.pdf} \caption{Evolution of the $M_{\rm tot}$-$M_{\rm s}$ and $j_{\rm tot}$-$M_{\rm tot}$ relations of central galaxies from $z=1.5$ (right) to $z=0$ (left) in TNG50. The red, green, and blue symbols are central galaxies that correspond to spheroid-dominated galaxies, disk-dominated galaxies with $0.5 \leq \kappa_{\rm rot} < 0.7$, and disk-dominated galaxies with $\kappa_{\rm rot} \geq 0.7$, respectively. In the upper panels, we overlay the observations of disk galaxies from PFM19 and \citet{Moster2013} for comparison. Both the linear fitting and medians are measured using equal bins in log $M_{\rm s}$, thus giving the parameters $\beta$ and $f_{\rm m}^{'}$ of Equation \ref{eqMM} directly. In the bottom panels, we normalize $j_{\rm tot}$ by $M_{\rm tot}^{2/3}$ to highlight the discrepancy from tidal torque theory. In the bottom-left panel, the $j_{\rm tot}$-$M_{\rm tot}$ relation of central galaxies in TNG100-dark is overlaid for comparison. The shaded regions correspond to the 68 and 95 percentile envelopes. The dotted line and squares are the linear fitting result and median values, respectively.} \label{fig:M0j0_onflow} \end{center} \end{figure*} \subsection{Constraining the $j$-$M$ relation of halos with the SHMR} \label{sec:SHMR} According to the $j_{\rm s}$-$j_{\rm tot}$ relation, $\gamma \sim 1$, and therefore \refeq{eqmain} can be written as \begin{equation}\label{eqmain_v1} \begin{aligned} {\rm log}\ j_{\rm s} \simeq \alpha \beta\ {\rm log}\ M_{\rm s} + a + \alpha f_{\rm m}^{'} + f_j^{'}, \end{aligned} \end{equation} whose slope is determined by $\alpha$ and $\beta$. Tidal torque theory predicts $\alpha = 2/3$, which has been widely assumed. 
For this to hold, the index $\beta$ of the SHMR must be close to 1. In the upper panels of \reffig{fig:M0j0_onflow}, we show the $M_{\rm s}$-$M_{\rm tot}$ relation of TNG50 galaxies. A linear fit (dash-dotted line) to the disk-dominated galaxies (small blue and cyan dots) gives \begin{equation}\label{eq:M0Ms} \begin{aligned} {\rm log}\ M_{\rm tot} = (0.67\pm 0.01)\ {\rm log}\ M_{\rm s} - (4.80\pm 0.06) \end{aligned} \end{equation} at $z=0$, according to which $\beta=0.67$, significantly smaller than 1. We overlay the SHMR measured by PFM19 (black squares), who estimated halo masses directly from the kinematics of extended HI in central disk galaxies. The SHMR of TNG50 galaxies matches well the results of PFM19, while it is systematically offset from that derived with the abundance matching method \citep[e.g.,][the solid black curve]{Moster2013} for massive galaxies. The abundance matching method suggests that the stellar-to-halo mass ratio follows a broken power-law relation peaking at $M_{\rm tot} \approx 10^{12} M_\odot$ \citep[][]{Wechsler&Tinker2018}, assuming there is no dependence on galaxy morphology. In relatively less massive galaxies with halo mass $< 10^{12} M_\odot$, both the observations of PFM19 and the simulations are consistent with abundance matching. The absence of a significant down-bending break in the SHMR of massive disk galaxies, however, challenges the results of abundance matching. While the SHMR of TNG50 galaxies is roughly consistent with that of PFM19, we do see a relatively minor down-bending break for $M_{\rm tot} > 10^{12} M_\odot$ in TNG50. We have confirmed that the TNG100 simulation, run in an 8 times larger volume, also exhibits a similar minor down-bending break. A similar result was obtained by \citet{Marasco2020} using the TNG100 run. They suggested that the AGN feedback used in the TNG simulations is too efficient at suppressing star formation in massive disk galaxies. 
Here we simply use a linear fit to describe the SHMR because (1) massive disk galaxies with $M_{\rm s} \geq 10^{11} M_\odot$ that are offset significantly are rare, and (2) the difference from the median values (large blue dots) is minor. The IllustrisTNG\ simulations suggest that there is a non-negligible correction to the $j \propto M^{2/3}$ relation when baryonic processes are considered. The lower panels of \reffig{fig:M0j0_onflow} show the $j_{\rm tot}$-$M_{\rm tot}$ relation. Using ${\rm log}\ j_{\rm tot}- {\rm log}\ M_{\rm tot}^{2/3}$ as the $y$-axis to highlight the discrepancy from the traditional tidal torque theory, the bottom-left panel clearly shows that the dark matter-only runs of the IllustrisTNG\ simulations indeed generate $j \propto M^{2/3}$ (TNG100-dark, corresponding to the dotted line and shaded regions), consistent with the theoretical expectation. However, in the presence of baryons, the $j$-$M$ relation gradually deviates from this relation below $z=1$ (lower panels of \reffig{fig:M0j0_onflow}). At $z=0$, a fit to the central galaxies dominated by disks from TNG50 gives \begin{equation} \begin{aligned} {\rm log}\ j_{\rm tot} = (0.81\pm 0.02)\ {\rm log}\ M_{\rm tot} - (6.37\pm 0.21). \end{aligned} \end{equation} The power-law index reaches $\alpha = 0.81$ at $z=0$. Combining this with the SHMR of disk-dominated galaxies (equation \ref{eq:M0Ms}) yields \begin{equation}\label{eqmain_v2} \begin{aligned} {\rm log}\ j_{\rm s} \simeq 0.54\ {\rm log}\ M_{\rm s} + f_j^{'} - 2.48, \end{aligned} \end{equation} which explains well the $j_{\rm s}$-$M_{\rm s}$ index of 0.55 of disk galaxies at $z=0$ (\reffig{fig:Msjs_evo}). The decrease of $f_j^{'}$ leads to a parallel shift of the $j_{\rm s}$-$M_{\rm s}$ relation from disk-dominated toward more spheroid-dominated galaxies. As shown in \reffig{fig:fj}, $f_j^{'}\approx 0$ for the galaxies with $\kappa_{\rm rot} \geq 0.7$ at $z=0$. 
For all disk-dominated galaxies ($\kappa_{\rm rot} \geq 0.5$), for which $f_j^{'} \approx -0.21$, \refeq{eqmain_v2} gives ${\rm log}\ j_{\rm s} \simeq 0.54\ {\rm log}\ M_{\rm s} - 2.69$, which reproduces almost exactly the $j_{\rm s}$-$M_{\rm s}$ relation of disk-dominated galaxies at $z=0$ (equation \ref{eqjsMs}). The mass ratio of the spheroidal component, quantified by $\kappa_{\rm rot}$, clearly correlates inversely with $f_j^{'}$ (\reffig{fig:fj}), offering a qualitative explanation for the morphological dependence of the $j_{\rm s}$-$M_{\rm s}$ relation. At high redshifts, the deviation from the local $j_{\rm s}$-$M_{\rm s}$ relation is partially explained by the evolution of the $j_{\rm tot}$-$M_{\rm tot}$ relation and of the retention factor of angular momentum, as the $M_{\rm tot}$-$M_{\rm s}$ relation has remained nearly invariant since $z=1.5$. The $j_{\rm tot}$-$M_{\rm tot}$ and $M_{\rm tot}$-$M_{\rm s}$ relations at $z=1.5$ give ${\rm log}\ j_{\rm s} = 0.46\ {\rm log}\ M_{\rm s} - 1.86$ in the case of $j_{\rm tot}=j_{\rm s}$, which still cannot fully explain the index of 0.34 of the $j_{\rm s}$-$M_{\rm s}$ relation at high redshifts. This may be because galaxies were affected by biased collapse and by losses of angular momentum due to gas-rich mergers and clumpy instabilities at $z>1.5$, as a consequence of which the retention factor $f_j^{'}$ is smaller at high redshifts (right-most panel of \reffig{fig:fj}). Our results suggest that the dark matter-only $j\propto M^{2/3}$ relation cannot explain the $j_{\rm s}$-$M_{\rm s}$ relation, because the angular momentum gained by central halos through baryonic processes needs to be included. The SHMR can be used to probe the $j$-$M$ relation in the local Universe, where the effect of biased collapse becomes insignificant. 
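The composition of indices can be checked with one line of arithmetic (the values are copied from the fits quoted above; illustrative only):

```python
alpha, beta, gamma = 0.81, 0.67, 1.0   # fitted j_tot-M_tot and M_tot-M_s indices; gamma ~ 1
slope = alpha * beta * gamma           # predicted j_s-M_s index
fj_prime = -0.21                       # median retention factor for kappa_rot >= 0.5
intercept = -2.48 + fj_prime           # predicted intercept
# slope ~ 0.54 and intercept = -2.69, matching the fitted relation
# log j_s = 0.55 log M_s - 2.77 within the quoted uncertainties
```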
It is worth emphasizing that all conclusions above are mainly drawn from less massive galaxies, whose SHMR can be described by a linear fit and is not sensitive to the down-bending break in massive ($M_{\rm tot} > 10^{12}M_\odot$) galaxies. The mechanism responsible for the discrepancy from the traditional tidal torque theory is still not well known. In previous studies, \citet{Zjupa&Springel2017} suggested that the angular momentum in Illustris galaxies is underestimated by dark matter-only simulations, especially in massive cases. As shown in their Figure 19, the spin parameter indeed has a weak dependence on halo mass ($M_{\rm h} > 10^{11}M_\odot$), in a similar manner to our halo $j$-$M$ relation. We have verified that the disk-dominated galaxies in the TNG100 run show a similar discrepancy to those in TNG50 \citep[see the result of TNG100 also in Figure 10 of][]{Rodriguez-Gomez2022}. \citet{ZhuQirong2017} showed that the presence of the baryonic component can induce net rotation in the inner regions of dark matter halos, which may lead to the increase of their angular momentum. \citet{Pedrosa2010} suggested that central galaxies may acquire angular momentum from their satellites that are disrupted by dynamical friction. Similarly, \citet{LuShengdong2022} showed that galaxy interactions can inject angular momentum into the circumgalactic medium. Moreover, galaxies with relatively lower $j_{\rm tot}$ may have a higher probability of mergers, transforming their morphology into ellipticals. We see that the slope for all galaxies (black dashed lines) is slightly smaller, but this cannot fully explain the increase of $j_{\rm tot}$. \section{Summary} In this paper, we show that the TNG50 simulation reproduces the observed scaling relation between the stellar specific angular momentum $j_{\rm s}$ and mass $M_{\rm s}$ of galaxies, as measured in the local Universe.
The disk-dominated central galaxies in TNG50 follow ${\rm log}\ j_{\rm s} = 0.55\ {\rm log}\ M_{\rm s} - 2.77$, which matches observations remarkably well. Our result confirms that the observed $j_{\rm s}$-$M_{\rm s}$ relation may be regarded as evidence that the formation of disk galaxies is tightly correlated with their dark matter halos. However, the theoretical $j$-$M$ relation ($j \propto M^{2/3}$) from dark matter-only simulations is not able to explain the $j_{\rm s}$-$M_{\rm s}$ relation. We show that the local $j_{\rm s}$-$M_{\rm s}$ relation develops at $z\lesssim 1$ in disk galaxies. During this epoch, disky structures form or grow significantly. Angular momentum is roughly conserved during the assembly of disky structures, which leads to a median retention factor ${\rm log}\ j_{\rm s}/j_{\rm tot} = -0.07_{-0.17}^{+0.15} \ (-0.21_{-0.24}^{+0.21})$ for disk-dominated galaxies with $\kappa_{\rm rot} \geq 0.7\ (0.5)$. The $j_{\rm s}$-$M_{\rm s}$ relation of disk galaxies in the local Universe can be well explained by a simple model in which $j_{\rm tot}\propto M_{\rm tot}^{0.81}$, $M_{\rm tot}\propto M_{\rm s}^{0.67}$, and $j_{\rm s} \propto j_{\rm tot}$, where $j_{\rm tot}$ is the overall specific angular momentum and $M_{\rm tot}$ is the total mass of the dark and baryonic components. Because of the cumulative accretion of mass with high angular momentum, the effect of biased collapse has been erased at low redshifts. The index 0.55 of the $j_{\rm s}$-$M_{\rm s}$ relation thus follows from the indices of the $j_{\rm tot}$-$M_{\rm tot}$ and $M_{\rm tot}$-$M_{\rm s}$ relations. We show that a non-negligible deviation from the halo $j\propto M^{2/3}$ relation is required to explain the $j_{\rm s}$-$M_{\rm s}$ relation. This model further suggests that the stellar-to-halo mass ratio of disk galaxies increases monotonically following a nearly power-law function, which is consistent with the latest dynamical measurements of disk galaxies.
This challenges the general expectation from abundance matching that the stellar-to-halo mass ratio of disk galaxies decreases toward the massive end. Moreover, the retention factor of angular momentum correlates inversely with the mass ratio of spheroids, which possibly leads to the morphological dependence of the $j_{\rm s}$-$M_{\rm s}$ relation. \begin{acknowledgements} We thank an anonymous referee for helpful suggestions. The authors acknowledge constructive comments and suggestions from S. M. Fall, L. Posti, S. Liao, and J. Shi. LCH was supported by the National Science Foundation of China (11721303, 11991052, 12011540375) and the China Manned Space Project (CMS-CSST-2021-A04, CMS-CSST-2021-A06). MD and HRY acknowledge the support of the China Manned Space Program through its Space Application System, and the National Science Foundation of China (11903021 and 12173030). VPD was supported by STFC Consolidated grant ST/R000786/1. The TNG50 simulation used in this work, one of the flagship runs of the IllustrisTNG project, was run on the HazelHen Cray XC40 system at the High Performance Computing Center Stuttgart as part of project GCS-ILLU of the Gauss Centre for Supercomputing (GCS). The authors acknowledge the support of the high-performance computing facility of Xiamen University. This work is also supported by the High-performance Computing Platform of Peking University, China. \end{acknowledgements}
\section{Introduction} Strong gravitational lensing by galaxy clusters not only constrains their mass distributions \citep[e.g.][]{tc01,szb+08} but also provides an independent way to study high-redshift faint background galaxies, which are otherwise unobservable \citep[e.g.][]{mf93,mkm+03}. Furthermore, the statistics of gravitational arcs in lensing clusters can be used to constrain the paradigm of structure formation and cosmology \citep[e.g.][]{bhc+98,lmj+05}. Up to now, about 120 strong lensing clusters have been discovered \citep[e.g.][]{lp89,sef+91,lgh+99,zg03,gky+03,ste+05,hgo+08}. Almost all previous searches for lensing systems were based on ground-based telescopes with seeing between $0.5''$ and $1.5''$. \citet{ste+05} used the {\it HST} WFPC2 data and found 104 giant arcs in 54 clusters. Recently, based on 240 rich clusters identified from the Sloan Digital Sky Survey \citep[SDSS,][]{yaa+00} and follow-up deep imaging observations from the Wisconsin-Indiana-Yale NOAO 3.5\,m telescope and the University of Hawaii 88 inch telescope, \citet{hgo+08} uncovered 16 new lensing clusters with giant arcs, together with 12 likely lensing clusters and 9 possible candidates. In this letter, we report the discovery of 4 strong gravitational lensing systems by clusters in the SDSS DR6. The Einstein rings or giant arcs we found in the SDSS images suggest that these are excellent strong lensing systems. In addition, we found 5 {\it probable} and 4 {\it possible} lensing systems by galaxy clusters. \section{Lensing systems discovered from the SDSS DR6 images} The SDSS provides five broad bands ($u$, $g$, $r$, $i$, and $z$) for photometry and follow-up spectroscopy. The star-galaxy separation is reliable to the limit of $r=21.5$ \citep{lgi+01}. The photometric data reach the limit of $r=22.5$, with a mean seeing of 1.43$''$ in the $r$-band \citep{slb+02}.
Several valuable lenses have {\it previously} been discovered in the SDSS data, though the survey is shallow (55~s exposures with a 2.5~m telescope), making it difficult to detect faint giant arcs. \citet{atl+07} serendipitously found the ``8 O'clock arc'' from a Lyman break galaxy at $z=2.73$ lensed by a luminous red galaxy, SDSS J002240.91+143110.4 at $z=0.38$. \citet{bem+07,beh+09} looked for multiple blue objects around luminous red galaxies, and discovered the ``Cosmic horseshoe'', an almost complete Einstein ring of diameter $10''$ around SDSS J114833.15+193003.5 at $z=0.444$, and also the ``Cheshire Cat'', which consists of two galaxies at $z=0.97$ and $z>1.4$ lensed by a combination of two giant early-type galaxies, SDSS J103847.95+484917.9 and SDSS J103839.20+484920.3 at $z=0.426$ and 0.432, respectively. The lenses in the above systems are galaxies. Because there are a large number of high-redshift ($z\ge0.3$) clusters in the SDSS, they can act as efficient lenses. \citet{ead+07} performed a systematic search for giant arcs in 825 SDSS maxBCG clusters \citep{kma+07} with masses $M\ge 2\times10^{14}~M_{\odot}$, and found no gravitational arcs. However, they reported the serendipitous discovery of Hall's arc, lensed by a cluster, SDSS J014656.0$-$092952 at $z=0.447$. \citet{osk08} found the giant arc of a galaxy at $z=1.018$ lensed by a massive cluster, SDSS J120923.7+264047 at $z=0.558$. \citet{lba+08} found a galaxy at $z=2.00$ lensed by SDSS J120602.09+514229.5, the brightest galaxy in a cluster. \citet{sso+08} found a post-starburst galaxy at $z=0.766$ lensed by a cluster galaxy at $z=0.349$. {\it We searched for lensing systems} by visual inspection of the color images of a large sample of clusters. We first searched for clusters by determining the luminous cluster members ($M_r\le-21$) using the photometric redshifts of galaxies brighter than $r=21.5$ \citep{cbc+03}.
A cluster at redshift $z$ is identified when at least 8 member galaxies lie within a projected radius of 0.5 Mpc and have photometric redshifts within $z\pm\Delta z$. Here, we set $\Delta z=0.04(1+z)$ to allow for the redshift-dependent uncertainties of the photometric redshifts. These criteria can significantly reduce the false detection rate of clusters. From the SDSS DR6, we identified $\sim40,000$ clusters at $0.05< z <0.6$ with an overdensity greater than 4.5 (Wen et al. in preparation). Secondly, we searched for gravitational lensing features in these clusters by inspecting the composite ($g$, $r$ and $i$) color images from the SDSS web page\footnote{http://cas.sdss.org/astro/en/} independently by at least three authors. We found 13 new candidates for lensing systems, in addition to the known cases in 6 clusters, as listed in Table~\ref{lens.tab}. For each cluster, Table~\ref{lens.tab} gives the name, redshift and mass, the angular separation of the arc from the bright central galaxy, the magnitude and color of the brightest part of the arc, and notes. \begin{table}[h!]
\centering \begin{minipage}[]{100mm} \caption[]{Lensing systems found to be associated with the SDSS clusters.} \label{lens.tab}\end{minipage} \vspace{-3mm} \fns \tabcolsep 1.5mm \begin{tabular}{lllllll} \noalign{\smallskip}\hline\noalign{\smallskip} Cluster name& $z$ & M$_{200}$ & $\theta$ & $r$ & $g-r$ & Reference, Notes\\ & & ({\tiny $10^{14}M_{\odot}$})& ($''$) & & &\\ \hline\noalign{\smallskip} SDSS J014656.0$-$092952 & 0.447 & $>$41.0& 13.6 & 21.13$\pm$0.09 & 0.54$\pm$0.13& 1, three giant arcs\\ NSC J082722.2+223244 & 0.335 & 23.8& 4.4 & 20.16$\pm$0.05 &--0.13$\pm$0.06& 2, multiple images\\ MACS J113313.1+500840 & 0.389 & 11.6& 10.0 & 21.62$\pm$0.13 &--0.10$\pm$0.16& 3, giant arc\\ SDSS J120602.0+514229 & 0.442 & $>$7.7& 4.3 & 20.29$\pm$0.05 & 0.21$\pm$0.06& 4, arc near galaxy\\ SDSS J120923.6+264046 & 0.558 & $>$28.7& 11.3 & 22.37$\pm$0.26 & 0.82$\pm$0.42& 5, giant arc\\ NSCS J124034.0+450923 & 0.278 & 5.3 & 3.1 & 19.89$\pm$0.03 &--0.10$\pm$0.04& 6, arc near galaxy\\[1mm] SDSS J090002.6+223404 & 0.489 & $>$6.6& 8.4 & 20.41$\pm$0.06 & 0.06$\pm$0.07& 7*, almost certain:Einstein ring?\\ SDSS J111310.6+235639 & 0.324 & 26.8& 12.8 & 21.61$\pm$0.11 & 1.53$\pm$0.28& 7, almost certain:giant arc\\ SDSS J134332.8+415503 & 0.418 & 11.7& 12.4 & 21.09$\pm$0.12 & 0.10$\pm$0.16& 7, almost certain:giant arc\\ SDSS J223831.3+131955 & 0.413 & 9.7& 9.3 & 21.73$\pm$0.16 & 0.57$\pm$0.24& 7*, almost certain:Einstein ring\\[1mm] SDSS J095739.1+050931 & 0.442 & $>$5.8& 8.0 & 20.31$\pm$0.07 & 0.18$\pm$0.09& 7, probable:giant arc\\ SDSS J113740.0+493635 & 0.448 & $>$5.6& 3.8 & 20.39$\pm$0.04 & 0.04$\pm$0.05& 6,7*, probable:arc near galaxy\\ maxBCG J120735.9+525459 & 0.278 & 7.0& 11.3 & 20.80$\pm$0.07 & 1.21$\pm$0.15& 7, probable:giant arc\\ SDSS J131811.5+394226 & 0.475 & $>$6.6& 8.9 & 20.55$\pm$0.09 & 0.44$\pm$0.13& 7, probable:giant arc\\ NSCS J122648.0+215157 & 0.418 & 46.2& 10.8 & 21.74$\pm$0.19 & 0.17$\pm$0.22& 7, probable:giant arc\\[1mm] SDSS J123736.2+553342 & 0.410 & 13.4&
4.6 & 20.14$\pm$0.04 & 0.18$\pm$0.05& 6,7*, possible:blue arc?\\ SDSS J131534.2+233301 & 0.517 & $>$5.7& 5.5 & 20.36$\pm$0.08 & 0.96$\pm$0.15& 7, possible:arc?\\ SDSS J162132.3+060719 & 0.343 & 7.4& 16.2 & 19.88$\pm$0.06 & 1.20$\pm$0.13& 7, possible:giant arc?\\ SDSS J172336.1+341158 & 0.431 & $>$8.5& 4.3 & 20.91$\pm$0.05 &--0.26$\pm$0.06& 7*, possible:multiple images?\\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \tablecomments{0.9\textwidth}{Here we list the name, redshift and mass of each cluster. The angular separation ($\theta$) between the arc and the central galaxy, the $r$-band magnitude and color ($g-r$) of the brightest part of the arc, and also the references and notes on the lensing systems, are given in the following columns. References: (1) \citet{ead+07}; (2) \citet{sso+08}; (3) \citet{ste+05}; (4) \citet{lba+08}; (5) \citet{osk08}; (6) \citet{beh+09}; (7) This work. *: CASSOWARY candidates, see the end of Sect.3.} \end{table} The cluster mass, $M_{200}$, was estimated from the summed $r$-band luminosity within $r_{200}$. Here, $r_{200}$ is the radius within which the mean mass density is 200 times the critical cosmic mass density. The mass-to-light ratio was calibrated by comparing the cluster masses derived by \citet{rb02} with the summed $r$-band luminosities of a sample of clusters (Wen et al. in preparation). The masses determined in this way are lower limits for clusters at $z>0.42$, because their member lists are incomplete at the faint end of $M_r=-21$. {\it By careful inspection of the SDSS images} of these candidates, we find that 4 of them are almost certainly lensing systems (see Fig.~\ref{lens sure}), though the images are somewhat shallow. For the system SDSS J090002.6+223404, the three arclets (A, B, C) around the two central galaxies nearly form a circle with a radius of 8.4$''$, which may be an Einstein ring if the arclets are images of the same background source.
SDSS J223831.3+131955 shows a faint Einstein ring with a radius of 9.3$''$. SDSS J111310.6+235639 and SDSS J134332.8+415503 show giant arcs tangential to the bright central galaxies, with separations of more than $12''$. The faint but clear arcs are usually blue compared with the bright central galaxies. \begin{figure}[h!!] \vs\vs \resizebox{35mm}{!}{\includegraphics{f1a.eps}}~% \resizebox{35mm}{!}{\includegraphics{f1b.eps}}~% \resizebox{35mm}{!}{\includegraphics{f1c.eps}}~% \resizebox{35mm}{!}{\includegraphics{f1d.eps}}\\[1mm] \resizebox{35mm}{!}{\includegraphics{f1e.eps}}~% \resizebox{35mm}{!}{\includegraphics{f1f.eps}}~% \resizebox{35mm}{!}{\includegraphics{f1g.eps}}~% \resizebox{35mm}{!}{\includegraphics{f1h.eps}}\\[1mm] \caption{\baselineskip 3.6mm The SDSS composite color images ($g$, $r$ and $i$) of 4 clusters which show Einstein rings or giant arcs. These are almost certainly gravitational lensing systems. The images have a field of view of 1.2$'\times$1.2$'$. The negative images are also shown in the second rows to display the lensing features more clearly. \label{lens sure}} \end{figure} \begin{figure} \resizebox{35mm}{!}{\includegraphics{f2a.eps}}~% \resizebox{35mm}{!}{\includegraphics{f2b.eps}}~% \resizebox{35mm}{!}{\includegraphics{f2c.eps}}~% \resizebox{35mm}{!}{\includegraphics{f2d.eps}}\\[1mm] \resizebox{35mm}{!}{\includegraphics{f2e.eps}}~% \resizebox{35mm}{!}{\includegraphics{f2f.eps}}~% \resizebox{35mm}{!}{\includegraphics{f2g.eps}}~% \resizebox{35mm}{!}{\includegraphics{f2h.eps}}\\[1mm] \resizebox{35mm}{!}{\includegraphics{f2i.eps}}~% \resizebox{35mm}{!}{\includegraphics{f2j.eps}} \caption{\baselineskip 3.6mm Same as Fig.~\ref{lens sure}, but for 5 clusters which are {\it probable} lensing systems. The system NSCS J122648.0+215157 in the last row was found in a merging cluster during the proof-reading stage of this paper.
\label{lens prob}} \resizebox{35mm}{!}{\includegraphics{f3a.eps}}~% \resizebox{35mm}{!}{\includegraphics{f3b.eps}}~% \resizebox{35mm}{!}{\includegraphics{f3c.eps}}~% \resizebox{35mm}{!}{\includegraphics{f3d.eps}}\\[1mm] \resizebox{35mm}{!}{\includegraphics{f3e.eps}}~% \resizebox{35mm}{!}{\includegraphics{f3f.eps}}~% \resizebox{35mm}{!}{\includegraphics{f3g.eps}}~% \resizebox{35mm}{!}{\includegraphics{f3h.eps}}\\[1mm] \centering \begin{minipage}[]{105mm} \caption{Same as Fig.~\ref{lens sure}, but for 4 clusters which are {\it possible} lensing systems. \label{lens poss}}\end{minipage} \end{figure} The total mass (including dark matter) within the Einstein radius $r_{\rm E}=D_{\rm l} \theta_{\rm E}$ can be estimated by \begin{equation} M(<r_{\rm E})=\frac{c^2r_{\rm E}^2}{4G}\frac{D_{\rm s}}{D_{\rm l}D_{\rm ls}}, \end{equation} where $D_{\rm s}$ and $D_{\rm l}$ are the angular diameter distances of the source and the lens from the observer, and $D_{\rm ls}$ is the angular diameter distance of the source from the lens. We approximate the angular Einstein radius $\theta_{\rm E}$ by $\theta$; this is only an approximation because of the non-sphericity of clusters. In a $\Lambda$CDM cosmology (H$_0=$72 ${\rm km~s}^{-1}$ ${\rm Mpc}^{-1}$, $\Omega_m=0.3$ and $\Omega_{\Lambda}=0.7$), we obtain a mass $M(<r_{\rm E})=2.3\times10^{13}~M_{\odot}$ for SDSS J090002.6+223404 if we assume a source redshift of $z_{\rm s}=1$, or $M(<r_{\rm E})=1.6\times10^{13}~M_{\odot}$ if $z_{\rm s}=2$. Similarly, for SDSS J223831.3+131955, the mass is $M(<r_{\rm E})=2.3\times10^{13}~M_{\odot}$ if $z_{\rm s}=1$ or $M(<r_{\rm E})=1.7\times10^{13}~M_{\odot}$ if $z_{\rm s}=2$. We also found another 5 clusters which are {\it probable} lensing systems (see Fig.~\ref{lens prob}). All of them show blue arclets, which are tangential to the bright central galaxies and are distinct in color from the red cluster galaxies.
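The Einstein-ring mass estimate of equation (1) can be reproduced numerically. The following sketch assumes the stated flat $\Lambda$CDM cosmology (H$_0=72$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.3$, $\Omega_\Lambda=0.7$) and computes angular diameter distances by direct integration, using only the Python standard library:

```python
import math

# Sketch of the Einstein-ring mass estimate of equation (1), assuming a
# flat LambdaCDM cosmology and computing angular diameter distances by
# direct integration of 1/E(z).
C_KMS, C_SI, G_SI = 299792.458, 2.998e8, 6.674e-11
MPC_M, MSUN_KG = 3.086e22, 1.989e30
H0, OM, OL = 72.0, 0.3, 0.7

def comoving_mpc(z, n=2000):
    """Line-of-sight comoving distance in Mpc (trapezoidal integration)."""
    dz = z / n
    f = [1.0 / math.sqrt(OM * (1 + i * dz) ** 3 + OL) for i in range(n + 1)]
    return (C_KMS / H0) * dz * (sum(f) - 0.5 * (f[0] + f[-1]))

def einstein_mass_msun(z_lens, z_src, theta_arcsec):
    """M(<r_E) = c^2 r_E^2 / (4 G) * D_s / (D_l D_ls), in solar masses."""
    dc_l, dc_s = comoving_mpc(z_lens), comoving_mpc(z_src)
    d_l, d_s = dc_l / (1 + z_lens), dc_s / (1 + z_src)
    d_ls = (dc_s - dc_l) / (1 + z_src)     # valid for a flat universe
    theta = theta_arcsec * math.pi / (180.0 * 3600.0)
    r_e = d_l * theta * MPC_M              # Einstein radius in metres
    mass = C_SI ** 2 * r_e ** 2 / (4 * G_SI) * d_s / (d_l * d_ls * MPC_M)
    return mass / MSUN_KG

# SDSS J090002.6+223404 (z = 0.489, theta = 8.4"):
print(f"{einstein_mass_msun(0.489, 1.0, 8.4):.2e}")  # ~2.4e13 (text: 2.3e13)
print(f"{einstein_mass_msun(0.489, 2.0, 8.4):.2e}")  # ~1.6e13
```

The small difference from the quoted $2.3\times10^{13}~M_\odot$ reflects rounding in the adopted constants.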
Notably, the giant arcs in the clusters SDSS J095739.1+050931 and NSCS J122648.0+215157 are faint and have very large separations ($>10''$) from the central galaxy. SDSS J113740.0+493635 has a blue arc which is very close ($3.8''$) to the central red galaxy. In the cluster maxBCG J120735.9+525459, the blue giant arc stands out among the red member galaxies. Another 4 clusters are {\it possible} lensing systems (see Fig.~\ref{lens poss}). The blue images around the central galaxy of SDSS J123736.2+553342 may be the lensed images of a background source. However, the bright blue object at the top can also be a foreground galaxy. In the cluster SDSS J131534.2+233301, the three faint arclets may form a half ring surrounding several member galaxies rather than the brightest galaxy. These arclets may be independent, or they may just be faint member galaxies. In the cluster SDSS J162132.3+060719, the arc is tangential to the bright central galaxy and has a large separation ($16.2''$), but does not differ much in color from the cluster galaxies. It may also be a combination of an edge-on galaxy plus other faint objects. The outstanding blue arclet in SDSS J172336.1+341158 is very close ($4.3''$) to the central red galaxy. It may be the lensed and magnified image of a background galaxy, but the possibility that it is a foreground galaxy cannot be excluded. \section{Final Remarks} Using the SDSS data, we found 4 almost certain cluster lenses, plus 5 probable and 4 possible lenses. Together with the 6 known cluster lenses, there are at least 10 lensing systems identified from the SDSS, which preferentially have redshifts around 0.4 and masses of at least $5\times10^{14}~M_{\odot}$. The separations of the blue arcs or rings from the central red galaxy are usually several arcseconds. If the 19 clusters listed in Table~\ref{lens.tab} are all lenses, 6 of them have redshifts of $0.2<z<0.4$, and 13 of $0.4<z<0.6$.
Comparing them with the 7568 and 10121 clusters with masses $\geq 5\times10^{14}~M_{\odot}$ in the corresponding redshift ranges (Wen et al. in preparation), we find that the occurrence probability of lensing clusters in the shallow SDSS images increases from $7.9\times10^{-4}$ in the range $0.2<z<0.4$ to $1.3\times10^{-3}$ in the range $0.4<z<0.6$. This increase of the lensing probability with redshift is consistent with that found by \citet{gky+03}. Follow-up observations are necessary to confirm the {\it probable} or {\it possible} lensing systems in 9 clusters. Unfortunately, we cannot easily access large optical telescopes for follow-up confirmation, so we publish the list of candidates together with the SDSS color images to encourage follow-up observations by others. After we submitted this paper to this journal and to astro-ph, we learned from Dr.~V.~Belokurov that the 5 objects marked with ``*'' in Table~\ref{lens.tab} have been listed in their CASSOWARY catalogue (see http://www.ast.cam.ac.uk/research/cassowary/) as lensing candidates CSWA 19, CSWA 10, CSWA 7, CSWA 13 and CSWA 14, respectively. The images of CSWA 7 and CSWA 13 have been published in \citet{beh+09}. These 5 objects are therefore independent ``re-discoveries''. \noindent{\bf NOTES ADDED IN PROOF:} 1) We found another probable lensing system by a merging cluster, NSCS J122648.0+215157 \citep{ldg+04}, which we have added to Fig.~\ref{lens prob} and Table~\ref{lens.tab}; 2) We note that Kubo et al. (2008) have just submitted a paper reporting their SDSS arc survey results, including follow-up observations of two lensing systems we found independently, SDSS J111310.6+235639 and J113740.0+493635. \normalem \begin{acknowledgements} We thank Prof. Xiang-Ping Wu and Shude Mao for a careful reading of the manuscript and the referee for helpful comments.
The authors are supported by the National Natural Science Foundation of China (NSFC, Nos.10521001, 10773016 and 10833003) and the National Key Basic Research Science Foundation of China (2007CB815403). Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. \end{acknowledgements} \bibliographystyle{raa}
\section{Introduction} Much of our current knowledge regarding star-forming patterns and circumstellar disk evolution derives from the study of molecular cloud complexes within a few hundred parsecs of the Sun. Among these are a large number of lower-mass clouds such as Taurus and, more infrequently, dense clouds like Orion, which is the prototypical high-mass and high-density star-forming region. While nearby cloud complexes serve as our primary empirical guide to understanding the formation and early evolution of stars, it is important that we study more than just the nearest examples. One star-forming region beyond the solar neighborhood that has been little studied to date is the North American and Pelican Nebulae (NGC\,7000 and IC\,5070, respectively), towards $l$=85, $b$=$-$0.5, about 600\,pc from Earth. This region is probably more representative of star formation in the disk of the Milky Way than Taurus or Orion; it is also the next closest region after Orion with a substantial number of intermediate- and high-mass stars. The physical appearance of the nebulae on optical images is thought to reflect a combination of a large, background \ion{H}{2} region (W80) overlaid by several foreground, dark clouds, with the edges of the dark clouds illuminated by the optically-unseen primary exciting star of the \ion{H}{2} region. Because these nebulae lie essentially in the plane of the Galaxy and, as seen by us, along a spiral arm, study of the region is further confused by the juxtaposition of young stars and star-forming regions at a variety of distances. For the purposes of the rest of the paper, we will refer to the entire region as the NANeb, or the NAN complex. \citet{herbig-1958} was one of the first to study the star formation associated with these nebulae. He identified 68 H$\alpha$\ emission line stars from objective-prism plates of the region, including a small cluster of pre-main sequence (PMS) stars in the ``Gulf of Mexico" portion of the NANeb.
Many of the young stars identified by Herbig lie along the bright rims of the dark clouds, possibly indicating that they are recent, second generation star formation products triggered by the expanding \ion{H}{2} region from the first generation of stars in the NANeb. Herbig estimated a rough distance of 500\,pc for the star-forming complex, but with a variety of caveats related to whether this distance applied to some or all of the stars and nebulae. The best available modern distance for the region appears to be that of \citet{laugalys-2002}, who estimate a distance of $\simeq$600 pc. We will use this distance for the purposes of the paper with the caveats noted above. \citet{Bally-1980} mapped the CO associated with the dark clouds that define the Gulf of Mexico and Atlantic Ocean, and estimated that the mass in gas still present is of order $3\times10^4\,M_\odot$. Their interpretation of the CO data was that the remaining dark clouds are the remnants of a much larger molecular cloud complex that has largely been disrupted by the hot, massive stars that were formed a few million years ago. \citet{Cambresy-2002} used the Two-Micron All-Sky Survey (2MASS; \citealp{Skrutskie-2006}) point-source catalog to map the extinction and star clustering towards this region, deriving extinctions of $A_v \geq$ 35 magnitudes in some portions of the darkest cloud (Lynds\,935). Cambr\'esy et al. also identified nine apparent star clusters in the 2MASS data. \citet{Bally-2003} obtained narrow-band H$\alpha$ and [\ion{S}{2}] imaging of the region of the Pelican nebula, and identified a number of new HH objects and collimated outflows, mostly originating from the interface region between the dark cloud and the \ion{H}{2} region. In a couple of cases, the flows originate from near the tip of elephant trunks. The exciting sources for most of the flows could not be identified from the existing data. 
\citet{armond-2003} obtained additional narrow- and broad-band optical and near-IR imaging of the NANeb, and identified 28 more HH objects, including many associated with the PMS stars of Herbig's Gulf of Mexico cluster. More recently, the probable exciting source for the \ion{H}{2} region has been identified as an O5 star behind about $A_V$=10 mag of extinction, located inside the dark cloud LDN\,935 separating the North America and Pelican Nebulae \citep{Comeron-2005b}. We have conducted a large ($\sim$9 deg$^2$) infrared imaging survey of this region with the Spitzer Space Telescope \citep{Werner-2004}, along with supporting data obtained in the optical for the $\sim 2.4\arcdeg \times 1.7\arcdeg$ central region. The present paper, the first of a series, presents the Infrared Array Camera (IRAC) data. Future papers will cover the Multiband Imaging Photometer for Spitzer (MIPS) data, with special emphasis on the Gulf of Mexico cluster (Rebull et~al.\ 2009; hereafter R09), and the optical classification spectroscopy (Hillenbrand et~al.\ in prep). First, we present the observational details for the IRAC observations (\S\ref{sec:obs}). We then use the IRAC colors to select a minimally contaminated, if not complete, sample of YSO candidates (\S\ref{sec:ysos}), and discuss their properties (classes, spatial distribution) in the context of other star-forming regions. Because IRAC is very sensitive to emission from HH objects, we then discuss the IRAC observations of one of the HH objects found in this region (\S\ref{sec:hhobj}).
\section{Observations, Ancillary Data, and Basic Data Reduction} \label{sec:obs} \begin{deluxetable}{lcc} \tablecaption{Summary of IRAC observations (programs 20015 and 462)\label{tab:obs}} \tablewidth{0pt} \tablehead{ \colhead{Program : field-ID} & \colhead{map center} & \colhead{AORKEY} } \startdata P20015 : 12 & 20h55m30.00s,+44d59m00.0s & 16790528 \\ P20015 : 13 & 20h53m06.00s,+44d59m00.0s & 16790784 \\ P20015 : 21 & 20h57m54.00s,+44d23m00.0s & 16791040 \\ P20015 : 22 & 20h55m30.00s,+44d23m00.0s & 16791296 \\ P20015 : 23 & 20h53m06.00s,+44d23m00.0s & 16791552 \\ P20015 : 24 & 20h50m42.00s,+44d23m00.0s & 16791808 \\ P20015 : 31 & 20h57m54.00s,+43d47m00.0s & 16792064 \\ P20015 : 32 & 20h55m30.00s,+43d47m00.0s & 16792320 \\ P20015 : 33 & 20h53m06.00s,+43d47m00.0s & 16792576 \\ P20015 : 34 & 20h50m42.00s,+43d47m00.0s & 16792832 \\ P20015 : 42 & 20h55m30.00s,+43d11m00.0s & 16793088 \\ P20015 : 43 & 20h53m06.00s,+43d11m00.0s & 16793344 \\ P00462 : 0 & 20h50m38.00s,+45d05m10.0s & 24251392 \\ P00462 : 1 & 21h01m13.00s,+44d54m43.0s & 24251648 \\ P00462 : 2 & 20h58m51.00s,+45d12m42.0s & 24251904 \\ P00462 : 3 & 20h59m05.00s,+42d59m38.0s & 24252160 \\ P00462 : 4 & 21h00m17.00s,+43d23m25.0s & 24252416 \\ P00462 : 5 & 21h01m50.00s,+44d10m34.0s & 24252672 \\ P00462 : 6 & 20h50m19.00s,+42d58m37.0s & 24252928 \\ P00462 : 7 & 20h48m44.00s,+43d17m51.0s & 24253184 \\ \enddata \end{deluxetable} \subsection{Observations} The Spitzer observations of the NAN complex were obtained as part of the Cycle-2 program 20015 (PI: L. Rebull). Additional observations of the ``corners'' of the map were obtained as part of program 462 in an effort to increase the legacy value of the data set. The IRAC observations from program 20015 were obtained 9-11 August 2006, and the IRAC observations from program 462 were obtained 15-27 November 2007. IRAC \citep{Fazio-2004} observes at 3.6, 4.5, 5.8, and 8\,$\mu$m.
The Cycle-2 IRAC observations were designed to cover the region of highest extinction in a manner as independent of observing constraints as possible. Figure \ref{fig:mosaic_optical} shows the NANeb region in the optical (from the Digitized Palomar Observatory Sky Survey). The region covered by our IRAC map is indicated, as is the $A_v=5$ contour from the \cite{Cambresy-2002} extinction map. The observations were broken into 12 astronomical observation requests (AORs); the AORKEYs are given in Table~\ref{tab:obs}. This region is at a high ecliptic latitude (+57$^{\circ}$), so the field of view rotates by about a degree per day, necessitating small individual AOR coverage in order to completely tile the region without leaving gaps. Also, because of the high ecliptic latitude, asteroids are not a significant concern, and we therefore obtained all of the imaging at a single epoch. Each mapping AOR was constructed with the same strategy -- at each map step, four dither positions were observed, each with high-dynamic-range (HDR) exposures of 0.6 and 12.0 second frame times. For this strategy, the on-line SENS-PET for a high-background region reports 3-$\sigma$ point-source sensitivities of 7.2, 10.7, 66, and 78\,$\mu$Jy for the 4 IRAC bands, respectively. Differential source count histograms for our point sources are very similar to those for other star-forming regions observed in a similar manner -- see in particular Figure 8 of Winston et al. 2007 (ApJ 669, 493) for 3.6 $\mu$m; the NANeb histograms indicate that our 90\% completeness limits are about [3.6] = 15, decreasing to about [8.0] = 12.5. For our purposes, however, the more important number is the completeness for detecting objects in all four channels. We estimate this by determining where the four-channel differential source counts drop below the 3.6 $\mu$m source counts by more than 20\%, which occurs at about [3.6] = 12.2.
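The four-channel completeness criterion described above can be sketched as follows. The source-count arrays here are purely hypothetical; only the $>$20\% drop criterion is taken from the text:

```python
# Minimal sketch (with hypothetical counts) of the completeness check
# described above: find the magnitude at which the four-channel
# differential source counts drop more than 20% below the 3.6 um counts.
def completeness_limit(mag_bins, counts_ch1, counts_4ch, threshold=0.2):
    """Return the first bin center where N_4ch < (1 - threshold) * N_ch1."""
    for mag, n1, n4 in zip(mag_bins, counts_ch1, counts_4ch):
        if n1 > 0 and n4 < (1.0 - threshold) * n1:
            return mag
    return None

# Hypothetical differential counts per 0.5-mag bin:
bins  = [10.5, 11.0, 11.5, 12.0, 12.5, 13.0]
n_ch1 = [  80,  120,  180,  260,  380,  520]
n_4ch = [  78,  115,  170,  250,  290,  310]
print(completeness_limit(bins, n_ch1, n_4ch))  # 12.5
```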
This is about as expected, given that the four-channel catalog is normally limited by the detections in the 8.0 $\mu$m channel. This completeness limit corresponds to a spectral type of M4 or M1 according to the BCAH98 isochrones \citep{Baraffe-1998} for 1 or 5 Myr, respectively, a distance of 600\,pc (assuming no reddening), and the temperature-to-spectral-type scale defined by \cite{Luhman-2003b}. The faintest object in our final catalog has [3.6]$\sim$16, corresponding to a mass of 0.02-0.03\,M$_\odot$\ at an age of 1-5\,Myr, assuming the 3.6\,$\mu$m flux is photospheric. The remaining IRAC observations from program 462 were designed using the fixed-cluster observing mode to cover the irregularly-shaped regions to the same depth as the main map and to create a final map that is approximately square in shape. The map center given in Table~\ref{tab:obs} is the approximate center of each AOR. We used the software developed by R.~Gutermuth and T.~Megeath (private communication) for use by the IRAC GTO and Gould's Belt teams to construct these AORs. \begin{figure*} \plotone{mosaic_optical_lr} \caption{Optical image ($\sim 4\times3.2$ degrees) of the NANeb region from the Palomar all-sky survey. North is up, East is to the left. The left and right edges are at 21h02m32s and 20h46m52s; the top and bottom edges are at 45d25m40s and 42d40m56s. The region observed by IRAC is indicated by red lines. The green line is a smoothed $A_v$=5 contour from the \cite{Cambresy-2002} extinction map. The Galactic coordinates of the center of this image are l=84.8, b=-0.6.} \label{fig:mosaic_optical} \end{figure*} \subsection{Basic Data Reduction} We started with the Spitzer Science Center (SSC) pipeline-produced basic calibrated data (BCDs), version S14.4. We ran the IRAC Artifact Mitigation code written by S.\ Carey and available on the SSC website. We constructed mosaics from the corrected BCDs using the SSC mosaicking and point-source extraction software package MOPEX \citep{Makovoz-2005}.
The mosaics have a pixel scale of 1.22$\arcsec$ px$^{-1}$, very close to the native pixel scale. Figure \ref{fig:mosaic_I1} shows the mosaics in channel~1 (3.6\,$\mu$m), and Figure \ref{fig:mosaic_3colors} shows a 3-color IRAC mosaic (4.5, 5.8, and 8\,$\mu$m). We performed aperture photometry on the combined long- and short-exposure mosaics separately, using the output of the APEX detection algorithm (part of the MOPEX package) and the ``aper'' IDL procedure from the IDLASTRO library. We used a 2 pixel radius aperture and a sky annulus of 2-6 pixels. The (multiplicative) aperture corrections we used follow the values given in the IRAC Data Handbook: 1.213, 1.234, 1.379, 1.584 for IRAC channels 1, 2, 3, and 4, respectively. Fluxes have been converted to magnitudes given the zero-magnitude fluxes of 280.9\,$\pm$4.1, 179.7\,$\pm$2.6, 115.0\,$\pm$1.7 and 64.1\,$\pm$0.9\,Jy for channels 1, 2, 3, and 4, respectively (IRAC Data Handbook). We take our photometry errors to be those given by the IDL procedure. We have checked that the uncertainties derived by the IDL procedure are consistent with the dispersion in IRAC colors for bright, off-cloud sources. For the longer integration times, the estimated average errors on our photometry in the 4 IRAC channels are 0.020, 0.025, 0.031, 0.035 magnitudes for bright stars, increasing to $\sim$0.1 mag for channels 1-4 at 15, 14.7, 13.2, 12.7 mag. Since the sources have been extracted individually from the mosaicked BCDs, we have used the overlap between BCDs to estimate our astrometric precision. The histogram of coordinate differences shows a one sigma RMS uncertainty of 0.3$\arcsec$ ($\sim 1/4$ of a pixel) in both directions. The APEX source detection algorithm has a tendency to identify multiple sources within the PSF of a single bright source, which can cause significant confusion at the bandmerging stage.
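The aperture correction and flux-to-magnitude conversion just described can be sketched as follows. This is a minimal illustration, not our pipeline code; `irac_mag` is a hypothetical helper, and the constants are the IRAC Data Handbook values quoted above.

```python
# Sketch of the aperture-photometry calibration described in the text.
# Constants are the Handbook values quoted above; the helper name is ours.
import math

# Multiplicative aperture corrections for a 2 px aperture, 2-6 px sky annulus
APCOR = {1: 1.213, 2: 1.234, 3: 1.379, 4: 1.584}
# Zero-magnitude fluxes in Jy for IRAC channels 1-4
F0_JY = {1: 280.9, 2: 179.7, 3: 115.0, 4: 64.1}

def irac_mag(raw_flux_jy, channel):
    """Aperture-corrected flux (Jy) converted to an IRAC magnitude."""
    flux = raw_flux_jy * APCOR[channel]
    return -2.5 * math.log10(flux / F0_JY[channel])
```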
Because our nominal fluxes are from the 2 pixel aperture photometry, any object which has a companion within 2 pixels will have a confused flux. The photometry lists were therefore cleaned of multiple sources prior to the bandmerging: for objects with a companion within 2 pixels, the source with the lower signal-to-noise ratio was removed. We extracted photometry from the long and short exposure mosaics separately for each channel, and merged these source lists together by position using a search radius of 2 pixels (2.44$\arcsec$) to obtain a catalog for each channel. The magnitude cutoff where we transition to using the long rather than the short exposure photometry corresponds to 11, 10, 8.4, and 7.5 mag for channels 1, 2, 3 and 4, respectively. Because of the complex nebulosity in this region, APEX (like any point-source detection algorithm) can be fooled by structure in the nebulosity. This effect is most apparent at 8\,$\mu$m, where there are also the fewest stellar (point) sources detected. To attempt to remove false sources, we have compared photometry obtained via 2- and 3-pixel apertures. After applying the aperture corrections given in the IRAC Data Handbook, we eliminated sources with a magnitude difference $>0.2$ mag; doing so rejected 32\% of the raw detections in IRAC's 8\,$\mu$m band. This process is only applied to the 8\,$\mu$m channel because the nebulosity is most prominent there (due to the presence of strong PAH bands at 7.7 and 8.6\,$\mu$m). Also, because there are the fewest point sources detected in this band (see Table~\ref{tab:stat}), the chances of there being two legitimate sources within 2-3 pixels of each other are much lower than in channels 1 or 2. Spot-checking the images confirms this assertion. We acknowledge this step may eliminate some true YSOs from our 4-band catalog.
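Two of the bookkeeping steps above -- the long/short transition and the 2- vs.\ 3-pixel consistency test -- reduce to simple rules. A sketch with hypothetical helper names, using the cutoff and tolerance values quoted in the text:

```python
# Illustrative helpers for two bookkeeping steps described in the text.
# Cutoff magnitudes and the 0.2 mag tolerance are from the text;
# the function names and None-handling are our own assumptions.

LONG_SHORT_CUTOFF = {1: 11.0, 2: 10.0, 3: 8.4, 4: 7.5}

def adopt_magnitude(channel, long_mag, short_mag):
    """Adopt short-exposure photometry for sources brighter than the
    channel cutoff (where the long frames saturate), long otherwise."""
    if long_mag is not None and long_mag >= LONG_SHORT_CUTOFF[channel]:
        return long_mag
    return short_mag

def is_clean_8um_source(mag_2px, mag_3px, tol=0.2):
    """Keep an 8 micron detection only if its aperture-corrected 2 px
    and 3 px magnitudes agree within tol mag (nebulosity check)."""
    return abs(mag_2px - mag_3px) <= tol
```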
We deem this to be acceptable because our goal is to produce a minimally contaminated catalog of YSOs, not a complete catalog. \begin{figure*} \plotone{mosaic_i1_lr} \caption{ Mosaic of the NANeb region from IRAC channel 1 (3.6\,$\mu$m). Orientation and map center are the same as in Figure~\ref{fig:mosaic_optical}; overlays from Figure~\ref{fig:mosaic_optical} are included for reference. } \label{fig:mosaic_I1} \end{figure*} \begin{figure*}\centering \plotone{mosaic_2+3+4_lr} \caption{ A 3-color view of the IRAC NANeb with 4.5~$\mu$m in blue, 5.8~$\mu$m in green, and 8~$\mu$m in red. Orientation and map center are the same as Figure~\ref{fig:mosaic_optical}; overlays from Figure~\ref{fig:mosaic_optical} are included for reference.} \label{fig:mosaic_3colors} \end{figure*} \subsection{Ancillary Data \& Bandmerging} \label{sec:bandmerging} Table \ref{tab:stat} provides statistics for our entire catalog of objects; an object is included in our master catalog even if it is only detected in a single band. In order to create our final multi-wavelength catalog, which we will use to identify new YSOs, we first merged the four individual IRAC source lists together, starting with IRAC-1 and taking the closest source within 1$\arcsec$ as the best match. The radius of 1$\arcsec$ was chosen based on our astrometric precision of 0.3$\arcsec$ (see above) and the density of sources in the 3.6\,$\mu$m image ({\it i.e.} the most crowded image). In order to provide photometry at other bands, we cross-matched our catalog to 2MASS, taking the closest source within 1$\arcsec$, and then to our optical catalog, again taking the closest source within 1$\arcsec$. Out of the 63\,084 objects detected at all 4 IRAC channels, 93\% have a 2MASS counterpart. For the entire region, 10\% of the final catalog stars have an optical counterpart; out of just the region covered by the optical photometry data, 28\% have an optical counterpart. The MIPS catalog will be discussed in detail in R09.
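The bandmerging by closest positional match can be sketched as a nearest-neighbor search. This is a brute-force illustration with hypothetical names; a production pipeline would use a spatial index such as a k-d tree.

```python
import math

def match_catalogs(cat_a, cat_b, radius_arcsec=1.0):
    """For each (ra, dec) in cat_a (degrees), return the index of the
    closest cat_b source within radius_arcsec, or None if there is none.
    Small-angle approximation with a cos(dec) correction on RA offsets."""
    r_deg = radius_arcsec / 3600.0
    matches = []
    for ra_a, dec_a in cat_a:
        best, best_d2 = None, r_deg ** 2
        cosd = math.cos(math.radians(dec_a))
        for j, (ra_b, dec_b) in enumerate(cat_b):
            d2 = ((ra_a - ra_b) * cosd) ** 2 + (dec_a - dec_b) ** 2
            if d2 <= best_d2:
                best, best_d2 = j, d2
        matches.append(best)
    return matches
```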
In summary, we have covered approximately the same area as the IRAC map, and we performed the source extraction using APEX. $BVI$ images were obtained with the KPNO~0.9\,m telescope on four photometric nights in June 1997. An area of approximately 2.4$\times$1.7\,deg$^2$ was covered by mosaicking the $23.2^\prime \times 23.2^\prime$ FOV CCD in a grid with overlap of typically $3^\prime$ along each border. Exposures of 10 sec and 500 sec enabled unsaturated photometry between $V=11$ and 20 mag. In IRAF, images were bias subtracted and flat fielded with sky flats taken each night. Sources were identified using a 5$\sigma$ detection threshold and the DAOFIND task. The photometry was measured through an aperture of 5 pixels radius, with the background determined as the median in an annulus from 7-14 pixels; the plate scale was 0.68$^{\prime\prime}$/pixel. Aperture corrections were applied. Absolute photometric calibration was achieved through observation of Landolt standards and further self-calibrated across the overlap regions using stars in common between frames. First the short frames were calibrated to the long frames, and then the long frames were tied across the distinct telescope pointings, ignoring individual photometric errors larger than 0.1 mag. The average of the 43 spatial frame-to-frame offsets was 0.024 mag with dispersion 0.041 mag, with a few frames requiring offsets as large as 0.1 mag. The individual offsets themselves have standard deviation typically 0.02-0.03 mag and standard deviation in the mean $<$0.01 mag, which we take as the self-calibration error. Photometry was adopted from the long frames except when in the nonlinear or saturated regimes of the CCD response, in which case the short frame photometry was adopted; these criteria were applied at $B<$15.5, $V<$14.5, $I<$13.5. Astrometry was obtained using the HST Guide Star Catalog and the TASTROM task and is estimated accurate to $<$0.3$\arcsec$ ($1\sigma$) based on stars in the overlap regions.
Additional $RI$ frames were obtained on a fifth night over a more limited area, $2\times 1.5\,$deg$^2$. These were reduced in a similar way, requiring a 5$\sigma$ detection threshold. \begin{deluxetable*}{lrccccccc} \tablecaption{Numbers of IRAC sources extracted from the NANeb data. \label{tab:stat}} \tabletypesize{\scriptsize} \tablehead{% \colhead{Band} & \colhead{Number of sources}& \multicolumn{7}{c}{Fractional number of sources which match (in \%)\tablenotemark{a}}\\ & & \colhead{$[3.6]$} & \colhead{$[4.5]$} & \colhead{$[5.8]$} & \colhead{$[8.0]$} & \colhead{$[3.6+4.5]$} & \colhead{$[3.6+4.5+5.8]$} & \colhead{$[3.6+4.5+5.8+8.0]$}% } \startdata $[3.6]$ & 558053 & \ldots& 75 & 32 & 12 & 75 & 30 & 11\\ $[4.5]$ & 510922 & 82 & \ldots& 33 & 13 & 82 & 33 & 12\\ $[5.8]$ & 200875 & 90 & 83 & \ldots& 32 & 83 & 83 & 31\\ $[8.0]$ & 77898 & 84 & 83 & 82 & \ldots& 83 & 81 & 81\\ $JHK_s$ & 220799 & 90 & 86 & 65 & 27 & 86 & 64 & 26\\ $BVI$ & 16595 & 85 & 83 & 72 & 40 & 83 & 71 & 39\\ $[24]$ & 4232 & 51 & 51 & 48 & 43 & 51 & 48 & 43\\ \enddata \tablenotetext{a}{As an example of how to interpret this table, the ``75'' in cell ([3.6], [4.5]) means that 75\% of the stars detected at 3.6\,$\mu$m are also detected at 4.5\,$\mu$m.} \end{deluxetable*} \section{Identification and Characterization of our YSO Sample} \label{sec:ysos} \subsection{Selection of YSO candidates} Several studies in the literature, usually focusing on subregions of this complex, have identified a total of $\sim$170 YSOs in the area covered by our IRAC map, using a variety of techniques such as NIR excess or H$\alpha$ emission. Most of these objects are earlier than K7. A complete list of the new Spitzer data for these previously-known YSOs will appear in R09. Now, with our new, comprehensive multi-wavelength view of the complex, we can begin to create a census of the members of this complex having infrared excesses.
However, doing so is difficult, as it requires an extensive weeding-out of galactic and extragalactic contaminants. To begin this task, we have opted to create a minimally contaminated sample of Spitzer-selected YSO candidates, as opposed to identifying every possible YSO candidate. By a minimally contaminated sample, we simply mean a sample which includes as few non-YSOs as possible ({\it i.e.} from which AGB stars, AGN, and any other objects whose IR colors mimic YSOs have been eliminated). We identify only stars with excesses at IRAC wavelengths as YSOs -- hence any member without excess ({\it i.e.} Class~III YSOs) or stars with excess only at wavelengths greater than 8\,$\mu$m (inner cleared regions or transitional disks) will not be included. We now discuss the selection criteria we have used to build our reliable YSO sample; the method is described below and illustrated in Figure \ref{fig:iraccolorcolor}. We initially require detection in all four IRAC bands, which substantially limits the catalog (see Table~\ref{tab:stat}), and then we follow the IRAC four-band source characterization described in \citet[][section~4.1]{Gutermuth-2008a}. Using their extragalactic contamination criteria, we have first rejected 272 sources as having colors consistent with galaxies dominated by PAH emission. This selection has been made in the $[4.5]-[5.8] / [5.8]-[8]$ and $[3.6]-[5.8] / [4.5]-[5.8]$ planes, where we reject objects falling in the gray zones plotted in panels (a) and (b) of Figure \ref{fig:iraccolorcolor}; this selection is defined by two sets of conditions: \begin{equation} \begin{array}{c} \left\{ \begin{array}{c} [4.5] - [5.8] < \frac{1.05}{1.2}([5.8]-[8.0]-1) \\ \& \\ \left[4.5\right]-[5.8] < 1.05 \\ \& \\ \left[5.8\right]-[8.0] > 1 \end{array}\right.\\\\ \ \ \mathrm{OR} \ \ \\\\ \left\{ \begin{array}{c} [3.6]-[5.8]<\frac{1.5}{2} ([4.5]-[8.0]-1) \\ \& \\ \left[3.6\right]-[5.8] < 1.5 \\ \& \\ \left[4.5\right]-[8.0] > 1 \end{array}\right.
\end{array} \end{equation} We then rejected 823 more sources having colors consistent with AGN in the [4.5] vs.\ [4.5]$-$[8.0] plane; the region of color space used to select AGN-like sources is plotted in panel (c) of Figure \ref{fig:iraccolorcolor} and is defined by the following equations: \begin{equation} \begin{array}{c} \left\{ \begin{array}{c} [4.5]-[8.0] > 0.5 \\ \& \\ \left[4.5\right] > 13.5 + ([4.5]-[8.0]-2.3)/0.4 \\ \& \\ \left[4.5\right] > 13.5 \end{array}\right.\\ \\ \ \ \&\ \ \\ \\ \left\{ \begin{array}{c} [4.5] > 14 + ([4.5] -[8.0] -0.5)\\ \mathrm{OR} \\ \left[4.5\right] > 14.5 - ([4.5] - [8.0] - 1.2)/0.3 \\ \mathrm{OR} \\ \left[4.5\right] > 14.5 \end{array}\right. \end{array} \end{equation} (see Appendix~A of \citealt{Gutermuth-2008a} for more details). Note that the AGN-like source selection has been made in the observed [4.5] vs.\ [4.5]$-$[8.0] plane, whereas \cite{Gutermuth-2008a} use the dereddened plane. \cite{Gutermuth-2008a} are able to work easily in the dereddened plane because they have a high spatial resolution $A_v$ map. Because the NAN complex is much further away than NGC\,1333, we do not have the ability to construct such a high-resolution $A_V$ map. We tested the implications of this limitation by using the \cite{Cambresy-2002} extinction map to deredden individual objects. Just 25 sources would be dropped by performing this selection in the dereddened plane; they are fairly uniformly spread over the entire mapped region (and hence are likely contaminants rather than YSOs). Just 3 of these are in the Gulf of Mexico region (see R09) where we do not expect very much background contamination due to the very high reddening. Satisfied that this decision does not significantly affect our final sample of YSOs, we have chosen to keep our source selection in the observed plane. 
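The PAH-galaxy and AGN color cuts above translate directly into boolean tests on the observed magnitudes. A sketch (the function names are ours; the coefficients are those of the two equations above):

```python
# Direct transcription of the two contaminant-rejection equations above.
# Function names are our own; magnitudes are observed (not dereddened).

def is_pah_galaxy(m36, m45, m58, m80):
    """Colors consistent with PAH-emission-dominated galaxies."""
    c1 = ((m45 - m58 < (1.05 / 1.2) * (m58 - m80 - 1)) and
          (m45 - m58 < 1.05) and (m58 - m80 > 1))
    c2 = ((m36 - m58 < (1.5 / 2) * (m45 - m80 - 1)) and
          (m36 - m58 < 1.5) and (m45 - m80 > 1))
    return c1 or c2

def is_agn(m45, m80):
    """AGN-like region of the [4.5] vs. [4.5]-[8.0] plane."""
    col = m45 - m80
    c1 = (col > 0.5 and
          m45 > 13.5 + (col - 2.3) / 0.4 and
          m45 > 13.5)
    c2 = (m45 > 14 + (col - 0.5) or
          m45 > 14.5 - (col - 1.2) / 0.3 or
          m45 > 14.5)
    return c1 and c2
```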
Once these background contaminants are removed from consideration, we are left with a population of 61\,989 IRAC sources, dominated by objects with the apparent colors of stellar photospheres. As in \cite{Gutermuth-2008a}, we have further selected YSO candidates in the $[4.5]-[8]$ {\it vs.} $[3.6]-[5.8]$ color-color diagram as sources meeting the following criteria: \begin{equation} \left\{\begin{array}{c} \left[4.5\right]-[8]>0.5\\ \&\\ \left[3.6\right]-[5.8]>0.35\\ \&\\ \left[3.6\right]-[5.8]\le \frac{0.14}{0.04} \times \left([4.5]-[8]- 0.5\right)+0.5 \end{array}\right. \end{equation} This selection is indicated in Figure~\ref{fig:iraccolorcolor}, panel {\it b}. This leaves 1657\ candidates. Continuing to follow \cite{Gutermuth-2008a}, we used their selection criteria to identify objects from the 1657\ candidates whose IRAC fluxes may be contaminated by emission lines from shocks, given their $[3.6]-[4.5]$ and $[4.5]-[5.8]$ colors, by the equations: \begin{equation} \left\{ \begin{array}{c} [3.6] - [4.5] > \frac{1.2}{0.55} ([4.5]-[5.8] -0.3)+0.8\\ \&\\ \left[4.5\right]-[5.8] \leq 0.85\\ \&\\ \left[3.6\right]-[4.5] > 1.05 \end{array}\right. \label{eq:shoked} \end{equation} We found only 2 YSO candidates that matched these criteria (205608.3+433654.2 and 205702.1+433431.3). Their SEDs are compatible with their being real YSOs whose aperture photometry is affected by emission from a compact circumstellar nebula, causing an excess at 4.5\,$\mu$m relative to the two adjoining IRAC bands. Moreover, these 2 objects are located inside the ``Gulf of Mexico'', where the concentration of YSOs is the highest in the cloud, and where previous optical emission-line surveys have found numerous HH objects. Given these considerations, we retain these objects in our YSO candidate list. Of the 1657\ YSO candidates, 972 (59\%) have a 2MASS counterpart.
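The YSO selection and shocked-emission cuts can likewise be written as boolean tests (hypothetical function names; the coefficients are from the two equations above):

```python
# Direct transcription of the YSO-selection and shocked-emission
# equations above; function names are our own.

def is_yso_candidate(m36, m45, m58, m80):
    """IRAC color cuts applied after contaminant rejection."""
    return (m45 - m80 > 0.5 and
            m36 - m58 > 0.35 and
            m36 - m58 <= (0.14 / 0.04) * (m45 - m80 - 0.5) + 0.5)

def shock_emission_flag(m36, m45, m58):
    """Possible shocked-line contamination of the 4.5 micron flux."""
    return (m36 - m45 > (1.2 / 0.55) * (m45 - m58 - 0.3) + 0.8 and
            m45 - m58 <= 0.85 and
            m36 - m45 > 1.05)
```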
Out of the entire region, 131 (7.9\%) have an optical counterpart; out of the region covered by the optical data, 10.4\% have an optical counterpart. All but 63 of the YSO candidates are inside the region covered by MIPS data, and 45\% of the candidates have a MIPS 24\,$\mu$m counterpart. We note that the percentage of 2MASS and optical YSO counterparts is less than for the whole IRAC 4-band catalog (92\% and 28\% respectively; see section~\ref{sec:bandmerging}); this is because YSOs are mostly located inside highly reddened regions and are not detected at our shorter wavelengths. With this selection of $\sim$1600 YSOs, we have increased the number of likely YSOs in the NANeb region by an order of magnitude. We compare the positions of both the YSO candidates and the background contaminants in a 2MASS/IRAC color-color diagram in panel {\it d} of Figure \ref{fig:iraccolorcolor}. A large fraction (90\%) of the sources flagged as galaxies or AGN fall in the region occupied by reddened main sequence stars, and hence blueward in $K_s-[4.5]$ color from the YSOs. Just 5\% of the YSO candidates are located in this same region, two-thirds of which are classified as intermediate between Class~II and Class~III (see \S\ref{sec:classes}). It is hard to estimate the contamination fraction of our YSO candidates without additional data (e.g. spectroscopic or X-ray confirmation). However, we can make a few worst-case estimates. Because our survey area is fairly large, it includes some sections which are outside the main star-forming region and have relatively few candidate YSOs. The north-east corner of our survey region (see Figure \ref{fig:mosaic_3colors}), for example, offers one such relatively YSO-poor area. Specifically, we define a ``field'' region as the area above a line from the mid-point of the East edge of Figure \ref{fig:mosaic_3colors} to the mid-point of the North edge of the figure.
If all the candidate YSOs in this region are contaminants, we derive a contaminant surface density of 32 objects per square degree. Assuming the AGN and other contaminants are uniformly distributed over our survey region, this yields an upper limit to the fraction of contaminants in our survey of 17\%. \begin{figure*}\centering \plottwo{i2i3i3i4_lr}{i1i3i2i4_lr} \plottwo{i2i2i4_lr}{jhki2_lr} \caption{IRAC color-color and color-magnitude diagrams used to reject background contaminants (panels {\it a}, {\it b} and {\it c}) and to select YSO candidates (panel {\it b}). An additional diagram shows the distribution of background contaminants and YSO candidates in a 2MASS-IRAC color-color diagram, limited to targets with 2MASS counterparts (panel {\it d}). In all diagrams, blue triangles are sources flagged as PAH-emission sources (galaxies), located in the darkest gray area of panel {\it a} or {\it b}. Red squares are sources flagged as AGN, selected in the gray area of panel {\it c}. Finally, YSO candidates are plotted with black cross symbols; they have been selected in the brightest gray area of panel {\it b} if not previously flagged as background contaminants. The reddening vectors correspond to an $A_K$ of 10 in panels {\it a}, {\it b} and {\it c}, and an $A_K$ of 2 in panel {\it d}. We averaged the extinction law given in \cite{Flaherty-2007} for Serpens, Orion and Ophiuchus.} \label{fig:iraccolorcolor} \end{figure*} \subsection{NANeb age estimation} We have used optical photometry to constrain the age of the NANeb complex. Figure \ref{fig:vvi} shows the location of YSOs with optical and 2MASS photometry in an optical color-magnitude diagram. Also shown are \cite{Siess-2000} isochrones, where the color-$T_{\rm eff}$ conversion has been tuned so that the 100\,Myr isochrone follows the single-star sequence in the Pleiades \citep{Stauffer-1996,Jeffries-2007}. The number of YSOs in this diagram is limited by the fact that it requires optical counterparts.
This means that most of the sources in this diagram are located inside the less-extincted regions of the NANeb, on the edge of the Pelican Nebula (80\% are located inside regions of $A_v<5$, according to the \citealp{Cambresy-2002} extinction map). The embedded Class~I sources are not well-detected in the optical data, and our YSO selection criteria do not allow us to detect Class~III sources, so this sample is mostly limited to Class~II sources. \begin{figure} \centering \plotone{vvi} \caption{$V/V-I$ color-magnitude diagram for our YSO candidates. The solid lines are, from right to left, tuned Siess models (see text) for 1, 3 and 10\,Myr, scaled to the NANeb distance. A reddening vector of $A_v$=5 is plotted.} \label{fig:vvi} \end{figure} Our age estimates are uncertain for all of the normally expected reasons (e.g. uncertainties in the isochrones and their transfer to the observational plane; the need for and imprecision of the reddening corrections; the effect of spots and UV excesses on the estimated colors and effective temperatures of our target stars; binarity). However, it is apparent in Figure \ref{fig:vvi} that most of our YSO candidates (with optical counterparts) are younger than 10~Myr according to the isochrone tracks. The median age is slightly older than 3\,Myr, but the most embedded and probably youngest regions of the NANeb are excluded from this optical CMD. The median age and the age dispersion are comparable to what was found in NGC~2264 by \cite{Rebull-2002}. Since the reddening vector is essentially parallel to the isochrones, and YSOs with optical counterparts have low reddening, the median age will be minimally affected by reddening. \subsection{Classes of YSO Candidates} \label{sec:classes} \begin{figure}\centering \plotone{distrib_alpha} \caption{ Normalized histograms of the distribution of spectral slope, $\alpha$. The continuous line is the histogram for the whole sample of YSO candidates.
All other IRAC sources from the NANeb are also indicated (in gray), as are the 330 YSO candidates we have found from the \cite{Harvey-2006} highly reliable catalog (dashed line).} \label{fig:alphahisto} \end{figure} We classified this set of highly likely NANeb YSOs according to the \citet{Lada-1987} Class I/II/III system, updated by \cite{Andre-1994}, in which the ``flat'' class was added between Class~I and Class~II. Classes are assigned based on the spectral index, defined by $$ \alpha = \frac{d\ \log{ \lambda F(\lambda)}}{d\,\log{\lambda}} $$ where $F(\lambda)$ is the flux at wavelength $\lambda$. For Class~I, $\alpha \ge 0.3$; for flat sources, $-0.3\le \alpha < 0.3$; for Class~II, $-1.6\le \alpha < -0.3$; and for Class~III, $\alpha<-1.6$. Note that because we are making a Spitzer-based selection of YSO candidates, by definition {\em none} of our candidates will have a photospheric slope, so our Class~III inventory is guaranteed to be incomplete. Further observations at other wavelengths ({\it e.g.} with Chandra) are required to find such objects. In the following, to avoid confusion, we call our YSO candidates with $\alpha<-1.6$ Class~II-III. We fit $\alpha$ over the 4 IRAC channel points from 3.6 to 8\,$\mu$m. Of the total 1657\ YSO candidates, we found that 1059 (64\%) are Class~II, 184 (11\%) are flat and 198 (12\%) are Class~I. We classified the remaining 216 (13\%) YSOs as Class~II-III\ because they have $\alpha$ between stellar photospheres ($\alpha = -2.7$) and Class~II ($\alpha = -1.6$). A histogram of $\alpha$ for our YSO candidate sample is plotted in Figure~\ref{fig:alphahisto}. This figure also contains, for comparison, a histogram of the values of $\alpha$ for our entire NANeb catalog (mostly field stars), and that for the YSO candidates derived from the highly reliable Serpens catalog obtained by \cite{Harvey-2006,Harvey-2007a,Harvey-2007b} (see discussion below).
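The spectral-index classification can be sketched as a least-squares fit of $\log(\lambda F_\lambda)$ against $\log\lambda$ over the four IRAC points, followed by the class boundaries given above. This is a minimal illustration with hypothetical names; it assumes the fluxes are $F_\lambda$ values in consistent units.

```python
import math

IRAC_WAVELENGTHS_UM = [3.6, 4.5, 5.8, 8.0]

def spectral_slope(fluxes):
    """Least-squares slope of log10(lambda * F_lambda) vs log10(lambda)
    over the four IRAC bands; `fluxes` are F_lambda values."""
    xs = [math.log10(w) for w in IRAC_WAVELENGTHS_UM]
    ys = [math.log10(w * f) for w, f in zip(IRAC_WAVELENGTHS_UM, fluxes)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

def yso_class(alpha):
    """Class boundaries quoted in the text."""
    if alpha >= 0.3:
        return "I"
    if alpha >= -0.3:
        return "Flat"
    if alpha >= -1.6:
        return "II"
    return "II-III"
```

For a flat $F_\lambda$ spectrum, $\lambda F_\lambda \propto \lambda$, so the fitted slope is exactly 1 and the source is Class~I by these boundaries.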
Plots similar to those appearing here (or in R09) can be found in most of the c2d series of papers on Serpens, Perseus, Ophiuchus, Chamaeleon II, and Lupus. We picked Serpens for our primary comparison here because it is thought to be similar in age to the NANeb, it contains a deeply embedded cluster similar to the Gulf of Mexico, it is in the Galactic plane ($l$=32, $b$=+5), and \cite{Harvey-2006,Harvey-2007a,Harvey-2007b} performed a Spitzer-specific source selection. However, there are some important differences. The Spitzer maps of Serpens cover only 0.85\,deg$^2$, nearly 6 times smaller than our map. The selection method for finding YSO candidates is quite different in \cite{Harvey-2006,Harvey-2007a,Harvey-2007b} -- for example, it requires a MIPS-24 detection -- and the differences are most strongly apparent in the selection of Class~III objects. In order to fairly compare our data with the Serpens data, we used the highly reliable Serpens catalog (of all objects, not just their YSO candidates) provided by the c2d team as part of their final Legacy delivery (available on the SSC website) and applied our selection method to those data. From the sample of Serpens YSOs which pass our selection criteria, we calculated $\alpha$ in the same way that we did for the NANeb, from 3.6 to 8\,$\mu$m. It is these values which appear in Figure~\ref{fig:alphahisto}. Our selection and classification scheme yields 52 Class~I, 24 Flat disk, 189 Class~II and 65 Class~II-III\ Serpens YSOs. The number of Class~II-III\ objects is hard to compare to that found in \cite{Harvey-2006}, since they use the 24\,$\mu$m data in their classification scheme; but the numbers of Class~I, Flat and Class~II sources are roughly comparable: 30, 33, and 163, respectively, in \cite{Harvey-2006}.
As a means to estimate a comparative evolutionary age for the NANeb stars, one can determine the ratio of the number of Class~II (``older'') to Class~I or Flat sources (``younger''), and compare that ratio to those derived for other clusters, such as Serpens. For our entire NANeb YSO sample, we derive a ratio N(Class~II)/N(Class~I+Flat) = 1059/382 = $2.78\pm0.17$. Using our similarly analyzed data for Serpens, we derive this ratio as $2.49\pm0.33$ (roughly comparable to the $2.6\pm0.4$ derived by Harvey et al. 2006, who used 2MASS through 24\,$\mu$m data to compute $\alpha$). The ratios for NANeb and Serpens are comparable, suggesting that the mean ages of the two YSO samples are indeed similar. \cite{Harvey-2006,Harvey-2007a,Harvey-2007b} used a series of Spitzer-based selection criteria to select a sample of YSOs; their final selection in \cite{Harvey-2007b} primarily uses color-color and color-magnitude diagrams to assign a likelihood that a given object is extragalactic. The process starts with requiring detection in all 4 IRAC bands as well as MIPS-24. Only about half of our minimally contaminated YSO sample has a MIPS-24 detection, so the Harvey et~al.\ selection process by its very nature could only retrieve about half of our sample. However, $\sim$90\% of our highly reliable sample with MIPS-24 detections is also retrieved by a Harvey et~al.-style source selection. R09 contains a special discussion of the Gulf of Mexico, which is an interesting area containing hundreds of deeply embedded young stars, mostly associated with 3 subclusters. We note here that our requirement for having all four bands of IRAC for our source selection omits many of the objects in this region, and this requirement will be relaxed in R09 to find cluster members.
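The class ratios and their uncertainties quoted above follow from simple counting statistics. A sketch, assuming independent Poisson counts in each class (the helper name is ours):

```python
import math

def class_ratio(n_class2, n_class1_flat):
    """N(Class II)/N(Class I+Flat) with Poisson error propagation:
    sigma_R = R * sqrt(1/N_II + 1/N_I+Flat)."""
    r = n_class2 / n_class1_flat
    err = r * math.sqrt(1.0 / n_class2 + 1.0 / n_class1_flat)
    return r, err
```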
However, using our selection criteria and our best possible YSO sample (as defined above) results in a ratio of N(Class~II)/N(Class~I+Flat) that is statistically significantly different inside the Gulf cluster than outside of it. The ratio, with its Poissonian error, is 901/261=3.45\,$\pm$0.24 outside the Gulf of Mexico and 158/121=1.31\,$\pm$0.16 inside this region. Moreover, we find comparatively very few Class~II-III\ objects in the Gulf of Mexico region. Both of these findings indicate that the Gulf of Mexico cluster is, in evolutionary terms, the youngest region within the NAN complex. We now explore how the NANeb reddening can change our YSO classification. Indeed, \cite{Muench-2007} (Appendix~A) demonstrated that for a deeply embedded cluster (A$_v \sim$40), the IRAC SED slope of a typical K6 Class~II member of IC\,348 would be reddened into an apparent flat spectrum. According to the \cite{Cambresy-2002} extinction map, however, 97\% of our YSOs are located in regions where A$_V$ is less than 10. Since the \cite{Cambresy-2002} map yields extinctions up to about 30, we assume that for extinctions $\leq$ 10 the predictions should be reliable (i.e. not ``saturated'' because the extinctions are so high that no background stars are present within the grid point). To test the effect of reddening on our classification, we have dereddened our SEDs prior to the spectral slope calculation, according to the \cite{Cambresy-2002} map. The numbers of Class~I, Flat disk and Class~II sources given by the dereddened SEDs -- 186, 177, 1043 -- are quite similar to those using the observed SEDs -- 198, 184 and 1059 -- making the N(Class~II)/N(Class~I+Flat) ratio equal to 2.87$\pm$0.17 instead of 2.78$\pm$0.17. These small differences do not change our main conclusions about the spatial distribution of Class~I and Class~II sources. There is another means by which we can gauge the effect of extinction on our YSO classification.
\cite{Gutermuth-2008a} distinguished protostars and Class~II sources by their $[4.5]-[5.8]$ color excess. They chose this criterion specifically because it is relatively immune to extinction (the interstellar extinction curve is relatively flat between these two wavelengths). \cite{Gutermuth-2008a} selected protostars as sources matching the following equation: \begin{equation} \left\{ \begin{array}{c} \left[4.5\right]- [5.8] > 1 \\ \mathrm{OR} \\ \left[4.5\right]-[5.8] > 0.7 \& [3.6]-[4.5]>0.7 \end{array}\right. \label{eq:protostars} \end{equation} The \cite{Gutermuth-2008a} classification scheme does not include flat-spectrum sources as a separate class. Figure \ref{fig:protostars} shows the location of Class~I / Flat / Class~II and Class~II-III\ YSOs in the $[4.5]-[8.0] / [3.6]-[4.5]$ plane, superimposed on the protostar area defined above. The locations of our Class~I and Class~II sources agree well with the regions defined in this diagram: 95\% of our Class~I and 99\% of our Class~II sources would be classified as protostars and Class~II, respectively, by the \cite{Gutermuth-2008a} criteria. Our flat-spectrum sources lie on the border between Class~I and Class~II, with roughly equal numbers on either side of the \cite{Gutermuth-2008a} boundary. \begin{figure}\centering \plotone{i2i3i1i2} \caption{Spitzer color-color diagram. Open squares are Class~II-III, plus symbols are Class~II, green triangles are classified as Flat-spectrum sources and circles are Class~I. The green area is the protostar selection area defined by \cite{Gutermuth-2008a}. A reddening vector of $A_K$=10 is plotted.} \label{fig:protostars} \end{figure} \subsection{Spatial Distribution of YSO Candidates} \label{sec:spatial} \begin{figure*} \centering \epsscale{0.5} \plotone{distrib_contam} \plotone{distrib_class} \epsscale{1} \caption{TOP: the spatial distribution of background contaminants (red squares: AGN-like sources; blue triangles: galaxies; see text and Figure \ref{fig:iraccolorcolor}).
BOTTOM: the spatial distribution of YSO candidates, where symbols denote the object Class (circles: Class~I or Flat; $+$: Class~II; squares: Class~II-III). In both panels, we plot contours of YSO density (see text for more details); the contour levels correspond to the values plotted in Figure \ref{fig:evolR}, from 200 to 2000 YSOs/deg$^2$. The contour which surrounds half of the total YSO population ($\sim$1000 YSOs/deg$^2$) is highlighted in green with a thicker contour line. The background gray area indicates the $A_V\geq$5 dark cloud from the \cite{Cambresy-2002} map.} \label{fig:distrib} \end{figure*} Now that we have a minimally contaminated sample of YSO candidates, we can look at their spatial distribution and investigate the degree of clustering and the relative spatial distribution of YSOs as a function of their class. We have computed a YSO density map of our observed region using a kernel method \citep{Silverman-1986}, which yields a smooth isodensity contour map from the projected positions of the objects. At each point $(\alpha, \delta)$, the density is given by the contributions of all $n$ points through the kernel density estimator: $$D(\alpha, \delta) = \frac{1}{h^2}\sum_{i=1}^{n} K(\alpha, \alpha_i, \delta, \delta_i)$$ where $K$ is the kernel. For the kernel, we adopted a Gaussian shape: $$K(\alpha, \alpha_i, \delta, \delta_i) = \frac{1}{2\pi} \exp{ \frac{-\left( (\delta-\delta_i)^2 + (\alpha-\alpha_i)^2 \cos^2\delta \right)}{2h^2}}$$ where $h$ is the smoothing parameter. We adopted $h=0.05\arcdeg$ (0.5 pc), which is approximately the size of the smallest group of YSOs in the NANeb discernible ``by eye''. Figure \ref{fig:distrib} shows contours of the density map we obtained; both background contaminants and YSO candidates are plotted in two different panels. Using these density contours, we find first that the background contaminants (galaxies or AGN) are relatively evenly distributed across the field of view.
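The kernel density estimator above can be transcribed in a few lines (a direct, unoptimized sketch; names are ours, and coordinates are in degrees):

```python
import math

def yso_density(alpha, delta, positions, h=0.05):
    """Gaussian kernel density estimate of the equations above:
    D = (1/h^2) * sum_i (1/2pi) exp(-r_i^2 / 2h^2), with a cos(delta)
    correction on RA offsets. `positions` is a list of (alpha_i, delta_i)
    in degrees; h is the smoothing length in degrees."""
    cos2 = math.cos(math.radians(delta)) ** 2
    total = 0.0
    for a_i, d_i in positions:
        r2 = (delta - d_i) ** 2 + (alpha - a_i) ** 2 * cos2
        total += math.exp(-r2 / (2.0 * h * h)) / (2.0 * math.pi)
    return total / (h * h)
```

Evaluating this estimator on a fine grid and contouring the result yields maps like those in Figure \ref{fig:distrib}; each kernel integrates to unity, so the map integrates to the total number of sources.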
However, we notice a lack of background contaminant sources on the central dark cloud. The lack of contaminants in the high extinction regions can be explained quantitatively by the fact that our survey is magnitude-limited: the extra extinction causes a fraction of the background galaxies to drop out of our sample. We checked this assumption by creating an artificial map of randomly distributed background sources reddened by the extinction map of \cite{Cambresy-2002}. We then applied a magnitude cut to this artificial sample, corresponding to the effective faint limit imposed by our 4-band YSO selection criteria. The resulting spatial distribution of selected galaxies mimics well the observed galaxy distribution in Figure \ref{fig:distrib} (top), confirming our assumption. Our YSO candidates are located primarily on the central dark cloud (Lynds 935), and are for the most part highly clustered. Further, we find that half of our YSO population is located in regions denser than 1000~YSOs/deg$^2$ ($\sim$10$^5$~YSOs/pc$^2$), which cover a relatively small fraction of the sky ($\sim$0.5 deg$^2$ compared to the $\sim$9 deg$^2$ observed). From the distribution of YSOs, we distinguished by eye 8 main clusters (for a discussion of more quantitative means to identify clusters of YSOs from similar catalogs, see for example Jorgensen et al. 2008). We determined the center of each cluster as the local peak of the YSO density. We arbitrarily define the boundaries of these clusters as the YSO density contour at a level of 1/4 of the maximum density peak. The location, size, number of stars and mean $A_v$ for each cluster are provided in Table \ref{tab:clusters}. Five of these clusters have been previously identified by \cite{Cambresy-2002}; they are the most prominent clusters located inside the Gulf of Mexico and in the central part of the NANeb. Figure~\ref{fig:clusters} shows the location of both our clusters and the clusters defined by Cambr\'esy et al.
One can see in this figure that cluster \#4 of \cite{Cambresy-2002} does not exist in our YSO candidate distribution. Clusters \#7 and \#8 of Cambr\'esy appear in our YSO distribution, but are similar to many other small clusters in the NANeb; they are not as densely populated as the ones we have defined. Because the \cite{Cambresy-2002} technique is sensitive to both WTTs and CTTs, whereas ours essentially excludes WTTs, it is reasonable that our two methods will yield somewhat different results. The clusters found by \cite{Cambresy-2002} but not by us may well be the evolutionarily older clusters in the region. \begin{figure}\centering \plotone{distrib_cluster} \caption{Location of clusters shown on the distribution of the $\sim$1600 YSO candidates (plus symbols). Solid red lines are contours of clusters we adopted (see text for more details). Our eight clusters are labelled with large, bold numbers and no brackets. The locations of clusters found by \cite{Cambresy-2002} are also indicated by smaller bracketed labels.} \label{fig:clusters} \end{figure} \begin{deluxetable}{ccccccc} \tablecaption{NANeb clusters \label{tab:clusters}} \tablehead{id& RA & Dec & Area & YSOs & $<A_v>$ & idc \\ & J(2000) & J(2000) & deg$^2$& Count & mag & } \startdata 1 &20 56 17.4 & +43 38 18.8 & 0.015 & 48 & 18.12 & 1 \\ 2 &20 57 07.1 & +43 48 21.8 & 0.025 & 120 & 16.47 & 2 \\ 3 &20 58 19.0 & +43 53 32.5 & 0.023 & 74 & 4.93 & 3b \\ 4 &20 55 40.5 & +44 06 20.0 & 0.025 & 39 & 5.68 & \dots \\ 5 &20 50 54.9 & +44 24 18.1 & 0.067 & 121 & 5.54 & 5 \\ 6 &20 53 35.2 & +44 27 57.4 & 0.075 & 130 & 6.27 & 6 \\ 7 &20 59 14.2 & +44 16 41.3 & 0.022 & 22 & 1.48 & \dots \\ 8 &20 47 45.1 & +43 43 29.4 & 0.017 & 28 & \dots & \dots \enddata \tablenotetext{}{The id, center coordinates and area of the identified clusters are given in the first 4 columns, followed by the number of YSO members and the average reddening from the extinction map of \cite{Cambresy-2002}.
The last column gives the cluster label from \cite{Cambresy-2002}, where appropriate. } \end{deluxetable} \begin{figure} \centering \epsscale{0.8} \plotone{histo_kern} \caption{Ratios of object classes as a function of YSO density. The $x$-axis corresponds to the YSO density contours plotted in Figure \ref{fig:distrib}, from high density to low density (corresponding to the smaller and larger areas enclosed by each contour in Figure \ref{fig:distrib}). Inside each contour, we calculated several fractional numbers. TOP: the fraction of YSO candidates (filled dots linked by a solid line) and background contaminants (filled triangles linked by a dot-dashed line); the red dashed line shows the surface area enclosed by each contour as a fraction of the total covered surface ($\sim$9 deg$^2$). MIDDLE: same, but for the fraction of (Class~I+Flat) / Class~II. BOTTOM: same, but for the fraction of Class~II / Class~II-III. Note that for the last value, $X$=0 corresponds physically to the boundary of our IRAC coverage. } \label{fig:evolR} \epsscale{1} \end{figure} In the bottom panel of Figure \ref{fig:distrib}, we plot the location of YSOs with different symbols corresponding to their class. To investigate in more detail a possible spatial segregation between Class~I(+flat), II, and II-III, we used the YSO density contours to define regions within which we have computed the ratios N(Class~I + Flat) / N(Class~II) and N(Class~II) / N(Class~II-III). The behavior of these ratios as a function of the density contour limit appears in Figure \ref{fig:evolR}. This figure shows that in dense regions (i.e., more clustered regions), the proportion of Class\,II objects compared to the (presumably) younger Class~I's is statistically lower than in the entire observed region by a factor of $\sim$1.3.
This is consistent with expectations: either due to high velocity stars moving away from their birth sites, or as a result of the overall expansion of young clusters as they age due to, for example, removal of their remaining molecular cloud material by stellar winds. Moreover, the ratio N(Class~II)/N(Class~II-III) decreases from denser to less dense regions, as expected for the same reasons, by a factor of $\sim$2. Figure \ref{fig:evolR} also shows the fractional number of background contaminants as compared to the fractional surface area inside each of the YSO density contours. The proportional number of background contaminants follows very closely the proportional surface area. This suggests that sources flagged as contaminants are really background objects and not YSOs; otherwise one would expect an excess of sources flagged as background inside dense regions. Another way to characterize the degree of clustering, used by many authors, is the distribution of nearest neighbors. For each of our YSO candidates, we calculated the distance to the 4th nearest YSO candidate; Figure~\ref{fig:dist_tennearest} shows the histogram of this distance, displayed for Class~I+Flat, Class~II and Class~II-III. We find a statistically significant difference between the 3 histograms: using a Kolmogorov-Smirnov test, the probability that each of these distributions is drawn from the same parent distribution as either of the other two distributions is less than 0.02\%. This is consistent with what we found above: the Class~I population, usually assumed to be younger, is significantly more clustered than Class~II-III. \begin{figure}\centering \plotone{distrib_nearest_log} \caption {Histogram of the log of the distance (in degrees) to the 4th nearest YSO for each of our YSO candidates. The red dashed line is for Class~I+Flat spectrum sources, the solid black line for Class~II stars and the blue dash-dot histogram for Class~II-III.
The gray histogram represents the remaining population of non-cluster members. The Class~I population, usually assumed to be younger, is significantly more clustered than Class~II-III; this is consistent with expectations that, as stars dynamically evolve, they move away from their birthplace.} \label{fig:dist_tennearest} \end{figure} \section{HH Objects} \label{sec:hhobj} \cite{Ogura-2002} identified some HH objects using H$\alpha$ over a small region of the complex near bright-rimmed cloud (BRC) 31. \cite{Bally-2003} studied the entire Pelican Nebula using H$\alpha$, [\ion{N}{2}], and [\ion{S}{2}], finding several new HH objects. The 3.5 to 8.5\,$\mu$m wavelength range covered by the four IRAC channels is well suited to the study of deeply embedded young stellar outflows, since this range includes some of the brightest collisionally excited pure rotational H$_2$ emission lines \citep{Wright-1996,Noriega-Crespo-2004a,Noriega-Crespo-2004b} as well as other H$_2$ vibrational lines \citep{Smith-2005}. At the longer wavelengths, the MIPS channels sample a rich combination of atomic/ionic and molecular lines that include some of the best atomic species for studying outflows, e.g. [Fe~II] 25.98\,$\mu$m, [O~I] 63.18\,$\mu$m and [C~II] 157.74\,$\mu$m (see e.g. \citealt{Molinari-2000,Morris-2004}), which fall within the 24, 70 and 160\,$\mu$m bandpasses, respectively, plus, of course, continuum emission from cold dust. Below we discuss our observations of one previously discovered HH object, HH~555. HH~555 itself is not an embedded flow (unlike, e.g., HH~211), since it is clearly detected at optical wavelengths (H$\alpha$ and [S~II] 6717/31\,\AA) and its spectrum is consistent with that of shock-excited gas moving supersonically \citep{Bally-2003}.
HH~555 belongs to the special class of irradiated jets, like HH~399 in the Trifid nebula \citep{Yusef-Zadeh-2005}, that are found in active star forming regions and are surrounded by a bath of ionizing UV photons from recently formed massive stars. The case of HH~555 is particularly interesting because the jet and counter-jet are bent westward as soon as they arise from their embedded source at the dense tip of the pillar, indicating the presence of a stellar ``sidewind'' \citep{Masciadri-2001}. The pillar itself is being photoevaporated, leading to the formation of a two-shock structure (see \citealt{Kajdic-2007} for details). The length of time that the jet will maintain its bipolar morphology is expected to depend on the strength of the impinging ionizing flux. The two-shock structure developed by the interacting winds actually creates a shield that protects both the jet and the pillar, and therefore one would expect lower ionization on the West side of the outflow. Indeed, the Spitzer images combined with the ground-based H$\alpha$ emission (Figure \ref{fig:ha+8+24}) confirm this stratification, if one assumes that the 8\,$\mu$m and 24\,$\mu$m~emission arises mostly from H$_2$ and [S~II], as is the case in other jets. This also explains why the 24\,$\mu$m source lies in the middle of the 8\,$\mu$m~and 24\,$\mu$m~jets, but is slightly offset with respect to H$\alpha$ (see e.g. \citealt{Kajdic-2007}, Fig.~3). Our 70\,$\mu$m images do not show a clear signature of the jet; this could be due to a combination of factors: lack of sensitivity, artifacts along the scanning direction, and physics (neither [O~I] 63.18\,$\mu$m emission nor cold dust is present). Careful analysis of the data does show the presence of the source at 70\,$\mu$m~at the tip of the pillar. The pillar itself, although faint, is discernible, and its emission is likely due to [O~I] emission from photodissociated gas.
We measured the flux at 24 and 70\,$\mu$m using small apertures of 7\arcsec\ and 16\arcsec, respectively, centered at 20h51m19.52s +44d25m38.3s. We found 10$\pm$3~mJy at 24\,$\mu$m and 830$\pm$220~mJy at 70\,$\mu$m. \begin{figure*} \plotone{hh555_ha_8um_24um_z1_2_lr} \caption{Three color image of HH~555 arising from the tip of a dense pillar, using H$\alpha$ (blue), IRAC 8\,$\mu$m (green) and MIPS 24\,$\mu$m (red). The FOV is $\sim 6\arcmin \times 4\arcmin$; North is up and East is left. The mid/far-infrared bipolar jet ($\sim$1\arcmin~in length) is centered on an embedded 24\,$\mu$m~source. The stratification in ionization along the jet from East to West (the gradient in color) can be explained by the shield created by the interaction of the two winds (stellar and photoevaporative) \citep{Kajdic-2007}.} \label{fig:ha+8+24} \end{figure*} \section{Conclusion} We have combined mid-IR photometry from Spitzer/IRAC with $JHK_s$ data from 2MASS to provide a sensitive photometric census of objects towards the North America and Pelican Nebulae. We have used IRAC color and magnitude diagnostics to identify more than 1600 sources with infrared excess characteristic of young stars surrounded by a disk. Our YSO candidates are located primarily on the central dark cloud (Lynds 935). We identified 8 main clusters, which together contain about a third of all the YSOs in the region; 5 of these clusters were previously recognized by \cite{Cambresy-2002}. The sources we identify as background galaxies or AGN are relatively uniformly spread across the observed region. These two properties suggest a low level of contamination in our YSO sample. We have assigned our YSOs classifications in the Class\,I/Flat/II/III system according to their infrared SED slope.
The proportion of Class~I+Flat compared to Class~II varies within the NANeb, with the dark clouds called ``Gulf of Mexico'' having a remarkably higher proportion of Class~I's, suggesting these clusters are younger than the rest of the NANeb. The ``Gulf of Mexico'' region will be discussed in more detail in R09, where the MIPS data for the NANeb will be presented. We compared optical photometry of a small sample of the YSO candidates to evolutionary models in order to provide an age estimate for the NANeb. We found that most of these YSOs appear younger than 10\,Myr according to the isochrones of \cite{Siess-2000}, with a median age of 3\,Myr. However, the most embedded and probably youngest YSOs were not included in this sample because our optical photometry is less sensitive than our IRAC observations and because the youngest stars are very red. Nevertheless, despite the caveats regarding this age estimate, it is the most robust published so far. Finally, thanks to our infrared images from 3.6 to 70\,$\mu$m, we have provided new clues on the nature of HH~555, a peculiar HH object reported in \cite{Bally-2003}. We detected a source at 70\,$\mu$m at the base of the bipolar jets. Our data add support to the hypothesis that the bent shape of HH~555 is a result of the jet propagating in an environment with a stellar ``sidewind''. \acknowledgements This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. Support for this work was provided by NASA through an award issued by JPL/Caltech. This research has made use of NASA's Astrophysics Data System (ADS) Abstract Service, and of the SIMBAD database, operated at CDS, Strasbourg, France.
This research has made use of data products from the Two Micron All-Sky Survey (2MASS), which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center, funded by the National Aeronautics and Space Administration and the National Science Foundation. These data were served by the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of the Digitized Sky Surveys, which were produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. The plates were processed into the present compressed digital form with the permission of these institutions. The research described in this paper was partially carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\section{Introduction} By an atomic monoid, we mean a commutative cancellative semigroup with unit element such that every non-unit has a factorization as a finite product of atoms (irreducible elements). The multiplicative monoid consisting of the nonzero elements from a noetherian domain is such a monoid. Let $H$ be an atomic monoid. Then $H$ is factorial (that is, every non-unit has a unique factorization into atoms) if and only if $H$ is a Krull monoid with trivial class group. The first objective of factorization theory is to describe the various phenomena related to the non-uniqueness of factorizations. This is done by a variety of arithmetical invariants such as sets of lengths (including all invariants derived from them, such as the elasticity and the set of distances) and by the catenary and tame degrees of the monoids. The second main objective is to then characterize the finiteness (or even to find the precise value) of these arithmetical invariants in terms of classical algebraic invariants of the objects under investigation. To illustrate this, we mention some results of this type (a few classical ones and some very recent). The following result by Carlitz (achieved in 1960) is considered as a starting point of factorization theory: the ring of integers $\mathfrak o_K$ of an algebraic number field has elasticity $\rho (\mathfrak o_K) = 1$ if and only if its class group has at most two elements (recall that, by definition, $H$ is half-factorial if and only if its elasticity $\rho (H) = 1$). A non-principal order $\mathfrak o$ in an algebraic number field has finite elasticity if and only if, for every prime ideal $\mathfrak p$ containing the conductor, there is precisely one prime ideal $\overline{\mathfrak p}$ in the principal order $\overline{ \mathfrak o}$ such that $\overline{\mathfrak p} \cap \mathfrak o = \mathfrak p$. 
This result (achieved by Halter-Koch in 1995) has far-reaching generalizations (due to Kainrath) to finitely generated domains and to various classes of Mori domains satisfying natural finiteness conditions (for all this, see \cite{Ca60, HK95b, Ka05a,Ka10a}). An integral domain is a Krull domain if and only if its multiplicative monoid of nonzero elements is a Krull monoid, and a noetherian domain is Krull if and only if it is integrally closed. A reduced Krull monoid is uniquely determined by its class group and by the distribution of prime divisors in the classes (see Lemma \ref{3.3} for a precise statement). Suppose $H$ is a Krull monoid with class group $G$ and let $G_P \subset G$ denote the set of classes containing prime divisors. Suppose that $G_P = G$. In that case, it is comparatively easy to show that any of the arithmetical invariants under discussion is finite if and only if $G$ is finite (the precise values of arithmetical invariants---when $G$ is finite---are studied by methods of Additive and Combinatorial Number Theory; see \cite[Chapter 6]{Ge-HK06a} or \cite{Ge09a} for a survey on this direction). However, only very little is known so far about the arithmetic of $H$ when $G$ is infinite and $G_P$ is a proper subset of $G$. The present paper provides an in-depth study of the arithmetic of Krull monoids having an infinite cyclic class group. This situation was first studied by Anderson, Chapman and Smith in 1994 \cite{An-Ch-Sm94c}, then by Hassler \cite{Ha02c}, and the most recent progress (again due to Chapman et al.) was achieved in \cite{B-C-R-S-S10}. We continue this work. The arithmetical properties under investigation are discussed in Section \ref{2} and at the beginning of Section \ref{5}. The required material on Krull monoids, together with a list of relevant examples, is summarized in Section \ref{3}. Our main results are Theorems \ref{main-theorem-I}, \ref{main-theorem-II}, \ref{STSL_thm} and Corollary \ref{final-cor}.
Along the way, we introduce new methods (see the proofs of Proposition \ref{rel_prop} and of Theorem \ref{prop-chain}) and solve an old problem proposed in 1994 (see the equivalence of $(a)$ and $(e)$ in Theorem \ref{main-theorem-I}). A more detailed discussion of the main results is shifted to the relevant sections where we have the required terminology at our disposal. \section{Preliminaries} \label{2} Our notation and terminology are consistent with \cite{Ge-HK06a}. We briefly gather some key notions. We denote by $\mathbb N$ the set of positive integers, and we put $\mathbb N_0 = \mathbb N \cup \{0\}$. For real numbers $a, b \in \mathbb R$, we set $[a, b] = \{ x \in \mathbb Z \mid a \le x \le b \}$. For a subset $X$ of (possibly negative) integers, we use $\gcd X$ and $\lcm X$ to denote the greatest common divisor and least common multiple, respectively, and their values are always chosen to be nonnegative regardless of the sign of the input. \medskip Let $L, L' \subset \mathbb Z$. We set $-L = \{-a \mid a \in L \}$, $L^+ = L \cap \mathbb N$ and $L^- = L \cap (-\mathbb N)$. We denote by $L+L' = \{a+b \mid a \in L,\, b \in L' \}$ their {\it sumset}. If $\emptyset \ne L \subset \mathbb N$, we call \[ \rho (L) = \sup \Bigl\{ \frac{m}{n} \; \Bigm| \; m, n \in L \Bigr\} = \frac{\sup L}{\min L} \, \in \mathbb Q_{\ge 1} \cup \{ \infty \} \] the \ {\it elasticity} \ of $L$, and we set $\rho (\{0\}) = 1$. Distinct elements $k, l \in L$ are called \emph{adjacent} if $L \cap [\min\{k,l\},\max\{k,l\}]=\{k,l\}$. A positive integer $d \in \mathbb N$ is called a \ {\it distance} \ of $L$ \ if there exist adjacent elements $k,l \in L$ with $d=|k-l|$. We denote by \ $\Delta (L)$ \ the {\it set of distances} of $L$. Note that $\Delta (L) = \emptyset$ if and only if $|L| \le 1$, and that $L$ is an arithmetical progression with difference $d \in \mathbb N$ if and only if $\Delta (L) \subset \{d\}$. We need the following generalization of an arithmetical progression. 
Let $d \in \mathbb N$, \ $M \in \mathbb N_0$ \ and \ $\{0,d\} \subset \mathcal D \subset [0,d]$. Then $L$ is called an {\it almost arithmetical multiprogression} \ ({\rm AAMP} \ for short) \ with \ {\it difference} \ $d$, \ {\it period} \ $\mathcal D$, \ and \ {\it bound} \ $M$, \ if \[ L = y + (L' \cup L^* \cup L'') \, \subset \, y + \mathcal D + d \mathbb Z \] where \begin{itemize} \item $L^*$ is finite and nonempty with $\min L^* = 0$ and $L^* = (\mathcal D + d \mathbb Z) \cap [0, \max L^*]$ \item $L' \subset [-M, -1]$ \ and \ $L'' \subset \max L^* + [1,M]$ \item $y \in \mathbb Z$. \end{itemize} Note that an AAMP is finite and nonempty. An AAMP with period $\{0,d\}$ is called an \emph{almost arithmetical progression} (AAP for short). \medskip By a {\it monoid}, we mean a commutative, cancellative semigroup with unit element; we denote the unit element by $\boldsymbol{1}$. Let $H$ be a monoid. We denote by $\mathcal A (H)$ the set of atoms (irreducible elements) of $H$, by $H^{\times}$ the group of invertible elements, and by $H_{{\text{\rm red}}} = \{ a H^{\times} \mid a \in H \}$ the associated reduced monoid of $H$. We call elements $a,b \in H$ associated (in symbols $a\simeq b$) if $aH^{\times}=bH^{\times}$. We say that $H$ is reduced if $|H^{\times}| = 1$. We denote by $\mathsf q (H)$ a quotient group of $H$ with $H \subset \mathsf q (H)$, and for a prime element $p \in H$, let $\mathsf v_p \colon \mathsf q (H) \to \mathbb Z$ be the $p$-adic valuation. For a subset $H_0 \subset H$, we denote by $[H_0] \subset H$ the submonoid generated by $H_0$ and by $\langle H_0 \rangle \subset \mathsf q (H)$ the subgroup generated by $H_0$. For elements $a, b \in H$, we frequently use, in case $a \t b$, the notation $a^{-1}b$ to denote the element $c \in H$ with $ac=b$; yet, we mention explicitly if we shift our investigations from $H$ to the quotient group of $H$. For a set $P$, we denote by $\mathcal F (P)$ the \ {\it free $($abelian$)$ monoid} \ with basis $P$. 
Then every $a \in \mathcal F (P)$ has a unique representation in the form \[ a = \prod_{p \in P} p^{\mathsf v_p(a) } \quad \text{with} \quad \mathsf v_p(a) \in \mathbb N_0 \ \text{ and } \ \mathsf v_p(a) = 0 \ \text{ for almost all } \ p \in P \,. \] We call $|a|= \sum_{p \in P}\mathsf v_p(a)$ the \emph{length} of $a$. The free monoid \ $\mathsf Z (H) = \mathcal F \bigl( \mathcal A(H_{\text{\rm red}})\bigr)$ \ is called the \ {\it factorization monoid} \ of $H$, and the unique homomorphism \[ \pi \colon \mathsf Z (H) \to H_{{\text{\rm red}}} \quad \text{satisfying} \quad \pi (u) = u \quad \text{for each} \quad u \in \mathcal A(H_{\text{\rm red}}) \] is called the \ {\it factorization homomorphism} \ of $H$. For $a \in H$ and $k \in \mathbb N$, the set \[ \begin{aligned} \mathsf Z_H (a) = \mathsf Z (a) & = \pi^{-1} (aH^\times) \subset \mathsf Z (H) \quad \text{is the \ {\it set of factorizations} \ of \ $a$} \,, \\ \mathsf Z_k (a) & = \{ z \in \mathsf Z (a) \mid |z| = k \} \quad \text{is the \ {\it set of factorizations} \ of \ $a$ of length \ $k$}, \quad \text{and} \\ \mathsf L_H (a) = \mathsf L (a) & = \bigl\{ |z| \, \bigm| \, z \in \mathsf Z (a) \bigr\} \subset \mathbb N_0 \quad \text{is the \ {\it set of lengths} \ of $a$} \,. \end{aligned} \] By definition, we have \ $\mathsf Z(a) = \{\boldsymbol{1}\}$ and $\mathsf L (a) = \{0\}$ for all $a \in H^\times$. The monoid $H$ is called \begin{itemize} \item {\it atomic} \ if \ $\mathsf Z(a) \ne \emptyset$ \ for all \ $a \in H$. \item a {\it \text{\rm BF}-monoid} (a bounded factorization monoid) \ if \ $\mathsf L (a)$ is finite and nonempty for all \ $a \in H$. \item {\it half-factorial} \ if \ $|\mathsf L (a)| = 1$ \ for all $a \in H$. \end{itemize} \smallskip We repeat the arithmetical concepts which are used throughout the whole paper. Some more specific notions will be recalled at the beginning of Section \ref{5}. Let $H$ be atomic and $a \in H$. 
Then \ $\rho (a) = \rho \bigl( \mathsf L (a) \bigr)$ \ is called the {\it elasticity} of $a$, and the {\it elasticity} of $H$ is defined as \[ \rho (H) = \sup \{ \rho (b) \mid b \in H \} \in \mathbb R_{\ge 1} \cup \{\infty\} \,. \] We say that $H$ has \ {\it accepted elasticity } \ if there exists some $b \in H$ with $\rho(b)=\rho(H)$. Let $k \in \mathbb N$. If $H \ne H^{\times}$, then \[ \mathcal V_k (H) \ = \ \bigcup_{k \in \mathsf{L}(a), a \in H} \mathsf{L}(a) \] is the union of all sets of lengths containing $k$. When $ H^\times=H$, we set $\mathcal V_k (H) =\{k\}$. In both cases, we define $\rho_k (H) = \sup \mathcal V_k (H)$ and $\lambda_k (H)= \min \mathcal V_k (H)$. Clearly, we have $\mathcal V_1 (H) =\{1\}$ and $k \in \mathcal V_k (H)$. By its definition, $H$ is half-factorial if and only if $\mathcal V_k (H) = \{k\}$ for each $k \in \mathbb N$. We denote by \[ \Delta (H) \ = \ \bigcup_{b \in H } \Delta \bigl( \mathsf L (b) \bigr) \ \subset \mathbb N \] the {\it set of distances} of $H$, and by \( \mathcal{L}(H) = \{\mathsf{L} (b) \mid b \in H \} \) the {\it system of sets of lengths} of $H$. \smallskip Let $z,\, z' \in \mathsf Z (H)$. Then we can write \[ z = u_1 \cdot \ldots \cdot u_lv_1 \cdot \ldots \cdot v_m \quad \text{and} \quad z' = u_1 \cdot \ldots \cdot u_lw_1 \cdot \ldots \cdot w_n\,, \] where $l,\,m,\, n\in \mathbb N_0$ and $u_1, \ldots, u_l,\,v_1, \ldots,v_m,\, w_1, \ldots, w_n \in \mathcal A(H_{\text{\rm red}})$ are such that \[ \{v_1 ,\ldots, v_m \} \cap \{w_1, \ldots, w_n \} = \emptyset\,. \] Then $\gcd(z,z')=u_1\cdot\ldots\cdot u_l$, and we call \[ \mathsf d (z, z') = \max \{m,\, n\} = \max \{ |z \gcd (z, z')^{-1}|, |z' \gcd (z, z')^{-1}| \} \in \mathbb N_0 \] the {\it distance} between $z$ and $z'$. If $\pi (z) = \pi (z')$ and $z \ne z'$, then \begin{equation}\label{E:Dist} 2 + \bigl| |z |-|z'| \bigr| \le \mathsf d (z, z') \end{equation} by \cite[Lemma 1.6.2]{Ge-HK06a}. 
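To make the distance $\mathsf d (z, z')$ and the lower bound \eqref{E:Dist} concrete, consider a standard example, not taken from this paper:

```latex
% Standard example (not from this paper): in the multiplicative monoid of
% the ring of integers $\mathbb Z[\sqrt{-5}]$, a Krull monoid with class
% group $\mathbb Z/2\mathbb Z$, the element $6$ has the two factorizations
\[
z = 2 \cdot 3 \quad \text{and} \quad
z' = (1+\sqrt{-5}) \cdot (1-\sqrt{-5}) \,,
\]
% whose atoms are pairwise non-associated, whence $\gcd (z, z') = \boldsymbol{1}$ and
\[
\mathsf d (z, z') = \max \{2, 2\} = 2 = 2 + \bigl| |z| - |z'| \bigr| \,,
\]
% i.e., the lower bound \eqref{E:Dist} is attained with equality.
```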
For subsets $X, Y \subset \mathsf Z (H)$, we set \[ \mathsf d (X, Y) = \min \{ \mathsf d (x, y ) \mid x \in X, \, y \in Y \} \,, \] and thus $X \cap Y \ne \emptyset$ if and only if $\mathsf d (X, Y) = 0$. \medskip We recall the concepts of the (monotone) catenary and tame degrees (see also the beginning of Section \ref{7}). The \ {\it catenary degree} \ $\mathsf c(a)$ \ of the element $a$ is the smallest $N \in \mathbb N_0 \cup \{ \infty\}$ \ such that, for any two factorizations \ $z,\, z'$ \ of \ $a$, there exists a finite sequence \ $z = z_0\,, \, z_1\,, \ldots, z_k = z'$ \ of factorizations of \ $a$ \ such that \ $\mathsf d (z_{i-1}, z_i) \le N \quad \text{for all} \quad i \in [1,k]$. The \ {\it monotone catenary degree} \ $\mathsf{c}_{{\text{\rm mon}}} (a)$ is defined in the same way with the additional restriction that $|z_0| \le \ldots \le |z_k|$ or $|z_0| \ge \ldots \ge |z_k|$. We say that the two factorizations $z$ and $z'$ can be concatenated by a (monotone) $N$-chain if a sequence fulfilling the above conditions exists. Moreover, \[ \mathsf c(H) = \sup \{ \mathsf c(b) \mid b \in H\} \in \mathbb N_0 \cup \{\infty\} \quad \text{and} \quad \mathsf c_{{\text{\rm mon}}} (H) = \sup \{ \mathsf c_{{\text{\rm mon}}} (b) \mid b \in H\} \in \mathbb N_0 \cup \{\infty\} \quad \, \] denote the \ {\it catenary degree} \ and the \ {\it monotone catenary degree} of $H$. Clearly, we have $\mathsf c (a) \le \mathsf c_{{\text{\rm mon}}} (a)$ for all $a \in H$, as well as $\mathsf c (H) \le \mathsf c_{{\text{\rm mon}}} (H)$, and \eqref{E:Dist} implies that $2 + \sup \Delta (H) \le \mathsf c (H)$. For $x \in \mathsf Z (H)$, let $\mathsf t (a,x) \in \mathbb N_0 \cup \{\infty\}$ denote the smallest $N\in \mathbb N_0 \cup \{\infty\}$ with the following property{\rm \,:} \begin{enumerate} \smallskip \item[] If $\mathsf Z(a) \cap x\mathsf Z(H) \ne \emptyset$ and $z \in \mathsf Z(a)$, then there exists $z' \in \mathsf Z(a) \cap x\mathsf Z(H)$ such that $\mathsf d (z,z') \le N$. 
\end{enumerate} For subsets $H' \subset H$ and $X \subset \mathsf Z(H)$, we define \[ \mathsf t (H',X) = \sup \big\{ \mathsf t (b,x) \, \big| \, b \in H', x \in X\big\} \in \mathbb N_0 \cup \{\infty\}\,. \] $H$ is called \ {\it locally tame} \ if \ $\mathsf t (H,u) < \infty$ \ for all $u \in \mathcal A(H_{{\text{\rm red}}})$ (see the beginning of Section \ref{4} and Definition \ref{def-tamely-gen}). \section{Krull monoids: Basic Properties and Examples} \label{3} The theory of Krull monoids is presented in detail in the monographs \cite{HK98, Gr01, Ge-HK06a}. Here we first gather the required terminology. After that, we recall some facts concerning transfer homomorphisms, since the arithmetic of Krull monoids is studied via such homomorphisms. In particular, we deal with block homomorphisms (which are transfer homomorphisms) from Krull monoids into the associated block monoids. At the end of this section, we discuss examples of Krull monoids with infinite cyclic class group. \smallskip \noindent {\bf Krull monoids.} Let \ $H$ \ and \ $D$ \ be monoids. A monoid homomorphism $\varphi \colon H \to D$ \ is called \begin{itemize} \smallskip \item a {\it divisor homomorphism} if $\varphi(a)\mid\varphi(b)$ implies that $a \t b$ for all $a,b \in H$. \smallskip \item {\it cofinal} \ if for every $a \in D$ there exists some $u \in H$ such that $a \t \varphi(u)$. \smallskip \item a {\it divisor theory} (for $H$) if $D = \mathcal F (P)$ for some set $P$, $\varphi$ is a divisor homomorphism, and for every $p \in P$ (equivalently for every $a \in \mathcal{F}(P)$), there exists a finite subset $\emptyset \ne X \subset H$ satisfying $p = \gcd \bigl( \varphi(X) \bigr)$. \end{itemize} Note that, by definition, every divisor theory is cofinal. We call $\mathcal{C}(\varphi)=\mathsf q (D)/ \mathsf q (\varphi(H))$ the class group of $\varphi $ and use additive notation for this group. 
For \ $a \in \mathsf q(D)$, we denote by \ $[a] = [a]_{\varphi} = a \,\mathsf q(\varphi(H)) \in \mathsf q (D)/ \mathsf q (\varphi(H))$ \ the class containing \ $a$. We recall that $\varphi$ is cofinal if and only if $\mathcal{C}(\varphi) = \{[a]\mid a \in D \}$, and if $\varphi$ is a divisor homomorphism, then $\varphi(H)= \{a \in D \mid [a]=[1]\}$. If $\varphi \colon H \to \mathcal F (P)$ is a cofinal divisor homomorphism, then \[ G_P = \{[p] = p \mathsf q (\varphi(H)) \mid p \in P \} \subset \mathcal{C}(\varphi) \] is called the \ {\it set of classes containing prime divisors}, and we have $[G_P] = \mathcal{C}(\varphi)$ (for a converse, see Lemma \ref{lem_char}). If $H \subset D$ is a submonoid, then $H$ is called \emph{cofinal} (\emph{saturated}, resp.) in $D$ if the imbedding $H \hookrightarrow D$ is cofinal (a divisor homomorphism, resp.). The monoid $H$ is called a {\it Krull monoid} if it satisfies one of the following equivalent conditions (\cite[Theorem 2.4.8]{Ge-HK06a}; see \cite{Ki-Ki-Pa07a} for recent progress){\rm \,:} \begin{itemize} \item $H$ is $v$-noetherian and completely integrally closed. \smallskip \item $H$ has a divisor theory. \smallskip \item $H_{{\text{\rm red}}}$ is a saturated submonoid of a free monoid. \end{itemize} In particular, $H$ is a Krull monoid if and only if $H_{{\text{\rm red}}}$ is a Krull monoid. Let $H$ be a Krull monoid. Then a divisor theory $\varphi \colon H \to \mathcal F (P)$ is unique up to unique isomorphism. In particular, the class group $\mathcal C ( \varphi)$ defined via a divisor theory of $H$ and the subset of classes containing prime divisors depend only on $H$. Thus it is called the {\it class group} of $H$ and is denoted by $\mathcal C (H)$. 
In fact, for every Krull monoid $H$, the map from $H$ to $\mathcal{I}_v^{*}(H)$---the monoid of $v$-invertible $v$-ideals of $H$, which is a free monoid with basis $\mathfrak X (H)$---that assigns to each $a\in H$ the principal ideal it generates is a divisor theory, and thus $\mathcal{C}(H)$ is (up to isomorphism) the $v$-class group of $H$. \smallskip \noindent {\bf Transfer homomorphisms.} We recall some of the main properties which are needed in the sequel (details can be found in \cite[Section 3.2]{Ge-HK06a}). \begin{definition} \label{3.1} A monoid homomorphism \ $\theta \colon H \to B$ is called a \ {\it transfer homomorphism} \ if it has the following properties: \smallskip \begin{enumerate} \item[] \begin{enumerate} \item[{\bf (T\,1)\,}] $B = \theta(H) B^\times$ \ and \ $\theta ^{-1} (B^\times) = H^\times$. \smallskip \item[{\bf (T\,2)\,}] If $u \in H$, \ $b,\,c \in B$ \ and \ $\theta (u) = bc$, then there exist \ $v,\,w \in H$ \ such that \ $u = vw$, \ $\theta (v) \simeq b$ \ and \ $\theta (w) \simeq c$. \end{enumerate}\end{enumerate} \end{definition} \medskip \noindent Every transfer homomorphism $\theta$ gives rise to a unique extension $\overline \theta \colon \mathsf Z(H) \to \mathsf Z(B)$ satisfying \[\qquad \quad \overline \theta (uH^\times) = \theta (u)B^\times \quad \text{for each} \quad u \in \mathcal A(H)\,. \] For $a \in H$, we denote by \ $\mathsf c (a, \theta)$ \ the smallest $N \in \mathbb N_0 \cup \{\infty\}$ with the following property: \smallskip \begin{enumerate} \item[] If $z,\, z' \in \mathsf Z_H (a)$ and $\overline \theta (z) = \overline \theta (z')$, then there exist some $k \in \mathbb N_0$ and factorizations $z=z_0, \ldots, z_k=z' \in \mathsf Z_H (a)$ such that \ $\overline \theta (z_i) = \overline \theta (z)$ and \ $\mathsf d (z_{i-1}, z_i) \le N$ for all $i \in [1,k]$ \ (that is, $z$ and $z'$ can be concatenated by an $N$-chain in the fiber \ $\mathsf Z_H (a) \cap \overline \theta ^{-1} (\overline \theta (z))$\,).
\end{enumerate} \smallskip\noindent Then \[ \mathsf c (H, \theta) = \sup \{\mathsf c (a, \theta) \mid a \in H \} \in \mathbb N_0 \cup \{\infty\} \] denotes the {\it catenary degree in the fibres}. \begin{lemma} \label{3.2} Let $\theta \colon H \to B$ and $\theta' \colon B \to B'$ be transfer homomorphisms of atomic monoids. \begin{enumerate} \item For every $a \in H$, we have $\overline \theta(\mathsf Z_H(a)) = \mathsf Z_B(\theta(a))$ and \ $\mathsf L_H(a) = \mathsf L_B(\theta(a))$. \smallskip \item $\mathsf c (B) \le \mathsf c (H) \le \max \{ \mathsf c (B), \mathsf c (H, \theta) \}$, \ $\mathsf c_{{\text{\rm mon}}} (B) \le \mathsf c_{{\text{\rm mon}}} (H) \le \max \{ \mathsf c_{{\text{\rm mon}}} (B), \mathsf c (H, \theta) \}$ and $\delta (B) = \delta (H)$. \smallskip \item For every \ $a \in H$ \ and all \ $k, l \in \mathsf L (a)$, \ we have \ $\mathsf d \bigl( \mathsf Z_k (a), \mathsf Z_l (a) \bigr) = \mathsf d \bigl( \mathsf Z_k \bigl( \theta(a)\bigr), \mathsf Z_l \bigl( \theta (a) \bigr) \bigr)$. \smallskip \item For every $a\in H$, we have $\mathsf{c}(a, \theta' \circ \theta) \le \max\{\mathsf{c}(a,\theta), \mathsf{c}(\theta(a),\theta')\}$. \newline In particular, $\mathsf{c}(H, \theta' \circ \theta) \le \max\{\mathsf{c}(H, \theta), \mathsf{c}(B, \theta')\}$. \end{enumerate} \end{lemma} \begin{proof} 1. This follows from \cite[Proposition 3.2.3]{Ge-HK06a}. \smallskip 2. The first statement follows from Theorem 3.2.5.4, the second from Lemma 3.2.6 in \cite{Ge-HK06a}, and the third from \cite[Theorem 3.14]{Ge-Gr09b}. \smallskip 3. Let $a \in H$ and $k, l \in \mathsf L (a)$. 
If $z, z' \in \mathsf Z (a)$ with $|z| = k$ and $|z'| = l$, then $|\overline \theta (z)| = k$, $|\overline \theta (z')| = l$ and $\mathsf d \bigl(\overline \theta (z),\, \overline \theta(z') \bigr) \le \mathsf d(z,z')$, which implies that $\mathsf d \bigl( \mathsf Z_k \bigl( \theta(a)\bigr), \mathsf Z_l \bigl( \theta (a) \bigr) \bigr) \le \mathsf d \bigl( \mathsf Z_k (a), \mathsf Z_l (a) \bigr)$. To verify the reverse inequality, let $\overline z_1, \overline z_2 \in \mathsf Z ( \theta (a))$ be given. We pick any $z_1 \in \mathsf Z (a)$ with $\overline \theta (z_1) = \overline z_1$. By \cite[Proposition 3.2.3.3.(c)]{Ge-HK06a}, there exists a factorization $z_2 \in \mathsf Z (a)$ such that $\overline{\theta} (z_2) = \overline z_2$ and $\mathsf d (z_1, z_2) = \mathsf d ( \overline z_1, \overline z_2)$. Since $|z_i| = |\overline z_i|$ for $i \in \{1,2\}$, it follows that $\mathsf d \bigl( \mathsf Z_k (a), \mathsf Z_l (a) \bigr) \le \mathsf d \bigl( \mathsf Z_k \bigl( \theta(a)\bigr), \mathsf Z_l \bigl( \theta (a) \bigr) \bigr)$. \smallskip 4. We recall that $\theta' \circ \theta$ is a transfer homomorphism (see the paragraph after \cite[Definition 3.2.1]{Ge-HK06a}). Let $a\in H$. Let $z,z'\in \mathsf{Z}_H(a)$ with $\overline{\theta' \circ \theta}(z)=\overline{\theta' \circ \theta}(z')$. Let $\overline{z}= \overline{\theta}(z)$ and $\overline{z'}= \overline{\theta}(z')$. We have $\overline{z},\overline{z'}\in \mathsf{Z}_{B}(\theta(a))$ and $\overline{\theta'}(\overline{z})=\overline{\theta'}(\overline{z'})$. Thus, by the definition of $\mathsf{c}(\theta(a),\theta')$, there exist some $k\in \mathbb{N}_0$ and $\overline{z}=\overline{z_0}, \ldots, \overline{z_k}=\overline{z'} \in \mathsf Z_B (\theta(a))$ such that \ $\overline{\theta'} (\overline{z_i}) = \overline{\theta'} (\overline{z})$ and \ $\mathsf d (\overline{z_{i-1}}, \overline{z_i}) \le \mathsf{c}(\theta(a),\theta')$ for each $i \in [1,k]$. Let $z_0=z$. 
Again, by \cite[Proposition 3.2.3.3.(c)]{Ge-HK06a}, for each $i<k$, there exists some factorization $z_{i+1} \in \mathsf{Z}_H (a)$ such that $\overline{\theta} (z_{i+1}) = \overline{z_{i+1}}$ and $\mathsf d (z_i, z_{i+1}) = \mathsf d (\overline{z_{i}}, \overline{z_{i+1}})$. Now, we have $\overline{\theta}(z_k)= \overline{z'}= \overline{\theta}(z')$. Thus, by the definition of $\mathsf{c}(a, \theta)$, there exist some $l \in \mathbb{N}_0$ and $z_k=y_0, \ldots, y_l=z' \in \mathsf Z_H (a)$ such that \ $\overline \theta (y_i) = \overline \theta (z')$ and \ $\mathsf d (y_{i-1}, y_i) \le \mathsf{c}(a, \theta)$ for each $i \in [1,l]$. Since $\overline \theta (y_i) = \overline \theta (z')$ clearly implies $\overline{\theta'\circ \theta} (y_i) = \overline{\theta'\circ \theta} (z')$, we get that the $\max \{\mathsf{c}(\theta(a), \theta'), \mathsf{c}(a, \theta)\}$-chain $z=z_0, \dots, z_k=y_0, \dots, y_l=z'$ has the required properties. \end{proof} \smallskip \noindent {\bf Monoids of zero-sum sequences.} Let \ $G$ \ be an additive abelian group, \ $G_0 \subset G$ \ a subset and $\mathcal F (G_0)$ the free monoid with basis $G_0$. According to the tradition of combinatorial number theory, the elements of $\mathcal F(G_0)$ are called \ {\it sequences } over \ $G_0$. Thus a sequence $S \in \mathcal F (G_0)$ will be written in the form \[ S = g_1 \cdot \ldots \cdot g_l = \prod_{g \in G_0} g^{\mathsf v_g (S)} \,, \] and we use all the notions (such as the length) as in general free monoids. Again using traditional language, we refer to \ $\mathsf v_g(S)$ \ as the {\it multiplicity} of $g$ in $S$ and refer to a divisor of $S$ as a {\it subsequence}. If $T|S$, then $T^{-1}S$ denotes the subsequence of $S$ obtained by removing the terms of $T$. 
We call the set \ $\supp (S) = \{ g_1, \ldots, g_l\}\subset G_0$ \ the \ {\it support} \ of $S$, \ $\sigma (S) = g_1+ \ldots + g_l\in G$ \ the \ {\it sum} \ of $S$, \ and define \begin{eqnarray}\nonumber \Sigma (S) &=& \Bigl\{ \sum_{i \in I} g_i \mid \emptyset \ne I \subset [1,l] \Bigr\} \ \subset \ G \quad \text{ and, for } k \in \mathbb N \,, \\ \nonumber \Sigma_k(S) &=& \Bigl\{ \sum_{i \in I} g_i \mid I \subset [1,l] ,\,|I|=k\Bigr\} \ \subset \ G.\end{eqnarray} We set \ $-S = (-g_1) \cdot \ldots \cdot (-g_l)$. If $G = \mathbb Z$, then we define \[ S^+ = \prod_{g \in G_0^+} g^{\mathsf v_g (S)} \quad \text{and} \quad S^- = \prod_{g \in G_0^-} g^{\mathsf v_g (S)} \,, \] and thus we have $S = S^+ S^- 0^{\mathsf v_0 (S)}$. The monoid \[ \mathcal B(G_0) = \{ S \in \mathcal F(G_0) \mid \sigma (S) =0\} \] is called the \ {\it monoid of zero-sum sequences} \ over \ $G_0$, and its elements are called \ {\it zero-sum sequences} \ over \ $G_0$. A sequence $S\in \mathcal F(G_0)$ is called \ {\it zero-sum free} \ if it has no nontrivial zero-sum subsequence (note that the trivial/empty sequence is defined to have sum zero). For every arithmetical invariant \ $*(H)$ \ defined for a monoid $H$, we write $*(G_0)$ instead of $*(\mathcal B(G_0))$. In particular, we set \ $\mathcal A (G_0) = \mathcal A (\mathcal B (G_0))$. We define the \ {\it Davenport constant} \ of $G_0$ by \[ \mathsf D (G_0) = \sup \bigl\{ |U| \, \bigm| \; U \in \mathcal A (G_0) \bigr\} \in \mathbb N_0 \cup \{\infty\} \,, \] which is a central invariant in zero-sum theory (see \cite{Ga-Ge06b}, and also \cite{Ge09a} for its relevance in factorization theory). Clearly, $\mathcal B (G_0) \subset \mathcal F (G_0)$ is saturated, and hence $\mathcal B (G_0)$ is a Krull monoid.
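\smallskip
To illustrate these notions, consider $G_0 = \{-2, 3\}$. A sequence $(-2)^x 3^y$ lies in $\mathcal B (G_0)$ if and only if $2x = 3y$, and the minimal nontrivial solution is $(x,y) = (3,2)$. Hence $\mathcal A (G_0) = \{ (-2)^3 3^2 \}$ consists of a single atom, this atom is a prime of $\mathcal B (G_0)$ (indeed, $\mathcal B (G_0)$ is factorial), and $\mathsf D (G_0) = 5$.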
We note that $\mathcal B (G_0) \subset \mathcal F (G_0)$ is cofinal if and only if for each $g \in G_0$ there is a $B \in \mathcal B (G_0)$ with $\mathsf v_g (B) > 0$ (see \cite[Proposition 2.5.6]{Ge-HK06a}); if this is the case, then the set $G_0$ is called \emph{condensed}. For a condensed set $G_0$, the class group of $\mathcal{B}(G_0) \hookrightarrow \mathcal{F}(G_0)$ is $\langle G_0 \rangle$, and the subset of classes containing prime divisors is $G_0$. For $G_0 \subset \mathbb Z$, we have that $G_0$ is condensed if and only if either $G_0^+ \ne \emptyset$ and $G_0^- \ne \emptyset$ or $G_0 \subset \{0\}$. The latter case, which in our context can be disregarded (see Lemma \ref{3.3}), is frequently automatically excluded by some of the conditions we impose in our results; if not, we impose the extra condition $|G_0| \ge 2$ to this end. \smallskip \noindent {\bf Block monoids associated to Krull monoids.} We will make substantial use of the following result (\cite[Section 3.4]{Ge-HK06a}). \begin{lemma} \label{3.3} Let $H$ be a Krull monoid, $\varphi \colon H \to F = \mathcal F (P)$ a cofinal divisor homomorphism, $G = \mathcal C (\varphi)$ its class group, and $G_P \subset G$ the set of classes containing prime divisors. Let $\widetilde{\boldsymbol \beta} \colon F \to \mathcal F (G_P)$ denote the unique homomorphism defined by $\widetilde{\boldsymbol \beta} (p) = [p]$ for all $p \in P$. \begin{enumerate} \item The homomorphism $\boldsymbol \beta = \widetilde{\boldsymbol \beta} \circ \varphi \colon H \to \mathcal B (G_P)$ is a transfer homomorphism with $\mathsf c (H, \boldsymbol \beta) \le 2$. In particular, it has all the properties mentioned in Lemma \ref{3.2}. \item $\mathcal B (G_P) \subset \mathcal F (G_P)$ is saturated and cofinal. If $G$ is infinite cyclic, then $G_P \subset G$ is a condensed set and $|G_P|\ge 2$.
\end{enumerate} \end{lemma} The homomorphism $\boldsymbol \beta$ is called the {\it block homomorphism}, and $\mathcal B (G_P)$ is called the {\it block monoid} associated to $\varphi$. If $\varphi$ is a divisor theory, then $\mathcal B (G_P)$ is called the block monoid associated to $H$. \smallskip \noindent {\bf One more theorem and examples}. The following lemma highlights the strong connection between the algebraic structure of a Krull monoid and its class group and provides a realization result (see \cite[Theorem 2.5.4]{Ge-HK06a}). Let $G$ be an abelian group and $(m_g)_{g \in G}$ a family of cardinal numbers. We say that a Krull monoid $H$ with divisor theory $\varphi \colon H \to \mathcal F (P)$ has \ {\it characteristic \ $( G, (m_g)_{g \in G} )$} \ if there is a group isomorphism \ $\Phi \colon G\tilde{\to} \mathcal C(H)$ \ such that ${\rm card} ( P \cap \Phi(g)) = m_g$ for every $g \in G$. \begin{lemma} \label{lem_char} Let $G$ be an abelian group, $(m_g)_{g \in G}$ a family of cardinal numbers and $G_0 = \{g \in G \mid m_g \ne 0\}$. \begin{enumerate} \item The following statements are equivalent{\rm \,:} \begin{enumerate} \item[(a)] There exists a Krull monoid $H$ with divisor theory $\varphi \colon H \to \mathcal F (P)$ and a group isomorphism \ $\Phi \colon G \to \mathcal C(H)$ \ such that \newline ${\rm card} ( P \cap \Phi(g)) = m_g$ for every $g \in G$. \item[(b)] $G= [G_0]$, and $G = [G_0 \setminus \{g\}]$ for every $g \in G_0$ with $m_g =1$. \end{enumerate} \smallskip \item Two Krull monoids \ $H$ \ and \ $H'$ \ have the same characteristic if and only if \ $H_{{\text{\rm red}}} \cong H_{{\text{\rm red}}}'$. \end{enumerate} \end{lemma} \medskip \begin{examples}~ \smallskip {\bf 1. Domains.} A domain $R$ is a Krull domain if and only if its multiplicative monoid of nonzero elements is a Krull monoid.
As a special case of Claborn's Realization Theorem, there is the following result: For every subset $G_0 \subset \mathbb Z$ with $[G_0] = \mathbb Z$, there is a Dedekind domain $R$ and an isomorphism $\Phi \colon \mathbb Z \to \mathcal C (R)$ such that $\Phi (G_0) = \{ g \in \mathcal C (R) \mid g \cap \mathfrak X (R) \ne \emptyset \}$ (\cite[Theorem 3.7.8]{Ge-HK06a}). More results of this flavor are discussed in \cite[Section 3.7]{Ge-HK06a} and \cite[Section 5]{Ge-HK92a}. Let $R$ be a domain and $H$ a monoid such that $R[H]$ is a Krull domain. There are a variety of results on the class group of $R[H]$, which provide many explicit monoid domains having infinite cyclic class group (\cite[\S 16]{Gi84}, see also \cite{Ki01}). Generalized power series domains that are Krull are studied in \cite{Ki-Pa01}. \medskip {\bf 2. Zero-sum sequences.} Let $G_0 \subset \mathbb Z$ be a subset such that $[G_0 \setminus \{g\}] = \mathbb Z$ for all $g \in G_0$. Then the monoid of zero-sum sequences $\mathcal B (G_0)$ is a Krull monoid with class group isomorphic to $\mathbb Z$, and $G_0$ corresponds to the set of classes containing prime divisors (\cite[Proposition 2.5.6]{Ge-HK06a}). \medskip {\bf 3. Module theory.} Let $R$ be a (not necessarily commutative) ring and $\mathcal C$ a class of (right) $R$-modules---closed under finite direct sums, direct summands and isomorphisms---such that $\mathcal C$ has a set $V ( \mathcal C)$ of representatives (that is, every module $M \in \mathcal C$ is isomorphic to a unique $[M] \in V( \mathcal C)$). Then $V ( \mathcal C)$ becomes a commutative semigroup under the operation $[M] + [N] = [M \oplus N]$, which carries detailed information about the direct-sum behavior of modules in $\mathcal C$, e.g., whether or not the Krull--Remak--Azumaya--Schmidt Theorem holds, and, when it does not, how badly it fails. If every module $M \in \mathcal C$ has a semilocal endomorphism ring, then $V (\mathcal C)$ is a Krull monoid (\cite{Fa02}).
For situations where this condition is satisfied and when the class group of $V (\mathcal C)$ is cyclic, we refer to recent work of Facchini, Hassler, Wiegand et al. (see, for example, \cite{Wi01, F-H-K-W06, Fa06a, Fa-He06a}). \medskip {\bf 4. Diophantine monoids.} A Diophantine monoid is a monoid which consists of the solutions in nonnegative integers of a system of linear Diophantine equations. In more technical terms, if $m, n \in \mathbb N$ and $A \in M_{m,n} ( \mathbb Z)$, then $H = \{ \boldsymbol x \in \mathbb N_0^n \mid A \boldsymbol x = \boldsymbol 0 \}$ is a Diophantine monoid. Moreover, $H$ is a Krull monoid, and if $m = 1$, then its class group is cyclic and there is a characterization of when it is infinite (\cite[Theorem 1.3]{Ch-Kr-Oe00}, \cite[Proposition 4.3]{Ch-Kr-Oe02}; see also \cite[Theorem 2.7.14]{Ge-HK06a} and \cite[Chapter II.8]{Gr01}). \end{examples} \section{Arithmetical Properties Equivalent to the Finiteness of $G_P^+$ or $G_P^-$} \label{4} Before we formulate our main characterization result, Theorem \ref{main-theorem-I}, we recall a recent characterization of tameness, which is in contrast with our present results. Let $H$ be an atomic monoid. For an element $b \in H $, let \ $\omega (H, b)$ \ denote the smallest \ $N \in \mathbb N_0 \cup \{\infty\} $ \ with the following property{\rm \,:} \begin{enumerate} \smallskip \item[] For all $n \in \mathbb N $ and $a_1, \ldots, a_n \in H $, if $b \t a_1 \cdot \ldots \cdot a_n $, then there exists a subset $\Omega \subset [1,n] $ such that $|\Omega | \le N $ and \[ b \Bigm| \, \prod_{\nu \in \Omega} a_\nu \,. \] \end{enumerate} Clearly, $b \in H$ is a prime if and only if $\omega (H, b) = 1$, and so the $\omega (H, \cdot)$ values measure how far away atoms are from primes. They are closely related to the local tame degrees $\mathsf t (H, \cdot)$.
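\smallskip
As a simple illustration, let $G_0 = \{-3, -2, 2, 3\}$ and $U = (-2) \cdot 2 \in \mathcal A (G_0)$. Then $U$ is not prime, since $U$ divides the product $\bigl( (-2)^3 3^2 \bigr) \bigl( (-3)^2 2^3 \bigr)$ but divides neither factor. On the other hand, whenever $U \t a_1 \cdot \ldots \cdot a_n$ with $a_1, \ldots, a_n \in \mathcal B (G_0)$, some $a_i$ contains $-2$ and some $a_j$ contains $2$, whence $U \t a_i a_j$; therefore $\omega \bigl( \mathcal B (G_0), U \bigr) = 2$.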
A detailed study of their relationship can be found in \cite[Section 3]{Ge-Ha08a}, but here we mention only two simple facts (to simplify the formulation, we suppose that $H$ is reduced): \begin{itemize} \item $\omega (H, u) \le \mathsf t (H, u)$ for all $\boldsymbol{1} \ne u \in H$ which are not prime (this follows from the definition). \item $\sup \{ \mathsf t (H, u) \mid u \in \mathcal A (H) \} < \infty$ if and only if $\sup \{ \omega (H, u) \mid u \in \mathcal A (H) \} < \infty$ (\cite[Proposition 3.5]{Ge-Ka10a}). \end{itemize} The monoid $H$ is said to be \ {\it tame} \ if the above suprema are finite. Note that the finiteness in Proposition \ref{tame-charact}.1 holds without any assumption on $G_P$ (indeed, it holds for all $v$-noetherian monoids \cite[Theorem 4.2]{Ge-Ha08a}). In particular, one should compare Propositions \ref{tame-charact}.1 and \ref{tame-charact}.2.(c) and Theorem \ref{main-theorem-I}.(b). \smallskip \begin{proposition} \label{tame-charact} Let $H$ be a Krull monoid and $\varphi \colon H\to \mathcal{F}(P)$ a cofinal divisor homomorphism into a free monoid such that the class group $G= \mathcal{C}(\varphi)$ is an infinite cyclic group that we identify with $\mathbb Z$. Let $G_P \subset G$ denote the set of classes containing prime divisors. \begin{enumerate} \item $\omega (H, u) < \infty$ for all $u \in \mathcal A (H)$. \smallskip \item If $\varphi$ is a divisor theory, then the following statements are equivalent{\rm \,:} \begin{enumerate} \item[(a)] $G_P$ \ is finite. \item[(b)] $\mathsf D (G_P) < \infty$. \item[(c)] $H$ is tame. \end{enumerate} \end{enumerate} \end{proposition} \smallskip The equivalence of the three properties is a special case of \cite[Theorem 4.2]{Ge-Ka10a}. It is essential that the imbedding is a divisor theory and not only a cofinal divisor homomorphism. 
Indeed, if $G_0 = \{-1\} \cup \mathbb N$, then $\mathcal B (G_0) \hookrightarrow \mathcal F (G_0)$ is a cofinal divisor homomorphism, $\mathsf D (G_0) = \infty$, but $\mathcal B (G_0)$ is factorial and hence tame (see also Lemmas \ref{lem_char} and \ref{transfer-to-finite}). \medskip \begin{theorem} \label{main-theorem-I} Let $H$ be a Krull monoid and $\varphi \colon H\to \mathcal{F}(P)$ a cofinal divisor homomorphism into a free monoid such that the class group $G= \mathcal{C}(\varphi)$ is an infinite cyclic group that we identify with $\mathbb Z$. Let $G_P \subset G$ denote the set of classes containing prime divisors. The following statements are equivalent{\rm \,:} \begin{enumerate} \item[(a)] $G_P^+$ \ or \ $G_P^-$ \ is finite. \smallskip \item[(b)] $H$ is locally tame, i.e., $\mathsf t (H, u) < \infty$ for all $u \in \mathcal A (H_{{\text{\rm red}}})$. \smallskip \item[(c)] The catenary degree $\mathsf c (H)$ is finite. \smallskip \item[(d)] The set of distances $\Delta (H)$ is finite. \smallskip \item[(e)] The elasticity $\rho(H)$ is a rational number. \smallskip \item[(f)] $\rho_2 (H)$ is finite. \smallskip \item[(g)] There exists some $M \in \mathbb{N}$ such that, for each $k \in \mathbb{N}$, we have $\rho_{k+1}(H) - \rho_k(H) \le M$. \smallskip \item[(h)] There exists some $M \in \mathbb{N}$ such that, for each $k \in \mathbb{N}$, the set $\mathcal{V}_k(H)$ is an {\rm AAP} with difference $\min \Delta(H)$ and bound $M$. \end{enumerate} \end{theorem} \medskip We point out the crucial implications in the above result. Suppose that $(a)$ holds. Then $(b)$, $(c)$, $(e)$, $(g)$ and $(h)$ are strong statements on the arithmetic of $H$. The conditions $(d)$ and $(f)$ are very weak arithmetical statements (indeed, the implications $(e) \Rightarrow (f)$, $(g) \Rightarrow (f)$ and $(h) \Rightarrow (f)$ hold trivially in any atomic monoid). The crucial point is that $(d)$ and $(f)$ both imply $(a)$. 
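\smallskip
The following standard example, included only for illustration, indicates why the finiteness condition in $(a)$ cannot be dropped. Let $G_0 = \mathbb Z \setminus \{0\}$, so that, for the cofinal divisor homomorphism $\mathcal B (G_0) \hookrightarrow \mathcal F (G_0)$, both $G_P^+ = G_0^+$ and $G_P^- = G_0^-$ are infinite, and for $n \ge 2$ consider
\[
B_n = (-n) \cdot n \cdot (-1)^n \cdot 1^n \in \mathcal B (G_0) \,.
\]
The only atoms dividing $B_n$ are $(-n) \cdot n$, \ $(-n) \cdot 1^n$, \ $n \cdot (-1)^n$ and $(-1) \cdot 1$, and hence $B_n$ has precisely the two factorizations
\[
\bigl( (-n) \cdot n \bigr) \bigl( (-1) \cdot 1 \bigr)^n \quad \text{and} \quad \bigl( (-n) \cdot 1^n \bigr) \bigl( n \cdot (-1)^n \bigr) \,,
\]
so that $\mathsf L (B_n) = \{2,\, n+1\}$. Consequently, $\rho (G_0) = \infty$, \ $\Delta (G_0) \supset \{ n-1 \mid n \ge 2 \}$ \ and \ $\mathsf c (G_0) = \infty$, in accordance with the equivalences above.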
In \cite{An-Ch-Sm94c}, it was first proved that (in the setting of Krull domains) $(a)$ is equivalent to the finiteness of the elasticity $\rho (H)$, and the problem was posed whether or not $\rho (H)$ is always rational; part (e) shows that this is indeed so. In \cite{B-C-R-S-S10}, it was recently shown that $(a)$ is equivalent to $(c)$ as well as to $(d)$ (also in the setting of Krull monoids). We will give a complete proof of all implications, not only because our setting is slightly more general---being valid for any divisor \emph{homomorphism} rather than divisor \emph{theory} (recall, as noted earlier, that Proposition \ref{tame-charact}.2 does not hold in this slightly more general setting, and so there is indeed sometimes a difference between a divisor theory and homomorphism)---but also because we need all the required tools regardless (in particular, for the monotone catenary degree in Section \ref{5}), and thus little could be saved by not doing so. Note that, if the equivalent conditions of Theorem \ref{main-theorem-I} hold, then \cite[Theorem 4.2]{Ga-Ge09b} implies that \[ \lim_{k \to \infty} \frac{|\mathcal V_k (H)|}{k} = \frac{1}{\min \Delta(H)} \Bigl( \rho (H) - \frac{1}{\rho (H)} \Bigr) \,. \] Under a certain additional assumption, the sets $\mathcal V_k (H)$ are even arithmetical progressions and not only AAPs (\cite[Theorem 3.1]{Fr-Ge08}; for more on the sets $\mathcal V_k (H)$, see \cite[Theorem 3.1.3]{Ge09a}). As mentioned in the introduction, there are characterizations of arithmetical properties in various algebraic settings. In most of them, the finiteness of the elasticity is equivalent to the finiteness of all $\rho_k (H)$ (though this does not hold in all atomic monoids). But in none of these settings is the finiteness of the elasticity equivalent to the finiteness of the catenary degree.
The reader may want to compare Proposition \ref{tame-charact} and Theorem \ref{main-theorem-I} with \cite[Corollary 3.7.2]{Ge-HK06a}, \cite[Theorem 4.5]{Ka10a} or \cite[Theorem 4.4]{Ge-Ha08a}. \smallskip The remainder of this section is devoted to the proof of Theorem \ref{main-theorem-I}. We start with the necessary preparations. \begin{lemma} \label{Lambert} Let $G_0 \subset \mathbb Z$ be a condensed subset. Then \[ |U^+| \le |\inf G_0| \qquad \text{for each atom} \qquad U \in \mathcal A (G_0) \,. \] In particular, if $G_0$ is finite, then $\mathsf{D}(G_0) \le \max G_0 + |\min G_0|$. \end{lemma} \begin{proof} This is due to Lambert (\cite{La87a}); for a proof in the present terminology, see \cite[Theorem 3.2]{B-C-R-S-S10}. \end{proof} \begin{lemma} \label{ex_of_atom} Let $G_0 \subset \mathbb Z$ be a condensed subset such that $G_0^+$ is infinite. For each $S \in \mathcal{F}(G_0^-)$, there exists some $U\in \mathcal{A}(G_0)$ with $S \mid U$. \end{lemma} \begin{proof} Let $d= \gcd (G_0^-)$. Then $[G_0^-]\subset -d \mathbb{N}$ and there exists some $g \in \mathbb{N}$ such that $-gd - d\mathbb{N}\subset [G_0^-]$. Since $G_0^+$ is infinite, we may choose $b \in G_0^+$ with $b > |\sigma(S)| + gd$, and let $\beta \in [1, d]$ be minimal such that $\beta b \in d \mathbb{N}$. By the definition of $g$, there exists some $S'\in \mathcal{F}(G_0^-)$ such that $\sigma(S') = - (\beta b -|\sigma(S)|)=-(\beta b+\sigma(S))$. Thus, $\sigma(b^{\beta}SS')=0$ and, by the minimality of $\beta$, it follows that $b^{\beta}SS'$ is an atom. \end{proof} The next lemma uses ideas from the proof of Theorem 3.1 in \cite{B-C-R-S-S10}. It will be used for the investigation of the catenary degree as well as for the monotone catenary degree (Proposition \ref{6.4}). \begin{lemma} \label{3.4} Let $G_0 \subset \mathbb Z$ be a condensed subset such that $G_0^-$ is finite and nonempty. Let $A \in \mathcal B (G_0)$ be nontrivial and $z, \overline z \in \mathsf Z (A)$ with $|z| \le |\overline z|$.
Then there exists a $U \in \mathcal A (G_0)$ with $U \t \overline z$ and a factorization $\widehat z \in \mathsf Z (A) \cap U \mathsf Z (G_0)$ such that $\mathsf d (z, \widehat z) \le \bigl(|\min G_0| + |G_0^-|^2 \bigr) \ |\min G_0|$. \end{lemma} \begin{proof} Let $\overline z = U_1 \cdot \ldots \cdot U_m$ and $z = V_1 \cdot \ldots \cdot V_l$ where $l, m \in \mathbb N$ and $U_1, \ldots, U_m, V_1, \ldots, V_l \in \mathcal A (G_0)$. We proceed in two steps. Note that we may assume $0\nmid A$, since otherwise the lemma is trivial, taking $U=0$ and $\widehat z=z$. \medskip 1. We assert that there is an $i \in [1, m]$ and a set $I \subset [1, l]$ such that \[ |I| \le |\min G_0 | + |G_0^-|^2 \ \quad \text{and} \quad U_i \Bigm| \prod_{\nu \in I} V_{\nu} \,. \] We assume $l > |G_0^-|$, since otherwise the claim is obvious. Since \[ \sum_{i=1}^m \max \Bigl\{ \frac{\mathsf v_g (U_i)}{\mathsf v_g (A)} \mid g \in G_0^- \Bigr\} \le \sum_{i=1}^m \sum_{g \in G_0^-} \frac{\mathsf v_g (U_i)}{\mathsf v_g (A)} = \sum_{g \in G_0^-} \Bigl( \frac{1}{\mathsf v_g (A)} \sum_{i=1}^m \mathsf v_g (U_i) \Bigr) = |G_0^-| \,, \] there exists an $i \in [1, m]$ such that \begin{equation}\label{toto} \frac{\mathsf v_g (U_i)}{\mathsf v_g (A)} \le \frac{|G_0^-|}{m} \quad \text{for all} \quad g \in G_0^- \,. \end{equation} For each $g \in G_0^-$, there is an $I_g \subset [1, l]$ with $|I_g| = |G_0^-|$ such that \[ \mathsf v_g \Bigl( \prod_{\nu \in I_g} V_{\nu} \Bigr) \ge \frac{|G_0^-| \mathsf v_g (A)}{l} \,. \] Hence, since $l \le m$, it follows by \eqref{toto} that \[ \mathsf v_g \Bigl( \prod_{\nu \in I_g} V_{\nu} \Bigr) \ge \frac{|G_0^-| \mathsf v_g (A)}{l} \ge \frac{m \mathsf v_g (U_i)}{\mathsf v_g (A)} \frac{\mathsf v_g (A)}{l} = \frac{m}{l} \, \mathsf v_g (U_i) \ge \mathsf v_g (U_i) \,. \] Since by Lemma \ref{Lambert} we have $|U_i^+| \le | \min G_0 |$, there is an $I_0 \subset [1, l]$ with $|I_0| \le |\min G_0|$ such that \[ \mathsf v_g (U_i) \le \mathsf v_g \Bigl( \prod_{\nu \in I_0} V_{\nu} \Bigr) \quad \text{for all} \quad g \in G_0^+ \,.
\] Then, for $I = I_0 \cup \bigcup_{g \in G_0^-} I_g$, we get $\mathsf v_g (U_i) \le \mathsf v_g \Bigl( \prod_{\nu \in I} V_{\nu} \Bigr)$ for each $g \in G_0$, i.e., $U_i \t \prod_{\nu \in I} V_{\nu}$. Noting that $|I| \le |\min G_0| + |G_0^-|^2$, the argument is complete. \smallskip 2. By part 1, we may suppose without restriction that $U_1 \t \prod_{\nu=1}^k V_{\nu}$ with $k \le \bigl( | \min G_0| + |G_0^-|^2 \bigr)$. We consider a factorization $V_1 \cdot \ldots \cdot V_k = W_1 W_2 \cdot \ldots \cdot W_n$, where $U_1 = W_1, W_2, \ldots , W_n \in \mathcal A (G_0)$, and by Lemma \ref{Lambert}, \[ \begin{aligned} n & \le |(W_1 \cdot \ldots \cdot W_n)^+| = |(V_1 \cdot \ldots \cdot V_k)^+| \\ & \le k \ |\min G_0| \le \bigl( |\min G_0| + |G_0^-|^2 \bigr) \ |\min G_0| \,. \end{aligned} \] Now we set $\widehat z = W_1 \cdot \ldots \cdot W_n V_{k+1} \cdot \ldots \cdot V_l$ and get \[ \mathsf d (z, \widehat z) \le \max \{k, n\} \le \bigl( |\min G_0| + |G_0^-|^2 \bigr) \ |\min G_0| \,. \qedhere \] \end{proof} \begin{lemma} \label{lem_rhok} Let $G_0\subset \mathbb{Z}$ be a condensed set such that $G_0^-$ is finite and nonempty. \begin{enumerate} \item There exists some $M \in \mathbb{N}$ such that $\rho_{k+1}(G_0)\le 1+kM$ for each $k \in \mathbb{N}_0$. More precisely, \begin{enumerate} \item if $G_0$ is infinite, then for each $k \in \mathbb{N}$, \[1\le \rho_{k+1} (G_0) - \rho_{k} (G_0) \le 2 \ |\min G_0|.\] \item if $G_0$ is finite, then for each $k \in \mathbb{N}$, \[ 1\le \rho_{k+1} (G_0) - \rho_{k} (G_0) \le \mathsf D (G_0)-1 \,. \] \end{enumerate} \item For each $k \in \mathbb{N}$, \[ -1 \le \lambda_{k}(G_0) - \lambda_{k+1}(G_0) < \bigl( |\min G_0| + |G_0^-|^2 \bigr) \ |\min G_0| \,. \] \end{enumerate} \end{lemma} \begin{proof} 1. We recall that $\rho_1(G_0)=1$. It thus suffices to establish the bounds on $\rho_{k+1} (G_0) - \rho_{k} (G_0) $. By Lemma \ref{Lambert}, we know $\rho_k(G_0)\leq k\cdot |\min G_0|<\infty$.
\noindent 1.(a) The left inequality is trivial and it remains to verify the right inequality. Let $m = |\min G_0|$. Let $l \in \mathbb{N}$, and let $A_1, \dots, A_{k+1}, U_{1}, \dots, U_{l} \in \mathcal{A}(G_0)$ be such that \[A_1 \cdot\ldots\cdot A_{k+1}= U_1\cdot \ldots \cdot U_{l} \, .\] We have to show that $l \le \rho_k(G_0) +2m$. By Lemma \ref{Lambert}, we know that $|A^+| \le m$ for each $A \in \mathcal{A}(G_0)$. Thus, we may assume that $(A_{k}A_{k+1})^+ \mid U_1 \cdot \ldots \cdot U_{2m}$. Then $(\prod_{j=2m+1}^{l}U_j)^+ \mid \prod_{i=1}^{k-1} A_i$. Let $S= (\prod_{j=2m+1}^{l}U_j)^-$. By Lemma \ref{ex_of_atom}, there exists some $A_{k}'\in \mathcal{A}(G_0)$ with $S \mid A_{k}'$. We consider $B=(\prod_{i=1}^{k-1} A_i)A_{k}'$, which is a product of $k$ atoms. We observe that $\prod_{j=2m+1}^{l}U_j\mid B$. Thus, $\max \mathsf{L}(B)\ge l - 2m$, establishing the claim. \smallskip \noindent 1.(b) This follows from \cite[Proposition 3.6]{Ge-Ka10a} (see also Lemma 4.3 in that paper and note that $\mathsf D (G_0) \ge 2$). \smallskip \noindent 2. The left inequality is trivial and it remains to verify the right inequality. Let $s = \lambda_{k+1} (G_0)$ and let $U_1, \ldots, U_s, A_1, \ldots, A_{k+1} \in \mathcal A (G_0)$ be such that \[ U_1 \cdot \ldots \cdot U_s = A_1 \cdot \ldots \cdot A_{k+1} \,. \] After renumbering if necessary, Lemma \ref{3.4} implies that $A_1 \t U_1 \cdot \ldots \cdot U_j$ and $U_1 \cdot \ldots \cdot U_j = A_1 W_2 \cdot \ldots \cdot W_i$ with $W_2, \ldots , W_i \in \mathcal A (G_0)$ and $i \le \bigl( |\min G_0| + |G_0^-|^2 \bigr) |\min G_0| =M_2$ (note that, in order to apply Lemma \ref{3.4}, we used that $s \le k+1$).
Then \[ W_2 \cdot \ldots \cdot W_i U_{j+1} \cdot \ldots \cdot U_s = A_2 \cdot \ldots \cdot A_{k+1}, \] and hence \[ \begin{aligned} \lambda_{k} (G_0) & \le \min \mathsf L (A_2 \cdot \ldots \cdot A_{k+1}) \le \min \mathsf L (U_{j+1} \cdot \ldots \cdot U_s) + \min \mathsf L (W_2 \cdot \ldots \cdot W_i) \\ & \le s - j + i-1 \le \lambda_{k+1} (G_0) + (M_2 - 1) \,. \qedhere \end{aligned} \] \end{proof} We continue with a lemma that is used when investigating the sets of distances and local tameness. To simplify the formulation, we introduce the following notation. For $a \in -\mathbb{N}$ and $b \in \mathbb{N}$, let $V_{a,b}$ denote the unique atom with support $\{a,b\}$, that is $V_{a,b}= a^{\alpha}b^{\beta}$ with $\alpha= \lcm(a,b)/|a|$ and $\beta= \lcm(a,b)/b$. \begin{lemma} \label{lem_gap} Let $G_0 \subset \mathbb{Z}$ and let $v \in \mathbb{N}$. Suppose there exist distinct $a,a_2 \in G_0^-$ and $b,b_1 \in G_0^+$ that satisfy $b_1 \ge b|a|$ and $|a_2| \ge (v b_1 +b)|a|$. For a given $z \in \mathsf{Z}((V_{a, b_1} V_{a_2,b})^v)$, let $z_0$ be the (unique) minimal divisor of $z$ such that $\mathsf{v}_{a_2}(\pi(z_0^{-1}z))=0$, and let $t(z)= \mathsf{v}_{b_1}(\pi(z_0))$. Then, \[|z| \in \left[\frac{b_1}{\lcm(a,b)} \ t(z) - D ,\frac{b_1}{\lcm(a,b)} \ t(z)+ D \right] \quad \text{where} \quad D = v (b+|a|) \gcd(a,b) \, .\] Moreover, if $t(z)=0$, then $z = V_{a,b_1}^v\cdot V_{a_2,b}^v$. \end{lemma} Since it is relevant in applications of this lemma, we point out that $D$ depends neither on $a_2$ nor on $b_1$. \begin{proof} To simplify notation without suppressing the information on the origin of certain quantities, we set $\alpha = \mathsf{v}_a(V_{a,b})$, $\alpha_1= \mathsf{v}_a(V_{a,b_1})$, and $\alpha_2 = \mathsf{v}_{a_2}(V_{a_2,b}) $. Likewise, we set $\beta = \mathsf{v}_b(V_{a,b}) $, $\beta_1= \mathsf{v}_{b_1}(V_{a,b_1})$, and $\beta_2 = \mathsf{v}_{b}(V_{a_2,b})$. 
From the explicit description or applying Lemma \ref{Lambert}, we get $\beta , \beta_{1}\in [1,|a|]$ and $\alpha, \alpha_2 \in [1, b]$. Let $z = U_1 \cdot \ldots \cdot U_m$, where $U_1, \ldots, U_m \in \mathcal A (G_0)$, and $k, l \in [1,m]$ with $k \le l$ be such that \begin{itemize} \item $a_2 \t U_{\nu}$ \ for each $\nu \in [1, k]$, \item $a_2 \nmid U_{\nu}$ and $b_1 \t U_{\nu}$ \ for each $\nu \in [k+1, l]$, and \item $a_2 \nmid U_{\nu}$ and $b_1 \nmid U_{\nu}$ \ for each $\nu \in [l+1, m]$; \end{itemize} in particular, $z_0 = U_{1}\cdot \ldots \cdot U_k \in \mathsf{Z}(G_0)$. Also note that $U_{\nu}=V_{a,b}$ for each $\nu \in [l +1, m]$. For $\nu \in [1, k]$, we have \[ U_{\nu} = a_2^{ \alpha_{\nu, 2}} a^{ \alpha_{\nu, 1} } b_1^{ \beta_{\nu,1} } b^{ \beta_{\nu, 2} } \,, \] where $ \alpha_{\nu,2} \in \mathbb N$ and $ \alpha_{\nu,1} , \beta_{\nu, 1} , \beta_{\nu, 2} \in \mathbb N_0$. By the assumption on $|a_2|$ and since $\beta, \beta_1 \in [1, |a|]$, we have $|a_2| \ge v \beta_1 b_1 + \beta b$. Thus, in view of $\mathsf v_{b_1}(\pi(z))=\beta_1 v$, it follows that $\beta_{\nu, 2} \ge \beta$. Hence $ \alpha_{\nu, 1} \le \alpha-1$, since otherwise $V_{a,b} \mid U_{\nu}$, which is impossible (as $a_2|U_\nu$). Let $\alpha_2' = \mathsf{v}_{a}( \pi(z_0))$ and $\beta_2' = \mathsf{v}_{b}( \pi(z_0))$. In view of $ \alpha_{\nu, 1} \le \alpha - 1$, $k \le v \alpha_2$ and $\alpha, \alpha_2 \in [1, b]$, we have $0 \le \alpha_2' \le v b^2$. We note that $\sigma( \pi(z_0)^-) = v \alpha_2 a_2 + \alpha_2'a$, and thus \[t(z) b_1 + \beta_2' b = v \alpha_2 |a_2| + \alpha_2'|a|,\] i.e., $\beta_2' = b^{-1}(v \alpha_2 |a_2| + \alpha_2'|a| - t(z)b_1)$. 
In particular, note that if $t(z)=0$, then, since $$\sigma(b^{\mathsf v_b(\pi(z))})=v\cdot \sigma(b^{\mathsf v_b(V_{a_2,b})})=-v\cdot \sigma(a_2^{\mathsf v_{a_2}(V_{a_2,b})})$$ implies $\mathsf{v}_b((V_{a,b_1}V_{a_2,b})^v) = b^{-1}(v \alpha_2 |a_2|)$, it follows that $\alpha_2' = 0$ and $z_0= V_{a_2,b}^v$; this establishes the ``moreover''-statement. Consequently, \begin{equation} \label{lem_gap_eq_1} b^{-1}( v \alpha_2 |a_2| - t(z) b_1 )\le \beta_2' \le b^{-1}(v \alpha_2 |a_2| + v b^2 |a| - t(z)b_1). \end{equation} For $\nu \in [k+1, l]$, we have \[ U_{\nu} = b_1^{\beta_{\nu,1}''} b^{\beta_{\nu,2}''} a^{\alpha_{\nu,1}''} \,, \] with $\beta_{\nu,1}'' \in \mathbb N$ and $\alpha_{\nu,1}'',\beta_{\nu,2}'' \in \mathbb N_0$. We have $\alpha_{\nu,1}''|a| \ge b_1$. Thus, by the assumption on $b_1$ and since $\alpha\in [1,b]$, we get $\alpha_{\nu,1}'' \ge \alpha$, and hence $\beta_{\nu, 2}'' \le \beta-1$ (as otherwise $U_\nu=V_{a,b}$ with $b_1|U_\nu$ but $b_1\nmid V_{a,b}$, a contradiction). Let $\beta_2'' = \mathsf{v}_{b}( \prod_{\nu= k+1}^l U_{\nu})$. We note that $l - k \le \mathsf v_{b_1} ((V_{a,b_1}V_{a_2,b})^v) - t(z)= v \beta_1 - t(z)\le v |a| - t(z) \le v|a|$. Thus, we obtain that \begin{equation} \label{lem_gap_eq_2} 0 \le \beta_2'' \le (l-k) (\beta-1) \le v|a| (\beta-1) \le v |a|^2\,. \end{equation} Let $\beta_2''' = \mathsf{v}_b(\prod_{\nu = l +1}^mU_{\nu})$. 
We have \[\beta_2'''= \mathsf{v}_{b}((V_{a,b_1}V_{a_2,b})^v)- \beta_2' -\beta_2'' = v \beta_2 - \beta_2' - \beta_2''.\] In combination with \eqref{lem_gap_eq_1} and \eqref{lem_gap_eq_2}, we get that \[v \beta_2 - b^{-1}\bigl(v \alpha_2 |a_2| + v b^2 |a| - t(z)b_1\bigr) - v |a|^2 \le \beta_2''' \le v \beta_2 - b^{-1}\bigl( v \alpha_2 |a_2| - t(z) b_1 \bigr).\] Thus, since $\beta_2 = b^{-1} \alpha_2 |a_2|$ (in view of $V_{a_2,b}=a_2^{\alpha_2}b^{\beta_2}$), it follows that \begin{equation}\label{pillow}\beta_2''' \in \frac{b_1}{b} t(z) + [ - vb|a| - v|a|^2 ,0 ].\end{equation} Since $U_{\nu}=V_{a,b}$ for each $\nu \in [l +1, m]$, it follows that $\beta_2''' = (m-l)\beta$. Since $k \in [0, vb]$ and $l - k \in [0,v|a|]$, we get that $m \in (m-l)+[0, v (b+|a|)]$. Combining with $\beta_2''' = (m-l)\beta$ and \eqref{pillow} then yields \[ m \in \left[ \frac{b_1}{b\beta} t(z) - \frac{ vb|a| + v|a|^2}{\beta} , \frac{b_1}{b\beta} t(z) + v (b+|a|) \right]\, ,\] and, since $\beta \le |a|$, we have $v (b+|a|) \le v (b+|a|)|a|/\beta$. Substituting the explicit value of $\beta$, the claim follows. \end{proof} The following proposition is a major portion of Theorem \ref{main-theorem-I}. \begin{proposition} \label{rel_prop} Let $G_0 \subset \mathbb{Z}$ be a condensed set such that $G_0^-$ is finite and nonempty. Then $\rho(G_0)$ is a rational number. \end{proposition} To prove this result, we need the concept of factorizations with respect to a (not necessarily minimal) generating set. This idea is also used in the recent paper \cite{C-K-D-H10}, where a generalized set of distances is studied for numerical monoids. Let $H$ be a monoid and $S \subset H_{{\text{\rm red}}}\setminus \{\boldsymbol{1}\}$ a subset. We call $\mathsf{Z}^S (H) = \mathcal{F} (S)$ the factorization monoid of $H$ with respect to $S$. 
The homomorphism $\pi_{H}^S=\pi^S \colon \mathsf{Z}^S (H) \to H_{{\text{\rm red}}}$ defined by $\pi^{S}(z)= \prod_{u \in S} u^{\mathsf{v}_{u}(z)}$ is called the factorization homomorphism of $H$ with respect to $S$. For $a\in H$, we set $\mathsf{Z}^{S}_H(a)=\mathsf{Z}^{S}(a)= (\pi^{S})^{-1}(aH^{\times})$; we call this the set of factorizations in $S$ of $a$. The set $\mathsf{L}^{S}(a) = \{|z| \mid z \in \mathsf{Z}^{S}(a) \}$ is called the set of lengths of $a$ with respect to $S$. We note that $\mathsf{Z}^{S}(a)\neq \emptyset$ for each $a \in H$ if and only if $S$ generates $H_{{\text{\rm red}}}$ (as a monoid). If $S$ generates $H_{{\text{\rm red}}}$, then $\mathcal A (H_{{\text{\rm red}}}) \subset S$ by \cite[Proposition 1.1.7]{Ge-HK06a}. If $S= \mathcal{A}(H_{{\text{\rm red}}})$, then $\mathsf Z^S (a) = \mathsf Z (a)$, and all other notions coincide with the usual ones. Suppose that $S \subset H_{{\text{\rm red}}}$ is a generating set. For $a \in H$, let $\rho^S(a)= \rho (\mathsf{L}^S(a))$ denote the elasticity of $a$ with respect to $S$, and $\rho^S(H)= \sup \{\rho^S (a) \mid a \in H\}$ the elasticity of $H$ with respect to $S$; note that $0 \in \mathsf{L}^{S}(a)$ if and only if $\mathsf{L}^{S}(a)=\{0\}$, i.e., $a \in H^{\times}$. We say that the elasticity of $H$ with respect to $S$ is accepted if there exists some $a \in H $ with $\rho^{S}(a)= \rho^{S}(H)$. The proof of the following result is a direct modification of the one for the (usual) elasticity of finitely generated monoids (\cite[Theorem 3.1.4]{Ge-HK06a}) and contains it as the special case $S= \mathcal{A}(H_{{\text{\rm red}}})$. \begin{lemma} \label{rel_lem_taurat} Let $H$ be a monoid and $S \subset H_{{\text{\rm red}}}\setminus \{\boldsymbol{1}\}$ a finite generating set of $H_{{\text{\rm red}}}$. Then $\rho^S(H)$ is finite and accepted; in particular, it is rational. \end{lemma} \begin{proof} By construction, $\mathsf{Z}^S(H) \times \mathsf{Z}^S(H)$ is a finitely generated free monoid.
Obviously, $Z=\{(x,y) \in \mathsf{Z}^S(H) \times \mathsf{Z}^S(H) \mid \pi^S(x)=\pi^S(y)\}$ is a saturated submonoid, thus finitely generated by \cite[Proposition 2.7.5]{Ge-HK06a}. Let $Z^{\bullet}= Z \setminus Z^{\times}$; clearly $|Z^{\times}|=1$ and, for each $(x,y)\in Z^{\bullet}$, we have that both $|x|\neq 0$ and $|y|\neq 0$. We note that $\rho^S(H) = \sup \{|x|/|y| \mid (x,y)\in Z^{\bullet}\}$. We assert that $\sup \{|x|/|y| \mid (x,y)\in Z^{\bullet}\}= \sup \{|x|/|y| \mid (x,y)\in \mathcal{A}(Z)\}$. Since $\mathcal{A}(Z)$ is finite, this implies the result. Let $s=(x_s,y_s)\in Z^{\bullet}$ and let $s= t_1\cdot\ldots\cdot t_l$ with $t_i=(x_i,y_i) \in \mathcal{A}(Z)$ be a factorization of $s$ in the monoid $Z$. We have, using the standard inequality for the mediant, \[\frac{|x_s|}{|y_s|}= \frac{\sum_{i=1}^{l}|x_i|}{\sum_{i=1}^{l}|y_i|}\le \max \left\{\frac{|x_i|}{|y_i|} \mid i \in [1,l] \right\},\] showing that $\sup \{|x|/|y| \mid (x,y)\in Z^{\bullet}\}\le \sup \{|x|/|y| \mid (x,y)\in \mathcal{A}(Z)\}$. The other inequality being trivial, the claim follows. \end{proof} For a condensed set $G_0 \subset \mathbb{Z}$ with $|G_0| \ge 2$, we define \[ \mathcal{B}(G_0)^+ =\{B^+\mid B \in \mathcal{B}(G_0)\} \ \text{ and} \ \mathcal{A}(G_0)^+ =\{A^+\mid A \in \mathcal{A}(G_0)\} \,. \] \begin{lemma} \label{rel_lembas} Let $G_0 \subset \mathbb{Z}$ be a condensed set with $|G_0|\ge 2$. \begin{enumerate} \item $\mathcal{B}(G_0)^+ \subset \mathcal{F}(G_0^+)$ is a submonoid. \smallskip \item $\mathcal{A}(G_0)^+$ is a generating set of $\mathcal{B}(G_0)^+$. \smallskip \item $|F| \le |\inf G_0^-|$ for each $F \in \mathcal{A}(G_0)^+$. \end{enumerate} \end{lemma} \begin{proof} The first two claims are immediate, and the last one is a direct consequence of Lemma \ref{Lambert}.
\end{proof} Clearly, $\mathcal{A}(G_0)^+$ contains $\mathcal{A}(\mathcal{B}(G_0)^+)$, the set of atoms of $\mathcal{B}(G_0)^+$, yet it is in general not equal to this set; by definition, we have that $F \in \mathcal{A}(G_0)^+$ if and only if there exists some $A \in \mathcal{A}(G_0)$ such that $F=A^+$, yet $F \in \mathcal{A}(\mathcal{B}(G_0)^+)$ if and only if, for each $B \in \mathcal{B}(G_0)$ with $F=B^+$, we have $B\in \mathcal{A}(G_0)$. Moreover, $\mathcal{B}(G_0)^+ $ is in general not a saturated submonoid of $\mathcal{F}(G_0^+)$. The following technical result is used to partition $\mathcal{A}(G_0)$ into finitely many classes (cf.~below). \begin{lemma} \label{rel_lemt} Let $G_0 \subset \mathbb{Z}$ be a condensed set such that $G_0^-$ is finite and nonempty. Let $F \in \mathcal{F}(G_0^+)$, $g \in \supp (F)$ with $g \ge |G_0^-| \, |\min G_0^-| \, \lcm(G_0^-)$, and $k \in \mathbb{N}$ with $g'=g+k\lcm (G_0^- ) \in G_0^+$. Then \ $F \in \mathcal{A}(G_0)^+$ \ if and only if \ $g'g^{-1}F \in \mathcal{A}(G_0)^+$. \end{lemma} \begin{proof} We set $T= g'g^{-1}F \in \mathcal{F}(G_0^+)$. Suppose $F \in \mathcal{A}(G_0)^+$. Let $R \in \mathcal{F}(G_0^-)$ such that $FR \in \mathcal{A}(G_0)$. Since $\sigma(F) \ge g \ge |G_0^-| \, |\min G_0^-| \, \lcm(G_0^-)$, there exists some $a \in G_0^-$ such that $\mathsf{v}_a(R)\ge \lcm (G_0^- )$. Let $R_1= Ra^{k\lcm (G_0^- )/|a|}$. Then $TR_1 \in \mathcal{B}(G_0)$. Assume to the contrary that $TR_1$ is not an atom, say $TR_1 = (T'R_1')(T''R_1'')$, where $g'\mid T'$, $T=T'T''$ and $R_1=R_1'R_1''$. Let $l'\in \mathbb{N}_0$ be maximal such that $a^{l'\lcm (G_0^- )/|a|}\mid R_1'$ and let $l=\min \{l',k\}$. We note that $a^{-l\lcm (G_0^- )/|a|}R_1'\mid R$. 
Moreover, since \begin{eqnarray}\nonumber |\sigma(a^{-l\lcm (G_0^- )/|a|}R_1')|&\ge& g' - l\lcm (G_0^- ) \ge (k-l)\lcm (G_0^- ) + |G_0^-| \, |\min G_0^-| \, \lcm(G_0^-)\\\nonumber &\geq& (k-l)\cdot \lcm(G_0^-)+\sum_{x\in G_0^-}|x|\left(\frac{\lcm(G_0^-)}{|x|}-1\right),\end{eqnarray} there exists a subsequence $R_2'\mid a^{-l\lcm (G_0^- )/|a|}R_1'$ such that $\sigma(R_2')= -(k-l)\lcm (G_0^- )$. We set $R_0 = R_2'^{-1} a^{-l\lcm (G_0^- )/|a|}R_1'$. Then $\sigma(R_0) = \sigma(R_1') + k\lcm (G_0^- )$. Thus $\sigma(gg'^{-1}T'R_0)=0$, yet $gg'^{-1}T'R_0\mid FR$, contradicting that $FR$ is an atom. Suppose $T \in \mathcal{A}(G_0)^+$. Let $R'\in \mathcal{F}(G_0^-)$ be such that $TR'\in \mathcal{A}(G_0)$. Since $$-\sigma(R')=\sigma(T)\ge g'\ge k\cdot \lcm (G_0^- )+ |G_0^-| \, |\min G_0^-| \, \lcm(G_0^-)\geq k\cdot \lcm(G_0^-)+\sum_{x\in G_0^-}|x|\left(\frac{\lcm(G_0^-)}{|x|}-1\right),$$ there exists a subsequence $R_1'\mid R'$ with $\sigma(R_1') = -k\cdot \lcm (G_0^- )$. Let $R= R_1'^{-1}R'$. Then $FR$ is a zero-sum sequence. Assume to the contrary that $FR$ is not an atom, say $FR= (F'R_2')(F''R_2'')$, where $g \mid F'$, $F=F'F''$ and $R=R_2'R_2''$. Then $g'g^{-1}F'R_2'R_1'\mid TR'$ and it is a zero-sum sequence, contradicting that $TR'$ is an atom. \end{proof} Let $G_0 \subset \mathbb{Z} \setminus \{0\}$ be a condensed set such that $G_0^-$ is finite and nonempty. In view of Lemma \ref{rel_lemt}, we introduce the following relation on $G_0^+$. For $g,h \in G_0^+$, we say that $g$ is equivalent to $h$ if $g= h$ or if $g,h \ge |G_0^-| \, |\min G_0^-| \, \lcm(G_0^-)$ and $g \equiv h \mod \lcm (G_0^- )$. This relation is an equivalence relation and it partitions $G_0^+$ into finitely many---namely, less than $|G_0^-| \, |\min G_0^-| \, \lcm(G_0^-) + \lcm (G_0^- )$---equivalence classes; we denote the equivalence class of $g$ by $\kappa(g)$ and also use $\kappa$ to denote the extension of this map to $\mathcal{F}(G_0^+)$.
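To illustrate the relation, suppose for instance that $G_0^- = \{-2,-3\}$. Then $\lcm(G_0^-)=6$, $|\min G_0^-|=3$, and hence $|G_0^-| \, |\min G_0^-| \, \lcm(G_0^-) = 36$. Thus two elements $g,h \in G_0^+$ with $g,h \ge 36$ are equivalent if and only if $g \equiv h \mod 6$, whereas every element of $G_0^+ \cap [1,35]$ forms a class of its own; altogether there are at most \[ 35 + 6 = 41 < 36 + 6 \] equivalence classes, in accordance with the bound given above.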
We note that $\kappa(\mathcal{A}(G_0)^+)$ is a finite set, since it consists of sequences over the finite set $\kappa(G_0^+)$ and the length of each sequence is at most $|\min G_0^-|$ by Lemma \ref{rel_lembas}. Moreover, it is a generating set of the monoid $\kappa(\mathcal{B}(G_0)^+)$. In order to study factorizations, we extend $\kappa$ to $\mathsf{Z}(G_0)$ via \[ \kappa(A_1 \cdot \ldots \cdot A_l)= \kappa(A_1^+)\cdot \ldots \cdot \kappa(A_l^+). \] This is an element of $\mathcal{F}(\kappa(\mathcal{A}(G_0)^+))$, i.e., $\mathsf{Z}^{\kappa(\mathcal{A}(G_0)^+)}(\kappa(\mathcal{B}(G_0)^+))$; for brevity, we denote this factorization monoid by $\mathsf{Z}^{\kappa}$. Likewise, for $F \in \kappa (\mathcal{B}(G_0)^+)$, we denote $\mathsf{Z}^{\kappa(\mathcal{A}(G_0)^+)}(F)$ by $\mathsf{Z}^{\kappa}(F)$; $\pi^{\kappa(\mathcal{A}(G_0)^+)}$ by $\pi^{\kappa}$; and $\rho^{\kappa(\mathcal{A}(G_0)^+)}$ by $\rho^{\kappa}$. The homomorphism $\kappa \colon \mathsf{Z}(G_0)\to \mathsf{Z}^{\kappa}$ is surjective. We note that, for $B \in \mathcal{B}(G_0)$, we have that $\kappa(\mathsf{Z}(B)) \subset (\pi^{\kappa})^{-1}(\kappa(B^+))$, and, in general, this inclusion is proper. However, we have, for each $F \in \mathcal{B}(G_0)^+$, by Lemma \ref{rel_lemt}, \begin{equation} \label{rel_eq_1} (\pi^{\kappa})^{-1}(\kappa(F))\;= \bigcup_{B \in \mathcal{B}(G_0),\, B^+=F} \kappa(\mathsf{Z}(B)) , \end{equation} whenever $G_0 \subset \mathbb{Z} \setminus \{0\}$ is condensed with $G_0^-$ finite and nonempty. \begin{lemma} \label{rel_lem_eq} Let $G_0 \subset \mathbb{Z}\setminus \{0\}$ be a condensed set such that $G_0^-$ is finite and nonempty. \begin{enumerate} \item For each $B \in \mathcal{B}(G_0)$, we have $\rho(B) \le \rho^{\kappa}(\kappa(B^+))$. In particular, $\rho(G_0)\le \rho^{\kappa}(\kappa(\mathcal{B}(G_0)^+))$. \item If $G_0$ is infinite, then $\rho(G_0) = \rho^{\kappa}(\kappa(\mathcal{B}(G_0)^+))$. \end{enumerate} \end{lemma} \begin{proof} 1.
Let $B \in \mathcal{B}(G_0) \setminus \{1\}$, $x,y\in \mathsf{Z}(B)$ with $|x|=\max \mathsf{L}(B)$ and $|y|= \min \mathsf{L}(B)$. Since $\kappa(x),\kappa(y)\in \mathsf{Z}^{\kappa}(\kappa(B^+))$, we have that $\rho(B)=|x|/|y|= |\kappa(x)|/|\kappa(y)|\le \rho^{\kappa}(\kappa(B^+))$. The additional claim is clear. 2. By part 1, it remains to show that $\rho(G_0)\ge \rho^{\kappa}(\kappa(\mathcal{B}(G_0)^+))$. By Lemma \ref{rel_lem_taurat} and since $\kappa(\mathcal{A}(G_0)^+)$ is finite, we know that $\rho^{\kappa}(\kappa(\mathcal{B}(G_0)^+))$ is accepted. Let $B_{\kappa}\in \kappa(\mathcal{B}(G_0)^+)$ be such that $\rho^{\kappa}(B_{\kappa})= \rho^{\kappa}(\kappa(\mathcal{B}(G_0)^+))$, and let $x_{\kappa}, y_{\kappa}\in \mathsf{Z}^{\kappa}(B_{\kappa})$ be such that $|x_{\kappa}|/|y_{\kappa}| = \rho^{\kappa}(B_{\kappa})$. By \eqref{rel_eq_1}, we know that there exist $B_x,B_y \in \mathcal{B}(G_0)$ with $B_x^+ = B_y^+ $, $x \in \mathsf{Z}(B_x)$ with $\kappa(x)=x_{\kappa}$, and $y \in \mathsf{Z}(B_y)$ with $\kappa(y)=y_{\kappa}$; in particular, we have $\kappa(B_x^+)= \kappa(B_y^+)=B_{\kappa}$. Let $n \in \mathbb{N}$. Since $G_0^+$ is infinite, Lemma \ref{ex_of_atom} yields some $U_n \in \mathcal{A}(G_0)$ with $(B_x^n)^- \mid U_n$ and $U_n^-=(B_x^n)^-$. We set $D_n= B_y^nU_n$ and note that, since $(B_x^n)^+ = (B_y^n)^+$ and $(B_x^n)^-|U_n^-$, the sequence $B_x^n$ is a proper subsequence of $D_n$. Thus, \[\min \mathsf{L}(D_n)\le |y^n|+1 = n|y_{\kappa}|+1 \quad\quad \text{and} \quad\quad \max \mathsf{L}(D_n)\ge |x^n|+1 = n|x_{\kappa}|+1.\] So we get \[\rho(D_n)\ge \frac{n|x_{\kappa}|+1}{n|y_{\kappa}|+1}. \] Thus, for each $n \in \mathbb{N}$, \[\rho(G_0)\ge \frac{n|x_{\kappa}|+1}{n|y_{\kappa}|+1},\] and letting $n\rightarrow \infty$, we have \[ \rho(G_0) \ge \frac{|x_{\kappa}|}{|y_{\kappa}|}= \rho^{\kappa}(\kappa(\mathcal{B}(G_0)^+)) \,.
\qedhere \] \end{proof} \begin{proof}[{\bf Proof of Proposition \ref{rel_prop}}] Since $\rho(G_0)= \rho(G_0 \setminus \{0\})$, we may assume that $0 \notin G_0$. If $G_0$ is finite, then $\mathcal{B}(G_0)$ is finitely generated \cite[Theorem 3.4.2.1]{Ge-HK06a}, and thus the elasticity is rational by Lemma \ref{rel_lem_taurat} (applied with $S= \mathcal{A}(H_{{\text{\rm red}}})$). Suppose $G_0$ is infinite. By Lemma \ref{rel_lem_eq}, we have that $\rho(G_0)= \rho^{\kappa}(\kappa(\mathcal{B}(G_0)^+))$, and by Lemma \ref{rel_lem_taurat}, we know that $\rho^{\kappa}(\kappa(\mathcal{B}(G_0)^+))$ is rational. \end{proof} \bigskip \begin{proof}[{\bf Proof of Theorem \ref{main-theorem-I}}]~ \smallskip (a) \,$\Rightarrow$\, (b) \ Without restriction, we may suppose that $G_P^-$ is finite. Let $u \in \mathcal A (H_{{\text{\rm red}}})$. We have to show that $\mathsf t (H, u) < \infty$. If $u$ is prime, then $\mathsf t (H, u) = 0$. Suppose that $u$ is not prime. Let $a \in H$ and $a'=aH^{\times}$ be such that $u \mid a'$. Let $z = v_1 \cdot \ldots \cdot v_n \in \mathsf Z (a)$. There is a minimal subset $\Omega \subset [1, n]$, say $\Omega = [1,k]$, such that $u \t v_1 \cdot \ldots \cdot v_k$ and $k \le |\varphi_{{\text{\rm red}}}(u)|$. We consider any factorization of $v_1 \cdot \ldots \cdot v_k$ containing $u$, say $v_1 \cdot \ldots \cdot v_k = u_1 \cdot \ldots \cdot u_l$, where $u = u_1, \ldots , u_l \in \mathcal A (H_{{\text{\rm red}}})$. For $i \in [1,k]$ and $j \in [1, l]$, we set $V_i = \boldsymbol \beta (v_i)$ and $U_j = \boldsymbol \beta (u_j)$. Then $U_1, \ldots , U_l, V_1 , \ldots , V_k \in \mathcal A (G_P)$. Since $u$ is not a prime and $\Omega$ is minimal, it follows that $0 \nmid V_1 \cdot \ldots \cdot V_k$. Hence, for every $j \in [1,l]$, $U_j$ contains an element from $G_P^+$, and Lemma \ref{Lambert} implies that \[ l \le |(U_1 \cdot \ldots \cdot U_l)^+| = |(V_1 \cdot \ldots \cdot V_k)^+| \le k \ |\min G_P^-| \le |\varphi_{{\text{\rm red}}}(u)|\ |\min G_P^-| \,. 
\] Setting $z' = u_1 \cdot \ldots \cdot u_l v_{k+1} \cdot \ldots \cdot v_n$, we infer that $\mathsf d (z, z') \le \max \{k, l \} \le |\varphi_{{\text{\rm red}}}(u)| \ |\min G_P^-|$, and hence $\mathsf t (H, u) \le |\varphi_{{\text{\rm red}}}(u)| \ |\min G_P^-|$. \smallskip (a) \,$\Rightarrow$\, (c) \ Without restriction, we may suppose that $G_P^-$ is finite. By Lemma \ref{3.3}, it suffices to show that $\mathsf c (G_P) < \infty$. We set $M =\bigl( |\min G_P| + |G_P^-|^2 \bigr) \ |\min G_P|$, and assert that $\mathsf c (A) \le M$ for all $A \in \mathcal B (G_P)$. To do so, we proceed by induction on $\max \mathsf L (A)$. If $A \in \mathcal B (G_P)$ with $\max \mathsf L (A) \le M$, then $\mathsf c (A) \le \max \mathsf L (A) \le M$. Let $A \in \mathcal B (G_P)$, let $z, \overline z \in \mathsf Z (A)$ with $|z| \le |\overline z|$, and suppose that $\mathsf c (B) \le M$ for all $B \in \mathcal B (G_P)$ with $\max \mathsf L (B) < \max \mathsf L (A)$. By Lemma \ref{3.4}, there is a $U \in \mathcal A (G_P)$ and a factorization $\widehat z \in \mathsf Z (A) \cap U \mathsf Z (G_P)$ such that $U \t \overline z$ and $\mathsf d (z, \widehat z) \le M$, say $\widehat z = U \widehat y$ and $\overline z = U \overline y$ with $\widehat y, \overline y \in \mathsf Z (B)$ and $B = U^{-1}A$. Since $\max \mathsf L (B) < \max \mathsf L (A)$, there is an $M$-chain $\widehat y = y_0, \ldots, y_k = \overline y$ of factorizations of $B$, and hence $z, \widehat z=Uy_0, Uy_1, \ldots, Uy_k=U\overline y = \overline z$ is an $M$-chain of factorizations concatenating $z$ and $\overline z$. \smallskip (a) \,$\Rightarrow$\, (e) Without restriction, we may suppose that $G_P^-$ is finite. The claim follows by Proposition \ref{rel_prop} and Lemma \ref{3.3}. \smallskip (c) \,$\Rightarrow$\, (d) \ and (e) \,$\Rightarrow$\, (f) \ hold for all atomic monoids (\cite[Proposition 1.4.2 and Theorem 1.6.3]{Ge-HK06a}). 
\smallskip (b) \,$\Rightarrow$\, (a), (d) \,$\Rightarrow$\, (a), and (f) \,$\Rightarrow$\, (a) \ Assume to the contrary that $G_P^+$ and $G_P^-$ are both infinite. We show that $\mathcal B (G_P)$ is not locally tame, which implies that $H$ is not locally tame (\cite[Theorem 3.4.10.6]{Ge-HK06a}). Along the way, we show that $\rho_2 (G_P) = \infty$ and that $\Delta (G_P)$ is infinite, which by Lemma \ref{3.3} implies the corresponding statements for $H$. We set $a = \max G_P^-$ and $b = \min G_P^+$. Using the notation of Lemma \ref{lem_gap}, let $U = V_{a,b}= a^{\alpha} b^{\beta} \in \mathcal A (G_P)$. We pick an arbitrary $N \in \mathbb N_{\ge 2}$ and show that $\mathsf t (G_P, U) \ge N$, which implies the assertion. We intend to apply Lemma \ref{lem_gap} with $v=1$. Thus, let $D = |a|(b+|a|)\gcd(a,b) $, let $b_1 \in G_P^+$ be such that \[\frac{b_1}{\lcm(a,b)} \ge N + D, \] and let $a_2 \in G_P^-$ be such that $|a_2| \ge (b_1+b)|a|$. Let $\alpha_1,\alpha_2,\beta_1,\beta_2 \in \mathbb N$ be such that $V_{a,b_1} = a^{\alpha_1} b_1^{\beta_1}$ and $V_{a_2,b} = a_2^{\alpha_2} b^{\beta_2}$ are elements of $\mathcal{A}(G_P)$. We note that all conditions of Lemma \ref{lem_gap} with $v=1$ are fulfilled. Since $\alpha \le b \le \alpha_1$ and $\beta \le |a| \le \beta_2$, we have $U \t V_{a,b_1} V_{a_2,b}$, and therefore $ \mathsf{Z}(V_{a,b_1}V_{a_2,b})\cap U \mathsf{Z}(G_P) \neq \emptyset$. Let $z \in \mathsf{Z}(V_{a,b_1}V_{a_2,b})\setminus \{V_{a,b_1} \cdot V_{a_2,b} \}$, which exists in view of $U|V_{a,b_1}V_{a_2,b}$. By Lemma \ref{lem_gap}, we get that $t(z)\neq 0$, and thus that \[|z|\ge \frac{b_1}{\lcm(a,b)} - D \ge N .\] This shows that $\max \Delta \bigl( \mathsf L (V_{a,b_1}V_{a_2,b}) \bigr) \ge N-2$, $\mathsf t(G_P,U)\geq N$ and \[ \rho_2 (G_P) \ge \max \mathsf L (V_{a,b_1}V_{a_2,b}) \ge N \,. \] \smallskip (a) \,$\Rightarrow$\, (g) \ This follows from Lemma \ref{lem_rhok}.
\smallskip (g) \,$\Rightarrow$\, (f) \ We have $\rho_2 (H) \le M + \rho_1 (H) = M+1$, where $M$ is as given by $(g)$. \smallskip (a) \,$\Rightarrow$\, (h) \ If $(a)$ holds, then $(d)$ and $(g)$ hold. Thus all assumptions of \cite[Theorem 4.2]{Ga-Ge09b} are fulfilled, and $(h)$ follows. \smallskip (h) \,$\Rightarrow$\, (f) \ We have $\rho_2 (H) = \sup \mathcal V_2 (H) < \infty$. \end{proof} \section{Arithmetical Properties stronger than the Finiteness of $G_P^+$ or $G_P^-$} \label{5} Let $H$ be a Krull monoid and $G_P \subset G$ as always (see Theorem \ref{main-theorem-II} below). In this section, we discuss arithmetical properties which are finite if $G_P$ is finite or $\min \{ |G_P^+|, |G_P^-| \} = 1$, and whose finiteness implies that $G_P^+$ or $G_P^-$ is finite. However, it will turn out that none of the implications can be reversed (with one possible exception for $(c)\Rightarrow (b4)$, which remains open), and that the finiteness of these properties cannot be characterized by the size of $G_P^+$ and $G_P^-$ but also depends on the structure of these sets. We start with some definitions and then formulate the main result. \begin{definition} Let $H$ be an atomic monoid and $\pi \colon \mathsf Z(H) \to H_{{\text{\rm red}}}$ the factorization homomorphism. \smallskip \begin{enumerate} \item For $z \in \mathsf Z(H)$, we denote by \ $\delta (z)$ \ the smallest $N \in \mathbb N_0$ with the following property: if $k \in \mathbb N$ is such that $k$ and $|z|$ are adjacent lengths of $\mathsf L \bigl( \pi(z) \bigr)$, then \[ \mathsf{d}(z, \mathsf{Z}_k(\pi(z)) )\le N \,. \] Globally, we define \[ \delta (H) = \sup \{\, \delta (z) \mid z \in \mathsf Z (H) \} \in \mathbb N_0 \cup \{\infty\} \,, \] and we call \ $\delta (H)$ \ the \ {\it successive distance} \ of $H$.
\smallskip \item We say that the \ {\it Structure Theorem for Sets of Lengths} holds (for the monoid $H$) \ if $H$ is atomic and there exist some $M \in \mathbb N_0$ and a finite, nonempty set $\Delta^* \subset \mathbb N$ such that, for every $a \in H$, the set of lengths $\mathsf L (a)$ is an {\rm AAMP} with some difference $d \in \Delta^*$ and bound $M$. In that case, we say more precisely that the Structure Theorem for Sets of Lengths holds with set $\Delta^*$ and bound $M$. \end{enumerate} \end{definition} \medskip \begin{theorem} \label{main-theorem-II} Let $H$ be a Krull monoid and $\varphi \colon H\to \mathcal{F}(P)$ a cofinal divisor homomorphism into a free monoid such that the class group $G = \mathcal{C}(\varphi)$ is an infinite cyclic group that we identify with $\mathbb Z$. We denote by $G_P \subset G$ the set of classes containing prime divisors and consider the following conditions{\rm \,:} \smallskip \begin{enumerate} \item[(a)] $G_P$ \ is finite or \ $\min \{ |G_P^+|, |G_P^-| \} = 1$. \medskip \item[(b1)] The Structure Theorem for Sets of Lengths holds for $H$ with set $\Delta(G_P)$. \item[(b2)] The successive distance $\delta (H)$ is finite. \item[(b3)] The monotone catenary degree $\mathsf c_{{\text{\rm mon}}} (H)$ is finite. \item[(b4)] There is an $M \in \mathbb N$ such that, for all $a \in H$ and for each two adjacent lengths \ $k, \, l \in \mathsf L (a) \cap [\min \mathsf L (a) + M, \, \max \mathsf L (a) - M]$, \ we have $\mathsf d \bigl( \mathsf Z_k (a), \mathsf Z_l (a) \bigr) \le M$. \medskip \item[(c)] $G_P^+$ \ or \ $G_P^-$ is finite. \end{enumerate} Then we have \begin{enumerate} \item Condition $(a)$ implies each of the conditions $(b1)$ to $(b4)$. \item Each of the conditions $(b1)$ to $(b4)$ implies $(c)$. \item $(b2) \Rightarrow (b3) \Rightarrow (b4)$. \end{enumerate} \end{theorem} \medskip We briefly discuss the newly introduced arithmetical properties and point out the trivial implications in the above result. 
The successive distance of $H$ was introduced by Foroutan in \cite{Fo06a} in order to study the monotone catenary degree. For Krull monoids with finite class group, an explicit upper bound for the successive distance was recently given in \cite[Theorem 6.5]{Fr-Sc10a}. Note that, by definition, $\delta (H) < \infty$ implies that $\Delta (H)$ is finite. The significance of the Structure Theorem for Sets of Lengths will be discussed at the beginning of Section \ref{6}. Note that, if it holds for a monoid $H$, then $H$ is a \text{\rm BF}-monoid with finite set of distances $\Delta (H)$. Moreover, if $G_P = \mathbb Z$, then the Structure Theorem badly fails: indeed, then every finite subset $L \subset \mathbb N_{\ge 2}$ occurs as a set of lengths by Kainrath's Theorem \cite[Theorem 7.4.1]{Ge-HK06a}; for recent progress in this direction see \cite{Ch-Sc10a}. The implications $(b2) \Rightarrow (b4)$ and $(b3) \Rightarrow (b4)$ follow from the definitions. A condition implying $(b1)$ as well as $(b4)$ is given in Proposition \ref{4.2}. The bound $M$ in $(b4)$ reflects the fact that, in many settings, factorizations $z$ of an element $a \in H$ show more unusual phenomena if their length $|z|$ is close either to $\max \mathsf L (a)$ or to $\min \mathsf L (a)$ (the reader may want to consult \cite[Theorem 4.9.2]{Ge-HK06a}, \cite[Theorem 3.1]{Fo-Ha06b}, \cite[Theorem 3.1]{Fo-Ha06a} and the associated examples showing the relevance of the bound $M$). In Sections \ref{6} and \ref{7}, we obtain results showing that, even under the more restrictive assumption that $\varphi$ is a divisor theory, the Conditions $(b1)$ to $(b4)$ do not imply $(a)$ (Proposition \ref{STSL_prop1}), and $(c)$ does not imply $(b1)$ to $(b3)$ (Theorem \ref{STSL_thm}, Proposition \ref{STSL_prop1}, Proposition \ref{STSL_prop2} and Proposition \ref{7.1}). Proposition \ref{STSL_prop2} shows that $(b3)$ does not imply $(b2)$. 
Moreover, each of $(b1)$, $(b2)$ and $(b3)$ may hold, and may also fail, even if $\min \{ |G_P^+|, |G_P^-| \} = 2$. Most of the observed phenomena (around the non-reversibility of implications) have not been pointed out before in any $v$-noetherian monoid, and in particular not in a Krull monoid. Finally, by Theorem \ref{main-theorem-II}, a Krull monoid $H$ satisfies strong arithmetical properties both when $G_P$ is finite and when $\min \{ |G_P^+|, |G_P^-| \} = 1$. Note that an arithmetical difference between these two cases was pointed out in Proposition \ref{tame-charact}. The remainder of this section is devoted to the proof of Theorem \ref{main-theorem-II}, which heavily uses Theorem \ref{main-theorem-I}. We start with the necessary preparations. To show that $(a)$ implies each of the Conditions $(b1)$ to $(b4)$, we will construct transfer homomorphisms to finitely generated monoids. \begin{lemma} \label{transfer-to-finite} Let $G_0 \subset \mathbb{Z}$ be a condensed set with $\min \{ |G_0^+|,\,|G_0^-|\}=1$, say $G_0^- = \{-n\}$. The map \[ \varphi \colon \begin{cases} \mathcal{B}(G_0)& \to \mathcal{F}(G_0 \setminus \{-n\})\\ B & \mapsto (-n)^{-\mathsf{v}_{-n}(B)}B \end{cases} \] is a cofinal divisor homomorphism. Its class group $\mathcal{C}(\varphi)$ is isomorphic to a subgroup of $\mathbb{Z}/n \mathbb{Z}$, and the set of classes containing prime divisors corresponds to $\{b+ n\mathbb{Z}\mid b \in G_0 \setminus \{-n\}\}$. In particular, the class group of the Krull monoid $\mathcal{B}(G_0)$ is a finite cyclic group. \end{lemma} \begin{proof} Clearly, $\varphi$ is a cofinal monoid homomorphism. In order to show that $\varphi$ is a divisor homomorphism, let $A,B \in \mathcal{B}(G_0)$ be such that $\varphi(A)\mid \varphi(B)$. We have to verify that $A \mid B$, and for that it suffices to check that $\mathsf{v}_{-n}(A)\le \mathsf{v}_{-n}(B)$. For each $C \in \mathcal{B}(G_0)$, we have $\mathsf{v}_{-n}(C)= \sigma(C^+)/n$ and $\sigma(C^+)=\sigma(\varphi(C))$.
Since $\varphi(A)\mid \varphi(B)$, we have $\sigma(\varphi(A))\le \sigma(\varphi(B))$, and thus $\mathsf{v}_{-n}(A)\le \mathsf{v}_{-n}(B)$ follows. Now, we show that, for $F_1, F_2 \in \mathcal{F}(G_0 \setminus \{-n\})$, we have $F_1 \in F_2 \mathsf{q}(\varphi(\mathcal{B}(G_0)))$ if and only if $\sigma(F_1)\equiv \sigma(F_2) \mod n $. This establishes the results regarding $\mathcal{C}(\varphi)$ and the set of classes containing prime divisors. First, suppose that \ $\sigma(F_1) \equiv \sigma(F_2) \mod n$. We note that $F_iF_j^{n-1}(-n)^{(\sigma(F_i) + (n-1)\sigma(F_j))/n}\in \mathcal{B}(G_0)$, for $i,j \in \{1,2\}$. Thus, $F_j^n$ and $F_iF_j^{n-1}$ are elements of $ \varphi(\mathcal{B}(G_0))$ for $i,j \in \{1,2\}$. Since $F_1= F_2 (F_1F_2^{n-1})(F_2^{-n})$, the claim follows. Since \ $\sigma(\varphi(C))\equiv 0 \mod n$ \ for each $C \in \mathcal{B}(G_0)$, the converse claim follows. By \cite[Theorem 2.4.7]{Ge-HK06a}, the class group of $\mathcal{B}(G_0)$ is an epimorphic image of a subgroup of $\mathcal{C}(\varphi)$, and thus it is a finite cyclic group. \end{proof} The following example shows that $\mathcal{C}(\varphi)$ can be a proper subgroup of $\mathbb{Z}/n \mathbb{Z}$ and that $\mathcal{C}(\varphi)$ can be distinct from the class group of $\mathcal{B}(G_0)$. However, if $[G_0]= \mathbb{Z}$, then $\mathcal{C}(\varphi)=\mathbb{Z}/n\mathbb{Z}$; and, applying \cite[Theorem 3.1]{Sc09f}, there is a simple and explicit method to determine the class group of $\mathcal{B}(G_0)$ from $\mathcal{C}(\varphi)$ as well as the subset of classes containing prime divisors (note that $\mathcal{C}(\varphi)$ is a torsion group). \begin{example} Let $d_1,d_2 \in \mathbb{N}_{\ge 2}$, $n=d_1d_2$ and $G_0= \{-n, d_1\}$. Then $G_0$ fulfils all assumptions of Lemma \ref{transfer-to-finite}, and with $\varphi$ as in Lemma \ref{transfer-to-finite}, we get that $\mathcal{C}(\varphi)= \langle d_1 + n \mathbb{Z}\rangle \subsetneq \mathbb{Z}/n \mathbb{Z}$. 
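As a concrete instance, take $d_1 = 2$ and $d_2 = 3$. Then $n = 6$ and $G_0 = \{-6, 2\}$, and \[ \mathcal{C}(\varphi) = \langle 2 + 6\mathbb{Z} \rangle = \{0 + 6\mathbb{Z}, \, 2 + 6\mathbb{Z}, \, 4 + 6\mathbb{Z}\} \cong \mathbb{Z}/3\mathbb{Z} \,. \]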
However, $\mathcal{B}(G_0)$ is factorial, and thus its class group is trivial. \end{example} \begin{proposition} \label{nice_prop} Let $H$ be a Krull monoid and $\varphi \colon H\to \mathcal{F}(P)$ a cofinal divisor homomorphism into a free monoid such that the class group $G= \mathcal{C}(\varphi)$ is an infinite cyclic group that we identify with $\mathbb Z$. Let $G_P \subset G$ denote the set of classes containing prime divisors. Suppose that $G_P$ is finite or that $\min \{|G_P^+|, |G_P^-|\}=1$. Then there exists a transfer homomorphism $\theta \colon H \to H_0$ into a finitely generated monoid $H_0$ such that $\mathsf c(H, \theta)\le 2$. Moreover, the following statements hold. \begin{enumerate} \item $\mathcal{L}(H)= \mathcal{L}(H_0)$, in particular, the Structure Theorem for Sets of Lengths holds for $H$ with $\Delta (H)=\Delta(H_0)$ and some bound $M$, and $\rho(H)= \rho(H_0)$ is finite and accepted. \item $\delta(H)= \delta(H_0) < \infty$. \item $\mathsf c_{{\text{\rm mon}}} (H) \le \max \{\mathsf c_{{\text{\rm mon}}} (H_0),2\} < \infty$. \end{enumerate} \end{proposition} \begin{proof} First we show the existence of the required transfer homomorphism. For this, we recall that a monoid of zero-sum sequences over a finite set is finitely generated (\cite[Theorem 3.4.2]{Ge-HK06a}). If $G_P$ is finite, then $\boldsymbol \beta \colon H \to \mathcal B (G_P)$ has the desired properties by Lemma \ref{3.3}. Now suppose that $\min \{|G_P^+|, |G_P^-|\} = 1$, say $G_P^-= \{-n\}$, and set $G_0 = \{b + n \mathbb Z \mid b \in G_P^+ \} \subset \mathbb{Z}/n \mathbb{Z}$. Using Lemmas \ref{3.3} and \ref{transfer-to-finite}, we have block homomorphisms $\boldsymbol \beta \colon H \to \mathcal B (G_P)$ and $\boldsymbol \beta' \colon \mathcal B (G_P) \to \mathcal B (G_0)$. By Lemma \ref{3.2}, the composition $\theta = \boldsymbol \beta' \circ \boldsymbol \beta \colon H \to \mathcal B (G_0)$ still has the required properties. 
Again, by Lemmas \ref{3.2} and \ref{3.3}, it suffices to verify the additional statements for finitely generated monoids: we refer to \cite[Theorem 4.4.11]{Ge-HK06a} for the Structure Theorem, to \cite[Theorem 3.1.4]{Ge-HK06a} for the elasticity and the successive distance, and to \cite[Theorem 5.1]{Fo06a} for the monotone catenary degree. \end{proof} \begin{lemma} \label{monotone-delta1} Let $H$ be an atomic monoid, $a \in H$, and let $z, \,z' \in \mathsf Z (a)$ and $l = \bigl| |z| - |z'| \bigr|$. Then there exists some $z'' \in \mathsf Z (a)$ such that $|z''| = |z'|$ and \ $\mathsf d (z, z'') \le l \delta (H)$. \end{lemma} \begin{proof} See \cite[Lemma 3.1.3]{Ge-HK06a}. \end{proof} \begin{lemma} \label{monotone-delta2} Let $H$ be an atomic monoid with $\delta (H) < \infty$. Let $M \in \mathbb N$, $a \in H$, $u \in \mathcal A (H_{{\text{\rm red}}})$ and $z, \widehat z, \overline z \in \mathsf Z (a)$ be such that \[ |z| \le |\overline z|, \ u \t \overline z, \ u \t \widehat z \quad \text{and} \quad \mathsf d (z, \widehat z) \le M \,. \] Then there is a $z' \in \mathsf Z (a) \cap u \mathsf Z (H)$ such that $|z| \le |z'| \le |\overline z|$ and $\mathsf d (z, z') \le M + \Bigl( M + \max \Delta (H) \Bigr) \delta (H)$. \end{lemma} \begin{proof} Let $v \in H$ be such that $vH^{\times}=u$. We set $b = v^{-1}a$ and write $\overline z = u \overline y$ and $\widehat z = u \widehat y$, where $\overline y, \widehat y \in \mathsf Z (b)$. If $|z| \le |\widehat z| \le |\overline z|$, then $z' = \widehat z$ fulfills the requirements. If not, then either $|\widehat z| < |z|$ or $|\overline z| < |\widehat z|$, and we treat these two cases separately. \smallskip \noindent {\sc Case 1:} \, $|\widehat z| < |z|$. Since $|\widehat y| = |\widehat z|-1 \in \mathsf L (b)$ and $|\overline y| = |\overline z| - 1 \in \mathsf L (b)$, there is a \[ k \in \mathsf L (b) \cap [|z|-1, |\overline z|-1] \quad \text{with} \quad k \le |z|-1+\max \Delta (H) \,. \] Let $y'' \in \mathsf Z (b)$ with $|y''| = k$.
Then \[ \begin{aligned} |y''| - |\widehat y| & = k - |\widehat z|+ 1 \le |z|-1+\max \Delta (H) - |\widehat z|+ 1 \\ & \le \mathsf d ( z, \widehat z) + \max \Delta (H) \le M + \max \Delta (H) \,. \end{aligned} \] Thus, by Lemma \ref{monotone-delta1}, there is a $y' \in \mathsf Z (b)$ with $|y'| = |y''|$ and $\mathsf d ( \widehat y, y') \le \Bigl( M + \max \Delta (H) \Bigr) \delta (H)$. Then $z' = u y' \in \mathsf Z (a) \cap u \mathsf Z (H)$ with $|z'| = 1 + k \in [|z|, |\overline z|]$ and \[ \mathsf d (z, z') \le \mathsf d (z, \widehat z) + \mathsf d (u \widehat y, u y') \le M + \Bigl( M + \max \Delta (H) \Bigr) \delta (H) \,. \] \smallskip \noindent {\sc Case 2:} \, $|\overline z| < |\widehat z|$. By Lemma \ref{monotone-delta1}, there is a $y' \in \mathsf Z (b)$ with $|y'| = |\overline y|$ and \[ \begin{aligned} \mathsf d (\widehat y, y') & \le \Bigl( |\widehat y| - |\overline y| \Bigr) \delta (H) = \Bigl( |\widehat z| - |\overline z| \Bigr) \delta (H) \\ & \le \Bigl( |\widehat z| - |z| \Bigr)\delta (H) \le \mathsf d (\widehat z, z) \delta (H) \le M \delta (H) \,. \end{aligned} \] Then $z' = uy' \in \mathsf Z (a) \cap u \mathsf Z (H)$ with $|z'| = |\overline z|$ and \[ \mathsf d (z, z') \le \mathsf d (z, \widehat z) + \mathsf d (u \widehat y, u y') \le M + M\delta (H) \,. \qedhere \] \end{proof} \begin{proposition} \label{6.4} Let $H$ be a Krull monoid and $\varphi \colon H\to \mathcal{F}(P)$ a cofinal divisor homomorphism into a free monoid with infinite cyclic class group $\mathcal C(\varphi)$. If the successive distance $\delta (H)$ is finite, then the monotone catenary degree $\mathsf c_{{\text{\rm mon}}} (H)$ is finite. \end{proposition} \begin{proof} We set $G = \mathcal C(\varphi)$, identify $G$ with $\mathbb Z$ and denote by $G_P \subset G$ the set of classes containing prime divisors. Suppose that $\delta (H) < \infty$. Lemma \ref{3.3} shows $\delta (H) = \delta (G_P)$ and that it suffices to verify that $\mathsf c_{{\text{\rm mon}}} (G_P) < \infty$. 
Note that $\Delta (G_P)$ is finite (since $\delta(G_P)<\infty$), and thus by Theorem \ref{main-theorem-I} we get that (say) $G_P^-$ is finite. We set $M = \bigl( |\min G_P| + |G_P^-|^2 \bigr) \ |\min G_P | $ and assert that \[ \mathsf c_{{\text{\rm mon}}} (G_P) \le M + \Bigl( M + \max \Delta (H) \Bigr) \delta (H) = M^{\ast} \,. \] For this, we have to show that $\mathsf c_{{\text{\rm mon}}} (A) \le M^{\ast}$ for all $A \in \mathcal B (G_P)$, and we proceed by induction on $\max \mathsf L (A)$. If $A \in \mathcal B (G_P)$ with $\max \mathsf L (A) = 1$, then $A \in \mathcal A (G_P)$ and $\mathsf c_{{\text{\rm mon}}} (A) = 0$. Now let $A \in \mathcal B (G_P)$ with $\max \mathsf L (A) > 1$ and suppose that $\mathsf c_{{\text{\rm mon}}} (B) \le M^{\ast}$ for all $B \in \mathcal B (G_P)$ with $\max \mathsf L (B) < \max \mathsf L (A)$. \smallskip We pick $z, \overline z \in \mathsf Z (A)$ with $|z| \le |\overline z|$ and must find a monotone $M^{\ast}$-chain of factorizations from $z$ to $\overline z$. By Lemma \ref{3.4} there is a $U \t \overline z$ with $U \in \mathcal A (G_P)$ and a $\widehat z \in \mathsf Z (A) \cap U \mathsf Z (G_P)$ such that $\mathsf d (z, \widehat z) \le M$. By Lemma \ref{monotone-delta2}, there is a $z' \in \mathsf Z (A) \cap U \mathsf Z (G_P)$ such that $|z| \le |z'| \le |\overline z|$ and $\mathsf d (z, z') \le M^{\ast}$. Now we set \[ B = U^{-1}A,\quad \overline z = U \overline y \quad \text{and} \quad z' = U y', \] where $\overline y, y' \in \mathsf Z (B)$. Since $\max \mathsf L (B) < \max \mathsf L (A)$, the induction hypothesis gives a monotone $M^*$-chain $y' = y_1, \ldots, y_k = \overline y$ of factorizations of $B$ from $y'$ to $\overline y$. Therefore \[ z, z' = Uy'=Uy_1, Uy_2, \ldots , Uy_k = U \overline y = \overline z \] is a monotone $M^{\ast}$-chain of factorizations of $A$ from $z$ to $\overline z$. \end{proof} \bigskip \begin{proof}[{\bf Proof of Theorem \ref{main-theorem-II}}] 3. 
The implication $(b3) \Rightarrow (b4)$ follows since, for $a \in H$ and each two adjacent lengths \ $k, \, l \in \mathsf L (a)$, we have, by definition, $\mathsf d \bigl( \mathsf Z_k (a), \mathsf Z_l (a) \bigr) \le \mathsf c_{{\text{\rm mon}}} (H)$. The implication $(b2) \Rightarrow (b3)$ is Proposition \ref{6.4}. \smallskip 1. By Proposition \ref{nice_prop}, we know that $(a)$ implies $(b1)$, $(b2)$, and $(b3)$; and, by part 3, we know that $(b3)$ implies $(b4)$. \smallskip 2. By definition, each of $(b1)$, $(b2)$ and $(b3)$ implies the finiteness of $\Delta (H)$. Thus, Theorem \ref{main-theorem-I} implies the assertion. It remains to show that $(b4)$ implies $(c)$. Suppose that $(b4)$ holds with some $M \in \mathbb{N}$ and assume to the contrary that $(c)$ does not hold, i.e., $G_P^+$ and $G_P^-$ are both infinite. We proceed similarly to the proof of Theorem \ref{main-theorem-I}, part $(b) \Rightarrow (a)$. We set $a = \max G_P^-$ and $b = \min G_P^+$ and let $\alpha \in [1,b]$ and $\beta \in [1, |a|]$ be such that $V_{a,b}= a^{\alpha}b^{\beta} \in \mathcal{A}(G_P)$. We intend to apply Lemma \ref{lem_gap} with $v=3$. Thus, let $D = 3|a|(b+|a|)\gcd(a,b)$, let $b_1 \in G_P^+$ with \[\frac{b_1}{\lcm(a,b)} \ge 2 D + M, \] and let $a_2 \in G_P^-$ with $|a_2| \ge (3 b_1 +b)|a| $. Let $\alpha_1,\alpha_2,\beta_1,\beta_2 \in \mathbb{N}$ be such that $V_{a,b_1} = a^{\alpha_1}b_1^{\beta_1}$ and $V_{a_2,b} = a_2^{\alpha_2}b^{\beta_2}$ are elements of $\mathcal{A}(G_P)$. First, we assert that there exist $z_0, z_1, z_2, z_3 \in \mathsf{Z}((V_{a,b_1}V_{a_2,b})^3)$ with \[t(z_0)< t(z_1)< t(z_2)< t(z_3),\] where $t(\cdot)$ is defined as in Lemma \ref{lem_gap}. We note that $V_{a,b}\t V_{a,b_1}V_{a_2,b}$ (by the same reasoning used in the proof of Theorem \ref{main-theorem-I}), and thus there exists some $y \in \mathsf{Z}(V_{a,b_1}V_{a_2,b})$ with $t(y)\neq 0$. For $i \in [0,3]$, we set $z_i= y^i(V_{a,b_1}\cdot V_{a_2,b})^{3-i}$.
Then we have $t(z_i)= i t(y)$, establishing the claim. Let $z_0', z_1', z_2', z_3' \in \mathsf{Z}((V_{a,b_1}V_{a_2,b})^3)$ be such that $t(z_0')< t(z_1')< t(z_2')< t(z_3')$ and such that there exists no $z \in \mathsf{Z}((V_{a,b_1}V_{a_2,b})^3)$ with $t(z_1')< t(z) < t(z_2')$. By Lemma \ref{lem_gap}, we get, for $i\in [0,2]$, that \[|z_{i+1}'|-|z_i'| \ge \frac{b_1}{\lcm(a,b)} \bigl( t(z_{i+1}') - t(z_i')\bigr) - 2D \ge M . \] Since $\min \mathsf{L}((V_{a,b_1}V_{a_2,b})^3) \le |z_0'| < |z_1'| < |z_2'|< |z_3'| \le \max \mathsf{L}((V_{a,b_1}V_{a_2,b})^3)$, we get that \[|z_1'|,|z_2'| \in \bigl[\min\mathsf{L}\bigl((V_{a,b_1}V_{a_2,b})^3\bigr) + M, \max \mathsf{L}\bigl((V_{a,b_1}V_{a_2,b})^3\bigr)-M \bigr].\] Let \[ k = \max \left( \mathsf{L}\bigl((V_{a,b_1}V_{a_2,b})^3\bigr) \cap \left[\frac{b_1}{\lcm(a,b)} \ t(z_1') - D ,\frac{b_1}{\lcm(a,b)} \ t(z_1') + D \right] \right) \] and \[ l = \min \left( \mathsf{L}\bigl((V_{a,b_1}V_{a_2,b})^3\bigr) \cap \left[\frac{b_1}{\lcm(a,b)} \ t(z_2') - D ,\frac{b_1}{\lcm(a,b)} \ t(z_2') + D \right] \right)\ ; \] note that, by Lemma \ref{lem_gap}, $|z_1'|$ and $|z_2'|$ are elements of the former and the latter set, respectively, and also note that the two intervals are disjoint. In particular, we have $|z_1'|\le k < l \le |z_2'|$. Since there exists no $z \in \mathsf{Z}((V_{a,b_1}V_{a_2,b})^3)$ with $t(z_1')< t(z) < t(z_2')$, it follows by Lemma \ref{lem_gap} that $k$ and $l$ are adjacent lengths. Since $l - k \ge \frac{b_1}{\lcm(a,b)} - 2D \ge M $ and by \eqref{E:Dist}, we have $\mathsf d \bigl( \mathsf Z_k ((V_{a,b_1}V_{a_2,b})^3), \mathsf Z_l ((V_{a,b_1}V_{a_2,b})^3) \bigr) \ge M + 2$, a contradiction to the assumption that $(b4)$ holds with $M$. \end{proof} \section{The Structure Theorem for Sets of Lengths} \label{6} The Structure Theorem for Sets of Lengths is a central finiteness result in factorization theory.
Apart from Krull monoids---which will be discussed below---the Structure Theorem holds, among others, for weakly Krull domains with finite $v$-class group and for Mori domains $A$ with complete integral closure $\widehat A = R$ for which the conductor $\mathfrak f = (A \negthinspace : \negthinspace R) \ne \{0\}$ and $\mathcal C (R)$ and $R/ \mathfrak f$ are both finite (see \cite[Section 4.7]{Ge-HK06a} for an overview, and \cite{Ge-Gr09b, Ge-Ka10a} for recent progress). Moreover, it was recently shown that the Structure Theorem is sharp for Krull monoids with finite class group \cite{Sc09a}. Let $H$ be a Krull monoid and $G_P \subset G$ as always. By Theorem \ref{main-theorem-II}, it suffices to consider the situation when $G_P^+$ is infinite and $2 \le |G_P^-| < \infty$. Essentially, all results so far which establish the Structure Theorem for some class of monoids use the machinery of pattern ideals and tame generating sets (presented in detail in \cite[Section 4.3]{Ge-HK06a}). First, we recall these concepts and outline their significance for the Structure Theorem. However, Proposition \ref{4.6} shows that in our situation this approach is not applicable in general. The main result of this section, Theorem \ref{STSL_thm}, provides a full characterization of when the Structure Theorem holds. Although the setting is special, it shows that, in Theorem \ref{main-theorem-II}, condition $(b1)$ does not imply condition $(a)$, and it provides---together with Proposition \ref{4.6}---the first example of a Krull monoid for which the Structure Theorem holds without tame generation of pattern ideals. Furthermore, note by Lemma \ref{lem_char} that, for the sets $G_P$ considered in Theorem \ref{main-theorem-II}, there actually exists a Krull monoid such that $G_P$ is the set of classes containing prime divisors with respect to a divisor theory of this monoid.
Likewise, all previous examples of monoids $H$ with finite monotone catenary degree $\mathsf c_{{\text{\rm mon}}}(H)$ have been obtained by using the finiteness of $\delta(H)$. However, in Proposition \ref{STSL_prop2}, we give the first example of a monoid $H$ with $\mathsf c_{{\text{\rm mon}}}(H)<\infty$ but $\delta(H)=\infty$. \begin{definition} \label{def-tamely-gen} Let $H$ be an atomic monoid, let $\mathfrak a \subset H$ and let $A \subset \mathbb Z$ be a finite nonempty subset. \begin{enumerate} \item We say that a subset \ $L \subset \mathbb Z$ \ {\it contains the pattern \ $A$} \ if there exists some $y \in \mathbb Z$ such that $y +A \subset L$. We denote by \ $\Phi(A) = \Phi_H (A)$ \ the set of all $a \in H$ for which \ $\mathsf L(a)$ contains the pattern $A$. \smallskip \item Now $\mathfrak a$ is called a \ {\it pattern ideal} \ if $\mathfrak a = \Phi (B)$ for some finite nonempty subset $B \subset \mathbb Z$. \smallskip \item A subset $E \subset H$ is called a \ {\it tame generating set} \ of $\mathfrak a$ \ if $E \subset \mathfrak a$ and there exists some $N \in \mathbb N$ with the following property: for every $a \in \mathfrak a$, \ there exists some \ $e \in E$ \ such that \[ e \t a\,, \quad \sup \mathsf L(e) \le N \quad \text{and} \quad \mathsf t(a, \mathsf Z(e)) \le N\,. \] In this case, we call $E$ a \ {\it tame generating set with bound \ $N$}, and we say that $\mathfrak a$ \ is \ {\it tamely generated}. \end{enumerate} \end{definition} The significance of tamely generated pattern ideals stems from the following result. \medskip \begin{proposition} \label{4.2} Let $H$ be a \text{\rm BF}-monoid with finite nonempty set of distances $\Delta (H)$ and suppose that all pattern ideals of $H$ are tamely generated. Then there exists a constant $M \in \mathbb N_0$ such that the following properties are satisfied{\rm \,:} \begin{enumerate} \item[(a)] The Structure Theorem for Sets of Lengths holds with $\Delta (H)$ and bound $M$.
\smallskip \item[(b)] For all $a \in H$ and for each two adjacent lengths \ $k, \, l \in \mathsf L (a) \cap [\min \mathsf L (a) + M, \, \max \mathsf L (a) - M]$, \ we have $\mathsf d \bigl( \mathsf Z_k (a), \mathsf Z_l (a) \bigr) \le M$. \end{enumerate} \end{proposition} \begin{proof} The first statement follows from \cite[Theorem 4.3.11]{Ge-HK06a} and the second from \cite[Proposition 5.4]{Ge-Ka10a}. \end{proof} \medskip \begin{proposition} \label{4.6} Let $H$ be a Krull monoid and $\varphi \colon H\to \mathcal{F}(P)$ a cofinal divisor homomorphism into a free monoid such that the class group $G= \mathcal{C}(\varphi)$ is an infinite cyclic group that we identify with $\mathbb Z$. Let $G_P \subset G$ denote the set of classes containing prime divisors. Suppose that \begin{itemize} \item $G_P^+$ is infinite and \item there are \ $a_1, \, a_2 \in G_P^-$ \ and \ $b \in G_P^+$ \ such that \[ a_1 \frac{\gcd (a_2,b)}{\gcd (a_1, a_2, b)} \equiv a_2 \frac{\gcd (a_1,b)}{\gcd (a_1, a_2, b)} \mod b \quad \text{but} \quad a_1 \frac{\gcd (a_2,b)}{\gcd (a_1, a_2, b)} \neq a_2 \frac{\gcd (a_1,b)}{\gcd (a_1, a_2, b)} \,. \] \end{itemize} Then both $H$ and $\mathcal B (G_P)$ have a pattern ideal which is not tamely generated. \end{proposition} \begin{proof} By \cite[Proposition 3.14]{Ge-Gr09b}, it suffices to show that $\mathcal B (G_P)$ has a pattern ideal which is not tamely generated. First we show that $\mathcal B (\{a_1, a_2, b\})$ is half-factorial. By Lemma \ref{transfer-to-finite}, it suffices to show that $\mathcal B ( \{ a_1 + b \mathbb Z, a_2 + b \mathbb Z \})$ is half-factorial. By \cite[Proposition 5]{Ge90d}, this follows from (indeed, it is equivalent to) the congruence that $a_1$, $a_2$, and $b$ fulfil by assumption.
We set $\alpha_1 = b/\gcd(a_1,b)$, $\beta_1= |a_1|/\gcd (a_1,b)$, $\alpha_2 = b/\gcd( a_2, b)$, $\beta_2 = |a_2| /\gcd (a_2, b)$ and observe that, by rearranging our assumption $a_1 \frac{\gcd (a_2,b)}{\gcd (a_1, a_2, b)} \neq a_2 \frac{\gcd (a_1,b)}{\gcd (a_1, a_2, b)}$, we have $d=a_1 \alpha_1- a_2 \alpha_2\neq 0$, say $d>0$. Noting that $\alpha_1a_1=\lcm(a_1,b)$ and $\alpha_2a_2=\lcm(a_2,b)$, we can consider the two atoms \[ U_1 = a_1^{\alpha_1} b^{\beta_1} \quad \text{and} \quad U_2 = {a_2}^{\alpha_2} b^{\beta_2} \in \mathcal A (G_P) \,. \] Since $G_P^+$ is infinite, it contains arbitrarily large elements. Let $N \in G_P^+\setminus \{b\}$. We define \[ \gamma = \min \{\mathsf v_N (U) \mid U \in \mathcal A ( \{a_1, a_2, b, N\}) \ \text{with} \ N \t U \} \,. \] Since $N^{|a_1|} a_1^N \in \mathcal B (G_P)$, it follows that $\gamma \in [1, |a_1|]$. Now we pick an atom $U_N \in \mathcal A ( \{ a_1, a_2, b, N \})$ with $\gamma = \mathsf v_N (U_N)$ for which $\mathsf v_{b} (U_N)$ is minimal, say \[ U_N = N^{\gamma} b^{\beta} a_1^{M_1} {a_2}^{M_2} \in \mathcal A (G_P), \quad \text{where} \quad \beta, \gamma, M_1, M_2 \in \mathbb N_0 \ \text{depend on}\ N \,. \] If $M_2 \ge |a_1|$, then $U'_N=U_N a_1^{|a_2|} {a_2}^{a_1}$ has sum zero, and by the minimality of $\mathsf v_N (U_N)$ and $\mathsf v_{b} (U_N)$, it is an atom (as each atom must have at least one positive element). Thus, we may additionally choose $U_N$ such that $M_2 < |a_1|$, which implies (recall $a_2<0$) \begin{equation} \label{eq_M1} M_1 = \frac{1}{|a_1|} \Bigl( \gamma N + \beta b + a_2M_2 \Bigr) \ge \frac{1}{|a_1|} \Bigl( \gamma N + a_2|a_1| \Bigr) \ge \frac{N}{|a_1|}+a_2 \,. \end{equation} In view of this inequality, we may suppose that $N$ is sufficiently large to guarantee that $M_1\ge |a_2| \alpha_1 \alpha_2 $. Note that, since $U_N$ is an atom and $M_1\ge |a_2| \alpha_1 \alpha_2 \geq \alpha_1$, we have $\beta < \beta_1$. We consider the element \[ A_N = U_N {U_2}^{M_1} \in \mathcal B (G_P) \,.
\] Let $k \in \Bigl[ 0, \bigl\lfloor \frac{M_1}{|a_2| \alpha_1 \alpha_2} \bigr\rfloor \Bigr]$. Then we have \[ U_{N,k} = N^{\gamma} b^{\beta} a_1^{M_1 + (a_2 \alpha_1 \alpha_2)k} {a_2}^{M_2 + (|a_1| \alpha_1 \alpha_2)k} \in \mathcal B (G_P) \,, \] and by the minimality of $\gamma$ and $\beta$, it follows that $U_{N,k} \in \mathcal A (G_P)$. Clearly, we get \[ z_{N,k} = U_{N,k} U_1^{-a_2 \alpha_2 k} {U_2}^{M_1 + a_1 \alpha_1 k} \in \mathsf Z (A_N) \,. \] This shows that \begin{equation}\label{spiffyspaff} \mathsf L (A_N) \supset \left\{ M_1 + 1 + d k \Bigm| k \in \left[ 0, \left\lfloor \frac{M_1}{|a_2| \alpha_1 \alpha_2} \right\rfloor \right] \right\} \,. \end{equation} Thus, we have $A_N \in \Phi ( \{0, d \})$ for each sufficiently large $N \in G_P^+$. \medskip Let $E_N \in \Phi ( \{0, d \})$ with $E_N \t A_N$. Since $\mathcal B (\{a_1, a_2, b \})$ is half-factorial, it follows that $N \t E_N$. By the definition of $\gamma$, there is a $U_N' \in \mathcal A (G_P)$ with $N^{\gamma} \t U_N' \t E_N$. Note that \cite[Lemma 1.6.5.6]{Ge-HK06a} shows that $\mathsf t (A_N, U_N') \le \mathsf t (A_N, \mathsf Z (E_N) )$. Let $A_N = U_N' W_N$ with $W_N \in \mathcal B (G_P)$. Then $\supp (W_N) = \{a_1, a_2, b \}$ and hence $|\mathsf L (W_N)| = 1$. Thus all factorizations in $\mathsf Z (A_N) \cap U_N' \mathsf Z (G_P)$ have the same length. We pick some factorization $z_N \in \mathsf Z (A_N) \cap U_N' \mathsf Z (G_P)$. Clearly, there is a factorization $z_N^* \in \mathsf Z (A_N)$ such that (in view of \eqref{spiffyspaff}) \[ \begin{aligned} \bigl| |z_N| - |z_N^*| \bigr| \ge \frac{\max \mathsf L (A_N) - \min \mathsf L (A_N)}{2} \ge \frac{d}{2} \left\lfloor \frac{M_1}{|a_2| \alpha_1 \alpha_2} \right\rfloor \,.
\end{aligned} \] This implies that \[ \begin{aligned} \mathsf t (A_N, \mathsf Z (E_N) ) \ge \mathsf t (A_N, U_N') & \ge \min \{ \mathsf d (z_N^*, y_N) \mid y_N \in \mathsf Z (A_N) \cap U_N' \mathsf Z (G_P) \} \\ & \ge \min \{ \bigl| |z_N^*| - |y_N| \bigr| \mid y_N \in \mathsf Z (A_N) \cap U_N' \mathsf Z (G_P) \} \\ & \ge \bigl| |z_N| - |z_N^*| \bigr| \ge \frac{d}{2} \left\lfloor \frac{M_1}{|a_2| \alpha_1 \alpha_2} \right\rfloor \,. \end{aligned} \] Since $N$ can be arbitrarily large and by \eqref{eq_M1}, we get that $\Phi ( \{0, d \})$ is not tamely generated. \end{proof} We will frequently make use of the following simple observation. Let $G$ be an abelian group and $G_1 \subset G_0 \subset G$ subsets. Then $\mathcal B (G_1) \subset \mathcal B (G_0)$ is a divisor-closed submonoid, and hence $\mathcal{L}(G_1) \subset \mathcal{L}(G_0)$. Therefore, if the Structure Theorem holds for $\mathcal B (G_0)$, then it holds for $\mathcal B (G_1)$. In particular, if condition $(b)$ holds, then the Structure Theorem holds for all $\mathcal B (G_0)$ with $G_0 \subset G_P$, and if $(b)$ fails, then the Structure Theorem fails for all $\mathcal B (G_0)$ with $G_P \subset G_0$---where $G_P$ is as below. \medskip \begin{theorem} \label{STSL_thm} Let $H$ be a Krull monoid and $\varphi \colon H\to \mathcal{F}(P)$ a cofinal divisor homomorphism into a free monoid such that the class group $G= \mathcal{C}(\varphi)$ is an infinite cyclic group that we identify with $\mathbb Z$. Let $G_P \subset G$ denote the set of classes containing prime divisors. Suppose that $1 \in G_P^+$ and $G_P^- = \{-d,-1\}$ for some $d \in \mathbb N$. Then the following statements are equivalent{\rm \,:} \begin{enumerate} \item[(a)] The Structure Theorem for Sets of Lengths holds for $H$. \smallskip \item[(b)] $G_P^+ \setminus d \mathbb{Z}$ \ is finite or a subset of \ $1+ d\mathbb{Z}$. \end{enumerate} \end{theorem} \medskip The remainder of this section is devoted to the proof of Theorem \ref{STSL_thm}. 
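Before entering the proof, the dichotomy in Theorem \ref{STSL_thm} can be seen in two concrete configurations. The following example is not part of the original development and is only an illustration; both sets satisfy the hypotheses $1 \in G_P^+$ and $G_P^- = \{-d,-1\}$ of the theorem, and by Lemma \ref{lem_char} such sets can occur as sets of classes containing prime divisors.

```latex
% Illustration (not from the original text) of Theorem \ref{STSL_thm} for d = 3.
\begin{example}
Let $d = 3$.
\begin{enumerate}
\item If $G_P = \{-3,-1\} \cup (1 + 3\mathbb N_0)$, then
$G_P^+ \setminus 3\mathbb Z = 1 + 3\mathbb N_0 \subset 1 + 3\mathbb Z$,
so condition $(b)$ holds and, by Theorem \ref{STSL_thm}, the Structure
Theorem holds in this situation.
\item If $G_P = \{-3,-1,1\} \cup (2 + 3\mathbb N)$, then
$G_P^+ \setminus 3\mathbb Z = \{1\} \cup (2 + 3\mathbb N)$ is infinite and
contains $5 \notin 1 + 3\mathbb Z$, so condition $(b)$ fails; this is
precisely the situation of Lemma \ref{STSL_lem2} with $e = 2$, and the
Structure Theorem fails.
\end{enumerate}
\end{example}
```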
\begin{lemma} \label{STSL_lemt} Let $H$ be an atomic monoid. Suppose that there exists some $e \in \mathbb{N}$ such that, for each $N \in \mathbb{N}$, there exists some $a \in H$ such that $\mathsf{L}(a)\cap [\min \mathsf{L}(a),\min \mathsf{L}(a)+N]\subset \min \mathsf{L}(a)+e \mathbb{Z}$, yet $\mathsf{L}(a)\not\subset \min \mathsf{L}(a)+ e \mathbb{Z}$. Then the Structure Theorem does not hold for $H$. \end{lemma} \begin{proof} We assume to the contrary that there exists some finite nonempty set $\Delta^{\ast}\subset \mathbb{N}$ and some $M\in \mathbb{N}$ such that, for each $b \in H$, the set $\mathsf{L}(b)$ is an AAMP with difference $d\in \Delta^{\ast}$ and bound $M$. Let $D= 2\ \lcm ( \Delta^{\ast})$. Let $N \ge 2M + D$ and let $a \in H$ with the properties from the statement of the lemma. Let $l_1 =\min \mathsf{L}(a)$ and $l_2= \max \mathsf{L}(a)$. Note that $l_2 \ge l_1+N$ (by the property assumed for $a$). By assumption, we get that $\mathsf{L}(a)$ is an AAMP, i.e., \[ \mathsf L(a) = y + (L' \cup L^* \cup L'') \, \subset \, y + \mathcal D + d \mathbb Z \] where $d \in \Delta^{\ast}$, $\{0,d\}\subset \mathcal{D}\subset [0,d]$, $L^*$ is finite nonempty with $\min L^* = 0$ and $L^* =(\mathcal D + d \mathbb Z) \cap [0, \max L^*]$, $L' \subset [-M, -1]$ \ and \ $L'' \subset \max L^* + [1,M]$, and $y \in \mathbb N$. Since $$l_2\geq l_1+N\geq l_1+2M+D\geq l_1-\min L'+M+D,$$ it follows that $[l_1 - \min L', l_1- \min L'+D-1]\cap \mathsf{L}(a)\subset y + L^{\ast}$, and thus \[ [l_1 - \min L', l_1- \min L'+D-1]\cap \mathsf{L}(a)= [l_1 - \min L', l_1- \min L'+D-1]\cap (y + \mathcal{D}+d\mathbb{Z}). \] On the other hand, by the property assumed for $a$, and since $N\geq 2M+D\geq -\min L'+D$, we have \[ [l_1 - \min L', l_1- \min L'+D-1]\cap \mathsf{L}(a)\subset l_1+e\mathbb{Z}.
\] Thus \[A=[ - \min L', - \min L'+D-1]\cap (y-l_1 + \mathcal{D}+d\mathbb{Z}) \subset e\mathbb{Z}.\] Since $D \ge 2d$, it follows that for each $d'\in \mathcal{D}$ there exists some $k\in \mathbb{Z}$ and $\epsilon \in \{-1,1\}$ such that $y-l_1+d'+kd,y-l_1+d'+(k+\epsilon) d \in A $. Thus $e \mid d$ and, furthermore, $e \mid y-l_1+d'$. Consequently, $y + \mathcal{D}+d\mathbb{Z} \subset l_1+e \mathbb{Z}$. This yields a contradiction, since $\mathsf{L}(a) \subset y + \mathcal{D}+d\mathbb{Z}$, yet $\mathsf{L}(a)\not \subset l_1 +e \mathbb{Z}$ by hypothesis. \end{proof} \begin{lemma} \label{STSL_lem1} Let $d \in \mathbb{N}$, $e \in [2,d-1]$ with $\gcd(e,d)>1$ and $G_0 \subset \mathbb Z$. If $\{-d,-1,1\}\subset G_0$ and $G_0^+ \cap (e+d\mathbb{Z})$ is infinite, then the Structure Theorem does not hold for $\mathcal B(G_0)$. \end{lemma} \begin{proof} We may assume $d \ge 4$, since otherwise there exists no $e \in [2,d-1]$ with $\gcd(e,d)>1$. Let $k \in \mathbb{N}$ be such that $e+dk \in G_0$; by assumption, we know that arbitrarily large $k$ with this property exist, and we may thus assume that $k \ge 10$. Let $f \in \mathbb{N}$ be minimal such that $ef \in d \mathbb{N}$, say $ef= du$. Since $\gcd(e,d)>1$, we see that $f \in [2, d/2]$ and $u \le e/2 \leq d/2$. We consider the sequence \[B= (e+dk)^f(-d)^{u+fk}(-1)^{d(u+fk)}1^{d(u+fk)}.\] Since $ef=du$, we have $B \in \mathcal{B}(G_0)$. First, we consider two specific factorizations of $B$. Then, we investigate the lengths of all factorizations of $B$ of small length. Let \[z_1 = ((e+dk)^f(-d)^{u+fk})\cdot ((-1)1)^{d(u+fk)}\] and \[z_2 = ((e+dk)(-1)^{e+dk})^f\cdot ((-d)1^d)^{u+fk}.\] We note that $z_1, z_2 \in \mathsf{Z}(B)$ and that $|z_1|= 1+d(u+fk)$ and $|z_2|= f+(u+fk)$. Since $f-1 \notin (d-1)\mathbb{Z}$ (as $f\in [2,d/2]$), we have $|z_1|-|z_2|\notin (d-1)\mathbb{Z}$.
We claim that there exists an absolute positive constant $c$ such that, for each $z \in \mathsf{Z}(B)$ with \[|z| \le |z_2| + c(d-1)k,\] we have \[|z| - |z_2| \in (d-1)\mathbb{N}_0.\] By Lemma \ref{STSL_lemt} and since $k$ can be arbitrarily large, this implies that the Structure Theorem does not hold. Thus, it suffices to establish this claim. For definiteness, we set $c=1/6$ (it is apparent from the subsequent argument that it only has to be less than $1/2$). Let \[z= A_1 \cdot \ldots \cdot A_s U_1 \cdot \ldots \cdot U_t\in \mathsf{Z}(B)\] with $A_i, U_j\in \mathcal{A}(G_0)$, and $(e+dk)\mid A_i$ and $(e+dk)\nmid U_j$ for all $i,j$. We proceed to show that $\mathsf{v}_{e+dk}(A_i)= 1$ for each $i$, i.e., $s=f$. Clearly, $\mathsf{v}_{(-1)1}(z)\le |z|$, and thus we have \[ \begin{split} \mathsf{v}_{-1}(\pi(A_1 \cdot \ldots \cdot A_s))& \ge d(u+fk)-|z| \\ & \ge d(u + fk) - (f+ u+ fk + c(d-1)k) \\ & = (f-2)(e+dk) + 2(e+dk) - ( f+ u+fk + c(d-1)k ) \\ & \ge (f-2)(e+dk) + dk - ( d/2 + d + dk/2 + cdk) \\ & > (f-2)(e+dk) + d(k - 3/2 - k/2 - ck). \end{split} \] Since $c=1/6$ and $k\ge 10$, we have $k(1/2 - c)- 3/2 \ge 1$. So we have \begin{equation}\label{ladyloo}\mathsf{v}_{-1}(\pi(A_1 \cdot \ldots \cdot A_s))\ge (f-2)(e+dk) + d.\end{equation} If $s\le f-1$, then, since $\mathsf{v}_{-1}(A_i)\le e+dk$ for each $i$, we conclude from \eqref{ladyloo} that $\mathsf{v}_{-1}(A_i)\ge d$ for each $i$, implying (since $\supp(A_i^-)\subset \{-1,-d\}$) that $\mathsf{v}_{e+dk}(A_i)= 1$ for each $i$, contradicting $s\leq f-1$. Thus $s=f$. We have $U_j \in \{(-1)1, ((-d)1^d)\}$ for each $j$. Thus \[ z=A_1 \cdot \ldots \cdot A_f ((-1)1)^a ((-d)1^d)^b \] where $a = d(u+fk) - \mathsf{v}_{-1}(\pi(A_1 \cdot \ldots \cdot A_f)) $ and $b = u+fk - \mathsf{v}_{-d}(\pi(A_1 \cdot \ldots \cdot A_f))$. 
We have \[|z|= f + (u+fk)(d+1)- (\mathsf{v}_{-1}(\pi(A_1 \cdot \ldots \cdot A_f)) + \mathsf{v}_{-d}(\pi(A_1 \cdot \ldots \cdot A_f)))\] and, since \[ d\cdot \mathsf{v}_{-d}(\pi(A_1 \cdot \ldots \cdot A_f)) + \mathsf{v}_{-1}(\pi(A_1 \cdot \ldots \cdot A_f))= (u+fk)d,\] this implies \[|z|= f + u + fk + (d-1)\mathsf{v}_{-d}(\pi(A_1 \cdot \ldots \cdot A_f)),\] establishing $|z|-|z_2| \in (d-1)\mathbb{N}_0$. \end{proof} \begin{lemma} \label{STSL_lem2} Let $d \in \mathbb{N}$, $e \in [1,d-1]$ with $\gcd(e,d)=1$ and $G_0 \subset \mathbb Z$. If $\{-d,-1,1\}\subset G_0$, $G_0^+ \cap (e+d\mathbb{Z})$ is infinite and $G_0^+ \setminus ((e+d\mathbb{Z}) \cup d \mathbb{Z})$ is nonempty, then the Structure Theorem does not hold for $\mathcal{B}(G_0)$. \end{lemma} \begin{proof} We may assume $d \ge 3$, as the hypotheses cannot be satisfied otherwise. Since $G_0^+ \setminus ((e+d\mathbb{Z}) \cup d \mathbb{Z})$ is nonempty, let $f \in [1,d-1]\setminus \{e\}$ and $\ell \in \mathbb{N}_0$ be such that $f+d\ell\in G_0^+$. Since $G_0^+ \cap (e+d\mathbb{Z})$ is infinite, we may choose $k \in \mathbb{N}$ such that $e+dk \in G_0^+$ and $e+dk \ge f + d \ell$. Since $\gcd(e,d)=1$, let $x \in [1,d-1]$ be the integer such that $f+xe\in d \mathbb{Z}$, say $f+xe=ud$. Since $f\in [1,d-1]\setminus \{e\}$, we have $x \neq d-1$ and $u\leq d-1$. We proceed similarly to the proof of Lemma \ref{STSL_lem1}. We consider the following element of $\mathcal{B}(G_0)$: \[B = (f+d\ell)(e+dk)^x (-d)^{u+xk+\ell}(-1)^{d(u+xk+\ell)}1^{d(u+xk+\ell)}.\] Again, we first consider two specific factorizations of $B$, namely \[z_1 = ((f+d\ell)(e+dk)^x (-d)^{u+xk+\ell})\cdot ((-1)1)^{d(u+xk+\ell)}\] and \[z_2 = ((f+d\ell)(-1)^{f+d\ell})\cdot((e+dk)(-1)^{e+dk})^x\cdot ((-d)1^d)^{u+xk+\ell}.\] The respective lengths of these factorizations are $1+ d(u+xk+\ell)$ and $1+x+ (u+xk+\ell)$. Thus, $|z_1|-|z_2| \notin (d-1)\mathbb{Z}$.
As in Lemma \ref{STSL_lem1}, we show that there exists a positive $c$, now depending on $d$ (but not on $k$), such that, for each $z \in \mathsf{Z}(B)$ with \[|z| \le |z_2| + c(d-1)k,\] we have \[|z| - |z_2| \in (d-1)\mathbb{N}_0,\] which again completes the proof by Lemma \ref{STSL_lemt}. We set $c= 1/(d-1)$ (this choice is not optimal). Let \[z= A_1 \cdot \ldots \cdot A_s((-1)1)^a ((-d)1^d)^b\] where $A_i \notin \{(-1)1, (-d)1^d\}$. We proceed to show that $|A_i^+|=1$ for each $i$. From the definition of $B$, we have $s \le x+1$. Again, $\mathsf{v}_{(-1)1}(z)\le |z|$, and thus \[ \begin{split} \mathsf{v}_{-1}(\pi(A_1 \cdot \ldots \cdot A_s))& \ge d(u+xk+\ell)-|z|\\ & \ge d(u+xk+\ell) - (1+x+ (u+xk+\ell) + c(d-1)k) \\ & = (x-1)(e+dk) + (f+d\ell) + (e+dk) - ( 1 + x + (u+xk+\ell) + c(d-1)k) \\ & \ge (x-1)(e+dk) + (f+d\ell) + (e+dk) - (d-1 + (d-1+(d-2)k+\ell) + c(d-1)k)\\ & \ge (x-1)(e+dk)+ d + 2k - 3d - c(d-1)k. \end{split}\] Since $c= 1/(d-1)$, we have, for $k \ge 3d$, \[\mathsf{v}_{-1}(\pi(A_1 \cdot \ldots \cdot A_s))\ge (x-1)(e+dk)+ d.\] If $s=x+1$, the claim is obvious. Thus, assume $s\le x$. Since $\mathsf{v}_{-1}(A_i)\le e+dk$ for each $i$ (recall that $e+dk \ge f+d\ell$), we get that $\mathsf{v}_{-1}(A_i)\ge d$ for each $i$, establishing the claim (since $\supp(A_i^-)\subset \{-1,-d\}$). Thus \[z=A_1 \cdot \ldots \cdot A_s ((-1)1)^a ((-d)1^d)^b,\] where $a = d(u+xk+\ell) - \mathsf{v}_{-1}(\pi(A_1 \cdot \ldots \cdot A_s)) $ and $b = (u+xk+\ell) - \mathsf{v}_{-d}(\pi(A_1 \cdot \ldots \cdot A_s))$. We have \[|z|= s + (d+1)(u+xk+\ell) - (\mathsf{v}_{-1}(\pi(A_1\cdot \ldots \cdot A_s)) + \mathsf{v}_{-d}(\pi(A_1\cdot \ldots \cdot A_s))).\] We note that if $f+\ell d\neq 1$, then $s=1+x$, and if $f+\ell d= 1$, then $s=x$. 
Moreover, if the former holds true, then \[ d\cdot \mathsf{v}_{-d}(\pi(A_1\cdot \ldots \cdot A_s)) + \mathsf{v}_{-1}(\pi(A_1\cdot \ldots \cdot A_s))= d(u+xk+\ell),\] whereas if the latter holds true, then \[ d \cdot \mathsf{v}_{-d}(\pi(A_1\cdot \ldots \cdot A_s)) + \mathsf{v}_{-1}(\pi(A_1\cdot \ldots \cdot A_s))= d(u+xk+\ell)-1.\] In both cases, this implies \[|z|= 1+x + (u+xk+\ell) +(d-1) \mathsf{v}_{-d}(\pi(A_1\cdot \ldots \cdot A_s)),\] establishing $|z|-|z_2| \in (d-1)\mathbb{N}_0$, as claimed. \end{proof} \begin{proposition} \label{STSL_prop} Let $\{-1,1\} \subset G_0 \subset \mathbb{Z}$ with $G_0^-$ finite and such that the Structure Theorem holds for $\mathcal B (G_0)$. For each $-d\in G_0^-$, at least one of the following statements holds{\rm \,:} \begin{enumerate} \item[(a)] $|G_0^+ \setminus d \mathbb{Z}| < \infty$. \item[(b)] $G_0^+ \setminus d \mathbb{Z} \subset 1 + d \mathbb{Z}$. \end{enumerate} \end{proposition} \begin{proof} The claim is trivial for $d\le 2$. Suppose $d \ge 3$. Let $E\subset [0,d-1]$ be the set of all $e$ such that $G_0^{+}\cap (e+d \mathbb{Z})$ is infinite. If there exists some $e\in E \setminus \{0\}$ with $\gcd(e,d)> 1$, Lemma \ref{STSL_lem1} yields a contradiction. Thus, $\gcd(e,d)=1$ for each $e \in E \setminus \{0\}$. If $E \subset \{0\}$, then $G_0^+ \setminus d \mathbb{Z}$ is finite and $(a)$ holds. Otherwise, by Lemma \ref{STSL_lem2}, we get that $e=1$ for each $e \in E \setminus \{0\}$ (note that $1\in G_0^+$), and moreover, in this case, that $G_0^+ \subset (1+ d \mathbb{Z}) \cup d\mathbb{Z}$, so that $(b)$ holds. \end{proof} Now, we show that the Structure Theorem indeed holds for monoids of zero-sum sequences over sets of the form considered in Theorem \ref{STSL_thm} not covered by the above results. Moreover, we investigate the finiteness of the successive distance for these sets.
Again, note that the set $F_0 \cup d\mathbb{N}$ in the result below does not fulfil condition $(a)$ of Theorem \ref{main-theorem-II}, yet by Lemma \ref{lem_char} it can occur as the subset of classes containing prime divisors of a Krull monoid, even with respect to a divisor theory, showing that the conditions $(b1)$, $(b2)$, and $(b3)$ do not imply $(a)$, not even combined. \begin{proposition} \label{STSL_prop1} Let $d \in \mathbb{N}_{\ge 2}$ and $F_0\subset \mathbb{Z}$ with $F_0^- = \{-d,-1\}$. \begin{enumerate} \item The Structure Theorem holds for $\mathcal{B}(F_0 \cup d\mathbb{N})$ if and only if it holds for $\mathcal{B}(F_0\cup \{d\})$. More precisely, for each $L\in \mathcal{L}(F_0 \cup d\mathbb{N})$, there exists some $y \in \mathbb{N}_0$ such that $y + L \in \mathcal{L}(F_0 \cup \{d\})$. \item $\delta(F_0 \cup d\mathbb{N}) = \delta(F_0 \cup \{d\})$. \item There is a map $\psi \colon \mathcal{B}(F_0 \cup d\mathbb{N})\rightarrow \mathcal{B}(F_0 \cup \{d\})$ such that, for each $B \in \mathcal{B}(F_0 \cup d\mathbb{N})$ and adjacent lengths $k$ and $l$ of $\mathsf L (B)$, we have $\mathsf d(\mathsf{Z}_k(B),\mathsf{Z}_l(B))\leq \mathsf d(\mathsf{Z}_{k'}(\psi(B)),\mathsf{Z}_{l'}(\psi(B)))$, where $k'$ and $l'$ are the corresponding adjacent lengths of $\mathsf L(\psi(B))$. \end{enumerate} In particular, if $F_0$ is finite, then the Structure Theorem holds for $\mathcal{B}(F_0 \cup d\mathbb{N})$, and $\delta(F_0 \cup d\mathbb{N})$ and $\mathsf c_{{\text{\rm mon}}}(F_0 \cup d\mathbb{N})$ are finite. \end{proposition} \begin{proof} Let $G_0=F_0 \cup d \mathbb{N}$ and $G_1= F_0 \cup \{d\}$. \smallskip 1. Since $G_1 \subset G_0$, one implication is clear and it remains to show that if the Structure Theorem holds for $\mathcal{B}(G_1)$, then it holds for $\mathcal{B}(G_0)$. Indeed, the more precise assertion we establish shows that the Structure Theorem holds with the same bound and the same set of differences.
Let $\psi \colon \mathcal{F}(G_0) \to \mathcal{F}(G_1)$ denote the monoid homomorphism defined via $\psi(g)=g$ for $g \notin d \mathbb{N}$ and $\psi(kd)=d^k$ for $kd\in d\mathbb{N}$. We note that $\sigma(S)= \sigma(\psi(S))$ for each $S \in \mathcal{F}(G_0)$; thus $\psi$ yields a homomorphism, and indeed an epimorphism, from $\mathcal{B}(G_0)$ to $\mathcal{B}(G_1)$. Moreover, we observe that if $A \in \mathcal{A}(G_0)$ with $kd \mid A$, for some $k \in \mathbb{N}$, then $A^+ = kd$. This implies that, for such an atom, $\psi(A)=d^{k}(-1)^{d\ell}(-d)^{k - \ell}$ for some $\ell \in [0,k]$, and $(d(-1)^d)^{\ell} \cdot (d(-d))^{k -\ell}\in \mathsf{Z}(\psi(A))$ is the unique factorization of $\psi(A)$. We denote this factorization by $\overline{\psi}(A)$ and we note that $|\overline{\psi}(A)| = \sigma(A^+)/d$. Setting $\overline{\psi}(A) = A$ for each atom not of this form, i.e., $A \in \mathcal{A}(G_0)$ with $\supp(A)\cap d\mathbb{N}=\emptyset$, and extending this map to $\mathsf{Z}(G_0)$, we get a homomorphism, indeed an epimorphism, $\overline{\psi} \colon \mathsf{Z}(G_0) \to \mathsf{Z}(G_1)$. Since $\pi(\overline{\psi}(z))=\psi(\pi(z))$, we see that $\overline{\psi}(\mathsf{Z}(B))\subset \mathsf{Z}(\psi(B))$ for each $B \in \mathcal{B}(G_0)$. Moreover, for $B \in \mathcal{B}(G_0)$ and $z \in \mathsf{Z}(B)$, we have, denoting $F=\prod_{g \in d\mathbb{N}}g^{\mathsf{v}_g(B)}$, that $|\overline{\psi}(z)|= |z| + (\sigma(F)/d - |F|)$. In particular, the value of $|\overline{\psi}(z)|- |z|$ is the same for each $z \in \mathsf{Z}(B)$. Thus, to establish our claim on sets of lengths, it suffices to show that $\overline{\psi}(\mathsf{Z}(B))= \mathsf{Z}(\psi(B))$ for each $B \in \mathcal{B}(G_0)$. Let $B \in \mathcal{B}(G_0)$ and again let $F = \prod_{g \in d\mathbb{N}}g^{\mathsf{v}_g(B)}=\prod_{i=1}^{|F|}(k_id)$, where $k_i\in \mathbb N$. Let $z'\in \mathsf{Z}(\psi(B))$.
There exists a unique decomposition $z'=z_1'z_2'$ such that $z_1'$ is minimal with $d^{\sigma(F)/d}\mid \pi(z_1')$ (note that $\mathsf{v}_d(\psi(B))=\sigma(F)/d$). We have $|z_1'|= \sigma(F)/d$. Write $z_1'= \prod_{i=1}^{|F|}y'_i$ such that each factor $y'_i\in \mathsf Z(\psi(B))$ contains exactly $|y'_i|=k_i$ atoms. Then letting $A_i = (k_id)d^{-k_i}\pi(y'_i)$, we have $A_i \in \mathcal{A}(G_0)$, and so $z=A_1\cdot \ldots \cdot A_{|F|}z_2'$ is a factorization of $B$ with $\overline{\psi}(z) = \overline{\psi}(A_1)\cdot \ldots \cdot \overline{\psi}(A_{|F|})z_2'=y'_1\cdot \ldots \cdot y'_{|F|}z_2'= z'$, establishing our claim. \smallskip 2. Since $\delta(G_1)\le \delta(G_0)$ is obvious, we only have to show that $\delta(G_0)\le \delta(G_1)$. We show the following slightly stronger result. Let $B \in \mathcal{B}(G_0)$ and $z \in \mathsf{Z}(B)$. Then $\delta(z)\le \delta(\overline{\psi}(z))$. Let $F$ and $z= z_1z_2$ be defined as above, and let $z_1=\prod_{i=1}^{|F|}A_i$ with $A_i^+ = k_id$, where $k_i\in \mathbb{N}$. Moreover, let $z' = \overline{\psi}(z)$ and let $z' = z_1'z_2'$ with $z_1' = \overline{\psi}(z_1)$ and $z_2'=\overline{\psi}(z_2)=z_2$. Additionally, let $y_i'= \overline{\psi}(A_i)$ for each $i$. Let $j \in \mathbb{Z}$ be such that $|z|$ and $|z|+j$ are adjacent lengths of $\mathsf L (B)$. By the already established result for sets of lengths, it follows that $|\overline{\psi}(z)|$ and $|\overline{\psi}(z)|+j$ are adjacent lengths of $\mathsf L (\psi(B))$. Thus, by definition, there exists some factorization $x'\in \mathsf{Z}(\psi(B))$ with $|x'|= |\overline{\psi}(z)|+j$ and $\mathsf d(x', \overline{\psi}(z))\le \delta(\overline{\psi}(z))$. Let $x'=x_1'x_2'$ with $x_1'$ minimal such that $d^{\sigma(F)/d}\mid \pi(x_1')$.
We note that \begin{equation}\label{kio1}\mathsf d(z',x')= \mathsf d(z_1',x_1') + \mathsf d(z_2',x_2').\end{equation} Thus, by re-indexing appropriately, we find a \begin{equation}\label{kio2}t\leq \mathsf d(z_1',x_1')\end{equation} such that $\prod_{i=t+1}^{|F|}y_i'\mid x_1'$. Let $x_1''= x'_1\left(\prod_{i=t+1}^{|F|}y_i'\right)^{-1}$. As we argued at the end of part 1, there exist, for $i\leq t$, factorizations $y''_i\in \mathsf Z(\psi(B))$, each containing exactly $|y''_i|=k_i$ atoms, such that $\prod_{i=1}^t y_i''=x_1''$. For $i \leq t$, let $A_i'= d^{-k_i}(k_id)\pi(y_i'')$, and for $i\in [t+1,|F|]$, let $A'_i=A_i$. Then, with $x_1=\prod_{i=1}^{|F|}A'_i$ and $x_2=x_2'$, we have that $x=x_1x_2$ is a factorization of $B$, and since $\overline{\psi}(x) = x_1''(\prod_{i=t+1}^{|F|}y_i')x_2'=x_1'x_2'$, we get that $|x|-|z|=|\overline{\psi}(x)|-|\overline{\psi}(z)|=|x'|-|\overline{\psi}(z)|= j$. Finally, using \eqref{kio1} and \eqref{kio2}, we have $$\mathsf d(z,x)\le \mathsf d(z_1,x_1)+\mathsf d(z_2,x_2) \le t+ \mathsf d(z_2,x_2)\le \mathsf d(z_1',x_1') + \mathsf d(z_2',x_2')= \mathsf d(z',x'),$$ establishing the claim. \smallskip 3. The argument is just a variation on the proof of parts 1 and 2. \smallskip We now address the additional statements. Suppose that $F_0$ is finite. By Proposition \ref{nice_prop}, we know that the Structure Theorem holds for $\mathcal{B}(F_0 \cup \{d\})$ and that $\delta(F_0\cup \{d\})$ is finite. Thus, by parts 1 and 2, we get that the Structure Theorem holds for $\mathcal{B}(F_0 \cup d \mathbb{N})$ and that $\delta(F_0\cup d \mathbb{N})$ is finite. Since $\delta(F_0\cup d \mathbb{N})$ is finite, Proposition \ref{6.4} implies that $\mathsf c_{{\text{\rm mon}}}(F_0 \cup d \mathbb{N})$ is finite. \end{proof} The systems of sets of lengths of $\mathcal{B}(F_0 \cup d\mathbb{N})$ and $\mathcal{B}(F_0\cup \{d\})$ are very closely related, but they are different in general.
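For a concrete illustration of the maps $\psi$ and $\overline{\psi}$ and of the length shift from the proof of part 1, consider the following toy example (ours, for illustration only; it is not needed in the sequel).

```latex
% Toy example (illustration only): d = 2 and F_0 = {-2,-1}.
Let $d=2$ and $F_0=\{-2,-1\}$, and consider
$B = 4\,(-1)^2(-2)\in\mathcal B(F_0\cup 2\mathbb N)$.
Then $B$ is itself an atom, so $\mathsf L(B)=\{1\}$, while
\[
  \psi(B)=2^2(-1)^2(-2)
  \qquad\text{has the unique factorization}\qquad
  \bigl(2(-1)^2\bigr)\cdot\bigl(2(-2)\bigr),
\]
so $\mathsf L(\psi(B))=\{2\}$. Here $F=4$, and the shift
$|\overline{\psi}(z)|-|z| = \sigma(F)/d-|F| = 4/2-1 = 1$ accounts exactly
for the difference between the two sets of lengths.
```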
For finite $F_0$, the elasticity of $\mathcal{B}(F_0\cup \{d\})$ is accepted (Proposition \ref{nice_prop}), yet we see in Corollary \ref{STSL_cor} that this is in general not the case for $\mathcal{B}(F_0 \cup d\mathbb{N})$. \begin{proposition} \label{STSL_prop2} Let $d \in \mathbb{N}_{\ge 2}$ and $G_0 = \{-d, -1\} \cup (1 + d\mathbb{N}_0) \cup d \mathbb{N}_0$. \begin{enumerate} \item The Structure Theorem holds for $\mathcal B (G_0)$. More precisely, each $L \in \mathcal{L}(G_0)$ is an arithmetical progression with difference $d-1$. \smallskip \item For each $B \in \mathcal{B}(G_0)$ and adjacent lengths $k$ and $l$ of $\mathsf L (B)$, we have $\mathsf d(\mathsf{Z}_k(B),\mathsf{Z}_l(B))=d+1$. \smallskip \item $\delta(G_0)= \infty$. \smallskip \item $\mathsf c_{{\text{\rm mon}}}(G_0) = d+1$. \end{enumerate} \end{proposition} \begin{proof} Before turning to the arguments for the individual parts, we make some general remarks, beginning with an investigation of $\mathcal{A}(G_0)$. Let $A \in \mathcal{A}(G_0)$. If $kd \mid A$ for some $k \in \mathbb{N}_0$, then $A=(kd)(-1)^{dl}(-d)^{k-l}$ for some $l \in [0,k]$. In particular, we thus have two atoms containing $d$, namely $U_1= d(-1)^d$ and $U_d = d(-d)$. Suppose $\supp(A) \cap d \mathbb{N}_0 = \emptyset$. Then $A^+ = \prod_{i=1}^{|A^+|}(1+k_id)$ with $k_i\in \mathbb{N}_0$. It follows that $|A^+|\in \{1,d\}$. Moreover, if $|A^+|=d$, then $-1 \nmid A$ and therefore $A=A^+(-d)^{\sigma(A^+)/d}$. Thus, either $|A^+|=1$ or else $|A^+|=d$ and $A=A^+(-d)^{\sigma(A^+)/d}$. Conversely, each zero-sum sequence $B\in \mathcal{B}(G_0\setminus \{0\})$ with $B^+ =\prod_{i=1}^d (1+k_id)$, $k_i \in \mathbb{N}_0$ and $-1 \notin \supp(B^-)$ is an atom. Let $B \in \mathcal{B}(G_0 \setminus \{0\})$ and let $z \in \mathsf{Z}(B)$. In view of the considerations just made, there exists a unique decomposition $z=z_1z_d$ such that for each $A\mid z_1$ we have $|A^+|=1$ and for each $A \mid z_d$ we have $|A^+|=d$. We denote $|z_d|$ by $t_d(z)$.
Since $|B^+|=|z_1|+d |z_d|$, it follows that \begin{equation} \label{STSL_eq2} |z|=|z_1|+|z_d|= |B^+| - (d-1)|z_d| = |B^+| - (d-1)t_d(z), \end{equation} i.e., $|z|$ is determined by $B^+$ and $t_d(z)$. By Proposition \ref{STSL_prop1} and since $0$ is a prime, it suffices to consider the set $G_1 = \{-d, -1\} \cup (1 + d\mathbb{N}_0) \cup \{d\}$ for the proof of parts 1 and 3. \bigskip 1. Let $B \in \mathcal{B}(G_1)$. Let $z \in \mathsf{Z}(B)$ and let $z=z_1z_d$ be defined as above. We observe that, since $\mathsf{v}_{-1}(A)\ge 1$ for each $A$ that neither fulfils $|A^+|=d$ nor equals $U_d$, we have \begin{equation} \label{STSL_eq} t_d(z)\ge \frac{\bigl(|B^+| - \mathsf{v}_{d}(B)\bigr) - \mathsf{v}_{-1}(B)}{d}. \end{equation} By \eqref{STSL_eq2}, we get that $\mathsf{L}(B)$ is contained in an arithmetical progression with difference $(d-1)$. In view of this, it suffices to establish the following claim. \noindent \textbf{Claim 1:} If $|z|< \max \mathsf{L}(B)$, then there exists some $z'\in \mathsf{Z}(B)$ with $|z'|= |z|+(d-1)$, and hence $t_d(z')= t_d(z)-1$; moreover, $\mathsf d(z,z')\leq d+1$. To prove this, we first investigate the case $|z|= \max \mathsf{L}(B)$. \noindent \textbf{Claim 2:} If $t_d(z)=0$ or $\mathsf{v}_{-1}(A)\le 1$ for each $A \mid z$, then $|z|= \max \mathsf{L}(B)$. \noindent \emph{Proof of Claim 2.} If $t_d(z)=0$, the claim is clear by \eqref{STSL_eq2}. Thus, assume $\mathsf{v}_{-1}(A)\le 1$ for each $A \mid z$. In view of the characterization of atoms, it follows that $z_1=z_1'U_d^{\mathsf{v}_d(B)}$ and $\mathsf{v}_{-1}(A)=1$ for each atom $A \mid z_1'$. In particular, we have $|z_1|= \mathsf{v}_{-1}(B)+ \mathsf{v}_{d}(B)$. In view of $d\cdot t_d(z)= |B^+|- |z_1|$, this implies \[ t_d(z)= \frac{\bigl(|B^+| - \mathsf{v}_{d}(B)\bigr) - \mathsf{v}_{-1}(B)}{d}.\] Thus equality holds in \eqref{STSL_eq}, which by \eqref{STSL_eq2} implies that $|z|$ is maximal.
\noindent \emph{Proof of Claim 1.} Suppose $|z|< \max \mathsf{L}(B)$. By Claim 2, we know that $t_d(z)> 0$ and that there exists some atom $C \mid z$ such that $\mathsf{v}_{-1}(C)> 1$. In view of the characterization of atoms given above, we have $\mathsf{v}_{-1}(C) \ge d$ and $|C^+|=1$. Since $t_d(z)>0$, there exists some atom $A_d\mid z$ with $|A_d^+| = d$. Let $z= A_dCz_0$. We consider the zero-sum sequences $B_1=(-d)^{-1}A_d(-1)^d$ and $B_2= (-1)^{-d}C(-d)$. Clearly, $\pi(B_1B_2z_0)=B$. We note that $B_2$ is an atom as $|B_2^+|=1$. Yet, since $|B_1^+|=d$ but $\mathsf{v}_{-1}(B_1)\ge 1$, we get that $B_1$ is not an atom; more precisely, $\max\mathsf{L}(B_1)=d$. Thus, replacing the two atoms $A_d$ and $C$ in $z$ by the atom $B_2$ and any factorization of length $d$ of $B_1$ establishes the claim. \smallskip 2. By Proposition \ref{STSL_prop1}.3 and since $0$ is a prime, it suffices to consider $G_1$ for finding an upper bound on $\mathsf d(\mathsf{Z}_k(B),\mathsf{Z}_l(B))$. Thus, by Claim 1, we get that $\mathsf d(\mathsf{Z}_k(B),\mathsf{Z}_l(B))\le d+1$. The converse inequality follows by \eqref{E:Dist} in view of Proposition \ref{STSL_prop1}.3 and part 1. \smallskip 3. We consider $B=(1+kd)^{d}d^{1+kd}(-d)^{1+kd}(-1)^{d(1+ kd)}$. We note that $\mathsf{L}(B)= \{2+kd, 1+d+kd\}$ and $z=\bigl((1+kd)^{d}(-d)^{1+kd}\bigr)\cdot (d(-1)^d)^{1+kd}$ is its only factorization of length $2+kd$. The factorization $z'=\bigl((1+kd)(-1)^{1+kd}\bigr)^d\cdot \bigl(d(-d)\bigr)^{1+kd}$ has length $1+d+kd$ and $\mathsf d(z',z)= |z'|= 1+d+kd$, implying that $\delta(B)\ge 1+d+kd$, and the claim follows by letting $k\rightarrow \infty$. \smallskip 4. By part 2 and since $0$ is prime, it is sufficient to show that, for any two factorizations $z,\,y\in \mathsf Z(G_0\setminus\{0\})$ with $\pi(z)=\pi(y)$, the following holds: if $|z|=|y|$, then $z$ and $y$ can be concatenated by a monotone $2$-chain.
Clearly, in this case monotone means that each factorization in this chain has length $|z|$, i.e., we claim that $z$ and $y$ can be concatenated by a $2$-chain in $\mathsf{Z}_{|z|}(\pi(z))$. We proceed by induction on $|z|$. Let $z,\,y\in \mathsf Z(G_0)$ with $\pi(z)=\pi(y)$ and suppose that $|z|=|y|$. If $|z|=1$, the statement is trivial. Thus, assume $|z|\ge 2$ and that the statement is true for factorizations of length at most $|z|-1$. We make the following claim. \noindent \textbf{Claim 3:} There exist $z',\,y'\in \mathsf Z(\pi(z))$ with $|z'|=|y'|=|z|$ such that $z$ and $z'$, as well as $y$ and $y'$, can be concatenated by a $2$-chain in $\mathsf{Z}_{|z|}(\pi(z))$ and $\gcd\{z',y'\}\neq 1$. We assume this claim is true and complete the argument. Let $z'$ and $y'$ be factorizations with the claimed properties and let $U \in \mathcal{A}(G_0)$ with $U \mid \gcd \{z',y'\}$. We set $z'' = U^{-1}z'$ and $y'' = U^{-1}y'$. By induction hypothesis, there exists a $2$-chain $z''=z_0'',z_1'', \dots, z_s''=y''$ in $\mathsf{Z}_{|z''|}(\pi(U^{-1}z'))$. We note that $U\cdot z_i'' \in \mathsf{Z}_{|z|}(\pi(z))$ for each $i \in [0,s]$. Thus, $z'$ and $y'$ can be concatenated by a $2$-chain in $\mathsf{Z}_{|z|}(\pi(z))$. Combining these three chains, the result follows. \noindent \emph{Proof of Claim 3.} If $0 \mid z$, then $0 \mid y$ and the claim is trivial. Thus, assume $0 \nmid z$. Let $z=z_1z_d$ and $y=y_1y_d$ be as defined at the beginning of the proof and recall that $|z|= |y|$ is equivalent to $t_d(z)=t_d(y)$. Before starting the actual argument, we make three subclaims. \noindent \textbf{Claim 3.1:} Let $h \mid \pi(z_1)$ and $g \mid \pi(z_d)$ with $g,h \in 1+ d\mathbb{N}_0$ and $h \le g$. Then there exists a factorization $x$ of $\pi(z)$ such that, with $x=x_1x_d$ as above, $\pi(x_1)^+= \pi(z_1)^+gh^{-1}$ and $\pi(x_d)^+ = \pi(z_d)^+hg^{-1}$ and $\mathsf{d}(z,x)\le 2$; in particular, $|x|=|z|$. 
To see this, let $A_h \mid z_1$ and $A_g\mid z_d$ with $h \mid A_h$ and $g \mid A_g$. We set $A_h'= hA_g g^{-1}(-d)^{-(g-h)/d}$ and $A_g'= g A_h h^{-1}(-d)^{(g-h)/d}$. Note that this process is well-defined and that $A_g'$ and $A_h'$ are atoms by the above characterization of atoms. Let $x=zA_g'A_h'A_g^{-1}A_h^{-1}$. Noting that $x_1= A_g'A_h^{-1}z_1$ and $x_d= A_h'A_g^{-1}z_d$, the claim is established. \noindent \textbf{Claim 3.2:} Suppose that $t_d(z)=0$. Then $z$ and $y$ can be concatenated by a $2$-chain in $\mathsf{Z}_{|z|}(\pi(z))$. Informally, each atom in $z$ and $y$ contains exactly one positive element, hence distinct atoms containing the same positive element can only differ in the negative part. Successively exchanging $(-1)^d$ for $-d$ and vice versa, for suitable pairs of atoms, we can construct such a chain. To give a formal argument, we use the material of Section 7 below, which is independent of the present section. Note that, in this case, $|z|=|y|=|\pi(z)^+|$ and $\mathcal A(\mathcal E(G_P^-))=\{(-d,-d), (-1,-1), ((-1)^d,-d), (-d,(-1)^d)\}$. Thus $G'\cong \mathbb Z$ with $G_0'=\{0,1,-1\}$, where $G'$ and $G_0'$ are as defined before Theorem \ref{prop-chain}, whence $\mathsf D(\mathcal S(G_P^-),\mathcal E(G_P^-))=2$ by \eqref{teent}. Hence Theorem \ref{prop-chain} shows that there is a $2$-chain concatenating $z$ and $y$. \noindent \textbf{Claim 3.3:} Suppose that $t_d(z)=|z|$. Then $z$ and $y$ can be concatenated by a $2$-chain in $\mathsf{Z}_{|z|}(\pi(z))$. Informally, since in this case $\supp(\pi(z))\cap G_0^-=\{-d\}$, we can apply an argument similar to the one in Claim 3.1, without the additional condition on the relative sizes of $g$ and $h$. To get a formal argument, note that in this case $\pi(z) \in \mathcal{B}(G_0 \setminus \{-1\})$. By Lemma \ref{transfer-to-finite}, we get that the block monoid associated to $\mathcal{B}(G_0 \setminus \{-1\})$ is $\mathcal{B}(\{0 + d \mathbb{Z},1 + d\mathbb{Z}\})\subset \mathcal B(\mathbb Z/d\mathbb Z)$.
However, $\mathcal{B}(\{0 + d \mathbb{Z},1 + d\mathbb{Z}\})$ is factorial, and thus its catenary degree is $0$; also note that $\mathcal{B}(G_0 \setminus \{-1\})$ is thus half-factorial. Since the catenary degree in the fibers of the block homomorphism is $2$ (see Lemma \ref{3.3}), the claim follows. Now, we give the actual proof of Claim 3. In view of Claim 3.2, we may assume that $t_d(z)> 0$. Hence, let $S \mid \pi(z)$ be a subsequence with $\supp(S) \subset 1 +d \mathbb{N}_0$ and $|S|=d$. Moreover, assume that $\sigma(S)$ is minimal among all such subsequences of $\pi(z)$. We assert that there exists some $x' \in \mathsf{Z}_{|z|}(\pi(z))$ such that $S \mid \pi(x_d')$ and $z$ and $x'$ can be concatenated by a $2$-chain in $\mathsf{Z}_{|z|}(\pi(z))$. Let $x' \in \mathsf{Z}_{|z|}(\pi(z))$ be a factorization such that $z$ and $x'$ can be concatenated by a $2$-chain in $\mathsf{Z}_{|z|}(\pi(z))$ and such that $S'= \gcd\{\pi(x_d'), S\}$ is maximal. We show that $S'=S$. Assume to the contrary that $S'\neq S$. Let $h \mid \pi(x_1')$ with $hS'\mid S$. We observe that there exists some $g \mid S'^{-1}\pi(x_d')$ with $g \in 1 + d \mathbb{N}_0$ and $g \ge h$; otherwise, the sequence $gh^{-1}S$ would contradict the minimality of $\sigma(S)$. We apply Claim 3.1 to $x'$ (with these elements $g$ and $h$) and denote the resulting factorization by $x''$. Since it can be concatenated to $z$ by a $2$-chain in $\mathsf{Z}_{|z|}(\pi(z))$ and yet $hS'\mid \gcd\{\pi(x_d''), S\}$, its existence contradicts the maximality of $S'$ for $x'$. Thus $S'=S$. Since $S \mid \pi(x_d')$, we have that $U= S(-d)^{\sigma(S)/d}\mid \pi(x'_d)$. Let $z_d'\in \mathsf{Z}(\pi(x'_d))$ with $U \mid z_d'$. Since $t_d(x'_d)= |x'_d|$, Claim 3.3 applied to $x'_d$ yields that $x_d'$ and $z_d'$ can be concatenated by a $2$-chain in $\mathsf{Z}_{|x_d'|}(\pi(x_d'))$.
We set $z'= z_d'x_1'$ and observe that $x'$ and $z'$, and thus $z$ and $z'$, can be concatenated by a $2$-chain in $\mathsf{Z}_{|z|}(\pi(z))$ and $U \mid z'$. In the same way, noting that $S$ depends only on $\pi(z)$ and not on $z$, we get a factorization $y' \in \mathsf{Z}_{|z|}(\pi(z))$ with $U \mid y'$ such that $y$ and $y'$ can be concatenated by a $2$-chain in $\mathsf{Z}_{|z|}(\pi(z))$. Since $U \mid \gcd\{z', y'\}$, the claim is established. \end{proof} \medskip \begin{proof}[{\bf Proof of Theorem \ref{STSL_thm}}] By Lemma \ref{3.3}, it suffices to consider $\mathcal B (G_P)$. The case $d=1$ is trivial. Suppose $d \ge 2$. One direction is merely Proposition \ref{STSL_prop}. The other one follows, for the first type of set, by Proposition \ref{STSL_prop1}, and for the second type of set, by Proposition \ref{STSL_prop2}. \end{proof} By \cite{An-Ch-Sm94c}, it is known that Krull monoids with infinite cyclic class group can have finite, non-accepted elasticity. The following result shows that, even if the Structure Theorem holds, the elasticity is not necessarily accepted. \begin{corollary} \label{STSL_cor} Let all assumptions be as in Theorem \ref{STSL_thm}. Suppose that the Structure Theorem holds for $H$. Then exactly one of the following two statements holds{\rm \,:} \begin{enumerate} \item[(a)] $H$ is half-factorial or $G_P$ is finite. \item[(b)] $\rho(H)=d$ and the elasticity is not accepted. \end{enumerate} \end{corollary} \begin{proof} Half-factorial monoids obviously have accepted elasticity and monoids with $G_P$ finite also have accepted elasticity (Proposition \ref{nice_prop}). Thus, we assume that $H$ is not half-factorial and that $G_P$ is infinite, and show that under these assumptions $\rho(H)=d$ and the elasticity is not accepted. Note that since $H$ is not half-factorial, we have $d \ge 2$. We recall that if $A \in \mathcal{A}(G_P)$ with $(-1)\mid A$, then $|A^+|=1$ (as explained in the proof of Proposition \ref{STSL_prop2}).
Let $B \in \mathcal{B}(G_P)$. We show that $\rho(B)< d$. Assume to the contrary that $\rho(B)\ge d$. That is, there exist $z,z'\in\mathsf{Z}(B)$ such that $|z'|/|z| \ge d$. By Lemma \ref{Lambert}, we know that $|A^+|\le d$ for each $A\in \mathcal{A}(G_P)$. Thus, we get $|z| \ge \mathsf{v}_0(z) + |B^+|/d$, whereas clearly $|z'| \le \mathsf{v}_0(z') + |B^+|$. Consequently, we have $\rho(B)\le d$, and $\rho(B)=d$ is equivalent to the following: $|A^+|=d$ for each atom $A \mid z$ and $|A'^+|=1$ for each atom $A' \mid z'$. It follows that $\mathsf{v}_{-1}(B)=0$, i.e., $B \in \mathcal{B}(G_P\setminus \{-1\})$. By \cite{An-Ch-Sm94c}, or Lemma \ref{transfer-to-finite} and \cite[Proposition 6.3.1]{Ge-HK06a}, we get that $\rho (\mathcal{B}(G_P\setminus \{-1\}))\le \rho(\mathbb{Z}/d \mathbb{Z}) = d/2 < d$, a contradiction. It remains to show that $\rho(G_P) \ge d$. We may assume that $0 \notin G_P$. We note the existence of the two atoms $1(-1)$ and $1^d(-d)$ in $\mathcal{A}(G_P)$. Thus, $1$ and $1^d$ are elements of $\mathcal{A}(G_P)^+$. Hence $\rho^{\kappa}(1^d)\ge d$, and the claim follows by Lemma \ref{rel_lem_eq}. \end{proof} Our proofs that the Structure Theorem does not hold rely on the existence of a single exceptional factorization, yet the following example illustrates that sets of lengths can deviate by more than a single element (or a globally bounded number of elements) from being an AAMP. \begin{example} Let $d, k,\ell \in \mathbb{N}$ and $e \in [1,d-1]$, and set $B = (e+ k d)( -e + \ell d)1^{(k+\ell)d}(-1)^{(k+\ell)d}(-d)^{k +\ell}$. Then \[ \begin{split} \mathsf{L}(B)= \{1+ k+ \ell + (k+\ell)(d-1)\} & \cup \{1 + e +k+ \ell + i(d-1) \mid i \in [ k, k+\ell-1]\} \\ & \cup \{ 2 -e + k + \ell+ i(d-1) \mid i \in [\ell, \ell+ k]\} \\ & \cup \{ 2 + k + \ell + i(d-1) \mid i \in [0, k +\ell -1] \}.
\end{split} \] \end{example} \section{Chains of factorizations} \label{7} In a large class of monoids and domains satisfying natural (algebraic) finiteness conditions, the catenary degree is finite (see \cite{Ge-HK06a} for an overview and \cite{C-G-L-P-R06, Ge-Ha08b, C-G-L09, Ka10a} for some recent work). However, the understanding of the structure of the concatenating chains is still very limited. On the one hand, the finiteness of the monotone catenary degree is a rare phenomenon (inside the class of objects having finite catenary degree); on the other hand, the following two positive phenomena have been observed. First, in a large class of monoids, all problems with the monotonicity of concatenating chains occur only at the beginning and the end of concatenating chains (\cite[Theorem 1.1]{Fo-Ge05}, \cite[Theorem 3.1]{Fo-Ha06b}). Second, in various settings, there is a large subset consisting of `big' elements having extremely nice concatenating chains (see \cite[Theorem 4.3]{Ge97d}, \cite[Theorems 7.6.9 and 9.4.11]{Ge-HK06a}). Let $H$ be a Krull monoid with infinite cyclic class group and $G_P \subset G$ as always. By Theorem \ref{main-theorem-II}, it suffices to consider the situation where $G_P^+$ is infinite and $2 \le |G_P^-| < \infty$. Our first result points out that, in general, the monotone catenary degree is infinite. In contrast to this, the main result (Corollary \ref{final-cor}) shows that there is a constant $M^*$ such that, for a large class of elements $a$, any two factorizations $z$ and $y$ of $a$ with $y$ having maximal length can be concatenated by a monotone $M^*$-chain of factorizations and thus, for arbitrary factorizations $z$ and $y$ of $a$, neither of which need be of maximal length, there is an $M^*$-chain between $z$ and $y$ which `changes direction' at most once.
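Spelled out (our paraphrase; the precise statement is Corollary \ref{final-cor}), the last assertion combines two monotone chains into one chain with a single change of direction:

```latex
% Sketch (paraphrase of the preceding sentence, not a new result).
If $y^{*}\in\mathsf Z(a)$ is a factorization of maximal length, then there are
monotone $M^{*}$-chains
\[
  z = z_0,\, z_1,\, \dots,\, z_j = y^{*}
  \qquad\text{and}\qquad
  y = y_0,\, y_1,\, \dots,\, y_k = y^{*},
\]
say with $|z_0|\le\dots\le |z_j|$ and $|y_0|\le\dots\le|y_k|$. Their
concatenation
\[
  z = z_0,\, \dots,\, z_j = y^{*} = y_k,\, \dots,\, y_0 = y
\]
is an $M^{*}$-chain from $z$ to $y$ whose lengths first (weakly) increase and
then (weakly) decrease, i.e., it `changes direction' at most once.
```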
\begin{proposition} \label{7.1} Let $H$ be a Krull monoid and $\varphi \colon H\to \mathcal{F}(P)$ a cofinal divisor homomorphism into a free monoid such that the class group $G= \mathcal{C}(\varphi)$ is an infinite cyclic group that we identify with $\mathbb Z$. Let $G_P \subset G$ denote the set of classes containing prime divisors. Suppose that $-d_1,-d_2,d_1d_2\in G_P$, where $3\leq d_1<d_2$, $\gcd(d_1,d_2)=1$ and $d_1-1\nmid d_2-1$, and that $G_P$ contains infinitely many positive integers congruent to $d_1+d_2$ modulo $d_1d_2$. Let $d=\gcd(d_1-1,d_2-1)$. Then, for every $M,\,N\geq 0$, there exist $a \in H$ and $z,\,z'\in \mathsf Z(a)$ such that \begin{eqnarray} \label{z'-size}&&|z'|=|z|+d\leq |z|+d_1-2,\\&&\label{notnearedges} |z|\in [\min \mathsf L (a) + N, \, \max \mathsf L (a) - N], \mbox{ and}\\ \label{distance-thingy}&&\mathsf d \Bigl(z , \bigcup_{i=1}^{|z|+d_1-2}\mathsf Z_i(a)\setminus \{z \} \Bigr) > M.\end{eqnarray} In particular, $\mathsf c_{{\text{\rm mon}}} (H) = \infty$ and $\delta(H)=\infty$. \end{proposition} \begin{proof} That $\mathsf c_{{\text{\rm mon}}} (H)=\delta(H) = \infty$ follows from \eqref{z'-size} and \eqref{distance-thingy}, so we need only show \eqref{z'-size}, \eqref{notnearedges} and \eqref{distance-thingy} hold. By Lemma \ref{3.3}, it suffices to prove the assertions for $\mathcal B (G_P)$. We may also w.l.o.g. assume $$N\geq d_2-1\;\mbox{ and } \; M\geq d_1,$$ since the assertion for large values of $M$ and $N$ implies it for all smaller values. \smallskip In view of the hypotheses, there exists $L\in G_P$ with \begin{eqnarray} \label{that}L&>& d_2M\geq d_1d_2,\\ \label{L-congruences} L\equiv d_1\mod d_2&\;\mbox{ and } \;& L\equiv d_2 \mod d_1.\end{eqnarray} Let $B\in \mathcal B(\{d_1d_2,-d_1,-d_2,L\})\subset \mathcal B(G_P)$ be the sequence \[ B = L^{2d_1d_2N}(-d_2)^{2d_1LN}(-d_1)^{2d_2LN}(d_1d_2)^{2LN} \,.
\] Let $$A_1=L^{d_1}(-d_1)^L \;\mbox{ and } \; A_{2}=L^{d_2}(-d_2)^L.$$ Since $\gcd(d_1,d_2)=1$, it follows, in view of \eqref{L-congruences} and by reducing modulo $d_1$ and $d_2$, respectively, that $A_1$ and $A_{2}$ are both atoms. Also define \[ B_1=(d_1d_2)(-d_1)^{d_2}\;\mbox{ and } \; B_{2}= (d_1d_2)(-d_2)^{d_1} \,, \] which, since they both contain exactly one positive integer, must also be atoms. In view of \eqref{L-congruences}, define \[ A_0=L(-d_2)^{\frac{L-d_1}{d_2}}(-d_1), \] which is an atom for the same reasons as those for the $B_i$. Let $z\in \mathsf Z (B)$ be given by \[ z=A_1^{d_2N} A_{2}^{d_1N} B_1^{LN} B_{2}^{LN} \,. \] Since $d=\gcd(d_1-1,d_2-1)$, it follows that there exists an integer $l\in [1,d_2-1]$ such that $$l(d_2-d_1)\equiv -d\mod d_2-1.$$ Let \begin{equation}\label{lprime} l' = \frac{l(d_2-d_1)+d}{d_2-1}\in \mathbb{N}.\end{equation} Then, since $d=\gcd(d_1-1,d_2-1)\leq d_1-1$, it follows that $1\leq l'\leq l\leq d_2-1$. Note that we have the identities $$\pi(A_1^{d_2}B_2^L)=\pi(A_2^{d_1}B_1^L)\;\mbox{ and } \; \pi(A_2B_1)=\pi(A_0^{d_2}B_2).$$ Thus, by considering the definition of $z$ and recalling that $N\geq d_2-1\geq l\geq l'$, we see that \[ z' = A_1^{d_2N-ld_2} A_{2}^{d_1N+ld_1-l'} A_0^{l'd_2} B_1^{LN+lL-l'} B_{2}^{LN-lL+l'} \] is another factorization $z'\in \mathsf Z(B)$ besides $z$. Note that $|z'|-|z|=-l(d_2-d_1)+l'(d_2-1)=d$. Moreover, since $d_1-1\nmid d_2-1$, $d_1<d_2$ and $\gcd(d_1-1,d_2-1)=d$, it follows that $d<d_1-1$. Thus \eqref{z'-size} holds. Also, the factorizations $$A_2^{2d_1N} B_{1}^{2LN}\in \mathsf Z (B)\;\mbox{ and } \; A_{1}^{2d_2N} B_{2}^{2LN}\in \mathsf Z (B)$$ show that \[ \min \mathsf L (B)+N \leq \min \mathsf L (B)+(d_2-d_1)N \leq |z|\leq \max \mathsf L(B)-(d_2-d_1)N \leq \max \mathsf L(B)-N \,, \] whence \eqref{notnearedges} holds. It remains to establish \eqref{distance-thingy}. We begin with the following claim. 
\noindent {\bf Claim 1:} If $A|B$ is an atom with $d_1d_2\in \supp(A)$, then $d_1d_2$ is the only positive element dividing $A$ and $\mathsf v_{d_1d_2}(A)=1$. Suppose instead that $a \mid A (d_1d_2)^{-1}$ with $a\in \{L,d_1d_2\}$. Then we must have $\mathsf v_{-d_1}(A)<d_2$ and $\mathsf v_{-d_2}(A)<d_1$, else $(d_1d_2)(-d_1)^{d_2}$ or $(d_1d_2)(-d_2)^{d_1}$ would be a proper, nontrivial zero-sum subsequence dividing $A$, contradicting that $A$ is an atom. But now (in view of \eqref{that}) $$2d_1d_2>-\sigma (A^-)=\sigma (A^+)\geq a+d_1d_2\geq \min\{L,d_1d_2\}+d_1d_2=2d_1d_2,$$ a contradiction. So Claim 1 is established. \bigskip In view of Claim 1, we see that, in any factorization $y$ of $B$, there will always be $2LN$ atoms $A$ having $A (d_1d_2)^{-1}$ consisting entirely of negative terms. Thus the length of any factorization of $B$ is determined entirely by the number of atoms containing an $L$. Moreover, by considering sums modulo $d_i$, we find (in view of \eqref{L-congruences} and $\gcd(d_1,d_2)=1$) that $(d_1d_2)(-d_1)^{d_2}$ and $(d_1d_2)(-d_2)^{d_1}$ are the only atoms dividing $B$ which contain $d_1d_2$. As a result, the factorization of $B$ is in fact completely determined by how the $2d_1d_2N$ terms equal to $L$ are factored (that is, if $y_L|y$ is the subfactorization consisting of all atoms containing an $L$, then $\pi(y_L^{-1}y)$ has a unique factorization, which will always have length $2LN$). We continue with the next claim. \noindent {\bf Claim 2:} If $A|B$ is an atom with $L,-d_1,-d_2\in \supp(A)$, then $\mathsf v_L(A)=1$. Suppose instead that $L^2|A$. In view of \eqref{L-congruences} and \eqref{that}, both $\frac{L-d_1}{d_2}$ and $\frac{L-d_2}{d_1}$ are positive integers.
Consequently, we must have $\mathsf v_{-d_1}(A)<\frac{L-d_2}{d_1}$ and $\mathsf v_{-d_2}(A)<\frac{L-d_1}{d_2}$, else $$L(-d_1)^{(L-d_2)/d_1}(-d_2) \; \mbox{ or }\; L(-d_2)^{(L-d_1)/d_2}(-d_1)$$ would be a proper, nontrivial zero-sum subsequence dividing $A$, contradicting that $A$ is an atom. But now $$2L-d_1-d_2>-\sigma (A^-)=\sigma (A^+)\geq 2L,$$ a contradiction. So Claim 2 is established. \bigskip In view of \eqref{L-congruences}, $\gcd(d_1,d_2)=1$ and Claims 1 and 2, we see that if $A|B$ is an atom with $L\in \supp(A)$, then either\begin{itemize} \item[(a)] $A=A_1$ and $\mathsf v_{-d_2}(A)=0$, \item[(b)] $A=A_{2}$ and $\mathsf v_{-d_1}(A)=0$, or \item[(c)] $\mathsf v_L(A)=1$ and $\mathsf v_{d_1d_2}(A)=0$.\end{itemize} Let $y\in \mathsf Z(B)$ be a factorization with $\mathsf d(z,y)\leq M$ and let $y_L|y$ and $z_L|z$ be the corresponding sub-factorizations consisting of all atoms which contain an $L$. In view of the definition of $z$, since $\mathsf d(z,y)\leq M$ and $L>d_2M$ (by \eqref{that}), and since $(d_1d_2)(-d_1)^{d_2}$ is the only atom containing a $-d_1$ in $z_L^{-1}z$, it follows that $$\mathsf v_{-d_1}(\pi(y_L))\leq \mathsf v_{-d_1}(\pi(z_L))+Md_2=d_2NL+Md_2<d_2NL+L;$$ thus the multiplicity $m_1$ of the atom $A_1$ in $y$ is at most $d_2N$ (since each such atom $A_1$ requires $L$ terms equal to $-d_1$). Likewise, $$\mathsf v_{-d_2}(\pi(y_L))\leq \mathsf v_{-d_2}(\pi(z_L))+Md_1= d_1NL+Md_1<d_1NL+L,$$ whence the multiplicity $m_{2}$ of the atom $A_{2}$ in $y$ is at most $d_1N$. Let $m_0$ be the number of atoms dividing $y$ containing exactly one term $L$. Hence, since all atoms containing an $L$ must be of one of the three previously described forms, it follows that \begin{equation}\label{tist}d_1m_1+d_2m_{2}+m_0=\mathsf v_{L}(B)=2d_1d_2N.\end{equation} Let $m'_0$, $m'_1$ and $m'_2$ be analogously defined for $z$ instead of $y$. Then $m'_0=0$, $m'_1=d_2N$ and $m'_2=d_1N$.
In view of \eqref{tist} and the comments after Claim 1, and since $m_1\leq d_2N=m'_1$ and $m_{2}\leq d_1N=m'_2$, it follows that $$|y|= |z|+(m'_1-m_1)(d_1-1)+(m'_2-m_2)(d_2-1)\geq |z|;$$ moreover, unless $m_1=m'_1$ and $m_{2}=m'_2$, we have $|y|\geq |z|+d_1-1$. On the other hand, if $m_1=m'_1=d_2N$ and $m_{2}=m'_2=d_1N$, then $m_0=0$ (in view of \eqref{tist}), whence $z_L=y_L$ (recalling that all atoms containing an $L$ must be of one of the three previously described forms), from which $z=y$ follows by the comments after the proof of Claim 1. Consequently, we conclude that $\mathsf d(z,y)\leq M$ implies either $y=z$ or $|y|\geq |z|+d_1-1$, which establishes \eqref{distance-thingy}, completing the proof. \end{proof} The following lemma helps describe when an atom can contain more than one positive term. \begin{lemma}\label{breakapart-lemma} Let $G_0\subset \mathbb Z$ be a condensed set such that $G_0^-$ is finite and nonempty. Let $M=|\min G_0|$, let $U\in \mathcal A(G_0)$ and let $R|U^-$ be the subsequence consisting of all negative integers with multiplicity at least $M-1$ in $U$. Suppose there is some $L\in \Sigma(U^+)\setminus \{\sigma(U^+)\}$ such that \begin{equation}\label{hyplem}|U^+|\geq 2,\quad L \geq (M-1)^2, \;\mbox{ and } \; \sigma(U^+)\geq L+(M-1)^2.\end{equation} Then the following statements hold: \begin{enumerate} \item There is some $a\in \supp(U)\cap G_0^-$ with $\mathsf v_{a}(U)\geq M-1$, i.e., $R$ is nontrivial. \smallskip \item For any such $a\in \supp(R)$, we have $(-L+a\mathbb Z)\cap \Sigma(U^-)=\emptyset$. \smallskip \item There exists a subsequence $R'|U^-$ with $R|R'$ such that $L\notin \langle \supp(R')\rangle=n\mathbb Z$ and $|{R'}^{-1}U^-|\leq n-2$; in particular, $\supp(R)\subset \supp(R')\subset n\mathbb Z$ does not generate $\mathbb Z$. \end{enumerate} \end{lemma} \begin{proof} 1. Let $U_L|U^+$ be a proper subsequence with sum equal to $L$. Note that $|G_0^-|\leq M$.
Thus $\sigma(U^+)\geq L\geq(M-1)^2> (M-2)|G_0^-|$, whence the pigeonhole principle implies that there is some $a\in \supp(U)\cap G_0^-$ with $\mathsf v_{a}(U)\geq M-1$. \medskip 2. Let $a|U^-$ with $\mathsf v_{a}(U)\geq M-1$ and let $\phi_a \colon \mathbb Z\rightarrow \mathbb Z/a\mathbb Z$ denote the natural homomorphism. We say that a sequence $T$ is a zero-sum sequence (zero-sum free, resp.) modulo $a$ if $\phi_a(T)\in \mathcal{F}(\mathbb Z/a\mathbb Z)$ has the respective property. Suppose $(-L+a\mathbb Z)\cap \Sigma(U^-)$ is nonempty and let $S$ be a zero-sum free modulo $a$ subsequence $S|U^-$ (possibly trivial) with $\sigma(S)\equiv -L\mod a$. Note that any zero-sum free modulo $a$ subsequence $T|U^-$ has length at most $\mathsf D(\mathbb Z/a\mathbb Z)-1=|a|-1$ \cite[Theorem 5.1.10]{Ge-HK06a}, and thus \begin{equation}\label{zsf-bound}|\sigma(T)|\leq (|a|-1)\cdot |\min \bigl( (\supp(U)\cap G_0^-)\setminus \{a\} \bigr)|\leq (M-1)^2\leq L;\end{equation} in particular, $|\sigma(S)|\leq (M-1)^2\leq L$. Now factor $S^{-1}U^-=S_0S_1 \cdot \ldots \cdot S_ta^{\mathsf v_{a}(U^-)}$, where $S_0$ is zero-sum free modulo $a$ and each $S_i$, for $i\geq 1$, is an atom modulo $a$. In view of $|\sigma(S_0)|\leq (M-1)^2$ (from \eqref{zsf-bound}) and the hypothesis $\sigma(U^+)\geq L+(M-1)^2$, we have \begin{equation}\label{tlot}|\sigma(SS_1\cdot\ldots\cdot S_t a^{\mathsf v_{a}(U^-)})|=|\sigma(S_0^{-1}U^-)|\geq L.\end{equation} If $|\sigma(SS_1\cdot \ldots \cdot S_t)|\leq L$, then it follows, in view of \eqref{tlot} and the definitions of $S$ and the $S_i$, that we can append on to $SS_1\cdot\ldots\cdot S_t$ a sufficient number of terms equal to $a$ so as to obtain a subsequence $B_L|S_0^{-1}U^-$ with $SS_1\cdot\ldots\cdot S_t|B_L$ and $\sigma(B_L)=-L$, and now $U_LB_L|U$ is a proper, nontrivial zero-sum subsequence, contradicting that $U$ is an atom. 
Therefore $|\sigma(SS_1\cdot \ldots \cdot S_t)|> L$, so let $t'<t$ be the maximal non-negative integer such that $|\sigma(SS_1\cdot \ldots \cdot S_{t'})|\leq L$, which exists in view of $|\sigma(S)|\leq (M-1)^2\leq L$. By its maximality, we have \begin{equation}\label{luckstack}|\sigma(S_1\cdot \ldots \cdot S_{t'})|> L-|\sigma(S)|-|\sigma(S_{t'+1})|\geq L-|\sigma(S)|-|a|M,\end{equation} where the second inequality follows by recalling that $S_{t'+1}$ is an atom modulo $a$ and thus has length at most $\mathsf D(\mathbb Z/a\mathbb Z)=|a|$. From the definitions of all respective quantities, both the left and right hand sides of \eqref{luckstack} are divisible by $a$, whence \[ |\sigma(S_1\cdot \ldots \cdot S_{t'})|\geq L-|\sigma(S)|-|a|(M-1) \,. \] But now we see, in view of $\mathsf v_{a}(U)\geq M-1$ and the definition of $t'$, that we can append on to $SS_1\cdot\ldots\cdot S_{t'}$ a sufficient number of terms equal to $a$ so as to obtain a subsequence $B_L|S_0^{-1}U^-$ with $SS_1\cdot\ldots\cdot S_{t'}|B_L$ and $\sigma(B_L)=-L$, once again contradicting that $U$ is an atom. So we conclude that $(-L+a\mathbb Z)\cap \Sigma(U^-)$ is empty. \medskip 3. In view of part 2, we see that \begin{equation}\label{lizzard}-L\notin \langle a\rangle +\Sigma(U^-).\end{equation} Now, if $|a^{-\mathsf v_a(U^-)}U^-|\leq |a|-2$, then $\supp(R)=\{a\}$ (recall $|a|\leq M$ and $\mathsf v_g(R)\geq M-1$ for all $g\in \supp(R)$) and the final part of the lemma holds with $R'=R$ in view of \eqref{lizzard}. Therefore we may assume $y=|a^{-\mathsf v_a(U^-)}U^-|\geq |a|-1$.
Note that \eqref{lizzard} implies that \[\phi_a(-L)\notin \Sigma_y(\phi_a(a^{-\mathsf v_a(U^-)}U^-)0^y)=\Sigma(\phi_a(U^-))\neq \mathbb Z/a\mathbb Z.\] As a result, applying the Partition Theorem (see \cite[Theorem 3]{Gr05b}) to $\phi_a(a^{-\mathsf v_a(U^-)}U^-)0^y$, now yields part 3 (to be more precise, we apply that result with sequences $S = S' = \phi_a(a^{-\mathsf v_a(U^-)}U^-)0^y$ and number of summands $n=y$; also note that the resulting coset from the Partition Theorem must be a subgroup in view of the high multiplicity of $0$ and that $R|R'$ since $\mathsf v_g(R)\geq M-1>|a|-2$ for all $g\in \supp(R)$). \end{proof} Before stating the next result, we need to first introduce some notions. Let $G_0\subset \mathbb Z\setminus \{0\}$ be a condensed set such that $G_0^-$ is finite and nonempty, and let $B\in \mathcal B(G_0)$. If $z=A_1\cdot\ldots\cdot A_n\in \mathsf Z(B)$, with $A_i\in \mathcal A(G_0)$, then we let $$z^+=A_1^+\cdot\ldots\cdot A_n^+\in \mathcal F(\mathcal A(G_0)^+)$$ and $\mathsf Z(B)^+ = \{z^+ \mid z \in \mathsf Z(B) \}$. We can then define a partial order on $\mathsf Z(B)^+$ by declaring, for $z^+,\, y^+ \in \mathsf Z(B)^+$, that $z^+ \le y^+$ when $z^+=A_1^+\cdot\ldots\cdot A_n^+\in \mathsf Z(B)^+$, where $A_i\in \mathcal A(G_0)$, \begin{eqnarray}\nonumber y=(B_{1,1}\cdot \ldots \cdot B_{1,k_1})\cdot (B_{2,1}\cdot \ldots \cdot B_{2,k_2})\cdot \ldots \cdot (B_{n,1}\cdot\ldots \cdot B_{n,k_n})\\ \;\mbox{ with }\;B_{j,i}\in \mathcal A(G_0)\;\mbox{ and }\;A_j^+ = B_{j,1}^+ \cdot \ldots \cdot B_{j,k_j}^+\; \mbox{ for } \; j\in [1,n]\;\mbox{ and }\; i\in [1,k_j]. \nonumber \end{eqnarray} We then define $\Upsilon(B)$ to be all those factorizations $z\in \mathsf Z(B)$ for which $z^+\in \mathsf Z(B)^+$ is maximal with respect to this partial order. Note that, if $z,\,y\in \mathsf Z(B)$ with $z^+\lneqq y^+$, then $|z|<|y|$. 
Thus $\Upsilon(B)$ includes all factorizations $z\in \mathsf Z(B)$ of maximal length $|z|=\max \mathsf L(B)$, and equality holds, namely \begin{equation}\label{Upsilon-equivalence}\Upsilon(B)=\{z\in \mathsf Z(B)\mid |z|=|B^+|\},\end{equation} when $\max \mathsf L(B)=|B^+|$. If $H$ is a Krull monoid, $\varphi \colon H \to \mathcal F (P)$ a cofinal divisor homomorphism and $a \in H$, then we define \[ \Upsilon (a) = \{ z \in \mathsf Z (a) \mid \overline{\boldsymbol \beta} (z) \in \Upsilon ( \boldsymbol \beta (a)) \} \,. \] For a pair of monoids $H\subset D$, we recall the definition of the \emph{relative Davenport constant}, originally introduced in \cite {Ge97c} and denoted $\mathsf D(H,D)$, which is the minimum $N\in \mathbb N\cup \{\infty\}$ such that if $z\in \mathsf Z(D)=\mathcal F(\mathcal A(D))$ with $\pi(z)\in H$, then there exists $z'|z$ with $\pi(z')\in H$ and $|z'|\leq N$. Next, we introduce two new monoids associated to $\mathcal F(G_0)$. We assume that $\emptyset \neq G_0\subset \mathbb Z \setminus \{0\}$, yet here we do not assume that $G_0$ is condensed. Consider the free monoid $\mathcal F(G_0)\times \mathcal F(G_0)$ and let $$\mathcal E(G_0)=\{(S_1,S_2)\in \mathcal F(G_0)\times \mathcal F(G_0)\mid \sigma(S_1)=\sigma(S_2)\}\subset \mathcal F(G_0)\times \mathcal F(G_0)$$ the subset of pairs of sequences with equal sum and $$\mathcal S(G_0)=\{(S_1,S_2)\in \mathcal F(G_0)\times \mathcal F(G_0)\mid S_1=S_2\}\subset \mathcal E(G_0)\subset \mathcal F(G_0)\times \mathcal F(G_0)$$ the subset of symmetric pairs. Note both $\mathcal E(G_0)$ and $\mathcal S(G_0)$ are monoids; furthermore, $\mathcal S(G_0)$ is saturated and cofinal in $\mathcal E(G_0)$, and $\mathcal E(G_0)$ is saturated and cofinal in $\mathcal F(G_0)\times \mathcal F(G_0)$. 
Thus, if we let $G'$ denote the class group of the inclusion $\mathcal S(G_0)\hookrightarrow \mathcal E(G_0)$ and let $$G_0'=\{[u]\in G'\mid u\in \mathcal A(\mathcal E(G_0))\}\subset G',$$ then \cite[Lemma 4.4]{Ge97c} shows that (recall that, due to the cofinality, the definition of the class group in that paper is equivalent to the present one) \begin{equation}\label{teent}\mathsf D(\mathcal S(G_0),\mathcal E(G_0))=\mathsf D(G_0').\end{equation} Note that, if $(S_1,S_2)\in \mathcal A(\mathcal E(G_0))$, then $S_1(-S_2)\in \mathcal A(G_0\cup -G_0)$, whence $|S_1|+|S_2|\leq \mathsf D (G_0\cup -G_0)$; by \cite[Theorem 3.4.2.1]{Ge-HK06a}, we know that, for a finite subset $P$ of an abelian group, we have both $\mathsf D(P)$ and $\mathcal A(P)$ finite. Consequently, if $G_0$ is finite, then $\mathsf D(G_0\cup -G_0)$ is finite, whence $\mathcal A(\mathcal E(G_0))$ is finite, which in turn implies $G_0'$, and hence also $\mathsf D(G_0')$, is finite. Therefore, in view of \eqref{teent}, we conclude that \begin{equation}\label{relative-D-is-finite} \mathsf D(\mathcal S(G_0),\mathcal E(G_0))<\infty\end{equation} for $G_0$ finite. \medskip \begin{theorem}\label{prop-chain} Let $H$ be a Krull monoid and $\varphi \colon H\to \mathcal{F}(P)$ a cofinal divisor homomorphism into a free monoid such that the class group $G= \mathcal{C}(\varphi)$ is an infinite cyclic group that we identify with $\mathbb Z$. Let $G_P \subset G$ denote the set of classes containing prime divisors, and suppose that $G_P^-$ is finite. Let $a \in H$ and $M=|\min (\supp( \boldsymbol \beta (a)))|$. 
\begin{enumerate} \item For any factorization $z\in \mathsf Z(a)$, there exists a factorization $y \in \Upsilon (a)$ and a chain of factorizations $z=z_0,\ldots,z_r=y$ of $a$ such that \[ |z|=|z_0|\leq \cdots\leq |z_r|=|y|\;\mbox{ and } \; \mathsf d(z_i,z_{i+1})\leq \max\{M\cdot \mathsf D(\mathcal S(G_P^-),\mathcal E(G_P^-)),2\}<\infty \] for all $i\in [0,r-1]$; in fact $\overline{\boldsymbol \beta} (z_0)^+ \leq \overline{\boldsymbol \beta} (z_1)^+ \leq \ldots \leq \overline{\boldsymbol \beta} (z_r)^+$, where $\leq$ is the partial order from the definition of $\Upsilon( \boldsymbol \beta (a))$. \smallskip \item For any two factorizations $z,\,y\in \Upsilon(a)$ with $\overline{\boldsymbol \beta} (z)^+ = \overline{\boldsymbol \beta} (y)^+$, there exists a chain of factorizations $z=z_0,\ldots,z_r=y$ of $a$ such that \[ \overline{\boldsymbol \beta} (z)^+ = \overline{\boldsymbol \beta} (z_i)^+ = \overline{\boldsymbol \beta} (y)^+\;\mbox{ and } \; \mathsf{d}(z_i,z_{i+1})\leq \max \{\mathsf D(\mathcal S(G_P^-),\mathcal E(G_P^-)),2\}<\infty \] for all $i\in [0,r-1]$; in particular, $|z|=|z_i|=|y|$ for all $i\in [0,r]$. \end{enumerate} \end{theorem} \begin{proof} We set $B = \boldsymbol \beta (a)$. By Lemma \ref{3.3}, it suffices to prove the assertion for $\mathcal B (G_P)$ and $B$. As $0$ is a prime divisor of $\mathcal B(G_P)$, we may w.l.o.g. assume $0\notin \supp(B)$. Note $\mathsf D(\mathcal S(G_P^-),\mathcal E(G_P^-))<\infty$ follows from \eqref{relative-D-is-finite}. Also, for $z_i,\,z_{i+1}\in \mathsf Z(S)$, we have $|z_i|\leq |z_{i+1}|$ whenever $z_i^+\leq z_{i+1}^+$, and $|z_i|=|z_{i+1}|$ whenever $z_i^+=z_{i+1}^+$ (where $\leq$ is the partial order from the definition of $\Upsilon(B)$). Let $z\in \mathsf Z(B)$ and let $y\in \Upsilon(B)$ with $z^+\leq y^+$. 
We will construct a chain of factorizations $z=z_0,\ldots,z_r$ of $B$ such that $z_i^+\leq z_{i+1}^+$, either $z_r=y$ or $z^+<z_r^+$, and \begin{eqnarray}\label{ippi}\mathsf d(z_i,z_{i+1})&\leq& M\cdot\mathsf D(\mathcal S(G_P^-),\mathcal E(G_P^-))<\infty\;\mbox{ (when } z_i^+<z_{i+1}^+\mbox{ )} \\ \mathsf d(z_i,z_{i+1})&\leq& \mathsf D(\mathcal S(G_P^-),\mathcal E(G_P^-))<\infty\;\mbox{ (when }z_i^+=z_{i+1}^+\mbox{ )},\label{ippy}\end{eqnarray} for $i\in [0,r-1]$. Since both parts of the theorem follow by iterative application of this statement, the proof will be complete once we show the existence of such a chain of factorizations $z=z_0,\ldots,z_r=y$. Since $z^+\leq y^+$, we have \begin{eqnarray}\nonumber z&=&A_1\cdot \ldots \cdot A_n\\ \nonumber y&=&(B_{1,1}\cdot \ldots\cdot B_{1,k_1})\cdot (B_{2,1}\cdot \ldots \cdot B_{2,k_2})\cdot \ldots \cdot (B_{n,1}\cdot\ldots \cdot B_{n,k_n})\end{eqnarray} with $A_j,\,B_{j,i}\in \mathcal A(G_0)$ and $A_j^+=B_{j,1}^+ \cdot \ldots \cdot B_{j,k_j}^+$, for $j\in [1,n]$ and $i\in [1,k_j]$. Then $\sigma(A_j)=\sigma(B_{j,i})=0$, for all $j$ and $i$. Thus, for $j\in [1,n]$, let \[ T_j= (A^-_j , (B^-_{j,1}\cdot \ldots \cdot B^-_{j,k_j}))\in \mathcal E(G_P^-). \] For each $j\in [1,n]$, let $$T_{j,1}\cdot \ldots\cdot T_{j,l_j}\in \mathsf Z(\mathcal E(G_P^-))$$ be a factorization of $T_j$ with each $T_{j,i}\in \mathcal A(\mathcal E(G_P^-))$. Now let \begin{equation}\label{tiddybit}T=\prod_{j=1}^n \prod_{i=1}^{l_j}T_{j,i}\in \mathsf Z(\mathcal E(G_P^-)).\end{equation} However, since $z,\,y\in \mathsf Z(B)$ both factor the same element $B$, we in fact have $$\pi(T)\in \mathcal S(G_P^-).$$ Let $T=T'T''$ where $T'|T$ is the maximal length sub-factorization with all atoms dividing $T'$ from $\mathcal S(G_P^-)$. If $T''=1$, then $A_j=\prod_{i=1}^{k_j}B_{j,i}$ for every $j\in [1,n]$.
In view of $A_j,B_{j,i} \in \mathcal{A}(G_P)$, we get $k_j=1$ for every $j \in [1,n]$, that is $z=y$, and so there is nothing to show. Therefore we may assume $T''$ is nontrivial and proceed by induction on $|z|$ and then $|T''|$, assuming \eqref{ippi} and \eqref{ippy} hold for $z'\in \mathsf Z(B)$ when $z^+< {z'}^+$ or when $z^+={z'}^+$ and $|R''|<|T''|$, where $R''$ is defined for $z'$ as $T''$ was for $z$. Let $W=\prod_{j\in J} \prod_{i\in I_j}T_{j,i}$ be a nontrivial subsequence of $T''$, where $J\subset [1,n]$ and $I_j\subset [1,l_j]$ for $j\in J$, such that $\pi(W)\in \mathcal S(G_P^-)$. Note, since $\pi(T')\in \mathcal S(G_P^-)$ (by definition) and since $\pi(T)\in \mathcal S(G_P^-)$ (by \eqref{tiddybit}), we have $\pi(T'')\in \mathcal S(G_P^-)$, whence we may w.l.o.g. assume $|W|\leq \mathsf D(\mathcal S(G_P^-),\mathcal E(G_P^-))$ (in view of the definition of the relative Davenport constant). Write $W=\prod_{j\in J} W_j$ with each $W_j=\prod_{i\in I_j}T_{j,i}\in \mathsf Z(\mathcal E(G_P^-))$. Moreover, for $j\in J$, let $\pi(W_j)=(X_j,Y_j) \in \mathcal{E}(G_P^-)$. Define a new factorization $z_1=z_1'\cdot\ldots\cdot z_n'\in \mathsf Z(G_P^-)$ by letting $z_j'=A_j$ for $j\notin J$ and letting $z_j'\in \mathsf Z(A_jX_j^{-1}Y_j)$ for $j\in J$---by construction $X_j$ is a subsequence of $A_j$, and since $(X_j, Y_j)\in \mathcal E(G_P^-)$, we have $\sigma(X_j)=\sigma(Y_j)$, and thus $\sigma(A_jX_j^{-1}Y_j)=\sigma(A_j)=0$ for all $j\in J$, so $z_1$ is well defined. Also, since $\pi(W)=\pi(\prod_{j\in J} W_j)\in \mathcal S(G_P^-)$, it follows (by definition of $\mathcal S(G_P^-)$) that $$\prod_{j\in J}X_j=\prod_{j\in J}Y_j,$$ and thus $z_1\in \mathsf Z(B)$. Moreover, by construction, we have $z^+\leq z_1^+$, and by Lemma \ref{Lambert}, we have $|B_j|\leq M$ for all $j$. 
Thus \begin{equation}\label{whash}\mathsf d(z,z_1)\leq M|J|\leq M|W|\leq M\cdot \mathsf D(\mathcal S(G_P^-),\mathcal E(G_P^-)).\end{equation} Additionally, if $z\in \Upsilon(B)$, then $z^+\leq z_1^+$ implies that $z^+=z_1^+=y^+$, whence $|z|=|z_1|$ and $|z_j'|=1$ for all $j$, in which case the estimate \eqref{whash} improves to $$\mathsf d(z,z_1)\leq |J|\leq |W|\leq \mathsf D(\mathcal S(G_P^-),\mathcal E(G_P^-)).$$ Finally, if $z^+=z_1^+$, then, by construction, the sequence $R=R'R''$---whose role for $z_1$ is analogous to the role of $T=T'T''$ for $z$---can be defined so that $R''=T''W^{-1}$, in which case $|R''|<|T''|$. Consequently, applying the induction hypothesis to $z_1$ completes the proof. \end{proof} \medskip \begin{corollary} \label{final-cor} Let $H$ be a Krull monoid and $\varphi \colon H\to \mathcal{F}(P)$ a cofinal divisor homomorphism into a free monoid such that the class group $G= \mathcal{C}(\varphi)$ is an infinite cyclic group that we identify with $\mathbb Z$. Let $G_P \subset G$ denote the set of classes containing prime divisors, and suppose that $G_P^-$ is finite. Let $a \in H$ with $\max \mathsf L(a)=|\boldsymbol \beta (a)^+| + \mathsf v_0 \bigl( \boldsymbol \beta (a) \bigr)$ and let $M=|\min (\supp( \boldsymbol \beta (a)))|$. Then, for any factorization $z\in \mathsf Z(a)$ and any factorization $y\in \mathsf Z(a)$ with $|y|=\max \mathsf L(a)$, there exists a chain of factorizations $z=z_0,\ldots,z_r=y$ of $a$ such that $|z|=|z_0|\leq \cdots\leq |z_r|=|y|$ and \[\mathsf d(z_i,z_{i+1})\leq \max\{M \cdot \mathsf D(\mathcal S(G_P^-),\mathcal E(G_P^-)), 2\} \leq \max\{|\min G_P|\cdot \mathsf D(\mathcal S(G_P^-),\mathcal E(G_P^-)),2\} <\infty\] for all $i\in [0,r-1]$. \end{corollary} \begin{proof} This follows directly from Theorem \ref{prop-chain} in view of \eqref{Upsilon-equivalence}.
\end{proof} \medskip We end this section with a result showing that the assumption $\max \mathsf L(a)=|\boldsymbol \beta (a)^+| + \mathsf v_0 \bigl( \boldsymbol \beta (a) \bigr)$ holds for a large class of $a\in H$. We formulate the result in the setting of zero-sum sequences. Since $\mathcal B(G_P)$ is factorial when $M=|\min G_P|\leq 1$, the assumption $M\geq 2$ below is purely for avoiding distracting technical points in the statement and proof. \medskip \begin{proposition} Let $G_0\subset \mathbb Z \setminus \{0\}$ be a condensed set with $|G_0| \ge 2$. Let $B\in \mathcal B(G_0)$ be such that, for $M=|\min (\supp(B))|$, we have $M\geq 2$ and $\min (\supp(B)^+)\geq M(M^2-1)$. Then, at least one of the following statements holds{\rm \,:} \begin{itemize} \item[(a)] There exists a subset $A\subset \supp(B^-)$ and a factorization $z\in \mathsf Z(B)$ such that $\langle \supp(B^+)\rangle \not\subset \langle A\rangle $ (in particular, $\langle A\rangle \neq \mathbb Z$) and every atom $U|z$ has \begin{equation}\label{thenumber}\mathsf v_x(U)\leq 2M-2\quad \mbox{ for all }\quad x\in \supp(B)\setminus A.\end{equation} \item[(b)]\begin{itemize}\item[(i)] $\max \mathsf L(B)=|B^+|$, and \item[(ii)] for any factorization $z\in \mathsf Z(B)$, there exists a chain of factorizations $z=z_0,\ldots,z_r$ of $B$ such that $$|z|=|z_0|<\cdots<|z_r|= |B^+|\;\mbox{ and } \; \mathsf d(z_i,z_{i+1})\leq M^2$$ for all $i\in [0,r-1]$. \end{itemize} \end{itemize} \end{proposition} \begin{proof} We assume (a) fails and show that (b) follows. Note, by Lemma \ref{Lambert}, that $\mathsf v_x(U)\leq M\leq 2M-2$ holds for any atom $U\in \mathcal A(G_0)$ and $x\geq 0$, whence \eqref{thenumber} can only fail for some $x\in G_0^-$. To establish (i) and (ii), we need only show that, given an arbitrary factorization $z\in \mathsf Z(B)$ with $|z|<|B^+|$, there is another factorization $z'\in \mathsf Z(B)$ with $|z|<|z'|$ and $\mathsf d(z,z')\leq M^2$. We proceed to do so. 
Let $z\in \mathsf Z(B)$ with $|z|<|B^+|$. Then there must exist some atom $U_0|z$ such that $|U_0^+|\geq 2$. Let $A\subset \supp(B)$ be all those $a$ for which there exists some atom $V|z$ with $\mathsf v_a(V)\geq 2M-1$. We must have \begin{equation}\label{rex}\langle \supp(B^+)\rangle\subset \langle A\rangle ,\end{equation} else (a) holds. Let $a_1,\ldots,a_t\in A$ be those elements such that $\mathsf v_{a_i}(U_0)\leq M-2$, let $a_{t+1},\ldots,a_{|A|}$ be the remaining elements of $A$ and, for all $i \in [1, t]$, let $U_i|z$ be an atom with $\mathsf v_{a_i}(U_i)\geq 2M-1$. Note that $U_i\neq U_0$ for $i\leq t$ since otherwise $$2M-1\leq \mathsf v_{a_i}(U_i)=\mathsf v_{a_i}(U_0)\leq M-2\leq 2M-2,$$ a contradiction. Also, $t<|A|\leq M$ since otherwise $$2M(M^2-1)\leq 2\min( \supp(B^+))\leq\sigma(U_0^+)=-\sigma(U_0^-)\leq M(2M-2),$$ a contradiction. We proceed to describe a procedure to swap only negative integers between the $U_i$, which results in new blocks $U'_0,U'_1,\ldots,U'_t\in \mathcal B(G_0)$ with $U'_0U'_1 \cdot \ldots \cdot U'_t = U_0U_1 \cdot \ldots \cdot U_t$, with ${U'^+_i}=U^+_i$ for all $i$, and with $U'_0$ not an atom. Once this is done, letting $z_i\in \mathsf Z(U'_i)$, we can define $z'$ to be $$z'=z_0z_1\cdot \ldots \cdot z_tU_0^{-1}U_1^{-1} \cdot \ldots \cdot U_t^{-1}z.$$ Then $|z'|>|z|$ in view of $U'_0$ not being an atom, while, in view of $t\leq |A|-1\leq M-1$ and Lemma \ref{Lambert}, we have $$\mathsf d(z,z')\leq \sum_{i=0}^{t}|U_i^+|\leq (t+1)M\leq M^2.$$ Thus the proof of (i) and (ii) will be complete once we show such a process exists. Observe, for $i\in [1,t]$, that we can exchange $a_i^{c_{i,j}}|U_i$ for $c_{i,j}^{a_i}|U_0$ provided there is some term $c_{i,j}\in \supp(U^-_0)$ with $\mathsf v_{c_{i,j}}(U_0)\geq a_i$ and $\mathsf v_{a_i}(U_i)\geq c_{i,j}$, and this will result in two new zero-sum subsequences obtained by only exchanging negative terms.
The idea in general is to repeatedly and simultaneously perform such swaps for the $a_i$ using disjoint sequences \begin{equation}\label{whiff1}\prod_{i=1}^{t}\bigl(c_{i,1}^{a_i}c_{i,2}^{a_i} \cdot \ldots \cdot c_{i,r_i}^{a_i}\bigr) \Bigm|U_0a_{t+1}^{-M+1} \cdot \ldots \cdot a_{|A|}^{-M+1}\end{equation} with \begin{equation}\label{whiff2}\sum_{j=1}^{r_i-1}|c_{i,j}|<M-1\quad\mbox{ but }\quad \sum_{j=1}^{r_i}|c_{i,j}|\geq M-1\end{equation} for all $i \in [1, t]$, and let $U'_0,U'_1,\ldots,U'_t$ be the resulting zero-sum sequences. Then $\mathsf v_{a_i}(U'_0)\geq M-1$ for $i\geq t+1$ by construction, and $\mathsf v_{a_i}(U'_0)\geq \sum_{j=1}^{r_i}|c_{i,j}|\geq M-1$ for $i\leq t$; consequently, in view of $\min(\supp(B^+))\geq M(M^2-1)\geq (M-1)^2$ and $|{U'_0}^+|=|U_0^+|\geq 2$, we see that we can apply Lemma \ref{breakapart-lemma} to $U'_0$, whence \eqref{rex} and $\mathsf v_a(U'_0)\geq M-1$ for $a\in A$ imply that $U'_0$ cannot be an atom, and hence the $U'_i$ have the desired properties. Thus it remains to show that a sequence satisfying \eqref{whiff1} and \eqref{whiff2} exists and that each $a_i$, for all $i \in [1, t]$, has sufficient multiplicity in $U_i$. Note that \eqref{whiff2} and the definition of $a_i\in A$ imply $$\sum_{j=1}^{r_i}|c_{i,j}|\leq \sum_{j=1}^{r_i-1}|c_{i,j}|+|c_{i,r_i}|\leq M-2+M\leq \mathsf v_{a_i}(U_i)$$ for all $i \in [1, t]$. Thus the multiplicity of each $a_i$ in $U_i$ is large enough to perform such simultaneous swaps. Also, \begin{equation}\label{stern}\big|\sigma \bigl(\prod_{i=1}^{t}(c_{i,1}^{a_i}c_{i,2}^{a_i} \cdot \ldots \cdot c_{i,r_i}^{a_i})\bigr)\big|\leq \sum_{i=1}^{t}(2M-2)|a_i|.\end{equation} We turn our attention now to showing \eqref{whiff1} and \eqref{whiff2} hold. We can continue to remove subsequences $c_{i,j}^{a_i}|U_0a_{t+1}^{-M+1} \cdot \ldots \cdot a_{|A|}^{-M+1}$ until the multiplicity of every term is less than $M$. 
But this means a sequence satisfying \eqref{whiff1} and \eqref{whiff2} can be found, in view of the estimate \eqref{stern}, provided \[ |\sigma({U}^-_0)|-(M-1)\sum_{i=t+1}^{|A|}|a_i|-M(M-1)|\supp(B^-)|\geq \sum_{i=1}^{t}(2M-2)|a_i| \,. \] However, if this fails, then we have (since $|U_0^+|\geq 2$) \begin{eqnarray}\nonumber 2M(M^2-1)&\leq& 2 \min (\supp(B^+))\leq \sigma(U_0^+)=-\sigma(U_0^-)=|\sigma(U_0^-)| \\ &< & \sum_{i=1}^{t}(2M-2)|a_i|+(M-1)\sum_{i=t+1}^{|A|}|a_i|+M(M-1)|\supp(B^-)|\nonumber\\ \nonumber &<& (2M-2)\sum_{i=1}^{|A|}|a_i| + M(M^2-1)\leq (2M-2)\sum_{i=1}^{M}i+M(M^2-1)\\\nonumber &=& 2M(M^2-1),\end{eqnarray} a contradiction, completing the proof. \end{proof}
\section{Introduction} \setlength{\parindent}{0.3cm} \textnormal{\indent{Choosing} the right resource to get a recommendation on items is a critical component of human decision making. With the emergence of the web, consumers are being exposed to a huge number of choices. On the other side, sellers are challenged by the diversity of users' interests. In addition, it is nowadays easy to perform a large number of transactions in a small amount of time and, as a result, the sales volume is growing rapidly. Recommender Systems (RS) are designed to address this dual need of both consumers and sellers. Current RS can be classified into Content Filtering and Collaborative Filtering approaches. RS generally use Collaborative Filtering because it uses the entire user-base information explicitly to recommend items. We will be focusing on Collaborative Filtering and, for the rest of the paper, we will refer to it as CF. The process of computing the similarity between users in CF requires that those users share common rated items, which is not practical in real life as systems generally process a large number of items. As a result, it is very likely that two random users have no common items and the RS fails to predict a rating.\\ \indent{Inspired} by the work presented in \cite{Massa}, we propose a novel solution which uses a trust network to achieve better rating prediction accuracy and higher rating coverage. Our key contributions are:\\ \textnormal{\indent{\textbf{Investigating factors affecting the CF technique}}: We perform an extensive experimental evaluation of the CF algorithm's performance on three real-world datasets (MovieLens, Epinions, Flixster) and show that CF performs differently on every dataset. \\ \indent{\textbf{\emph{Trust-Aware Neighbourhood} Algorithm}}: We propose a novel approach to overcome the expensive execution time of \cite{Massa} by limiting the trusted neighbours involved in the rating prediction.
We show a significant increase of the rating coverage and an improvement of the rating accuracy on the Epinions and Flixster datasets. \\ \indent{\textbf{\emph{Hybrid Trust-Aware Neighbourhood} Algorithm}}: Focusing on cold-start users, we propose an algorithm that incorporates user rating behaviour and the trusted network to increase the neighbourhood and, as a result, increase the item rating coverage without detriment to prediction accuracy. }}\vspace{-1.4em} \section{Background and Related work} \setlength{\parindent}{0.3cm} \textnormal{\indent{The} basic idea behind the CF technique is to recommend items for an active user by finding users who have similar rating behaviour to that active user. The rating prediction $P_{a,j}$ for an item $j$ by user $a$ is calculated by the following standard formula \cite{Su}: \begin{equation} \label{eq:weighted sum of others ratings} P_{a,j}=\bar{r}_a+\frac{\sum_{u \in U}(r_{u,j}-\bar{r}_u)\cdot w_{a,u}}{\sum_{u \in U}|w_{a,u}|} \end{equation} where {$\bar{r}_a$} and {$\bar{r}_u$} are the average ratings of users \emph{a} and \emph{u} on all other items and {$w_{a,u}$} is the weight between users \emph{a} and \emph{u}.}\\ \indent{Using} social information to improve RS is a recent active research field: In \cite{Liu}, different approaches are implemented to select the neighbours contributing to the CF recommendation, with the best rating accuracy achieved when nearest neighbours are aggregated with social friends. Even though the authors proposed interesting approaches to incorporate social networks in RS, they used a laboratory environment to build the dataset (not an existing real-life dataset). There are many examples of trust-based models, and SocialMF \cite{flixster} is one of them. \cite{flixster} uses a Matrix Factorization technique and a trust propagation mechanism for recommendation in social networks, where social influence is injected into the recommendation model.
Even though this model improved the rating prediction accuracy of the RS, the training phase execution time to learn the parameters of the model was expensive. Additionally, the authors did not consider the RS rating coverage evaluation metric. On the contrary, the main contribution in \cite{Hwang} was to improve the rating coverage by incorporating more users in the recommendation process, considering trustors as well as trustees as neighbours in the recommendation. While coverage was increased, the traditional technique's accuracy was preserved. In our work, we aim to increase the rating accuracy as well as the coverage. Results similar to \cite{Hwang} were achieved in \cite{Victor}, where the authors combined the best of the traditional CF technique and trust-based recommendation techniques to present \emph{EnsembleTrustCF}. While the authors focused on a special type of items (controversial ones), ignoring other points (e.g. scalability, execution time), our work covers all items and aims to increase the accuracy and coverage in an efficient manner. In \cite{Ma}, trust and distrust relationships are used separately to improve the recommendation prediction process. The authors suggest a Matrix Factorization technique with regularization terms constraining the trust and distrust relationships between users and show a rating accuracy improvement. Even though this model is proven theoretically to be scalable over large datasets, it has only been tested on one relatively small dataset, without studying the execution time of the model. \vspace{-0.7em} \section{Factors Affecting CF Technique} \textnormal{\indent{Though} there is a huge commercial interest in the CF technique, there is little published research on the relative performance of various factors, including the similarity computation techniques and the dataset characteristics.
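As a concrete reference point for the discussion that follows, the weighted-sum prediction of Eq.\ref{eq:weighted sum of others ratings} with the "Correlation" weight can be sketched as below. This is an illustrative toy implementation only; the data layout (nested dictionaries) and all function names are ours, not those of the library in \cite{MassaCode}:

```python
import math

def user_mean(r):
    """Average rating of one user over all of their rated items."""
    return sum(r.values()) / len(r)

def correlation(r_a, r_u):
    """Pearson-style "Correlation" weight w_{a,u} over co-rated items."""
    common = set(r_a) & set(r_u)
    if not common:
        return None  # no co-rated items: u cannot act as a neighbour of a
    m_a, m_u = user_mean(r_a), user_mean(r_u)
    num = sum((r_a[i] - m_a) * (r_u[i] - m_u) for i in common)
    den = math.sqrt(sum((r_a[i] - m_a) ** 2 for i in common)) * \
          math.sqrt(sum((r_u[i] - m_u) ** 2 for i in common))
    return num / den if den else 0.0

def predict(a, j, ratings):
    """Weighted-sum prediction P_{a,j} of Eq. (1)."""
    m_a = user_mean(ratings[a])
    num = den = 0.0
    for u, r_u in ratings.items():
        if u == a or j not in r_u:
            continue  # neighbours are users who rated item j
        w = correlation(ratings[a], r_u)
        if w is None:
            continue
        num += (r_u[j] - user_mean(r_u)) * w
        den += abs(w)
    return m_a + num / den if den else m_a  # fall back to the user mean

# Toy user-item rating dictionaries (user -> {item: rating}).
ratings = {
    "a":  {1: 4, 2: 3, 3: 5},
    "u1": {1: 5, 2: 2, 3: 4, 4: 4},
    "u2": {1: 2, 2: 5, 4: 1},
}
print(round(predict("a", 4, ratings), 2))
```

Note that when no neighbour shares a rated item with the active user, the sketch falls back to the user's mean rating; this is exactly the coverage problem discussed in the Introduction.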
The user-based algorithm is a type of CF that computes the predicted rating $P_{a,j}$ based on rating information from neighbours, defined as the set of users who rated the same item as the active user $u_a$. This technique is a predominant type of CF algorithm that shows good prediction accuracy in practice \cite{Su}. One critical step in this algorithm is to compute the similarity between users $w_{a,u}$, which can be performed using various algorithms. In this work, we chose the following four standard similarity computation techniques: "Correlation", "Vector-Similarity", "Inverse-User-Frequency" and "Case-Amplification", as used in \cite{MassaCode}. To evaluate every technique, we use the following evaluation metrics: MAE, RMSE, MAUE (the average value of MAE for all users) and RMSUE (the average value of RMSE for all users) \cite{Su}. We will only be showing results for RMSUE due to space constraints.} \vspace{-1.4em} \subsection{Datasets} \setlength{\parindent}{0.3cm} \textnormal{\indent{We} experimented with three different large real-world datasets: the film recommendation MovieLens dataset, the Epinions dataset, where products are reviewed by users, and the Flixster dataset, which comes from a social networking service where every user has a list of friends coming from different social sites (e.g. Facebook, Myspace). MovieLens and Epinions ratings range from 1 to 5, while Flixster's ratings range from 1 to 10. Table \ref{table:general statistics of datasets} summarizes the global statistics of the three used datasets: \vspace{-1.5em} \begin{table}[!ht] \caption{MovieLens, Epinions and Flixster datasets Statistics} \label{table:general statistics of datasets} \centering \small \begin{tabular}{|l|l|l|l|l|l|} \hline Name & Users & Items & Ratings & Spars.
& Avg Ratings/User \\ \hline MovieLens & 943 & 1,682 & 100,000 & 93.67\% & 106.04 \\ \hline Epinions & 49,290 & 139,738 & 664,824 & 99.99\% & 13.49 \\ \hline Flixster & 1 million & 49,000 & 8.2 million & 99.98\% & 55.00 \\ \hline \end{tabular} \end{table} \vspace{-3em} \begin{table}[!ht] \centering \small \caption{Percentage of different "cold-start" user types in the datasets} \label{table:cold start in Flixster and Epinions} \begin{tabular}{|l|l|l|l|} \hline User Types & \textit{MovieLens} &\textit{Flixster} & \textit{Epinions} \\ \hline \textbf{"No-rating"} Users & 0 & 90.19 & 18.51 \\ \hline \textbf{"Few-rating"} Users & 0 & 7.81 & 34.31 \\ \hline \textbf{Total "Cold-start"} Users & 0 & 98.00 & 52.83 \\ \hline \end{tabular} \end{table} \vspace{-1.5em} \subsection{Similarity Computation Effect on CF} \textnormal{\indent{Our} CF algorithm implementation is an extension to the "state of the art" RS library available in \cite{MassaCode}.} \begin{figure}[!ht] \centering \small \includegraphics[width=2.6in] {EpinionsFixsterEpinions} \caption{Similarity computation effect on MovieLens, Epinions and Flixster datasets} \label{fig:EpinionsFixsterEpinions} \end{figure} \vspace{-1em}From Fig.\ref{fig:EpinionsFixsterEpinions} we notice that \emph{Case Amplification} is generally the worst among the algorithms. This is because, unlike the others, this technique does not consider the weight value between users when computing the prediction. Next, the MovieLens dataset has the lowest RMSUE value (0.9305 for "Correlation") compared to the Epinions and Flixster datasets (1.2844 and 0.9762 respectively). We can further observe that the CF algorithm performs better on the Flixster dataset than on Epinions.\\ \indent{From} those observations, our intuition is that (regardless of the similarity computation technique) the CF algorithm performs differently on the three datasets based on their characteristics.
From Table \ref{table:cold start in Flixster and Epinions}, the MovieLens dataset has 0\% cold-start users (all of its users have rated at least 20 items), and this may be the main reason behind its low RMSE values, which are statistically significantly better than the Flixster ones at the \emph{p} $<$ 0.01 level. The reason why Flixster has better predictions than Epinions, even though more than half of the users are cold-start in both datasets (98.00\% and 52.83\% respectively), may be that each dataset has a different type of cold-start users. Cold-start users may either be users who have no ratings ("No-rating" users) or users with 1-5 ratings ("Few-rating" users). From Table \ref{table:cold start in Flixster and Epinions}, most cold-start users in Flixster are "No-rating" users (90.19\%) while, on the other hand, most cold-start users in Epinions are "Few-rating" ones (34.31\%). "No-rating" users do not affect the rating prediction because they do not contribute to the rating computation (they are not considered neighbours). On the contrary, "Few-rating" users may badly affect the rating prediction because, even though they share only a few items with the active user, i.e. their rating behaviour may be different, they still contribute to the prediction. Furthermore, the average number of ratings per user is 55.00 in Flixster but only 13.49 in Epinions (Table \ref{table:general statistics of datasets}). More ratings per user means more users contributing to the CF technique, which leads to better rating prediction. \section{Trust-Aware Neighbourhood} \subsection{Overview} \textnormal{\indent{It} is impossible to gather ratings from all users on all items. Using a trust network, which can be gathered from external social sites, we can overcome this limitation. Motivated by this, we propose the \emph{Trust-Aware Neighbourhood} (\emph{T-A Neighbourhood}) algorithm, in which we suggest a new definition of "neighbourhood".
In \cite{Massa}, the distance in the trust network from user \emph{u} to each user who rated item \emph{i} needs to be computed. To avoid this expensive computation (which, in our tests, took over a week to finish), we instead consider only the trusted users of \emph{u}. Our intuition is that users who trust each other exhibit similar rating behaviour. Because of this computational limitation, the method is not compared against the baseline. Additionally, we expect the rating coverage to increase, especially with rich trust-network datasets. Unlike \cite{Massa}, where the weight in Eq.\ref{eq:weighted sum of others ratings} is an estimated trust value, the proposed algorithm uses the "Correlation" as the weight, which is the key point to overcome the scalability issue in \cite{Massa}: we do not have to compute the minimum distance between the target user and all of its neighbours.\\} \vspace{-2em} \begin{figure}[!ht] \centering \includegraphics[width=3in] {TrustAwareNeighbourhoodCF.jpg} \caption{\emph{T-A Neighbourhood} Example} \label{fig:Trust-Aware NeighbourhoodCF} \end{figure} \vspace{-1em}\\ \indent{Figure \ref{fig:Trust-Aware NeighbourhoodCF}} presents a demonstrative example of running the \emph{T-A Neighbourhood} algorithm on an n$\times$m user-item rating matrix $\mathscr{A}$. For predicting $P_{5,13}$: First, we consider the list of neighbours of $u_5$, which is \{10,8,2\}, as opposed to \{8,2\} using the technique of \cite{Massa} with $d=1$ ($P_{10,13}$ is not available) or \{8,2,20,16\} with $d=2$. After computing the weights between users, Eq.\ref{eq:weighted sum of others ratings} is used to compute $P_{5,13}$. The demonstrative example shows how we can limit the neighbourhood list from \{8,2,20,16\} using \cite{Massa} to \{10,8,2\} using \emph{T-A Neighbourhood} in order to reduce the expensive execution time.
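The neighbour-selection and prediction steps above can be sketched in code. The following is a minimal, illustrative Python sketch, assuming the weighted sum of Eq.\ref{eq:weighted sum of others ratings} takes the standard mean-centred form and that the "Correlation" weight is the Pearson coefficient over co-rated items; the data structures (`ratings`, `trust`) and all names are hypothetical, not part of the implementation used in the experiments.

```python
import math

def pearson(ratings, u, v):
    """'Correlation' weight: Pearson coefficient over the items u and v co-rated.

    ratings: dict user -> dict item -> rating (illustrative data structure).
    """
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    mu = sum(ratings[u][i] for i in common) / len(common)
    mv = sum(ratings[v][i] for i in common) / len(common)
    num = sum((ratings[u][i] - mu) * (ratings[v][i] - mv) for i in common)
    du = math.sqrt(sum((ratings[u][i] - mu) ** 2 for i in common))
    dv = math.sqrt(sum((ratings[v][i] - mv) ** 2 for i in common))
    return num / (du * dv) if du and dv else 0.0

def predict(ratings, trust, u, i):
    """T-A Neighbourhood: neighbours are the users u trusts who have rated i."""
    neighbours = [v for v in trust.get(u, []) if i in ratings.get(v, {})]
    mean_u = sum(ratings[u].values()) / len(ratings[u])
    num = den = 0.0
    for v in neighbours:
        w = pearson(ratings, u, v)
        mean_v = sum(ratings[v].values()) / len(ratings[v])
        num += w * (ratings[v][i] - mean_v)
        den += abs(w)
    # mean-centred weighted sum; fall back to the user's mean with no neighbours
    return (mean_u + num / den) if den else mean_u
```

In the running example, predicting $P_{5,13}$ only iterates over the trusted users of $u_5$ who rated $i_{13}$, instead of searching the trust network up to distance $d$.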
\vspace{-1em} \subsection{Evaluation} \textnormal{ \indent{The} datasets used for evaluation are: Epinions, which contains 487,181 issued trust statements and 7.2 direct neighbours per user, and the Flixster dataset (users in Flixster can specify a list of friends, so, in the context of trust networks, we assume a friend is a trusted user), which has 26.7 million social friendship relations and an average of 27 friends per user. Regarding the rating prediction accuracy (Fig.\ref{fig:Trust-Aware NeighbourhoodCF Results}), we can observe that, generally, the \emph{T-A Neighbourhood} CF algorithm provides better rating prediction accuracy compared to traditional CF. For example, in the Epinions dataset, the relative error was reduced from 1.2855 to 1.1509 on a random set containing 1,000 users. \emph{T-A Neighbourhood} is statistically significantly better than the traditional CF algorithm for both datasets, Epinions and Flixster, at the \emph{p} $<$ 0.01 level for all data samples. The improvement is more pronounced in the Epinions dataset than in Flixster. Furthermore, the \emph{T-A Neighbourhood} performance is more substantial in smaller subsets of the Epinions dataset, where we used only 100, 500 and 1,000 users; we will later discuss the reason behind this behaviour. \vspace{-1em} \begin{figure}[!ht] \centering \includegraphics[width=3.1in]{ResultsNeighbourhoodFlixsterEpinions-new} \caption{\emph{T-A Neighbourhood} RMSUE/Rating Coverage on Flixster \& Epinions datasets for different user sets} \label{fig:Trust-Aware NeighbourhoodCF Results} \end{figure} \vspace{-1em} Fig. \ref{fig:Trust-Aware NeighbourhoodCF Results} also shows an increase in the rating coverage when using the \emph{T-A Neighbourhood} algorithm compared to the traditional CF technique. For instance, in the Flixster dataset, the rating coverage is almost doubled from 20.65\% to 53.98\% (1,000 users).
Additionally, Flixster's rating coverage is higher than Epinions's (especially with 50,000 users).\\} \textnormal{\indent{Results} showed a rating accuracy increase, which may be a result of the increased rating coverage. Our intuition is that the rating coverage increased in \emph{T-A Neighbourhood} compared to the traditional CF technique for two reasons: First, a user's average number of ratings is low compared to the average number of users an active user trusts. In Flixster, the average number of ratings per user is 8.2 (counting the "No-rating" cold-start users) compared to an average of 27.0 direct trusted users. Second, the number of common items between neighbours in the \emph{T-A Neighbourhood} technique is higher than in the traditional CF technique. This may be the main reason behind the improvement of rating prediction accuracy in the Epinions dataset (Fig \ref{fig:Trust-Aware NeighbourhoodCF Results}). Furthermore, experiments on small subsets of the whole dataset represent the situation where we have a small item-rating matrix compared to a large trust network. Having a rich trust network not only increases the number of neighbours for an active user, it also increases the number of common items and, as a result, achieves a substantial improvement over the traditional CF algorithm. This behaviour is more obvious in Epinions subsets compared to Flixster subsets, mainly because the average rating per user in Epinions is low compared to the higher user rating average in Flixster (13.49 and 55.00 respectively, from Table \ref{table:general statistics of datasets}). Note that running the \emph{T-A Neighbourhood} algorithm on cold-start users, with fewer than 5 ratings, produces bad rating coverage (less than 1\% in the Epinions dataset).
Next, we present a solution to increase the rating coverage for such users.} \textnormal{\\\indent{The} execution time of \emph{T-A Neighbourhood} takes {$O(m \times n \times t)$} steps per prediction, where \emph{m} is the number of users, \emph{n} is the number of items and \emph{t} is the number of trust statements. This execution time depends on the values of \emph{m} and \emph{t} (as \emph{n} is usually small). The \emph{T-A Neighbourhood} algorithm will be less expensive than \cite{Massa} if the value of $t \times n$ is smaller than $m^2$. We observed in the experiments a drastic difference in execution time between the two algorithms: \cite{Massa} took 63 hours to run, compared to 20 minutes for the \emph{T-A Neighbourhood} technique. This is mainly because the number of items \emph{n} is small compared to the size of the trust network \emph{t}, and the product \emph{n}$\times$\emph{t} is much less than $m^2$.} \vspace{-1em} \section{Hybrid Trust-Aware Neigh. } \subsection{Overview} \textnormal{\indent{To} overcome the low cold-start item rating coverage of the \emph{T-A Neighbourhood} CF algorithm, we propose a new approach called \emph{Hybrid Trust-Aware Neighbourhood} (H T-A Neigh.). This novel technique suggests a new definition of "neighbourhood" suitable for cold-start users only. According to the "Hybrid" approach, a neighbour is a user who rated item \emph{i} or is trusted by the active user.
We expect the number of neighbours for an active cold-start user to increase and, as a result, the rating coverage to increase.\vspace{-1.3em} \begin{figure}[!ht] \centering \includegraphics[width=3in]{HybridTrustAwareNeighbourhoodCF.jpg} \caption{\emph{H T-A neighbourhood} Example} \protect \label{fig:Hybrid trust-aware neighbourhood cf process} \end{figure} \vspace{-1.5em} \\} \textnormal{\indent{Figure} \ref{fig:Hybrid trust-aware neighbourhood cf process} presents the process of the \emph{H T-A Neigh.} algorithm on an \emph{n}$\times$\emph{m} user-item rating matrix. To compute $P_{5,13}$, for example, we first get the users directly connected to $u_5$, which are \{2,8,10\}. Next, we get the users who rated $i_{13}$, \{2,8,11\}. The final list of neighbours is \{2,8,10\} $\cup$ \{2,8,11\}, which is \{2,8,10,11\}. The "Correlation" between $u_5$ and any user $\in \{2,8,10,11\}$ is computed to predict $P_{5,13}$ using Eq.\ref{eq:weighted sum of others ratings}.} \vspace{-2em} \subsection{Evaluation} \textnormal{\indent{Experiments} showed that the Epinions cold-start users' RMSUE value was reduced from 1.49 with traditional CF to 1.47 with the \emph{H T-A} approach, and similar results were achieved for the Flixster dataset. Furthermore, we achieved the same RMSUE values when using \emph{T-A Neigh.} and the \emph{Hybrid} approach. The rating accuracy was not further improved with the \emph{Hybrid} approach because only a few users are added to the computation (cold-start users have rated no more than 5 items), and this may not have a noticeable effect on the prediction accuracy. Experiments also showed a jump in the rating coverage when using \emph{H T-A} over both traditional CF and \emph{T-A Neigh.}. Epinions's rating coverage increased from almost 0\% to 20.57\%, while Flixster's coverage almost doubled (59\% to 98\%).
Flixster's item rating coverage was reduced from 59\% using traditional CF to 20\% using the \emph{T-A Neighbourhood} algorithm because we used a random set of 5,000 trust statements (not the whole trust network) due to memory limitations. The rating coverage increase comes from the fact that we incorporate more neighbours, so the probability of having common items among users increases, which means the RS is able to rate more items. Additionally, the Flixster dataset reached a rating coverage close to 100\%, which means the RS can predict a rating for almost all items, due to the high average user rating.} \vspace{-1em} \section{Conclusion} \textnormal{\indent{We} proposed the \emph{T-A Neigh.} algorithm, which incorporates the trust network in the rating prediction process and shows a substantial improvement in item rating coverage and accuracy, especially in the Epinions dataset, which has fewer ratings per user. Focusing on cold-start users, we proposed the \emph{H T-A Neigh.} algorithm, which reached near-complete rating coverage for the Flixster dataset while keeping the same \emph{T-A Neigh.} rating accuracy. This work can be further extended to an elastic, cloud-based implementation which takes as input a rating prediction to be computed, $P_{u,i}$, and a specific budget limit. If the user has a large set of trusted users, then $P_{u,i}$ will be computed with a small value of the maximum propagation distance. If the user has few trusted users, then the algorithm increases the maximum propagation distance until reaching the budget limit. We expect this elastic algorithm to compute rating predictions accurately and efficiently.} \vspace{-1em} \bibliographystyle{abbrv}
\section{Introduction}\label{sec:intro} Among the non-exceptional trigonometric solutions of the Yang-Baxter equation (R-matrices) \cite{Bazhanov:1984gu, Bazhanov:1986mu, Jimbo:1985ua, Kuniba:1991yd}, the R-matrices associated with $D_{n+1}^{(2)}$ are -- by far -- the most complicated. It is therefore not surprising that relatively few results are known about the corresponding integrable quantum spin chains. Bethe ansatz solutions for the closed $D_{n+1}^{(2)}$ chains with periodic boundary conditions were proposed by Reshetikhin \cite{Reshetikhin:1987}. Following the pioneering work of Sklyanin \cite{Sklyanin:1988yz}, the study of open integrable $D_{n+1}^{(2)}$ chains was initiated in \cite{Martins:2000xie}, and was pursued further in \cite{Malara:2004bi, Nepomechie:2017hgw}. New families of solutions of the $D_{n+1}^{(2)}$ boundary Yang-Baxter equation (K-matrices) were recently proposed in \cite{Nepomechie:2018wzp}. These K-matrices depend on the discrete parameters $p$ (which can take $n+1$ possible values $p=0, \ldots, n$) and $\varepsilon$ (which can take two possible values $\varepsilon = 0, 1$), see (\ref{functions}). The open spin chains constructed with these K-matrices have quantum group symmetry corresponding to removing the $p^{th}$ node from the $D_{n+1}^{(2)}$ Dynkin diagram, namely, $U_{q}(B_{n-p}) \otimes U_{q}(B_{p})$ (for both $\varepsilon = 0, 1$). These spin chains also have a $p \leftrightarrow n-p$ duality symmetry. Bethe ansatz solutions for the open $D_{n+1}^{(2)}$ spin chains with $\varepsilon=0$ (and all the possible values of $p$) were proposed in \cite{Nepomechie:2017hgw, Nepomechie:2018nvl}. However, the open $D_{n+1}^{(2)}$ spin chains with $\varepsilon=1$ have so far resisted solution. In an effort to address this problem, we consider here the simplest case $n=1$; that is, we consider the open $U_{q}(B_{1})$-invariant $D_{2}^{(2)}$ spin chain with $\varepsilon=1$ and the two possible values of $p$ (namely, 0 and 1). 
This model has potential applications to black hole physics \cite{RobertsonSaleur}. We propose a Bethe ansatz solution that is similar to the one for $\varepsilon=0$; however, unlike the latter solution, it is {\em not} complete: this solution describes only the transfer-matrix eigenvalues with {\em odd} degeneracy. It remains a challenge to account for the eigenvalues with even degeneracy, which may be related to a higher symmetry of the transfer matrix. The outline of this paper is as follows. In Sec. \ref{sec:basics}, we briefly review the construction of the transfer matrix and list some of its useful properties. In Sec. \ref{sec:EV}, we try to determine the eigenvalues of the transfer matrix with $p=0$. We arrive at a compact expression for the eigenvalues (\ref{factored}), (\ref{chi}) and corresponding Bethe equations (\ref{BE}), which unfortunately do not give all the eigenvalues. However, for the eigenvalues which {\em can} be described in this way, we find even simpler expressions for the eigenvalues (\ref{square}) and Bethe equations (\ref{BEsimple}), which closely resemble those of the XXZ chain. We then argue that this Bethe ansatz describes the eigenvalues with odd degeneracy. In Sec. \ref{sec:p1}, we consider the case $p=1$. We consider a proposal for the missing eigenvalues in Sec. \ref{sec:BAII}, which is motivated by a preliminary algebraic Bethe analysis presented in an appendix. Our brief conclusions are in Sec. \ref{sec:conclusion}. \section{Basics}\label{sec:basics} In this section, we briefly review the construction of the transfer matrix for the integrable open $U_{q}(B_{1})$-invariant $D_{2}^{(2)}$ spin chain, with a 4-dimensional vector space at each site. We also list some useful properties of this transfer matrix. We begin by recalling its two basic building blocks: an R-matrix and a K-matrix. 
\subsection{R-matrix} For the $16 \times 16$ R-matrix $R(u)$, we use the expression for the $D_{n+1}^{(2)}$ R-matrix in the fundamental (vector) representation given in Appendix A of \cite{Nepomechie:2017hgw} with $n=1$. This R-matrix depends on the anisotropy parameter $\eta$. In addition to the Yang-Baxter equation, it satisfies the unitarity relation \be R_{12}(u)\, R_{21}(-u) = \zeta(u) \,, \qquad \zeta(u) = 16 \sinh^{2}(u+2\eta)\, \sinh^{2}(u-2\eta) \,, \label{unitarity} \ee and the crossing-unitarity relation \be M^{-1}_{1}\, R_{12}(-u-2\rho)^{t_{1}}\, M_{1}\, R_{21}(u)^{t_{1}} = \zeta(u+\rho) \,, \qquad \rho = -2\eta \,, \ee where $M$ is the diagonal $4 \times 4$ matrix \be M = \diag\left(e^{2\eta}\,, 1\,, 1\,, e^{-2\eta} \right) \,. \ee \subsection{K-matrix} For the right K-matrix $K^{R}(u)$, we take \cite{Nepomechie:2018wzp} {\small \be K^{R}(u) = \left( \begin{array}{c|c|cc|c|c} k_{-}(u)\, \id_{p \times p} & & & & & \\ \hline & g(u)\, \id_{(n-p) \times (n-p)} & & & &\\ \hline & & k_{1}(u) & k_{2}(u) & &\\ & & k_{2}(u) & k_{1}(u) & & \\ \hline & & & & g(u)\, \id_{(n-p) \times (n-p)} & \\ \hline & & & & & k_{+}(u)\, \id_{p \times p} \end{array} \right) \,, \label{KR} \ee} with \begin{align} k_{\mp}(u) &= e^{\mp 2u} \,, \non \\ g(u) &= \frac{\cosh(u-(n-2p)\eta + \frac{i\pi}{2}\varepsilon)}{\cosh(u+(n-2p)\eta - \frac{i\pi}{2}\varepsilon)} \,, \non \\ k_{1}(u) &= \frac{\cosh(u) \cosh((n-2p)\eta + \frac{i\pi}{2}\varepsilon)} {\cosh(u+(n-2p)\eta+\frac{i\pi}{2}\varepsilon)} \,, \non \\ k_{2}(u) &= -\frac{\sinh(u) \sinh((n-2p)\eta + \frac{i\pi}{2}\varepsilon)} {\cosh(u+(n-2p)\eta + \frac{i\pi}{2}\varepsilon)} \,, \label{functions} \end{align} see also \cite{Martins:2000xie}. Since we restrict our attention here to $D_{2}^{(2)}$, which corresponds to $n=1$, the matrix $K^{R}(u)$ is $4 \times 4$. There are two possible values of $p$ (namely, 0 and 1), and we now set $p=0$. (We consider the $p=1$ case in Sec. \ref{sec:p1}.)
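As a sanity check on (\ref{KR})-(\ref{functions}), the $4 \times 4$ matrix $K^{R}(u)$ for $n=1$, $p=0$, $\varepsilon=1$ can be built numerically: for these values the $k_{\mp}$ blocks are $0 \times 0$, so the layout reduces to the block-diagonal form $(g, \begin{smallmatrix} k_1 & k_2 \\ k_2 & k_1 \end{smallmatrix}, g)$. The following Python sketch (an illustration, not part of any computation in the paper) transcribes the formulas literally; one can check from (\ref{functions}) that $K^{R}(0) = \id$, which the sketch exhibits numerically.

```python
import cmath

def KR(u, eta, eps=1):
    """Right K-matrix K^R(u) of eqs. (KR)/(functions) for n = 1, p = 0.

    With n = 1, p = 0 the k_- / k_+ blocks are empty (0x0), so the 4x4
    layout is block-diagonal: [ g, (k1 k2; k2 k1), g ].
    """
    s = eta                        # (n - 2p) * eta with n = 1, p = 0
    sh = 1j * cmath.pi / 2 * eps   # the (i*pi/2)*epsilon shift
    g = cmath.cosh(u - s + sh) / cmath.cosh(u + s - sh)
    k1 = cmath.cosh(u) * cmath.cosh(s + sh) / cmath.cosh(u + s + sh)
    k2 = -cmath.sinh(u) * cmath.sinh(s + sh) / cmath.cosh(u + s + sh)
    return [[g, 0, 0, 0],
            [0, k1, k2, 0],
            [0, k2, k1, 0],
            [0, 0, 0, g]]
```

At $u=0$ one finds $g(0)=k_1(0)=1$ and $k_2(0)=0$ (since $\cosh$ is even and $\sinh(0)=0$), so the regularity property $K^{R}(0)=\id$ holds for both values of $\varepsilon$.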
As emphasized in the Introduction, we focus in this paper on the case $\varepsilon = 1$. For the left K-matrix $K^{L}(u)$, we take \cite{Nepomechie:2018wzp} \be K^{L}(u) = K^{R}(-u-\rho)^{t}\, M \,, \label{KLfundam} \ee so that the transfer matrix has quantum-group symmetry. \subsection{Transfer matrix} The transfer matrix $t(u)$ for an open integrable quantum spin chain of length $N$ is given by \cite{Sklyanin:1988yz, Nepomechie:2018wzp} \be t(u) = \tr_a K^{L}_{a}(u)\, T_a(u; \{\theta_{j}\})\, K^{R}_{a}(u)\, \widehat{T}_a(u; \{\theta_{j}\}) \,, \label{transfer} \ee where the monodromy matrices with inhomogeneities $\{ \theta_{1}\,, \ldots\,, \theta_{N} \}$ are given by \begin{align} T_a(u; \{\theta_{j}\}) &= R_{aN}(u-\theta_{N})\ R_{a N-1}(u-\theta_{N-1})\ \cdots R_{a1}(u-\theta_{1}) \,, \non \\ \hat T_a(u; \{\theta_{j}\}) &= R_{1a}(u+\theta_{1})\ \cdots R_{N-1 a}(u+\theta_{N-1})\ R_{Na}(u+\theta_{N}) \,, \label{monodromyinhomo} \end{align} and the trace in (\ref{transfer}) is over the (4-dimensional) auxiliary space. \subsection{Properties of the transfer matrix}\label{subsec:transferprops} By construction, the transfer matrix satisfies the commutativity property \be \left[ t(u) \,, t(v) \right] = 0 \,. 
\ee The transfer matrix also obeys the functional relations (see \cite{Nepomechie:2017hgw} and references therein) \be t(\theta_{j})\, t(\theta_{j} +2\eta) = \Delta(\theta_{j}) \,, \qquad j = 1, \ldots, N\,, \label{fr1} \ee where \begin{align} h^{(R)}(u) &= 2^{9} e^{-2\eta} \cosh(u-2\eta)\, \cosh^{2}(u-\eta)\, \sinh(u-5\eta)\, \sinh(u-4\eta)\, \non \\ & \qquad \times \sinh^{2}(2u-6\eta)\, \sinh(2u-4\eta)\, \sinh(u-\eta) \,, \\ h^{(L)}(u) &= 2^{7} e^{2\eta} \csch(u-7\eta)\cosh(u-6\eta)\, \sinh(u-4\eta) \non \\ & \qquad \times \sinh^{2}(2u-8\eta)\, \sinh^{2}(2u-4\eta)\, \sinh(2u-12\eta)\, \sinh(u-3\eta) \,, \\ h(u) &= h^{(L)}(u)\, h^{(R)}(u) \prod_{k=1}^{N} \zeta(u-4\eta + \theta_{k})\, \zeta(u-4\eta - \theta_{k})\,, \\ \Delta(u) &= \frac{h(u+4\eta)}{\zeta(2u)\, \zeta(2u+2\eta)\, \zeta(2u+4\eta)} \,, \end{align} and $\zeta(u)$ is defined in (\ref{unitarity}). The transfer matrix also has $i \pi$ periodicity \be t(u) = t(u + i \pi)\,, \label{periodicity} \ee as well as crossing symmetry \be t(u) = t(-u+2\eta) \,. \label{crossing} \ee Finally, the transfer matrix has the particular value \be \lim_{u\rightarrow 0} \frac{t(u)}{\sinh u} \Big\vert_{\{ \theta_{j} \} = 0} = 2^{4N} \sinh^{4N-3}(2\eta) \sinh^{2}(4\eta) \sinh(\eta) \csch(3\eta) \,. \label{particular} \ee \section{Eigenvalues of the transfer matrix}\label{sec:EV} This section, which is devoted to determining the transfer-matrix eigenvalues, contains most of our new results. We first show in Sec. \ref{subsec:eigprops} that the transfer-matrix properties listed in Sec. \ref{subsec:transferprops} do not suffice to determine the eigenvalues. We then formulate in Sec. \ref{subsec:conjecture} a conjecture for the eigenvalues, which is developed further in Sec. \ref{subsec:BAI}. \subsection{Properties of the eigenvalues}\label{subsec:eigprops} Let $\Lambda(u)$ denote the eigenvalues of the transfer matrix $t(u)$. 
It follows from the transfer-matrix properties (\ref{fr1}) - (\ref{particular}) that the eigenvalues satisfy similar properties: \be \Lambda(\theta_{j})\, \Lambda(\theta_{j} +2\eta) = \Delta(\theta_{j}) \,, \qquad j = 1, \ldots, N\,, \label{Lfr1} \ee \be \Lambda(u) = \Lambda(u + i \pi)\,, \label{Lperiodicity} \ee \be \Lambda(u) = \Lambda(-u+2\eta)\,, \label{Lcrossing} \ee \be \lim_{u\rightarrow 0} \frac{\Lambda(u)}{\sinh u}\Big\vert_{\{ \theta_{j} \} = 0} = 2^{4N} \sinh^{4N-3}(2\eta) \sinh^{2}(4\eta) \sinh(\eta) \csch(3\eta) \,. \label{Lparticular} \ee Moreover, the eigenvalues have the asymptotic behavior \be \Lambda(u) \sim 2 e^{\pm 4 N (u- \eta)} \left[\cosh( 4\eta(N-m+\tfrac{1}{2})) + 1 \right]\quad \mbox{ for } u \rightarrow \pm \infty \,, \label{Lasymptotic} \ee where $m$ is a non-negative integer. In order to proceed further, it is convenient to consider rescaled eigenvalues $\lambda(u)$, defined such that \be \Lambda(u) = \phi(u)\, \lambda(u) \,, \qquad \phi(u) = \frac{\sinh(u) \sinh(u-2\eta)}{\sinh(u+\eta) \sinh(u-3\eta)} \,. \label{rescaled} \ee The rescaled eigenvalues $\lambda(u)$ do not have any poles for finite $u$, and do not have zeros at $u=0, 2\eta$. The rescaled eigenvalues have the properties \be \lambda(\theta_{j})\, \lambda(\theta_{j} +2\eta) = \frac{\Delta(\theta_{j})}{\phi(\theta_{j})\, \phi(\theta_{j}+2\eta)} \,, \qquad j = 1, \ldots, N\,, \label{fffr1} \ee and \begin{align} \lambda(u) & = \lambda(u + i \pi)\,, \label{periodicity2}\\ \lambda(u) & = \lambda(-u+2\eta)\,, \label{crossing2} \\ \lambda(0) \Big\vert_{\{ \theta_{j} \} = 0} &= 2^{4N} \sinh^{4N-4}(2\eta) \sinh^{2}(4\eta) \sinh^{2}(\eta)\,, \label{part2} \\ \lambda(u) & \sim 2 e^{\pm 4 N (u- \eta)} \left[\cosh( 4\eta(N-m+\tfrac{1}{2})) + 1 \right]\quad \mbox{ for } u \rightarrow \pm \infty \,. 
\label{asym2} \end{align} The periodicity (\ref{periodicity2}) and asymptotic behavior (\ref{asym2}) imply that the eigenvalues have the form \be \lambda(u) = \sum_{k=-2N}^{2N}\lambda_{k} e^{2k u} \,, \label{expansionlambda} \ee where $\lambda_{k}$ are $u$-independent coefficients, of which there are $4N+1$. However, the crossing symmetry (\ref{crossing2}) relates the coefficients $\lambda_{k>0}$ to $\lambda_{k<0}$. Hence, there are $2N+1$ independent coefficients for $\lambda(u)$. The functional relations (\ref{fffr1}) provide $N$ constraints. The asymptotic behavior (\ref{asym2}) provides one constraint (the behavior at $-\infty$ follows from the behavior at $+\infty$ together with crossing symmetry), and (\ref{part2}) provides one more constraint, for a total of only $N+2$ constraints. Therefore, for $N>1$, these constraints do {\em not} suffice to determine $\lambda(u)$. We have tried to obtain additional constraints by formulating functional relations involving fused transfer matrices, as in e.g. \cite{Hao:2014fha, Li:2018xrb}. However, this introduces even more unknown coefficients (to describe the eigenvalues of the fused transfer matrices, similarly to (\ref{expansionlambda})), and does not seem to help. In the next subsection, we conjecture an expression for $\lambda(u)$ that is compatible with the above constraints. 
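To illustrate the counting argument above: writing $\lambda(u)=\sum_{k=-2N}^{2N}\lambda_{k}\, e^{2ku}$, the crossing symmetry $\lambda(u)=\lambda(-u+2\eta)$ forces $\lambda_{-k}=\lambda_{k}\, e^{4k\eta}$, so only the $2N+1$ coefficients with $k \geq 0$ are free. The following short numerical sketch (illustrative only, with randomly chosen free coefficients) verifies this relation.

```python
import cmath
import random

def crossing_symmetric_lambda(N, eta, seed=0):
    """Build lambda(u) = sum_{k=-2N}^{2N} c_k e^{2ku} with c_{-k} = c_k e^{4k eta}.

    Matching coefficients of e^{2ku} in lambda(u) = lambda(-u + 2*eta) gives
    exactly c_{-k} = c_k * e^{4*k*eta}, so only the 2N+1 coefficients with
    k >= 0 are independent; here they are chosen at random.
    """
    rng = random.Random(seed)
    c = {k: complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
         for k in range(2 * N + 1)}
    for k in range(1, 2 * N + 1):
        c[-k] = c[k] * cmath.exp(4 * k * eta)
    return lambda u: sum(ck * cmath.exp(2 * k * u) for k, ck in c.items())
```

Evaluating the resulting function at generic complex $u$ confirms $\lambda(u)=\lambda(-u+2\eta)$ to machine precision, for any choice of the $2N+1$ free coefficients.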
\subsection{Formulating a conjecture for $\lambda(u)$}\label{subsec:conjecture} In view of the result for $\varepsilon=0$ \cite{Nepomechie:2017hgw, Nepomechie:2018nvl}, let us assume that the rescaled eigenvalues $\lambda(u)$ have the form \be \lambda(u) = Z_{1}(u) + Z_{2}(u) + Z_{3}(u) + Z_{4}(u) \,, \label{Ansatz} \ee where \begin{align} Z_{1}(u) & = a(u)\, \frac{Q(u+\eta)\, Q(u+\eta+i \pi)}{Q(u-\eta)\, Q(u-\eta+i \pi)} \prod_{k=1}^{N}16\sinh^{2}(u-\theta_{k}-2\eta)\sinh^{2}(u+\theta_{k}-2\eta)\,, \non \\ Z_{2}(u) & = b(u)\, \frac{Q(u-3\eta)\, Q(u+\eta+i \pi)}{Q(u-\eta)\, Q(u-\eta+i \pi)} \non \\ & \quad \times \prod_{k=1}^{N}16\sinh(u-\theta_{k})\sinh(u-\theta_{k}-2\eta)\sinh(u+\theta_{k})\sinh(u+\theta_{k}-2\eta) \,, \non \\ Z_{3}(u) & = b(-u+2\eta)\, \frac{Q(u+\eta)\, Q(u-3\eta+i \pi)}{Q(u-\eta)\, Q(u-\eta+i \pi)} \non \\ & \quad \times \prod_{k=1}^{N}16\sinh(u-\theta_{k})\sinh(u-\theta_{k}-2\eta)\sinh(u+\theta_{k})\sinh(u+\theta_{k}-2\eta) \,, \non \\ Z_{4}(u) & = a(-u+2\eta)\,\frac{Q(u-3\eta)\, Q(u-3\eta+i \pi)}{Q(u-\eta)\, Q(u-\eta+i \pi)} \prod_{k=1}^{N}16\sinh^{2}(u-\theta_{k})\sinh^{2}(u+\theta_{k})\,, \label{Zs} \end{align} and \be Q(u) = \prod_{j=1}^{m}\sinh(\tfrac{1}{2}(u- u_{j}))\, \sinh(\tfrac{1}{2}(u+ u_{j})) \,, \label{Q} \ee where the functions $a(u)$ and $b(u)$ are still to be determined. The function $a(u)$ can readily be seen to be given by \be a(u) = \frac{\cosh^{2}(u-2\eta)}{\cosh^{2}(u-\eta)} \,, \label{afunc} \ee either from the functional relation (\ref{fffr1}), or by explicitly computing the reference-state eigenvalue for small values of $N$ and with values of the inhomogeneities chosen such that only $Z_{1}(u)$ is nonzero, as explained in detail in \cite{Nepomechie:2018nvl}. Note that $a(u)$ (\ref{afunc}) has a double-pole at $u=\eta + \frac{i \pi}{2}$. The function $b(u)$ must have the same double-pole as $a(u)$ in order for $\lambda(u)$ (\ref{Ansatz}) to be analytic. 
We therefore set \be b(u) = \frac{c(u)}{\cosh^{2}(u-\eta)} \,, \label{bform} \ee where $c(u)$ is finite at $u=\eta + \frac{i \pi}{2}$. The function $b(u)$ must also satisfy \be b(u) + b(-u+2\eta) = \frac{2\cosh(u)\cosh(u-2\eta)}{\cosh^{2}(u-\eta)} \ee in order to ensure that $\lambda(u)$ is correct for the reference state, for which $Q(u)=1$. Therefore, $c(u)$ satisfies \be c(u) + c(-u+2\eta) = 2\cosh(u)\cosh(u-2\eta) \,. \label{c1} \ee The condition that the residue of $\lambda(u)$ (\ref{Ansatz}) at the double-pole vanishes implies \be c'(\eta \pm \frac{i \pi}{2}) = 0 \,, \label{c2} \ee where prime denotes differentiation. Finally, let us assume that $b(u)$ (and therefore also $c(u)$) is $i \pi$ periodic \footnote{The weaker assumption $$ b(-u+2\eta) = b(u + i \pi) \,, \qquad b(u) = b(u + 2 i \pi) \,, $$ is also compatible with the $i \pi$ periodicity of $\lambda(u)$ (\ref{periodicity2}), and leads to $$ b(u) = \frac{\cosh(u)\cosh(u-2\eta)}{\cosh^{2}(u-\eta)} + \beta \frac{\sinh(u-\eta)}{\cosh^{2}(u-\eta)} \,, $$ where $\beta$ is a free parameter, cf. (\ref{bfunc}). However, even for $N=2$, we cannot find any value of $\beta$ for which (\ref{Ansatz}) gives all the eigenvalues.} \be b(u) = b(u + i \pi)\,, \ee and has the asymptotic behavior \be \lim_{u\rightarrow \pm \infty} b(u) = \mbox{ finite } \ee (which is compatible with (\ref{asym2})), which imply that $c(u)$ has the form \be c(u) = \sum_{k=-1}^{1}c_{k} e^{2k u} \,, \ee where $c_{k}$ are independent of $u$. The constraints (\ref{c1}) and (\ref{c2}) then uniquely determine $c(u)$ to be given by \be c(u) = \cosh(u)\cosh(u-2\eta) \,. \ee It follows from (\ref{bform}) that $b(u)$ is given by \be b(u) = b(-u+2\eta) = \frac{\cosh(u)\cosh(u-2\eta)}{\cosh^{2}(u-\eta)} \,. \label{bfunc} \ee In summary, we conjecture that the rescaled eigenvalues $\lambda(u)$ are given by (\ref{Ansatz}) and (\ref{Zs}), where $Q(u)$, $a(u)$ and $b(u)$ are given by (\ref{Q}), (\ref{afunc}) and (\ref{bfunc}), respectively.
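The constraints (\ref{c1}) and (\ref{c2}) on $c(u)=\cosh(u)\cosh(u-2\eta)$ are also easy to check numerically; the following is a minimal sketch (a consistency check, not part of the derivation), approximating the derivative by a central finite difference.

```python
import cmath

def c(u, eta):
    """c(u) = cosh(u) cosh(u - 2*eta), the unique solution of (c1)-(c2)."""
    return cmath.cosh(u) * cmath.cosh(u - 2 * eta)

def c_prime(u, eta, h=1e-6):
    """Central finite-difference approximation to c'(u)."""
    return (c(u + h, eta) - c(u - h, eta)) / (2 * h)
```

Indeed, $c(u)+c(-u+2\eta)=2\cosh(u)\cosh(u-2\eta)$ holds identically because $\cosh$ is even, and $c'(u)=\sinh(2u-2\eta)$ vanishes at $u=\eta\pm\frac{i\pi}{2}$ since $\sinh(\pm i\pi)=0$.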
This ansatz satisfies all the constraints (\ref{fffr1}) - (\ref{part2}). \subsection{Bethe ansatz}\label{subsec:BAI} We observe that this expression for $\lambda(u)$ can be factored as follows \footnote{For the case $\varepsilon=0$ \cite{Nepomechie:2017hgw, Nepomechie:2018nvl}, and presumably also for the periodic chain \cite{Reshetikhin:1987}, such a factorization is possible for all the eigenvalues, hence it may hold at the level of the transfer matrix.} \be \lambda(u) = \chi(u)\, \chi(u + i\pi) \,, \label{factored} \ee where $\chi(u)$ is defined by \begin{align} \chi(u) & = \frac{\cosh(u-2\eta)}{\cosh(u-\eta)} \frac{Q(u+\eta)}{Q(u-\eta)} \prod_{k=1}^{N}4\sinh(u-\theta_{k}-2\eta)\sinh(u+\theta_{k}-2\eta) \non \\ & + \frac{\cosh(u)}{\cosh(u-\eta)} \frac{Q(u-3\eta)}{Q(u-\eta)} \prod_{k=1}^{N}4\sinh(u-\theta_{k})\sinh(u+\theta_{k}) \,, \label{chi} \end{align} which satisfies $\chi(-u+2\eta) = \chi(u)$. The requirement that the residues of $\chi(u)$ vanish at $u=u_{j} + \eta$ leads to the Bethe equations for $\{ u_{1}, \ldots, u_{m} \}$ \begin{align} \MoveEqLeft \prod_{l=1}^{N} \frac{\sinh(u_{j} -\theta_{l} + \eta)}{\sinh(u_{j} -\theta_{l} - \eta)} \frac{\sinh(u_{j} + \theta_{l} + \eta)}{\sinh(u_{j} + \theta_{l} - \eta)} \non \\ & = \frac{\sinh(u_{j}+\eta)\, \cosh(u_{j}-\eta)} {\sinh(u_{j}-\eta)\, \cosh(u_{j}+\eta)} \prod_{k=1;\, k \ne j}^{m} \frac{\sinh(\tfrac{1}{2}(u_{j} - u_{k}) + \eta)} {\sinh(\tfrac{1}{2}(u_{j} - u_{k}) - \eta)} \frac{\sinh(\tfrac{1}{2}(u_{j} + u_{k}) + \eta)} {\sinh(\tfrac{1}{2}(u_{j} + u_{k}) - \eta)} \,, \non \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad j = 1, \ldots, m\,. \label{BE} \end{align} Unfortunately, this Bethe ansatz solution is {\em not} complete: we have checked numerically for small values of $N$ that this solution gives some, but not all, of the transfer-matrix eigenvalues. 
However, for every eigenvalue that we {\em do} find, the number of Bethe roots ($m$) is even, and all the Bethe roots come in pairs separated by exactly $i \pi$ \be \{ u_{j}\,, u_{j} + i \pi \}\,, \qquad j = 1\,, \ldots\,, \frac{m}{2} \,. \label{pairs} \ee Assuming that the Bethe roots always form pairs of the form (\ref{pairs}), then the Q-function (\ref{Q}) becomes (up to an irrelevant overall factor) \be Q(u) = \prod_{j=1}^{\frac{m}{2}}\sinh(u- u_{j})\, \sinh(u+ u_{j}) \,, \label{QQ} \ee and therefore $Q(u)$ becomes $i\pi$ periodic \be Q(u) = Q(u + i\pi) \,. \ee It follows that $\chi(u)$ (\ref{chi}) also becomes $i\pi$ periodic, and therefore $\lambda(u)$ (\ref{factored}) becomes a perfect square \be \lambda(u) = \chi(u)^{2} \,. \label{square} \ee The requirement that the residues of $\chi(u)$ vanish at $u=u_{j} + \eta$ now leads to the simplified Bethe equations \begin{align} \MoveEqLeft \prod_{l=1}^{N} \frac{\sinh(u_{j} -\theta_{l} + \eta)}{\sinh(u_{j} -\theta_{l} - \eta)} \frac{\sinh(u_{j} + \theta_{l} + \eta)}{\sinh(u_{j} + \theta_{l} - \eta)} \non \\ & = \frac{\sinh(u_{j}+\eta)} {\sinh(u_{j}-\eta)} \prod_{k=1;\, k \ne j}^{\frac{m}{2}} \frac{\sinh(u_{j} - u_{k} + 2\eta)} {\sinh(u_{j} - u_{k} - 2\eta)} \frac{\sinh(u_{j} + u_{k} + 2\eta)} {\sinh(u_{j} + u_{k} - 2\eta)} \,, \non \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad j = 1, \ldots, \frac{m}{2}\,. \label{BEsimple} \end{align} Interestingly, these Bethe equations are similar to those for the spin-1/2 XXZ chain. We have solved the simplified Bethe equations (\ref{BEsimple}) with all $\theta_{l}=0$ numerically (for some generic value of anisotropy parameter $\eta$) for $N=1, 2, \ldots, 5$; we have then computed the corresponding eigenvalues (for some generic value of spectral parameter $u$) using (\ref{rescaled}), (\ref{chi}) and (\ref{square}); and we have compared with the eigenvalues obtained by direct diagonalization of the transfer matrix (\ref{transfer}). 
The results are summarized in Tables \ref{table:N1} - \ref{table:N5}. For a given value of $N$, each table reports the degeneracy (the number of times a given eigenvalue appears), the multiplicity (the number of times a given degeneracy appears), and $m$ (twice the number of Bethe roots of the simplified Bethe equations (\ref{BEsimple}) that are needed to describe an eigenvalue with the given degeneracy). A question mark (?) means that an eigenvalue with the given degeneracy cannot be described by this Bethe ansatz. For example, from Table \ref{table:N2}, we can see that for $N=2$, there is one eigenvalue with degeneracy 5 which corresponds to the reference state ($m=0$); there are two eigenvalues with degeneracy 1 which are each described by 1 Bethe root ($m=2$); there is one eigenvalue with degeneracy 3 which is described by 2 Bethe roots ($m=4$); and there is one eigenvalue with degeneracy 6 which cannot be described by this Bethe ansatz. Note that $m$ takes even values from 0 to $2N$. (We do not report the actual Bethe roots and eigenvalues in order to avoid having prohibitively large tables.) An inspection of these tables shows that our Bethe ansatz describes all the eigenvalues with {\em odd} degeneracy, but does not describe any of the eigenvalues with even degeneracy. We conjecture that this is true for generic values of $\eta$ and for all values of $N$. For a given value of $N$, let ${\cal N}_{odd}$ and ${\cal N}_{even}$ denote the total number of eigenvalues (given by the product degeneracy $\times$ multiplicity) with odd and even degeneracy, respectively. Clearly, \be {\cal N}_{odd} + {\cal N}_{even} = 4^{N} \,. \ee From Tables \ref{table:N1}-\ref{table:N5}, we can see that the fraction of eigenvalues with odd degeneracy rapidly decreases as $N$ increases, as summarized in Table \ref{table:oddfraction}. \begin{table}[h!] 
\centering \begin{tabular}{|c|ccccc|} \hline $N$ & 1 & 2 & 3 & 4 & 5 \\ \hline ${\cal N}_{odd}/4^{N}$ & 1 & 0.625 & 0.375 & 0.210938 & 0.117188 \\ \hline \end{tabular} \caption{Fraction of eigenvalues with odd degeneracy}\label{table:oddfraction} \end{table} We expect that the ``missing'' eigenvalues (i.e., the eigenvalues with even degeneracy, which cannot be described by this Bethe ansatz) {\em cannot} be expressed as perfect squares, as in (\ref{square}). We have verified this for $N=2$, in which case all the eigenvalues can be explicitly computed as functions of $u$ and $\eta$. \subsubsection{Degeneracies and symmetries}\label{subsubsec:degensym} On the basis of $U_{q}(B_{1})$ symmetry alone, one would expect that every eigenvalue of the transfer matrix has odd degeneracy \cite{Nepomechie:2017hgw, Nepomechie:2018wzp}. For example, for $N=2$: \be \left({\bf 3} \oplus {\bf 1}\right)^{\otimes 2} = 2\cdot {\bf 1} \oplus 3 \cdot{\bf 3} \oplus {\bf 5} \,; \ee and for $N=3$: \be \left({\bf 3} \oplus {\bf 1}\right)^{\otimes 3} = 5 \cdot {\bf 1} \oplus 9 \cdot {\bf 3} \oplus 5 \cdot {\bf 5} \oplus {\bf 7} \,. \ee However, we can easily see from Tables \ref{table:N2} and \ref{table:N3} that the actual degeneracies are {\em higher}: for $N=2$, one pair of ${\bf 3}$'s becomes degenerate (giving a 6-fold degenerate eigenvalue); and for $N=3$, two pairs of ${\bf 5}$'s become degenerate (giving two 10-fold degenerate eigenvalues), three pairs of ${\bf 3}$'s become degenerate (giving three 6-fold degenerate eigenvalues), and one pair of ${\bf 1}$'s becomes degenerate (giving a 2-fold degenerate eigenvalue). We have conjectured in \cite{Nepomechie:2017hgw, Nepomechie:2018wzp} that these higher (even) degeneracies are due to an additional symmetry of the transfer matrix that {\em doubles} the degeneracy of certain eigenvalues, for both $\varepsilon=0$ and $\varepsilon=1$. 
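The tensor-product decompositions above can be cross-checked with a short recursion on spin multiplicities. The sketch below (our own helper, not part of the analysis) tensors $({\bf 3} \oplus {\bf 1})$ repeatedly, using $j \otimes 1 = (j-1) \oplus j \oplus (j+1)$ (truncated at $j=0$) and $j \otimes 0 = j$:

```python
from collections import Counter

def decompose(N):
    # multiplicities of spin-j irreps in (3 ⊕ 1)^{⊗N} = (spin-1 ⊕ spin-0)^{⊗N}
    spins = Counter({0: 1})                      # empty tensor product
    for _ in range(N):
        new = Counter()
        for j, mult in spins.items():
            new[j] += mult                       # tensor with the singlet
            for jp in range(abs(j - 1), j + 2):  # tensor with the triplet
                new[jp] += mult
        spins = new
    return {2 * j + 1: m for j, m in sorted(spins.items())}  # {dim: multiplicity}

print(decompose(2))  # {1: 2, 3: 3, 5: 1}
print(decompose(3))  # {1: 5, 3: 9, 5: 5, 7: 1}
```

The total dimension $\sum (\text{dim} \times \text{multiplicity})$ reproduces $4^{N}$, as it must.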
Indeed, for $N=2$, we have explicitly constructed an involutory matrix that maps one ${\bf 3}$ to another ${\bf 3}$, commutes with all the $U_{q}(B_{1})$ generators, and commutes with $t(u)$. However, an extension of this construction to general values of $N$ is still not known. Our new observation here is that the ``missing'' eigenvalues are precisely those that would become degenerate as the result of this additional symmetry. \section{The case $p=1$}\label{sec:p1} The results discussed so far in Secs. \ref{subsec:transferprops} and \ref{sec:EV} are for $p=0$. We now consider the case $p=1$. To this end, it is convenient to now change notation so that the dependence on $p$ becomes manifest, e.g. $K^{R, L}(u) \mapsto K^{R, L}(u, p)$, $t(u) \mapsto t(u, p)$, $\Lambda(u) \mapsto \Lambda(u, p)$, etc. In particular, the result (\ref{rescaled}) becomes \be \Lambda(u, 0) = \phi(u, 0)\, \lambda(u) \,, \qquad \phi(u, 0) = \frac{\sinh(u) \sinh(u-2\eta)}{\sinh(u+\eta) \sinh(u-3\eta)} \,. \label{rescaledp0} \ee For $p=1$, we obtain in a similar way \be \Lambda(u, 1) = \phi(u, 1)\, \lambda(u) \,, \qquad \phi(u, 1) = \frac{\sinh(u) \sinh(u-2\eta)}{\sinh^{2}(u-\eta)} \,, \label{rescaledp1} \ee where $\lambda(u)$ is again given by (\ref{Ansatz}), (\ref{factored}), etc. The Bethe equations are therefore also the same as before. In other words, only the overall factor changes. This result is consistent with the $p\leftrightarrow n-p$ duality symmetry that was mentioned in the Introduction. 
Indeed, the transfer matrix has the symmetry \cite{Nepomechie:2018wzp} \be {\cal U}\, t(u,p)\, {\cal U}^{-1} = f(u,p)\, t(u,n-p) \,, \label{duality} \ee where ${\cal U}$ is a certain operator acting in the quantum space, and $f(u,p)$ is a scalar function given by \be f(u,p) = f^{L}(u,p)\, f^{R}(u,p) \,, \label{ffunction} \ee with \begin{align} f^{R}(u,p) &= \frac{\cosh(u-(n-2p)\eta + \frac{i\pi}{2}\varepsilon)} {\cosh(u+(n-2p)\eta - \frac{i\pi}{2}\varepsilon)} \,, \non \\ f^{L}(u,p) &= \frac{\cosh(u-(n+2p)\eta + \frac{i\pi}{2}\varepsilon)} {\cosh(u-(3n-2p)\eta - \frac{i\pi}{2}\varepsilon)} \,. \end{align} It follows that the corresponding eigenvalues are related by \be \Lambda(u,p) = f(u,p)\, \Lambda(u,n-p) \,. \label{duality2} \ee Substituting (\ref{rescaledp0}) and (\ref{rescaledp1}) into (\ref{duality2}) with $n=p=1$ leads to the constraint \be f(u,1) = \frac{\phi(u, 1)}{\phi(u, 0)} = \frac{\sinh(u+\eta) \sinh(u-3\eta)}{\sinh^{2}(u-\eta)} \,, \ee which is indeed consistent with (\ref{ffunction}) for $\varepsilon=1$. \section{An ansatz for the missing eigenvalues?}\label{sec:BAII} Let us now consider the following ansatz for the ``missing'' eigenvalues with $p=0$ \be \lambda(u) = Z_{1}(u) - Z_{2}(u) - Z_{3}(u) + Z_{4}(u) + Z_{5}(u)\,, \label{tAnsatz} \ee where the functions $Z_{1}(u), \ldots, Z_{4}(u)$ are given as before by (\ref{Zs}), and \begin{align} Z_{5}(u) = 4b(u)\, \prod_{k=1}^{N}16\sinh(u-\theta_{k})\sinh(u-\theta_{k}-2\eta)\sinh(u+\theta_{k})\sinh(u+\theta_{k}-2\eta)\,. \label{tZs} \end{align} This ansatz is very similar to the previous one (\ref{Ansatz}), except for some signs and the shift of all the eigenvalues by $Z_{5}(u)$, which does not depend on the Q-function. This ansatz also satisfies the constraints (\ref{fffr1})-(\ref{part2}). It is motivated by a preliminary algebraic Bethe ansatz analysis, which is presented in Appendix \ref{sec:ABA}. 
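As an aside, the consistency constraint on $f(u,1)$ obtained in Sec.~\ref{sec:p1} can also be confirmed numerically. The sketch below (our own check, with arbitrary generic values of $u$ and $\eta$) evaluates $f^{L}f^{R}$ from (\ref{ffunction}) at $n=p=\varepsilon=1$ and compares it with $\phi(u,1)/\phi(u,0)$:

```python
import cmath

sh, ch = cmath.sinh, cmath.cosh
eta = 0.37            # generic anisotropy (arbitrary test value)
u = 0.61 + 0.29j      # generic spectral parameter
n, p, eps = 1, 1, 1
twist = 1j * cmath.pi / 2 * eps

# f = f^L * f^R from (ffunction)
fR = ch(u - (n - 2 * p) * eta + twist) / ch(u + (n - 2 * p) * eta - twist)
fL = ch(u - (n + 2 * p) * eta + twist) / ch(u - (3 * n - 2 * p) * eta - twist)
f = fL * fR

# phi(u,1)/phi(u,0) from (rescaledp0)-(rescaledp1)
ratio = sh(u + eta) * sh(u - 3 * eta) / sh(u - eta) ** 2
print(abs(f - ratio))  # numerically zero
```

The agreement rests on the identity $\cosh(x \pm i\pi/2) = \pm i\sinh(x)$, which converts the $\varepsilon=1$ twists into the $\sinh$ factors of the stated constraint.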
The expression (\ref{tAnsatz})-(\ref{tZs}) for the eigenvalues, up to the shift, can be factored as follows \be \lambda(u) = \tilde{\chi}(u)\, \tilde{\chi}(u + i\pi) + Z_{5}(u) \,, \label{tfactored} \ee where $\tilde{\chi}(u)$ is defined by \begin{align} \tilde{\chi}(u) & = \frac{\cosh(u-2\eta)}{\cosh(u-\eta)} \frac{Q(u+\eta + i \pi)}{Q(u-\eta)} \prod_{k=1}^{N}4\sinh(u-\theta_{k}-2\eta)\sinh(u+\theta_{k}-2\eta) \non \\ & - \frac{\cosh(u)}{\cosh(u-\eta)} \frac{Q(u-3\eta + i \pi)}{Q(u-\eta)} \prod_{k=1}^{N}4\sinh(u-\theta_{k})\sinh(u+\theta_{k}) \,, \label{tchi} \end{align} which satisfies $\tilde{\chi}(-u+2\eta) = -\tilde{\chi}(u)$. Requiring that the residues of $\tilde{\chi}(u)$ vanish at $u=u_{j} + \eta$ leads to the following Bethe equations \begin{align} \MoveEqLeft \prod_{l=1}^{N} \frac{\sinh(u_{j} -\theta_{l} + \eta)}{\sinh(u_{j} -\theta_{l} - \eta)} \frac{\sinh(u_{j} + \theta_{l} + \eta)}{\sinh(u_{j} + \theta_{l} - \eta)} \non \\ & = \prod_{k=1;\, k \ne j}^{m} \frac{\cosh(\tfrac{1}{2}(u_{j} - u_{k}) + \eta)} {\cosh(\tfrac{1}{2}(u_{j} - u_{k}) - \eta)} \frac{\cosh(\tfrac{1}{2}(u_{j} + u_{k}) + \eta)} {\cosh(\tfrac{1}{2}(u_{j} + u_{k}) - \eta)} \,, \non \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad j = 1, \ldots, m\,. \label{BEII} \end{align} These Bethe equations are unusual, as they involve $\cosh$ instead of $\sinh$ on the RHS. (For the $\varepsilon=0$ case \cite{Martins:2000xie, Nepomechie:2017hgw, Nepomechie:2018nvl}, the Bethe equations are the same as (\ref{BEII}) except with $\sinh$ on the RHS.) The ansatz (\ref{tfactored})-(\ref{BEII}) is correct for $m=1$. Indeed, we prove it in Appendix \ref{sec:ABA}, and we have confirmed numerically that this ansatz with $m=1$ correctly describes eigenvalues with even degeneracy for $N =2$ (degeneracy 6), $N =3$ (degeneracy 10) and $N =4$ (degeneracy 14). We expect that, for general $N$, this ansatz with $m=1$ describes eigenvalues with degeneracy $4N-2$. 
We also find numerically that this ansatz with $m=3$ describes the eigenvalue for $N =3$ with degeneracy 2. Unfortunately, we have not succeeded in finding more examples of even-degeneracy eigenvalues with $m>1$. Hence, it appears that this ansatz cannot account for all the missing eigenvalues. \section{Conclusions}\label{sec:conclusion} One of the aims of this paper is to draw attention to the unexpected difficulty in solving the integrable quantum-group invariant $D_{n+1}^{(2)}$ spin chains with $\varepsilon=1$. We focused for simplicity on the case $n=1$. Using standard assumptions, we arrived at a Bethe ansatz solution (\ref{factored})-(\ref{BE}) that is not complete. Indeed, the argument in Sec. \ref{subsec:conjecture} could be regarded as a ``no-go theorem'', which we hope will motivate others to find a better approach. We believe that we did succeed in describing a subset of the transfer-matrix eigenvalues, namely, those with odd degeneracy. The remarkably simple solution (\ref{square})-(\ref{BEsimple}) suggests that there may be a connection to the XXZ model. Unfortunately, as $N$ increases, the fraction of eigenvalues with odd degeneracy rapidly decreases. The ``missing'' eigenvalues (namely, those with even degeneracy) may be related to a higher symmetry of the transfer matrix, as discussed in Sec. \ref{subsubsec:degensym}. It would also be interesting to understand this symmetry better. The ansatz (\ref{tfactored})-(\ref{BEII}) may also provide a hint about the eventual complete solution. \section*{Acknowledgments} We thank Niall Robertson and Hubert Saleur for encouragement, and Nicolas Cramp\'e for discussions. RN also thanks Wen-Li Yang for valuable discussions, and for his warm hospitality at the Institute of Modern Physics at Northwest University in Xian. AR also thanks Marius de Leeuw, Anton Pribytok and Paul Ryan for helpful discussions.
RN was supported in part by the Chinese Academy of Sciences President's International Fellowship Initiative Grant No. 2018VMA0017, and by a Cooper fellowship. AR was supported by the S\~ao Paulo Research Foundation FAPESP under processes \# 2017/03072-3 and \# 2015/00025-9. RP thanks the Institut Denis Poisson for hospitality, and acknowledges support from FAPESP and the Coordination for the Improvement of Higher Education Personnel (CAPES), processes \# 2017/02987-8 and \# 88881.171877/2018-01.
\section{Introduction} \label{sec:intro} In many practical situations a user is able to select the best options among a finite set of choices; however, they are unable to state explicitly the motivations for their choices. A notable example is in industrial applications where the manufactured product has to satisfy several qualitative requirements that are known to trained staff, but such requirements were never expressed explicitly. In such cases, the definition of quantitative objectives would allow for an explicit multi-objective optimization which would lead to better options. However, measuring the objectives in a quantitative way is often technically difficult and costly. In this context, we would like to improve the quality of the manufactured product by directly using the feedback provided by the user's choices (``these products are better than those''), i.e., we would like to learn the ``choice function'' of the user and find the inputs that optimize this function. In this paper we propose a Bayesian framework to learn choice functions from a dataset of observed choices. Our framework learns a latent mapping of objectives consistent with the given choices; therefore, we are also able to optimize them with a multi-objective Bayesian optimization algorithm. \section{Background} The main contributions of this paper leverage four topics: (1) Bayesian Optimisation (BO); (2) preferential BO; (3) multi-objective BO; (4) choice function learning. In this section we briefly review the state of the art of each topic. \subsection{Bayesian Optimisation (BO)} BO \cite{jones1998efficient} aims to find the global maximum of an unknown function which is expensive to evaluate. For a scalar real-valued function $g$ on a domain $\Omega \subset \mathbb{R}^{n_x}$, the goal is to find a global maximiser $ {\bf x}^o =\arg \max_{{\bf x} \in \Omega} g({\bf x}) $.
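Before reviewing the ingredients, it may help to see the overall optimisation loop in code. The sketch below is deliberately crude and entirely our own: a nearest-neighbour stand-in plays the role of the probabilistic surrogate introduced next, and a grid search replaces a proper acquisition optimizer:

```python
def g(x):                         # expensive black-box objective (hypothetical)
    return -(x - 0.3) ** 2

def surrogate(x, data):
    # stand-in for a probabilistic surrogate: nearest observed value as the
    # mean, distance to the nearest observation as the "uncertainty"
    dist, y = min((abs(x - xi), yi) for xi, yi in data)
    return y, dist

def acquisition(x, data, beta=1.0):
    mu, sd = surrogate(x, data)   # upper-confidence-bound style score
    return mu + beta * sd

data = [(x, g(x)) for x in (0.0, 1.0)]        # initial design
grid = [i / 100 for i in range(101)]
for _ in range(20):                           # sequential loop
    x_next = max(grid, key=lambda x: acquisition(x, data))
    data.append((x_next, g(x_next)))          # one expensive evaluation
best_x, best_y = max(data, key=lambda p: p[1])
print(best_x)                                 # close to the maximiser 0.3
```

Even with this toy surrogate, the score trades off high observed values against unexplored regions, which is the exploration-exploitation trade-off formalized below with Gaussian Processes and acquisition functions.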
BO formulates this as a sequential decision problem -- a trade-off between learning about the underlying function $g$ (exploration) and capitalizing on this information in order to find the optimum ${\bf x}^o$ (exploitation). BO relies on a probabilistic surrogate model, usually a Gaussian Process (GP) \cite{rasmussen2006gaussian}, to provide a posterior distribution over $g$ given a dataset $\mathcal{D}=\{({\bf x}_i, g({\bf x}_i)): ~~i=1,2,\dots,N\}$ of previous evaluations of $g$. It then employs an \textit{acquisition function} (e.g., Expected Improvement \cite{jones1998efficient,mockus1978application}, Upper Credible Bound \cite{srinivas2009gaussian}) to select the next candidate option (solution) ${\bf x}_{N+1}$. While the true function $g$ is expensive to evaluate, the surrogate-based acquisition function is not, and it can thus be efficiently optimized to compute an optimal candidate to be evaluated on $g$. This process is repeated sequentially until some stopping criterion is met. \subsection{Preferential Bayesian Optimisation (PBO)} In many applications, evaluating $g$ can be either too costly or not always possible. In these cases, the objective function $g$ may only be accessed via preference judgments, such as ``this is better than that'', between two candidate options ${\bf x}_i,{\bf x}_j$, as in A/B tests or recommender systems (pairwise comparisons are usually called \textit{duels} in the BO and bandits literature). In such situations, PBO \cite{shahriari2015taking} can be used. This approach requires the agent to simply compare the final outcomes of two different candidate options and indicate which they prefer; that is, the evaluation is binary: either ${\bf x}_i$ ``better than'' ${\bf x}_j$ or ${\bf x}_i$ ``worse than'' ${\bf x}_j$. In the PBO context, the state-of-the-art surrogate model is based on a method for preference learning developed in \cite{ChuGhahramani_preference2005}.
This method assumes that there is an unobservable latent function value $f({\bf x}_i)$ associated with each training sample ${\bf x}_i$, and that the function values $\{f({\bf x}_i): ~~i=1,2,\dots,N\}$ preserve the preference relations observed in the dataset, that is, $f({\bf x}_i)\geq f({\bf x}_j)$ whenever ${\bf x}_i$ is ``better than'' ${\bf x}_j$. As in the BO setting, by starting with a GP prior on $f$ and using the likelihood defined in \cite{ChuGhahramani_preference2005}, we obtain a posterior distribution over $f$ given a dataset of preferences. This posterior distribution is not a GP, and several approximations \cite{ChuGhahramani_preference2005,houlsby2011bayesian} have been proposed. In \cite{houlsby2011bayesian}, the authors showed that GP preference learning is equivalent to GP classification with a transformed kernel function. By using this reformulation, the authors easily derive two approximations for the posterior based on (i) the Laplace Approximation (LP) \cite{mackay1996bayesian,williams1998bayesian}; (ii) Expectation Propagation (EP) \cite{minka2001family}. The LP approximation was then used to develop a framework for PBO \cite{shahriari2015taking}, and a new acquisition function, inspired by Thompson sampling, was proposed in \cite{gonzalez2017preferential}. More recently, \cite{benavoli2020preferential} showed that the posterior of GP preference learning is a Skew GP \cite{Benavoli_etal2020,benavoli2021}. Based on this exact model, the authors derived a PBO framework which outperformed both LP and EP based PBO. Although in this work we focus on GPs as the surrogate model, it is worth mentioning alternative approaches for PBO developed by \cite{BDG08,pmlr-v32-zoghi14,sadigh2017active,bemporad2019active}. PBO was recently extended \cite{siivola2020preferential} to the batch case by allowing agents to express preferences for a batch of options.
However, as depicted in Figure \ref{fig:1}, where an agent expresses preferences among 5 options, in batch PBO there can be only one batch winner. In fact, PBO assumes that two options are always comparable.\footnote{More precisely, the underlying GP-based model implies a total order, and so two options may also be equivalent. When PBO is applied to the multi-objective case, such as, for instance, $[g_1({\bf x}_i),g_2({\bf x}_i)]$, it is therefore assumed that the agent's preferences are determined by a weighted combination of the objectives $w_1 g_1({\bf x}_i)+w_2 g_2({\bf x}_i)$.} \begin{figure}[htp] \begin{center} \includegraphics[width=5cm]{cartesian} \end{center} \caption{Batch PBO: the preferred option is in green.} \label{fig:1} \end{figure} \subsection{Multi-objective (MO) optimization} The goal of MO optimization is to identify the set of \textit{Pareto optimal} options (solutions), such that any improvement in one objective means deteriorating another. Without loss of generality, we assume the goal is to maximize all objectives. Let ${\bf g}({\bf x}):\Omega \rightarrow \mathbb{R}^{n_o}$ be a vector-valued objective function with ${\bf g}({\bf x})=[g_1({\bf x}),\dots,g_{n_o}({\bf x})]^\top$, where $n_o$ is the number of objectives. We recall the notions of Pareto dominated options and non-dominated set. \begin{definition}[Pareto dominance] \label{def:paretodom} Consider a set of options $\mathcal{X} \subset \Omega$. An option ${\bf x}_1 \in \mathcal{X}$ is said to Pareto dominate another option ${\bf x}_2 \in \mathcal{X}$, denoted as ${\bf x}_1 \succ {\bf x}_2$, if both of the following conditions hold: \begin{enumerate} \item for all $j \in \{1,2,\dots,n_o\}$, $g_j ({\bf x}_1) \geq g_j ({\bf x}_2 )$; \item $\exists ~j \in \{1,2,\dots,n_o\}$, such that $g_j ({\bf x}_1) > g_j ({\bf x}_2 )$.
\end{enumerate} \end{definition} \begin{definition}[Non-dominated set] \label{def:paretoset} Among a set of options $A=\{{\bf x}_1,\dots,{\bf x}_m\}$, the non-dominated set of options $A'$ consists of those that are not dominated by any member of $A$, i.e., \begin{equation*} A' = \{ {\bf x} \in A : \nexists {\bf x}^\prime \in A \text{ such that } {\bf x}^\prime \succ {\bf x} \}. \end{equation*} \end{definition} Given the set of options $\mathcal{X}$, MO aims to find the non-dominated set of options $\mathcal{X}^{nd}$, called the \textit{Pareto set}. The set of evaluations $\mathbf{g}(\mathcal{X}^{nd})$ is called the \textit{Pareto front}. MO BO has so far been developed only for standard (non-preferential) BO, where the multiple objectives can be evaluated directly. Many approaches rely on scalarisation to transform the MO problem into a single-objective one, like ParEGO \cite{knowles2006parego} and TS-TCH \cite{paria2020flexible} (which randomly scalarize the objectives and use Expected Improvement and, respectively, Thompson Sampling). \cite{keane2006statistical} derived an \textit{expected improvement} criterion with respect to multiple objectives. \cite{ponweiser2008multiobjective} proposed a \textit{hypervolume-based infill} criterion, where the improvements are measured in terms of hypervolume (of the Pareto front) increase. Other acquisition functions have been proposed in \cite{emmerich2011hypervolume,picheny2015multiobjective,hernandez2016predictive,wada2019bayesian,belakaria2019max}. The most widely used acquisition function for MO BO is \textit{expected hypervolume improvement}. In fact, maximizing the hypervolume has been shown to produce very accurate estimates \cite{zitzler2003performance,couckuyt2014fast,hupkens2015faster,emmerich2016multicriteria,yang2017computing,yang2019multi} of the Pareto front. \subsection{Choice function} \label{sec:choice} Individuals are often confronted with the situation of choosing between several options (alternatives).
These alternatives can be goods to be purchased, candidates in elections, food, etc. We model the options that an agent has to choose among as real-valued vectors ${\bf x} \in \mathbb{R}^{n_x}$ and identify the sets of options as finite subsets of $\mathbb{R}^{n_x}$. Let $\mathcal{Q}$ denote the set of all such finite subsets of $\mathbb{R}^{n_x}$. \begin{definition} A choice function $C$ is a set-valued operator on sets of options. More precisely, it is a map $C: \mathcal{Q} \rightarrow \mathcal{Q}$ such that, for any set of options $A \in \mathcal{Q}$, the corresponding value of $C$ is a subset $C(A)$ of $A$ (see for instance \cite{aleskerov2007utility}). \end{definition} The interpretation of a choice function is as follows. For a given option set $A \in \mathcal{Q}$, the statement that an option ${\bf x}_j \in A$ is rejected from $A$ (that is, ${\bf x}_j \notin C(A)$) means that there is at least one option ${\bf x}_i \in A$ that an agent strictly prefers over ${\bf x}_j$. The set of rejected options is denoted by $R(A)$ and is equal to $A\backslash C(A)$. Therefore, choice functions represent non-binary choice models, so they are more general than preferences. \\ It is important to stress again that the statement ${\bf x}_j \notin C(A)$ implies there is at least one option ${\bf x}_i \in A$ that an agent strictly prefers over ${\bf x}_j$. However, the agent is not required to tell us which option(s) in $C(A)$ they strictly prefer to ${\bf x}_j$. This makes choice functions a very easy-to-use tool to express choices. As depicted in Figure \ref{fig:2}, the agent needs to tell us only the options they selected (in green) without providing any justification for their choices (we do not know which option in the green set dominates ${\bf x}_4$).
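The rejection mechanism just described can be emulated with the Pareto-dominance notions of Definitions~\ref{def:paretodom}-\ref{def:paretoset}. The following small sketch (with hypothetical two-dimensional latent values for the five options of Figure~\ref{fig:2}; all names are ours) recovers the choice set as a non-dominated set:

```python
def dominates(g1, g2):
    # Pareto dominance: >= in every objective and > in at least one
    return all(a >= b for a, b in zip(g1, g2)) and any(a > b for a, b in zip(g1, g2))

def non_dominated(options):
    # options: dict mapping option name -> tuple of latent objective values
    return {x for x, gx in options.items()
            if not any(dominates(gy, gx) for y, gy in options.items() if y != x)}

# hypothetical 2-d latent values for the five options of Figure 2
A = {"x1": (1.0, 3.0), "x2": (2.0, 2.0), "x3": (3.0, 1.0),
     "x4": (1.5, 1.5), "x5": (0.5, 0.5)}
print(sorted(non_dominated(A)))  # ['x1', 'x2', 'x3']
```

Here ${\bf x}_4$ and ${\bf x}_5$ are rejected because some (unreported) option dominates them, while the selected options are mutually incomparable.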
\begin{figure}[htp] \begin{center} \includegraphics[width=5cm]{cartesian_choice} \end{center} \caption{Example of a choice function for $A=\{x_1,x_2,\dots,x_5\}$: $C(A)=\{x_1,x_2,x_3\}$ highlighted in green and $R(A)=\{x_4,x_5\}$ in red.} \label{fig:2} \end{figure} By following this interpretation, the set $C(A)$ can also be seen as the \textit{non-dominated set} in the Pareto sense for some latent function. In other words, let us assume that there is a latent vector function ${\bf g}({\bf x}_i)=[g_1({\bf x}_i),\dots,g_{n_e}({\bf x}_i)]^\top$, for some dimension $n_e$, which embeds the options ${\bf x}_i$ into a space $\mathbb{R}^{n_e}$. The choice set can then be represented through a Pareto set of non-dominated options. For example, in Fig.~\ref{fig:2}, $n_e=2$. This approach was proposed in \cite{pfannschmidt2020learning} to learn choice functions. In particular, to learn the latent vector function, the authors devise a differentiable loss function based on a hinge loss. Furthermore, they add two additional terms to the loss function: (i) an $L^2$ regularization term; (ii) a multidimensional scaling (MDS) loss to ensure that options close to each other in the input space $\mathcal{X}$ will also be close in the embedding space $\mathbb{R}^{n_e}$. This loss function is then used to learn a (deep) multi-layer perceptron to represent the embedding. \subsection{Contributions} In this work, we devise a novel multi-objective PBO based on choice functions. We follow the interpretation of choice functions as set functions that select non-dominated sets for an unknown latent function. First, we derive a Bayesian framework to learn the function from a dataset of observed choices.
This framework is based on a Gaussian Process prior on the unknown latent function vector.\footnote{Compared to the approach proposed in \cite{pfannschmidt2020learning}, the GP-based model is more sound -- no multidimensional scaling is necessary -- and it is a generative model.} We then build an acquisition function to select the best next options to evaluate. We compare this method against an oracle that knows the true value of the latent functions, and we show that, by only working with choice function evaluations, we converge to the same results. \section{Bayesian learning of Choice functions} In this work we consider options ${\bf x}\in \mathbb{R}^{n_x}$ and model each latent function in the vector ${\bf f}({\bf x})=[f_1({\bf x}),\dots,f_{n_e}({\bf x})]^\top$ as an independent GP \cite{rasmussen2006gaussian}: \begin{equation} \label{eq:prior} f_j({\bf x}) \sim \text{GP}_j(0,k_j({\bf x},{\bf x}')), ~~~~j=1,2,\dots,n_e. \end{equation} Each GP is fully specified by its kernel function $k_j(\cdot,\cdot)$, which specifies the covariance of the latent function between any two points. In all experiments in this paper, the GP kernel is Mat\'ern 3/2 \cite{rasmussen2006gaussian}. \subsection{Likelihood for general Choice functions} Having defined the prior on ${\bf f}$, we can now focus on the likelihood. We propose a new likelihood to model the observed choices of the agent. Given a set of observed choices $\mathcal{D}=\{(A_k,C(A_k)): \text{ for } k=1,\dots,N\}$, we are interested in learning a Pareto-embedding ${\bf f}$ coherent with this data in the sense that $C(A_k)=P_{{\bf f}}(A_k)$, where $P_{{\bf f}}(A_k)$ denotes the Pareto non-dominated options in $A_k$. Assume that $A_k=\{{\bf x}_1,\dots,{\bf x}_m\}$, let $I_k \subset \{1,2,\dots,m\}$ be the subset of indices of the options in $C(A_k)$, let $J_k$ be equal to $\{1,2,\dots,m\}\backslash I_k$, and let $D=\{1,2,\dots,n_e\}$ be the set of dimensions of the latent space.
Based on Definition \ref{def:paretodom}, the choice of the agent expressed via $C(A_k)$ implies that: \begin{align} \label{eq:likcond1} \neg &\left( \min_{d \in D} (f_d({\bf x}_i)-f_d({\bf x}_j))< 0, ~~\forall i \in I_k\right), \forall j \in J_k,\\ \label{eq:likcond2} &\min_{d \in D} (f_d({\bf x}_p)-f_d({\bf x}_i))< 0, ~~\forall i, p \in I_k, p\neq i. \end{align} These equations express the conditions in Definition~\ref{def:paretodom}. Condition \eqref{eq:likcond1} means that, for each option ${\bf x}_j$ with $j \in J_k$, it is not true ($\neg$ stands for logical negation) that all options in $I_k$ are worse than ${\bf x}_j$; i.e., there is at least one option in $I_k$ which is better than ${\bf x}_j$. Condition \eqref{eq:likcond2} means that, for each option in $I_k$, there is no better option in $I_k$. This requires that the latent function values of the options be consistent with the relations implied by the choice function. Given $A_k,C(A_k)$, the likelihood function $p(C(A_k),A_k|{\bf f})$ is one when \eqref{eq:likcond1}-\eqref{eq:likcond2} hold and zero otherwise. In practice not all choices might be coherent, and we can treat this case by considering that the latent vector function ${\bf f}({\bf x}_i)=[f_1({\bf x}_i),\dots,f_{n_e}({\bf x}_i)]^\top$ is corrupted by a Gaussian noise ${\bf v}_i$ with zero mean vector and covariance $\sigma^2 \mathbb{I}_{n_e}$.\footnote{We assume the noise variance is the same in each dimension, but this can easily be relaxed.} Then we require conditions \eqref{eq:likcond1} and \eqref{eq:likcond2} to only hold probabilistically.
This leads to the following likelihood function for the pair $A_k,C(A_k)$: \begin{widetext} \small \begin{align} \nonumber &p(C(A_k),A_k|{\bf f})= \prod_{j \in J_k}\Bigg(1-\int \prod_{i \in I_k}\left(\mathcal{I}_{(-\infty,0)}\left(\min_{d \in D} (f_d({\bf x}_i)+v_{di}-f_d({\bf x}_j)-v_{dj})\right)N({\bf v}_{i};0,\sigma^2\mathbb{I}_d)d{\bf v}_{i} \right)N({\bf v}_{j};0,\sigma^2\mathbb{I}_d)d{\bf v}_{j}\Bigg)\\ \label{eq:like0} & \prod_{i, p \in I_k, p \neq i}\int \left(1-\mathcal{I}_{[0,\infty)}\left( \min_{d \in D} (f_d({\bf x}_p)+v_{dp}-f_d({\bf x}_i)-v_{di})\right)\right)N({\bf v}_{p};0,\sigma^2\mathbb{I}_d)N({\bf v}_{i};0,\sigma^2\mathbb{I}_d)d{\bf v}_{p}d{\bf v}_{i}, \end{align} \end{widetext} where $v_{di}$ denotes the $d$-th component of the vector ${\bf v}_i$ and $\mathcal{I}_B$ is the indicator function of the set $B$. We now provide two results which allow us to simplify \eqref{eq:like0}. We first compute the integral in the third product in \eqref{eq:like0}. \begin{lemma} \label{lem:0} \begin{equation} \begin{aligned} &\int \mathcal{I}_{[0,\infty)}\left(\min_{d \in D} (f_d({\bf x}_p)+v_{dp}-f_d({\bf x}_j)-v_{dj})\right)\\ &N({\bf v}_{p};0,\sigma^2\mathbb{I}_d)N({\bf v}_{j};0,\sigma^2\mathbb{I}_d)d{\bf v}_{p}d{\bf v}_{j}\\ &=\prod_{d \in D} \Phi\left(\frac{f_d({\bf x}_p)-f_d({\bf x}_j)}{\sqrt{2}\sigma}\right). \end{aligned} \end{equation} \end{lemma} All proofs are in the supplementary material. We now focus on the first integral in \eqref{eq:like0}, which can be simplified as follows.
\begin{lemma} \label{lem:2} \begin{equation} \label{eq:lemma2} \begin{aligned} &\int \prod_{i \in I_k} \Big(\mathcal{I}_{(-\infty,0)}\left(\min_{d \in D} (f_d({\bf x}_i)+v_{di}-f_d({\bf x}_j)-v_{dj})\right)\\ &N({\bf v}_{i};0,\sigma^2\mathbb{I}_d)d{\bf v}_{i} \Big)N({\bf v}_{j};0,\sigma^2\mathbb{I}_d)d{\bf v}_{j}\\ &=\int \prod_{i \in I_k} \left[1- \prod_{d \in D} \Phi\left(\frac{f_d({\bf x}_i)-f_d({\bf x}_j)-v_{dj}}{\sigma}\right)\right] \\ &N({\bf v}_{j};0,\sigma^2\mathbb{I}_d)d{\bf v}_{j}. \end{aligned} \end{equation} \end{lemma} Note that eq.~\eqref{eq:lemma2} is an expectation (with respect to $ N({\bf v}_{j};0,\sigma^2\mathbb{I}_d)$) of a product of Gaussian CDFs $\Phi(\cdot)$ whose argument only depends on $v_{dj}$. We can thus write the above multidimensional integral as a sum of products of univariate integrals which can be computed efficiently, for instance by using a Gaussian quadrature rule. Therefore, the likelihood of the choices $\mathcal{D}=\{(A_k,C(A_k)): \text{ for } k=1,\dots,N\}$ given the latent vector function ${\bf f}$ can then be written as follows. \begin{theorem} \label{thm:1} The likelihood is \begin{equation} p(\mathcal{D}|{\bf f})=\prod_{k=1}^N p(C(A_k),A_k|{\bf f}) \end{equation} with \begin{equation} \label{eq:likelihood} \begin{aligned} &p(C(A_k),A_k|{\bf f})\\ =&\prod_{j \in J_k}\Bigg(1-\int \prod_{i \in I_k} \left(1- \prod_{d \in D} \Phi\left(\frac{f_d({\bf x}_i)-f_d({\bf x}_j)-v_{dj}}{\sigma}\right)\right)\\ &N({\bf v}_{j};0,\sigma^2\mathbb{I}_d)d{\bf v}_{j}\Bigg)\\ & \prod_{i, p \in I_k, p\neq i}\left( 1-\prod_{d \in D} \Phi\left(\frac{f_d({\bf x}_p)-f_d({\bf x}_i)}{\sqrt{2}\sigma}\right)\right).\\ \end{aligned} \end{equation} \end{theorem} \subsubsection{Likelihood for batch preference learning} In case $n_e=1$ (the latent dimension is one), we have that $|C(A_k)|=1$. This means the agent always selects a single best option.
In this case, the likelihood \eqref{eq:likelihood} simplifies to \begin{equation} \label{eq:likelihood1} \begin{aligned} p(C(A_k),A_k|{\bf f}) =&\int \prod_{j \in J_k}\Phi\left(\frac{f({\bf x}_i)-f({\bf x}_j)-v_{j}}{\sigma}\right)\\ &N(v_{j};0,\sigma^2)dv_{j},\\ \end{aligned} \end{equation} where $i$ denotes the index of the single chosen option in $C(A_k)$. The above likelihood is equal to the batch likelihood derived in \cite[Eq.3]{siivola2020preferential} and reduces to the likelihood derived in \cite{ChuGhahramani_preference2005} when $|R(A_k)|=1$ (that is, the batch case with $|A_k|=2$). This shows that the likelihood in \eqref{eq:likelihood} encompasses batch preference-based models. \subsection{Posterior} \label{sec:posterior} The posterior probability of ${\bf f}$ is \begin{equation} \label{eq:post} p({\bf f}|\mathcal{D})=\frac{p({\bf f})}{p(\mathcal{D})} \prod_{k=1}^N p(C(A_k),A_k|{\bf f}), \end{equation} where the prior over the component of ${\bf f}$ is defined in \eqref{eq:prior}, the likelihood is defined in \eqref{eq:likelihood} and the probability of the evidence is $p(\mathcal{D})= \int p(\mathcal{D}|{\bf f})p({\bf f}) d{\bf f}$. The posterior $p({\bf f}|\mathcal{D})$ is intractable because it is neither Gaussian nor Skew Gaussian Process (SkewGP) distributed. In this paper we propose an approximation scheme for the posterior similar to the one proposed in \cite{benavoli2020preferential,benavoli2021}. In \cite{benavoli2020preferential}, an analytical formulation of the posterior is available, the marginal likelihood is approximated with a lower bound, and inferences are computed with an efficient rejection-free slice sampler \cite{gessner2019integrals}. In~\cite{benavoli2020preferential,benavoli2021} such an approximation scheme showed better performance in active learning and BO tasks than LP and EP.
Here we do not have an analytical formulation for the posterior; therefore, we use a Variational (ADVI) approximation \cite{kucukelbir2015automatic} of the posterior to learn the hyperparameters $\boldsymbol{\theta}$ of the kernel and, then, for fixed hyperparameters, we compute the posterior $p({\bf f}|\mathcal{D},\boldsymbol{\theta})$ via elliptical slice sampling (ess) \cite{pmlrv9murray10a}.\footnote{We implemented our model in PyMC3 \cite{salvatier2016probabilistic}, which provides implementations of ADVI and ess. Details about the number of iterations and tuning are reported in the supplementary.} \subsection{Prediction and Inferences} \label{sec:prediction} Let $A^*=\{{\bf x}^*_1,\dots,{\bf x}^*_m\}$ be a set including $m$ test points and ${\bf f}^*=[{\bf f}({\bf x}_1^*),\dots,{\bf f}({\bf x}_m^*)]^\top$. The conditional predictive distribution $p({\bf f}^*|{\bf f})$ is Gaussian and, therefore, \begin{equation} \label{eq:pred} p({\bf f}^*|\mathcal{D})= \int p({\bf f}^*|{\bf f})p({\bf f}|\mathcal{D}) d{\bf f} \end{equation} can be easily approximated as a sum using the samples from $p({\bf f}|\mathcal{D})$. In choice function learning, we are interested in the inference: \begin{equation} \label{eq:infer} P(C(A^*),A^*|\mathcal{D})=\int p(C(A^*),A^*|{\bf f}^*) p({\bf f}^*|\mathcal{D}) d{\bf f}^*, \end{equation} which returns the posterior probability that the agent chooses the options $C(A^*)$ from the set of options $A^*$. Given a finite set $\mathcal{X}$ (that is, $A^*=\mathcal{X}$), we can use \eqref{eq:infer} to compute the probability that a subset of $\mathcal{X}$ is the set of non-dominated options. This provides an estimate of the Pareto set $\mathcal{X}^{nd}$ based on the learned GP-based latent model. \section{Latent dimension selection} \label{sec:lat} In the previous section, we provided a Bayesian model for learning a choice function.
The model $\mathcal{M}_{n_e}$ is conditional on the pre-defined latent dimension $n_e$ (that is, the dimension of the vector of latent functions ${\bf f}({\bf x})=[f_1({\bf x}),\dots,f_{n_e}({\bf x})]^\top$). Although it is sometimes reasonable to assume the number of criteria defining the choice function to be known (and so the dimension $n_e$), it is crucial to develop a statistical method to select $n_e$. We propose a forward selection method. We start by learning the model $\mathcal{M}_{1}$ and increase the dimension $n_e$ in a stepwise manner (learning $\mathcal{M}_{2},\mathcal{M}_{3},\dots$) until some model selection criterion is optimised. Criteria like AIC and BIC are inappropriate for the proposed GP-based choice function model, since its nonparametric nature implies that the number of parameters grows with the size of the data (as $n_e \times m$). We propose to use instead the \textit{Pareto Smoothed Importance sampling Leave-One-Out} cross-validation (PSIS-LOO) \cite{vehtari2017practical}. Exact cross-validation requires re-fitting the model with different training sets. Instead, PSIS-LOO can be computed easily using the samples from the posterior. We define the Bayesian LOO estimate of out-of-sample predictive fit for the model in \eqref{eq:post}: \begin{equation} \label{eq:bloo} \varphi=\sum_{k=1}^N \log p(z_k|z_{-k}), \end{equation} where $z_k=(C(A_k),A_k)$, $z_{-k}=\{(C(A_i),A_i)\}_{i=1,i\neq k}^N$, \begin{equation} \label{eq:bloo1} p(z_k|z_{-k})=\int p(z_k|{\bf f})p({\bf f}|z_{-k})d{\bf f}.
\end{equation} As derived in \cite{gelfand1992model}, we can evaluate \eqref{eq:bloo1} using the samples from the full posterior, that is, ${\bf f}^{(s)}\sim p({\bf f}|\{z_k,z_{-k}\})=p({\bf f}|\mathcal{D})$ for $s=1,\dots,S$.\footnote{We compute these samples using elliptical slice sampling.} We first define the importance weights: $$ w^{(s)}_k=\frac{1}{p(z_k|{\bf f}^{(s)})}\propto \frac{p({\bf f}^{(s)}|z_{-k})}{p({\bf f}^{(s)}|\{z_k,z_{-k}\})} $$ and then approximate \eqref{eq:bloo1} as: \begin{equation} \label{eq:bloo2} p(z_k|z_{-k})\approx \frac{\sum_{s=1}^S w^{(s)}_k p(z_k|{\bf f}^{(s)})}{\sum_{s=1}^S w^{(s)}_k }. \end{equation} It can be noticed that \eqref{eq:bloo2} is a function of $p(z_k|{\bf f}^{(s)})$ only, which can easily be computed from the posterior samples. Unfortunately, a direct use of \eqref{eq:bloo2} induces instability because the importance weights can have high variance. To address this issue, \cite{vehtari2017practical} applies a simple smoothing procedure to the importance weights using a Pareto distribution (see \cite{vehtari2017practical} for details). In Section \ref{sec:latsim}, we will show that the proposed PSIS-LOO-based forward procedure works well in practice. \section{Choice-based Bayesian Optimisation} \label{sec:acq} In the previous sections, we have introduced a GP-based model to learn latent choice functions from choice data. We will now focus on the acquisition component of Bayesian optimisation. In choice-based BO, we never observe the actual values of the functions. The data is $(\mathcal{X},\{C(A_k),A_k\}_{k=1}^N)$, where $\mathcal{X}$ is the set of the $m$ training inputs (options), $A_k$ is a subset of $\mathcal{X}$ and $C(A_k) \subseteq A_k$ is the choice-set for the given options $A_k$. We denote the Pareto-set estimated using the GP-based model as $\hat{\mathcal{X}}^{nd}$. In choice-based BO, the objective is to seek a new input point ${\bf x}$.
Since ${\bf g}$ can only be queried via a choice function, this is obtained by optimizing w.r.t.\ ${\bf x}$ an acquisition function $\alpha({\bf x},\hat{\mathcal{X}}^{nd})$, where $\hat{\mathcal{X}}^{nd}$ is the current (estimated) Pareto-set. We define the acquisition function $\alpha({\bf x},\hat{\mathcal{X}}^{nd})$ with the aim of finding a point that dominates the points in $\hat{\mathcal{X}}^{nd}$. That is, given the set of options $A^*=\{{\bf x}\} \cup \hat{\mathcal{X}}^{nd}$, we aim to find ${\bf x}$ such that $C(A^*)=\{{\bf x}\}$. The acquisition function must also consider the trade-off between exploration and exploitation. Therefore, we propose an acquisition function $\alpha({\bf x},\hat{\mathcal{X}}^{nd})$ which is equal to the $\gamma$\% (in the experiments we use $\gamma=95$) Upper Credible Bound (UCB) of $p(C(A^*),A^*|{\bf f}^*)$ with ${\bf f}^* \sim p({\bf f}^*|\mathcal{D})$, $A^*=\{{\bf x}\} \cup \hat{\mathcal{X}}^{nd}$ and $C(A^*)=\{{\bf x}\}$. \\ Note that the requirement for our acquisition function is strong. We could also define $\tilde{\alpha}({\bf x},\hat{\mathcal{X}}^{nd})$ with different objectives in mind. For example, we could seek to find a point ${\bf x}$ which allows us to reject at least one option in $\hat{\mathcal{X}}^{nd}$. We opted for UCB over the probability $p(C(A^*),A^*|{\bf f}^*)$ because it leads to a fast-to-evaluate acquisition function. In particular, we only need to compute one probability for each new function evaluation. In future work, we will study alternative approaches and the trade-off between more costly acquisition function evaluations and faster convergence. After computing the maximum of the acquisition function, denoted with ${\bf x}_{new}$, consistently with the definition of the acquisition function, we should query the agent to express their choice among the set of options in $A^*=\{{\bf x}_{new}\} \cup \hat{\mathcal{X}}^{nd}$.
However, $\hat{\mathcal{X}}^{nd}$ can be a very large set and human cognitive ability cannot efficiently compare more than five options. Therefore, by using the GP-based latent model, we select the four options in $\hat{\mathcal{X}}^{nd}$ which have the highest probability of being dominated by ${\bf x}_{new}$ and query the agent on a five-option set.\footnote{Details about the procedure we use to select these 4 options are reported in the supplementary material.} \section{Numerical experiments} First, we assume $n_e=n_o$ (that is, we assume that the latent dimension is known) and evaluate the performance of our algorithm on (1) the task of learning choice functions; (2) the use of choice functions in multi-objective BO. Second, we evaluate the latent dimension selection procedure discussed in Section \ref{sec:lat} on simulated and real datasets. \subsection{Choice functions learning} \paragraph{Toy experiment} We develop a simple toy experiment as a controlled setting for our initial assessment. We consider the bi-dimensional vector function ${\bf g}(x)=[\cos(2x),-\sin(2x)]^\top$ with $x \in \mathbb{R}$. \begin{tabular}{c}\vspace{-0.3cm} ~~~~~~~~~~~~~\includegraphics[height=3cm]{truefunc.pdf} \vspace{0cm} \end{tabular} We use ${\bf g}$ to define a choice function. For instance, consider the set of options $A_k=\{1,0,2\}$; given that $$ \begin{aligned} {\bf g}(1)&=[-0.416,-0.909]\\ {\bf g}( 0)&=[1,0]\\ {\bf g}( 2)&=[-0.65,0.75]\\ \end{aligned} $$ we have that $C(A_k)=\{0,2\}$ and $R(A_k)=A_k \backslash C(A_k)=\{1\}$. In fact, one can notice that $[1,0]$ dominates $[-0.416,-0.909]$ on both the objectives, while $[1,0]$ and $[-0.65,0.75]$ are incomparable.
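The dominance reasoning in this example can be carried out mechanically. The sketch below (the helper names are ours, not from the paper) recovers the choice set from the three utility vectors printed above:

```python
import numpy as np

def dominates(u, v):
    # u Pareto-dominates v: u >= v componentwise, strictly on at least one objective.
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u >= v) and np.any(u > v))

def choice_split(values):
    # Indices of non-dominated options (the choice set C) and of rejected options (R).
    chosen = [i for i, vi in enumerate(values)
              if not any(dominates(vj, vi) for j, vj in enumerate(values) if j != i)]
    rejected = [i for i in range(len(values)) if i not in chosen]
    return chosen, rejected

# The three utility vectors listed above, in the same order as the options.
vals = [[-0.416, -0.909], [1.0, 0.0], [-0.65, 0.75]]
C_idx, R_idx = choice_split(vals)
```

Here `C_idx` contains the indices of the second and third vectors (the incomparable pair), and `R_idx` the index of the dominated first vector, matching the choice set stated above.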
We sample $200$ inputs $x_i$ at random in $[-4.5,4.5]$ and, using the above approach, we generate \begin{itemize} \item $N=50$ random subsets $\{A_k\}_{k=1}^N$ of the 200 points, each one of size $|A_k|=3$ (respectively $|A_k|=5$), and compute the corresponding choice pairs $(C(A_k),A_k)$ based on ${\bf g}$; \item $N=150$ random subsets $\{A_k\}_{k=1}^N$, each one of size $|A_k|=3$ (respectively $|A_k|=5$), and compute the corresponding choice pairs $(C(A_k),A_k)$ based on ${\bf g}$; \end{itemize} for a total of four different datasets. Fixing the latent dimension $n_e=2$, we then compute the posterior means and $95\%$ credible intervals of the latent functions learned using the model introduced in Section \ref{sec:posterior}. The four posterior plots are shown in Figure \ref{fig:post}. By comparing the 1st with the 3rd plot and the 2nd with the 4th plot, it can be noticed how the posterior means become more accurate (and the credible intervals smaller) as the dataset size increases (from $N=50$ to $N=150$ choice-sets). By comparing the 1st with the 2nd plot and the 3rd with the 4th plot, it is evident that estimating the latent functions becomes more complex as $|A_k|$ increases. The reason is not difficult to understand. Given $A_k$, $R(A_k)$ includes the set of rejected options. These are options that are dominated by (at least) one of the options in $C(A_k)$, but we do not know which one(s). This uncertainty increases with $|A_k|$, which makes the estimation problem more difficult.
\begin{figure*} \centering \begin{tabular}{cccc} \includegraphics[height=3cm,trim={0.8cm 0.0cm 1.5cm 0.0cm }, clip]{latentfunc_50_3.pdf} & \includegraphics[height=3.0cm,trim={0.8cm 0.0cm 1.5cm 0.0cm }, clip]{latentfunc_50_5.pdf} & \includegraphics[height=3.0cm,trim={0.8cm 0.0cm 1.5cm 0.0cm }, clip]{latentfunc_150_3.pdf} & \includegraphics[height=3.0cm,trim={0.8cm 0.0cm 1.5cm 0.0cm }, clip]{latentfunc_150_5.pdf} \\ $N=50,~|A_k|=3$ & $N=50,~|A_k|=5$ & $N=150,~|A_k|=3$ & $N=150,~|A_k|=5$\\ \end{tabular} \caption{Posterior mean and $95\%$ credible intervals of the two latent functions for the four artificial datasets.} \label{fig:post} \end{figure*} Considering the same $200$ inputs $x_i$ generated at random in $[-4.5,4.5]$, we add Gaussian noise to ${\bf g}$ with $\sigma=0.1$ and generate two new training datasets with $|A_k|=3$ and (i) $N=100$; (ii) $N=300$. We aim to compute the predictive accuracy of the GP-based model for inferring $C(A_l)$ from the set of options $A_l$ on an additional $300$ unseen pairs $\{C(A_l),A_l\}$. We denote the model learned using the $N=100$ and $N=300$ datasets respectively by Choice-GP100 and Choice-GP300. We compare their accuracy with that of an independent GP regression model which has direct access to ${\bf g}$: that is, we model each dimension of ${\bf g}$ as an independent GP and compute the posterior of the components of ${\bf g}$ using GP regression from $200$ data pairs $(x_i,g_1(x_i))$ and, respectively, $(x_i,g_2(x_i))$. We refer to this model as Oracle-GP: ``Oracle'' because it can directly query ${\bf g}$. The accuracy is: \begin{center}\vspace{-0.2cm} \scalebox{0.85}{ \begin{tabular}{ccc} \textbf{Choice-GP100} & \textbf{Choice-GP300} & \textbf{Oracle-GP}\\ \hline 0.54 & 0.72 & 0.77\\ \end{tabular}}\vspace{-0.2cm} \end{center} averaged over $5$ repetitions of the above data generation process. It can be noticed that, as $N$ increases, the accuracy of Choice-GP gets closer to that of Oracle-GP.
This confirms the goodness of the learning framework developed in this paper. \paragraph{Real datasets} We now focus on four benchmark datasets for multi-output regression problems. Table \ref{tab:charac} displays the characteristics of the considered datasets. \vspace{-0.15cm} \begin{table}[H] \begin{center} {\small \scalebox{0.8}{ \begin{tabular}{lccc} \hline {\bf Dataset} & {\bf \#Instances} & {\bf \#Attributes} & {\bf \#Outputs} \\ \hline enb & 768 & 6 & 2 \\ jura & 359 & 6 & 2 \\ real-estate & 414 & 5 & 2 \\ slump & 103 & 7 & 2 \\ \hline \end{tabular}} } \end{center} \caption{Characteristics of the datasets.} \label{tab:charac} \end{table}\vspace{-0.5cm} More details on the used datasets are in the supplementary material. By using 5-fold cross-validation, we divide each dataset into training and testing sets. The target values in the training set are used to generate choice-function pairs $(C(A_k),A_k)$ with $|A_k|=3$ and (i) $N=100$; (ii) $N=300$. From the test dataset, we generate $N=200$ pairs. As before, we denote the model learned using the $N=100$ and $N=300$ datasets respectively by Choice-GP100 and Choice-GP300 and compare their accuracy against that of Oracle-GP (learned on the training dataset by independent GP regression). The accuracy is: \begin{center} \scalebox{0.8}{ \begin{tabular}{c|ccc} & \textbf{Choice-GP100} & \textbf{Choice-GP300} & \textbf{Oracle-GP}\\ \hline enb & 0.74 & 0.77&0.77\\ jura & 0.44 & 0.47 & 0.53\\ real-estate & 0.50 & 0.60 & 0.64\\ slump & 0.26 & 0.39 & 0.45 \end{tabular}} \end{center} As before, it can be noticed that, as $N$ increases, the accuracy of Choice-GP gets closer to that of Oracle-GP. \begin{figure*}[htp!]
\centering \begin{tabular}{ll} \includegraphics[height=3.8cm,trim={0.6cm 0.0cm 0.0cm 0.0cm }, clip]{Barnin.pdf} \includegraphics[height=3.8cm,trim={0.5cm 0.0cm 0.0cm 0.0cm }, clip]{ZDT1_4_2.pdf} \includegraphics[height=3.8cm,trim={0.48cm 0.0cm 0.0cm 0.0cm }, clip]{ZDT2_3_2.pdf} \\ \includegraphics[height=3.8cm,trim={0.5cm 0.0cm 0.0cm 0.0cm }, clip]{DTLZ1.pdf} \includegraphics[height=3.8cm,trim={0.5cm 0.0cm 0.0cm 0.0cm }, clip]{Kursawe.pdf} \includegraphics[height=3.8cm,trim={0.5cm 0.0cm 0.0cm 0.0cm }, clip]{VehicleSafety.pdf} \end{tabular} \caption{Results over 15 repetitions. The x-axis denotes the number of iterations and the y-axis the log-hypervolume difference.} \label{fig:BO} \end{figure*} \subsection{Bayesian Optimisation} We have considered for ${\bf g}({\bf x})$ six standard multi-objective benchmark functions: Branin-Currin ($n_x = 2$, $n_o = 2$), ZDT1 ($n_x = 4$, $n_o = 2$), ZDT2 ($n_x = 3$, $n_o = 2$), DTLZ1 ($n_x = 3$, $n_o = 2$), Kursawe ($n_x = 3$, $n_o = 2$) and Vehicle-Safety\footnote{The problem of determining the thickness of five reinforced components of a vehicle's frontal frame \cite{yang2005metamodeling}. This problem was previously considered as a benchmark in \cite{daulton2020differentiable}.} ($n_x = 5$, $n_o = 3$). These are minimization problems, which we converted into maximizations so that the acquisition function in Section \ref{sec:acq} is well-defined. We compare the Choice-GP BO (with $n_e=n_o$) approach proposed in this paper against ParEGO.\footnote{We use the BoTorch implementation \cite{balandat2020botorch}.} For ParEGO, we assume the algorithm can directly query ${\bf g}({\bf x})$ and, therefore, we refer to it as Oracle-ParEGO.\footnote{The most recent MO BO approaches mentioned in Section \ref{sec:intro} outperform ParEGO. We use ParEGO only as an Oracle reference.} Conversely, Choice-GP BO can only query ${\bf g}({\bf x})$ via choice functions. We select $|A_k|=5$ and use UCB as the acquisition function for Choice-GP BO.
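In practice, the UCB value of the acquisition in Section \ref{sec:acq} reduces to an empirical percentile: for each posterior draw ${\bf f}^{*(s)}$ one evaluates $p(C(A^*),A^*|{\bf f}^{*(s)})$ and then takes the $\gamma$-th percentile of these values. A minimal sketch, under the simplifying assumption that the per-sample choice probabilities have already been computed (the array below is hypothetical):

```python
import numpy as np

def ucb_acquisition(per_sample_probs, gamma=95):
    # gamma-% Upper Credible Bound of p(C(A*), A* | f*) over the posterior samples.
    return float(np.percentile(per_sample_probs, gamma))

# Hypothetical per-sample probabilities for one candidate x (one value per posterior draw).
probs = np.array([0.10, 0.20, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90])
alpha = ucb_acquisition(probs, gamma=95)
```

One percentile per candidate is all that is needed, which is what makes this acquisition fast to evaluate compared with alternatives that require several probabilities per candidate.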
We also consider a quasi-random baseline that selects candidates from a Sobol sequence, denoted as ``Sobol''. We evaluate optimization performance on the six benchmark problems in terms of log-hypervolume difference, which is defined as the difference between the hypervolume of the true Pareto front\footnote{This is known for the six benchmarks.} and the hypervolume of the approximate Pareto front based on the observed data $\mathcal{X}$. Each experiment starts with $20$ initial (randomly selected) input points, which are used to initialise Oracle-ParEGO. We generate $7$ pairs $\{C(A_k),A_k\}$ of size $|A_k|=5$ by randomly selecting $7$ subsets $A_k$ of these $20$ points. These choices $\{C(A_k),A_k\}_{k=1}^7$ are used to initialise Choice-GP BO. A total budget of $80$ iterations is run for both algorithms. Further, each experiment is repeated 15 times with different initializations. In these experiments, we optimize the kernel hyperparameters by maximising the marginal likelihood for Oracle-ParEGO and its variational approximation for Choice-GP. Figure \ref{fig:BO} reports the performance of the three methods. Focusing on Branin-Currin, DTLZ1, Kursawe, and Vehicle-Safety, it can be noticed how Choice-GP BO converges to the performance of Oracle-ParEGO as the number of iterations increases. The convergence is clearly slower because Choice-GP BO uses qualitative data (choice functions) while Oracle-ParEGO uses quantitative data (it has direct access to ${\bf g}$). However, the overall performance shows that the proposed approach is very effective. In ZDT1 and ZDT2, Choice-GP BO outperforms Oracle-ParEGO. The poor performance of Oracle-ParEGO is due to its acquisition function, which does not correctly balance exploitation and exploration in these two benchmarks. Instead, the UCB acquisition function for Choice-GP BO works well in all the benchmarks.\\ \textbf{Computational complexity:} The simulations were performed on a standard laptop.
On average, the time to learn the surrogate model and optimise the acquisition function goes from 30s ($N=20$) to 180s ($N=80+20=100$). \subsection{Unknown latent dimension} \label{sec:latsim} We assume that $n_e$ is unknown and evaluate the latent dimension selection procedure proposed in Section \ref{sec:lat}. First, we consider the one-dimensional function $g(x)=\cos(2x)$, so that $n_o=1$. We generate $10$ training datasets $(C(A_k),A_k)$ with $|A_k|=3$ and sizes $N=30$ and, respectively, $N=300$, and 10 test datasets of size $300$. The following table reports the PSIS-LOO (averaged over the ten repetitions) and the average accuracy on the test set for four Choice-GP models with latent dimension $n_e=1,2,3,4$.\vspace{-0.2cm} \begin{center} \scalebox{0.68}{ \begin{tabular}{|l|c|c|c|c|} \hline \rowcolor{celestialblue} & \multicolumn{2}{c|}{\textbf{N=30}} & \multicolumn{2}{c|}{\textbf{N=300}}\\ \hline $n_e$ & PSIS-LOO & acc.\ test & PSIS-LOO & acc.\ test\\ \hline \textbf{1} (10/10) & -10 & 0.75 & -75 & 0.93\\ 2 &-35 & 0.64 & -165 & 0.91\\ 3 & -44 & 0.64 & -333 & 0.86\\ 4 & -69& 0.62 & -388 & 0.84\\ \hline \end{tabular}} \end{center} By using PSIS-LOO (computed on the training dataset) as the latent dimension selection criterion, we were able to correctly select the true latent dimension in all the repetitions (10 out of 10). The selected model also has the highest accuracy on the test set. We now focus on the bi-dimensional vector function ${\bf g}(x)=[\cos(2x),-\sin(2x)]^\top$ and consider three different sizes for the training dataset, $N=30,50,300$.
\vspace{-0.3cm} \begin{center} \scalebox{0.68}{ \begin{tabular}{|l|c|c|c|c|c|c|} \hline \rowcolor{celestialblue} & \multicolumn{2}{c|}{\textbf{N=30}} & \multicolumn{2}{|c|}{\textbf{N=50}} & \multicolumn{2}{|c|}{\textbf{N=300}}\\ \hline $n_e$ & PSIS-LOO & acc.\ test & PSIS-LOO & acc.\ test & PSIS-LOO & acc.\ test\\ \hline 1 & -56 & 0.20 &-89 & 0.23 & -493 & 0.30\\ \textbf{2} & -39 & 0.32 & -47 & 0.51 & -236 & 0.72\\ 3 & -39 & 0.32& -49 & 0.49 & -269 & 0.65\\ 4 & -42 & 0.30 & -53 & 0.43 & -277 & 0.64\\ \hline \end{tabular}} \end{center}\vspace{-0.1cm} For $N=30$, $n_e=2$ has the highest PSIS-LOO in 4/10 cases, $n_e=3$ in 4/10 cases and $n_e=4$ in 2/10 cases. For $N=50$, $n_e=2$ has the highest PSIS-LOO in 6/10 cases and $n_e=3$ in 4/10 cases. For $N=300$, PSIS-LOO selects $n_e=2$ in 10/10 cases. Note that $n_e=1$ is never selected. Considering that the models are nested, $\mathcal{M}_{1} \subset \mathcal{M}_{2} \subset \mathcal{M}_{3}\dots$, this shows that the selection procedure works well even with small datasets, never selecting a latent dimension smaller than the actual one. Moreover, PSIS-LOO is able to select the correct dimension ($n_e=2$) as $N$ increases. In the supplementary material, we report a similar analysis for the datasets in Table \ref{tab:charac}, confirming that the procedure also works for real datasets. \section{Conclusions} We have developed a Bayesian method to learn choice functions from data and applied it to choice-function-based Bayesian Optimisation (BO). As future work, we plan to develop strategies to speed up the learning process by exploring more efficient ways to express the likelihood. We also intend to explore different acquisition functions for choice-function BO. \bibliographystyle{ieeetr}
\section{Introduction} For years, quantum dissipation has been treated as a resource rather than as a detrimental effect for generating quantum states \cite{Prl106090502,arXiv10052114,Njp11083008,Nat4531008,Jpa41065201,Prl107120502,Np5633} in quantum open systems modeled by the Lindblad-Markovian master equation \cite{Cmp48119} ($\hbar=1$) \begin{align}\label{eq0-1} \dot{\rho}=&-i[H_{0},\rho]+\mathcal{L}\rho,\cr \mathcal{L}\rho=&\sum_{k}L_{k}\rho L_{k}^{\dag}-\frac{1}{2}(L_{k}^{\dag}L_{k}\rho+\rho L_{k}^{\dag}L_{k}), \end{align} where the overdot stands for a time derivative and $L_{k}$ are the so-called Lindblad operators. By using dissipation, one can generate high-fidelity quantum states without accurately controlling the initial state or the operation time (usually, the longer the operation time, the higher the fidelity). Besides, dissipation dynamics is shown to be robust against parameter (instantaneous) fluctuations \cite{Prl106090502}. Due to these advantages, many schemes \cite{arXiv11101024,Pra84022316,Pra83042329,Pra82054103,Prl89277901,Prl117210502,Pra84064302,Nat504415,Prl111033607,Pra95022317,Npto10303,Prl117040501,Prl115200502} have been proposed in recent years for dissipation-based quantum state generation, based on different physical systems. Generally speaking, to generate quantum states by quantum dissipation, the key point is to find (or design) a unique stationary state (marked as $|S\rangle$) which cannot be transferred to other states while other states can be transferred to it. That is, the reduced system should satisfy \begin{align}\label{eq0-3} H_{0}|M\rangle\neq 0,\ H_{0}|S\rangle=0,\ \tilde{L}_{k}^{\dag}|S\rangle\neq 0,\ \tilde{L}_{k}|S\rangle=0, \end{align} where $|M\rangle$ ($M\neq S$) are the orthogonal partners of the state $|S\rangle$ in a reduced system satisfying $\langle M|S\rangle=0$ and $\sum_{M}|M\rangle\langle M|+|S\rangle\langle S|=\bm{1}$, and $\tilde{L}_{k}$ are the effective Lindblad operators.
Hence, if the system is in $|M\rangle$, it will always be transferred to other states because $H_{0}|M\rangle\neq 0$ and $\tilde{L}_{k}^{\dag}|S\rangle\neq 0$, while if the system is in $|S\rangle$, it remains invariant. Therefore, the process of pumping and decaying continues until the system is finally stabilized into the stationary state $|S\rangle$. To show such a dissipation process in more detail, we introduce a function $\dot{V}$ to describe the system evolution speed, where $V=\text{Tr}(\rho\rho_{s})$ {is known as the Lyapunov function \cite{Ddbook}} and $\rho_{s}$ is the density matrix of the target state $|S\rangle$. {Lyapunov control is a form of local optimal control with numerous variants \cite{Ddbook,Pra80052316,A4498,Njp11105034}, which has the advantage of being sufficiently simple to be amenable to rigorous analysis and has been used to manipulate open quantum systems \cite{Pra80052316,Pra80042305,Pra82034308}. For example, Yi \emph{et al.} proposed a scheme in 2009 to drive a finite-dimensional quantum system into the decoherence-free subspaces by Lyapunov control \cite{Pra80052316}.} When the system evolves into a target state at a final time $t_{f}$, i.e., $\rho|_{t=t_{f}}\rightarrow\rho_{s}$, $V$ approaches a maximum value $V=1$. Based on Eqs. (\ref{eq0-1}) and (\ref{eq0-3}), we find \begin{eqnarray}\label{eq0-4} \dot{V}=\text{Tr}[(-i[H_{0},\rho]+\mathcal{L}\rho)\rho_{s}] =\sum_{k}\Gamma_{k}\langle E_{k}|\rho| E_{k}\rangle\geq0, \end{eqnarray} in which we have assumed $\tilde{L}_{k}=\sqrt{\Gamma_{k}}|S\rangle\langle E_{k}|$, with $\Gamma_{k}$ being the effective dissipation rates and $|E_{k}\rangle$ being the effective excited states. Obviously, the evolution speed strongly depends on the effective dissipation rates and the total population of the effective excited states. Hence, according to the dissipation dynamics, we have $\langle E_{k}|\rho|E_{k}\rangle\rightarrow 0$ when $t\rightarrow\infty$, which means $\dot{V}|_{t\rightarrow \infty}=0$.
However, as is known, such a process is generally much slower than a unitary evolution process because of the small effective dissipation rates. It would be a serious issue for realizing large-scale integrated computation if it takes too long to generate the desired quantum states. In view of this, dissipation-based approaches would lose their preponderance if a future technique presented an ideal dissipation-free system. Therefore, accelerating the dissipation dynamics without losing its advantages should signal a significant improvement for quantum computation. Given that a unitary evolution process is much faster than a dissipative one, we are led to ask: is it possible to accelerate the dissipation dynamics by using coherent control fields? {In Ref. \cite{Pra80052316} the authors mentioned that Lyapunov control may have the ability to shorten the convergence time for an open system. Therefore, in this paper, we will seek additional coherent control fields according to Lyapunov control to accelerate dissipation dynamics.} The strategy of accelerating dissipation dynamics is to add a simple and realizable coherent control Hamiltonian $H_{c}$ to increase the value of $\dot{V}$ in Eq. (\ref{eq0-4}). The state evolution equation in this case becomes \begin{align}\label{eq0-5} \dot{\rho}=-i[H_{0}+H_{c},\rho]+\mathcal{L}\rho, \end{align} where $H_{c}=\sum_{n}f_{n}(t)H_{n}$ is the additional control Hamiltonian, $H_{n}$ are time independent, and the control functions $f_{n}(t)$ are realizable and real-valued. The corresponding evolution speed reads \begin{align}\label{eq0-6} \dot{V}_{a}=&\text{Tr}[(-i[H_{0},\rho]+\mathcal{L}\rho)\rho_{s}]\cr &+\sum_{n}f_{n}(t)\text{Tr}[(-i[H_{n},\rho])\rho_{s}]. \end{align} We use the symbol $\dot{V}_{a}$ to distinguish it from the original evolution speed $\dot{V}$. The control functions $f_{n}(t)$ should be carefully chosen to ensure that $V_{a}|_{t=t_{f}'}=1$ and $\dot{V}_{a}|_{t=t_{f}'}=0$.
For this goal, the simplest choice for $f_{n}(t)$ is \cite{Pra80052316} \begin{align}\label{eq0-7} f_{n}(t)=\text{Tr}[(-i[H_{n},\rho])\rho_{s}]. \end{align} As can be seen from Eq. (\ref{eq0-3}), the Hamiltonian $H_{0}$ is just used to ensure that $|S\rangle$ is a stationary state, while, by adding additional coherent fields, it is easy to find $(H_{0}+H_{c})|S\rangle\neq 0$ (for $\rho\neq\rho_{s}$ corresponding to $t< t_{f}$), which means $|S\rangle$ is actually not a stationary state when $t<t_{f}$. For $t\rightarrow t_{f}$, according to Eq. (\ref{eq0-7}), we have $f_{n}(t_{f})=0$ since $\rho|_{t=t_{f}}\rightarrow \rho_{s}$. Thus, $H_{c}=0$, so that $|S\rangle$ becomes the unique stationary state when $t=t_{f}$. That is, when $t<t_{f}$, the coherent fields and dissipation work together to drive the system to state $|S\rangle$, while when $t\rightarrow t_{f}$, the additional coherent fields vanish and the system becomes steady. In other words, in the current approach, $|S\rangle$ is not a stationary state until the population is totally transferred to it. Obviously, such a process is significantly different from the previous dissipation-based schemes \cite{Prl106090502,Prl117210502,Pra84064302,Nat504415,Prl111033607,Pra95022317}, in which $|S\rangle$ is the unique stationary state during the whole evolution. Usually, part of $H_{0}$ can be chosen as $H_{n}$ to make sure that $H_{n}$ is realizable. In this case, the additional coherent control fields can actually be regarded as a modification of the Hamiltonian $H_{0}$. Thus, the current approach can be understood as a parameter optimization approach for dissipation-based quantum state generation. In the following, we will verify the accelerating approach with applications to quantum state generation.
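The control law \eqref{eq0-7} has two properties worth checking numerically: $f_{n}(t)$ is always real for Hermitian $H_{n}$, $\rho$, $\rho_{s}$ (so it can drive a physical field), and it vanishes at $\rho=\rho_{s}$, which is what switches the coherent fields off at the end of the evolution. A minimal sketch with randomly generated Hermitian matrices (purely illustrative, not the model considered below):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

def random_density(d):
    # Random density matrix: positive semidefinite with unit trace.
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def control_f(H_n, rho, rho_s):
    # f_n = Tr(-i [H_n, rho] rho_s), Eq. (eq0-7).
    return np.trace(-1j * (H_n @ rho - rho @ H_n) @ rho_s)

d = 3
H_n, rho, rho_s = random_hermitian(d), random_density(d), random_density(d)
f = control_f(H_n, rho, rho_s)            # real up to rounding
f_at_target = control_f(H_n, rho_s, rho_s)  # vanishes by cyclicity of the trace
```

Realness follows because $-i[H_{n},\rho]$ is itself Hermitian, and the vanishing at the target follows from $\text{Tr}(H_{n}\rho_{s}^{2})=\text{Tr}(\rho_{s}H_{n}\rho_{s})$.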
\textit{Application I: Single-atom superposition state.} We first consider a three-level $\Lambda$ atom with an excited state $|e\rangle$ and two ground states $|g_{1}\rangle$ and $|g_{2}\rangle$ to illustrate our accelerating approach. The transition $|e\rangle\leftrightarrow|g_{1,(2)}\rangle$ is resonantly driven by a laser field with a Rabi frequency $\Omega_{1,(2)}$. The Hamiltonian in the interaction picture is thus written as $H_{0}=\Omega_{0}(\sin{\theta}|e\rangle\langle g_{1}|+\cos{\theta}|e\rangle\langle g_{2}|)+H.c.$, where $\Omega_{0}=\sqrt{\Omega_{1}^{2}+\Omega_{2}^{2}}$ and $\theta=\arctan{\frac{\Omega_{1}}{\Omega_{2}}}$. The Lindblad operators in this $\Lambda$ system associated with atomic spontaneous emission are $L_{1}=\sqrt{\gamma_{1}/2}|g_{1}\rangle\langle e|$ and $L_{2}=\sqrt{\gamma_{2}/2}|g_{2}\rangle\langle e|$, respectively. Then, we introduce the orthogonal states $|S\rangle=\cos{\varphi}|g_{1}\rangle-\sin{\varphi}|g_{2}\rangle$ and $|T\rangle=\sin{\varphi}|g_{1}\rangle+\cos{\varphi}|g_{2}\rangle$ to rewrite the Hamiltonian $H_{0}$ as $H_{0}=\Omega_{S}|e\rangle\langle S|+\Omega_{T}|e\rangle\langle T|+H.c.$, where $\Omega_{S}=\Omega_{0}\sin{(\theta-\varphi)}$ and $\Omega_{T}=\Omega_{0}\cos{(\theta-\varphi)}$. Accordingly, by choosing $\gamma_{1}=\gamma_{2}=\gamma$, we obtain two effective Lindblad operators $\tilde{L}_{S}=\sqrt{\gamma/2}|S\rangle\langle e|$ and $\tilde{L}_{T}=\sqrt{\gamma/2}|T\rangle\langle e|$. It is clear that if we choose $\theta=\varphi$, the effective driving field between $|e\rangle$ and $|S\rangle$ with a Rabi frequency $\Omega_{S}$ will be switched off and the condition in Eq. (\ref{eq0-3}) will be satisfied. In this case, according to dissipation dynamics, the system will be stabilized into the stationary state $|S\rangle$. Note that the present application example may be similar to that in Ref.
\cite{Pra82034308}, proposed by Wang \emph{et al.}, which used Lyapunov control to drive an open system (with a four-level atom driven by two lasers) into a decoherence-free subspace. Here we need to emphasize that, in this paper, we focus on analyzing the evolution speed and how Lyapunov control can accelerate the dissipation dynamics. By choosing $t_{f}=10/\Omega_{0}$, the evolution speed $\dot{V}$ and the time-dependent population of state $|S\rangle$ versus $\gamma$ are displayed in Figs. \ref{fig3} (a) and (b), respectively. As shown in the figure, to obtain the target state $|S\rangle$ with a relatively high fidelity $\geq0.95$ within a fixed evolution time $t_{f}=10/\Omega_{0}$, the decay rate should be at least $\gamma\geq \Omega_{0}$ ($P_{S}|_{t=t_{f}}=0.9506$ when $\gamma=\Omega_{0}$). \begin{figure}[b] \scalebox{0.23}{\includegraphics {dv_vd_t_r1.eps}} \caption{ Single-atom superposition state preparation: Comparison of the evolution speed between the traditional dissipation dynamics and the accelerated dissipation dynamics. (a) and (c): The evolution speeds given according to Eq. (\ref{eq0-4}) and Eq. (\ref{eq0-6}) versus $\gamma$, respectively. (b) and (d): The time-dependent populations governed by the traditional dissipation dynamics and the accelerated dissipation dynamics, respectively. } \label{fig3} \end{figure} To accelerate such a process by additional coherent control fields, we choose the control Hamiltonians $H_{n}$ as $H_{1}=\mu_{1}|e\rangle\langle g_{1}|+H.c.$ and $H_{2}=\mu_{2}|e\rangle\langle g_{2}|+H.c.$, where $\mu_{1}$ and $\mu_{2}$ are two arbitrary time-independent parameters used to control the intensities of the control fields. By choosing $\mu_{1}=0.8$ and $\mu_{2}=0.6$ as an example, the optimized evolution speed $\dot{V}_{a}$ given according to Eq. (\ref{eq0-6}) is shown in Fig. \ref{fig3} (c). Contrasting Figs.
\ref{fig3} (c) with (a), it is clear that the evolution speed has been significantly improved, especially when the decay rate $\gamma$ is relatively small. For example, when $\gamma=0.5\Omega_{0}$, the maximum value of the evolution speed has been increased from $\dot{V}^{max}\approx0.08$ to $\dot{V}_{a}^{max}\approx 0.26$. For a relatively large decay rate, however, the enhancement is relatively weak. This is because the control functions $f_{n}(t)$ are mainly determined by the instantaneous distance $d=1-\text{Tr}(\rho\rho_{s})$ from the target state, according to Eq. (\ref{eq0-7}). In general, $f_{n}(t)$ are directly proportional to $d$. Within a given period of time, more population will be transferred to the stationary state $|S\rangle$ with a relatively large decay rate (see Fig. \ref{fig3}). That is, the instantaneous distance $d$ decreases as $\gamma$ increases. Accordingly, the control functions $f_{n}(t)$ fade away as the decay rate $\gamma$ increases. To show the fidelity of the accelerated state generation in more detail, we display the fidelity of the target state $|S\rangle$ versus the operation time $\mathcal{T}=t_{f}-t_{i}$ ($t_{i}=0$ is the initial time) and the decay rate $\gamma$ in Fig. \ref{fig4} (a). For clarity, in the following, we will use the symbols $\mathcal{T}_{t}$ and $\mathcal{T}_{a}$ to express operation times via traditional dissipation dynamics and accelerated dissipation dynamics, respectively. It is clear from Fig. \ref{fig4} (a) that the efficiency of state generation has been remarkably improved, since a relatively high fidelity ($F_{S}\approx0.95$) of the target state $|S\rangle$ can be achieved even when the operation time is only $\mathcal{T}_{a}=5/\Omega_{0}$. The shapes of the additional control fields are shown to be smooth curves [see Fig. \ref{fig4} (b) with $\gamma=0.8\Omega_{0}$ as an example] which can be easily realized in practice.
For example, one can use electro-optic modulators to implement such coherent fields. \begin{figure}[b] \scalebox{0.32}{\includegraphics {P_t_and_gamma_lp.eps}} \caption{ Single-atom superposition state preparation: (a) Fidelity $F_{S}$ of the accelerated dissipation scheme versus $\mathcal{T}_{a}$ and $\gamma$, where the fidelity $F_{S}$ is defined by $F_{S}=\langle S|\rho|S\rangle|_{t=t_{f}}$, expressing the final population of the target state. (b) The coherent control fields for the accelerated dissipation scheme when $\gamma=0.8\Omega_{0}$. In general, the intensity of the additional coherent control fields should be smaller than $\Omega_{1,(2)}$. } \label{fig4} \end{figure} Affected by the real experimental environment, stochastic noise usually has to be considered in realizing the scheme. Assume that the Hamiltonian $H_{0}$ is perturbed by a stochastic part $\eta H_{s}$ describing amplitude noise. The stochastic Schr\"{o}dinger equation for a closed system (in the Stratonovich sense) is then $\dot{\psi}(t)=-i[H_{0}+\eta H_{s}\xi(t)]\psi(t)$, where $\xi(t)=\partial_{t}W_{t}$ is heuristically the time derivative of the Brownian motion $W_{t}$. $\xi(t)$ satisfies $\langle\xi(t)\rangle=0$ and $\langle\xi(t)\xi(t')\rangle=\delta(t-t')$ because the noise should have zero mean and the noise at different times should be uncorrelated. Then, we define $\rho_{\xi}(t)=|\psi_{\xi}(t)\rangle\langle\psi_{\xi}(t)|$, and the dynamical equation without dissipation for $\rho_{\xi}$ is thus given as \begin{align}\label{eq1-12} \dot{\rho}_{\xi}=-i[H_{0},\rho_{\xi}]-{i\eta}[H_{s},\xi\rho_{\xi}]. \end{align} After averaging over the noise, Eq. (\ref{eq1-12}) becomes $\dot{\rho}\simeq-i[H_{0},\rho]-{i\eta}[H_{s},\langle\xi\rho_{\xi}\rangle]$, where $\rho=\langle\rho_{\xi}\rangle$ \cite{Njp14093040}.
According to Novikov's theorem in the case of white noise, we have $\langle\xi\rho_{\xi}\rangle=\frac{1}{2}\langle\frac{\delta\rho_{\xi}}{\delta\xi(t')}\rangle|_{t'=t}=-\frac{i\eta}{2}[H_{s},\rho]$. Hence, when both the amplitude noise and dissipation are considered, the dynamics of the open system will be governed by \begin{align}\label{eq1-14} \dot{\rho}\simeq-i[H_{0},\rho]+\mathcal{N}\rho+\mathcal{L}\rho, \end{align} where $\mathcal{N}\rho=-\frac{\eta^2}{2}[H_{s},[H_{s},\rho]]$. \begin{figure}[t] \scalebox{0.32}{\includegraphics {f_vs_eta_C1.eps}} \caption{ Single-atom superposition state preparation: The comparison with respect to the robustness against amplitude-noise error between the traditional dissipation dynamics and the accelerated dissipation dynamics. (a) $F_{S}$ versus $\eta$ and $\gamma$ via traditional dissipation dynamics with $\mathcal{T}_{t}=20/\Omega_{0}$. (b) $F_{S}$ versus $\eta$ and $\gamma$ via the accelerated dissipation dynamics with $\mathcal{T}_{a}=10/\Omega_{0}$. Here $\mathcal{T}_{a}=10/\Omega_{0}$ is chosen so that the highest fidelity for each $\gamma$ in Fig. \ref{fig5} (b) matches that in Fig. \ref{fig5} (a) as closely as possible. } \label{fig5} \end{figure} For the current three-level scheme, we consider independent amplitude noise in $\Omega_{1}$ as well as in $\Omega_{2}$, both with the same intensity $\eta^2$, and the noise term in Eq. (\ref{eq1-14}) is thus \begin{eqnarray}\label{eq1-15} \mathcal{N}\rho=-\frac{\eta^2}{2}([H_{s1},[H_{s1},\rho]]+[H_{s2},[H_{s2},\rho]]), \end{eqnarray} where $H_{s1}=\Omega_{1}|e\rangle\langle g_{1}|+H.c.$ and $H_{s2}=\Omega_{2}|e\rangle\langle g_{2}|+H.c.$. According to Eq. (\ref{eq1-14}), the robustness against amplitude-noise error for the dissipation-based state generation without the additional coherent control fields is shown in Fig. \ref{fig5} (a), in which the operation time is chosen as $\mathcal{T}_{t}=20/\Omega_{0}$.
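The structure of the noise superoperator in Eq. (\ref{eq1-15}) can be checked directly: as a sum of double commutators with Hermitian operators, it preserves both the trace and the Hermiticity of $\rho$. The following is a minimal numerical sketch (not the authors' code; the values of $\eta$, $\Omega_{1}$, $\Omega_{2}$ are illustrative placeholders):

```python
import numpy as np

# Sketch of the amplitude-noise superoperator of Eq. (eq1-15),
# N(rho) = -(eta^2/2) ([Hs1,[Hs1,rho]] + [Hs2,[Hs2,rho]]),
# for the three-level basis {|g1>, |g2>, |e>}. Parameter values are assumed.

def comm(a, b):
    return a @ b - b @ a

g1, g2, e = np.eye(3, dtype=complex)      # basis vectors as rows of the identity

def noise_term(rho, eta=0.1, O1=1.0, O2=1.0):
    Hs1 = O1 * np.outer(e, g1)            # Omega_1 |e><g1| + H.c.
    Hs1 = Hs1 + Hs1.conj().T
    Hs2 = O2 * np.outer(e, g2)            # Omega_2 |e><g2| + H.c.
    Hs2 = Hs2 + Hs2.conj().T
    return -(eta**2 / 2) * (comm(Hs1, comm(Hs1, rho)) + comm(Hs2, comm(Hs2, rho)))

rho0 = np.outer(g1, g1)                   # start in |g1><g1|
N = noise_term(rho0)
print(abs(np.trace(N)))                   # vanishes up to rounding: trace preserving
print(np.allclose(N, N.conj().T))         # True: Hermiticity preserving
```

Because $\mathcal{N}\rho$ is trace free and Hermiticity preserving, it is a legitimate dephasing-type contribution to the master equation (\ref{eq1-14}).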
As shown in the figure, only a $\sim2\%$ deviation in the fidelity occurs for a relatively small decay rate $\gamma\leq\Omega_{0}$ when the noise intensity is $\eta=0.1$. The robustness of the scheme against amplitude-noise error becomes better as the decay rate gets larger. For comparison, the robustness against the amplitude-noise error of the accelerated dynamics, governed by $\dot{\rho}\simeq-i[H_{0}+H_{c},\rho]+\mathcal{N}\rho+\mathcal{L}{\rho}$, is shown in Fig. \ref{fig5} (b) with operation time $\mathcal{T}_{a}=10/\Omega_{0}$. The result shows that the robustness of the accelerated scheme with respect to amplitude-noise error is almost the same as that of the traditional scheme. A stochastic noise with intensity $\eta=0.1$ also causes a deviation of about $2\%$ in the fidelity when $\gamma\leq\Omega_{0}$, and the influence of the noise decreases with increasing $\gamma$. That is, we have confirmed that the approach of adding coherent control fields can realize the goal of accelerating the dissipation process without losing the advantage of robustness against parameter fluctuations. \begin{figure}[t] \scalebox{0.16}{\includegraphics {model2.eps}} \caption{ Two-atom entanglement preparation: (a) Level diagram of a single atom. The optical pumping lasers for the two atoms differ by a relative phase of $\pi$. (b) The effective transitions for the two-atom trapped system. With the effective driving fields and decays, the system will ultimately be stabilized into the state $|S\rangle$. } \label{fig21} \end{figure} \textit{Application II: two-atom entanglement.} We consider two $\Lambda$ atoms with the level structure shown in Fig. \ref{fig21} (a) (marked as atom $A$ and atom $B$), which are trapped in an optical cavity.
The transition $|g_{1}\rangle_{m}\leftrightarrow|e\rangle_{m}$ ($m=A,B$) is resonantly driven by a laser with Rabi frequency $\Omega_{m}$, and the transition $|g_{2}\rangle_{m}\leftrightarrow|e\rangle_{m}$ is coupled resonantly to the quantized cavity field with coupling strength $\lambda$. Besides, we apply a microwave field with Rabi frequency $\Omega_{MW}$ to drive the transition between the ground states $|g_{1}\rangle_{m}$ and $|g_{2}\rangle_{m}$ with detuning $\delta$. The Hamiltonian for this system in an interaction picture reads \begin{small} \begin{align}\label{eq2-1} H_{0}=&\sum_{m=A,B}\Omega_{m}|e\rangle_{m}\langle g_{1}|+\Omega_{MW}e^{i\delta t}|g_{2}\rangle_{m}\langle g_{1}| \cr &+\lambda|e\rangle_{m}\langle g_{2}|a+H.c., \end{align} \end{small} where $a$ denotes the cavity annihilation operator. The corresponding dynamics of the current system is described by the master equation in Eq. (\ref{eq0-1}). The Lindblad operators associated with atomic spontaneous emission and cavity decay are $L_{m_{1}}=\sqrt{\gamma_{1}/2}|g_{1}\rangle_{m}\langle e|$, $L_{m_{2}}=\sqrt{\gamma_{2}/2}|g_{2}\rangle_{m}\langle e|$ ($m=A,B$), and $L_{C}=\sqrt{\kappa}a=\sqrt{\kappa}|0\rangle_{C}\langle 1|$, where $\kappa$ is the cavity decay rate and $|k\rangle_{C}$ ($k=0,1$) denotes the photon number in the cavity. Referring to the formalism of quantum Zeno dynamics \cite{Prl89080401Jpa41493001}, we write the Hamiltonian $H_{0}$ as $H_{0}=\Omega(H_{p}+K H_{q})$, where $\Omega=\sqrt{\Omega_{A}^2+\Omega_{B}^2+\Omega_{MW}^{2}}$, $K=\lambda/\Omega$, $H_{p}$ stands for the dimensionless interaction Hamiltonian between the atoms and the classical fields, and $H_{q}$ denotes the counterpart between the atoms and the quantum cavity field.
When the strong coupling limit $K\rightarrow\infty$ is satisfied, we obtain the effective Hamiltonian $H_{0}^{eff}=\Omega(\sum_{l}P_{l}H_{p}P_{l}+K\epsilon_{l}P_{l})$, where $P_{l}$ is the eigenprojection and $\epsilon_{l}$ the corresponding eigenvalue of $H_{q}$: $H_{q}=\sum_{l}\epsilon_{l} P_{l}$. Assuming the system is initially in the Zeno dark subspace ($\epsilon_{l}=0$) spanned by $|\psi_{1}\rangle=|g_{1}g_{2}\rangle_{A,B}|0\rangle_{C}$, $|\psi_{2}\rangle=|g_{2}g_{1}\rangle_{A,B}|0\rangle_{C}$, $|\psi_{3}\rangle=|g_{2}g_{2}\rangle_{A,B}|0\rangle_{C}$, $|\psi_{4}\rangle=|g_{1}g_{1}\rangle_{A,B}|0\rangle_{C}$, and $|D\rangle=\frac{1}{\sqrt{2}}(|eg_{2}\rangle_{A,B}-|g_{2}e\rangle_{A,B})|0\rangle_{C}$, the effective Hamiltonian reduces to ($\Omega_{A}=-\Omega_{B}$ and $\Omega_{0}=\sqrt{\Omega_{A}^2+\Omega_{B}^2}$) \begin{small} \begin{align}\label{eq2-4} H_{0}^{eff}=&\frac{\Omega_{0}}{\sqrt{2}}|D\rangle\langle T|+\sqrt{2}\Omega_{MW}e^{i\delta t}|\psi_3\rangle\langle T| \cr &+\sqrt{2}\Omega_{MW}e^{-i\delta t}|\psi_{4}\rangle\langle T| +H.c., \end{align} \end{small} where $|S\rangle=(|\psi_{1}\rangle-|\psi_{2}\rangle)/\sqrt{2}$ and $|T\rangle=(|\psi_{1}\rangle+|\psi_{2}\rangle)/\sqrt{2}$. Accordingly, the effective Lindblad operators are $\tilde{L}_{G}=\sqrt{{\gamma_{2}}/{2}}|\psi_{3}\rangle\langle D|$, $\tilde{L}_{S}=\sqrt{{\gamma_{1}}/{4}}|S\rangle\langle D|$, and $\tilde{L}_{T}=\sqrt{{\gamma_{1}}/{4}}|T\rangle\langle D|$. The cavity field is decoupled in the effective Hamiltonian when the Zeno condition is satisfied; thus the cavity decay can be neglected. Figure \ref{fig21} (b) shows the effective transitions of the reduced system. \begin{figure}[t] \centering \scalebox{0.32}{\includegraphics {P_t_and_gamma_five.eps}} \caption{ Two-atom entanglement preparation: The comparison with respect to the two-atom entanglement generation between the traditional dissipation dynamics and the accelerated dissipation dynamics.
(a) $P_{S}$ versus $\gamma_{1}$ via traditional dissipation dynamics. (b) $P_{S}$ versus $\gamma_{1}$ via accelerated dissipation dynamics. The basic parameters in plotting the figure are $\lambda=10\Omega_{0}$, $\delta=0.15\Omega_{0}$, $\kappa=0.5\Omega_{0}$, $\gamma_{2}=0.5\gamma_{1}$, and $\Omega_{MW}=0.2\Omega_{0}$. The initial state is selected as $\rho_{0}=|\psi_{1}\rangle\langle\psi_{1}|$. } \label{fig22} \end{figure} The time-dependent population of the target state $|S\rangle$ versus decay rate $\gamma_{1}$ is shown in Fig. \ref{fig22} (a). Obviously, an operation time $\mathcal{T}_{t}=30/\Omega_{0}=300/\lambda$ is not enough to generate the entangled state $|S\rangle$ [the maximum population of $|S\rangle$ in Fig. \ref{fig22} (a) is only $0.8548$]. A further study shows that for $\gamma_{1}\leq2\Omega_{0}$, an operation time $\mathcal{T}_{t}\geq 1000/\lambda=100/\Omega_{0}$ is necessary in order to obtain a relatively high-fidelity ($F_{S}\geq0.9$) entanglement. Such results can also be found in previous schemes for the generation of two-atom entanglement. For example, in Ref. \cite{Prl106090502}, with parameters similar to those used in plotting Fig. \ref{fig22} (a), the time required for entanglement generation with fidelity $F_{S}\geq 0.9$ is $\mathcal{T}_{t}\geq 1300/\lambda=130/\Omega_{0}$. The control Hamiltonians to accelerate entanglement generation are chosen as $H_{1}=\mu_{1}|e\rangle_{A}\langle g_{1}|+H.c.$ and $H_{2}=\mu_{2}|e\rangle_{B}\langle g_{1}|+H.c.$. We arbitrarily select $\mu_{1}=1$ and $\mu_{2}=1.5$ as an example to show the time-dependent $P_{S}$ versus $\gamma_{1}$ in Fig. \ref{fig22} (b). One can find from Fig. \ref{fig22} that the entanglement generation is accelerated by the additional coherent control fields. An operation time $\mathcal{T}_{a}\leq 20/\Omega_{0}$ is enough to generate two-atom entanglement with fidelity $F_{S}\geq0.9$.
In fact, by choosing suitable parameters for a specified decay rate, the fidelity can be further improved (see Fig. \ref{fig6}). As shown in the figure, for decay rate $\gamma_{1}=0.5\Omega_{0}$ [see Fig. \ref{fig6} (a)], the optimal parameters are $\delta=0$ and $\Omega_{MW}\sim0.25\Omega_{0}$, and the corresponding fidelity is $F_{S}\sim0.97$; for decay rate $\gamma_{1}=\Omega_{0}$ [see Fig. \ref{fig6} (b)], when $\delta\sim0.6\Omega_{0}$ and $\Omega_{MW}\sim0.15\Omega_{0}$, we have the highest fidelity $F_{S}\sim0.96$; for decay rate $\gamma_{1}=2\Omega_{0}$ [see Fig. \ref{fig6} (c)], the highest fidelity $F_{S}\sim0.96$ appears when $\delta\sim0.5\Omega_{0}$ and $\Omega_{MW}\sim0.2\Omega_{0}$. Experimentally achievable values of the cooperativity are around $C=\lambda^2/(\gamma_{1}\kappa)\approx 100$ \cite{Prl97083602Prl101203602}, corresponding to $\gamma_{1}\approx2\Omega_{0}$ and $\kappa\approx0.5\Omega_{0}$. For $\lambda=(2\pi)\,35$ MHz, with these experimentally achievable parameters, the operation time required for the entanglement generation is only about 1.3 $\mu$s, which is much shorter than the typical decoherence time scales for this system. \begin{figure}[t] \centering \scalebox{0.6}{\includegraphics {F_vs_parameters.eps}} \caption{ Two-atom entanglement preparation: The fidelity $F_{S}$ versus detuning $\delta$ and Rabi frequency $\Omega_{MW}$ with (a) $\gamma_{1}=0.5\Omega_{0}$; (b) $\gamma_{1}=\Omega_{0}$; (c) $\gamma_{1}=2\Omega_{0}$. The basic parameters in plotting the figure are $\lambda=10\Omega_{0}$, $\kappa=0.5\Omega_{0}$, and $\gamma_{2}=0.5\gamma_{1}$. The initial state is selected as $\rho_{0}=|\psi_{1}\rangle\langle\psi_{1}|$. } \label{fig6} \end{figure} In conclusion, we have investigated the possibility of accelerating dissipation-based state generation in a three-level system and a trapped two-atom system.
From both analytical and numerical evidence, we have shown that the speed at which the system reaches the target state is significantly improved by additional coherent control fields, without losing the advantage of robustness against parameter fluctuations. Notably, the additional control fields are given basically according to the definition of the system evolution speed via dissipation dynamics [see Eq. (\ref{eq0-4})], while other definitions could be used and the control fields would change accordingly. So, in the future, it would be interesting to study the behavior of the additional coherent control fields obtained from other definitions of the evolution speed. This work was supported by the National Natural Science Foundation of China under Grants No. 11575045, No. 11374054, and No. 11674060.
\section{Introduction} Natural and manufactured phenomena abound where thin materials develop internal stresses, deform out of plane and exhibit nontrivial 3d shapes. Nematic glasses \cite{modes2010disclination,modes2010gaussian}, natural growth of soft tissues \cite{goriely2005differential,yavari2010geometric} and manufactured polymer gels \cite{kim2012thermally,klein2007shaping,wu2013three} are chief examples. Such incompatible prestrained materials may be key constituents of micro-mechanical devices and be subject to actuation. A model postulates that these plates may reduce internal stresses by undergoing large out-of-plane deformations $\mathbf{u}$ as a means to minimize an elastic energy $E[\mathbf{u}]$ that measures the discrepancy between a reference (or target) metric $G$ and the orientation preserving realization $\mathbf{u}$ of it. The strain tensor $\boldsymbol{\epsilon}_{G}(\nabla\mathbf{u})$, given by \begin{equation}\label{eqn:3Dprestrain} \boldsymbol{\epsilon}_{G}(\nabla\mathbf{u}) := \frac12 \big( \nabla\mathbf{u}^T \nabla\mathbf{u} - G \big), \end{equation} measures such discrepancy and yields the following elastic energy functional for prestrained isotropic materials in a 3d reference body $\mathcal{B}$ and without external forcing \begin{equation}\label{E:prestrain-energy} E[\mathbf{u}] := \int_{\mathcal{B}} \mu \Big| G^{-1/2} \boldsymbol{\epsilon}_{G}(\nabla\mathbf{u}) G^{-1/2} \Big|^2+ \frac{\lambda}{2}\tr\Big( G^{-1/2} \boldsymbol{\epsilon}_{G}(\nabla\mathbf{u}) G^{-1/2} \Big)^2, \end{equation} where $\mu,\lambda$ are the Lam\'e constants \cite{efrati2009,efrati2011hyperbolic,sharon2010mechanics}. A deformation $\mathbf{u}:\mathcal{B}\to\mathbb{R}^3$ such that $\boldsymbol{\epsilon}_{G}(\nabla\mathbf{u})=\mathbf{0}$ is called an isometric immersion. If such a map exists, then the material can attain a stress-free equilibrium configuration, i.e., $E[\mathbf{u}]=0$.
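As a quick numerical illustration of \eqref{E:prestrain-energy}, the pointwise energy density can be evaluated for a given deformation gradient and metric; it vanishes exactly when $\boldsymbol{\epsilon}_{G}(\nabla\mathbf{u})=\mathbf{0}$. The following sketch uses illustrative Lam\'e constants (an assumption, not values from the paper):

```python
import numpy as np

# Sketch of the energy density integrand of (E:prestrain-energy):
# mu |G^{-1/2} eps_G G^{-1/2}|^2 + (lambda/2) tr(G^{-1/2} eps_G G^{-1/2})^2,
# with eps_G(grad u) = (grad u^T grad u - G)/2 as in (eqn:3Dprestrain).
mu, lam = 1.0, 1.0                            # illustrative Lame constants

def density(du, G):
    w, V = np.linalg.eigh(G)                  # G is symmetric positive definite
    Gmh = V @ np.diag(w**-0.5) @ V.T          # G^{-1/2} via spectral calculus
    eps = 0.5 * (du.T @ du - G)               # strain tensor eps_G(grad u)
    E = Gmh @ eps @ Gmh
    return mu * np.sum(E**2) + 0.5 * lam * np.trace(E)**2

I3 = np.eye(3)
print(density(I3, I3))                        # identity map realizes G = I: 0.0
print(density(np.diag([1.2, 1.0, 1.0]), I3))  # stretched map: positive residual
```

The second call mimics the residual stress of a map that fails to be an isometric immersion: any mismatch between $\nabla\mathbf{u}^T\nabla\mathbf{u}$ and $G$ produces a strictly positive density.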
However, the existence of an isometric immersion $\mathbf{u}$ of class $H^2(\mathcal{B})$ for any given smooth metric $G$ is not guaranteed in general and constitutes an outstanding problem in differential geometry. In the absence of such a map, the infimum of $E[\mathbf{u}]$ is strictly positive and the material has a residual stress at free equilibria. Slender elastic bodies are of special interest in many applications and our main focus. In this case, the 3d domain $\mathcal{B}$ can be viewed as a tensor product of a 2d domain $\Omega$, the midplane, and an interval of length $s$, namely $ \Omega \times (-\frac{s}{2},\frac{s}{2}). $ Developing dimensionally-reduced models as $s\to0$ is a classical endeavor in nonlinear elasticity. Upon rescaling $E[\mathbf{u}]$ with a factor of the form $s^{-\beta}$, several 2d models can be derived in the limit $s\to0$. A geometrically nonlinear reduced energy was obtained formally by Kirchhoff in his seminal work of 1850. An ansatz-free rigorous derivation for isotropic materials was carried out in the influential work of Friesecke, James and M\"uller in 2002 \cite{frie2002b} via $\Gamma$-convergence for $\beta=3$. This corresponds to the bending regime of the nonlinear Kirchhoff plate theory. If the target metric $G$ is the identity matrix, there is no in-plane stretching and shearing of the material, leaving bending as the chief mechanism of deformation; an excellent example examined in \cite{frie2002b} is the bending of a sheet of paper. For a generic metric $G$ that does not depend on $s$ and is uniform across the thickness, Efrati, Sharon and Kupferman derived a 2d energy which decomposes into stretching and bending components \cite{efrati2009}; the former scales linearly in $s$ whereas the latter scales cubically. The first fundamental form of the midplane characterizes stretching while the second fundamental form accounts for bending.
The thickness parameter $s$ appears in the reduced energy and determines the relative weight between stretching and bending. The asymptotic limit $s\to0$ requires a choice of scaling exponent $\beta$. The bending regime $\beta=3$ has been studied by Lewicka and collaborators \cite{lewicka2011,lewicka2016}, while \cite{frie2006,bella2014metric,lewicka2010foppl,lewicka2015variational} discussed other exponents $\beta$. For instance, $\beta=5$ corresponds to the F\"oppl von K\'{a}rman plate theory, which is suitable for moderate deformations. Different energy scalings select specific asymptotic relations between the prestrain metric $G$ and deformations $\mathbf{u}$. For instance, for $\beta=3$ and metrics $G$ of the form \begin{equation}\label{E:prestrain-metric} G(\mathbf{x}',x_3)=G(\mathbf{x}')= \begin{bmatrix} g(\mathbf{x}') & \mathbf{0} \\ \mathbf{0} & 1 \end{bmatrix} \quad \forall \, \mathbf{x}'\in \Omega, \, x_3\in(-s/2,s/2), \end{equation} with $g\in\mathbb{R}^{2\times2}$ symmetric uniformly positive definite, the first fundamental form $\I[\mathbf{y}]$ of parametrizations $\mathbf{y}:\Omega\to\mathbb{R}^3$ of the midplane must satisfy the following pointwise metric constraint as $s\to0$ \begin{equation}\label{E:metric-constraint} \I[\mathbf{y}](\mathbf{x}') = g(\mathbf{x}') \quad \forall \, \mathbf{x}'\in \Omega; \end{equation} this accounts for the stretching and shearing of the midplane. Moreover, the scaled elastic energy $s^{-3} E[\mathbf{u}]$ turns out to $\Gamma$-converge to the reduced bending energy \begin{equation}\label{E:reduced-bending} E[\mathbf{y}] = \frac{\mu}{12}\int_{\Omega}\Big|g^{-\frac{1}{2}} \, \II[\mathbf{y}] \, g^{-\frac{1}{2}}\Big|^2+\frac{\lambda}{2\mu+\lambda}\tr\Big(g^{-\frac{1}{2}}\, \II[\mathbf{y}] \, g^{-\frac{1}{2}} \Big)^2, \end{equation} which depends solely on the second fundamental form $\II[\mathbf{y}]$ of $\mathbf{y}$ in the absence of external forcing \cite{lewicka2011,lewicka2016}.
It is known that $E[\mathbf{y}]>0$ provided that the Gaussian curvature of the surface $\mathbf{y}(\Omega)$ does not vanish identically \cite{lewicka2011,lewicka2016}. We illustrate this in Figure \ref{F:efrati}. \begin{figure}[htbp] \begin{center} \includegraphics[width=5.5cm,trim=300 300 300 300, clip]{clamped.png} \includegraphics[width=5.5cm,trim=300 300 300 300, clip]{free.png} \end{center} \caption{\small A trapezoidal-like plate is glued at the three edges of a square of unit size and is free at the remaining side, as suggested in \cite{efrati2009} (left). For $s$ small, the plate cannot sustain the in-plane compression and buckles up. The deformation $\mathbf{y}(x_1,x_2)=(x_1,x_2,x_1^2(1-x_1)^2x_2^2)^T$ mimics this configuration and yields a target metric $g=\I[\mathbf{y}]$; $\mathbf{y}$ and $g$ are thus compatible. Upon freeing the boundary conditions, the plate changes shape to a non-flat configuration with the same metric (right).} \label{F:efrati} \end{figure} In this article, we present a numerical study of the minimization of \eqref{E:reduced-bending} subject to the constraint \eqref{E:metric-constraint} with either Dirichlet or \emph{free} boundary conditions. We start in Section \ref{sec:prob_statement} with a justification of \eqref{E:prestrain-energy} followed by a formal derivation of \eqref{E:metric-constraint} and \eqref{E:reduced-bending} as the asymptotic limit of $s^{-3} E[\mathbf{u}]$ as $s\to0$. Moreover, we show an equivalent formulation that basically replaces the second fundamental form $\II[\mathbf{y}]$ by the Hessian $D^2\mathbf{y}$, which makes the constrained minimization problem amenable to computation. This derivation is, however, trickier than that in \cite{bartels2013,bonito2018} for single layer plates and \cite{bonito2015,bonito2017,bonito2020discontinuous} for bilayer plates. The numerical treatment of the ensuing fourth order problem is a challenging and exciting endeavor.
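For the parametrization quoted in the caption of Figure \ref{F:efrati}, the fundamental forms entering \eqref{E:metric-constraint} and \eqref{E:reduced-bending} can be computed symbolically. The following illustrative sketch (using SymPy; not part of the paper's numerical method) builds $\I[\mathbf{y}]=\nabla'\mathbf{y}^T\nabla'\mathbf{y}$ and $\II[\mathbf{y}]$ from the Jacobian and the unit normal:

```python
import sympy as sp

# First and second fundamental forms of the midplane parametrization
# y(x1, x2) = (x1, x2, x1^2 (1-x1)^2 x2^2) from the caption of Figure F:efrati,
# so that g := I[y] is a compatible target metric by construction.
x1, x2 = sp.symbols('x1 x2', real=True)
y = sp.Matrix([x1, x2, x1**2 * (1 - x1)**2 * x2**2])

J = y.jacobian([x1, x2])                 # 3x2 Jacobian (nabla' y)
I = sp.simplify(J.T * J)                 # first fundamental form I[y] = g

n = J[:, 0].cross(J[:, 1])               # normal direction d1 y x d2 y
nu = n / n.norm()                        # unit normal
II = sp.Matrix(2, 2, lambda i, j:        # second fundamental form II[y]
               sp.simplify(nu.dot(sp.diff(y, [x1, x2][i], [x1, x2][j]))))

print(I)    # reduces to the identity wherever the surface is flat (x2 = 0)
print(II)
```

Both forms are symmetric, and $\I[\mathbf{y}]$ deviates from the identity only through the graph term, which is the source of the in-plane incompatibility exploited in the example.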
In \cite{bartels2013,bonito2015,bonito2017}, the discretization hinges on Kirchhoff elements for isometries $\mathbf{y}$, i.e., $g=I_2\in\mathbb{R}^{2\times2}$ is the identity matrix. The approximation of $\mathbf{y}$ in \cite{bonito2018,bonito2020discontinuous} relies on a discontinuous Galerkin (dG) method. In all cases, the minimization problem associated with the nonconvex constraint \eqref{E:metric-constraint} relies on a discrete $H^2$-gradient flow approach. In addition, a $\Gamma$-convergence theory is developed in \cite{bartels2013,bonito2015,bonito2018}. In this paper, inspired by \cite{cockburn1998}, we design a {\it local discontinuous Galerkin} method (LDG) for $g\ne I_2$ that replaces $D^2\mathbf{y}$ by a reconstructed Hessian $H_h[\mathbf{y}_h]$ of the discontinuous piecewise polynomial approximation $\mathbf{y}_h$ of $\mathbf{y}$. Such a discrete Hessian $H_h[\mathbf{y}_h]$ consists of three distinct parts: the broken Hessian $D_h^2\mathbf{y}_h$, the lifting $R_h([\nabla_h\mathbf{y}_h])$ of the jumps of the broken gradient $\nabla_h\mathbf{y}_h$ of $\mathbf{y}_h$, and the lifting $B_h([\mathbf{y}_h])$ of the jumps of $\mathbf{y}_h$ itself. Lifting operators were introduced in \cite{bassi1997} and analyzed in \cite{brezzi1999,brezzi2000}. The definition of $R_h$ and $B_h$ is motivated by the liftings of \cite{ern2010,ern2011} leading to discrete gradient operators. It is worth pointing out prior uses of $H_h[\mathbf{y}_h]$. Discrete Hessians were instrumental to study convergence of dG for the bi-Laplacian in \cite{pryer} and plates with isometry constraint in \cite{bonito2018}. In the present contribution, $H_h[\mathbf{y}_h]$ makes its debut as a chief constituent of the numerical method. We introduce $H_h[\mathbf{y}_h]$ in Section \ref{sec:method} along with the LDG approximation of \eqref{E:reduced-bending} and the metric defect $D_h[\mathbf{y}_h]$ that relaxes \eqref{E:metric-constraint} and makes it computable.
We also discuss two discrete $H^2$-gradient flows, one to reduce the bending energy \eqref{E:reduced-bending} starting from $\mathbf{y}_h^0$, and the other to diminish the stretching energy and make $D_h[\mathbf{y}_h^0]$ as small as possible. The former leads to Algorithm \ref{algo:main_GF} (gradient flow) and the latter to Algorithm \ref{algo:preprocess} (initialization) of Section \ref{sec:method}. We reserve Section~\ref{sec:solve} for implementation aspects of Algorithms \ref{algo:main_GF} and \ref{algo:preprocess}. We present several numerical experiments, some of practical interest, in Section~\ref{sec:num_res} that document the performance of the LDG approach and illustrate the rich variety of shapes achievable with the reduced model \eqref{E:metric-constraint}-\eqref{E:reduced-bending}. We close the paper with concluding remarks in Section \ref{S:conclusions}. \section{Problem statement} \label{sec:prob_statement} Let $\Omega_s:=\Omega\times(-s/2,s/2)\subset\mathbb{R}^3$ be a three-dimensional plate at rest, where $s>0$ denotes the thickness and $\Omega\subset\mathbb{R}^2$ is the (flat) midplane. Given a Riemannian metric $G:\Omega_s\rightarrow\mathbb{R}^{3\times 3}$ (symmetric uniformly positive definite matrix), we consider 3d deformations $\mathbf{u}:\Omega_s\rightarrow\mathbb{R}^3$ driven by the strain tensor $\boldsymbol{\epsilon}_{G}(\nabla\mathbf{u})$ of \eqref{eqn:3Dprestrain} that measures the discrepancy between $\nabla\mathbf{u}^T\nabla\mathbf{u}$ and $G$; hence, the 3d elastic energy $E[\mathbf{u}]=0$ whenever $\boldsymbol{\epsilon}_{G}(\nabla\mathbf{u})=\mathbf{0}$. 
We say that $G$ is the {\it reference (prestrained or target) metric.} An orientation-preserving deformation $\mathbf{u}:\Omega_s\to\mathbb{R}^3$ of class $H^2(\Omega_s)$ satisfying $\boldsymbol{\epsilon}_{G}(\nabla\mathbf{u})=\mathbf{0}$ is called an {\it isometric immersion.} We assume that $G$ does not depend on $s$ and is uniform throughout the thickness, as written in \eqref{E:prestrain-metric} with $g:\Omega\rightarrow\mathbb{R}^{2\times 2}$ symmetric uniformly positive definite \cite{lewicka2011,efrati2009}. If $g^{1/2}$ denotes the square root of $g$, we have \begin{equation} \label{def:Ginv} G^{\frac{1}{2}}= \begin{bmatrix} g^{\frac{1}{2}} & \mathbf{0} \\ \mathbf{0} & 1 \end{bmatrix}, \quad G^{-\frac{1}{2}}= \begin{bmatrix} g^{-\frac{1}{2}} & \mathbf{0} \\ \mathbf{0} & 1 \end{bmatrix}. \end{equation} In Section \ref{S:energy} we rederive, following \cite{efrati2009}, the elastic energy $E[\mathbf{u}]$ advocated in \cite{lewicka2011,lewicka2016}. We reduce the 3d model to a 2d plate model in Section \ref{S:reduced-model}. To this end, we perform a formal asymptotic analysis as $s\to0$ but also consider the pre-asymptotic regime $s>0$. We discuss the notion of admissibility in Section \ref{S:admissibility} and derive an equivalent reduced energy better suited for computation in Section \ref{S:simplified-energy}. We will use the following notation below. The $i^{th}$ component of a vector $\mathbf{v}\in\mathbb{R}^n$ is denoted $v_i$, while for a matrix $A\in\mathbb{R}^{n\times m}$ we write $A_{ij}$ for the coefficient in the $i^{th}$ row and $j^{th}$ column. The gradient of a scalar function is a column vector and, for $\mathbf v:\mathbb R^m \rightarrow \mathbb R^n$, we set $(\nabla \mathbf v)_{ij} := \partial_j v_i$, $i=1,\dots,n$, $j=1,\dots,m$. The Euclidean norm of a vector is denoted $|\cdot|$. For matrices $A,B\in\mathbb{R}^{n\times m}$, we write $A:B:=\tr(B^TA)=\sum_{i=1}^{n}\sum_{j=1}^{m}A_{ij}B_{ij}$ and $|A|:=\sqrt{A:A}$ for the Frobenius norm of $A$.
To have a compact notation later, for higher-order tensors we set \begin{equation}\label{E:higher-order} \mathbf{A}=(A_k)_{k=1}^n\in\mathbb{R}^{n\times m\times m} ~ \Rightarrow ~ \tr(\mathbf{A}) = \big( \tr(A_k) \big)_{k=1}^n, \quad |\mathbf{A}| = \left(\sum_{k=1}^n |A_k|^2 \right)^{\frac12}. \end{equation} Furthermore, we will frequently use the convention \begin{equation}\label{E:higher-order_B} B\mathbf{A}B := (BA_kB)_{k=1}^3 \in \mathbb R^{3 \times 2 \times 2}, \end{equation} for $\mathbf{A} \in \mathbb R^{3 \times 2 \times 2}$ and $B \in \mathbb R^{2\times 2}$. In particular, for $\mathbf{y}:\mathbb R^2 \rightarrow \mathbb R^3$, we will often write \begin{equation}\label{e:notation_terms_A} g^{-1/2} \, D^2 \mathbf{y} \, g^{-1/2} = \left(g^{-1/2} \, D^2 y_k \, g^{-1/2}\right)_{k=1}^3, \end{equation} which, combined with \eqref{E:higher-order}, yields \begin{equation}\label{e:notation_terms} \begin{split} \big|g^{-1/2} \, D^2 \mathbf{y} \, g^{-1/2}\big| &= \left( \sum_{k=1}^3 \big|g^{-1/2} \, D^2 y_k \, g^{-1/2} \big|^2 \right)^{1/2}, \\ \tr \big(g^{-1/2} \, D^2 \mathbf{y} \, g^{-1/2} \big) &= \left( \tr \big(g^{-1/2} \, D^2 y_k \, g^{-1/2} \big)\right)_{k=1}^3. \end{split} \end{equation} Finally, $\Id_n$ will denote the identity matrix in $\mathbb{R}^{n\times n}$. \subsection{Elastic energy for prestrained plates}\label{S:energy} We present, following \cite{efrati2009}, a simple derivation of the energy density $W(\nabla\mathbf{u} \, G^{-1})$ for prestrained materials. This hinges on the well-established theory of hyperelasticity, and reduces to the classical St. Venant-Kirchhoff model provided $G=I_3$. Such model for isotropic materials reads \begin{equation} \label{def:W_iso} W(F):=\mu |\boldsymbol{\epsilon}_{\Id}|^2+\frac{\lambda}{2}\tr(\boldsymbol{\epsilon}_{\Id})^2, \quad \boldsymbol{\epsilon}_{\Id}(F):=\frac{1}{2}\left(F^TF-\Id_3\right). 
\end{equation} Here, $F$ is the deformation gradient, $\boldsymbol{\epsilon}_{\Id}$ is the Green-Lagrange strain tensor and $\lambda$ and $\mu$ are the (first and second) Lam\'e constants. This implies \begin{equation} \label{def:W2_iso} D^2W(\Id_3)(F,F)=2\mu |e|^2+\lambda\tr(e)^2, \quad e:=\frac{F+F^T}{2}. \end{equation} We point out that in \cite{frie2002b}, the strain tensor $\boldsymbol{\epsilon}_{\Id}=\boldsymbol{\epsilon}_{\Id}(F)$ of \eqref{def:W_iso} is set to be $\boldsymbol{\epsilon}_{\Id}(F)=\sqrt{F^TF}-\Id_3$, which yields the same relation \eqref{def:W2_iso}, and thus the same $\Gamma$-limit discussed below. Given an arbitrary point $\mathbf{x}_0\in\Omega_s$, we consider the linear transformation $\mathbf{r}_0(\mathbf{x}) := G^{1/2}(\mathbf{x}_0) (\mathbf{x}-\mathbf{x}_0)$; hence $\nabla\mathbf{r}_0(\mathbf{x}) = G^{1/2}(\mathbf{x}_0)$. The map $\mathbf{r}_0$ can be viewed as a local re-parametrization of the deformed 3d elastic body, and $\mathbf{z}=\mathbf{r}_0(\mathbf{x})$ is a new local coordinate system. This induces the deformation $\mathbf{U}(\mathbf{z}) := \mathbf{u}(\mathbf{x})$ and \[ \mathbf{u} = \mathbf{U} \circ \mathbf{r}_0 \quad\Rightarrow\quad \nabla \mathbf{u} (\mathbf{x}) = \nabla_{\mathbf{z}} \mathbf{U}(\mathbf{z}) \, G^{\frac12}(\mathbf{x}_0), \] where $\nabla_{\mathbf{z}}$ denotes the gradient with respect to the variable $\mathbf{z}$. The deviation of $\nabla\mathbf{u}^T \nabla\mathbf{u}$ from the reference metric $G$ at $\mathbf{x}=\mathbf{x}_0$ is thus given by \eqref{eqn:3Dprestrain} \[ \boldsymbol{\epsilon}_{G}(\nabla \mathbf{u}) = \frac12 \big( \nabla\mathbf{u}^T \nabla\mathbf{u} - G \big) = \frac12 G^{\frac12} \big( \nabla_{\mathbf{z}}\mathbf{U}^T \nabla_{\mathbf{z}}\mathbf{U} - I_3 \big) G^{\frac12} = G^{\frac12} \boldsymbol{\epsilon}_{\Id}(\nabla_{\mathbf{z}}\mathbf{U}) G^{\frac12}. 
\] The energy density $W(\nabla_{\mathbf{z}}\mathbf{U})$ at $\mathbf{z}=\mathbf{r}_0(\mathbf{x})$ with $\mathbf{x}=\mathbf{x}_0$ associated with $\boldsymbol{\epsilon}_{\Id}(\nabla_{\mathbf{z}}\mathbf{U})$, which is minimized when $\boldsymbol{\epsilon}_{\Id}(\nabla_{\mathbf{z}}\mathbf{U})$ vanishes, is governed by \eqref{def:W_iso} for isotropic materials according to the theory of hyperelasticity. We now rewrite this energy density in terms of $\nabla\mathbf{u}$ at $\mathbf{x}=\mathbf{x}_0$, namely $W(\nabla_{\mathbf{z}}\mathbf{U}) = W(\nabla\mathbf{u} \, G^{-1/2})$, whence \begin{equation}\label{E:new-energydensity} W(\nabla\mathbf{u} \, G^{-1/2}) = \mu \Big| G^{-1/2} \, \boldsymbol{\epsilon}_{G}(\nabla\mathbf{u}) \, G^{-1/2} \Big|^2+ \frac{\lambda}{2}\tr\Big( G^{-1/2} \, \boldsymbol{\epsilon}_{G}(\nabla\mathbf{u}) \, G^{-1/2} \Big)^2. \end{equation} This motivates the definition of the hyperelastic energy for prestrained materials \begin{equation} \label{def:EG} E[\mathbf{u}] := \int_{\Omega_s} W \big(\nabla\mathbf{u}(\mathbf{x}) G(\mathbf{x})^{-\frac{1}{2}}\big)d\mathbf{x}-\int_{\Omega_s}\mathbf{f}_s(\mathbf{x})\cdot\mathbf{u}(\mathbf{x}) d\mathbf{x}, \end{equation} where $\mathbf{f}_s:\Omega_s \rightarrow \mathbb R^3$ is a prescribed forcing term and $W$ is given by \eqref{E:new-energydensity}. Note that the pointwise decomposition $G(\mathbf{x}_0)=\nabla \mathbf{r}_0(\mathbf{x}_0)^T\nabla \mathbf{r}_0(\mathbf{x}_0)$ is always possible because $G(\mathbf{x}_0)$ is symmetric positive definite. However, a global transformation $\mathbf{r}$ such that $\nabla \mathbf{r}^T\nabla\mathbf{r}=G$ everywhere need not exist in general because $G$ is not required to be immersible in $\mathbb{R}^3$. This is referred to as {\it incompatible elasticity} in \cite{efrati2009}. Moreover, the infimum of $E[\mathbf{u}]$ in \eqref{def:EG} should be strictly positive if the Riemann curvature tensor associated with $G$ does not vanish identically \cite{lewicka2011}.
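The change of variables above can be verified numerically: with $\nabla\mathbf{u}=\nabla_{\mathbf{z}}\mathbf{U}\,G^{1/2}$ and any symmetric positive definite $G$, the identity $\boldsymbol{\epsilon}_{G}(\nabla\mathbf{u})=G^{1/2}\boldsymbol{\epsilon}_{\Id}(\nabla_{\mathbf{z}}\mathbf{U})G^{1/2}$ holds exactly. A sketch with random illustrative data:

```python
import numpy as np

# Check eps_G(grad u) = G^{1/2} eps_Id(grad_z U) G^{1/2} with random data,
# where grad u = grad_z U * G^{1/2}; cf. (eqn:3Dprestrain) and (def:W_iso).
rng = np.random.default_rng(0)

M = rng.standard_normal((3, 3))
G = M @ M.T + 3 * np.eye(3)                  # symmetric positive definite metric
w, V = np.linalg.eigh(G)
Ghalf = V @ np.diag(np.sqrt(w)) @ V.T        # G^{1/2} via spectral calculus

dU = np.eye(3) + 0.1 * rng.standard_normal((3, 3))    # grad_z U near identity
du = dU @ Ghalf                                       # grad u = grad_z U G^{1/2}

eps_G = 0.5 * (du.T @ du - G)                         # eqn:3Dprestrain
eps_Id = 0.5 * (dU.T @ dU - np.eye(3))                # strain of (def:W_iso)

print(np.allclose(eps_G, Ghalf @ eps_Id @ Ghalf))     # True
```

The identity follows from $G=G^{1/2}G^{1/2}$ and the symmetry of $G^{1/2}$, which is what makes \eqref{E:new-energydensity} the isotropic density \eqref{def:W_iso} expressed in the local coordinates $\mathbf{z}$.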
\subsection{Reduced model}\label{S:reduced-model} It is well-known that the case $E[\mathbf{u}]\sim s$ corresponds to a stretching of the midplane $\Omega$ (membrane theory) while pure bending occurs when $E[\mathbf{u}]\sim s^3$ (bending theory); see \cite{frie2006}. We examine now the formal asymptotic behavior of $s^{-3} E[\mathbf{u}]$ as $s\to0$; see also \cite{efrati2009}. We start with the assumption \cite{frie2002a,lewicka2011,lewicka2016} \begin{equation}\label{E:kirchhoff_quad} \mathbf{u}(\mathbf{x})=\mathbf{y}(\mathbf{x}')+x_3\alpha(\mathbf{x}')\boldsymbol{\nu}(\mathbf{x}')+\frac12 x_3^2\beta(\mathbf{x}')\boldsymbol{\nu}(\mathbf{x}') \qquad\forall \, \mathbf{x}'\in\Omega, ~ x_3 \in (-s/2,s/2), \end{equation} where $\mathbf{y}:\Omega\rightarrow \mathbb R^3$ describes the deformation of the mid-surface of the plate, $\boldsymbol{\nu}(\mathbf{x}'):=\frac{\partial_1\mathbf{y}(\mathbf{x}')\times\partial_2\mathbf{y}(\mathbf{x}')}{|\partial_1\mathbf{y}(\mathbf{x}')\times\partial_2\mathbf{y}(\mathbf{x}')|}$ is the unit normal vector to the surface $\mathbf{y}(\Omega)$ at the point $\mathbf{y}(\mathbf{x}')$, and $\alpha,\beta:\Omega\rightarrow\mathbb{R}$ are functions to be determined. Compared to the usual Kirchhoff-Love assumption \begin{equation}\label{E:kirchhoff} \mathbf{u}(\mathbf{x}',x_3) = \mathbf{y}(\mathbf{x}') + x_3 \, \boldsymbol{\nu}(\mathbf{x}') \qquad\forall \, \mathbf{x}'\in\Omega, ~ x_3 \in (-s/2,s/2), \end{equation} \eqref{E:kirchhoff_quad} not only restricts fibers orthogonal to $\Omega$ to remain perpendicular to the surface $\mathbf{y}(\Omega)$ but also allows such fibers to be inhomogeneously stretched. We rescale the forcing term in \eqref{def:EG} as follows \begin{equation}\label{E:forcing} \mathbf{f}(\mathbf{x}'):=\lim_{s\to 0^+} s^{-3}\int_{-s/2}^{s/2}\mathbf{f}_s(\mathbf{x}',x_3) \, dx_3 \qquad\forall \, \mathbf{x}'\in\Omega, \end{equation} and assume the limit to be finite. 
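To illustrate the scaling in \eqref{E:forcing} (our own toy example, not from the paper): if the 3d load is uniform through the thickness and scales like $\mathbf{f}_s(\mathbf{x}',x_3)=s^2\mathbf{f}_0(\mathbf{x}')$, the limit is finite and equals $\mathbf{f}_0$:

```python
# Symbolic check of the forcing rescaling: with f_s = s^2 * f0 (assumed
# thickness scaling), s^{-3} * int_{-s/2}^{s/2} f_s dx3 -> f0 as s -> 0.
import sympy as sp

s = sp.Symbol('s', positive=True)
x3 = sp.Symbol('x3', real=True)
f0 = sp.Symbol('f0')                      # value of the in-plane density

f_s = s**2 * f0                           # assumed scaling of the 3d load
f = sp.limit(sp.integrate(f_s, (x3, -s/2, s/2)) / s**3, s, 0)
```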
However, for the asymptotics below we omit this term for simplicity from the derivation and focus on the energy density $W$ in \eqref{def:EG}. Denoting by $\nabla'$ the gradient with respect to $\mathbf{x}'$ and writing $\mathbf{b}(\mathbf{x}'):=\alpha(\mathbf{x}')\boldsymbol{\nu}(\mathbf{x}')$ and $\mathbf{d}(\mathbf{x}'):=\beta(\mathbf{x}')\boldsymbol{\nu}(\mathbf{x}')$, we have for all $\mathbf{x}= (\mathbf{x}',x_3)\in\Omega_s$ \begin{equation*} \nabla\mathbf{u}(\mathbf{x}) = \left[ \nabla'\mathbf{y}(\mathbf{x}')+x_3\nabla' \mathbf{b}(\mathbf{x}')+\frac12x_3^2\nabla'\mathbf{d}(\mathbf{x}'), \mathbf{b}(\mathbf{x}')+x_3\mathbf{d}(\mathbf{x}')\right]\in\mathbb{R}^{3\times 3}. \end{equation*} Using the relations \begin{equation*} \boldsymbol{\nu}^T\boldsymbol{\nu}=1 \quad \mbox{and} \quad \boldsymbol{\nu}^T\nabla'\mathbf{y}=\boldsymbol{\nu}^T\nabla'\boldsymbol{\nu}=\mathbf{d}^T\nabla'\boldsymbol{\nu}=\mathbf{d}^T\nabla'\mathbf{y}=\mathbf{b}^T\nabla'\boldsymbol{\nu}=\mathbf{b}^T\nabla'\mathbf{y}=\mathbf{0}, \end{equation*} we easily get \begin{align*} \nabla\mathbf{u}^T\nabla\mathbf{u} &= \begin{bmatrix} \nabla'\mathbf{y}^T\nabla'\mathbf{y} & \mathbf{0} \\ \mathbf{0} & \alpha^2 \end{bmatrix} +x_3 \begin{bmatrix} \nabla'\mathbf{y}^T\nabla'\mathbf{b}+\nabla'\mathbf{b}^T\nabla'\mathbf{y} & \nabla'\mathbf{b}^T\mathbf{b} \\ \mathbf{b}^T\nabla'\mathbf{b} & 2\alpha\beta \end{bmatrix} \\ &+x_3^2 \begin{bmatrix} \frac12(\nabla'\mathbf{y}^T\nabla'\mathbf{d}+\nabla'\mathbf{d}^T\nabla'\mathbf{y})+\nabla'\mathbf{b}^T\nabla'\mathbf{b} & \frac12\nabla'\mathbf{d}^T\mathbf{b}+\nabla'\mathbf{b}^T\mathbf{d} \\ \frac12\mathbf{b}^T\nabla'\mathbf{d}+\mathbf{d}^T\nabla'\mathbf{b} & \beta^2 \end{bmatrix}+ h.o.t. 
\end{align*} Moreover, since \begin{equation*} |\boldsymbol{\nu}|^2=1, \quad \partial_j\mathbf{b}=(\partial_j\alpha)\boldsymbol{\nu}+\alpha\partial_j\boldsymbol{\nu} \quad \mbox{and} \quad \boldsymbol{\nu}\cdot\partial_j\mathbf{y}=0 \quad \mbox{for } j=1,2, \end{equation*} we have \begin{equation*} \nabla'\mathbf{b}^T\nabla'\mathbf{y}=\alpha\nabla'\boldsymbol{\nu}^T\nabla'\mathbf{y} \quad \mbox{and} \quad \nabla'\mathbf{b}^T\mathbf{b}=\alpha\nabla'\alpha. \end{equation*} Therefore, the expression $2G^{-1/2} \boldsymbol{\epsilon}_{G}(\nabla\mathbf{u}) G^{-1/2}$ becomes \begin{equation*} G^{-\frac{1}{2}}\nabla\mathbf{u}^T\nabla\mathbf{u} G^{-\frac{1}{2}}-\Id_3 = A_1+2x_3A_2+x_3^2A_3 + \mathcal{O}(x_3^3), \end{equation*} where \begin{align*} A_1 \! &:= \! \begin{bmatrix} g^{-\frac{1}{2}} \, \I[\mathbf{y}] \, g^{-\frac{1}{2}}-\Id_2 & \mathbf{0} \\ \mathbf{0} & \alpha^2-1 \end{bmatrix}, \\ A_2 \! &:= \! \begin{bmatrix} - \alpha g^{-\frac{1}{2}} \, \II[\mathbf{y}] \, g^{-\frac{1}{2}} & \frac12 \alpha g^{-\frac12}\nabla'\alpha \\ \frac12\alpha \nabla'\alpha^Tg^{-\frac12} & \alpha\beta \end{bmatrix}, \\ A_3 \! &:= \! \begin{bmatrix} g^{-\frac12}(\nabla'\mathbf{b}^T\nabla'\mathbf{b}+\frac12(\nabla'\mathbf{y}^T\nabla'\mathbf{d}+\nabla'\mathbf{d}^T\nabla'\mathbf{y}))g^{-\frac12} & \frac12g^{-\frac12}(\nabla'\mathbf{d}^T\mathbf{b}+2\nabla'\mathbf{b}^T\mathbf{d}) \\ \frac12(\nabla'\mathbf{d}^T\mathbf{b}+2\nabla'\mathbf{b}^T\mathbf{d})^Tg^{-\frac12} & \beta^2 \end{bmatrix} \end{align*} are independent of $x_3$ and \[ \I[\mathbf{y}] = \nabla'\mathbf{y}^T\nabla'\mathbf{y} \quad \mbox{and} \quad \II[\mathbf{y}] = -\nabla'\boldsymbol{\nu}^T\nabla'\mathbf{y} \] are the first and second fundamental forms of $\mathbf{y}(\Omega)$, respectively. To evaluate the two terms on the right-hand side of \eqref{E:new-energydensity}, we split them into powers of $x_3$. We first deal with the pre-asymptotic regime, in which $s>0$ is small, and next we consider the asymptotic regime $s\to0$. 
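As a sanity check of this expansion (our own verification, not part of the derivation), the formula for $A_2$ can be confirmed symbolically for the unit cylinder $\mathbf{y}(x_1,x_2)=(\sin x_1, x_2,\cos x_1)$, an isometric immersion of the flat metric $g=I_2$ (so the 3d metric is $\Id_3$), with $\alpha=1$ and constant $\beta$ in \eqref{E:kirchhoff_quad}:

```python
# Verify (grad u)^T (grad u) - I = A1 + 2 x3 A2 + O(x3^2) for the cylinder:
# here A1 = 0 (isometry, alpha = 1) and A2 = diag(-II, beta).
import sympy as sp

x1, x2, x3, beta = sp.symbols('x1 x2 x3 beta')
y = sp.Matrix([sp.sin(x1), x2, sp.cos(x1)])
nu = y.diff(x1).cross(y.diff(x2))           # unit normal for this surface
u = y + x3 * nu + sp.Rational(1, 2) * x3**2 * beta * nu   # ansatz, alpha = 1

Du = sp.Matrix.hstack(u.diff(x1), u.diff(x2), u.diff(x3))
M = sp.simplify(Du.T * Du - sp.eye(3))      # = A1 + 2 x3 A2 + O(x3^2)

A1 = sp.simplify(M.subs(x3, 0))
A2 = sp.simplify(M.diff(x3).subs(x3, 0) / 2)

II_cyl = sp.Matrix([[-1, 0], [0, 0]])       # second fundamental form
A2_expected = sp.diag(-II_cyl, beta)        # alpha = 1, grad' alpha = 0
```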
\medskip\noindent {\bf Pre-asymptotics.} To compute $s^{-3}\int_{\Omega_s} \big| G^{-\frac12} \boldsymbol{\epsilon}_{G}(\nabla\mathbf{u}) G^{-\frac12}\big|^2$, we first note that \[ \Big|G^{-\frac12} \boldsymbol{\epsilon}_{G}(\nabla\mathbf{u}) G^{-\frac12} \Big|^2 \!= \! \frac14 |A_1|^2 + x_3 A_1 \!:\! A_2 + \frac{x_3^2}{2} A_1 \!:\! A_3 + x_3^2|A_2|^2 + \mathcal{O}(x_3^3), \] all the terms with odd powers of $x_3$ integrate to zero on $[-s/2,s/2]$, and those terms hidden in $\mathcal{O}(x_3^3)$ integrate to an $\mathcal{O}(s)$ contribution after rescaling by $s^{-3}$. We next realize that \begin{align*} s^{-3} \int_{-s/2}^{s/2} dx_3 \int_\Omega |A_1|^2 d\mathbf{x}' & = s^{-2} \int_\Omega \big|A_1\big|^2 d\mathbf{x}' \\ s^{-3} \int_{-s/2}^{s/2} x_3^2 \, dx_3 \int_\Omega A_1 \!:\! A_3 d\mathbf{x}' & = \frac{1}{12} \int_\Omega A_1 \!:\! A_3 d\mathbf{x}' \\ s^{-3} \int_{-s/2}^{s/2} x_3^2 \, dx_3 \int_\Omega |A_2|^2 d\mathbf{x}' & = \frac{1}{12} \int_\Omega \big|A_2\big|^2 d\mathbf{x}', \end{align*} and exploit that $s^{-3}\int_{\Omega_s} \big| G^{-\frac12} \boldsymbol{\epsilon}_{G}(\nabla\mathbf{u}) G^{-\frac12}\big|^2\le\Lambda$ independent of $s$ to find that \[ \Big| \int_\Omega A_1 \!:\! A_3 d\mathbf{x}' \Big| \le s \, \Big( s^{-2} \int_\Omega |A_1|^2d\mathbf{x}' \Big)^{\frac12} \Big(\int_\Omega |A_3|^2d\mathbf{x}' \Big)^{\frac12} \le C \Lambda^{\frac12} s \] is a higher order term because $\int_\Omega |A_3|^2d\mathbf{x}'\le C^2$. We thus obtain the expression \[ s^{-3}\int_{\Omega_s} \big| G^{-\frac12} \boldsymbol{\epsilon}_{G}(\nabla\mathbf{u}) G^{-\frac12}\big|^2 = \frac{1}{4s^2} \int_\Omega \big|A_1\big|^2 d\mathbf{x}' + \frac{1}{12} \int_\Omega \big|A_2\big|^2 d\mathbf{x}' + \mathcal{O}(s).
\] We proceed similarly with the second term in \eqref{E:new-energydensity} to arrive at \begin{align*} \tr\big( G^{-\frac12} \boldsymbol{\epsilon}_{G}(\nabla\mathbf{u}) G^{-\frac12} \big)^2 = &\frac14 \, \tr(A_1)^2 + x_3 \, \tr(A_1) \, \tr(A_2) + \frac12 \, x_3^2 \, \tr(A_1) \, \tr(A_3) \\& + x_3^2 \, \tr(A_2)^2 + \mathcal{O}(x_3^3), \end{align*} and \begin{equation*} s^{-3}\int_{\Omega_s} \tr\big( G^{-\frac12} \boldsymbol{\epsilon}_{G}(\nabla\mathbf{u}) G^{-\frac12} \big)^2 = \frac{1}{4s^2} \int_\Omega \tr\big(A_1\big)^2 d\mathbf{x}' + \frac{1}{12} \int_\Omega \tr\big(A_2\big)^2 d\mathbf{x}' + \mathcal{O}(s). \end{equation*} In view of \eqref{E:new-energydensity} and \eqref{def:EG}, we deduce that the rescaled elastic energy $s^{-3} E[\mathbf{u}]\approx E_s[\mathbf{y}] + E_b[\mathbf{y}]$ for $s$ small, where the two leading terms are the {\it stretching energy} \begin{equation}\label{E:stretching} E_s[\mathbf{y}] = \frac{1}{8s^2} \int_\Omega \Big(2\mu \big|A_1\big|^2 +\lambda \tr\big(A_1\big)^2 \Big) d\mathbf{x}' \end{equation} and the {\it bending energy} \begin{equation}\label{E:bending} E_b[\mathbf{y}] = \frac{1}{24} \int_{\Omega}\Big(2\mu \big|A_2\big|^2 +\lambda\tr \big(A_2\big)^2\Big) d\mathbf{x}' \end{equation} with $A_1$ and $A_2$ depending on $\I[\mathbf{y}]$ and $\II[\mathbf{y}]$, respectively. \medskip\noindent {\bf Asymptotics.} We now let the thickness $s\to0$ and observe that for the scaled energy to remain uniformly bounded, the integrand of the stretching energy must vanish with a rate at least $s^2$. By definition of $A_1$, this implies that the parametrization $\mathbf{y}$ must satisfy the metric constraint $g^{-\frac12} \, \I[\mathbf{y}] \, g^{-\frac12} = I_2$, or equivalently $\mathbf{y}$ is an {\it isometric immersion} of $g$ \begin{equation}\label{eqn:2Dprestrain} \nabla'\mathbf{y}^T\nabla'\mathbf{y} = g \quad \mbox{a.e. in } \Omega, \end{equation} and $\alpha^2\equiv 1$.
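The thickness moments used in the pre-asymptotic computation above are elementary and can be confirmed directly (a trivial symbolic check of the $1/12$ factors and of the vanishing odd moments):

```python
# Moments of x3 over (-s/2, s/2), rescaled by s^{-3}: odd moments vanish,
# and the second moment produces the 1/12 factor in the bending energy.
import sympy as sp

s = sp.Symbol('s', positive=True)
x3 = sp.Symbol('x3', real=True)

m1 = sp.integrate(x3, (x3, -s/2, s/2)) / s**3     # odd moment -> 0
m2 = sp.integrate(x3**2, (x3, -s/2, s/2)) / s**3  # -> 1/12
m3 = sp.integrate(x3**3, (x3, -s/2, s/2)) / s**3  # odd moment -> 0
```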
Since $E_s[\mathbf{y}]=0$, we can take the limit for $s\to0$ and neglect the higher order terms to obtain the following expression for the reduced elastic energy \begin{equation}\label{E:reduced_w} \lim_{s\rightarrow 0}\frac{1}{s^3}\int_{\Omega_s}W(\nabla\mathbf{u} G^{-\frac{1}{2}})d\mathbf{x} = \frac{1}{24}\int_{\Omega}\Big(\underbrace{2\mu|A_2|^2+\lambda\tr(A_2)^2}_{=:w(\beta)}\Big)d\mathbf{x}', \end{equation} where, using the definition of $A_2$, $w(\beta)$ is given by \begin{equation*} w(\beta)=2\mu|g^{-\frac{1}{2}} \, \II[\mathbf{y}] \, g^{-\frac{1}{2}}|^2+2\mu \beta^2+\lambda(-\tr(g^{-\frac{1}{2}} \, \II[\mathbf{y}] \, g^{-\frac{1}{2}})+\beta)^2 \end{equation*} because $\alpha^2\equiv 1$. In order to obtain deformations with minimal energies, we now choose $\beta=\beta(\mathbf{x}')$ such that $w(\beta)$ is minimized. Since \begin{equation*} \frac{dw}{d\beta}=4\mu\beta+2\lambda\Big(-\tr\big(g^{-\frac{1}{2}} \, \II[\mathbf{y}] \, g^{-\frac{1}{2}}\big)+\beta \Big) = 0 \quad \mbox{and} \quad \frac{d^2w}{d\beta^2}=4\mu+2\lambda>0, \end{equation*} we get \begin{equation*} \beta=\frac{\lambda}{2\mu+\lambda}\tr\big(g^{-\frac{1}{2}} \, \II[\mathbf{y}] \, g^{-\frac{1}{2}}\big), \end{equation*} which gives \begin{equation*} w(\beta)=2\mu \big|g^{-\frac{1}{2}} \, \II[\mathbf{y}] \, g^{-\frac{1}{2}}\big|^2+\frac{2\mu\lambda}{\lambda+2\mu}\tr\big(g^{-\frac12}\II[\mathbf{y}]g^{-\frac12}\big)^2. \end{equation*} Finally, the right-hand side of \eqref{E:reduced_w} has to be supplemented with the forcing term that we have ignored in this derivation but scales correctly owing to definition \eqref{E:forcing}. 
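The pointwise minimization of $w(\beta)$ can also be carried out symbolically (our own check; we abbreviate $m=|g^{-\frac12}\II[\mathbf{y}]g^{-\frac12}|^2$ and $t=\tr(g^{-\frac12}\II[\mathbf{y}]g^{-\frac12})$):

```python
# Minimize w(beta) = 2 mu m + 2 mu beta^2 + lam (beta - t)^2 over beta and
# recover beta = lam t/(2 mu + lam) and w = 2 mu m + 2 mu lam t^2/(lam + 2 mu).
import sympy as sp

mu, lam, m, t, beta = sp.symbols('mu lambda m t beta', positive=True)
w = 2*mu*m + 2*mu*beta**2 + lam*(-t + beta)**2

beta_star = sp.solve(sp.diff(w, beta), beta)[0]   # unique critical point
w_min = sp.simplify(w.subs(beta, beta_star))      # minimum (w'' = 4mu+2lam > 0)
```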
In the sequel, we relabel the bending energy $E_b[\mathbf{y}]$ as $E[\mathbf{y}]$, add the forcing and replace $\mathbf{x}'$ by $\mathbf{x}$ (and drop the notation $'$ on differential operators) \begin{equation}\label{E:final-bending} E[\mathbf{y}] = \frac{\mu}{12}\int_{\Omega}\Big(\big|g^{-\frac{1}{2}}\II[\mathbf{y}] g^{-\frac{1}{2}}\big|^2+\frac{\lambda}{2\mu+\lambda}\tr\big(g^{-\frac{1}{2}}\II[\mathbf{y}] g^{-\frac{1}{2}}\big)^2\Big)d\mathbf{x} - \int_\Omega \mathbf{f}\cdot\mathbf{y} d\mathbf{x}. \end{equation} This formal procedure has been justified via $\Gamma$-convergence in \cite{frie2002a,frie2002b} for isometries $\I[\mathbf{y}]=I_2$ and in \cite[Corollary 2.7]{lewicka2011}, \cite[Theorem 2.1]{lewicka2016} for isometric immersions $\I[\mathbf{y}]=g$. Moreover, as already observed in \cite{frie2002b}, we mention that using the Kirchhoff-Love assumption \eqref{E:kirchhoff} instead of \eqref{E:kirchhoff_quad} yields a similar bending energy, namely, we obtain \eqref{E:final-bending} but with $\lambda$ instead of $\frac{\mu\lambda}{2\mu+\lambda}$. \subsection{Admissibility}\label{S:admissibility} We need to supplement \eqref{E:final-bending} with suitable boundary conditions for $\mathbf{y}$ for the minimization problem to be well-posed. For simplicity, we consider Dirichlet and free boundary conditions in this paper, but other types of boundary conditions are possible. Let $\Gamma_D \subset \partial \Omega$ be a (possibly empty) open set on which the following Dirichlet boundary conditions are imposed: \begin{equation}\label{eq:dirichlet} \mathbf{y}=\boldsymbol{\varphi} \quad \mbox{and} \quad \nabla\mathbf{y}=\Phi \quad \mbox{on } \Gamma_D, \end{equation} where $\boldsymbol{\varphi}:\Omega\rightarrow\mathbb{R}^3$ and $\Phi:\Omega\rightarrow\mathbb{R}^{3\times 2}$ are sufficiently smooth and $\Phi$ satisfies the compatibility condition $\Phi^T\Phi=g$ a.e. in $\Omega$.
The set of {\it admissible} functions is \begin{equation} \label{def:admiss} \A(\boldsymbol{\varphi},\Phi):=\left\{\mathbf{y}\in \V(\boldsymbol{\varphi},\Phi): \, \nabla\mathbf{y}^T\nabla\mathbf{y}=\g \,\, \mbox{a.e. in } \Omega\right\}, \end{equation} where the affine manifold $\V(\boldsymbol{\varphi},\Phi)$ of $H^2(\Omega)$ is defined by \begin{equation} \label{def:space_BC} \V(\boldsymbol{\varphi},\Phi):=\left\{\mathbf{y}\in [H^2(\Omega)]^3: \, \restriction{\mathbf{y}}{\Gamma_D}=\boldsymbol{\varphi}, \, \restriction{\nabla\mathbf{y}}{\Gamma_D}=\Phi\right\}. \end{equation} Our goal is to obtain \begin{equation} \label{prob:min_Eg} \mathbf{y}^*:=\textrm{argmin}_{\mathbf{y}\in\A(\boldsymbol{\varphi},\Phi)} E(\mathbf{y}), \end{equation} but this minimization problem is highly nonlinear and seems to be out of reach both analytically and geometrically. In fact, whether or not there exists a smooth {\it global} deformation $\mathbf{y}$ from $\Omega\subset\mathbb{R}^n$ into $\mathbb{R}^N$ satisfying the metric constraint \eqref{eqn:2Dprestrain}, a so-called {\it isometric immersion}, is a long-standing problem in differential geometry \cite{han2006isometric}. Note that $\nabla\mathbf{y}$ has full rank if $\mathbf{y}$ is an isometric immersion; if in addition $\mathbf{y}$ is injective, then we say that $\mathbf{y}$ is an {\it isometric embedding.} For $n=2$, an isometric embedding is guaranteed to exist for $N=10$: Nash proved this for $N=17$, and Gromov later improved it to $N=10$ \cite{gromov1986}. When $N=3$, as in our context, a given metric $g$ may or may not admit an isometric immersion. Some elliptic and hyperbolic metrics with special assumptions have isometric immersions in $\mathbb{R}^3$ \cite{han2006isometric}.
We implicitly assume below that $\A(\boldsymbol{\varphi},\Phi)$ is non-empty, i.e., that there exists an isometric immersion satisfying the boundary conditions; we now discuss an illuminating example in polar coordinates \cite{efrati2011hyperbolic, poznyak1995small}. \medskip\noindent {\bf Change of variables and polar coordinates.} If $\boldsymbol{\zeta}=(\zeta_1,\zeta_2):\widetilde{\Omega}\to\Omega$ is a change of variables $\boldsymbol{\xi}\mapsto\mathbf{x}$ into Cartesian coordinates $\mathbf{x}=(x_1,x_2)\in\Omega$ and $\mathbf{J}(\boldsymbol{\xi})$ is the Jacobian matrix, then the target metrics $\widetilde{g}(\boldsymbol{\xi})$ and $g(\mathbf{x})=g(\boldsymbol{\zeta}(\boldsymbol{\xi}))$ satisfy \begin{equation}\label{E:Jacobian} \widetilde{g}(\boldsymbol{\xi}) = \mathbf{J}(\boldsymbol{\xi})^T g(\boldsymbol{\zeta}(\boldsymbol{\xi})) \mathbf{J}(\boldsymbol{\xi}), \quad \mathbf{J}(\boldsymbol{\xi}) = \begin{bmatrix} \partial_{\xi_1} \zeta_1(\boldsymbol{\xi}) & \partial_{\xi_2} \zeta_1(\boldsymbol{\xi}) \\ \partial_{\xi_1} \zeta_2(\boldsymbol{\xi}) & \partial_{\xi_2} \zeta_2(\boldsymbol{\xi}) \end{bmatrix}. \end{equation} Let $\boldsymbol{\xi}=(r,\theta)$ indicate polar coordinates with $r\in I=[0,R]$ and $\theta\in[0,2\pi)$. If $g=I_2$ is the identity matrix (i.e., $\I[\mathbf{y}]=I_2$) and $\eta(r)=r$, then $\widetilde{g}(\boldsymbol{\xi})$ reads \begin{equation}\label{E:metric-eta} \widetilde{g}(r,\theta) = \begin{bmatrix} 1 & 0 \\ 0 & \eta(r)^2 \end{bmatrix}. \end{equation} We now show that some metrics of the form \eqref{E:metric-eta} with $\eta(r)\ne r$ still admit an isometric immersion provided $\eta$ is sufficiently smooth. Consider the case $|\eta'(r)|\le1$ along with the parametrization \begin{equation}\label{E:embedding} \widetilde{\mathbf{y}}(r,\theta) = (\eta(r) \cos\theta, \eta(r) \sin\theta, \psi(r) )^T.
\end{equation} Since $\partial_r\widetilde{\mathbf{y}}\cdot\partial_\theta\widetilde{\mathbf{y}}=0$ and $|\partial_\theta\widetilde{\mathbf{y}}|^2 = \eta(r)^2$, if $\psi$ satisfies $|\partial_r \widetilde{\mathbf{y}}|^2 = \eta'(r)^2 + \psi'(r)^2 = 1$, we realize that $\widetilde{\mathbf{y}}$ is an isometric embedding compatible with \eqref{E:metric-eta}. On the other hand, if $|\eta'(r)|\ge1$ and $a \ge \max_{r\in I} |\eta'(r)|$ is an integer, then the parametrization \begin{equation}\label{E:immersion} \widetilde{\mathbf{y}}(r,\theta) = \Big( \frac{\eta(r)}{a} \cos (a\theta), \frac{\eta(r)}{a} \sin (a\theta), \int_0^r \sqrt{1-\frac{\eta'(t)^2}{a^2}} dt \Big)^T \end{equation} is an isometric immersion compatible with \eqref{E:metric-eta} but not an isometric embedding. We will construct in Section \ref{S:gel-discs} a couple of isometric embeddings computationally. We also point out that \eqref{E:metric-eta} accounts for {\it shrinking} if $0\le\eta(r)<r$ and {\it stretching} if $\eta(r)>r$. To see this, let $\gamma_r(\theta) = (r,\theta)^T$, $\theta \in [0,2\pi)$, be the parametrization of a circle in $\Omega$ centered at the origin and of radius $r$, and let $\Gamma_r(\theta) = \widetilde{\mathbf{y}}(\gamma_r(\theta))$ be its image on $\widetilde{\mathbf{y}}(\widetilde{\Omega})=\mathbf{y}(\Omega)$. The length $\ell(\Gamma_r)$ satisfies \[ \ell(\Gamma_r) \!=\! \int_0^{2\pi} \Big| \frac{d}{d\theta} \Gamma_r(\theta) \Big| d\theta =\! \int_0^{2\pi} \sqrt{\gamma_r'(\theta)^T \widetilde{g}(r,\theta) \gamma_r'(\theta)} d\theta =\! \int_0^{2\pi} \eta(r) d\theta = \ell(\gamma_r) \frac{\eta(r)}{r}, \] and the ratio $\eta(r)/r$ acts as a shrinking/stretching parameter. 
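Both parametrizations can be verified symbolically (our own check; the specific choices $\eta(r)=cr$ and $\eta(r)=\sin r$ below are illustrative examples, not from the text):

```python
# Verify that \eqref{E:embedding} and \eqref{E:immersion} realize the metric
# \eqref{E:metric-eta}, and that for eta(r) = sin(r) the surface is a unit
# sphere with Gaussian curvature 1 = -eta''/eta.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

def first_form(y):
    # first fundamental form I[y] = J^T J in the (r, theta) variables
    J = sp.Matrix.hstack(y.diff(r), y.diff(th))
    return sp.simplify(J.T * J)

# |eta'| <= 1: embedding with psi'(r)^2 = 1 - eta'(r)^2 (here a cone)
c = sp.Rational(1, 2)
eta = c * r
y_emb = sp.Matrix([eta*sp.cos(th), eta*sp.sin(th), sp.sqrt(1 - c**2)*r])
I_emb = first_form(y_emb)

# |eta'| >= 1: immersion \eqref{E:immersion} with an integer a >= max |eta'|
c2, a = sp.Rational(3, 2), 2
eta2 = c2 * r
y_imm = sp.Matrix([eta2/a*sp.cos(a*th), eta2/a*sp.sin(a*th),
                   sp.sqrt(1 - c2**2/a**2)*r])
I_imm = first_form(y_imm)

# Gaussian curvature det(II)/det(I) for eta(r) = sin(r), psi(r) = 1 - cos(r)
eta3 = sp.sin(r)
y_sph = sp.Matrix([eta3*sp.cos(th), eta3*sp.sin(th), 1 - sp.cos(r)])
nn = y_sph.diff(r).cross(y_sph.diff(th))          # unnormalized normal
IIn = sp.Matrix([[y_sph.diff(r).diff(r).dot(nn), y_sph.diff(r).diff(th).dot(nn)],
                 [y_sph.diff(th).diff(r).dot(nn), y_sph.diff(th).diff(th).dot(nn)]])
kappa = sp.simplify(IIn.det() / (nn.dot(nn) * first_form(y_sph).det()))
```

Here `IIn` is $|\widetilde{\boldsymbol{\nu}}|\,\II[\widetilde{\mathbf{y}}]$ computed with the unnormalized normal, so dividing its determinant by $|\widetilde{\boldsymbol{\nu}}|^2\det\I[\widetilde{\mathbf{y}}]$ yields $\kappa$ without square roots.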
\medskip\noindent {\bf Gaussian curvature.} Since $E[\mathbf{y}]>0$ provided that the Gaussian curvature $\kappa = \det (\II[\mathbf{y}])\det (\I[\mathbf{y}])^{-1}$ of the surface $\mathbf{y}(\Omega)$ does not vanish identically \cite{lewicka2011,lewicka2016}, it is instructive to find $\kappa$ for a deformation $\widetilde{\mathbf{y}}$ such that $\I[\widetilde{\mathbf{y}}]=\widetilde{g}$ is given by \eqref{E:metric-eta}. Since the formula for change of variables for $\II[\widetilde{\mathbf{y}}]$ is the same as that in \eqref{E:Jacobian} for $\widetilde{g}=\I[\widetilde{\mathbf{y}}]$, we realize that $\kappa$ is independent of the parametrization of the surface. According to Gauss's Theorema Egregium, $\kappa = \det (\II[\widetilde{\mathbf{y}}])\det (\I[\widetilde{\mathbf{y}}])^{-1}$ can be rewritten as an expression solely depending on $\I[\widetilde{\mathbf{y}}]$. Do Carmo gives an explicit formula for $\kappa$ in the case that $\widetilde{g}=\I[\widetilde{\mathbf{y}}]$ is diagonal \cite[Exercise 1, p.237]{carmo1976}, which reduces to \begin{equation}\label{E:Gauss-curvature} \kappa = -\frac{\eta''(r)}{\eta(r)} \end{equation} for $\widetilde{g}$ of the form \eqref{E:metric-eta}. Alternatively, we may express $\II[\widetilde{\mathbf{y}}]_{ij} = \partial_{ij} \widetilde{\mathbf{y}}\cdot\widetilde{\boldsymbol{\nu}}$, where $\widetilde{\boldsymbol{\nu}}(r,\theta)$ is the unit normal vector to the surface $\widetilde{\mathbf{y}}(\widetilde{\Omega})$ at the point $\widetilde{\mathbf{y}}(r,\theta)$, in terms of the orthonormal basis $\{\widetilde{\boldsymbol{\nu}},\partial_r\widetilde{\mathbf{y}},\eta(r)^{-1}\partial_\theta\widetilde{\mathbf{y}}\}$ as follows.
First observe that \begin{align*} |\partial_r\widetilde{\mathbf{y}}|^2=1 \quad &\Rightarrow\quad \partial_{rr}\widetilde{\mathbf{y}}\cdot\partial_r\widetilde{\mathbf{y}}=0, \quad \partial_{\theta r}\widetilde{\mathbf{y}} \cdot \partial_r\widetilde{\mathbf{y}} = 0, \\ |\partial_\theta \widetilde{\mathbf{y}}|^2 = \eta^2(r) \quad &\Rightarrow\quad \partial_{r\theta}\widetilde{\mathbf{y}}\cdot\partial_\theta \widetilde{\mathbf{y}}=\eta(r)\eta'(r), \quad \partial_{\theta\theta}\widetilde{\mathbf{y}} \cdot\partial_\theta\widetilde{\mathbf{y}}=0, \\ \partial_r\widetilde{\mathbf{y}}\cdot\partial_\theta\widetilde{\mathbf{y}} = 0 \quad &\Rightarrow\quad \partial_{rr}\widetilde{\mathbf{y}} \cdot \partial_\theta \widetilde{\mathbf{y}} = 0. \end{align*} This yields \[ \partial_{rr}\widetilde{\mathbf{y}} = (\partial_{rr}\widetilde{\mathbf{y}}\cdot\widetilde{\boldsymbol{\nu}}) \widetilde{\boldsymbol{\nu}}, \quad \partial_{\theta\theta} \widetilde{\mathbf{y}} = (\partial_{\theta\theta}\widetilde{\mathbf{y}}\cdot\widetilde{\boldsymbol{\nu}}) \widetilde{\boldsymbol{\nu}} + (\partial_{\theta\theta}\widetilde{\mathbf{y}}\cdot\partial_r\widetilde{\mathbf{y}})\partial_r\widetilde{\mathbf{y}}, \] whence \[ \II[\widetilde{\mathbf{y}}]_{rr} \II[\widetilde{\mathbf{y}}]_{\theta\theta} = (\partial_{rr}\widetilde{\mathbf{y}}\cdot\widetilde{\boldsymbol{\nu}}) (\partial_{\theta\theta}\widetilde{\mathbf{y}}\cdot\widetilde{\boldsymbol{\nu}}) = \partial_{rr}\widetilde{\mathbf{y}} \cdot \partial_{\theta\theta} \widetilde{\mathbf{y}}. 
\] We next differentiate $\partial_{rr}\widetilde{\mathbf{y}} \cdot \partial_\theta \widetilde{\mathbf{y}} = 0$ and $\partial_{r\theta}\widetilde{\mathbf{y}}\cdot\partial_\theta \widetilde{\mathbf{y}}=\eta(r)\eta'(r)$ with respect to $\theta$ and $r$, respectively, to obtain \[ \partial_{rr}\widetilde{\mathbf{y}}\cdot\partial_{\theta\theta}\widetilde{\mathbf{y}} = \partial_{r\theta}\widetilde{\mathbf{y}} \cdot \partial_{r\theta}\widetilde{\mathbf{y}} - \eta'(r)^2 - \eta(r)\eta''(r). \] We finally notice that $\partial_{r\theta}\widetilde{\mathbf{y}} = (\partial_{r\theta}\widetilde{\mathbf{y}}\cdot\widetilde{\boldsymbol{\nu}})\widetilde{\boldsymbol{\nu}} + \frac{\eta'(r)}{\eta(r)} \partial_\theta \widetilde{\mathbf{y}}$, whence \[ \big(\II[\widetilde{\mathbf{y}}]_{r\theta}\big)^2 = (\partial_{r\theta}\widetilde{\mathbf{y}}\cdot\widetilde{\boldsymbol{\nu}})^2 = \partial_{r\theta}\widetilde{\mathbf{y}} \cdot \partial_{r\theta}\widetilde{\mathbf{y}} - \eta'(r)^2. \] Therefore, we have derived $\det\II[\widetilde{\mathbf{y}}]= \II[\widetilde{\mathbf{y}}]_{rr} \II[\widetilde{\mathbf{y}}]_{\theta\theta} - \big(\II[\widetilde{\mathbf{y}}]_{r\theta}\big)^2 = -\eta(r)\eta''(r)$ and, as $\det\I[\widetilde{\mathbf{y}}]=\eta(r)^2$, we obtain \eqref{E:Gauss-curvature}. This expression will be essential in Section \ref{S:gel-discs}. \subsection{Alternative energy}\label{S:simplified-energy} The expression \eqref{E:final-bending} involves the second fundamental form $\II[\mathbf{y}] = -\nabla \boldsymbol{\nu}^T \nabla \mathbf{y} = (\partial_{ij}\mathbf{y}\cdot\boldsymbol{\nu})_{i,j=1}^2$ and is too nonlinear to be practically useful. To render \eqref{prob:min_Eg} amenable to computation, we now show that $\II[\mathbf{y}]$ can be replaced by the Hessian $D^2\mathbf{y}$ without affecting the minimizers. This is the subject of the next proposition, which uses the notation \eqref{e:notation_terms} for $g^{-1/2} D^2\mathbf{y} g^{-1/2}$.
\begin{prop}[alternative energy] \label{prop:link_II_D2y} Let $\mathbf{y}=(y_k)_{k=1}^3:\Omega\rightarrow\mathbb{R}^3$ be a sufficiently smooth orientable deformation and let $g=\I[\mathbf{y}]$ and $\II[\mathbf{y}]$ be the first and second fundamental forms of $\mathbf{y}(\Omega)$. Then, there exist functions $f_1,f_2:\Omega\rightarrow\mathbb{R}_{\geq 0}$ depending only on $g$ and its derivatives, with precise definitions given in the proof, such that \begin{equation} \label{eqn:link_part1} \big|\g^{-\frac{1}{2}} \, D^2 \mathbf{y} \, \g^{-\frac{1}{2}} \big|^2 = \big|\g^{-\frac{1}{2}} \, \II[\mathbf{y}] \, \g^{-\frac{1}{2}} \big|^2 + f_1, \end{equation} and \begin{equation} \label{eqn:link_part2} \big|\tr \big(\g^{-\frac{1}{2}} \, D^2\mathbf{y} \, \g^{-\frac{1}{2}} \big)\big|^2 = \tr \big(\g^{-\frac{1}{2}} \, \II[\mathbf{y}] \, \g^{-\frac{1}{2}} \big)^2 + f_2. \end{equation} \end{prop} \begin{proof} First of all, because $\mathbf{y}$ is smooth and orientable, the second derivatives $\partial_{ij}\mathbf{y}$ of the deformation $\mathbf{y}$ can be (uniquely) expressed in the basis $\{\partial_1\mathbf{y},\partial_2\mathbf{y},\boldsymbol{\nu}\}$ as \begin{equation} \label{eqn:Christoffel} \partial_{ij}\mathbf{y} = \sum_{l=1}^2\Gamma_{ij}^l \, \partial_l\mathbf{y}+\II_{ij}[\mathbf{y}] \, \boldsymbol{\nu}, \end{equation} where $\boldsymbol{\nu}=\frac{\partial_1 \mathbf{y} \times \partial_2 \mathbf{y}}{|\partial_1 \mathbf{y} \times \partial_2 \mathbf{y}|}$ is the unit normal and $\Gamma_{ij}^l$ are the so-called Christoffel symbols of $\mathbf{y}(\Omega)$. Since $\Gamma_{ij}^l$ are intrinsic quantities, they can be computed in terms of the coefficients $g_{ij}$ of $g$ and their derivatives \cite{carmo1976}; they do not depend explicitly on $\mathbf{y}$. We start with the proof of relation (\ref{eqn:link_part1}). To simplify the notation, let us write $a=\g^{-\frac{1}{2}}$.
Using (\ref{eqn:Christoffel}) we get \begin{align*} (a \, \II[\mathbf{y}] \, a)_{ij} \, \boldsymbol{\nu} & =\sum_{m,n=1}^2a_{im} \big(\II_{mn}[\mathbf{y}] \, \boldsymbol{\nu} \big) \, a_{nj} \\ & = \sum_{m,n=1}^2a_{im}(\partial_{mn}\mathbf{y})a_{nj} - \sum_{m,n=1}^2a_{im}\left(\sum_{l=1}^2\Gamma_{mn}^l\partial_l\mathbf{y}\right) a_{nj}, \end{align*} or equivalently, rearranging the above expression, \begin{equation*} (a \,D^2 \mathbf{y} \, a)_{ij}= (a \,\II[\mathbf{y}] \, a)_{ij}\boldsymbol{\nu} + \sum_{m,n=1}^2a_{im}\left(\sum_{l=1}^2\Gamma_{mn}^l\partial_l\mathbf{y}\right) a_{nj}. \end{equation*} Since the unit vector $\boldsymbol{\nu}$ is orthogonal to both $\partial_1\mathbf{y}$ and $\partial_2\mathbf{y}$, the right-hand side is an $l_2$-orthogonal decomposition. Computing the square of the $l_2$-norms yields \begin{equation} \label{eqn:part1_ij} \sum_{k=1}^3(a \, D^2y_k \, a)_{ij}^2 = (a \, \II[\mathbf{y}] \, a)_{ij}^2 + f_{ij} \end{equation} with \begin{equation*} f_{ij} := \sum_{l_1,l_2=1}^2g_{l_1l_2}\sum_{m_1,m_2,n_1,n_2=1}^2a_{im_1}a_{im_2}\Gamma_{m_1n_1}^{l_1}\Gamma_{m_2n_2}^{l_2}a_{n_1j}a_{n_2j}. \end{equation*} The functions $f_{ij}$ do not depend explicitly on $\mathbf{y}$, but only on $g$ and its first derivatives. Therefore, summing \eqref{eqn:part1_ij} over $i,j$ from $1$ to $2$ gives \eqref{eqn:link_part1} with $f_1:=\sum_{i,j=1}^2f_{ij}$. The proof of (\ref{eqn:link_part2}) is similar. Since $\tr(a\,\II[\mathbf{y}] \, a)\,\boldsymbol{\nu} = \sum_{i=1}^2(a \, \II[\mathbf{y}] \, a)_{ii} \,\boldsymbol{\nu}$, it suffices to take $i=j$ and sum over $i$ in the previous derivation to arrive at \eqref{eqn:link_part2} with \begin{equation*} f_2:=\sum_{l_1,l_2=1}^2g_{l_1l_2}\sum_{i_1,i_2,m_1,m_2,n_1,n_2=1}^2a_{i_1m_1}a_{i_2m_2}\Gamma_{m_1n_1}^{l_1}\Gamma_{m_2n_2}^{l_2}a_{n_1i_1}a_{n_2i_2}. \end{equation*} This completes the proof because $f_2$ does not depend explicitly on $\mathbf{y}$.
\end{proof} \begin{remark}[alternative energy] As stated, Proposition~\ref{prop:link_II_D2y} is valid for smooth deformations $\mathbf{y}$ and metric $g$. It turns out that for $\mathbf{y} \in [H^2(\Omega)]^3$ and $g \in [H^1(\Omega) \cap L^\infty(\Omega)]^{2\times 2}$, the key relation \eqref{eqn:Christoffel} holds a.e. in $\Omega$ and so does the conclusion of Proposition~\ref{prop:link_II_D2y}. For the interested reader, we refer to \cite{BGNY2020}. \end{remark} Proposition \ref{prop:link_II_D2y} (alternative energy) shows that the solutions of \eqref{prob:min_Eg} with the energy $E[\mathbf{y}]$ given by \eqref{E:final-bending} are the same as those given by the energy \begin{equation} \label{def:Eg_D2y} E(\mathbf{y}):=\frac{\mu}{12} \int_{\Omega}\left(\Big|\g^{-\frac{1}{2}} \, D^2\mathbf{y} \, \g^{-\frac{1}{2}}\Big|^2+\frac{\lambda}{2\mu+\lambda} \left|\tr\big(\g^{-\frac{1}{2}} \, D^2\mathbf{y} \, \g^{-\frac{1}{2}}\big)\right|^2\right)-\int_{\Omega}\mathbf{f}\cdot\mathbf{y}. \end{equation} The Euler-Lagrange equations characterizing local extrema $\mathbf{y}\in[H^2(\Omega)]^3$ of \eqref{def:Eg_D2y} \begin{equation} \label{equ:Gateaux} \delta E[\mathbf{y};\mathbf{v}]=0 \quad \forall \mathbf{v}\in[H^2(\Omega)]^3, \end{equation} can be written in terms of the first variation of $E[\mathbf{y}]$ in the direction $\mathbf{v}$ given by \begin{equation} \label{def:Diff_E_D2y} \begin{split} \delta E[\mathbf{y};\mathbf{v}] :=& \frac{\mu}{6} \int_{\Omega}\big(\g^{-\frac{1}{2}} \, D^2\mathbf{y}\, \g^{-\frac{1}{2}} \big): \big(\g^{-\frac{1}{2}} \, D^2\mathbf{v} \, \g^{-\frac{1}{2}} \big) \\ & + \frac{\mu\lambda}{6(2\mu+\lambda)} \int_{\Omega}\tr \big(\g^{-\frac{1}{2}} \, D^2\mathbf{y} \, \g^{-\frac{1}{2}} \big) \cdot \tr \big(\g^{-\frac{1}{2}} \, D^2\mathbf{v} \, \g^{-\frac{1}{2}} \big) -\int_{\Omega}\mathbf{f}\cdot\mathbf{v}. 
\end{split} \end{equation} The presence of the trace term in \eqref{def:Diff_E_D2y} makes it problematic to find the governing partial differential equation hidden in \eqref{equ:Gateaux} (strong form). However, when $\lambda=0$, integration by parts shows that $P_k:=\g^{-1} \, D^2y_k \, \g^{-1}\in\mathbb{R}^{2 \times 2}$ for $k=1,2,3$ satisfies \begin{equation*} \delta E[\mathbf{y};\mathbf{v}] = \frac{\mu}{6} \sum_{k=1}^3 \left( \int_{\Omega}\di\di P_k \,v_k \!-\! \int_{\partial \Omega} \di P_k\cdot\mathbf{n} v_k + \int_{\partial \Omega} P_k\mathbf{n} \cdot\nabla v_k\right) \!-\! \int_\Omega \mathbf{f} \cdot \mathbf{v}, \end{equation*} where $\mathbf{n}$ is the outward unit normal vector to $\partial\Omega$. On the other hand, if $g=\Id_2$, in which case $\mathbf{y}$ is an {\it isometry}, then $E[\mathbf{y}]$ in \eqref{E:final-bending} and \eqref{def:Eg_D2y} are equal and reduce to \begin{equation}\label{E:g=1} E[\mathbf{y}]=\frac{\alpha}{2}\int_{\Omega}|D^2\mathbf{y}|^2-\int_{\Omega}\mathbf{f}\cdot\mathbf{y}, \quad \alpha:=\frac{\mu(\mu+\lambda)}{3(2\mu+\lambda)} \end{equation} thanks to the relations for isometries \cite{bartels2013,bonito2015,bonito2018} \begin{equation}\label{E:bilaplacian} |\II[\mathbf{y}]|=|D^2\mathbf{y}|=|\Delta\mathbf{y}|=|\tr(\II[\mathbf{y}])|. \end{equation} The strong form of the Euler-Lagrange equation for a minimizer of \eqref{E:g=1} reads $\alpha \di\di D^2 \mathbf{y} = \alpha \Delta^2 \mathbf{y} = \mathbf{f}$. This problem has been studied numerically in \cite{bartels2013,bonito2018}. \looseness=-1 \section{Numerical scheme} \label{sec:method} We propose here a {\it local discontinuous Galerkin} (LDG) method to approximate the solution of the problem \eqref{prob:min_Eg}. LDG is inspired by, and in fact improves upon, the previous dG methods \cite{bonito2020discontinuous,bonito2018}, but the two approaches are conceptually different.
LDG hinges on the explicit computation of a discrete Hessian $H_h[\mathbf{y}_h]$ for the discontinuous piecewise polynomial approximation $\mathbf{y}_h$ of $\mathbf{y}$, which allows for a direct discretization $E_h[\mathbf{y}_h]$ of the energy \eqref{def:Eg_D2y}, including the trace term. We refer to the companion paper \cite{BGNY2020} for a discussion of convergence of discrete global minimizers of $E_h$ towards those of $E$; a salient feature is that the stability of the LDG method is retained even when the penalty parameters are arbitrarily small. We organize this section as follows. In Section \ref{subsec:LDG} we introduce the finite dimensional space $\V_h^k$ of discontinuous piecewise polynomials of degree $k\ge2$, along with the discrete Hessian $H_h[\mathbf{y}_h]$. We also discuss the discrete counterparts $E_h$ and $\A_{h,\varepsilon}^k(\boldsymbol{\varphi},\Phi)$ of the energy $E$ and the admissible set $\A(\boldsymbol{\varphi},\Phi)$, respectively. In Section \ref{subsec:GF}, we present a discrete gradient flow to minimize the energy $E_h$. Finally, in Section \ref{subsec:preprocess}, we show how to prepare suitable initial conditions for the gradient flow (preprocessing). \subsection{LDG-type discretization} \label{subsec:LDG} From now on, we assume that $\Omega \subset \mathbb R^2$ is a polygonal domain. Let $\{\Th\}_{h>0}$ be a sequence of shape-regular, but possibly graded, partitions of $\Omega$ into elements $\K$ (either triangles or quadrilaterals) of diameter $h_{\K} := \textrm{diam}(\K)\leq h$. In order to handle hanging nodes (necessary for graded meshes based on quadrilaterals), we assume that all the elements within each domain of influence have comparable diameters. We refer to Sections 2.2.4 and 6 of Bonito-Nochetto \cite{BoNo} for precise definitions and properties. At this stage, we only point out that sequences of subdivisions made of quadrilaterals with at most one hanging node per side satisfy this assumption.
Let $\Eh=\Eh^0\cup\Eh^b$ denote the set of edges, where $\Eh^0$ stands for the set of interior edges and $\Eh^{b}$ for the set of boundary edges. We assume a compatible representation of the Dirichlet boundary $\Gamma_D$, i.e., if $\Gamma_D \not = \emptyset$ then $\Gamma_D$ is the union of (some) edges in $\Eh^{b}$ for every $h>0$, which we indicate with $\Eh^{D}$; note that $\Gamma_D$ and $\Eh^{D}$ are empty sets when dealing with a problem with free boundary conditions. Let $\Eh^a:=\Eh^0\cup\Eh^{D}$ be the set of {\it active edges} on which jumps and averages will be computed. The union of these edges gives rise to the corresponding skeletons of $\Th$ \begin{equation}\label{E:skeleton} \Gamma_h^0 := \cup \big\{e: e\in\Eh^0 \big\}, \quad \Gamma_h^D := \cup \big\{e: e\in\Eh^D \big\}, \quad \Gamma_h^a := \Gamma_h^0 \cup \Gamma_h^D. \end{equation} If $h_e$ is the diameter of $e\in\Eh$, then $\h$ is the piecewise constant mesh function \begin{equation} \label{eqn:mesh_function} \h:\Eh \rightarrow \mathbb R_+, \qquad \restriction{\h}{e}:= h_e \quad \forall e\in\Eh. \end{equation} From now on, we use the notation $(\cdot,\cdot)_{L^2(\Omega)}$ and $(\cdot,\cdot)_{L^2(\Gamma_h^a)}$ to denote the $L^2$ inner products over $\Omega$ and $\Gamma_h^a$, and a similar notation for subsets of $\Omega$ and $\Gamma_h^a$. \medskip\noindent {\bf Broken spaces.} For an integer $k\geq 0$, we let $\mathbb{P}_k$ (resp. $\mathbb{Q}_k$) be the space of polynomials of total degree at most $k$ (resp. of degree at most $k$ in each variable). The reference unit triangle (resp. square) is denoted by $\widehat\K$ and for $\K\in \mathcal T_h$, we let $F_\K:\widehat\K \rightarrow \K$ be the generic map from the reference element to the physical element. When $\mathcal T_h$ is made of triangles the map is affine, i.e., $F_\K \in [\mathbb P_1]^2$, while $F_\K \in [\mathbb Q_1]^2$ when quadrilaterals are used.
If $k \ge 2$, the (\textit{broken}) finite element space $\V_h^k$ to approximate each component of the deformation $\mathbf{y}$ (modulo boundary conditions) reads \begin{equation} \label{def:Vhk_tri} \V_h^k:=\left\{v_h\in L^2(\Omega): \,\, \restriction{v_h}{\K}\circ F_{\K}\in\mathbb{P}_k \quad(\textrm{resp. } \mathbb{Q}_k) \quad \forall \K \in\Th \right\} \end{equation} if $\Th$ is made of triangles (resp. quadrilaterals). We define the broken gradient $\nabla_h v_h$ of $v_h\in\V_h^k$ to be the gradient computed elementwise, and use similar notation for other piecewise differential operators such as the broken Hessian $D_h^2 v_h=\nabla_h\nabla_h v_h$. We now introduce the jump and average operators. To this end, let $\mathbf{n}_e$ be a unit normal to $e\in\Eh^0$ (the orientation is chosen arbitrarily but is fixed once and for all), while for a boundary edge $e\in\Eh^b$, $\mathbf{n}_e$ is the outward unit normal vector to $\partial\Omega$. For $v_h \in \V_h^k$ and $e \in \Eh^0$, we set \begin{equation} \label{def:jump} \restriction{\jump{v_h}}{e} := v_h^{-}-v_h^+, \quad \restriction{\jump{\nabla_h v_h}}{e} := \nabla_h v_h^{-} - \nabla_h v_h^+, \end{equation} where $v_h^{\pm}(\mathbf{x}):=\lim_{s\rightarrow 0^+}v_h(\mathbf{x}\pm s\mathbf{n}_e)$ for $\mathbf{x} \in e$. We compute the jumps componentwise when the function $v_h$ is vector- or matrix-valued. In what follows, the subindex $e$ is omitted when it is clear from the context. In order to deal with Dirichlet boundary data $(\boldsymbol{\varphi},\Phi)$ we resort to a Nitsche approach; hence we do not impose essential restrictions on the discrete space $[\V_h^k]^3$. However, to simplify the notation later, it turns out to be convenient to introduce the discrete sets $\V_h^k(\boldsymbol{\varphi},\Phi)$ and $\V_h^k(\boldsymbol{0},\boldsymbol{0})$ which mimic the continuous counterparts $\V(\boldsymbol{\varphi},\Phi)$ and $\V(\boldsymbol{0},\boldsymbol{0})$ but coincide with $[\V_h^k]^3$.
In fact, we say that $\mathbf{v}_h\in[\V_h^k]^3$ belongs to $\V_h^k(\boldsymbol{\varphi},\Phi)$ provided the boundary jumps of $\mathbf{v}_h$ are defined to be \begin{equation}\label{E:bd-jumps} [\mathbf{v}_h]_e := \mathbf{v}_h - \boldsymbol{\varphi}, \quad [\nabla_h \mathbf{v}_h]_e := \nabla_h \mathbf{v}_h - \Phi, \quad \forall \, e\in\Eh^D. \end{equation} We stress that $\|[\mathbf{v}_h]\|_{L^2(\Gamma_h^D)} \to 0$ and $\|[\nabla_h \mathbf{v}_h]\|_{L^2(\Gamma_h^D)} \to 0$ imply $\mathbf{v}_h\to \boldsymbol{\varphi}$ and $\nabla_h \mathbf{v}_h \to\Phi$ in $L^2(\Gamma_D)$ as $h\to0$; hence the connection between $\V_h^k(\boldsymbol{\varphi},\Phi)$ and $\V(\boldsymbol{\varphi},\Phi)$. Therefore, we emphasize again that the sets $[\V_h^k]^3$ and $\V_h^k(\boldsymbol{\varphi},\Phi)$ coincide but the latter carries the notion of boundary jump, namely \begin{equation}\label{discrete-set} \V_h^k (\boldsymbol{\varphi},\Phi) := \Big\{ \mathbf{v}_h\in [\V_h^k]^3: \ [\mathbf{v}_h]_e, \, [\nabla_h \mathbf{v}_h]_e \text{ given by \eqref{E:bd-jumps} for all } e\in\Eh^D \Big\}. \end{equation} When free boundary conditions are imposed, i.e., $\Gamma_D = \emptyset$, then we do not need to distinguish between $\V^k_h(\boldsymbol{\varphi},\Phi)$ and $[\V_h^k]^3$. However, we keep the notation $\V^k_h(\boldsymbol{\varphi},\Phi)$ in all cases thereby allowing for a uniform presentation. We define the {\it average} of $v_h \in \V_h^k$ across an edge $e\in \Eh$ to be \begin{equation} \label{def:avrg} \avrg{v_h}_{e} := \left\{\begin{array}{ll} \frac{1}{2}(v_h^{+}+v_h^{-}) & e\in\Eh^0 \\ v_h^{-} & e\in\Eh^b, \end{array}\right. \end{equation} and apply this definition componentwise to vector and matrix-valued functions. As for the jump notation, the subindex $e$ is dropped when it is clear from the context. \medskip\noindent {\bf Discrete Hessian.} To approximate the elastic energy \eqref{def:Eg_D2y}, we propose an LDG approach.
Inspired by \cite{bonito2018,pryer}, the idea is to replace the Hessian $D^2\mathbf{y}$ by a discrete Hessian $H_h[\mathbf{y}_h]\in\left[L^2(\Omega)\right]^{3\times 2\times 2}$ to be defined now. To this end, let $l_1,l_2$ be non-negative integers (to be specified later) and consider two {\it local lifting operators} $r_e:[L^2(e)]^2\rightarrow[\V_h^{l_1}]^{2\times 2}$ and $b_e:L^2(e)\rightarrow[\V_h^{l_2}]^{2\times 2}$ defined for $e\in\Eh^a$ by \begin{gather} r_e(\boldsymbol{\phi}) \in [\V_h^{l_1}]^{2\times 2}: \, \int_{\omega_e}r_e(\boldsymbol{\phi}):\tau_h = \int_e\avrg{\tau_h}\mathbf{n}_e\cdot\boldsymbol{\phi} \quad \forall \tau_h\in [\V_h^{l_1}]^{2\times 2}\label{def:lift_re}, \\ b_e(\phi) \in [\V_h^{l_2}]^{2\times 2}: \, \int_{\omega_e} b_e(\phi):\tau_h = \int_e\avrg{\di \tau_h}\cdot\mathbf{n}_e\phi \quad \forall \tau_h\in [\V_h^{l_2}]^{2\times 2} \label{def:lift_be}. \end{gather} It is clear that $\supp(r_e(\boldsymbol{\phi}))=\supp(b_e(\phi))=\omega_e$, where $\omega_e$ is the patch associated with $e$ (i.e., the union of two elements sharing $e$ for interior edges $e\in\Eh^0$ or just one single element for boundary edges $e\in\Eh^b$). We extend $r_e$ and $b_e$ to $[L^2(e)]^{3\times 2}$ and $[L^2(e)]^3$, respectively, by component-wise applications. The corresponding {\it global lifting operators} are then given by \begin{equation}\label{E:global-lifting} R_h := \sum_{e\in\Eh^a} r_e : [L^2(\Gamma_h^a)]^2 \rightarrow [\V_h^{l_1}]^{2\times 2}, \quad B_h := \sum_{e\in\Eh^a} b_e : L^2(\Gamma_h^a) \rightarrow [\V_h^{l_2}]^{2\times 2}. \end{equation} This construction is simpler than that in \cite{bonito2018} for quadrilaterals. We now define the {\it discrete Hessian operator} $H_h:\V_h^k(\boldsymbol{\varphi},\Phi)\rightarrow\left[L^2(\Omega)\right]^{3\times 2\times 2}$ to be \begin{equation} \label{def:discrHess} H_h[\mathbf{v}_h] := D_h^2 \mathbf{v}_h -R_h(\jump{\nabla_h\mathbf{v}_h})+B_h(\jump{\mathbf{v}_h}). 
\end{equation} For a given polynomial degree $k\ge2$, a natural choice for the degree of the liftings is $l_1=l_2=k-2$ for triangular elements and $l_1=l_2=k$ for quadrilateral elements. However, any nonnegative values for $l_1$ and $l_2$ are suitable. We anticipate that in the numerical experiments presented in Section~\ref{sec:num_res}, we use $l_1=l_2=k$ with $k=2$. We refer to \cite{BGNY2020} for properties of $H_h[\mathbf{y}_h]$ but we point out one now to justify its use. Let $\Gamma_D \not = \emptyset$ and data $(\boldsymbol{\varphi},\Phi)$ be sufficiently smooth, and let $\{ \mathbf{y}_h \}_{h>0} \subset \V_h^k(\boldsymbol{\varphi},\Phi)$ satisfy \begin{equation}\label{e:def_triple} \|\mathbf{y}_h\|_{H_h^2(\Omega)}^2:=\| D^2_h \mathbf{y}_h \|_{L^2(\Omega)}^2 + \| \h^{-\frac{1}{2}} \jump{\nabla_h \mathbf{y}_h }\|_{L^2(\Gamma_h^a)}^2 + \| \h^{-\frac{3}{2}} \jump{\mathbf{y}_h }\|_{L^2(\Gamma_h^a)}^2 \leq \Lambda \end{equation} for a constant $\Lambda$ independent of $h$. If $\mathbf{y}_h$ converges in $[L^2(\Omega)]^3$ to a function $\mathbf{y} \in [H^2(\Omega)]^3$, then $H_h[\mathbf{y}_h]$ converges weakly to $D^2 \mathbf{y}$ in $[L^2(\Omega)]^{3\times 2 \times 2}$. We also refer to \cite{pryer,BGNY2020} for similar results for the Hessian and to \cite{ern2011} for the gradient operator. 
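In practice, each local lifting in \eqref{def:lift_re}--\eqref{def:lift_be} amounts to a small mass-matrix solve on the patch $\omega_e$. The following sketch illustrates this local solve; the patch mass matrix and the edge moments are hypothetical placeholders for the actual finite element data.

```python
import numpy as np

# Hypothetical data for one edge e: a two-function basis {tau_1, tau_2} of the
# lifting space on the patch omega_e, its mass matrix
#   M_ij = int_{omega_e} tau_i : tau_j,
# and the edge moments b_i = int_e {tau_i} n_e . phi of the jump datum phi.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # patch mass matrix (symmetric positive definite)
b = np.array([0.3, -0.1])    # edge moments (assumed precomputed)

# Coefficients of r_e(phi) in the basis: int_{omega_e} r_e : tau_j = b_j.
c = np.linalg.solve(M, b)
```

The global lifting $R_h$ simply accumulates these local contributions over the active edges; each $r_e$ is supported on the one or two cells forming $\omega_e$, so every local system is small and can be solved directly.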
\medskip\noindent {\bf Discrete energies.} We are now ready to introduce the discrete energy on $\V_h^k(\boldsymbol{\varphi},\Phi)$ \begin{equation} \label{def:Eh} \begin{aligned} E_h[\mathbf{y}_h] := & \frac{\mu}{12} \int_{\Omega} \Big|\g^{-\frac{1}{2}} \, H_h[\mathbf{y}_{h}] \, \g^{-\frac{1}{2}} \Big|^2 \\ & + \frac{\mu\lambda }{12(2\mu+\lambda)} \int_{\Omega} \Big|\tr\big(\g^{-\frac{1}{2}} \, H_h[\mathbf{y}_h] \, \g^{-\frac{1}{2}}\big) \Big|^2 \\ & +\frac{\gamma_1}{2}\|\h^{-\frac{1}{2}}\jump{\nabla_h\mathbf{y}_h}\|_{L^2(\Gamma_h^a)}^2+\frac{\gamma_0}{2}\|\h^{-\frac{3}{2}}\jump{\mathbf{y}_h}\|_{L^2(\Gamma_h^a)}^2 -\int_{\Omega}\mathbf{f}\cdot\mathbf{y}_h , \end{aligned} \end{equation} where $\gamma_0,\gamma_1>0$ are stabilization parameters; recall the notation \eqref{E:higher-order} and \eqref{E:higher-order_B}. One of the most attractive features of the LDG method is that $\gamma_0,\gamma_1$ are not required to be sufficiently large as is typical for interior penalty methods \cite{bonito2018}. We refer to Section~\ref{sec:num_res} for numerical investigations of this property and to \cite{BGNY2020} for theory.
Note that the Euler-Lagrange equation $\delta E_h[\mathbf{y}_h;\mathbf{v}_h]=0$ in the direction $\mathbf{v}_h$ reads \begin{equation}\label{E:E-L} a_h(\mathbf{y}_h,\mathbf{v}_h) = F_h(\mathbf{v}_h) \quad\forall \, \mathbf{v}_h\in\V_h^k(\mathbf{0},\mathbf{0}), \end{equation} where \begin{equation} \label{def:form_ah} \begin{split} a_h(\mathbf{y}_h,\mathbf{v}_h) := & \frac{\mu}{6} \int_{\Omega} \left(\g^{-\frac{1}{2}}H_h[\mathbf{y}_{h}] \g^{-\frac{1}{2}}\right):\left(\g^{-\frac{1}{2}}H_h[\mathbf{v}_{h}] \g^{-\frac{1}{2}}\right) \\ & +\frac{\mu\lambda }{6(2\mu+\lambda)} \int_{\Omega}\tr\left(\g^{-\frac{1}{2}}H_h[\mathbf{y}_{h}] \g^{-\frac{1}{2}}\right) \cdot \tr\left(\g^{-\frac{1}{2}}H_h[\mathbf{v}_{h}] \g^{-\frac{1}{2}}\right) \\ & +\gamma_1 \big(\h^{-1}\jump{\nabla_h\mathbf{y}_h},\jump{\nabla_h\mathbf{v}_h} \big)_{L^2(\Gamma_h^a)}+\gamma_0 \big(\h^{-3}\jump{\mathbf{y}_h},\jump{\mathbf{v}_h} \big)_{L^2(\Gamma_h^a)}, \end{split} \end{equation} and \begin{equation} \label{def:form_Fh} F_h(\mathbf{v}_h) := \int_{\Omega}\mathbf{f}\cdot\mathbf{v}_h; \end{equation} compare with \eqref{equ:Gateaux} and \eqref{def:Diff_E_D2y}. We reiterate that finding the strong form of \eqref{def:Diff_E_D2y} is problematic because of the presence of the trace term. Yet, it is a key ingredient in the design of discontinuous Galerkin methods such as the interior penalty method and raises the question of how to construct such methods for \eqref{def:Diff_E_D2y}. The use of the reconstructed Hessian in \eqref{def:form_ah} leads to a numerical scheme without resorting to the strong form of the equation. \medskip\noindent {\bf Constraints.} We now discuss how to impose the Dirichlet boundary conditions \eqref{eq:dirichlet} and the metric constraint \eqref{eqn:2Dprestrain} discretely.
The former is enforced via the Nitsche approach and thus is not included as a constraint in the discrete admissible set as in \eqref{def:admiss}; this turns out to be advantageous for the analysis of the method \cite{bonito2018}. The latter is too strong to be imposed on a polynomial space. Inspired by \cite{bonito2018}, we define the {\it metric defect} as \begin{equation} \label{def:PD} D_h[\mathbf{y}_h] := \sum_{\K\in\Th}\left|\int_{\K} \left(\nabla\mathbf{y}_h^T\nabla\mathbf{y}_h-g\right)\right| \end{equation} and, for a positive number $\varepsilon$, we define the {\it discrete admissible set} to be \begin{equation*} \A_{h,\varepsilon}^k:=\Big\{\mathbf{y}_h\in \V^k_h(\boldsymbol{\varphi},\Phi): \quad D_h[\mathbf{y}_h]\leq \varepsilon\Big\}. \end{equation*} Therefore, the discrete minimization problem, the discrete counterpart of \eqref{prob:min_Eg}, reads \begin{equation} \label{prob:min_Eg_h} \min_{\mathbf{y}_h\in\A_{h,\varepsilon}^k} E_h[\mathbf{y}_h]. \end{equation} Problem \eqref{prob:min_Eg_h} is nonconvex due to the structure of $\A_{h,\varepsilon}^k$. Its solution is non-trivial and is discussed next. \subsection{Discrete gradient flow} \label{subsec:GF} To find a local minimizer $\mathbf{y}_h$ of $E_h[\mathbf{y}_h]$ within $\A_{h,\varepsilon}^k$, we design a discrete gradient flow associated with the discrete $H^2$-norm on $\V_h^k(\mathbf{0},\mathbf{0})$ \begin{equation}\label{def:H2metric} \begin{aligned} (\mathbf{v}_h,\mathbf{w}_h)_{H_h^2(\Omega)} := & \sigma (\mathbf{v}_h,\mathbf{w}_h)_{L^2(\Omega)} + (D^2_h\mathbf{v}_h,D^2_h\mathbf{w}_h)_{L^2(\Omega)} \\ & +(\h^{-1}\jump{\nabla_h\mathbf{v}_h},\jump{\nabla_h\mathbf{w}_h})_{L^2(\Gamma_h^a)}+(\h^{-3}\jump{\mathbf{v}_h},\jump{\mathbf{w}_h})_{L^2(\Gamma_h^a)}, \end{aligned} \end{equation} where $\sigma=0$ if $\Gamma_D \not = \emptyset$ and $\sigma>0$ if $\Gamma_D = \emptyset$.
The latter corresponds to free boundary conditions and guarantees that $(\cdot,\cdot)_{H_h^2(\Omega)}$ is a scalar product \cite{bonito2020discontinuous,BGNY2020}. Given an initial guess $\mathbf{y}_h^0\in\A_{h,\varepsilon}^k$ and a pseudo-time step $\tau>0$, we compute iteratively $\mathbf{y}_h^{n+1}:=\mathbf{y}_h^{n}+\delta\mathbf{y}_h^{n+1} \in \V_h^k(\boldsymbol{\varphi},\Phi)$ that minimizes the functional \begin{equation}\label{gf:minimization} \mathbf{w}_h \,\,\mapsto\,\,\frac{1}{2\tau}\|\mathbf{w}_h-\mathbf{y}_h^n\|_{H_h^2(\Omega)}^2+E_h[\mathbf{w}_h] \quad\forall \, \mathbf{w}_h\in\V_h^k(\boldsymbol{\varphi},\Phi), \end{equation} under the following \textit{linearized metric constraint} for the increment $\delta\mathbf{y}_h^{n+1}$ \begin{equation}\label{gf:constraint} L_T[\mathbf{y}_h^n;\delta\mathbf{y}_h^{n+1}] := \int_T\Big((\nabla\delta\mathbf{y}_h^{n+1})^T\nabla\mathbf{y}_h^n+(\nabla\mathbf{y}_h^n)^T\nabla\delta\mathbf{y}_h^{n+1}\Big)=0 \quad \forall T\in\Th. \end{equation} The proposed strategy is summarized in Algorithm \ref{algo:main_GF}. \RestyleAlgo{boxruled} \begin{algorithm}[htbp] \SetAlgoLined Given a target metric defect $\varepsilon>0$, a pseudo-time step $\tau>0$ and a target tolerance $tol$\; Choose initial guess $\mathbf{y}_h^0\in\A_{h,\varepsilon}^k$\; \While{$\tau^{-1}|E_h[\mathbf{y}_h^{n+1}]-E_h[\mathbf{y}_h^{n}]|>tol$}{ \textbf{Solve} \eqref{gf:minimization}-\eqref{gf:constraint} for $\delta\mathbf{y}_h^{n+1}\in\V^k_h(\mathbf{0},\mathbf{0})$\; \textbf{Update} $\mathbf{y}_h^{n+1} = \mathbf{y}_h^{n}+\delta\mathbf{y}_h^{n+1}$\; } \caption{(discrete-$H^2$ gradient flow) Finding local minima of $E_h$} \label{algo:main_GF} \end{algorithm} \noindent We refer to Section~\ref{sec:solve} for a discussion on the implementation of Algorithm~\ref{algo:main_GF}.
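To illustrate the structure of Algorithm \ref{algo:main_GF}, consider the following toy analog, in which the energy is a simple quadratic and a generic SPD matrix plays the role of the $(\cdot,\cdot)_{H_h^2(\Omega)}$ metric; the constraint step is omitted and all names are hypothetical.

```python
import numpy as np

# Toy quadratic energy E(y) = 0.5 y^T A y - f^T y with SPD A, minimized by an
# implicit gradient flow in the metric of an SPD matrix H:
#   (1/tau) H dy + A (y + dy) = f   <=>   (H/tau + A) dy = f - A y.
rng = np.random.default_rng(0)
n = 8
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)     # SPD "stiffness" matrix
H = np.eye(n)                   # metric, standing in for the H^2_h product
f = rng.standard_normal(n)

def energy(y):
    return 0.5 * y @ A @ y - f @ y

tau, tol = 0.5, 1e-12
y = np.zeros(n)
energies = [energy(y)]
while True:
    dy = np.linalg.solve(H / tau + A, f - A @ y)   # one implicit flow step
    y = y + dy
    energies.append(energy(y))
    if abs(energies[-1] - energies[-2]) / tau <= tol:   # stopping criterion
        break
```

Each step decreases the energy, mirroring the energy decay property of the discrete gradient flow, and the iteration stops once the scaled energy decrement falls below the tolerance.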
We show in \cite{BGNY2020} that the discrete gradient flow satisfies the following properties: \begin{itemize} \item \textbf{Energy decay}: If $\delta\mathbf{y}_h^{n+1}$ is nonzero, then we have \begin{equation}\label{eqn:control_energy} E_h[\mathbf{y}_h^{n+1}] < E_h[\mathbf{y}_h^n]. \end{equation} \item \textbf{Control of metric defect}: If $D_h[\mathbf{y}_h^0]\le\varepsilon_0$ and $E_h[\mathbf{y}_h^0]<\infty$, then all the iterates $\mathbf{y}_h^n$ satisfy $\mathbf{y}_h^n \in \A^k_{h,\varepsilon}$, i.e., \begin{equation} \label{eqn:control_defect} D_h[\mathbf{y}_h^n] \leq \varepsilon := \varepsilon_0 + \tau \Big(c_1 E_h[\mathbf{y}_h^0] + c_2 \big(\|\boldsymbol{\varphi}\|_{H^1(\Omega)}^2 + \|\Phi\|_{H^1(\Omega)}^2 + \|\mathbf{f}\|_{L^2(\Omega)}^2 \big)\Big), \! \end{equation} where $c_1,c_2$ depend on $\Omega$ if $\Gamma_D \not = \emptyset$ and also on $\sigma$ if $\Gamma_D = \emptyset$ but are independent of $n$, $h$ and $\tau$. Moreover, $c_2=0$ when $\Gamma_D = \emptyset$, as we assume $\mathbf{f}=\mathbf{0}$ in the free boundary case. \end{itemize} These two properties imply that the energy $E_h$ decreases at each step of Algorithm \ref{algo:main_GF} until a local extremum of $E_h$ restricted to $\A^k_{h,\varepsilon}$ is attained. \subsection{Initialization} \label{subsec:preprocess} The choice of an initial deformation $\mathbf{y}^0_h$ is a very delicate matter. On the one hand, we need $\varepsilon_0$ in \eqref{eqn:control_defect} as small as possible because the discrete gradient flow cannot improve upon the initial metric defect $D_h[\mathbf{y}^0_h]\le\varepsilon_0$. On the other hand, the only way to compensate for a large initial energy $E_h[\mathbf{y}_h^0]$ is to take very small fictitious time steps $\tau$ that may entail many iterations of the gradient flow to reduce the energy.
The value of $E_h[\mathbf{y}_h^0]$ is especially affected by the mismatch between the Dirichlet boundary data $(\boldsymbol{\varphi},\Phi)$ and the trace of $\mathbf{y}^0_h$ and $\nabla_h\mathbf{y}^0_h$ that enter via the penalty terms in \eqref{def:form_ah} of LDG. Therefore, the role of the initialization process is to construct $\mathbf{y}^0_h$ with $\varepsilon_0$ relatively small and $E_h[\mathbf{y}_h^0]$ of moderate size upon matching the boundary data $(\boldsymbol{\varphi},\Phi)$ as well as possible whenever $\Gamma_D \ne \emptyset$. Notice that in some special cases, it is relatively easy to find such a $\mathbf{y}^0_h$. For instance, when $g=\Id_2$ and $\Gamma_D \ne \emptyset$, this has been achieved in \cite{bonito2018} with a flat surface and a continuation technique. For $g\ne\Id_2$ immersible, i.e., for which there exists a deformation $\mathbf{y}\in[H^2(\Omega)]^3$ such that $\I[\mathbf{y}]=g$, finding a good approximation $\mathbf{y}_h^0$ of $\mathbf{y}$ remains problematic and is the subject of this section. \medskip\noindent {\bf Metric preprocessing.}\label{sss:prestrain} We recall that the stretching energy $E_s[\mathbf{y}]$ of \eqref{E:stretching} must vanish for the asymptotic bending limit to make sense. We can monitor the deviation of $E_s[\mathbf{y}]$ from zero to create a suitable $\mathbf{y}_h^0$. Upon setting $\alpha^2=1$, we first observe that, since $g$ is uniformly positive definite, the first term in \eqref{E:stretching} satisfies \[ \int_\Omega \big|g^{-\frac12} \, \I[\mathbf{y}] \, g^{-\frac12} - \Id_2 \big|^2 \approx \int_\Omega \big| \I[\mathbf{y}] - g \big|^2 = \int_\Omega \big| \nabla\mathbf{y}^T \nabla\mathbf{y} - g \big|^2; \] the same happens with the second term. 
We thus consider the discrete energy \begin{equation} \label{def:E_prestrain_PP} \widetilde{E}_h [\widetilde \mathbf{y}_h] := \frac{1}{2}\int_{\Omega} \big|\nabla_h\widetilde \mathbf{y}_h^T\nabla_h\widetilde \mathbf{y}_h-g \big|^2 \end{equation} and propose a discrete $H^2$-gradient flow to reduce it similar to that in Section \ref{subsec:GF}. We proceed recursively: given $\widetilde \mathbf{y}^n_h$ we compute $\widetilde \mathbf{y}^{n+1}_h:=\widetilde \mathbf{y}^n_h+\delta \widetilde\mathbf{y}_h^{n+1}$ by seeking the increment $\delta \widetilde\mathbf{y}_h^{n+1}\in\V^k_h(\mathbf{0},\mathbf{0})$ that satisfies for all $\mathbf{v}_h\in\V^k_h(\mathbf{0},\mathbf{0})$ \begin{equation}\label{pre-gf:variation2} \widetilde{\tau}^{-1}(\delta\widetilde \mathbf{y}_h^{n+1},\mathbf{v}_h)_{H_h^2(\Omega)}+s_h(\widetilde \mathbf{y}_h^n;\delta\widetilde \mathbf{y}_h^{n+1},\mathbf{v}_h)= -s_h(\widetilde \mathbf{y}_h^n; \widetilde \mathbf{y}^n_h,\mathbf{v}_h), \end{equation} where $\widetilde{\tau}$ is a pseudo time-step parameter, not necessarily the same as $\tau$ in Algorithm~\ref{algo:main_GF}, and $s_h(\widetilde \mathbf{y}_h^n;\cdot,\cdot)$ is the variational derivative of $\widetilde{E}_h$ linearized at $\widetilde \mathbf{y}_h^n$ \begin{equation}\label{pre-gf:bilinear} s_h(\widetilde \mathbf{y}_h^n; \mathbf{w}_h,\mathbf{v}_h):=\int_{\Omega}\Big(\nabla_h\mathbf{v}_h^T\nabla_h\mathbf{w}_h+\nabla_h\mathbf{w}_h^T\nabla_h\mathbf{v}_h\Big):\Big((\nabla_h\widetilde \mathbf{y}_h^n)^T\nabla_h\widetilde \mathbf{y}_h^n-g \Big). \end{equation} This flow admits a unique solution at each step because the left-hand side of \eqref{pre-gf:variation2} is coercive, namely \begin{equation}\label{E:coercive} \|\mathbf{v}_h\|_{H_h^2(\Omega)}^2 \lesssim \widetilde{\tau}^{-1} (\mathbf{v}_h,\mathbf{v}_h)_{H_h^2(\Omega)} + s_h(\widetilde \mathbf{y}_h^n; \mathbf{v}_h,\mathbf{v}_h) \quad\forall\, \mathbf{v}_h\in\V^k_h(\mathbf{0},\mathbf{0}).
\end{equation} Moreover, this flow stops whenever either of the following two conditions is met: \begin{itemize} \item the prestrain defect $D_h$ reaches a prescribed value $\widetilde \varepsilon_0$, i.e., $D_h[\widetilde \mathbf{y}_h^{n+1}]\le \widetilde \varepsilon_0$; \item the energy $\widetilde E_h$ becomes stationary, i.e., $\widetilde \tau^{-1}|\widetilde E_h[\widetilde \mathbf{y}_h^{n+1}]-\widetilde E_h[\widetilde \mathbf{y}_h^{n}]|\leq \widetilde{tol}$. \end{itemize} Monotone decay of $\widetilde{E}_h$ in \eqref{def:E_prestrain_PP} is not guaranteed by the flow because of the evaluation of $\delta\widetilde{E}_h$ at $\widetilde\mathbf{y}_h^n$. However, in all the numerical experiments of Section~\ref{sec:num_res}, monotone decay is observed for $\widetilde \tau$ sufficiently small. Upon choosing suitable parameters $\widetilde \varepsilon_0$ and $\widetilde{\tau}$, this procedure produces initial configurations $\mathbf{y}_h^0$ with small metric defect $D_h[\mathbf{y}_h^0]$, but it has one important drawback: {\it flat configurations are local minimizers of \eqref{def:E_prestrain_PP} irrespective of $g$}. To see this, suppose that the current iterate $\widetilde{\mathbf{y}}_h^n$ of \eqref{pre-gf:variation2} is flat, i.e., $\widetilde{\mathbf{y}}_h^n=(y_1,y_2,0)$, and let $\delta\widetilde{\mathbf{y}}_h^{n+1}=(d_1,d_2,d_3)\in\V^k_h(\mathbf{0},\mathbf{0})$, where the functions $y_i$ and $d_i$ depend on $(x_1,x_2)\in\Omega$. Take now $\mathbf{v}_h=(0,0,\phi)\in \V^k_h(\mathbf{0},\mathbf{0})$ and note that \begin{equation*} \nabla\mathbf{v}_h^T \nabla\widetilde{\mathbf{y}}_h^n = \begin{bmatrix} 0 & 0 & \partial_1\phi \\ 0 & 0 & \partial_2\phi \end{bmatrix} \begin{bmatrix} \partial_1 y_1 & \partial_2 y_1 \\ \partial_1 y_2 & \partial_2 y_2 \\ 0 & 0 \end{bmatrix} = \mathbf{0} = (\nabla\widetilde{\mathbf{y}}_h^n)^T \nabla\mathbf{v}_h, \end{equation*} whence the right-hand side of \eqref{pre-gf:variation2} vanishes.
Since $(\delta\widetilde{\mathbf{y}}_h^{n+1},\mathbf{v}_h)_{H_h^2(\Omega)} = (d_3,\phi)_{H_h^2(\Omega)}$, taking $\phi=d_3$ and utilizing \eqref{E:coercive} we deduce $d_3=0$ because, as already pointed out, $\|\cdot\|_{H_h^2(\Omega)}:= (\cdot,\cdot)^{1/2}_{H_h^2(\Omega)}$ defines a norm on $\V^k_h(\mathbf{0},\mathbf{0})$. This shows that the next iterate $\widetilde{\mathbf{y}}_h^{n+1}$ of \eqref{pre-gf:variation2} is also flat, and we need another mechanism to deform a flat surface out of plane provided $g$ does not admit a flat immersion. We discuss this next. A second drawback of \eqref{pre-gf:variation2} is that the stretching energy $\widetilde{E}_h$ is just first order and cannot accommodate the Dirichlet boundary condition $\nabla\mathbf{y}=\Phi$ on $\Gamma_D$. We again need an additional preprocessing of the boundary conditions, which we present next. \medskip\noindent {\bf Boundary conditions preprocessing.}\label{sss:bc_d} We pretend that $g=\Id_2$ momentarily, and rely on \eqref{E:g=1} and \eqref{E:bilaplacian} to consider the bi-Laplacian problem provided $\Gamma_D\ne\emptyset$ \begin{equation}\label{bi-Laplacian} \Delta^2\widehat \mathbf{y} = \widehat \mathbf{f}\quad \mbox{in } \Omega, \quad \widehat \mathbf{y} = \boldsymbol{\varphi} \quad \mbox{on } \Gamma_D, \quad \nabla\widehat \mathbf{y} = \Phi \quad \mbox{on } \Gamma_D, \end{equation} where typically $\widehat\mathbf{f}=\mathbf{0}$. This vector-valued problem is well-posed and gives, in general, a non-flat surface $\widehat{\mathbf{y}}(\Omega)$. We use the LDG method with boundary conditions imposed \emph{\`a la Nitsche} to approximate the solution $\widehat\mathbf{y}\in\V(\boldsymbol{\varphi},\Phi)$ of \eqref{bi-Laplacian}: \begin{equation}\label{bi-Laplacian-system} \widehat \mathbf{y}_h\in\V^k_h(\boldsymbol{\varphi},\Phi): \quad c_h(\widehat \mathbf{y}_h,\mathbf{v}_h) = (\widehat\mathbf{f},\mathbf{v}_h)_{L^2(\Omega)} \quad\forall\,\mathbf{v}_h\in\V^k_h(\mathbf{0},\mathbf{0}).
\end{equation} Here, $c_h(\widehat \mathbf{y}_h,\mathbf{v}_h)$ is defined similarly to \eqref{def:form_ah} using the discrete Hessian \eqref{def:discrHess}, i.e., \begin{equation}\label{bi-Laplacian_bilinear} \begin{split} c_h(\mathbf{w}_h,\mathbf{v}_h) := & \int_{\Omega}H_h[\mathbf{w}_h] : H_h[\mathbf{v}_h] \\ &+\widehat\gamma_1(\h^{-1}\jump{\nabla_h\mathbf{w}_h},\jump{\nabla_h\mathbf{v}_h})_{L^2(\Gamma_h^a)} +\widehat\gamma_0(\h^{-3}\jump{\mathbf{w}_h},\jump{\mathbf{v}_h})_{L^2(\Gamma_h^a)}, \end{split} \end{equation} where $\widehat{\gamma}_0$ and $\widehat{\gamma}_1$ are positive penalty parameters that need not be the same as their counterparts $\gamma_0$ and $\gamma_1$ used in the definition of $E_h$. Then $\widehat\mathbf{y}_h$ satisfies (approximately) the given boundary conditions on $\Gamma_D$ and $\widehat\mathbf{y}_h(\Omega)$ is, in general, non-flat. Instead, if $\Gamma_D=\emptyset$ (free boundary condition), then an obvious choice is $\widehat{\mathbf{y}}=(id,0)^T$, where $id(x)=x$ for $x \in \Omega$, but the surface $\widehat{\mathbf{y}}(\Omega)=\Omega\times\{0\}$ is flat. To get a surface out of plane, we consider a somewhat ad-hoc procedure: we solve \eqref{bi-Laplacian} with a fictitious forcing $\widehat\mathbf{f}\ne\mathbf{0}$ supplemented with the Dirichlet boundary condition $\boldsymbol{\varphi}(x)=(x,0)^T$ for $x \in \partial\Omega$ but omitting $\Phi$ and the jumps of $\nabla_h\widehat{\mathbf{y}}_h$ on $\Gamma_h^D$ in \eqref{bi-Laplacian_bilinear}. This corresponds to enforcing discretely a variational (Neumann) boundary condition $\Delta\widehat{\mathbf{y}} = 0$ on $\partial\Omega$.
We summarize the previous discussion of preprocessing in Algorithm \ref{algo:preprocess}, which \RestyleAlgo{boxruled} \begin{algorithm}[htbp] \SetAlgoLined Given $\widetilde{tol}$ and $\widetilde\varepsilon_0$\; \eIf{$\Gamma_D\ne\emptyset$ (Dirichlet boundary condition)}{ \textbf{Solve} \eqref{bi-Laplacian-system} for $\widehat \mathbf{y}_h\in\V_h^k(\boldsymbol{\varphi},\Phi)$ with $\widehat\mathbf{f}=\mathbf{0}$\; }{\textbf{Solve} \eqref{bi-Laplacian-system} for $\widehat \mathbf{y}_h$ with $\widehat\mathbf{f}\ne\mathbf{0}$, $\boldsymbol{\varphi}=(id,0)$ and without $\Phi$\;} \textbf{Set} $\widetilde{\mathbf{y}}_h^0=\widehat \mathbf{y}_h$\; \While{$\widetilde \tau^{-1}\big|\widetilde E_h[\widetilde{\mathbf{y}}_h^{n+1}]-\widetilde E_h[\widetilde{\mathbf{y}}_h^{n}]\big|> \widetilde{tol}$ \emph{ and } $D_h[\widetilde{\mathbf{y}}_h^{n+1}] > \widetilde\varepsilon_0$}{ \textbf{Solve} \eqref{pre-gf:variation2} for $\delta\widetilde{\mathbf{y}}_h^{n+1}\in\V^k_h(\mathbf{0},\mathbf{0})$\; \textbf{Update} $\widetilde{\mathbf{y}}_h^{n+1} = \widetilde{\mathbf{y}}_h^{n}+\delta\widetilde{\mathbf{y}}_h^{n+1}$ \; } \textbf{Set} $\mathbf{y}_h^0=\widetilde{\mathbf{y}}_h^{n+1}$. \caption{Initialization step for Algorithm \ref{algo:main_GF}.} \label{algo:preprocess} \end{algorithm} consists of two separate steps: the \textit{boundary conditions} and \textit{metric} preprocessing steps. When $\Gamma_D \not = \emptyset$ (Dirichlet boundary condition), the former constructs a solution $\widehat\mathbf{y}_h$ to \eqref{bi-Laplacian-system} with $\widehat\mathbf{f}=\mathbf{0}$, whence $\widehat\mathbf{y}_h \approx\boldsymbol{\varphi}$ and $\nabla_h\widehat\mathbf{y}_h \approx \Phi$ on $\Gamma_D$. Instead, when $\Gamma_D = \emptyset$ (free boundary condition), $\widehat\mathbf{y}_h$ solves \eqref{bi-Laplacian-system} again but now with $\widehat\mathbf{f}\ne\mathbf{0}$ and a suitable boundary condition for $\mathbf{y}_h$ on $\partial\Omega$ that guarantees $\widehat\mathbf{y}_h(\Omega)$ is non-flat.
The output of this step is then used as an initial guess for the metric preprocessing step \eqref{pre-gf:variation2}. It is conceivable that more efficient or physically motivated algorithms could be designed to construct initial guesses. We leave these considerations for future research. As we shall see in Section~\ref{sec:num_res}, different initial deformations can lead to different equilibrium configurations corresponding to distinct local minima of the energy $E_h$ in \eqref{def:Eh}. These minima are generally physically meaningful. \section{Implementation} \label{sec:solve} We make a few comments on the implementation of the gradient flow \eqref{gf:minimization}-\eqref{gf:constraint}, summarized in Algorithm \ref{algo:main_GF}, and on the resulting linear algebra solver used at each step. \subsection{Linear constraints} We start by discussing how the linearized metric constraint \eqref{gf:constraint} is enforced using piecewise constant Lagrange multipliers in the space \begin{equation*} \Lambda_h:=\left\{\lambda_h:\Omega\to\mathbb{R}^{2\times2}: \,\, \lambda_h^T=\lambda_h, \,\, \lambda_h\in\big[\V_h^0\big]^{2\times 2}\right\}. \end{equation*} We define the bilinear form $b_h^n$ for any $(\mathbf{v}_h,\boldsymbol{\mu}_h)\in\V^k_h(\mathbf{0},\mathbf{0})\times\Lambda_h$ to be \begin{equation}\label{def:bh} b_h^n(\mathbf{v}_h,\boldsymbol{\mu}_h):=\sum_{T\in\Th}\int_T\big(\nabla \mathbf{v}_h^T\nabla\mathbf{y}_h^n+(\nabla\mathbf{y}_h^n)^T\nabla\mathbf{v}_h\big):\boldsymbol{\mu}_h. \end{equation} We observe that $b_h^n$ depends on $\mathbf{y}_h^n$ and that $b_h^n(\delta\mathbf{y}_h^{n+1},\boldsymbol{\mu}_h)=0$ for all $\boldsymbol{\mu}_h\in\Lambda_h$ implies \eqref{gf:constraint}, i.e., $L_T[\mathbf{y}_h^n;\delta\mathbf{y}_h^{n+1}]=0$ for all $T\in\Th$.
Therefore, recalling the forms $a_h$ and $F_h$ in \eqref{def:form_ah} and \eqref{def:form_Fh}, the augmented system for the Euler-Lagrange equation \eqref{E:E-L} incorporating the gradient flow step and the linearized metric constraint reads: seek $(\delta\mathbf{y}_h^{n+1},\boldsymbol{\lambda}_h^{n+1})\in\V^k_h(\mathbf{0},\mathbf{0})\times \Lambda_h$ such that \begin{equation}\label{gf:system} \begin{aligned} \tau^{-1}(\delta\mathbf{y}_h^{n+1},\mathbf{v}_h)_{H_h^2(\Omega)} \!+\! a_h(\delta\mathbf{y}_h^{n+1},\mathbf{v}_h) \!+\! b_h^n(\mathbf{v}_h,\boldsymbol{\lambda}_h^{n+1})& \!=\! F_h(\mathbf{v}_h) \!-\! a_h(\mathbf{y}_h^n,\mathbf{v}_h) \\ b_h^n(\delta\mathbf{y}_h^{n+1},\boldsymbol{\mu}_h) & \!=\! 0 \end{aligned} \end{equation} for all $(\mathbf{v}_h,\boldsymbol{\mu}_h)\in\V^k_h(\mathbf{0},\mathbf{0})\times \Lambda_h$. Since $\mathbf{y}_h^n\in\V_h^k(\boldsymbol{\varphi},\Phi)$, whence $\mathbf{y}_h^{n+1}=\mathbf{y}_h^n+\delta\mathbf{y}_h^{n+1}\in\V_h^k(\boldsymbol{\varphi},\Phi)$, the effect of the Dirichlet boundary data $(\boldsymbol{\varphi},\Phi)$ is implicitly contained in $a_h(\mathbf{y}_h^n,\mathbf{v}_h)$ when $\Gamma_D$ is not empty. \subsection{Solvers} Let $\{\boldsymbol{\varphi}_h^i\}_{i=1}^N$ be a basis for $\V^k_h(\mathbf{0},\mathbf{0})$ and let $\{\boldsymbol{\psi}_h^i\}_{i=1}^M$ be a basis for $\Lambda_h$. The discrete problem \eqref{gf:system} is a {\it saddle-point problem} of the form \begin{equation}\label{discrete_system} \begin{bmatrix} A & B_n^T \\ B_n & 0 \end{bmatrix} \begin{bmatrix} \boldsymbol{\delta Y}_h^{n+1} \\ \boldsymbol{\Lambda}_h^{n+1} \end{bmatrix} = \begin{bmatrix} \mathbf{F}_n \\ \mathbf{0} \end{bmatrix}. 
\end{equation} Here, $(\boldsymbol{\delta Y}_h^{n+1},\boldsymbol{\Lambda}_h^{n+1})$ are the nodal values of $(\delta\mathbf{y}_h^{n+1},\boldsymbol{\lambda}_h^{n+1})$ in these bases, $A=(A_{ij})_{i,j=1}^N\in\mathbb{R}^{N\times N}$ is the matrix corresponding to the first two terms of \eqref{gf:system} \begin{equation*} A_{ij}:=\tau^{-1}(\boldsymbol{\varphi}_h^j,\boldsymbol{\varphi}_h^i)_{H_h^2(\Omega)} + \widetilde A_{ij} \quad \mbox{with} \quad \widetilde A_{ij}:=a_h(\boldsymbol{\varphi}_h^j,\boldsymbol{\varphi}_h^i), \quad i,j=1,\ldots,N, \end{equation*} while the matrix $B_n\in\mathbb{R}^{M\times N}$ corresponds to the bilinear form $b_h^n$ and is given by \begin{equation*} (B_n)_{ij}:=b_h^n(\boldsymbol{\varphi}_h^j,\boldsymbol{\psi}_h^i) \quad i=1,\ldots,M, \, j=1,\ldots,N. \end{equation*} The vector $\mathbf{F}_n\in\mathbb{R}^N$ accounts for the right-hand side of \eqref{gf:system}. It reads $\mathbf{F}_n=\mathbf{F}+\mathbf{L}-\widetilde A \mathbf{Y}^n$, where $\mathbf{Y}^n$ contains the nodal values of $\mathbf{y}_h^n$ in the basis $\{\boldsymbol{\varphi}_h^i\}_{i=1}^N$ while $\mathbf{F}=(F_i)_{i=1}^N$ and $\mathbf{L}=(L_i)_{i=1}^N$ are defined by \begin{equation*} F_i:= F_h(\boldsymbol{\varphi}_h^i) \quad \mbox{and} \quad L_i:= -a_h(\bar \mathbf{0},\boldsymbol{\varphi}_h^i), \quad i=1,\ldots,N. \end{equation*} Here, $\bar \mathbf{0}$ denotes the zero function in the space $\V_h(\boldsymbol{\varphi},\Phi)$ and $\mathbf{L}$ contains the liftings of the boundary data. Since $B_n$ and $\mathbf{F}_n$ depend explicitly on the current deformation $\mathbf{y}_h^n$, they have to be re-computed at each iteration of Algorithm \ref{algo:main_GF} (gradient flow). In contrast, the matrices $A$ and $\widetilde A$ and the vector $\mathbf{L}$, which are the most costly to assemble because of the reconstructed Hessians, are independent of the iteration number $n$ and can thus be computed once and for all.
More precisely, to compute the element-wise contribution on a cell $T$, the discrete Hessian \eqref{def:discrHess} of each basis function associated with $T$ along with those associated with the neighboring cells is computed. Recall that for any interior edge $e\in\Eh^i$, the support of the liftings $r_e$ and $b_e$ in \eqref{def:lift_re} and \eqref{def:lift_be} is the union of the two cells sharing $e$ as an edge. We employ direct solvers for these small systems. We proceed similarly for the computation of the liftings of the boundary data $\boldsymbol{\varphi}$ and $\Phi$. Once the discrete Hessians are computed, the rest of the assembly process is standard. Incidentally, we note that the proposed LDG approach couples the degrees of freedom (DoFs) of all neighboring cells with each other (not only those of a cell with its own neighbors). As a consequence, the sparsity pattern of LDG is slightly larger than that of a standard symmetric interior penalty dG (SIPG) method. However, the stability properties of LDG are superior to those of SIPG \cite{BGNY2020}. System \eqref{discrete_system} can be solved using the \textit{Schur complement method}. Denoting by $S_n:=B_nA^{-1}B_n^T$ the Schur complement matrix, the first step determines $\boldsymbol{\Lambda}_h^{n+1}$ satisfying \begin{equation} \label{eqn:Schur} S_n\boldsymbol{\Lambda}_h^{n+1}=B_nA^{-1}\mathbf{F}_n, \end{equation} followed by the computation of $\delta \mathbf{Y}_h^{n+1}$ solving \begin{equation}\label{E:deltaY} A\delta \mathbf{Y}_h^{n+1}=\mathbf{F}_n-B_n^T\boldsymbol{\Lambda}_h^{n+1}. \end{equation} Because the matrix $A$ is independent of the iterations, we pre-compute its LU decomposition once and for all and use it whenever the action of $A^{-1}$ is needed in \eqref{eqn:Schur} and \eqref{E:deltaY}. Furthermore, a conjugate gradient algorithm is utilized to compute $\boldsymbol{\Lambda}_h^{n+1}$ in \eqref{eqn:Schur} to avoid assembling $S_n$.
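A linear-algebra sketch of this two-step Schur solve, with a factorization of $A$ computed once and a matrix-free conjugate gradient for \eqref{eqn:Schur}; the matrices are random stand-ins, and a Cholesky factor plays the role of the precomputed LU since $A$ is symmetric positive definite:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 40, 10
Q = rng.standard_normal((N, N))
A = Q @ Q.T + N * np.eye(N)        # SPD stand-in for tau^{-1}(.,.)_{H_h^2} + a_h
B = rng.standard_normal((M, N))    # stand-in for the constraint matrix B_n
F = rng.standard_normal(N)         # stand-in for the right-hand side F_n

# factor A once and for all (Cholesky here, playing the role of the LU)
L = np.linalg.cholesky(A)
def A_solve(b):
    return np.linalg.solve(L.T, np.linalg.solve(L, b))

def schur_mv(lam):                 # matrix-free action of S_n = B A^{-1} B^T
    return B @ A_solve(B.T @ lam)

def cg(mv, b, tol=1e-12, maxit=500):
    # plain conjugate gradient; S_n is SPD when B has full rank
    x = np.zeros_like(b); r = b.copy(); p = r.copy(); rs = r @ r
    for _ in range(maxit):
        Ap = mv(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

lam = cg(schur_mv, B @ A_solve(F))  # step 1: S_n * Lambda = B A^{-1} F_n
dY = A_solve(F - B.T @ lam)         # step 2: A * deltaY = F_n - B^T Lambda
```

The Schur matrix is never formed: only its action on a vector is needed, each application costing two triangular solves with the stored factor.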
The efficiency of the latter depends on the condition number of the matrix $S_n$, which in turn depends on the inf-sup constant of the saddle-point problem \eqref{discrete_system}. Leaving aside the preprocessing step, we observe in practice that solving the Schur complement problem \eqref{eqn:Schur} is the most time-consuming part of the simulation. Finally, we point out that the stabilization parameters $\gamma_0$ and $\gamma_1$ influence the number of Schur complement iterations: more iterations of the conjugate gradient algorithm are required for larger stabilization parameter values. We refer to Tables \ref{tab:iso_vertical_load_0025} and \ref{tab:iso_vertical_load_0025_LDG_SIPG} below for more details. \section{Numerical experiments} \label{sec:num_res} In this section, we present a collection of numerical experiments to illustrate the performance of the proposed methodology. We consider several prestrain tensors $g$, as well as both $\Gamma_D\neq\emptyset$ (Dirichlet boundary condition) and $\Gamma_D=\emptyset$ (free boundary condition). Algorithms \ref{algo:main_GF} and \ref{algo:preprocess} are implemented using the \textrm{deal.ii} library \cite{bangerth2007} and the visualization is performed with \textrm{paraview} \cite{ayachit2015}. The color code is the following: (multicolor figures) dark blue indicates the lowest value of the deformation's third component while dark red indicates the largest value of the deformation's third component; (unicolor figures) magnitude of the deformation's third component. For all the simulations, we fix the polynomial degree $k$ of the deformation $\mathbf{y}_h$ and $l_1,l_2$ for the two liftings of the discrete Hessian $H_h[\mathbf{y}_h]$ to be \[ k=l_1=l_2=2.
\] Moreover, unless otherwise specified, we set the Lam\'e coefficients to $\lambda=8$ and $\mu=6$, and the stabilization parameters for \eqref{def:form_ah} and \eqref{bi-Laplacian_bilinear} to be \[ \gamma_0=\gamma_1=1, \qquad \widehat \gamma_0=\widehat \gamma_1=1. \] In striking contrast to \cite{bonito2018,bonito2020discontinuous}, these parameters do not need to be large for stability purposes. When $\Gamma_D=\emptyset$, we set $\epsilon=1$ in \eqref{def:H2metric}. Finally, we choose $tol=10^{-6}$ for the stopping criterion in Algorithm \ref{algo:main_GF} (gradient flow). To record the energy $E_h$ and metric defect $D_h$ after the three key procedures described in Section \ref{sec:method}, we resort to the following notation: {\it BC PP} (boundary conditions preprocessing); {\it Metric PP} (metric preprocessing); {\it Final} (gradient flow). \subsection{Vertical load and isometry constraint}\label{S:vertical-load} This first example has already been investigated in \cite{bartels2013,bonito2018}. We consider the square domain $\Omega=(0,4)^2$, the metric $g=\Id_2$ (isometry) and a vertical load $\mathbf{f}=(0,0,0.025)^T$. Moreover, the plate is clamped on $\Gamma_D=\{0\}\times[0,4]\cup[0,4]\times\{0\}$, i.e., we prescribe the Dirichlet boundary condition \eqref{eq:dirichlet} with \begin{equation*} \boldsymbol{\varphi}(x_1,x_2)=(x_1,x_2,0)^T, \quad \Phi=[I_2,\mathbf{0}]^T \qquad (x_1,x_2) \in \Gamma_D. \end{equation*} Finally, we set the Lam\'e constant $\lambda=0$, thereby removing the trace term in \eqref{def:Eh}. No preprocessing step is required because the flat plate, which corresponds to the identity deformation $\mathbf{y}_h^0(\Omega)=\Omega$, satisfies the metric constraint and the boundary conditions. For the discretization of $\Omega$, we use $\ell=0,1,2,\cdots$ to denote the refinement level and consider uniform partitions $\mathcal T_\ell$ consisting of squares $\K$ of side-length $4/2^\ell$ and diameters $h_{\K}=h=\sqrt{2}/2^{\ell-2}$.
The pseudo-time step used for the discretization of the gradient flow is chosen so that $\tau=h$. The discrete energy $E_h[\mathbf{y}_h]$ and metric defect $D_h[\mathbf{y}_h]$ for $\ell=3,4,5$ are reported in Table \ref{tab:iso_vertical_load_0025} along with the number of gradient flow iterations (GF Iter) required to reach the targeted stationary tolerance and the range of number of iterations (Schur Iter) needed to solve the Schur complement problem \eqref{eqn:Schur}. Note that in this case we have $D_h[\mathbf{y}_h^0]=0$, namely $\mathbf{y}_h^0\in\mathbb{A}_{h,\varepsilon_0}^k$ with $\varepsilon_0=0$. \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Nb. cells & DoFs & $\tau=h$ & $E_h$ & $D_h$ & GF Iter & Schur Iter \\ \hline 64 & 1920 & $\sqrt{2}/2$ & -1.002E-2 & 1.062E-2 & 11 & [60,65] \\ \hline 256 & 7680 & $\sqrt{2}/4$ & -9.709E-3 & 5.967E-3 & 17 & [85,101] \\ \hline 1024 & 30720 & $\sqrt{2}/8$ & -8.762E-3 & 2.962E-3 & 28 & [118,148] \\ \hline \end{tabular} \vspace{0.3cm} \caption{Effect of the numerical parameters $h$ and $\tau=h$ on the energy and prestrain defect for the vertical load example using $\gamma_0=\gamma_1=1$. As expected \cite{bartels2013,bonito2015,bonito2018}, we observe that $D_h[\mathbf{y}_h]$ is $\mathcal{O}(h)$. The numbers of iterations needed by the gradient flow and by each Schur complement solve increase with the resolution.} \label{tab:iso_vertical_load_0025} \end{center} \end{table} We point out that the SIPG method analyzed in \cite{bonito2018} requires $\gamma_0=5000$ and $\gamma_1=1100$ in this example. We report in Table \ref{tab:iso_vertical_load_0025_LDG_SIPG} the performance of both methods with this choice of stabilization parameters but using the definition of the mesh function \eqref{eqn:mesh_function} rather than $\h(\mathbf{x})=\max_{\K\in\mathcal T}h_{\K}$ as in \cite{bonito2018}.
\begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \cline{2-9} \multicolumn{1}{c|}{ } & \multicolumn{4}{c|}{LDG} & \multicolumn{4}{c|}{SIPG} \\ \hline $\tau=h$ & $E_h$ & $D_h$ & GF Iter & Schur Iter & $E_h$ & $D_h$ & GF Iter & Schur Iter \\ \hline $\sqrt{2}/2$ & -8.28E-3 & 7.71E-3 & 7 & [302,321] & -8.30E-3 & 7.72E-3 & 7 & [284,307] \\ \hline $\sqrt{2}/4$ & -6.63E-3 & 3.45E-3 & 14 & [557,605] & -6.64E-3 & 3.46E-3 & 13 & [556,600] \\ \hline $\sqrt{2}/8$ & -4.88E-3 & 1.34E-3 & 37 & [788,831] & -4.90E-3 & 1.34E-3 & 35 & [787,833] \\ \hline \end{tabular} \vspace{0.3cm} \caption{Comparison of the LDG and SIPG methods using the penalization parameters $\gamma_0=5000$, $\gamma_1=1100$ required by SIPG. The results are similar.} \label{tab:iso_vertical_load_0025_LDG_SIPG} \end{center} \end{table} Based on Table \ref{tab:iso_vertical_load_0025_LDG_SIPG}, we see that the two methods give similar results. The advantage of the LDG approach is that there is no constraint on the stabilization parameters $\gamma_0$ and $\gamma_1$ other than being positive. In contrast, the coercivity of the energy discretized with the SIPG method requires $\gamma_0$ and $\gamma_1$ to be sufficiently large (depending on the maximum number of edges of the elements in the subdivision $\mathcal T$ and the constant in the trace inequality) \cite{bonito2018}. For instance, the choice $\gamma_0=\gamma_1=1$ for the SIPG method yields an unstable scheme and the problem \eqref{gf:system} becomes singular after a few iterations of the gradient flow. Moreover, the large values of $\gamma_0,\gamma_1$ are mainly dictated by the penalty of the boundary terms in $E_h[\mathbf{y}_h^0]$ and the need to produce moderate values of $E_h[\mathbf{y}_h^0]$ to prevent very small time steps $\tau$ in \eqref{eqn:control_defect}.
Furthermore, within each gradient flow iteration, the solution of the Schur complement problem \eqref{eqn:Schur} using the LDG approach with $\gamma_0=\gamma_1=1$ requires less than a fifth of the iterations (Schur Iter) for SIPG with $\gamma_0=5000$ and $\gamma_1=1100$ at the expense of a slightly larger number of iterations of the gradient flow (GF Iter); compare Tables \ref{tab:iso_vertical_load_0025} and \ref{tab:iso_vertical_load_0025_LDG_SIPG}. This documents a superior performance of LDG relative to SIPG. Note that there is an artificial displacement along the diagonal $x_1+x_2=4$ \cite{bonito2018,bartels2013} for this example, which does not correspond to the actual physics of the problem, namely $y_3 = 0$ for $x_1+x_2\le4$. The artificial displacements obtained by the two methods for various meshes are compared in Figure \ref{fig:deformation_accross_diagonal} and Table \ref{T:defection}. \begin{figure}[htbp] \begin{center} \includegraphics[width=6.0cm]{along_diagonal_LDG_pen1} \\ \includegraphics[width=6.0cm]{along_diagonal_LDG} \includegraphics[width=6.0cm]{along_diagonal_SIPG} \caption{Deformation along the diagonal $x_1+x_2=4$. Top: LDG with $\gamma_0=\gamma_1=1$; bottom-left: LDG with $\gamma_0=5000$ and $\gamma_1=1100$; bottom-right: SIPG with $\gamma_0=5000$ and $\gamma_1=1100$. The deflection is slightly larger when $\gamma_0=\gamma_1=1$ while both methods yield similar results when $\gamma_0=5000$ and $\gamma_1=1100$; see Table \ref{T:defection}.}\label{fig:deformation_accross_diagonal} \end{center} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{|l|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{ } & \multicolumn{2}{c|}{LDG} & \multicolumn{1}{c|}{SIPG} \\ \hline $\sharp$ ref.
& $\qquad\gamma_0=\gamma_1=1\qquad$ & $\gamma_0=5000,\gamma_1=1100$ & $\gamma_0=5000,\gamma_1=1100$ \\ \hline $\ell=3$ & 0.0478 & 0.0311 & 0.0312 \\ \hline $\ell=4$ & 0.0443 & 0.0211 & 0.0213 \\ \hline $\ell=5$ & 0.0365 & 0.0118 & 0.0119 \\ \hline \end{tabular} \vspace{0.3cm} \caption{Deflection $y_3$ along the diagonal $x_1+x_2=4$ for both LDG and SIPG.}\label{T:defection} \end{center} \end{table} \subsection{Rectangle with \emph{cylindrical} metric} The domain is the rectangle $\Omega=(-2,2)\times(-1,1)$ and the Dirichlet boundary is $\Gamma_D=\{-2\}\times(-1,1)\cup\{2\}\times(-1,1)$. The mesh $\Th$ is uniform and made of 1024 rectangular cells of diameter $h_{\K}=h=\sqrt{5}/4$ (30720 DoFs) and the pseudo time-step is fixed to $\tau = 0.1$. \subsubsection{\bf \emph{One mode}} \label{sec:one_mode} We first consider the immersible metric \begin{equation} \label{def:metric_one_mode} g(x_1,x_2) = \begin{bmatrix} 1+\frac{\pi^2}{4}\cos\left(\frac{\pi}{4}(x_1+2)\right)^2 & 0 \\ 0 & 1 \end{bmatrix} \end{equation} for which \begin{equation}\label{eq:y_exacy_one} \mathbf{y}(x_1,x_2) = (x_1,x_2,2\sin(\frac{\pi}{4}(x_1+2)))^T \end{equation} is a compatible deformation (isometric immersion), i.e., $\I[\mathbf{y}]=g$. We impose the boundary conditions $\boldsymbol{\varphi} = \mathbf{y}|_{\Gamma_D}$ and $\Phi = \nabla \mathbf{y}|_{\Gamma_D}$, so that $\mathbf{y}\in\V(\boldsymbol{\varphi},\Phi)$ is an admissible deformation and also a global minimizer of the energy. To challenge our algorithm, we start from a flat initial plate and obtain an admissible initial deformation $\mathbf{y}_h^0$ using the two preprocessing steps (BC PP and Metric PP) in Algorithm \ref{algo:preprocess} with parameters \[ \widetilde \tau = 0.05, \quad \widetilde\varepsilon_0=0.1 \quad \mbox{and} \quad \widetilde{tol}=10^{-6}. \] The deformations obtained after applying Algorithms \ref{algo:preprocess} and \ref{algo:main_GF} are displayed in Figure \ref{fig:cylinder_one_mode}.
Moreover, the corresponding energy and prestrain defect are reported in Table \ref{tab:cylinder_one_mode}. Notice that the target metric defect $\widetilde\varepsilon_0$ is reached in 49 iterations while 380 iterations of the gradient flow are needed to reach the stationary deformation. \begin{figure}[htbp] \begin{center} \includegraphics[width=4.0cm]{cylinder_one_mode_BC_preprocessed} \includegraphics[width=4.0cm]{cylinder_one_mode_prestrain_preprocessed} \includegraphics[width=4.0cm]{cylinder_one_mode_final} \caption{Deformed plate for the cylinder metric with one mode. Left: BC PP; middle: Metric PP; right: Final.} \label{fig:cylinder_one_mode} \end{center} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{ } & Initial & BC PP & Metric PP & Final \\ \hline $E_h$ & 120.3590 & 1.1951 & 2.5464 & 1.7707 \\ \hline $D_h$ & 9.8696 & 3.2899 & 9.8609E-2 & 9.5183E-2 \\ \hline \end{tabular} \vspace{0.3cm} \caption{Energy and prestrain defect for the cylinder metric with one mode. All the algorithms behave as intended: the boundary conditions preprocessing (BC PP) reduces the energy by constructing a deformation with compatible boundary conditions, the metric preprocessing (Metric PP) reduces the metric defect and the gradient flow (Final) reduces the energy to its minimal value while keeping control of the metric defect.} \label{tab:cylinder_one_mode} \end{center} \end{table} Interestingly, when no Dirichlet boundary conditions are imposed, i.e., the free boundary case, then the flat deformation (pure stretching) \begin{equation*} \mathbf{y}(x_1,x_2) = \left(\int_{-2}^{x_1}\sqrt{1+\frac{\pi^2}{4}\cos\left(\frac{\pi}{4}(s+2)\right)^2} ds,x_2,0\right)^T \end{equation*} is also compatible with the metric \eqref{def:metric_one_mode} and has a smaller energy. We observe that $y_1(2,x_2)-y_1(-2,x_2)\approx 5.85478$ for $x_2\in(-1,1)$, which corresponds to a stretching ratio of approximately $1.5$.
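The stretched width can be reproduced by numerical quadrature of the arc-length integrand; a quick sanity check (assuming scipy is available):

```python
import numpy as np
from scipy.integrate import quad

# width of the flat, purely stretched configuration: the arc length of
# s -> (s, 2*sin(pi/4*(s+2))) over (-2, 2)
val, _ = quad(lambda s: np.sqrt(1.0 + np.pi**2 / 4.0
                                * np.cos(np.pi / 4.0 * (s + 2.0))**2),
              -2.0, 2.0)
ratio = val / 4.0  # the undeformed width of Omega is 4
```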
The outcome of Metric PP in Algorithm \ref{algo:preprocess} starting from the flat plate produces an initial deformation with $E_h=0.81755$ and $D_h=0.09574$ using 37 iterations. The stationary solution of the main gradient flow is reached in 68 iterations and produces a flat plate with energy $E_h=0.376257$ and metric defect $D_h=0.0957329$. \subsubsection{\bf \emph{Two modes}} This example is similar to that of Section \ref{sec:one_mode} but with one additional \emph{mode} of higher frequency, namely we consider the immersible metric \begin{equation*} g(x_1,x_2) = \begin{bmatrix} 1+\left(\frac{\pi}{2}\cos\left(\frac{\pi}{4}(x_1+2)\right)+\frac{5\pi}{8}\cos\left(\frac{5\pi}{4}(x_1+2)\right)\right)^2 & 0 \\ 0 & 1 \end{bmatrix}. \end{equation*} In this case, the deformation $$ \mathbf{y}(x_1,x_2)=\left(x_1,x_2, 2\sin\left(\frac{\pi}{4}(x_1+2)\right)+\frac{1}{2}\sin\left(\frac{5\pi}{4}(x_1+2)\right)\right)^T $$ is compatible (isometric immersion) with the metric and we impose the corresponding Dirichlet boundary conditions on $\Gamma_D$ as in Section \ref{sec:one_mode}. Using the same setup as in Section \ref{sec:one_mode}, Algorithm \ref{algo:preprocess} produced a suitable initial guess in 1271 iterations, while Algorithm \ref{algo:main_GF} terminated after 1833 steps. The deformations obtained after each of the three main procedures are given in Figure \ref{fig:cylinder_two_modes}. The corresponding energy and prestrain defect are reported in Table \ref{tab:cylinder_two_modes}. We see that the main gradient flow decreases the energy by bending the shape while keeping the metric defect roughly constant. \begin{figure}[htbp] \begin{center} \includegraphics[width=4.1cm]{cylinder_two_modes_BC_preprocessed} \includegraphics[width=4.1cm]{cylinder_two_modes_prestrain_preprocessed} \includegraphics[width=4.1cm]{cylinder_two_modes_final} \caption{Deformed plate for the cylinder metric with two modes. Left: BC PP; middle: Metric PP; right: Final.
Compare with Figure~\ref{fig:cylinder_one_mode} corresponding to the metric \eqref{def:metric_one_mode} (one mode).} \label{fig:cylinder_two_modes} \end{center} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{ } & Initial & BC PP & Metric PP & Final \\ \hline $E_h$ & 413.7400 & 5.5344 & 28.9184 & 13.0706 \\ \hline $D_h$ & 25.2909 & 26.1854 & 9.9997E-2 & 1.0178E-1 \\ \hline \end{tabular} \vspace{0.3cm} \caption{Energy and metric defect for the cylinder metric with two modes. Compare with Table~\ref{tab:cylinder_one_mode} corresponding to one mode.} \label{tab:cylinder_two_modes} \end{center} \end{table} \subsection{Rectangle with a \emph{catenoidal-helicoidal} metric} Let $\Omega$ be a rectangle to be specified later and let the metric be \begin{equation} \label{def:g_cate_heli} g(x_1,x_2) = \begin{bmatrix} \cosh(x_2)^2 & 0 \\ 0 & \cosh(x_2)^2 \end{bmatrix}. \end{equation} Notice that the deformations $\mathbf{y}^\alpha:\Omega\rightarrow\mathbb{R}^3$, $0\leq \alpha \leq \frac{\pi}{2}$, defined by \begin{equation} \label{def_y_alpha} \mathbf{y}^{\alpha}:=\cos(\alpha)\bar \mathbf{y} + \sin(\alpha)\tilde{\mathbf{y}} \end{equation} with \begin{equation*} \bar \mathbf{y}(x_1,x_2)= \begin{bmatrix} \sinh(x_2)\sin(x_1) \\ -\sinh(x_2)\cos(x_1) \\ x_1 \end{bmatrix}, \quad \tilde \mathbf{y}(x_1,x_2)= \begin{bmatrix} \cosh(x_2)\cos(x_1) \\ \cosh(x_2)\sin(x_1) \\ x_2 \end{bmatrix}, \end{equation*} are all compatible with the metric \eqref{def:g_cate_heli}. The parameter $\alpha=0$ corresponds to a {\it helicoid} while $\alpha=\pi/2$ represents a {\it catenoid}. Furthermore, the energy $E[\mathbf{y}^\alpha]$ defined in \eqref{def:Eg_D2y} (or equivalently $E[\mathbf{y}^\alpha]$ given in \eqref{E:final-bending}) has the same value for all $\alpha$.
To see this, it suffices to note that the second fundamental form of $\mathbf{y}^\alpha$ is given by \begin{equation*} \II[\mathbf{y}^\alpha] = \begin{bmatrix} -\cos(\alpha) & \sin(\alpha) \\ \sin(\alpha) & \cos(\alpha) \end{bmatrix}, \quad D^2y^\alpha_k=\cos(\alpha)D^2\bar y_k+\sin(\alpha)D^2\tilde y_k, \end{equation*} where $y^\alpha_k=(\mathbf{y}^\alpha)_k$ is the $k$th component of $\mathbf{y}^\alpha$ for $k=1,2,3$. In the following sections, we show how the two extreme deformations can be obtained either by imposing the adequate boundary conditions or by starting with an initial configuration sufficiently close to the energy minima. \subsubsection{\bf \emph{Catenoid case}} We consider the domain $\Omega=(0,6.25)\times(-1,1)$. The mesh $\Th$ consists of 896 (almost square) rectangular cells of diameter $h_{\K}=h\approx 0.17$ (26880 DoFs). We do not impose any boundary conditions on the deformations, which corresponds to $\Gamma_D=\emptyset$ (free boundary condition). We apply Algorithm \ref{algo:preprocess} (initialization) and start the metric preprocessing with $\widetilde{\mathbf{y}}_h^0=\widehat \mathbf{y}_h$, the solution to the bi-Laplacian problem \eqref{bi-Laplacian} with fictitious force $\widehat \mathbf{f} = (0,0,4)^T$ and boundary condition $\boldsymbol{\varphi}(\mathbf{x})=(\mathbf{x},0)$ on $\partial\Omega$ (but without $\Phi$). Moreover, we use three tolerances $\widetilde{tol} = 0.1, \, 0.025, \, 0.01$ for this preprocessing to investigate the effect on Algorithm \ref{algo:main_GF} (gradient flow). Figure \ref{fig:catenoid_2} depicts final configurations produced by Algorithm \ref{algo:main_GF} with the outputs of Algorithm \ref{algo:preprocess}. Corresponding energies and metric defects are given in Table \ref{tab:catenoid}. We see that the metric defect diminishes, as $\widetilde{tol}$ decreases, and the surface tends to a full (closed) catenoid as expected from the relation \eqref{def_y_alpha} with $\alpha =\pi/2$. 
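Incidentally, the compatibility $\I[\mathbf{y}^\alpha]=g$ for the whole family can be verified numerically; a minimal sketch with finite-difference Jacobians (sample points and step size chosen arbitrarily):

```python
import numpy as np

def y_alpha(alpha, x1, x2):
    # associate family: cos(alpha)*helicoid + sin(alpha)*catenoid
    ybar = np.array([np.sinh(x2) * np.sin(x1), -np.sinh(x2) * np.cos(x1), x1])
    ytil = np.array([np.cosh(x2) * np.cos(x1),  np.cosh(x2) * np.sin(x1), x2])
    return np.cos(alpha) * ybar + np.sin(alpha) * ytil

def first_fundamental_form(alpha, x1, x2, h=1e-6):
    # central finite-difference Jacobian (3x2), then I[y] = J^T J
    J = np.column_stack([
        (y_alpha(alpha, x1 + h, x2) - y_alpha(alpha, x1 - h, x2)) / (2 * h),
        (y_alpha(alpha, x1, x2 + h) - y_alpha(alpha, x1, x2 - h)) / (2 * h)])
    return J.T @ J

rng = np.random.default_rng(1)
for alpha in np.linspace(0.0, np.pi / 2, 7):
    x1, x2 = rng.uniform(0.0, 6.25), rng.uniform(-1.0, 1.0)
    target = np.cosh(x2)**2 * np.eye(2)   # the metric g of the family
    assert np.allclose(first_fundamental_form(alpha, x1, x2), target, atol=1e-6)
```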
\begin{figure}[htbp] \begin{center} \begin{tabular}{ccc} \includegraphics[width=4.0cm]{catenoid_ref3_2}& \includegraphics[width=4.0cm]{catenoid_ref3}& \includegraphics[width=4.0cm]{catenoid_ref3_3}\\ \includegraphics[width=4.0cm]{catenoid_ref3_2_side}& \includegraphics[width=4.0cm]{catenoid_ref3_side}& \includegraphics[width=4.0cm]{catenoid_ref3_3_side} \end{tabular} \caption{Final configurations for the \emph{catenoidal-helicoidal} metric with free boundary condition using tolerances $\widetilde{tol}= 0.1$ (left), 0.025 (middle) and 0.01 (right) for the metric preprocessing of Algorithm \ref{algo:preprocess}. The second row offers a different view of the final deformations.}\label{fig:catenoid_2} \end{center} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \cline{2-7} \multicolumn{1}{c}{ } & \multicolumn{2}{|c|}{$\widetilde{tol}=0.1$} & \multicolumn{2}{|c|}{$\widetilde{tol}=0.025$} & \multicolumn{2}{|c|}{$\widetilde{tol}=0.01$} \\ \cline{2-7} \multicolumn{1}{c|}{ } & Algo 2 & Algo 1 & Algo 2 & Algo 1 & Algo 2 & Algo 1\\ \hline $E_h$ & 36.9461 & 4.01094 & 103.838 & 7.42946 & 146.215 & 8.78622\\ \hline $D_h$ & 2.62428 & 3.19839 & 1.36864 & 2.69258 & 0.853431 & 1.83427\\ \hline \end{tabular} \vspace{0.3cm} \caption{Energies $E_h$ and metric defects $D_h$ produced by Algorithms \ref{algo:preprocess} and \ref{algo:main_GF} for the \emph{catenoidal-helicoidal} metric with free boundary condition. We see that the tolerance $\widetilde{tol}$ of Algorithms \ref{algo:preprocess} controls $D_h$ and that Algorithm \ref{algo:main_GF} does not increase $D_h$ much but reduces $E_h$ substantially. 
The smaller $\widetilde{tol}$ is, the closer the computed surface gets to the catenoid, which is closed (see Figure \ref{fig:catenoid_2}).} \label{tab:catenoid} \end{center} \end{table} \subsubsection{\bf \emph{Helicoid shape}} All the deformations $\mathbf{y}^\alpha$ in \eqref{def_y_alpha} are global minima of the energy but the final deformation is not always catenoid-like as in the previous section. In fact, starting with an initial deformation close to $\mathbf{y}^\alpha$ with $\alpha=0$ leads to a helicoid-like shape. We postpone such an approach to Section~\ref{ss:osc}. An alternative to achieve a helicoid-like shape is to enforce the appropriate boundary conditions as described now. We consider the domain $\Omega=(0,4.5)\times(-1,1)$ and enforce Dirichlet boundary conditions on $\Gamma_D=\{0\}\times (-1,1)$ compatible with $\mathbf{y}^\alpha$ given by \eqref{def_y_alpha} with $\alpha=0$. The mesh $\Th$ consists of 640 (almost square) rectangular cells of diameter $h_{\K}=h\approx 0.17$ (19200 DoFs) and the pseudo time-step is $\tau = 0.01$. We apply Algorithm \ref{algo:preprocess} (preprocessing) with $\widetilde \tau = 0.01$, $\widetilde\varepsilon_0=0.1$ and $\widetilde{tol}=10^{-3}$ to obtain the initial deformation $\mathbf{y}_h^0$. The preprocessing stopped after 2555 iterations, meeting the criterion $\widetilde \tau^{-1}|\widetilde E_h[\tilde \mathbf{y}_h^{n+1}]-\widetilde E_h[\tilde \mathbf{y}_h^n]|\leq \widetilde{tol}$, while 2989 iterations of Algorithm \ref{algo:main_GF} (gradient flow) were needed to reach the stationary deformation. Figure \ref{fig:helicoid} displays the output of the boundary conditions preprocessing and the metric preprocessing, the two stages of Algorithm \ref{algo:preprocess}, as well as two views of the output of Algorithm \ref{algo:main_GF}. The corresponding energies and metric defects are reported in Table \ref{tab:helicoid}.
\begin{figure}[htbp] \begin{center} \includegraphics[width=3.0cm]{helicoid_BC_preprocessed} \includegraphics[width=3.0cm]{helicoid_prestrain_preprocessed} \includegraphics[width=3.0cm]{helicoid_final} \includegraphics[width=3.0cm]{helicoid_final_view3} \caption{Deformed plate for the \emph{catenoidal-helicoidal} metric with Dirichlet boundary conditions on the bottom side corresponding to $\{0\}\times (-1,1)$. From left to right: BC PP, Metric PP, and two views (the last from the top) of the output of Algorithm \ref{algo:main_GF}.} \label{fig:helicoid} \end{center} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{ } & Initial & BC PP & Metric PP & Final \\ \hline $E_h$ & 138020 & 0.658342 & 202.144 & 7.7461 \\ \hline $D_h$ & 5.17664 & 5.16565 & 0.248419 & 1.15764 \\ \hline \end{tabular} \vspace{0.3cm} \caption{Energies and metric defects for the helicoid-like shape with Dirichlet boundary conditions on the bottom side.} \label{tab:helicoid} \end{center} \end{table} \subsection{Disc with positive or negative Gaussian curvature} \label{sec:disc} We now consider a plate consisting of a disc of radius $1$ \begin{equation*} \Omega= \Big\{(x_1,x_2)\in\mathbb{R}^2: \quad x_1^2+x_2^2<1 \Big\}. \end{equation*} We prescribe several immersible metrics $g$ and impose no boundary conditions. The mesh $\Th$ consists of $320$ quadrilateral cells of diameter $0.103553 \leq h_{\K}\leq 0.208375$ (9600 DoFs) and the pseudo time-step is $\tau = 0.01$. Moreover, we initialize the metric preprocessing of Algorithm \ref{algo:preprocess} with the identity function $\widetilde \mathbf{y}_h^0(\mathbf{x})=(\mathbf{x},0)^T$ for $\mathbf{x}\in\Omega$, and $\widetilde\tau=0.05$, $\widetilde\varepsilon_0=0.1$, $\widetilde{tol}=10^{-6}$.
\subsubsection{\bf \emph{Bubble - positive Gaussian curvature}} To obtain a bubble-like shape, we consider for any $\alpha>0$ the metric \begin{equation} g(x_1,x_2) = \begin{bmatrix} 1+\alpha\frac{\pi^2}{4}\cos\left(\frac{\pi}{2}(1-r)\right)^2\frac{x_1^2}{r^2} & \alpha\frac{\pi^2}{4}\cos\left(\frac{\pi}{2}(1-r)\right)^2\frac{x_1x_2}{r^2} \\ \alpha\frac{\pi^2}{4}\cos\left(\frac{\pi}{2}(1-r)\right)^2\frac{x_1x_2}{r^2} & 1+\alpha\frac{\pi^2}{4}\cos\left(\frac{\pi}{2}(1-r)\right)^2\frac{x_2^2}{r^2} \end{bmatrix} \end{equation} with $r:=\sqrt{x_1^2+x_2^2}$. A compatible deformation is given by \begin{equation*} \mathbf{y}(x_1,x_2)=\left(x_1,x_2,\sqrt{\alpha}\sin\left(\frac{\pi}{2}(1-r)\right)\right)^T, \end{equation*} i.e., $\mathbf{y}$ is an isometric immersion $\I[\mathbf{y}]=g$. In the following, we choose $\alpha=0.2$. In the absence of boundary conditions and forcing term, the flat configuration $\tilde \mathbf{y}_h^0(\Omega)=\Omega$ has zero energy but has a metric defect of $D_h= 1.0857$. Algorithm \ref{algo:preprocess} (preprocessing) performs 877 iterations to deliver an energy $E_h=35.3261$ and a metric defect $D_h=0.0999797$. Algorithm \ref{algo:preprocess} only stretches the plate, which remains flat; see Figure \ref{fig:bubble} (left and middle). Algorithm \ref{algo:main_GF} (gradient flow) then deforms the plate out of plane, and reaches a stationary state after 918 iterations with $E_h=2.08544$, while keeping the metric defect at $D_h=0.087839$; see Figure \ref{fig:bubble}-right. We point out that the discussion after \eqref{E:coercive} also applies to Algorithm \ref{algo:main_GF}, i.e., a flat initial configuration ($y_3=0$) will theoretically lead to flat deformations throughout the gradient flow. However, in this example and the ones in Section \ref{S:gel-discs}, the initial deformation produced by Algorithm \ref{algo:preprocess} has a non-vanishing third component $y_3$ (of the order of machine precision).
Furthermore, Algorithm \ref{algo:preprocess} may also produce discontinuous configurations (as for the initial deformation in Figure~\ref{fig:bubble} left and middle) to accommodate the constraint and will thus have a relatively large energy due to the jump penalty term. These two aspects combined may be responsible for the main gradient flow Algorithm~\ref{algo:main_GF} producing out-of-plane deformations even when starting from a theoretically flat initial configuration. This is the case when starting with a disc with positive Gaussian curvature metric as in Figure~\ref{fig:bubble}. \begin{figure}[htbp] \begin{center} \begin{tabular}{cc||c} \includegraphics[width=4.1cm]{disc_bubble_preprocessed_flat_view}& \includegraphics[width=4.1cm]{disc_bubble_preprocessed}& \includegraphics[width=4.1cm]{disc_bubble_final} \end{tabular} \caption{Deformed plate for the disc with positive Gaussian curvature metric. Algorithm \ref{algo:preprocess} stretches the plate but keeps it flat (left and middle). Algorithm \ref{algo:main_GF} gives rise to an ellipsoidal shape (right).} \label{fig:bubble} \end{center} \end{figure} \subsubsection{\bf \emph{Hyperbolic paraboloid - negative Gaussian curvature}} We consider the immersible metric $g$ with negative Gaussian curvature \begin{equation} g(x_1,x_2) = \begin{bmatrix} 1+x_2^2 & x_1x_2 \\ x_1x_2 & 1+x_1^2 \end{bmatrix}. \end{equation} A compatible deformation is given by $\mathbf{y}(x_1,x_2)=(x_1,x_2,x_1x_2)^T$, i.e., $\I[\mathbf{y}]=g$. In this setting, the flat configuration has a prestrain defect of $D_h= 1.56565$ (still vanishing energy). Algorithm \ref{algo:preprocess} (preprocessing) performs 856 iterations to reach the energy $E_h= 50.3934$ and metric defect $D_h=0.0999757$. Algorithm \ref{algo:main_GF} (gradient flow) executes 1133 iterations to deliver an energy $E_h=1.83112$ and metric defect $D_h=0.0980273$.
Again, the metric defect remains basically constant throughout the main gradient flow, while the energy is significantly decreased. Figure \ref{fig:hyper_para} shows the initial (left) and final (middle) deformations of Algorithm \ref{algo:preprocess} and the output of Algorithm \ref{algo:main_GF} (right), which exhibits a saddle point structure. \begin{figure}[htbp] \begin{center} \begin{tabular}{cc||c} \includegraphics[width=4.1cm]{disc_hyper_para_preprocessed_flat_view}& \includegraphics[width=4.1cm]{disc_hyper_para_preprocessed}& \includegraphics[width=4.1cm]{disc_hyper_para_final} \end{tabular} \caption{Deformed plate for the disc with negative Gaussian curvature. Algorithm \ref{algo:preprocess} stretches the plate but keeps it flat (left and middle). Algorithm \ref{algo:main_GF} gives rise to a saddle shape (right). Compare with Figure~\ref{fig:bubble}. } \label{fig:hyper_para} \end{center} \end{figure} We point out that Algorithm \ref{algo:preprocess} gives rise to small gaps between elements of the deformed subdivisions as a consequence of not including jump stabilization terms in the bilinear form \eqref{pre-gf:bilinear}. These gaps are reduced by Algorithm \ref{algo:main_GF}. \subsubsection{\bf \emph{Oscillating boundary}}\label{ss:osc} We construct an immersible metric in polar coordinates $(r,\theta)$ with a six-fold oscillation near the boundary of the disc $\Omega$. Let $\widetilde{g}(r,\theta)=\I[\widetilde \mathbf{y}(r,\theta)]$ be the first fundamental form of the deformation \begin{equation} \label{def:tilde_y_polar} \widetilde \mathbf{y}(r,\theta)= \big(r\cos(\theta),r\sin(\theta),0.2r^4\sin(6\theta)\big). \end{equation} The expression of the prestrain metric $g=\I[\mathbf{y}]$ in Cartesian coordinates is then given by \eqref{E:Jacobian} and $\mathbf{y}(x_1,x_2) = \widetilde \mathbf{y}(r,\theta)$.
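A simple way to evaluate the Cartesian metric $g$ in practice is to differentiate the composed map numerically; a minimal sketch (finite-difference step chosen arbitrarily) that also confirms $g$ is symmetric positive definite away from the origin:

```python
import numpy as np

def y_cart(x1, x2):
    # the deformation y(x1,x2) = y_tilde(r,theta) in Cartesian coordinates
    r = np.hypot(x1, x2)
    theta = np.arctan2(x2, x1)
    return np.array([x1, x2, 0.2 * r**4 * np.sin(6.0 * theta)])

def metric(x1, x2, h=1e-6):
    # prestrain metric g = I[y] = (grad y)^T (grad y), central differences
    J = np.column_stack([
        (y_cart(x1 + h, x2) - y_cart(x1 - h, x2)) / (2 * h),
        (y_cart(x1, x2 + h) - y_cart(x1, x2 - h)) / (2 * h)])
    return J.T @ J

g = metric(0.5, 0.3)
# g = Id + w w^T with w the gradient of the third component, so det(g) >= 1
```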
We set the parameters \[ \tau=0.05,\quad \widetilde\tau=0.05, \quad \widetilde\varepsilon_0=0.1, \quad \widetilde{tol}=10^{-4}, \quad tol=10^{-6}, \] and note that Algorithm \ref{algo:main_GF} (gradient flow) does not necessarily stop at global minima of the energy. Local extrema are frequently achieved and they are, in fact, of particular interest in many applications. To illustrate this property, we consider two initial deformations and run Algorithms \ref{algo:preprocess} and \ref{algo:main_GF}. \smallskip\noindent \textbf{Case 1: boundary oscillation.} We choose $\widetilde\mathbf{y}_h^0$ to be the local nodal interpolation of $\mathbf{y}=\widetilde \mathbf{y}\circ\boldsymbol{\psi}$ into $[\V^k_h]^3$, with $\widetilde\mathbf{y}$ given by \eqref{def:tilde_y_polar}. The output deformations of Algorithms \ref{algo:preprocess} and \ref{algo:main_GF} are depicted in Figure \ref{fig:oscillation_boundary}. The former becomes the initial configuration $\mathbf{y}_h^0$ of Algorithm \ref{algo:main_GF} and is almost the same as $\widetilde\mathbf{y}_h^0$, which is approximately a disc with six-fold oscillations; see Figure \ref{fig:oscillation_boundary} (left). This is due to the fact that $\I[\widetilde\mathbf{y}_h^0]$ is already close to the target metric $g$. Algorithm \ref{algo:main_GF} (gradient flow) breaks the symmetry: two peaks are amplified while the other four are reduced. After the preprocessing, the energy is $E_h=18.0461$ and the metric defect is $D_h=0.00208473$. The final energy is $E_h=13.6475$ while the final metric defect is $D_h=0.00528294$. \begin{figure}[htbp] \begin{center} \includegraphics[width=3cm]{ob_initial} \includegraphics[width=3cm]{ob_final} \includegraphics[width=4cm]{ob_final_view2} \caption{Deformed plate for the disc with oscillating boundary using the initial deformation described in \textbf{Case 1}.
Left: output of Algorithm \ref{algo:preprocess} (preprocessing); Middle: output of Algorithm \ref{algo:main_GF} (gradient flow); Right: another view of the output of Algorithm \ref{algo:main_GF}. } \label{fig:oscillation_boundary} \end{center} \end{figure} \smallskip\noindent \textbf{Case 2: no boundary oscillation.} We run Algorithm \ref{algo:preprocess} with the bi-Laplacian problem \eqref{bi-Laplacian} with fictitious force $\widehat \mathbf{f}=(0,0,1)^T$ and boundary condition $\boldsymbol{\varphi}(\mathbf{x})=(\mathbf{x},0)$ on $\partial\Omega$ (but without $\Phi$). The output of Algorithm \ref{algo:preprocess} is an ellipsoid-like surface without the oscillatory boundary observed in Case 1. This corresponds to an underlying metric rather different from the target $g$. Algorithm \ref{algo:main_GF} (gradient flow) is unable to improve on the metric defect because it is designed to decrease the bending energy. Therefore, the output of Algorithm \ref{algo:main_GF} is again an ellipsoidal surface, quite different from the Case 1 output displayed in Figure \ref{fig:oscillation_boundary}. In this case, $D_h = 0.801464$ and $E_h = 0.0377544$, leading to a smaller bending energy but a larger metric defect compared with Case 1. \begin{figure}[htbp] \begin{center} \begin{tabular}{cc|cc} \includegraphics[width=0.2\textwidth]{ob_2_PP}& \includegraphics[width=0.2\textwidth, trim=500 300 500 300, clip]{PP_ob2}& \includegraphics[width=0.23\textwidth]{ob_2_new}& \includegraphics[width=0.2\textwidth,trim=500 300 500 300, clip]{final_ob2}\\ (a) & (b) & (c) & (d) \end{tabular} \caption{Ellipsoid-like deformation of a disc without boundary oscillation when using the initial deformation described in \textbf{Case 2}. (a)-(b): output of Algorithm \ref{algo:preprocess} (preprocessing) with maximal third component $y_3$ of the deformation about $7.8\times 10^{-2}$; (c)-(d): output of Algorithm \ref{algo:main_GF} (gradient flow) with maximal $y_3 \approx 4.4\times 10^{-2}$.
(a) and (c) are views from the top while (b) and (d) are views from the side where the third component of the deformation is scaled by a factor 10.} \label{fig:oscillation_boundary_2} \end{center} \end{figure} \subsection{Gel discs}\label{S:gel-discs} Discs made of a NIPA gel with various monomer concentrations can be manufactured in laboratories \cite{sharon2010mechanics,klein2007shaping}. NIPA gels undergo a differential shrinking in warm environments depending on the concentration. Monomer concentrations injected at the center of the disc generate prestrain metrics depending solely on the distance to the center. We thus propose, inspired by \cite[Section 4.2]{sharon2010mechanics}, prestrained metrics $\widetilde{g}(r,\theta)$ in polar coordinates of the form \eqref{E:metric-eta} with \begin{equation}\label{positive-curvature} \eta(r) = \begin{cases} \frac{1}{\sqrt{K}}\sin(\sqrt{K}r) \quad & K>0, \\ \frac{1}{\sqrt{-K}}\sinh(\sqrt{-K}r) \quad & K<0. \end{cases} \end{equation} In view of Section \ref{S:admissibility}, these metrics are immersible, namely there exist compatible deformations $\mathbf{y}$ such that $\I[\mathbf{y}]=g$ (isometric immersions). We now construct computationally isometric embeddings $\mathbf{y}$ for both $K>0$ (elliptic) and $K<0$ (hyperbolic). It turns out that they possess a constant Gaussian curvature $\kappa = K$ according to \eqref{E:Gauss-curvature}. We let the domain $\Omega$ be the unit disc centered at the origin, do not enforce any boundary conditions and let $\mathbf{f} = \mathbf{0}$. The partition of $\Omega$ is as in Section \ref{sec:disc} and \[ \widetilde\tau=0.05,\quad \widetilde\varepsilon_0=0.1,\quad \widetilde{tol}=10^{-4}, \quad tol=10^{-6}. \] {\bf Case $K = 2$ (elliptic):} We use the fictitious force $\widehat \mathbf{f}=(0,0,1)^T$ in Algorithm \ref{algo:preprocess} (preprocessing) and the pseudo-time step $\tau=0.05$ in Algorithm \ref{algo:main_GF} (gradient flow). 
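Before reporting results, we note that the constant-curvature claim $\kappa = K$ can be checked numerically. The Python sketch below assumes, for illustration only, that \eqref{E:metric-eta} has the polar form $\mathrm{diag}(1,\eta(r)^2)$, for which the Gaussian curvature is $\kappa = -\eta''(r)/\eta(r)$:

```python
import math

def eta(r, K):
    # radial profile from \eqref{positive-curvature}
    if K > 0:
        return math.sin(math.sqrt(K) * r) / math.sqrt(K)
    return math.sinh(math.sqrt(-K) * r) / math.sqrt(-K)

def gauss_curvature(r, K, h=1e-5):
    # kappa = -eta''(r)/eta(r), with eta'' approximated by a
    # second-order central finite difference
    second = (eta(r + h, K) - 2.0 * eta(r, K) + eta(r - h, K)) / h ** 2
    return -second / eta(r, K)

for K in (2.0, -2.0):
    # the computed curvature matches K up to finite-difference error
    print(K, round(gauss_curvature(0.5, K), 3))
```

Indeed, $\eta'' = -K\eta$ for the sine profile and $\eta'' = -K\eta$ (with $K<0$) for the hyperbolic sine profile, so both cases return curvature $K$.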
We obtain a \emph{spherical-like} final deformation; see Figure \ref{fig:gel-disc-p} and Table \ref{tab:gel_disc_1} for the results. \begin{figure}[htbp] \begin{center} \includegraphics[width=4.8cm]{gel_disc_p_preprocessed} \includegraphics[width=4.8cm]{gel_disc_p_final} \caption{Deformed plate for the disc with constant Gaussian curvature $K=2$ (elliptic). Outputs of Algorithm \ref{algo:preprocess} (left) and Algorithm \ref{algo:main_GF} (right).} \label{fig:gel-disc-p} \end{center} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|} \cline{2-3} \multicolumn{1}{c|}{ } & Algorithm \ref{algo:preprocess} & Algorithm \ref{algo:main_GF} \\ \hline $E_h$ & 156.404 & 9.35368\\ \hline $D_h$ & 0.0999494 & 0.188454\\ \hline \end{tabular} \vspace{0.3cm} \caption{Energy and prestrain defect for disc with constant curvature $K=2$ (elliptic).} \label{tab:gel_disc_1} \end{center} \end{table} \noindent {\bf Case $K=-2$ (hyperbolic):} We experiment with two different initial deformations for the metric preprocessing of Algorithm \ref{algo:preprocess}: (i) we take the identity map or (ii) we solve the bi-Laplacian problem \eqref{bi-Laplacian} with a fictitious force $\widehat \mathbf{f}=(0,0,1)^T$ and boundary condition $\boldsymbol{\varphi}(\mathbf{x})=(\mathbf{x},0)$ on $\partial\Omega$ (but without $\Phi$). Algorithm \ref{algo:preprocess} produces \emph{saddle-like} surfaces in both cases but with a different number of waves; see Figure \ref{fig:gel-disc-n}. Algorithm \ref{algo:main_GF} uses the pseudo-time steps $\tau=0.00625$ and $\tau=0.0125$ for (i) and (ii), respectively, while the other parameters remain unchanged. Table \ref{tab:gel_disc_2} documents the results. \begin{figure}[htbp] \begin{center} \includegraphics[width=3cm]{gel_disc_n_final} \includegraphics[width=3.4cm]{gel_disc_n_final_2} \includegraphics[width=3.5cm]{gel_disc_n_final_2_view2} \caption{Deformed plate for the disc with constant Gaussian curvature $K=-2$ (hyperbolic). 
Outputs of Algorithm {\ref{algo:main_GF}} with initialization (i) (left) and initialization (ii) (middle) and another view with initialization (ii) (right). } \label{fig:gel-disc-n} \end{center} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c||c|c|} \cline{2-5} \multicolumn{1}{c|}{ } & \multicolumn{2}{c||}{Initialization (i)} & \multicolumn{2}{c|}{Initialization (ii)}\\ \cline{2-5} \multicolumn{1}{c|}{ } & Algorithm \ref{algo:preprocess} & Algorithm \ref{algo:main_GF} & Algorithm \ref{algo:preprocess} & Algorithm \ref{algo:main_GF} \\ \hline $E_h$ & 699.396 & 6.92318 & 699.399 & 12.0978\\ \hline $D_h$ & 0.0998791 & 0.245552 & 0.0999183 & 0.232627 \\ \hline \end{tabular} \vspace{0.3cm} \caption{Energy and metric defect for disc with constant Gaussian curvature $K=-2$ (hyperbolic) for two different initial deformations of Algorithm \ref{algo:preprocess}: (i) identity map and (ii) solution to bi-Laplacian with fictitious force.} \label{tab:gel_disc_2} \end{center} \end{table} It is worth mentioning that for the 3d slender model described in \cite{sharon2010mechanics}, it is shown that when $K<0$ the thickness $s$ of the disc influences the number of waves of the minimizing deformation. Our reduced model corresponds to the asymptotic limit $s\to0$ and hence cannot capture this feature. However, it reproduces a variety of deformations upon starting Algorithm \ref{algo:preprocess} with suitable initial configurations. \section{Conclusions}\label{S:conclusions} In this article, we design and implement a numerical scheme for the simulation of large deformations of prestrained plates. Our contributions are: \smallskip\noindent 1. {\it Model and asymptotics.} We present a formal asymptotic limit of a 3d hyperelastic energy in the bending regime.
The reduced model, rigorously derived in \cite{lewicka2016}, consists of minimizing a nonlinear energy involving the second fundamental form of the deformed plate and the target metric under a nonconvex metric constraint. We show that this energy is equivalent to a simpler quadratic energy that replaces the second fundamental form by the Hessian of the deformation. This form is more amenable to computation and is further discretized. \smallskip\noindent 2. {\it LDG: discrete Hessian.} We introduce a local discontinuous Galerkin (LDG) approach for the discretization of the reduced energy, thereby replacing the Hessian by a reconstructed Hessian. The latter consists of three parts: the broken Hessian of the deformation, a lifting of the jumps of the broken gradient of the deformation, and a lifting of the jumps of the deformation. In contrast to interior penalty dG, the penalty parameters must be positive for stability but not necessarily large. The formulation of the discrete energy with LDG is conceptually simpler and yields a method with reduced CPU time; this comparison does not account for the computation of the discrete Hessian of each basis function, which is done only once at the beginning. \smallskip\noindent 3. {\it Discrete gradient flow.} We propose and implement a discrete $H^2$-gradient flow to decrease the discrete energy while keeping the metric defect under control. We emphasize the performance of Algorithm \ref{algo:main_GF} (gradient flow). The construction of suitable initial deformations by Algorithm \ref{algo:preprocess} (initialization) is somewhat ad hoc, leaving room for improvement in future studies. \smallskip\noindent 4. {\it Simulations.} We present several numerical experiments to investigate the performance of the proposed LDG approach and the model capabilities. A rich variety of configurations with and without boundary conditions, some of practical value, are accessible by this computational modeling.
We also show a superior performance of LDG relative to the interior penalty dG method of \cite{bonito2018} for $g=I_2$. \section*{Acknowledgment} Ricardo H. Nochetto and Shuo Yang were partially supported by the NSF Grants DMS-1411808 and DMS-1908267. Andrea Bonito and Diane Guignard were partially supported by the NSF Grant DMS-1817691. \bibliographystyle{amsplain}
\section{Sketch of Analysis} To analyze the algorithm shown in Figure~\ref{alg:lazy}, we first decompose the regret into a number of terms, which are then bounded one by one. Let $\widetilde{x}_{t+1}^a \sim P(.\,|\,x_t,a,\TTh_t)$, i.e., an imaginary next-state sample assuming we take action $a$ in state $x_t$ when the parameter is $\widetilde \theta_t$. Also let $\widetilde{x}_{t+1}\sim P(.\,|\,x_t,a_t,\TTh_t)$ and $x_{t+1}\sim P(.\,|\,x_t,a_t,\theta_*)$. By the average cost Bellman optimality equation \citep{bertsekas1995dynamic}, for a system parametrized by $\widetilde \theta_t$, we can write \begin{align} \label{eq:bellman} J(\TTh_t) + h_t(x_t) &= \min_{a\in \cA} \left\{ \ell(x_t,a) + \EE{h_t(\widetilde{x}_{t+1}^a)\,|\,\cF_t, \TTh_t} \right\} \;. \end{align} Here $h_t(x) = h(x, \TTh_t)$ is the differential value function for a system with parameter $\TTh_t$. We assume there exists $H>0$ such that $h_t(x)\in [0,H]$ for any $x\in \cX$. Because the algorithm takes the optimal action with respect to the parameter $\widetilde\theta_t$ and $a_t$ is the action at time $t$, the right-hand side of the above equation is minimized and thus \begin{align} \label{eq:bellman2} J(\TTh_t) + h_t(x_t) = \ell(x_t,a_t) + \EE{h_t(\widetilde{x}_{t+1})\,|\,\cF_t, \TTh_t} \;. \end{align} The regret decomposes into two terms as shown in Lemma \ref{lemma:regret-decomposition}. \begin{lemma} \label{lemma:regret-decomposition} We can decompose the regret as follows: \begin{align*} R_T &= \sum_{t=1}^T \EE{\ell(x_t,a_t) - J(\theta_*)} \leq \, H \sum_{t=1}^T \EE{\one{A_t}}+\sum_{t=1}^T \EE{h_{t}(x_{t+1})-h_t(\widetilde x_{t+1})} + \, H \end{align*} where $A_t$ denotes the event that the algorithm has changed its policy at time $t$. \end{lemma} The first term $H \sum_{t=1}^T \EE{\one{A_t}}$ is related to the sequential changes in the differential value functions, $h_{t+1} - h_t$.
We control this term by keeping the number of switches small; $h_{t+1} = h_t$ as long as the same parameter $\widetilde\theta_t$ is used. Notice that under DS-PSRL, $\sum_{t=1}^T \one{A_t} \leq \log_2(T)$ always holds. Thus, the first term can be bounded by $H \sum_{t=1}^T \EE{\one{A_t}} \leq H \log_2(T)$. The second term $\sum_{t=1}^T \EE{h_{t}(x_{t+1})-h_t(\widetilde x_{t+1})}$ is related to how fast the posterior concentrates around the true parameter vector. To simplify the exposition, we define \begin{align*} \Delta_t =& \, \int_\cX \Big( P(x\,|\,x_t,a_t,\theta_*) - P(x\,|\,x_t,a_t,\TTh_t) \Big) h_t(x) dx \nonumber \\ =& \, \EE{h_{t}(x_{t+1})-h_t(\widetilde x_{t+1}) \middle | x_t, a_t} \; . \end{align*} Recall that $\widetilde{x}_{t+1}\sim P(.\,|\,x_t,a_t,\TTh_t)$ while $x_{t+1}\sim P(.\,|\,x_t,a_t,\theta_*)$; thus, from the tower rule, we have \begin{align*} \EE{\Delta_t }=\EE{h_{t}(x_{t+1})-h_t(\widetilde x_{t+1})} \; . \end{align*} The following two lemmas bound $\sum_{t=1}^T \EE{\Delta_t } $ under Assumptions~\ref{ass:lipschitz} and~\ref{ass:concentrating}. \begin{lemma} \label{lemma:delta1} Under Assumption~\ref{ass:lipschitz}, let $m$ be the number of episodes up to time $T$; then \begin{align*} \EE{\sum_{t=1}^T \Delta_t} &\le C H \sqrt{ T \EE{ \sum_{j=1}^{m} M_{j} \abs{\theta_{*} - \TTh_{j}}^2 }} \; \end{align*} where $M_{j}$ is the number of steps in the $j$th episode. \end{lemma} \begin{lemma} \label{lemma:delta2} Under Assumption \ref{ass:concentrating}, we have \begin{align*} \EE{ \sum_{j=1}^{m} M_{j} \abs{\theta_{*} - \TTh_{j}}^2 } \le 2 C' \log^2 T \;. \end{align*} \end{lemma} Thus, \begin{align*} \EE{\sum_{t=1}^T \Delta_t} &\le C H \sqrt{ 2 C' T \log^2 T }= O(\sqrt{T } \log T) \;. \end{align*} Combining the above results, we have \begin{align} R_T \leq & \, H \log_2(T) + C H \sqrt{ 2 C' T \log^2 T } + H = \, O(CH \sqrt{C' T } \log T) \; . \nonumber \end{align} This concludes the proof.
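The episode bookkeeping used above and in the proofs can be checked directly in code. A small Python sketch (an illustration, not part of the paper) of the doubling schedule confirms that the number of episodes $m$ is about $\log_2 T$ and that $M_j/N_{j-1}\le 2$, the fact labeled (a) in the proof of Lemma~\ref{lemma:delta2}:

```python
import math

def doubling_episodes(T):
    # episode lengths 1, 2, 4, ... under the DS-PSRL schedule, truncated at T
    lengths, L, total = [], 1, 0
    while total < T:
        M = min(L, T - total)
        lengths.append(M)
        total += M
        L *= 2
    return lengths

T = 10 ** 6
Ms = doubling_episodes(T)
m = len(Ms)                      # number of episodes = number of resamplings
Ns = [1]                         # N_0 = 1 and N_j = N_{j-1} + M_j
for M in Ms:
    Ns.append(Ns[-1] + M)
ratios = [Ms[j] / Ns[j] for j in range(m)]   # M_j / N_{j-1}

print(m, math.ceil(math.log2(T)))  # m is about log2(T)
print(max(ratios) <= 2)            # True
```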
\section{Proof of Lemma \ref{lemma:regret-decomposition}} \begin{proof} For a deterministic schedule, \begin{align*} \EE{J(\theta_*)} = \EE{ J(\TTh_{t}) } \;. \end{align*} Thus we can write \begin{align*} R_T &= \sum_{t=1}^T \EE{\ell(x_t,a_t) - J(\theta_*)}\\ &= \sum_{t=1}^T \EE{\ell(x_t,a_t) - J(\TTh_t)} \\ &= \sum_{t=1}^T \EE{h_t(x_t) - \EE{h_t(\widetilde x_{t+1})\,|\, \cF_t,\TTh_t}} \\ &= \sum_{t=1}^T \EE{h_t(x_t) - h_t(\widetilde x_{t+1})} \;. \end{align*} Thus, we can bound the regret using \begin{align*} R_T &= \EE{h_1(x_1) - h_{T+1}(x_{T+1})} \\ &+ \sum_{t=1}^T \EE{h_{t+1}(x_{t+1}) - h_t(\widetilde x_{t+1})}\\ &\le H + \sum_{t=1}^T \EE{h_{t+1}(x_{t+1}) - h_t(\widetilde x_{t+1})}\;, \end{align*} where the inequality follows because $h_1(x_1)\le H$ and $-h_{T+1}(x_{T+1})\le 0$. Let $A_t$ denote the event that the algorithm has changed its policy at time $t$. We can write \begin{align*} R_T - H &\leq \sum_{t=1}^T \EE{h_{t+1}(x_{t+1}) - h_t(\widetilde x_{t+1})}\\ &= \sum_{t=1}^T \EE{h_{t+1}(x_{t+1}) - h_t(x_{t+1})} \\ & +\sum_{t=1}^T \EE{h_{t}(x_{t+1})-h_t(\widetilde x_{t+1})}\\ &\leq H \sum_{t=1}^T \EE{\one{A_t}}\\ &+\sum_{t=1}^T \EE{h_{t}(x_{t+1})-h_t(\widetilde x_{t+1})}\;. \end{align*} \end{proof} \section{Proof of Lemma \ref{lemma:delta1}} \begin{proof} By the Cauchy-Schwarz inequality and the Lipschitz dynamics assumption, \begin{align*} \Delta_t &\le \norm{P(.|x_t,a_t,\theta_*) - P(.|x_t,a_t,\TTh_t)}_1 \norm{h_t}_\infty \\ &\le C H \abs{\theta_{*} - \TTh_{t}} \;. \end{align*} Recall that $\TTh_t = \TTh_{\tau_t}$. Let $T_j$ be the length of episode $j$. Because we have $m$ episodes, we can write \begin{align*} \sum_{t=1}^T \Delta_t &\le \sqrt{T \sum_{t=1}^T \Delta_t^2}\\ &\le C H \sqrt{ T \sum_{j=1}^{m} \sum_{s=1}^{T_j} \abs{\theta_{*} - \TTh_{j}}^2 } \\ &= C H \sqrt{ T \sum_{j=1}^{m} M_{j} \abs{\theta_{*} - \TTh_{j}}^2 } \,, \end{align*} where $M_{j}$ is the number of steps in the $j$th episode.
Thus \begin{align*} \EE{\sum_{t=1}^T \Delta_t} &\le C H \EE{\sqrt{ T \sum_{j=1}^{m} M_{j} \abs{\theta_{*} - \TTh_{j}}^2 }} \\ &\le C H \sqrt{ T \EE{ \sum_{j=1}^{m} M_{j} \abs{\theta_{*} - \TTh_{j}}^2 }} \;. \end{align*} \end{proof} \section{Proof of Lemma \ref{lemma:delta2}} \begin{proof} Let $S = \EE{ \sum_{j=1}^{m} M_{j} \abs{\theta_{*} - \TTh_{j}}^2 }$. Let $N_{j}$ be one plus the number of steps in the first $j$ episodes, so that $N_{j} = N_{j-1} + M_{j}$ and $N_{0}=1$. We write \begin{align*} S &= \EE{ \sum_{j=1}^{m} N_{j-1} \abs{\theta_{*} - \TTh_{j}}^2 \frac{M_{j}}{N_{j-1}} } \\ & \stackrel{(a)}{\le} 2 \EE{ \sum_{j=1}^{m} N_{j-1} \abs{\theta_{*} - \TTh_{j}}^2 } \\ & \stackrel{(b)}{\le} 2 \log T \max_j \EE{N_{j-1} \abs{\theta_{*} - \TTh_{j}}^2 } \\ & \stackrel{(c)}{\le} 2 C' \log^2 T \;, \end{align*} where (a) follows from the fact that $M_{j}/N_{j-1}\le 2$ for all $j$, (b) follows from \[ \EE{ \sum_{j=1}^{m} N_{j-1} \abs{\theta_{*} - \TTh_{j}}^2 } \leq m \max_j \EE{N_{j-1} \abs{\theta_{*} - \TTh_{j}}^2 } \] and $m \le \log T$, and (c) follows from Assumption~\ref{ass:concentrating}. \end{proof} \section{Proof of Lemma \ref{lemma:poi-lipschitz}} \label{proof:poi-lipschitz} \begin{proof} To simplify the exposition, we use $p$ to denote $P(s=a|X)$ in this proof. Notice that $z(\theta)=\frac{1-p} {1-p ^{1/ \theta}}$.
Based on the definition of $\| \cdot\|_1$, we have \small \begin{align} & \, \| P(\cdot|X,a,\theta) - P(\cdot|X,a,\theta') \|_1 \nonumber \\ = & \, \left| p^{\frac{1}{\theta}} - p^{\frac{1}{\theta'}} \right| +\sum_{s \neq a} \left | \frac{P(s | X)}{z(\theta)} - \frac{P(s | X)}{z(\theta')} \right | \nonumber \\ =& \, \left| p^{\frac{1}{\theta}} - p^{\frac{1}{\theta'}} \right| +\left | \frac{1-p ^{1/ \theta}}{1-p} - \frac{1-p ^{1/ \theta'}}{1-p}\right | \sum_{s \neq a} P(s | X) \nonumber \\ =& \, \left| p^{\frac{1}{\theta}} - p^{\frac{1}{\theta'}} \right| +\left | \frac{1-p ^{1/ \theta}}{1-p} - \frac{1-p ^{1/ \theta'}}{1-p}\right | (1-p) \nonumber \\ = &\, 2 \left| p^{\frac{1}{\theta}} - p^{\frac{1}{\theta'}} \right| . \end{align} \normalsize We also define $h(\theta, p) \stackrel{\Delta}{=} p^{\frac{1}{\theta}}$. Based on calculus, we have \small \begin{align} \frac{\partial h}{\partial \theta} (\theta, p) =& \, p^{\frac{1}{\theta}} \log \left(\frac{1}{p} \right) \frac{1}{\theta^2} \nonumber \\ \frac{\partial^2 h}{\partial \theta \partial p} (\theta, p) =& \, \frac{1}{\theta^2} p^{\frac{1}{\theta}-1} \left[\frac{1}{\theta} \log \left( \frac{1}{p} \right)-1 \right]. \end{align} \normalsize The first equation implies that $h$ is strictly increasing in $\theta$, and the second equation implies that for all $\theta>0$, $\frac{\partial h}{\partial \theta} (\theta, p)$ is maximized by setting $p=\exp(-\theta)$. This implies that for all $\theta>0$, we have \[ 0<\frac{\partial h}{\partial \theta} (\theta, p) \leq \frac{\partial h}{\partial \theta} (\theta, \exp(-\theta)) = \frac{1}{e \theta}.\] Hence, for all $\theta \geq 1$, we have $0<\frac{\partial h}{\partial \theta} (\theta, p) \leq \frac{1}{e \theta} \leq \frac{1}{e}$. Consequently, $h(\theta, p)$ as a function of $\theta$ is globally $\left( \frac{1}{e} \right)$-Lipschitz continuous for $\theta \geq 1$. 
So we have \small \[ \| P(\cdot|X,a,\theta) - P(\cdot|X,a,\theta') \|_1 = 2 \left| p^{\frac{1}{\theta}} - p^{\frac{1}{\theta'}} \right| \leq \frac{2}{e} |\theta -\theta'|. \] \normalsize \end{proof} \section{Posterior Concentration for POI Recommendation} \label{appendix_concentration} Recall that the parameter space $\Theta = \left \{ \theta_1, \ldots, \theta_K \right \}$ is a finite set, and $\theta_*$ is the true parameter. Notice that if $P(s_t=a_t | X_t) $ is close to $0$ or $1$, then the DS-PSRL will not learn much about $\theta_*$ at time $t$, since in such cases $P(s_t | X_t, a_t, \theta)$'s are roughly the same for all $\theta \in \Theta$. Hence, to derive the concentration result, we make the following simplifying assumption: \[ \Delta_P \leq P(s | X) \leq 1-\Delta_P \quad \forall (X,s) \] for some $\Delta_P \in (0, 0.5)$. Moreover, we assume that all the elements in $\Theta$ are distinct, and define \[ \Delta_{\theta} \stackrel{\Delta}{=} \min_{\theta \in \Theta, \theta \neq \theta_*} |\theta - \theta_*| \] as the minimum gap between $\theta_*$ and another $\theta \neq \theta_*$. To simplify the exposition, we also define \begin{align} B \stackrel{\Delta}{=} &\, 2 \max \left \{ \max_{\theta \in \Theta} \max_{p \in [\Delta_P, 1-\Delta_P] } \left | \log \left( \frac{p^{1/\theta}}{p^{1/\theta_*}} \right) \right| , \right. \nonumber \\ & \left. \max_{\theta \in \Theta} \max_{p \in [\Delta_P, 1-\Delta_P] } \left | \log \left( \frac{1- p^{1/\theta}}{1- p^{1/\theta_*}} \right) \right| \right \} \nonumber \\ c_0 \stackrel{\Delta}{=} & \, \frac{ \min \left \{ \ln \left(\frac{1}{\Delta_P}\right) \Delta_P, \ln \left( \frac{1}{1-\Delta_P} \right) (1-\Delta_P) \right \}}{(\max_{\theta \in \Theta} \theta)^2} \nonumber \\ \kappa \stackrel{\Delta}{=} & \, \left( \max_{\theta \in \Theta} \theta - \min_{\theta \in \Theta} \theta \right)^2. 
\nonumber \end{align} Then we have the following lemma about the concentrating posterior of this problem: \begin{lemma} (Concentration) \label{lemma:poi-concentration} Assume that $\theta_t$ is sampled from $P_t$ at time step $t$. Then, under the above assumptions, for any $t>2$, we have \begin{align*} \EE{(\theta_t -\theta_*)^2 } & \leq \frac{3}{e c_0^2 t} \frac{1-P_0(\theta_*)}{P_0(\theta_*)} \times \\ &\exp \left \{ - c_0^2 \Delta_\theta^2 t + \sqrt{2 B^2 t \ln \left( K \kappa t^2 \right)} \right \} + \frac{1}{t^2}, \end{align*} where $B$, $c_0$, and $\kappa$ are the constants defined above. Note that they only depend on $\Delta_P$ and $\Theta$. \end{lemma} Notice that Lemma~\ref{lemma:poi-concentration} implies that \begin{align*} t \EE{(\theta_t -\theta_*)^2 } \leq & O \left( \exp \left \{ - c_0^2 \Delta_\theta^2 t + \sqrt{2 B^2 t \ln \left( K \kappa t^2 \right)} \right \} \right ) + \frac{1}{t} = O(1) \end{align*} for any $t>2$. This directly implies that $ \max_j \EE{ N_{j-1} \abs{\theta_{*} - \TTh_{j}}^2 } =O(1)$. \subsection{Proof of Lemma \ref{lemma:poi-concentration}} \begin{proof} We use $P_0$ to denote the prior over $\theta$, and use $P_t$ to denote the posterior distribution over $\theta$ at the end of time $t$. Note that by Bayes rule, we have \small \[ P_t(\theta) \propto P_0(\theta) \prod_{\tau=1}^{t} P(s_\tau | X_\tau, a_\tau, \theta) \quad \forall t \, \text{ and } \forall \theta \in \Theta . \] \normalsize We also define the posterior log-likelihood of $\theta$ at time $t$ as \small \[ \Lambda_t (\theta) = \log \left \{ \frac{P_t(\theta)}{P_t(\theta_*)} \right \}= \log \left \{ \frac{P_0(\theta)}{P_0(\theta_*)} \prod_{\tau=1}^t \left[ \frac{P(s_\tau | X_\tau, a_\tau, \theta)}{P(s_\tau | X_\tau, a_\tau, \theta_*)} \right] \right \} \] \normalsize \noindent for all $t$ and all $\theta \in \Theta$. Notice that $P_t (\theta) \leq \exp \left[ \Lambda_t (\theta) \right]$ always holds, and $\Lambda_t (\theta_*)=0$ by definition.
We also define $p_t \stackrel{\Delta}{=} P(s_t=a_t | X_t) $ to simplify the exposition. Note that by definition, we have \small \[ P(s_t | X_t, a_t, \theta) = \left \{ \begin{array}{ll} p_t^{1/\theta} & \text{if $s_t=a_t$} \\ \frac{P(s_t | X_t)}{1-p_t}(1-p_t^{1/\theta}) & \text{otherwise} \end{array} \right. \] \normalsize Define the indicator $z_t = \mathbf{1} \left \{ s_t = a_t \right \}$, then we have \small \[ \log \left \{ \frac{P(s_t | X_t, a_t, \theta)}{P(s_t | X_t, a_t, \theta_*)} \right \} = z_t \log \left[ \frac{p_t^{1/\theta}}{p_t^{1/\theta_*}}\right] + (1-z_t) \log \left[ \frac{1-p_t^{1/\theta}}{1-p_t^{1/\theta_*}}\right] \] \normalsize Since $p_t$ is $\cF_{t-1}$-adaptive, we have \small \begin{align} & \, \EE{\log \left \{ \frac{P(s_t | X_t, a_t, \theta)}{P(s_t | X_t, a_t, \theta_*)} \right \} \middle | \cF_{t-1} , \theta_* } \nonumber \\ =& \, p_t^{1/\theta_*} \log \left[ \frac{p_t^{1/\theta}}{p_t^{1/\theta_*}}\right] + (1- p_t^{1/\theta_*}) \log \left[ \frac{1-p_t^{1/\theta}}{1-p_t^{1/\theta_*}}\right] \nonumber \\ =& \, - \mathrm{D_{KL}} \left( p_t^{1/\theta_*} \| p_t^{1/\theta} \right) \leq \, - 2 \left( p_t^{1/\theta_*} - p_t^{1/\theta} \right)^2 , \nonumber \end{align} \normalsize where the last inequality follows from Pinsker's inequality. Notice that function $h(x)=p_t^x$ is a strictly convex function of $x$, and $\frac{d h}{dx} (x)= p_t^x \ln(p_t)$, we have \small \[ p_t^{1/\theta}-p_t^{1/\theta_*} \geq \ln(p_t) p_t^{1/\theta_*} (1/\theta - 1/\theta_*)=\ln(1/p_t) p_t^{1/\theta_*} \frac{(\theta-\theta_*)}{\theta \theta_*} \] \normalsize Similarly, we have $p_t^{1/\theta_*}-p_t^{1/\theta} \geq \ln(1/p_t) p_t^{1/\theta} \frac{(\theta_*-\theta)}{\theta \theta_*}$. 
Consequently, we have \small \begin{align} \left | p_t^{1/\theta}-p_t^{1/\theta_*} \right | \geq & \, \ln(1/p_t) \min \left \{ p_t^{1/\theta_*} , p_t^{1/\theta} \right \} \frac{|\theta-\theta_*|}{\theta \theta_*} \nonumber \\ \geq & \, \ln(1/p_t) p_t \frac{|\theta-\theta_*|}{\theta \theta_*}, \nonumber \end{align} \normalsize where the last inequality follows from the fact $\theta, \theta_* \in [1, \infty)$. Since function $\ln(1/x)x$ is concave on $[0,1]$ and $p_t \in [\Delta_P, 1-\Delta_P]$, we have $ \ln(1/p_t) p_t \geq \min \left \{ \ln(1/\Delta_P) \Delta_P, \ln(1/(1-\Delta_P)) (1-\Delta_P) \right \}$. Define \small \begin{equation} c_0 \stackrel{\Delta}{=} \frac{\min \left \{ \ln \left(1/\Delta_P \right) \Delta_P, \ln \left( 1/(1-\Delta_P) \right) (1-\Delta_P) \right \} }{(\max_{\theta \in \Theta} \theta)^2}, \end{equation} \normalsize then we have $\left | p_t^{1/\theta}-p_t^{1/\theta_*} \right | \geq c_0 |\theta-\theta_*|$. Hence we have \small \[ - \mathrm{D_{KL}} \left( p_t^{1/\theta_*} \| p_t^{1/\theta} \right) \leq - 2 c_0^2 (\theta-\theta_*)^2. \] \normalsize Furthermore, we define \small \begin{align} \xi_t(\theta) \stackrel{\Delta}{=}& \, \log \left \{ \frac{P(s_t | X_t, a_t, \theta)}{P(s_t | X_t, a_t, \theta_*)} \right \} \nonumber \\ -& \, \EE{\log \left \{ \frac{P(s_t | X_t, a_t, \theta)}{P(s_t | X_t, a_t, \theta_*)} \right \} \middle | \cF_{t-1} , \theta_* }. \end{align} \normalsize Obviously, by definition, $ \EE{\xi_t(\theta) \middle | \cF_{t-1} , \theta_* }=0 $. We also define \small \begin{align} B \stackrel{\Delta}{=} & \, 2 \max \left \{ \max_{\theta \in \Theta} \max_{p \in [\Delta_P, 1-\Delta_P] } \left | \log \left( \frac{p^{1/\theta}}{p^{1/\theta_*}} \right) \right| , \right . \nonumber \\ & \left . \max_{\theta \in \Theta} \max_{p \in [\Delta_P, 1-\Delta_P] } \left | \log \left( \frac{1- p^{1/\theta}}{1- p^{1/\theta_*}} \right) \right| \right \}, \end{align} \normalsize then $ \left | \xi_t(\theta) \right | \leq B$ always holds. 
This allows us to use Azuma's inequality. Specifically, for any $\theta \in \Theta$, any $t$, and any $\delta \in (0,1)$, we have $ \sum_{\tau=1}^t \xi_\tau(\theta) \leq \sqrt{2 B^2 t \ln \left( K/\delta \right)} $ with probability at least $1-\delta/K$. Taking a union bound over $\theta \in \Theta$, we have \small \begin{align} \label{eqn:lemma6_conc1} \sum_{\tau=1}^t \xi_\tau(\theta) \leq \sqrt{2 B^2 t \ln \left( K/\delta \right)} \quad \forall \theta \in \Theta \end{align} \normalsize with probability at least $1-\delta$. Consequently, we have \small \begin{align} \Lambda_t (\theta) =& \, \log \left \{ \frac{P_0(\theta)}{P_0(\theta_*)} \right \} \nonumber \\ +& \, \sum_{\tau=1}^t \left \{ z_\tau \log \left[ \frac{p_\tau^{1/\theta}}{p_\tau^{1/\theta_*}}\right] + (1-z_\tau) \log \left[ \frac{1-p_\tau^{1/\theta}}{1-p_\tau^{1/\theta_*}}\right] \right \} \nonumber \\ =& \, \log \left \{ \frac{P_0(\theta)}{P_0(\theta_*)} \right \} - \sum_{\tau=1}^t \mathrm{D_{KL}} \left( p_\tau^{1/\theta_*} \| p_\tau^{1/\theta} \right) + \sum_{\tau=1}^t \xi_\tau(\theta) \nonumber \\ \leq & \, \log \left \{ \frac{P_0(\theta)}{P_0(\theta_*)} \right \} - 2 c_0^2 (\theta-\theta_*)^2 t + \sum_{\tau=1}^t \xi_\tau(\theta) \end{align} \normalsize Combining the above inequality with equation~\ref{eqn:lemma6_conc1}, we have \small \[ \Lambda_t (\theta) \leq \log \left \{ \frac{P_0(\theta)}{P_0(\theta_*)} \right \} - 2 c_0^2 (\theta-\theta_*)^2 t + \sqrt{2 B^2 t \ln \left( K/\delta \right)} \quad \forall \theta \in \Theta \] \normalsize with probability at least $1-\delta$. Hence, we have \small \begin{align} \label{eqn:lemma6_conc2} P_t (\theta) \leq & \, \exp \left[ \Lambda_t (\theta) \right] \\ \leq & \, \frac{P_0(\theta)}{P_0(\theta_*)} \exp \left \{ - 2 c_0^2 (\theta-\theta_*)^2 t + \sqrt{2 B^2 t \ln \left( K/\delta \right)} \right \} \nonumber \end{align} \normalsize for all $\theta \in \Theta$ with probability at least $1-\delta$. Thus, for any $\cF_{t-1}$ s.t. 
the above inequality holds, we have \small \begin{align} & \EE{(\theta_t -\theta_*)^2 \middle | \cF_{t-1}, \theta_*} = \sum_{\theta \neq \theta_*} P_t (\theta) (\theta-\theta_*)^2 \nonumber \\ \leq & \sum_{\theta \neq \theta_*} \frac{P_0(\theta)}{P_0(\theta_*)} \exp \left \{ - 2 c_0^2 (\theta-\theta_*)^2 (t-1) \right. \nonumber \\ + & \, \left. \sqrt{2 B^2 (t-1) \ln \left( K/\delta \right)} \right \} (\theta-\theta_*)^2 \end{align} \normalsize For $t>2$, we have \small \[ \exp \left \{ - c_0^2 (\theta-\theta_*)^2 (t-2) \right \} (\theta-\theta_*)^2 \leq \frac{1}{e c_0^2 (t-2)} \leq \frac{3}{e c_0^2 t}, \] \normalsize where the last inequality follows from the fact that $t-2 \geq \frac{t}{3}$. Hence we have \small \begin{align} & \, \EE{(\theta_t -\theta_*)^2 \middle | \cF_{t-1}, \theta_*} \nonumber \\ \leq & \, \frac{3}{e c_0^2 t} \sum_{\theta \neq \theta_*} \frac{P_0(\theta)}{P_0(\theta_*)} \exp \left \{ - c_0^2 (\theta-\theta_*)^2 t + \sqrt{2 B^2 t \ln \left( K/\delta \right)} \right \} \nonumber \\ \leq & \, \frac{3}{e c_0^2 t} \sum_{\theta \neq \theta_*} \frac{P_0(\theta)}{P_0(\theta_*)} \exp \left \{ - c_0^2 \Delta_\theta^2 t + \sqrt{2 B^2 t \ln \left( K/\delta \right)} \right \} \nonumber \\ =& \, \frac{3}{e c_0^2 t} \frac{1-P_0(\theta_*)}{P_0(\theta_*)} \exp \left \{ - c_0^2 \Delta_\theta^2 t + \sqrt{2 B^2 t \ln \left( K/\delta \right)} \right \}, \nonumber \end{align} \normalsize where the second inequality follows from $(\theta-\theta_*)^2 \geq \Delta_\theta^2$. For $\cF_{t-1}$ s.t. 
inequality~\ref{eqn:lemma6_conc2} does not hold, we use the naive bound $$(\theta_t -\theta_*)^2 \leq \kappa \stackrel{\Delta}{=} \left( \max_{\theta \in \Theta} \theta - \min_{\theta \in \Theta} \theta \right)^2.$$ Since inequality~\ref{eqn:lemma6_conc2} holds with probability at least $1-\delta$, we have \small \begin{align} & \, \EE{(\theta_t -\theta_*)^2 \middle | \theta_*} \\ \leq & \, \frac{3}{e c_0^2 t} \frac{1-P_0(\theta_*)}{P_0(\theta_*)} \exp \left \{ - c_0^2 \Delta_\theta^2 t + \sqrt{2 B^2 t \ln \left( K/\delta \right)} \right \} + \delta \kappa. \nonumber \end{align} \normalsize Finally, by choosing $\delta = \frac{1}{\kappa t^2}$ and taking an expectation over $\theta_*$, we have \small \begin{align} & \, \EE{(\theta_t -\theta_*)^2 } \\ \leq & \, \frac{3}{e c_0^2 t} \frac{1-P_0(\theta_*)}{P_0(\theta_*)} \exp \left \{ - c_0^2 \Delta_\theta^2 t + \sqrt{2 B^2 t \ln \left( K \kappa t^2 \right)} \right \} + \frac{1}{t^2}. \nonumber \end{align} \normalsize \end{proof} \section{The Proposed Algorithm: Deterministic Schedule PSRL} \begin{figure}[h] \begin{center} \framebox{\parbox{8cm}{ \begin{algorithmic} \STATE {\bf Inputs}: $P_1$, the prior distribution of $\theta_*$. \STATE $L \leftarrow 1$. \FOR{$t\gets 1,2,\dots$} \IF{$t = L $} \STATE Sample $\TTh_{t}\sim P_t$. \STATE $L \leftarrow 2L$. \ELSE \STATE $\TTh_{t} \leftarrow \TTh_{t-1}$. \ENDIF \STATE Calculate near-optimal action $a_t \leftarrow \pi^*(x_t, \TTh_t)$. \STATE Execute action $a_t$ and observe the new state $x_{t+1}$. \STATE Update $P_t$ with $(x_t,a_t,x_{t+1})$ to obtain $P_{t+1}$. \ENDFOR \end{algorithmic} }} \end{center} \caption{The DS-PSRL algorithm with deterministic schedule of policy updates.} \label{alg:lazy} \end{figure} In this section, we propose a PSRL algorithm with a deterministic policy update schedule, shown in Figure~\ref{alg:lazy}. The algorithm changes the policy in an exponentially rare fashion; if the length of the current episode is $L$, the next episode would be $2L$. 
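The control flow of Figure~\ref{alg:lazy} can be sketched in a few lines of Python. The four callbacks below (posterior sampler, planner, environment step, posterior update) are hypothetical placeholders, not part of the paper:

```python
import random

def ds_psrl(T, sample_posterior, plan, env_step, update_posterior):
    # DS-PSRL loop: resample theta only at t = 1, 2, 4, 8, ...
    L, theta, n_samples = 1, None, 0
    x = 0  # toy initial state
    for t in range(1, T + 1):
        if t == L:
            theta = sample_posterior()   # one posterior draw per episode
            n_samples += 1
            L *= 2
        a = plan(x, theta)               # act as if theta were the truth
        x_next = env_step(x, a)
        update_posterior((x, a, x_next)) # feed the transition back
        x = x_next
    return n_samples

# trivial placeholder callbacks, just to count the resampling times
n = ds_psrl(
    T=1000,
    sample_posterior=lambda: random.random(),
    plan=lambda x, th: 0,
    env_step=lambda x, a: x,
    update_posterior=lambda transition: None,
)
print(n)  # resampling at t = 1, 2, 4, ..., 512, i.e., 10 times
```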
This switching policy ensures that the total number of switches is $O(\log T)$. We also note that, when sampling a new parameter $\widetilde \theta_t$, the algorithm finds the optimal policy assuming that the sampled parameter is the true parameter of the system. Any planning algorithm can be used to compute this optimal policy \citep{sutton1998introduction}. In our analysis, we assume that we have access to the exact optimal policy, although it can be shown that this computation need not be exact and a near-optimal policy suffices (see \citep{Abbasi-Yadkori-Szepesvari-2015}). To measure the performance of our algorithm, we use the Bayes regret $R_T$ defined in Equation~\ref{eqn:bayes_regret}. The slower the regret grows, the closer the performance of the learner is to that of an optimal policy. If the growth rate of $R_T$ is sublinear ($R_T = o(T)$), the average loss per time step converges to the optimal average loss as $T$ gets large, and in this sense the algorithm is asymptotically optimal. Our main result shows that, under certain conditions, the construction of such asymptotically optimal policies can be reduced to efficiently sampling from the posterior of $\theta_*$ and solving classical (non-Bayesian) optimal control problems. First, we state our assumptions. We assume that the MDP is weakly communicating; this is a standard assumption, and under it the optimal average loss satisfies the Bellman equation. Further, we assume that the dynamics are parametrized by a scalar parameter and satisfy a smoothness condition. \begin{ass}[Lipschitz Dynamics] \label{ass:lipschitz} There exists a constant $C$ such that for any state $x$ and action $a$ and parameters $\theta,\theta'\in \Theta \subseteq \Re$, \[ \norm{P(.|x,a,\theta) - P(.|x,a,\theta')}_1 \le C \abs{\theta-\theta'} \;.
\] \end{ass} We also make a concentrating-posterior assumption, which states that the variance of the difference between the true parameter and the sampled parameter gets smaller as more samples are gathered. \begin{ass}[Concentrating Posterior] \label{ass:concentrating} Let $N_{j}$ be one plus the number of steps in the first $j$ episodes, and let $\TTh_{j}$ be the parameter sampled from the posterior at episode $j$. Then there exists a constant $C'$ such that \[ \max_{j} \EE{ N_{j-1} \abs{\theta_{*} - \TTh_{j}}^2 } \le C' \log T \;. \] \end{ass} Assumption~\ref{ass:concentrating} simply says that the variance of the posterior decreases as more data are gathered; in other words, we assume that the problem is learnable and not a degenerate case. Assumption~\ref{ass:concentrating} was shown to hold for two general categories of problems: finite MDPs and linearly parametrized problems with Gaussian noise \cite{Abbasi-Yadkori-Szepesvari-2015}. In addition, in this paper we prove that this assumption is satisfied for a large class of practical problems, such as the smoothly parametrized sequential recommendation systems of Section~\ref{sec:poi}. Now we are ready to state the main theorem. We show a sketch of the analysis in the next section; more details are in the appendix. \begin{thm} \label{thm:main} Under Assumptions~\ref{ass:lipschitz} and \ref{ass:concentrating}, the regret of the DS-PSRL algorithm is bounded as \[ R_T = \widetilde{O}(C \sqrt{C' T}), \] where the $\widetilde{O}$ notation hides logarithmic factors. \end{thm} Notice that the regret bound in Theorem~\ref{thm:main} does not directly depend on $S$ or $A$. Moreover, the regret bound is smaller if the Lipschitz constant $C$ is smaller or the posterior concentrates faster (i.e., $C'$ is smaller). \section{Experiments} In this section, we compare through simulations the performance of the DS-PSRL algorithm with the latest PSRL algorithm, Thompson Sampling with dynamic episodes (TSDE) \cite{Ouyang2017}.
We experimented with the RiverSwim environment \cite{STREHL20081309}, the domain used in \cite{Ouyang2017} to show how TSDE outperforms all known existing algorithms. The RiverSwim example models an agent swimming in a river who can choose to swim either left or right. The MDP consists of $K$ states arranged in a chain, with the agent starting in the leftmost state ($s = 1$). If the agent decides to move left, i.e., with the river current, it always succeeds, but if it decides to move right, it might `fail' with some probability. The reward function is given by: $r(s, a) = 5$ if $s = 1$, $a = \text{left}$; $r(s, a) = 10000$ if $s = K$, $a = \text{right}$; and $r(s, a) = 0$ otherwise. \subsection{Scalar Parametrization} In the scalar parametrization, a single scalar value defines the transition dynamics of the whole MDP. We ran two types of experiments. In the first experiment, the transition dynamics (fail probabilities) were the same for all states for a given scalar value. In the second experiment, we allowed a single scalar value to define different fail probabilities for different states. We assumed two probabilities of failure, a high probability $P_1$ and a low probability $P_2$, and two scalar values $\{\theta_1, \theta_2\}$. We compared an algorithm that switches every time step, which we call t-mod-1, with the TSDE and DS-PSRL algorithms. We assumed the true model of the world was $\theta_*=\theta_2$ and that the agent starts in the leftmost state. In the first experiment, $\theta_1$ sets $P_1$ to be the fail probability for all states, and $\theta_2$ sets $P_2$ to be the fail probability for all states. For $\theta_1$, the optimal policy was to go left in the states closer to the left end and right in the states closer to the right end. For $\theta_2$, the optimal policy was to always go right. The results are shown in Figure \ref{fig:plot-exp1}, where all schedules quickly learn to optimize the reward.
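As a concrete reading of the environment description above, here is a minimal RiverSwim-style model in Python. The reward values follow the text; what happens on a `fail' is not specified there, so this sketch makes the illustrative assumption that a failed right move leaves the agent in place.

```python
def river_swim_reward(s, a, K):
    """Rewards from the text: 5 at the leftmost state going left,
    10000 at the rightmost state going right, 0 otherwise."""
    if s == 1 and a == "left":
        return 5
    if s == K and a == "right":
        return 10000
    return 0


def river_swim_next(s, a, K, p_fail, u):
    """Next state given a uniform draw u in [0, 1).  Moving left (with
    the current) always succeeds; moving right fails with probability
    p_fail (assumed here to leave the agent in place)."""
    if a == "left":
        return max(1, s - 1)
    return s if u < p_fail else min(K, s + 1)
```

In the scalar experiments described next, `p_fail` is the quantity controlled by the scalar parameter, either uniformly across states or state-by-state.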
In the second experiment, $\theta_1$ sets $P_1$ to be the fail probability for all states, while $\theta_2$ sets $P_1$ for the first few states at the left end and $P_2$ for the remaining ones. The optimal policies were similar to those of the first experiment. However, the transition dynamics are the same for the states closer to the left end, while the policies are contradictory: for $\theta_1$ the optimal policy is to go left, and for $\theta_2$ it is to go right. This leads to oscillating behavior when uncertainty about the true $\theta$ is high and policy switching is done frequently. The results are shown in Figure \ref{fig:plot-exp2-50}, where t-mod-1 and TSDE underperform significantly. In contrast, when the policy is switched only after multiple interactions, the agent is likely to end up in parts of the space where it becomes easy to identify the true model of the world. The second experiment is thus an example where multi-step exploration is necessary. \begin{figure*}[ht!] \begin{center} % \subfigure[]{% \label{fig:plot-exp1} \includegraphics[width=0.39\textwidth]{plot-reward-exp1-prm1-50} }% \subfigure[]{% \label{fig:plot-exp2-50} \includegraphics[width=0.39\textwidth]{plot-reward-exp2-prm1-50} }% \end{center} \caption{% When multi-step exploration is necessary, DS-PSRL outperforms.}% \label{fig:subfigures} \end{figure*} \subsection{Multiple Parameters} Even though our theoretical analysis does not cover the case of multiple parameters, we tested our algorithm empirically with multiple parameters. We assumed a Dirichlet prior for every state-action pair. The initial parameters of the priors were set to one (uniform) for the non-zero transition probabilities of the RiverSwim problem and zero otherwise. Updating the posterior in this case is equivalent to updating the parameters after every transition. We did not compare with the t-mod-1 schedule, due to the computational cost of sampling and solving an MDP at every time step.
Unlike the scalar case, we cannot define a small finite number of parameter values for which we can pre-compute the MDP policies. The ground-truth model used was $\theta_2$ from the second scalar experiment. Our results are shown in Figures \ref{fig:plot-exp2-43-15} and \ref{fig:plot-exp2-58-20}. DS-PSRL performs better than TSDE as we increase the number of parameters. \begin{figure*}[ht!] \begin{center} % \subfigure[]{% \label{fig:plot-exp2-43-15} \includegraphics[width=0.33\textwidth]{plot-reward-exp2-prm43-15} }% \subfigure[]{% \label{fig:plot-exp2-58-20} \includegraphics[width=0.33\textwidth]{plot-reward-exp2-prm58-20} }% \subfigure[The LQ problem.]{% \label{fig:plot-LQ} \includegraphics[width=0.33\textwidth]{plot-LQ} }% \end{center} \caption{% Multiple parameters (a,b) and continuous domain (c).}% \label{fig:subfigures2} \end{figure*} \subsection{Continuous Domains} In a final experiment, we tested the ability of the DS-PSRL algorithm in continuous state and action domains. Specifically, we implemented the discrete infinite-horizon linear quadratic (LQ) problem of \cite{Abbasi-Yadkori-Szepesvari-2015, pmlr-v19-abbasi-yadkori11a}: $$x_{t+1} = A_*x_t + B_*u_t + w_{t+1} \text{ and } c_t = x^T_t Qx_t + u^T_t Ru_t,$$ where, for $t=0,1,\ldots$, $u_t \in \Re^d$ is the control at time $t$, $x_t \in \Re^n$ is the state, $c_t \in \Re$ is the cost, $w_{t+1}$ is the `noise', $A_* \in \Re^{n \times n}$ and $B_* \in \Re^{n \times d}$ are unknown matrices, while $Q \in \Re^{n \times n}$ and $R \in \Re^{d \times d}$ are known (positive definite) matrices. The problem is to design a controller based on past observations to minimize the average expected cost. Uncertainty is modeled as a multivariate normal distribution. In our experiment, we set $n=2$ and $d=2$. We compared DS-PSRL with t-mod-1 and a recent TSDE algorithm for learning-based control of unknown linear systems with Thompson sampling \cite{quyang-TSDE-LQ}. This version of TSDE uses two dynamic conditions.
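One step of the LQ system above is straightforward to simulate; the following is an illustrative sketch (the matrices and the noise are supplied by the caller; this is not the paper's experimental code):

```python
import numpy as np


def lq_step(x, u, A, B, Q, R, w):
    """One step of x_{t+1} = A x_t + B u_t + w_{t+1} with quadratic
    cost c_t = x^T Q x + u^T R u; returns (next_state, cost)."""
    cost = float(x @ Q @ x + u @ R @ u)
    x_next = A @ x + B @ u + w
    return x_next, cost
```

In the experiment, the unknown pair $(A_*, B_*)$ plays the role of $\theta_*$, and the posterior over these matrices is what each algorithm samples from.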
The first condition is the same as in the discrete case and activates when the episode length increases by one over the previous episode. The second condition activates when the determinant of the sample covariance matrix is less than half of its previous value. All algorithms quickly learn the optimal $A_*$ and $B_*$, as shown in Figure \ref{fig:plot-LQ}. The fact that switching every time step works well indicates that this problem does not require multi-step exploration. \section{Introduction} Thompson sampling \citep{Thompson1933}, or posterior sampling for reinforcement learning (PSRL), is a conceptually simple approach to dealing with unknown MDPs \citep{Strens:2000:BFR:645529.658114,Osband-Russo-VanRoy-2013}. PSRL begins with a prior distribution over the MDP model parameters (transitions and/or rewards) and typically works in episodes. At the start of each episode, an MDP model is sampled from the posterior belief, and the agent follows the policy that is optimal for that sampled MDP until the end of the episode. The posterior is updated at the end of every episode based on the observed actions, states, and rewards. A special case of MDPs under which PSRL has recently been studied extensively is MDPs with state resetting, either explicit or implicit. Specifically, in \citep{Osband-Russo-VanRoy-2013,Osband-VanRoy-2014} the considered MDPs are assumed to have fixed-length episodes, and at the end of each episode the MDP's state is reset according to a fixed state distribution. In \citep{Gopalan-Mannor-2015}, there is an assumption that the environment is ergodic and that there exists a recurrent state under any policy. Both approaches have developed variants of PSRL algorithms under their respective assumptions, as well as state-of-the-art regret bounds, Bayesian in \citep{Osband-Russo-VanRoy-2013,Osband-VanRoy-2014} and frequentist in \citep{Gopalan-Mannor-2015}. However, many real-world problems are of a continuing and non-resetting nature.
These include sequential recommendations and other common examples found in controlled mechanical systems (e.g., control of manufacturing robots) and process optimization (e.g., controlling a queuing system), where `resets' are rare or unnatural. Many of these real-world examples can easily be parametrized with a scalar parameter, where each value of the parameter specifies a complete model. These types of domains do not have the luxury of state resetting, and the agent needs to learn to act without necessarily revisiting states. Extensions of PSRL algorithms to general MDPs without state resetting have so far produced impractical algorithms and, in some cases, flawed theoretical analyses. This is due to the difficulty of analyzing regret under policy-switching schedules that depend on various dynamic statistics produced by the true underlying model (e.g., doubling of state-action visitation counts, or uncertainty reduction of the parameters). Next, we summarize the literature for this general-case PSRL. The earliest such general case was analyzed in terms of Bayes regret for a `lazy' PSRL algorithm~\citep{Abbasi-Yadkori-Szepesvari-2015}. In this approach, a new model is sampled, and a new policy is computed from it, every time the uncertainty over the underlying model is sufficiently reduced; however, the corresponding analysis was shown to contain a gap~\citep{Osband-VanRoy-2016}. A recent general-case PSRL algorithm with a Bayes regret analysis was proposed in \citep{Ouyang2017}. At the beginning of each episode, the algorithm generates a sample from the posterior distribution over the unknown model parameters. It then follows the optimal stationary policy for the sampled model for the rest of the episode. The duration of each episode is dynamically determined by two stopping criteria. A new episode starts either when the length of the current episode exceeds the previous length by one, or when the number of visits to any state-action pair is doubled.
They establish an $\widetilde{O}(HS\sqrt{AT})$ bound on expected regret in a Bayesian setting, where $S$ and $A$ are the sizes of the state and action spaces, $T$ is time, $H$ is a bound on the span, and the $\widetilde{O}$ notation hides logarithmic factors. However, despite the state-of-the-art regret analysis, the algorithm is not well suited for large and continuous state and action spaces due to the requirement to count state and action visitations for all state-action pairs. In another recent work \citep{Agrawal2017}, the authors present a general-case PSRL algorithm that achieves near-optimal worst-case regret bounds when the underlying Markov decision process is communicating with a finite, though unknown, diameter. Their main result is a high-probability regret upper bound of $\widetilde{O}(D\sqrt{SAT})$ for any communicating MDP with $S$ states, $A$ actions, and diameter $D$, when $T \geq S^5A$. Despite the nice form of the regret bound, this algorithm suffers from practicality issues similar to those of the algorithm in \citep{Ouyang2017}. The epochs are computed based on doubling the visitations of state-action pairs, which implies tabular representations. In addition, it employs a stricter assumption than previous work, namely a fully communicating MDP with some unknown diameter. Finally, for the bound to hold, $T \geq S^5A$ is required, which would be impractical for large-scale problems. Neither of the two recent state-of-the-art algorithms above \citep{Ouyang2017,Agrawal2017} uses generalization, in that they learn separate parameters for each state-action pair. In such a non-parametrized case, there are several other modern reinforcement learning algorithms, such as UCRL2 \citep{jaksch2010near}, REGAL \citep{bartlett2009regal}, and R-max \citep{brafman2002r}, which learn MDPs using the well-known `optimism under uncertainty' principle.
In these approaches, a confidence interval is maintained for each state-action pair, and observing a particular state transition and reward provides information for only that state and action. Such approaches are inefficient in cases where the whole structure of the MDP can be determined by a scalar parameter. Despite the elegant regret bounds for the general-case PSRL algorithms developed in \citep{Ouyang2017,Agrawal2017}, both of them focus on tabular reinforcement learning and hence are sample-inefficient for many practical problems with exponentially large or even continuous state/action spaces. On the other hand, in many practical RL problems, the MDPs are parametrized in the sense that system dynamics and reward/loss functions are assumed to lie on a known parametrized low-dimensional manifold~\citep{Gopalan-Mannor-2015}. Such model parametrization (i.e., model generalization) allows researchers to develop sample-efficient algorithms for large-scale RL problems. Our paper belongs to this line of research. Specifically, we propose a novel general-case PSRL algorithm, referred to as DS-PSRL, that exploits model parametrization (generalization). We prove an $\widetilde{O}(\sqrt{T})$ Bayes regret bound for DS-PSRL, assuming we can model every MDP with a single smooth parameter. DS-PSRL also has lower computational and space complexities than the algorithms proposed in \citep{Ouyang2017,Agrawal2017}. In the case of \citep{Ouyang2017}, the number of policy switches in the first $T$ steps is $K_T = O \left( \sqrt{2SAT \log(T)} \right)$; in contrast, DS-PSRL adopts a deterministic schedule, and its number of policy switches is $K_T \leq \log(T)$. Since the major computational burden of PSRL algorithms is to solve a sampled MDP at each policy switch, DS-PSRL is computationally more efficient than the algorithm proposed in \citep{Ouyang2017}.
As to space complexity, both algorithms proposed in \citep{Ouyang2017,Agrawal2017} need to store counts of state and action visitations. In contrast, DS-PSRL uses a model-independent schedule and, as a result, does not need to store such statistics. In the rest of the paper, we describe the DS-PSRL algorithm and derive a state-of-the-art Bayes regret analysis. We demonstrate and compare our algorithm with the state of the art on standard problems from the literature. Finally, we show how the assumptions of our analysis are satisfied by a sensible parametrization for a large class of problems in sequential recommendations. \section{Application to Sequential Recommendations} \label{sec:poi} By `sequential recommendations' we refer to the problem where a system recommends various `items' to a person over time to achieve a long-term objective. One example is a recommendation system at a website that recommends various offers. Another example is a tutorial recommendation system, where the sequence of tutorials is important in advancing the user from novice to expert over time. Finally, consider a points-of-interest (POI) recommendation system, where the system recommends various locations for a person to visit in a city, or attractions in a theme park. Personalized sequential recommendations are not sufficiently discussed in the literature and are practically non-existent in industry. This is due to the increased difficulty in accurately modeling long-term user behavior and non-myopic decision making. Part of the difficulty arises from the fact that there may not be a previous sequential recommendation system deployed for data collection, otherwise known as the cold-start problem. Fortunately, there is an abundance of sequential data in the real world. These data are usually `passive' in that they do not include past recommendations. A practical approach that learns from passive data was proposed in \cite{Theocharous:2017:IPI:3030024.3040983}.
The idea is to first learn a model from passive data that predicts the next activity given the history of activities. This can be thought of as the `no-recommendation' or passive model. To create actions for recommending the various activities, the authors perturb the passive model. Each perturbed model increases the probability of following the recommendations by a different amount. This leads to a set of models, each one with a different `propensity to listen'. In effect, they used the single `propensity to listen' parameter to turn a passive model into a set of active models. When there are multiple models, one can use online algorithms, such as posterior sampling for reinforcement learning (PSRL), to identify the best model for a new user \citep{Strens:2000:BFR:645529.658114,Osband-Russo-VanRoy-2013}. In fact, the algorithm used in \cite{Theocharous:2017:IPI:3030024.3040983} was a deterministic-schedule PSRL algorithm; however, there was no theoretical analysis. The perturbation function used was the following: \begin{equation} \label{eq:poi-dynamics} P(s|X,a,\theta) = \begin{cases} P(s|X) ^{1/ \theta}, & \text{if } a = s\\ P(s|X)/z(\theta), & \text{otherwise} \end{cases} \end{equation} where $s$ is a POI, $X=(s_1, s_2, \dots, s_t)$ is a history of POIs, and $z(\theta)=\frac{\sum_{s \neq a} P(s|X)} {1-P(s=a|X) ^{1/ \theta}}$ is a normalizing factor. Here we show how this model satisfies both assumptions of our regret analysis. \paragraph{Lipschitz Dynamics} We first prove that the dynamics are Lipschitz continuous: \begin{lemma} \label{lemma:poi-lipschitz} (Lipschitz Continuity) Assume the dynamics are given by Equation \ref{eq:poi-dynamics}. Then for all $\theta, \theta' \geq 1$ and all $X$ and $a$, we have \[ \| P(\cdot|X,a,\theta) - P(\cdot|X,a,\theta') \|_1 \leq \frac{2}{e} |\theta -\theta'|. \] \end{lemma} Please refer to Appendix~\ref{proof:poi-lipschitz} for the proof of this lemma.
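For concreteness, the perturbed dynamics of Equation~\ref{eq:poi-dynamics} can be computed directly from the passive model. The following is an illustrative sketch (the dictionary representation of $P(\cdot|X)$ is our assumption for the example, not the paper's code):

```python
def perturbed_dynamics(passive, a, theta):
    """Return P(. | X, a, theta) given the passive next-POI distribution
    P(. | X) (a dict mapping POI -> probability), the recommended POI a,
    and the `propensity to listen' theta >= 1.  The recommended POI gets
    probability P(a|X)**(1/theta); the rest are rescaled by z(theta) so
    the result sums to one.  Assumes passive[a] < 1."""
    p_a = passive[a] ** (1.0 / theta)
    z = sum(p for s, p in passive.items() if s != a) / (1.0 - p_a)
    return {s: (p_a if s == a else passive[s] / z) for s in passive}
```

At $\theta = 1$ the perturbation is the identity, and larger $\theta$ shifts probability mass toward the recommended POI, matching the `propensity to listen' interpretation.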
\paragraph{Concentrating Posterior} As detailed in Appendix~\ref{appendix_concentration} (see Lemma~\ref{lemma:poi-concentration}), we can also show that Assumption~\ref{ass:concentrating} holds in this POI recommendation example. Specifically, we can show that, under mild technical conditions, \[ \max_j \EE{ N_{j-1} \abs{\theta_{*} - \TTh_{j}}^2 } = O(1) \;. \] \section{Problem Formulation} We consider the reinforcement learning problem in a parametrized Markov decision process (MDP) $(\cX, \cA, \ell, P^{\theta_*} )$, where $\cX$ is the state space, $\cA$ is the action space, $\ell:\cX\times\cA\ra\real$ is the instantaneous loss function, and $P^{\theta_*}$ is an MDP transition model parametrized by $\theta_*$. We assume that the learner knows $\cX$, $\cA$, $\ell$, and the mapping from the parameter $\theta_*$ to the transition model $P^{\theta_*}$, but does not know $\theta_*$. Instead, the learner has a prior belief $P_0$ on $\theta_*$ at time $t=0$, before it starts to interact with the MDP. We also use $\Theta$ to denote the support of the prior belief $P_0$. Note that in this paper, we do not assume $\cX$ or $\cA$ to be finite; they can be infinite or even continuous. For any time $t=1, 2, \ldots$, let $x_t\in\cX$ be the state at time $t$ and $a_t\in\cA$ be the action at time $t$. Our goal is to develop an algorithm (controller) that adaptively selects an action $a_t$ at every time step $t$, based on prior information and past observations, to minimize the long-run Bayes average loss \[ \EE{ \limsup_{n\ra\infty} \frac1n\sum_{t=1}^n \ell(x_t,a_t)} . \] Similarly to the existing literature \citep{Osband-Russo-VanRoy-2013, Ouyang2017}, we measure the performance of such an algorithm using the Bayes regret: \begin{equation} R_T = \EE{ \sum_{t=1}^T \left( \ell(x_t,a_t) - J^{\theta_*}_{\pi^*} \right)} , \label{eqn:bayes_regret} \end{equation} where $J^{\theta_*}_{\pi^*}$ is the average loss of running the optimal policy under the true model $\theta_*$.
Note that under the mild `weakly communicating' assumption, $J^{\theta_*}_{\pi^*}$ is independent of the initial state. The Bayes regret analysis of PSRL relies on the key observation that at each stopping time $\tau$, the true MDP model $\theta_*$ and the sampled model $\TTh_\tau$ are identically distributed \citep{Ouyang2017}. This fact allows us to relate quantities that depend on the true, but unknown, MDP $\theta_*$ to those of the sampled MDP $\TTh_\tau$, which is fully observed by the agent. This is formalized in the following Lemma~\ref{lemma:psrl}. \begin{lemma} \label{lemma:psrl} (Posterior Sampling \citep{Ouyang2017}). Let $(\cF_s)_{s=1}^\infty$ be a filtration ($\cF_s$ can be thought of as the historical information up to the current time $s$) and let $\tau$ be an almost surely finite $\cF_s$-stopping time. Then, for any measurable function $g$, \begin{equation} \EE{g(\theta_*)|\cF_{\tau} } = \EE{g(\TTh_\tau)|\cF_{\tau}} \;. \label{eq:psrl-lemma} \end{equation} Additionally, the above implies that $\EE{g(\theta_*)} = \EE{g(\TTh_\tau)}$ through the tower property. \end{lemma} \section{Summary and Conclusions} We proposed a practical general-case PSRL algorithm with provable guarantees, called DS-PSRL. The algorithm achieves regret similar to the state of the art. However, our result applies more generally to continuous state-action problems; when the dynamics of the system are parametrized by a scalar, our regret is independent of the number of states. In addition, our algorithm is practical: it provides for generalization and uses a deterministic policy-switching schedule of logarithmic order, which is independent of the true model of the world. This leads to efficiency in sample, space, and time complexity. We demonstrated empirically how the algorithm outperforms state-of-the-art PSRL algorithms. Finally, we showed how our assumptions are satisfied by a sensible parametrization for a large class of problems in sequential recommendations.
\section*{Acknowledgments} We would like to thank Nikita Blinov, Philip Schuster, Gustavo Marques Tavares, and Natalia Toro for valuable conversations and Gordan Krnjaic for the title recommendation. AB is supported by the U.S. Department of Energy under Contract No. DE-AC02-76SF00515.
\section{Introduction} The discovery of superconductivity in doped nickel oxides Nd$_{0.8}$Sr$_{0.2}$NiO$_2$~\cite{Li_2019,Sawatzky19} has attracted intensive interest both in experiment~\cite{Hepting_2020,Q_Li_2020,Zhou_2020,Fu_arXiv,Lee_2020,D_Li_2020,Zeng_arXiv,Goodge_arXiv,BX_Wang_2020,Q_Gu_arXiv,Osada_2020} and theory~\cite{Hepting_2020,Botana_2020,Sakakibara_2020,Hirsch_2019, Nomura_2019,Hirayama_2020,Gao_arXiv,Singh_arXiv, Jiang_2020,Ryee_2020,HuZhang_2020,GM_Zhang_2020,Z_Liu_2020, Wu_2020,Been_arXiv,Lang_arXiv,Leonov_2020,Leonov_arXiv_2, Werner_2019,Petocchi_arXiv,Y_Gu_2020,Liang_Si_2020,Lechermann_2020,Lechermann_arXiv,Karp_2020,Kitatani_arXiv,Y_Wang_arXiv,Zhang_2020,LH_Hu_2019,Chang_arXiv, Z_Wang_arXiv, P_Jiang_2019,Choi_2020, Geisler_2020,He_2020,Bernardini_2020c, Talantsev_2020,T_Zhou_2020,Bernardini_2020, Bernardini_2020b,Olevano_2020,Choi_arXiv,Adhikary_arXiv,Nica_arXiv}, because the nickelate might be an analog of the well-known high-$T_{\rm c}$ superconductors, the cuprates. Recently, the doping dependence has been explored both theoretically~\cite{Kitatani_arXiv} and experimentally~\cite{D_Li_2020,Zeng_arXiv}, and the presence of a superconducting dome has been confirmed~\cite{D_Li_2020,Zeng_arXiv}. The maximum superconducting transition temperature $T_{\rm c}$ is about 15 K, not very high compared to that of the high-$T_{\rm c}$ cuprates. However, because the Bardeen-Cooper-Schrieffer (BCS) phonon mechanism cannot explain the observed $T_{\rm c}$~\cite{Nomura_2019}, the superconducting mechanism is most likely unconventional, with the electron correlations playing an important role~\cite{Sakakibara_2020,Wu_2020,Kitatani_arXiv,Adhikary_arXiv}. A recent observation of a $d$-wave-like superconducting gap also supports this scenario~\cite{Q_Gu_arXiv}. Here, a natural question arises: is there any possibility of realizing a $T_{\rm c}$ as high as that of the cuprates in nickelates?
In the cuprates, the superconductivity emerges by doping carriers into the antiferromagnetic Mott insulator having a large magnetic exchange coupling $J$ ($\sim$130 meV)~\cite{Lee_Nagaosa_Wen_2006}. One of the reasons for the large $J$ is because the cuprates belong to the charge-transfer type in the Zaanen-Sawatzky-Allen diagram~\cite{Zaanen_1985}, and the charge-transfer energy $\Delta_{dp}$ (the energy difference between the copper $3d$ and oxygen 2$p$ orbitals) is small among transition metal oxides. Although the mechanism of the high-$T_{\rm c}$ superconductivity is highly controversial, the large $J$ is a plausible factor in enhancing the $d$-wave superconductivity in the cuprates~\cite{Ogata_2008}. This large value of $J$ is certainly a characteristic feature of the cuprates, which makes the cuprates very different from other transition metal oxides. On the other hand, in the case of the nickelate NdNiO$_2$, $\Delta_{dp}$ is larger than that of the cuprates~\cite{Lee_2004}. Thus, naively, we expect smaller $J$ for nickelates. Indeed, a recent experimental estimate using the Raman spectroscopy gives $J = 25$ meV~\cite{Fu_arXiv}. However, it should be noted that the origin of small $J$ in NdNiO$_2$ may be ascribed to another notable difference from the cuprates, namely, NdNiO$_2$ is not a Mott-insulator due to the self-doping effect. In NdNiO$_2$, orbitals in the Nd layer form extra Fermi pockets on top of the large Fermi surface formed by the Ni 3$d_{x^2-y^2}$\xspace orbital, and the Ni 3$d_{x^2-y^2}$\xspace orbital is hole-doped, i.e., the filling of the Ni $3d$ orbitals deviates from $d^9$~\cite{Lee_2004,Botana_2020,Sakakibara_2020,Gao_arXiv,Liang_Si_2020,Olevano_2020}. The self-doping naturally explains the absence of Mott-insulating behavior in NdNiO$_2$. 
Although it has been shown that the Ni 3$d_{x^2-y^2}$\xspace orbital forms a two-dimensional strongly correlated system~\cite{Nomura_2019}, $J$ at the $d^9$ configuration with a half-filled $d_{x^2-y^2}$\xspace orbital is masked by the self-doping. The experimental estimate should be understood as the $J$ value including the effect of the self-doping, not the $J$ value at the ideal $d^9$ configuration. One of the reasons for the theoretical controversy about the size of $J$~\cite{Jiang_2020,Ryee_2020,HuZhang_2020,GM_Zhang_2020,Z_Liu_2020,Wu_2020,Been_arXiv,Lang_arXiv,Leonov_2020,Leonov_arXiv_2} is the ambiguity in calculating $J$ (whether we calculate $J$ at $d^9$ filling or $J$ including the self-doping effect). In any case, it is a non-trivial problem whether we can justify the mapping onto a simple spin model to understand the properties of NdNiO$_2$. This fact makes NdNiO$_2$ an imperfect analog of the cuprates. Recently, there was a proposal to design cuprate-analog nickelates without the complication of the self-doping~\cite{Hirayama_2020} \footnote{See also Refs.~\cite{Bernardini_2020b,Nica_arXiv} for other attempts to find nickelate superconductors.}. Since NdNiO$_2$ is a layered material, one can systematically propose nickelate family materials by changing the composition of the ``block-layer" \cite{Tokura_1990} between NiO$_2$ layers. The proposed dynamically stable nickelates have smaller Fermi pockets of the block-layer orbitals than NdNiO$_2$. In some materials, the self-doping is completely suppressed, and the ideal $d^9$ system with a half-filled 3$d_{x^2-y^2}$\xspace orbital is realized. An {\it ab initio} estimate of the Hubbard $U$ using the constrained random-phase approximation (cRPA)~\cite{Aryasetiawan_2004} shows that the correlation strength $U/t$ ($t$: nearest-neighbor hopping) is comparable to that of the cuprates~\cite{Hirayama_2020}.
Therefore, once such nickelates are synthesized, the mother compounds will be Mott insulators, similarly to the cuprates, and the effective model becomes the Heisenberg model, which removes the ambiguity in calculating $J$. In this paper, we study the strength of $J$ in two ideal $d^9$ nickelates, which are free from the self-doping (see Sec.~\ref{sec_materials} for the details of the materials). We estimate the $J$ value by the following three methods~\cite{Lichtenstein_2013}. First, we start from a single-orbital Hubbard model derived in Ref.~\onlinecite{Hirayama_2020} and then evaluate $J$ by an expansion in terms of $t/U$. Second, we perform an energy mapping between the classical Heisenberg model and the total energy of different magnetic configurations calculated by the LDA+$U$ (LDA: local density approximation) method. Third, we employ a scheme based on the so-called local force theorem. Hereafter, we simply call these three methods the ``strong-coupling expansion", the ``energy mapping method", and the ``local force approach", respectively. We show that the three independent estimates are in reasonable agreement and conclude that the $d^9$ nickelates have a sizeable $J$ (about 100 meV), which is not much smaller than that of the cuprates. Therefore, the proposed $d^9$ nickelates provide an interesting playground to explore cuprate-analog high-$T_{\rm c}$ superconductivity. The paper is organized as follows. In Sec.~\ref{sec_materials}, we introduce two ideal $d^9$ nickelates, RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$: a cation with a valence of $2.5+$), and discuss their advantages over NdNiO$_2$. In Sec.~\ref{sec_methods}, we explain the three methods employed in the present study, and we show the results in Sec.~\ref{sec_results}. Section~\ref{sec_summary} is devoted to the summary.
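As a reminder of what the strong-coupling expansion yields at leading order (a standard result for the half-filled single-orbital Hubbard model, not specific to this work): second-order perturbation theory in $t/U$ gives the antiferromagnetic superexchange

```latex
\begin{equation*}
  J \simeq \frac{4t^2}{U} ,
\end{equation*}
```

so a correlation strength $U/t$ comparable to that of the cuprates, together with a comparable $t$, already suggests a $J$ of similar magnitude; the actual strong-coupling estimate may include higher-order corrections in $t/U$.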
\begin{figure*}[tb] \vspace{0.0cm} \begin{center} \includegraphics[width=0.98\textwidth]{fig_structure_band.pdf} \caption{ Crystal structure of (a) RbCa$_2$NiO$_3$\xspace and (c) $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$: a cation with the valence of $2.5+$) and the paramagnetic DFT band structure [(b) RbCa$_2$NiO$_3$\xspace and (d) $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace)]. The blue dotted curves are the Wannier band dispersion of the Ni 3$d_{x^2-y^2}$\xspace single-orbital Hamiltonian. In (b) and (d), the same ${\bf k}$ path is employed: $(0,0,0)$ $\rightarrow$ $(\pi/a,0,0)$ $\rightarrow$ $(\pi/a,\pi/a,0)$ $\rightarrow$ $(0,0,0)$ $\rightarrow$ $(0,0,\pi/c)$ $\rightarrow$ $(\pi/a,0,\pi/c)$ $\rightarrow$ $(\pi/a,\pi/a,\pi/c)$ $\rightarrow$ $(0,0,\pi/c)$ (The symbols are different because the primitive cells of RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace are tetragonal and body-centered tetragonal, respectively). } \label{Fig_crystal_structure} \end{center} \end{figure*} \section{Materials: $d^9$ nickelates} \label{sec_materials} In Ref.~\cite{Hirayama_2020}, various layered nickelates have been systematically proposed. They are classified into ``1213", ``1214", ``H$_2$", and ``G" families, depending on the composition and the type of the block-layer~\cite{Tokura_1990}. Among the four families, compounds without the self-doping exist in the 1213 and G families. We here take RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$: a cation with the valence of $2.5+$) as representatives of the ideal $d^9$ nickelates belonging to the 1213 and G families, respectively [see Figs.~\ref{Fig_crystal_structure}(a) and (c) for the crystal structures]. In the following, we employ Ba$_{0.5}$La$_{0.5}$\xspace as $A$. The phonon calculations have shown that both RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace) are dynamically stable~\cite{Hirayama_2020}.
We take the crystal structure optimized in Ref.~\cite{Hirayama_2020} and perform density-functional theory (DFT) calculations~\footnote{Here, we ignore the interface effect~\cite{Geisler_2020,He_2020,Bernardini_2020c} and consider the bulk property. Note that the thickness of the film reaches around 10 nm and there are several tens of NiO$_2$ layers in the sample~\cite{Lee_2020}.}. Figures~\ref{Fig_crystal_structure}(b) and \ref{Fig_crystal_structure}(d) show the paramagnetic DFT band structure for RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace), respectively. As shown in Ref.~\cite{Hirayama_2020}, only the Ni 3$d_{x^2-y^2}$\xspace orbital crosses the Fermi level. As far as the topology of the band structure is concerned, these systems are more similar to the cuprates than NdNiO$_2$ is. The advantages of studying these nickelates rather than NdNiO$_2$ are as follows. First, it is still controversial whether the role of the Nd-layer (block-layer) orbitals is essential or not. If the hybridization between Ni $3d$ and Nd-layer orbitals is substantial, the Nd-layer orbitals act not only as a charge reservoir but might also give rise to Kondo-like physics~\cite{Sawatzky19,Hepting_2020,GM_Zhang_2020,Z_Wang_arXiv,Y_Gu_2020}. In the $d^9$ nickelates RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace), the block-layer orbitals do not show up at the Fermi level, and this controversy can be avoided. We can also exclude the possible role of the $4f$ orbitals with localized moments proposed in Refs.~\cite{P_Jiang_2019,Choi_2020}. Another controversial issue for NdNiO$_2$ is to which orbitals the doped holes go ($d^9 \underline{L}$ vs. $d^8$, where $\underline{L}$ denotes a hole in a ligand oxygen). In the case of the cuprates (charge-transfer insulators), the holes are doped into the oxygen $2p$ orbitals.
On the other hand, the nickelates have a larger $\Delta_{dp}$ and are classified as Mott-Hubbard type~\cite{Hepting_2020,Jiang_2020,Nomura_2019,HuZhang_2020,Fu_arXiv,Goodge_arXiv}. Because there is nonzero hybridization between the Ni 3$d_{x^2-y^2}$\xspace and O $2p$ orbitals, some of the holes should be doped into the oxygen $2p$ orbitals~\cite{Hirsch_2019,Karp_2020,Lang_arXiv}. However, the amount should be smaller than in the cuprates. When the system is of Mott-Hubbard type and the holes mainly reside in the Ni $3d$ orbitals, another issue arises: which model is more appropriate, the single-orbital or the multi-orbital model? In other words, does the doped $d^8$ configuration favor a high-spin or a low-spin state? If the crystal field splitting between the Ni 3$d_{x^2-y^2}$\xspace and the other $3d$ orbitals is much larger than the Hund's coupling, holes stay within the Ni 3$d_{x^2-y^2}$\xspace orbital, and the single-orbital model is justified. On this issue, several studies argue that the Ni $3d$ multi-orbital nature cannot be ignored~\cite{Jiang_2020,Zhang_2020,Werner_2019,Petocchi_arXiv,LH_Hu_2019,Lechermann_2020,Lechermann_arXiv,Y_Wang_arXiv,Chang_arXiv,Choi_arXiv}. To resolve this issue, we certainly need more experimental evidence. In RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace), compared to NdNiO$_2$, the Ni 3$d_{x^2-y^2}$\xspace orbital is more isolated in energy from the other $3d$ orbitals [see Figs.~\ref{Fig_crystal_structure}(b) and \ref{Fig_crystal_structure}(d)]: In the case of NdNiO$_2$, due to the dispersion along the $k_z$ direction, the Ni $d_{3z^2-r^2}$\xspace band comes close to the Fermi level on the $k_z = \pi/c$ plane; such a $k_z$ dependence is much weaker in RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace).
Considering also the above-mentioned absence of the complication from the self-doping, in this study, we adopt the single-orbital Hubbard model as a minimal model for RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace). In the absence of carrier doping, we can further map it onto a spin model with the exchange coupling $J$. \section{Methods} \label{sec_methods} Here, we introduce three different methods to estimate $J$ (see, e.g., Ref.~\cite{Lichtenstein_2013} for the ideas behind the three methods). We employ the following convention for the spin Hamiltonian: ${\mathcal H} = \sum_{\langle i,j \rangle } J_{ij} {\bf S}_i \cdot {\bf S}_j$, where $\langle i,j \rangle$ is the bond consisting of sites $i$ and $j$, and ${\bf S}_i$ is the spin-1/2 operator at site $i$. $J$ stands for the nearest-neighbor $J_{ij}$ interaction in the NiO$_2$ layer. \subsection{Strong-coupling expansion} \label{sec_method_superexchange} When the single-orbital Hubbard model is a good description, the magnetic interactions in the Mott insulating region can be obtained by a strong-coupling perturbation expansion. The strong-coupling expansion becomes valid in the region $U \gtrsim W$, with $W$ the bandwidth (on the square lattice, $W = 8t$)~\cite{Otsuki_2019}. RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace), with $U/t = 9.522$ and $10.637$, respectively~\cite{Hirayama_2020}, satisfy the condition $U > W$. In the strong-coupling expansion, the superexchange interaction $J_{\rm s}$ (including the $t^4$-order correction) and the cyclic ring-exchange interaction $J_{\rm c}$ are given by $J_{\rm s} = 4 t^2 / U - 24 t^4 / U^3$ and $J_{\rm c} = 80 t^4 / U^3$, respectively~\cite{Takahashi_1977,MacDonald_1988,Delannoy_2009}.
If we effectively take into account the effect of the ring-exchange interaction in the nearest-neighbor interaction $J$, the $J$ value becomes \begin{eqnarray} J = J_{\rm s} - 2J_{\rm c} S^2 = \frac{4 t^2} {U} - \frac{64 t^4 }{U^3} \end{eqnarray} with $S=1/2$. \subsection{Energy mapping method} \label{sec_method_LDA+U} We perform magnetic calculations within the LDA+$U$ method~\cite{Anisimov_1991,Anisimov_1993,Liechtenstein_1995,Cococcioni_2012}. Here, $U$ is introduced into the Ni 3$d$ orbital subspace. We employ a $2\times2\times1$ supercell consisting of four conventional cells. We simulate two different magnetic solutions: one is N\'eel type [$(\pi/a,\pi/a,0)$ antiferromagnetic order] and the other is stripe type [$(\pi/a,0,0)$ antiferromagnetic order]. We calculate the energy difference $\Delta E$ between the two antiferromagnetic solutions. When we assume the two-dimensional classical spin-1/2 Heisenberg model with up to next-nearest-neighbor magnetic interaction $J'$, $\Delta E$ per formula unit is given by $\Delta E = J/2 - J' \simeq J/2$. We estimate $J$ with this equation. \subsection{Local force approach} \label{sec_method_Liechtenstein} Based on the N\'eel-type solutions of the LDA+$U$ calculations, we estimate $J$ and $J'$ using the local force theorem~\cite{Lichtenstein_2013}. The local force approach estimates the magnetic interactions from the small energy change induced by an infinitesimal spin rotation away from the magnetic solution (N\'eel type in the present case). We employ the so-called Lichtenstein formula, which has recently been formulated for low-energy Hamiltonians in the Wannier-orbital basis~\cite{Korotin_2015,Nomoto_2020_1,Nomoto_2020_2}, and reads \begin{align} (-1)^PJ_{ij}= 4T\sum_{\omega_n}{\rm Tr}[G_{ij}(\omega_n) M_j G_{ji}(\omega_n) M_i],\label{eq:licht} \end{align} where $\omega_n= (2n+1) \pi T$ denotes the Matsubara frequency. Here, we set $P=0$ (1) when the spins at sites $i$ and $j$ are aligned parallel (anti-parallel) to each other.
The Green's function $G_{ij}$ is defined by $G_{ij}^{-1}(i\omega_n)=(i\omega_n+\mu)\delta_{ij}-{\mathcal H}^0_{ij}$, where ${\mathcal H}^0_{ij}$ is the hopping matrix of the Wannier tight-binding model, and $\mu$ is the chemical potential. Note that ${\mathcal H}^0_{ij}$ is an $N_{{\rm orb}_i}\times N_{{\rm orb}_j}$ matrix, where $N_{{\rm orb}_i}$ is the number of Wannier orbitals at site $i$, including the spin index. In the case of collinear magnets, one may write ${\mathcal H}^0_{ii}$ as ${\mathcal H}^0_{ii}= \varepsilon_{i}\otimes\sigma_0 + m_i\otimes\sigma_z$. Then, $M_i$ is defined by $M_i=m_i \otimes \sigma_x$ and is proportional to the exchange splitting $m_i$ at site $i$. Here, we have neglected the spin-dependent hopping term in $M_i$ (see Ref.~\onlinecite{Nomoto_2020_1} for details~\footnote{Note that the $J_{ij}$ value in this paper is defined to be eight times as large as that in Ref.~\onlinecite{Nomoto_2020_1}.}). \subsection{Comparison among the three methods} \label{sec_method_comparison} The strong-coupling expansion gives a local (in real space) $J$, the energy mapping method sees the energy difference between the global and local minima of the magnetic solutions, and the local force method sees the low-energy excitations around the global minimum. These estimates are complementary to each other, and hence we employ all three methods. When the Coulomb repulsion is much larger than the bandwidth and the mapping to the Heisenberg model becomes valid, the three methods give the same $J$. As we will show in Sec.~\ref{sec_results}, the three results agree reasonably well, as expected from the Mott insulating behavior of the proposed $d^9$ nickelates. \subsection{Calculation conditions} \label{sec_method_condition} The DFT band structure calculations are performed using {\textsc{Quantum ESPRESSO}}\xspace~\cite{QE-2017}.
We employ Perdew-Burke-Ernzerhof (PBE)~\cite{Perdew_1996} norm-conserving pseudopotentials downloaded from PseudoDojo~\cite{Setten_2018} [the pseudopotentials are based on ONCVPSP (Optimized Norm-Conserving Vanderbilt PSeudopotential)~\cite{Hamann_2013}]. The energy comparison between the N\'eel- and stripe-type antiferromagnetic solutions is performed using $9\times 9 \times 7$ and $9\times 9 \times 3$ {\bf k}-meshes for RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace), respectively. We treat Ba$_{0.5}$La$_{0.5}$\xspace by the virtual crystal approximation. The energy cutoff is set to be 100 Ry for the Kohn-Sham wave functions, and 400 Ry for the electron charge density. For the estimate of $J$ based on the local force approach, we first construct the maximally localized Wannier functions~\cite{Marzari_1997,Souza_2001} for the N\'eel-type antiferromagnetic band structure using RESPACK~\cite{Nakamura_arXiv,RESPACK_URL}. For RbCa$_2$NiO$_3$\xspace, we use a $5\times 5 \times 5$ {\bf k}-mesh for the construction of the Wannier orbitals. We put Ni $d$, O $p$, Ca $d$, and interstitial-$s$ (located at the interstitial positions surrounded by Ni$^{+}$, Ca$^{2+}$, and Rb$^{+}$ cations) projections. The interstitial orbitals are stabilized because they feel attraction from the surrounding cations~\cite{Nomura_2019}. Then, we obtain a 104-orbital (per spin) tight-binding Hamiltonian. For $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace), we employ a $5\times 5 \times 3$ {\bf k}-mesh for constructing the Wannier orbitals. We derive a 232-orbital (per spin) tight-binding Hamiltonian using the projections of Ni $d$, O $p$, Br $p$, $A$ $d$, and interstitial-$s$ (located at the interstitial positions surrounded by Ni$^{+}$, $A^{2.5+}$, and Br$^{-}$ ions) orbitals. In the calculation of Eq.~\eqref{eq:licht}, we employ a $16\times16\times16$ ${\bf k}$-mesh and set the inverse temperature $\beta= 200$~eV$^{-1}$ for both cases.
We have confirmed that the difference of $J_{ij}$ values at $\beta= 200$ and 400~eV$^{-1}$ is less than 1 \%. We use the intermediate representation basis for the Matsubara frequency summation~\cite{Shinaoka_2017,Chikano_2019,Li_Shinaoka_2020}, and set the cutoff parameter $\Lambda=10^5$, which is sufficiently larger than $W\beta$, where $W$ is the bandwidth. \begin{figure*}[tb] \vspace{0.0cm} \begin{center} \includegraphics[width=0.99\textwidth]{fig_LDA+U.pdf} \caption{ N\'eel-type antiferromagnetic band structure (red curves) and orbital-resolved density of states (per formula unit, per spin) for (a) RbCa$_2$NiO$_3$\xspace and (b) $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace), calculated with the LDA+$U$ method ($\overline{U} = 3$ eV). The blue dotted curves are the band dispersion calculated from the Wannier tight-binding Hamiltonian. The symbols for the high-symmetry ${\bf k}$ points with the underlines are defined based on the $2\times2\times1$ supercell consisting of four conventional cells. The origin of the energy axis is set to be the middle of the gap. The orbital-resolved density of states is calculated from the Wannier tight-binding Hamiltonian. ``I-$s$" stands for the interstitial-$s$ orbitals (see Sec.~\ref{sec_method_condition} for the details of the projections used in the Wannier construction). (c) The energy difference $\Delta E$ per formula unit between the N\'eel- and stripe-type antiferromagnetic solutions. The N\'eel-type solutions always have the lower energy. } \label{Fig_LDA+U} \end{center} \end{figure*} \section{\mbox{\boldmath$J$} in $d^9$ nickelates} \label{sec_results} \begin{figure*}[tb] \vspace{0.0cm} \begin{center} \includegraphics[width=0.99\textwidth]{fig_J.pdf} \caption{ Estimated exchange coupling $J$ for (a) RbCa$_2$NiO$_3$\xspace and (b) $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace).
$\overline{U}$ is the Hubbard interaction in the LDA+$U$ calculation (the Coulomb repulsion between the Ni 3$d$ orbitals), which we distinguish from the Hubbard $U$ in the single-orbital Hubbard model used in the strong-coupling expansion (the Coulomb repulsion between the Wannier orbitals made from the Ni 3$d_{x^2-y^2}$ orbital with O $2p$ tails). See text for details. } \label{Fig_J} \end{center} \end{figure*} In the previous study~\cite{Hirayama_2020}, the effective single-orbital Hamiltonians for RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace) are constructed using maximally-localized Wannier functions~\cite{Marzari_1997,Souza_2001} and cRPA~\cite{Aryasetiawan_2004}. The derived nearest-neighbor hopping and Hubbard parameters are $t = -0.352$ eV, $U = 3.347$ eV for RbCa$_2$NiO$_3$\xspace, and $t = -0.337$ eV, $U = 3.586$ eV for $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace). Then, the strong-coupling expansion described in Sec.~\ref{sec_method_superexchange} gives $J = 122$ meV and $J=109$ meV for RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace), respectively (see Appendix~\ref{Appendix_dp} for the estimate from three-orbital $d$-$p$ model). Figures~\ref{Fig_LDA+U}(a) and \ref{Fig_LDA+U}(b) show the band structure calculated by the LDA+$U$ method for the N\'eel-type antiferromagnetic state. While the Hubbard $U$ in the single-orbital Hubbard model is the Coulomb repulsion between the Wannier orbitals made from the Ni 3$d_{x^2-y^2}$ orbital with O $2p$ tails, the $U$ interaction in the LDA+$U$ calculation is the Coulomb repulsion between the Ni 3$d$ orbitals. To make the difference clearer, we call $U$ in the LDA+$U$ calculation $\overline{U}$. In Figs.~\ref{Fig_LDA+U}(a) and \ref{Fig_LDA+U}(b), we have used $\overline{U}$ = 3 eV. 
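As a sanity check on the strong-coupling numbers quoted above, the expansion formula $J = 4t^2/U - 64t^4/U^3$ of Sec.~\ref{sec_method_superexchange} can be evaluated directly with the quoted cRPA parameters. The following Python sketch (an illustration of the arithmetic only; the function name is ours, not part of the original workflow) reproduces the quoted 122 meV and 109 meV:

```python
# Strong-coupling estimate of the nearest-neighbor exchange:
# J = J_s - 2 J_c S^2, with J_s = 4t^2/U - 24t^4/U^3, J_c = 80t^4/U^3, S = 1/2.

def exchange_J(t, U):
    """Exchange coupling (same units as t and U) from the t/U expansion."""
    J_s = 4 * t**2 / U - 24 * t**4 / U**3   # superexchange incl. t^4 correction
    J_c = 80 * t**4 / U**3                  # cyclic ring exchange
    return J_s - 2 * J_c * 0.25             # S^2 = 1/4

# cRPA parameters (eV) quoted in the text
for name, t, U in [("RbCa2NiO3", -0.352, 3.347),
                   ("A2NiO2Br2", -0.337, 3.586)]:
    print(f"{name}: J = {1e3 * exchange_J(t, U):.0f} meV")
# -> RbCa2NiO3: J = 122 meV
# -> A2NiO2Br2: J = 109 meV
```

Note that $J$ depends on $t$ only through even powers, so the sign convention of the hopping does not matter.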
In contrast to the case of the LDA+$U$ calculation for NdNiO$_2$, where the system stays metallic even in the presence of antiferromagnetic order~\cite{Botana_2020,HuZhang_2020,Z_Liu_2020}, both systems become insulating. The top of the valence band has mainly Ni $3d$ character, in agreement with the classification as a Mott-Hubbard-type insulator. We see that both systems are insulating even at smaller $\overline{U}$ (= 1 eV). For the whole $\overline{U}$ range we studied (1-5 eV), there exists a well-defined spin-1/2 moment on the Ni sites. The results suggest that, if these $d^9$ nickelates are synthesized, they will become antiferromagnetic Mott insulators, as the cuprates are. Figure~\ref{Fig_LDA+U}(c) shows the energy difference $\Delta E$ per formula unit between the N\'eel- and stripe-type antiferromagnetic solutions. $\Delta E$ decreases as $\overline{U}$ increases, which is a natural behavior given that $\Delta E$ is governed by $J$ and the origin of $J$ is the superexchange interaction. In Figs.~\ref{Fig_LDA+U}(a) and \ref{Fig_LDA+U}(b), the band dispersions obtained from the Wannier tight-binding Hamiltonian, which are used in the local force approach, are also shown. The Wannier bands reproduce the LDA+$U$ magnetic band dispersions well. From $\Delta E$ in Fig.~\ref{Fig_LDA+U}(c), we perform an order estimate of $J$ by the energy mapping method, assuming $J'/J = 0.05$ (Sec.~\ref{sec_method_LDA+U}) \footnote{We do not pay special attention to the precise value of the ratio $J'/J$ because we are only interested in the order estimate of $J$.}. Then $J$ is given by $J = \Delta E / 0.45 $. We also estimate $J$ using the local force approach (Sec.~\ref{sec_method_Liechtenstein}). These results, together with the $J$ value estimated from the strong-coupling expansion (see above), are summarized in Figs.~\ref{Fig_J}(a) and \ref{Fig_J}(b) for RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace), respectively.
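The relation $\Delta E = J/2 - J'$ underlying the energy mapping can itself be verified with a short classical-spin computation (our own toy check, not part of the original calculation): for collinear spins of length $S=1/2$ on a periodic square lattice with nearest- and next-nearest-neighbor couplings, the stripe-minus-N\'eel energy per site equals $J/2 - J'$ exactly.

```python
import itertools

def energy_per_site(pattern, J, Jp, L=8, S=0.5):
    """Classical energy per site of H = sum_<ij> J_ij S_i.S_j on an L x L
    periodic square lattice (NN coupling J, NNN coupling Jp), for a
    collinear pattern sigma(x, y) = +/-1; each bond is counted once."""
    E = 0.0
    for x, y in itertools.product(range(L), repeat=2):
        s = pattern(x, y)
        # nearest-neighbor bonds (right and up)
        E += J * S**2 * s * (pattern((x + 1) % L, y) + pattern(x, (y + 1) % L))
        # next-nearest-neighbor bonds (two diagonals)
        E += Jp * S**2 * s * (pattern((x + 1) % L, (y + 1) % L)
                              + pattern((x + 1) % L, (y - 1) % L))
    return E / L**2

J, Jp = 1.0, 0.05                       # units of J; J'/J = 0.05 as in the text
neel   = lambda x, y: (-1) ** (x + y)   # (pi/a, pi/a, 0) order
stripe = lambda x, y: (-1) ** x         # (pi/a, 0, 0) order
dE = energy_per_site(stripe, J, Jp) - energy_per_site(neel, J, Jp)
print(dE, J / 2 - Jp)   # both give 0.45, i.e. J = dE / 0.45
```

With $J'/J = 0.05$ this reproduces the conversion $J = \Delta E/0.45$ used above.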
The $J$ value in the energy mapping method changes from about 140 meV ($\overline{U}=1$ eV) to 60 meV ($\overline{U}=5$ eV). The local force approach gives $J \simeq 70$-80 meV. These estimates give the same order of $J$ as the strong-coupling expansion results [$J = 122$ meV and $J=109$ meV for RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace), respectively]. Although the energy mapping method and the local force approach are based on the same LDA+$U$ calculations, we see that there is a discrepancy between the two results at small $\overline{U}$ values (although the difference is within a factor of two). It should be noted that the former method sees the global change of the energy between completely different magnetic patterns, whereas the latter approach only sees the local landscape around the N\'eel-type solutions, as described in Sec.~\ref{sec_method_comparison}. For larger $\overline{U}$, the agreement between the two results becomes better, as expected: the system can be mapped to the classical spin model with a constant $J$ regardless of the magnetic structure assumed in the local force approach. Overall, all three estimates of $J$ lie within 60-140 meV, and we conclude that the $d^9$ nickelates have a sizable $J$ of the order of 100 meV. The agreement in the order estimate of $J$ among the three independent methods shows that RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace) are indeed Mott insulators whose effective model is the Heisenberg model, with the magnetic exchange coupling $J$ governed by the superexchange interaction [if the materials were, for example, weakly correlated, the three methods would not agree well (see Sec.~\ref{sec_method_comparison})]. Finally, we compare the $J$ value with that of the cuprates.
In the cuprates, the magnitude of $J$ was intensively studied by Raman spectroscopy in the early days~\cite{Lyons_1988_1,Lyons_1988_2,Sugai_1988}. The $J$ value for La$_2$CuO$_4$ is estimated to be about 130 meV~\cite{Singh_1989}. Systematic investigations have shown that the material dependence of $J$ in the cuprate family is weak~\cite{Sulewski_1990,Tokura_1990_raman}. A numerical study on the $d$-$p$ model has also derived a $J$ as large as about 130 meV~\cite{Hybertsen_1990}. Compared to the $J$ value of 130 meV for the cuprates, our estimate based on the $d$-$p$ model, giving 90-100 meV (see Appendix~\ref{Appendix_dp}), is smaller, which is consistent with the fact that $\Delta_{dp}$ is larger in the nickelates. However, we note that a $J$ value of about 100 meV is still significantly large, and the $d^9$ nickelates would serve as interesting cuprate-analog materials. \section{Summary} \label{sec_summary} One of the remarkable features of the high-$T_{\rm c}$ cuprates is the large exchange coupling $J$, whose size is as large as 130 meV~\cite{Lee_Nagaosa_Wen_2006}. In the present study, we have evaluated the size of $J$ for $d^9$ nickelates from first principles. While the cuprates, having a small $\Delta_{dp}$, belong to the charge-transfer type in the Zaanen-Sawatzky-Allen diagram~\cite{Zaanen_1985}, the nickelates, with a larger $\Delta_{dp}$, belong to the Mott-Hubbard type. To answer how large a $J$ can be expected in the Mott-Hubbard insulating $d^9$ nickelates, we studied RbCa$_2$NiO$_3$\xspace and $A_{2}$NiO$_{2}$Br$_2$\xspace ($A$ = Ba$_{0.5}$La$_{0.5}$\xspace), which were recently proposed theoretically and shown to be free from the self-doping in Ref.~\onlinecite{Hirayama_2020}. By means of the strong-coupling expansion, the energy mapping method, and the local force approach, we found that $J$ for these nickelates is as large as 100 meV, which is not far smaller than that of the cuprates.
This result suggests that the $d^9$ nickelates and cuprates share a notable common feature in the Mott insulating phase, although the former and latter belong to the Mott-Hubbard and charge-transfer regime, respectively. Finally, we note that the proposed $d^9$ nickelates might give rare examples of realizing the square-lattice Hubbard model with sizeable $J$ in real materials. Recent numerical studies show that the phase diagram of the doped Hubbard model is under severe competition between the stripe state with charge/spin modulation and $d$-wave superconductivity~\cite{Zheng_2017,Darmawan_2018,Ohgoe_2020,HC_Jiang_2019}. Therefore, once synthesized, the $d^9$ nickelates will serve as a valuable test-bed system to understand the superconductivity in the Hubbard-like model. They are also an important reference to understand the superconducting mechanism in the cuprates, because they would tell us whether the charge-transfer nature in the cuprates is essential in the high-$T_{\rm c}$ superconductivity or not. \begin{acknowledgments} We acknowledge the financial support by JSPS KAKENHI Grant No. 16H06345 (YN, MH, and RA), 17K14336 (YN), 18H01158 (YN), 19K14654 (TN), 19H05825 (RA), 20K14390 (MH), and 20K14423 (YN). This work was supported by MEXT as ``Program for Promoting Researches on the Supercomputer Fugaku" (Basic Science for Emergence and Functionality in Quantum Matter). A part of the calculations was performed at Supercomputer Center, Institute for Solid State Physics, University of Tokyo. \end{acknowledgments}
\section{Introduction} It has recently become clear that there exists a large class of field theories which have a scaling symmetry under which both the energy density and the charge density have a non-trivial anomalous dimension. This observation has been made in studies of field theories whose dynamics can be completely solved in terms of a holographic dual based on Einstein-Maxwell-Dilaton gravity \cite{Gouteraux:2012yr,Gath:2012pg,Gouteraux:2013oca} or on probe brane constructions \cite{Karch:2014mba,Khveshchenko:2014nka}. The anomalous dimension of the energy density, as encoded in the hyperscaling violating exponent $\theta$, has long been recognized to be a quite common phenomenon. It occurs, for example, in statistical physics for theories above their critical dimension. A separate anomalous dimension $\Phi$ of the charge density was, however, unanticipated. There even exist several papers that argue that non-zero $\Phi$ is impossible, for example \cite{wen1992scaling,sachdev1994quantum}. While many of the holographic examples involved bottom-up toy models, some of the theories which produce non-zero $\Phi$ are fairly standard gauge theories in the limit of a large number of colors. For example, the general Dp/Dq systems, that is, maximally supersymmetric Yang-Mills theories in any spacetime dimension other than 3+1 coupled to matter multiplets in the fundamental representation, preserving half the supersymmetry and potentially localized on lower-dimensional planar defects, have been demonstrated to generate a non-zero anomalous dimension $\Phi$ for the conserved baryon number current \cite{Karch:2014mba}. Non-zero $\Phi$ also has potentially interesting applications for the theory of high temperature superconductors.
In our earlier work \cite{Hartnoll:2015sea} we demonstrated that transport phenomena in the strange metal phase of the cuprate family can be fitted extremely well, given just a few simple dynamical assumptions, by a critical theory based on dynamical critical exponent $z=4/3$ and $\Phi=-2/3$ together with a vanishing hyperscaling violating exponent $\theta$. The main experiment driving the necessity of non-zero $\Phi$ is the measurement of the temperature dependence of the Hall-Lorenz ratio in \cite{zhang2000determining}. The Lorenz ratio, as well as its Hall version, directly measures the charge of the basic carriers. The fact that this quantity scales in a non-trivial fashion with temperature implies that the charge of the carriers does not act as a constant but has a non-trivial temperature dependence. This is the essence of the non-zero exponent $\Phi$. More recent experimental data on the same quantity \cite{matusiak2009enhancement} showed a less clearly linear dependence of the Hall-Lorenz ratio on temperature and, in any case, seem to be inconsistent with the findings of \cite{zhang2000determining}. Pinning down this quantity experimentally should clearly be of utmost interest. If one were to interpret the data of \cite{matusiak2009enhancement} as implying a constant Hall-Lorenz ratio, the remaining transport data of the cuprates could be fit with a much more conventional scaling theory \cite{Khveshchenko:2015xea}. Irrespective of the experimental situation in the cuprates, the question of when non-zero $\Phi$ is consistent or required is clearly of theoretical importance as a basic question in quantum field theory. In our earlier work \cite{Hartnoll:2015sea} we already pointed out potential loopholes in the arguments that seemingly forbid anomalous dimensions for conserved currents. But the strongest evidence for the consistency of non-zero $\Phi$ so far still comes from holography.
Note that it is important that the theory is only scale and not conformally invariant; in the conformal case the conformal algebra alone would pin down the dimension of any conserved current to its free value. In the non-relativistic context most scale invariant theories are not conformal. The fact that to date the only known examples of non-zero $\Phi$ are based on holography is somewhat disturbing. In this work we remedy this situation by constructing explicit field theory examples with non-vanishing $\Phi$. The theories we construct will all be ``large $N$", where $N$ is to be thought of as the number of flavors. $N$ can, for example, count the different bands of a solid. As we will see, in the theories we consider scale invariance only emerges as a symmetry in the large $N$ limit. However, already at moderately large $N$ the properties of the system are very well approximated by the large $N$ scaling answer. The examples are somewhat trivial in the sense that the anomalous dimension for the current appears as a classical phenomenon. It does not arise from divergences in quantum loops, but from summing up an infinite number of contributions from the $N$ flavors in the $N \rightarrow \infty$ limit. This way our theories automatically avoid arguments based on Ward identities that have been put forward to rule out an anomalous dimension for a conserved current. The organization of this note is as follows. In the next section we present a simple toy model that exhibits how anomalous dimensions arise from large $N$ limits. We give a very simple example of how to obtain a non-vanishing hyperscaling violating exponent out of what is essentially many free systems. In section 3 we give the general construction for non-vanishing $\theta$ based on systems with many flavors (which could be strongly interacting as long as interflavor interactions are suppressed) coupled to general background fields. We give two simple physical examples in section 4.
In section 5 we generalize the construction in order to produce a non-vanishing $\Phi$, with a simple physical example for this case in section 6. We discuss finite $N$ corrections in section 7 and conclude with a few comments in section 8. \section{Hyperscaling violation from a non-relativistic multi-band theory.} \label{toymodel} Let us first demonstrate the basic idea of how to get non-trivial scaling exponents from the large-number-of-flavors limit of a multi-band (or multi-flavor) theory in a simple example. For a non-relativistic Fermi gas with dispersion relation $E(p) = p^2/(2m)$ the grand canonical free energy density at zero temperature as a function of chemical potential $\mu$ is given by \beq \label{onep} \omega_0(\mu) = - a \mu^{\frac{d+2}{2}} \eeq for positive $\mu$, and it is zero otherwise. The constant $a$ can easily be determined by filling up the energy levels up to the Fermi energy $E_F=\mu$, but the $\mu$ dependence itself is completely governed by scaling. The system has an underlying scale symmetry with $z=2$ (under which $p$ has dimension 1, $E$ has dimension 2, the mass doesn't scale and the spatial volume has dimension $-d$). Since the free energy density has dimension $d+z$, it has to scale as $\mu^{(d+2)/2}$ as indicated. A simple generalization of the above model is to include a finite off-set in the dispersion relation, \beq E(p) = \frac{p^2}{2m} + M. \eeq Clearly all $M$ does is shift the overall energy of all states, and the free energy density is given by\footnote{For $M=0$ the form of $\omega$ in \eqref{onep} implies for the energy density $\epsilon$ and the particle number density $n$ $$n= \frac{d+2}{2} a \mu^{\frac{d}{2}}, \quad \quad \epsilon = \frac{d}{2} a \mu^{\frac{d+2}{2}}.$$ With a finite off-set $M$ the particle number density only sees the difference between the chemical potential and the off-set, so $$n= \frac{d+2}{2} a (\mu-M)^{\frac{d}{2}}$$ and similarly for the kinetic energy.
However, the energy density also receives a direct contribution from the off-set, so that the full energy density is given by $$ \epsilon = \frac{d}{2} c (\mu-M)^{\frac{d+2}{2}} + n M.$$ Using $\omega=\epsilon-\mu n$, the expression \eqref{withoffset} for the free energy density follow.} \beq \label{withoffset} \omega_0(\mu) = \left \{ \begin{array}{ll} - a (\mu-M)^{\frac{d+2}{2}} & \quad \mbox{for } \mu > M \\ 0 & \quad \mbox{ otherwise} \end{array} \right . . \eeq Note that while $M$ has dimensions of energy and so formally our system is no longer scale invariant, the form of the dispersion relation is still constrained by scaling as long as we account for the fact that $M$ has dimension of energy, dimension $z=2$ that is. The dispersion relation has to take the form \beq E(M,p) = p^2 f(M/p^2) \eeq with $f(x)=(1-x)/(2m)$ for the special case of a simple off-set. Of course for a single band, since $M$ is just some overall shift of all energy levels, we can always set it to zero by a choice of origin. $M$ however becomes meaningful in a multi-band setting, where different bands start at different values of $M$. As an oversimplified example of a multi-band theory let us postulate that we have $N$ flavors of free non-relativistic electrons where the $n$-th flavor has dispersion relation \beq E_n(p) = \frac{p^2}{2m} + M_n \eeq that is, we take all the flavors to have the same effective mass but different off-sets. We order the flavors by their off-sets, that is $M_{n+1} \geq M_n$. The total free energy density for chemical potential $\mu$ is given by \beq \label{eps} \omega = -a \sum_{n=1}^{n_{max}} (\mu-M)^{\frac{d+2}{2}} \eeq where $M_{n_{max}}$ is the largest off-set less than $\mu$. In the limit of an infinite number of flavors, we can replace the sum over $n$ with an integral: \beq \omega = a \int_0^{\mu} dM g(M) (\mu-M_n)^{\frac{d+2}{2}} \eeq where $g(M)$ is the density of flavors, that is $g(M)dM$ counts how many flavors have off-set between $M$ and $M+dM$. 
This approximation is valid as long as $\mu$ is large compared to the spacing between off-sets. A very special choice for $g(M)$ is when $g(M)$ is a power law. The simplest case is a constant, that is the off-sets are equally spaced: \beq g(M) = \frac{1}{m_0} .\eeq Note that $m_0$ has dimensions of energy; it is exactly the spacing between neighboring off-sets. In this case we can do the integral and obtain \beq \label{epsans} \omega = -\frac{a}{m_0} \int_0^{\mu} dM \, (\mu-M)^{\frac{d+2}{2}} = -\frac{2 a}{(4+d) m_0} \mu^{\frac{d+4}{2}}. \eeq That is, we have a free energy density which still has a scale invariance, but this time apparently with hyperscaling violating exponent $\theta=-2$. We will confirm below that this is indeed the correct interpretation. It is important to note that for this special case for $g(M)$ the functional form of the free energy density $\omega$ is still constrained by scale invariance as long as we account for the scale dependence of the single dimensionful parameter $m_0$ which scales like an energy (that is it has dimension $z=2$). Scaling alone guarantees that \beq \label{simplescaling} \omega(\mu,m_0) = \mu^{\frac{d+2}{2}} f(\mu/m_0). \eeq Now the very fact that we wrote the energy density as an integral over $dM$ with $m_0^{-1}$ only appearing as an overall prefactor immediately tells us that $f(x) \sim x$ and so we can correctly deduce $\omega \sim \mu^{(d+4)/2}$ without even doing the integral. For general power-law density of levels we have (since $g(M) dM$ is dimensionless) \beq \label{power} g(M) = \frac{M^{y-1}}{ m_0^{y} } \eeq where $m_0$ is once again a parameter with dimension of energy that characterizes the distribution of levels. Scaling alone still guarantees that the free energy density takes the form \eqref{simplescaling}. 
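As a quick numerical sanity check of the constant-spacing result \eqref{epsans} (not part of the original derivation; the parameter values below are purely illustrative), one can sum the finite-$N$ free energy over equally spaced off-sets and fit the apparent power of $\mu$:

```python
import numpy as np

def omega_sum(mu, d=3, a=1.0, m0=0.05):
    """Finite-N free energy: sum over bands with equally spaced
    off-sets M_n = n * m0, i.e. constant level density g = 1/m0."""
    M = np.arange(0.0, mu, m0)          # off-sets below the chemical potential
    return -a * np.sum((mu - M) ** ((d + 2) / 2))

# Fit the apparent power of mu on a log-log grid; the continuum
# answer predicts omega ~ mu^{(d+4)/2} = mu^{3.5} for d=3,
# instead of the single-band value mu^{(d+2)/2} = mu^{2.5}.
mus = np.linspace(5.0, 50.0, 40)
vals = np.array([-omega_sum(mu) for mu in mus])
slope = np.polyfit(np.log(mus), np.log(vals), 1)[0]
print(f"fitted exponent: {slope:.3f}  (scaling prediction (d+4)/2 = 3.5)")
```

The fitted exponent lands on $(d+4)/2$ rather than the free value $(d+2)/2$, confirming $\theta=-2$ for equally spaced bands.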
Once again, $m_0^{-y}$ appears as an overall prefactor, so we know $f(x) \sim x^y$ and so $\omega \sim \mu^{(d+2 + 2 y)/2}$, that is we appear to have hyperscaling violating exponent \beq \label{thetatoy} \theta = - 2 y. \eeq This general idea that one can obtain a scale invariant theory by integrating over the mass parameter has also recently been exploited in \cite{pp1,pp2} where it was used to construct an ``unparticle" description of non-Fermi liquids. \section{General Construction including finite temperature and background fields} \label{general} The specific example above demonstrated that in theories with a large number of flavors we can violate naive scaling dimensions and in particular get a free energy density that appears to have a non-zero hyperscaling violating exponent $\theta$. It is very easy to generalize this idea to a generic quantum system or field theory with a large number of flavors, coupled to arbitrary background fields. In particular, we want to turn on a finite chemical potential $\mu$, a finite temperature $T$ and a background vector potential $A_i$ coupled to a conserved particle number current. We assume that each flavor has unit charge under this global particle number symmetry, so that the total current is simply the sum of all the individual flavor currents with equal weight. Each flavor can constitute a strongly coupled system itself, but we assume that the flavors are decoupled from each other so that we can simply get the physical properties of the full system by summing over flavors as above. We assume each flavor is characterized by a parameter $M_n$ with the dimension of energy and the free energy density $\omega$ of each flavor is given by \beq \label{individual} \omega(\mu,T,A_i,M_n) = T^{\frac{d+z}{z}} f(\mu/T,A_i/T^{1/z},M_n/T) \eeq That is, each flavor is scale invariant with the same dynamical critical exponent $z$ as long as one accounts for the non-trivial scaling of the mass parameter $M_n$. 
The non-relativistic many-band model of the previous subsection, where $M_n$ was the off-set of the $n$-th band, gives a simple example with $z=2$. A theory with a large number of relativistic fermions would be an example with $z=1$. $M_n$ in that case is the mass of the $n$-th flavor. In both cases, the functions $f$ are standard textbook expressions for the free Fermi gas. None of the details of $f$ will be important, other than the fact that $f$ goes to zero faster than $1/M$ at large $M/\mu$. This is expected to be the case as long as the energy of the $n$-th flavor is greater than or equal to $M$, so that its contribution to $f$ is suppressed by a large Boltzmann factor at large $M$. This certainly is true in the case of a free non-relativistic or relativistic gas. As in our simple warm-up example, in the large number of flavors limit we can get the free energy of the full system by converting the sum over flavors to an integral \beq \omega_{tot}(\mu,T,A_i, \ldots) = T^{\frac{d+z}{z}} \, \int_0^\infty dM g(M) f(\mu/T,A_i/T^{1/z},M/T), \eeq where the dots stand for all the parameters characterizing the level density $g(M)$. Returning to the special case that $g(M)$ is a power law as in \eqref{power}, characterized by a single quantity $m_0$ carrying dimension of energy, we can determine the functional form of $\omega_{tot}$ as above. We know that \begin{enumerate} \item $\omega_{tot}$ respects scaling, $\omega_{tot}(\mu,T,A_i,m_0) = T^{\frac{d+z}{z}} F(\mu/T,A_i/T^{1/z},m_0/T)$. \item The constant $m_0^{-y}$ appears in $\omega_{tot}$ only as an overall prefactor. \end{enumerate} This tells us that \beq \omega_{tot} = m_0^{-y} \, T^{\frac{d+z}{z} + y} \, \Omega(\mu/T,A_i/T^{1/z}) . \eeq This is exactly the statement that the full system has a scale invariance characterized by the same dynamical critical exponent $z$ as the single flavor system but with a hyperscaling violating critical exponent \beq \label{theta} \theta = - y z \eeq as in \eqref{thetatoy}. 
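This chain of reasoning can be tested numerically. The sketch below uses a toy single-flavor scaling function $f$ (a hypothetical choice, picked only because it decays fast at large $M/T$ as required above, not a Fermi-gas expression) and checks that $\omega_{tot}$ transforms with $\theta=-yz$ under $\mu\to\lambda^z\mu$, $T\to\lambda^z T$:

```python
import numpy as np
from scipy.integrate import quad

d, z, y, m0 = 2, 2, 1.5, 1.0   # illustrative parameter choices

def f_single(u, v):
    """Toy single-flavor scaling function f(mu/T, M/T): decays faster
    than any power at large v, as assumed in the text (hypothetical)."""
    return -u**2 * np.exp(-v)

def omega_tot(mu, T):
    # Integrate the power-law level density g(M) = M^{y-1}/m0^y against f.
    integrand = lambda M: (M**(y - 1) / m0**y) * f_single(mu / T, M / T)
    return T**((d + z) / z) * quad(integrand, 0.0, np.inf)[0]

theta = -y * z
lam = 1.7
lhs = omega_tot(lam**z * 1.3, lam**z * 0.9)      # rescale mu and T
rhs = lam**(d + z - theta) * omega_tot(1.3, 0.9)  # predicted rescaling
print(lhs, rhs)  # agreement confirms theta = -y z
```

Any $f$ with fast decay gives the same exponent, since $\theta$ is fixed by the power law in $g(M)$ alone.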
Note that in this expression the background fields $\mu$ and $A_i$ still scale according to their canonical dimensions, that is their anomalous dimension $\Phi=0$. \section{A simple physical example} \label{KK} A theory with an infinite number of flavors may sound fairly exotic, but we can give very simple physical examples of such a many band theory both for the $z=2$ case and the $z=1$ case. The structure of the dispersion relations we require is exactly what one gets from a dimensional reduction. A standard Kaluza-Klein compactification of a free relativistic fermion in $d+1$ spatial dimensions on a circle of radius $R$ gives an infinite tower of flavors in $d$ spatial dimensions of the form we postulated with $M_n=n/R$. Since the masses are all equally spaced this exactly corresponds to the case of a constant density of levels, $y=1$, and so according to \eqref{theta} we have $\theta=-1$ in this case. This is just the statement that when the chemical potential is large compared to the separation of levels, $\mu \gg 1/R$, the system behaves as a $(d+1)$-dimensional one. From the point of view of the $d$ dimensional theory this appears as a hyperscaling violating exponent $\theta=-1$! In the non-relativistic $z=2$ system we can accomplish the same effect by confining a $d+1$ dimensional system into a $d$ dimensional quantum well. If we take the confining potential to be an infinite square well of width $L$, we get exactly the many-band theory of our toy example with off-set $M_n \sim n^2/L^2$. Since the distance between levels now grows linearly with $n \sim \sqrt{M_n}$ this corresponds to $g(M) \sim 1/\sqrt{M}$ or in other words $y=1/2$. With $z=2$, \eqref{theta} once more yields $\theta=-1$. The hyperscaling violating exponent again simply encodes the higher dimensional character of the theory. 
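The level counting for the quantum well can be checked in the same spirit: summing the $z=2$ free energy over bands with $M_n\propto n^2$ (illustrative spacing) should give an apparent power $\mu^{5/2}$ for $d=2$, i.e. $\theta=-1$:

```python
import numpy as np

def omega_well(mu, d=2, eps=1e-4):
    """z=2 free energy summed over square-well bands M_n = eps * n^2;
    eps plays the role of 1/L^2 up to constants (value illustrative)."""
    n = np.arange(0, int(np.sqrt(mu / eps)) + 2)
    M = eps * n**2
    M = M[M < mu]                       # only bands below the chemical potential
    return -np.sum((mu - M) ** ((d + 2) / 2))

# y = 1/2, z = 2 predicts theta = -1, i.e. omega ~ mu^{(d+2-theta)/2},
# which is mu^{5/2} for d=2 (instead of the single-band mu^2).
mus = np.linspace(1.0, 10.0, 40)
vals = np.array([-omega_well(mu) for mu in mus])
slope = np.polyfit(np.log(mus), np.log(vals), 1)[0]
print(f"fitted exponent: {slope:.3f}  (prediction 5/2)")
```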
Of course the confining potential will in general not take this simple form, but the main point is that we can view any 3d system confined to a quantum well (like the copper oxide layers in the cuprates) as a 2d theory with an infinite number of flavors, so the system is intrinsically ``large $N$" for the purposes of the phenomena discussed here. \section{Anomalous scaling for conserved currents, electric and magnetic fields} What we have demonstrated so far is that in systems with many flavors, such as KK reductions or quantum wells, physical quantities can acquire anomalous dimensions from performing the sum over flavors. So far all we accomplished is to obtain a non-trivial hyperscaling violating exponent $\theta$. Non-vanishing $\theta$ has long been appreciated as being an important aspect of critical systems and can be realized without appealing to large $N$ theories, for example in standard critical systems above their critical dimension. A much more puzzling exponent is the anomalous dimension $\Phi$ for conserved currents and consequently for background electric and magnetic fields, which we recently proposed \cite{Hartnoll:2015sea} to play an important role in the phenomenology of the cuprates based on earlier holographic studies. Holography makes it abundantly clear that $\Phi$ is part of generic critical systems, but no field theory examples with non-vanishing $\Phi$ had been known that did not rely on the holographic duality to determine $\Phi$. We would like to demonstrate that multi-band systems can give us non-vanishing $\Phi$ just as easily as they gave us non-vanishing $\theta$. From the derivation in section \ref{general} it is clear that the reason we ended up with a vanishing $\Phi$ was that the relative strength with which the various flavors coupled to the background gauge field was equal. All flavors had the same charge. We can generalize our previous construction to flavors with non-equal charge. 
If we denote the charge of the $n$-th flavor as $e(M_n)$ we see that the free energy of the individual flavor is now given by \beq \label{individualtwo} \omega(\mu,T,A_i,M_n) = T^{\frac{d+z}{z}} f\left( \frac{e(M_n) \mu}{T}, \frac{e(M_n) A_i}{T^{1/z}},\frac{M_n}{T} \right ) \eeq in analogy with \eqref{individual}. That is, in the action for the individual flavor, $\mu$ and $A_i$ only appear in the combinations $e(M) \mu$ and $e(M) A_i$, and so any dependence on $e(M)$, $A_i$ and $\mu$ can only be in this product form. For the multi-band model we obtain, \beq \label{tobeintegrated} \omega_{tot}(\mu,T,A_i, \ldots) = T^{\frac{d+z}{z}} \, \int_0^\infty dM g(M) \, f\left( \frac{e(M) \mu}{T}, \frac{e(M) A_i}{T^{1/z}},\frac{M}{T} \right ). \eeq For the full theory to still respect any kind of scaling symmetry, we this time need both $g(M)$ and $e(M)$ to be given by power laws \beq g(M) = \frac{M^{y-1}}{ m_0^{y} }, \quad \quad e(M) = \frac{M^{\tilde{y}}}{ \tilde{m}_0^{\tilde{y}} }. \eeq $m_0$ and $\tilde{m}_0$ are parameters with dimension of energy and the power with which they appear in $g(M)$ and $e(M)$ respectively is determined by the fact that $e(M)$ is dimensionless whereas $g(M)$ has dimension of energy$^{-1}$. Following the logic of the previous sections we can fix the resulting form of the free energy \beq \label{integral} \omega_{tot} = m_0^{-y} T^{\frac{d+z}{z} + y} \, \Omega \left ( \frac{\mu}{T} \left ( \frac{T}{\tilde{m}_0} \right )^{\tilde{y}} ,\frac{A_i}{T^{1/z}} \left( \frac{T}{\tilde{m}_0} \right )^{\tilde{y}} \right ) . \eeq Concretely, we use that \begin{enumerate} \item $m_0$ appears only as an overall prefactor from $g(M)$ \item $\mu$ and $A_i$ show up in the integrand in the combination $\mu/\tilde{m}_0^{\tilde{y}}$ and $A_i/\tilde{m}_0^{\tilde{y}}$ and so they have to appear in this combination in the final answer. $\tilde{m}_0$ only appears in these combinations, so no other powers of $\tilde{m}_0$ occur. 
\item The free energy has to respect the scale invariance of the underlying theory with $m_0$ and $\tilde{m}_0$ transforming like energies. \end{enumerate} With the standard \cite{Karch:2014mba} assignments $[\mu]=z-\Phi$ and $[A_i]=1-\Phi$ we see that the final answer \eqref{integral} exactly corresponds to the form the free energy should take in a theory with \beq \theta = - y z, \quad \quad \Phi= \tilde{y} z . \eeq \section{A simple physical realization of non-zero $\Phi$} As for our theories with non-zero $\theta$, a simple example of a theory which obtains non-vanishing $\Phi$ via the construction outlined here can be obtained by looking at a Kaluza-Klein example. If we start with a relativistic field that carries charge $q$ already in $d+1$ spatial dimensions, compactification on a circle of radius $R$ will give us a tower of particles with mass $n/R$ in $d$ dimensions, every single one of which will carry charge $q$. This is the theory we discussed in section \ref{KK}. The KK-reduction itself however introduces a new $U(1)$ charge in the system. The quantized momentum along the compact direction appears as an extra global $U(1)$ charge in the $d$ dimensional theory. Under this KK $U(1)$ symmetry the particle with mass $n/R$ carries charge $n$. In the language of our construction this corresponds to $y=\tilde{y}=1$, or in other words \beq \theta=-1, \quad \Phi=1 .\eeq These dimension assignments can of course easily be understood from the higher dimensional point of view. $\theta$, as before, just signals that an extra dimension opens up. $\Phi=1$ implies that the gauge field has dimension 0 instead of its standard dimension 1. This is to be expected, since the background field $A_{\mu}$ in the $d$ dimensional theory is just the metric component $g_{\mu \phi}$ of the higher dimensional theory, where $\phi$ denotes the compact direction. 0 is indeed the standard dimension assigned to the metric tensor under scaling. 
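Both exponent assignments can be verified together numerically: with power-law $g(M)$ and $e(M)$ and a toy single-flavor function (again a hypothetical choice, used only for illustration), $\omega_{tot}$ should pick up exactly the factor $\lambda^{d+z-\theta}$ under $\mu\to\lambda^{z-\Phi}\mu$, $T\to\lambda^{z}T$:

```python
import numpy as np
from scipy.integrate import quad

d, z, y, yt = 2, 2, 1.0, 0.5           # yt stands for tilde{y}
m0, mt0 = 1.0, 1.0                     # mt0 stands for tilde{m}_0

def f_single(u, v):
    """Toy single-flavor scaling function (hypothetical), decaying
    fast at large v so the M-integral converges."""
    return -u**2 * np.exp(-v)

def omega_tot(mu, T):
    e = lambda M: (M / mt0) ** yt                  # charge e(M)
    g = lambda M: M ** (y - 1) / m0 ** y           # level density g(M)
    integrand = lambda M: g(M) * f_single(e(M) * mu / T, M / T)
    return T ** ((d + z) / z) * quad(integrand, 0.0, np.inf)[0]

theta, Phi = -y * z, yt * z
lam = 1.5
lhs = omega_tot(lam ** (z - Phi) * 1.2, lam ** z * 0.8)
rhs = lam ** (d + z - theta) * omega_tot(1.2, 0.8)
print(lhs, rhs)  # agreement confirms theta = -y z and Phi = tilde{y} z
```

Without the $\lambda^{-\Phi}$ shift in the scaling of $\mu$ the two sides no longer match, which is the numerical counterpart of the anomalous dimension for the background field.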
\section{Finite $N$ effects} Strictly speaking, the scaling symmetry with the non-trivial exponent only emerges in the large $N$, that is in the large number of flavors, limit in the examples we constructed here. However, already at moderately large $N$ the field theory is very well approximated by the large $N$ scaling answer. Scaling with the non-trivial exponents dominates the physics even at finite $N$. To demonstrate this quantitatively, let us return to our simplest toy model of section \ref{toymodel}. In figure \ref{comparison} we compare the finite $N$ answer for 1, 10 and 20 levels to the infinite $N$ scaling answer. While of course for a single level the two disagree wildly, one can see that already at moderately large numbers of levels the infinite $N$ scaling answer is an extremely good approximation to the full finite $N$ answer. \begin{figure}[t] \subfloat[$N=1$]{\includegraphics[width=2in]{p1.pdf}} \subfloat[$N=10$]{\includegraphics[width=2in]{p10.pdf}} \subfloat[$N=20$]{\includegraphics[width=2in]{p20.pdf}} \caption{\label{comparison} Free energy for $d=2$ as a function of chemical potential. Depicted is the comparison between the finite sum (solid line) with a) $N=1$, b) $N=10$ and c) $N=20$ levels to the scale invariant answer (dashed line) that emerges at large $N$. The free energy is given by \eqref{eps} with $a=1$. $m_0$ in the continuum answer \eqref{epsans} is adjusted to account for the presence of 1, 10, 20 levels in the range of $\mu$ depicted. } \end{figure} One important lesson to take away from this study of finite $N$ effects is that the correct notion of $N$ is to count the number of bands within the energy range one wants to study. For scaling to govern the free energy density within a certain range of temperatures or chemical potentials, the number of bands with energy within this range has to be large for the scaling considered here to be an approximate symmetry. 
\section{Comments} \begin{itemize} \item These examples easily avoid any theorems based on Ward identities forbidding anomalous dimensions for conserved currents. Note that the construction outlined here would already assign the currents anomalous dimensions {\it classically}. It is the infinite number of flavors/bands that allows currents to pick up an anomalous scaling transformation. The anomalous dimension here is not due to quantum effects. \item We so far neglected interactions between the bands. The flavors within a band can already have arbitrary interactions as long as they do not generate any additional scale. Inter-band interactions will almost certainly renormalize the critical exponents, but since the dimension of the currents was already unconstrained before the interactions are taken into account, any fixed point that emerges in the IR will surely not have to have $\Phi=0$. \item While it is easy to demonstrate that this construction does give non-vanishing $\Phi$, it is less clear that this is how either holography or cuprates accomplish the feat. Note however that the appearance of many flavors is natural from the point of view of holography, where the extra dimension gives infinite towers of states. The fact that cuprates have 2d layers also potentially allows an interpretation in terms of a theory with many flavors, even though in this case the notion that different flavors have different charge still would require quite unusual physics. \end{itemize} \section*{Acknowledgements} We would like to thank Claudio Chamon, Philip Phillips and Larry Yaffe for helpful discussions on related topics. Special thanks to Sean Hartnoll for numerous discussions and collaborations on the whole circle of ideas relating to $\Phi$. This work is partially supported by the US Department of Energy under grant number DE-SC0011637. \bibliographystyle{JHEP}
\section{Introduction} \label{101} Since the first detection of hot exozodiacal dust (``hot exozodi'') around Vega (\citealt{Absil2006}), about two dozen hot exozodis have been discovered using optical long baseline interferometry (\citealt{Absil2008, Absil2009, Absil2013, DiFolco2007, Defrere2011, Defrere2012, Ertel2014, Ertel2016, Nunez2017}). Presumably accumulating at or close to the sublimation radius, the dust is heated to high temperatures and its emission peaks in the near- (NIR) to mid-infrared (MIR). The detected dust-to-star flux ratios in the NIR are at a level of a few per cent or even lower (e.g. \citealt{Ertel2014}). Thus, high precision (contrast) and high angular resolution (${\sim}\unit[0.01]{as}$) are required to observe hot exozodis. The existence of hot exozodis raises questions as dust located at stellar distances of only ${\sim}\unit[0.01-1]{au}$ would be removed by radiative forces on timescales of a few years. To be\break detectable around ${\sim}20\,\%$ of main-sequence stars of all spectral types from A to K at all ages (\citealt{Ertel2014}), the dust has to be continuously replenished or to be trapped in the stellar vicinity for long times, yet the underlying mechanism has still to be identified (\citealt{vanLieshout2014, Rieke2016, Kral2017, Kimura2020, Pearce2020}).\break\vspace*{-0.4cm} Studies of hot exozodis offer a way to better understand the inner regions of extrasolar planetary systems. In addition, exozodis could help to trace the invisible sources of the dust and the putative planets at larger distances, and hence reveal the architecture of the planetary systems. On the other hand, the possible presence of small grains is a potential problem for the detection of terrestrial planets in the habitable zone (e.g. \citealt{Agol2007, Beckwith2008}). So far, hot exozodis have only been observed in \textit{H} or \textit{K}~band, thus grain sizes and grain compositions could not be constrained sufficiently. 
Previous modelling of the Spectral Energy Distribution (SED) pointed towards nano- to \mbox{sub-}micrometre sized grains of carbonaceous material; however, larger grains could not be ruled out completely (\citealt{Kirchschlager2017}). \textit{N}~band emission from more temperate (warm) dust near habitable zones has been detected (\citealt{MillanGabet2011, Mennesson2014, Ertel2018, Ertel2020}) but observations at intermediate wavelengths ($\unit[1.6]{\mu m}\lesssim\lambda\lesssim\unit[10]{\mu m}$) are required to study the potential connection of warm and hot dust. The Multi AperTure mid-Infrared SpectroScopic Experiment (MATISSE; \citealt{Lopez2014}) is a second-generation instrument at the Very Large Telescope Interferometer (VLTI), available since 2019. With a spatial resolution of a\break few mas and operating in \textit{L}, \textit{M}~and \textit{N}~band, MATISSE offers\break critical capabilities for the study of hot exozodis. In particular, MATISSE will be able to constrain the dust properties of hot exozodis (\citealt{Ertel2018b, Kirchschlager2018}). $\kappa~$Tuc (HD~7788) is an \mbox{F6 IV-V} star located in the constellation Tucana at a distance of $\unit[(21.0\pm0.3)]{pc}$ (\citealt{Gaia2018}) with an effective temperature of $\unit[6474]{K}$, stellar mass ${\sim}\unit[1.35]{M_\odot}$, and an age of ${\sim}\unit[2]{Gyr}$ (\citealt{Ammler2012, Fuhrmann2017, Tokovinin2020}). Significant hot emission was detected in 2012 and 2014 around $\kappa~$Tuc (\citealt{Ertel2014}). However, no significant excess was detected in 2013, making $\kappa~$Tuc the first hot exozodi candidate for significant NIR variability (\citealt{Ertel2016}).\break\vspace*{-0.4cm} In this letter, we present observations in \textit{L}~band of the hot exozodi around $\kappa$~Tuc (Section~\ref{sec_obs}) and a modelling of the observed visibilities and SED with the focus on constraining the dust properties in the circumstellar environment (Sections~\ref{sec3} and \ref{sec_results}). 
We discuss the results in Section~\ref{sec_conc}. \begin{figure*} \includegraphics[trim=1.6cm 1.4cm 3.4cm 2.3cm, clip=true, width=0.3582\linewidth, page = 1]{Pics/Visibility_HD7788_Paper_referee2.pdf}\hspace*{-0.08cm} \includegraphics[trim=4.2cm 1.4cm 3.4cm 2.3cm, clip=true, width=0.32\linewidth, page = 2]{Pics/Visibility_HD7788_Paper_referee2.pdf}\hspace*{-0.08cm} \includegraphics[trim=1.6cm 1.4cm 5.9cm 2.3cm, clip=true, width=0.3225\linewidth, page = 1]{Pics/Closure_phase.pdf}\\[-0.2cm] \caption{\textit{Left \& Centre}: Measured visibilities and related 1$\sigma$~errors (crosses and bars; observations on 9 July 2019 in black and on 11 July 2019 in grey) along with the expected visibility of the limb-darkened photosphere (blue line) as a function of projected baseline length $B$, for the wavelengths $\lambda=\unit[3.4]{\mu m}$ and $\unit[3.61]{\mu m}$. The thickness of the blue line corresponds to an adopted uncertainty of the stellar diameter. The best-fit model is represented by the magenta line and corresponds to a uniform circumstellar emission (Gaussian profile) with a wavelength-dependent dust-to-star flux ratio $f$. The purple line represents the best-fit of a disc modelling approach (Section~\ref{sec_results}). \textit{Right}: Calibrated closure phases for the four triangles of the observation on 9 July. The 22 spectral elements range from $\unit[3.37]{\mu m}$ to $\unit[3.85]{\mu m}$ and cover the region that show significant dust emission (see Section~\ref{sec3} for further details).\vspace*{-0.3cm}} \label{fig_visdata} \end{figure*} \section{Observations} \label{sec_obs} Interferometric \textit{LM}~and \textit{N}~band data were obtained on 9 and 11 July 2019 using the instrument MATISSE (\citealt{Lopez2014}) on the VLTI (Table~\ref{tab_obs}). The Auxiliary Telescopes (ATs) were arranged in medium configuration and the New Adaptive Optics Module for Interferometry (NAOMI; \citealt{Woillez2019}) was used. 
We simultaneously obtained visibility measurements on six baselines (baseline lengths between $B=\unit[30]{m}$ and $\unit[95]{m}$). The observations were carried out in LOW spectral resolution (R$\sim$30). In \textit{LM}~band, the fringes were dispersed over 64 spectral pixels between $3.28$ and $\unit[4.57]{\mu m}$, which correspond to about 13 true spectral channels (spectral channels are sampled over 5 spectral pixels on the \textit{LM}~band detector). Our measurements thus cover a significant part of the \textit{L}~band and the very beginning of the \textit{M}~band. In \textit{N} band, $\kappa$~Tuc ($N\sim\unit[1.5]{Jy}$) is too faint for visibility measurements with MATISSE (to date sensitivity limit ${\sim}\unit[15-20]{Jy}$; see MATISSE ESO webpage). Even though \textit{N} band interferometric data were acquired simultaneously with the \textit{LM}~band data, the independent step of \textit{N}~band photometric measurements was skipped. No \textit{N} band visibility could be computed and we discarded the \textit{N} band data. In this work, our focus is on the \textit{LM} band data only.\break\vspace*{-0.35cm} Observations$\,$of$\,\kappa$~Tuc$\,$were$\,$framed$\,$by$\,$observations$\,$of$\,$two reference stars to calibrate the instrumental contribution (CAL-SCI-CAL sequence per night).$\,$Calibrators$\,$were$\,$chosen\break from the SearchCal tool (\citealt{Chelli2016}) to be regular main-sequence stars similar in magnitude and position to $\kappa~$Tuc and with a stellar angular diameter as low as possible.\break\vspace*{-0.4cm} The MATISSE data were reduced and calibrated with the$\,$help$\,$of~SUV,$\,$the$\,$VLTI$\,$user$\,$support$\,$service$\,$of$\,$the$\,$JMMC\footnote{Website: \href{http://www.jmmc.fr/suv.htm}{http://www.jmmc.fr/suv.htm}. Here, the last version of the MATISSE pipeline was used which is publicly available at \href{http://www.eso.org/sci/software/pipelines/matisse/}{http://www.eso.org/sci/software/pipelines/matisse/}.\\[-1.3cm]}. 
A specific aspect of MATISSE is the presence of two Beam Commuting Devices (BCDs) that provide four independent\break beam configurations to calibrate$\,$out$\,$the$\,$effect$\,$of$\,$instrumental defects (\citealt{Lopez2014}): IN-IN, IN-OUT, OUT-IN, and OUT-OUT. A basic observation cycle in \textit{LM}~band thus consists of four 1-min interferometric + photometric exposures, each of them being associated with one BCD position. Here, we focus on the absolute visibilities $V$, which requires us to properly examine the four exposures for their visibility accuracy. For the observation on 9~July it turned out that the OUT-OUT exposure had to be discarded as the corresponding visibilities are inconsistent (at the $3\sigma$ level) with the visibilities of the three other exposures, for most of the baselines. For the same reason, we discarded the IN-IN exposure of the 11 July observation. \begin{table} \vspace*{-0.3cm} \centering \caption{Observations of $\kappa$~Tuc (HD~7788) with VLTI/MATISSE (Program No. 0103.C-0725(A); PI: F. Kirchschlager)\vspace*{-0.15cm}} \begin{tabular}{ l l l l l} \hline ID & Date & Config. & Seeing &Calib.${}^{\star}$\\\hline A & 2019/07/09 & K0-G2-D0-J3 & $\unit[0.8]{as}$ & 1, 2\\ B & 2019/07/11 & K0-G2-D0-J3 & $\unit[0.8]{as}$ & 3, 4\\ \hline \multicolumn{5}{l}{\textbf{Notes.} $({}^{\star})$ Calibrator stars correspond to}\\ \multicolumn{5}{l}{(1) HD 3750 (K1III); (2) HD 8094 (K4III);}\\ \multicolumn{5}{l}{(3) HD 4138 (K4III); and (4) HD 8315 (K0III).}\\[-0.3cm] \end{tabular} \label{tab_obs} \end{table} Besides the `short-term' errors of the pure fundamental source noise, thermal background and detector readout noise, broadband errors on the photometry arise due to variations of the interferometric transfer function as well as due to an imperfect subtraction of the thermal background, which affect the visibility on a timescale of $\sim$min. 
The transfer function shows variations of less than 2 per cent on average, which is thus not a limiting factor for the visibility accuracy. We take the mean of the three exposures as final visibility for each spectral pixel and their `long-term' error is estimated by the standard deviation of the three exposures. Hence, the data set comprises $2\,\text{(observations)}\,\times\,6\,\text{(baselines)}$ = 12 independent visibility measurements for each wavelength. The final calibrated data set of two exemplary wavelengths plus the closure phase are shown in Fig.~\ref{fig_visdata}.\break\vspace*{-0.7cm} \section{Analysis of the MATISSE data} \label{sec3} When we compare the calibrated data to the expected visibility of the stellar photosphere, a visibility deficit is revealed. A stellar companion can be rejected as a source of this deficit as the closure phases are close to zero (Fig.~\ref{fig_visdata}), in agreement with measurements by \cite{Marion2014}. Therefore, the visibility deficit must be caused by circumstellar emission in the field-of-view (FOV). We follow a two-step approach to determine the excess and the properties of the circumstellar material. In the first step (this section), the calibrated visibilities $V$ are fitted by a model consisting of a limb-darkened photosphere surrounded by a uniform disc emission filling the entire FOV of \mbox{MATISSE}. In the second step (Section~\ref{sec_results}), the fluxes derived in the first step are fitted to a disc model where the dust is arranged in a narrow ring and the grains' optical properties are considered. The two-step approach has been used in previous studies to constrain the dust properties (\citealt{Absil2006, Absil2009, DiFolco2007, Absil2008, Defrere2011, Lebreton2013, Kirchschlager2017}) and we will demonstrate at the end of Section~\ref{sec_results} that the visibilities of the disc model are consistent with the MATISSE data, which justifies this approach. 
Using the flux ratio $f$ between the integrated circumstellar and the stellar photospheric emission, the combined visibility with contributions from the bare photosphere and from the circumstellar emission is (\citealt{DiFolco2007}) \begin{align} V_{(\star+\text{CSE})} (B)&=\,(1-f)\,V_\star(B) + f\,V_\text{CSE}(B),\,\text{where}\\ V_\star(B) &= \frac{6}{3-u_\lambda} \left(\left(1-u_\lambda\right)\frac{J_1(x)}{x} + u_\lambda \sqrt{\frac{\pi}{2}} \frac{J_{1.5}(x)}{x^{1.5}}\right) \label{eq1}\\ \text{and}\hspace*{0.2cm}V_\text{CSE}(B) &= \exp{\left(-\frac{x^2_\text{FOV}}{4 \ln{2}}\right)} \label{eq2} \end{align} are the visibilities of a limb-darkening stellar model (\citealt{HanburyBrown1974}) and of the circumstellar emission (symmetric Gaussian). Here, $x =\pi \Theta_\star B/\lambda$, $\Theta_\star=\unit[(0.739\pm0.011)]{mas}$ is the stellar diameter (\citealt{Ertel2014}), $u_\lambda=0.22$ is the linear limb-darkening coefficient in \textit{LM}~band (\citealt{Claret1995}), $J_1(x)$ and $J_{1.5}(x)$ are Bessel functions, $x_\text{FOV} =\pi \Theta_\text{FOV}B/\lambda$, and $\Theta_\text{FOV}=\unit[0.6]{as}$ is the physical FOV in \textit{LM}~band which is equivalent to $\unit[6.3]{au}$ in radius at the distance of $\kappa$~Tuc. Under typical seeing conditions, the sensitivity along the FOV follows a Gaussian profile with a full width at half maximum of the size of the FOV, FWHM$= \unit[0.6]{as}$. The dust-to-star flux ratio $f$ is the only free parameter of our model which has to be fitted for each wavelength. To find the best-fit model, the calibrated MATISSE data of both observation nights are combined and fitted together (12 visibilities for each wavelength), the reduced $\chi^2$ is calculated as a function of $f$ and minimised. 
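The model of Eqs.~(\ref{eq1}) and (\ref{eq2}) is simple to implement; a minimal Python sketch (stellar parameters taken from the text; the baseline and flux ratio in the example are illustrative, not fitted values):

```python
import numpy as np
from scipy.special import jv

# Parameters from the text
theta_star = 0.739e-3 / 206265.0   # stellar diameter [rad] (0.739 mas)
theta_fov  = 0.6 / 206265.0        # FOV FWHM [rad] (0.6 as)
u_ld = 0.22                        # linear limb-darkening coefficient

def V_star(B, lam):
    """Limb-darkened photospheric visibility (Hanbury Brown et al. form)."""
    x = np.pi * theta_star * B / lam
    return 6.0 / (3.0 - u_ld) * ((1.0 - u_ld) * jv(1, x) / x
           + u_ld * np.sqrt(np.pi / 2.0) * jv(1.5, x) / x**1.5)

def V_cse(B, lam):
    """Gaussian circumstellar emission filling the FOV."""
    x_fov = np.pi * theta_fov * B / lam
    return np.exp(-x_fov**2 / (4.0 * np.log(2.0)))

def V_total(B, lam, f):
    """Combined visibility for dust-to-star flux ratio f."""
    return (1.0 - f) * V_star(B, lam) + f * V_cse(B, lam)

# Example: at 3.4 micron and B = 60 m the circumstellar Gaussian is
# fully resolved (V_cse ~ 0), so the combined visibility is depressed
# by almost exactly f relative to the photospheric value.
lam, B, f = 3.4e-6, 60.0, 0.06
print(V_star(B, lam), V_total(B, lam, f))
```

Because the halo is fully resolved at these baselines, the visibility deficit directly measures $f$; fitting $f$ per spectral channel then amounts to a one-parameter $\chi^2$ minimisation over the 12 measurements.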
The statistical errors of $f$ are obtained by evaluating the probability distribution $p\left(\chi^2\right)=\exp{\left(-\left(\chi^2 -\chi^2_\text{best-fit}\right)\right)}$ around the best-fit model and by determining the corresponding confidence levels of $p$. The visibility distribution of the best-fit model has been overlaid on the plots of the calibrated data (Fig.~\ref{fig_visdata}, magenta line). The results of the fitting procedure are displayed as insets together with the 1$\sigma$ error of the flux ratio, $\pm \Delta f$, and the\break reduced $\chi^2$. The detection of the flux is significant when $f/|\Delta f|\ge3$, which is the case for 22 wavelengths between 3.37 and $\unit[3.85]{\mu m}$, all of which lie in the \textit{L}~band. This is the first time that hot exozodiacal dust has been detected in this waveband. The flux ratios of about $\unit[5-7]{\%}$ are distinctly higher than the ratios known from \textit{H}~and \textit{K}~band for typical\break hot exozodis ($\unit[{\sim}1]{\%}$; \citealt{Absil2013,Ertel2014}). On the other hand, no significant (${<}3\sigma$) emission is detected at\break wavelengths above $\unit[3.85]{\mu m}$. This is not because of larger uncertainties of the individual exposures but due to an increased dispersion of the data points between the exposures. The visibilities at wavelengths below $\unit[3.37]{\mu m}$ increase to larger values and show either no significant deficit or no deficit at all. Considering the emission of a stellar black-body ($T_\text{eff}=\unit[6474]{K}$) as a weighting factor for $f$, we calculated the flux of the significant circumstellar emission (Fig.~\ref{fig_SED}). The fluxes show a steeply decreasing slope $F_\nu\propto \lambda^{-\alpha}$ with $\alpha= 3.92_{-1.92}^{+2.68}$. Two features are visible in the spectrum at $\unit[3.5]{\mu m}$ and $\unit[3.75]{\mu m}$.
However, these features are not significant, as indicated by the distinctly higher values of the reduced $\chi^2$ of $2.491$ for $\lambda =\unit[3.5]{\mu m}$ and $2.306$ for $\lambda =\unit[3.75]{\mu m}$, while it is about $1-1.5$ for neighbouring wavelengths; they are produced by a single visibility measurement on 11 July.\vspace*{-0.3cm} \begin{figure} \centering \includegraphics[trim=1.7cm 1.35cm 3.4cm 5.9cm, clip=true, width=0.9\linewidth, page = 1]{Pics/Excess_Paper.pdf}\\[-0.1cm] \caption{Spectral energy distribution (SED) of the circumstellar dust inferred from \textit{L}~band observations with ${>}3\sigma$ significance.\vspace*{-0.3cm}} \label{fig_SED} \end{figure} \section{Modelling of dust properties} \label{sec_results} Besides the newly obtained MATISSE data, the hot exozodi of $\kappa$~Tuc has been observed three times in \textit{H}~band using VLTI/PIONIER (\citealt{Ertel2014, Ertel2016}). In 2012 and 2014, the circumstellar emission was significant with a flux ratio of $f=\unit[(1.43\pm0.17)]{\%}$ and $\unit[(1.16\pm0.18)]{\%}$, respectively, while the emission in 2013 was insignificant with $f=\unit[(0.07\pm0.16)]{\%}$. A blackbody fit to the SED is unsatisfactory, and we can exclude once and for all a stellar companion as the source of the circumstellar emission. Instead, we fit the SED with a disc model based on the approach of \cite{Kirchschlager2017}. In order to avoid parameter degeneracies, we keep the disc model as simple as possible and the number of free parameters as low as possible. Similar to the ring model of \cite{Absil2009} and \cite{Defrere2011}, our model is represented by a narrow face-on ring with inner radius $R$, outer radius $1.1\,R$, dust mass $M_\text{dust}$, and number density $n(r)\propto r^{-1}$. The ring is composed of compact spherical dust grains with radius $a$. Spherical grains with $2\,a\lesssim \lambda \lesssim 10\,a$ show in general strong interferences in the emission SED which can affect the fitting result.
In order to reduce this effect, we use a narrow size distribution of width $\Delta a =0.3\,a$ around $a$, where the presence of different sizes causes the interferences to cancel each other out (e.g. \citealt{Kirchschlager2019}). The radial distance is varied in the range $R\in[\unit[0.03]{au}, \unit[3]{au}]$, the grain size in the range $a\in [\unit[1]{nm}, \unit[5]{\mu m}]$, and the dust mass in the range $M_\text{dust}\in[\unit[10^{-15}]{M_\oplus},\unit[10^{-3}]{M_\oplus}]$; three carbonaceous materials and one silicate are considered: amorphous carbon (\citealt{Rouleau1991}), amorphous carbonaceous dust analogues (\citealt{Jager1998}), crystalline graphite (\citealt{DraineLee1984, DraineLaor1993}), and astronomical silicate (\citealt{WeingartnerDraine2001}). The grains' optical properties are calculated on the basis of Mie theory. Maps of single-scattering and re-emission are generated using an enhanced version of the tool \textsc{debris} (\citealt{Ertel2011}). Dust located within an inner working angle $\Theta=\lambda / (\unit[4\times100]{m})$ is assumed to be unresolved and its radiation is removed from the maps. The heterogeneous sensitivity along the FOV is taken into account by multiplying the simulated maps with a Gaussian function with a FWHM of $\unit[0.4]{as}$ and $\unit[0.6]{as}$ for \mbox{PIONIER} and MATISSE, respectively. Finally, the SED is calculated from the synthesised maps and fitted to the observational data by minimising the reduced $\chi^2$ and by varying the four free parameters $R$, $a$, $M_\text{dust}$, and the dust material. The corresponding confidence levels are again calculated by evaluating the probability function $p\left(\chi^2\right)=\exp{\left(-\left(\chi^2 -\chi^2_\text{best-fit}\right)\right)}$ around the best-fit model. The main challenges are the \textit{H}~band variability and the time that passed between the PIONIER and MATISSE observations, and we follow different fitting approaches to account for them.
Firstly (a), we fit only the 22 significant \textit{L} band data points from MATISSE ($\lambda\in[\unit[3.37]{\mu m},\unit[3.85]{\mu m}]$). These data provide reliable information on the dust at the time of the MATISSE observations. Secondly, we assume that the three \textit{H} band measurements in 2012 to 2014 provide a probable range of the \textit{H} band excess during the MATISSE observations. We thus explore three scenarios when fitting MATISSE and PIONIER data together: (b) We include all three PIONIER measurements as this will give us the most likely result and a reasonable uncertainty on the dust parameters from the uncertainty of the PIONIER data. (c)~We include only the two significant \textit{H} band excesses from 2012 and 2014. This provides a result where the \textit{H} band excess is on the high side of the observed range. (d)~We include only the \textit{H} band non-detection from 2013, which gives the strongest upper limit available for the low side of \textit{H} band excesses observed. The results of our four fits are summarised in Table~\ref{tab_res} and illustrated in Figs~\ref{fig_bestSED} and~\ref{fig_bestfit}. \begin{figure} \centering \includegraphics[trim=1.75cm 1.35cm 3.4cm 5.9cm, clip=true, width=0.9\linewidth, page = 1]{Pics/SED_all.pdf}\\[-0.1cm] \caption{SED of the circumstellar dust emission around $\kappa$~Tuc composed of the \textit{H}~band (grey) observations from 2012-2014 (\citealt{Ertel2014, Ertel2016}) and \textit{L}~band observations (magenta) from 2019 (this work).
The solid lines represent the SEDs of the best-fit\break models for amorphous carbon taking into account four different fitting approaches: (a) Neglecting the NIR~fluxes (black), (b) considering all \textit{H}~band data (blue), (c) considering the 2012 and 2014 \textit{H}~band data (red), (d) considering 2013 \textit{H}~band data (yellow).\vspace*{-0.4cm}} \label{fig_bestSED} \end{figure} \begin{figure} \centering \includegraphics[trim=1.8cm 3.6cm 3.0cm 1.9cm, clip=true, width= 0.9\linewidth, page = 1]{Pics/Parameterraum_abc_0.pdf} \includegraphics[trim=1.8cm 3.6cm 3.0cm 1.9cm, clip=true, width= 0.9\linewidth, page = 1]{Pics/Parameterraum_ac_b.pdf}\\[-0.1cm] \caption{Reduced $\chi^2$ maps of the SED modelling of the hot exozodi of $\kappa$~Tuc as a function of grain size $a$ and disc radius $R$ for the four different fitting approaches (a) - (d). The white crosses indicate the best-fit models and the black solid lines the $1\sigma$ and $3\sigma$ confidence levels. The dust temperatures $T_\text{dust}=\unit[1000]{K}$, $\unit[1500]{K}$, and $\unit[2000]{K}$ are shown as white dashed lines. The dust material is amorphous carbon.\vspace*{-0.3cm}} \label{fig_bestfit} \end{figure} \begin{table} \vspace*{-0.3cm} \centering \caption{Best fitting results derived from SED modelling.\vspace*{-0.25cm}} \begin{tabular}{ l l l l l} \hline Appr. 
&$\hspace*{-0.3cm}$ $a$ [$\mu$m] &$\hspace*{-0.3cm}$ $R$ [au] &$\hspace*{-0.3cm}$ $T_\text{dust}$ [K] &$\hspace*{-0.3cm}$ $M_\text{dust}\,[10^{-9}\,\text{M}_\oplus]$\\\hline (a) No NIR &$\hspace*{-0.3cm}$ 0.58 &$\hspace*{-0.3cm}$ 0.031 &$\hspace*{-0.3cm}$ 2300 &$\phantom{1}$0.54 \\ (b) 2012-2014 &$\hspace*{-0.3cm}$ 0.58 &$\hspace*{-0.3cm}$ 0.13 &$\hspace*{-0.3cm}$ 1260 &$\phantom{1}$3.11 \\ (c) 2012$\,$\&$\,$2014 &$\hspace*{-0.3cm}$ 0.58 &$\hspace*{-0.3cm}$ 0.1 &$\hspace*{-0.3cm}$ 1430 &$\phantom{1}$2.0\\ (d) 2013 &$\hspace*{-0.3cm}$ 0.58 &$\hspace*{-0.3cm}$ 0.29 &$\hspace*{-0.3cm}$ $\phantom{1}$940 &10.4 \\ \hline\\[-0.7cm] \end{tabular} \label{tab_res} \end{table} For the first approach (a), all considered dust materials are able to reproduce the MIR slope; however, the disc radius and the grain size are weakly constrained with large 1$\sigma$ errors. For amorphous carbon, the best-fit model has a disc radius $R=\unit[0.031]{au}$, grain size $a=\unit[0.58]{\mu m}$, and mass $M_\text{dust}=\unit[0.54\times10^{-9}]{M_\oplus}$; the corresponding temperature is $\unit[2300]{K}$. For the second approach (b), we find amorphous carbon to be the best-fit material, which is able to reproduce the NIR and MIR fluxes and the MIR slope. The radiation is dominated by thermal re-emission, which agrees with the lack of scattered radiation detected in polarisation observations (\citealt{Marshall2016}). The best result ($\chi^2=2.199$) is obtained for the grain size $a=\unit[0.58]{\mu m}$, disc radius $R=\unit[0.13]{au}$, and mass $M_\text{dust}=\unit[3.11\times10^{-9}]{M_\oplus}$, and the dust temperature amounts to $T_\text{dust}{\sim}\unit[1260]{K}$. However, within the 1$\sigma$ confidence level, the grain size is not constrained by our modelling, and both nanometre-sized grains and those as large as a few micrometres can reproduce the observations.
The inner disc radius is limited to $\unit[0.045]{au}<R<\unit[0.52]{au}$ within 1$\sigma$ confidence, which corresponds to dust temperatures between $\unit[900]{K}$ and $\unit[1750]{K}$. Besides amorphous carbon, the three other materials (amorphous carbonaceous analogues, crystalline graphite, astronomical silicate) show fitting results which are only slightly weaker ($\chi^2=2.325-2.387$). For the third and fourth approach (c and d), amorphous carbon is adopted as dust material. The best-fit model of the third approach has a disc radius $R=\unit[0.1]{au}$, the dust temperature is $T_\text{dust}{\sim}\unit[1430]{K}$, and the dust mass is $M_\text{dust}=\break \unit[2.0\times10^{-9}]{M_\oplus}$, and that of the fourth approach is $R=\unit[0.29]{au}$, $T_\text{dust}{\sim}\unit[940]{K}$, and $M_\text{dust}=\unit[10.4\times10^{-9}]{M_\oplus}$. Within the $1\sigma$ confidence level the dust location and temperature of both approaches are limited to $\unit[0.032-1.18]{au}$ and $\unit[600-2000]{K}$. The grain size of both approaches amounts again to $a=\unit[0.58]{\mu m}$ but is not constrained within the 1$\sigma$ confidence level. The dust properties and the SED of approach (b) are, as expected, intermediate between those of approaches (c) and (d). Finally, we have to verify that the disc model is compatible with the interferometric data. We take the maps of the best-fits of the four approaches and compute their interferometric signal. The closure phases of all disc models are zero as a result of the face-on orientation. In Fig.~\ref{fig_visdata} the computed visibilities of the disc model are shown as a function of baseline for approach (c). We can see that they approximate both the observational data and the visibilities of the uniform circumstellar emission. In particular, the visibility deficits of both circumstellar emission models (disc ring and Gaussian) compared to the pure stellar photosphere are similar.
The main difference is the oscillations occurring for the disc model, which are caused by the limited extension of the disc ring and which have also been noticed in the studies of \cite{Absil2009} and \cite{Kirchschlager2018}. In summary, we conclude that the disc model is in line with the interferometric data.\vspace*{-0.6cm} \section{Discussion and Conclusions} \label{sec_conc} In this letter, we presented the first detection of hot exozodiacal dust emission in \textit{L}~band. We used the new instrument MATISSE at the VLTI to observe the visibilities of $\kappa$~Tuc at six baselines. Using analytical solutions of a limb-darkened photosphere surrounded by uniform disc emission, we were able to derive significant (${>}3\sigma$) dust-to-star flux ratios of $\unit[5-7]{\%}$ in the wavelength range $\unit[3.37]{\mu m}-\unit[3.85]{\mu m}$. Since the measured closure phases are close to zero, the \mbox{MATISSE} data strongly support the scenario that the excess is caused by circumstellar dust emission and not by a companion. The results present a further confirmation of the existence of the hot exozodiacal dust around $\kappa$~Tuc that has been detected previously (\citealt{Ertel2014,Ertel2016}). In particular, this detection confirms that the temporal variability seen in previous PIONIER observations is caused by a variability of the dust properties. An explanation for the origin of the variability is beyond the scope of our study. The newly derived \textit{L}~band fluxes and the previously published \textit{H}~band fluxes provided the basis for a SED modelling. The best fits were obtained for amorphous carbon grains of size $a=\unit[0.58]{\mu m}$, though other carbonaceous materials and astronomical silicate cannot be ruled out. Moreover, the grain size cannot be confined within the $1\sigma$ confidence level.
We note that the theoretical blow-out size of carbon grains around an F6 star amounts to $a_\text{BO}{\sim}\unit[1.4]{\mu m}$ (\citealt{Kirchschlager2013}), and smaller grains should be blown out of the system. However, this value neglects additional trapping mechanisms and has to be considered with caution (see e.g.~\citealt{Kral2017} for discussion). Since the \textit{H}~band data revealed a temporal variability, we combined them with the \textit{L}~band data in different ways. Depending on the approach, the best fits are obtained for a narrow dust ring at a stellar distance in the $\unit[0.1-0.29]{au}$\break range, and thus with a temperature between $\unit[940]{K}$ and $\unit[1430]{K}$\break and total dust mass between $\unit[2\times10^{-9}]{M_\oplus}$ and $\unit[10.4\times10^{-9}]{M_\oplus}$. Within the $1\sigma$ confidence level dust location and temperature are constrained to $\unit[0.032-1.18]{au}$ and $\unit[600-2000]{K}$. The MATISSE observations open a new window to study hot exozodis. Though the \textit{L}~band data alone can hardly determine the dust properties, the combination of \textit{L}~band and NIR~data (\textit{H}~band) constrains the stellar distance of the emission. For a better understanding of the dust properties, and in particular of the location of the hot exozodi emission, simultaneous observations of $\kappa$~Tuc in the NIR and MIR domain are required in the future. The NIR flux has varied with a period of ${\sim}12$ months between 2012 and 2014. Therefore, \mbox{PIONIER} and \mbox{MATISSE} observations conducted within a few months will allow us to further constrain the dust location, and several sequences of these combined observations will help to understand the flux variability. Observations with VLTI/GRAVITY in \textit{K}~band could support the study of the hot exozodi by filling the gap in the SED at an intermediate wavelength.
The detection or non-detection of dust emission in \textit{N}~band using MATISSE observations of higher quality can potentially determine or rule out the presence of silicate material due to the prominent silicate feature at $\lambda\unit[{\sim}10]{\mu m}$. Finally, high spectral resolution observations by MATISSE can be used to trace \textit{LM}~band features that are consistent with certain materials. We note that $\kappa$~Tuc has a declination of $\delta\approx-69^\circ$, which is too far south to be visible for CHARA/FLUOR (\textit{K}~band) or the Large Binocular Telescope Interferometer (\textit{N}~band). We have proven in this study that MATISSE offers the required sensitivity and spatial resolution for the observation of hot exozodis. In the future, the \textit{L}~band and upcoming \textit{M}~band observations will allow us to determine the properties of other hot exozodis. Provided that a dust-to-star flux ratio of up to $\unit[7]{\%}$ in \textit{L}~band is a frequent phenomenon, MATISSE will most likely allow the discovery of new hot exozodis.\vspace*{-0.7cm} \section*{Acknowledgements} \vspace*{-0.2cm}Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO ID 0103.C-0725(A). FK was supported by European Research Council Grant SNDUST ERC-2015-AdG-694520. SW and AVK acknowledge support by the DFG through grants WO~857/15-2 and KR~2164/15-2. This research has benefited from the help of SUV, the VLTI user support service of the Jean-Marie Mariotti Center (\href{http://www.jmmc.fr/suv.htm}{http://www.jmmc.fr/suv.htm}).\\[-1.1cm] \section*{Data availability} \vspace*{-0.2cm}The data underlying this article will be shared on reasonable request to the corresponding author. The raw data are publicly available at the ESO archive and at the optical interferometry database of the JMMC. \\[-1.1cm] \bibliographystyle{mnras}
\section{Introduction} Group environment plays a key role in the evolution of the overall galaxy population in the Universe. Many works in the literature have studied galaxies in groups to understand how this particular environment affects galaxies and their properties (e.g. \citealt{m02,eke04b,balogh04,zmm06,weinmann06,yang09,robot10}). Understanding how the different processes act upon galaxies requires characterising not only how galaxy properties are related to the environment but also how they are related to the motion of galaxies therein. The action of the different physical mechanisms depends on the dynamics of galaxies within systems (e.g. \citealt{yepes91,fusco98,abadi}). There is evidence that some properties of galaxy systems are closely related to galaxy dynamics (e.g. \citealt{whit93,adami98,biviano02,lares04,ribeiro10}). A possible way to characterise the dynamical state of a galaxy group is to analyse its velocity distribution. It is known that a Gaussian velocity distribution is indicative of a group in dynamical equilibrium, while departures from Gaussianity may indicate that perturbative processes are at work \citep{menci96,ad}. However, it is difficult to determine whether a given velocity distribution differs significantly from a Gaussian, especially when studying small systems such as galaxy groups with only a few galaxy members. In a recent work, \citet{ad} have demonstrated that a reliable distinction can be made between Gaussian and non-Gaussian groups even for those with low group membership. They conclude that the Anderson-Darling (A-D) goodness-of-fit test is the most reliable statistic to distinguish between relaxed and dynamically disturbed systems, even for those with at least 5 galaxy members. Therefore, this statistical method is a very suitable tool to analyse the internal dynamics of a system.
Recently, \citet{zm11} (hereafter ZM11) used the Seventh Data Release of the Sloan Digital Sky Survey (hereafter SDSS DR7; \citealt{dr7}) to identify groups of galaxies and study several dependencies of the luminosity function (LF). They found that the characteristic magnitude brightens and the faint end slope becomes steeper as a function of mass. This change in the luminosity function is mainly due to the red spheroids and the varying number contributions of the different galaxy types. They also found evidence of luminosity segregation for massive groups. Moreover, the mass trend of the LF is much more pronounced for groups located in low density regions. However, the effect of the internal dynamics of groups on the galaxy LF is an issue that has not been fully addressed. Therefore, in this paper we extend the work by ZM11 by studying the link between the LF and the dynamical state of groups by means of their galaxy member velocity distributions using the A-D test. The layout of this paper is as follows. In section 2 we describe the group sample. The analysis of the LFs is in section 3. Finally, in section 4 we discuss the results. \section{The sample} For this work, we use the sample of groups constructed by ZM11. This sample has been identified in the Main Galaxy Sample (MGS; \citealt{mgs}) of SDSS DR7, which comprises galaxies down to an apparent magnitude limit of $17.77$ in the $r$ band. In ZM11, the group identification was performed following \citet{mz05}: firstly, a standard Friends-of-Friends ({\em fof}) algorithm links MGS galaxies into groups; and secondly, the identification of rich groups is improved by means of a second identification, using a higher density contrast, on galaxy groups which have at least ten members. The latter is done in order to split merged systems or to eliminate spurious member detections (see \citealt{diaz05}).
The method for estimating group centre positions was refined for groups with at least ten members using an iterative procedure developed by \citet{diaz05}. Due to the well known incompleteness of the MGS for ${\it r}<14.5$, ZM11 excluded galaxies brighter than $14.5$. The linking parameters for the {\em fof} algorithm were set to have a transverse linking length which corresponds to a contour over-density of $\delta \rho/\rho=200$ and a line-of-sight linking length of $200~\rm{km~s^{-1}}$. As in \citet{merchan02}, group virial masses were computed as ${\cal M}=\sigma^2R_{\rm vir}/G$, where $R_{\rm vir}$ is the virial radius of the system, and $\sigma$ is the velocity dispersion of member galaxies \citep{limber60}. The velocity dispersion $\sigma$ was estimated using the line-of-sight velocity dispersion $\sigma_{v}$, $\sigma=\sqrt{3}\sigma_v$. To compute $\sigma_v$ we used the methods described by \citet{beers90}, applying the biweight estimator for groups with richness $N_{gal}\ge 15$ and the gapper estimator for poorer systems. The final group sample comprises 15,961 groups with at least 4 members, adding up to 103,342 galaxies. The group sample has a median redshift, velocity dispersion, virial mass and virial radius of 0.09, $193 \ \rm{km~s^{-1}}$, $2.1\times10^{13} \ M_{\odot} \ h^{-1}$, and $0.9 \ h^{-1} {\rm{Mpc}}$, respectively. \begin{figure} \begin{center} \includegraphics[width=70mm]{f1.eps} \caption{ The normalised line-of-sight velocity distribution of galaxies in Gaussian ({\em upper panel}) and Non-Gaussian ({\em lower panel}) groups. Solid lines are the fitting functions describing the distributions (see text for details). } \label{fig1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=70mm]{f2.eps} \caption{ The group total absolute magnitude vs. virial mass for ZM11 groups which have at least 5 members. 
The {\em thick solid line} is the least square linear fit between $M^{group}_{^{0.1}r}$ and $\log({\mathcal M}_{vir})$. {\em Vertical line}, ${\mathcal M}_{vir}=5\times 10^{13}M_{\odot}h^{-1}$, is the high mass cut-off, while {\em horizontal line}, $M^{group}_{^{0.1}r}-5\log(h)=-23.5$, is the corresponding high luminosity cut-off obtained from the estimated linear relation and the high mass cut-off value. } \label{fig2} \end{center} \end{figure} Galaxy magnitudes used throughout this paper are Petrosian, are in the AB system and have been corrected for Galactic extinction using the maps by \citet{sch98}. Absolute magnitudes have been computed assuming a flat cosmological model with parameters $\Omega_0=0.3$, $\Omega_{\Lambda}=0.7$ and $H_0=100~h~{\rm km~s^{-1}~Mpc^{-1}}$ and $K-$corrected using the method of \citet{blantonk}~({\small KCORRECT} version 4.1). We have also included evolution corrections to these magnitudes following \citet{blantonlf}. We have adopted a band shift to a redshift $0.1$ for the $r$ band (hereafter $^{0.1}r$), i.e. to approximately the mean redshift of the main galaxy sample of SDSS. \begin{figure} \includegraphics[width=90mm]{f3.eps} \caption{STY best fitting Schechter function parameters in the $^{0.1}r$ band as a function of group mass ({\em left panels}) and group total absolute magnitude ({\em right panels}) for the subsamples of G ({\em filled circles}) and NG ({\em open circles}) groups. {\em Upper panels} show the characteristic absolute magnitude, while {\em lower panels} show the variation of the faint end slope. Vertical error bars are the projection of the 1$\sigma$ joint error ellipse onto the $\alpha$ and $M^{\ast}$ axes. Horizontal error bars are the 25th and 75th percentiles in each mass/luminosity bin. } \label{fig3} \end{figure} To distinguish between relaxed groups with Gaussian velocity distributions and groups with non-Gaussian dynamics we adopted the A-D goodness-of-fit test.
In a recent study, \cite{ad} have demonstrated that the A-D test is one of the most reliable and powerful tests to measure departures from an underlying Gaussian distribution. This test does not require binning or graphical analysis of the data. From the outcome of the A-D test, we classify groups into two subsamples: the non-Gaussian (NG) groups are those which have a confidence level above $90\%$ of not having a Gaussian velocity distribution, while Gaussian (G) groups are those whose confidence level of not having a Gaussian velocity distribution is below $50\%$. Since the A-D test is reliable when data sets have at least 5 points \citep{ad}, for the purposes of this work we restrict the ZM11 sample to groups with at least 5 members. From the total of 9,387 groups with at least 5 galaxy members, 479 are classified as NG while 5,250 as G groups. In Fig. \ref{fig1} we show, for our samples of G and NG groups, the stacked distribution of the radial velocity of the galaxies ($V$) relative to their parent group radial velocity ($V_{CM}$), normalised to the group velocity dispersion ($\sigma_v$). We perform a fitting procedure to describe the behaviour of the velocity distributions clearly. It can be seen ({\em solid lines}, Fig. \ref{fig1}) that the stacked velocity distribution of G groups is well represented by a Gaussian function ({\em upper panel}), while the NG groups show clear departures from a single Gaussian function, being well fit by the sum of two Gaussian functions ({\em lower panel}). The small asymmetry in the velocity profile of NG groups is due to the only 4 groups that have more than 90 members. Among them, 3 have a left-skewed radial velocity distribution, and 1 group has a right-skewed one. By excluding these large groups, the stacked radial velocity distribution of NG groups becomes very close to symmetrical, still non-Gaussian and well fit by the sum of two Gaussian functions displaced from each other.
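The G/NG classification described above can be sketched with SciPy's Anderson-Darling implementation. This is a minimal illustration, not the code used in this work; since SciPy only tabulates critical values at the 15, 10, 5, 2.5 and 1 per cent significance levels, the 15 per cent value is used here as a conservative proxy for the $50\%$ threshold quoted above:

```python
import numpy as np
from scipy.stats import anderson, norm

def classify_group(velocities):
    """Classify a group's line-of-sight velocities with the A-D test.
    'NG' requires rejecting Gaussianity above the 90% level (10% significance);
    'G' uses the 15% critical value as a proxy for the 50% threshold."""
    res = anderson(np.asarray(velocities), dist='norm')
    levels = list(res.significance_level)        # [15., 10., 5., 2.5, 1.]
    crit_10 = res.critical_values[levels.index(10.0)]
    crit_15 = res.critical_values[levels.index(15.0)]
    if res.statistic > crit_10:
        return 'NG'
    if res.statistic < crit_15:
        return 'G'
    return 'unclassified'

# deterministic examples: an ideal Gaussian sample and a strongly bimodal one
q = (np.arange(1, 101) - 0.5) / 100.0
gaussian_sample = norm.ppf(q)                    # perfect Gaussian quantiles
bimodal_sample = np.concatenate([gaussian_sample - 5.0, gaussian_sample + 5.0])
```

Here the ideal Gaussian sample is classified `'G'` while the bimodal sample, reminiscent of two merging velocity components, is classified `'NG'`.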
\section{The lf of galaxies in groups} This study is based on the analysis of the luminosity function of galaxies in groups as a function of a given group physical property, using the group subsamples defined in the previous section. As in ZM11, the system physical property adopted is the group virial mass. For groups that are not in dynamical equilibrium, i.e. those classified as NG, the virial mass might not be a suitable measure of the system mass. Thus, comparing the LF of G and NG groups as a function of mass could be thought of as inappropriate. Therefore, to complement the analysis of the LF, we also use another group property which is known to be correlated with mass, the group total luminosity (e.g. \citealt{girardi00,pop05,dm05}). We compute group total absolute magnitudes following \citet{moore93} but using the mass dependence of the LF of galaxies in groups as computed by ZM11. We show in Fig. \ref{fig2} the group total absolute magnitudes of the ZM11 groups as a function of their virial masses. It can be seen that there is a clear correlation between these parameters. Similarly to the results obtained by ZM11, in all cases we find that the Schechter parametrisation of the LF \citep{schechter76} is appropriate for describing the binned LF\footnote{The binned LF were computed using the $C^-$ method \citep{lb71,cho87}.}. Therefore, our findings below are expressed in terms of the values of the Schechter function shape parameters, $\alpha$ and $M^{\ast}$, only, which we compute using the STY method \citep{sty79}. In Fig. \ref{fig3} we show the best fitting $\alpha$ and $M^{\ast}$ parameters of the $^{0.1}r-$band LFs of galaxies in G and NG groups as a function of their virial masses ({\em left panels}) and total absolute magnitude ({\em right panels}). We use as many bins as possible to probe the mass/absolute magnitude range while having enough data points in each bin to produce reliable estimations of the LF.
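For reference, the Schechter parametrisation in absolute magnitudes, whose shape parameters $\alpha$ and $M^{\ast}$ are fitted throughout this section, can be written and evaluated as follows (a standard textbook form, not code from ZM11; the parameter values in the example are arbitrary):

```python
import numpy as np

def schechter_mag(M, M_star, alpha, phi_star=1.0):
    """Schechter (1976) luminosity function in absolute magnitudes:
    phi(M) = 0.4 ln(10) phi* x^(alpha + 1) exp(-x), with x = 10^(0.4 (M* - M))."""
    x = 10.0**(0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x**(alpha + 1.0) * np.exp(-x)
```

A brighter $M^{\ast}$ shifts the exponential cut-off to brighter magnitudes, while a steeper (more negative) $\alpha$ raises the faint end, which is how changes in the fitted parameters translate into changes of the LF shape.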
We use 5 and 3 bins which include the same number of groups for the G and the NG group samples, respectively. \begin{figure} \includegraphics[width=90mm]{f4.eps} \caption{ Groups in the single high mass/luminosity bins. {\em Left panels} correspond to groups with ${\mathcal M}_{vir}>5\times 10^{13}M_{\odot}h^{-1}$ and {\em right panels} to groups brighter than $M^{group}_{^{0.1}r}-5\log(h)=-23.5$. From {\em top} to {\em bottom} we show the normalised distributions of: virial mass/total luminosity, number of members and redshift, of G ({\em solid line}) and NG ({\em dotted line}) groups. Error bars in each histogram are Poisson errors. } \label{fig4} \end{figure} \begin{figure*} \includegraphics[width=170mm]{f5.eps} \caption{ STY best fitting Schechter parameters of the $^{0.1}r$ band LFs of G ({\em solid line}) and NG ({\em dotted line}) groups in the single high mass ({\em left panel}) and the single high luminosity ({\em right panel}) bins. We also show their 1, 2 and 3$\sigma$ confidence ellipses. } \label{fig5} \end{figure*} In agreement with previous results (e.g. \citealt{zmm06,robot10}; ZM11), there is a clear brightening in the characteristic magnitude and a decreasing faint end slope as a function of mass for both group subsamples ({\em left panels}). As expected, the same behaviours are seen as a function of group total absolute magnitude ({\em right panels}). Comparing the LF parameters for G and NG groups, we observe that, within errors, there are no significant differences in the Schechter parameters, with the exception of the high mass/luminosity tails, where there is an indication that galaxies in G groups may have brighter $M^{\ast}$ and steeper $\alpha$ values. In order to explore in detail these possible differences at the high mass/luminosity tails and to assess their reliability, we recompute the LF for massive/luminous groups by using single bins in mass and luminosity.
In the single high mass bin we decide to include all groups more massive than $5\times 10^{13}M_{\odot}h^{-1}$. With this choice, we include almost completely the two highest mass bins of G groups in Fig. \ref{fig3} while keeping a number of NG groups large enough to have reliable statistics. For the single high luminosity bin, we estimate the corresponding value using the previous high mass cut-off and the least square linear relation between group virial mass ($\log({\mathcal M}_{vir})$) and group total absolute magnitude ($M_{^{0.1}r}^{\ast}-5\log(h)$) for the ZM11 group sample shown in Fig. \ref{fig2}. We obtain a group total absolute magnitude cut-off of $-23.5$ in the $^{0.1}r-$band. As in the virial mass case, this high luminosity subsample includes the two highest luminosity bins of G groups in Fig. \ref{fig3}.\\ Since we are interested in demonstrating that the dynamical state of the systems is an indicator of the different luminosity behaviours in the high tails, we matched the G and NG group distributions of virial mass and group total absolute magnitude in each subsample (Kolmogorov-Smirnov coefficient $> \ 99.9\%$ in both distributions). After performing this procedure we obtain 1225 G and 123 NG groups for the high mass subsample and 1373 G and 197 NG groups for the high luminosity subsample. In the {\em upper panels} of Fig. \ref{fig4} we show the distributions of mass ({\em left panel}) and group total absolute magnitude ({\em right panel}) for G ({\em solid line}) and NG ({\em dashed line}) groups in these high mass/luminosity subsamples. We also show in this Figure the number of members ({\em middle panels}) and redshift distributions ({\em bottom panels}) of G and NG groups for both subsamples. It can be seen that the NG groups tend to have a tail of high-membership groups in comparison with the G group subsample. This tail can be directly associated with the low redshift tail observed for NG groups. In the {\em left panel} of Fig.
\ref{fig5} we show the STY best fitting Schechter parameters for the high mass subsamples of groups along with their 1, 2 and 3 $\sigma$ confidence levels. Clearly, the LF of galaxies in massive G and NG groups differ at the 3$\sigma$ level. The {\em right panel} of Fig. \ref{fig5} shows the LF parameters corresponding to G and NG high luminosity groups. Again, we observe a clear difference in the LF of galaxies in G and NG groups, in this case at a $2\sigma$ significance level. \section{Discussion} ZM11 showed that the LF depends not only on local environment (group mass, group-centric distance) but also on the large scale environment surrounding the groups. Using the same sample of groups, in this work we present evidence that the dynamical state of the system is another important ingredient in the evolution of the luminosity of galaxies. Our results indicate that, for high mass/luminosity groups, the LF of galaxies in G groups has a brighter $M^{\ast}$ and a steeper $\alpha$ than the LF of galaxies in NG groups. Therefore, the different {\em internal} dynamical state of a system is a clear indicator of a different history in the galaxy luminosity evolution. Systems of galaxies have Gaussian velocity distributions only if they are in dynamical equilibrium. Galaxies in these systems have had enough time to suffer the long term action of several physical processes during their evolution. Galaxy mergers play a central role in galaxy evolution in systems. Some other processes such as strangulation \citep{larson80}, ram pressure \citep{gunn72} and galaxy harassment \citep{farouki81} are more efficient in high mass systems, where the effects associated with the dynamical state of the system have been shown to be more important. The action of all these processes over the group lifetime can produce both bright and faint galaxies, thus providing a plausible explanation for our results. 
On the other hand, since the stacked velocity distributions for NG groups are well described by two Gaussian functions, it is likely that the non Gaussianity is caused by the presence of a multimodal galaxy population. This behaviour opens the possibility for the non Gaussian velocity distributions to be the consequence of an ongoing merging process (e.g. \citealt{menci96}) or even multiple merging events (e.g. \citealt{girardi05}). The different merging populations inhabiting these systems could still be experiencing the influence of their own parent halo, and hence preserving the galaxy properties corresponding to the individual (smaller) halos that are infalling to form the (larger) non-Gaussian group. Therefore, these smaller entities should supply the non-Gaussian system with fainter galaxies, typical of less massive/luminous systems. This scenario supports the observed fainter characteristic absolute magnitude and the shallower faint end slope for non-Gaussian systems. Galaxies inhabiting these non-relaxed systems are unlikely to feel the influence of the environmental physical mechanisms described in the previous paragraph, thus preventing the formation of very bright galaxies as well as a large number of faint ones. In agreement with this scenario, \citet{ribeiro10}, using the A-D test on a sample of groups from the 2PIGG catalogue \citep{eke04}, demonstrated that galaxies in Gaussian groups are significantly more evolved than galaxies in non Gaussian systems. Also, using a subsample of the 2PIGG groups, \citet{ribeiro11} have shown that non Gaussian systems are composed of multiple velocity modes, in concordance with the scenario of secondary infall of clumps at a stage before virialisation. Both previous studies were performed analysing the surroundings of groups out to 4 times the corresponding radius for an overdensity of 200 ($4R_{200}$). 
In our work we show that a similar behaviour can be observed from the analysis of the internal dynamics of groups (only galaxy members, mostly inside the virial radius) and that these different dynamical environments are reflected in the galaxy luminosities of high mass/luminous systems. Our results suggest another way to test models of galaxy evolution, since the connection between galaxy luminosities (i.e. astrophysics) and the dynamics of the systems should be present. \section*{Acknowledgements} {\small We thank the referee for helpful comments and suggestions that improved the paper. This work has been supported by Consejo Nacional de Investigaciones Cient\'\i ficas y T\'ecnicas de la Rep\'ublica Argentina (CONICET, PIP2011/2013 11220100100336), Secretar\'\i a de Ciencia y Tecnolog\'\i a de la Universidad de C\'ordoba, Argentina (SeCyT) and Funda\c c\~ao de Amparo \`a Pesquisa do Estado do S\~ao Paulo (FAPESP), Brazil. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. 
The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.}
\section{Introduction} Understanding high-frequency market micro-structure in time-series data such as limit order books (LOB) is complicated by a large number of factors including high-dimensionality, trends based on supply and demand, order creation and deletion around price jumps and the overwhelming relative percentage of order cancellations. It makes sense in this inherently noisy environment to take an agnostic approach to the underlying mechanisms inducing this behavior and construct a network which learns to uncover the relevant features from raw data. This removes the bias contained in models using hand-crafted features and other market assumptions such as those in autoregressive models VAR \cite{Zi} and ARIMA \cite{Ar}. Arguably the most successful architecture used to extract features is the convolutional neural network \cite{Le} which makes use of translation equivariance, present in many domains including time-series applications. For time-series however, further inductive biases prove to be beneficial. Convolutional neural networks with a causal temporal bias were introduced in \cite{Oo} to encode long-range temporal dependencies in raw audio signals. Here convolutions are replaced by dilated causal convolutions controlled by a dilation rate. The dilation rate is the number of input values skipped by the filter, thereby allowing the network to act with a larger receptive field. In this work, features from our architecture will come from the output of multiple such dilated causal convolutional layers connected in series. Once we have a collection of features, we would like to do computations with these learned representations to enable context dependent updates. 
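To make the mechanism concrete, here is a minimal NumPy sketch (not code from the paper) of a stack of dilated causal convolutions in the configuration used later in the text: kernel size 2, 14 output features and dilation rates 1, 2, 4, 8, 16. All weights are random placeholders standing in for trained parameters.

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """Causal dilated 1-D convolution with ReLU.

    x : [T, C_in] input sequence; w : [K, C_in, C_out] filter.
    Left-only padding makes the output at time t depend solely on
    inputs at times <= t, reaching back dilation * (K - 1) steps.
    """
    T, C_in = x.shape
    K, _, C_out = w.shape
    pad = dilation * (K - 1)
    xp = np.vstack([np.zeros((pad, C_in)), x])    # causal (left) padding only
    out = np.zeros((T, C_out))
    for t in range(T):
        for k in range(K):
            out[t] += xp[pad + t - k * dilation] @ w[K - 1 - k]
    return np.maximum(out, 0.0)                    # ReLU activation

# Stack of five layers with dilation rates 1, 2, 4, 8, 16; the receptive
# field grows to 1 + (2 - 1) * (1 + 2 + 4 + 8 + 16) = 32 past events.
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 40))                 # 100 events, 40 raw LOB features
c_in = 40
for d in (1, 2, 4, 8, 16):
    w = 0.1 * rng.standard_normal((2, c_in, 14))
    x = causal_dilated_conv1d(x, w, d)
    c_in = 14
print(x.shape)                                     # (100, 14)
```

The point of the stack is that depth buys an exponentially growing receptive field while each layer remains cheap and strictly causal.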
Historically, attention networks were introduced in \cite{Bah} to improve existing long-short term memory (LSTM) \cite{Ho,Gr} models for neural machine translation by implementing a ``soft search'' over neighboring words enabling the system to focus only on words relevant to the generation of the next target word. This early work combined attention with RNNs. Shortly after, CNNs were combined with attention in \cite{Xu} and \cite{Ch} for image captioning and question-answering tasks respectively. In \cite{Va}, self-attention was introduced as a stand-alone replacement for LSTMs on a wide range of natural language processing tasks leading to state-of-the-art results \cite{De,Ra} which included masked word prediction. Introducing self-attention can be thought of as incorporating an inductive bias into the learning architecture to exploit relational structure in the task environment. This amounts to learning over a graph neural network \cite{Sc,Bat} where nodes are entities given by the learned features which are then updated through a message passing procedure along edges. Results in various applications show that self-attention can better capture long range dependencies in comparison to LSTMs \cite{Da}. More precisely, \cite{Va} introduced the transformer architecture which consists of an encoder and decoder for language translation. Both the encoder and decoder contain the repetition of modules which we refer to as \textit{transformer blocks}. Each transformer block consists of a multi-head self-attention layer followed by normalization, feedforward and residual connections. This is described in detail in Section~\ref{section:architecture}. Combining transformer blocks with convolutional layers for feature extraction is a powerful combination for various tasks. 
In particular, for complex reasoning tasks in various strategic game environments, the addition of these transformer modules significantly enhanced performance and sample efficiency compared with existing non-relational baselines \cite{Za,Vi,Cl}. In this work we combine the causal convolutional architecture of \cite{Oo} with multiple transformer blocks. Moreover, our transformer blocks contain masked multi-head self-attention layers. By applying a mask to our self-attention functions, we ensure that the ordering of events in our time-series is never violated at each step, i.e. entities can only attend to entities in their causal past. We train and test our model on the publicly available FI-2010 data-set\footnote{The ``MNIST'' for limit order books.} which is a LOB of five instruments from the Nasdaq Nordic stock market for a ten day period \cite{Nt}. We show that our algorithm outperforms other common and previously state-of-the-art architectures using standard model validation techniques. In summary, inspired by the wavenet architecture of \cite{Oo} where dilated causal convolutions were used to encode long-range temporal dependencies, we use these causal convolutions to build a feature map for our transformer blocks to act on. We refer to our specific architecture as TransLOB. It is a composition of differentiable functions that process and integrate both local and global information from the LOB in a dynamic relational way whilst respecting the causal structure. There are a number of advantages to our architecture outside of the significant increases in performance. Firstly, in spite of the $O(N^2)$ complexity of the self-attention component, our architecture is substantially more sample efficient than existing LSTM architectures for this task. Secondly, the ability to analyse attention distributions provides a clearer picture of internal computations within the model compared with these other methods, leading to better interpretability. 
\section*{Related work} There is now a substantial literature applying deep neural networks to time-series applications, and in particular, limit order books (LOB). Convolutional neural networks (CNN) have been explored in LOB applications in \cite{Do,Ts}. To capture long-range dependencies in temporal behavior, CNNs have been combined with recurrent neural networks (RNN) (typically long-short term memory (LSTM)) which improve on earlier results \cite{Ts2,Zh}. Some modifications to the standard convolutional layer have been used in attempts to infer local interactions over different time horizons. For example, \cite{Zh} uses an inception module \cite{Sz} after the standard convolutional layers for this inference followed by an LSTM to encode relational dynamics. Stand-alone RNNs have been used extensively in market prediction \cite{Di,Fi,Bao} and have been shown to outperform models based on standard multi-layer perceptrons, random forests and SVMs \cite{Ts3}. For time-series applications, recent work uses attention alone \cite{Tr,Qi} and in combination with CNNs \cite{La,Ma,Sh}. However, there are relatively few references which combine CNNs with transformers to analyse time-series data. We mention \cite{So} which uses a CNN plus multi-head self-attention to analyse clinical time-series behaviour and \cite{Li}, which came to our attention during the final write-up of this paper, which applies a similar architecture to our own to univariate synthetic and energy sector datasets. As far as we are aware, ours is the first work applying this class of architectures to the multivariate financial domain, with the various subtleties arising in this particular application. \section{Experiments} A limit order book (LOB) at time $t$ is the set of all active orders in a market at time $t$. These orders consist of two sides: the bid-side and the ask-side. The bid-side consists of buy orders and the ask-side consists of sell orders, both containing price and volume for each order. 
Our experiments will use the LOB from the publicly available FI-2010 dataset\footnote{The dataset is available at https://etsin.fairdata.fi/dataset/73eb48d7-4dbc-4a10-a52a-da745b47a649}. A general introduction to LOBs can be found in \cite{Go}. Let $\{p_a^{i}(t),v_a^{i}(t)\}$ denote the price (resp. volume) of sell orders at time $t$ at level $i$ in the LOB. Likewise, let $\{p_b^{i}(t),v_b^{i}(t)\}$ denote the price (resp. volume) of buy orders at time $t$ at level $i$ in the LOB. The bid price $p_b^{1}(t)$ at time $t$ is the highest stated price among active buy orders at time $t$. The ask price $p_a^{1}(t)$ at time $t$ is the lowest stated price among active sell orders at time $t$. A buy order is executed if $p_b^{1}(t)>p_a^{1}(t)$ for the entire volume of the order. Similarly, a sell order is executed if $p_a^{1}(t)<p_b^{1}(t)$ for the entire volume of the order. The FI-2010 dataset is made up of 10 days of 5 stocks from the Helsinki Stock Exchange, operated by Nasdaq Nordic, consisting of 10 levels on each side of the LOB. Event types include executions, order submissions and order cancellations, and events are non-uniformly spaced in time. We restrict to normal trading hours (no auction). The general structure of the LOB is contained in Table~\ref{table:lobtable}. 
\begin{table}[!htb]\tiny \renewcommand{\arraystretch}{1.7} \begin{minipage}{.3\linewidth} \centering \begin{tabular}{ | c |} \hline $(p^{10}_a(t),v^{10}_a(t))$ \\ \hline \vdots \\ \hline $(p^1_a(t),v^1_a(t))$ \\ \hline \rowcolor{Gray} \\ \hline $(p^1_b(t),v^1_b(t))$ \\ \hline \vdots\\ \hline $(p^{10}_b(t),v^{10}_b(t))$ \\ \hline \end{tabular} \caption*{\footnotesize Event $t$} \end{minipage}% \begin{minipage}{.2\linewidth} \centering \begin{tabular}{ | c |} \hline $(p^{10}_a(t+1),v^{10}_a(t+1))$ \\ \hline \vdots \\ \hline $(p^1_a(t+1),v^1_a(t+1))$ \\ \hline \rowcolor{Gray} \\ \hline $(p^1_b(t+1),v^1_b(t+1))$ \\ \hline \vdots\\ \hline $(p^{10}_b(t+1),v^{10}_b(t+1))$ \\ \hline \end{tabular} \caption*{\footnotesize Event $t+1$} \end{minipage}% \begin{minipage}{.2\linewidth} \centering \begin{tabular}{ c } \\ \ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots \\ \\ \\ \\ \\ \\ \end{tabular} \end{minipage}% \begin{minipage}{.2\linewidth} \centering \begin{tabular}{ | c |} \hline $(p^{10}_a(t+10),v^{10}_a(t+10))$ \\ \hline \vdots \\ \hline $(p^1_a(t+10),v^1_a(t+10))$ \\ \hline \rowcolor{Gray} \\ \hline $(p^1_b(t+10),v^1_b(t+10))$ \\ \hline \vdots\\ \hline $(p^{10}_b(t+10),v^{10}_b(t+10))$ \\ \hline \end{tabular} \caption*{\footnotesize Event $t+10$} \end{minipage} \caption{Structure of the limit order book.} \label{table:lobtable} \end{table} The data is split into 7 days of training data and 3 days of test data. Preprocessing consists of normalizing the data $x$ according to the \textit{$z$-score} \[ \bar{x}_t = \frac{x_t-\overline{y}}{\sigma_{\overline{y}}} \] where $\overline{y}$ (resp. $\sigma_{\overline{y}}$) is the mean (resp. standard deviation) of the previous day's data. Since the aim of this work is to extract as much as possible of the latent information contained in the LOB, we do not include any of the hand-crafted features contained in the FI-2010 dataset. For a detailed description of this dataset we refer the reader to \cite{Nt}. 
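The per-feature $z$-score normalisation described above is straightforward; a minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def zscore_normalise(x_today, x_prev_day):
    """Normalise each LOB feature by the mean and standard deviation
    of the previous day's data, as in the preprocessing step above."""
    mean = x_prev_day.mean(axis=0)
    std = x_prev_day.std(axis=0)
    return (x_today - mean) / std

rng = np.random.default_rng(1)
prev = 5.0 + 2.0 * rng.standard_normal((1000, 40))   # yesterday's raw states
today = 5.0 + 2.0 * rng.standard_normal((200, 40))   # today's raw states
z = zscore_normalise(today, prev)
# normalised features are approximately centred with unit scale
assert abs(float(z.mean())) < 0.2 and abs(float(z.std()) - 1.0) < 0.2
```

Using the previous day's statistics (rather than the current day's) avoids look-ahead bias in the normalisation.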
We aim to predict future movements from the (virtual) mid-price. Price direction of the data is calculated using the following smoothed version of the mid-price. This amounts to adjusting for the average volatility of each instrument. The virtual mid-price is the mean \[ p(t) = \frac{p_a^{1}(t) + p_b^{1}(t)}{2} \] between the bid-price and the ask-price. The mean of the next $k$ mid-prices is then \[ m^{+}_k(t) = \frac{1}{k}\sum_{n=1}^{k} p({t+n}). \] The direction of price movement for the FI-2010 dataset is calculated using the percentage change of the virtual mid-price according to \[ r_k(t)= \frac{m^+_k(t) - p(t)}{p(t)} . \] There exist other more sophisticated methods to determine the direction of price movement at a given time. However, for fair comparison to other work, we utilize this definition and leave other methods for future work. The direction is up $(+1)$ if $r_k(t) > \alpha$, down $(-1)$ if $r_k(t) < -\alpha$ and neutral $(0)$ otherwise, according to a chosen threshold $\alpha$. For the FI-2010 dataset, this has been set to $\alpha=0.002$. We consider the following four test cases $k\in\{10, 20, 50, 100\}$ for the denoising horizon window. The 100 most recent events are used as input to our model. \section{Architecture}\label{section:architecture} In this section we give a detailed account of our architecture. The two main components are a convolutional module and a transformer module. They contain multiple iterations of dilated causal convolutional layers and transformer blocks respectively. A transformer block consists of a specific combination of multi-head self-attention, residual connections, layer normalization and feedforward layers. We take the causal nature of the problem seriously by implementing causality both in the convolutional module and, through masked self-attention, in the transformer module, to accurately capture temporal information in the LOB. Our resulting architecture will be referred to as TransLOB. 
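Before detailing the architecture, the labelling scheme just defined (virtual mid-price, smoothing horizon $k$, threshold $\alpha$) can be sketched as follows; the function name is illustrative, not from the paper:

```python
import numpy as np

def direction_labels(ask1, bid1, k=10, alpha=0.002):
    """Three-class price-movement labels from the smoothed mid-price.

    p(t) is the virtual mid-price, m_k^+(t) the mean of the next k
    mid-prices and r_k(t) their percentage change; the label is +1 if
    r_k > alpha, -1 if r_k < -alpha and 0 otherwise.
    """
    p = (np.asarray(ask1, dtype=float) + np.asarray(bid1, dtype=float)) / 2.0
    T = len(p) - k
    labels = np.empty(T, dtype=int)
    for t in range(T):
        m_plus = p[t + 1 : t + k + 1].mean()   # mean of the next k mid-prices
        r = (m_plus - p[t]) / p[t]
        labels[t] = 1 if r > alpha else (-1 if r < -alpha else 0)
    return labels

# A steadily rising price is labelled "up", a flat one "neutral"
rising = 100.0 * 1.01 ** np.arange(30)
assert np.all(direction_labels(rising, rising, k=10, alpha=0.002) == 1)
flat = np.full(30, 100.0)
assert np.all(direction_labels(flat, flat, k=10, alpha=0.002) == 0)
```

Averaging over the next $k$ mid-prices before thresholding is what suppresses label noise from single-tick fluctuations.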
Since each order consists of a price and volume, a state $x_t=\{p_a^i(t),v_a^i(t),p_b^i(t),v_b^i(t)\}_{i=1}^{10}$ at time $t$ is a vector $x_t\in\bb{R}^{40}$. Events are irregularly spaced in time and the 100 most recent events are used as input resulting in a normalized input $X\in\bb{R}^{100\times 40}$. We apply five one-dimensional convolutional layers to the input $X$, regarded as a tensor of shape $[100,40]$ (i.e. an element of $\bb{R}^{100}\otimes\bb{R}^{40}$). All layers are dilated causal convolutional layers with $14$ features, kernel size $2$ and dilation rates $1, 2, 4, 8$ and $16$ respectively. This means the filter is applied over a window larger than its length by skipping input values with a step given by the dilation rate, with each layer respecting the causal order. The first layer with dilation rate $1$ corresponds to standard convolution. All activation functions are $\textup{ReLU}$. The full size of the channel filter is used to allow the weights in the filter to infer the relative importance of each level on each side of the mid-price. It is expected that higher weights will be allocated to shallower levels in the LOB since those levels are most indicative of future activity. The output of the convolutional module is a tensor of shape $[100,14]$. This output then goes through layer normalization \cite{Ba} to stabilize dynamics before each feature vector is concatenated with a one-dimensional temporal encoding resulting in a tensor $X$ of shape $[100,15]$. We will refer to $N=100$ as the number of \textit{entities} and $d=15$ as the model dimension. We denote these entities by $e_i$, $1\leq i\leq N$, where $e_i\in E=\bb{R}^{d}$. These entities are then updated through learning in a number of steps. First we introduce an inner product space $H=\bb{R}^d$ with dot product pairing $\<h,h'\>=h\cdot h'$. We employ a multi-head version of self-attention with $C$ channels. 
Therefore, we choose a decomposition $H=H_1\oplus\ldots\oplus H_C$ and apply a linear transformation \[ T=\bigoplus_{a=1}^C T_a:E\ra\bigoplus_{a=1}^C H_a^{\oplus 3} \] with $H_a$ each of dimension $d/C$. The vectors $(q_{i,(a)},k_{i,(a)},v_{i,(a)})=T_a(e_i)$ are referred to as \textit{query}, \textit{key} and \textit{value} vectors respectively. We arrange these vectors into matrices $Q_a$, $K_a$ and $V_a$ respectively with $N$ rows and $d/C$ columns. In other words, $Q_a=XW_a^Q$, $K_a=XW_a^K$ and $V_a=XW_a^V$ for weight matrices $W^Q_a$, $W^K_a$ and $W^V_a$ which are matrices in $\bb{R}^{d\times d/C}$. Next we apply the masked scaled dot-product self-attention function \[ \textup{head}_a=V_a' = \textup{Softmax}\left(\textup{Mask}\left(\frac{Q_a K_a^T}{\sqrt{d}}\right)\right)V_a \] resulting in a matrix of refined value vectors for each entity. Here $\textup{Mask}$ substitutes large negative values for the entries in the strict upper triangle of its argument, which, via the softmax function, forces each query to attend only to keys in its causal history. The heads are then concatenated and a final learnt linear transformation is applied, leading to the multi-head self-attention operation \[ \textup{MultiHead}(X)=\left(\bigoplus_{a=1}^C\textup{head}_a\right)W^O \] where $W^O\in\bb{R}^{d\times d}$. We next add a residual connection and apply layer normalization resulting in \[ Z = \textup{LayerNorm}(\textup{MultiHead}(X) + X ). \] This is followed by a feedforward network $\textup{MLP}$ consisting of a ReLU activation between two affine transformations applied identically to each position, i.e. individually to each row of $Z$. The inner layer is of dimension $4\times d=60$. Finally, a further residual connection and final layer normalization is applied to arrive at our updated matrix of entities \[ \textup{TransformerBlock}(X) = \textup{LayerNorm}(\textup{MLP}(Z) + Z). \] The output of the transformer block is the same shape $[N,d]$ as the input. 
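The masked multi-head self-attention step just described can be sketched in NumPy with the stated dimensions ($N=100$, $d=15$, $C=3$ heads, so $d/C=5$); all weight matrices below are random placeholders standing in for trained parameters:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def masked_head(X, Wq, Wk, Wv):
    """One self-attention head with a causal mask: entries above the
    diagonal of the score matrix receive a large negative value, so
    after the softmax each query attends only to its causal past."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / np.sqrt(X.shape[1])
    scores[np.triu_indices_from(scores, k=1)] = -1e9   # causal mask
    A = softmax(scores)
    return A @ V, A

rng = np.random.default_rng(0)
N, d, C = 100, 15, 3                       # entities, model dim, heads
X = rng.standard_normal((N, d))
heads = []
for _ in range(C):
    Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d // C)) for _ in range(3))
    h, A = masked_head(X, Wq, Wk, Wv)
    heads.append(h)
    # causality: attention weights on strictly future keys vanish
    assert np.allclose(A[np.triu_indices(N, k=1)], 0.0)
W_O = 0.1 * rng.standard_normal((d, d))
out = np.concatenate(heads, axis=1) @ W_O   # MultiHead(X), shape [N, d]
assert out.shape == (N, d)
```

The residual connections, layer normalisations and position-wise MLP that complete the transformer block are then applied to `out` exactly as in the equations above.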
Our updated entities are $e'_i\in\bb{R}^{15}$, $1\leq i\leq N$. After multiple iterations of the transformer block, the output is then flattened and passed through a feedforward layer of dimension $64$ with ReLU activation and L2 regularization. Finally, we apply dropout followed by a softmax layer to obtain the final output probabilities. A schematic of the TransLOB architecture is given in Figure~\ref{figure:schema}. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.7]{figure1} \end{center} \caption{Architecture schematic with enclosed convolutional and transformer modules.} \label{figure:schema} \end{figure} For the FI-2010 dataset, we employ two transformer blocks with three heads and with the weights shared between iterations of the transformer block. The hyperparameters are contained in Table~\ref{table:hyper}. No dropout was used inside the transformer blocks. \begin{table}[!ht] \footnotesize \begin{center} \begin{tabular} {ll} \toprule \textbf{Hyperparameter} & \textbf{Value} \\ \midrule Batch size & 32\\ Adam $\beta_1$ & 0.9\\ Adam $\beta_2$ & 0.999\\ Learning rate & $1 \times 10^{-4}$\\ Number of heads & 3 \\ Number of blocks & 2\\ MLP activations & ReLU \\ Dropout rate & 0.1\\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameters for the FI-2010 experiments.} \label{table:hyper} \end{table} \section{Results} Here we record our experimental results for the FI-2010 dataset. The first 7 days were used to train the model and the last 3 days were used as test data. Training was done with mini-batches of size 32. Our metrics include accuracy, precision, recall and F1. All training was done using one K80 GPU on google colab. To be consistent with earlier works using the same dataset, we train and test our model on the horizons $k=\{10,20,50,100\}$. All models were trained for 150 epochs, although convergence was achieved significantly earlier. See Figure~\ref{figure:acc_translob} of Appendix~\ref{appendix:baseline} for an example. 
The following models were used as comparison. An LSTM was utilized and compared to a support vector machine (SVM) and multi-layer perceptron (MLP) in \cite{Ts3} with favourable results. Results using a stand-alone CNN were reported in \cite{Ts}. This model was reproduced and trained for use as our baseline for the horizon $k=100$. The baseline training and test curves are shown in Figure~\ref{figure:acc_cnn} of Appendix~\ref{appendix:baseline}. In \cite{Ts2} a CNN was combined with an LSTM resulting in the architecture denoted CNN-LSTM. An improvement over the CNN-LSTM architecture, named DeepLOB, was achieved in \cite{Zh} by using an inception module between the CNN and LSTM together with a different choice of convolution filters, stride and pooling. Finally, the architecture C(TABL) refers to the best performing implementation of the temporal attention augmented bilinear network of \cite{Tr}. Our results are shown in Table~\ref{table:horizon10}, Table~\ref{table:horizon20}, Table~\ref{table:horizon50} and Table~\ref{table:horizon100} for each of the horizon choices respectively. The training and test curves with respect to accuracy for $k=100$ are shown in Figure~\ref{figure:acc_translob} of Appendix~\ref{appendix:baseline}. 
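For reference, a sketch of how accuracy and the macro-averaged precision, recall and F1 over the three classes can be computed; macro-averaging (per-class scores averaged equally) is one common convention and cited works may average differently:

```python
import numpy as np

def three_class_metrics(y_true, y_pred, classes=(-1, 0, 1)):
    """Accuracy plus macro-averaged precision, recall and F1
    over the three price-movement classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    prec, rec, f1 = [], [], []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        prec.append(p)
        rec.append(r)
        f1.append(2 * p * r / (p + r) if p + r else 0.0)
    acc = float(np.mean(y_true == y_pred))
    return acc, float(np.mean(prec)), float(np.mean(rec)), float(np.mean(f1))

y_true = [1, 1, 0, -1, 0, 1, -1, 0]
acc, p, r, f = three_class_metrics(y_true, y_true)   # perfect predictions
assert acc == 1.0 and p == 1.0 and r == 1.0 and f == 1.0
```

Macro-averaging weights the minority up/down classes equally with the dominant neutral class, which matters since the labels are heavily imbalanced for small thresholds $\alpha$.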
\begin{table}[!htb] \scriptsize \begin{center} \begin{tabular}{|p{3cm}||p{2cm}|p{2cm}|p{2cm}|p{2cm}|} \hline Model & Accuracy & Precision & Recall & F1\\ \hline\hline SVM \cite{Ts3} &- & 39.62 & 44.92 & 35.88 \\ MLP \cite{Ts3} &- & 47.81 & 60.78 & 48.27\\ CNN \cite{Ts} &- & 50.98 & 65.54 & 55.21 \\ LSTM \cite{Ts3} & - & 60.77 & 75.92 & 66.33 \\ CNN-LSTM \cite{Ts2} & - & 56.00 & 45.00 & 44.00 \\ C(TABL) \cite{Tr} & 84.70 & 76.95 & 78.44 & 77.63 \\ DeepLOB \cite{Zh} & 84.47 & 84.00 & 84.47 & 83.40 \\ TransLOB & \textbf{87.66} & \textbf{91.81} & \textbf{87.66} & \textbf{88.66} \\ \hline \end{tabular} \end{center} \caption{Prediction horizon $k=10$.} \label{table:horizon10} \end{table} \begin{table}[!htb] \scriptsize \begin{center} \begin{tabular}{|p{3cm}||p{2cm}|p{2cm}|p{2cm}|p{2cm}|} \hline Model & Accuracy & Precision & Recall & F1\\ \hline\hline SVM \cite{Ts3} & - & 45.08 & 47.77 & 43.20 \\ MLP \cite{Ts3} &- & 51.33 & 65.20 & 51.12 \\ CNN \cite{Ts} & - & 54.79 & 67.38 & 59.17 \\ LSTM \cite{Ts3} &- & 59.60 & 70.52 & 62.37 \\ CNN-LSTM \cite{Ts2} & - & - & - & - \\ C(TABL) \cite{Tr} & 73.74 & 67.18 & 66.94 & 66.93 \\ DeepLOB \cite{Zh} & 74.85 & 74.06 & 74.85 & 72.82 \\ TransLOB & \textbf{78.78} & \textbf{86.17} & \textbf{78.78} & \textbf{80.65} \\ \hline \end{tabular} \end{center} \caption{Prediction horizon $k=20$.} \label{table:horizon20} \end{table} \begin{table}[!htb] \scriptsize \begin{center} \begin{tabular}{|p{3cm}||p{2cm}|p{2cm}|p{2cm}|p{2cm}|} \hline Model & Accuracy & Precision & Recall & F1\\ \hline\hline SVM \cite{Ts3} & - & 46.05 & 60.30 & 49.42 \\ MLP \cite{Ts3} & - & 55.21 & 67.14 & 55.95 \\ CNN \cite{Ts} & - & 55.58 & 67.12 & 59.44 \\ LSTM \cite{Ts3} &- & 60.03 & 68.58 & 61.43 \\ CNN-LSTM \cite{Ts2} & - & 56.00 & 47.00 & 47.00 \\ C(TABL) \cite{Tr} & 79.87 & 79.05 & 77.04 & 78.44 \\ DeepLOB \cite{Zh} & 80.51 & 80.38 & 80.51 & 80.35 \\ TransLOB & \textbf{88.12} & \textbf{88.65} & \textbf{88.12} & \textbf{88.20} \\ \hline \end{tabular} \end{center} 
\caption{Prediction horizon $k=50$.} \label{table:horizon50} \end{table} \begin{table}[!htb] \scriptsize \begin{center} \begin{tabular}{|p{3cm}||p{2cm}|p{2cm}|p{2cm}|p{2cm}|} \hline Model & Accuracy & Precision & Recall & F1\\ \hline\hline CNN \cite{Ts} & 63.06 & 63.29 & 63.06 & 62.97 \\ TransLOB & \textbf{91.62} & \textbf{91.63} & \textbf{91.62} & \textbf{91.61} \\ \hline \end{tabular} \end{center} \caption{Prediction horizon $k=100$.} \label{table:horizon100} \end{table} For inspection of our model, we plot the attention distributions for all three heads in the first transformer block. A random sample input was chosen from the horizon $k=10$ test set. Pixel intensity has been scaled for ease of visualization. The vertical axes represent the query index $0\leq i\leq 100$ and the horizontal axes represent the key index $0\leq j\leq 100$. Queries are aware of the distance to keys through the position embedding layer and entities are only updated with memory from the past owing to the attention mask. As can be seen in Figure~\ref{figure:headone}, and in Figures~\ref{figure:headtwo} and \ref{figure:headthree} of Appendix~\ref{appendix:attentiondist}, the different heads learn to attend to different properties of the temporal dynamics. A majority of the queries pay special attention to the most recent keys, which is sensible for predicting the next price movement. This is particularly clear in heads two and three. \begin{figure}[h] \begin{center} \includegraphics[scale=0.3, trim = 0.8cm 2cm 0.8cm 1cm, clip]{figure2} \end{center} \caption{First head of the first transformer block.} \label{figure:headone} \end{figure} \section{Discussion} We have shown that the limit order book contains sufficient information to enable price movement prediction using deep neural networks with a causal and relational inductive bias. This was shown by introducing the architecture TransLOB which contains both a dilated causal convolutional module and a masked transformer module. 
This architecture was tested on the publicly available FI-2010 dataset achieving state-of-the-art results. We expect further improvements using more sophisticated proprietary additions such as the inclusion of sentiment information from news, social media and other sources. However, this work was developed to exploit only the information contained in the LOB and serves as a very strong baseline from which additional tools can be added. Due to the limited nature of the FI-2010 dataset, significant time was spent tuning hyperparameters of our model to mitigate overfitting. In particular, our architecture was notably sensitive to the initialization. However, due to the very strong performance of the model, together with the flexibility and sensible inductive biases of the architecture, we expect robust results on larger LOB datasets. This is an important second step and will be addressed in future work. In particular, this will allow us to explore the generalization capabilities of the model together with the optimization of important parameters such as the horizon $k$ and threshold $\alpha$. Nevertheless, based on these initial results we argue that further investigation of transformer based models for financial time-series prediction tasks is warranted. The efficiency of our algorithm is another important property which makes it amenable to training on larger datasets and LOB data with larger event windows. In spite of the $O(N^2)$ complexity of the self-attention component, our architecture is significantly more sample efficient than existing LSTM architectures for this task such as \cite{Ts3,Ts2,Zh}. However, moving far beyond the window size of 100, to the territory of LOB datasets on the scale of months or years, it would be interesting to explore sparse and compressed representations in the transformer blocks. Implementations of sparsity and compression can be found in \cite{Chi,Su,Lam,Li} and \cite{Ki,Rae} respectively. 
Looking forward, similar to recent advances in natural language processing, the next generation of financial time-series models should implement self-supervision as pretraining \cite{De,Ra}. Finally, it would be interesting to consider the influence of higher-order self-attention \cite{Cl} in LOB and other financial time-series applications. \section*{Acknowledgements} The author would like to thank Andrew Royal and Zihao Zhang for correspondence related to this project.
\section{Introduction} \label{sec:introduction} This paper deals with the so-called sum rules arising from spectral theory and orthogonal polynomials on the real line (OPRL). Our approach uses only probabilistic methods. Given a probability measure $\mu$ with compact support on $\mathbb{R}$, we may encode $\mu$ by the recursion coefficients of orthonormal polynomials in $L^2(\mu)$. A sum rule is an identity between a non-negative functional of these coefficients and an entropy-like functional of $\mu$, each side giving the discrepancy between $\mu$ and some reference measure. When the reference measure is the semicircle distribution \begin{equation} \label{SC0} \operatorname{SC}(dx) = \frac{1}{2\pi}\sqrt{4-x^2}\!\ \mathbbm{1}_{[-2, 2]}(x)\!\ dx, \end{equation} the sum rule was proved by spectral theory methods by Killip and Simon in \cite{KS03}\footnote{An exhaustive discussion and history of this sum rule can be found in Section 1.10 of the book \cite{simon2} and a deep analytical proof is in Chapter 3. }. This result is the OPRL counterpart of the classical Szeg{\H o} theorem for orthogonal polynomials on the unit circle (OPUC), where the reference measure is the Lebesgue measure. An important consequence of such equalities is the equivalence of two conditions for the finiteness of both sides, one formulated in terms of Jacobi coefficients and the other as a spectral condition. In the words of Simon \cite{simon2}, these are the \emph{gems} of spectral theory. In \cite{GNROPUC} and \cite{magicrules}, we gave a probabilistic interpretation of these sum rules and a general strategy to construct and prove new sum rules. In the OPRL case, the starting point is a random $n\times n$ Hermitian matrix $X_n$ and a fixed vector $e \in \mathbb{C}^n$. The random spectral measure $\mu_n$ of the pair $(X_n , e)$ is a weighted sum of Dirac masses supported by the (real) eigenvalues of $X_n$.
When the density of $X_n$ is unitarily invariant, proportional to $\exp\{ -\frac{\beta}{2} n \mathrm{tr}\!\ V(X)\}$ with a confining potential $V$, we proved that $\mu_n$ satisfies the Large Deviation Principle (LDP) with speed $n$ and good rate function $\mathcal I_{\mathrm{sp}}$ involving the reversed entropy with respect to a measure $\mu_V$. This equilibrium measure $\mu_V$, or reference measure, is the minimizer of the rate function or, equivalently, the limit of the spectral measure as $n\to\infty$. Besides, in all the classical ensembles (Gaussian, Laguerre and Jacobi ensembles), the random recursion coefficients have a nice probabilistic structure (independence or slight dependence), so that we also proved an LDP for the ``coefficient encoding" of $\mu_n$ with speed $n$ and rate function $\mathcal I_{\mathrm{co}}$. Since a large deviation rate function is unique, this implies the identity $\mathcal I_{\mathrm{sp}} = \mathcal I_{\mathrm{co}}$. For the Gaussian ensemble, this identity is precisely the sum rule of Killip and Simon. For the Laguerre or Jacobi ensemble it leads to new sum rules, with reference measures the Marchenko-Pastur and the Kesten-McKay distributions, respectively. Furthermore, this method could be generalized to measures on the unit circle \cite{GNROPUC} or to operator valued measures \cite{GaNaRomat,GaNaRoJac}. Besides, it provides evidence for the Lukic conjecture \cite{BSZ}, see also \cite{BSZ1} for an exposition of the method. For the measure side, the common feature of these models is the assumption that the equilibrium measure is supported by a single compact interval. In statistical physics terms this is the one-cut case, in contrast to the multi-cut case when the support is a finite union of disjoint compact intervals. In spectral theory, the first situation is called ``no gap" and the second one ``a finite number of open gaps".
For the coefficient side, the common feature is sufficient stochastic independence of the Jacobi coefficients. Nevertheless, in Section 3.3 of \cite{magicrules}, based on \cite[Proposition 2]{KriRidVir2016}, we conjectured that, under some suitable conditions on $V$, the rate function on the coefficient side could be an expression with some limit involving $\mathrm{tr}\!\ V(T_n)$ as $n\to \infty$, where $T_n$ is the $n$-dimensional Jacobi matrix. Besides, by spectral theory methods, \cite{Nazarov} obtained a more general sum rule, when the reference measure is $A(x) \operatorname{SC}(dx)$ with $A$ a nonnegative polynomial (see the discussion in Section \ref{suonecut}). This is equivalent to starting from a one-cut polynomial $V$. Here, we extend our probabilistic method along two directions. Firstly, we show a large deviation theorem for the spectral measure sequence $(\mu_n)_n$ in the multi-cut case, for general potentials $V$ (Theorem \ref{MAIN}). Secondly, when $V$ is a polynomial of even degree with positive leading coefficient, we show an LDP in terms of the Jacobi coefficients (Theorem \ref{MAIN2}). Surprisingly, the rate function in this new LDP contains a remainder term, which actually vanishes in the case of a polynomial potential with one-cut equilibrium measure. These last two results are obtained by a method similar to the one developed by Breuer, Simon and Zeitouni \cite{BSZ} (for polynomial potentials in the OPUC case with full support equilibrium measure). Indeed, the crucial argument to tackle the remainder term is Rakhmanov's theorem (see \cite{rahmanov1977asymptotics}, \cite{Den2004}). The combination of our new LDPs leads to a general gem in the multi-cut polynomial case (Theorem \ref{abstractgem}) and an exact sum rule in the one-cut polynomial case (Theorem \ref{newsumrule}). While convex potentials lead to one-cut equilibrium measures, the new gem also applies to nonconvex polynomial potentials.
We expect that the new sum rule holds true for more general potentials, including in particular one or two logarithmic contribution(s). In \cite{magicrules} Sec. 3.3.1, it is proved that for Laguerre and Jacobi potentials the claim of Theorem \ref{newsumrule} holds true. Other gems, i.e., sets of equivalent conditions for spectral measures in the multi-cut case, were given by \cite{eichinger2016jacobi} and \cite{yuditskii2018killip} based on the Jacobi flow approach. Our method yields another expression for the coefficient side which depends on the potential in a natural way. The different formulations illustrate the different points of view: whereas the spectral theoretic methods start from a perturbation of the semicircle law (or free Jacobi matrix), the starting point for our probabilistic approach is a randomization given by the potential $V$. \section{Notations and definitions} \label{sec:sumrules} \subsection{Tridiagonal representations} Let $\mathcal{M}_1$ be the set of all probability measures on $\mathbb{R}$. For $\mu\in\mathcal{M}_1$ with compact but infinite support (known as the nontrivial case), let $p_0,p_1,\dots $ be the orthonormal polynomials with positive leading coefficients obtained by applying the orthonormalizing Gram-Schmidt procedure to the sequence $1, x, x^2, \dots$ in $L^2(\mu)$. They obey the recursion relation \begin{align} \label{polrecursion} xp_k(x) = a_{k+1} p_{k+1}(x) + b_{k+1} p_k (x) + a_{k} p_{k-1}(x) \end{align} for $ k \geq 0$ (resp. for $0 \leq k \leq n-1$ when $\mu$ is supported by $n$ points), where the Jacobi parameters satisfy $b_k \in \mathbb R$, $a_k > 0$ for all $k\geq 1$, and with $p_{-1}(x)=0$.
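As a quick numerical illustration of the recursion \eqref{polrecursion}, the Jacobi parameters of a discrete measure can be produced by running the Gram-Schmidt (Stieltjes) procedure directly on the point masses. The following is a minimal sketch, assuming \texttt{numpy}; the function name \texttt{jacobi\_coefficients} is ours and not part of any library.

```python
import numpy as np

def jacobi_coefficients(points, weights, n):
    # Stieltjes procedure: orthonormalize 1, x, x^2, ... in L^2(mu) for the
    # discrete probability measure mu = sum_i weights[i] * delta_{points[i]},
    # reading off b_{k+1} = <x p_k, p_k> and
    # a_{k+1} = ||x p_k - b_{k+1} p_k - a_k p_{k-1}||.
    x, w = np.asarray(points, float), np.asarray(weights, float)
    a, b = [], []
    p_prev = np.zeros_like(x)   # values of p_{-1} = 0 at the support points
    p = np.ones_like(x)         # values of p_0 = 1 (mu is a probability measure)
    for _ in range(n):
        b_k = np.sum(w * x * p * p)
        q = x * p - b_k * p - (a[-1] if a else 0.0) * p_prev
        a_k = np.sqrt(np.sum(w * q * q))
        b.append(b_k)
        a.append(a_k)
        if a_k == 0.0:          # support exhausted: mu has finitely many points
            break
        p_prev, p = p, q / a_k
    return a, b
```

For the two-point measure $\frac12(\delta_{-1}+\delta_1)$ this returns $b_1=0$, $a_1=1$ and then $a_2=0$, in accordance with the treatment of finitely supported measures below; on a fine discretization of $\operatorname{SC}$ it returns coefficients numerically close to the free values $a_k=1$, $b_k=0$.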
In the basis $\{p_0, p_1, \dots\}$, the linear transform $f(x) \rightarrow xf(x)$ (multiplication by the variable $x$) in $L^2(d\mu)$ is represented by the matrix \begin{eqnarray} \label{favardinfini} T = \begin{pmatrix} b_1&a_1 &0&0&\cdots\\ a_1&b_2 &a_2&0&\cdots\\ 0&a_2 &b_3&a_3& \\ \vdots& &\ddots&\ddots&\ddots \end{pmatrix} , \end{eqnarray} where we have $a_k > 0$ for every $k$. The mapping $\mu \mapsto T$ (called here the Jacobi mapping) is a one-to-one correspondence between probability measures on $\mathbb R$ having compact infinite support and Jacobi matrices built from sequences satisfying $\sup_n(|a_n| + |b_n|) < \infty$. Actually, such a Jacobi matrix is identified with an element of $\mathcal{R}$ (defined below in \eqref{defR}). This result is sometimes called Favard's theorem. If $T$ is an infinite Jacobi matrix, we denote the $N\times N$ upper left subblock by $\pi_N(T)$. If $\mu\in \mathcal{M}_1$ is supported by $n$ distinct points, we may still define orthonormal polynomials up to degree $2n-1$. The $n$-th polynomial has roots at the $n$ support points and thus norm zero in $L^2(\mu)$. We then consider the finite dimensional Jacobi matrix of $\mu$, \begin{eqnarray} \label{favardfini} T_n = \begin{pmatrix} b_1&a_1 &0&\dots&0\\ a_1&b_2 &a_2&\ddots&\vdots\\ 0&\ddots &\ddots&\ddots&0\\ \vdots&\ddots&a_{n-2}&b_{n-1}&a_{n-1}\\ 0&\dots&0&a_{n-1}&b_{n} \end{pmatrix} . \end{eqnarray} So, measures supported by $n$ points lead to $n\times n$ symmetric tridiagonal matrices with positive subdiagonal terms. In fact, there is a one-to-one correspondence between such a matrix and such a measure. We can identify $T_n$ with the vector $r_n=(b_1,a_1,\dots ,a_{n-1},b_n)$. It is convenient to embed this into sequence spaces and to identify $r_n$ with the infinite vector $(b_1,a_1,\dots ,a_{n-1},b_n,0,\dots)\in \mathcal{R}$, where \begin{align} \label{defR} \mathcal{R} = (\mathbb{R}\times [0,\infty))^{\mathbb{N}}\,.
\end{align} Similarly, $T_n$ may be identified with the one-sided infinite Jacobi matrix extended by zeros. For an element $r\in \mathcal{R}$, we let $\pi_N(r)\in \mathcal{R}_N$ be the projection onto the first $2N-1$ coordinates. Let now $\psi$ be the mapping defined on the set of measures $\mu\in \mathcal{M}_1$ with compact support by \begin{align} \psi(\mu) = r = (b_1,a_1,b_2,\dots ) , \end{align} where $a_n=b_{n+1}=0$ if $\#\operatorname{supp}(\mu)\leq n$. Note that $\psi$ is not continuous. Nevertheless, if for $K > 0$ \begin{align} \label{defMK} \mathcal{M}_{1,K} = \{\mu \in \mathcal M_1 : \operatorname{supp} (\mu) \subset [-K, K] \}\,, \end{align} then $\psi$ is a homeomorphism on $\mathcal{M}_{1,K}$. Another point of view consists in considering the measure $\mu$ as the spectral measure of the tridiagonal operator. More precisely, let $H$ be a self-adjoint bounded operator on a Hilbert space $\mathcal H$ and $e\in \mathcal H$ be a cyclic vector (that is, such that the linear combinations of the vectors $H^k e$, $k\geq 0$, are dense in $\mathcal H$). Then, the spectral measure of the pair $(H,e)$ is the unique $\mu\in \mathcal M_1$ such that \[\langle e, H^k e\rangle = \int_\mathbb R x^k \mathop{}\!\mathrm{d} \mu(x) \ \ (k \geq 1).\] It turns out that $\mu$ is a unitary invariant for $(H,e)$. Another invariant is the tridiagonal reduction defined above. If $\dim \mathcal H =n$ and $e$ is cyclic for $H$, let $\lambda_1, \ldots, \lambda_n$ be the (real) eigenvalues of $H$ and let $u_1, \ldots, u_n$ be a system of orthonormal eigenvectors. The spectral measure of the pair $(H,e)$ is then \begin{align}\label{spectralmeasure} \mu^{(n)} = \sum_{k=1}^n w_k\delta_{\lambda_k}\,, \end{align} with $w_k= |\langle u_k, e\rangle|^2$. This measure is a weighted version of the empirical eigenvalue distribution \begin{align}\label{empiricallaw} \mu^{(n)}_{{\tt u}} = \frac{1}{n} \sum_{k=1}^n \delta_{\lambda_k} \,.
\end{align} If $J$ is a Jacobi matrix, we can take the first vector $e_1$ of the canonical basis as the cyclic vector. Let $\mu$ be the spectral measure associated to the pair $(J, e_1)$; then $J$ represents the multiplication by $x$ in the basis of orthonormal polynomials associated to $\mu$, and $J=T(\mu)$. Although the general sum rules of the present work are purely deterministic identities, we now need to present a randomization to define the elements involved in these formulas. \subsection{Randomization} \label{sec:randomization} In the following, $\beta = 2\beta' >0$ is a parameter, having in statistical physics the meaning of inverse temperature. The main object in our large deviation results is the random probability measure \begin{align} \label{defmun} \mu_n = \sum_{k=1}^n w_k\delta_{\lambda_k} . \end{align} For suitable $V$, we let $\mathbb{P}_n^V$ be the distribution of a random measure such that \begin{itemize} \item the support points $(\lambda_1,\dots ,\lambda_n)$ have the joint density \begin{align}\label{generaldensity} (Z_n^V)^{-1} e^{- n\beta' \sum_{k=1}^nV(\lambda_k)}\prod_{1\leq i < j\leq n} |\lambda_i - \lambda_j|^\beta \end{align} with respect to the Lebesgue measure on $\mathbb{R}^n$, \item the weights $(w_1,\dots ,w_n)$ have a Dirichlet distribution $\operatorname{Dir}_n(\beta')$ of homogeneous parameter $\beta'$ on the simplex $\{(w_1,\dots ,w_n)\in [0,1]^n |\, \sum_k w_k = 1\}$, with density proportional to $(w_1 \cdots w_n)^{\beta' - 1}$, \item the support points $(\lambda_1,\dots ,\lambda_n)$ are independent of the weights $(w_1,\dots ,w_n)$. \end{itemize} Formula (\ref{generaldensity}) defines a log-gas density of particles in an external potential $V$. For specific values of $\beta$, the distribution of $\mu_n$ is exactly the distribution of the spectral measure as defined in the above section. For $\beta= 1$ (resp.
$\beta= 2, \beta=4$), it is the distribution of the spectral measure of the pair $(X_n , e_1)$ where $X_n$ is a random symmetric (resp. Hermitian, self-dual) matrix whose density is proportional to $\exp\{-n\beta'\!\ \mathrm{tr}\!\ V(X)\}$. Additionally, for some classical potentials (Hermite, Laguerre, Jacobi) and general $\beta$, there are models of tridiagonal random matrices whose spectral measures are distributed as $\mu_n$ (see \cite{dumede2002}, \cite{Killip1}). For general potentials and general $\beta$, it is shown in \cite{KriRidVir2016}, Proposition 2, that under $\mathbb P_n^V$, the Jacobi coefficients $r_n=(b_1,a_1,\dots , b_n)$ have a density proportional to \begin{align} \label{krishnadensity} (\tilde Z_n^V)^{-1} \exp \left\{ -n\beta' \left( \mathrm{tr}\!\ V(T_n) - 2\sum_{k=1}^{n-1} (1-\tfrac{k}{n} - \tfrac{1}{n\beta}) \log (a_k) \right) \right\} \end{align} with respect to the Lebesgue measure on $\mathcal{R}_n = (\mathbb{R}\times [0,\infty))^{n-1}\times \mathbb{R}$ and where $T_n$ is as in \eqref{favardfini}. \subsection{Assumptions on the potential} The potential $V:\mathbb{R} \to (-\infty,+\infty]$ is supposed to be continuous and real valued on the interval ${(b^-, b^+)}$ ($-\infty\leq b^-<b^+\leq+\infty$), infinite outside of $[b^-,b^+]$, and such that $\lim_{x\to b^\pm} V(x) = V(b^\pm)$ with possible limit $V(b^\pm)=+\infty$. We will always make the following assumption. \begin{itemize} \item[(A1)] Confinement: if $|b|=\infty$ for $b\in\{b^-,b^+\}$, then \begin{align*} \liminf_{x \rightarrow b} \frac{V(x)}{2 \log |x|} > 1 . \end{align*} \end{itemize} Under (A1), the functional $\mathcal E (\mu)$ defined by \begin{align} \label{ratemuu} \mu \mapsto \mathcal E (\mu) := \int V(x) d\mu(x) - \int\!\!\!\int\log |x-y| d\mu(x)d\mu(y) \end{align} has a unique minimizer $\mu_V$, which is compactly supported, see \cite{johansson1998fluctuations} or \cite{agz}. We write $b_k^V,a_k^V$ for the Jacobi coefficients of $\mu_V$. Further, we denote by $I$ the support of $\mu_V$. The following assumption is crucial for the large deviation behavior of the extremal eigenvalues.
\noindent (A2) Control (of large deviations): the effective potential \begin{align} \label{poteff} \mathcal{J}_V (x) := V(x) -2\int \log |x-\xi|\!\ d\mu_V(\xi) \end{align} achieves its global minimum value on $(b^-, b^+) \setminus \operatorname{Int}(I)$ only on the boundary of this set. We also need the function \begin{align} \label{defF} \mathcal{F}_V(x) & = \begin{cases} \mathcal{J}_V(x) - \inf_{\xi \in \mathbb{R}} \mathcal{J}_V(\xi) & \text{ if } x \notin \operatorname{Int}(I), \\ \infty & \text{ otherwise. } \end{cases} \end{align} For $d \in \mathbb N$, let $\mathcal V_{2d}$ be the set of all polynomials of degree $2d$ with positive leading coefficient, and \[\mathcal V = \bigcup_{d\geq 1} \mathcal V_{2d}\,.\] It is known (\cite{pastur_shcherbina} Sect. 11.2) that if $V \in \mathcal V_{2d}$, then the support of $\mu_V$ is the union of a finite number $M \leq d$ of disjoint intervals \[I = I_1\cup \dots \cup I_M\,.\] If $M=1$, we say that we are in the one-cut regime; otherwise, we are in the multi-cut regime. Notice that when $V\in \mathcal V$ is convex, we are in the one-cut case and assumption (A2) is satisfied \cite[Proposition 3.1]{johansson1998fluctuations}. \section{Sum rules} Let $I = I_1\cup \dots \cup I_M$ be a union of $M$ compact and disjoint intervals, each with nonempty interior. We introduce the set $\mathcal{S} =\mathcal{S}(I)$ of finite non-negative measures $\mu$ with compact support \begin{align} \label{support} \operatorname{supp}(\mu) = J \cup E , \end{align} with $J\subset I$ and $E=E(\mu)$ a finite or countable subset of $\mathbb{R}\setminus I$. Denote by $\mathcal{S}_1(I)$ the set of all probability measures belonging to $\mathcal{S}$. For a point $\lambda \in \mathbb{R}$, we denote by $e(\lambda)$ the point in $\partial I$ minimizing the distance to $\lambda$, where in case of ties we choose the leftmost one.
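For a concrete illustration of the effective potential \eqref{poteff}, take $V(x)=x^2/2$, for which $\mu_V$ is the semicircle law \eqref{SC0}: on the support $[-2,2]$ one has $\mathcal J_V \equiv 1$, while outside the support $\mathcal J_V$ is strictly larger and increasing, so $\mathcal F_V>0$ off the support and (A2) holds. This can be checked numerically with the following sketch (assuming \texttt{numpy}; the discretization of $\operatorname{SC}$ and the grid size are our choices).

```python
import numpy as np

# Discretize SC via t = 2 cos(theta), midpoint rule in theta:
# integral of f dSC  ~  (2/N) * sum_k f(2 cos th_k) * sin^2(th_k).
N = 20000
th = (np.arange(N) + 0.5) * np.pi / N
t = 2.0 * np.cos(th)
w = (2.0 / N) * np.sin(th) ** 2          # quadrature weights, summing to 1

def J(x):
    # Effective potential J_V(x) = V(x) - 2 * int log|x - xi| dSC(xi), V(x) = x^2/2.
    return x ** 2 / 2 - 2.0 * np.sum(w * np.log(np.abs(x - t)))
```

One finds $J(x)\approx 1$ for $x\in(-2,2)$ and, for instance, $J(3)\approx 2.43$, so that $\mathcal F_V(3)\approx 1.43$.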
In all the sum rules considered, the Kullback-Leibler divergence or relative entropy between two probability measures $\mu$ and $\nu$ plays a major role. When the ambient space is $\mathbb R$ endowed with its Borel $\sigma$-field, it is defined by \begin{equation} \label{KL} {\mathcal K}(\mu\, |\ \nu)= \begin{cases} \ \displaystyle\int_{\mathbb R}\log\frac{d\mu}{d\nu}\!\ d\mu\;\;& \mbox{if}\ \mu\ \hbox{is absolutely continuous with respect to}\ \nu ,\\ \ \infty & \mbox{otherwise.} \end{cases} \end{equation} Usually, $\nu$ is the reference measure. Here the spectral side will involve the reversed Kullback-Leibler divergence, where $\mu$ is the reference measure and $\nu$ is the argument. In this case, $\mathcal{K}(\mu |\nu)$ is finite if and only if \begin{align} \label{eq:KL2} \int \log w(x) \, d\mu(x) > - \infty , \end{align} where $d\nu(x) = w(x)d\mu(x) + d\nu_s(x)$ is the Lebesgue decomposition of $\nu$ with respect to $\mu$. \subsection{A general gem} The following theorem is our first spectral theoretic main result. It is a \emph{gem}, as explained in the introduction, giving equivalent conditions for a measure $\mu$ to be sufficiently ``close'' to the reference measure $\mu_V$, defined through its potential $V$. We denote by $r^V=(b_1^V,a_1^V,b_2^V,\dots )$ the Jacobi coefficients of $\mu_V$. \begin{thm} \label{abstractgem} Let $V\in \mathcal V$ and let $\mu$ be a probability measure with compact, infinite support.
Then \begin{align} \label{gengem} \sup_{N\geq 1} \left[ \mathrm{tr} V(\pi_N(r)) -\mathrm{tr} V(\pi_N(r^V)) - 2\sum_{k=1}^{N-1} \log (a_k/ a_k^V) \right] <\infty \end{align} if and only if \begin{enumerate} \item $\mu \in \mathcal S_1(I)$ with $I=\operatorname{supp}(\mu_V)$, \item $\sum_{\lambda \in E(\mu)} \mathcal{F}_V(\lambda)<\infty$ , \item the Lebesgue decomposition $\mathop{}\!\mathrm{d} \mu(x) = f(x) \mathop{}\!\mathrm{d}\mu_V(x)+\mathop{}\!\mathrm{d}\mu_s(x)$ with respect to $\mu_V$ satisfies \begin{align*} \int_I \log f(x)\, \mathop{}\!\mathrm{d}\mu_V(x) >-\infty . \end{align*} \end{enumerate} \end{thm} \begin{rem} \label{rem:summability} \begin{enumerate} \item In the vocabulary of spectral theory, Condition 1 is called the Blumenthal-Weyl condition and Condition 3 the quasi-Szeg{\H o} condition. \item The second condition in Theorem \ref{abstractgem} regarding the outliers in $E(\mu)$ may be simplified, provided one can specify the decay of the density of $\mu_V$ at the boundary of $I$. Suppose the equilibrium measure $\mu_V$ has a Lebesgue density $\rho_V$ satisfying $\rho_V(x) = \sqrt{d(x,\partial I)}Q(x)$ with $Q$ and $Q^{-1}$ bounded on $\operatorname{supp}(\mu_V)$ (see, e.g., Theorem 11.2.1 in \cite{pastur_shcherbina} for sufficient conditions); then Condition 2 in Theorem \ref{abstractgem} is equivalent to \begin{align} \label{Lieb} \sum_{\lambda \in E}\, d(\lambda,I)^{3/2} < \infty . \end{align} In the vocabulary of spectral theory, it is then called the Lieb-Thirring condition. \end{enumerate} \end{rem} \subsection{The one-cut sum rule} \label{suonecut} We now give a general sum rule for any suitable polynomial potential. In this quite general framework, the shape of the sum rule remains the same as before. On one hand, the spectral side always involves both the reversed Kullback information with respect to the corresponding equilibrium measure and an additional term related to the non-essential spectrum.
On the other hand, the other side is a discrepancy between the Jacobi coefficients. Hence, this theorem gives rise to a general sum rule for the one-cut case. We will see later (in Theorem \ref{MAIN2}) that in the multi-cut case a remainder term appears on the Jacobi coefficient side. \begin{thm} \label{newsumrule} Let $V\in \mathcal V$ be such that $\mu_V$ is supported by a single interval and let $\mu$ be a probability measure with compact infinite support. Then \begin{align*} \lim_{N\to \infty} \left[ \mathrm{tr}\!\ V(\pi_N(r)) -\mathrm{tr}\!\ V(\pi_N(r^V)) - 2\sum_{k=1}^{N-1} \log (a_k/a^V_k) \right] = \mathcal{K}(\mu_V\!\ |\!\ \mu) + \sum_{\lambda \in E} {\mathcal F}_{V}(\lambda) \end{align*} for $\mu\in \mathcal{S}_1(I)$, and if $\mu\notin \mathcal{S}_1(I)$, the left hand side equals $+\infty$. \end{thm} This sum rule has to be compared with the sum rule proved by Nazarov et al. \cite{Nazarov}. Let us assume, without loss of generality, that we are in the one-cut regime with $I = [-2,2]$. Nazarov et al. considered equilibrium measures of the form $A(x) \operatorname{SC}(dx)$ where $A$ is a nonnegative polynomial. They define the function $F$ by \begin{align} F(x) = \begin{cases} \displaystyle\int_{2}^x A(t) \sqrt{(t-2)(t+2)} \mathop{}\!\mathrm{d} t & x \geq 2\\ \displaystyle \int_x^{-2} A(t) \sqrt{(2-t) (t+2)} \mathop{}\!\mathrm{d} t & x \leq -2\,, \end{cases} \end{align} and state the sum rule with $F$ instead of $\mathcal F_V$. Let us recall how $A$ and $V$ are related. First, it is known (\cite{pastur_shcherbina} Th. 11.2.4) that if $V \in \mathcal C^2$, then \begin{align} \label{eq:polynomialmuV} \mu_V(dx) = \frac{1}{2\pi}A(x)\sqrt{4-x^2} \mathop{}\!\mathrm{d} x = A(x) \mathop{}\!\mathrm{d} \operatorname{SC}(x) \end{align} with \begin{align} \label{VtoA} A(x) = \frac{1}{\pi}\int_{-2}^2 \frac{V'(x) - V'(t)}{x-t} \frac{\mathop{}\!\mathrm{d} t}{\sqrt{4-t^2}}\,. \end{align} In particular, if $V \in \mathcal V_{2d}$, then $A$ is a polynomial of degree $(2d-2)$.
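The map \eqref{VtoA} from $V$ to $A$ is easy to probe numerically. The sketch below (assuming \texttt{numpy}; the helper \texttt{A\_from\_V} is ours) evaluates the integral in \eqref{VtoA} by Gauss-Chebyshev quadrature, which is exact here for polynomial $V'$ of moderate degree: for $V(x)=x^2/2$ it recovers $A\equiv 1$, i.e., $\mu_V=\operatorname{SC}$, and for $V'(t)=t^3$ it returns the values of the degree-$2$ polynomial $A(x)=x^2+2$, as a formal check of the degree count $2d-2$ stated above.

```python
import numpy as np

def A_from_V(Vprime, x, N=64):
    # A(x) = (1/pi) int_{-2}^{2} (V'(x) - V'(t))/(x - t) * dt / sqrt(4 - t^2),
    # by Gauss-Chebyshev quadrature: nodes t_k = 2 cos((2k-1) pi / (2N)) with
    # equal weights pi/N, so the value is simply a mean over the nodes.
    t = 2.0 * np.cos((2.0 * np.arange(1, N + 1) - 1.0) * np.pi / (2.0 * N))
    return np.mean((Vprime(x) - Vprime(t)) / (x - t))
```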
Conversely, if $A \in \mathcal C^1$, we have \begin{align} \label{AtoV} V'(x) = xA(x) -2 \int \frac{A(x) - A(t)}{x-t} \mathop{}\!\mathrm{d} \operatorname{SC}(t)\,. \end{align} Indeed, let us consider the master equation connecting $V$ and $\mu_V$: \begin{align*}\frac{V(x)}{2} - \int \log |x-y| d\mu_V (y) = \mathcal E(\mu_V) -\frac12 \int V d\mu_V \ \hbox{for}\ \ x \in [-2,2]\,. \end{align*} By differentiation, we get \begin{align} \label{master} \frac{1}{2} V'(x) &= \mathrm{P.V.} \int \frac{\mathop{}\!\mathrm{d} \mu_V(t)}{x-t} \\ \notag &= \mathrm{P.V.} \int \frac{A(t)}{x-t}\mathop{}\!\mathrm{d} \operatorname{SC}(t)\\ \label{integrand} &= \mathrm{P.V.}\left[\int \frac{A(t) - A(x)}{x-t}\mathop{}\!\mathrm{d} \operatorname{SC}(t) + A(x) \int \frac{\mathop{}\!\mathrm{d} \operatorname{SC}(t)}{x-t} \right]\\ \label{cont} &= \int \frac{A(t) - A(x)}{x-t}\mathop{}\!\mathrm{d} \operatorname{SC}(t) + A(x)\!\ \mathrm{P.V.}\int \frac{\mathop{}\!\mathrm{d} \operatorname{SC}(t)}{x-t} \\ \label{master2}&= \int \frac{A(t) - A(x)}{x-t}\mathop{}\!\mathrm{d} \operatorname{SC}(t) + \frac{x}{2} A(x)\,, \end{align} where (\ref{cont}) holds by continuity of the integrand in the first integral of (\ref{integrand}), and (\ref{master2}) holds by application of (\ref{master}) to the potential $x^2/2$. This gives exactly (\ref{AtoV}) for $x \in [-2, 2]$, and it can be extended to every real $x$, as an equality between polynomials. As a consequence, from (\ref{poteff}) and (\ref{defF}) we have for $x\notin [-2, 2]$ \begin{align} \mathcal F'_V (x) &= V'(x) - 2 \int \frac{A(t)}{x-t} \mathop{}\!\mathrm{d} \operatorname{SC}(t) \notag \\ &= xA(x) - 2 A(x) \int \frac{\mathop{}\!\mathrm{d} \operatorname{SC}(t)}{x-t}\notag \\ &= xA(x) - A(x) \left(x - \sqrt{x^2-4}\right)\notag \\ &= A(x) \sqrt{x^2-4} \end{align} hence (A2) is satisfied if $A \geq 0$. Actually, it is known that, if $A > 0$ on $[-2,2]$ and if (A2) is satisfied, then $\mathcal F_V = F$ (a consequence of (1.13) in \cite{Albeverio}).
In this case, (\ref{Lieb}) holds true. \section{Large deviation results} \label{sec:LDPs} In order to be self-contained, we recall the basic definitions and tools in the appendix, as well as a technical result used in the proof. We refer to \cite{demboz98} for more details. The classical LDP for the empirical eigenvalue measure (defined in \eqref{empiricallaw}) is widely known, see \cite{arous1997large} or \cite{agz}, Theorem 2.6.1. It holds in the space $\mathcal{M}_1$, equipped with the weak topology. \begin{thm} \label{thm:classicalLDP} Assume that $V$ satisfies assumption (A1). Then the sequence of empirical spectral measures $\mu_{\tt u}^{(n)}$ satisfies the LDP with speed $n^2$ and good rate function \begin{align*} \mathcal{I}_u (\mu) = \mathcal{E}(\mu) - \inf_\nu\mathcal{E}(\nu) \end{align*} where $\mathcal{E}$ is as in \eqref{ratemuu}. \end{thm} This theorem shows that $\mu_{\tt u}^{(n)}$ converges rather quickly towards the unique minimizer $\mu_V$ of $\mathcal I_u$, since the speed in the LDP is $n^2$. On the other hand, the convergence of the extremal eigenvalue to the extremal point of the support of $\mu_V$ is slower. Indeed, the extremal eigenvalue satisfies an LDP only at speed $n$, as stated in the following theorem (see \cite{BorGui2013onecut} Prop. 2.1, based on \cite{arous2001aging}). \begin{thm} \label{thm:classicalextremalLDP} If $V$ is continuous and satisfies (A1) and (A3), then the random variable $\lambda^{(n)}_{\max} = \max\{\lambda_1, \dots, \lambda_n\}$ satisfies the LDP at speed $n$ with good rate function $\mathcal F_V$. \end{thm} In the multi-cut case, we can also show an LDP for eigenvalues between two intervals of $I$ (Theorem \ref{thm:extremalLDP}). A related statement about outliers is given in \cite[Lemma 3.1]{BorGuimulticut}. \subsection{Spectral LDP} The following theorem is our main large deviation result.
The spectral measures $\mu_n$ are considered as random elements in $\mathcal M_1$, equipped with the weak topology and the corresponding Borel $\sigma$-algebra. \begin{thm} \label{MAIN} Assume that the potential $V$ satisfies the assumptions (A1), (A2) and (A3). Then the sequence of spectral measures $\mu_n$ satisfies under $\mathbb P_n^V$ the LDP with speed $\beta'n$ and good rate function \begin{align*} \mathcal{I}_{\operatorname{sp}}(\mu) = \mathcal{K}(\mu_V\!\ |\!\ \mu) + \sum_{\lambda \in E(\mu)} {\mathcal F}_{V}(\lambda) \end{align*} if $\mu \in \mathcal{S}_1(I)$ and $\mathcal{I}_{\operatorname{sp}}(\mu) = \infty$ otherwise. \end{thm} \subsection{Coefficient LDP} To obtain an expression for the rate function of the random recursion coefficients, we need to assume that $V$ is a polynomial. Recall that $\mathcal{M}_{1,K}$ is the set of probability measures with support in $[-K,K]$ and that $\psi$ maps a measure $\mu$ to its Jacobi coefficients. The large deviation principle for the recursion coefficients will be proved under conditioning on the set $\mathcal R_K=\psi(\mathcal{M}_{1,K})$, and we define $\mathbb{P}^V_{n,K} = \mathbb{P}^V_{n}(\cdot |\mathcal R_K)$. Note that if $\mu$ is a spectral measure with support in $[-K,K]$, the Jacobi coefficients satisfy $|b_k|,|a_k|\leq K$, see e.g. Proposition 1.3.8 in \cite{simon2}. Recall that $r^V$ is the sequence of Jacobi parameters of the equilibrium measure $\mu_V$; we will always choose $K$ so large that the support of $\mu_V$ is contained in the interior of $[-K,K]$. \begin{thm} \label{MAIN2} Suppose $V$ is a polynomial of even degree with positive leading coefficient. Then the sequence $(\mu_n)_n$ satisfies under $\mathbb{P}^V_{n,K}$ the LDP in $\mathcal{M}_{1,K}$ with speed $n\beta'$ and good rate function \begin{align*} \mathcal{I}_{\operatorname{co},K}(\mu) = \lim_{N\to \infty} \left[ \mathrm{tr} V(\pi_N(r)) -\mathrm{tr} V(\pi_N(r^V)) - 2\sum_{k=1}^{N-1} \log (a_k/a^V_k) + \xi_{N,K}(\pi_N(r))\right] .
\end{align*} where the term $\xi_{N,K}(\pi_N(r))$ satisfies \begin{align}\label{crucialbound} |\xi_{N,K}(\pi_N(r))| \leq C(K,V) \limsup_{N\to \infty} M_+ (r_N) \,, \end{align} for some constant $C(K,V) > 0$ and \begin{align} \label{defM+} M_+ (r_N) = |a_{N-d}-a^V_{N-d}| + |b_{N-d}-b^V_{N-d}| + \dots + |b_{N+d}-b^V_{N+d}|\,. \end{align} \end{thm} The proof of this LDP is done in Section \ref{sec:coeffproof}. Actually, this proof is not independent of the proof of Theorem \ref{MAIN}, since it uses several times the fact that an LDP holds. However, the method of proof is different and uses directly the density \eqref{krishnadensity}. Unfortunately, this does not give an explicit expression for the term $\xi_{N,K}$ in the rate function. The bound \eqref{crucialbound} implies, though, that this term is uniformly bounded on $\mathcal{R}_K$. In particular, it does not influence the finiteness of the rate function, which is crucial in view of the {\it gem} in Section \ref{sec:sumrules}. We remark that although the density \eqref{krishnadensity} always gives a (finite) vector of Jacobi parameters, the above rate function is only finite if $a_k>0$ for all $k$, that is, if $r$ is the Jacobi sequence of a measure with infinite support. For general polynomial potentials $V$, the expression for the rate cannot be extended to the full space $\mathcal{R}$, as the constant $C(K,V)$ would blow up as $K\to \infty$. We do have good control in the one-cut regime, where we can show that $\xi_{N,K}(r)$ does not change the value of the $K$-independent part of the rate. The consequence is the following LDP for the unrestricted measure. \begin{thm} \label{MAIN3} Suppose that the assumptions of Theorem \ref{MAIN2} hold. Assume further that the support of $\mu_V$ is a single interval.
Then, the sequence $(\mu_n)_n$ satisfies under $\mathbb{P}^V_{n}$ the LDP with speed $n\beta'$ and good rate function \begin{align*} \mathcal{I}_{\operatorname{co}}(\mu) = \lim_{N\to \infty} \left[ \mathrm{tr} V(\pi_N(r)) -\mathrm{tr} V(\pi_N(r^V)) - 2\sum_{k=1}^{N-1} \log (a_k/a^V_k) \right] . \end{align*} \end{thm} \medskip \begin{rem} \label{rem:errorterm} It is an interesting question to ask for the role of the remainder term $\xi_{N,K}$ in Theorem \ref{MAIN2}. Considering that the one-cut LDP in Theorem \ref{MAIN3} does not involve such a term, it is either an artifact or an immanent feature of the multi-cut case. We argue that the remainder is in fact not an artifact, but necessary in order to distinguish between different measures obtained by permuting the Jacobi coefficients. As an example, we consider the quartic potential $V(x) = \frac{x^4}{4}-\frac{{\tt v}\!\ x^2}{2}$. When ${\tt v}>2$, the equilibrium measure $\mu_V$ is supported by two disjoint intervals $[-\alpha^+, -\alpha^-] \cup [\alpha^-,\alpha^+]$ with $\alpha^\pm = \sqrt{{\tt v}\pm 2}$, see \cite{pastur_shcherbina}, Example 11.2.11 (2) and Problem 11.4.13, or \cite{Blower}, Section 4.6. The Jacobi coefficients of $\mu_V$ are given by $b_k=0$ for all $k\geq 1$ and the $a_k$ are perturbations of two-periodic coefficients, as given in equation (14.2.16) in \cite{pastur_shcherbina}. Indeed, the measure $\mu_V$ is symmetric and the monic orthogonal polynomials of degree $2n$ may be written as $P_n(x^2)$, where the $P_n$ are orthogonal with respect to a measure $\mu_{\mathrm{even}}$ supported by $[{\tt v}-2,{\tt v}+2]$ and satisfy the recursion \begin{align} x P_n(x) = P_{n+1}(x) + (a_{2n}^2 + a_{2n+1}^2) P_n(x) + a_{2n}^2a_{2n-1}^2 P_{n-1}(x) .
\end{align} Additionally, the monic orthogonal poylnomials of $\mu_V$ of degree $2n+1$ may be written as $xQ_n(x^2)$, with the $Q_n$ orthogonal to a measure $\mu_{\mathrm{odd}}$ supported by $[{\tt v}-2,{\tt v}+2]$, and they satisfy \begin{align} x Q_n(x) = Q_{n+1}(x) + (a_{2n+2}^2 + a_{2n+1}^2) Q_n(x) + a_{2n+1}^2a_{2n}^2 Q_{2n-1}(x) . \end{align} The combination of the two recursions implies that \begin{align} \label{Rakh} \lim_{n\to \infty} a_{n}^2a_{n-1}^2 =1, \qquad \lim_{n\to\infty} (a_{n}^2 + a_{n-1}^2) = {\tt v} . \end{align} This shows that (at least along subsequential limits, which we may ignore for the following argument) $a_{2n-1}\to a$, $a_{2n}\to \bar a$, where $a,\bar a$ are the two solutions to $\ell^2 - {\tt v} \ell +1 = 0$, i.e., \begin{align} \ell_1 = \frac{{\tt v}- \sqrt{{\tt v}^2 -4}}{2}, \qquad \ell_1 = \frac{{\tt v}+ \sqrt{{\tt v}^2 -4}}{2} . \end{align} Switching the $a_k$ of even and odd index, we obtain a new measure $\bar \mu$. If there would be no remainder term $\xi_{J,K}$, the rate function at $\bar \mu$ would be the limit as $N\to \infty $ of \begin{align} \label{eq:pseudorate} \tilde{\mathcal{I}}^{(N)}_{\operatorname{co},K}(\bar \mu) = \mathrm{tr} V(\pi_N(r)) -\mathrm{tr} V(\pi_N(r^V)) - 2\sum_{k=1}^{N-1} \log (a_k/a^V_k) . \end{align} However, the quasi-periodic structure of $\mu$ causes an alternating behavior of $\tilde{\mathcal{I}}^{(N)}_{\operatorname{co},K}(\bar \mu)$. More precisely, a straightforward but lengthy calculation yields that \begin{align} \label{eq:pseudorate2} \lim_{N\to \infty} \left( \tilde{\mathcal{I}}^{(2N)}_{\operatorname{co},K}(\bar \mu)-\tilde{\mathcal{I}}^{(2N-1)}_{\operatorname{co},K}(\bar \mu)\right) = \frac{1}{2}({\bar a}^4-a^4) - \mathrm{v}({\bar a}^2-a^2)-2\log(\bar a/a) , \end{align} so that \eqref{eq:pseudorate} does not converge as $N\to \infty$. \end{rem} \medskip \subsection{From LDPs to sum rules} In this section, we prove Theorem \ref{abstractgem} and Theorem \ref{newsumrule}. 
The main argument in both cases is that we have two different expressions for the large deviation rate function, one using the spectral encoding and one using the encoding by Jacobi coefficients. Since both expressions must agree, they yield the ``spectral side'' and the ``coefficient side'', respectively, of a sum rule. \medskip \textbf{Proof of Theorem \ref{newsumrule}:} Suppose $V$ is a nonzero polynomial of even degree, such that the equilibrium measure $\mu_V$ is supported by a single interval $I$. Theorem \ref{MAIN} yields the LDP for $(\mu_n)_n$ with speed $\beta' n$ and rate $\mathcal{I}_{\operatorname{sp}}$. On the other hand, by Theorem \ref{MAIN3}, $(\mu_n)_n$ satisfies the LDP with speed $\beta' n$ and rate function $\mathcal{I}_{\operatorname{co}}$. Since a large deviation rate function is unique, we have the equality \begin{align} \mathcal{I}_{\operatorname{sp}}(\mu) = \mathcal{I}_{\operatorname{co}}(\mu) \end{align} for all $\mu \in \mathcal{M}_1$. For $\mu \in \mathcal{S}_1(I)$, this is precisely the equality claimed in Theorem \ref{newsumrule}. For $\mu \notin \mathcal{S}_1(I)$, we know that the left hand side satisfies $\mathcal{I}_{\operatorname{sp}}(\mu)=+\infty$, so the right hand side must equal $+\infty$ as well. \hfill $\Box$ \medskip \textbf{Proof of Theorem \ref{abstractgem}:} Let $V$ be a nonzero polynomial of even degree. We want to combine the LDP results of Theorem \ref{MAIN} and Theorem \ref{MAIN2}. The former are obtained under $\mathbb P_n^V$, whereas the latter are under $\mathbb{P}^V_{n,K}$, for $K$ large enough (depending on $V$). Theorem \ref{thm:restrictedLDP} shows that $(\mu_n)_n$ satisfies also the LDP under $\mathbb{P}^V_{n,K}$ in the restricted space $\mathcal{M}_{1,K}$, with rate $\mathcal{I}_{\operatorname{sp},K}$ the restriction of $\mathcal{I}_{\operatorname{sp}}$ to $\mathcal{M}_{1,K}$.
By uniqueness of rate functions, we obtain the restricted sum rule \begin{align} \mathcal{I}_{\operatorname{sp},K}(\mu) = \mathcal{I}_{\operatorname{co},K}(\mu) \end{align} for any $\mu \in \mathcal{M}_{1,K}$. For $\mu$ a probability measure with compact, infinite support, we may choose $K$ so large that $\mu \in \mathcal{M}_{1,K}$. Then the above equality holds, where both sides are simultaneously finite or infinite. Condition \eqref{gengem} is equivalent to finiteness of $\mathcal{I}_{\operatorname{co},K}(\mu)$, since from \eqref{crucialbound} and \eqref{defM+} \[ |\xi_{N,K}(\pi_N (r))| \leq 4(2d+1) K C(K,V)\,.\] By the restricted sum rule, this is equivalent to finiteness of $\mathcal{I}_{\operatorname{sp},K}(\mu)$. We have $\mathcal{I}_{\operatorname{sp},K}(\mu)<\infty$ if and only if $\mu \in \mathcal{S}_1(I)$, and both $\sum_{\lambda\in E}\mathcal{F}_V(\lambda)$ and $\mathcal{K}(\mu_V|\mu)$ are finite. The first two conditions are just \emph{1.} and \emph{2.} in Theorem \ref{abstractgem}, and the third one is equivalent to \emph{3.}, see \eqref{eq:KL2}. \hfill $\Box$ \section{Proof of the spectral LDP} \label{sec:spectralproof} \subsection{Structure of the proof} This section is devoted to the proof of Theorem \ref{MAIN}. In the large deviation behavior of the weighted spectral measure $\mu_n$, all eigenvalues outside of $I$ (the outliers) will contribute, and in fact the rate function in Theorem \ref{MAIN} can be finite even for countably many outliers. The main difficulty of the proof comes from the {\it a priori} unbounded number of eigenvalues close to $\partial I$, and the dependence on the bulk of eigenvalues in $I$. As in our proof in the one-cut regime, the main idea is to apply the projective limit method to reduce the spectral measure to a measure with only a fixed number of eigenvalues outside the limit support $I$. However, controlling the eigenvalues lying between two of the intervals forming $I$ requires special care.
We do this by dividing the outliers into groups according to which of the sub-intervals constituting $I$ they are closest to. This allows us to apply the general strategy of the one-cut case, albeit in a much more technical way. Additionally, our new encoding for spectral measures introduced below also takes care of topological problems which occurred in \cite{magicrules} when transferring the LDPs from one space to another. The main steps of the proof are as follows. To begin with, we decouple the weights of the random measure $\mu_n$ and introduce a non-normalized random measure $\tilde\mu_n$ with weights from a family of independent random variables. In the next section, we will introduce a family $\zeta(\tilde\mu_n)$ of points not in $\operatorname{Int}(I)$ encoding the outlying support points and a family $\gamma(\tilde\mu_n)$ of corresponding weights. If $\tilde\mu_{n,I}$ denotes the restriction of $\tilde\mu_n$ to $I$, then we may identify $\tilde\mu_n$ with the collection \begin{align*} \big( \tilde\mu_{n,I}, \zeta(\tilde\mu_n), \gamma(\tilde\mu_n) \big) . \end{align*} The LDP for $\mu_n$ is then proved using this representation, with the following intermediate steps. \begin{itemize} \item[(1)] We prove an LDP for a finite collection of entries of $\zeta(\tilde\mu_n)$. This is Theorem \ref{thm:extremalLDP}. \item[(2)] Using (conditional) independence of the outliers $\zeta(\tilde\mu_n)$ and the weights $\gamma(\tilde\mu_n)$, we can prove in Theorem \ref{thm:jointLDP} a joint LDP for a finite collection of entries of $(\zeta(\tilde\mu_n),\gamma(\tilde\mu_n))$. \item[(3)] In Theorem \ref{thm:LDPprojectivelimit} we use the projective method (the Dawson-G\"artner Theorem) to prove the LDP for the whole family $(\zeta(\tilde\mu_n),\gamma(\tilde\mu_n))$.
\item[(4)] Once we have this LDP, we can use the technical result of Theorem \ref{newgeneral} to combine the outliers with $\tilde\mu_{n,I}$, and we show in Theorem \ref{thm:jointjointLDP} the joint LDP for $(\tilde\mu_{n,I},\zeta(\tilde\mu_n),\gamma(\tilde\mu_n))_n$. \item[(5)] Finally, in Section \ref{sec:normalizing}, we use the contraction principle to transfer this LDP to the non-normalized spectral measure $\tilde\mu_n$ and, after normalizing, recover the spectral measure $\mu_n$. \end{itemize} \subsection{New encoding of measures} \label{sec:newencoding} Let $\mu \in \mathcal{S}(I)$ be a nonnegative measure with the restrictions on the support as in \eqref{support}. Then, $\mu$ can be written as \begin{align}\label{muinS} \mu = \mu_{I} + \sum_{\lambda \in E(\mu)} \gamma_\lambda \delta_{\lambda} , \end{align} where $\mu_I$ is the restriction to $I$. We now introduce a particular enumeration of the elements of $E(\mu)$, according to the point of $\partial I$ to which they are closest, and then according to their distance to that point. For this, recall that $I$ is a disjoint union of compact intervals $I_m$, and suppose $I_m=[l_m,r_m]$, so that $r_m<l_{m+1}$ for $m=1,\dots,M-1$. Let $\theta_m = \tfrac{1}{2}(r_m+l_{m+1})$ denote the midpoint between $I_{m}$ and $I_{m+1}$. Then there is a unique array $\zeta = (\zeta_{i,j})_{i,j}$, $i=1,\dots, 2M$, $j\geq 1$, encoding the elements of $E(\mu)$, which is defined as follows: \begin{itemize} \item $\zeta_{1,j}$ for $j\geq 1 $ are the elements of $E(\mu)$ to the left of $l_1$, in increasing order, \item $\zeta_{2,j}$ are the elements of $E(\mu)$ in $(r_1,\theta_1]$, in decreasing order, \item $\zeta_{3,j}$ are the elements of $E(\mu)\cap (\theta_1,l_2)$, in increasing order, \item and so on. \end{itemize} If there are only finitely many such elements, the sequence $(\zeta_{i,j})_j$ is extended by the boundary element $l_m$ for $i=2m-1$ and by $r_m$ for $i=2m$.
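In code, this enumeration can be sketched as follows. This is only an illustration of the bookkeeping, not part of any proof; we assume $I$ is given as a finite list of disjoint closed intervals, the outliers as a finite list of points outside $I$, and we truncate each group at a finite length $N$, padding with the corresponding boundary point.

```python
def encode_outliers(I, E, N):
    """Group the outliers E by nearest boundary region of I = [(l_1,r_1),...,(l_M,r_M)],
    order each group by decreasing distance to I, and pad each group to length N."""
    M = len(I)
    # midpoints theta_m between consecutive intervals
    theta = [(I[m][1] + I[m + 1][0]) / 2 for m in range(M - 1)]
    # each group: (membership test, padding boundary point, sort descending?)
    groups = [(lambda x: x < I[0][0], I[0][0], False)]  # left of l_1: increasing
    for m in range(M - 1):
        r_m, l_next, t = I[m][1], I[m + 1][0], theta[m]
        groups.append((lambda x, a=r_m, b=t: a < x <= b, r_m, True))        # (r_m, theta_m]: decreasing
        groups.append((lambda x, a=t, b=l_next: a < x < b, l_next, False))  # (theta_m, l_{m+1}): increasing
    groups.append((lambda x, a=I[-1][1]: x > a, I[-1][1], True))            # right of r_M: decreasing
    zeta = []
    for test, pad, descending in groups:
        col = sorted((x for x in E if test(x)), reverse=descending)
        zeta.append(col[:N] + [pad] * max(0, N - len(col)))
    return zeta  # 2M rows of length N
```

For instance, with $I = [0,1]\cup[2,3]$ and outliers $\{-2,-1,1.2,1.4,1.8,4,5\}$, the four groups come out as $(-2,-1,0)$, $(1.4,1.2,1)$, $(1.8,2,2)$ and $(5,4,3)$; in each row the distance to $I$ is nonincreasing, as required by the strict-ordering condition below.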
More precisely, given $\mu\in \mathcal{S}(I)$, let $ \zeta = \zeta(\mu) = (\zeta_{i,j})_{i,j}$ be the unique array such that \begin{align} \label{newencoding} E(\mu) = \bigcup_{i=1}^{2M} \bigcup_{j= 1}^\infty \{\zeta_{i,j}\} \setminus \partial I , \end{align} and additionally, for $j \geq 1$, \begin{align} \label{newencoding1} \begin{split} \zeta_{1,j} \in (-\infty,l_1], & \qquad \zeta_{2M,j} \in [r_M,\infty), \qquad \\ \zeta_{2m,j}\in [r_m,\theta_m], & \qquad \zeta_{2m+1,j}\in(\theta_m,l_{m+1}] , \end{split} \end{align} for $m=1,\dots , M-1$, and for all $i\leq 2M, j \geq 1$, \begin{align} \label{newencoding2} d(\zeta_{i,j},I)\geq d(\zeta_{i,j+1},I) \quad \text{ and }\quad d(\zeta_{i,j},I)> d(\zeta_{i,j+1},I)\text{ unless } \zeta_{i,j}\in \partial I \end{align} (recall that $d(\cdot ,I)$ is the distance to the set $I$). Condition \eqref{newencoding1} ensures that the elements are grouped according to the closest point in $\partial I$, and condition \eqref{newencoding2} ensures that the elements are strictly ordered, unless there are only finitely many. The union of all entries in $\zeta$ as in \eqref{newencoding} yields the elements of $E(\mu)$ again, and in addition possibly the boundary points if there are only finitely many nonzero entries. We denote the closure (in the product topology on $\mathbb{R}^{2M\times \mathbb{N}}$) of the set of all arrays $(\zeta_{i,j})_{i,j}$ satisfying \eqref{newencoding1} and \eqref{newencoding2} by $\mathcal{Z}$. In order to encode the weights of a measure as in \eqref{muinS} as well, let $\gamma(\mu) =(\gamma_{i,j})_{i,j}$, $i=1,\dots ,2M, j\geq 1$, be the unique non-negative array such that \begin{align} \label{newencoding3} \gamma_{i,j} = 0 \quad \text{ if and only if }\quad \zeta_{i,j} \in \partial I \end{align} and such that \begin{align} \label{newencoding4} \mu = \mu_{I} + \sum_{i=1}^{2M}\sum_{j=1}^\infty \gamma_{i,j} \delta_{\zeta_{i,j}}.
\end{align} The set of weights is denoted by \begin{align} \label{dafweightset} \mathcal{G} = [0,\infty)^{2M\times \mathbb{N}} \end{align} and we endow $\mathcal{G}$ with the product topology. These definitions set up a one-to-one correspondence between a finite measure $\mu \in \mathcal{S}(I)$ and \begin{align} \label{newencodingfinal} \big( \mu_{I}, \zeta(\mu), \gamma(\mu) \big) \in \mathcal{M}(I)\times \mathcal{Z}\times \mathcal{G} , \end{align} where $(\zeta, \gamma)$ satisfy \eqref{newencoding3}. The representation \eqref{newencodingfinal} will be applied not directly to the spectral measure $\mu_n$, but to a variant with uncoupled, independent weights. Recall that under $\mathbb P_n^V$, the vector $(w_1,\dots ,w_n)$ is Dirichlet distributed and has the same distribution as \begin{align}\label{decoupling} \left(\frac{\omega_1}{\omega_1+ \dots + \omega_n}, \dots , \frac{\omega_n}{\omega_1+ \dots + \omega_n}\right) , \end{align} where $\omega_1, \dots, \omega_n$ are independent variables with distribution Gamma$(\beta', (\beta'n)^{-1})$ and mean $n^{-1}$. Without loss of generality, assume that the variables $\omega_k$ are defined on the same probability space as the $\lambda$'s and independent of them. We then consider the non-normalized measure \begin{align}\label{unnorm} \tilde\mu_n = \sum_{k=1}^n \omega_k \delta_{\lambda_k} \in \mathcal{S} (I) \end{align} and we can come back to the original measure by normalization. Therefore, we start by looking at \begin{align*} \big( \tilde\mu_{n,I}, \zeta(\tilde\mu_n), \gamma(\tilde\mu_n) \big) \end{align*} and to simplify notation, we will write $\zeta^{(n)}$ for $\zeta(\tilde\mu_n)$ and $\gamma^{(n)}$ for $\gamma(\tilde\mu_n)$. \subsection{LDP for a finite collection of extremal eigenvalues} \label{sec:spectralproof1} In this section, we prove an LDP for a finite collection of elements of the arrays $(\zeta^{(n)})_n= (\zeta(\tilde\mu_n))_n$. 
Fix an $N \geq 1$ and let \begin{align} \label{defproject} \pi_N:\mathbb{R}^{2M\times \mathbb{N}} \to \mathbb{R}^{2M\times N} \end{align} denote the canonical projection onto the first $N$ columns. We denote by $\zeta_N^{(n)}= \pi_N(\zeta^{(n)})$ the image of the outlying support points and let $\mathcal{Z}_N = \pi_N(\mathcal{Z})$. The following LDP for the finite collection of extremal eigenvalues is a crucial starting point for the LDP of $\mu_n$. \begin{thm} \label{thm:extremalLDP} Under $\mathbb{P}_n^V$, the collection of extreme eigenvalues $(\zeta_N^{(n)})_n$ satisfies the LDP in $\mathcal{Z}_N$ with speed $\beta' n$ and good rate function \begin{align*} \mathcal{I}_N^{\mathrm{ext}}(z) = \sum_{i=1}^{2M} \sum_{j=1}^N \mathcal{F}_V( z_{i,j}) . \end{align*} \end{thm} The proof follows the main steps of Theorem 4.1 in \cite{magicrules}, where the $N$ largest and smallest eigenvalues were considered. The multi-cut situation, besides being notationally heavier, requires some additional care. This is due not only to outliers between two intervals of $I$, but also to the new encoding of outliers, which was not needed in the one-cut case. For this reason, we give the main arguments of the proof in Section \ref{sect:extremalLDP1}, and refer to \cite{magicrules} for the detailed calculations. The next main step is then to combine the finite collection of extremal eigenvalues with their weights. \subsection{LDP for a finite collection of eigenvalues and weights} \label{sec:spectralproof2} Similarly to the definition in Section \ref{sec:spectralproof1}, we denote by $\gamma_N^{(n)}= \pi_N(\gamma^{(n)})$ the projection of the array of weights and let $\mathcal{G}_N = \pi_N(\mathcal{G})$. The following joint LDP for $\zeta_N^{(n)}$ and $\gamma_N^{(n)}$ is the main result in this section. Since $\zeta_{i,j}^{(n)}\in \partial I$ implies in our encoding that $\gamma_{i,j}^{(n)}=0$, the two arrays are not independent.
However, conditioned on $\{\zeta_{i,j}^{(n)}\notin \partial I\}$, the eigenvalue $\zeta_{i,j}^{(n)}$ and its weight $\gamma_{i,j}^{(n)}$ are actually independent. Using this fact, and the explicit (conditional) distribution of $\gamma_{i,j}^{(n)}$, the proof becomes fairly straightforward. Let us remark that we prove the joint LDP in the ``full'' space $\mathcal{Z}_N \times \mathcal{G}_N$ without the above condition on some weights being zero, as formalized in \eqref{newencoding3}. Although the distribution $\mathbb P_n^V$ is concentrated on the subset satisfying \eqref{newencoding3}, an LDP on this subset would lead to a rate function without compact level sets. In view of later parts of the proof, we consider the larger space with the lower semi-continuous continuation of the rate function. \begin{thm} \label{thm:jointLDP} For any $N\geq 1$, the sequence $(\zeta_N^{(n)},\gamma_N^{(n)})_n$ satisfies under $\mathbb P_n^V$ the LDP in $\mathcal{Z}_N \times \mathcal{G}_N$ with speed $\beta'n$ and good rate function \begin{align*} \mathcal{I}_N^{(\mathrm{ext},\mathrm{w})}(z,g) = \mathcal{I}_N^{\mathrm{ext}}(z) + ||g||_{N,1}, \end{align*} with $||\cdot ||_{N,1}$ the $\ell_1$-norm on $\mathcal{G}_N$. \end{thm} \textbf{Proof:} Let $\tilde \gamma^{(n)}_{i,j}$, $1\leq i \leq 2M, j\geq 1$ be independent and Gamma$(\beta', (\beta'n)^{-1})$ distributed random variables, defined on the same probability space as $\zeta_N^{(n)},\gamma_N^{(n)}$, and independent of $\zeta^{(n)}$. Then, by \eqref{unnorm}, we have the equality in distribution \begin{align} \label{equaldistextweights} \big(\zeta^{(n)}_{i,j},\gamma_{i,j}^{(n)}\big)_{i,j} \stackrel{d}{=} \big( \zeta^{(n)}_{i,j},\tilde\gamma^{(n)}_{i,j}\mathbbm{1}_{\{ \zeta_{i,j}^{(n)}\notin \partial I \}} \big)_{i,j}. \end{align} Let $\tilde{\gamma}^{(n)}_N = \pi_N( (\tilde{\gamma}^{(n)}_{i,j})_{i,j})$.
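Before carrying these weights through the proof, it may help to see why the rate of a single decoupled weight is linear. A minimal numerical illustration in the special case $\beta'=1$ (an assumption made only for this sketch, so that the Gamma weight becomes exponential with mean $1/n$ and its tail is available in closed form):

```python
import math

# Illustration only (beta' = 1): a decoupled weight is then exponential
# with mean 1/n, so P(omega >= x) = exp(-n x), and the large deviation
# rate at speed beta'*n = n is exactly I_0(x) = x for every n.

def empirical_rate(n, x):
    tail = math.exp(-n * x)      # P(omega >= x) for omega ~ Exp(rate n)
    return -math.log(tail) / n   # -(1/n) log P(omega >= x)

for n in (10, 100):
    for x in (0.5, 1.0, 2.0):
        assert abs(empirical_rate(n, x) - x) < 1e-12
```

For general $\beta'$ the Gamma density carries an extra polynomial prefactor, which does not affect the exponential scale.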
It follows by straightforward calculations that for each $i,j$, the sequence $(\tilde{\gamma}^{(n)}_{i,j})_n$ satisfies the LDP in $[0,\infty)$ with speed $\beta'n$ and good rate function $I_0$, with $I_0(x) = x$. Since the $\tilde\gamma^{(n)}_{i,j}$ are independent, this implies the LDP for $\tilde{\gamma}^{(n)}_N$ in $\mathcal{G}_N$ with speed $\beta'n$ and good rate function \begin{align} \mathcal{I}_N^{\mathrm{w}}(\tilde{g}) = \sum_{i=1}^{2M} \sum_{j=1}^N I_0(\tilde{g}_{i,j}) = ||\tilde{g}||_{N,1} . \end{align} For the joint LDP, let us consider the finite family $(\zeta_{1,j}^{(n)},\gamma_{1,j}^{(n)})_{1\leq j\leq N}$ corresponding to eigenvalues to the left of the leftmost interval. Then, for sets $A=A_1\times \dots \times A_N, B=B_1\times \dots\times B_N\subset \mathbb{R}^N$, \eqref{equaldistextweights} implies \begin{align} \label{jointLDP0} & \mathbb P_n^V\big((\zeta_{1,j}^{(n)})_j \in A,(\gamma_{1,j}^{(n)})_j \in B ) = \mathbb P_n^V\big((\zeta_{1,j}^{(n)})_j \in A )\mathbb P_n^V( (\tilde\gamma_{1,j}^{(n)})_j \in B) , \end{align} whenever $\partial I \cap A_N= \emptyset$, that is, we require the rightmost outlier (and hence all of them) to be outside of $I$. The LDPs for $((\zeta_{1,j}^{(n)})_j)_n$ and for $((\tilde\gamma_{1,j}^{(n)})_j)_n$ (which can be obtained from the LDPs for $(\zeta_{N}^{(n)})_n$ and for $(\tilde\gamma_{N}^{(n)})_n$ by the contraction principle) then imply, for any $A,B$ as above and closed, \begin{align} \label{jointLDPub} \limsup_{n\to \infty} \frac{1}{\beta'n}\log \mathbb P_n^V\big((\zeta_{1,j}^{(n)})_j \in A,(\gamma_{1,j}^{(n)})_j \in B ) & \leq - \inf_{z \in A\cap \mathcal{Z}_N} \mathcal{I}_N^{\mathrm{ext}}(z) - \inf_{g \in B\cap \mathcal{G}_N} \mathcal{I}_N^{\mathrm{w}}(g) \notag \\ & = - \inf_{(z,g) \in (A \times B)\cap (\mathcal{Z}_N\times \mathcal{G}_N)} \left( \mathcal{I}_N^{\mathrm{ext}}(z) + ||g||_{N,1}\right) . \end{align} For $A,B$ as above and open, we get the lower bound \begin{align} \label{jointLDPlb} \liminf_{n\to \infty} \frac{1}{\beta'n}\log \mathbb P_n^V\big((\zeta_{1,j}^{(n)})_j \in A,(\gamma_{1,j}^{(n)})_j \in B ) & \geq - \inf_{(z,g) \in (A \times B)\cap (\mathcal{Z}_N\times \mathcal{G}_N)} \left( \mathcal{I}_N^{\mathrm{ext}}(z) + ||g||_{N,1}\right) . \end{align} In fact, the lower bound can easily be extended to open sets $A=A_1\times \dots \times A_N, B=B_1\times \dots\times B_N$, with $A_j,B_j$ generic open subsets of $\mathbb{R}$. Set $A' = A\setminus I^N$; then \begin{align} \mathbb P_n^V\big((\zeta_{1,j}^{(n)})_j \in A,(\gamma_{1,j}^{(n)})_j \in B ) \geq \mathbb P_n^V\big((\zeta_{1,j}^{(n)})_j \in A',(\gamma_{1,j}^{(n)})_j \in B ) , \end{align} and since $A'$ is still an open set, the generic lower bound follows from \eqref{jointLDPlb}. For the general upper bound, let $A,B$ again be of product form as above and closed. We define a modification $B' = B_1'\times \dots \times B_N'$ as follows. If $\partial I \cap A_j = \emptyset$, or $0\notin B_j$, set $B_j'=B_j$. If, on the other hand, $\partial I \cap A_j \neq \emptyset$ and $0\in B_j$, set $B_j'=[0,\infty)$. Then we have \begin{align} \label{jointLDPub2} \mathbb P_n^V\big(\zeta_{1,j}^{(n)} \in A_j,\gamma_{1,j}^{(n)} \in B_j ) \leq \mathbb P_n^V\big(\zeta_{1,j}^{(n)} \in A_j,\tilde \gamma_{1,j}^{(n)} \in B_j' ) = \mathbb P_n^V\big(\zeta_{1,j}^{(n)} \in A_j)\mathbb P_n^V(\tilde \gamma_{1,j}^{(n)} \in B_j' ) . \end{align} This extends also to the whole vector, yielding \begin{align} \label{jointLDPub3} \limsup_{n\to \infty} \frac{1}{\beta'n}\log \mathbb P_n^V\big((\zeta_{1,j}^{(n)})_j \in A,(\gamma_{1,j}^{(n)})_j \in B ) \leq - \inf_{(z,g) \in (A \times B')\cap (\mathcal{Z}_N\times \mathcal{G}_N)} \left( \mathcal{I}_N^{\mathrm{ext}}(z) + ||g||_{N,1}\right) . \end{align} The general upper bound then follows from \eqref{jointLDPub3}, since the infimum over $g\in B'$ may be replaced by the infimum over $g\in B$.
This implies the LDP for $(\zeta_{1,j}^{(n)},\gamma_{1,j}^{(n)})_{1\leq j\leq N}$. The arguments can be directly extended to outliers and weights in each of the intervals, which then yields the LDP for the family $(\zeta_N^{(n)},\gamma_N^{(n)})_n$. \hfill $\Box$ \subsection{LDP for the projective limit of extremal eigenvalues and weights} By Theorem \ref{thm:jointLDP}, each projected sequence $(\zeta_N^{(n)},\gamma_N^{(n)})_n$ satisfies an LDP with a good rate function. We can then apply the Dawson-G\"artner Theorem (see the Appendix). It yields the LDP for the family of projections \begin{align} (\pi_N(\zeta^{(n)}),\pi_N(\gamma^{(n)}))_{N\geq 1} \end{align} in the projective limit of the spaces $\pi_N(\mathcal{Z})\times \pi_N(\mathcal{G})$. Since the topology on $\mathcal{Z} \times \mathcal{G}$ is the product topology, the canonical embedding from the projective limit into $\mathcal{Z} \times \mathcal{G}$ is continuous. An application of the contraction principle then yields the following result. \begin{thm} \label{thm:LDPprojectivelimit} The sequence $(\zeta^{(n)},\gamma^{(n)})_n$ satisfies under $\mathbb P_n^V$ the LDP in $\mathcal{Z}\times \mathcal{G}$ with speed $\beta'n$ and good rate function \begin{align*} \mathcal{I}^{(\mathrm{ext},\mathrm{w})}(z,g)= \sup_{N\geq 1} \mathcal{I}_N^{(\mathrm{ext},\mathrm{w})}(z,g) = \sum_{i=1}^{2M} \sum_{j=1}^\infty \left( \mathcal{F}_V( z_{i,j}) + |g_{i,j}| \right) . \end{align*} \end{thm} \subsection{Joint LDP for the measure on $I$, the extremal eigenvalues and the weights} The main result in this subsection is the following joint LDP, obtained when we also add $\tilde\mu_{n,I}$, the restriction of $\tilde\mu_n$ to $I$.
\begin{thm} \label{thm:jointjointLDP} The sequence $(\tilde\mu_{n,I},\zeta^{(n)},\gamma^{(n)})_n$ satisfies under $\mathbb P_n^V$ the LDP in $\mathcal{M}(I)\times \mathcal{Z}\times \mathcal{G}$ with speed $\beta'n$ and good rate function \begin{align*} \widetilde{\mathcal{I}}(\tilde\mu,z,g) = \mathcal{K}(\mu_V | {\tilde\mu}) + \tilde\mu(I)-1+\mathcal{I}^{(\mathrm{ext},\mathrm{w})}(z,g) . \end{align*} \end{thm} \textbf{Proof:} The proof makes use of the LDP in Theorem \ref{thm:LDPprojectivelimit} for the extremal eigenvalues and their weights, and Theorem \ref{newgeneral} to combine this with the measure restricted to $I$. We check the conditions of Theorem \ref{newgeneral}, beginning with exponential tightness. The set \begin{align} K_{H,T} = \big\{ (\mu,z,g)\in \mathcal{M}(I)\times \mathcal{Z}\times \mathcal{G}\, \big| \, ||z||_\infty \leq H, \mu(I)+||g||_1 \leq T \big\} \end{align} is compact, and for $H$ so large that $I\subset [-H,H]$, \begin{align} \label{exptight} \mathbb P_n^V\left( (\tilde\mu_{n,I},\zeta^{(n)},\gamma^{(n)}) \notin K_{H,T} \right) & \leq \mathbb P_n^V\left( \zeta_{1,1}<-H\right) + \mathbb P_n^V\left( \zeta_{2M,1}>H \right) \notag \\ & \qquad + \mathbb P_n^V\left( \omega_1+\dots + \omega_n> T\right) . \end{align} By Theorem \ref{thm:extremalLDP} (LDP for extremal eigenvalues), we have \begin{align} \label{exptight2} \limsup_{n\to \infty} \frac{1}{\beta' n} \log \mathbb P_n^V\left( \zeta_{1,1}<-H\right) \leq - \inf_{x\leq -H} \mathcal{F}_V(x) \,. \end{align} From the definition of $\mathcal{F}_V$ in \eqref{defF} we see that the upper bound goes to $-\infty$ as $H\to \infty$. For the last probability in \eqref{exptight}, we have by Cram\'er's Theorem for Gamma-distributed random variables, \begin{align}\label{exptight3} \limsup_{n\to \infty} \frac{1}{\beta' n} \log \mathbb P_n^V\left( \omega_1+\dots + \omega_n> T\right) \leq - \big( T- \log T -1 \big) .
\end{align} Combining \eqref{exptight2} (and the analogous bound for the largest eigenvalue) and \eqref{exptight3}, we see that \begin{align}\label{exptight4} \lim_{H,T \to \infty} \limsup_{n\to \infty} \frac{1}{\beta' n} \log \mathbb P_n^V\left( (\tilde\mu_{n,I},\zeta^{(n)},\gamma^{(n)}) \notin K_{H,T} \right) = -\infty , \end{align} that is, the sequence $(\tilde\mu_{n,I},\zeta^{(n)},\gamma^{(n)})_n$ is exponentially tight. Now, let $D$ be the set of continuous $f:I\to (-\infty,1)$ and let $\varphi\in C_b(\mathcal{Z}\times \mathcal{G})$. We need to calculate the limit, on a logarithmic scale, of \begin{align} \label{jointmgf} \mathcal{G}_n(f,\varphi) :&= \mathbb E_n^V \left[ \exp \left(n\beta' \int f\, d \tilde \mu_{n,I} + n\beta' \varphi( \zeta^{(n)},\gamma^{(n)} )\right)\right] \notag \\ &= \mathbb E_n^V \left[ \exp \left(n\beta'\sum_{k:\lambda_k\in I} \omega_k f(\lambda_k) + n\beta' \varphi(\zeta^{(n)},\gamma^{(n)} )\right)\right] . \end{align} We will see that the main reasons allowing us to calculate the limit are the independence of the decoupled weights and the faster LDP for the sequence of empirical eigenvalue measures. Indeed, recall that the weights $\omega_1,\dots ,\omega_n$ are independent and Gamma$(\beta', (\beta'n)^{-1})$ distributed and, conditioned on the eigenvalues $\lambda_1,\dots ,\lambda_n$, the weights $(\omega_k)_{ \lambda_k \in I }$ are independent of $\zeta^{(n)}$. For each individual weight $\omega_k$ we have \begin{align*} \frac{1}{\beta'} \log \mathbb{E}_n^V [e^{n\beta' t \omega_k}] = L(t) , \qquad t<1 . \end{align*} Conditioning in \eqref{jointmgf} on $\lambda$ and integrating with respect to $(\omega_k)_{ \lambda_k \in I }$ therefore yields \begin{align} \mathcal{G}_n(f,\varphi) &= \mathbb E_n^V \left[ \mathbb E_n \left[ \left. \exp \left(n\beta'\sum_{k:\lambda_k\in I} \omega_k f(\lambda_k) + n\beta' \varphi(\zeta^{(n)},\gamma^{(n)} )\right) \right|\, \lambda \right]\right] \notag \\ &= \mathbb E_n^V \left[ \exp \left(n\beta'\int (L\circ f)\, d\mu_{n,I}^{({\tt u})}\right) \mathbb E_n \left[ \left. \exp \left( n\beta' \varphi( \zeta^{(n)},\gamma^{(n)} )\right)\right|\, \lambda \right]\right] \notag \\ &= \mathbb E_n^V \left[ \exp \left(n\beta'\int (L\circ f)\, d\mu_{n,I}^{({\tt u})} + n\beta' \varphi( \zeta^{(n)},\gamma^{(n)} )\right) \right] , \end{align} where $\mu_{n,I}^{({\tt u})}$ is the restriction to $I$ of the empirical eigenvalue measure $\mu_{n}^{({\tt u})}$. We may now proceed as in \cite{magicrules}, Section 4.2. The empirical eigenvalue measure $\mu_{n}^{({\tt u})}$ (and then also the restriction $\mu_{n,I}^{({\tt u})}$) satisfies the LDP at the faster scale $n^2$, which allows us to replace it at our slower scale by its limit $\mu_V$. This yields \begin{align} \lim_{n\to \infty} \frac{1}{\beta'n} \log \mathcal{G}_n(f,\varphi) & = \lim_{n\to \infty} \frac{1}{\beta'n} \log \mathbb{E}_n^V \left[ \exp \left(n\beta'\int (L\circ f)\, d\mu_V + n\beta' \varphi( \zeta^{(n)},\gamma^{(n)} )\right)\right] \notag \\ & = \int (L\circ f)\, \mathop{}\!\mathrm{d} \mu_V + J(\varphi) . \end{align} The second equality follows from Theorem \ref{thm:LDPprojectivelimit} and Varadhan's Lemma (Theorem 4.3.1 in \cite{demboz98}), with \begin{align*} J(\varphi) = \sup_{(z,g) \in \mathcal{Z}\times \mathcal{G} } \{ \varphi(z,g) - \mathcal{I}^{(\mathrm{ext},\mathrm{w})}(z,g) \}. \end{align*} Note that then by duality, also $\mathcal{I}^{(\mathrm{ext},\mathrm{w})}(z,g) = \sup_{\varphi \in C_b(\mathcal{Z}\times \mathcal{G})} \{ \varphi(z,g) - J(\varphi) \} $. This shows that the first assumption in Theorem \ref{newgeneral} holds, with $\Lambda(f) = \int L\circ f\, d\mu_V$. It was shown in \cite{magicrules} that \begin{align*} \Lambda^*(\mu) = \mathcal{K}(\mu_V | \mu) +\mu(I) - 1 \end{align*} for $\mu \in \mathcal{M}(I)$.
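On the scalar level, this Legendre pair can be checked directly. From the Gamma moment generating function one computes $L(t) = -\log(1-t)$ for $t<1$, and its convex conjugate is $\sup_{t<1}\{tx - L(t)\} = x - \log x - 1$ for $x>0$, which is also the Cram\'er rate appearing in \eqref{exptight3}. A small numerical sketch of this conjugacy (crude grid maximization, illustration only):

```python
import math

# Sketch: L(t) = -log(1-t), t < 1, is the scaled cumulant of a single
# decoupled Gamma weight.  Its convex conjugate
#     L*(x) = sup_{t<1} ( t*x - L(t) )
# should equal x - log(x) - 1 for x > 0 (maximizer t* = 1 - 1/x).

def L(t):
    return -math.log(1.0 - t)

def L_star_numeric(x, steps=200_000):
    # grid maximization of t*x - L(t) over t in [-10, 1)
    best = -math.inf
    for k in range(steps):
        t = -10.0 + 11.0 * k / (steps + 1)
        best = max(best, t * x - L(t))
    return best

for x in (0.5, 1.0, 3.0):
    assert abs(L_star_numeric(x) - (x - math.log(x) - 1.0)) < 1e-4
```

The same function $x - \log x - 1$ reappears as the normalization term $\kappa - \log\kappa - 1$ in \eqref{finalrate}.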
Moreover, the set $\mathcal{F}$ of exposed points of $\Lambda^*$ contains the set of measures $\mu=h\cdot \mu_V$, absolutely continuous with respect to $\mu_V$ with strictly positive continuous density $h$. The exposing hyperplane of $\mu=h\cdot \mu_V$ is given by $1-h^{-1}$, such that for any such $\mu$ there exists a $\gamma>1$ such that $\gamma (1-h^{-1})\in D$. Suppose now that $\mu \in \mathcal{M}(I)$ is such that $\Lambda^*(\mu)$ is finite. By the same arguments as in \cite{grz}, we can find a sequence $(\nu_m)_m$ of measures with strictly positive continuous densities such that $\nu_m$ converges weakly to $\mu$ and $\Lambda^*(\nu_m)$ converges to $\Lambda^*(\mu)$. This approximation is also made more precise for matrix-valued measures in \cite{GaNaRomat}. All assumptions of Theorem \ref{newgeneral} are then fulfilled, which yields the joint LDP for $ (\tilde \mu_{n,I}, \zeta^{(n)},\gamma^{(n)})_n$. \hfill $\Box $ \subsection{Normalizing and recovering the spectral measure} \label{sec:normalizing} To finish the proof of Theorem \ref{MAIN}, two steps remain. First, we need to map the collection $(\tilde\mu_{n,I},\zeta^{(n)}, \gamma^{(n)})$ to the measure $\tilde\mu_n$, and then normalize $\tilde\mu_n$ to recover the distribution of the original spectral measure $\mu_n$. For the first step, let $\Theta:\mathcal{M}(I)\times \mathcal{Z}\times \mathcal{G}\rightarrow \mathcal{S}(I)$ be defined by \begin{align} \label{defTheta} \Theta(\mu_I,\zeta,\gamma) = \mu_I + \sum_{i=1}^{2M}\sum_{j=1}^\infty \gamma_{i,j} \delta_{\zeta_{i,j}} . \end{align} Then by the construction in Section \ref{sec:newencoding}, in particular \eqref{newencoding4}, we have \begin{align*} \Theta(\tilde\mu_{n,I},\zeta^{(n)}, \gamma^{(n)}) = \tilde\mu_n . \end{align*} However, we cannot apply the contraction principle directly, since the mapping $\Theta $ is not continuous when $\mathcal{G}$ is endowed with the product topology. We need to slightly modify the LDP for $(\gamma^{(n)})_n$. Since the rate function for $(\gamma^{(n)})_n$ is given by the $\ell_1$-norm of the array in $\mathcal{G}$, it is easy to see that $(\gamma^{(n)})_n$ is exponentially tight in the $\ell_1$-topology. From \cite{demboz98}, Corollary 4.2.6 (and the LDP in Theorem \ref{thm:jointjointLDP}), we get that $(\tilde\mu_{n,I},\zeta^{(n)},\gamma^{(n)})_n$ satisfies under $\mathbb P_n^V$ the LDP in $\mathcal{M}(I)\times \mathcal{Z}\times \mathcal{G}$, with $\mathcal{G}$ endowed with the $\ell_1$-topology, with speed $\beta'n$ and good rate function $\widetilde{\mathcal{I}}$. We can then make use of the following lemma, whose proof is postponed to the end of this section. \medskip \begin{lem} \label{lem:continuous} When $\mathcal{M}(I)$ is endowed with the weak topology, $\mathcal{Z}$ with the product topology, and $\mathcal{G}$ with the $\ell_1$-topology, the mapping $\Theta$ as defined in \eqref{defTheta} is continuous. \end{lem} \medskip Then by the contraction principle, the spectral measures $\tilde\mu_n = \Theta(\tilde\mu_{n,I},\zeta^{(n)}, \gamma^{(n)})$ satisfy under $\mathbb P_n^V$ the LDP in $\mathcal{S}(I)$ with speed $\beta' n $ and good rate function \begin{align}\label{rateinS} \widetilde{\mathcal{I}}_{\operatorname{sp}}(\tilde \mu) = \inf \left\{ \widetilde{\mathcal{I}}(\tilde{\mu}_I,z,g) \mid \Theta(\tilde{\mu}_I,z,g)=\tilde\mu \right\} . \end{align} Note that $\Theta$ is not a bijection: if $\tilde{\mu}$ has point masses in $\partial I$, they may come from $\tilde{\mu}_I$ or from elements of $g$ for which the corresponding entry in $z$ lies in $\partial I$, and a point mass of $\tilde\mu$ at $x\notin I$ may arise from the combination of several equal elements in $g$. It follows from the form of the rate $\widetilde{\mathcal{I}}$ that in the first case the infimum in \eqref{rateinS} is attained by attributing these point masses to $\tilde\mu_I$, and in the second case the infimum is attained by choosing only a single outlier at $x$.
The infimum in \eqref{rateinS} is therefore given by \begin{align} \label{rateinS2} \widetilde{\mathcal{I}}_{\operatorname{sp}}(\tilde \mu) & = \mathcal{K}(\mu_V | {\tilde\mu}) + \tilde\mu(I)-1+ \sum_{z \in E(\tilde\mu)} \left( \mathcal{F}_V(z) + \tilde\mu(\{z \}) \right) \notag \\ & = \mathcal{K}(\mu_V | {\tilde\mu}) + \tilde\mu(\mathbb{R})-1+ \sum_{z \in E(\tilde\mu)} \mathcal{F}_V(z) . \end{align} It remains to normalize the measures $\tilde\mu_n$. Note that if $\tilde\mu$ is the zero measure, the Kullback-Leibler part in \eqref{rateinS2} equals $+\infty$, and so the rate $\widetilde{\mathcal{I}}_{\operatorname{sp}}$ can only be finite if $\tilde\mu(\mathbb{R})>0$. Furthermore, $\mathbb P_n^V(\tilde\mu_n(\mathbb{R})>0)=1$. Then we may restrict the LDP for $(\tilde\mu_n)_n$ to the set of measures $\tilde\mu \in \mathcal{S}(I)$ with $\tilde\mu(\mathbb{R})>0$ (see Lemma 4.1.5 in \cite{demboz98}). On this set of measures, the mapping $ \tilde\mu \mapsto\tilde\mu(\mathbb{R})^{-1}\tilde\mu$ is continuous. Since $\tilde\mu_n(\mathbb{R})^{-1}\tilde\mu_n$ has the same distribution as $\mu_n$, a final application of the contraction principle yields that $(\mu_n)_n$ satisfies the LDP in $\mathcal{S}_1(I)$ with speed $\beta'n$ and good rate function \begin{align} \label{finalrate} \mathcal{I}_{\operatorname{sp}}(\mu) & = \inf_{\kappa>0} \widetilde{\mathcal{I}}_{\operatorname{sp}}(\kappa\mu ) \notag \\ & = \inf_{\kappa>0} \int \log \left( \frac{\mathrm{d} \mu_V}{\mathrm{d}(\kappa\mu )} \right) \mathrm{d}\mu_V + (\kappa\mu )(\mathbb{R})-1 + \sum_{z \in E(\mu)} \mathcal{F}_V(z) \notag \\ & = \inf_{\kappa>0} (\kappa-\log \kappa -1 )+ \int \log\left( \frac{\mathrm{d} \mu_V}{\mathrm{d}\mu} \right) \mathrm{d}\mu_V + \sum_{z \in E(\mu)} \mathcal{F}_V(z) . \end{align} This last infimum equals 0, attained for $\kappa=1$. This yields precisely the rate function in Theorem \ref{MAIN}. 
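The scalar infimum at the end of this computation can be checked directly: the function $\kappa \mapsto \kappa - \log\kappa - 1$ is nonnegative and vanishes exactly at $\kappa = 1$. A quick numerical sanity check (ours, not part of the proof):

```python
import math

def kappa_term(kappa):
    # The normalization contribution kappa - log(kappa) - 1 from (finalrate).
    return kappa - math.log(kappa) - 1.0

# Scan kappa over a grid in (0, 10): the term is nonnegative everywhere
# and is minimized at the grid point closest to kappa = 1.
grid = [0.05 * k for k in range(1, 200)]
values = [kappa_term(k) for k in grid]
best = min(grid, key=kappa_term)
```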
Finally, we can extend the last LDP for $(\mu_n)_n$ from the space $\mathcal{S}_1(I)$ to $\mathcal{M}_1(\mathbb{R})$ by setting $\mathcal{I}_{\operatorname{sp}}(\mu)=+\infty$ if $\mu \notin \mathcal{S}_1(I)$. Then it is easy to see that $\mathcal{I}_{\operatorname{sp}}$ is lower semicontinuous on $\mathcal{M}_1(\mathbb{R})$, and so the LDP holds in $\mathcal{M}_1(\mathbb{R})$ as well. This concludes the proof of Theorem \ref{MAIN}. \hfill $\Box$ \medskip \textbf{Proof of Lemma \ref{lem:continuous}:} Let $\mu_{I}^{(n)}\rightarrow \mu_I $ weakly in $\mathcal{M}(I)$, $z^{(n)}\to z$ entrywise in $\mathcal{Z}$ and $g^{(n)}\to g$ in $\mathcal{G}$ with respect to the $\ell_1$-topology. Denote $\Theta(\mu_{I}^{(n)},z^{(n)},g^{(n)} )=\mu^{(n)}$ and $\Theta(\mu_{I},z,g )=\mu$. Let $f$ be continuous and bounded. Then \begin{align*} \left| \int f \mathop{}\!\mathrm{d} \mu^{(n)} - \int f \mathop{}\!\mathrm{d} \mu \right| & \leq \left| \int f \mathop{}\!\mathrm{d} \mu^{(n)}_I - \int f \mathop{}\!\mathrm{d} \mu_I \right| + \sum_{i=1}^{2M} \sum_{j=1}^\infty | g^{(n)}_{i,j} f(z^{(n)}_{i,j}) - g_{i,j} f(z_{i,j})| \\ & \leq \left| \int f \mathop{}\!\mathrm{d} \mu^{(n)}_I - \int f \mathop{}\!\mathrm{d} \mu_I \right| + \sum_{i=1}^{2M} \sum_{j=1}^q | g^{(n)}_{i,j} f(z^{(n)}_{i,j}) - g^{(n)}_{i,j} f(z_{i,j})| \\ & \qquad + 2||f||_\infty \sum_{i=1}^{2M} \sum_{j=q+1}^\infty |g_{i,j}| + ||f||_\infty ||g^{(n)}-g||_1 \end{align*} for any $q\geq 1$. The terms in the last two lines can be made arbitrarily small by first choosing $q$ and then $n$ large enough. \hfill $\Box$ \section{Proof of the coefficient LDP} \label{sec:coeffproof} The proofs of Theorem \ref{MAIN2} and Theorem \ref{MAIN3} make use of the explicit density \eqref{krishnadensity}, but for several arguments we rely on the fact that, by Theorem \ref{MAIN}, an LDP holds for the spectral measure. 
In Section \ref{susec:conditionalLDP}, this allows us to show that an LDP holds for the recursion coefficients when we condition on a compact set $\mathcal{R}_K$. Although general large deviation theory allows us to write the corresponding rate function as a projective limit, at this stage it is not available in an explicit form. In Section \ref{susec:alternativerate}, we look at the density \eqref{krishnadensity} to obtain an alternative description of the rate function, up to an {\it error term}, which is bounded on the compact set $\mathcal{R}_K$. This proves Theorem \ref{MAIN2}. Finally, in Section \ref{susec:proofofonecut}, we show that in the one-cut case the error term vanishes, concluding the proof of Theorem \ref{MAIN3}. \subsection{An abstract LDP for the conditional measure} \label{susec:conditionalLDP} To start with, note that by Theorem \ref{MAIN}, the sequence $(\mu_n)_n$ satisfies under $\mathbb{P}_n^V$ the LDP in $\mathcal{M}_1$ with speed $\beta'n$ and good rate function $\mathcal{I}_{\operatorname{sp}}$, which vanishes only at the compactly supported equilibrium measure $\mu_V$. The following theorem shows that this LDP also holds under conditioning on the smaller set $\mathcal{M}_{1,K}$ of probability measures with support in $[-K,K]$, where $K$ is so large that $I\subset [-K+1,K-1]$. We denote by $\mathbb{P}_{n,K}^V = \mathbb{P}_{n}^V(\cdot | \mu_n \in \mathcal{M}_{1,K})$ the measure conditioned on $\mathcal{M}_{1,K}$. \begin{thm} \label{thm:restrictedLDP} Assume that the potential $V$ satisfies the assumptions (A1), (A2) and (A3). Then the sequence of spectral measures $\mu_n$ satisfies under $\mathbb{P}_{n,K}^V$ the LDP in $\mathcal{M}_{1,K}$ with speed $\beta'n$ and good rate function the restriction of $\mathcal{I}_{\operatorname{sp}}$ to $\mathcal{M}_{1,K}$. 
\end{thm} \noindent{\bf Proof:}\hskip10pt Instead of starting from Theorem \ref{MAIN}, we make use of Theorem \ref{thm:jointjointLDP}, which states that $(\tilde\mu_{n,I},\zeta_n,\gamma_n)_n$ satisfies the LDP in $\mathcal{M}(I)\times \mathcal{Z}\times \mathcal{G}$ with speed $\beta'n$ and good rate function $\widetilde{\mathcal{I}}$. Let $\mathcal{Z}_K = \{ z \in \mathcal{Z}: \, ||z||_\infty \leq K\}$. Then $\mathcal{Z}_K$ is a closed subset of $\mathcal{Z}$. Furthermore, by the LDP in Theorem \ref{thm:extremalLDP} for the extremal eigenvalues, we have that $\mathbb P_n^V(\zeta_n \in \mathcal{Z}_K)$ converges to 1. Therefore, for any set $C$ closed in $\mathcal{M}(I)\times \mathcal{Z}_K\times \mathcal{G}$, \begin{align} \label{restrictedLDPub} \limsup_{n\to \infty} \frac{1}{\beta'n} \log \mathbb{P}_{n,K}^V ((\tilde\mu_{n,I},\zeta_n,\gamma_n) \in C) & = \limsup_{n\to \infty} \frac{1}{\beta'n} \log \mathbb{P}_{n}^V ((\tilde\mu_{n,I},\zeta_n,\gamma_n) \in C, ||\zeta_n||_\infty \leq K ) \notag \\ & \leq - \inf_{(\tilde\mu,z,g)\in C\cap \mathcal{M}(I)\times \mathcal{Z}_K\times \mathcal{G} } \widetilde{\mathcal{I}}(\tilde\mu,z,g) , \end{align} by the large deviation upper bound of Theorem \ref{thm:jointjointLDP}. Similarly, we get from the lower bound for any set $O$ open in $\mathcal{M}(I)\times \mathcal{Z}_K\times \mathcal{G}$, \begin{align} \label{restrictedLDPlb} & \liminf_{n\to \infty} \frac{1}{\beta'n} \log \mathbb{P}_{n,K}^V ((\tilde\mu_{n,I},\zeta_n,\gamma_n) \in O) \notag \\ & \geq \liminf_{n\to \infty} \frac{1}{\beta'n} \log \mathbb{P}_{n}^V ((\tilde\mu_{n,I},\zeta_n,\gamma_n) \in O\cap \mathcal{M}(I)\times \operatorname{Int}(\mathcal{Z}_K)\times \mathcal{G}) \notag \\ & \geq - \inf_{(\tilde\mu,z,g)\in O\cap \mathcal{M}(I)\times \operatorname{Int}(\mathcal{Z}_K)\times \mathcal{G}} \widetilde{\mathcal{I}}(\tilde\mu,z,g) . 
\end{align} where $\operatorname{Int}(\mathcal{Z}_K)$ is the interior of $\mathcal{Z}_K$ as a subset of $\mathcal{Z}$, that is, $\operatorname{Int}(\mathcal{Z}_K)= \{z \in \mathcal{Z}:\, ||z||_\infty <K\}$. We remark that this argument would not be helpful if we started from the LDP in $\mathcal{M}_1$, as then the interior of the restricted space (in the weak topology) would be empty. From the explicit form of the rate in Theorem \ref{thm:extremalLDP}, it can be seen that for any open set $O$, \begin{align*} \inf_{(\tilde\mu,z,g)\in O\cap \mathcal{M}(I)\times \operatorname{Int}(\mathcal{Z}_K)\times \mathcal{G}} \widetilde{\mathcal{I}}(\tilde\mu,z,g) = \inf_{(\tilde\mu,z,g)\in O\cap \mathcal{M}(I)\times \mathcal{Z}_K\times \mathcal{G}} \widetilde{\mathcal{I}}(\tilde\mu,z,g) . \end{align*} Together with \eqref{restrictedLDPub}, this shows that $(\tilde\mu_{n,I},\zeta_n,\gamma_n)_n$ satisfies under $\mathbb{P}_{n,K}^V$ the LDP in the space $\mathcal{M}(I)\times \mathcal{Z}_K\times \mathcal{G}$ with rate function the restriction of $\widetilde{\mathcal{I}}$. We may now proceed as in the proof of Theorem \ref{MAIN}. We have $\mu_n \in \mathcal{M}_{1,K}$ if and only if $\zeta_n \in \mathcal{Z}_K$. The same arguments as in Section \ref{sec:normalizing} applied to the restricted LDP show that $(\mu_n)_n$ satisfies the LDP in the space $\mathcal{M}_{1,K}$, and the rate function is the restriction of $\mathcal{I}_{\operatorname{sp}}$ to this space. \hfill $\Box$ \begin{cor} \label{cor:restrictedLDP} Assume that the potential $V$ satisfies the assumptions (A1), (A2) and (A3). 
Then the sequence of recursion coefficients $r_n$ satisfies under $\mathbb{P}_{n,K}^V$ the LDP in $\mathcal{R}_K$ with speed $\beta'n$ and good rate function given by \begin{align*} \mathcal{I}_{\operatorname{co},K}(r) = \mathcal{I}_{\operatorname{sp}}(\psi^{-1}(r)) = \lim_{N\to \infty} \mathcal{I}_N(r), \end{align*} with \begin{align*} \mathcal{I}_{N}(r) = - \lim_{\delta \to 0} \limsup_{n\to \infty} \frac{1}{\beta'n} \log \mathbb{P}_{n,K}^V (B_{\delta,N} (r)) , \end{align*} where $B_{\delta,N} (r)= \{ z \in \mathbb{R}^\mathbb{N} |\, |z_i-r_i|<\delta \text{ for } i\leq 2N-1\}$ is the ball around the first $2N-1$ coordinates of $r$. \end{cor} \noindent{\bf Proof:}\hskip10pt We have $\mathcal{R}_K = \psi(\mathcal{M}_{1,K})$, and $\psi$ is a homeomorphism from $\mathcal{M}_{1,K}$ to $\mathcal{R}_K$, which implies by the contraction principle the LDP for $(r_n)_n$ with good rate function $\mathcal{I}_{\operatorname{sp}}\circ \psi^{-1}$. Restricting the continuous projections to $\mathcal{R}_K$, we get again by the contraction principle that the sequence of projected coefficients $\pi_N(r_n)$ satisfies the LDP in $\pi_N(\mathcal R_K)$, with some good rate function $\tilde{\mathcal{I}}_N$. The Dawson-G\"artner Theorem implies that the rate function for $(r_n)_n$ can then be recovered as $\mathcal{I}_{\operatorname{co},K}= \lim_{N\to \infty} \tilde{\mathcal{I}}_N\circ \pi_N$. On $\mathcal R_K$, we let $\mathcal{I}_N=\tilde{\mathcal{I}}_N\circ \pi_N$. As shown in Theorem 4.1.18 in \cite{demboz98}, \begin{align} \tilde{\mathcal{I}}_N(\pi_N(r)) = - \lim_{\delta \to 0} \limsup_{n\to \infty} \frac{1}{\beta'n} \log \mathbb{P}_{n,K}^V (||\pi_N(r_n)- \pi_N(r)||_\infty < \delta ) , \end{align} which proves the last display of Corollary \ref{cor:restrictedLDP}. \hfill $ \Box$ \medskip In the following, we write $r_{n,N}$ for $\pi_N(r_n)$, and if $T_n$ is the tridiagonal matrix with Jacobi coefficients $r_n$, we write $T_{n,N}$ for $\pi_N(T_n)$. 
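To fix ideas about the objects $r_n$, $T_n$ and $\pi_N$: a minimal sketch, assuming the $2n-1$ coordinates are the interleaved sequence $b_1,a_1,b_2,\dots,a_{n-1},b_n$ (consistent with the domain of the density \eqref{krishnadensity}; the helper names are purely illustrative):

```python
# Sketch (assumed encoding): r = (b_1, a_1, b_2, ..., a_{n-1}, b_n), and
# pi_N keeps the first 2N-1 entries, i.e. exactly the data of the
# N x N tridiagonal corner T_{n,N}.

def tridiagonal(r):
    """Jacobi matrix, as a list of rows, from interleaved coefficients."""
    n = (len(r) + 1) // 2
    b = r[0::2]                  # diagonal entries b_1, ..., b_n
    a = r[1::2]                  # off-diagonal entries a_1, ..., a_{n-1}
    T = [[0.0] * n for _ in range(n)]
    for i in range(n):
        T[i][i] = b[i]
        if i + 1 < n:
            T[i][i + 1] = T[i + 1][i] = a[i]
    return T

def pi(N, r):
    """Projection onto the first 2N-1 coordinates."""
    return r[: 2 * N - 1]

r = [0.1, 1.0, -0.2, 0.9, 0.3, 1.1, 0.0]   # n = 4 toy coefficients
T_n = tridiagonal(r)
T_nN = tridiagonal(pi(2, r))               # the corner T_{n,2}
```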
Recall that $r^V$ is the sequence of Jacobi coefficients of $\mu_V$ and the corresponding Jacobi operator is $T^V$. We use the analogous notation for $r^V_N=\pi_N(r^V)$ and $T^V_N=\pi_N(T^V)$. \subsection{An alternative expression for the rate} \label{susec:alternativerate} To obtain an alternative description of $\mathcal{I}_N$, we will decompose the density in \eqref{krishnadensity} into three factors, one depending only on $r_{n,N}$, one only on entries omitted in $r_{n,N}$, and one factor containing finitely many mixed terms. Let $d=2p$ be the degree of the polynomial potential $V \in \mathcal V$. \begin{lem}\label{lem:densitydecomp} There exist continuous functions $M : ([0, \infty) \times \mathbb{R})^{2d+1} \to \mathbb{R}$ not depending on $n$ and $E_n : ([0, \infty) \times \mathbb{R})^{n-N} \to \mathbb{R}$, such that for all $n\geq N+d$ and $N\geq d$, \begin{align*} \mathrm{tr}\!\ V(T_n)-\mathrm{tr}\!\ V(T^V_n) = \ & \mathrm{tr}\!\ V(T_{n,N})- \mathrm{tr}\!\ V(T^V_{N}) + M(a_{N-d},b_{N-d},\dots ,b_{N+d})\\ &+ E_n(a_{N+1},b_{N+1},\dots,b_n )\,. \end{align*} Moreover, if $|a_k|,|b_k|\leq K$ for every $k \leq n$, then there exists a constant $C(K,V) > 0$ such that for every $N\geq d$: \begin{align*} |M(a_{N-d},b_{N-d},\dots , b_{N+d})| \leq C(K,V) M_+(r_N)\,, \end{align*} with $M_+$ defined as in \eqref{defM+}. \end{lem} \textbf{Proof:} By linearity, it suffices to show the decomposition for the monomial $V(x)=x^d$. Note that $\mathrm{tr} V(T_{n,N})= \mathrm{tr} V(A)$, where $A = T_{n,N} \oplus 0_{n-N}$ and $T_{n,N}$ is the $N\times N$ tridiagonal matrix with the first $2N-1$ entries of $r_n$. Let $B= T_n- A$. We have \begin{align*} T_n^d = (A+B)^d = A^d + B^d + \sum_{i\in \{0,1\}^d, i\neq 0,1} A^{i_1}B^{1-i_1} \cdots A^{i_d}B^{1-i_d} , \end{align*} where in the last sum there is always at least one factor equal to $A$ and one equal to $B$. 
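The locality of the mixed terms, which is the heart of the proof below, can be checked numerically on small matrices. The following sketch (our own pure-Python helpers and a random toy choice of coefficients, not from the paper) verifies that $\mathrm{tr}(T^d)-\mathrm{tr}(A^d)-\mathrm{tr}(B^d)$ is unchanged when coefficients far from the index $N$ are perturbed, even though the traces themselves change:

```python
import random

# Numerical check (not from the paper) of the locality of the mixed terms:
# for tridiagonal T = A + B, with A the padded N x N corner, the quantity
# tr(T^d) - tr(A^d) - tr(B^d) only involves entries with index near N.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace_pow(X, d):
    P = X
    for _ in range(d - 1):
        P = matmul(P, X)
    return sum(P[i][i] for i in range(len(X)))

def tridiag(b, a):
    n = len(b)
    T = [[0.0] * n for _ in range(n)]
    for i in range(n):
        T[i][i] = b[i]
        if i + 1 < n:
            T[i][i + 1] = T[i + 1][i] = a[i]
    return T

def mixed_trace(b, a, N, d):
    n = len(b)
    T = tridiag(b, a)
    A = tridiag(b[:N] + [0.0] * (n - N), a[:N - 1] + [0.0] * (n - N))
    B = [[T[i][j] - A[i][j] for j in range(n)] for i in range(n)]
    return trace_pow(T, d) - trace_pow(A, d) - trace_pow(B, d)

random.seed(0)
n, N, d = 12, 6, 4
b = [random.uniform(-1.0, 1.0) for _ in range(n)]
a = [random.uniform(0.1, 1.0) for _ in range(n - 1)]

m1 = mixed_trace(b, a, N, d)
t1 = trace_pow(tridiag(b, a), d)

# Perturb the first and the last diagonal entry, far away from index N:
# the full trace changes, but the mixed part does not.
b2 = [b[0] + 5.0] + b[1:-1] + [b[-1] - 3.0]
m2 = mixed_trace(b2, a, N, d)
t2 = trace_pow(tridiag(b2, a), d)
```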
Define $\hat A$, $\hat B$ analogously, built from $T_n^V$. Then \begin{align*} V(T_n)-V(T_n^V) & = (A^d-\hat A^d) + (B^d- \hat B^d) + \sum_{i\in \{0,1\}^d, i\neq 0,1} \left( A^{i_1}B^{1-i_1} \cdots A^{i_d}B^{1-i_d} - \hat A^{i_1}\hat B^{1-i_1} \cdots \hat A^{i_d}\hat B^{1-i_d} \right) \\ & = (A^d-\hat A^d) + (B^d-\hat B^d) + \sum_{i\in \{0,1\}^d, i\neq 0,1} \left( A^{i_1}B^{1-i_1} \cdots A^{i_d}B^{1-i_d}- \hat A^{i_1}B^{1-i_1} \cdots \hat A^{i_d}B^{1-i_d}\right) \\ & \qquad + \sum_{i\in \{0,1\}^d, i\neq 0,1} \left( \hat A^{i_1}B^{1-i_1} \cdots \hat A^{i_d}B^{1-i_d} - \hat A^{i_1}\hat B^{1-i_1} \cdots \hat A^{i_d}\hat B^{1-i_d} \right) . \end{align*} Now $(A^d-\hat A^d)= V(T_{n,N})- V(T^V_{N})$, and on the other hand $(B^d-\hat B^d)$ and the last sum do not depend on $r_{n,N}$ and their trace can be combined into $E_n$. We are then left with evaluating \begin{align*} \Delta= \sum_{i\in \{0,1\}^d, i\neq 0,1} \left( A^{i_1}B^{1-i_1} \cdots A^{i_d}B^{1-i_d}- \hat A^{i_1}B^{1-i_1} \cdots \hat A^{i_d}B^{1-i_d}\right) . \end{align*} Suppose $n\geq d$. To see that $\mathrm{tr} \Delta$ depends only on $a_{N-d},b_{N-d},\dots ,b_{N+d}$, write \begin{align} \label{eq:traceissues} \mathrm{tr} \left( A^{i_1}B^{1-i_1} \cdots A^{i_d}B^{1-i_d}\right) = \sum_{k_1,\dots ,k_d=1}^n (A^{i_1}B^{1-i_1})_{k_1,k_2}(A^{i_2}B^{1-i_2})_{k_2,k_3} \dots(A^{i_d} B^{1-i_d})_{k_d,k_1} . \end{align} Both $A$ and $B$ are tridiagonal, so that any nonzero term in this sum satisfies $|k_i-k_{i-1}|,|k_1-k_d|\leq 1$. In other words, $k=(k_1,\dots, k_d,k_1)$ is a closed path on $\{1,\dots ,n\}$ with step size at most 1. Furthermore, $A_{\ell,m}=0$ if $\ell\geq N+1$ or $m\geq N+1$ and $B_{\ell,m}=0$ if $\ell\leq N-1$ or $m\leq N-1$. At least one of the matrices $(A^{i_j}B^{1-i_j})$ equals $A$ and one equals $B$. Therefore, any path $k=(k_1,\dots, k_d,k_1)$ with $k_i\neq N$ for all $i$ gives a zero term in \eqref{eq:traceissues}. 
But then any contribution in \eqref{eq:traceissues} comes from paths with $N-\lfloor d/2\rfloor \leq k_1,\dots ,k_d\leq N+\lfloor d/2\rfloor$, which implies that only the entries $a_{N-d},b_{N-d},\dots ,b_{N+d}$ appear in \eqref{eq:traceissues}. The same holds true if we replace $A$ by $\hat A$, so that $M=\mathrm{tr}\Delta$ has the claimed form. It remains to show the bound for $|M|$. After taking the trace, we are left with finitely many differences \begin{align*} & (A^{i_1}B^{1-i_1})_{k_1,k_2}(A^{i_2}B^{1-i_2})_{k_2,k_3} \dots(A^{i_d} B^{1-i_d})_{k_d,k_1} \\ & \qquad \qquad - (\hat A^{i_1}B^{1-i_1})_{k_1,k_2}(\hat A^{i_2}B^{1-i_2})_{k_2,k_3} \dots(\hat A^{i_d} B^{1-i_d})_{k_d,k_1} , \end{align*} with $k$ a path as above. Whenever one of the entries of $r_{n,N}$ appears in the first product (and there is always at least one such entry), the corresponding entry of $r^V_N$ appears in the second product, and the desired bound follows from the boundedness of $|a_k|,|b_k|$ and possibly the triangle inequality, in case $A$ appears more than once in the product. \hfill $\Box$ \medskip Looking at the density \eqref{krishnadensity}, a natural guess for the rate function of the projected vector $r_{n,N}$ would be \begin{align} \label{defUJ} \mathcal{U}_N(r_N) = \mathrm{tr}\!\ V(T_N) -\mathrm{tr}\!\ V(T^V_N) - 2\sum_{k=1}^{N-1} \log (a_k/a^V_k) . \end{align} However, since we cannot ignore the boundary effects involving higher-order Jacobi coefficients in \eqref{krishnadensity}, we cannot conclude the LDP with this rate function. In fact, the deviation from $\mathcal{U}_N$ will be given in terms of $M_+(r_N)$. We then have the following result. \begin{thm} \label{thm:improperLDP} Let $r_N$ be a fixed finite vector in $\pi_N(\mathcal{R}_K)$ and let $B_\delta(r_N)$ be the open ball in $\mathbb{R}^{2N-1}$ around $r_N$ with respect to the sup-norm. 
Then \begin{align*} \lim_{\delta \to 0} \limsup_{n\to \infty} \frac{1}{n\beta'} \log \mathbb{P}^V_{n,K}(r_{n,N} \in B_\delta(r_N)) & \leq - \mathcal{U}_N(r_N) + C(K,V) M_+(r_N) , \\ \lim_{\delta \to 0} \liminf_{n\to \infty} \frac{1}{n\beta'} \log \mathbb{P}^V_{n,K}(r_{n,N} \in B_\delta(r_N)) & \geq - \mathcal{U}_N(r_N) - C(K,V) M_+(r_N) . \end{align*} \end{thm} \textbf{Proof:} We use the same idea as in \cite{BSZ} and look at ratios of probabilities, so that we can ignore the normalizing constant and consider $\widetilde{\mathbb{P}}^V_{n,K}=Z^V_{n,K}\mathbb{P}^V_{n,K}$. We then decompose the density as in Lemma \ref{lem:densitydecomp}. For this, define additionally \begin{align*} \ell_0(a_1,\dots ,a_{N-1}) & = 2 \sum_{k=1}^{N-1} \left( \frac{k}{n} + \frac{1}{n\beta}\right) \log (a_k/a^V_k) , \\ \ell_1(a_N,\dots ,a_{n-1}) & = -2 \sum_{k=N}^{n-1} \left( 1- \frac{k}{n} - \frac{1}{n\beta}\right) \log (a_k/a^V_k) . \end{align*} The density of the Jacobi coefficients is given in \eqref{krishnadensity}. The measure $\widetilde{\mathbb{P}}^V_{n,K}$ has a (non-normalized) density, which on $\mathcal{R}_K$ is proportional to \eqref{krishnadensity}. The restriction to $\mathcal{R}_K$ implies in particular that the density of $\widetilde{\mathbb{P}}^V_{n,K}$ is zero on the complement of $[-K,K]\times ([0,K]\times [-K,K])^{n-1}$. Given the ball $B_\delta(r_N)$, define \begin{align*} \widetilde B_\delta(r_N) = B_\delta(r_N) \times ([0,K]\times [-K,K])^{n-N} . \end{align*} Then, using the decomposition from Lemma \ref{lem:densitydecomp}, \begin{align*} \widetilde{\mathbb{P}}^V_{n,K} (\widetilde B_\delta(r_N) ) = \int_{\widetilde B_\delta(r_N)\cap \mathcal{R}_K} \exp\left\{-n \beta'\left( \mathcal{U}_N + M+E_n+\ell_0+\ell_1\right) \right\} d\lambda_n , \end{align*} with $E_n$ and $\ell_1$ independent of $b_1,a_1,\dots ,b_N$, and $\mathcal{U}_N$ and $\ell_0$ independent of $a_{N},b_{N+1},\dots ,b_n$. Here, we wrote $\lambda_n$ for the Lebesgue measure on $\mathbb{R}^{2n-1}$. 
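For fixed $N$, the boundary term $\ell_0$ is of order $1/n$ and hence vanishes in the limits taken below; a quick numerical illustration (toy values for the coefficients, helper names ours):

```python
import math

# Sketch (not from the paper): for fixed N, ell_0 is O(1/n), so it
# disappears in the n -> infinity limits used in the proof.
def ell0(a, a_V, n, beta):
    return 2.0 * sum((k / n + 1.0 / (n * beta))
                     * math.log(a[k - 1] / a_V[k - 1])
                     for k in range(1, len(a) + 1))

a = [0.8, 1.2, 0.9]          # toy values a_1, ..., a_{N-1} with N = 4
a_V = [1.0, 1.0, 1.0]
vals = [abs(ell0(a, a_V, n, beta=2.0)) for n in (10, 100, 1000, 10000)]
```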
Looking at the ratio of probabilities and applying the bound for $M$ in Lemma \ref{lem:densitydecomp}, we then have \begin{align*} & \quad \frac{1}{n\beta'} \log \frac{{\mathbb{P}}^V_{n,K} (\widetilde B_\delta(r_N) )}{{\mathbb{P}}_{n,K}^V (\widetilde B_\delta( r^V_N) )} = \frac{1}{n\beta'} \log \frac{\widetilde{\mathbb{P}}^V_{n,K} (\widetilde B_\delta(r_N) )}{\widetilde{\mathbb{P}}^V_{n,K} (\widetilde B_\delta(r^V_N) )} \\ & \leq \sup_{ r\in B_\delta(r_N)} \left( - \mathcal{U}_N(r) - \ell_0(r) + C(K,V) M_+(r)\right) - \inf_{ r\in B_\delta( r^V_N)} \left( - \mathcal{U}_N(r) - \ell_0(r) - C(K,V) M_+(r)\right) . \end{align*} By continuity of $\mathcal{U}_N,\ell_0$ and $M_+$ on $B_\delta(r_N)$, \begin{align*} \lim_{\delta \to 0} \lim_{n\to \infty} \sup_{ r\in B_\delta(r_N)} \left( - \mathcal{U}_N(r) - \ell_0(r) +C(K,V) M_+(r)\right) &= \lim_{\delta \to 0} \sup_{ r\in B_\delta(r_N)}\left( -\mathcal{U}_N(r) + C(K,V)M_+(r)\right) \\ & = - \mathcal{U}_N(r_N) +C(K,V) M_+(r_N), \end{align*} and \begin{align*} \lim_{\delta \to 0} \lim_{n\to \infty} \inf_{ r\in B_\delta(r_N)} \left( - \mathcal{U}_N(r) - \ell_0(r) - C(K,V) M_+(r)\right) & = \lim_{\delta \to 0} \inf_{ r\in B_\delta(r_N)}\left( -\mathcal{U}_N(r) - C(K,V) M_+(r)\right) \\ & = -\mathcal{U}_N(r_N) - C(K,V) M_+(r_N). \end{align*} For the ratio of probabilities this implies \begin{align*} \lim_{\delta\to 0} \limsup_{n\to \infty} \frac{1}{n\beta'} \log \frac{{\mathbb{P}}^V_{n,K} (\widetilde B_\delta(r_N) )}{{\mathbb{P}}^V_{n,K} (\widetilde B_\delta( r^V_N) )} & \leq -\mathcal{U}_N(r_N) +C(K,V) M_+(r_N)+\mathcal{U}_N(r^V_N) +C(K,V) M_+( r^V_N) \\ & = -\mathcal{U}_N(r_N) +C(K,V) M_+(r_N) . \end{align*} Since ${\mathbb{P}}^V_{n} (\widetilde B_\delta( r^V_N) )$ and then also ${\mathbb{P}}^V_{n,K} (\widetilde B_\delta( r^V_N) )$ converge to 1, this implies the first inequality; the second one follows by analogous arguments. 
\hfill $\Box$ \medskip Theorem \ref{thm:improperLDP} implies for the rate function $\mathcal{I}_{N}$ of the projected sequence $\pi_N(r_n)$ \begin{align} \label{comparison} \big| \mathcal{I}_{N}(r_N) - \mathcal{U}_N(r_N) \big| \leq C(K,V) M_+(r_N) \end{align} for any $r_N \in \pi_N(\mathcal{R}_K)$. By the second identity in Corollary \ref{cor:restrictedLDP}, the sequence $(r_n)_n$ then satisfies the LDP with speed $\beta'n$ and rate function given by $\mathcal{I}_{\operatorname{co}}(r)=\lim_{N\to \infty} \mathcal{I}_{N}(\pi_N(r))$. We may then set $\xi_{N,K} = \mathcal{I}_N - \mathcal{U}_N$ and get for the rate of $r_n$ \begin{align} \label{projrate} \mathcal{I}_{\operatorname{co}}(r)=\lim_{N\to \infty} \left[ \mathcal{U}_N(\pi_N(r)) + \xi_{N,K}(\pi_N(r)) \right]. \end{align} Together with \eqref{comparison} and the bound in Lemma \ref{lem:densitydecomp}, this implies Theorem \ref{MAIN2}. \subsection{Reduction to the one-cut case: proof of Theorem \ref{MAIN3}} \label{susec:proofofonecut} First, suppose $r_n$ is distributed according to $\mathbb{P}^V_{n,K}$. We will show that the limit in \eqref{projrate} equals $\lim_{N\to \infty} \mathcal{U}_N(\pi_N(r))$, using the large deviation result of Theorem \ref{MAIN2}. Suppose that $\mathcal{I}_{\operatorname{co}}(r)$ is not finite for some $r\in \mathcal{R}_K$. Then \eqref{comparison} and the uniform bound for $M_+(\pi_N(r))$ on $\mathcal{R}_K$ imply that $\lim_{N\to \infty} \mathcal{U}_N(\pi_N(r))$ is infinite as well. Suppose $\mathcal{I}_{\operatorname{co}}(r)$ is finite. Since $r\in \mathcal{R}_K$, there exists a unique $\mu$ with support in $[-K,K]$ such that $\psi(\mu) = r$. By the contraction principle, $\mathcal{I}_{\operatorname{sp}}(\mu)<\infty$. By the Kullback-Leibler part of the rate, $\mu$ then has a Lebesgue decomposition $\mathop{}\!\mathrm{d} \mu(x)= f(x) \mathop{}\!\mathrm{d}\mu_V(x) + \mathop{}\!\mathrm{d}\mu_s(x)$ with $f(x)>0$ for $\mu_V$-almost all $x\in \operatorname{supp}(\mu_V)$. 
By the explicit form of $\mu_V$ as in \eqref{eq:polynomialmuV}, this implies $f(x)>0$ for Lebesgue-almost all $x\in\operatorname{supp}(\mu_V)$. Rakhmanov's Theorem for Jacobi matrices \cite{Den2004} then yields that $a_k(\mu)\to \hat a$ and $b_k(\mu)\to \hat b$, where $\hat a = \lim_{k\to \infty} a_k(\mu_V), \hat b = \lim_{k\to \infty} b_k(\mu_V)$. From the bound for $M_+$ in Lemma \ref{lem:densitydecomp}, we have \begin{align} \label{Mvanishes} \lim_{N\to \infty} M_+(\pi_N(r)) =0, \end{align} and then \begin{align}\label{projlimit2} \mathcal{I}_{\operatorname{co}}(r) = \lim_{N\to \infty} \mathcal{U}_N(\pi_N(r)) \end{align} as well. It remains to extend the LDP to the full space $\mathcal{R}$ defined in \eqref{defR}. From \eqref{exptight2}, we have \begin{align} \label{Qexptight3} \lim_{K\to \infty} \limsup_{n\to \infty} \frac{1}{n\beta'} \log \mathbb{P}^V_n(\mathcal{R}^c_K) = -\infty , \end{align} so that the measures $\mathbb{P}^V_{n,K}$ are exponentially good approximations of the measures $\mathbb{P}^V_n$. By Theorem 4.2.16 in \cite{demboz98}, the sequence $(r_n)_n$ under $\mathbb{P}^V_n$ satisfies the LDP with speed $\beta'n$ and rate given by the limit of \eqref{projlimit2} as $K\to \infty$. \section{LDP for extremal eigenvalues} \label{sect:extremalLDP} In this section we prove Theorem \ref{thm:extremalLDP}. We first remark that $\mathcal I_N^{{\operatorname{ext}}}$ is a good rate function: it is lower semicontinuous as proved in \cite{BorGui2013onecut}, A.1, p.~478. From the same reference, $\mathcal{F}_{V}$ has compact level sets, so that $\mathcal I_N^{{\operatorname{ext}}}$ has compact level sets as well. In Section \ref{sect:extremalLDP1}, we show exponential tightness of $(\zeta_N^{(n)})_{n\geq 1}$ under the sequence $\mathbb{P}_n^V$. It then suffices to prove the weak LDP, which follows from the control of the probabilities of balls $B_\delta(z)$ of radius $\delta$ in the sup-norm around $z \in \mathcal{Z}_N$. 
We then show in Section \ref{sect:extremalLDP2} the upper bound \begin{align} \label{ub} \lim_{\delta \to 0} \limsup_{n\to \infty}\ (\beta'n)^{-1} \log \mathbb P_n^V ( \zeta_N^{(n)} \in B_\delta(z) ) \leq - \mathcal{I}_N^{\operatorname{ext}}(z) \end{align} and in Section \ref{sect:extremalLDP3} the lower bound \begin{align} \label{lb} \lim_{\delta \to 0} \liminf_{n\to \infty}\ (\beta'n)^{-1} \log \mathbb P_n^V(\zeta_N^{(n)} \in B_\delta(z)) \geq - \mathcal{I}_N^{\operatorname{ext}}(z) , \end{align} for any $z \in \mathcal{Z}_N$, which together then imply the full LDP. Along the way, we need the following four technical lemmas. Since their proofs are straightforward generalizations of the one-cut proofs in \cite{magicrules}, they are omitted. \begin{lem} \label{LDPtilde} Let $V$ be a potential satisfying the confinement condition (A1) and let $r$ be a fixed integer. If $\mathbb P^{V_n}_n$ is the probability measure associated to the potential $V_n= \frac{n+r}{n} V$, then the law of $\mu_n^{\tt u}$ under $\mathbb P_n^{V_n}$ satisfies the LDP with speed $\beta' n^2$ and good rate function \begin{equation} \label{entropyV} \mu \mapsto \mathcal E(\mu) - \inf_\nu\mathcal E(\nu) , \end{equation} where $\mathcal E$ is defined in \eqref{ratemuu}. \end{lem} \begin{lem} \label{lem:teknik} If the potential $V$ is finite and continuous on a compact set and infinite outside, we have, for every $q \geq 1$, \begin{equation} \lim_{n\to \infty} \frac{1}{n} \log \frac{Z^V_n}{Z^{\frac{n}{n-q}V}_{n-q}} = -q \inf_{x\in \mathbb{R}} \mathcal J_V(x)\,. \end{equation} \end{lem} \begin{lem}\label{lem:cvproba} Under Assumptions (A1) and (A3), $\max_{i=1,\dots, 2M} d(\zeta_{i,1}^{(n)} ,\partial I)$ converges to 0 in probability. Also, for any $q\geq 1$ and $\varepsilon>0$, \begin{align*} \lim_{n\to \infty} \mathbb{P}_{n-q}^{\frac{n}{n-q}V} \left( \max_{i=1,\dots, 2M} d(\zeta_{i,1}^{(n)} ,\partial I) >\varepsilon \right) = 0 . 
\end{align*} \end{lem} \begin{lem} \label{lem:Zratio} Under Assumption (A1), \begin{align*} \limsup_{n\to \infty} \frac{1}{n} \log \frac{Z_{n-1}^V}{Z_n^V} < \infty . \end{align*} \end{lem} \subsection{Exponential tightness} \label{sect:extremalLDP1} The exponential tightness will follow from \begin{align} \label{expotight} \limsup_{L \rightarrow \infty}\limsup_{n \rightarrow \infty} \frac{1}{n} \log \mathbb P_n^V\big( \zeta_N^{(n)} \notin K_L^{2MN} \big) = -\infty \end{align} for any $N\geq 1$, with $K_L = \{x\in \mathbb{R} \mid V(x)\leq L\}$. For $L$ large enough, we have \begin{align} \label{expotight2} \mathbb P_n^V\big( \zeta_N^{(n)} \notin K_L^{2MN} \big)\leq \mathbb P_n^V( \zeta_{1,1}^{(n)} \notin K_L) + \mathbb P_n^V( \zeta_{2M,1}^{(n)} \notin K_L) , \end{align} so the proof of exponential tightness reduces to the consideration of the smallest and the largest eigenvalue, and by symmetry, it suffices to show \begin{align} \label{expotight3} \limsup_{L \rightarrow \infty}\limsup_{n \rightarrow \infty} \frac{1}{n} \log \mathbb P_n^V( \zeta_{2M,1}^{(n)} \notin K_L ) = -\infty . \end{align} The rest of the proof now follows verbatim the proof of (A.7) in \cite{magicrules}, making use of Lemma \ref{lem:Zratio}. As a consequence of exponential tightness, we may simplify the remaining proof substantially by replacing the potential $V$ by \begin{align*} V_L (x) = \begin{cases} V(x) & \text{ if } V(x) \leq L ,\\ \infty & \text{ otherwise}, \end{cases} \end{align*} for $L$ large enough. Indeed, if $L$ is large enough, the minimizer $\mu_{V_L}$ will coincide with $\mu_V$ and also $\inf_{\xi\in \mathbb{R}} \mathcal J_{V_L}(\xi)=\inf_{\xi\in \mathbb{R}} \mathcal J_V(\xi)$. For the sake of a lighter notation, we will drop the subscript $L$, but we may assume that the eigenvalues are confined to a compact interval. In particular, Lemma \ref{lem:teknik} is applicable. 
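The cutoff just described is elementary; for concreteness, here is a toy version with $V(x)=x^4$ standing in for the actual potential (our choice, purely illustrative):

```python
import math

def V(x):
    # Stand-in confining potential; the V of the paper is a general
    # polynomial, this monomial is only an example.
    return x ** 4

L = 10.0

def V_L(x):
    # Cutoff potential: agrees with V on the level set K_L = {V <= L}
    # and is +infinity outside, confining the eigenvalues to K_L.
    return V(x) if V(x) <= L else math.inf

on_K_L = V_L(1.0)      # inside K_L: unchanged
off_K_L = V_L(5.0)     # outside K_L: infinite
```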
\subsection{Proof of the upper bound} \label{sect:extremalLDP2} In this section, we prove the upper bound \eqref{ub}. Let $z\in \mathcal{Z}_N$. Without loss of generality, we may assume that $z_{i,j} \notin I$ for all $i,j$. To see this, let $\mathrm{Ind} = \{ (i,j): z_{i,j} \notin I\}$ be the set of indices of entries not in $I$. Then we have the trivial upper bound \begin{align} \label{ubtrivial} \mathbb P_n^V ( \zeta_N^{(n)} \in B_\delta (z) ) \leq \mathbb P_n^V ( \zeta_{i,j}^{(n)} \in [z_{i,j}-\delta,z_{i,j}+\delta] \text{ for all } (i,j)\in \mathrm{Ind} ) , \end{align} and since $\mathcal{F}_V(z_{i,j}) = 0$ for $(i,j)\notin \mathrm{Ind}$, it suffices to consider the entries with indices in $\mathrm{Ind}$. In order to keep the notation simple, we then assume that $z_{i,j} \notin I$ for all $i,j$. In addition, let $\delta$ be so small that $B_\delta(z)\cap I^{2MN} = \emptyset$. The eigenvalue density as in \eqref{generaldensity} is the density of unordered eigenvalues, so that we have \begin{align} \label{ub:representation0} \mathbb P_n^V (\zeta^{(n)}_N\in B_\delta(z)) = \binom{n}{2MN} \frac{1}{Z^V_n} \int_{B_\delta(z)} \int_{D(\lambda^{\mathrm{ex}})} \prod_{i< j}|\lambda_i-\lambda_j|^\beta \prod_{i=1}^n e^{-n\beta'V(\lambda_i)} \mathop{}\!\mathrm{d}\lambda^\mathrm{in} \mathop{}\!\mathrm{d} \lambda^\mathrm{ex} , \end{align} where $\lambda^\mathrm{ex}=(\lambda_1,\dots ,\lambda_{2MN}) \in \mathbb{R}^{2MN}$ is the vector of (unordered) extremal eigenvalues, and the vector of the remaining (unordered) eigenvalues is denoted by $\lambda^\mathrm{in}=(\lambda_{2MN+1},\dots ,\lambda_n)\in \mathbb{R}^{n-2MN}$. Here, we consider $B_\delta(z)$ also as a subset of $\mathbb{R}^{2MN}$. 
Fixing the extremal eigenvalues then forces the entries of $\lambda^\mathrm{in}$ to be in the compact set \begin{align} D(\lambda^\mathrm{ex}) = \left( \bigcup_{i=1}^{M} \big[ \max \{\lambda_k : \, k\leq 2MN, \lambda_k\leq l_i\} , \min\{\lambda_k:\, k\leq 2MN, \lambda_k\geq r_i \} \big] \right)^{n-2MN} , \end{align} where we recall that $I=[l_1,r_1]\cup \dots \cup [l_M,r_M]$. That is, the elements of $D(\lambda^\mathrm{ex})$ are ``more internal'' than the vector of eigenvalues $\lambda^\mathrm{ex}$, according to the ordering introduced in Section \ref{sec:newencoding}. For any $\lambda^\mathrm{ex}\in B_\delta(z)$, the maxima and minima in the definition of $D(\lambda^{\mathrm{ex}})$ are attained. The integral in \eqref{ub:representation0} may be rewritten as \begin{align} \label{ub:representation} \mathbb P_n^V (\zeta^{(n)}_N\in B_\delta(z)) = \binom{n}{2MN} \frac{1}{Z_n^V} \int_{B_\delta(z)} \Upsilon_{n,N}(\lambda^\mathrm{ex})\ d\lambda^\mathrm{ex} , \end{align} with the term $\Upsilon_{n,N}(\lambda^\mathrm{ex})$ given by \begin{align} \label{ub:upsilon} \Upsilon_{n,N}(\lambda^\mathrm{ex})= H(\lambda^\mathrm{ex}) \Xi_{n,N}(\lambda^\mathrm{ex}) \exp\left\{ -\beta'n\sum_{k=1}^{2MN}V(\lambda_k) \right\} , \end{align} with \begin{align*} H(\lambda^\mathrm{ex}) =\prod_{1\leq r<s\leq 2MN} |\lambda_r - \lambda_s|^{\beta} , \end{align*} and \begin{align} \label{ub:upsilon2} \Xi_{n,N}(\lambda^\mathrm{ex}) & = \int_{D(\lambda^\mathrm{ex}) } \prod_{r=1}^{2MN} \prod_{s=2MN+1}^{n} |\lambda_r-\lambda_s|^\beta \prod_{r=2MN+1}^{n} e^{-n\beta'V(\lambda_r)}\prod_{2MN < r <s \leq n} |\lambda_r - \lambda_s|^\beta \mathop{}\!\mathrm{d}\lambda^\mathrm{in} \notag \\ & = Z_{n-2MN}^{\frac{n}{n-2MN}V} \int_{D(\lambda^\mathrm{ex})} \prod_{r=1}^{2MN} \prod_{s=2MN+1}^{n} |\lambda_r-\lambda_s|^\beta \mathop{}\!\mathrm{d} \mathbb P_{n-2MN}^{\frac{n}{n-2MN}V} (\lambda) . \end{align} In order to simplify notation, we define $q=2MN$, so that the above measure becomes $\mathbb P_{n-q}^{\frac{n}{n-q}V}$. 
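For intuition, the bracket defining $D(\lambda^\mathrm{ex})$ can be computed explicitly in a toy one-cut situation; the following sketch (our own toy data, with $M=1$ and $I=[-1,1]$) picks out the innermost extremal eigenvalues on each side:

```python
# Toy computation (M = 1, I = [-1, 1], our data): the internal eigenvalues
# are confined between the innermost extremal eigenvalues on either side.
l1, r1 = -1.0, 1.0
lam_ex = [-1.5, -1.2, 1.3, 1.7]          # unordered extremal eigenvalues

lower = max(x for x in lam_ex if x <= l1)
upper = min(x for x in lam_ex if x >= r1)
bracket = (lower, upper)
```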
Now, to find an upper bound for $\Upsilon_{n,N}(\lambda^\mathrm{ex})$, we first choose $K$ so large that $\operatorname{supp}(\mu_V)\subset [-K+1,K-1]$ and define \begin{align}\label{def:ball1} \mathcal{B}_\kappa = \big\{ \mu \in \mathcal{M}_{1,K} |\, d_P(\mu,\mu_V)<\kappa \big\} , \end{align} the open ball around $\mu_V$ with radius $\kappa$ in the Prokhorov metric (recall the definition of $\mathcal M_{1,K}$ in \eqref{defMK}). Let also $\bar{\mathcal{B}}_\kappa=\{ \lambda \in \mathbb{R}^{n-q} |\, \mu^{\tt u}_{n-q} \in \mathcal B_\kappa \}$. On the bounded set $D(\lambda^\mathrm{ex})$ the integrand in \eqref{ub:upsilon2} can be bounded by $ e^{c_1n}$ for some $c_1 > 0$ depending only on $z$ and $\delta$. We then use the fact that by Lemma \ref{LDPtilde}, the sequence of measures $\mu^{\tt u}_{n-q}$ satisfies under $\mathbb P_{n-q}^{\frac{n}{n-q}V}$ the LDP with speed $\beta'n^2$, and rate function vanishing only at $\mu_V$. The same arguments as in the large deviation upper bound in \cite{magicrules} then yield for any $\eta>0$, \begin{align} \label{ub:ub} & \limsup_{n\to \infty} \, (\beta'n)^{-1}\log \mathbb P_n^V (\zeta_N^{(n)} \in B_\delta(z) ) \notag \\ & = \limsup_{n\to \infty} \, (\beta'n)^{-1}\log \int_{B_\delta(z)} \left(Z^{\frac{n}{n-q}V}_{n-q}\right)^{-1} \Upsilon_{n,N}(\lambda^\mathrm{ex}) \mathop{}\!\mathrm{d} \lambda^{\mathrm{ex}} + \limsup_{n\to \infty}\, (\beta'n)^{-1} \log \left(\frac{Z^{\frac{n}{n-q}V}_{n-q}}{Z^V_n}\right) \notag \\ & \leq \eta - \inf_{\lambda^\mathrm{ex} \in B_\delta(z)} \sum_{k=1}^{q} \mathcal J_V(\lambda_k) + \limsup_{n\to \infty}\, (\beta'n)^{-1} \log \left(\frac{Z^{\frac{n}{n-q}V}_{n-q}}{Z^{V}_n}\right) . 
\end{align} We may now apply Lemma \ref{lem:teknik} and use the fact that $\eta>0$ is arbitrary to obtain \begin{align} \limsup_{n\to \infty} \ (\beta'n)^{-1} \log \mathbb P_n^V (\zeta_N^{(n)} \in B_\delta(z) ) \leq -\inf_{\lambda^\mathrm{ex} \in B_\delta(z)} \sum_{k=1}^{q} \mathcal J_V(\lambda_k) + q \inf_{\xi\in \mathbb{R}} \mathcal J_V(\xi) . \end{align} Since $\mathcal{J}_V$ is lower semicontinuous, the right-hand side converges as $\delta\searrow 0$ to \begin{align} - \sum_{k=1}^{q} \left( \mathcal J_V(z_k) - \inf_{\xi\in \mathbb{R}}\mathcal J_V(\xi)\right) = - \mathcal{I}_N^\mathrm{ext}(z) . \end{align} This concludes the proof of the upper bound. \subsection{Proof of the lower bound} \label{sect:extremalLDP3} To prove the lower bound \eqref{lb}, we fix $z\in \mathcal{Z}_N$ and show that \begin{align} \label{lb:lb} \liminf_{n\to \infty}\ (\beta'n)^{-1} \log \mathbb P_n^V(\zeta_N^{(n)} \in B_\delta(z)) \geq - \mathcal{I}_N^{\operatorname{ext}}(z) \end{align} for $\delta$ small enough. We may restrict our proof to $z$ with $V(z_{i,j})<\infty$ for all $i,j$, as otherwise $\mathcal{I}_N^{\operatorname{ext}}(z)=\infty$ and the lower bound is trivial. Recall that in the proof of the upper bound, we identified $z$ with a vector in $\mathbb{R}^{q}$, $q=2MN$, and decomposed the vector of $n$ eigenvalues as in \eqref{ub:representation0} into $\lambda^\mathrm{ex} \in \mathbb{R}^{q}$ and $\lambda^\mathrm{in} \in \mathbb{R}^{n-q}$. In the course of the proof, we will separate the $q$ extremal eigenvalues from the $n-q$ remaining ones and use the convergence of the empirical measure built from the latter. This requires some care in keeping the extremal eigenvalues away from $\partial I$. We will show \eqref{lb:lb} with $B_\delta(z)$ replaced by a set $U_\delta(z)\subset B_\delta(z)$, which is constructed as follows. 
$U_\delta(z)$ contains those $y\in \mathbb{R}^{q}$ whose coordinates $y_k$ deviate by less than $\delta$ from $z_k$ and which, whenever $z_k\in \partial I$, additionally keep a distance of more than $\delta/2$ from $I$. More precisely, for $\delta>0$, let $J_\delta(x) = (x-\delta,x+\delta)$ if $x\notin I$, and if $x\in I$, let \begin{align} \label{lb:defB0} J_\delta(x) = (x-\delta,x+\delta) \cap \{x': d(x',I)>\delta/2\} . \end{align} Then, define \begin{align} \label{lb:defB} U_\delta(z) = J_\delta(z_1)\times \dots \times J_\delta(z_{q}). \end{align} For $\delta$ small enough, $U_\delta(z)\subset I^c$, and additionally \begin{align*} \mathbb P_n^V(\zeta_N^{(n)} \in B_\delta(z)) \geq \mathbb P_n^V(\zeta_N^{(n)} \in U_\delta(z)). \end{align*} Actually, to simplify later arguments, let $\delta$ be so small that $ x\in U_\delta(z)$ implies $d(x_k,I)>\delta/2$ for all $k$. Note that the latter condition is satisfied by definition for $k$ with $z_k\in \partial I$, but for the other indices only for $\delta$ small enough. Then we may bound \begin{align} \mathbb P_n^V(\zeta_N^{(n)} \in U_\delta(z)) \geq \mathbb P_n^V(\zeta_N^{(n)} \in U_\delta(z), d(\lambda_k,I)<\delta/4 \text{ for all } k> q) . 
\end{align} Similarly to \eqref{ub:representation}, we can then write \begin{align} \label{lb:upsilon} Z_n^V\, \binom{n}{q}^{-1} \mathbb P_n^V(\zeta_N^{(n)} \in U_\delta(z), d(\lambda_k,I)<\delta/4 \text{ for all } k> q) = \int_{U_\delta(z)} \Upsilon_{n,N}(\lambda^\mathrm{ex}) \mathop{}\!\mathrm{d} \lambda^\mathrm{ex} , \end{align} with $\Upsilon_{n,N}$ as in \eqref{ub:upsilon}, except that $\Xi_{n,N}(\lambda^\mathrm{ex})$ is replaced by \begin{align*} \hat{\Xi}_{n,N}(\lambda^\mathrm{ex}) = \int_{D} \prod_{r=1}^{q} \prod_{s=q+1}^{n} |\lambda_r-\lambda_s|^\beta \prod_{r=q+1}^{n} e^{-n\beta'V(\lambda_r)}\prod_{q< r <s \leq n} |\lambda_r - \lambda_s|^\beta \mathop{}\!\mathrm{d} \lambda^\mathrm{in} , \end{align*} where the integration is now over the set $D=\{x :\, d(x_k,I)<\delta/4 \text{ for all } k\}$. We then consider the probability measure $\chi_{n,N}$ on $\mathbb{R}^n$, which forces the extremal eigenvalues to be in $U_\delta(z)$, and is defined by \begin{align} \label{lb:defchi} \mathop{}\!\mathrm{d} \chi_{n,N}(\lambda^\mathrm{ex}, \lambda^\mathrm{in}) := (\kappa_{n,N})^{-1} \left( \mathbbm{1}_{U_\delta(z)}(\lambda^\mathrm{ex})\mathop{}\!\mathrm{d} \lambda^\mathrm{ex} \right) \left( \mathbbm{1}_{D}(\lambda^\mathrm{in}) \mathop{}\!\mathrm{d}\mathbb P_{n-q}^{\frac{n}{n-q}V}(\lambda^\mathrm{in})\right) , \end{align} where the arguments are $\lambda^\mathrm{ex}\in \mathbb{R}^{q}$ and $\lambda^\mathrm{in} \in \mathbb{R}^{n-q}$, and $\kappa_{n,N}$ is the normalizing constant. Recall that, as remarked at the end of Section \ref{sect:extremalLDP1}, we may assume that $V$ is infinite outside of a compact set. 
We then have the representation \begin{align} \label{lb:upsilon2} \int_{U_\delta(z)} \Upsilon_{n,N}(\lambda^\mathrm{ex}) \mathop{}\!\mathrm{d} \lambda^\mathrm{ex} = Z_{n-q}^{\frac{n}{n-q}V} \kappa_{n,N} I_{n,N} , \end{align} where, with $H(\lambda^\mathrm{ex})$ as in \eqref{ub:upsilon}, \begin{align} I_{n,N} := \int H(\lambda^\mathrm{ex}) e^{-\beta'n\sum_{r=1}^{q} V(\lambda_r)} \left(\prod_{r=1}^{q}\prod_{s=q+1}^{n}|\lambda_r - \lambda_s|^\beta\right) \mathop{}\!\mathrm{d} \chi_{n,N}(\lambda^\mathrm{ex}, \lambda^\mathrm{in}) . \end{align} Jensen's inequality then allows us to bound \begin{align} \label{lb:Ibound} \frac{1}{\beta'}\log I_{n,N} \geq n I_{n,N}^{(1)} + 2I_{n,N}^{(2)} + 2 (n-q) I_{n,N}^{(3)} , \end{align} where \begin{align*} I_{n,N}^{(1)} & = -\int \sum_{r=1}^{q} V(\lambda_r) \mathop{}\!\mathrm{d} \chi_{n,N}(\lambda^\mathrm{ex}, \lambda^\mathrm{in}),\\ I_{n,N}^{(2)} & = \int \sum_{1\leq r<s \leq q} \log |\lambda_r-\lambda_s| \mathop{}\!\mathrm{d} \chi_{n,N}(\lambda^\mathrm{ex}, \lambda^\mathrm{in}),\\ I_{n,N}^{(3)} & = \frac{1}{n-q}\int \sum_{r=1}^q \sum_{s=q+1}^{n} \log |\lambda_r - \lambda_s| \mathop{}\!\mathrm{d} \chi_{n,N}(\lambda^\mathrm{ex}, \lambda^\mathrm{in}) . \end{align*} To obtain bounds for the $I_{n,N}^{(i)}$, we first consider the normalizing constant $\kappa_{n,N}$. By definition \eqref{lb:defchi}, it is given by \begin{align*} \kappa_{n,N} = \int_{U_\delta(z)} \mathop{}\!\mathrm{d} \lambda^\mathrm{ex} \, \mathbb{P}_{n-q}^{\frac{n}{n-q}V}(\lambda \in D). \end{align*} By definition of the set $D$, we have $I\subset \operatorname{Int}(D)$, and so by Lemma \ref{lem:cvproba}, \begin{align} \label{lb:kappa} \lim_{n\to \infty} \mathbb{P}_{n-q}^{\frac{n}{n-q}V}(\lambda \in D) = 1, \end{align} which implies \begin{align} \label{lb:kappa2} \lim_{n\to \infty} \kappa_{n,N} = |U_\delta(z)| , \end{align} where we write $|A|$ for the Lebesgue measure of a Borel set $A$. 
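For completeness, the Jensen step behind \eqref{lb:Ibound} can be spelled out. Writing the integrand of $I_{n,N}$ as $e^{g(\lambda)}$ and using the concavity of the logarithm (Jensen's inequality with respect to the probability measure $\chi_{n,N}$),
\begin{align*}
\log I_{n,N} = \log \int e^{g(\lambda)} \mathop{}\!\mathrm{d} \chi_{n,N}(\lambda) \geq \int g(\lambda) \mathop{}\!\mathrm{d} \chi_{n,N}(\lambda) ,
\end{align*}
where
\begin{align*}
g(\lambda) = \beta \sum_{1\leq r<s\leq q} \log |\lambda_r - \lambda_s| - \beta' n \sum_{r=1}^{q} V(\lambda_r) + \beta \sum_{r=1}^{q} \sum_{s=q+1}^{n} \log |\lambda_r - \lambda_s| .
\end{align*}
Dividing by $\beta'$ and using $\beta = 2\beta'$ (the normalization consistent with the coefficients in \eqref{lb:Ibound}) yields precisely the claimed bound.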
The convergence of $\kappa_{n,N}$ then allows us to prove the following limits for $I_{n,N}^{(i)}$, $i=1,2,3$: \begin{align} \lim_{n \to \infty} I_{n,N}^{(1)} & = -|U_\delta(z)|^{-1} \int_{U_\delta(z)} \sum_{r=1}^{q} V(\lambda_r) \mathop{}\!\mathrm{d} \lambda^\mathrm{ex}, \\ \lim_{n \to \infty} I_{n,N}^{(2)} & = |U_\delta(z)|^{-1} \int_{U_\delta(z)} \sum_{1\leq r<s \leq q} \log |\lambda_r-\lambda_s| \mathop{}\!\mathrm{d} \lambda^\mathrm{ex} , \\ \lim_{n \to \infty} I_{n,N}^{(3)} & = |U_\delta(z)|^{-1} \int_{U_\delta(z)} \left( \sum_{r=1}^q \int \log |\lambda_r-\xi| \mathop{}\!\mathrm{d} \mu_V(\xi) \right) \mathop{}\!\mathrm{d} \lambda^{\mathrm{ex}}. \end{align} For the detailed arguments we again refer to \cite{magicrules}. This implies \begin{align} \label{lb:Ibound2} \liminf_{n\to \infty} \frac{1}{\beta'n} \log I_{n,N} & \geq - |U_\delta(z)|^{-1} \int_{U_\delta(z)} \sum_{r=1}^q \mathcal{J}_V(\lambda_r) \mathop{}\!\mathrm{d} \lambda^\mathrm{ex} . \end{align} We can then return to \eqref{lb:upsilon} via \eqref{lb:upsilon2}, and obtain \begin{align} \label{lb:almostfinished} & \liminf_{n\to \infty} \frac{1}{\beta'n} \log \mathbb P_n^V(\zeta_N^{(n)} \in U_\delta(z), d(\lambda_k,I)<\delta/4 \text{ for all } k> q) \notag \\ & \geq - |U_\delta(z)|^{-1} \int_{U_\delta(z)} \sum_{r=1}^q \mathcal{J}_V(\lambda_r) \mathop{}\!\mathrm{d} \lambda^\mathrm{ex} + \liminf_{n\to \infty} \frac{1}{\beta'n} \log \left( \frac{Z_{n-q}^{\frac{n}{n-q}V}}{Z_{n}^{V}} \right) +\liminf_{n\to \infty} \frac{1}{\beta'n} \log \kappa_{n,N} . \end{align} By Lemma \ref{lem:teknik}, the first $\liminf$ in \eqref{lb:almostfinished} is given by $q \inf_{x \in \mathbb{R}} \mathcal{J}_V(x)$. Since $\kappa_{n,N}$ converges to a positive limit, the second one vanishes. 
Altogether, for a point $z\in \mathcal{Z}_N$ such that $\mathcal{I}_N^{\operatorname{ext}}(z)$ is finite, we obtain \begin{align} \label{lb:almostfinished2} \liminf_{n\to \infty} \frac{1}{\beta'n} \log \mathbb P_n^V(\zeta_N^{(n)} \in B_\delta(z)) \geq - \left( |U_\delta(z)|^{-1} \int_{U_\delta(z)} \sum_{r=1}^q \mathcal{J}_V(\lambda_r) \mathop{}\!\mathrm{d} \lambda^\mathrm{ex} - q \inf_{\xi \in \mathbb{R}} \mathcal{J}_V(\xi) \right), \end{align} for $\delta >0$ small enough. Letting $\delta \to 0$, the set $U_\delta(z)$ concentrates at $z$ with $|U_\delta(z)|\to 0$. By continuity of $\mathcal{J}_V$ on the set where this function is finite, the lower bound in \eqref{lb:almostfinished2} converges to $-\sum_{k=1}^{q}\left(\mathcal{J}_V(z_k) - \inf_{\xi\in \mathbb{R}}\mathcal{J}_V(\xi)\right) = -\mathcal{I}_N^{\operatorname{ext}}(z)$, which finishes the proof of the lower bound.
\section{Introduction} \emph{Representation learning} has become one of the most important tasks in machine learning~\cite{Bengio13,Goodfellow16}. The typical task is to find a numerical (vectorized) representation of structured objects, such as images~\cite{Krizhevsky12}, speech~\cite{Graves13}, and texts~\cite{Mikolov13}, which have discrete structures between variables. However, to date, learning of a structured representation from numerical data has not been studied in sufficient depth, although the task is crucial for \emph{relational reasoning}, which aims at finding relationships between objects to bridge a symbolic approach and a gradient-based numerical approach~\cite{Santoro17}. A classic yet promising branch of research for learning structures from numerical data is \emph{hierarchical clustering}~\cite{hie_clu}, which is a widely used unsupervised learning method in multivariate data analysis, from natural language processing~\cite{PeterF} to human motion analysis~\cite{FengZ}. Given a set of data points without any class labels, hierarchical clustering can learn a tree structured representation, called a \emph{dendrogram}, whose nodes correspond to clusters and whose leaves correspond to the input data points. The resulting tree structures can be used for further analysis in relational reasoning and other machine learning tasks. However, the representation in hierarchical clustering is restricted to the form of a \emph{binary tree}, since clusters must be disjoint from each other in the existing approaches. Nevertheless, clusters of objects can often overlap in real-world data analysis. Therefore, a technique that can find a hierarchical structure of overlapping clusters from multivariate data is needed, which leads to a more flexible \emph{graph structured representation} of numerical data. To solve this problem, we propose to combine {\it nearest neighbor-based binarization} and {\it formal concept analysis} (FCA)~\cite{FCA}. 
FCA can extract a hierarchical structure of data using the algebraic closedness property, which consists of mutually overlapping clusters~\cite{Valtchev04}. Since FCA is designed for binary data, we first binarize numerical data by nearest neighbor-based binarization, which can model local geometric relationships between data points. The remainder of this paper is organized as follows. Section~\ref{propose_section} introduces our method; Section~\ref{knn_section} explains nearest neighbor-based binarization and Section~\ref{fca_section} introduces FCA. Section~\ref{experi_section} empirically examines our method and Section~\ref{conc_section} summarizes our contribution. \section{The Proposed Method}\label{propose_section} We introduce our method that learns a graph representation from numerical data. It consists of two stages. It first binarizes numerical data by nearest neighbor-based binarization, and then applies formal concept analysis (FCA)~\cite{FCA} to the binarized data, which is an established method for analyzing relational databases~\cite{Kaytoue10}. The input to our method is a set of unlabeled real-valued vectors. Let $D = \{\vec{d}_1, \vec{d}_2, \dots, \vec{d}_n\}$ be an input dataset. Each data point is an $m$-dimensional vector and is denoted by $\boldsymbol{d}_i = (d_i^1, d_i^2, \dots, d_i^m)\in \mathbb{R}^m$. \subsection{Nearest Neighbor-based Binarization}\label{knn_section} In the first stage, we convert each $m$-dimensional vector $\vec{d}_i \in \mathbb{R}^m$ into an $n$-dimensional binary vector $\vec{z}_i \in \{0,1\}^n$, where $n$ coincides with the number of data points. The $j$th feature of the converted binary vector $\vec{z}_i$ indicates whether or not the $j$th data point $\vec{d}_j$ belongs to the nearest neighbors of $\vec{d}_i$. 
Formally, given a dataset $D \subset \mathbb{R}^m$ and a parameter $k \in \mathbb{N}$, each data point $\vec{d}_i \in D$ is binarized to the binary vector $\boldsymbol{z}_i = (z_i^1, z_i^2, \dots , z_i^n) \in \{0, 1\}^n$, where each component $z_i^j$ is defined as \begin{equation}\label{z_defi} \begin{split} z_i^j=\left\{ \begin{array}{ll} 1 &\text{if } \boldsymbol{d}_j \text{ is the }l\text{th nearest data point from } \boldsymbol{d}_i \text{ for } l \leq k,\\ 0 &\text{otherwise}. \end{array} \right. \end{split} \end{equation} Hence our binarization models local relationships between data points in terms of relative closeness in the original feature space, as is often done with $k$-nearest neighbor graphs in spectral clustering~\cite{vonluxburg07}. When we regard indices of data points as \emph{items}, every binary vector $\boldsymbol{z}_i \in \{0, 1\}^n$ can be directly treated as a \emph{transaction} $X_i \subseteq \{1, 2, \dots, n\}$ defined as $X_i = \{\,j \in \{1,2,\dots, n\} \mid z_i^j = 1\,\}$. In other words, $X_i$ is the set of indices of the data points which are the $l$th nearest data points ($l \leq k$) from $\boldsymbol{d}_i$, and hence $|X_i| = k$ always holds. The output of this stage is the transaction database $\mathcal{T} = \{X_1, X_2, \dots, X_n\}$. Transaction databases are the standard data format in frequent pattern mining~\cite{freq_pattern} and other fields of database research. \subsection{Formal Concept Analysis}\label{fca_section} In the second stage, we apply formal concept analysis (FCA)~\cite{FCA} to the transaction database obtained in the first stage; FCA is a mathematical way to analyze databases based on lattice theory and can be viewed as a co-clustering method for binary data. FCA can obtain hierarchical relationships of the original numerical data via the binarized transaction database. Let $\mathcal{A} \subseteq \mathcal{T}$ be a subset of transactions and $B \subseteq [n] = \{1,2,\dots,n\}$ be a subset of data indices. 
We define $\mathcal{A}'$ as the set of indices common to all transactions in $\mathcal{A}$ and $B'$ as the set of transactions possessing all indices in $B$; that is, \begin{align*} \mathcal{A}' = \{\,j \in [n] \mid j \in X_i \text{ for all }X_i \in \mathcal{A} \,\}, \quad B' = \{\,X_i \in \mathcal{T} \mid B \subseteq X_i \,\}. \end{align*} The pair $(\mathcal{A}, B)$ is called a {\it concept} if and only if $\mathcal{A}' = B$ and $B' = \mathcal{A}$. Here the mapping $''$ is a \emph{closure operator}, as it satisfies $\mathcal{A} \subseteq \mathcal{A}''$, $\mathcal{A} \subseteq \mathcal{C} \Rightarrow \mathcal{A}'' \subseteq \mathcal{C}''$, and $(\mathcal{A}'')'' = \mathcal{A}''$; moreover, $\mathcal{A}$ is \emph{closed} if and only if $(\mathcal{A}, B)$ is a concept. A concept $(\mathcal{A}_1, B_1)$ is less general than a concept $(\mathcal{A}_2, B_2)$ if $\mathcal{A}_1$ is contained in $\mathcal{A}_2$; that is, $(\mathcal{A}_1, B_1) \leq (\mathcal{A}_2, B_2) \Longleftrightarrow \mathcal{A}_1 \subseteq \mathcal{A}_2$, where the relation ``$\le$'' is a partial order. The {\it concept lattice} is the set of concepts equipped with the order $\le$. Intuitively, concepts are representative clusters in the dataset. From the set $\mathfrak{L}$ of concepts, we finally construct a graph representation $G = (V, E)$, where $V = \{S \subseteq D \mid (T(S), T(S)') \in \mathfrak{L}\}$ with $T(S) = \{X_i \in \mathcal{T} \mid \vec{d}_i \in S\}$, and a directed edge $(v, w) \in E$ exists if $v$ covers $w$; that is, $v \subset w$ and $v \subseteq u \subset w \Rightarrow u = v$. Hence $v \subseteq w$ if and only if $w$ is reachable from $v$. This graph coincides with the Hasse diagram of the concept lattice under the partial order $\le$. We illustrate an example of a graph representation in Figure~\ref{hasse}(\textbf{b}) obtained by our method from a dataset in Figure~\ref{hasse}(\textbf{a}). 
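The two stages described above can be sketched in a few lines of Python. This is a minimal illustration of ours, not the authors' implementation: indices are 0-based, the data point itself is counted among its own $k$ nearest neighbors (one possible reading of Eq.~\eqref{z_defi}), and concepts are enumerated by brute-force closure of item sets, which is exponential in $n$; the paper uses the LCM algorithm instead to avoid this enumeration.

```python
from itertools import combinations

def knn_binarize(D, k):
    """k-nearest-neighbor binarization: transaction X_i collects the indices
    of the k points closest to d_i (self included, by our convention)."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [frozenset(sorted(range(len(D)), key=lambda j: dist2(D[i], D[j]))[:k])
            for i in range(len(D))]

def closure(B, transactions):
    """The closure B'': items common to all transactions containing B."""
    A = [X for X in transactions if B <= X]            # this is B'
    if not A:                                          # no transaction contains B
        return frozenset(range(len(transactions)))     # convention: full item set
    return frozenset.intersection(*A)                  # this is A' = B''

def concepts(transactions):
    """Brute-force enumeration of all closed item sets (intents of concepts).
    Exponential in the number of items; LCM produces the same output efficiently."""
    items = sorted(set().union(*transactions))
    return {closure(frozenset(B), transactions)
            for r in range(len(items) + 1)
            for B in combinations(items, r)}
```

On four points forming two well-separated pairs with $k = 2$, for instance, `concepts(knn_binarize(D, 2))` returns four closed sets: the empty set, the two pairs, and the full index set, i.e., the bottom, the two proper clusters, and the top of the lattice.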
Since the set of concepts is equivalent to that of \emph{closed itemsets} used in closed itemset mining~\cite{lattice_closed}, we can efficiently enumerate all concepts using a closed itemset mining algorithm such as LCM~\cite{lcm}. Moreover, we can directly obtain more compact representations by pruning nodes with small clusters using \emph{frequent closed itemset mining}, as the frequency (or the support) of an itemset coincides with the size of a cluster. \begin{figure}[t] \centering \includegraphics[width=.9\linewidth]{graph.pdf} \caption{(\textbf{a}) Example of dataset. (\textbf{b}) Graph representation obtained from the dataset with $k = 3$. All edges are directed from left to right. Numbers of nodes indicate clusters as follows: 1: $\emptyset$, 2: $\{\vec{d}_1\}$, 3: $\{\vec{d}_6\}$, 4: $\{\vec{d}_3\}$, 5: $\{\vec{d}_8\}$, 6: $\{\vec{d}_2,\vec{d}_4\}$, 7: $\{\vec{d}_1,\vec{d}_6\}$, 8: $\{\vec{d}_3,\vec{d}_8\}$, 9: $\{\vec{d}_5,\vec{d}_7\}$, 10: $\{\vec{d}_1,\vec{d}_2,\vec{d}_4\}$, 11: $\{\vec{d}_2,\vec{d}_3,\vec{d}_4\}$, 12: $\{\vec{d}_1,\vec{d}_2,\vec{d}_4,\vec{d}_6\}$, 13: $\{\vec{d}_5,\vec{d}_6,\vec{d}_7,\vec{d}_8\}$, 14: $\{\vec{d}_1,\vec{d}_5,\vec{d}_6,\vec{d}_7,\vec{d}_8\}$, 15: $\{\vec{d}_3,\vec{d}_5,\vec{d}_6,\vec{d}_7,\vec{d}_8\}$, 16: $\{\vec{d}_1,\vec{d}_2,\vec{d}_3,\vec{d}_4,\vec{d}_5,\vec{d}_6,\vec{d}_7,\vec{d}_8\}$.} \label{hasse} \end{figure} \section{Experiments}\label{experi_section} We evaluate the proposed method using synthetic and real-world datasets. Since our method can be viewed as hierarchical clustering, we assess its effectiveness by comparing it with standard hierarchical agglomerative clustering (HAC) with Ward's method~\cite{ward_original}. We performed all experiments on Windows 10 Pro 64-bit with a single Intel Core i7-4790 CPU (3.60 GHz) and 16GB of main memory. All experiments were conducted in Python 3.5.2. 
In our method, we used LCM~\cite{lcm} version 5.3\footnote{\url{http://research.nii.ac.jp/~uno/code/lcm53.zip}} for closed itemset mining. HAC is implemented in scipy~\cite{scipy}. We use dendrogram purity (DP)~\cite{dp_first} for evaluation. Dendrogram purity is the standard measure to evaluate the quality of hierarchical clusters~\cite{dp_use}. Given a dataset $D = \{\boldsymbol{d}_1, \boldsymbol{d}_2, \dots, \boldsymbol{d}_n\}$ and its ground truth partition $\mathcal{C} = \{C_1, C_2, \dots, C_P\}$ such that $\bigcup_{i \in \{1, \dots, P\}} C_i = D$ and $C_i \cap C_j = \emptyset$ for $i \neq j$, let $\mathcal{H}$ be a set of clusters obtained by a hierarchical clustering algorithm. We denote by $\mathrm{LCA}(\boldsymbol{d}_i, \boldsymbol{d}_j) \in \mathcal{H}$ the smallest cluster in $\mathcal{H}$ that includes both $\boldsymbol{d}_i$ and $\boldsymbol{d}_j$, and define $\mathrm{pur}(F, G) = |F \cap G| / |F|$ for a pair of clusters $F, G \subseteq D$. Let $Q$ be the set of pairs of data points belonging to the same ground truth cluster; that is, $Q = \{(\boldsymbol{d}_i, \boldsymbol{d}_j) \mid \boldsymbol{d}_i, \boldsymbol{d}_j \in C_l \text{ for some } C_l \in \mathcal{C}\}$. The \emph{dendrogram purity} of hierarchical clusters $\mathcal{H}$ is defined as \begin{align*} DP(\mathcal{H}) = \frac{1}{|Q|}\sum\nolimits_{l=1}^P \sum\nolimits_{\boldsymbol{d}_i, \boldsymbol{d}_j \in C_{l}} \mathrm{pur}(\mathrm{LCA}(\boldsymbol{d}_i, \boldsymbol{d}_j), C_{l}). \end{align*} The dendrogram purity takes values from $0$ to $1$, and larger is better. We use three types of synthetic datasets, \texttt{synth1}, \texttt{synth2}, and \texttt{synth3}. \texttt{synth1} consists of two equal-sized clusters sampled from two normal distributions $(\mu_0, \sigma^2_0) = (0, 1)$ and $(\mu_1, \sigma^2_1) = (2, 1)$ for each feature. \texttt{synth2} consists of two equal-sized clusters sampled from two normal distributions $(\mu_0, \sigma^2_0) = (0, 1)$ and $(\mu_1, \sigma^2_1) = (2, 4)$ for each feature. 
\texttt{synth3} consists of three clusters with the size ratio $(2, 1, 1)$ sampled from three two-dimensional multivariate normal distributions, where the mean is randomly sampled from $[-25, 25]$ and the variance is always $1$ for each feature. For each dataset, we obtained the averaged dendrogram purity from 10 trials. \setlength{\tabcolsep}{5pt} \begin{table}[t] \begin{small} \centering \caption{Experimental results, where $c$ denotes the number of classes.} \begin{tabularx}{1.0\linewidth}{lrrXrrrrll} \toprule Name & $n$ & $m$ & $c$ & \multicolumn{2}{c}{\# clusters} & \multicolumn{2}{c}{DP} & \multicolumn{2}{c}{Runtime (sec.)}\\ &&&& Ours & HAC & Ours & HAC & Ours & HAC\\ \midrule \texttt{synth1} & 100 & 2 & 2 & 77,364.5 & 199 & \textbf{0.937} & 0.812 & \vp{1.24}{0} & \vn{1.01}{3}\\ \texttt{synth1\_large} & 1,000 & 500 & 2 & 58.4 & 1,999 & \textbf{1.0} & \textbf{1.0} & \vp{1.42}{1} & \vn{4.52}{1}\\ \texttt{synth2} & 100 & 2 & 2 & 27,445.3 & 199 & \textbf{0.842} & 0.705 & \vn{5.88}{1} & \vn{9.02}{4}\\ \texttt{synth3} & 100 & 2 & 3 & 425.2 & 199 & \textbf{0.976} & 0.936 & \vn{1.64}{1} & \vn{9.12}{4}\\ \midrule \texttt{parkinsons} & 197 & 23 & 2 & 263,189 & 393 & \textbf{0.828} & 0.738 & \vp{1.30}{1} & \vn{5.32}{3}\\ \texttt{vertebral} & 310 & 6 & 2 & 503,476,064 & 619 & \textbf{0.872} & 0.686 & \vp{2.55}{4} & \vn{4.25}{3}\\ \texttt{breast\_cancer} & 569 & 10 & 2 & 3,142 & 1,137 & \textbf{0.869} & 0.771 & \vp{3.40}{0} & \vn{8.43}{3}\\ \texttt{wine\_red} & 1,600 & 12 & 2 & 24,412,834 & 3,199 & \textbf{0.849} & 0.845 & \vp{3.73}{3} & \vn{8.09}{2}\\ \texttt{ctg} & 2,126 & 20 & 2 & 1,426,981 & 4,251 & \textbf{0.800} & 0.765 & \vp{2.52}{2} & \vn{1.23}{1}\\ \texttt{seismic\_bumps} & 2,584 & 25 & 2 & 91,059 & 5,167 & 0.931 & \textbf{0.943} & \vp{9.48}{1} & \vn{1.52}{1}\\ \bottomrule \end{tabularx} \label{realdatasets} \end{small} \end{table} \begin{figure}[t] \centering \includegraphics[width=.9\linewidth]{plot.pdf} \caption{Result on \texttt{synth1} (left) and 
\texttt{parkinsons} (right).} \vskip -10pt \label{result_da1_mean} \end{figure} We collected six real-world datasets from the UCI machine learning repository~\cite{UCI} and used only continuous features. The statistics of the datasets are summarized in Table~\ref{realdatasets}. \subsection{Results and Discussion}\label{result_dis_subsection} First, we examine the sensitivity of our method with respect to the parameter $k$ for $k$-nearest neighbor binarization, using the synthetic dataset \texttt{synth1} with $n = 100$ and $m = 2$ and the real-world dataset \texttt{parkinsons} with $n = 197$ and $m = 23$. We plot the results in Figure \ref{result_da1_mean}, where we varied $k$ from 10 to 90 for \texttt{synth1} and from 10 to 190 for \texttt{parkinsons}. The figure shows that when $k \ge 20$, the dendrogram purity is higher than that of HAC on both datasets, and it is stable for larger $k$, except for $k = 190$ in \texttt{parkinsons}, which is almost the same as the dataset size. This means that our method is robust to changes in $k$ if $k$ is set to be sufficiently large. In the following, we set $k$ to half of the respective dataset size. Next, we examine the clustering performance of our method compared to HAC across various types of datasets. The results are summarized in Table~\ref{realdatasets}. To prune unnecessarily small clusters in our method, we set the lower bound of the cluster size to $190$ for \texttt{ctg} and \texttt{seismic\_bumps}, and to $490$ for \texttt{synth1\_large}. The results clearly show that our method is consistently superior to HAC across all synthetic and real-world datasets except for \texttt{seismic\_bumps}. The reason is that our method can learn overlapping clusters while HAC cannot. Although the number of clusters in HAC is always fixed at $2n - 1$ as it learns a binary tree, our method allows more flexible clustering, resulting in a larger number of clusters as shown in Table~\ref{realdatasets}. 
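For reference, the dendrogram purity reported in Table~\ref{realdatasets} can be computed directly from a collection of clusters. The sketch below is ours, not the evaluation code used in the experiments; it assumes the hierarchy $\mathcal{H}$ contains a root cluster covering all points, and it counts ordered pairs of distinct points in the same ground truth class.

```python
def dendrogram_purity(hierarchy, ground_truth):
    """hierarchy: collection of clusters (sets of point ids), including the root;
    ground_truth: disjoint classes C_1, ..., C_P partitioning the points."""
    def lca(i, j):
        # smallest cluster of the hierarchy containing both i and j
        return min((c for c in hierarchy if i in c and j in c), key=len)
    pairs = [(i, j, C) for C in ground_truth for i in C for j in C if i != j]
    purity = sum(len(lca(i, j) & C) / len(lca(i, j)) for i, j, C in pairs)
    return purity / len(pairs)
```

If the hierarchy contains every ground truth class as a cluster, the purity is $1$; merging points from different classes before their own class is complete drives it below $1$.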
How to effectively use the lower bound on the cluster size to reduce the number of clusters is left for future work. To summarize, our results show that the proposed method is robust to the parameter setting and can obtain better-quality hierarchical structures than the standard baseline, hierarchical agglomerative clustering with Ward's method. This means that a graph representation learned by our method can be effective for further data analysis for relational reasoning. \section{Conclusions}\label{conc_section} In this paper, we have proposed a novel method that can learn a graph structured representation of numerical data. Our method first binarizes a given dataset based on nearest neighbor search and then applies formal concept analysis (FCA) to the binarized data. The extracted concept lattice corresponds to a hierarchy of clusters, which leads to a directed graph representation. We have experimentally shown that our method can obtain more accurate hierarchical clusters compared to standard hierarchical agglomerative clustering with Ward's method. \textbf{Acknowledgments}: This work was supported by JSPS KAKENHI Grant Numbers JP16K16115, JP16H02870, and JST, PRESTO Grant Number JPMJPR1855, Japan (M.S.); and JSPS KAKENHI Grant Number 15H05711 (T.W.).
\section{Introduction} With the ever-increasing applications of wireless networks, an unprecedented amount of private and sensitive information is transferred over the wireless medium. However, the broadcast nature of wireless channels leads to serious security risks. Although cryptography is the well-known security method, it has been demonstrated that even the most robust cryptography is at risk of being broken by quantum computing \cite{chen2016report}. As an alternative approach, physical layer security can achieve information-theoretic security by exploiting the inherent physical layer randomness of wireless channels \cite{zhou2013physical}. Recently, covert communication has been regarded as a new security paradigm that hides the communication process itself to achieve a stronger level of security \cite{yan2019low}. Bash et al. first derived the information-theoretic limit known as the square root law (SRL) for covert communication \cite{bash2013limits}, pointing out that over the additive white Gaussian noise (AWGN) channel, Alice can only reliably and covertly send $\mathcal{O}(\sqrt{n})$ bits to Bob over $n$ channel uses. Subsequently, the fundamental limit of covert communication has been further studied under various wireless channels, including binary symmetric channels \cite{che2013reliable}, discrete memoryless channels \cite{wang2016fundamental}, and multiple-input multiple-output (MIMO) AWGN channels \cite{abdelaziz2017fundamental}. Following this line, extensive research has been dedicated to the study of covert communication in various network scenarios. According to the mobility of network nodes, these works can be classified into studies of network scenarios with fixed nodes and studies of network scenarios with mobile nodes. 
For the network scenarios with fixed nodes, the authors in \cite{lee2015achieving} first proved that a nonzero covert rate is achievable over AWGN channels and Rayleigh channels regardless of whether the number of channel uses is finite or infinite. Based on these results, optimal power adaptation schemes were designed in \cite{li2020optimal}. Considering channel uncertainty, the work \cite{shahzad2020covert} reveals the fundamental difference in the design of covert transmission schemes between the quasi-static fading channel case and the non-fading AWGN channel case \cite{shahzad2017covert}. Jamming-assisted covert communication was also explored in different network scenarios, including those with a full-duplex jamming receiver \cite{shahzad2018achieving}, those with Poisson-point-distributed jammers \cite{soltani2018covert}, those with a randomly located warden \cite{zheng2019multi}, those with two-hop relaying \cite{bai2022covert}, and those with intelligent reflecting surface (IRS) technology \cite{LvWLDAC22,MamaghaniH22}. Since millimeter-wave (mmWave) technology can provide high bandwidth, covert communication with mmWave has been examined in some recent works. The authors in \cite{jamali2021covert} first considered a mmWave communication system with dual independent antenna arrays, and proposed a covert transmission scheme where one antenna array is used to form a beam towards Bob for data transmission and the other is used for jamming. The covert rate maximization in a mmWave system was investigated in \cite{zhang2021joint} through the joint optimization of beam training duration, training power and transmission power. A multi-user beam training strategy was proposed in \cite{zhang2022multi} to maximize the effective covert throughput. 
Besides, the authors in \cite{wang2021covert} considered a full-duplex covert mmWave communication scenario, and explored the maximization of the achievable covert rate by jointly optimizing the beamforming, transmit power, and jamming power. For the network scenarios with mobile nodes, the impact of node mobility on the covert performance in mobile ad hoc networks was first studied in \cite{im2020mobility}. Considering UAV-assisted covert communications, the authors in \cite{yang2021mode} discussed an adaptive switching scheme between half-duplex and full-duplex modes to enhance the covert capacity. Subsequently, to improve the covert performance of air-to-ground (A2G) systems, the authors in \cite{zhou2019joint} focused on the corresponding optimization of the UAV's trajectory and transmission power, and the authors in \cite{zhou2021three} explored the corresponding optimization of the UAV's placement. Then, the joint optimization of the transmission power and jamming power of a full-duplex UAV was investigated in \cite{chen2021uav} to enhance the covert performance in a two-hop UAV-relayed covert communication system. Regarding the joint optimization of trajectory and power for covert communication with UAV jamming, different optimization algorithms have been developed, e.g., the penalty successive convex approximation algorithm \cite{zhou2021uav}, the model-driven generative adversarial network algorithm \cite{li2021md}, and a geometric method \cite{rao2022optimal}. From the monitor's point of view, the authors in \cite{hu2019optimal} developed a novel detection strategy which adopts multiple antennas with beam sweeping to confront UAV covert communication. 
In \cite{wang2019secrecy}, the authors further discussed the trade-off between secrecy/covertness and efficiency of multi-hop transmission with a UAV warden, and studied the related throughput optimization by carefully designing the parameters of the network, including the coding rates, transmit power, and required number of hops. Since mmWave and Terahertz communications are expected to meet the demand of high-data-rate applications in forthcoming beyond-5G (B5G) wireless communications, the authors in \cite{zhang2020optimized} and \cite{mamaghani2022aerial} conducted some initial studies to investigate the covert performance of the UAV-aided mmWave wireless system and Terahertz wireless system, respectively. The aforementioned works help us understand covert communication with either microwave or mmWave technology. Notice that these two technologies have their own strengths and weaknesses, so some recent works explored the dual-frequency antenna design to support both bands of communications \cite{xiang2017flexible,zhang2018dual,Li2018} as well as the related issue of network performance analysis \cite{vuppala2016analysis,semiari2017joint}. These works indicate that network performance can be significantly enhanced by supporting both microwave and mmWave in the communication system. Thus, a natural question that arises is how to take full advantage of both microwave and mmWave to achieve more efficient covert communication. To this end, we explore covert communication in an A2G wireless system where the UAV is equipped with a dual-band antenna, and propose a hybrid omnidirectional microwave (OM) and directional mmWave (DM) transmission mode to improve the covert performance of the system. The main contributions of this paper are summarized as follows. 
\begin{itemize} \item We first develop theoretical frameworks for the covert performance analysis of an A2G system under both the OM and DM transmission modes, in terms of the detection error probability (DEP), the effective covert rate (ECR) and the covert Shannon capacity (CSC). Specifically, the uncertainty of line-of-sight (LoS) and non-line-of-sight (NLoS) conditions of the A2G channel due to the movement of the UAV is fully considered in the analysis. These frameworks reveal the inherent relationship between the key system parameters (i.e., the UAV's location, transmission rate, and transmission power) and the DEP, ECR and CSC. \item We then explore the covert performance optimization of the concerned A2G wireless system under both the OM and DM transmission modes. Specifically, for a given position of the UAV, we determine the optimal covert transmission power and transmission rate to achieve the maximal ECR and the maximal CSC while satisfying the DEP constraint. Based on the optimization results, we further propose a hybrid OM/DM transmission mode which allows the UAV to adaptively select between the OM and DM transmission modes at different locations to achieve the optimal covert transmission performance. \item Finally, extensive numerical results are provided to illustrate the effects of the key system parameters on the overall covert performance, and also to demonstrate that the proposed hybrid OM/DM transmission mode achieves better covert performance than the pure OM or DM transmission mode in the considered A2G system. \end{itemize} The remainder of the paper is organized as follows. Section \ref{sec2} introduces the system model and preliminaries. Section \ref{sec3} presents the concerned transmission modes and the detection strategy at Willie. The covert performance modeling of the OM and DM transmission modes is provided in Section \ref{TA_Performance} and Section \ref{MA_Performance}, respectively.
Section \ref{sec_results} presents the covert performance optimization and the hybrid OM/DM transmission mode. The numerical results are provided in Section \ref{sec_num_results}. Finally, we conclude in Section \ref{sec_conclusion}. The notations used in this paper are summarized in Table \ref{notations table}. \renewcommand{\arraystretch}{1.4} \begin{table}[th] \centering \caption{List of Notations} \label{notations table} \begin{tabular}{|m{0.13\textwidth}|m{0.31\textwidth}|} \hline \textbf{Notation} & \textbf{Description} \\ \hline \hline $P_{a}$, $d_{ij}$, $h_{i}$ & Transmission power of Alice, distance between nodes $i$ and $j$, height of node $i$ \\ \hline $d_{aw}^{\min}$, $d_{aw}^{\max}$ & Minimum distance and maximum distance between Alice and Willie\\ \hline $\mathbb{A}\in \{M,S\}$ & Main lobe and side lobe, respectively\\ \hline $\mathbb{B}\in \{L,N\}$ & LoS channel and NLoS channel, respectively\\ \hline $\mathbb{C}\in \{o,d\}$ & OM transmission mode and DM transmission mode, respectively\\ \hline $G_{ij}, G_{i}^{M}, G_{i}^{S}$ & Total antenna gain of link $i\to j$, main lobe gain, side lobe gain\\ \hline $P_{ij}^{L}, P_{ij}^{N}$ & Probability that the channel of link $i\to j$ is LoS and NLoS, respectively\\ \hline $H_{ij}^{\mathbb{C},\mathbb{B}}, h_{ij}^{\mathbb{C},\mathbb{B}},L_{ij}^{\mathbb{C},\mathbb{B}}$ & Channel coefficient, small-scale fading coefficient and path loss of link $i\to j$, respectively\\ \hline $\alpha_{\mathbb{B}}, \beta _{\mathbb{B}}, S_{\mathbb{B}}, k_{\mathbb{B}}$& Path-loss exponent, path-loss coefficient, Nakagami fading parameter, Rician factor of $\mathbb{B}$, respectively\\ \hline $\theta_{i}$, $\theta_{i}^{h}$, $\theta_{i}^{e/d}$, $\varphi_{i}$ & Elevation angle of the UAV relative to node $i$, half-power beamwidth for the azimuth orientation in the horizontal direction of node $i$, half-power beamwidth for the elevation/depression angles of node $i$, azimuth orientation in the vertical direction of node $i$ relative
to the UAV, respectively\\ \hline $\sigma _{b}^{2}$, $\sigma _{w}^{2}$, $\hat{\sigma} _{n}^{2}$ & Noise power of Bob, noise power of Willie, nominal noise power of Willie\\ \hline $\epsilon$, $\gamma_{th}$, $R_{b}$ & Covertness constraint, SNR threshold for outage, target rate, respectively.\\ \hline \end{tabular} \end{table} \section{System Model and Preliminaries} \label{sec2} \subsection{System Model} \begin{figure} [t] \centering \includegraphics[width=0.45\textwidth]{figure_1.pdf} \caption{Illustration of the concerned A2G system model.} \label{fig:system_model} \vspace{-0.3cm} \end{figure} We consider the A2G covert communication scenario shown in Fig.~\ref{fig:system_model}, where a UAV (Alice, $a$) executes a surveillance task (e.g., imaging) over a geographic area and urgently transmits critical information back to the intended user (Bob, $b$), while trying to avoid detection by a warden (Willie, $w$). Alice (resp. Bob and Willie) is equipped with integrated OM and DM antennas as in \cite{xiang2017flexible,zhang2018dual,Li2018}, such that she can send (resp. they can receive) both microwave signals and mmWave signals. Alice employs a fixed transmission power $P_a$ subject to a maximum power constraint $P_{max}$. The UAV flies cyclically along a horizontal trajectory around the surveillance area at a fixed height $h_{a}$. We let $d_{ij}$ denote the distance between node $i$ and node $j$ for $i ,j \in\{a,b,w\}$. To prevent the UAV from being detected by vision or radar, there exists a minimum distance constraint $d_{aw}^{\min}$ between Alice and Willie. Besides, due to the finite resolution of the UAV's imaging equipment, a maximum distance constraint $d_{aw}^{\max}$ is essential. \subsection{Antenna Gains} Because the UAV moves in 3D space, we adopt the 3D sectorized antenna pattern as in \cite{venugopal2016device,zhu2018secrecy}, which is essential for modeling the links from air to ground.
As shown in Fig.~\ref{fig:system_model}, $\theta_{i}^{h}$ is the half-power beamwidth for the azimuth orientation in the horizontal direction, and $\theta_{i}^{e}$ (resp. $\theta_{i}^{d}$) is the half-power beamwidth for the elevation (resp. depression) angles of the ground or air node $i$. Considering the antenna gain $G_{i}^{M}$ within the half-power beamwidth (i.e., main lobe) and the gain $G_{i}^{S}$ outside the half-power beamwidth (i.e., side lobe), the total directivity gain of DM for link $i\to j$ is determined as $G_{ij}=G_{i}^{\mathbb{A}}G_{j}^{\mathbb{A}}$, where $\mathbb{A}\in \{M,S\}$. To improve the signal quality at Bob, Alice dynamically adjusts the steering orientation of her antenna array toward Bob, such that she maximizes the antenna array gain $G_{a}^{M}$. Thus, the directional antenna gain of the Alice$\to$Bob link is determined as $G_{ab}=G_{a}^{M}G_{b}^{\mathbb{A}}$. Note that during Alice's movement, Willie may be located within the half-power beamwidth of the antenna array steered along the Alice$\to$Bob link (i.e., $\pi-\varphi_{w}-\varphi_{b} \leq \theta_{a}^{d}$), in which case the antenna array gain of Alice toward Willie is the main lobe gain $G_{a}^{M}$; otherwise, it is the side lobe gain $G_{a}^{S}$. Thus, the antenna gain of the Alice$\to$Willie link is determined as \begin{equation} \label{neq1_gaw} \begin{aligned} G_{aw}=G_{w}^{\mathbb{A}} \times\begin{cases} G_{a}^{M},& \pi-\varphi_{w}-\varphi_{b} \leq \theta_{a}^{d},\\ G_{a}^{S},& \text{otherwise}. \end{cases} \end{aligned} \end{equation} To reflect a practical setup, we adopt a uniform planar square array (UPA) to model the sectorized pattern at node $j$ (here, $j\in\{b,w\}$).
The directional antenna gain and the associated probability can be approximated as \begin{equation} \label{neq1} \begin{aligned} G_{j}^{\mathbb{A}}= \begin{cases} G_{j}^{M}=\mathcal{N}_j,& \!P_{j}^{M}\!= \frac{\theta_{j}^{h}}{2\pi}\times \frac{\theta_{j}^{e}}{\pi},\\ G_{j}^{S}=\frac{\sqrt{\mathcal{N}_j}-\frac{\sqrt{3}}{2\pi}\mathcal{N}_j\sin{\left( \frac{3\pi}{2\sqrt{\mathcal{N}_j}}\right)}}{\sqrt{\mathcal{N}_j}-\frac{\sqrt{3}}{2\pi}\sin{\left( \frac{3\pi}{2\sqrt{\mathcal{N}_j}}\right)}} ,& P_{j}^{S}= 1- \frac{\theta_{j}^{h}}{2\pi}\times \! \frac{\theta_{j}^{e}}{\pi}, \end{cases} \end{aligned} \end{equation} where $\mathcal{N}_j$ denotes the number of antenna elements at node $j$, and $P_{j}^{\mathbb{A}}$ denotes the associated probability. \subsection{Propagation Model} To characterize the A2G wireless channel, the channel from the UAV to Bob (Willie) is modeled as a combination of LoS and NLoS channels. Specifically, we let $P_{aj}^{\mathbb{B}}$ ($\mathbb{B}\in \{L,N\}, j \in \{b, w\}$) denote the probability that the channel from the UAV to node $j$ is LoS or NLoS, where $P_{aj}^{L}=(1+\sigma\exp{(-f[\theta_{j}-\sigma])})^{-1}$ and $P_{aj}^{N}\triangleq 1-P_{aj}^{L}$ as in \cite{andrews2016modeling,al2014optimal}, where $\theta_{j}=\frac{180}{\pi}\arcsin(\frac{h_{a}}{d_{aj}})$ is the elevation angle (in degrees) of the UAV relative to node $j$, $d_{aj} =\sqrt{(x_{a}-x_{j})^2+h^2_a}$ is the distance between the UAV and node $j$, and $\sigma$ and $f$ are S-curve parameters that depend on the communication environment. In this work, we consider both large-scale fading and small-scale fading. Thus, the channel coefficient for link $a\to j$ is denoted as $H_{aj}^{\mathbb{C},\mathbb{B}}=h^{\mathbb{C},\mathbb{B}}_{aj}\sqrt{L_{aj}^{\mathbb{C},\mathbb{B}}}$, where $\mathbb{C} \in \{o,d\}$ denotes adopting the OM or DM, and $h^{\mathbb{C},\mathbb{B}}_{aj}$ and $L_{aj}^{{\mathbb{C},\mathbb{B}}}$ denote the small-scale fading coefficient and path loss, respectively.
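As a quick numerical illustration of the UPA gain model in (\ref{neq1}), the following sketch evaluates the main-lobe/side-lobe gains and the associated lobe probabilities; the $2\times2$ array size and the $30^{\circ}$ beamwidths are purely illustrative assumptions, not parameters of this paper:

```python
import numpy as np

def upa_gains(n_elem):
    """Main-lobe and side-lobe gains of the sectorized UPA pattern:
    G^M = N_j elements, and G^S from the closed-form ratio in (neq1)."""
    s = np.sqrt(n_elem)
    c = np.sqrt(3) / (2 * np.pi)
    sin_term = np.sin(3 * np.pi / (2 * s))
    g_side = (s - c * n_elem * sin_term) / (s - c * sin_term)
    return n_elem, g_side

def lobe_probs(theta_h, theta_e):
    """Probabilities that a peer node falls in the main lobe / side lobe."""
    p_main = (theta_h / (2 * np.pi)) * (theta_e / np.pi)
    return p_main, 1.0 - p_main

# Assumed illustrative values: a 2x2 array and 30-degree beamwidths.
g_main, g_side = upa_gains(4)
p_main, p_side = lobe_probs(np.pi / 6, np.pi / 6)
print(g_main, round(float(g_side), 3))   # 4 0.676
```

For this small array the side-lobe gain stays below unity while the main-lobe gain equals the element count, matching the intuition that directivity concentrates power in the main lobe.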
We have $L_{aj}^{\mathbb{C},\mathbb{B}}=\beta^{\mathbb{C}}_{\mathbb{B}}d_{aj}^{-\alpha^{\mathbb{C}}_{\mathbb{B}}}$, where $\beta^{\mathbb{C}}_{\mathbb{B}}$ and $\alpha^{\mathbb{C}}_{\mathbb{B}}$ are the constant path-loss coefficient and the path-loss exponent, respectively. \subsubsection{Small-scale fading of the microwave channel} Since the received signal may combine a LoS component with multipath NLoS scattering, the microwave channels are modeled by Rician fading to characterize the propagation effect. Thus, the fading coefficient $h^{o,\mathbb{B}}_{ij}$ can be described as \begin{equation} \label{Eq:rician_h} \begin{aligned} h^{o,\mathbb{B}}_{ij}=\sqrt{\frac{k_{\mathbb{B}}}{k_{\mathbb{B}}+1}}\hat{h}_{ij}+\sqrt{\frac{1}{k_{\mathbb{B}}+1}}\tilde{h}_{ij}, \end{aligned} \end{equation} where $|\hat{h}_{ij}|=1$ and $\tilde{h}_{ij}\sim\mathcal{CN}(0,1)$ denote the LoS component and the NLoS Rayleigh-fading component, respectively, and $k_{\mathbb{B}}$ denotes the Rician factor, which is determined as \cite{shimamoto2006channel,you20193d} \begin{equation} \label{Eq:rician_k} \begin{aligned} k_{\mathbb{B}}(\theta _{j})=\begin{cases} \eta_{1} \exp{(\eta_{2}\theta _{j})},& LoS, \\0 ,& NLoS, \end{cases} \end{aligned} \end{equation} where $\eta_{1}=k_0$ and $\eta_{2}=\frac{2}{\pi} \ln{(\frac{k_\frac{\pi}{2}}{k_0})}$ are constant coefficients, and $k_0=k_{\mathbb{B}}(0)$ and $k_\frac{\pi}{2}=k_{\mathbb{B}}(\frac{\pi}{2})$ depend on the specific environment.
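The Rician construction in (\ref{Eq:rician_h}) and (\ref{Eq:rician_k}) can be sampled directly. A minimal sketch, assuming purely illustrative environment constants $k_0=1$ and $k_{\pi/2}=15$ (not values from this paper) and elevation angles in radians:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative environment constants (not from this paper):
k0, k_pi2 = 1.0, 15.0
eta1 = k0
eta2 = (2.0 / np.pi) * np.log(k_pi2 / k0)

def rician_k(theta, los=True):
    """Rician factor k_B(theta): exponential in the elevation angle
    for a LoS link, and 0 (pure Rayleigh fading) for a NLoS link."""
    return eta1 * np.exp(eta2 * theta) if los else 0.0

def sample_h(theta, n, los=True):
    """Draw n samples of the microwave fading coefficient: a
    unit-modulus LoS term plus a CN(0,1) scattered term, weighted
    by sqrt(k/(k+1)) and sqrt(1/(k+1))."""
    k = rician_k(theta, los)
    h_hat = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))
    h_tilde = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
    return np.sqrt(k / (k + 1.0)) * h_hat + np.sqrt(1.0 / (k + 1.0)) * h_tilde

# The k/(k+1) and 1/(k+1) weights normalize the average power to one.
h = sample_h(np.pi / 3, 200_000)
print(float(np.mean(np.abs(h) ** 2)))   # close to 1.0
```

The empirical second moment of the samples stays near unity, consistent with the power normalization built into the model.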
Thus, the fading coefficient $|h^{o,\mathbb{B}}_{ij}|^{2}$ follows a non-central chi-squared distribution with two degrees of freedom and $\mathbb{E}[|h^{o,\mathbb{B}}_{ij}|^{2}]=1$, and the probability density function (PDF) of $|h^{o,\mathbb{B}}_{ij}|^{2}$ is expressed as \cite{simon2005digital} \begin{equation} \label{Eq:rician_pdf} \begin{aligned} f_{|h^{o,\mathbb{B}}_{ab}|^{2}}(x)\!=\!(k_{\mathbb{B}}\!+\!1)e^{-k_{\mathbb{B}}-(k_{\mathbb{B}}+1)x}I_{0}(2\sqrt{k_{\mathbb{B}}(k_{\mathbb{B}}\!+\!1)x}), \end{aligned} \end{equation} where $I_{0}(\cdot)$ is the modified Bessel function of the first kind and zeroth order. The corresponding CDF of $|h^{o,\mathbb{B}}_{ij}|^{2}$ can be expressed as \cite{simon2005digital,you20193d} \begin{equation} \label{Eq:rician_cdf} \begin{aligned} F_{|h^{o,\mathbb{B}}_{ij}|^{2}}(x)=1\!-Q_{1}\!\!\left (\! \sqrt{2k_{\mathbb{B}}},\sqrt{2(k_{\mathbb{B}}+1)x} \right ), \end{aligned} \end{equation} where $Q_{1}(a,b)$ is the standard Marcum Q-function given by \begin{equation} \label{Eq:rician_q} \begin{aligned} Q_{1}(a,b)=\int_{b}^{\infty}xI_{0}(ax)\exp{\left(-\frac{x^{2}+a^{2}}{2}\right)}dx. \end{aligned} \end{equation} \subsubsection{Small-scale fading of the mmWave channel} The mmWave channels are characterized by Nakagami-$m$ fading with shape parameter $S_{\mathbb{B}} \geq 1/2$ and scale parameter $\Omega =\mathbb{E}[|h^{d,\mathbb{B}}_{ij}|^{2}]=1$ as in \cite{bai2014coverage,andrews2016modeling,zhu2018secrecy}. Thus, the fading coefficient $|{h}^{d,\mathbb{B}}_{ij}|^{2}$ follows a normalized gamma distribution with shape and scale parameters $S_{\mathbb{B}}$ and $1/S_{\mathbb{B}}$, respectively. Therefore, the PDF is given by \cite{simon2005digital,jamali2021covert} \begin{equation} \label{eq_mmwave_pdf} \begin{aligned} f_{|h^{d,\mathbb{B}}_{ab}|^{2}}(x)=S_{\mathbb{B}}^{S_{\mathbb{B}}}x^{S_{\mathbb{B}}-1}e^{-S_{\mathbb{B}}x}/\Gamma(S_{\mathbb{B}}).
\end{aligned} \end{equation} According to \cite[Lemma 6]{bai2014coverage} and Alzer's lemma \cite{alzer1997some}, the CDF of a normalized gamma RV is tightly approximated by $[1-\exp{(-\xi _{\mathbb{B}}x)}]^{S_{\mathbb{B} }}$, where $\xi _{\mathbb{B}}=S_{\mathbb{B} }(S_{\mathbb{B} }!)^{-1/S_{\mathbb{B}}}$. By applying the binomial theorem under the assumption that $S_{\mathbb{B} }$ is an integer \cite{bai2014coverage}, the CDF of $|h^{d,\mathbb{B}}_{ij}|^{2}$ is given by \begin{equation} \label{eq10} \begin{aligned} F_{|h^{d,\mathbb{B}}_{ij}|^{2}}(x)=\sum_{r=0}^{S_{\mathbb{B} }}\binom{S_{\mathbb{B} }}{r}(-1)^{r}e^{-r\xi _{\mathbb{B}}x}. \end{aligned} \end{equation} \subsection{Noise Uncertainty Model}\label{sec_noise_uncertainty} In practice, the background noise is uncertain due to its complex composition (including thermal noise, quantization noise, imperfect filters, ambient wireless signals, etc.), the dynamic environment and calibration errors. Therefore, it is practical to exploit noise uncertainty to realize covert communication in mobile communication scenarios. In this work, we adopt the typical bounded noise uncertainty model \cite{shellhammer2006performance}, where the exact noise power $\sigma _{i}^{2}$ of node $i$ lies in a finite range around the nominal noise power $\hat{\sigma} _{n}^{2}$. The resulting log-uniform distribution of $\sigma _{i}^{2}$ under the bounded uncertainty model is given by \begin{align} \label{Eq:noise_un_pdf} f_{\sigma _{i}^{2}}(x)=\begin{cases} \frac{1}{2\ln{(\rho)}x}, & \frac{1}{\rho}\hat{\sigma} _{n}^{2}\leq x\leq \rho\hat{\sigma}_{n}^{2},\\ 0, & \text{otherwise}, \end{cases} \end{align} where $\rho>1$ is the parameter that quantifies the level of the uncertainty. \subsection{Definitions}\label{param_definitions} Some basic definitions involved in this work are as follows.
\textbf{Detection Error Probability (DEP) $\bm{P_{ew}}$:} Let $H_{0}$ (the null hypothesis) and $H_{1}$ (the alternative hypothesis) denote that Alice does not transmit and transmits covert information, respectively, and let $D_{0}$ and $D_{1}$ denote Willie's decisions that the UAV is not transmitting and is transmitting covert information, respectively. The DEP $P_{ew}$ is defined as the probability that Willie makes a wrong decision on whether or not the UAV is transmitting covert messages, which is expressed as \begin{equation} P_{ew}=P_{FA}+P_{MD}\label{Eq:Pew}, \end{equation} where $P_{FA}=\mathbb{P}(D_{1}|H_{0})$ is the false alarm probability that Willie decides in favor of $H_{1}$ while $H_{0}$ is true, and $P_{MD}=\mathbb{P}(D_{0}|H_{1})$ is the missed detection probability that Willie decides in favor of $H_{0}$ while $H_{1}$ is true. \textbf{Outage Probability $\bm{P_{out}}$:} The outage probability $P_{out}$ is defined as the probability that the channel capacity $C_{ab}=W\log_2(1+\gamma_{ab})$ cannot support a given target transmission rate $R_{b}$, i.e., $P_{out}=\mathbb{P}(C_{ab}<R_{b})$, which can also be written as $P_{out}=\mathbb{P}(\gamma_{ab}<\gamma_{th})$, where $W$ is the bandwidth of the $a\to b$ link, $\gamma_{ab}$ is the SNR, and $\gamma_{th}=2^{\frac{R_{b}}{W}}-1$ is the SNR threshold. \textbf{Effective Covert Rate (ECR) $\bm{R_{ab}}$:} The effective covert rate $R_{ab}$ is defined as the average amount of \emph{successfully} transmitted information, which can be written as $R_{ab}= R_{b} \times (1-P_{out})$, where $R_{b}$ is the target covert transmission rate and $P_{out}$ is the outage probability.
\textbf{Covert Shannon (Ergodic) Capacity (CSC) $\bm{C_{ab}}$:} The covert Shannon capacity is defined as the maximum time-average rate of messages delivered from the transmitter to the destination, which is expressed as $C_{ab}=\mathbb{E}[W\log_2(1+\gamma_{ab})]$, where $W$ is the bandwidth and $\gamma_{ab}$ is the SNR. \section{Transmission Mode and Detection Strategy} \label{sec3} In this section, we first introduce the two transmission modes of the UAV (the OM transmission mode and the DM transmission mode), and then present the detection strategy at Willie. \subsection{OM Transmission Mode} When the UAV adopts the OM transmission mode, the signal $y_{b}^{o}(k)$ ($k \in\{ 1, 2, \cdots, N\}$) received by Bob at the $k$-th channel use is given by \begin{equation} \label{2eq2_bob} \begin{aligned} y_{b}^{o}(k) =\sqrt{P_{a}}H_{ab}^{o,\mathbb{B}}x_{c}(k)+n_{b}(k), \end{aligned} \end{equation} and the signal $y_{w}^{o}(k)$ received by Willie is given by \begin{equation} \label{2eq2} \begin{aligned} y_{w}^{o}(k) =\sqrt{P_{a}}H_{aw}^{o,\mathbb{B}}x_{c}(k)+n_{w}(k), \end{aligned} \end{equation} where $x_{c}$ is the desired signal, which follows a zero-mean Gaussian distribution with $\mathbb{E}[|x_{c}(k)|^{2}]=1$. Besides, $n_{b}(k)$ and $n_{w}(k)$ are the received noise at Bob and Willie, respectively, whose power follows the PDF in (\ref{Eq:noise_un_pdf}).
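The received-signal model above is easy to simulate once the log-uniform noise power of (\ref{Eq:noise_un_pdf}) is sampled by exponentiating a uniform draw. A minimal sketch, with assumed illustrative values of $P_a$, the composite channel coefficient, $\hat{\sigma}_n^2$ and $\rho$ (none of which are parameters of this paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_noise_power(sigma_hat2, rho, n):
    """Bounded noise-uncertainty model: ln(sigma^2) is uniform on
    [ln(sigma_hat2/rho), ln(rho*sigma_hat2)], which yields the
    log-uniform density 1/(2*ln(rho)*x) on that interval."""
    u = rng.uniform(np.log(sigma_hat2 / rho), np.log(rho * sigma_hat2), n)
    return np.exp(u)

# Assumed illustrative parameters:
sigma_hat2, rho = 1.0, 2.0
s2 = sample_noise_power(sigma_hat2, rho, 100_000)

# One block of Bob's OM observations y(k) = sqrt(Pa)*H*x(k) + n(k),
# with zero-mean Gaussian codewords x(k) of unit average power:
Pa, H_ab, N = 1.0, 0.3 + 0.1j, 1000
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2.0)
n = np.sqrt(s2[0] / 2.0) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = np.sqrt(Pa) * H_ab * x + n
avg_power = float(np.mean(np.abs(y) ** 2))   # near Pa*|H_ab|^2 + s2[0]
```

The DM mode differs only by the directional gain factor $G_{ab}$ inside the square root of the transmit term.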
\subsection{DM Transmission Mode} When the UAV adopts the DM transmission mode, the signal $y_{b}^{d}(k)$ received by Bob at the $k$-th ($k \in \{1, 2, \cdots, N\}$) channel use is given by \begin{equation} \label{eq2_bob} \begin{aligned} y_{b}^{d}(k) =\sqrt{P_{a}G_{ab}}H^{d,\mathbb{B}}_{ab}x_{c}(k)+n_{b}(k), \end{aligned} \end{equation} and the signal $y_{w}^{d}(k)$ received by Willie is given by \begin{equation} \label{eq2} \begin{aligned} y_{w}^{d}(k) =\sqrt{P_{a}G_{aw}}H^{d,\mathbb{B}}_{aw}x_{c}(k)+n_{w}(k). \end{aligned} \end{equation} The desired signal $x_{c}$ and the noise obey the same distributions as in the OM transmission mode. \subsection{Detection Strategy at Willie} Based on the observations over all time slots, Willie attempts to determine whether Alice conducts a transmission or not. According to \cite{shahzad2017covert}, the optimal decision rule at Willie to minimize the detection error probability is determined as \begin{equation} \label{neq2} \begin{aligned} T_{w}=\lim_{N\rightarrow \infty}\frac{P_{tol}}{N}\mathop{\gtrless}\limits_{H_0}^{H_1} \tau, \end{aligned} \end{equation} \noindent where $P_{tol}=\textstyle\sum_{k=1}^{N}|y_{w}(k)|^{2}$ is the total power received by Willie in a given block, $\tau$ is the decision threshold, and $N$ is the total number of channel uses in a block. According to (\ref{2eq2}), when Alice adopts the OM transmission mode, the test statistic $T_{w}^{o}$ at Willie is given by \begin{align} \label{Eq:Tw_TA} T_{w}^{o}\!=\!\begin{cases} \sigma _{w}^{2},\!\!\! &\text{$H_0$} \\ P_{\!a}L_{aw}^{o,\mathbb{B}}|h^{o,\mathbb{B}}_{aw}|^{2}\!+\sigma _{w}^{2}.\!\!\!&\text{$H_1$} \end{cases} \end{align} Similarly, according to (\ref{eq2}), when Alice adopts the DM transmission mode, the test statistic $T_{w}^{d}$ at Willie is given by \begin{align} \label{Eq:Tw} T_{w}^{d}\!=\!\!\begin{cases} \sigma _{w}^{2},\!\! &\text{$H_0$} \\ P_{\!a}G_{\!aw}L_{aw}^{d,\mathbb{B}}|h^{d,\mathbb{B}}_{aw}|^{2}\!+\sigma_{w}^{2}.
\!\!&\text{$H_1$} \end{cases} \end{align} \section{Covert Performance Analysis under the OM Transmission Mode}\label{TA_Performance} In this section, we derive the optimal detection threshold and the corresponding minimum DEP at Willie, the expected minimum DEP from Alice's perspective, and finally the ECR and CSC under the OM transmission mode. \subsection{Detection Error Probability} To determine the DEP at Willie, we first analyze the false alarm probability and the missed detection probability. Based on (\ref{neq2}) and (\ref{Eq:Tw_TA}), the false alarm probability $P_{FA}^{o}$ is given by \begin{align} \label{2eq5_NJ} P_{FA}^{o} =\mathbb{P}(T_{w}^{o}>\tau |H_{0})=\mathbb{P}\left(\sigma _{w}^{2}\frac{\chi_{2N}^{2}}{N}>\tau\right), \end{align} where $\chi_{2N}^{2}$ denotes a random variable following a chi-squared distribution with $2N$ degrees of freedom. By the strong law of large numbers, $\chi_{2N}^{2}/N$ converges to $1$ as $N\to\infty$. Based on Lebesgue's dominated convergence theorem \cite{browder2012mathematical}, $P_{FA}^{o}$ can be derived as \begin{align} \label{eq5_NJ} P_{FA}^{o} & =\mathbb{P}(\sigma _{w}^{2}>\tau ) \notag \\ & =\begin{cases} 1, & \tau < \frac{\hat{\sigma}_{n}^{2}}{\rho}, \\ 1-\frac{\ln(\rho\tau)-\ln(\hat{\sigma}_{n}^{2})}{2\ln(\rho)}, & \frac{\hat{\sigma}_{n}^{2}}{\rho}\leq \tau \leq \rho\hat{\sigma}_{n}^{2}, \\ 0, & \tau > \rho\hat{\sigma}_{n}^{2}. \end{cases} \end{align} Similarly, the missed detection probability can be derived as \begin{align} \label{eq6_NJ} P_{MD}^{o}\!&=\mathbb{P}(T_{w}^{o}<\tau |H_{1}) =\mathbb{P}(k_{a}^{o}+\sigma _{w}^{2}<\tau ) \notag\\ &=\begin{cases} 1, & \tau \!>\!k_{a}^{o}\!+\!\rho\hat{\sigma}_{n}^{2},\\ \frac{\ln(\rho(\tau -k_{a}^{o}))-\ln(\hat{\sigma}_{n}^{2})}{2\ln(\rho)},& k_{a}^{o}\!+\!\frac{\hat{\sigma}_{n}^{2}}{\rho}\leq \tau \!\leq \! k_{a}^{o}\!+\!\rho\hat{\sigma}_{n}^{2},\\ 0 , & \tau\!<\!
k_{a}^{o}\!+\!\frac{\hat{\sigma}_{n}^{2} }{\rho}, \end{cases} \end{align} where $k_{a}^{o}=P_{a}L_{aw}^{o,\mathbb{B}}|{h}^{o,\mathbb{B}}_{aw}|^{2}$. According to (\ref{eq5_NJ}) and (\ref{eq6_NJ}), we can characterize the optimal detection threshold and the minimum DEP for Willie under the OM transmission mode as follows. \begin{lemma} \label{2th1_NJ} When Alice adopts the OM transmission mode, the optimal threshold $\tau^{*}$ for Willie's detector is given by \begin{equation} \label{2eq3_NJ} \begin{aligned} \tau^{*}\in\begin{cases} [\rho\hat{\sigma}_{n}^{2},k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho}],& \rho\hat{\sigma}_{n}^{2}< k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho},\\ k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho} , & \rho\hat{\sigma}_{n}^{2}\geq k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho}, \end{cases} \end{aligned} \end{equation} and the corresponding minimum DEP $P_{ew}^{*,o}$ is given by \begin{equation} \label{2eq4_NJ} \begin{aligned} P_{ew}^{*,o}=\begin{cases} 0 , & \rho\hat{\sigma}_{n}^{2}< k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho}, \\ 1-\frac{\ln(\rho k_{a}^{o}+\hat{\sigma}_{n}^{2})-\ln( \hat{\sigma}_{n}^{2})}{2\ln(\rho)} , & \rho\hat{\sigma}_{n}^{2} \geq k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho}, \end{cases} \end{aligned} \end{equation} where $k_{a}^{o}=P_{a}L_{aw}^{o,\mathbb{B}}|{h}^{o,\mathbb{B}}_{aw}|^{2}$, and $\rho$ and $\hat{\sigma}_{n}^{2}$ are the uncertainty parameter and the nominal noise power, respectively, as defined in Section \ref{sec_noise_uncertainty}. \end{lemma} \begin{proof} To find the optimal threshold, we consider the optimization problem ${\min}_{\tau} \ P_{ew}^{o}=P_{FA}^{o}+P_{MD}^{o}$. According to (\ref{eq5_NJ}) and (\ref{eq6_NJ}), the following two cases are considered. \emph{Case I}: When $\rho\hat{\sigma}_{n}^{2}< k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho}$, for any $\tau \in [\rho\hat{\sigma}_{n}^{2},k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho}]$, we have $P_{ew}^{o}=P_{FA}^{o}+P_{MD}^{o}=0$.
\emph{Case II}: When $\rho\hat{\sigma}_{n}^{2}\geq k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho}$, the DEP $P_{ew}^{o}=P_{FA}^{o}+P_{MD}^{o}$ can be written as \setlength\arraycolsep{1pt} \begin{footnotesize} \begin{subnumcases} {P_{ew}^{o}=\!} 1, & $\tau<\frac{\hat{\sigma}_{n}^{2} }{\rho}$, \qquad\label{eq7a_NJ} \\ 1\!- \!\frac{\ln(\rho\tau)\!-\!\ln(\hat{\sigma}_{n}^{2})}{2\ln(\rho)}, & $\frac{\hat{\sigma}_{n}^ {2}}{\rho}\! \leq \!\tau < k_{a}^{o}\!+\!\frac{\hat{\sigma}_{n}^{2}}{\rho}$, \qquad\label{eq7b_NJ} \\ 1\!-\!\frac{\ln(\tau)\!-\!\ln(\tau-k_{a}^{o})}{2\ln(\rho)},& $k_{a}^{o}\!+\!\frac{\hat{\sigma}_{n}^{2}}{\rho}\! \leq \!\tau \!\leq\!\rho\hat{\sigma}_{n}^{2}$, \qquad\label{eq7c_NJ} \\ \frac{\ln(\rho(\tau\! -\! k_{a}^{o}))\!-\!\ln(\hat{\sigma}_{n}^{2})}{2\ln(\rho)},& $\rho\hat{\sigma}_{n}^{2}\!< \!\tau \!\leq \! k_{a}^{o}\!+\!\rho\hat{\sigma}_{n}^{2}$, \qquad\label{eq7d_NJ} \\ 1, & $\tau >k_{a}^{o}\!+\!\rho\hat{\sigma}_{n}^{2}$ . \qquad\label{eq7e_NJ} \end{subnumcases} \end{footnotesize}\normalsize We can see from (\ref{eq7b_NJ}) that, when $\frac{\hat{\sigma}_{n}^ {2}}{\rho} \leq \tau < k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho}$, $P_{ew}^{o}$ is a decreasing function of $\tau$; thus, at $\tau = k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho}$, $P_{ew}^{o}$ attains its minimum, which equals (\ref{eq7c_NJ}). From (\ref{eq7d_NJ}), when $\rho\hat{\sigma}_{n}^{2}< \tau \leq k_{a}^{o}+\rho\hat{\sigma}_{n}^{2}$, $P_{ew}^{o}$ is an increasing function of $\tau$; thus, at $\tau = \rho\hat{\sigma}_{n}^{2}$, $P_{ew}^{o}$ attains its minimum, which equals (\ref{eq7c_NJ}). Overall, the DEP attains its minimum value $P_{ew}^{*,o}$ for Willie if and only if $\tau\in[k_{a}^{o}\!+\!\frac{\hat{\sigma}_{n}^{2}}{\rho},\rho\hat{\sigma}_{n}^{2}]$. Taking the first-order derivative of $P_{ew}^{o}$ in (\ref{eq7c_NJ}) with respect to $\tau$, we have \begin{align}\label{Eq:pew_derivative} \frac{\partial P_{ew}^{o}}{\partial \tau} =\frac{k_{a}^{o}}{2\tau(\tau-k_{a}^{o})\ln{(\rho)}} >0.
\end{align} Hence, $P_{ew}^{o}$ is monotonically increasing on this interval, so the optimal threshold for Willie is $\tau^{*}= k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho}$, and the corresponding minimum DEP $P_{ew}^{*,o}$ is \begin{align}\label{Eq:minimum_pew} P_{ew}^{*,o}=1-\frac{\ln(\rho k_{a}^{o}+\hat{\sigma}_{n}^{2})-\ln( \hat{\sigma}_{n}^{2})}{2\ln(\rho)}. \end{align} Based on Case I and Case II, we have (\ref{2eq3_NJ}) and (\ref{2eq4_NJ}). \end{proof} Since Alice has only the statistical channel state information of the Alice$\to$Willie link, we use the expected value of $P_{ew}^{*,o}$ to evaluate the covertness. From (\ref{2eq4_NJ}), we note that computing $\mathbb{E}[P_{ew}^{*,o}]$ requires integrating over the distribution of $|h^{o,\mathbb{B}}_{aw}|^{2}$. To this end, we first present the following lemma. \begin{lemma}\label{lemma:calu2} If $h^{o,\mathbb{B}}_{aw}$ $(\mathbb{B}\in \{L,N\})$ is the fading coefficient of the microwave channel, we have \begin{scriptsize} \begin{align}\label{Eq:calu2} \int_{0}^{a}\!{x}f_{|h^{ o,\mathbb{B}}_{ aw}|^{2}}({x})d{x}\!&=\frac{\gamma\left ( \frac{2}{\nu(\sqrt{2k_{\mathbb{B}}})}, \left[2a(k_{\mathbb{B}}\!+\!1)\right]^\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}e^{\mu(\sqrt{2k_{\mathbb{B}}})} \right )}{(k_{\mathbb{B}}+\!1)\nu(\sqrt{2k_{\mathbb{B}}})e^{\frac{2\mu(\sqrt{2k_{\mathbb{B}}})}{\nu(\sqrt{2k_{\mathbb{B}}})}}} \notag \\ & \qquad-a\exp{\left(\!-\left[2a(k_{\mathbb{B}}\!+\!1)\right]^\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}e^{\mu(\sqrt{2k_{\mathbb{B}}})}\right)}, \end{align} \end{scriptsize}\normalsize where $a\ge0$, $f_{|h^{ o,\mathbb{B}}_{ aw}|^{2}}$ is the PDF of ${|h^{ o,\mathbb{B}}_{ aw}|^{2}}$, $\gamma(\cdot,\cdot)$ is the lower incomplete gamma function, and $\mu(x)$ and $\nu(x)$ are the polynomial expressions of $x$ from \cite{mishra20162,bocus2013approximation}, which are respectively given by \begin{scriptsize} \begin{align}\label{Eq:s2_rician_q_u_NJ} \mu(x) \triangleq\! \begin{cases} -\ln2, & x=0 , \\ -3.0888\!
\times \!\!10^{-10}x^{6}\!+\!1.8362 \!\times \!\!10^{-7}\!x^{5} \\ -3.7185\! \times \!\!10^{-5}x^{4} \!+\!3.4103 \!\times \!\!10^{-3}x^{3} \\ -0.1624\! \times \!\!x^{2} \!-\!1.4318x\!+\!0.7409, & 10 \leq x \leq 8000 , \end{cases} \end{align} \end{scriptsize}\normalsize \vspace*{-0.4cm} \begin{scriptsize} \begin{align}\label{Eq:s2_rician_q_v_NJ} \nu (x) \triangleq \begin{cases} 2, & x=0 , \\ 5.1546 \!\times \!10^{-11}x^{6}\!-\!3.1961\!\times\!10^{-8}x^{5} \\ +6.3859 \!\times \!10^{-6}x^{4}\!-\!5.4159\!\times \!\!10^{\!-4}x^{3} \\ +1.9833 \!\times \!10^{-2}x^{2}\!+\!0.9044x\!+\!0.9439 , & 10 \leq x \leq 8000 . \end{cases} \end{align} \end{scriptsize} \end{lemma} \begin{proof} The detailed proof is presented in Appendix~\ref{nnap1}. \end{proof} To account for the uncertainty of the LoS/NLoS channels, the probability $P^{\mathbb{B}}_{aw}$ must be included in the calculation of $\mathbb{E}[P_{ew}^{*,o}]$. Based on Lemma~\ref{lemma:calu2}, we can derive $\mathbb{E}[P_{ew}^{*,o}]$ as in the following theorem. \begin{theorem} \label{2th2_NJ} When Alice adopts the OM transmission mode, the expected minimum DEP $\mathbb{E}[P_{ew}^{*,o}]$ from Alice's perspective is given by (\ref{2eqn1_NJ}), \begin{figure*}[t] \begin{small} \begin{equation} \label{2eqn1_NJ} \mathbb{E}[P_{ew}^{*,o}]=\smashoperator{\sum_{\mathbb{B}\in\{L,N\}}}P_{aw}^{\mathbb{B}} (1-\Theta_\mathbb{B}^{o}) \left\{1 - \frac{1}{2\ln(\rho)}\left\{\ln\!\left[\rho P_{a}L_{aw}^{o,\mathbb{B}} \left( \frac{\!\gamma \!\left ( \frac{2}{\nu(\sqrt{2k_{\mathbb{B}}})}, [2\varrho^{o}(k_{\mathbb{B}}\!+\!1)]^\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}e^{\mu(\sqrt{2k_{\mathbb{B}}})} \right )}{(k_{\mathbb{B}}\!+\!1)\nu(\sqrt{2k_{\mathbb{B}}})e^{\frac{2\mu(\sqrt{2k_{\mathbb{B}}})}{\nu(\sqrt{2k_{\mathbb{B}}})}}} -\varrho^{o} \Theta_\mathbb{B}^{o}\!\right ) \!+\!\hat{\sigma}_{n}^{2} \right]\!-\ln( \hat{\sigma}_{n}^{2}) \!\right\} \right\}.
\end{equation} \end{small} \rule[1cm]{\textwidth}{0.04em} \vspace*{-0.6cm} \end{figure*} where $\varrho^{o}=\frac{(\rho^{2}-1)\hat{\sigma}_{n}^{2}}{\rho P_{a}L_{aw}^{o,\mathbb{B}}}$ and $\Theta_\mathbb{B}^o$ is defined as \begin{equation} \label{eqn22} \Theta_\mathbb{B}^{o}= \exp{\left(-\left[2\varrho^{o}(k_{\mathbb{B}}+1)\right]^\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}e^{\mu(\sqrt{2k_{\mathbb{B}}})}\right)}. \end{equation} \end{theorem} \begin{proof} For convenience, let us write $k_{1}^{o}=\rho\hat{\sigma}_{n}^{2}$ and $k_2^{o}=k_{a}^{o}+\frac{\hat{\sigma}_{n}^{2}}{\rho}$. According to (\ref{2eq4_NJ}) in Lemma~\ref{2th1_NJ}, we have \begin{align} \label{2eq8_NJ} \mathbb{E}&[P_{ew}^{*,o}] \!=\!\mathbb{E}_{k_{1}^{o}< k_{2}^{o}}[P_{ew}^{*,o}]\mathbb{P}(k_{1}^{o}<k_{2}^{o})\!+\!\mathbb{E}_{k_{1}^{o}\geq k_{2}^{o}}[P_{ew}^{*,o}]\mathbb{P}(k_{1}^{o}\!\geq\! k_{2}^{o}) \notag \\ &\!=\!\!\mathbb{P}(k_{1}^{o}\!\geq\! k_{2}^{o}\!)\!\!\left(\!\!1\!\!-\!\!\frac{\ln(\rho P_{a}L_{aw}^{o,\mathbb{B}}\mathbb{E}_{k_{1}^{o}\geq k_{2}^{o}}\!\!\left[|{h}^{o,\mathbb{B}}_{aw}|^{2}\right]\!\!+\!\hat{\sigma}_{n}^{2})\!-\!\ln(\hat{\sigma}_{n}^{2})}{2\ln(\rho)}\!\right)\!, \end{align} where $\mathbb{P}(k_{1}^{o}\geq k_{2}^{o})$ can be derived as \begin{align} \label{2eq9_NJ} \mathbb{P}(k_{1}^{o}\geq k_{2}^{o})&=\mathbb{P}\left(\rho\hat{\sigma}_{n}^{2} \geq P_{a}L_{aw}^{o,\mathbb{B}}|{h}^{o,\mathbb{B}}_{ aw}|^{2}+\frac{\hat{\sigma}_{n}^{2}}{\rho}\right)\notag\\ &=\mathbb{P}\left(|{h}^{o,\mathbb{B}}_{aw}|^{2}\leq \varrho^{o} \right) \notag\\ &=1-Q_{1}\left( \sqrt{2k_{\mathbb{B}}},\sqrt{2(k_{\mathbb{B}}+1)\varrho^{o}} \right).
\end{align} To gain deeper analytical insight into the average value of $P_{ew}^{*,o}$, a tight exponential-type approximation of the standard Marcum Q-function $Q_{1}(\cdot,\cdot)$ is adopted, which is expressed as \begin{align} \label{Eq:s2_rician_q_approx} Q_{1}(x,y)\approx \exp{(-e^{\mu(x)}y^{\nu(x)})}, \end{align} where $\mu(x)$ and $\nu(x)$ are defined piecewise for $x=0$ and $10\leq x \leq 8000$, a range that covers the Rician factors $k_{\mathbb{B}}$ in the concerned system; in this work, $\mu(x)$ and $\nu(x)$ are given by (\ref{Eq:s2_rician_q_u_NJ}) and (\ref{Eq:s2_rician_q_v_NJ}), respectively. The accuracy of this approximation has been verified in \cite{bocus2013approximation}, with a root mean square error (RMSE) below $0.005$. Thus, we can obtain an approximation of $Q_{1}(\cdot,\cdot)$ in (\ref{2eq9_NJ}) as \begin{align}\label{2eq81_NJ} \mathbb{P}&(k_{1}^{o}\!\geq\! k_{2}^{o})=1-\exp{\left(-e^{\mu(\sqrt{2k_{\mathbb{B}}})}\left[2\varrho^{o} (k_{\mathbb{B}}+1)\right]^\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}\right)}. \end{align} For the expectation term in (\ref{2eq8_NJ}), we have \begin{align}\label{2eq11_NJ} \mathbb{E}&_{k_{1}^{o}\geq k_{2}^{o}} \left[ |{h}^{o,\mathbb{B}}_{aw}|^{2} \right]=\mathbb{E}\left [ |{h}^{o,\mathbb{B}}_{aw}|^{2}\Bigg||{h}^{o,\mathbb{B}}_{aw}|^{2}\leq \varrho^{o} \right ] \notag \\ &=\int_{0}^{\varrho^{o} }{x}f_{|{h}^{o,\mathbb{B}}_{aw}|^{2}}({x})d{x} \notag \\ &\overset{(a)}{=}\frac{\gamma\left( \frac{2}{\nu(\sqrt{2k_{\mathbb{B}}})}, \left[2\varrho^{o}(k_{\mathbb{B}}\!+\!1)\right]^\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}e^{\mu(\sqrt{2k_{\mathbb{B}}})} \right)}{\nu(\sqrt{2k_{\mathbb{B}}})(k_{\mathbb{B}}+1)e^{\frac{2\mu(\sqrt{2k_{\mathbb{B}}})}{\nu(\sqrt{2k_{\mathbb{B}}})}}} \notag \\ & \qquad-\varrho^{o}\exp{\left(-[2\varrho^{o}(k_{\mathbb{B}}+1)]^\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}e^{\mu(\sqrt{2k_{\mathbb{B}}})}\right)}, \end{align} where step (a) is due to Lemma~\ref{lemma:calu2}.
Finally, substituting (\ref{2eq81_NJ}) and (\ref{2eq11_NJ}) into (\ref{2eq8_NJ}), we obtain $\mathbb{E}[P_{ew}^{*,o}]$ as (\ref{2eqn1_NJ}). \end{proof} \subsection{Effective Covert Rate and Covert Shannon Capacity}\label{oa_ecrcsc} According to the definition mentioned in Section~\ref{param_definitions}, the ECR $R_{ab}^{o}$ is given by the following theorem. \begin{theorem}\label{th3_s2_out_NJ} When UAV adopts the OM transmission mode in the concerned A2G system, the ECR $R_{ab}^{o}$ of the system is determined by \begin{align}\label{Eq:s2_out_NJ} R_{ab}^{o}\!=\!R_{b}\!\times\! \left\{\! 1\!-\smashoperator{\sum_{\mathbb{B}\in\{L,N\}}}P_{ab}^{\mathbb{B}}\! \left[ 1\! - \!\mathcal{F}_\mathrm{Ei}^{o}\!(\rho\hat{\sigma}_{n}^{2}) \!+ \!\mathcal{F}_\mathrm{Ei}^{o} \!\left( \! \frac{\hat{\sigma}_{n}^{2}}{\rho} \right)\! \right]\!\right\}, \end{align} where $R_{b}$ is the target rate, $\mathcal{F}_{\mathrm{Ei}}^{o}(\cdot)$ is defined as \begin{small} \begin{align}\label{Eq:s1_fp} \mathcal{F}_{\mathrm{Ei}}^{o}(x) \!= \!\frac{1}{\nu(\sqrt{2k_{\mathbb{B}}})\ln{\rho}}\mathrm{Ei}\!\!\left[ \!-e^{\mu(\sqrt{2k_{\mathbb{B}}})} \! \!\left (\!\frac{2(k_{\mathbb{B}}\!+\!1)\gamma_{th}x}{P_aL_{ab}^{o,\mathbb{B}}}\!\right)^{\!\!\!\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}} \!\right], \end{align} \end{small}\normalsize $\mu(x)$ and $\nu(x)$ are defined as (\ref{Eq:s2_rician_q_u_NJ}) and (\ref{Eq:s2_rician_q_v_NJ}) in Theorem \ref{2th2_NJ}, respectively, and $\mathrm{Ei}(\cdot)$ is the exponential integral function defined in \cite[Eq. (8.211.1)]{gradshteyn2014table}. \end{theorem} \begin{proof} To derive the ECR, we first need to determine the outage probability at Bob $P^{o}_{out}$.
Considering the uncertainty of the LoS/NLoS channel, the outage probability at Bob $P^{o}_{out}$ under the OM transmission mode can be determined by \begin{equation}\label{Eq:s2_proof_out11_NJ} P_{out}^{o}=\,\,\smashoperator{\sum_{\mathbb{B}\in\{L,N\}}}P_{ab}^{\mathbb{B}}\times\mathbb{P}\left(\gamma_{ab}^{o}<\gamma_{th}\right). \end{equation} According to (\ref{2eq2_bob}), the SNR at Bob is determined by $\gamma _{ab}^{o}={P_a L_{ab}^{o,\mathbb{B}}|h^{o,\mathbb{B}}_{ab}|^{2}}/{\sigma _{b}^{2}}$. Thus, the term $\mathbb{P}\left(\gamma_{ab}^{o}<\gamma_{th}\right)$ can be derived as \begin{footnotesize} \begin{align}\label{Eq:s2_proof_out_TANJ} &\mathbb{P}\left(\gamma_{ab}^{o}<\gamma_{th}\right) =\mathbb{P}\left(|{h}^{o,\mathbb{B}}_{ab}|^{2}<\frac{\sigma _{b}^{2}\gamma_{th}}{P_aL_{ab}^{o,\mathbb{B}}}\right) \notag \\ &\overset{(b)}{=} 1-\mathbb{E}_{\sigma _{b}^{2}} \left [\exp{\left(-e^{\mu(\sqrt{2k_{\mathbb{B}}})} \left ( \frac{2(k_{\mathbb{B}}+1)\sigma _{b}^{2}\gamma_{th}}{P_aL_{ab}^{o,\mathbb{B}}} \right)^{\!\!\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}}\right)} \right ] \notag \\ &=1- \int_{\frac{\hat{\sigma}_{n}^{2}}{\rho}}^{\rho\hat{\sigma}_{n}^{2}}\exp{\left(-e^{\mu(\sqrt{2k_{\mathbb{B}}})}\! \left( \!\frac{2(k_{\mathbb{B}}+1)\gamma_{th}x}{P_aL_{ab}^{o,\mathbb{B}}}\! \right)^{\!\!\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}}\right)} \frac{1}{2x\ln{\rho} }dx \notag \\ &\overset{(c)}{=} 1\!-\!\frac{1}{\nu(\sqrt{2k_{\mathbb{B}}})\ln{\rho}}\!\left[\mathrm{Ei}\!\left(\!\!-e^{\mu(\sqrt{2k_{\mathbb{B}}})} \!\left (\! \frac{2(k_{\mathbb{B}}\!+\!1)\gamma_{th}x}{P_aL_{ab}^{o,\mathbb{B}}} \!\right)^{\!\!\!\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}}\!
\right)\!\right]\!\Bigg|_{\frac{\hat{\sigma}_{n}^{2}}{\rho}}^{\rho\hat{\sigma}_{n}^{2}} \notag \\ &=1 -\frac{1}{\nu(\sqrt{2k_{\mathbb{B}}})\ln{\rho}} \mathrm{Ei}\left(-e^{\mu(\sqrt{2k_{\mathbb{B}}})} \left ( \frac{2(k_{\mathbb{B}}+1)\gamma_{th}\rho\hat{\sigma}_{n}^{2}}{P_aL_{ab}^{o,\mathbb{B}}} \right)^{\!\!\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}} \right) \notag\\ & \qquad+\frac{1}{\nu(\sqrt{2k_{\mathbb{B}}})\ln{\rho}} \mathrm{Ei}\left(-e^{\mu(\sqrt{2k_{\mathbb{B}}})} \left ( \frac{2(k_{\mathbb{B}}+1)\gamma_{th}\hat{\sigma}_{n}^{2}}{P_aL_{ab}^{o,\mathbb{B}}\rho} \right)^{\!\!\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}} \right), \end{align} \end{footnotesize}\normalsize where step (b) is similar to (\ref{2eq81_NJ}), and step (c) is due to $\int e^{ax^{n}}x^{-1}dx=\mathrm{Ei}(ax^{n})/n$, see \cite[Eq. (2.325.7)]{gradshteyn2014table}. Then, substituting (\ref{Eq:s2_proof_out_TANJ}) into (\ref{Eq:s2_proof_out11_NJ}), we can obtain (\ref{Eq:s2_out_NJ}). \end{proof} Next, we derive the covert Shannon capacity $C_{ab}^{o}$ of the concerned system under the OM transmission mode.
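The exponential-integral evaluation in step (c) above reduces to the antiderivative $\int e^{-c x^{n}}x^{-1}dx=\mathrm{Ei}(-c x^{n})/n$. A minimal numerical check of this step over the noise-uncertainty interval $[\hat{\sigma}_{n}^{2}/\rho,\rho\hat{\sigma}_{n}^{2}]$, with purely illustrative parameter values, could look as follows (assuming SciPy is available):

```python
# Numerical check of the Ei antiderivative used in step (c):
# int_{lo}^{hi} exp(-c x^n) / (2 x ln(rho)) dx = [Ei(-c x^n)]_{lo}^{hi} / (2 n ln(rho)).
# c and n are stand-ins for e^{mu} (2(k+1) gamma_th / (Pa L))^{n}; values are illustrative.
import numpy as np
from scipy.special import expi          # Ei(x) for real arguments
from scipy.integrate import quad

def outage_term(c, n, lo, hi, log_rho):
    """Closed form of the integral over the log-uniform noise-power range."""
    return (expi(-c * hi**n) - expi(-c * lo**n)) / (2.0 * n * log_rho)

rho = 10**(2.0 / 10.0)                  # 2 dB noise-uncertainty level
sigma2 = 1e-8                           # nominal noise power (illustrative)
c, n = 3.0e6, 0.9                       # illustrative constants
lo, hi = sigma2 / rho, rho * sigma2

closed = outage_term(c, n, lo, hi, np.log(rho))
numeric, _ = quad(lambda x: np.exp(-c * x**n) / (2.0 * x * np.log(rho)), lo, hi)
```

The closed form and direct quadrature agree to numerical precision, which is exactly the structure of the $\mathcal{F}_{\mathrm{Ei}}^{o}$ terms in (\ref{Eq:s2_out_NJ}).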
\begin{theorem} \label{th_s2_covert_rate_TANJ} When UAV adopts the OM transmission mode in the concerned A2G system, the CSC $C_{ab}^{o}$ of the system is determined by \begin{small} \begin{align}\label{s2_rab_TANJ} C_{ab}^{o}=\,\,\smashoperator{\sum_{\mathbb{B}\in\{L,N\}}}\,\,\, \frac{W^{o}P_{ab}^{\mathbb{B}}}{2\ln2\times\ln\rho}\left [ \mathcal{F}_{\mathbf{Li}}^{o}\left(\frac{1}{\rho\hat{\sigma}_{n}^{2}}\right)-\mathcal{F}_{\mathbf{Li}}^{o}\left (\frac{\rho}{\hat{\sigma}_{n}^{2}}\right) \right], \end{align} \end{small}\normalsize where $W^{o}$ is the bandwidth of microwave, $\mathcal{F}_{\mathbf{Li}}^{o}(\cdot)$ is given as \begin{align}\label{Eq:s1_fr} \mathcal{F}_{\mathbf{Li}}^{o}(x)= \int_{0}^{\infty}\mathbf{Li}_{2} (-P_aL_{ab}^{o,\mathbb{B}}xy)f_{|h^{o,\mathbb{B}}_{ab}|^{2}}(y)dy, \end{align} $\mathbf{Li}_{s}(\cdot)$ is the polylogarithm of order s, and $f_{|h^{o,\mathbb{B}}_{ab}|^{2}}(y)$ is refer to (\ref{Eq:rician_pdf}). \end{theorem} \begin{proof} Considering the possibility of the LoS/NLoS channel, the CSC $C_{ab}^{o}$ of the A2G system is determined by \begin{equation}\label{s2_rab_proof1_NJ} C_{ab}^{o}=\,\,\smashoperator{\sum_{\mathbb{B}\in\{L,N\}}}P_{ab}^{\mathbb{B}}\times\,C_{ab}^{o}. 
\end{equation} According to the SNR at Bob, $C_{ab}^{o}$ can be derived as \begin{footnotesize} \begin{align}\label{s2_rab_proof_NJ} C_{ab}^{o}\!&= W^{o}\mathbb{E}_{|h^{o,\mathbb{B}}_{ab}|^{2},\sigma _{b}^{2}}\left [ \log_{2}{\left(1+\frac{P_a L_{ab}^{o,\mathbb{B}}|h^{o,\mathbb{B}}_{ab}|^{2}}{\sigma _{b}^{2}}\right)} \right ] \notag \\ &=W^{o}\mathbb{E}_{|h^{o,\mathbb{B}}_{ab}|^{2}}\left [\frac{1}{2\ln\rho}\int_{\frac{\hat{\sigma}_{n}^{2}}{\rho}}^{\rho\hat{\sigma}_{n}^{2}} \log_{2}\left(1+\frac{P_aL_{ab}^{o,\mathbb{B}}|h^{o,\mathbb{B}}_{ab}|^{2}}{x}\right)\frac{1}{x}dx \right ] \notag \\ &=W^{o}\mathbb{E}_{|h^{o,\mathbb{B}}_{ab}|^{2}}\left [\frac{1}{2\ln2\ln\rho}\int_{\frac{1}{\rho\hat{\sigma}_{n}^{2}}}^{\frac{\rho}{\hat{\sigma}_{n}^{2}}}\!\ln\left(1+P_aL_{ab}^{o,\mathbb{B}}|h^{o,\mathbb{B}}_{ab}|^{2} y\right)\frac{1}{y}dy\right ] \notag \\ &\overset{(c)}{=}W^{o}\mathbb{E}_{|h^{o,\mathbb{B}}_{ab}|^{2}}\!\left[\frac{P_aL_{ab}^{o,\mathbb{B}}|h^{o,\mathbb{B}}_{ab}|^{2}y}{2\ln2\ln\rho}[\Phi (-P_aL_{ab}^{o,\mathbb{B}}|h^{o,\mathbb{B}}_{ab}|^{2}y,2,1)]\Big|_{\frac{1}{\rho\hat{\sigma}_{n}^{2}}}^{\frac{\rho}{\hat{\sigma}_{n}^{2}}}\right ] \notag \\ &\overset{(d)}{=}W^{o}\mathbb{E}_{|h^{o,\mathbb{B}}_{ab}|^{2}}\left[\frac{-1}{2\ln2\ln\rho}\left[\mathbf{Li}_{2} (-P_aL_{ab}^{o,\mathbb{B}}|h^{o,\mathbb{B}}_{ab}|^{2}y)\right]\Big|_{\frac{1}{\rho\hat{\sigma}_{n}^{2}}}^{\frac{\rho}{\hat{\sigma}_{n}^{2}}}\right ] \notag \\ &=\frac{W^{o}}{2\ln2\ln\rho}\int_{0}^{\infty }\mathbf{Li}_{2} \left(\frac{-P_aL_{ab}^{o,\mathbb{B}}x}{\rho \hat{\sigma}_{n}^{2}}\right)f_{|h^{o,\mathbb{B}}_{ab}|^{2}}(x) dx \notag \\ &\,\,\,\, -\frac{W^{o}}{2\ln2\ln\rho}\int_{0}^{\infty }\mathbf{Li}_{2} \left(\frac{-\rho P_aL_{ab}^{o,\mathbb{B}}x}{\hat{\sigma}_{n}^{2}}\right) f_{|h^{o,\mathbb{B}}_{ab}|^{2}}(x) dx, \end{align} \end{footnotesize}\normalsize where step (c) is according to \cite[Eq. (2.728.2)]{gradshteyn2014table}, $\mathbf{\Phi}(\cdot,\cdot,\cdot)$ is Lerch function defined as \cite[Eq. 
(9.550)]{gradshteyn2014table}, and step (d) is due to $\mathbf{Li}_{s}(z)=z\mathbf{\Phi}(z,s,1)$. Substituting (\ref{s2_rab_proof_NJ}) into (\ref{s2_rab_proof1_NJ}), we obtain (\ref{s2_rab_TANJ}). \end{proof} \section{Covert Performance Analysis under the DM Transmission Mode}\label{MA_Performance} Similar to Section~\ref{TA_Performance}, this section investigates the optimal detection threshold and the corresponding minimum DEP at Willie, the expected minimum DEP from Alice's perspective, and lastly the ECR and CSC under the DM transmission mode. \subsection{Detection Error Probability} According to (\ref{neq2}) and (\ref{Eq:Tw}), when Alice does not transmit information, Willie only receives the background noise. Thus, $P^{d}_{FA}$ is the same as (\ref{eq5_NJ}). When Alice transmits information, following the derivation in (\ref{eq6_NJ}), we can obtain $P^{d}_{MD}$ by replacing the parameter $k_a^{o}$ in (\ref{eq6_NJ}) with $k_a^{d}=P_{a}G_{aw}L_{aw}^{d,\mathbb{B}}|{h}^{d,\mathbb{B}}_{aw}|^{2}$. Based on $P_{FA}^{d}$ and $P_{MD}^{d}$, we can further derive the optimal detection threshold and the minimum detection error probability at Willie in the following lemma.
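As an aside, the dilogarithm identity behind step (d) of (\ref{s2_rab_proof_NJ}), namely $\int_{y_1}^{y_2}\ln(1+ay)y^{-1}dy=\mathbf{Li}_{2}(-ay_1)-\mathbf{Li}_{2}(-ay_2)$, is easy to verify numerically; note that SciPy's \texttt{spence} computes $\mathbf{Li}_{2}(1-z)$. The value of $a$ below is an arbitrary illustrative stand-in for $P_aL_{ab}^{o,\mathbb{B}}|h^{o,\mathbb{B}}_{ab}|^{2}$.

```python
# Numerical check of the dilogarithm step used in the CSC derivation:
# int_{y1}^{y2} ln(1 + a y) / y dy = Li2(-a y1) - Li2(-a y2),
# with Li2(z) = spence(1 - z) in SciPy's convention.
import numpy as np
from scipy.special import spence
from scipy.integrate import quad

def li2(z):
    """Dilogarithm Li_2(z) via SciPy's spence, which computes Li_2(1 - z)."""
    return spence(1.0 - z)

a, y1, y2 = 5.0, 0.3, 4.0               # illustrative values only
closed = li2(-a * y1) - li2(-a * y2)
numeric, _ = quad(lambda y: np.log(1.0 + a * y) / y, y1, y2)
```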
\begin{lemma} \label{th1_NJ} When Alice adopts the DM transmission mode, the optimal detection threshold $\tau^{*}$ for Willie's detector is in the interval \begin{equation} \label{eq3_NJ} \begin{aligned} \tau^{*}\in\begin{cases} [\rho\hat{\sigma}_{n}^{2},k_{a}^{d}+\frac{\hat{\sigma}_{n}^{2}}{\rho}],& \rho\hat{\sigma}_{n}^{2}< k_{a}^{d}+\frac{\hat{\sigma}_{n}^{2}}{\rho},\\ k_{a}^{d}+\frac{\hat{\sigma}_{n}^{2}}{\rho},& \rho\hat{\sigma}_{n}^{2}\geq k_{a}^{d}+\frac{\hat{\sigma}_{n}^{2}}{\rho}, \end{cases} \end{aligned} \end{equation} and the corresponding minimum DEP $P_{ew}^{*,d}$ is given as \begin{equation} \label{eq4_NJ} \begin{aligned} P_{ew}^{*,d}=\begin{cases} 0 , & \rho\hat{\sigma}_{n}^{2}< k_{a}^{d}+\frac{\hat{\sigma}_{n}^{2}}{\rho}, \\ 1-\frac{\ln(\rho k^{d}_{a}+\hat{\sigma}_{n}^{2})-\ln( \hat{\sigma}_{n}^{2})}{2\ln\rho} , & \rho\hat{\sigma}_{n}^{2} \geq k^{d}_{a}+\frac{\hat{\sigma}_{n}^{2}}{\rho}, \end{cases} \end{aligned} \end{equation} where $k^{d}_{a}=P_{a}G_{aw}L_{aw}^{d,\mathbb{B}}|{h}^{d,\mathbb{B}}_{aw}|^{2}$, and $\rho$ and $\hat{\sigma}_{n}^{2}$ are the parameters that quantify the size of the noise uncertainty and the nominal noise power, respectively, which are defined in Section \ref{sec_noise_uncertainty}. \end{lemma} \begin{proof} The proof is similar to that of Lemma \ref{2th1_NJ} and is omitted here. \end{proof} Similar to Theorem~\ref{2th2_NJ}, Alice and Bob only rely on the expected measure of $P_{ew}^{*,d}$ to evaluate the covertness under the DM mode. Note that $\mathbb{E}[P_{ew}^{*,d}]$ involves an integral over the distribution of $|h^{m,\mathbb{B}}_{aw}|^{2}$. Thus, before deriving $\mathbb{E}[P_{ew}^{*,d}]$, we first present the following lemma.
\begin{lemma}\label{lemma:calu} If $h^{m,\mathbb{B}}_{aw}$ $(\mathbb{B}\in \{L,N\})$ is the channel fading coefficient of the mmWave channel, we have \begin{align}\label{Eq:calu} \int_{0}^{a}\!\!\!{x}f_{|h^{ m,\mathbb{B}}_{ aw}|^{2}}({x})d{x}\!=\!\!\sum_{r=1}^{S_{\mathbb{B} }}\!\binom{S_{\mathbb{B} }}{r}(-1)^{r}\!\!\left[ae^{-r\xi _{\mathbb{B}}a}-\frac{1\!-\!e^{-r\xi _{\mathbb{B}}a}}{r\xi _{\mathbb{B}}}\right], \end{align} where $f_{|h^{ m,\mathbb{B}}_{ aw}|^{2}}$ is the PDF of ${|h^{ m,\mathbb{B}}_{ aw}|^{2}}$. \end{lemma} \begin{proof} The detailed proof is presented in Appendix~\ref{ap1}. \end{proof} Note that both the channel-state uncertainty and the antenna-gain uncertainty need to be considered simultaneously. Thus, when UAV adopts the DM transmission mode, $\mathbb{E}[P_{ew}^{*,d}]$ can be given as follows. \begin{theorem} \label{th2} When Alice adopts the DM transmission mode, the expected value $\mathbb{E}[P_{ew}^{*,d}]$ from Alice's perspective is given as (\ref{eqn1_NJ}), \begin{figure*}[t] \begin{equation} \label{eqn1_NJ} \mathbb{E}[P_{ew}^{*,d}]\!=\,\smashoperator{\sum_{\mathbb{A}\in\{M,S\}}}P_{w}^{\mathbb{A}}\;\smashoperator{\sum_{\mathbb{B}\in\{L,N\}}}P_{\!aw}^{\mathbb{B}}(1\!+\!\Theta^{d}_\mathbb{B})\!\left \{\!1\!\!-\!\frac{1}{2\ln\rho}\!\left\{\!\ln \!\left [\rho P_{a}G_{a}^{\mathbb{A}}G_{w}^{\mathbb{A}}L_{aw}^{m,\mathbb{B}}\!\!\left(\!\varrho^{d}\Theta^{d}_\mathbb{B}\!- \!\sum_{r=1}^{S_{\mathbb{B} }}\binom{\!S_{\mathbb{B}}}{r}(-1)^{r}\frac{1\!-\!e^{-r\xi _{\mathbb{B}}\varrho^{d}}}{r\xi _{\mathbb{B}}}\!\right)\!\!+\!\hat{\sigma}_{n}^{2} \right]\!\!-\!\ln(\hat{\sigma}_{n}^{2})\! \right\}\!\right\}.
\end{equation} \rule[1cm]{\textwidth}{0.04em} \vspace*{-0.6cm} \end{figure*} where $G_{a}^{\mathbb{A}}$ is given in (\ref{neq1_gaw}), $\varrho^{d}=\frac{(\rho^{2}-1)\hat{\sigma}_{n}^{2}}{\rho P_{a}G_{a}^{\mathbb{A}}G_{w}^{\mathbb{A}}L_{aw}^{m,\mathbb{B}}}$, and $\Theta^{d}_\mathbb{B}$ is defined as \begin{equation} \label{eqn2} \Theta^{d}_\mathbb{B}=\sum\nolimits_{r=1}^{S_{\mathbb{B} }}\binom{S_{\mathbb{B} }}{r}(-1)^{r} e^{-r\xi _{\mathbb{B}}\varrho^{d}}. \end{equation} \end{theorem} \begin{proof} For convenience, let us denote $k_1^{d}=\rho\hat{\sigma}_{n}^{2}$ and $k_2^{d}=k^{d}_{a}+\frac{\hat{\sigma}_{n}^{2}}{\rho}$. According to (\ref{eq4_NJ}) in Lemma~\ref{th1_NJ}, we have \begin{footnotesize} \begin{align} \label{eq8_NJ} \mathbb{E}[&P_{\!ew}^{*,d}]\!=\!\mathbb{E}_{k_{1}^{d}< k_{2}^{d}}[P_{ew}^{*,d}]\mathbb{P}(k_{1}^{d}\!<\!k_{2}^{d})\! +\!\mathbb{E}_{k_{1}^{d}\!\geq \! k_{2}^{d}}[P_{ew}^{*,d}]\mathbb{P}(k_{1}^{d}\!\geq \!k_{2}^{d}) \notag \\ &=\mathbb{P}(k_{1}^{d}\geq k_{2}^{d})\notag\\ &\times\left( \!\!1\!\!-\!\frac{\! \ln\!\left( \!\rho P_{a}G_{a}^{\mathbb{A}}G_{w}^{\mathbb{A}}L_{aw}^{m,\mathbb{B}}\mathbb{E}_{k_{1}^{d}\geq k^{d}_{2}}\!\!\left[|{h}^{m,\mathbb{B}}_{aw}|^{2}\right]\!\!+\!\hat{\sigma}_{n}^{2}\!\right)\!\!-\!\ln(\!\hat{\sigma}_{n}^{2}\!)}{2\ln(\rho)}\!\!\right)\!\!, \end{align} \end{footnotesize}\normalsize where the term $\mathbb{P}(k^{d}_{1}\geq k^{d}_{2})$ can be derived as \begin{align} \label{eq9_NJ} \mathbb{P}&(k^{d}_{1}\!\geq \!k^{d}_{2})\!=\mathbb{P}\left(\rho\hat{\sigma}_{n}^{2} \geq P_{a}G_{a}^{\mathbb{A}}G_{w}^{\mathbb{A}}L_{aw}^{m,\mathbb{B}}|{h}^{m,\mathbb{B}}_{ aw}|^{2}+\frac{\hat{\sigma}_{n}^{2}}{\rho}\right)\notag\\ &\!=\mathbb{P}\left (|{h}^{m,\mathbb{B}}_{aw}|^{2}\!\leq \!\varrho^{d}\right) \!=\! \sum_{r=0}^{S_{\mathbb{B} }}\binom{S_{\mathbb{B} }}{r}(-1)^{r} e^{-r\xi _{\mathbb{B}}\varrho^{d}}.
\end{align} For the expectation term in (\ref{eq8_NJ}), we have \begin{align} \label{eq11_NJ} \mathbb{E}&_{k_{1}^{d}\geq k^{d}_{2}} \left [ |{h}^{m,\mathbb{B}}_{aw}|^{2} \right]=\mathbb{E}\left [ |{h}^{m,\mathbb{B}}_{aw}|^{2}\Bigg||{h}^{m,\mathbb{B}}_{aw}|^{2}\leq \varrho^{d}\right ] \notag \\ &=\int_{0}^{ \varrho^{d}}{x}f_{|{h}^{m,\mathbb{B}}_{aw}|^{2}}({x})d{x} \notag \\ &\overset{(a)}=\sum_{r=1}^{S_{\mathbb{B} }}\binom{S_{\mathbb{B}}}{r}(-1)^{r}\left[ \varrho^{d}e^{-r\xi _{\mathbb{B}} \varrho^{d}}-\frac{1-e^{-r\xi _{\mathbb{B}} \varrho^{d}}}{r\xi _{\mathbb{B}}}\right], \end{align} where step (a) is due to Lemma~\ref{lemma:calu}. Finally, substituting (\ref{eq9_NJ}) and (\ref{eq11_NJ}) into (\ref{eq8_NJ}), we can obtain $\mathbb{E}[P_{ew}^{*,d}]$ as (\ref{eqn1_NJ}). \end{proof} \subsection{Effective Covert Rate and Covert Shannon Capacity} Similarly to Section~\ref{oa_ecrcsc}, the ECR $R^{d}_{ab}$ of the system under the DM transmission mode is given by the following theorem. \begin{theorem}\label{le3_s1_out_NJ} When UAV adopts the DM transmission mode, the ECR $R_{ab}^{d}$ of the concerned A2G system is given by \begin{equation} \label{Eq:s1_out_NJ} R^{d}_{ab}\!=\!R_{b}\!\times\!\left\{\! 1\!-\!\smashoperator{\sum_{\mathbb{A}\in\{\! M,S\!\}}}{P}^{\mathbb{A}}_{b}\;\smashoperator{\sum_{\mathbb{B}\in\{\!L,N\!\}}}P_{ab}^{\mathbb{B}}\left[1\!+\! \mathcal{F}^{d}_{\mathrm{Ei}}(\!\rho\hat{\sigma}_{n}^{2}\!)\!-\!\mathcal{F}^{d}_{\mathrm{Ei}}\!\left(\!\frac{\hat{\sigma}_{n}^{2}}{\rho}\!\right)\!\right]\!\right\}, \end{equation} where $R_{b}$ is the target covert rate, and $\mathcal{F}^{d}_{\mathrm{Ei}}(\cdot)$ in (\ref{Eq:s1_out_NJ}) is defined as \begin{align}\label{Eq:s2_fp} \mathcal{F}^{d}_{\mathrm{Ei}}(x)\!=\! \sum_{r=1}^{S_{\mathbb{B}}}\binom{S_{\mathbb{B} }}{r}(-1)^{r}\frac{1}{2\ln{\rho}} \mathrm{Ei}\left({\frac{-r\xi _{\mathbb{B}}\gamma_{th}x}{P_aG_{a}^{M}G_{b}^{\mathbb{A}}L_{ab}^{m,\mathbb{B}}}}\right).
\end{align} \end{theorem} \begin{proof} Similarly, to analyze the ECR, we need to determine the outage probability $P^{d}_{out}$ at Bob. Due to the uncertainties of the LoS/NLoS channel and antenna gains, the outage probability at Bob $P^{d}_{out}$ under the DM transmission mode can be determined by \begin{equation}\label{Eq:s1_proof_out11_NJ} P^{d}_{out}\!=\!{\sum\nolimits_{\mathbb{A}\in\{M,S\}}}{P}^{\mathbb{A}}_{b}{\sum\nolimits_{\mathbb{B}\in\{L,N\}}}P_{ab}^{\mathbb{B}}\times \mathbb{P}\left(\gamma^{d}_{ab}<\gamma_{th}\right). \end{equation} According to (\ref{eq2_bob}), the SNR at Bob is given as $\gamma^{d} _{ab}={P_a G_{ab}L_{ab}^{d,\mathbb{B}}|h^{d,\mathbb{B}}_{ab}|^{2}}/{\sigma _{b}^{2}}$. Thus, the term $\mathbb{P}\left(\gamma^{d}_{ab}<\gamma_{th}\right)$ can be derived as \begin{align}\label{Eq:s2_proof_out_NJ} \mathbb{P}&\left( \gamma^{d}_{ab}<\gamma_{th}\right) =\mathbb{P}\left(\frac{P_aG_{a}^{M}G_{b}^{\mathbb{A}}L_{ab}^{m,\mathbb{B}}|{h}^{m,\mathbb{B}}_{ab}|^{2} }{\sigma _{b}^{2}}<\gamma_{th}\right) \notag \\ & =\mathbb{P}\left(|{h}^{m,\mathbb{B}}_{ab}|^{2}<\frac{\sigma _{b}^{2}\gamma_{th}}{P_aG_{a}^{M}G_{b}^{\mathbb{A}}L_{ab}^{m,\mathbb{B}}}\right) \notag \\ &=\sum\nolimits_{r=0}^{S_{\mathbb{B} }}\binom{S_{\mathbb{B} }}{r}(-1)^{r}\mathbb{E}_{\sigma _{b}^{2}} \left [ \exp \left({\frac{-r\xi _{\mathbb{B}}\gamma_{th}\sigma _{b}^{2}}{P_aG_{a}^{M}G_{b}^{\mathbb{A}}L_{ab}^{m,\mathbb{B}}}}\right)\right ]\notag \\ &=1\!+\!\sum_{r=1}^{S_{\mathbb{B} }}\!\binom{S_{\mathbb{B} }}{r}\!(-1)^{r}\!\!\int_{\frac{\hat{\sigma}_{n}^{2}}{\rho}}^{\rho\hat{\sigma}_{n}^{2}}\!\!\exp \!\left(\!{\frac{-r\xi _{\mathbb{B}}\gamma_{th}x}{P_aG_{a}^{M}G_{b}^{\mathbb{A}}L_{ab}^{m,\mathbb{B}}}}\!\right)\!\frac{1}{2x\ln{\rho} }dx \notag\\ &\overset{(a)}{=}1+\sum\nolimits_{r=1}^{S_{\mathbb{B} }}\binom{S_{\mathbb{B} }}{r}(-1)^{r}\frac{1}{2\ln{\rho} }\mathrm{Ei}\left({\frac{-r\xi
_{\mathbb{B}}\gamma_{th}x}{P_aG_{a}^{M}G_{b}^{\mathbb{A}}L_{ab}^{m,\mathbb{B}}}}\right)\Bigg|_{\frac{\hat{\sigma}_{n}^{2}}{\rho}}^{\rho\hat{\sigma}_{n}^{2}} \notag\\ &=1+\sum\nolimits_{r=1}^{S_{\mathbb{B} }}\binom{S_{\mathbb{B} }}{r}(-1)^{r}\!\frac{1}{2\ln{\rho} }\mathrm{Ei}\left({\frac{-r\xi _{\mathbb{B}}\gamma_{th}\rho\hat{\sigma}_{n}^{2}}{P_aG_{a}^{M}G_{b}^{\mathbb{A}}L_{ab}^{m,\mathbb{B}}}}\right) \notag \\ & -\sum\nolimits_{r=1}^{S_{\mathbb{B} }}\!\binom{S_{\mathbb{B} }}{r}(-1)^{r}\frac{1}{2\ln{\rho} }\mathrm{Ei}\left({\frac{-r\xi _{\mathbb{B}}\gamma_{th}\hat{\sigma}_{n}^{2}}{P_aG_{a}^{M}G_{b}^{\mathbb{A}}L_{ab}^{m,\mathbb{B}}\rho}}\right), \end{align} where step (a) is due to $\int e^{ax}x^{-1}dx=\mathrm{Ei}(ax)$ \cite[Eq. (2.325.1)]{gradshteyn2014table}. Then, substituting (\ref{Eq:s2_proof_out_NJ}) into (\ref{Eq:s1_proof_out11_NJ}), we obtain (\ref{Eq:s1_out_NJ}). \end{proof} Next, we derive the covert Shannon capacity $C^{d}_{ab}$ of the considered system under the DM transmission mode. \begin{theorem} \label{th_s1_covert_rate_NJ} In the considered A2G system, the CSC $C^{d}_{ab}$ of the system under the DM transmission mode is determined by \begin{equation} \label{s1_rab_NJ} C^{d}_{ab}\!=\,\,\smashoperator{\sum_{\mathbb{A}\in\{M,S\}}}P^{\mathbb{A}}_{b}\,\,\smashoperator{\sum_{\mathbb{B}\in\{L,N\}}}\! P_{ab}^{\mathbb{B}} \frac{W^{d}}{2\ln2\!\times\!\ln\rho}\!\left [\! \mathcal{F}^{d}_{\mathbf{Li}}\!\left (\!\frac{1}{\rho\hat{\sigma}_{n}^{2}}\!\right)\!-\!
\mathcal{F}^{d}_{\mathbf{Li}}\!\left (\!\frac{\rho}{\hat{\sigma}_{n}^{2}}\!\right)\!\right ], \end{equation} where $W^{d}$ is the bandwidth of mmWave, and $\mathcal{F}^{d}_{\mathbf{Li}}(\cdot)$ in (\ref{s1_rab_NJ}) is defined as \begin{align}\label{Eq:s2_fr} \mathcal{F}^{d}_{\mathbf{Li}}(x)= \int_{0}^{\infty}\mathbf{Li}_{2} (-P_aG_{a}^{M}G_{b}^{\mathbb{A}}L_{ab}^{m,\mathbb{B}}xy)f_{|{h}^{m,\mathbb{B}}_{ab}|^{2}}(y)dy, \end{align} $\mathbf{Li}_{2}(\cdot)$ is defined in Theorem \ref{th_s2_covert_rate_TANJ}, and $f_{|h^{ m,\mathbb{B}}_{ab}|^{2}}(y)$ is given in (\ref{eq_mmwave_pdf}). \end{theorem} \begin{proof} The proof is similar to that of Theorem \ref{th_s2_covert_rate_TANJ} and is omitted here. \end{proof} \section{Performance Optimization and Mode Selection}\label{sec_results} In this section, we establish the optimization problems of the ECR and CSC maximization under the OM and DM transmission modes, and further propose a hybrid OM/DM transmission mode. \subsection{Maximum Effective Covert Rate} \label{oecr} From Theorems \ref{th3_s2_out_NJ} and \ref{le3_s1_out_NJ}, we note that the ECR $R_{ab}$ is related to the transmission power $P_{a}$ and target rate $R_b$. Besides, a larger $P_{a}$ results in a smaller $\mathbb{E}[P_{ew}^{*,\mathbb{C}}]$, while a larger $R_b$ results in a larger $P_{out}$. Thus, for a certain position $(x_a,y_a,h_a)$, the UAV intends to find its optimal transmission power $P_{a}$ and optimal target rate $R_b$ to maximize $R_{ab}$ with the DEP constraint.
The corresponding optimization problems under the OM and DM transmission modes can be respectively formulated as follows \begin{subequations} \label{Eq:Max_Coverate_om} \begin{align} \bar{R}_{a,b}^{*,o}(x_a,y_a,h_a)&=\mathop{\max}\limits_{P_{a},R_{b}} R_{ab}^{o}(x_a,y_a,h_a), \\ \qquad s.t.\, \,\, &\mathbb{E}[P_{ew}^{*,\mathbb{C}}] \geq 1-\epsilon, \label{Eq:con_dep_r_om} \\ & P_{a} \le P_{max}, \label{Eq:con_pa_r_om} \end{align} \end{subequations} and \begin{subequations} \label{Eq:Max_Coverate_dm} \begin{align} \bar{R}_{a,b}^{*,d}(x_a,y_a,h_a)&=\mathop{\max}\limits_{P_{a},R_{b}} R_{ab}^{d}(x_a,y_a,h_a), \\ \qquad s.t.\, \,\, &\mathbb{E}[P_{ew}^{*,\mathbb{C}}] \geq 1-\epsilon, \label{Eq:con_dep_r_dm} \\ & P_{a} \le P_{max}, \label{Eq:con_pa_r_dm} \end{align} \end{subequations} where $\bar{R}^{*,o}_{ab}(x_a,y_a,h_a)$ (resp. $\bar{R}^{*,d}_{ab}(x_a,y_a,h_a)$) denotes the maximum $R_{ab}^{o}(x_a,y_a,h_a)$ (resp. $R_{ab}^{d}(x_a,y_a,h_a)$), which characterizes the maximum average amount of information \emph{successfully} transmitted subject to a covertness requirement $\epsilon$. Although we cannot obtain closed-form results for the optimal transmission power and target rate due to the transcendental functions involved, we can obtain the optimal solutions by numerical search methods. \subsection{Maximum Covert Shannon Capacity}\label{ocsc} According to Theorems \ref{th_s2_covert_rate_TANJ} and \ref{th_s1_covert_rate_NJ}, we know that a larger $P_{a}$ will lead to a larger CSC $C_{ab}$. But, according to the detection strategy of Willie, a larger $P_{a}$ also leads to a lower DEP $\mathbb{E}[P_{ew}^{*,\mathbb{C}}]$.
Therefore, UAV hopes to maximize the CSC of the A2G system by optimizing the transmission power under the OM and DM transmission modes, which can be respectively formulated as \begin{subequations} \label{Eq:Max_capcacity_om} \begin{align} \bar{C}_{a,b}^{*,o}(x_a,y_a,h_a)&=\mathop{\max}\limits_{P_{a}} C_{ab}^{o}(x_a,y_a,h_a), \\ \qquad s.t.\, \,\, &\mathbb{E}[P_{ew}^{*,\mathbb{C}}] \geq 1-\epsilon, \label{Eq:con_dep_c_om} \\ & P_{a} \le P_{max}, \label{Eq:con_pa_c_om} \end{align} \end{subequations} and \begin{subequations} \label{Eq:Max_capcacity_dm} \begin{align} \bar{C}_{a,b}^{*,d}(x_a,y_a,h_a)&=\mathop{\max}\limits_{P_{a}} C_{ab}^{d}(x_a,y_a,h_a), \\ \qquad s.t.\, \,\, &\mathbb{E}[P_{ew}^{*,\mathbb{C}}] \geq 1-\epsilon, \label{Eq:con_dep_c_dm} \\ & P_{a} \le P_{max}, \label{Eq:con_pa_c_dm} \end{align} \end{subequations} where $\bar{C}^{*,o}_{ab}(x_a,y_a,h_a)$ (resp. $\bar{C}^{*,d}_{ab}(x_a,y_a,h_a)$) denotes the maximum $C_{ab}^{o}(x_a,y_a,h_a)$ (resp. $C_{ab}^{d}(x_a,y_a,h_a)$), which characterizes the \emph{upper bound} of the time average rate of messages delivered from the transmitter to the destination subject to a covertness requirement $\epsilon$. Similarly, we can solve these optimization problems through numerical search methods. \subsection{Optimal Transmission Mode for Covert Communication} Although mmWave has a larger bandwidth than low-frequency microwave, it also suffers from faster attenuation. Thus, as the distances from the UAV to Bob and Willie change dynamically, a hybrid OM/DM transmission mode of the UAV would be superior to the pure OM or DM transmission mode in terms of covert performance. To verify this idea, we propose a hybrid OM/DM transmission mode which allows UAV to adaptively switch between the OM and DM transmission modes based on (\ref{Eq:Max_Coverate_om})-(\ref{Eq:Max_capcacity_dm}).
Specifically, for a given position ($x_a,y_a,h_a$) of UAV, we first calculate the distance $d_{ab}$ from UAV to Bob and the distance $d_{aw}$ from UAV to Willie. Then, we substitute them into (\ref{Eq:Max_Coverate_om}) and (\ref{Eq:Max_Coverate_dm}) (resp. (\ref{Eq:Max_capcacity_om}) and (\ref{Eq:Max_capcacity_dm})) and solve the optimization problems. We select the optimal transmission mode $I_{ECR}$ (resp. $I_{CSC}$) by comparing $\bar{R}_{ab}^{*,o}(x_a,y_a,h_a)$ and $\bar{R}_{ab}^{*,d}(x_a,y_a,h_a)$ (resp. $\bar{C}_{a,b}^{*,o}(x_a,y_a,h_a)$ and $\bar{C}_{a,b}^{*,d}(x_a,y_a,h_a)$), where $I_{ECR}$ and $I_{CSC}$ are the indicators of the optimal selection mode for maximizing the ECR and CSC, which are respectively defined as \begin{equation} \label{eq:I_ecr} \begin{aligned} I_{ECR}=\begin{cases} \text{OM mode},& \bar{R}_{ab}^{*,o}(x_a,y_a,h_a)\ge \bar{R}_{ab}^{*,d}(x_a,y_a,h_a),\\ \text{DM mode},& otherwise, \end{cases} \end{aligned} \end{equation} and \begin{equation} \label{eq:I_csc} \begin{aligned} I_{CSC}=\begin{cases} \text{OM mode},& \bar{C}_{ab}^{*,o}(x_a,y_a,h_a)\ge \bar{C}_{ab}^{*,d}(x_a,y_a,h_a),\\ \text{DM mode},& otherwise. \end{cases} \end{aligned} \end{equation} Overall, the hybrid OM/DM transmission mode selection algorithm for optimal covert communication can be described as Algorithm~\ref{Alg:Selection_Max_CR}.
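The numerical search and mode selection described above can be sketched as follows. Here \texttt{ecr\_om}, \texttt{ecr\_dm}, \texttt{dep\_om} and \texttt{dep\_dm} are hypothetical toy stand-ins for the closed-form ECR and expected minimum DEP expressions derived earlier; in a real implementation they would evaluate (\ref{Eq:s2_out_NJ}), (\ref{Eq:s1_out_NJ}) and the corresponding DEP theorems.

```python
# Sketch of the constrained grid search and OM/DM mode selection.
# The rate/DEP functions below are TOY stand-ins, not the paper's expressions.
import numpy as np

def max_ecr(ecr, dep, p_max, eps, n_grid=50):
    """Grid search over (P_a, R_b) subject to E[P_ew^*] >= 1 - eps and P_a <= P_max."""
    best = 0.0
    for pa in np.linspace(0.01, p_max, n_grid):
        if dep(pa) < 1.0 - eps:          # covertness constraint violated
            continue
        for rb in np.linspace(0.1, 5.0, n_grid):
            best = max(best, ecr(pa, rb))
    return best

# Toy stand-ins: DEP decays with power; ECR = R_b * (1 - outage(P_a, R_b)).
dep_om = lambda pa: np.exp(-0.5 * pa)
dep_dm = lambda pa: np.exp(-0.2 * pa)
ecr_om = lambda pa, rb: rb * np.exp(-rb / (1.0 + 2.0 * pa))
ecr_dm = lambda pa, rb: rb * np.exp(-rb / (1.0 + 3.0 * pa))

r_om = max_ecr(ecr_om, dep_om, p_max=2.0, eps=0.2)
r_dm = max_ecr(ecr_dm, dep_dm, p_max=2.0, eps=0.2)
I_ECR = "OM mode" if r_om >= r_dm else "DM mode"
```

A finer grid or a standard derivative-free optimizer can replace the double loop; the selection step itself is a single comparison, mirroring (\ref{eq:I_ecr}).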
\begin{algorithm}\label{Alg:Selection_Max_CR} \caption{Hybrid OM/DM Transmission Mode Selection Algorithm} \SetAlgoLined \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \DontPrintSemicolon \Input{UAV's location $U(x_a,y_a,h_a)$, Bob's location $(x_b,y_b,h_b)$, Willie's location $(x_w,y_w,h_w)$.} \Output{The mode selection indicator $I_{ECR}$ or $I_{CSC}$.} Calculate the distances $d_{ab}$ and $d_{aw}$, respectively.\\ \uIf {UAV intends to maximize the ECR} {Substitute $d_{ab}$ and $d_{aw}$ into (\ref{Eq:Max_Coverate_om}) and (\ref{Eq:Max_Coverate_dm}) and solve the optimization problems;\\ Obtain $I_{ECR}$ by comparing $\bar{R}_{ab}^{*,o}(x_a,y_a,h_a)$ with $\bar{R}_{ab}^{*,d}(x_a,y_a,h_a)$ according to (\ref{eq:I_ecr}).} \ElseIf{UAV intends to maximize the CSC} {Substitute $d_{ab}$ and $d_{aw}$ into (\ref{Eq:Max_capcacity_om}) and (\ref{Eq:Max_capcacity_dm}) and solve the optimization problems;\\ Obtain $I_{CSC}$ by comparing $\bar{C}_{ab}^{*,o}(x_a,y_a,h_a)$ with $\bar{C}_{ab}^{*,d}(x_a,y_a,h_a)$ according to (\ref{eq:I_csc}).} Return $I_{ECR}$ or $I_{CSC}$. \end{algorithm} \section{Numerical Results} \label{sec_num_results} This section provides extensive numerical results to illustrate the system performance under both the OM and DM modes, so that the optimal mode for covert communication in the concerned hybrid microwave/mmWave A2G system can be identified. In the following, we consider the A2G system where Bob is located at $(-500, 0, 0)$, Willie is located at $(1000, 0, 0)$, the UAV flies at a fixed altitude of $h_{a}=500$m, and the minimum and maximum safe distance limits for UAV flight are $d_{aw}^{min}=300$m and $d_{aw}^{max}=1500$m, respectively. Under the OM transmission mode, Alice transmits at a typical frequency of $2.5$GHz with a $40$-MHz bandwidth as in \cite{akdeniz2014millimeter}, and under the DM transmission mode she transmits at a typical mmWave frequency of $73$GHz with a $100$-MHz bandwidth as in \cite{liu2017millimeter,zhang2020optimized}.
We summarize the considered parameters in Table \ref{num_parameters}, unless otherwise specified. \renewcommand{\arraystretch}{1.4} \begin{table}[t] \centering \caption{Network Parameter Settings} \label{num_parameters} \begin{tabular}{|m{0.30\textwidth}|m{0.13\textwidth}|} \hline \textbf{Network Parameters} & \textbf{Values} \\ \hline \hline UPA antenna elements ($\mathcal{N}_a$, $\mathcal{N}_b$, $\mathcal{N}_w$) & (6, 18, 18)\\ \hline S-curve parameters ($\sigma$, $f$) & (4.88, 0.429) \\ \hline OM channel path loss coefficients ($\beta_{L}^{o}$, $\beta_{N}^{o}$) & ($10^{-6}$, $10^{-7}$)\\ \hline OM channel path loss exponents ($\alpha_{L}^{o}$, $\alpha_{N}^{o}$) & (1.64, 2.71) \\ \hline DM channel path loss coefficients ($\beta_{L}^{d}$, $\beta_{N}^{d}$) & ($10^{-6.11}$, $10^{-7.18}$)\\ \hline DM channel path loss exponents ($\alpha_{L}^{d}$, $\alpha_{N}^{d}$) & (2, 3) \\ \hline Rician factor ($k_{0}$, $k_{\pi/2}$) & (5, 15) dB \\ \hline Nakagami-m fading shape parameters ($S_L$, $S_N$) & (3, 2) \\ \hline Nominal noise power $\hat{\sigma} _{n}^{2}$ & -80 dBm\\ \hline Noise uncertainty level $\rho$ & 2 dB\\ \hline Target rate $R_b$ & 1 Mbits/s/Hz\\ \hline Covertness requirement $\epsilon$ & 0.2\\ \hline \end{tabular} \end{table} \subsection{Analysis of Expected Minimum DEP} \begin{figure}[t] \centering \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{fig_dvsrho.pdf} \vspace{-0.3cm} \caption{The expected minimum DEP $\mathbb{E}[P_{ew}^{*}]$ vs. noise uncertainty $\rho$.} \label{fig4_1} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{fig_dedvspa.pdf} \vspace{-0.3cm} \caption{The expected minimum DEP $\mathbb{E}[P_{ew}^{*}]$ vs.
transmission power $P_a$.} \label{fig4_2} \end{minipage} \vspace{-0.5cm} \end{figure} \begin{figure}[t] \centering \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{fig_dvsxa.pdf} \vspace{-0.3cm} \caption{The expected minimum DEP $\mathbb{E}[P_{ew}^{*}]$ vs. horizontal position $x_a$.} \label{fig4_3} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{fig_outvsrho.pdf} \vspace{-0.3cm} \caption{Outage probability $P_{out}$ vs. noise uncertainty parameter $\rho$.} \label{fig5_1} \end{minipage} \vspace{-0.5cm} \end{figure} We first explore the impact of the parameter $\rho$, which quantifies the level of the noise uncertainty, on the expected minimum DEP $\mathbb{E}[P_{ew}^{*}]$ at Willie. As depicted in Fig.~\ref{fig4_1}, $\mathbb{E}[P_{ew}^{*}]$ keeps increasing as $\rho$ increases. This is because the greater the level of noise uncertainty, the easier it is to hide Alice's message transmission. We can also observe that, for a given position of Alice, i.e., $x_a=1000$m and $x_a=1360$m, the DM transmission mode is superior to the OM transmission mode in terms of covert performance. This is because, due to the fast fading characteristic of mmWave, the probability of misjudgment under the DM mode is higher than that under the OM mode. From Fig.~\ref{fig4_1}, we can further observe that, for the DM transmission mode, $\mathbb{E}[P_{ew}^{*}]$ of Willie is higher when $x_a=1000$m than when $x_a=1360$m, which is the opposite of the OM transmission mode. The reason behind this phenomenon is that Alice always steers the main lobe of her DM antenna toward Bob; when $x_a=1360$m, Willie is also located in the boresight direction of the main lobe, so that he can obtain more power from Alice, thereby reducing $\mathbb{E}[P_{ew}^{*}]$. We then investigate the impact of Alice's transmission power $P_a$ on the expected minimum DEP $\mathbb{E}[P_{ew}^{*}]$ with $\rho=2$dB and $x_a=\{1000\text{m}, 1360\text{m}\}$.
As shown in Fig.~\ref{fig4_2}, it can be observed that $\mathbb{E}[P_{ew}^{*}]$ is monotonically decreasing with respect to $P_a$. This is because a higher transmission power makes it harder to hide the information in the background noise. Also, as $P_a$ increases, the expected minimum DEP performance in the OM transmission mode degrades faster than in the DM transmission mode. This is because mmWave attenuates faster than microwave, so the received power at Willie remains lower under the same transmission power. To explore the impact of Alice's position $x_a$ on the expected minimum DEP $\mathbb{E}[P_{ew}^{*}]$, Fig.~\ref{fig4_3} shows how $\mathbb{E}[P_{ew}^{*}]$ varies with $x_a$ for $P_a=\{15\text{dBm}, 20\text{dBm}\}$. From Fig.~\ref{fig4_3}, we can see that $\mathbb{E}[P_{ew}^{*}]$ first decreases as $x_a$ increases from $-200$m to $1000$m, and then increases after $1000$m under the OM transmission mode. This is because at $x_a=1000$m Alice is right above Willie, the distance between them is the smallest, and Willie can detect the transmission behavior of Alice most accurately. For the DM transmission mode, we can see that $\mathbb{E}[P_{ew}^{*}]$ decreases sharply to its minimum at $x_a=1360$m and then gradually increases as $x_a$ increases. This interesting phenomenon can be explained as follows: when $x_a=1360$m, Willie lies just within the half-power beamwidth direction of the antenna array of the link Alice $\to$ Bob, so he can obtain more power benefiting from the main-lobe antenna gain of Alice. But as $x_a$ increases, the distance between Willie and Alice increases and the received power at Willie decreases, causing $\mathbb{E}[P_{ew}^{*}]$ to increase. \subsection{Analysis of Outage Probability} \begin{figure}[t] \centering \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{fig_outvspa.pdf} \vspace{-0.3cm} \caption{Outage probability $P_{out}$ vs.
transmission power $P_a$.} \label{fig5_2} \end{minipage}\hfill \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{fig_outvsrb.pdf} \vspace{-0.3cm} \caption{Outage probability $P_{out}$ vs. target rate $R_b$.} \label{fig5_3} \end{minipage} \vspace{-0.5cm} \end{figure} We plot Fig.~\ref{fig5_1} to explore the impact of the noise uncertainty parameter $\rho$ on the outage probability $P_{out}$ under both the OM and DM transmission modes. From Fig.~\ref{fig5_1}, we can see that as $\rho$ increases, $P_{out}$ gradually increases: a larger $\rho$ leads to a lower SNR at Bob, and hence a higher outage probability. We can also find that, for a given $x_a$ and $\rho$, the OM transmission mode outperforms the DM transmission mode in terms of the outage probability. This can be attributed to the fast fading of mmWave, which results in a small SNR at Bob. Fig.~\ref{fig5_2} studies the impact of the transmission power $P_{a}$ on the outage probability $P_{out}$. We can see from Fig.~\ref{fig5_2} that $P_{out}$ monotonically decreases with respect to $P_a$: obviously, a larger $P_a$ leads to a higher SNR and thus a lower outage probability. A further careful observation of Fig.~\ref{fig5_2} indicates that when Alice transmits with a small $P_a$, $P_{out}$ under the OM transmission mode is higher than that under the DM transmission mode, but once $P_a$ exceeds a certain value, $P_{out}$ under the OM transmission mode becomes lower than that under the DM transmission mode. When Alice transmits with a low power, the mmWave directional antenna gain can offset a small part of the channel fading, so the outage probability under the DM transmission mode is lower than that under the OM transmission mode.
But the channel fading of mmWave is much faster than that of microwave, so when Alice transmits with a large power, the received power at Bob under the DM transmission mode is lower than that under the OM transmission mode, resulting in a higher outage probability. To further study how the outage probability $P_{out}$ varies with the target rate $R_b$ under both transmission modes, we plot Fig.~\ref{fig5_3}. We can observe that as the target rate $R_b$ increases, the outage probability $P_{out}$ also increases, which can be easily explained from the definition of the outage probability. Besides, it also implies that there is a trade-off between $P_{out}$ and $R_b$, which validates the rationality of optimizing $R_b$ in this work. \subsection{Analysis of ECR and CSC} \begin{figure} [tb] \centering \subfigure[ECR $\bar{R}^{*}_{ab}$]{ \includegraphics[width=0.22\textwidth]{fig_rabvspmax.pdf} \label{fig61} } \subfigure[CSC $\bar{C}^{*}_{ab}$]{ \includegraphics[width=0.22\textwidth]{fig_cabvspmax.pdf} \label{fig62} } \vspace{-0.3cm} \caption{Covert performance vs. maximum transmission power constraint $P_{max}$ with $x_a \in\{1000, 1360\}$~m. } \label{fig6} \vspace{-0.5cm} \end{figure} To explore the impact of the maximum power constraint $P_{max}$ on the maximum ECR $\bar{R}_{ab}^{*}$ and CSC $\bar{C}_{ab}^{*}$ under both modes, we summarize in Fig.~\ref{fig6} how they vary with $P_{max}$ for $x_a=1000$~m and $x_a=1360$~m, respectively. From Fig.~\ref{fig61}, we can see that as $P_{max}$ increases, $\bar{R}_{ab}^{*}$ gradually increases and then tends to a constant under the OM transmission mode. This is because a larger $P_{max}$ allows Alice to use more power to transmit information, resulting in a smaller $P_{out}$. But the covertness constraint prevents $P_a$ from increasing indefinitely. Thus, when $P_{max}$ is large enough, $\bar{R}_{ab}^{*}$ reaches its maximum value and stays constant.
For the DM transmission mode, due to the fast fading of mmWave, even given a large $P_{max}$, the received power at Willie is still small and the DEP is still high. Thus, Alice can still increase $P_a$ to improve the effective covert rate. The corresponding analysis can be applied to explain the behavior of the covert capacity $\bar{C}_{ab}^{*}$ in Fig.~\ref{fig62}, so we omit it here. \begin{figure}[tb] \centering \subfigure[ECR $\bar{R}^{*}_{ab}$]{ \includegraphics[width=0.225\textwidth]{fig_rabvscovert.pdf}\label{fig:CR} } \subfigure[CSC $\bar{C}^{*}_{ab}$]{ \includegraphics[width=0.225\textwidth]{fig_cabvscovert.pdf}\label{fig:CC} } \vspace{-0.3cm} \caption{Covert performance vs. covertness constraint $\epsilon$ with $x_a \in\{1000, 1360\}$~m and $P_{max}=20$~dBm.} \label{fig:covert_epsilon} \vspace{-0.5cm} \end{figure} We then investigate the impact of the covertness constraint $\epsilon$ on the maximum effective covert rate $\bar{R}_{ab}^{*}$ and covert capacity $\bar{C}_{ab}^{*}$. The results are summarized in Fig.~\ref{fig:covert_epsilon}. We can observe from Fig.~\ref{fig:covert_epsilon} that as $\epsilon$ increases, both $\bar{R}^{*}_{ab}$ and $\bar{C}^{*}_{ab}$ first increase and then tend to remain unchanged. This can be explained as follows. As the covertness constraint is gradually relaxed, Alice can adopt a larger $P_a$ to transmit the covert information, and thus both $\bar{R}_{ab}^{*}$ and $\bar{C}_{ab}^{*}$ increase. But when $P_a$ is large enough, the transmission power remains unchanged due to the maximum power constraint $P_{max}$. We further observe from Fig.~\ref{fig:CR} and Fig.~\ref{fig:CC} that under the hybrid mode, adopting the OM transmission mode achieves better performance when the covertness constraint is relaxed enough, while adopting the DM transmission mode is better when the covertness constraint is strict.
\begin{figure} [tb] \centering \includegraphics[width=0.3\textwidth]{fig_rabcabvsxa_big.pdf} \vspace{-0.3cm} \caption{Covert performance vs. horizontal position of UAV $x_a$.} \label{fig:covert_xa} \vspace{-0.5cm} \end{figure} Finally, we investigate the performance at different $x_a$ during the movement of Alice. We summarize in Fig.~\ref{fig:covert_xa} how the covert performance varies with $x_a$ for a setting of $\rho=2$~dB and $P_{max}=20$~dBm. We can see from Fig.~\ref{fig:covert_xa} that during Alice's movement from $(-200,0,500)$ to $(2200,0,500)$, the covert performance first decreases, then increases, and finally decreases under both transmission modes. This interesting behavior can be explained as follows. When Alice moves away from Bob and approaches Willie, due to the negative effects of path attenuation and the covertness constraint, both $\bar{R}^{*}_{ab}$ and $\bar{C}^{*}_{ab}$ decrease quickly. When Alice moves away from Willie, the power received by Willie decreases, and Alice can increase $P_a$ appropriately to improve the transmission efficiency. When Alice is far enough away from both Willie and Bob, she can adopt a larger transmission power while still satisfying the covertness constraint; however, this cannot offset the negative impact of large-scale attenuation on the transmission performance, so both $\bar{R}^{*}_{ab}$ and $\bar{C}^{*}_{ab}$ decrease. Moreover, from Fig.~\ref{fig:covert_xa}, we can see that when Alice is far away from Willie, the DM transmission mode outperforms the OM transmission mode in terms of both $\bar{R}^{*}_{ab}$ and $\bar{C}^{*}_{ab}$, but when Alice is within a certain range of Willie, the OM transmission mode is better. On the one hand, the power received by Bob is smaller due to the rapid attenuation of mmWave; on the other hand, Willie is in the main-lobe radiation range, so he has good detection performance.
By comparing the two modes, we can determine the optimal transmission mode, as shown in Fig.~\ref{fig:covert_xa}, for the hybrid microwave/mmWave A2G system. \section{Conclusion}\label{sec_conclusion} This paper investigated covert communication in a hybrid microwave/mmWave A2G wireless communication system. Based on our theoretical performance analysis and covert performance optimization under both the OM and DM transmission modes, we proposed a new hybrid OM/DM transmission mode for covert performance enhancement. The results in this paper revealed that the hybrid transmission mode can lead to a significant improvement in covert performance over the pure OM or DM mode in the concerned A2G system. It is expected that this work can provide meaningful insights into covert communication scheme design in more general and more complicated UAV networks, e.g., UAVs with multi-band antennas, UAV swarms, etc. \begin{appendices} \section{Proof of Lemma~\ref{lemma:calu2}} \label{nnap1} Here, we provide the calculation steps for (\ref{Eq:calu2}) as follows \begin{equation} \label{nneqa1} \begin{aligned} \int_{0}^{a}\!\! xf_{|h^{\!o,\mathbb{B}}_{ aw}|^{2}}(x)dx =xF_{|h^{o,\mathbb{B}}_{aw}|^{2}}(x)\textbf{\Big|}_{0}^{a}-\int_{0}^{a}\!\!F_{|h^{o,\mathbb{B}}_{aw}|^{2}}(x)dx. \end{aligned} \end{equation} Similar to (\ref{2eq81_NJ}), we can derive the first term on the right as \begin{equation} \label{nnEq_ap1A} \begin{aligned} xF_{|h^{ o,\mathbb{B}}_{aw}|^{2}}(x)\textbf{\Big|}_{0}^{a}=\!a\!\left(\!1\!\!-\!\exp{\!\left(\!-\left[2a (k_{\mathbb{B}}\!+\!1)\right]^\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}e^{\mu(\sqrt{2k_{\mathbb{B}}})}\!\right)}\!\right).
\end{aligned} \end{equation} Then, we derive the second term on the right as \begin{align} \label{nneqa2} &\!\int_{0}^{a}\!\!\!\!F_{|h^{ o,\mathbb{B}}_{ aw}|^{2}}(x)dx\!=\!\!\int_{0}^{a}\!\!\!1\!-\!\exp{\left(\!-[2x(k_{\mathbb{B}}\!+\!1)]^\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}e^{\mu(\sqrt{2k_{\mathbb{B}}})}\!\right)}dx \notag \\ &=a-\int_{0}^{a}\exp{\left(-[2x(k_{\mathbb{B}}+1)]^\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2}e^{\mu(\sqrt{2k_{\mathbb{B}}})}\right)}dx \notag \\ & \overset{(a)}{=}a-\frac{\gamma\left ( \frac{2}{\nu(\sqrt{2k_{\mathbb{B}}})}, [2a(k_{\mathbb{B}}+1)]^\frac{\nu(\sqrt{2k_{\mathbb{B}}})}{2} e^{\mu(\sqrt{2k_{\mathbb{B}}})}\right)}{(k_{\mathbb{B}}+1)\nu(\sqrt{2k_{\mathbb{B}}})e^{\frac{2\mu(\sqrt{2k_{\mathbb{B}}})}{\nu(\sqrt{2k_{\mathbb{B}}})}}}, \end{align} where step (a) is according to $\int_{0}^{u}x^{m}e^{-bx^{n}}dx=\frac{\gamma(v,bu^{n})}{nb^{v}}, v=\frac{m+1}{n}$ as in \cite[Eq. (3.381.8)]{gradshteyn2014table}. Substituting (\ref{nnEq_ap1A}) and (\ref{nneqa2}) into (\ref{nneqa1}), we can obtain (\ref{Eq:calu2}). \section{Proof of Lemma~\ref{lemma:calu}} \label{ap1} Here, we provide the calculation steps for (\ref{Eq:calu}) as follows \begin{equation} \label{eqa1} \begin{aligned} \int_{0}^{a}\!\!\!\! xf_{|h^{\! m,\mathbb{B}}_{\! aw}|^{2}}(x)dx \!=xF_{|h^{\! m,\mathbb{B}}_{\! aw}|^{2}}(x)\textbf{\Big|}_{0}^{a}\!-\!\int_{0}^{a}\!\!F_{|h^{\! m,\mathbb{B}}_{\! aw}|^{2}}\!(x)dx. \end{aligned} \end{equation} According to (\ref{eq10}), we can derive the first term on the right as \begin{equation} \label{Eq_ap1A} \begin{aligned} xF_{|h^{ m,\mathbb{B}}_{aw}|^{2}}(x)\textbf{\Big|}_{0}^{a}=a\sum\nolimits_{r=0}^{S_{\mathbb{B} }}\binom{S_{\mathbb{B} }}{r}(-1)^{r}e^{-r\xi _{\mathbb{B}}a}. 
\end{aligned} \end{equation} Similarly, the second term on the right can be derived as \begin{align} \label{eqa2} \int_{0}^{a}&F_{|h^{ m,\mathbb{B}}_{aw}|^{2}}(x)dx =\int_{0}^{a}\sum_{r=0}^{S_{\mathbb{B} }}\binom{S_{\mathbb{B} }}{r}(-1)^{r}e^{-r\xi _{\mathbb{B}}x}dx \notag \\ & =\int_{0}^{a}\left[1+\sum\nolimits_{r=1}^{S_{\mathbb{B} }}\binom{S_{\mathbb{B} }}{r}(-1)^{r}e^{-r\xi _{\mathbb{B}}x}\right]dx\notag \\ &=a+\sum\nolimits_{r=1}^{S_{\mathbb{B} }}\binom{S_{\mathbb{B} }}{r}\frac{(-1)^{r}}{r\xi _{\mathbb{B}}}\left(1-e^{-r\xi _{\mathbb{B}}a}\right). \end{align} Then, substituting (\ref{Eq_ap1A}) and (\ref{eqa2}) into (\ref{eqa1}), we can obtain (\ref{Eq:calu}). \end{appendices}
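As a sanity check on the closed forms derived in the two appendices above, the following stdlib-only Python sketch (all constants are arbitrary test values, unrelated to the system model) compares each right-hand side against brute-force numerical integration; the lower incomplete gamma function $\gamma(v,x)$ is evaluated by its standard power series.

```python
import math

def lower_gamma(v, x, terms=200):
    """Lower incomplete gamma gamma(v, x) via the series
    x^v e^{-x} sum_k x^k / (v (v+1) ... (v+k))."""
    s, c = 0.0, 1.0 / v
    for k in range(terms):
        s += c
        c *= x / (v + k + 1)
    return x ** v * math.exp(-x) * s

def quad(f, lo, hi, n=100_000):
    """Midpoint-rule integration, accurate enough for these smooth integrands."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

# Check Eq. (3.381.8) of Gradshteyn-Ryzhik used in step (a):
# int_0^u x^m e^{-b x^n} dx = gamma(v, b u^n) / (n b^v),  v = (m+1)/n.
m_, b_, n_, u_ = 0.7, 1.3, 1.6, 2.0
v_ = (m_ + 1.0) / n_
lhs1 = quad(lambda x: x ** m_ * math.exp(-b_ * x ** n_), 0.0, u_)
rhs1 = lower_gamma(v_, b_ * u_ ** n_) / (n_ * b_ ** v_)
assert abs(lhs1 - rhs1) < 1e-6

# Check the binomial-sum integral of the second lemma:
# int_0^a sum_r C(S,r)(-1)^r e^{-r xi x} dx
#   = a + sum_{r=1}^S C(S,r) (-1)^r / (r xi) (1 - e^{-r xi a}).
S, xi, a = 4, 0.8, 3.0
F = lambda x: sum(math.comb(S, r) * (-1) ** r * math.exp(-r * xi * x)
                  for r in range(S + 1))
lhs2 = quad(F, 0.0, a)
rhs2 = a + sum(math.comb(S, r) * (-1) ** r / (r * xi) * (1 - math.exp(-r * xi * a))
               for r in range(1, S + 1))
assert abs(lhs2 - rhs2) < 1e-6
```

Both assertions pass for these test values, which supports the two term-by-term evaluations used in the proofs.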
\section{Introduction} Machine learning is a popular method for approximating complex functions arising in a variety of fields like computer vision \citep{krizhevsky2012imagenet}, speech recognition \citep{graves2013speech} and many more. Stochastic Gradient Descent (SGD) is a common choice for optimizing the parameters of the neural networks representing these functions \citep{deeplearning_book_Goodfellow,lecun2012efficient}. However, the correct application of the SGD algorithm requires setting an initial learning rate, or stepsize, and a schedule that reduces the stepsize as the algorithm proceeds closer to the optimal parameter values. Choosing a stepsize too small results in no improvement in the cost function, while choosing a stepsize too large causes non-convergence. There have been a number of approaches suggested to solve this problem. Adaptive SGD algorithms like AdaGrad and Adam \citep{duchi2011adaptive} require the setting of a global stepsize and other hyperparameters. In non-stochastic gradient descent, there are two popular methods for determining the optimal stepsize along a gradient. The first is the trust region method \citep{trust_region_conn,trust_region_Byrd,trust_region_sorensen}, and the second is the line search method, like backtracking line search \citep{armijo1966minimization}, Wolfe conditions \citep{wolfe1969convergence}, and probabilistic line search \citep{mahsereci2015probabilistic}. However, when applied to SGD, both methods solve an optimization within a prescribed region or along a direction, which could lead to over-optimization for the current minibatch, but deterioration on other minibatches. Moreover, trust region methods typically use second-order models, which can be expensive to build; line search methods typically give only an upper but not a lower bound on the objective change, which might lead to even more over-optimization.
The central issue of stepsize selection is the lack of a good criterion for deciding the quality of descent directions and stepsizes. This paper provides the criterion of linear range, defined as the range of parameter perturbations having small nonlinear measurement. The nonlinear measurement is the relative difference between the actual state perturbations and the linearized state perturbations given by the tangent solution. As an application, we propose to select stepsizes, at the initial stages of the training process, by imposing a `speed limit' such that all minibatches are within the linear range. The paper is organized as follows. First, we define tangent and adjoint solutions and show their utilities and relations in sensitivity analysis. Then, we define linear range and develop linGrad. Finally, we demonstrate linGrad on a few networks with different architectures. \section{Preparations} In this section, we first define neural networks as dynamical systems. Then in section~\ref{s:tangent}, we show that small perturbations on parameters lead to roughly linear perturbations in all states of the network, which further leads to a tangent equation for the linear perturbation in the objective. Finally, in section~\ref{s:adjoint}, we show this tangent formula is equivalent to the adjoint formula, which is a generalization of backpropagation. The above discussion leads to the natural conclusion that gradients are meaningful only when stepsizes lead to roughly linear perturbations in the states.
To start, we define a neural network with $I$ layers as a discrete dynamical system, governed by: \begin{equation} \label{e:primal_system_diffeo} u_0=x \,,\quad u_{i+1} = f_i(u_i,s_i) \text{\; for \;} 0\le i \le I-1\,, \end{equation} where $x$ is the input, column vectors $u_i\in \R^{m_i\times 1}$ are states of neurons, and $s_i\in\R^{n_i \times 1}$ are parameters at the $i$-th layer to be trained, to minimize some objective $J$, defined as: \begin{equation} \label{e:J} J\left(\{u_i,s_i\}_{i=0}^I \right)= \sum_{i=0}^{I} J_i(u_i,s_i). \end{equation} Typically, the objective is the difference between the actual output of the network for a given input and the output specified by the data: in this case, the objective depends only on $J_I(u_I)$. However, for future development, we allow the objective to depend on all layers of the network. \subsection{Tangent solutions} \label{s:tangent} Assume we want to perturb the parameters $\{s_i\}_{i=0}^I$ in the direction of $\{\sigma_i\in\R^{n_i \times 1}\}_{i=0}^I$; the perturbations are then $\{ \Delta s_i=\sigma_i \psi\}_{i=0}^I$, where $\psi\in\R$ is the stepsize, or learning rate. When the stepsize is infinitesimal $\delta \psi$, the first order approximation at each layer is: $ \delta u_0 = 0, \delta u_{i+1} = f_{ui} \delta u_{i} + f_{si} \sigma_i \delta\psi, $ where $\delta$ indicates infinitesimal perturbations, and $\delta u_i$ includes perturbations propagated from previous layers. Here $f_{ui}:= \partial f_i/ \partial u (u_i,s_i) \in \R^{m_{i+1}\times m_i}$, and $f_{si}:= \partial f_i/ \partial s (u_i,s_i) \in \R^{m_{i+1}\times n_i}$. There is no perturbation on $u_0$, since the input data are accurate. Define $v_i:=\delta u_i/\delta\psi$; it is governed by the conventional inhomogeneous tangent equation: \begin{equation} \label{e:convential inhomo tangent} v_0 = 0 \,,\quad v_{i+1} = f_{ui} v_{i} + f_{si} \sigma_i \,.
\end{equation} Later we are interested in computing $v_i\psi$, which is easier to compute via: \begin{equation} \label{e:vipsi} v_0\psi = 0 \,,\quad v_{i+1}\psi = f_{ui} (v_{i}\psi) + f_{si} \Delta s_i \,. \end{equation} To extend our work to networks with architectures not satisfying equation~\eqref{e:primal_system_diffeo}, such as ResNet and neural ODE, we need their corresponding tangent equations, which are given in appendix \ref{app:tangent for other archtec}. Now, we can write out a tangent formula for the sensitivity $dJ/d\psi$. More specifically, we first differentiate each term in equation~\eqref{e:J}, then apply the definition of $v_i$, to get: \begin{equation}\label{e:djdpsi_tan} \dd J \psi =\sum_{i=0}^{I} \left( J_{ui} v_i + J_{si} \sigma_i\right) \end{equation} Here both $J_{ui}:=\partial J_i/\partial u (u_i,s_i)\in \R^{1\times m_i}$ and $J_{si}:=\partial J_i/\partial s (u_i,s_i)\in \R^{1\times n_i}$ are row vectors. Investigating inhomogeneous tangent solutions calls for first defining the homogeneous tangent equation: $ w_{i+1} = f_{ui} w_i$, which describes the propagation of perturbations on states while the parameters are fixed. The propagation operator $D_l^i\in \R^{m_i\times m_l}$ is defined as the matrix that maps a homogeneous tangent solution at the $l$-th layer to the $i$-th layer. More specifically, \begin{equation} \begin{split} \label{e:D} D_l^i := \begin{cases} I_d \textnormal{ (the identity matrix)} \,,\quad \textnormal{ when } i=l \,; \\ f_{u,i-1} f_{u,i-2} \cdots f_{u,l+1}f_{u,l} \,, \quad \textnormal{ when } i>l \,. \end{cases} \end{split} \end{equation} We can use Duhamel's principle to analytically write out a solution to equation~\eqref{e:convential inhomo tangent}. Intuitively, an inhomogeneous solution can be viewed as linearly adding up homogeneous solutions, each starting afresh at a previous layer, with initial condition given by the inhomogeneous term.
More specifically, \begin{equation} \label{e:v by Duhamel} v_0 = 0 \,, \quad v_i = \sum_{l=0}^{i-1} D^i_{l+1}f_{sl}\sigma_l \text{\; for \;} 1 \le i \le I. \end{equation} \subsection{Adjoint solutions} \label{s:adjoint} In this subsection, we first use a technique similar to backpropagation to derive an adjoint sensitivity formula, which we then show is equivalent to the tangent sensitivity formula in equation~\eqref{e:djdpsi_tan}. Assume we perturb the $l$-th layer by $\delta u_l$, and let it propagate through the entire network; then the change in the objective is: $J+\delta J = \sum_{j=l}^{I} J_j(f_{I-1}(\cdots f_l(u_l+\delta u_l,s_l)\cdots),s_j)$. Neglecting higher order terms, we can verify the inductive relation $\delta J /\delta u_l =\delta J /\delta u_{l+1} f_{ul} + J_{ul}$. Define $\av_l: = \delta J /\delta u_{l} \in \R^{1\times m_l}$; it satisfies the conventional inhomogeneous adjoint equation: \begin{equation} \label{e:inhomo_adjoint_diffeo} \av_{I+1} = 0 \,, \quad \av_{l} = \av_{l+1} f_{ul} + J_{ul} \,. \end{equation} Notice the reversed order of layers. Here, the terminal condition is used because we can assume there is an $(I+1)$-th layer on which $J$ does not depend. Hence, the adjoint sensitivity formula is: \begin{equation} \begin{split} \label{e:adjoint sensitivity directly} \dd J \psi = \sum_{l=0}^I \frac{\delta J}{\delta u_l} \pp {u_l}\psi + J_{sl}\sigma_l = \sum_{l=1}^I \av_l f_{s,l-1}\sigma_{l-1} + \sum_{l=0}^I J_{sl}\sigma_l \,, \end{split} \end{equation} where $\partial u_0 /\partial \psi =0 $ as $u_0$ is fixed, and $ \partial u_l /\partial \psi = f_{s,l-1}\sigma_{l-1}$. Notice that $\partial u_l /\partial \psi$ is not the tangent solution $ v_l= \delta u_l /\delta \psi$, since $\delta u_l$ in the definition of the tangent solution includes not only the perturbation due to the change in $s_{l-1}$, but also the perturbation propagated from the previous layer.
In other words, in the tangent formula, the propagation of perturbations on states is included in $v_l$, whereas in the adjoint formula such propagation is included in $\av_l$. The advantage of the adjoint sensitivity formula, compared to the tangent formula, is a clearer view of how the sensitivity depends on $\sigma_i$, which further enables us to select the direction for perturbing parameters, $\{\sigma_i\}_{i=0}^I$. Not surprisingly, the inhomogeneous adjoint solution is a generalization of backpropagation. To illustrate this, if we set the objective to take the common form, $J = J_I(u_I)$, then $\av_{l} = J_{uI} f_{u, I-1} \cdots f_{u,l}$. The gradient of $J$ with respect to the parameters, given by backpropagation, is: \begin{equation} \begin{split} \partial J / {\partial s_l} &= J_{uI} f_{u, I-1} \cdots f_{u,l+1} f_{sl} = \av_{l+1}f_{sl} \,. \end{split} \end{equation} The sensitivity can be given by either the tangent formula in equation~\eqref{e:djdpsi_tan} or the adjoint formula in equation~\eqref{e:adjoint sensitivity directly}; hence, the two formulas should be equivalent. Since later development heavily depends on this equivalence, we also prove it directly. To start, first define the homogeneous adjoint equation: $ \aw_{l} = \aw_{l+1} f_{ul}$, where $\aw_l \in \R^{1\times m_l}$ is a row vector. The adjoint propagation operator $\ad_i^l$ is the matrix which, multiplying on the right of a row vector, maps a homogeneous adjoint solution at the $i$-th layer to the $l$-th layer. A direct computation shows that $D_l^i = \ad_i^l$. Using Duhamel's principle with reversed order of layers, we can analytically write out the inhomogeneous adjoint solution: \begin{equation} \label{e:av by Duhamel} \av_{I+1} = 0 \,, \quad \av_l = \sum_{i=l}^{I} J_{ui} \ad^l_{i} \text{\; for \;} 0\le l\le I \,.
\end{equation} To directly show the equivalence between the tangent and adjoint formulas, first substitute equation~\eqref{e:v by Duhamel} into \eqref{e:djdpsi_tan}, change the order of the double summation, then assemble terms with the same $f_{sl}\sigma_l$: \begin{equation} \begin{split} \label{e:djdpsi_adj} \dd J \psi &=\sum_{i=1}^{I} J_{ui} v_i + \sum_{i=0}^{I} J_{si} \sigma_i =\sum_{i=1}^{I} \sum_{l=0}^{i-1} J_{ui} \ad_i^{l+1} f_{sl}\sigma_l + \sum_{i=0}^{I} J_{si} \sigma_i \\ &=\sum_{l=0}^{I-1} \left( \sum_{i=l+1}^{I} J_{ui} \ad_i^{l+1} \right) f_{sl}\sigma_l + \sum_{i=0}^{I} J_{si} \sigma_i =\sum_{l=0}^{I-1} \av_{l+1} f_{sl}\sigma_l + \sum_{i=0}^{I} J_{si} \sigma_i \,. \end{split} \end{equation} \section{Linear range} \subsection{Definition} Assuming that the direction to perturb the parameters, $\{\sigma_i\}_{i=0}^{I}$, has been decided, we still need to specify the stepsize $\psi$ to get the new parameters; this selection is the topic of this section. There are two equivalent methods for computing the sensitivity $dJ/d\psi$: the tangent formula in equation~\eqref{e:djdpsi_tan} and the adjoint formula in equation~\eqref{e:djdpsi_adj}. The adjoint formula is useful for deciding $\sigma_i$, and the tangent formula is useful for checking the effectiveness of the sensitivity, as the tangent solution can predict the linear change in the objective after slightly perturbing the parameters. A sufficient condition for this approximate linearity is that the perturbations in all of the states are roughly linear in $\{ \Delta s_i\}_{i=0}^I$.
To elaborate, we first define a nonlinear measurement for the perturbations in the states of one network, \begin{equation} \label{e:nonlinear measurement} \varepsilon := \frac 1{I} \sum_{i=1}^I \frac{ \|u_{new,i} - u_{old,i} - v_i \psi\| } { \|v_i\psi\| }\,, \end{equation} where $v$ is the conventional tangent solution, $u_{old}$ and $u_{new}$ are the states before and after the parameter change, the subscript $i$ indicates the layer, and the norm is $l^2$. Assuming that we can use a Taylor expansion for $u_{new}$ around $u_{old}$, and that $v = \delta u_{new}/ \delta \psi$ is non-zero, we have $u_{new} = u_{old} + v\psi + v'\psi^2 + O(\psi^3)$, where $v'$ is some unknown constant vector. Hence $ \varepsilon = C \psi + O(\psi^2) $ for some constant $C$, and for small $\psi$ we may regard $\varepsilon$ as linear in $\psi$. With the above description of the nonlinear measurement, we can finally define linear range. \begin{definition} Given a network and an input data, the $\varepsilon^*$-linear range on parameters is the range of parameter perturbations such that $\varepsilon \le \varepsilon^*$. The linear ranges on the objective and on the states are the image sets of the linear range on parameters. \end{definition} \subsection{Gradient descent by linear range} \label{s:linGrad} Linear range is a criterion that can be used in many ways. In this subsection, we use it to develop linGrad, a stochastic gradient descent (SGD) method. In linGrad, the stepsize is determined by a subset of samples in all minibatches, such that the perturbations on parameters are just within the $\varepsilon^*$-linear range. More specifically, for one out of every several minibatches, we use the current $\psi$ to compute $\varepsilon$. Since $\varepsilon$ is linear in $\psi$ when $\psi$ is small, $\psi^*=\psi \varepsilon^* /\varepsilon$ is the $\varepsilon^*$-linear range on the stepsize for this minibatch. We update $\psi$ to be the smallest $\psi^*$ within a finite history.
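To make the definition concrete, the following stdlib-only Python sketch (not the implementation used in our experiments; the two-layer logistic network and the perturbation direction are arbitrary toy choices) propagates $v_i\psi$ by equation~\eqref{e:vipsi} and evaluates $\varepsilon$ from equation~\eqref{e:nonlinear measurement}; doubling $\psi$ roughly doubles $\varepsilon$, and $\psi^*=\psi\varepsilon^*/\varepsilon$ then gives the linear-range stepsize.

```python
import math, random

random.seed(0)
g = lambda z: 1.0 / (1.0 + math.exp(-z))   # logistic activation

def matvec(W, u):
    return [sum(w * x for w, x in zip(row, u)) for row in W]

def states_and_tangent(u, Ws, bs, dWs, dbs, psi):
    # Feed forward while propagating v_i*psi through
    # v_{i+1}psi = Lambda_i (W_i (v_i psi) + (dW_i u_i + db_i) psi).
    vp = [0.0] * len(u)
    us, vps = [list(u)], [list(vp)]
    for W, b, dW, db in zip(Ws, bs, dWs, dbs):
        unew = [g(z + bi) for z, bi in zip(matvec(W, u), b)]
        lam = [x * (1.0 - x) for x in unew]          # Lambda_i = diag[g(1-g)]
        pre = [p + psi * (q + dbi)
               for p, q, dbi in zip(matvec(W, vp), matvec(dW, u), db)]
        vp = [l * p for l, p in zip(lam, pre)]
        u = unew
        us.append(u); vps.append(vp)
    return us, vps

def nonlinear_measurement(u0, Ws, bs, dWs, dbs, psi):
    # epsilon = (1/I) sum_i ||u_new,i - u_old,i - v_i psi|| / ||v_i psi||
    us_old, vps = states_and_tangent(u0, Ws, bs, dWs, dbs, psi)
    Wn = [[[w + psi * d for w, d in zip(rw, rd)] for rw, rd in zip(W, dW)]
          for W, dW in zip(Ws, dWs)]
    bn = [[x + psi * d for x, d in zip(b, db)] for b, db in zip(bs, dbs)]
    u, us_new = list(u0), [list(u0)]
    for W, b in zip(Wn, bn):
        u = [g(z + bi) for z, bi in zip(matvec(W, u), b)]
        us_new.append(u)
    eps = 0.0
    for un, uo, vp in zip(us_new[1:], us_old[1:], vps[1:]):
        num = math.sqrt(sum((a - b - c) ** 2 for a, b, c in zip(un, uo, vp)))
        eps += num / math.sqrt(sum(c * c for c in vp))
    return eps / len(Ws)

# toy two-layer logistic network and a random perturbation direction
m, L = 4, 2
Ws  = [[[random.gauss(0, 1) for _ in range(m)] for _ in range(m)] for _ in range(L)]
bs  = [[random.gauss(0, 1) for _ in range(m)] for _ in range(L)]
dWs = [[[random.gauss(0, 1) for _ in range(m)] for _ in range(m)] for _ in range(L)]
dbs = [[random.gauss(0, 1) for _ in range(m)] for _ in range(L)]
u0  = [random.gauss(0, 1) for _ in range(m)]

eps = nonlinear_measurement(u0, Ws, bs, dWs, dbs, 0.01)
psi_star = 0.01 * 0.3 / eps   # the 0.3-linear-range stepsize for this sample
```

Averaging $\varepsilon_n$ over the samples of a minibatch and taking the minimum of recent $\psi^*$ values completes the stepsize rule described above.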
Algorithm~\ref{a:lingrad} lists the steps of linGrad. We suggest using $0.3\le \varepsilon^* \le 1$, so that the stepsize is not too small, yet the gradient is still meaningful. Our experiments show that the above range of $\varepsilon^*$ yields less than a factor-of-10 difference in stepsizes, meaning that the linear range criterion reduces the possible range of optimal stepsizes to within an order of magnitude. $N_{hist}$ should be chosen by statistical significance, for example $N_{hist}\ge 50$, such that the min of $\psi^*$ over the sampled minibatches is approximately the true min over all minibatches. We also require $N_{hist}N_{lin}\le C N_b$ for some $C$ of order $O(1)$, so that only recent linear ranges affect the selection of the current stepsize. In fact, we found in our experiments that linGrad is robust to the selection of $N_{lin}$ and $N_{hist}$ once the above conditions are satisfied. \begin{algorithm} \caption{linGrad: Linear range gradient descent (with fixed $\varepsilon^*$)} \label{a:lingrad} \begin{algorithmic}[1] \Require $\varepsilon^*$; empty list $L$; $N_s$ samples in a minibatch; $N_b$ minibatches; $N_{hist}$; $N_{lin}$. \For {each epoch,} \For {each minibatch,} \For{$n\gets 1, N_s$} \Comment{The subscript $n$ is omitted sometimes.} \State Compute $u_{old}$ using parameters $\{s_i\}_{i=0}^I$ and input data. \State Compute adjoint solution $\{\av_l\}_{l=1}^I$. \State Select $\{\sigma_i\}_{i=0}^{I}$ according to predetermined rules. \State Compute tangent solution $\{v_i\psi\}_{i=1}^I$ by equation~\eqref{e:vipsi}. \label{l:extra start} \State Compute new states $u_{new}$ using parameters $\{s_i + \sigma_i\psi\}_{i=0}^I$. \State Compute $\varepsilon_n$ for this sample using equation~\eqref{e:nonlinear measurement}. \EndFor \State Compute $\varepsilon = (\sum_{n=1}^{N_s} \varepsilon_n) / N_s$. \State Append $\psi^*=\psi \varepsilon^*/\varepsilon $ to the list $L$.
\State $\psi \gets \min \{\textnormal{last } N_{hist} \textnormal{ elements in }L \}$ \label{l:extra end} \State Update parameters $s_i \gets s_i+\sigma_i \psi$. \EndFor \Comment{Only need to perform steps \ref{l:extra start} to \ref{l:extra end} once every $N_{lin}$ minibatches.} \EndFor \end{algorithmic} \end{algorithm} \subsection{Remarks} Notice that the nonlinear measurement is defined over the entire network rather than just over the objective. Since the objective is only one number, it may not provide adequate information for deciding whether the parameter perturbations are within the linear range. In fact, we tried defining the nonlinear measurement by objectives, and found the algorithm not robust: for example, the optimal $\varepsilon^*$ changes with settings like minibatch size, and for larger $\varepsilon^*$ the algorithm diverges. We also tried adding the objective as an additional layer after the output, but still found the algorithm not robust; further limiting the maximum contribution from the objective layer in the nonlinear measurement helps improve robustness. We suggest readers experiment with whether and how to include the objective in the definition of the nonlinear measurement. The concept of linear range is useful for other scenarios beyond linGrad. One possible application is that it offers a criterion for comparing different descent directions: a larger linear range yields larger parameter and objective perturbations, and thus faster convergence. For example, we can use linear range to compare the gradients computed by normal backpropagation and by clipping gradients \citep{clip_gradients} for deep neural networks. Another use of linGrad is to determine the initial stepsize and then change to an adaptive stepsize algorithm like Adam or AdaGrad. It is also possible to increase the batch size instead of decreasing the stepsize \citep{Byrd_big_batch,Friedlander_big_batch}.
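The stepsize bookkeeping in Algorithm~\ref{a:lingrad} (append $\psi^*=\psi\varepsilon^*/\varepsilon$, then take the minimum of the last $N_{hist}$ candidates) can be isolated into a small stateful controller. The sketch below is our own illustration, not code from the experiments; the helper name is hypothetical.

```python
from collections import deque

def lingrad_stepsize_controller(eps_star, psi0, n_hist):
    """Return an update(eps) closure implementing the stepsize rule:
    append psi* = psi * eps*/eps, then step with the min of the last
    n_hist candidates. (Illustrative helper, not the paper's code.)"""
    candidates = deque(maxlen=n_hist)   # bounded history of psi* values
    psi = psi0
    def update(eps):
        nonlocal psi
        candidates.append(psi * eps_star / eps)   # psi* for this minibatch
        psi = min(candidates)                     # smallest recent linear range
        return psi
    return update

# example: eps* = 0.3, initial stepsize 1.0, history of 3 candidates
step = lingrad_stepsize_controller(0.3, 1.0, 3)
psi1 = step(0.6)    # psi* = 1.0*0.3/0.6  = 0.5 -> psi = 0.5
psi2 = step(0.25)   # psi* = 0.5*0.3/0.25 = 0.6 -> psi = min(0.5, 0.6) = 0.5
psi3 = step(1.5)    # psi* = 0.5*0.3/1.5  = 0.1 -> psi = 0.1
```

The bounded `deque` realizes the "last $N_{hist}$ elements of $L$" step, so old, possibly stale linear ranges drop out of the minimum automatically.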
There are also many choices of termination criteria for the optimization process: for example, the optimization can be terminated when the signal-to-noise ratio, which is the ratio between the average and the RMS of gradients, is too low \citep{De_big_batch}; or when the ratio of counter-directions, which are pairs of gradients with negative inner products, is roughly half. LinGrad can be added to many existing optimization algorithms and training schemes, and we suggest readers experiment. Although not implemented in this paper, it is possible to obtain tangent solutions by tracing computation graphs. The tangent propagation is a local process, just like backpropagation: every gate in a circuit diagram can compute how perturbations in its inputs are linearly transported to the output. Notice that here the inputs to a gate can be either states or parameters. For cases such as convolutional networks, where each neuron depends only on a few neurons in the previous layer, tangent solvers implemented using graph tracing are faster. An easier but less accurate way to obtain tangent solutions is via finite differences. By the definition of tangent solutions, we can see $ v_i \approx \Delta u_i / \Delta\psi$, meaning that we can first set $\psi$ to be a small number $\delta$, say $10^{-6}$, then compute the new states $u_{new,i}^\delta$, and then $v_i \approx (u_{new,i}^\delta-u_{old,i}) / \delta$. This way of computing tangent solutions does not require coding a true linearized solver; rather, it only requires running the feedforward process one more time. \section{Applications} \label{s:applications} \subsection{Application on an artificial data set} \label{s:DIST} We first apply linGrad on a network where each layer is given by $f_i(u_i, W_i) = g(W_i u_i+ b_i)$, where $g$ is the vectorized logistic function. Our parameters to be learned are $W_i\in\R^{m_{i+1}\times m_{i}}$ and $b_i\in\R^{m_{i+1}}$.
The perturbations on parameters are $\Sigma_i \psi = \Delta W_i$ and $\beta_i \psi = \Delta b_i$. Our objective is defined only on the last layer as the square difference $J:=J_I(u_I) = \frac 12 \sum_{j = 1}^{m_I} (u_I^j - y^j)^2$, where $y$ is the output data for this sample. To adapt to our previous notation, we regard $(W_i, b_i)$ and $(\sigma_i, \beta_i)$ as one-dimensional vectors of length $n_i=m_{i+1}\times m_{i}+m_{i+1}$, obtained by flattening the matrix and appending to the vector. Then, for programming convenience, we reshape this vector back into a matrix and a vector in the list of results below. \begin{equation} \begin{split} f_{ui} = \Lambda_i W_i \,,\quad f_{si} \sigma_i = \Lambda_i (\Sigma_i u_i + \beta_i) \,,\quad J_{uI} = (u_I - y)^T \,, \end{split} \end{equation} where $\Lambda_i = diag[g_i(1-g_i)] \in \R^{m_{i+1}\times m_{i+1}}$ is a diagonal matrix due to differentiating the component-wise logistic function. By either carefully managing subscripts of partial derivatives in equations~\eqref{e:vipsi} and \eqref{e:inhomo_adjoint_diffeo}, or deriving directly from the definition, we get the tangent and adjoint equations: \begin{equation} \begin{split} v_0\psi = 0 \,,\quad v_{i+1} \psi &= \Lambda_i(W_i v_i\psi + \Delta W_i u_i + \Delta b_i) \,; \\ \av_I = J_{uI} \,,\quad \av_{i} &= \av_{i+1} \Lambda_i W_i \,. \end{split} \end{equation} The feedforward and backpropagation in our implementation are from the code accompanying~\citep{Nielsen2015}. For our particular example, we use two hidden layers. All layers have the same number of neurons, $m_i = 50$. We first fix the network with randomly generated parameters, then generate 50k training samples and 10k test samples by feeding this fixed network with random inputs. Here all random numbers are from independent standard normal distributions. For the training, initial parameters are generated randomly, and all samples are randomly shuffled for each epoch.
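The tangent and adjoint recursions above can be checked against each other numerically. The following stdlib-only sketch (independent of our actual training code; the toy sizes and random values are arbitrary) runs both recursions on a random logistic network and verifies that the tangent expression $J_{uI}v_I$ and the adjoint expression $\sum_i \av_{i+1}\Lambda_i(\Sigma_i u_i + \beta_i)$ for $dJ/d\psi$ agree.

```python
import math, random

random.seed(1)
g = lambda z: 1.0 / (1.0 + math.exp(-z))   # logistic activation

def matvec(W, u):
    return [sum(w * x for w, x in zip(row, u)) for row in W]

m, L = 3, 2                     # neurons per layer, number of layers (toy sizes)
gauss = lambda: random.gauss(0, 1)
Ws  = [[[gauss() for _ in range(m)] for _ in range(m)] for _ in range(L)]
bs  = [[gauss() for _ in range(m)] for _ in range(L)]
Sig = [[[gauss() for _ in range(m)] for _ in range(m)] for _ in range(L)]  # Sigma_i
bet = [[gauss() for _ in range(m)] for _ in range(L)]                      # beta_i
u0  = [gauss() for _ in range(m)]
y   = [gauss() for _ in range(m)]

# forward pass, storing states u_i and Lambda_i = diag[g_i(1-g_i)]
us, lams = [u0], []
for W, b in zip(Ws, bs):
    u = [g(z + bi) for z, bi in zip(matvec(W, us[-1]), b)]
    lams.append([x * (1.0 - x) for x in u])
    us.append(u)

# tangent: v_{i+1} = Lambda_i (W_i v_i + Sigma_i u_i + beta_i),  dJ/dpsi = J_uI . v_I
v = [0.0] * m
for i in range(L):
    pre = [p + q + r for p, q, r in zip(matvec(Ws[i], v), matvec(Sig[i], us[i]), bet[i])]
    v = [l * p for l, p in zip(lams[i], pre)]
dJ_tan = sum((ui - yi) * vi for ui, yi, vi in zip(us[-1], y, v))

# adjoint: a_I = J_uI = (u_I - y)^T,  a_i = a_{i+1} Lambda_i W_i,
# dJ/dpsi = sum_i a_{i+1} Lambda_i (Sigma_i u_i + beta_i)
a = [ui - yi for ui, yi in zip(us[-1], y)]
dJ_adj = 0.0
for i in reversed(range(L)):
    aL = [ai * li for ai, li in zip(a, lams[i])]              # a_{i+1} Lambda_i
    fs = [q + r for q, r in zip(matvec(Sig[i], us[i]), bet[i])]
    dJ_adj += sum(x * y_ for x, y_ in zip(aL, fs))
    a = [sum(aL[r] * Ws[i][r][c] for r in range(m)) for c in range(m)]

assert abs(dJ_tan - dJ_adj) < 1e-9
```

The agreement of the two directional derivatives mirrors the equivalence proof of the previous section, instantiated for the logistic layers used in this experiment.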
We compute the nonlinear measurement and adjust the stepsize every $N_{lin}=100$ minibatches, and take the stepsize as the smallest of the last $N_{hist}=\max(50,N_b/N_{lin})$ candidate values; the minibatch size $N_s$ varies across experiments. We choose $\Sigma_i$ as the steepest descent direction: \begin{equation} \Sigma_i = - \av_{i+1}f_{si} = - \Lambda_i \av_{i+1}^T u_i^T \,, \quad \beta_i = - \Lambda_i \av_{i+1}^T \,. \end{equation} As we can see from the left of figure~\ref{f:DIST}, for batch size $N_s=10$, compared to SGD with fixed stepsizes, linGrad with $\varepsilon^*=0.3$ descends the fastest, especially in the first 50 epochs, confirming that the `speed limit' during the first phase of training neural networks is given by the criterion of linear range. In fact, if the objective function is defined as the current objective multiplied by 10, SGD would have parameter perturbations that are 10 times larger, resulting in different convergence behavior, whereas the linear range, and hence linGrad, would remain unaffected. Moreover, from the right of figure~\ref{f:DIST}, we can see that $\varepsilon^*=0.3$ remains optimal for linGrad with different batch sizes. \begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth] {figs/DIST_obj_hist} \includegraphics[width=0.49\textwidth] {figs/DIST_obj_hist_different_mbsize} \caption{LinGrad applied on data generated by an artificial network. Each objective history is averaged over 5 runs. The vertical axis is the average of the normalized distance $(\sum_{j = 1}^{m_I} (u_I^j - y^j)^2)^{0.5}/\sqrt{m_I}$. Left: linGrad with minibatch size $N_s=10$ and different $\varepsilon^*$ versus SGD with different fixed stepsizes. Right: linGrad with different $N_s$ and $\varepsilon^*$.} \label{f:DIST} \end{figure} Histories of the stepsize $\psi$ and the nonlinear measurement $\varepsilon$ of linGrad are shown in figure~\ref{f:DIST_psi_eps}. We run linGrad with different initial stepsizes $\psi_0=0.01$ and $\psi_0=1$. 
As shown, $\psi_0$ has little effect: this is expected, since $\psi_0$ is used only to infer the first $\varepsilon^*$-linear range. This confirms that linGrad relieves the headache of choosing initial stepsizes. We can also see that for this shallow network, the stepsize remains at roughly the same value, indicating that $N_{hist}$ is large enough to be statistically significant. Finally, the nonlinear measurement remains below 0.3, confirming that our implementation correctly keeps the stepsize within the linear range. \begin{figure}[ht] \centering \includegraphics[width=\textwidth] {figs/step_nonlinmeas_hist} \caption{History of stepsize and nonlinear measurement for linGrad with $\varepsilon^*=0.3$ and $N_s=10$.} \label{f:DIST_psi_eps} \end{figure} \subsection{Application on MNIST} We then apply linGrad on MNIST with 60k training data and 10k test data. The network has three layers, where the input layer has 784 neurons, the hidden layer 30 neurons, and the output layer 10 neurons. The classification is done by selecting the largest component in the output layer. Other aspects of the architecture and settings are the same as those used in section~\ref{s:DIST}. We compare linGrad to SGD with constant stepsizes and compare linGrad with different minibatch sizes in figure~\ref{f:MNIST}. For this problem linGrad converges fastest for $\varepsilon^*=0.3$, $0.5$, or $0.8$, all comparable to SGD with the optimal stepsize. Again, we can see that the selection of $\varepsilon^*$ is robust to $N_s$. \begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth] {figs/MNIST_obj_hist} \includegraphics[width=0.49\textwidth] {figs/MNIST_obj_hist_different_mbsize} \caption{LinGrad applied on MNIST with the same settings as figure~\ref{f:DIST}. 
The history for SGD with $\psi=100$ does not converge and the objective value remains at 0.9, hence it lies outside the plotted range.} \label{f:MNIST} \end{figure} \subsection{Application on CIFAR-10} Finally, we apply linGrad on the CIFAR-10 dataset using the ResNet model \citep{resnet} with 18 layers. The size of the training dataset is 50k image samples and the number of classes for the classification task is 10. The size of each image is $32\times32$, and the total number of weights in the model is approximately 11 million. The tangent solution of the network is computed using the finite difference method with a small stepsize $\delta=10^{-6}$ for the parameter perturbation. The nonlinear measurement is computed using the states of the network after every residual block in ResNet. We compare linGrad to SGD with constant stepsize and additionally compare linGrad with different minibatch sizes in figure~\ref{f:CIFAR} by computing the error on the test data. LinGrad converges as fast as SGD for $\varepsilon^*=0.6$ and $0.8$. The performance of linGrad is similar across varying minibatch sizes. Moreover, we find that for this deeper network, the stepsize given by linGrad automatically decreases during the training. It remains to be further investigated whether this automatic decay by linGrad fits the known optimal stepsize schemes for later stages of training. \begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth] {figs/CIFAR_obj_hist} \includegraphics[width=0.49\textwidth] {figs/CIFAR_obj_hist_different_mbsize} \caption{LinGrad applied on CIFAR-10 using ResNet-18 with $N_{lin} = 10$. Left: compare SGD and linGrad using $N_s = 128$. Right: compare linGrad with different minibatch sizes.} \label{f:CIFAR} \end{figure} \section{Conclusion} This paper defines the linear range and states how to compute it by comparing tangent solutions with finite differences. 
The linear range is a new criterion that can be used for evaluating the quality of stepsizes and descent directions, and it could have many theoretical and practical applications. In particular, we develop a stochastic gradient descent algorithm, linGrad, where the stepsize is chosen such that all minibatches are within the $\varepsilon^*$-linear range. By applying linGrad on two shallow networks and a ResNet, we find that the fastest convergence is obtained inside the interval $0.3\le\varepsilon^*\le 1.0$, which corresponds to stepsizes differing by less than an order of magnitude. LinGrad can be integrated with many existing gradient descent algorithms to improve the selection of stepsizes, at least during the initial phase of the training process.
\section{Introduction} \label{sec:introduction} \subsection{Previous Anisotropy Searches} \label{ssec:previous} No all-sky observatory has yet flown. Consequently, the first full-sky large anisotropy search was based on combined northern and southern hemisphere ground-based data. The respective data came from the SUGAR~\cite{0305-4616-12-7-015} and AGASA~\cite{Ohoka:1996ww} experiments, taken over a 10~yr period. Nearly uniform exposure to the entire sky resulted. No significant deviation from isotropy was seen by these experiments, even at energies beyond $4\e{19}$~eV~\cite{Anchordoqui:2003bx}. More recently, the Pierre Auger Collaboration carried out various searches for large scale anisotropies in the distribution of arrival directions of cosmic rays above $10^{18}$ eV (an EeV)~\cite{Abreu:2011ve,Auger:2012an}. At energies exceeding $6\e{19}$~eV, early hints for a dipole anisotropy existed, but these hints have grown increasingly weaker in statistical strength~\cite{Anchordoqui:2011ks}. The latest Auger study was performed as a function of both declination and right ascension (RA) in several energy ranges above $10^{18}$ eV. Their results were reported in terms of dipole and quadrupole amplitudes. Assuming that any cosmic ray anisotropy is dominated by dipole and quadrupole moments in this energy range, the Pierre Auger Collaboration derived upper limits on their amplitudes. Such upper limits challenge an origin of cosmic rays above $10^{18}$ eV from long lived galactic sources densely distributed in the galactic disk~\cite{Abreu:2012ybu}. In the $E>8$ EeV bin, they did report a dipolar signal with a $p$-value of $6.4\e{-5}$ (not including a ``look elsewhere'' penalty factor)~\cite{ThePierreAuger:2014nja}. Their cutoff of $\sim8$ EeV is above the galactic to extragalactic transition energy of $\sim1$ EeV, but still below the GZK cutoff energy of $\sim55$ EeV. 
Also, Telescope Array (TA), the largest cosmic ray experiment in the northern hemisphere, has reported a weak anisotropy signal above its highest energy cut of 57~EeV~\cite{Abbasi:2014lda}. \subsection{Extreme Universe Space-Based Observatory} \label{subsec:EUSO} Proposals currently exist for all-sky, space-based cosmic-ray detectors such as the Extreme Universe Space Observatory (EUSO)~\cite{Adams:2013hqc} and the Orbiting Wide-Angle Lens (OWL)~\cite{Krizmanic:2013pea}. In addition, work is currently underway to combine datasets from two large ground-based experiments, the Pierre Auger Observatory (Auger) in the southern hemisphere, and Telescope Array (TA) in the northern hemisphere~\cite{Deligny:2014fxa}. This paper will use EUSO as the example for a full-sky observatory, but our conclusions will apply to any full-sky observatory. EUSO is a down-looking telescope optimized for the near-ultraviolet fluorescence produced by extensive air showers in the atmosphere of the Earth. EUSO was originally proposed for the International Space Station (ISS), where it would collect up to 1000 cosmic ray (CR) events at and above 55 \EeV\ ($1\,\EeV=10^{18}$~eV) over a $5$~year lifetime, far surpassing the reach of any ground based project. It must be emphasized that because previous data were so sparse at energies which would be accessible to EUSO, upper limits on anisotropy were necessarily restricted to energies below the threshold of EUSO. EUSO expects many more events at $\sim 10^{20}$~eV, allowing an enhanced anisotropy reach. In addition, EUSO would observe more events with a higher rigidity ${\cal R}=E/Z$, events less bent by magnetic fields; this may be helpful in identifying point sources on the sky. \subsection{Space-Based Advantages} \label{subsec:advantages} EUSO brings two new, major advantages to the search for the origins of extreme-energy (EE) CRs. One advantage is the large field of view (FOV), attainable only with a space-based observatory. 
With a $60^\circ$ opening angle for the telescope, the down pointing (``nadir'') FOV is \begin{equation} \pi(h_{\rm ISS}\tan(30^\circ))^2\approx h_{\rm ISS}^2\approx 150,000\,{\rm km}^2\,. \label{eq:nadirFOV} \end{equation} We will compare the ability to detect large scale anisotropies at a space-based, full-sky experiment with that of a ground-based, partial-sky experiment. For reference, we will use the largest ground-based cosmic ray observatory, the Pierre Auger Observatory~\cite{ThePierreAuger:2013eja}. Auger has a FOV of 3,000 km$^2$. Thus the proposed EUSO FOV, given in Eq.~\ref{eq:nadirFOV}, is 50 times larger for instantaneous measurements (e.g., for observing transient sources). Multiplying the proposed EUSO FOV by an expected 18\% duty cycle yields a time averaged nine-fold increase in acceptance for the EUSO design compared to Auger, at energies where the EUSO efficiency has peaked ($\gtrsim 50$--$100$~\EeV). Tilting the telescope turns the circular FOV given in Eq.~\ref{eq:nadirFOV} into a larger elliptical FOV. The price paid for ``tilt mode'' is an increase in the threshold energy of the experiment. The second advantage of a space-based experiment over a ground-based one is the coverage of the full sky ($4\pi$~steradians) with nearly constant exposure and consistent systematic errors on the energy and angle resolution, again attainable only with a space-based observatory. This paper compares full-sky studies of possible anisotropies to partial-sky studies. The reach benefits from the $4\pi$~sky coverage, but also from the increased statistics resulting from the greater FOV. In addition to the two advantages of space-based observation just listed, a third feature provided by a space-based mission may turn out to be significant. It is the increased acceptance for Earth skimming neutrinos when the skimming chord transits ocean rather than land. On this latter topic, just one study has been published~\cite{PalomaresRuiz:2005xw}. 
The study concludes that an order of magnitude larger acceptance results for Earth skimming events transiting ocean compared to transiting land. Most ground-based observatories will not realize this benefit, since they cannot view ocean chords, although those surrounded by ice, such as IceTop~\cite{Aartsen:2013lla}, may realize a similar benefit from the surrounding ice. The outline of this paper is as follows: We present the difference between partial-sky exposure and full-sky exposure in section~\ref{sec:comparison}. In section~\ref{sec:tools} we review spherical harmonics as applied to a full-sky search for anisotropies, the power spectrum, and anisotropy measures. In section~\ref{sec:moments} we explain the particular interest in the dipole ($\ell=1$) and quadrupole ($\ell=2$) regions of the spherical harmonic space as well as the techniques used here and in the literature for reconstructing the first two spherical harmonics. We also discuss the difficulties in differentiating dipoles and quadrupoles with partial-sky coverage; these difficulties are not present in full-sky coverage. In section~\ref{sec:results} we present the results of our analysis for a pure dipole or quadrupole. Finally, some conclusions are presented in section~\ref{sec:conclusion}. \section{Comparison of Full-Sky Proposed EUSO to Partial-Sky Auger} \label{sec:comparison} The Pierre Auger Observatory is an excellent, largest-in-its-class, ground-based experiment. However, in the natural progression of science, it is expected that eventually ground-based observation will be superseded by space-based observatories. EUSO is proposed to be the first-of-its-class, space-based observatory, building upon ground-based successes. The two main advantages of a space-based observatory over a ground-based observatory are the greater FOV, leading to a greater exposure at EE, and the full-sky nature of the orbiting, space-based observatory. We briefly explore the advantage of the enhanced exposure first. 
The 231-event sample published by Auger over 9.25 years of recording cosmic rays at and above $\sim52$~EeV~\cite{PierreAuger:2014yba} allows us to estimate the flux at these energies. The annual rate of such events at Auger is $\sim 231/9.25=25$. For simplicity, we consider a 250 event sample for Auger, as might be collected over a full decade. Including the suppressed efficiency of EUSO down to $\sim55$~\EeV\ reduces the factor of 9 relative to Auger down to a factor of $\sim$6 for energies at and above 55 \EeV. We arrive at the 450 event sample as the EUSO expectation at and above 55 \EeV\ after three years of running in nadir mode (or, as is under discussion, in tilt mode with an increased aperture but reduced PDM count). A 750 event sample is then expected for five years of EUSO running in a combination of nadir and tilt mode. Finally, the event rate at a given energy as measured by the High Resolution Fly's Eye (HiRes) is known to significantly exceed that of Auger. This leads to a five year event rate at EUSO of about 1000 events. Thus, we consider the motivated data samples of 250, 450, 750, and 1000 events in the simulations that follow. Now we turn to the $4\pi$~advantage. Auger's exposure only covers part of the sky, and is highly nonuniform across even the part that it does see, as shown in Fig.~\ref{fig:auger exposure}. \begin{figure} \centering \includegraphics[width=\columnwidth]{PAOExposure} \caption{Auger's exposure function normalized to $\int\omega(\Omega)d\Omega=4\pi$. 
Note that the exposure is exactly zero for declinations $45^\circ$ and above.} \label{fig:auger exposure} \end{figure} The relative exposure is given explicitly as~\cite{Sommers:2000us} \begin{equation} \begin{gathered} \omega(\delta)\propto\cos a_0\cos\delta\sin\alpha_m+\alpha_m\sin a_0\sin\delta\\ \alpha_m= \begin{cases} 0&{\rm for}\,\xi>1\\ \pi&{\rm for}\,\xi<-1\\ \cos^{-1}\xi\quad&{\rm else,} \end{cases}\\ {\rm where\ } \xi\equiv\frac{\cos\theta_m-\sin a_0\sin\delta}{\cos a_0\cos\delta}\,, \end{gathered} \end{equation} and where $\omega(\delta)$ is the relative exposure at declination $\delta$, $a_0=-35.2^\circ$ is Auger's latitude, and $\theta_m=80^\circ$ is the new~\cite{PierreAuger:2014yba} maximum zenith angle Auger accepts. We have assumed that the detector is effectively uniform in RA and that any variation (due to weather, down time, tilt of the machine, etc.) does not significantly affect the uniformity of the exposure in RA. Auger recently modified its acceptance from $\theta_m=60^\circ\to80^\circ$, with the extension calculated using a different method: The $S(1000)$ technique is used for zenith angles $\theta\in[0^\circ,60^\circ]$, and the $N_{19}$ muon based technique is used for the new range, $\theta\in[60^\circ,80^\circ]$. These inclined events extend Auger's reach up to a declination on the sky of $+45^\circ$, as can be seen in Fig.~\ref{fig:auger exposure}. In contrast, a space-based observatory such as EUSO would see in all directions with nearly uniform exposure. Of course, Auger is an existing observatory, while EUSO is but a proposal. 
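The relative exposure above is easy to evaluate numerically; the following sketch (our own code, using the stated values $a_0=-35.2^\circ$ and $\theta_m=80^\circ$) reproduces the feature noted in the figure caption, namely that the exposure vanishes for declinations above $45^\circ$:

```python
import numpy as np

def relative_exposure(delta_deg, a0_deg=-35.2, theta_m_deg=80.0):
    """Sommers's relative exposure omega(delta) for a uniformly
    operating ground array at latitude a0 with max zenith angle theta_m."""
    a0, tm, d = map(np.radians, (a0_deg, theta_m_deg, delta_deg))
    xi = (np.cos(tm) - np.sin(a0) * np.sin(d)) / (np.cos(a0) * np.cos(d))
    if xi > 1:
        am = 0.0          # declination never visible below theta_m
    elif xi < -1:
        am = np.pi        # declination always visible
    else:
        am = np.arccos(xi)
    return np.cos(a0) * np.cos(d) * np.sin(am) + am * np.sin(a0) * np.sin(d)
```

For Auger's parameters the exposure peaks toward the southern celestial pole and is identically zero for $\delta\gtrsim45^\circ$, matching Fig.~\ref{fig:auger exposure}.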
\section{Tools for Anisotropy Searches} \label{sec:tools} \subsection{Spherical Harmonics on the Sky} \label{ssec:spherical} As emphasized by Sommers over a dozen years ago~\cite{Sommers:2000us}, a full-sky survey offers a rigorous expansion in spherical harmonics, of the normalized spatial event distribution $I(\Omega)$, where $\Omega$ denotes the solid angle parameterized by the pair of latitude ($\theta$) and longitude ($\phi$) angles, \begin{equation} I(\Omega)\equiv \frac{N(\Omega)}{\int d\Omega\, N(\Omega)} =\sum_{\ell=0}^\infty \, \sum_{|m| \le \ell} a_{\ell m}\,\ylm(\Omega)\,, \label{eq:SphHarm} \end{equation} i.e., the set $\{\ylm\}$ is complete. $N(\Omega)$ is the number of events seen in the solid angle $\Omega$. The spherical harmonic coefficients, $a_{\ell m}$, then contain all the information about the distribution of events. The set $\{\ylm\}$ is also orthonormal, obeying \begin{equation} \int d\Omega\;Y_{{\ell_1} {m_1}}(\Omega)\,Y_{{\ell_2}{m_2}}(\Omega)=\delta_{{\ell_1}{\ell_2}}\,\delta_{{m_1}{m_2}}\,. \label{eq:ortho} \end{equation} We are interested in the real-valued, orthonormal $\ylm$'s, defined as \begin{equation} Y_{\ell m}(\theta,\phi)= N(\ell,m) \begin{cases} P^\ell_m(x)(\sqrt{2}\cos(m\phi))&m>0\\ P_\ell(x)&m=0\\ P^\ell_m(x)(\sqrt{2}\sin(m\phi))\quad&m<0 \end{cases}\,, \end{equation} where $P^\ell_m$ is the associated Legendre polynomial, $P_\ell=P^\ell_{m=0}$ is the regular Legendre polynomial, $x\equiv\cos\theta$, and the normalization-factor squared is $N^2(\ell,m) = \frac{(2\ell+1)(\ell-m)!}{4\pi\,(\ell+m)!}$. The lowest multipole is the $\ell=0$ monopole, equal to the average full-sky flux and fixed by the normalization. The higher multipoles ($\ell\ge 1$) and their amplitudes $a_{\ell m}$ correspond to anisotropies. As guaranteed by the orthogonality of the $\ylm$'s, the higher multipoles integrate to zero over the whole sky. A nonzero $m$ corresponds to $2\,|m|$ longitudinal ``slices'' ($|m|$ nodal meridians). 
There are $\ell+1-|m|$ latitudinal ``zones'' ($\ell-|m|$ nodal latitudes). In Fig.~\ref{fig:nodal} \begin{figure*}[t] \centering \newcommand\nodalwidth{0.197\textwidth} \includegraphics[width=\nodalwidth]{Nodal_lines_0_0} \includegraphics[width=\nodalwidth]{Nodal_lines_1_0} \includegraphics[width=\nodalwidth]{Nodal_lines_1_1}\\ \includegraphics[width=\nodalwidth]{Nodal_lines_2_0} \includegraphics[width=\nodalwidth]{Nodal_lines_2_1} \includegraphics[width=\nodalwidth]{Nodal_lines_2_2}\\ \includegraphics[width=\nodalwidth]{Nodal_lines_3_0} \includegraphics[width=\nodalwidth]{Nodal_lines_3_1} \includegraphics[width=\nodalwidth]{Nodal_lines_3_2} \includegraphics[width=\nodalwidth]{Nodal_lines_3_3} \caption{Nodal lines separating excess and deficit regions of sky for various $(\ell, m)$ pairs. The top row shows the $(0, 0)$ monopole, and the partition of the sky into two dipoles, $(1, 0)$ and $(1, 1)$. The middle row shows the quadrupoles $(2, 0)$, $(2, 1)$, and $(2, 2)$. The bottom row shows the $\ell=3$ partitions, $(3, 0)$, $(3, 1)$, $(3, 2)$, and $(3, 3)$.} \label{fig:nodal} \end{figure*} we show the partitioning described by some low multipole moments. Useful visualizations of spherical harmonics can also be found in Ref.~\cite{wiki:sphr.harm}. The configurations with $(\ell,-|m|)$ are related to those with $(\ell,+|m|)$ by a longitudinal phase advance $\phi\then\phi+\frac{\pi}{2|m|}$, i.e., $\cos(|m|\phi)\then\sin(|m|\phi)$. \subsection{Power Spectrum} \label{ssec:power spectrum} The coefficients of the real-valued spherical harmonics, the $a_{\ell m}$'s, are real and frame-dependent. To combat that problem, we use the power spectrum defined by \begin{equation} C_\ell\equiv\frac1{2\ell+1}\sum_{m=-\ell}^\ell a_{\ell m}^2\,. \label{eq:power spectrum} \end{equation} That the $C_\ell$ should be rotationally (coordinate) invariant is not obvious. Certainly the spherical harmonics coefficients, the $a_{\ell m}$'s given in Eq.~\ref{eq:SphHarm}, are coordinate dependent. 
A simple rotation in the $\phi$ coordinate will change the $\sin\phi,\cos\phi$ part of the spherical harmonic for $m\neq0$, and a rotation in the $\theta$ coordinate will change the associated Legendre polynomial part ($P_\ell^m(\theta)$) for $\ell\neq0$. So only the $\ell=m=0$ monopole coefficient is coordinate independent. However, the power spectrum $C_\ell$ is invariant under rotations. A recent derivation of this fact can be found in the appendices of Ref.~\cite{Denton:2014hfa}. A simple approximation for the number of cosmic rays necessary to resolve power at a particular level is to count the number $N_Z(\ell,m)$ of nodal zones in each $\ylm$. Each $\ylm$ has \begin{equation} N_Z(\ell,m)= \begin{cases} \ell+1&m=0\\ 2|m|(\ell+1-|m|)\qquad&m\neq0 \end{cases}\, \end{equation} nodal zones. The average over $m$ of the number of nodal zones at a given $\ell$ is \begin{equation} \langle N_Z(\ell)\rangle=\frac{\ell+1}{3(2\ell+1)}(2\ell^2+4\ell+3)\,. \end{equation} For low values of $\ell$, this returns the obvious results, $\langle N_Z(\ell=0)\rangle=1,\langle N_Z(\ell=1)\rangle=2$. For large $\ell$, $\langle N_Z(\ell)\rangle\to\ell^2/3$. If we make the simple assumption of requiring $\mathcal O(1)$ event per nodal zone to resolve a particular term in the power spectrum, then for large $\ell$ we require $\sim\ell^2/3$ events to resolve $C_\ell$. Thus, the rule of thumb is that our EUSO fiducial samples of 450, 750, and 1000 events can resolve the $C_\ell$'s up to $\ell$-values in the mid-30's, mid-40's, and mid-50's, respectively, i.e., (using $\theta\sim \frac{90^\circ}{\ell}$) can resolve structures on the sky down to $2$--$3^\circ$. A ground-based observatory, due to having fewer events and no full-sky coverage, would do much worse. We note that the statistical error in angle estimated here for EUSO is well-matched to the expected systematic angular resolution error~$\sim 1^\circ$ of EUSO. 
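The counting rule of thumb above can be checked with a few lines (a sketch of our own, verifying the nodal-zone counts and their $m$-average against the closed form quoted in the text):

```python
from fractions import Fraction

def n_zones(ell, m):
    """Nodal zones of Y_lm: (ell + 1) for m = 0, else 2|m|(ell + 1 - |m|)."""
    return ell + 1 if m == 0 else 2 * abs(m) * (ell + 1 - abs(m))

def mean_zones(ell):
    """Exact average of n_zones over the 2*ell + 1 values of m at fixed ell."""
    return Fraction(sum(n_zones(ell, m) for m in range(-ell, ell + 1)),
                    2 * ell + 1)
```

For every $\ell$ this average reproduces $\frac{\ell+1}{3(2\ell+1)}(2\ell^2+4\ell+3)$, giving 1 and 2 for $\ell=0,1$ and approaching $\ell^2/3$ for large $\ell$; inverting $\ell^2/3=N$ for $N=450$, 750, and 1000 events gives $\ell\approx37$, 47, and 55, the mid-30's through mid-50's quoted above.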
\subsection{Anisotropy Measures} \label{ssec:anisotropy measures} Commonly, a major component of the anisotropy is defined via a max/min directional asymmetry, \begin{equation} \alpha\equiv\frac{I_{\max}-I_{\min}}{I_{\max}+I_{\min}}\in[0,1]\,. \end{equation} A dipole (plus monopole) distribution is defined by a dipole axis and an intensity map given by $I(\Omega)\propto1+A\cos\theta$, where $\theta$ is the angle between the direction of observation, denoted by $\Omega$, and the dipole axis. This form contains a linear combination of the $Y_{1m}$'s. In particular, a monopole term is required to keep the intensity map positive definite. One readily finds that the anisotropy due to a dipole is simply $\alpha_D=A$. A quadrupole distribution (with a monopole term but without a dipole term) is similarly defined, as $I(\Omega)\propto1-B\cos^2\theta$. In the frame where the $\uv z$ axis is aligned with the quadrupole axis, the quadrupole contribution is composed of just the $Y_{20}$ term. In any other frame, this $Y_{20}$ is related to all the $Y_{2m}$'s by the constraint of rotational invariance of the $C_\ell$'s mentioned above. In any frame, one finds that the anisotropy measure is $\alpha_Q=\frac B{2-B}$; the inverse relation is $B=\frac{2\alpha_Q}{1+\alpha_Q}$. Sample sky maps of dipole and quadrupole distributions are shown in Fig.~\ref{fig:skymaps} for both full-sky acceptance and for Auger's acceptance, along with the actual and reconstructed symmetry axes. \begin{figure*} \centering \includegraphics[width=0.48\textwidth]{DipoleToEUSOMap} \includegraphics[width=0.48\textwidth]{DipoleToPAOMap} \includegraphics[width=0.48\textwidth]{QuadToEUSOMap} \includegraphics[width=0.48\textwidth]{QuadToPAOMap} \caption{Shown are sample sky maps of 500 cosmic rays. The top row corresponds to the $\alpha_{D,{\rm true}}=1$ dipole, while the bottom row corresponds to the $\alpha_{Q,{\rm true}}=1$ quadrupole distribution. 
The left and right panels correspond to all-sky, space-based and partial-sky, ground-based coverage, respectively. The injected dipole or quadrupole axis is shown as a blue diamond, and the reconstructed direction is shown as a red star. We see that reconstruction of the multipole direction with an event number of 500 is excellent for an all-sky observatory (left panels) and quite good for partial-sky Auger (right panels). In practice, $\alpha_D$ and $\alpha_Q$ are likely much less than unity, and the event rate for EUSO is expected to be $\sim 9$ times that of Auger. Both effects on the comparison of Auger and EUSO are shown in subsequent figures.} \label{fig:skymaps} \end{figure*} \section{Reconstructing Spatial Moments} \label{sec:moments} \subsection{Reconstructing a Dipole Moment} \label{ssec:reconstructing dipole} Dipoles excite the specific spherical harmonics corresponding to $Y_{1m}$, with the three $Y_{1m}$'s proportional to $\uv x$, $\uv y$, and $\uv z$. A dipolar distribution is theoretically motivated by a single distant point source producing the majority of EECRs whose trajectories are subsequently smeared by galactic and extragalactic magnetic fields. With full-sky coverage it is straightforward to reconstruct the dipole moment so long as the exposure function is always nonzero (though possibly nonuniform). For the full-sky case (EUSO), we use the method described in~\cite{Sommers:2000us}, which even allows for a nonuniform exposure, provided that the exposure covers the full sky. Reconstructing any anisotropy, including the dipole, with partial-sky exposure is challenging. One approach for dipole reconstruction is that presented in~\cite{Aublin:2005nv}. We refer to this approach as the AP method. We note that this AP approach becomes very cumbersome for reconstructing the quadrupole and higher multipoles. 
Another approach for reconstructing any $\ylm$ with partial coverage is presented in~\cite{Billoir:2007kb}, which we refer to as the $K$-matrix approach. For the dipole with partial-sky coverage, we compare these two approaches to determine which optimally reconstructs a given dipole distribution. Our result is seen in Fig.~\ref{fig:APvK}. We consider 500 cosmic rays with a dipole distribution of magnitude $\alpha_{D,\rm{true}}=1$ oriented in a random direction. Using Auger's exposure map, we then reconstruct the strength of the dipole using each method. This entire process was repeated 500 times. The reconstructed values of $\alpha_D$ are $\alpha_{D,\rm{rec}}=1.013\pm0.101$ and $\alpha_{D,\rm{rec}}=1.009\pm0.084$ for the AP and $K$-matrix approaches, respectively, where the uncertainty is statistical. The mean angles between the actual dipole direction and the reconstructed dipole direction are $\theta=8.82^\circ$ and $\theta=6.41^\circ$ for the AP and $K$-matrix approaches, respectively. The results of the two approaches are comparable. Since the $K$-matrix approach does slightly better than the AP method, we will use the $K$-matrix approach for partial-sky dipole reconstructions in what follows. \begin{figure*} \centering \includegraphics[width=0.48\textwidth]{AP_v_K_alphas} \includegraphics[width=0.48\textwidth]{AP_v_K_thetas} \caption{We simulated 500 cosmic rays with a dipole of amplitude $\alpha_{D,{\rm true}}=1$ pointing in a random direction, and with the Auger exposure. Then we reconstructed both the direction and the dipole amplitude 500 times with both reconstruction techniques (AP and $K$-matrix). 
In the left panel we show a histogram of the reconstructed values of $\alpha_{D}$, and in the right panel the angle (in degrees) between the correct dipole direction and the reconstructed direction.} \label{fig:APvK} \end{figure*} \subsection{Reconstructing a Quadrupole Moment} \label{ssec:reconstructing quadrupole} The physically motivated quadrupoles are the spherical harmonics corresponding to $Y_{20}\propto3z^2-1$. $Y_{20}$ represents an anisotropy that is maximal along the equator and minimal along the poles (or, depending on the sign of $a_{20}$, the opposite). Such a distribution is motivated by the presence of many sources distributed along a plane, such as is the case with the supergalactic plane. As a real, physical example of a well-known source distribution with a quadrupole contribution, we calculate the power spectrum as might be seen at the Earth for the 2MRS catalog of the closest 5310 galaxies above a minimum intrinsic brightness~\cite{Huchra:2011ii}. The catalog contains redshift information for all such galaxies out to $z=0.028$ ($\sim120$ Mpc). As such, it is reasonable to suppose that EECRs come from these galaxies and, for simplicity, we assume a uniform flux from each galaxy. In the left panel of Fig.~\ref{fig:Clgals} we show the power spectrum that results for the known physical locations of these galaxies. In the right panel we show the power spectrum that results when each galaxy is weighted by the number of events expected from it, i.e., by the inverse-square of the distance to the galaxy, a $1/d^2$ weighting. We remark that for the closest $\sim200$~galaxies, the distance to each galaxy is known better from the direct ``cosmic distance ladder'' approach than it is from the redshift, and we use these direct distances. For the farther galaxies, direct distances are less reliable, and we use the redshift-inferred distances. 
In this way, we also avoid any (possibly large) peculiar-velocity contributions to the redshifts of the nearer galaxies. It is instructive to compare the two panels. Without the $1/d^2$ weighting (left panel), the intrinsic quadrupole nature of the distribution of 2MRS galaxies dominates the power spectrum; $C_2$ exceeds the other $C_\ell$'s in the panel by a factor of $\gtrsim5$. In the right panel, galaxies are weighted by their apparent fluxes, so the closest galaxies dominate. The large dipole is due to the proximity of Cen A and the fact that the next closest galaxy, M87, is $\sim4$ times farther from the Earth. When determinations of the $C_\ell$'s are made, it is likely to be the dipole and quadrupole that will first emerge from the data, based on the distributions of nearby galaxies. This quantifiably motivates our choice made in this paper to examine the dipole and quadrupole anisotropies. While the actual distribution is likely a combination of dipole and quadrupole components, throughout this paper we consider the simpler cases where the distribution of sources has either a pure dipole anisotropy or a pure quadrupole anisotropy. \begin{figure*} \includegraphics[width=\columnwidth]{Cluws.pdf} \includegraphics[width=\columnwidth]{Clws.pdf} \caption{The power spectrum (see Eq.~\ref{eq:power spectrum}) for nearby galaxies out to $z=0.028$ (to $d=120$ Mpc) based on their positions (left panel), and weighted according to $1/d^2$ (right panel). The 2MRS catalog~\cite{Huchra:2011ii} includes a cut on Milky Way latitudes $|b|<10^\circ$ which is accounted for in the calculation of the power spectrum. $C_2$ is large because the galaxies roughly form a planar (quadrupolar) structure; $C_1$ in the right panel is large because we are not at the center of the supercluster, thereby inducing a dipole contribution. (The relative scale between the ordinates of the two figures carries no information.) 
} \label{fig:Clgals} \end{figure*} As mentioned in section~\ref{ssec:anisotropy measures}, the quadrupole distribution that will be considered in this paper is of the form $1-B\cos^2\theta$, aligned with a particular quadrupole axis. The quadrupolar distribution is a linear combination of the monopole term $Y_{00}$ and $Y_{20}$ oriented along the quadrupole axis. The distribution has two minima at opposite ends of the quadrupole axis and a maximum in the plane perpendicular to this axis. The quadrupolar data and the reconstruction of the quadrupole axis are shown in the lower panel of Fig.~\ref{fig:skymaps}. For the full-sky case, the method outlined by Sommers in~\cite{Sommers:2000us} is used to reconstruct the quadrupole amplitude and axis. It is possible to accurately reconstruct the quadrupole moment for experiments with partial-sky exposure at particular latitudes, independently of their exposure function. This is because there is very little quadrupole moment in the exposure function, as discussed in Ref.~\cite{Denton:2014hfa}. By some chance, Auger is exactly at the optimal latitude in the southern hemisphere, and TA is very close to the optimal latitude in the northern hemisphere. Therefore we use Sommers's technique for quadrupole reconstruction of both full-sky EUSO and partial-sky Auger. \subsection{Distinguishing Between Dipoles and Quadrupoles} \label{ssec:distinguishing} \begin{figure*} \centering \includegraphics[width=0.497\textwidth]{PAO_dipole_to_quad} \includegraphics[width=0.497\textwidth]{EUSO_dipole_to_quad} \includegraphics[width=0.497\textwidth]{PAO_quad_to_dipole} \includegraphics[width=0.497\textwidth]{EUSO_quad_to_dipole} \caption{These panels show the results of attempting to reconstruct a dipole (quadrupole) when there is actually a quadrupole (dipole). 
The top two panels show the effect of attempting to infer a quadrupole moment from a pure dipole state of varying magnitudes, while the bottom two panels show the effect of attempting to infer the dipole moment from a pure quadrupole state of varying magnitudes. The left two panels assume Auger's partial coverage and $250$ cosmic rays, while the right panels assume uniform exposure and the estimated number of events for EUSO (450 minimally, and 1000 maximally). The mean values and the one standard deviation error bars are derived from 500 samplings. Note that the leftmost data point in each plot ($\alpha_{(D,Q),\rm{true}}=0$) corresponds to the isotropic case, for which the dashed lines are the 95\% upper limit. Finally, note that the vertical scales vary significantly between the partial-sky, low-statistics and full-sky, larger-statistics figures. } \label{fig:distinguishing} \end{figure*} One topic of concern is determining at what significance an injected dipole (quadrupole) distribution can be distinguished from a quadrupole (dipole), and from isotropy. Generally, the level of significance will depend on the number of observed cosmic-ray events, the strength of the anisotropy, etc. Fig.~\ref{fig:distinguishing} shows what happens when Auger or EUSO attempt to reconstruct a pure dipole or a pure quadrupole when the signal is actually the opposite. The mean values and one standard deviation error bars are derived from 500 repetitions of the given number of cosmic-ray events, where the dipole or quadrupole axis direction is randomly distributed on the sphere. The dashed lines in each plot are the 95\% upper limit for an isotropic distribution (i.e., $\alpha_{\rm true}=0$). We see that as the actual anisotropy strength increases, quite a significant region of the parameter space would show an anisotropy in the absent multipole at the 95\% confidence level when reconstructed by Auger.
We also see that the relative size of the error bars reflects the statistical advantage of space-based observatories, while the central values of the data points, falsely rising with $\alpha_{\rm true}$ for Auger but constant for EUSO, reveal the systematic difference of partial-sky coverage versus full-sky coverage. This entire discussion is easily understood in the context of the ``interference'' of spherical harmonics that have been effectively truncated on the part of the sky where the exposure vanishes. The various truncated harmonics interfere heavily, a fact that is built into the $K$-matrix method (and into any method that attempts to reconstruct spherical harmonics based on only partial-sky exposure). Even though the true exposure of EUSO will not be exactly uniform, the fact that it sees the entire sky with nearly comparable coverage means that the individual spherical harmonics are non-interfering, and so can be treated independently. \section{Results} \label{sec:results} In this section we tally our results. The standard procedure involves simulating a number of cosmic rays with a given dipolar or quadrupolar anisotropy shape and amplitude ($\alpha_{\rm true}$) aligned in a random direction. We then reconstruct the amplitude ($\alpha_{\rm rec}$) and direction (here we assume knowledge of the kind of anisotropy -- dipole or quadrupole -- unlike in Section~\ref{ssec:distinguishing}) and compare to the true values. This process is repeated 500 times and the shown uncertainties are one standard deviation over the 500 repetitions. \subsection{Dipole results} \label{ssec:dipole results} In Fig.~\ref{fig:dipole reconstruct} we compare the capabilities of design EUSO and Auger to reconstruct a dipole anisotropy. In this comparison, both advantages of EUSO, namely the increased FOV and the $4\pi$~sky coverage, are evident.
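For the full-sky case, the simulate-and-reconstruct loop just described can be sketched with a few lines of code. This is our own minimal illustration, not the paper's pipeline: it assumes a pure dipole along the $z$ axis with flux $\propto 1+\alpha_D\cos\theta$, for which $\mathrm{E}[\cos\theta]=\alpha_D/3$, so $\alpha_{\rm rec}=3\langle\cos\theta\rangle$ is an unbiased full-sky estimator; exposure effects and the $K$-matrix machinery are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_dipole_cos(alpha, n):
    """Rejection-sample cos(theta) from the dipole pdf (1 + alpha*cos)/2 on [-1, 1]."""
    out = []
    while len(out) < n:
        u = rng.uniform(-1.0, 1.0, size=n)
        keep = rng.uniform(0.0, 1.0, size=n) < (1.0 + alpha * u) / (1.0 + alpha)
        out.extend(u[keep].tolist())
    return np.array(out[:n])

def reconstruct_alpha(cos_theta):
    """Full-sky estimator: E[cos(theta)] = alpha/3, so alpha_rec = 3 * mean."""
    return 3.0 * cos_theta.mean()

alpha_rec = reconstruct_alpha(sample_dipole_cos(0.5, 1000))
```

Repeating this with 500 independent samples of a given size reproduces the kind of mean and spread quoted for $\alpha_{\rm rec}$ in the figures.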
\begin{figure*}[t] \centering \includegraphics[width=0.497\textwidth]{Fixed_alphaD_v_N} \includegraphics[width=0.497\textwidth]{Theta_v_alphaD} \includegraphics[width=0.497\textwidth]{ADrec_v_ADac} \includegraphics[width=0.497\textwidth]{DipoleSig} \caption{Reconstruction of the dipole amplitude and direction across various parameters. Each data point is the mean value (and one standard deviation error bar as applicable) determined from 500 independent simulations. The dipole amplitude and direction for Auger's partial coverage were reconstructed with the $K$-matrix approach. The ordinate on the fourth panel, $\frac{\alpha_{\rm true}}{\Delta\alpha_{\rm rec}}$, labels the number of standard deviations above $\alpha_D=0$.} \label{fig:dipole reconstruct} \end{figure*} The first panel shows how changing only the exposure function between Auger and EUSO affects the value of the reconstructed dipole amplitude. For the same number of cosmic-ray events, the EUSO reconstruction is somewhat closer to the expected value and has a smaller variation than does the Auger reconstruction. In the next panel we show the angular separation between the actual dipole direction and the reconstructed direction for Auger's maximal data set of $\sim250$ cosmic-ray events, compared to EUSO's minimal and maximal data sizes: 450 and 1000 cosmic-ray events, respectively. Even for a pure dipole, Auger will only reach $10^\circ$ accuracy in dipole direction for a maximum strength dipole, $\alpha_D=1$, while EUSO does much better. In the third panel we compare both experiments at the same number of cosmic rays across a range of dipole strengths. Even if we assume that Auger will see significantly more cosmic rays than it is expected to, its error in reconstructing a dipole of any amplitude remains larger than EUSO's. Low dipole magnitudes will always lead to a small, erroneously reconstructed dipole due to the random walk away from zero.
Finally, in the fourth panel we show the discovery power of each experiment to distinguish a dipole amplitude from isotropy. We see that Auger with 250 events would claim a discovery at five standard deviations above isotropy only if the dipole strength is $0.62$ or greater -- a situation that is unlikely given Auger's anisotropy results to date~\cite{Deligny:2014fxa}. EUSO could claim the same significance if the dipole amplitude is $0.37$, $0.30$, $0.27$, or greater, for 450, 750, or 1000 events, respectively. EUSO's statistics should be enough to probe at high significance the weak signal currently reported by Auger. \subsection{Quadrupole results} \label{ssec:quadrupole results} In Fig.~\ref{fig:quad reconstruct} we again compare Auger and design EUSO in the context of quadrupole anisotropies. The same panels are plotted here as in Fig.~\ref{fig:dipole reconstruct} except with an initial quadrupole rather than dipole anisotropy, and a quadrupole reconstructed. We note that while the increased number of events that EUSO will detect will certainly lead to a better resolution of the quadrupole amplitude (as shown in the first and fourth panels) and direction (as shown in the second panel), we see that the full-sky coverage does not provide any benefit in this case (as deduced from the first and third panels). This result confirms the claims made in \S~\ref{ssec:reconstructing quadrupole} and in Ref.~\cite{Denton:2014hfa}. Even though EUSO gains no benefit from its full-sky exposure for the determination of a quadrupole anisotropy, EUSO's increased statistics will still lead to a detection sooner than Auger. Auger with 250 events would only be expected to claim a quadrupole discovery at five standard deviations above isotropy if the quadrupole strength is $0.67$ or greater. EUSO could claim the same significance if the quadrupole amplitude is $0.47$, $0.36$, $0.29$, or greater, for 450, 750, or 1000 events, respectively.
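As a rough cross-check of these thresholds (our approximation, not the paper's simulation), the statistical uncertainty on a reconstructed amplitude shrinks approximately as $1/\sqrt{N}$, so a five-standard-deviation threshold quoted at one event count can be scaled to another:

```python
import math

def scaled_threshold(alpha0, n0, n):
    """Scale a discovery threshold alpha0, quoted at n0 events, to n events,
    assuming the error on the reconstructed amplitude shrinks as 1/sqrt(n)."""
    return alpha0 * math.sqrt(n0 / n)

# Scaling the quoted EUSO dipole threshold of 0.37 at 450 events gives
# ~0.29 at 750 events and ~0.25 at 1000 events, close to the simulated
# values of 0.30 and 0.27; residual differences reflect effects beyond
# pure 1/sqrt(N) statistics.
```

This simple scaling tracks the simulated thresholds at the $\sim$10\% level.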
\begin{figure*}[t] \centering \includegraphics[width=0.497\textwidth]{Fixed_alphaQ_v_N} \includegraphics[width=0.497\textwidth]{Theta_v_alphaQ} \includegraphics[width=0.497\textwidth]{AQrec_v_AQac} \includegraphics[width=0.497\textwidth]{QuadSig} \caption{Reconstruction of the quadrupole amplitude and direction across various parameters. Each data point is the mean value (and one standard deviation error bar as applicable) determined from 500 independent simulations. The quadrupole amplitude and direction for Auger's partial coverage were reconstructed with the same Sommers approach as for the full-sky (EUSO) case. The ordinate on the fourth panel, $\frac{\alpha_{\rm true}}{\Delta\alpha_{\rm rec}}$, labels the number of standard deviations above $\alpha_Q=0$.} \label{fig:quad reconstruct} \end{figure*} \section{Conclusion} \label{sec:conclusion} Many well-motivated models predict, in the simplest limit, a dipolar or quadrupolar anisotropy in the EECR flux. The importance of the two lowest non-trivial orders ($\ell=1,2$) can be seen from the 2MRS distribution of the 5310 nearest galaxies demonstrated in Fig.~\ref{fig:Clgals}. Due to the lack of any conclusive anisotropy from the partial-sky ground-based experiments, we explored the possible benefits that a full-sky space-based experiment, such as the proposed EUSO, has over a ground-based experiment for detecting dipolar or quadrupolar anisotropies. In particular, we see that in addition to the increased statistics that the proposed EUSO brings over any ground-based experiment, the proposed EUSO significantly outperforms Auger when reconstructing a dipole. Moreover, for inferences of both the dipole and the quadrupole, partial-sky experiments fail to differentiate between the two due to the mixing of the spherical harmonics when truncated by the exposure function. This situation is not present with all-sky observation, where the exposure function is nearly uniform and nonzero everywhere.
\section{Acknowledgements} \label{sec:ack} We have benefited from several discussions with L.A.~Anchordoqui. This work is supported in part by a Vanderbilt Discovery Grant.
\section{Introduction} Electrons and photons are reconstructed with high purity and efficiency in the CMS experiment, one of the two general-purpose detectors operating at the CERN LHC~\cite{Evans:2008zzb}. These electromagnetically interacting particles leave a distinctive signal in the electromagnetic calorimeter (ECAL) as an isolated energy deposit that is also associated with a trace in the silicon tracker in the case of electrons. These properties, together with the excellent energy resolution of the ECAL, make electrons and photons ideal to use both in precision measurements and in searches for physics beyond the standard model with the CMS detector. After a very successful Run 1 at 7 and 8\TeV during the years 2009--2012, which culminated in the discovery of the Higgs boson in July 2012~\cite{Chatrchyan:2013lba,Aad:2012tfa}, and a two-year maintenance period, the LHC resumed its operations in 2015 with LHC Run 2, providing proton-proton ($\Pp\Pp$) collisions at an increased center-of-mass energy of 13\TeV. In this paper, the performance of the reconstruction and identification of electrons and photons with the CMS detector in Run 2 is presented. The Run 1 results are reported in Refs.~\cite{Khachatryan:2015hwa, Khachatryan:2015iwa}. The new results are based on $\Pp\Pp$ collision data collected during 2016--2018, and correspond to a total integrated luminosity of 136\fbinv~\cite{CMS-PAS-LUM-17-001,CMS-PAS-LUM-17-004,CMS-PAS-LUM-18-002}. The $\Pp\Pp$ collisions were delivered with a 25\unit{ns} bunch spacing, and an average number of interactions per beam crossing (pileup or PU) increasing through the years from 22 to 32. In addition, the reconstruction of electrons and photons in lead-lead (PbPb) ion collisions is presented, which requires specific updates because of the significantly higher particle multiplicity compared with $\Pp\Pp$ collisions.
The PbPb collisions were recorded in 2018 at a nucleon-nucleon (NN) center-of-mass energy of $\sqrtsNN = 5.02$\TeV, corresponding to an integrated luminosity of $1.7$\nbinv. Table \ref{tb:objectives} lists the main objectives described in the paper concerning electrons and photons, the summary of the methods used to achieve them, as well as the reference to the sections in the paper where they are described. \begin{table}[hbp] \centering \topcaption{List of the main objectives described in the paper concerning electrons and photons, a summary of the methods used to achieve them, and a reference to the section where they are detailed. } \begin{tabular}{p{0.35\textwidth}p{0.45\textwidth}c} \hline Objective & Method & Section \\ \hline Offline reconstruction& Clustering and tracking algorithms integrated in the ``particle-flow'' framework&\ref{sec:sec4} \\ Online reconstruction&Clustering and tracking algorithms with minimal differences with respect to the offline reconstruction, but not integrated in the ``particle-flow'' framework& \ref{sec:sec5}\\ Energy regression & Multivariate technique &\ref{sec:EnergyReg}\\ Energy scale and spreading & ``Fit method'' and ``spreading method'' & \ref{sec:SS}\\ Identification& Cut-based and multivariate selections& \ref{sec:sec8}\\ Performance comparison among the years& Energy reconstruction and object identification& \ref{sec:perfUL2017}\\ Timing& Comparison of arrival time of electrons from $\PZ$ decay& \ref{sec:sec8timing}\\ Performance in PbPb collisions& Clustering and tracking algorithms integrated in the modified ``particle-flow'' framework&\ref{sec:sec10} \\ \hline \end{tabular} \label{tb:objectives} \end{table} \section{The CMS detector} This section describes in detail the parts and features of the CMS detector relevant for this paper. A more detailed description of the CMS detector, together with a definition of the relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}. 
The right-handed coordinate system adopted by CMS is centered in the nominal collision point inside the experiment, the $y$ axis pointing vertically upward, and the $x$ axis pointing radially inward towards the LHC center. The azimuthal angle $\phi$ is measured in radians relative to the $x$ axis in the $x$-$y$ plane. The polar angle $\theta$ is measured relative to the $z$ axis. Pseudorapidity $\eta$ is defined as $\eta \equiv -\ln \left (\tan{\theta/2}\right )$. The central feature of the CMS apparatus is a superconducting solenoid with an internal diameter of 6\unit{m}, providing a magnetic field of 3.8\unit{T}. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate (PbWO$_{4}$) crystal electromagnetic calorimeter, and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters (HF) extend the $\eta$ coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside of the solenoid. The silicon tracker measures charged particles within $\abs{\eta} < 2.5$ and is composed of silicon pixel and strip detectors. The CMS Phase~1 pixel detector~\cite{Dominguez:1481838}, installed during the 2016--17 winter shutdown, is designed to cope with an instantaneous luminosity of $2\times10^{34}\percms$ at 25\unit{ns} bunch spacing and to maintain an excellent reconstruction efficiency. The original (Phase~0) pixel detector had three layers in the barrel and two disks in each of the endcaps, whereas the Phase~1 pixel detector has one additional layer in the barrel and one additional disk in each endcap, with a total of 124 million pixels.
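The $\eta$--$\theta$ mapping just defined can be made concrete with a small numerical helper (a sketch for orientation only; the function names are ours):

```python
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)), with theta the polar angle from the +z axis."""
    return -math.log(math.tan(theta / 2.0))

def polar_angle(eta):
    """Inverse mapping: theta = 2*atan(exp(-eta))."""
    return 2.0 * math.atan(math.exp(-eta))
```

For example, $\eta=0$ corresponds to $\theta=90^\circ$, and the tracker acceptance $\abs{\eta}<2.5$ corresponds to polar angles down to about $9.4^\circ$ from the beam axis.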
The amount of material located upstream, i.e., in front of the ECAL, mainly consisting of the tracker, the mechanical support structure, and the cooling system, is expressed in units of radiation lengths $X_0$ and ranges from ${\simeq}0.39 X_0$ at $\abs{\eta} = 0$ to ${\simeq}1.94 X_0$ at $\abs{\eta} = 1.4$, decreasing to ${\simeq}1.53 X_0$ at $\abs{\eta} = 2$~\cite{Khachatryan:2015hwa}. The quoted numbers correspond to the Phase~1 upgraded detector that achieves an overall reduction in the tracker material budget of 0.1--0.3 $X_0$ (or 4--20\%) in the pixel region corresponding to $1.4 <\abs{\eta}< 2.0$. For charged particles of transverse momentum $\pt$ in the range $1 < \pt < 10\GeV$ and $\abs{\eta} < 1.4$, the track resolutions are typically 1.5\% in \pt~\cite{Chatrchyan:2014fea}. The ECAL consists of 75\,848 PbWO$_{4}$ crystals, which cover the range $\abs{\eta} < 1.48$ in the barrel region (EB) and $1.48 < \abs{\eta} < 3.00$ in the two endcap regions (EE). The crystals are $25.8 X_0$ deep in the barrel and $24.7 X_0$ deep in the endcaps. Preshower detectors consisting of two planes of silicon sensors interleaved with a total of $3 X_0$ of lead are located in front of each EE detector. The energy deposited in the ECAL crystals is detected in the form of scintillation light by avalanche photodiodes (APDs) in the EB and by vacuum phototriodes (VPTs) in the EE. The electrical signal from the photodetectors is amplified and shaped using a multigain preamplifier (MGPA), which provides three simultaneous analogue outputs that are shaped to have a rise time of approximately 50\unit{ns} and fall to 10\% of the peak value in 400\unit{ns}~\cite{CERN-LHCC-97-033}. The shaped signals are sampled at the LHC bunch crossing frequency of 40\unit{MHz} and digitized by a system of three channels of floating-point Analog-to-Digital Converters (ADCs).
To maximize the dynamic range (40\MeV to ${\sim}$1.5--3\TeV), three different preamplifiers with different gain settings are used for each of the ECAL crystals, each with its own ADC~\cite{Chatrchyan:2008zzk}. The largest unsaturated digitization from the three ADCs is used to reconstruct electromagnetic objects. The CMS particle-flow (PF) event reconstruction~\cite{Sirunyan:2017ulk}, used to reconstruct and identify each individual particle in an event, optimally combines the information from all subdetectors. In this process, the identification of the particle type (photon, electron, muon, charged or neutral hadron) plays an important role in the determination of the particle direction and energy. Photons (e.g., direct or coming from $\pi^{0}$ decays or from electron bremsstrahlung) are identified as ECAL energy deposits (clusters) not linked to any extrapolated track. Electrons (e.g., direct or coming from photon conversions in the tracker material or from semileptonic decays of hadrons) are identified as primary charged-particle tracks and potentially as ECAL energy clusters. These clusters correspond to the electron tracks extrapolated to the ECAL surface and to possible bremsstrahlung photons emitted by the electron when traversing the tracker material. Muons are identified as tracks in the central tracker consistent with either tracks or several hits in the muon system, and potentially associated with calorimeter deposits compatible with the muon hypothesis. Charged and neutral hadrons may initiate a hadronic shower in the ECAL, and are subsequently fully absorbed in the HCAL. The corresponding clusters are used to estimate their energies and directions. The reconstructed vertex with the largest value of summed physics-object $\pt^2$ is taken as the primary $\Pp\Pp$ interaction vertex.
The physics objects are the jets, clustered using the anti-$\kt$ algorithm with a distance parameter of $R=0.4$~\cite{Cacciari:2008gp, Cacciari:2011ma} with the tracks assigned to the primary vertex as inputs, and the associated missing transverse momentum, which is the negative vector \pt sum of those jets and leptons. Events of interest are selected using a two-tiered trigger system~\cite{Khachatryan:2016bia}. The first level (L1), composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events within a fixed latency of 4\mus of the collision and with a total average rate of about 100\unit{kHz}~\cite{Perrotta:2015jyu}. The second level, known as the high-level trigger (HLT), consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1\unit{kHz} before data storage. Dedicated techniques~\cite{Sirunyan:2020foa} are used in all detector subsystems to reject signals from electronic noise, from pileup, or from particles that do not originate from $\Pp\Pp$ collisions in the bunch crossing of interest, such as particles arriving from $\Pp\Pp$ collisions that occur in adjacent bunch crossings (so-called out-of-time pileup). \section{Data and simulated event samples}\label{sec:sec3} The data used in this paper were collected from $\Pp\Pp$ collisions at 13\TeV, satisfying a trigger requirement of an isolated single electron with $\ET$ thresholds at 27, 32, and 32\GeV in 2016, 2017, and 2018, corresponding to integrated luminosities of 35.9, 41.5, and 58.7\fbinv, respectively. The best detector alignment, energy calibrations, and corrections are performed for the full Run 2 data for each year separately; they are obtained using the procedures described in Refs.~\cite{Chatrchyan:2013dga,Chatrchyan:2014wfa}.
For this paper, only the 2017 data use these updated conditions and best calibrations, since they were already available at the time of writing. This paper documents the performance and results that are used in more than 90\% of CMS physics analyses based on Run 2 data. In the later sections, the recalibrated data set of 2017 is referred to as the ``Legacy'' data set, whereas the 2016 and 2018 data samples are referred to as ``EOY'' (end of year). The improvements brought by the recently recalibrated 2017 data are discussed in Section~\ref{sec:EOYvsUL}. Samples of Monte Carlo (MC) simulated events are used to compare the measured and expected performance. Drell--Yan (DY) $\PZ/\gamma^*$ + jets and $\PZ\to \mu \mu \gamma$ events are simulated at next-to-leading order (NLO) with the \MGvATNLO~(v2.2.2, 2.6.1 and 2.4.2 for 2016, 2017, and 2018 conditions, respectively)~\cite{Alwall:2014hca} event generator, interfaced with \PYTHIA v8.212~\cite{Sjostrand:2014zea} for parton showers and hadronization. The CUETP8M1 underlying event tune~\cite{Khachatryan:2015pea} is used for 2016 MC samples and the CP5~\cite{Sirunyan2020} tune is used for 2017 and 2018 MC samples. The matrix elements are computed at NLO for the three processes $\Pp\Pp \to \PZ + \text{N}_{\text{jets}}$, where $\text{N}_{\text{jets}} = 0$, 1, 2, and merged with the parton showers using the FxFx~\cite{Frederix:2012ps} scheme with a merging scale of 30\GeV. The NNPDF 3.0 (2016) and 3.1 (2017--2018) parton distribution functions (PDFs)~\cite{Ball:2014uwa} are used, at leading order (LO) in 2016 and at next-to-next-to-leading order (NNLO) in 2017--2018. Simulated event samples for $\gamma$ + jet final states from direct photon production are generated at LO with \PYTHIA. The NNPDF2.3 LO PDFs~\cite{Ball:2012cx} are used for these samples. A detailed detector simulation based on the \GEANTfour~(v9.4.3)~\cite{Agostinelli:2002hh} package is applied to all generated events.
The presence of multiple $\Pp\Pp$ interactions in the same and nearby bunch crossings is incorporated by simulating additional interactions (including out-of-time interactions from neighbouring bunch crossings) with a multiplicity that matches that observed in data. \section{Offline electron and photon reconstruction}\label{sec:sec4} \subsection{Overview of strategy and methods} \label{sec:strategyReco} Electrons and photons deposit almost all of their energy in the ECAL, whereas hadrons are expected to deposit most of their energy in the HCAL. In addition, electrons produce hits in the tracker layers. The signals in the ECAL crystals are reconstructed by fitting the signal pulse with multiple template functions to subtract the contribution from out-of-time pileup. This procedure~\cite{Sirunyan:2020pmc} has been used for the whole LHC Run 2 data-taking period, for both the HLT and offline event reconstruction. As during Run 1, the signal amplitudes are corrected by time-dependent crystal response corrections and per-channel intercalibrations. As an electron or photon propagates through the material in front of the ECAL, it may interact with the material, with the electron emitting bremsstrahlung photons and the photon converting into an electron-positron pair. Thus, by the time the electron or photon reaches the ECAL, it may no longer be a single particle, but it could consist of a shower of multiple electrons and photons. A~dedicated algorithm is used to combine the clusters from the individual particles into a single object to recover the energy of the primary electron or photon. Additionally, the trajectory of an electron that loses momentum by emitting brems\-strah\-lung photons changes curvature in the tracker. A~dedicated tracking algorithm, based on the Gaussian sum filter (GSF), is used for electrons to estimate the track parameters~\cite{Adam_2005}.
Electron and photon reconstruction in CMS is fully integrated into the PF framework, and is based on the same basic building blocks as other particles. This is a major change with respect to the Run 1 reconstruction, where different reconstruction algorithms for electrons and photons were used~\cite{Khachatryan:2015hwa}. A brief outline of the reconstruction steps is presented below and a detailed description is given in the following sections. \begin{enumerate} \item The energy reconstruction algorithm starts with the formation of clusters~\cite{Sirunyan:2017ulk} by grouping together crystals with energies exceeding a predefined threshold (typically ${\sim}80\MeV$ in EB and ${\sim}300\MeV$ in EE), which is generally 2 to 3 times larger than the electronic noise expected for these crystals. A seed cluster is then defined as the one containing most of the energy deposited in any specific region, with a minimum transverse energy ($\ET^{\text{seed}}$) above 1\GeV. We define $\ET$ as $\ET=\sqrt{\smash[b]{m^2+\pt^2}}$ for an object of mass $m$ and transverse momentum \pt. \item ECAL clusters within a certain geometric area (``window'') around the seed cluster are combined into superclusters (SC) to include photon conversions and bremsstrahlung losses. This procedure is referred to as ``superclustering''. \item Trajectory seeds in the pixel detector that are compatible with the SC position and the trajectory of an electron are used to seed the GSF tracking step. \item In parallel to the above steps, all tracks reconstructed in the event are tested for compatibility with an electron trajectory hypothesis; if successful, they are also used to seed the GSF tracking step. The ``generic tracks'' are a collection of tracks (not specific to electrons) selected with $ \pt > 2\GeV $, reconstructed from hits in the tracker through an iterative algorithm known as the Kalman filter (KF)~\cite{Sirunyan:2017ulk}.
\item A dedicated algorithm~\cite{Khachatryan:2015iwa} is used to find the generic tracks that are likely to originate from photons converting into $\Pe^+\Pe^-$ pairs. \item ECAL clusters, SCs, GSF tracks and generic tracks associated with electrons, as well as conversion tracks and associated clusters, are all imported into the PF algorithm that links the elements together into blocks of particles. \item These blocks are resolved into electron and photon (\Pe and \PGg) objects, starting from either a GSF track or a SC, respectively. At this point, there is no differentiation between electron and photon candidates. The final list of linked ECAL clusters for each candidate is promoted to a refined supercluster. \item Electron or photon objects are built from the refined SCs based on loose selection requirements. All objects passing the selection with an associated GSF track are labeled as electrons; without a GSF track they are labeled as photons. This collection is known as the unbiased \Pe/\PGg collection and is used as a starting point by the vast majority of analyses involving electrons and photons. \item To separate electrons and photons from hadrons in the PF framework, a tighter selection is applied to these \Pe/\PGg objects to decide if they are accepted as an electron or an isolated photon. If the \Pe/\PGg object passes both the electron and the photon selection criteria, its object type is determined by whether it has a GSF track with a hit in the first layer of the pixel detector. If it fails the electron and photon selection criteria, its basic elements (ECAL clusters and generic tracks) are further considered to form neutral hadrons, charged hadrons or nonisolated photons in the PF framework. This is discussed further in Sec.~\ref{subsec:ged}. 
\end{enumerate} \subsection{Superclustering in the ECAL}\label{sec:ecal-clustering} Energy deposits in several ECAL channels are clustered under the assumption that each local maximum above a certain energy threshold (1\GeV) corresponds to a single particle incident on the detector. An ECAL energy deposit may be shared between overlapping clusters, and a Gaussian shower profile is used to determine the fraction of the energy deposit to be assigned to each of the clusters. Because electrons and photons have a significant probability of showering when traversing the CMS tracker, by the time the particle reaches the ECAL, the original object may consist of several electrons and/or photons produced from bremsstrahlung and/or pair production. The multiple ECAL clusters need to be combined into a single SC that captures the energy of the original electron/photon. This step is known as superclustering and the combining process uses two algorithms. The first is the ``mustache'' algorithm, which is particularly useful for properly measuring low-energy deposits. It uses information only from the ECAL and the preshower detector. The algorithm starts from a cluster above a given threshold, called the seed cluster. Additional clusters are added if they fall into a zone whose shape is similar to a mustache in the $\Delta\eta$--$\Delta\phi$ plane. The name mustache is used because the distribution of $\Delta\eta=\eta_{\text{seed-cluster}}-\eta_{\text{cluster}}$ versus $\Delta\phi=\phi_{\text{seed-cluster}}-\phi_{\text{cluster}}$ has a slight bend because of the solenoidal structure of the CMS magnetic field, which tends to spread this radiated energy along $\phi$, rather than along $\eta$. An example of the mustache SC distribution can be seen in Fig.~\ref{fig:mustache}, for simulated electrons with $1< \ET^{\text{seed}} < 10\GeV$. A similar shape is observed in the case of a photon. \begin{figure}[h!]
\centering \includegraphics[width=0.5\textwidth]{Figure_001.pdf} \caption{Distribution of $\Delta\eta=\eta_{\text{seed-cluster}}-\eta_{\text{cluster}}$ versus $\Delta\phi=\phi_{\text{seed-cluster}}-\phi_{\text{cluster}}$ for simulated electrons with $1 < \ET^{\text{seed}} <10\GeV$ and $1.48 < \eta_{\text{seed}} <1.75$. The $z$ axis represents the occupancy of the number of PF clusters matched with the simulation (required to share at least 1\% of the simulated electron energy) around the seed. The red line approximately delimits the set of clusters selected by the mustache algorithm. The white region at the centre of the plot represents the $\eta$-$\phi$ footprint of the seed cluster.} \label{fig:mustache} \end{figure} The size of the mustache region depends on $\ET$, since the tracks of particles with larger transverse momenta are bent less by the magnetic field. The mustache SCs are used to seed electrons, photons, and conversion-finding algorithms. The second superclustering algorithm is known as the ``refined'' algorithm, and is described in more detail in Section~\ref{sec:ECALSuperclusterRefinement}. It utilizes tracking information to extrapolate bremsstrahlung tangents and conversion tracks to decide whether a cluster should belong to a SC. It uses mustache SCs as a starting point, but is also capable of creating its own SCs. The refined SCs are used for the determination of all ECAL-based quantities of electron and photon objects. \subsection{Electron track reconstruction and association} \label{sec:electron-track-reconstruction} Electrons use the GSF tracking algorithm to include radiative losses from bremsstrahlung. There have been no significant changes to the tracking algorithm from Run 1~\cite{Khachatryan:2015hwa} and any differences arise primarily from a different ECAL superclustering algorithm.
Therefore, the algorithms involved in electron tracking are only briefly summarized here, with additional details available in Ref.~\cite{Khachatryan:2015hwa}. \subsubsection{Electron seeding} The GSF track fitting algorithm is CPU intensive and cannot be run on all reconstructed hits in the tracker. The reconstruction of electron tracks therefore begins with the identification of a hit pattern that might lie on an electron trajectory (``seeding''). The electron trajectory seed can be either ``ECAL-driven'' or ``tracker-driven''. The tracker-driven seeding has an efficiency of ${\sim}50$\% for electrons from $\PZ$ decay with $\pt\sim 3\GeV$ and drops to less than 5\% for $\pt>10\GeV$~\cite{Sirunyan:2017ulk}. The ECAL-driven seeding first selects mustache SCs with transverse energy $E_{\text{SC,T}} > 4\GeV $ and $H/E_{\text{SC}} < 0.15$, where $E_{\text{SC}}$ and $H$ are the SC energy and the sum of the energy deposits in the HCAL towers within a cone of $\Delta R = \sqrt{\smash[b]{(\Delta\eta)^2+(\Delta\phi)^2}} = 0.15$ centered on the SC position. Each mustache SC is then compared in $\phi$ and $z$ (or in transverse distance $r$ in the forward regions where hits occur only in the disks) with a collection of track seeds that are formed by combining multiple hits in the inner tracker detector: triplets or doublets. The hits of these track seeds must be located in the barrel pixel detector layers, the forward pixel layers, or the endcap tracker. For a given SC, the trajectory of its corresponding electron is assumed to be helical and is calculated from the SC position, its $\ET^{\text{seed}}$, and the magnetic field strength. This extrapolation towards the collision vertex neglects the effect of any photon emission. 
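The helical extrapolation from the SC position back towards the beam line can be illustrated with a minimal sketch (a simplified illustration, not the CMS code; a uniform 3.8\unit{T} solenoidal field, the curvature-radius relation $R = \ET/(0.3B)$ with $\ET$ in \GeV and distances in metres, and the example coordinates are all assumptions of this sketch):

```python
import math

def predicted_phi(phi_sc, rho_sc, rho_layer, et_sc, charge, b_field=3.8):
    """Azimuth predicted at a tracker layer of transverse radius rho_layer (m)
    for a helix passing through the beam line and the supercluster (SC),
    neglecting photon emission. The curvature radius is approximated as
    R = ET / (0.3 B), with ET in GeV and B in tesla."""
    r_curv = et_sc / (0.3 * b_field)
    # A circle through the origin places a point at transverse distance rho
    # at azimuth phi0 - q*asin(rho/(2R)); the SC position fixes phi0.
    return phi_sc + charge * (math.asin(rho_sc / (2.0 * r_curv))
                              - math.asin(rho_layer / (2.0 * r_curv)))

# Example: a 45 GeV electron seen at the ECAL barrel surface (rho ~ 1.29 m),
# extrapolated inwards to a pixel barrel layer at rho = 0.04 m.
dphi_plus = predicted_phi(0.0, 1.29, 0.04, 45.0, +1)
dphi_minus = predicted_phi(0.0, 1.29, 0.04, 45.0, -1)
```

Opposite charge hypotheses predict opposite $\phi$ shifts, which is why the matching windows used in the seeding are charge dependent.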
If the first two hits of a tracker seed are matched (within a certain charge-dependent $ \Delta z {\times} \Delta\phi $ window for the barrel pixel detectors, and a $ \Delta r {\times} \Delta\phi $ window for the forward pixel disks and endcap tracker) to the predicted trajectory for a SC under any charge hypothesis, it is selected for seeding a GSF track~\cite{Khachatryan:2015hwa}. The tracker-driven approach iterates over all generic tracks. If any of these KF tracks is compatible with an ECAL cluster, its track seed is used to seed a GSF track~\cite{Khachatryan:2015hwa}. The compatibility criterion is the logical OR of a cut-based selection and a multivariate selection based on a boosted decision tree (BDT)~\cite{Hocker:2007ht,QUINLAN1987221}, using track quality and track-cluster matching variables as inputs. Since it is computationally expensive to reconstruct all tracks in an event, tracker-driven seeding is performed only in the offline reconstruction and not in the HLT. The ECAL-driven approach performs better for high-$\ET$ isolated electrons, with a seeding efficiency larger than 95\% for $\ET>10\GeV$ electrons from $\PZ$ boson decay. The tracker-driven approach is designed to recover efficiency for low-\pt or nonisolated electrons, with a seeding efficiency higher than ${\sim}50$\% for electrons with $\pt>3\GeV$~\cite{Khachatryan:2015hwa}. It also helps to recover efficiency in the ECAL regions with less precise energy measurements, such as in the barrel-endcap transition region and/or in the gaps between supermodules. The GSF tracking algorithm is run on all ECAL- and tracker-driven seeds. If an ECAL-driven seed shares all but one of its hits with a tracker-driven seed, the resulting track candidate is considered as both ECAL and tracker-seeded. This also holds for ECAL-driven seeds that share all hits with a tracker-driven seed, but in this case the tracker-driven seed is discarded before the track-finding step.
The majority of electrons fall into one of these two cases. \subsubsection{Tracking} The final collection of selected electron seeds (obtained by combining the ECAL-driven and tracker-driven seeds) is used to initiate the reconstruction of electron tracks. For a given seed, the track parameters evaluated at each successive tracker layer are used by the KF algorithm to iteratively build the electron trajectory, with the electron energy loss modeled using a Bethe--Heitler distribution~\cite{PhysRev.93.768}. If the algorithm finds multiple hits compatible with the predicted position in the next layer, it creates multiple candidate trajectories by doing a $ \chi^{2} $ fit, up to a maximum of five for each tracker layer and for a given initial trajectory. The candidate trajectories are restricted to those with at most one missing hit, and a penalty is applied to the trajectories with one missing hit by increasing the track $ \chi^{2} $. This penalty helps to minimize the inclusion of hits from converted bremsstrahlung photons in the primary-electron trajectory. Any ambiguities that arise when a given tracker hit is assigned to multiple track candidates are resolved by dropping the track with fewer hits, or the track with the larger $\chi^2$ value if the number of hits is the same~\cite{Chatrchyan:2014fea}. Once the track candidates are reconstructed by the KF algorithm, their parameters are estimated at each layer with a GSF fit in which the energy loss is approximated by an admixture of Gaussian distributions~\cite{Khachatryan:2015hwa}. The GSF tracks obtained from this procedure are extrapolated toward the ECAL under the assumption of a homogeneous magnetic field to perform track-cluster associations. \subsubsection{Track-cluster association} \label{sec:ele_trk-clus_assoc} The electron candidates are constructed by associating the GSF tracks with the SCs, where the position of the SC is defined as the energy-weighted average of the constituent ECAL cluster positions. 
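The SC position defined above can be sketched as a simple energy-weighted average (an illustration only; the `(energy, eta, phi)` tuple layout is an assumption of this sketch, and the naive $\phi$ average is valid only away from the $\pm\pi$ boundary):

```python
def supercluster_position(clusters):
    """Energy-weighted (eta, phi) of a supercluster from its constituent
    ECAL clusters, each given as an (energy, eta, phi) tuple.
    Note: the naive phi average below ignores wrapping at +/-pi."""
    e_tot = sum(e for e, _, _ in clusters)
    eta_sc = sum(e * eta for e, eta, _ in clusters) / e_tot
    phi_sc = sum(e * phi for e, _, phi in clusters) / e_tot
    return eta_sc, phi_sc

# Example: a seed cluster plus a lower-energy bremsstrahlung cluster.
eta_sc, phi_sc = supercluster_position([(30.0, 0.60, 1.10), (10.0, 0.50, 1.00)])
```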
A BDT is used to decide whether to associate a GSF track with an ECAL cluster. It combines track information, supercluster observables, and track-cluster matching variables. The track information covers both kinematical and quality-related features. The SC information includes the spread in $\eta$ and $\phi$ of the full SC, as well as transverse shape variables inferred from a $5{\times}5$ crystal matrix around the cluster seed. For tracker-driven electrons, only this BDT is used to decide on the association. Electron candidates reconstructed from ECAL-driven seeds are required to pass either the same BDT requirements as for tracker-driven electrons or the following track-cluster matching criteria: \begin{itemize} \item $ \abs{\Delta\eta} = \abs{\eta_{\text{SC}} - \eta^{\text{extrap}}_{\text{trk-in}}} < 0.02 $, with $\eta_{\text{SC}}$ being the SC $\eta$, and $\eta^{\text{extrap}}_{\text{trk-in}}$ the track $\eta$ at the position of closest approach to the SC (obtained by extrapolating the innermost track position and direction), \item $ \abs{\Delta\phi} = \abs{\phi_{\text{SC}} - \phi^{\text{extrap}}_{\text{trk-in}}} < 0.15 $, with analogous definitions for $\phi$. The wider window in $\phi$ accounts for the effect of the tracker material and the bending of electrons in the magnetic field. \end{itemize} \subsection{Supercluster refinement in the ECAL} \label{sec:ECALSuperclusterRefinement} The mustache SCs can be refined using the information from detector subsystems beyond the ECAL crystal and preshower detectors. Additional conversion and bremsstrahlung clusters are recovered using information from the tracker, with minimal risk of including spurious clusters. A conversion-finding algorithm~\cite{Khachatryan:2015iwa} is employed to identify pairs of tracks consistent with a photon conversion. A BDT is employed to identify tracks from photon conversions where only one leg has been reconstructed.
The input variables to this BDT include the number of missing hits on the track (for prompt electrons no missing hits are expected), the radius of the first track hit, and the signed impact parameter, i.e., the distance of closest approach ($d_0$). The identified conversion tracks can then be linked to the compatible ECAL clusters. Additionally, at each tracker layer, the trajectory of the GSF track is extrapolated to form a ``bremsstrahlung tangent'', which can be linked to a compatible ECAL cluster. Mustache SCs, ECAL clusters, primary generic tracks, GSF tracks, and conversion-flagged tracks are all inputs to the PF algorithm, which builds the \Pe/\PGg objects, as described in Ref.~\cite{Sirunyan:2017ulk}. An \Pe/\PGg object must start from either a mustache SC or a GSF track. To reduce the CPU time, a mustache SC must either be associated with a GSF track or satisfy $E_{\text{SC,T}} > 10\GeV$ and ($H/E_\text{SC}<0.5$ or $E_{\text{SC,T}} > 100\GeV$). The ECAL clusters must not be linked to any track from the primary vertex unless that track is associated with the object's GSF track. ECAL clusters already added by the mustache algorithm are exempted from this requirement. ECAL clusters linked to secondary conversion tracks and bremsstrahlung tangents are then provisionally added to the so-called refined supercluster. However, in the final step, they can be withdrawn from the refined SC if this makes the total energy more compatible with the GSF track momentum at the inner layer. ECAL clusters that are within $\abs{\Delta\eta}<0.05$ of the GSF track outermost position extrapolated to the ECAL or within $\abs{\Delta\eta}<0.015$ of a bremsstrahlung tangent are exempted from this removal. Finally, a given ECAL cluster can belong to only one refined SC. \subsection{Integration in the global event description} \label{subsec:ged} Electrons and photons present a unique challenge in the PF framework because they can be composite objects consisting of several clusters and tracks.
This can lead to incorrect results when an object that is not an \Pe/\PGg object is reconstructed under the \Pe/\PGg hypothesis. For example, the photons, charged hadrons, and neutral hadrons in a jet can be reconstructed as \Pe/\PGg objects instead of being reconstructed individually, and can potentially cause a large mismeasurement of the reconstructed jet energy. Therefore, a minimal selection, as reported in Ref.~\cite{Sirunyan:2017ulk}, is applied to correctly identify hadrons and \Pe/\PGg objects and to improve the measurement of jets and missing transverse momentum. Because of computing constraints, it is not currently feasible to rerun the PF algorithm using multiple \Pe/\PGg identification requirements, and hence a common ``loose'' identification selection is used for electrons and photons. A loose requirement on the BDT classifier is applied for electrons, with a different BDT used for isolated and nonisolated electrons. Both BDTs use various shower-shape, detector-based isolation, and tracker-related variables as input. The BDT selection for nonisolated electrons is the one used for the selection of electron candidates, as explained in Section~\ref{sec:ele_trk-clus_assoc}. Additionally, selection requirements on $E/p$ (the ratio between the electron energy and its momentum), on $H/E$, and on quantities based on the associated generic tracks are applied to reject candidates that are problematic for jet algorithms. Occasionally, an electron can be selected by the PF algorithm, but with its additional tracks released for charged hadron reconstruction in PF. Photon candidates are required to be isolated, and their shower-shape variables must be compatible with those of genuine photons. \subsection{Bremsstrahlung and photon conversion recovery} To collect the energy of photons emitted by bremsstrahlung, tangents to the GSF tracks are extrapolated to the ECAL surface from the track positions.
A cluster that is linked to the track is considered as a potential bremsstrahlung photon if the extrapolated tangent position is within the boundaries of the cluster, as defined above, provided that the distance between the cluster and the GSF track extrapolation in $\eta$ is smaller than 0.05. The fraction of the momentum lost by bremsstrahlung, as measured by the tracker, is defined as \begin{equation} f_{\text{brem}} = 1 - \frac{\abs{p_\text{trk-out}}}{\abs{p_\text{trk-in}}}, \end{equation} where $p_\text{trk-in}$ is the momentum at the point of closest approach to the primary vertex, and $p_\text{trk-out}$ is the momentum extrapolated to the surface of the ECAL from the outermost tracker layer. Its distribution is shown in Fig.~\ref{fig:fbrem} for the barrel and the endcaps. Bremsstrahlung photons, as well as prompt photons, have a significant probability of further converting into an $\Pep\Pem$ pair in the tracker material. Because of the higher tracker material budget in the endcaps, $f_{\text{brem}}$ has a higher peak at large values, close to 1, compared to the distribution in the barrel. The disagreement observed between data and simulation in the endcap region is attributed to an imperfect modelling of the material in simulation. According to simulation, the fraction of photon conversions occurring before the last tracker layer is as high as 60\% in the $\eta$ regions with the largest amount of tracker material in front of the ECAL. A conversion finder was therefore developed to create links between any two tracks compatible with a photon conversion~\cite{Khachatryan:2015iwa}. To recover converted bremsstrahlung photons, the vector sum of the track momenta of any possible conversion candidate pair is checked for compatibility with the aforementioned electron track tangents.
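The definition of $f_{\text{brem}}$ translates directly into code (a trivial sketch; the example momentum values are illustrative, not taken from data):

```python
def f_brem(p_trk_in, p_trk_out):
    """Fraction of momentum lost to bremsstrahlung, f_brem = 1 - |p_out|/|p_in|,
    with p_trk_in at the point of closest approach to the primary vertex and
    p_trk_out extrapolated to the ECAL surface from the outermost tracker layer."""
    return 1.0 - abs(p_trk_out) / abs(p_trk_in)

# A strongly radiating electron: 45 GeV at the vertex, 18 GeV at the ECAL surface.
loss = f_brem(45.0, 18.0)  # 0.6
```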
\begin{figure}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_002-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_002-b.pdf} \caption{Fraction of the momentum lost by bremsstrahlung between the inner and outer parts of the tracker for electrons from $\PZ$ boson decays in the barrel (\cmsLeft) and in the endcaps (\cmsRight). The upper panels show the comparison between data and simulation. The simulation is shown with the filled histograms and data are represented by the markers. The vertical bars on the markers represent the statistical uncertainties in data. The hatched regions show the statistical uncertainty in the simulation. The lower panels show the data-to-simulation ratio. } \label{fig:fbrem} \end{figure} The photon conversion-finding algorithm is validated by reconstructing the $\PGm\PGm\PGg$ invariant mass from events in which a conversion track pair is matched to the photon, as discussed in Section~\ref{sec:PerformanceAndValidationWithData}. \subsection{Reconstruction performance} \label{sec:recoperf}\label{sec:tnp} Photons are reconstructed as SCs in the ECAL after applying a very loose selection requirement on $H/E_\text{SC}<0.5$, for which 100\% SC reconstruction efficiency is assumed. Since electrons are additionally required to have a track matching with the SC, the reconstruction efficiency for a SC having a matching track is computed, as described below. Electron reconstruction efficiency is defined as the ratio between the number of reconstructed SCs matched to reconstructed electrons and the number of all reconstructed SCs. The electron reconstruction efficiency is computed with a tag-and-probe method using $\PZ\to \Pe\Pe$ events~\cite{Khachatryan:2010xn} as a function of the electron $\eta$ and $\ET$, and covers all reconstruction effects. This reconstruction efficiency is higher than 95\% for $\ET > 20\GeV$, and is compatible between data and simulation within 2\%. 
The tag-and-probe technique is a generic tool to measure efficiency that exploits dileptons from the decays of resonances, such as a $\PZ$ boson or \PJgy meson. In this technique, one electron of the resonance decay, the tag, is required to pass a tight identification criterion (whose requirements are listed in detail in Sec.~\ref{sec:eleID}) and the other electron, the probe, is used to probe the efficiency under study. The estimated efficiencies are almost insensitive to variations in the definition of the tag. For the results in this paper, tag electrons are required to satisfy $\ET > 30$ (35)\GeV for the 2016 (2017--2018) data-taking years, respectively. The probe is then required to pass the selection criteria (either reconstruction or identification) whose efficiency is under test. A requirement for having oppositely charged leptons is also applied. When there are two or more probe candidates corresponding to a given tag within the invariant mass range considered, only the probe with the highest $\ET$ is kept. In data, the events used in the tag-and-probe procedure are required to satisfy HLT paths that do not bias the efficiency under study. Backgrounds are estimated by fitting. The invariant mass distributions of the (tag, passing probe) and (tag, failing probe) pairs are fitted separately with a signal plus background model around the $\PZ$ boson mass in the range $[60,120]\GeV$. This range extends sufficiently far from the peak region to enable the background component to be extracted from the fit. The efficiency under study is computed from the ratio of the signal yields extracted from the two fits. This procedure is usually performed in bins of $\ET$ and $\eta$ of the probe electron, to measure efficiencies as a function of those variables. Different models can be used in the fit to disentangle the signal and background components. 
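The last step of the procedure, turning the two fitted signal yields into an efficiency and a scale factor, can be sketched as follows (a minimal illustration; the example yields and the purely binomial uncertainty are assumptions of this sketch, not the full treatment used in the paper):

```python
import math

def tnp_efficiency(n_pass, n_fail):
    """Efficiency from the signal yields extracted from the fits to the
    (tag, passing probe) and (tag, failing probe) mass spectra, together
    with a simple binomial statistical uncertainty."""
    n_tot = n_pass + n_fail
    eff = n_pass / n_tot
    err = math.sqrt(eff * (1.0 - eff) / n_tot)
    return eff, err

def scale_factor(eff_data, eff_mc):
    """Data-to-simulation efficiency ratio, applied as a correction to MC."""
    return eff_data / eff_mc

eff, err = tnp_efficiency(9500.0, 500.0)  # illustrative fitted yields
sf = scale_factor(eff, 0.96)
```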
In the absence of any kinematic selection on the tag-and-probe candidates, the background component in the mass spectrum is well described by a falling exponential. However, the kinematic restrictions on the $\PZ$ candidates in each $\ET$ and $\eta$ range of the probe candidate distort the mass spectrum in a way that is well described by an error function. Consequently, the background component of the mass spectrum is described by a falling exponential multiplied by an error function as $\mathrm{f}(m_{\Pe\Pe})= \erf[(a-m_{\Pe\Pe})b]\exp[- (m_{\Pe\Pe}-c)d]$, where $a$ and $c$ are in \GeV and $b$ and $d$ are in \GeV$^{-1}$. All parameters of the exponential and of the error function are free parameters of the fit. The model for the signal component can use analytic expressions, or be based on templates from simulation. When using analytic functions, a Breit--Wigner (BW) function~\cite{Bohm:2004zi} with the world-average $\PZ$ boson mass and intrinsic width~\cite{Zyla2020} is convolved with a one-sided Crystal Ball (OSCB) function~\cite{Oreglia:1980cs} that acts as the resolution function. If a template from simulation is used, the signal part of the distribution is modeled through a sample of simulated electrons from $\PZ$ boson decays, convolved with a resolution function to account for any remaining differences in resolution between data and simulation. An example fit is shown in Fig.~\ref{fig:electronID_fit_example}. \begin{figure}[hbtp] \centering \includegraphics[width=0.8\textwidth]{Figure_003.pdf} \caption{Example $\PZ \to \Pe\Pe$ invariant mass fits for passing (left) and failing (right) probes. Black markers show data while red solid lines show the signal + background fitting model and the blue dotted lines represent the background only component. The vertical bars on the markers represent the statistical uncertainties of the data. 
} \label{fig:electronID_fit_example} \end{figure} The tag-and-probe technique is applied to data and simulated events to compare efficiencies, and evaluate data-to-simulation ratios (``scale factors''). In many analyses, these scale factors are applied as corrections to the simulation, or are used to assess systematic uncertainties. The efficiency in simulation is estimated from a $\PZ\to \Pe\Pe$ sample that contains no background, since a spatial match with the generator-level electrons is required. Several sources of systematic uncertainties are considered. The main uncertainty is related to the model used in the fit, and is estimated by comparing alternative distributions for signal and background, in addition to comparing analytic functions with templates from simulation. Only a small dependence is found on the number of bins used in the fits and on the definition of the tag. The electron reconstruction efficiencies measured in 2017 data and in simulated DY samples are shown in Fig.~\ref{fig:eleRecoEff2017}, together with the scale factors for different $\pt$ bins as a function of $\eta$. They are compatible in data and simulation, giving scale factors close to unity in almost the entire range. The region $1.44 < \abs{\eta} < 1.57$ corresponds to the transition between the barrel and endcap regions of ECAL and is not considered in a large number of physics analyses. The uncertainties shown in the plots correspond to the quadratic sum of the statistical and systematic contributions, dominated by the latter. The main uncertainty is related to the modeling of the signal. \begin{figure*}[hbtp] \centering \includegraphics[width=0.5\textwidth]{Figure_004.pdf} \caption{Electron reconstruction efficiency versus $\eta$ in data (upper panel) and data-to-simulation efficiency ratios (lower panel) for the 2017 data taking period. The vertical bars on the markers represent the combined statistical and systematic uncertainties. 
The region $1.44 <\abs{\eta } < 1.57$ corresponds to the transition between the barrel and endcap regions of ECAL and is not considered in physics analyses. } \label{fig:eleRecoEff2017} \end{figure*} Other objects, such as hadronic jets, may also produce electron-like signals, leading to such objects being misidentified as electron candidates. The better the reconstruction algorithm, the lower the misidentification rate per event. The larger the number of multiple interactions in an event, the larger the misidentification rate. Figure~\ref{fig:fakeVsPU} shows the number of misidentified electron candidates per event in different \pt ranges (for DY + jets MC events simulated with the different detector conditions corresponding to the three years of the Run 2 data taking period), as a function of the number of pileup vertices. The significant suppression of the misidentification rate in 2017 and 2018 is due to the new pixel detector. The slightly better results in 2017 with respect to 2018 are due to the better conditions and calibrations used in the Legacy data set. \begin{figure}[h!] \centering \includegraphics[ width=\cmsFigWidth]{Figure_005-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_005-b.pdf} \caption{Number of misidentified electron candidates per event as a function of the number of generated vertices in DY + jets MC events simulated with the different detector conditions of the Run 2 data taking period. Results are shown for electrons with \pt in the range 5--20\GeV (left) and electrons with $\pt > 20\GeV$ (right) without further selection. The vertical bars on the markers represent the statistical uncertainties of the MC sample.} \label{fig:fakeVsPU} \end{figure} \subsection{Electron charge sign measurement} The measurement of the electron charge sign is affected by potential bremsstrahlung followed by photon conversions. 
In particular, when the bremsstrahlung photons convert upstream in the detector, the initiated showers lead to complex hit patterns, and the contributions from conversion electrons can be wrongly included in the electron track fit. A direct charge sign estimate is the sign of the GSF track curvature, which can be altered by the presence of conversions, especially for $\abs{\eta} > 2$, where the misidentification probability can reach 10\% for reconstructed electrons from $\PZ$ boson decays without any further selection. This is improved by combining this measurement with the estimates from two other methods. A second method is based on the associated KF track that is matched to a GSF track when there is at least one shared hit in the innermost region. A third method evaluates the charge sign using the sign of the $\phi$ angle difference between the vector joining the nominal interaction point to the SC position and the vector connecting the nominal interaction point to the innermost hit of the electron GSF track. A~detailed description of the three methods can be found in Ref.~\cite{Khachatryan:2015hwa}. When two or three out of the three measurements agree on the sign of the charge (majority method), it is assigned as the default electron charge sign. A very high probability of correct charge sign assignment can be obtained by requiring all three measurements to agree (selective method). While the former method is 100\% efficient by construction, the latter has some efficiency loss. The fraction of electrons passing the loose identification requirements (as described in Section~\ref{sec:eleID}) with all three charge sign estimations in agreement is shown in Fig.~\ref{fig:chargeIDeff}, as a function of \pt for electrons from $\PZ$ boson decays in the barrel and in the endcap regions. The efficiency of the selective method for the electron charge sign measurement is better than 90 (75)\% in the barrel (endcap). 
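The majority and selective combinations of the three charge estimates can be sketched as follows (an illustration; representing each estimate as a $\pm 1$ integer is an assumption of this sketch):

```python
def charge_majority(q_gsf, q_kf, q_sc):
    """Default charge sign: the value on which at least two of the three
    estimates (GSF curvature, associated KF track, SC-based phi difference)
    agree; 100% efficient by construction."""
    return q_gsf if (q_gsf == q_kf or q_gsf == q_sc) else q_kf

def charge_selective(q_gsf, q_kf, q_sc):
    """Charge sign only when all three estimates agree (None otherwise);
    higher purity at the cost of some efficiency loss."""
    return q_gsf if q_gsf == q_kf == q_sc else None
```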
\begin{figure*}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_006.pdf} \caption{Efficiency of the selective method for the electron charge sign measurement as a function of \pt for electrons in the barrel and endcap regions, as measured using simulated $\PZ\to \Pe\Pe$ events. Electrons are required to satisfy the loose identification requirements described in Section~\ref{sec:eleID}. The uncertainties assigned to the points are statistical only.} \label{fig:chargeIDeff} \end{figure*} The measurement of the correct charge identification probability uses the expected number of same-sign events ($N_{\text{SS}}^{\text{expected}}$), which for electron pairs in the $\pt{-}\eta$ bins $i$ and $j$ is defined as \begin{equation} N_{\text{SS}}^{\text{expected}}(i,j) = p_i(1-p_j)N(i,j)+ p_j(1-p_i)N(i,j), \end{equation} where $p_i$ ($p_j$) is the probability of correctly determining the electron charge in the $\pt{-}\eta$ bin $i$ ($j$), and $N(i,j)$ is the number of selected electron pairs in that bin combination. By performing a global fit (in all the bins simultaneously) of $N_{\text{SS}}^{\text{expected}}$ to the observed number, the probability $p$ for each bin can be obtained in both data and simulation. The electrons are required to pass the loose identification requirements, as described in Section~\ref{sec:eleIDCB}. Tighter identification requirements, specifically those requiring no ``missing hits'' for the track, have different efficiencies and correct charge sign identification probabilities. In this procedure no background subtraction is applied. Figure~\ref{fig:chargeID_vsAbsEta} shows the probability of correct charge assignment of the majority (left) and selective (right) methods, as a function of the electron $\abs{\eta}$. The charge identification rate using the 2016 data set is compared with the correct charge assignment probability obtained in $\PZ\to \Pe\Pe$ simulated events.
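The expected same-sign yield entering the global fit can be sketched directly from its definition (a minimal illustration; the example probabilities and pair counts are arbitrary):

```python
def expected_same_sign(p_i, p_j, n_pairs):
    """Expected number of same-sign pairs among N(i,j) selected Z->ee pairs:
    a same-sign pair arises when exactly one of the two charges is
    misassigned, i.e. N_SS = [p_i(1-p_j) + p_j(1-p_i)] N(i,j),
    with p the probability of a correct charge determination."""
    return (p_i * (1.0 - p_j) + p_j * (1.0 - p_i)) * n_pairs

# Perfect charge assignment gives no same-sign pairs; a 10% misassignment
# probability per electron gives ~18% same-sign pairs.
n_ss = expected_same_sign(0.9, 0.9, 1000)  # ~180
```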
\begin{figure}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_007-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_007-b.pdf} \caption{Correct charge identification probability for electrons using the majority method (\cmsLeft) and the selective method (\cmsRight), as measured in the 2016 data set and in simulated $\PZ\to \Pe\Pe$ events. The electrons are required to satisfy the loose identification requirements described in Section~\ref{sec:eleID}. The uncertainties assigned to the points are statistical only.} \label{fig:chargeID_vsAbsEta} \end{figure} From the data-to-simulation comparison, the systematic uncertainty in the charge sign assignment probability for electrons is less than 0.1\% in the barrel and 0.3\% in the endcap regions. \section{Online electron and photon reconstruction}\label{sec:sec5} Electron and photon candidates at L1 are based on ECAL trigger towers defined by arrays of $5{\times}5$ crystals in the barrel and by a more complicated pattern in the endcaps, because of the different layout of the crystals~\cite{CERN-LHCC-97-033}. The central trigger tower with the largest transverse energy above a fixed threshold ($\ET> 2\GeV$) is designated as the seed tower. To recover energy losses from bremsstrahlung, clusters are built from surrounding towers with $\ET$ above 1\GeV to form the L1 candidates. The sum of the $\ET$ of all towers in the cluster is the raw cluster $\ET^\text{L1}$. To obtain better identification of L1 \Pe/\PGg candidates, requirements are set on: (i) the energy distribution between the central and neighboring towers; (ii) the amount of energy deposited in the HCAL downstream of the central tower relative to the $\ET^\text{L1}$ of the candidate; and (iii) variables sensitive to the spatial extent of the electromagnetic shower~\cite{CMS-PAS-TRG-17-001}. No tracker information is available at L1, so electrons and photons are indistinguishable at this stage.
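The formation of the raw L1 cluster $\ET$ described above can be sketched as follows (a simplified illustration; the flat list of tower $\ET$ values and the treatment of the seed are assumptions of this sketch, not the actual trigger firmware logic):

```python
def l1_raw_cluster_et(tower_ets, seed_threshold=2.0, cluster_threshold=1.0):
    """Raw L1 cluster ET: the highest-ET tower above 2 GeV seeds the cluster,
    and surrounding towers with ET above 1 GeV are summed with it."""
    seed = max(tower_ets)
    if seed <= seed_threshold:
        return None  # no L1 e/gamma candidate is formed
    others = list(tower_ets)
    others.remove(seed)
    return seed + sum(et for et in others if et > cluster_threshold)

# Example: a 5 GeV seed tower with one 1.5 GeV neighbour above threshold.
raw_et = l1_raw_cluster_et([5.0, 1.5, 0.8])  # 6.5
```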
The HLT electron and photon candidates are reconstructed from energy deposits in ECAL crystals grouped into clusters around the corresponding L1 candidate (called the L1 seed). For a given L1 seed, the ECAL clustering algorithm is performed by the HLT from the readout channels overlapping a matrix of crystals centered on the L1 candidate. The HLT processing time is kept short by clustering only around the L1 seed. Based on this L1 seed, superclusters are built using offline reconstruction algorithms, and the HLT requirements are applied as follows. For electron candidates, the ECAL SC is associated with a reconstructed track whose direction is compatible with its location. Electron and photon selection at the HLT relies on the identification and isolation criteria, together with minimal thresholds on the SC $\ET^\text{HLT}$ (i.e., the energy measured by the HLT using only the ECAL information, without any final calibration applied). The identification criteria are based on the transverse profile of the cluster energy deposited in the ECAL, the amount of energy in the HCAL downstream from the ECAL SC, and (for electrons) the degree of association between the track and the ECAL SC. The isolation criteria make use of the energy deposits that surround the HLT electron candidate in the ECAL, HCAL, and tracker detectors. \subsection{Differences between online and offline reconstruction} The HLT must ensure a large acceptance for physics signals, while keeping the CPU time and output rate under control. This is achieved by exploiting the same software used for the off\-line analysis that ensures high reconstruction efficiency and reduces the trigger rate by applying stringent identification criteria and quality selections. 
The differences between the HLT and offline reconstruction code are minimal and are mainly driven by: (i) the limited CPU time available at the HLT, fixed at about 260\unit{ms} by the number of processing CPUs and the L1 input rate; (ii) the lack of final calibrations, which are not yet computed during the data-taking period; and (iii) more conservative selection criteria to avoid rejecting potentially interesting events. To keep the processing time short, all trigger paths have a modular structure and are characterized by a sequence of reconstruction and filtering blocks of increasing complexity. Thus, faster algorithms are run first, and their products are immediately filtered, allowing the remaining algorithms to be skipped when the event fails any given filter. Another important time-saving optimization is to restrict the detector readout and reconstruction to regions of interest around the L1 candidates. Moreover, HLT SCs, which are more robust against possible background contamination, have a simpler energy correction than the offline reconstruction. The main difference between the online and offline reconstruction occurs in the tracking algorithms. Every electron candidate reconstructed at the HLT is ECAL-driven; the algorithm starts by finding a supercluster and then looks for a matching track reconstructed in the pixel detector. The association is performed geometrically, matching the SC trajectory to pixel detector hits. Since 2017, the online pixel matching algorithm requires three pixel hits, rather than the two used in the offline algorithm, to maximize early background rejection. Two pixel detector hits are accepted only if the trajectory passes through a maximum of three active modules. Once the SC is associated with the pixel detector seeds, the electron track is reconstructed using the same GSF algorithm as employed offline. Since this algorithm is used only when the pixel matching succeeds, the processing time is considerably reduced.
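The online pixel-hit requirement described above reduces to a simple decision rule. A minimal sketch (assumed logic, not CMS source code; the function name and arguments are hypothetical):

```python
# Sketch of the online pixel-matching hit requirement described in the text:
# three matched pixel hits are required in general, and a two-hit match is
# accepted only if the trajectory crosses at most three active pixel modules.
def passes_pixel_matching(n_matched_hits, n_active_modules_crossed):
    if n_matched_hits >= 3:
        return True
    if n_matched_hits == 2:
        return n_active_modules_crossed <= 3
    return False
```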
Moreover, not all electron paths lead to reconstructed tracks; some of them can achieve significant rate reduction from pixel detector matching alone. For isolated electrons, all the nearby tracks must be reconstructed to build the track isolation variables. This is accomplished at the end of the path by using an iterative tracking algorithm similar to that applied offline, but specifically customized for the HLT and with fewer iterations of the tracking procedure. Offline tracker-driven electron reconstruction is advantageous only for low energy or nonisolated electrons, neither of which is easy to trigger on. The use of only ECAL-driven electrons at the HLT is thus a reasonable simplification with respect to the offline reconstruction. Other differences that exist with respect to the offline reconstruction concern calorimetry. At HLT the timing selection requirement applied offline to reject out-of-time hits (e.g., pileup, anomalous signals in ECAL from the interaction of particles in photodetectors, cosmic and beam halo events) is removed, since it does not significantly reduce the rate and risks losing rare signatures, such as the detection of long-lived particles. Moreover, the ECAL online calibration is also different; the response corrections for the crystal transparency loss that are applied at HLT during the data-taking period and updated twice per week are not as accurate as the ones used by the offline reconstruction. Finally, some online variables are defined differently with respect to offline. The $\ET$ is computed with respect to the origin of the CMS reference system, instead of the actual position of the collision primary vertex, and it is measured using only calorimeter information, without any track-based corrections or final calibrations. 
The online particle isolation is defined by exploiting energy clusters built in the ECAL and HCAL and tracks reconstructed in the tracker, instead of using the more complete PF information, which is available offline. Some other variables, such as $H/E$, are defined in the same way both offline and online, although with slightly different parameters, chosen to ensure that the online selection is always looser than the offline one. \subsection{Electron trigger requirements and performance} The electron triggers correspond to the first selection step of most offline analyses using electrons, which requires the presence of one, two, or three HLT electron candidates. Because of bandwidth limitations, both L1 seeds and HLT paths may be prescaled, i.e., they may record only a fraction of the events, to reduce the trigger rate. Tables~\ref{tb:unprescale_L1} and~\ref{tb:unprescale_HLT} show the lowest unprescaled L1 and HLT $\ET$ thresholds and the corresponding L1 seed and the HLT path names of Run 2~\cite{CMS-PAS-TRG-17-001}.
\begin{table}[hbp] \centering \topcaption{Lowest unprescaled $\ET^\text{L1}$ thresholds and the corresponding seed names, for the three years of Run 2.} \begin{tabular}{lll} \hline & $\ET^\text{L1}$ threshold [\GeVns]& $\abs{\eta}$ range \\ \hline 2016&&\\ Single electron/photon & 40 & $<$3\\ Single electron/photon (isolated) & 30& $<$3 \\ Double electron/photon & 23, 10 &$<$3\\ [\cmsTabSkip] 2017&&\\ Single electron/photon & 40 &$<$3\\ Single electron/photon (isolated) & 32 &$<$3\\ Single electron/photon (isolated) & 30 &$<$2.1\\ Double electron/photon & 25, 14 &$<$3\\ [\cmsTabSkip] 2018&&\\ Single electron/photon & 40 &$<$2.5\\ Single electron/photon (isolated) & 32 &$<$2.5\\ Single electron/photon (isolated) & 30 & $<$2.1\\ Double electron/photon & 25, 14 &$<$2.5\\ \hline \end{tabular} \label{tb:unprescale_L1} \end{table} \begin{table}[hbp] \centering \topcaption{Lowest unprescaled $\ET^\text{HLT}$ thresholds and the corresponding path names, for the three years of Run 2 data-taking. The electrons are required to be within the L1 $\abs{\eta}$ requirement and within $\abs{\eta} <2.65$.} \begin{tabular}{ll} \hline & $\ET^\text{HLT}$ threshold [\GeVns]\\ \hline 2016&\\ Single electron & 27 \\ Double electron (isolated)& 23, 12 \\ Double electron (nonisolated) &33\\ Triple electron & 16, 12, 8 \\ [\cmsTabSkip] 2017&\\ Single electron & 32 \\ Double electron (isolated)& 23, 12 \\ Double electron (nonisolated) &33\\ Triple electron & 16, 12, 8 \\ [\cmsTabSkip] 2018&\\ Single electron & 32 \\ Double electron (isolated)& 23, 12 \\ Double electron (nonisolated) &25\\ Triple electron & 16, 12, 8 \\ \hline \end{tabular} \label{tb:unprescale_HLT} \end{table} The single- and double-electron trigger performance is reported, using the full Run 2 data sample corresponding to an integrated luminosity of 136\fbinv.
Efficiencies are obtained using a data-driven method based on the tag-and-probe technique (described in detail in Section~\ref{sec:tnp}), which exploits $\PZ\to \Pe\Pe$ events and requires one electron candidate, called the tag, to satisfy tight selection requirements, while leaving the other electron of the pair, called the probe, unbiased to measure the efficiency. For the results presented in this section, tag electrons are required to pass the criteria described in Section~\ref{sec:tnp}. Moreover, they have to satisfy an unprescaled single-electron trigger, with $\ET^\text{HLT} > 27$ and 32\GeV for the 2016 and 2017--2018 data-taking periods, respectively. Probe electrons must have $\abs{\eta} < 2.5$ and $\ET^\text{ECAL} >5\GeV$, with $\ET^\text{ECAL} = E^\text{ECAL} \sin{\theta_{\text{SC}}}$, where $E^\text{ECAL}$ is the best estimate of the electron energy measured by ECAL and $\theta_{\text{SC}}$ is the angle with respect to the beam axis of the electron SC. No additional identification criteria are applied to the probes. To measure the trigger efficiency, probes are then required to pass the HLT path under study. The electron triggers analyzed in this paper are the following: \begin{itemize} \item HLT\_Ele(27)32\_WPTight\_Gsf: standard single-electron trigger with tight identification and isolation requirements. The electron $\ET^\text{HLT}$ is required to be above 27\GeV in 2016 and above 32\GeV in 2017--2018. \item HLT\_Ele23\_Ele12\_CaloIdL\_TrackIdL\_IsoVL: standard double-electron trigger with loose identification (CaloIdL, TrackIdL) and isolation (IsoVL) requirements. The $\ET^\text{HLT}$ thresholds of the two electrons are 23 and 12\GeV, respectively. \end{itemize} Photon triggers are not included in this paper, since they are very similar to electron triggers, except for the absence of the requirement on the presence of matching tracks.
Moreover, photon triggers are usually designed for specific analyses and are not used as extensively as the electron triggers described above. The efficiency of the two analyzed electron triggers in different SC $\eta$ regions is shown, with respect to an offline reconstructed electron, in Figs.~\ref{fig:SingleEle_pt} and~\ref{fig:Ele23Ele12_pt}, as a function of the electron \pt. The region $1.44 < \abs{\eta} < 1.57$ is not included since it corresponds to the transition between the barrel and endcap regions of the ECAL, where the quality of reconstruction, calibration, and identification is not as good as in the rest of the ECAL~(see Fig.~\ref{fig:eleRecoEff2017}). The DY + jets simulated event samples produced with MadGraph5~\cite{Alwall:2014hca} are used for comparison. The measurement combines both the L1 and HLT efficiencies. {\tolerance=800 At the HLT level, both objects required by the double-electron path must correspond to an L1 seed, which can require either a single electron with a higher momentum threshold (L1\_SingleEG) or two electrons (L1\_DoubleEG) with lower momentum thresholds (as shown in Table~\ref{tb:unprescale_L1}). This requirement also needs to be applied offline when performing the tag-and-probe measurement. Since the tag needs to pass a single-electron HLT path, it must pass an L1\_SingleEG seed. As a consequence, it will also satisfy the requirements of the $\ET$-leading object of the lowest unprescaled L1\_DoubleEG, lowering the L1 requirement on the probe to be only above the subleading threshold of the lowest unprescaled L1\_DoubleEG. When the $\ET$-leading object (Ele23) of the double-electron path is tested, the probe is thus specifically requested to pass the leading threshold of the path's L1\_DoubleEG seed.
As reported in Table~\ref{tb:unprescale_L1}, the $\ET^\text{L1}$ thresholds of the lowest unprescaled L1\_DoubleEG seed increased across the years, leading to a larger efficiency at low \pt for the double-electron trigger in 2016 than in 2017 and 2018. \par} The single-electron trigger analyzed in this paper is characterized by a sequence of strict identification and isolation selections, known as ``tight working point'' (WPTight). This selection was retuned in 2017 to ensure better performance. As a consequence, the single-electron trigger efficiency is higher in 2017--2018 than in 2016. As previously described, electron candidates at the HLT are built by associating a track reconstructed in the pixel detector with an ECAL SC. In 2017, the CMS pixel detector was upgraded by introducing extra layers in the barrel and forward regions. At the beginning of that year, a commissioning period of the pixel detector led to a slightly reduced efficiency, which mostly affected barrel electrons. Moreover, as a consequence of the new detector, the algorithm used to reconstruct electrons, by matching ECAL superclusters to pixel tracks, was revised. Since the beginning of 2017 data taking, the algorithm requires two hits in the pixel detector when the particle trajectory passes through three or fewer active modules and three hits otherwise, whereas in 2016 only two hits were demanded in all cases. This change produced a significant rate reduction with minimal efficiency losses. To operate with the new pixel detector, DC-DC converters were installed. After a few months of smooth operation, some converters started to fail once the luminosity of the accelerator was increased at the beginning of October 2017, leading to a decreasing efficiency toward the end of the year.
For these reasons related to the pixel detector, 2017 trigger performance is slightly worse than for the other years, in particular for the double-electron trigger, where the retuning of the tight working point does not have any effect. In Fig.~\ref{fig:2017_npv}, the 2017 efficiencies of the single- and double-electron HLT paths are reported as a function of the number of reconstructed primary vertices. In 2017 the majority of the high-pileup data was recorded at the end of the year, at the same time that the pixel DC-DC converters exhibited efficiency losses. Thus the efficiency loss versus number of vertices in the event is not solely due to the pileup. However, as Fig.~\ref{fig:2017_npv} shows, the efficiency loss is significant only for $2.0 < \abs{\eta} < 2.5$. The combined L1 and HLT trigger efficiency for the lowest unprescaled single-electron trigger path is about 80\% at the \pt plateau, with slightly lower values in the endcaps in 2016--2017. Because of the looser selection applied, the double-electron trigger has an efficiency close to unity for both objects. The increase in dead regions in the pixel tracker arising from DC-DC converter failure is difficult to simulate, and is one of the main causes of disagreement between data and simulation, in particular in 2017, especially at high \pt. The discrepancy in the turn-on at low \pt, seen for all years and $\eta$ values, is mainly because of the small differences that exist between the online and offline ECAL response corrections, as described above.
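The tag-and-probe efficiency extraction used for these measurements can be sketched in simplified form (illustrative only; the real measurement also handles background subtraction and bins in \pt and $\eta$, and the function name is hypothetical):

```python
import math

# Simplified sketch of a tag-and-probe efficiency: the efficiency is the
# fraction of probes passing the trigger under study, here with a simple
# binomial statistical uncertainty.
def tnp_efficiency(n_passing_probes, n_total_probes):
    eff = n_passing_probes / n_total_probes
    err = math.sqrt(eff * (1.0 - eff) / n_total_probes)  # binomial error
    return eff, err
```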
\begin{figure}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_008-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_008-b.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_008-c.pdf} \caption{Efficiency of the unprescaled single-electron HLT path with the lowest $\ET^\text{HLT}$ requirement (HLT\_Ele27\_WPTight\_Gsf in 2016, HLT\_Ele32\_WPTight\_Gsf in 2017--2018), with respect to the offline reconstruction, as a function of the electron \pt, for different $\eta$ regions using the 2016 (upper left), 2017 (upper right) and 2018 (lower) data sets. The bottom panel shows the data-to-simulation ratio. The efficiency measurements combine the effects of the L1 and HLT triggers. The vertical bars on the markers represent combined statistical and systematic uncertainties.} \label{fig:SingleEle_pt} \end{figure} \begin{figure}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_009-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_009-b.pdf} \caption{Efficiency of the Ele23 object (left) and Ele12 object (right) of the HLT\_Ele23\_Ele12 trigger, with respect to an offline reconstructed electron, as a function of the electron \pt, obtained for different $\eta$ regions using the 2018 data set. The Ele23 efficiency includes the requirement of passing the leading electron threshold of the asymmetric L1\_DoubleEG seed. The bottom panel shows the data-to-simulation ratio. The efficiency measurements combine the effects of the L1 and HLT triggers. 
The vertical bars on the markers represent combined statistical and systematic uncertainties.} \label{fig:Ele23Ele12_pt} \end{figure} \begin{figure}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_010-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_010-b.pdf} \caption{Efficiency of HLT\_Ele32\_WPTight\_Gsf (left) and HLT\_Ele23\_Ele12 (right) trigger, with respect to an offline reconstructed electron, as a function of the number of reconstructed primary vertices, obtained for different $\eta$ regions using the 2017 data set. Electron $\ET$ is required to be $>$50\GeV. The bottom panel shows the data-to-simulation ratio. The efficiency measurements combine the effects of the L1 and HLT triggers. The vertical bars on the markers represent combined statistical and systematic uncertainties. In 2017 the majority of the high-pileup data was recorded at the end of the year, at the same time that the pixel DC-DC converters exhibited efficiency losses. Thus the efficiency loss versus number of vertices in the event is not solely due to the pileup. However, the efficiency loss is significant only for $2.0 < \abs{\eta} < 2.5$.} \label{fig:2017_npv} \end{figure} \section{Energy corrections}\label{sec:sec6} The energy deposited by electrons and photons in the ECAL and collected by the superclustering algorithm is subject to losses for several reasons. Electromagnetic shower energy in the ECAL can be lost through lateral and longitudinal shower leakage, or in intermodule gaps or dead crystals; the shower energy can also be smaller than the initial electron energy because of the energy lost in the tracker. These losses result in systematic variations of the energy measured in the ECAL. Without any corrections, this would lead to a degradation of the energy resolution for reconstructed electrons and photons. To improve the resolution, a multivariate technique is used to correct the energy estimation for these effects, as discussed below.
The regression technique described in Section~\ref{sec:EnergyReg} uses simulation events only, whereas the energy scale and spreading corrections detailed in Section~\ref{sec:SS} are based on the comparison between data and simulation. \subsection{Energy corrections with multivariate regressions} \label{sec:EnergyReg} A set of regression fits based on BDTs is applied to correct the energy of $\Pe$/$\gamma$~\cite{Khachatryan:2014ira}. The minimum $\ET$ for electrons (photons) considered for the BDT training is 1 (5)\GeV at the simulation level. Each of these energy regressions is built as follows. The regression target $y$ is the ratio between the true energy of an $\Pe$/$\gamma$ and its reconstructed energy, thus the regression prediction for the target is the correction factor to be applied to the measured energy to obtain the best estimate of the true energy. The regression input variables, represented by the vector $\vec{x}$, include the object and event parameters most strongly correlated with the target. The regression is implemented as a gradient-boosted decision tree, and a log-likelihood function is employed~\cite{Khachatryan:2014ira}: \begin{equation} \mathcal{L} = - \sum_{\text{MC}\;\; \Pe/\gamma\;\; \text{objects}} \ln{p(y|\vec{x})}, \end{equation} \noindent where $p(y|\vec{x})$ is the estimated probability for an object to have the observed value $y$, given the input variables $\vec{x}$, and the sum runs over all objects in a simulated sample in which the true values of the object energies are known. The probability density function used in this regression algorithm is a double-sided Crystal Ball (DSCB) function~\cite{Oreglia:1980cs} that has a Gaussian core with power law tails on both sides.
The definition of the DSCB function is as follows: \begin{equation} \text{DSCB}(y; \mu, \sigma, \alpha_{\text{L}}, n_{\text{L}}, \alpha_{\text{R}}, n_{\text{R}}) = \begin{cases} N\; \re^{-\frac{\xi(y)^2}{2}}, & \text{if } -\alpha_{\text{L}}\le \xi(y)\le \alpha_{\text{R}} \\ N\; \re^{-\frac{\alpha_{\text{L}}^2}{2}} \left(\frac{\alpha_{\text{L}}}{n_{\text{L}}} \left(\frac{n_{\text{L}}}{\alpha_{\text{L}}}-\alpha_{\text{L}}-\xi(y)\right) \right)^{-n_{\text{L}}}, & \text{if } \xi(y) < -\alpha_{\text{L}} \\ N\; \re^{-\frac{\alpha_{\text{R}}^2}{2}} \left(\frac{\alpha_{\text{R}}}{n_{\text{R}}} \left(\frac{n_{\text{R}}}{\alpha_{\text{R}}}-\alpha_{\text{R}}+\xi(y)\right) \right)^{-n_{\text{R}}}, & \text{if } \xi(y) > \alpha_{\text{R}} \\ \end{cases} , \end{equation} where $N$ is the normalization constant, $\xi(y)=(y-\mu)/\sigma$, the variables $\mu$ and $\sigma$ are the parameters of the Gaussian core, and the $\alpha_{\text{R}}$ ($\alpha_{\text{L}}$) and $n_{\text{R}}$ ($n_{\text{L}}$) parameters control the right (left) tails of the function. Through the training phase, the regression algorithm performs an estimate of the parameters of the double Crystal Ball probability density as a function of the input vector of the object and event characteristics $\vec{x}$: \begin{equation} p(y|\vec{x}) = \text{DSCB}( y; \mu(\vec{x}), \sigma(\vec{x}), \alpha_{\text{L}}(\vec{x}), n_{\text{L}}(\vec{x}), \alpha_{\text{R}}(\vec{x}), n_{\text{R}}(\vec{x}) ). \end{equation} Subsequently, for an $\Pe$/$\gamma$ candidate, the most probable value $\mu$ is the estimate of the correction to the object's energy, and the width of the Gaussian core $\sigma$ is the estimate of the per-object energy resolution. Both $\mu$ and $\sigma$ are predicted by the regression, as functions of the object and event parameter vector $\vec{x}$. 
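For illustration, the DSCB function above can be written directly in code (an unnormalized sketch with $N=1$, not CMS software); a useful property to verify is that the Gaussian core joins the power-law tails continuously at $\xi = -\alpha_{\text{L}}$ and $\xi = \alpha_{\text{R}}$:

```python
import math

# Unnormalized (N = 1) sketch of the double-sided Crystal Ball function
# defined in the text: a Gaussian core with power-law tails on both sides.
def dscb(y, mu, sigma, alpha_l, n_l, alpha_r, n_r):
    xi = (y - mu) / sigma
    if -alpha_l <= xi <= alpha_r:
        return math.exp(-0.5 * xi * xi)                      # Gaussian core
    if xi < -alpha_l:
        a = alpha_l / n_l * (n_l / alpha_l - alpha_l - xi)   # left tail base
        return math.exp(-0.5 * alpha_l**2) * a**(-n_l)
    a = alpha_r / n_r * (n_r / alpha_r - alpha_r + xi)       # right tail base
    return math.exp(-0.5 * alpha_r**2) * a**(-n_r)
```

At $\xi = -\alpha_{\text{L}}$ the left tail base reduces to 1, so the tail value equals the core value $\re^{-\alpha_{\text{L}}^2/2}$, confirming continuity; the right boundary behaves analogously.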
The electron energy is corrected via the sequential application of three regressions: the first regression (step 1) provides the correction to the SC energy, the second regression (step 2) provides an estimate of the SC energy resolution, taking into account the additional spread in data due to real detector conditions, and the third regression (step 3) yields the final energy value, correcting the combined energy estimate from the SC and the electron track information. The photon energy is corrected using the same method, except that step 3 is omitted. The electron and photon regressions are trained on samples of simulated events with two electrons or photons in each event, generated with a flat transverse momentum spectrum, where the true value of the $\Pe$/$\gamma$ energy is known and the geometric condition $\Delta R < 0.1$ is used to match the reconstructed $\Pe$/$\gamma$ objects to the true ones. The ECAL crystals exhibit slight variations in the light output for a given electromagnetic shower. This effect is corrected by the intercalibration of the crystals~\cite{Chatrchyan:2013dga}, and a corresponding modeling of this variation is applied in the simulation. In addition, the knowledge of the crystal intercalibrations is affected by random deviations~\cite{Khachatryan:2015hwa}, which impact the energy resolution. This effect is usually simulated by applying a random spreading of the crystal intercalibrations within the expected inaccuracy. To keep the regression training independent of this random spreading, two MC samples are used: a sample without the intercalibration spreading (the ideal IC sample) is used to train the energy regression, and a sample with the intercalibration spreading (the real IC sample) is used for the energy resolution estimation and for the SC and track combination. The workflow for the electron and photon energy regressions is summarized in Table~\ref{tab:regression-steps}.
Each subsequent step depends on the output of the previous step. \begin{table}[hbp] \centering \topcaption{Details of the three energy regression steps used in electron and photon energy reconstruction.} \begin{tabular}{lllll} \hline Regression & $\Pe$/$\gamma$ & High level & Simulated & Quantity \\ index & & object & sample type & corrected \\ \hline step 1 & supercluster & electrons/photons & ideal IC & energy \\ step 2 & supercluster & electrons/photons & real IC & energy resolution\\ step 3 & supercluster and track & electrons only & real IC & energy \\ \hline \end{tabular} \label{tab:regression-steps} \end{table} The step 1 regression primarily corrects for the energy that is lost in the tracker material or in intermodule gaps in the ECAL. The regression inputs include the energy and position of the SC, and the variable $R_9$, which is defined as the energy sum of the $3{\times}3$ crystal array centered around the most energetic crystal in the SC divided by the energy of the SC. Other quantities, including lateral shower shapes in $\eta$ and $\phi$, number of saturated crystals, and other SC shape parameters, as well as an estimate of the pileup transverse energy density in the calorimeter are also included. Step 2 is performed to obtain an estimate of the per-object resolution. It uses the same inputs as in step 1, but the SC energy is scaled by the correction factor obtained from the step 1 regression, and the target of the step 2 regression is the ratio of the true energy of the particle to the measured energy corrected by step 1. Since imperfect intercalibration affects the spread of the energy response between crystals and not the average response, in step 2 the mean $\mu(\vec{x})$ of the DSCB probability density function is fixed to that obtained from step 1. The primary result of the step 2 regression is the estimated value of the energy resolution $\sigma(\vec{x})$.
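The $R_9$ variable used as a regression input follows directly from its definition. A minimal sketch (illustrative only; the crystal-map representation and function name are hypothetical):

```python
# Sketch of the R9 computation described in the text: the energy in the 3x3
# crystal array centered on the most energetic crystal of the supercluster,
# divided by the total supercluster energy.
def compute_r9(crystal_energies, sc_energy):
    """crystal_energies: dict mapping (ieta, iphi) -> energy in GeV."""
    seed = max(crystal_energies, key=crystal_energies.get)
    e3x3 = sum(
        crystal_energies.get((seed[0] + di, seed[1] + dj), 0.0)
        for di in (-1, 0, 1)
        for dj in (-1, 0, 1)
    )
    return e3x3 / sc_energy
```

Narrow, unconverted showers concentrate their energy in the $3{\times}3$ array and give $R_9$ close to 1, whereas showers spread by bremsstrahlung or conversion give lower values.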
For electrons, since the energy measurement is performed independently in the ECAL and the tracker, an additional step combining the ECAL energy and momentum estimate from the tracker is performed. A weighted combination of the two independent measurements can be formed as: \begin{equation} E_{\text{combined}}^{\text{reco}}=\frac{ E_{\text{ECAL}}/\sigma_{E}^2 + p_{\text{tracker}}/\sigma_p^2 } { 1/\sigma_{E}^2 + 1/\sigma_p^2 }, \end{equation} \noindent where $E_{\text{ECAL}}$ and $\sigma_E$ are the ECAL measurements of the energy and the energy resolution of the SC of the electron corrected with the step 1 and 2 regressions, respectively, and $p_{\text{tracker}}$ with $\sigma_p$ are the momentum magnitude and momentum resolution measured by the electron tracking algorithm (as described in Section~\ref{sec:electron-track-reconstruction}). This improves the predicted electron energy at low $\ET$, especially where the momentum measurement from the tracker has a better resolution than the corresponding ECAL measurement. The average relative momentum resolution of the tracker ($\sigma_p$) and energy resolution of the ECAL ($\sigma_E$) are shown in Fig.~\ref{fig:EcalAndTrackerResolution}. The momentum resolution of the tracker is better than the ECAL energy resolution for transverse momenta below 10--15\GeV and deteriorates at higher energies. The $E$-$p$ combination in CMS is only performed for electrons with energies less than 200\GeV. For higher-energy electrons only the SC energy is used, corrected by the above described regression steps. The step 3 regression uses as a target the ratio of the true electron energy and $E_{\text{combined}}^{\text{reco}}$ computed as the $E$-$p$ combination discussed above. 
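The inverse-variance weighting in the $E$-$p$ combination equation can be sketched directly (illustrative only, not CMS code; the function name is hypothetical):

```python
# Sketch of the weighted E-p combination described in the text: the ECAL
# energy and the tracker momentum are combined with weights given by the
# inverse squares of their respective resolutions.
def combine_e_p(e_ecal, sigma_e, p_tracker, sigma_p):
    w_e = 1.0 / sigma_e**2
    w_p = 1.0 / sigma_p**2
    return (e_ecal * w_e + p_tracker * w_p) / (w_e + w_p)
```

The measurement with the smaller uncertainty dominates the combination, which is why the tracker momentum carries most of the weight at low \pt while the ECAL energy dominates at high \pt.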
The inputs for the regression include all quantities that enter the $E_{\text{combined}}^{\text{reco}}$ expression, plus several additional tracker quantities including the fractional amount of energy lost by the electron in the tracker, whether the electron was reconstructed as ECAL-driven or tracker-driven (as discussed in Section~\ref{sec:electron-track-reconstruction}), and a few other tracker-related parameters. In Figs.~\ref{fig:EcalAndTrackerResolution}--\ref{fig:energyVSET} the outcome of the step 3 regression is referred to as the $E$-$p$ combination. \begin{figure*}[h!] \centering \includegraphics[ width=\cmsFigWidth]{Figure_011-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_011-b.pdf}\\ \caption{Relative electron resolution versus electron $\pt$, as measured by the ECAL (``corrected SC''), by the tracker, and seen in the $E$-$p$ combination after the step 3 regression, as found in 2016 MC samples for barrel (left) and endcap (right) electrons. Vertical bars on the markers represent the uncertainties coming from the fit procedure.} \label{fig:EcalAndTrackerResolution} \end{figure*} These regressions lead to significantly improved measurements of electron and photon energies and energy resolutions as seen in Fig.~\ref{fig:regressionFitExamples}. The primary improvement occurs in the regressions applied to the energy of the SC (steps 1 and 2). Correcting the $E$-$p$ combination, which already uses the improved SC energy, has a smaller impact. The effects of the regression corrections for the various steps of the correction procedure are illustrated in Fig.~\ref{fig:regressionFitExamples} for low-$\pt$ electrons. The regressions are robust, and the performance is stable for electrons and photons in a wide energy range in all regions of ECAL, and as a function of pileup, as shown in Figs.~\ref{fig:energyVSPU}~and~\ref{fig:energyVSET}. \begin{figure*}[h!]
\centering \includegraphics[ width=\cmsFigWidth]{Figure_012-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_012-b.pdf}\\ \caption{Ratio of the true to the reconstructed electron energy in the \pt range 15--30\GeV with and without regression corrections, with a DSCB function fit overlaid, in 2016 MC samples for barrel (left) and endcap (right) electrons. Vertical bars on the markers represent the statistical uncertainties of the MC samples.} \label{fig:regressionFitExamples} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[ width=\cmsFigWidth]{Figure_013-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_013-b.pdf}\\ \caption{Most probable value of the ratio of true to reconstructed electron energy, as a function of pileup, with and without regression corrections in 2016 MC samples for barrel (left) and endcap (right) electrons. Vertical bars on the markers represent the uncertainties coming from the fit procedure and are too small to be observed from the plot.} \label{fig:energyVSPU} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[ width=\cmsFigWidth]{Figure_014-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_014-b.pdf}\\ \caption{Most probable value of the ratio of true to reconstructed electron energy, as a function of the electron \pt with and without the regression corrections, evaluated using 2016 MC samples for barrel (left) and endcap (right) electrons. Vertical bars on the markers represent the uncertainties coming from the fit procedure and are too small to be observed from the plot.} \label{fig:energyVSET} \end{figure*} \subsection{Energy scale and spreading corrections} \label{sec:SS} After applying the corrections described in Section~\ref{sec:EnergyReg}, small differences remain between data and simulation in both the electron and photon energy scales and resolutions. In particular, the resolution in simulation is better than that in data. 
An additional spreading needs to be applied to the photon and electron energy resolutions in simulation to match that observed in data. The electron and photon energy scales are corrected by varying the scale in the data to match that observed in simulated events. The magnitude of the final correction is up to 1.5\% with a total uncertainty estimated to be smaller than 0.1 (0.3)\% in the barrel (endcap). Two dedicated methods, the ``fit method'' and the ``spreading method''~\cite{Khachatryan:2015hwa}, were developed in Run 1 to estimate these corrections from $\PZ \to \Pe\Pe$ events. In the fit method, an analytic fit is performed to the invariant mass distribution of the $\PZ$ boson ($m_{\Pe\Pe}$), using the convolution of a Breit--Wigner (BW) function and a one-sided Crystal Ball (OSCB) function. The invariant mass distributions obtained from data and from simulated events are fitted separately and the results are compared to extract a scale offset. The BW width is fixed to that of the $\PZ$ boson: $\Gamma_{\PZ} = 2.495\GeV$~\cite{Zyla2020}. The parameters of the OSCB function, which describes calorimeter resolution effects and bremsstrahlung losses in the detector material upstream of the ECAL, are free parameters of the fit. The spreading method, on the other hand, utilizes the simulated $\PZ$ boson invariant mass distribution as a probability density function in a maximum likelihood fit to the data. The simulation already accounts for all known detector effects, reconstruction inefficiencies, and the $\PZ$ boson kinematic properties. The residual discrepancy between data and simulation is described by an energy spreading function, which is applied to the simulation. A Gaussian spreading, which ranges from 0.1 to 1.5\%, is applied to the simulated energy response; it is adequate to describe the data in all the examined categories of events. Compared with the fit method, the spreading method can accommodate a larger number of electron categories in which these corrections are derived.
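The core operation of the spreading method, applying a Gaussian smearing to the simulated energy response, can be sketched as a toy (illustrative only; the function name, seed, and list-based interface are hypothetical):

```python
import random

# Toy sketch of the Gaussian energy spreading described in the text: each
# simulated energy is multiplied by a factor drawn from a Gaussian of mean 1
# and relative width rel_spread (0.1-1.5% in the paper), so that the simulated
# resolution is degraded to match the resolution observed in data.
def apply_energy_spreading(energies, rel_spread, seed=1234):
    rng = random.Random(seed)  # fixed seed for a reproducible toy
    return [e * rng.gauss(1.0, rel_spread) for e in energies]
```

In the actual procedure the spreading parameters are fitted per electron category in a maximum likelihood comparison of the $m_{\Pe\Pe}$ distributions in data and simulation; the toy only shows the smearing itself.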
A multistep procedure is implemented, based on the fit and spreading methods, to fine-tune the electron and photon energy scales. To derive the corrections to the photon energy scale, electrons from $\PZ$ boson decays are used, reconstructed using information exclusively from the ECAL. In the first step, any residual long-term drifts in the energy scale in data are corrected by using the fit method, in intervals of approximately 18 hours (roughly corresponding to one LHC fill). Further subcategories are defined based on various $\eta$ regions, owing to the different levels of radiation damage and the different amounts of material upstream of the ECAL. There are two $\eta$ regions in the barrel, $\abs{\eta}< 1.00$ and $1.00 < \abs{\eta}< 1.44$. In the endcap, the two $\eta$ categories are defined by $1.57 < \abs{\eta}< 2.00$ and $2.00 < \abs{\eta}< 2.50$. After applying these time-dependent residual scale corrections, the energy scale in data is stable with time. In the second step, corrections to both the energy resolution in the simulation and the scale for the data are derived simultaneously in bins of $\abs{\eta}$ and $R_9$ for electrons, using the spreading method. The energy scale corrections are derived in 50 electron categories: 5 in $\abs{\eta}$ and 10 in $R_9$. This is a significant improvement in granularity compared with Run 1~\cite{Khachatryan:2015hwa}, where only 8 electron categories were used (4 in $\abs{\eta}$ and 2 in $R_9$), thus leading to an improvement in the precision of the derived scale corrections. The $R_9$ value of each electron or photon SC is used to select electrons that interact or photons that undergo a conversion in the material upstream of the ECAL. The energy deposited by photons that convert before reaching the ECAL tends to have a wider transverse profile and thus lower $R_9$ values than those for unconverted photons. The same is true for electrons that radiate upstream of the ECAL.
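The mapping of an electron to one of the 50 scale-correction categories can be sketched as follows; the bin edges below are placeholders, since only the numbers of bins (5 in $\abs{\eta}$, 10 in $R_9$) are specified, not their boundaries:

```python
import numpy as np

# Hypothetical bin edges (placeholders, not the CMS values):
ETA_EDGES = np.array([0.0, 0.6, 1.0, 1.44, 2.0, 2.5])  # 5 |eta| bins
R9_EDGES = np.linspace(0.5, 1.0, 11)                    # 10 uniform R9 bins

def scale_category(abs_eta, r9):
    """Map an electron to one of the 5 x 10 = 50 scale-correction categories.
    Values outside the edges (e.g. R9 slightly above 1) are clipped into
    the first/last bin."""
    ieta = int(np.clip(np.digitize(abs_eta, ETA_EDGES) - 1, 0, 4))
    ir9 = int(np.clip(np.digitize(r9, R9_EDGES) - 1, 0, 9))
    return ieta * 10 + ir9
```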
The energy scale corrections obtained from this step in fine bins of $R_9$ are shown in Fig.~\ref{fig:scaleVSr9} for the 2017 data-taking period. The uncertainties shown are statistical only. \begin{figure*}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_015-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_015-b.pdf}\\ \caption{Energy scale corrections for the 2017 data-taking period, as a function of $R_9$, for different $\abs{\eta}$ ranges, in the barrel (left) and in the endcap (right). The error bars represent the statistical uncertainties in the derived correction and are too small to be observed from the plot.} \label{fig:scaleVSr9} \end{figure*} The ECAL electronics operate with three gains: 1, 6, and 12, depending on the energy recorded in a single readout channel. Most events are reconstructed with gain 12, whereas events with the highest energies are reconstructed with gains 6 or 1. The gain switch from 12 to 6 (6 to 1) typically happens for electron/photon energies above 150 (300)\GeV in the barrel, and higher values in the endcaps. A residual scale offset of nearly 1\% is measured for gain~6 in both the EB and EE, and of 2 (3)\% for gain~1 in the EB (EE). Thus, an additional gain-dependent residual correction is derived and applied. The systematic uncertainties in the electron energy scale and resolution corrections are derived using $\PZ\to \Pe\Pe$ events by varying the distribution of $R_9$, the electron selections used, and the $\ET$ thresholds on the electron pairs used in the derivation of the corrections. The contributions of these individual sources are added in quadrature to obtain the total uncertainty. This uncertainty in the energy scale is 0.05--0.1 (0.1--0.3)\% for electrons in the EB (EE), where the range corresponds to the variation in the $R_9$ bins.
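The quadrature combination of the individual systematic sources can be sketched as follows (the example source values are hypothetical):

```python
import math

def total_uncertainty(sources):
    """Combine independent systematic uncertainty sources in quadrature."""
    return math.sqrt(sum(s * s for s in sources))

# Hypothetical per-source scale uncertainties (in %) from varying R9,
# the electron selection, and the ET thresholds:
total = total_uncertainty([0.03, 0.03, 0.02])
```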
The performance of energy corrections in data, including the ones described in Section~\ref{sec:EnergyReg}, is illustrated by the reconstructed $\PZ\to \Pe\Pe$ mass distribution before and after corrections, as shown in Fig.~\ref{fig:effectRegression}. The regression clearly improves the mass resolution for electrons from $\PZ$ boson decays, in both the barrel and endcaps, and the absolute energy scale correction shifts the dielectron mass distribution peak closer to the world-average $\PZ$ boson mass value. \begin{figure*}[h!] \centering \includegraphics[ width=\cmsFigWidth]{Figure_016-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_016-b.pdf}\\ \caption{Dielectron invariant mass distribution before and after all the energy corrections (regression and scale corrections) for barrel (left) and endcap (right) electrons for $\PZ\to \Pe\Pe$ events. The error bars represent the statistical uncertainties in data and are too small to be observed from the plot.} \label{fig:effectRegression} \end{figure*} The data-to-simulation agreement, after the application of residual scale corrections to data and spreading to simulated events, is shown in Fig.~\ref{fig:ZpeakAfterSmearing} for two representative categories. \begin{figure*}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_017-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_017-b.pdf}\\ \caption{Invariant mass distribution of $\PZ\to\Pe\Pe$ events, after spreading is applied to simulation and scale corrections to data. The results are shown for barrel (left) and endcap (right) electrons. The simulation is shown with the filled histograms and data are shown by the markers. The vertical bars on the markers represent the statistical uncertainties in data. The hatched regions show the combined statistical and systematic uncertainties in the simulation.
The lower panels display the ratio of the data to the simulation with the bands representing the uncertainties in the simulation.} \label{fig:ZpeakAfterSmearing} \end{figure*} The final energy resolution after all the corrections (regression and scale corrections) ranges from 2 to 5\%, depending on electron pseudorapidity and energy loss through bremsstrahlung in the detector material. \subsection{Performance and validation with data} \label{sec:PerformanceAndValidationWithData} Energy scale and spreading corrections, derived with electrons from $\PZ$ boson decays with a mean $\ET$ of around 45\GeV, are also applied to electrons and photons over a wide range of $\ET$ up to several hundreds of GeV. Therefore, it is important to validate the performance of the residual energy corrections on a sample of unbiased photons and on high-energy $\Pe$/$\gamma$. For the validation with unbiased photons, a sample of $\PZ \to \mu\mu\gamma$ events selected from data with 99\% photon purity is used. Events in both data and simulation are required to satisfy standard dimuon trigger requirements. An event is kept if there are at least two muons passing the tight muon identification requirements~\cite{Sirunyan:2018fpa}, with $\pt>30$ and $10\GeV$ for the leading and subleading muon, respectively, and $\abs{\eta} < 2.4$. The two muons must have opposite charges and an invariant mass ($m_{\mu\mu}$) greater than 35\GeV. Once the dimuon system is identified, a photon in the event is required to have $\abs{\eta}<2.5$, be reconstructed outside the barrel-endcap transition region, and have $\ET>20\GeV$. The $\mu \mu \gamma$ system is then selected by requiring that the photon is within $\Delta R = 0.8 $ of at least one of the muons. After applying these criteria, roughly $140\times 10^3$ ($230\times 10^3$) events are selected with a photon in the EB for 2016--2017 (2018) and roughly $40\times 10^3$ ($80\times 10^3$) with a photon in the EE for 2016--2017 (2018) data sets.
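The selection described above can be sketched as follows (a simplified illustration, not the CMS software; the barrel-endcap transition veto for the photon is omitted, and inputs are plain dictionaries):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the eta-phi metric, with phi wrapped to [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def select_mumugamma(mu1, mu2, pho, m_mumu):
    """Sketch of the Z -> mu mu gamma selection: pt, eta, charge, mass,
    and photon-muon Delta R requirements."""
    lead_pt, sub_pt = max(mu1["pt"], mu2["pt"]), min(mu1["pt"], mu2["pt"])
    if not (lead_pt > 30 and sub_pt > 10):
        return False
    if abs(mu1["eta"]) > 2.4 or abs(mu2["eta"]) > 2.4:
        return False
    if mu1["charge"] * mu2["charge"] >= 0 or m_mumu <= 35:
        return False
    if pho["et"] <= 20 or abs(pho["eta"]) >= 2.5:
        return False
    # The photon must be close to at least one muon (FSR topology).
    return min(delta_r(pho["eta"], pho["phi"], m["eta"], m["phi"])
               for m in (mu1, mu2)) < 0.8
```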
Figure \ref{fig:ZFSRpeak} shows the invariant mass distribution of the $\PZ\to\mu\mu\gamma$ system ($m_{\mu\mu\gamma}$) obtained after applying the scale and spreading corrections derived with electrons from $\PZ$ boson decays to the photons, shown for barrel photons in 2017 data and simulation and for endcap photons in 2018 data and simulation. The photon energy scale is extracted for data and simulation from the mean of the distribution of a per-event estimator~\cite{Chatrchyan:2013dga} defined as $s=(m_{\mu\mu\gamma}^2-m_{\PZ}^2)/(m_{\PZ}^2-m_{\mu\mu}^2)$, where $m_{\PZ}$ denotes the Particle Data Group world-average $\PZ$ boson mass~\cite{Zyla2020}. The energy scale difference between data, corrected with the energy scale corrections derived with electrons from $\PZ$ boson decays, and simulation, both from $\PZ\to\mu\mu\gamma$ events, is smaller than 0.1\% for photons in both the barrel and the endcaps. This difference is within the sum in quadrature of the statistical and systematic uncertainties associated with the scale extraction, which include the scale and spreading systematic uncertainties, as well as the systematic uncertainties in the corrections applied to the muon momenta~\cite{Sirunyan:2018fpa}. \begin{figure*}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_018-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_018-b.pdf}\\ \caption{Invariant mass distributions of $\PZ\to\mu\mu\gamma$ events, shown for barrel photons in 2017 data and simulation (left) and for endcap photons in 2018 data and simulation (right). The simulation is shown with the filled histograms and data are shown by the markers. The vertical bars on the markers represent the statistical uncertainties in data. The hatched regions show the sum of statistical and systematic uncertainties in the simulation.
The lower panels display the ratio of the data to the simulation with bands representing the uncertainties in the simulation.} \label{fig:ZFSRpeak} \end{figure*} The performance of the energy corrections on high-energy $\Pe$/$\gamma$ is validated by using $\PZ\to\Pe\Pe$ data and MC samples, with scale and spreading corrections applied. The residual corrections for $\ET$ between 120 and 300\GeV are smaller than 0.8 (1.1)\% in the barrel (endcaps). These values are used to derive the systematic uncertainties in the energy correction extrapolation above 300\GeV, where the number of events is small. In this $\ET$ range, the systematic uncertainty is conservatively assumed to be 2 (3) times the systematic uncertainty in the EB (EE) for the $\ET$ range 120--300\GeV. \subsubsection{Impact of residual corrections in the $\PH\to \gamma\gamma$ channel} \label{sec:rescorr_Hgg} The mass of the Higgs boson in the diphoton channel has recently been measured by the CMS experiment using $\Pp\Pp$ collision data collected at a center-of-mass energy of 13\TeV during 2016, corresponding to an integrated luminosity of 35.9\fbinv~\cite{Sirunyan:2020xwk}. This result benefits from a refined calibration of the ECAL and from new analysis techniques, yielding $m_{\PH} = 125.78 \pm 0.26\GeV$~\cite{Sirunyan:2020xwk}. A key requirement for this measurement was to measure and correct for nonlinear discrepancies between data and simulation in the energy scale, as a function of $\ET$, using electrons from $\PZ$ boson decays. Additional energy scale corrections were derived in bins of $\abs{\eta}$ and $\ET$ to account for any nonlinear response of the ECAL with energy for the purpose of this high-precision measurement. The corrections obtained from this step are shown in Fig.~\ref{fig:scale_vs_et} for electrons, as functions of $\ET$, in three bins of $\abs{\eta}$ in the EB.
This improves the precision of the $m_{\PH}$ measurement, since the energy spectrum of the electrons from the $\PZ$ boson decays ($\langle \ET \rangle\approx 45\GeV$) used to derive the scale corrections is different from the energy spectrum of photons from the Higgs boson decay ($\langle \ET \rangle\approx 60\GeV$). \begin{figure}[t] \centering \includegraphics[ width=\cmsFigWidth]{Figure_019.pdf} \caption{Energy scale corrections as a function of the photon $\ET$. The horizontal bars represent the variable bin width. The systematic uncertainty associated with this correction is approximately the maximum deviation observed in the $\ET$ range between 45 and 65\GeV for electrons in the barrel.} \label{fig:scale_vs_et} \end{figure} The accuracy of the energy scale correction extrapolation in the energy range of interest of the $\PH\to \gamma\gamma$ search (between 45 and 65\GeV in $\ET$) is 0.05--0.1 (0.1--0.3)\% for photons in the EB (EE)~\cite{Sirunyan:2020xwk}. The energy resolution achieved enables the most precise measurement of the Higgs boson mass in the diphoton channel. \section{Electron and photon selection}\label{sec:sec8} Many physics processes under study at the LHC are characterized by the presence of electrons or photons in the final state. The performance of the identification algorithms for electrons and photons is therefore crucial for the physics reach of the CMS experiment. Two different techniques are used in CMS for the identification of electrons and photons. One is based on sequential requirements (cut-based), and the other is based on a multivariate discriminant. Although the latter is more suited for precision measurements and physics analyses with well-established final states, the former is largely used for model-independent searches for nonconventional signatures. Below, we describe in detail the main electron and photon identification strategies and their performance throughout Run 2.
\subsection{Electron and photon identification variables} Different strategies are used to identify prompt (produced at the primary vertex) and isolated electrons and photons, and to separate them from background sources. For prompt electrons, background sources can originate from photon conversions, hadrons misidentified as electrons, and secondary electrons from semileptonic decays of \cPqb or \cPqc quarks. The most important background to prompt photons arises from jets fragmenting mainly into light neutral mesons $\pi^0$ or $\eta$, which promptly decay to two photons. For the energy range of interest, the $\pi^0$ or $\eta$ mesons are significantly boosted, such that the two photons from the decay are nearly collinear and are difficult to distinguish from a single photon incident on the calorimeter. Different working points are defined to identify either electrons or photons, corresponding to identification efficiencies of approximately 70, 80, and 90\%. In all cases, the data and simulation efficiencies are compatible within 1--5\% over the full $\eta$ and $\ET$ ranges for electrons and photons. \subsubsection{Isolation criteria}\label{sec:isos} One of the most efficient ways to reject electron and photon backgrounds is the use of isolation energy sums, a generic class of discriminating variables that are constructed from the sum of the reconstructed energy in a cone around electrons or photons in different subdetectors. For this purpose, it is convenient to define cones in terms of an $\eta$--$\phi$ metric; the distance with respect to the reconstructed electron or photon direction is defined by $\Delta R$. To ensure that the energy from the electron or photon itself is not included in this sum, it is necessary to define a veto region inside the isolation cone, which is excluded from the isolation sum. Electron and photon isolation exploits the information provided by the PF event reconstruction~\cite{Sirunyan:2017ulk}.
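A generic isolation sum of this kind, with an outer cone and an inner veto region, can be sketched as follows (an illustration only; the veto size below is a placeholder, since the actual veto regions are type- and detector-specific):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the eta-phi metric, with phi wrapped to [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def isolation_sum(cand, particles, cone=0.3, veto=0.01):
    """Sum the pt of particles inside the isolation cone around the
    candidate direction, excluding an inner veto region (placeholder size)."""
    total = 0.0
    for p in particles:
        dr = delta_r(cand["eta"], cand["phi"], p["eta"], p["phi"])
        if veto < dr < cone:
            total += p["pt"]
    return total
```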
The isolation variables are obtained by summing the transverse momenta of charged hadrons ($I_{\text{ch}}$), photons ($I_{\gamma}$), and neutral hadrons ($I_{\text{n}}$), inside an isolation cone of $\Delta R = 0.3$ with respect to the electron or photon direction. The larger the energy of the incoming electron or photon, the larger the amount of energy spread around its direction in the various subdetectors. For this reason, the thresholds applied on the isolation quantities are frequently parametrized as a function of the particle $\ET$, as indicated in Tables~\ref{tab:barr_endcap_cutpho} and~\ref{tab:cbarr_endcap_cutele}. The isolation variables are corrected to mitigate the contribution from pileup. This contribution in the isolation region is estimated as $\rho A_{\text{eff}}$, where $\rho$ is the median transverse energy density per unit area in the event and $A_{\text{eff}}$ is the area of the isolation region weighted by a factor that accounts for the dependence of the pileup transverse energy density on the object $\eta$~\cite{Khachatryan:2015hwa}. The quantity $\rho A_{\text{eff}}$ is subtracted from the isolation quantities. The distributions of $I_{\gamma}$ after the $\rho$ corrections are shown in Fig.~\ref{fig:ZFSR_pfPhotonISO} for photons in the EB and EE. \begin{figure*}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_020-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_020-b.pdf}\\ \caption{The $\rho$-corrected PF photon isolation in a cone defined by $\Delta R = 0.3$ for photons in $\PZ\to\mu\mu\gamma$ events in the EB (left) and in the EE (right). Photons are selected from 2016 data and simulation. The simulation is shown with the filled histograms and data are represented by the markers. The vertical bars on the markers represent the statistical uncertainties in data. The hatched regions show the statistical uncertainty in the simulation.
The lower panels display the ratio of the data to the simulation, with the bands representing the uncertainties in the simulation.} \label{fig:ZFSR_pfPhotonISO} \end{figure*} \subsubsection{Shower shape criteria} Another method to reject jets with high electromagnetic content exploits the shape of the electromagnetic shower in the ECAL. Even if the two photons from neutral meson decays inside a jet cannot be fully resolved, a wider shower profile is expected, on average, compared with a single incident electron or photon. This is particularly true along the $\eta$ axis of the cluster, since the presence of the material, combined with the effect of the magnetic field, reduces the discriminating power resulting from the $\phi$ profile of the shower. These effects can elongate the electromagnetic cluster in the $\phi$ direction both for converted photons and for pairs of photons from neutral meson decays in which at least one of the photons has converted. Several shower-shape variables are constructed to parameterize the differences between the geometrical shape of energy deposits from prompt photons or electrons compared with those caused by hadrons from jets. The following are two of the most relevant variables used for photon and electron identification. \begin{itemize} \item Hadronic over electromagnetic energy ratio ($H/E$): The $H/E$ ratio is defined as the ratio between the energy deposited in the HCAL in a cone of radius $\Delta R = 0.15$ around the SC direction and the energy of the photon or electron candidate. There are three sources that significantly contribute to the measured hadronic energy ($H$) of a genuine electromagnetic object: HCAL noise, pileup, and leakage of electrons or photons through the inter-module gaps. For low-energy electrons and photons, the first two sources are the primary contributors, whereas for high-energy electrons, the last contribution dominates.
Therefore, to cover both low- and high-energy regions, the $H/E$ selection requirement is of the form $H < X + Y \rho + J E$, where $X$ and $Y$ represent the noise and pileup terms, respectively, and $J$ is a scaling term for high-energy electrons and photons. An example of the $H/E$ distribution in data and simulation is shown in Fig.~\ref{fig:hoeEle} for electrons in the barrel and endcap regions. The data-to-simulation ratio in Fig.~\ref{fig:hoeEle} is mostly consistent with one except for $H/E > 0.16$ where background from events with nonprompt electrons starts to contribute. \begin{figure*}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_021-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_021-b.pdf}\\ \caption{Distribution of $H/E$ for electrons in the barrel (left) and in the endcaps (right). The simulation is shown with the filled histograms and data are represented by the markers. The vertical bars on the markers represent the statistical uncertainties in data. The hatched regions show the statistical uncertainty in the simulation. The lower panels display the ratio of the data to the simulation, with the bands representing the uncertainties in the simulation. } \label{fig:hoeEle} \end{figure*} \item $\sigma_{i \eta i \eta}$: the second moment of the log-weighted distribution of crystal energies in $\eta$, calculated in the $5{\times}5$ matrix around the most energetic crystal in the SC and rescaled to units of crystal size. The mathematical expression is given below: \begin{equation}\label{eq:sieie} \sigma_{i \eta i \eta} = \sqrt{ \frac{ \sum_{i}^{5{\times}5} w_i (\eta_i - \overline{\eta}_{5{\times}5})^2 }{\sum_{i}^{5{\times}5} w_i} }. 
\end{equation} Here, $\eta_i$ is the pseudorapidity of the $i$th crystal, $\overline{\eta}_{5{\times}5}$ denotes the weighted mean pseudorapidity position, and $w_i$ is a weight factor defined as $w_i = \max (0 , 4.7 + \ln (E_i / E_{5{\times}5}))$, which is nonzero if $\ln (E_i / E_{5{\times}5}) > -4.7$, i.e., $E_i > 0.9\% $ of $E_{5{\times}5}$. This weighting is intended to suppress ECAL noise, by ensuring that only crystals with energy deposits of at least 0.9\% of $E_{5{\times}5}$, the energy deposited in a $5{\times}5$ crystal matrix around the most energetic crystal, contribute to the definition of $\sigma_{i \eta i \eta}$. Because of the presence of upstream material and the magnetic field, the shower from an electron or a photon spreads into more than one crystal. The size of the crystal in $\eta$ in the EB is 0.0175, and in the EE it varies from 0.0175 to 0.05. Following Eq.~\ref{eq:sieie}, the $\sigma_{i \eta i \eta}$ variable essentially depends on the distances between crystals in $\eta$. Thus, the spread of $\sigma_{i \eta i \eta}$ in the EE is about twice as large as in the EB. The distribution of $\sigma_{i \eta i \eta}$ is expected to be narrow for single-photon or electron showers, and broad for two-photon showers that arise from neutral meson decays. An example of the $\sigma_{i \eta i \eta}$ distribution in data and simulation is shown in Fig.~\ref{fig:ZFSR_sigamIeIe} for photons in the barrel and in the endcap regions. \begin{figure*}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_022-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_022-b.pdf}\\ \caption{ Distribution of $\sigma_{i \eta i \eta}$ for photons from $\PZ\to\mu\mu\gamma$ events in the barrel (left) and in the endcaps (right). Photons are selected from 2018 data and simulation. The simulation is shown with the filled histograms and data are represented by the markers. The vertical bars on the markers represent the statistical uncertainties in data.
The hatched regions show the statistical uncertainty in the simulation. The lower panels display the ratio of the data to the simulation, with the bands representing the uncertainties in the simulation.} \label{fig:ZFSR_sigamIeIe} \end{figure*} \end{itemize} Another important variable is $R_9$. Showers of photons that convert before reaching the calorimeter have wider transverse profiles and lower values of $R_9$ than those of unconverted photons. The energy-weighted $\eta$-width and $\phi$-width of the SC provide further information on the lateral spread of the shower. In the endcaps, where CMS is equipped with a preshower detector, the variable $\sigma_{\text{RR}} = \sqrt{\smash[b]{\sigma_{xx}^2 + \sigma_{yy}^2}}$ is also used, where $\sigma_{xx}$ and $\sigma_{yy}$ measure the lateral spread in the two orthogonal directions of the sensor planes of the preshower detector. \subsubsection{Additional electron identification variables}\label{sec:AddVar} Additional tracker-related variables are used for the identification of electrons. One such discriminating variable is $\abs{1/E - 1/p}$, where $E$ is the SC energy and $p$ is the track momentum at the point of closest approach to the vertex. Another important variable for the electron identification is $\abs{\Delta\eta^{\text{seed}}_{\text{in}}}$, defined as $\abs{\eta_{\text{seed}} - \eta_{\text{track}}}$, where $\eta_{\text{seed}}$ is the position of the seed cluster in $\eta$, and $\eta_{\text{track}}$ is the track $\eta$ extrapolated from the innermost track position. Similarly, $\abs{\Delta\phi_{\text{in}}} = \abs{\phi_{\text{SC}} - \phi_{\text{track}}}$ is another discriminating variable that uses the SC energy-weighted position in $\phi$ instead of the seed cluster $\phi$. An example of the $\Delta\phi_{\text{in}}$ distribution in data and simulation is shown in Fig.~\ref{fig:dphiin} for electrons in the barrel and endcap regions.
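As a concrete illustration of the shower-width definition in Eq.~\ref{eq:sieie}, the following sketch computes $\sigma_{i \eta i \eta}$ from a $5{\times}5$ matrix of crystal energies, measuring crystal $\eta$ in units of crystal size with the seed crystal at the center:

```python
import math

def sigma_ieta_ieta(energies_5x5):
    """Log-weighted eta-width of a 5x5 crystal matrix (cf. Eq. sieie).
    Rows are indexed in eta; crystal eta is measured in crystal units,
    with the seed crystal at eta index 0."""
    e_tot = sum(sum(row) for row in energies_5x5)
    w_sum = w_eta = w_eta2 = 0.0
    for i, row in enumerate(energies_5x5):
        eta_i = i - 2  # crystal units relative to the seed
        for e in row:
            if e <= 0:
                continue
            # Weight vanishes for deposits below 0.9% of E5x5 (noise rejection).
            w = max(0.0, 4.7 + math.log(e / e_tot))
            w_sum += w
            w_eta += w * eta_i
            w_eta2 += w * eta_i ** 2
    mean = w_eta / w_sum
    return math.sqrt(max(0.0, w_eta2 / w_sum - mean ** 2))
```

A shower contained in a single crystal gives zero width, while energy shared across neighboring $\eta$ rows gives a wider value, as expected for two-photon showers.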
An important source of background to prompt electrons arises from secondary electrons produced in conversions of photons in the tracker material. To reject this background, CMS algorithms exploit the pattern of hits associated with the electron track. When photon conversions take place inside the tracker volume, the first hit of the electron tracks from the converted photons is not likely to be located in the innermost tracker layer, and missing hits are therefore expected in the first tracker layers. For prompt electrons, whose trajectories start from the beamline, no missing hits are expected in the innermost layers. Distributions of some of the identification variables for electrons are shown in Figs.~\ref{fig:hoeEle} and~\ref{fig:dphiin}, for electrons from $\PZ$ boson decays in data and simulation. The data-to-simulation ratio is close to unity, except in the high-end tails, where background from events with nonprompt electrons starts to contribute. \begin{figure*}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_023-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_023-b.pdf}\\ \caption{Distribution of $\Delta\phi_{\text{in}}$ for electrons in the barrel (left) and endcaps (right). For the cut-based identification, selection requirements on this variable are listed in Table~\ref{tab:cbarr_endcap_cutele}. The simulation is shown with the filled histograms and data are represented by the markers. The vertical bars on the markers represent the statistical uncertainties in data. The hatched regions show the statistical uncertainty in the simulation. The lower panels display the ratio of the data to the simulation, with the bands representing the uncertainties in the simulation.} \label{fig:dphiin} \end{figure*} \subsection{Photon identification}\label{sec:phoID} A detailed description of photon identification strategies is given below.
\subsubsection{Cut-based photon identification}\label{sec:phoIDCB} Requirements are made on $\sigma_{i \eta i \eta}$, $H/E$, and the isolation sums after correcting for pileup as detailed in Section~\ref{sec:isos}. A summary of the standard identification requirements for photons in the barrel and the endcaps is given in Table~\ref{tab:barr_endcap_cutpho} for the tight working point. The selection requirements were tuned using an MC sample with 2017 data-taking conditions, but these identification criteria are suitable for use in all three years of Run 2. The ``loose'' working point has an average signal efficiency of about 90\%, and is generally used when backgrounds are low. The ``medium'' and ``tight'' working points have an average efficiency of about 80 and 70\%, respectively, and are used in situations where the background is expected to be larger. \begin{table}[thb] \centering \topcaption{Cut-based photon identification requirements for the tight working point in the EB and EE.} \begin{tabular}{ccc} \hline Variable & Barrel (tight WP) & Endcap (tight WP) \\ \hline $H/E$ & $<$0.021 & $<$0.032 \\ $\sigma_{i \eta i \eta}$ & $<$0.0099 & $<$0.027 \\ $I_{\text{ch}}$ & $<$0.65\GeV & $<$0.52\GeV \\ $I_\text{n}$ & $<$$0.32\GeV + 0.015 \ET +$ & $<$$2.72\GeV + 0.012 \ET +$ \\ & $2.26 \times 10^{-5} \ET^2/\GeVns$ & $2.3 \times 10^{-5} \ET^2/\GeVns$\\ $I_\gamma$ & $<$$2.04\GeV + 0.0040 \ET$ & $<$$3.03\GeV + 0.0037 \ET$ \\ \hline \end{tabular} \label{tab:barr_endcap_cutpho} \end{table} \subsubsection{Electron rejection}\label{sec:CSEV} In addition to the cut-based photon identification criteria, a prescription is required to reject electrons misidentified as photons. The most commonly used method is the conversion-safe electron veto~\cite{Khachatryan:2015iwa}.
This veto requires the absence of charged particle tracks, with a hit in the innermost layer of the pixel detector not matched to a reconstructed conversion vertex, pointing to the photon cluster in the ECAL. A more efficient rejection of electrons can be achieved by rejecting any photon for which a pixel detector seed, consisting of at least two hits in the pixel detector, points to the ECAL within a window defined around the photon SC position. The conversion-safe electron veto is appropriate in cases where electrons do not constitute a major background, whereas the pixel detector seed veto is used when electrons misidentified as photons are expected to be an important background. \subsubsection{Photon identification using multivariate techniques}\label{sec:phoIDMVA} A more sophisticated photon identification strategy is based on a multivariate technique, employing a BDT implemented in the TMVA framework~\cite{Hocker:2007ht}. Here, a single discriminant variable is built from multiple input variables, and provides excellent separation between signal (prompt photons) and background from misidentified jets. The signal is defined as reconstructed photons from a $\gamma$ + jets simulated sample that are matched at generator level with prompt photons within a cone of size $\Delta R = 0.1$, whereas the background is defined by reconstructed photons in the same sample that are not matched to a generated photon within a cone of size $\Delta R = 0.1$. Photon candidates with $\ET > 15\GeV$, $\abs{\eta} < 2.5$, and satisfying very loose preselection requirements are used for the training of the BDT. The preselection consists of loose requirements on $H/E$, $\sigma_{i \eta i \eta}$, $R_9$, the PF photon isolation, and the track isolation. The variables used as input to the BDT include the shower-shape and isolation variables already presented above.
Three more quantities are used to improve the discrimination between signal and background by accounting for the dependence of the shower-shape and isolation variables on the event pileup and on the $\eta$ and $\ET$ of the candidate photon: the median energy density per unit area, $\rho$; the $\eta$ of the SC; and the uncorrected energy of the SC corresponding to the photon candidate. A comparison of the performance between cut-based identification and BDT identification for photons is shown in Fig.~\ref{fig:MVAvsCutBasedPhotons}. The background efficiency as a function of the signal efficiency is reported for the multivariate identification (curves) and for the cut-based selection (discrete points). \begin{figure}[hbtp] \centering \includegraphics[width=0.5\textwidth]{Figure_024.pdf} \caption{Performance of the photon BDT and cut-based identification algorithms in 2017. Cut-based identification is shown for three different working points: loose, medium, and tight.} \label{fig:MVAvsCutBasedPhotons} \end{figure} \subsection{Electron identification}\label{sec:eleID} A detailed description of electron identification strategies is given below. \subsubsection{Cut-based electron identification}\label{sec:eleIDCB} The sequential electron identification selection includes requirements on seven identification variables, with thresholds as listed in Table~\ref{tab:cbarr_endcap_cutele} for the representative tight working point. The selection requirements were tuned using an MC sample with 2017 data-taking conditions, and this selection is suitable for use in all three years of Run 2. A combined PF isolation is used, merging the information from $I_{\text{ch}}$, $I_{\gamma}$, and $I_\text{n}$. It is defined as $I_{\text{combined}} = I_{\text{ch}} + \max( 0, I_\text{n}+I_{\gamma}-I_{\text{PU}} )$, where $I_{\text{PU}}$ is the correction related to the event pileup.
The isolation-related variables are very sensitive to the extra energy from pileup interactions, which affects the isolation efficiency when there are many interactions per bunch crossing. The contribution from pileup in the isolation cone, which must be subtracted, is computed as $ I_{\text{PU}} = \rho A_{\text{eff}}$, where $\rho$ and $A_{\text{eff}}$ are defined above. The variable $I_{\text{combined}}$ is divided by the electron $\ET$, and is called the relative combined PF isolation. For the cut-based electron identification, four working points are generally used in CMS. The ``veto'' working point, which corresponds to an average signal efficiency of about 95\%, is used in analyses to reject events with more reconstructed electrons than expected from the signal topology. The ``loose'' working point corresponds to a signal efficiency of around 90\%, and is used in analyses where backgrounds to electrons are low. The ``medium'' working point can be used for generic measurements involving $\PW$ or $\PZ$ bosons, and corresponds to an average signal efficiency of around 80\%. The ``tight'' working point is around 70\% efficient for genuine electrons, and is used when backgrounds are larger. Requirements on the number of missing hits, together with the pixel conversion veto described in Section~\ref{sec:CSEV}, are also applied.
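As an illustration of the quantities above, the relative combined PF isolation can be sketched in a few lines of code (a minimal sketch with illustrative numbers; the function and variable names are not part of the CMS software):

```python
def relative_combined_pf_isolation(i_ch, i_n, i_gamma, rho, a_eff, et):
    """Relative combined PF isolation:
    I_combined = I_ch + max(0, I_n + I_gamma - rho * A_eff),
    divided by the electron ET. All inputs are in GeV (rho * A_eff in GeV)."""
    i_pu = rho * a_eff  # pileup contribution subtracted from the neutral sums
    i_combined = i_ch + max(0.0, i_n + i_gamma - i_pu)
    return i_combined / et

# Illustrative values (GeV): i_ch=0.5, i_n=1.2, i_gamma=0.8, rho=20, a_eff=0.05, et=40
# -> I_PU = 1.0, I_combined = 0.5 + max(0, 2.0 - 1.0) = 1.5, relative isolation = 0.0375
```

Note that the $\max(0,\dots)$ clipping prevents an over-subtraction of the pileup estimate from driving the neutral component negative.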
\begin{table}[h] \centering \topcaption{Cut-based electron identification requirements for the tight working point in the barrel and in the endcaps.} \begin{tabular}{ccc} \hline Variable & Barrel (tight WP) & Endcaps (tight WP) \\ \hline $\sigma_{i \eta i \eta}$ & $<$0.010 & $<$0.035 \\ $\abs{\Delta \eta_{\text{in}}^{\text{seed}}}$ & $<$0.0025 & $<$0.005 \\ $\abs{\Delta\phi_{\text{in}}}$ & $<$0.022\unit{rad} & $<$0.024\unit{rad} \\ $H/E$ & $<$$0.026 + 1.15\GeV/E_{\text{SC}}$ & $<$$0.019 + 2.06\GeV/E_{\text{SC}}$ \\ & $ + 0.032 \rho /E_{\text{SC}}$ & $ + 0.183 \rho /E_{\text{SC}}$ \\ $I_{\text{combined}}/\ET$ & $<$$0.029 + 0.51\GeV/\ET$ & $<$$0.0445 + 0.963\GeV/\ET$ \\ $\abs{ 1/E - 1/p }$ & $<$$0.16\GeV^{-1}$ & $<$$0.0197\GeV^{-1}$ \\ Number of missing hits & ${\le}1$ & ${\le}1$\\ Pass conversion veto & Yes & Yes \\ \hline \end{tabular} \label{tab:cbarr_endcap_cutele} \end{table} \subsubsection{Electron identification using multivariate techniques}\label{sec:eleIDMVA} To further improve the performance of the electron identification, especially at $\ET$ less than 40\GeV, several variables are combined using a BDT. The set of observables is extended relative to the simpler sequential selection: the track-cluster matching observables are computed both at the ECAL surface and at the vertex. More cluster-shape and track-quality variables are also used. The fractional difference between the track momentum at the innermost tracker layer and at the outermost tracker layer, $f_{\text{brem}}$, is also included. Similar sets of variables are used for electrons in the barrel and in the endcaps. Electron candidates in DY + jets simulated events with $\ET$ greater than 5\GeV and $\abs{\eta} < 2.5$ are used to train several BDTs in bins of $\ET$ and $\eta$ with the XGBoost algorithm~\cite{Chen:2016btl}. The splits in pseudorapidity are at the barrel-endcap transition and at $\abs{\eta} = 0.8$, because the tracker material budget steeply increases beyond this point. 
The split in $\ET$ is at 10\GeV, allowing for a dedicated training in a region where the background composition and rate are different. Signal electrons are defined as reconstructed electrons that match generated prompt electrons within a cone of size $\Delta R = 0.1$. Background electrons are defined as all reconstructed electrons that either match generated nonprompt electrons (usually electrons from hard jets) or that do not match any generated electron. Reconstructed electrons that match generated electrons from leptonic $\tau$ decays are considered neither for the signal nor for the background. For maximum flexibility at analysis level, the electron identification BDTs are trained both with and without the isolation variables. The performance of the BDT-based identification is reported in Fig.~\ref{fig:MVAiso}, and compared with a sequential selection in which the BDT and isolation selection requirements are applied one after the other. The BDT trained with isolation variables shows a clear advantage over the sequential approach, especially for high selection efficiencies. Whereas the BDT is optimized on background from DY + jets simulated events, the cut-based identification is optimized on background from $\ttbar$ events, which contains a much higher fraction of nonprompt electrons. This optimizes the performance of the identification algorithms for the analyses in which they are applied, but prevents a like-for-like comparison between the two algorithms. Although the BDT-based identifications have better background rejection for a given signal efficiency than the cut-based identifications, a significant fraction of physics analyses still prefer to use cut-based identifications because it is easy to flip or undo a specific cut to perform a sideband study.
This is particularly true for searches focussing on high-$\ET$ ranges, where background is so low that an improved electron identification does not bring any sizeable improvement, and these analyses profit from the simplicity and flexibility of the cut-based identifications. \begin{figure*}[hbtp] \centering \includegraphics[width=0.5\textwidth]{Figure_025.pdf} \caption{Performance of the electron BDT-based identification algorithm with (red) and without (green) the isolation variables, compared to an optimized sequential selection using the BDT without the isolations followed by a selection requirement on the combined isolation (blue). Electrons are selected for the BDT training with an $\ET$ of at least 20\GeV.} \label{fig:MVAiso} \end{figure*} \subsubsection{High-energy electron identification}\label{sec:HEEP} The CMS experiment employs a dedicated cut-based identification method for the selection of high-$\ET$ electrons, known as high-energy electron pairs (HEEP). Variables similar to those used for the cut-based general electron identification are used to select high-$\ET$ electrons, starting at 35\GeV and extending up to about 2\TeV or more. This selection requires that the lateral spread of energy deposits in the ECAL is consistent with that of a single electron and that the track is matched to the ECAL deposits and is consistent with a particle originating from the nominal interaction point. The associated energy in the HCAL around the electron direction must be less than 5\% of the reconstructed energy of the electron, once the noise and pileup contributions are included. The main difference between the high-$\ET$ electron identification and the cut-based electron identification is the use of subdetector-based isolation instead of PF isolation. Although the two algorithms are expected to provide similar performance, the detector-based isolation behavior is better suited for high-$\ET$ electrons. 
Very high-energy electrons can lead to saturation of the ECAL electronics. In the presence of a saturated crystal, the shower-shape variables become biased, and the requirements on lateral shower-shape variables, such as the ratio of the energy collected in $n{\times}m$ arrays of crystals $E_{2 {\times}5}/E_{5{\times}5}$ and $\sigma_{i \eta i \eta}$, are disabled if a saturated crystal occurs within a $5{\times}5$ crystal matrix around the central crystal. The selection requires that the electron be isolated in a cone of radius $\Delta R = 0.3$ in both the calorimeters ($I_{\text{ECAL}}$ and $I_{\text{HCAL}}$) and the tracker ($I_{\text{tracker}}$). The HCAL subdetector had two readout depths available in the endcap regions in Run 2. Only the first longitudinal depth is used for the HCAL isolation, because the second one suffered from higher detector noise. Only well-measured tracks that are consistent with originating from the same vertex as the electron are included in the isolation sum. Moreover, in the barrel, the $E_{2 {\times}5}/E_{5{\times}5}$ and $E_{1{\times}5}/E_{5{\times}5}$ variables are used, since they are very effective at high $\ET$. As mentioned in Sec.~\ref{sec:electron-track-reconstruction}, as the \pt of the electrons increases, they are less likely to be seeded by a tracker-driven approach. Such electrons are rejected in the high-$\ET$ electron identification algorithm, which is mostly meant for high-energy electrons. Requirements are applied on the minimum number of hits the electron leaves in the inner tracker and on the impact parameter relative to the center of the luminous region in the transverse plane ($d_{\mathrm{xy}}$). This selection, which is about 90\% efficient for electrons with $\ET>50\GeV$, is used in many searches for exotic particles published by CMS in Run 2~\cite{CMS-PAS-EXO-19-019}. A summary of the requirements applied in the HEEP identification algorithm is shown in Table~\ref{tab:par1}.
These selection criteria are valid for the entirety of Run 2. \begin{table}[h] \centering \topcaption{Identification requirements for high-$\ET$ electrons in the barrel and in the endcaps.} \cmsTable{ \begin{tabular}{ccc} \hline Variable & Barrel & Endcap \\ \hline $\ET$ & $>$35\GeV & $>$35\GeV \\ $\eta$ range & $\abs{\eta_{\text{SC}}}<1.44$ & $1.57 < \abs{\eta_{\text{SC}}} < 2.50$ \\ $\abs{\Delta \eta_{\text{in}}^{\text{seed}}}$ & $<$0.004 & $<$0.006 \\ $\abs{\Delta\phi_{\text{in}}}$ & $<$0.06\unit{rad} & $<$0.06\unit{rad} \\ $H/E$ & $<$$1\GeV/E_{\text{SC}} + 0.05$ & $<$$5\GeV/E_{\text{SC}} + 0.05$ \\ $\sigma_{i \eta i \eta}$ & \NA & $<$0.03 \\ $E_{2 {\times}5}/E_{5{\times}5}$ & $>$0.94 OR $E_{1{\times}5}/E_{5{\times}5} > 0.83$ & \NA \\ $I_{\text{ECAL}} + I_{\text{HCAL}}$ & $<$$2.0\GeV + 0.03 \ET + 0.28 \rho$ & $<$$2.5\GeV + 0.28 \rho$ for $\ET<50\GeV$ \\ & & else $<$$2.5\GeV + 0.03 (\ET-50\GeV) + 0.28 \rho$ \\ $I_{\text{tracker}}$ & $<$5\GeV & $<$5\GeV \\ Number of missing hits & ${\le}1$ & ${\le}1$ \\ $d_{\mathrm{xy}}$ & $<$0.02\cm & $<$0.05\cm \\ \hline \end{tabular} \label{tab:par1} } \end{table} \clearpage \subsection{Selection efficiency and scale factors}\label{sec:sf} The electron and photon identification efficiencies, as well as the electron reconstruction efficiency, are measured in data using a tag-and-probe technique that utilizes $\PZ\to \Pe\Pe$ events~\cite{Khachatryan:2015hwa}, described in detail in Sec.~\ref{sec:tnp}. The identification efficiency is measured for transverse energies above 10\GeV for electrons and above 20\GeV for photons. For the photon identification efficiency, no requirement is applied on the track and charge of the probe; in this way an electron from a $\PZ$ boson decay is identified as a photon. The performance achieved for the two reference electron selections is shown in Fig.~\ref{fig:electronIDeff_vsEta}.
The electron identification efficiency in data (upper panels) and data-to-simulation efficiency ratios (lower panels) for the cut-based identification and the veto working point (left), and the BDT-based identification loosest working point (right), are shown as functions of the electron $\ET$. The efficiency is shown in four $\eta$ ranges. The vertical bars on the data-to-simulation ratios represent the combined statistical and systematic uncertainties. For the three years of data-taking, efficiencies and scale factors are measured with total uncertainties of the same order of magnitude. For 2017 data, where the latest calibrations were used to reconstruct the data, the measured data-to-simulation efficiency ratios are closer to unity by 3--5\% over the entire energy range compared to 2016 and 2018 data. \begin{figure}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_026-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_026-b.pdf} \caption{Electron identification efficiency measured in data (upper panels) and data-to-simulation efficiency ratios (lower panels), as a function of the electron $\ET$, for the cut-based identification veto working point (left) and the BDT-based (without isolation) loosest working point (right). The vertical bars on the markers represent combined statistical and systematic uncertainties.} \label{fig:electronIDeff_vsEta} \end{figure} The photon identification efficiency in data and data-to-simulation efficiency ratios are shown in Fig.~\ref{fig:photonIDeff_vsEta}, for the loose cut-based (left) and loosest BDT-based (right) identification working points, as functions of the photon $\ET$. The efficiency is shown in four $\eta$ ranges. The vertical bars on the data-to-simulation ratio represent the combined statistical and systematic uncertainties.
The electron and photon identification efficiency in data is very well described by the MC simulations, as reflected by the data-to-simulation ratios being within 5\% of unity for all the cut-based and BDT-based identification working points. Some effects are very difficult to simulate, for example the variation of the ECAL noise with time, which affects the $\sigma_{i \eta i \eta}$ variable for low- and medium-$\ET$ electrons and photons in the endcaps. Such effects are included in the correction factors. \begin{figure}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_027-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_027-b.pdf} \caption{Photon identification efficiency in data (upper panels) and data-to-simulation efficiency ratios (lower panels) for the loose cut-based (left) and loosest BDT-based (right) identification working points, as functions of the photon $\ET$. The vertical bars on the markers represent combined statistical and systematic uncertainties. } \label{fig:photonIDeff_vsEta} \end{figure} \section{Performance of recalibrated data sets} \label{sec:EOYvsUL} \label{sec:perfUL2017} During the LHC long shutdown 2 that began in 2019, the CMS experiment initiated a program of recalibrating the full Run 2 data set. It involved the realignment and recalibration of all the subdetector components and the related physics objects. The simulation was also improved with a more accurate description of the data in terms of dynamic inefficiencies, radiation damage, and description of the detector noise. Electron and photon reconstruction and identification performance strongly depend upon improvements in measurements with the ECAL subdetector. An updated method to monitor and correct the ECAL crystal transparency loss due to radiation damage has been introduced. In parallel, a more granular calibration of the crystals has been performed, allowing a precise calibration even of the highest-pseudorapidity region of the calorimeter.
All these actions have led to better resolution and better agreement between data and simulation. Figure~\ref{fig:resolution} shows the improvement in resolution brought by the Legacy calibration of the ECAL for low-bremsstrahlung electrons ($R_9 > 0.94$) as a function of pseudorapidity. This resolution is estimated after all the corrections described in Section~\ref{sec:sec6} are applied, also including the scale and spreading corrections of Section~\ref{sec:SS}. The relative improvement in resolution increases with $\eta$, exceeding $50\%$ for $\abs{\eta} > 2.0$. \begin{figure}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_028.pdf} \caption{Dielectron mass resolution from $\PZ\to \Pe\Pe$ events in bins of $\eta$ of the electron for the barrel and the endcaps for the Legacy calibration (green filled markers) and the EOY calibration (pink empty markers) of the 2017 data set. The grey vertical band represents the region $1.44 < \abs{\eta} < 1.57$, which is not included since it corresponds to the transition between the barrel and endcap electromagnetic calorimeters. } \label{fig:resolution} \end{figure} The improved detector calibration and simulation have led to an improved agreement between data and simulation. Figure~\ref{fig:ULdataMC} shows the improvement in the PF-based relative neutral hadron isolation in the barrel. The distributions are obtained using $\PZ\to \Pe\Pe$ electrons, as described in Section~\ref{sec:AddVar}. \begin{figure}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_029-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_029-b.pdf} \\ \caption{The PF-based neutral hadron isolation in the barrel. The left plot is from the EOY calibration and the right plot is from the Legacy calibration of the 2017 data set. The simulation is shown with the filled histograms and data are represented by the markers. The vertical bars on the markers represent the combined statistical uncertainty in data.
The hatched regions show the statistical uncertainty in the simulation. The lower panels display the ratio of the data to the simulation, with the bands representing the uncertainties in the simulation, and show an improvement for the Legacy calibration compared to the EOY calibration.} \label{fig:ULdataMC} \end{figure} Figure~\ref{fig:eleRECO} shows the improvement in the electron reconstruction data-to-simulation efficiency correction factors. The magnitude of correction factors in the bottom panel is below 2\% for the Legacy calibration compared with 3\% for the EOY calibration. The number of misreconstructed electron candidates per event is reported in Fig.~\ref{fig:fr2017} and shows a slight decrease in the Legacy data set, due to the better conditions and calibrations. \begin{figure}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_030.pdf} \caption{Electron reconstruction efficiency (upper panel) and data-to-simulation correction factors (lower panel) comparing the Legacy and the EOY calibrations of the 2017 data set for electrons from $\PZ\to \Pe\Pe$ decays with $\ET$ between 45 and 75\GeV. The vertical bars on the markers represent combined statistical and systematic uncertainties.} \label{fig:eleRECO} \end{figure} \begin{figure}[h!] \centering \includegraphics[ width=\cmsFigWidth]{Figure_031-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_031-b.pdf} \caption{Number of misreconstructed electron candidates per event as a function of pileup in DY + jets MC events simulated with the Legacy and the EOY detector conditions of 2017. Results are shown for electrons with \pt within the 5--20\GeV range (left) and $\pt>20\GeV$ (right) before any selection criteria. Electrons in the region $1.44 <\abs{\eta}< 1.57$ are discarded. 
The vertical bars on the markers represent combined statistical uncertainties of the MC sample.} \label{fig:fr2017} \end{figure} Figure~\ref{fig:eleID} shows the improvement in the electron identification data-to-simulation efficiency corrections. The correction factors in the bottom panel are above 0.95 for the Legacy calibration compared with 0.88--0.91 for the EOY calibration. \begin{figure}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_032.pdf} \caption{Tight BDT-based electron identification efficiency (upper panel) and data-to-simulation correction factors (lower panel) comparing the Legacy to the EOY calibration of the 2017 data set for electrons from $\PZ\to \Pe\Pe$ decays with $\eta$ between 2.00 and 2.50. The vertical bars on the markers represent combined statistical and systematic uncertainties.} \label{fig:eleID} \end{figure} \subsection{Comparison among the three data-taking years of Run 2} \label{sec:compare3years} Figure~\ref{fig:resolution3years} shows the $\PZ\to \Pe\Pe$ mass resolution as a function of $\eta$ of the low bremsstrahlung electrons for the three years of Run 2. As expected, the energy resolution measured in 2017 is significantly better than for the other two years because of the Legacy calibration, and the impact of improved calibration is the dominant effect when comparing the 3 years in this manner. Overall, the energy resolution throughout the entire Run 2 period ranges from 1 to 3.4\%, depending on the $\eta$ region considered. \begin{figure}[h] \centering \includegraphics[ width=\cmsFigWidth]{Figure_033.pdf} \caption{The relative $\PZ$ boson mass resolution from $\PZ\to \Pe\Pe$ decays in bins of the $\eta$ of the electron for the barrel and endcaps. Electrons from $\PZ\to \Pe\Pe$ decays are used and the resolution is shown for low-bremsstrahlung electrons. The plot compares the resolution achieved for the data collected at 13\TeV during Run 2 in 2016, 2017, and 2018. 
The 2016 and 2018 data are reconstructed with an initial calibration whereas the 2017 data are reconstructed with refined calibrations. The resolution is estimated after all the corrections are applied, as described in Section~\ref{sec:rescorr_Hgg}, including the scale and spreading corrections of Section~\ref{sec:SS}. The grey vertical band represents the region $1.44 < \abs{\eta} < 1.57$ and it is not included since it corresponds to the transition between barrel and endcap electromagnetic calorimeters. } \label{fig:resolution3years} \end{figure} In general, the agreement between data and simulation depends on the noise modeling in simulation and energy calibration for that object. Better calibration of the 2017 data, together with a more appropriate simulation of the noise levels in MC, have led to a better description of isolation variables. The identification efficiencies for 2016, 2017, and 2018 are shown in Fig.~\ref{fig:compare3years_eleIDeff} for cut-based loose electron identification requirements. The efficiencies are stable within 5\% for the full range of $\ET$ of the electrons across the full ECAL. The correction factors are also stable within 3\% over the full three years. \begin{figure}[h] \centering \includegraphics[ width=\cmsFigWidth]{Figure_034.pdf} \caption{Cut-based loose electron identification efficiency in data (upper panel) and data-to-simulation efficiency ratios (lower panel) as a function of the electron \pt shown for 2016, 2017 ``Legacy'', and 2018 data-taking periods. The vertical bars on the markers represent combined statistical and systematic uncertainties. } \label{fig:compare3years_eleIDeff} \end{figure} \section{Timing performance} \label{sec:sec8timing} In addition to the energy measurement, the ECAL provides a time of arrival for electromagnetic energy deposits that can separate prompt electrons and photons from backgrounds with a broader time of arrival distribution. 
The fast decay time of the PbWO$_{4}$ ECAL crystals, comparable to the LHC bunch crossing interval (80\% of the light is emitted in 25\unit{ns}), together with the use of electronic pulse shaping with a high sampling rate, provides excellent timing resolution~\cite{Chatrchyan:2009aj}. The better the precision and synchronization of the timing measurement, the larger the rejection of the background. Background sources with a broad time distribution include cosmic rays, beam halo muons, electronic noise, spikes (hadrons that directly ionize the EB photodetectors), and out-of-time $\Pp\Pp$ interactions. A precise timing measurement also enables the identification of particles predicted by different models beyond the standard model. Possible examples include slow heavy charged R-hadrons~\cite{CMS-PAS-EXO-16-036}, which travel through the calorimeter and interact before decaying, and photons from the decay of long-lived new particles that reach the ECAL out of time with respect to particles traveling at the speed of light from the interaction point. For example, to identify neutralinos decaying into photons with decay lengths comparable to the ECAL radial size, a time measurement resolution better than 1\unit{ns} is necessary. The CMS Collaboration published results on searches for displaced photons from decays of long-lived neutralinos during Run 1~\cite{Chatrchyan:2012jwg} and Run 2~\cite{Sirunyan:2019wau}. These are the most stringent limits to date on delayed photons at the LHC. The ECAL timing performance has been measured prior to data-taking using electrons from test beams, cosmic muons, and beam splash events~\cite{Chatrchyan:2009aj}. The resolution for large energy deposits ($E>50\GeV$) was estimated to be better than 30\unit{ps}, and the linearity of the time response was also verified.
During collisions in the LHC, there are many additional effects that could worsen the performance, such as residual timing jitter in the electronics or the clock distribution to the individual readout units, run-by-run variations, intercalibration effects, energy-dependent systematic uncertainties, and crystal damage due to radiation. A detailed description of the method to measure the ECAL timing with the crystals is given in the next section. \subsection{Time resolution measurement using \texorpdfstring{$\PZ \to \Pe\Pe$}{Zee} events} The method used to extract the electron and photon time resolutions with the ECAL detector is based on comparisons of the time of arrival of the two electrons arising from $\PZ$ decays. The time of arrival of each electron is the measured time of the most energetic hit of the energy deposit in the ECAL. This time is corrected for the electron time-of-flight, which is determined from the primary vertex position, obtained from the electron track. This correction is needed because the timestamp recorded by the ECAL crystal assumes a time-of-flight from the origin of the detector, such that the distribution for the most energetic hit time is centered around zero for all crystals. The two electron clusters are required to be in the EB and to pass loose identification criteria on the cluster shape. Their resulting invariant mass has to be consistent with the $\PZ$ boson mass ($60 < m_{\Pe\Pe}< 120\GeV$). The energy of each of the two hits must fall within the range $10 <E< 120\GeV$. The lower threshold is motivated by the minimal energy constraint applied to reconstruct good quality electrons, whereas the upper threshold is applied to include only ECAL signals below the lowest gain switch threshold described in Section~\ref{sec:SS}. 
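The time-of-flight correction described above can be sketched as follows (a minimal sketch, assuming one sign convention and simple point-to-point distances; the function and variable names are illustrative, not the CMS implementation):

```python
import math

C_LIGHT = 29.9792458  # speed of light in cm/ns

def tof_corrected_time(t_raw, crystal_xyz, vertex_xyz):
    """Correct a raw ECAL timestamp for the electron time of flight.
    The recorded time assumes a flight path from the detector origin, so the
    correction is the path-length difference between flight from the
    reconstructed primary vertex and flight from the origin, divided by c."""
    d_vertex = math.dist(crystal_xyz, vertex_xyz)
    d_origin = math.dist(crystal_xyz, (0.0, 0.0, 0.0))
    return t_raw - (d_vertex - d_origin) / C_LIGHT

# For a vertex at the origin the correction vanishes by construction.
```

With this convention, the corrected hit-time distribution is centered around zero for all crystals, as required by the method.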
The resulting resolution for the full Run 2 inclusive data set is shown in Fig.~\ref{fig:TimeResZee} as a function of the effective energy of the dielectron system, which depends on the individual energies of the two electrons measured in the two seed crystals as $E_{\text{eff}} = E_1 E_2 /\sqrt{\smash[b]{E_1^2 +E_2^2}}$. For 2017 data, the Legacy calibration is used. The resolution is extracted as the $\sigma$ parameter of a Gaussian fit to the core of the distribution of the time difference between the two electrons. The trend of the ECAL timing resolution as a function of $E_{\text{eff}}$ is modeled with the function $\sigma(t_1-t_2) = N/E_{\text{eff}}\oplus \sqrt{2}C$, where $N$ represents the noise term and $C$ is the constant term, which dominates at energies above 30--40\GeV. The noise term $N$ is very similar to that obtained prior to collisions~\cite{Chatrchyan:2009aj} and the constant term $C$ is about 200\unit{ps}. \begin{figure}[hbtp] \centering \includegraphics[ width=\cmsFigWidth]{Figure_035.pdf} \caption{ECAL timing resolution as a function of the effective energy of the electron pairs as measured in 2016, 2017 (Legacy), and 2018 data. Vertical bars on the markers represent the uncertainties coming from the fit procedure used to determine the $\sigma(t_1-t_2)$ parameters. } \label{fig:TimeResZee} \end{figure} \section{Electron and photon reconstruction performance in PbPb collisions} \label{sec:sec10} The quark-gluon plasma (QGP), a deconfined state of matter that is predicted~\cite{Bjorken} by quantum chromodynamics to exist at high temperatures and energy densities, is expected to be produced by colliding heavy nuclei at the LHC. Parton scattering reactions with large momentum transfer, which occur very early compared to QGP formation, provide tomographic probes of the plasma~\cite{dEnterria2010}. The outgoing partons interact strongly with the QGP and lose energy.
This phenomenon has been observed at BNL RHIC~\cite{Adcox2005,Adams2005,Back2005,Arsene2005} and at CERN LHC~\cite{Chatrchyan2012, Aad2015,Aamodt2011,Aad2010,Chatrchyan2011} using measurements of hadrons with high $\pt$ and of jets, both created by the fragmentation of the high-momentum partons. Electroweak bosons such as photons and $\PZ$ bosons that decay into leptons do not interact strongly with the QGP. The electroweak boson $\pt$ reflects, on average, the initial energy of the associated parton that fragments into a jet, before any energy loss has occurred. Hence, the measurements of jets produced in the same hard scattering as a photon or a $\PZ$ boson have, in contrast to dijet measurements, a controlled configuration of the initial hard scattering. The degree of overlap of the two colliding heavy nuclei (e.g., Pb) is defined using signals in the HF calorimeters, and is known as ``centrality''. Centrality is determined by the fraction of the total energy deposited in the HF, with 0\% centrality corresponding to the most central collisions. The typical particle multiplicity in central PbPb collisions is $\mathcal{O}(10^4)$, giving rise to a dense underlying event (UE). For this reason, the reconstruction, identification, and energy correction algorithms must be revised and optimized to perform in the extreme conditions of central PbPb collisions. The PbPb collisions were recorded in 2018 at a nucleon-nucleon center-of-mass energy of $\sqrtsNN = 5.02$\TeV, corresponding to an integrated luminosity of 1.7\nbinv. \subsection{Electron and photon reconstruction}\label{sec:HIN_pho_ele_reco} Several changes have been made to the photon and electron reconstruction with respect to the algorithms used in $\Pp\Pp$ collisions. The out-of-time pileup in PbPb collisions is negligible, hence out-of-time hits and photons were excluded from the reconstruction. 
The PF ECAL clustering algorithm described in Section~\ref{sec:ecal-clustering} uses a dynamic window in the $\phi$ direction that depends on the seed $\ET$, to recover the shower spreads in $\phi$, which are considerable at low $\ET$. When applied to PbPb events, which have a much denser underlying event, this algorithm gives worse energy resolution and higher misidentification rates with respect to $\Pp\Pp$ collisions. To improve the performance, an upper bound of 0.2 was imposed on the extent of the SC in the $\phi$ direction. Following these changes, the misidentification rate decreased from 2.7\% to 0.5\% for photons with $40 < \ET< 60\GeV$, and the energy resolution, estimated from the effective width of the distribution of the ratio of the uncorrected SC energy ($E_{\text{SC,uncorr.}}$) to the true energy ($E_{\text{gen}}$), improved: as shown in Fig.~\ref{fig:HIN_E}, the modified fixed-width PF algorithm results in an energy resolution between 8\% and 3\% at $\ET=20$ and $100\GeV$, respectively. In the simulation, the effect of the PbPb UE is modeled by embedding the output of \PYTHIA8 with the CP5 set of parameters~\cite{Sirunyan2020} in events generated using \HYDJET~\cite{Lokhtin2006}, which is tuned to reproduce the charged-particle multiplicity and $\pt$ spectrum in PbPb data. This embedding is applied with an event-by-event weight factor, based on the average number of nucleon-nucleon collisions calculated with a Glauber model~\cite{Loizides:2017ack} for each PbPb centrality interval. \begin{figure}[h!] \centering \includegraphics[ width=\cmsFigWidth]{Figure_036.pdf} \caption{Photon energy resolution comparison in simulation for different PF clustering algorithms in PbPb collisions as a function of the generated photon $\ET$.
The default algorithm (in blue) has a dynamic $\eta$--$\phi$ window, and is compared to the modified PF clustering algorithm with an upper bound on the SC extent in the $\phi$ direction (in red). Barrel photons ($\abs{\eta} < 1.44$) are used in simulated PbPb events within the 0--30\% centrality range.} \label{fig:HIN_E} \end{figure} In PbPb collisions, the large particle multiplicities involved often result in excessively long reconstruction times. As a result, the following modifications were made to the PbPb reconstruction algorithm to keep the reconstruction timing at a reasonable level: (i) the tracker-driven electron seeds were removed, (ii) the tracking regions were changed to be centered in a narrow region around the primary vertex, and (iii) the SC energy was required to be at least 15\GeV. These changes resulted in an improvement of the overall reconstruction timing by a factor of more than $5$. The reconstruction performance for electrons with $\ET > 20\GeV$, the kinematic region of interest to the majority of analyses, was not affected. \subsection{Electron identification and selection efficiency}\label{sec:HIN_eleID_eff}\label{sec:HIN_eleID_enCorr} As described in Section~\ref{sec:eleID}, several strategies are used in CMS to identify prompt isolated electrons and to separate them from background sources, such as photon conversions, jets misidentified as electrons, or electrons from semileptonic decays of \cPqc or \cPqb quarks. A cut-based technique was chosen for the electron identification in PbPb collisions, using shower-shape and track-related variables to separate the signal from the background. The selection requirements are optimized using the TMVA framework, with the working point target efficiencies remaining the same as in $\Pp\Pp$ collisions. 
The input variables are $\sigma_{i \eta i \eta}$, the $H/E$ ratio computed from a single tower, $1/E - 1/p$, $\abs{\Delta \eta^{\text{seed}}_{\text{in}}}$ between the ECAL seed crystal and the associated track, and $\abs{\Delta \phi_{\text{in}}}$ between the ECAL SC and the associated track. An optimization is performed in two centrality bins (0--30\% (central) and 30--100\% (peripheral)), since most of the included variables are centrality dependent. Variables that do not depend on centrality, i.e., the number of expected missing inner hits and the three-dimensional impact parameter, were optimized in a second step. The efficiency of the electron reconstruction and identification selection requirements is estimated in data and simulation using the tag-and-probe method, as described in Section~\ref{sec:tnp}. Events are required to pass standard calorimeter noise filters, to pass the single-electron HLT with an $\ET$ threshold of 20\GeV, and to have a primary vertex position $\abs{v_z} < 15\unit{cm}$. The event must contain at least two reconstructed electrons. Each electron has to be within the acceptance region ($20< \ET <200\GeV$, $\abs{\eta} < 2.1$), and should not be in the barrel-endcap transition region or in the problematic HCAL region for 2018 PbPb data, because in 2018, a 40$^{\circ}$ section of one end of the hadronic endcap calorimeter lost power during the data-taking period. All PbPb data are affected by this power loss. Tag-and-probe electrons are defined as described in Section~\ref{sec:tnp}. The tag-and-probe pairs are required to be oppositely charged and to have an invariant mass in the range 60--120\GeV. For the loose identification working point, the data-to-simulation correction factor is smaller than 3\%, both in the barrel and the endcaps. Several sources of systematic uncertainty are considered. The main uncertainty is related to the model used in the mass fit, and is estimated by comparing alternative distributions for signal and background.
The second most important uncertainty is related to the tag requirement, varied from the tight to the medium working points. The total systematic uncertainty in the loose identification working point data-to-simulation correction factor is 2.0--4.5 (2.0--7.5)\% in the barrel (endcaps). \subsection{Electron energy corrections}\label{sec:HIN_ele_enCorr} In heavy ion collisions, the UE activity can vary greatly between the most central and peripheral collisions. The additional energy deposited by particles from the UE in the ECAL can be clustered together with the energy deposited by genuine electrons, and thus affect the energy scale of reconstructed electrons in a centrality-dependent manner. The electron energy scale is studied using control samples in data (as described in Section~\ref{sec:EnergyReg}) based on the invariant mass of the $\PZ$ boson, which is known precisely and within 5\% of the MC scale. The electron energy scale and resolution extracted from this study are used to correct the energy scale, and to smear the electron energy resolution in simulated samples, to match those observed in data. The invariant mass of dielectron pairs from $\PZ\to \Pe\Pe$ decays is constructed from the ECAL energy component in three categories corresponding to the detector regions in which the two electrons are reconstructed, namely the EB and EE regions. The events are further subdivided into three centrality regions: 0--10, 10--30, and 30--100\%. The electrons are required to have a minimum $\ET$ of 20\GeV, to pass the loose identification selection, and to be located outside the ECAL transition region or the problematic HCAL region. The invariant mass distributions are fitted with a DSCB distribution, from which the mean values are extracted. This is performed separately for data and simulation, and the ratio of the extracted mean values to the world-average $\PZ$ boson mass~\cite{Zyla2020} is used as a correction factor applied to the mean energy scale.
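Schematically, the scale correction in data and the resolution smearing in simulation described above amount to the following (an illustrative Python sketch with our own function names and conventions, not the CMS implementation):

```python
import random

M_Z = 91.1876  # world-average Z boson mass in GeV

def scale_factor(fitted_peak):
    # ratio of the fitted dielectron mass peak to the nominal Z mass
    return fitted_peak / M_Z

def correct_data_energy(e_ecal, fitted_peak_data):
    # divide out the measured offset so the data peak sits at M_Z
    return e_ecal / scale_factor(fitted_peak_data)

def smear_mc_energy(e_ecal, extra_width, rng=random):
    # broaden the simulated resolution by the extra width observed in data
    return e_ecal * rng.gauss(1.0, extra_width)
```

For example, a dielectron peak fitted at 92.0\GeV in data would imply that measured ECAL energies are scaled down by roughly 1\%.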
The energy resolutions are extracted after first applying the scale factors derived to shift the invariant mass distributions back to the nominal $\PZ$ boson mass. The energy scale and resolution spreading correction factors are applied to the ECAL energy component of the reconstructed electrons, with the final electron momentum obtained by redoing the ECAL-tracker recombination. The first two sources of systematic uncertainty are evaluated by constructing the invariant mass distributions after varying the electron selection criteria. The variations considered are to tighten the selection criteria from the loose to the medium working point, and to increase the electron $\ET$ threshold from 20 to 25\GeV. The difference between the mean values of the nominal and the varied distributions is used as an estimate of the systematic uncertainty. The residual discrepancy between the corrected and the nominal $\PZ$ boson mass is also taken as a systematic uncertainty, and is smaller than 1 (3)\% in the barrel (endcap) region. A comparison of the $\PZ \to \Pe\Pe$ invariant mass peak between data and simulated Drell--Yan events generated with MadGraph5 at NLO~\cite{Alwall:2014hca} is shown in Fig.~\ref{fig:HINZee}. The electron energy in simulation has been corrected using scale corrections and resolution spreading, and electron reconstruction and identification efficiency corrections have also been applied. \begin{figure}[h!] \centering \includegraphics[ width=\cmsFigWidth]{Figure_037-a.pdf} \includegraphics[ width=\cmsFigWidth]{Figure_037-b.pdf} \caption{Invariant mass distributions of the electron pairs from $\PZ \to \Pe\Pe$ decays selected in PbPb collisions, for the barrel-barrel electrons on the left, and with at least one electron in the endcaps on the right. The simulation is shown with the filled histograms and data are represented by the markers. The vertical bars on the markers represent the statistical uncertainties in data.
The hatched regions show the combined statistical and systematic uncertainties in the simulation, with the uncertainty in the shapes of the predicted distributions being the main contributor. The lower panels display the ratio of data to simulation.} \label{fig:HINZee} \end{figure} \section{Summary} The performance of electron and photon reconstruction and identification in CMS during LHC Run 2 was measured using data collected in proton-proton collisions at $\sqrt{s}=13$\TeV in 2016--2018, corresponding to a total integrated luminosity of 136\fbinv. A clustering algorithm developed to cope with the increasing pileup conditions is described, together with the use of the new pixel detector with one more layer and a reduced material budget. These are major changes in the electron and photon reconstruction with respect to Run 1. Multivariate algorithms are used to correct the electron and photon energy measured in the electromagnetic calorimeter (ECAL), as well as to estimate the electron momentum by combining independent measurements in the ECAL and in the tracker. The overall energy scale and resolution are both calibrated using electrons from $\PZ \to \Pe\Pe$ decays. The uncertainty in the electron and photon energy scale is within 0.1\% in the barrel, and 0.3\% in the endcaps in the transverse energy ($\ET$) range from 10 to 50\GeV. The stability of this calibration is estimated to be within 2--3\% for higher energies. The measured energy resolution for electrons produced in $\PZ$ boson decays ranges from 2 to 5\%, depending on electron pseudorapidity and energy loss through bremsstrahlung in the detector material. The energy scale and resolution corrections have been checked for photons using $\PZ\to \mu\mu\gamma$ events and are adequate within the assigned systematic uncertainties. The performance of electron and photon reconstruction and identification algorithms in data is studied with a tag-and-probe method using $\PZ\to \Pe\Pe$ events.
Good agreement is observed between data and simulation for most of the variables relevant to both reconstruction and identification. The reconstruction efficiency in data is better than 95\% in the $\ET$ range from 10 to 500\GeV. The data-to-simulation efficiency ratios, both for electron reconstruction and for the various electron and photon selections, are compatible with unity within 2\% over the full $\ET$ range, down to an $\ET$ as low as 10 (20)\GeV for electrons (photons) when using the 2017 data reconstructed with the dedicated Legacy calibration. Identification efficiencies target three working points with selection efficiencies of 70, 80, and 90\%, respectively. The energy resolution and energy scale measurements, together with the relevant identification efficiencies, remain stable throughout the full Run 2 data-taking period (2016--2018). For the 2017 data-taking period, the dedicated Legacy calibration brings an improvement of up to 50\% in terms of relative energy resolution in the ECAL, as well as an improved agreement between data and simulation, leading to smaller reconstruction and identification efficiency corrections over the entire $\ET$ and $\eta$ ranges. As a result of these calibrations, the electron and photon reconstruction and identification performance in Run 2 is similar to that of Run 1, despite the increased pileup and radiation damage. The evident success of the dedicated Legacy calibration of 2017 data motivates a plan to pursue the same techniques for the 2016 and 2018 data. The ECAL timing resolution is crucial at CMS to suppress noncollision backgrounds, as well as to perform dedicated searches for delayed photons or jets predicted in several models of physics beyond the standard model. A global timing resolution of 200\unit{ps} is measured for electrons from $\PZ$ decays with the full Run 2 collision data.
Excellent performance in electron and photon reconstruction and identification has also been achieved in the case of lead-lead collisions at $\sqrtsNN = 5.02$\TeV in 2018, corresponding to a total integrated luminosity of 1.7\nbinv. Reconstruction, identification, and energy correction algorithms have been revised and optimized to perform in the extreme conditions of high underlying event activity in central lead-lead collisions. For electrons and photons reconstructed in lead-lead collisions, the uncertainty in the energy scale is estimated to be better than 1 (3)\% in the barrel (endcap) region. \begin{acknowledgments} \hyphenation{Bundes-ministerium Forschungs-gemeinschaft Forschungs-zentren Rachada-pisek} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses.
Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: the Austrian Federal Ministry of Education, Science and Research and the Austrian Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport, and the Croatian Science Foundation; the Research and Innovation Foundation, Cyprus; the Secretariat for Higher Education, Science, Technology and Innovation, Ecuador; the Ministry of Education and Research, Estonian Research Council via PRG780, PRG803 and PRG445 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucl\'eaire et de Physique des Particules~/~CNRS, and Commissariat \`a l'\'Energie Atomique et aux \'Energies Alternatives~/~CEA, France; the Bundesministerium f\"ur Bildung und Forschung, the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy -- EXC 2121 ``Quantum Universe" -- 390833306, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Research, Development and Innovation Fund, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Ministry of Science, ICT and Future Planning, and National Research Foundation (NRF), Republic of Korea; the 
Ministry of Education and Science of the Republic of Latvia; the Lithuanian Academy of Sciences; the Ministry of Education, and University of Malaya (Malaysia); the Ministry of Science of Montenegro; the Mexican Funding Agencies (BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI); the Ministry of Business, Innovation and Employment, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Centre, Poland; the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, Portugal; JINR, Dubna; the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, the Russian Foundation for Basic Research, and the National Research Center ``Kurchatov Institute"; the Ministry of Education, Science and Technological Development of Serbia; the Secretar\'{\i}a de Estado de Investigaci\'on, Desarrollo e Innovaci\'on, Programa Consolider-Ingenio 2010, Plan Estatal de Investigaci\'on Cient\'{\i}fica y T\'ecnica y de Innovaci\'on 2017--2020, research project IDI-2018-000174 del Principado de Asturias, and Fondo Europeo de Desarrollo Regional, Spain; the Ministry of Science, Technology and Research, Sri Lanka; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the Ministry of Science and Technology, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of Thailand, Special Task Force for Activating Research and the National Science and Technology Development Agency of Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the National Academy of Sciences of Ukraine; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation. 
Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract Nos.\ 675440, 724704, 752730, and 765710 (European Union); the Leventis Foundation; the A.P.\ Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science -- EOS" -- be.h project n.\ 30820817; the Beijing Municipal Science \& Technology Commission, No. Z191100007219010; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Lend\"ulet (``Momentum") Program and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850, 125105, 128713, 128786, and 129058 (Hungary); the Council of Scientific and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Ministry of Science and Higher Education, project no. 
0723-2020-0041 (Russia); the Tomsk Polytechnic University Competitiveness Enhancement Program; the Programa de Excelencia Mar\'{i}a de Maeztu, and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs cofinanced by EU-ESF, and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University, and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Kavli Foundation; the Nvidia Corporation; the SuperMicro Corporation; the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA). \end{acknowledgments}
\section{Introduction} Systems of particles interacting only through excluded volume interactions may exist in different phases depending on the shape and density of the particles. These find a variety of applications, including in self-assembly~\cite{van2006colloids,2012-science-deg-predictive,2017-mpolpp-nc-observation}, efficient drug delivery~\cite{2007-jys-jcr-particle,2017-j-ijp-effect}, design of novel materials with specific optical and chemical properties~\cite{2001-vbsn-nature-onchip,2011-fpmnsocfd-acsn-assembly,2013-skrckpy-nature-shaping}, design of molecular logic gates~\cite{2011-acsn-smrmsahecj-manipulating,2011-prb-smsacrmehj-demonstration, 2013-acsn-gkkszsmej-contacting}, and adsorption of gas on metallic surfaces~\cite{1985-prb-twpbe-two,2000-ssr-psb-phase,1991-jcp-dmr-model}. More generally, they are of interest as simple models of fluids~\cite{solana2013perturbation} as well as being the simplest systems to study critical behavior. Many shapes have been studied in the literature. These include different types of polyhedra~\cite{2012-science-deg-predictive}, colloidal superballs~\cite{2017-mpolpp-nc-observation}, and rods~\cite{1992-vl-rpp-phase}. Parallel to the study of models in the continuum, models of hard-core particles on lattices, known as hard core lattice gases (HCLGs), have also been studied.
In the literature, many different geometrical shapes have been studied on two-dimensional lattices, including triangles~\cite{1999-vn-prl-triangular}, squares~\cite{1967-bn-jcp-phase,1966-bn-prl-phase,1966-rc-jcp-phase,2012-rd-pre-high,2016-ndr-epl-stability,2017-mnr-jsm-estimating}, dimers~\cite{1961-k-physica-statistics,1961-tf-pm-dimer,2003-hkms-prl-coulomb,2017-naq-arxiv-polyomino}, Y-shaped particles~\cite{2018-mnr-pre-phase}, mixtures of squares and dimers~\cite{2015-rdd-prl-columnar,2017-mr-pre-columnar}, rods~\cite{2007-gd-epl-on,2013-krds-pre-nematic}, rectangles~\cite{2014-kr-pre-phase,2015-kr-pre-asymptotic,2015-nkr-jsp-high,2017-gvgmv-jcp-ordering}, discretised discs or the k-NN model~\cite{2007-fal-jcp-monte,2014-nr-pre-multiple,darjani2019liquid,thewes2020phase,jaleel2021hard}, and hexagons~\cite{1980-b-jpa-exact}, the last being the only exactly solvable model. A variety of different ordered phases may be observed, including crystalline, columnar or striped, nematic, and power-law correlated phases. Though many examples exist, it is not clear {\em a priori} which phases are realized and in what order (as a function of increasing density) for a given shape. Comparatively less is known about HCLG models in three dimensions. A detailed phase diagram that encompasses all densities is known only for rods of shape $k\times 1 \times1$~\cite{2017-vdr-jsm-different,2017-gkao-pre-isotropic} or $2\times2\times2$ hard cubes~\cite{vigneshwar2019phase}. The numerical study of HCLG models is constrained by difficulties of equilibrating the system at densities close to the maximal possible density, as the system gets stuck in very long-lived metastable states. These difficulties are substantially reduced by using Monte Carlo algorithms that include cluster moves~\cite{2014-kr-pre-phase,2015-kr-pre-asymptotic,2015-rdd-prl-columnar}, which significantly decrease the autocorrelation times.
Systems of plates or board-like particles in the continuum have also been studied numerically~\cite{2017-cdmp-sm-phase,2011-mvv-pccp-biaxial,2018-dtdrd-prl-hard}. The phase diagram in the continuum is very rich, showing multiple transitions with increasing particle densities, and varying aspect ratios. A variety of different phases arise, including smectic, biaxial smectic, uniaxial and biaxial nematic, and columnar with alignment along the long or short axis. If the orientations of the plates are restricted to orthogonal cartesian directions, then it is possible to obtain some rigorous results regarding the nature of the phases, in particular for a system of hard parallelepipeds of size $1\times k^\alpha \times k$, $\alpha \in [0,1]$. For plate-like objects ($1/2 < \alpha <1$), it is possible to show rigorously, for $k \gg 1$, the existence of a uniaxial nematic phase, where only the minor axes of the plates are aligned parallel to each other, and there is no translational order~\cite{2018-dgj-araiv-plate}. However, the behavior of the corresponding lattice model, which is also interesting in connection with certain resonating plaquette wavefunctions and the possibility of a lattice realization of a liquid state of fluctuating quadrupoles~\cite{pankov2007resonating}, has not been studied away from full packing. With this motivation, here we study the phase diagram of a system of $2\times 2\times 1$ hard plates on the three dimensional cubic lattice, {\em i.e.} a lattice gas of plates that each cover an elementary plaquette of the cubic lattice and occupy its four vertices, with the constraint that no two plates occupy the same site of the cubic lattice.
We use a cluster algorithm and focus here on the isotropic system, with equal fugacities for the three orientations of plates, so that ``$\mu$-type plates'' (with normal along the $\mu$ axis) have equal fugacity for all $\mu$ ($\mu = x, y, z$) (see Ref.~\cite{Geetpaper} for the anisotropic fully-packed case in which every site of the cubic lattice is occupied by exactly one plate). We show, using grand canonical Monte Carlo simulations, that the system undergoes two phase transitions as a function of increasing fugacity: first from a disordered fluid to a spontaneously layered phase, and second from this layered phase to a sublattice ordered phase. In the layered phase, the system breaks up into disjoint slabs of thickness two along one spontaneously chosen cartesian direction. Plates with normals perpendicular to this layering direction are preferentially contained entirely within these slabs, while plates straddling two successive slabs have a lower density. This corresponds to a two-fold breaking of translation symmetry along one spontaneously chosen cartesian direction, leading to ``occupied slabs'' stacked along the layering direction with a separation of one lattice spacing. Additionally, the symmetry between the three types of plates is spontaneously broken, as plates with normal along the layering direction have a lower density than the other two types of plates. Intriguingly, the occupied slabs exhibit two-dimensional power-law columnar order. In contrast, inter-slab correlations of the two-dimensional columnar order parameter decay exponentially with the separation between the slabs. In the sublattice ordered phase, there is two-fold ($Z_2$) breaking of lattice translation symmetry along all three cartesian directions.
In this phase, the corner of a $\mu$-type plate with the smallest $\nu$ coordinates (for both $\nu \neq \mu$) preferentially occupies one spontaneously chosen sublattice out of the eight sublattices of vertices of the cubic lattice, and each type of plate breaks translational symmetry along the two directions perpendicular to its normal (see Sec.~\ref{sec:phases} for a more detailed description). The disordered to layered transition occurs at density $\rho^{DL}\approx 0.941$. From finite size scaling, we show that this transition is continuous, with properties that are consistent with those of the $O(3)$ universality class perturbed by cubic anisotropy. The transition from the layered to the sublattice phase occurs at density $\rho^{LS}\approx 0.974$. We show that this second transition is first-order. The overall structure of the phase diagram found here is summarized in Fig.~\ref{schematic}. \begin{figure} \includegraphics[width=\columnwidth]{schematic_phase_diagram.eps} \caption{Schematic phase diagram of the $2\times 2\times 1$ hard plate model. The red dot represents a continuous transition and the blue dots and dotted line represent the coexistence regime in a first-order transition. \label{schematic}} \end{figure} Finally, we note that the fully packed system of $2\times 2\times 1$ hard plates on the cubic lattice also has a very rich phase diagram as a function of anisotropy in the fugacities of the three orientations of plates. This is discussed in a parallel work~\cite{Geetpaper}. \section{\label{sec:model}Model and algorithm} Consider an $L \times L \times L$ cubic lattice with periodic boundary conditions along the three orthogonal directions. The lattice sites may be empty or occupied by $2 \times 2 \times 1$ plates, each of which covers an elementary plaquette of the cubic lattice and occupies the four sites of the corresponding plaquette.
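In a simulation, the hard-core constraint reduces to bookkeeping of the four sites covered by each plate. A minimal Python sketch (with our own helper names, labeling a plate by its smallest-coordinate corner and its normal axis, and assuming periodic boundaries):

```python
from itertools import product

def plate_sites(head, normal, L):
    """Sites covered by a 2x2x1 plate with the given head (smallest-coordinate
    corner) and normal axis (0, 1, or 2), on an L^3 periodic lattice."""
    axes = [a for a in range(3) if a != normal]  # the two in-plane directions
    sites = []
    for d1, d2 in product((0, 1), repeat=2):
        s = list(head)
        s[axes[0]] = (s[axes[0]] + d1) % L
        s[axes[1]] = (s[axes[1]] + d2) % L
        sites.append(tuple(s))
    return sites

def can_place(occupied, head, normal, L):
    # hard-core constraint: none of the four covered sites may be occupied
    return all(s not in occupied for s in plate_sites(head, normal, L))
```

For example, a $z$-plate (normal along axis 2) with head $(0,0,0)$ covers the four sites $(0,0,0)$, $(1,0,0)$, $(0,1,0)$, and $(1,1,0)$.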
Three types of plates are possible depending on the orientation of the normal to the plate, i.e., $x$, $y$ and $z$ plates corresponding to plates lying in the $yz$, $zx$ and $xy$ planes respectively. The plates interact through a hard-core constraint, i.e., no two plates may occupy the same site of the cubic lattice. We associate activities $s_p$ and $s_0$ with each plate and vacancy, respectively. These are normalized through \begin{equation} s_p^{1/4}+s_0=1, \end{equation} where the power $1/4$ accounts for the fact that a plate touches four vertices, while a vacancy resides on one vertex. We study the system using grand canonical Monte Carlo simulations. Conventional Monte Carlo simulations involving local evaporation, deposition, diffusion, and rotation moves are inefficient in equilibrating such systems, especially when the packing fraction approaches full packing. These difficulties may be overcome by algorithms that include cluster moves. The transfer-matrix algorithm we use updates strips of sites of size proportional to $L$. This has been particularly useful in earlier studies of other hard core lattice gas models~\cite{2012-krds-aipcp-monte,2013-krds-pre-nematic,2015-rdd-prl-columnar,2014-nr-pre-multiple}. Below we provide a brief description of this algorithm, and give details of its implementation for our system of hard plates; we follow the terminology of Ref.~\cite{2015-rdd-prl-columnar}, where the phase diagram was obtained for a mixture of dimers and squares on a square lattice at all packing densities using such a transfer-matrix algorithm. We define a ``tube'' to be a cuboidal subset of the $L \times L \times L$ lattice, of size $2\times2\times L$ and made up of $L$ plaquettes of size $2\times 2\times 1$ stacked along one cartesian axis. Choose a tube at random in any one of the three orthogonal directions. Remove all the plates that are completely contained within the tube.
There may be some protruding plates that are not fully contained within the tube, but touch sites of this tube. These plates are left undisturbed. Due to these protrusions, the shape of the tube (after removal of fully contained plates) is complicated and can be characterized by assigning different morphologies to each section depending on the protrusion. There are $16$ such morphologies possible for each section and they are listed in Fig.~\ref{states_label}(a). In order to provide a visual depiction that is easier to read, we use a space-filling convention for depicting the protruding plates. In this space-filling convention, each site of the original cubic lattice maps to a unit cube of the dual cubic lattice, and each plate is a space-filling object that occupies a $2\times 2\times 1$ slab consisting of 4 adjacent elementary cubes of the dual lattice. Note that this alternate description is behind the commonly used terminology, also used here, which refers to the hard plates as $2\times 2\times 1$ cuboids. \begin{figure} \includegraphics[width=\columnwidth]{states2.eps} \caption{Schematic diagram of (a) sixteen possible morphologies and (b) eight possible states of the $2 \times 2 \times L$ tube, used to construct the transfer matrix. To represent different states we have taken the projection in the $xy$-plane. Black represents blocked sites, and brown, red, and green respectively represent the projections of $y$, $x$, and $z$ plates. Note that a vertex of the original cubic lattice is represented by an elementary cube in this space-filling representation for ease of visualization, and the morphologies and states are then depicted in terms of a cross-sectional view of the tube.} \label{states_label} \end{figure} The aim is to refill the tube with a new configuration of plates that are fully contained within the tube, but with the correct equilibrium probability. The probability of this new configuration may be calculated using transfer matrices.
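Once the transfer matrices are in hand, the refill amounts to standard conditional sampling using backward partial sums of section weights. A schematic Python sketch for an open tube (the 8-state matrices and boundary weight vectors are taken as given inputs; the function names are our own and this is not the exact convention of our implementation):

```python
import random

def sample_tube(Ts, Lvec, Rvec, rng=random):
    """Sample the section states of an open tube with their equilibrium
    probabilities. Ts[k][i][j] is the 8x8 transfer matrix between sections
    k and k+1; Lvec and Rvec are the left and right boundary weight vectors."""
    n = len(Ts) + 1  # number of sections
    # suffix[k][i]: total weight of all completions to the right of section k,
    # given state i at section k
    suffix = [None] * n
    suffix[n - 1] = list(Rvec)
    for k in range(n - 2, -1, -1):
        suffix[k] = [sum(Ts[k][i][j] * suffix[k + 1][j] for j in range(8))
                     for i in range(8)]
    # sample the first section from its marginal, then each subsequent
    # section conditionally on the previous one
    w = [Lvec[i] * suffix[0][i] for i in range(8)]
    states = [rng.choices(range(8), weights=w)[0]]
    for k in range(1, n):
        i = states[-1]
        w = [Ts[k - 1][i][j] * suffix[k][j] for j in range(8)]
        states.append(rng.choices(range(8), weights=w)[0])
    return states
```

Each sampled sequence of states then fixes the plates deposited in the tube, section by section, with the correct grand canonical weight.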
Any $2\times2\times 1$ section with a given morphology may be filled by plates in at most eight different ways. The possible states for a section are listed in Fig.~\ref{states_label}(b). Among the sixteen possible morphologies, there are fifteen with partially blocked sites. The remaining morphology [morphology-$16$, as shown in Fig.~\ref{states_label}(a)] represents a complete blockage of the chosen tube. We thus have to calculate $15^2=225$ different transfer matrices of size $8\times 8$. Let $T_{m_1,m_2}$ be the transfer matrix for going from morphology $m_2$ to morphology $m_1$. The matrix element may be written as \begin{equation} T_{m_1,m_2}(i,j)=c_{m_1,m_2}(i,j)W_pW_0, \end{equation} where $c_{m_1,m_2}(i,j)$ is the compatibility factor, $W_p$ is the weight associated with the particle that sits on morphology $m_1$ and $W_0$ is the weight of the vacancies present on morphology $m_2$ after depositing a particle on morphology $m_1$. The compatibility factor $c_{m_1,m_2}(i,j)$ is $1$ if the states $i$ and $j$ are compatible on morphologies $m_1$ and $m_2$, and zero otherwise. The weights associated with the particles and vacancies may be written as \begin{eqnarray} W_p&=&s_p^{n_s}, ~n_s=0,1,2,\\ W_0&=&s_0^{n_0}, ~n_0=0,1,2,3,4. \end{eqnarray} Examples of a few transfer matrices are given in Eqs.~(\ref{t11})--(\ref{t31}).
\begin{equation} \label{t11} T_{1,1} = \left(\begin{array}{cccccccc} s_0^4 & s_0^2 & s_0^2 & s_0^2 & s_0^2 & 1 & 1 & 1 \\ s_ps_0^2 & 0 & s_p &0 &0 &0 &0 &0 \\ s_ps_0^2 & s_p & 0 &0 &0 &0 &0 &0 \\ s_ps_0^2 & 0 & 0 &0 &s_p &0 &0 &0 \\ s_ps_0^2 & 0 & 0 &s_p &0 &0 &0 &0 \\ s_p^2 & 0 & 0 &0 &0 &0 &0 &0 \\ s_p^2 & 0 & 0 &0 &0 &0 &0 &0 \\ s_ps_0^4 & s_ps_0^2 & s_ps_0^2 & s_ps_0^2 & s_ps_0^2 & s_p & s_p & s_p \end{array}\right) \end{equation} \begin{equation} \label{t13} T_{1,3} = \left(\begin{array}{cccccccc} s_0^3 & 0 & s_0 & 0 & s_0 & 0 & 0 & 0 \\ 0 & 0 & 0 &0 &0 &0 &0 &0 \\ s_ps_0 & 0 & 0 &0 &0 &0 &0 &0 \\ 0 & 0 & 0 &0 &0 &0 &0 &0 \\ s_ps_0 & 0 & 0 &0 &0 &0 &0 &0 \\ 0 & 0 & 0 &0 &0 &0 &0 &0 \\ 0 & 0 & 0 &0 &0 &0 &0 &0 \\ s_ps_0^3 & 0 & s_ps_0 & 0 & s_ps_0 & 0 & 0 & 0 \end{array}\right) \end{equation} \begin{equation} \label{t31} T_{3,1} = \left(\begin{array}{cccccccc} s_0^4 & s_0^2 & s_0^2 & s_0^2 & s_0^2 & 1 & 1 & 1 \\ 0 & 0 & 0 &0 &0 &0 &0 &0 \\ s_ps_0^2 & s_p & 0 &0 &0 &0 &0 &0 \\ 0 & 0 & 0 &0 &0 &0 &0 &0 \\ s_ps_0^2 & 0 & 0 &s_p &0 &0 &0 &0 \\ 0 & 0 & 0 &0 &0 &0 &0 &0 \\ 0 & 0 & 0 &0 &0 &0 &0 &0 \\ 0 & 0 & 0 &0 &0 &0 &0 &0 \end{array}\right) \end{equation} The partition function of a closed $2\times2\times L$ tube with morphology $m_1,\dots,m_L$ may be written as \begin{equation} Z^c=\sum_{i}\langle i|T_{m_L,m_1}T_{m_1,m_2}\dots T_{m_{L-1},m_L}|i\rangle, \end{equation} where $|i\rangle$ is the state vector of state $i$. The partition function for the open tube of length $X<L$ may be written as \begin{equation} Z^o=\langle \mathcal{L}_{m_1}|T_{m_1,m_2}T_{m_2,m_3}\dots T_{m_{X-1},m_X}|\mathcal{R}_{m_X}\rangle, \end{equation} where $\langle \mathcal{L}_{m_1}|$ and $|\mathcal{R}_{m_X}\rangle$ are respectively left and right vectors that may be written as \begin{eqnarray} \mathcal{L}_{m_1}(n) &=&T_{16,m_1}(1,n),\\ \mathcal{R}_{m_X}(n) &=&T_{m_X,16}(n,1). 
\end{eqnarray} Having calculated the partition function, we re-occupy the tube, section by section, according to the calculated probabilities. Disjoint tubes are updated simultaneously in our parallelized implementation. To speed up equilibration as well as to reduce autocorrelation times, we also implement a flip move in which a pair of adjacent parallel plates of the same type is replaced by another pair of adjacent parallel plates whose type is chosen randomly. For each value of the activity, we ensure that equilibration has been achieved by starting the simulations with configurations that correspond to different phases, and checking that the final equilibrium state is independent of the initial state. \section{\label{sec:phases}Different phases of the system} \subsection{\label{subsec:characterisation} Observables and order parameters} As the density is varied, we observe three different phases in our simulations. To characterize them, it is convenient to divide the full lattice into eight sublattices depending on whether the $x$, $y$, and $z$ coordinates of a site are even (0) or odd (1), as shown in Fig.~\ref{sublat_label}. A lattice site $(x,y,z)$ belongs to the sublattice labeled by the binary number $zyx$, where each digit is the corresponding coordinate taken modulo two. Except for plates that cover a plaquette on an edge that wraps around the periodic direction, we assign each plate to the site with the least $x$, $y$ and $z$ coordinates (of the four sites touched by it). For plates on wrapping plaquettes, this definition is modified in the obvious way to remain consistent with the treatment of bulk plates. The corner of the plate that occupies the site to which it ``belongs'' is the ``head'' of the plate. \begin{figure} \includegraphics[width=\columnwidth]{cube_sublat_label_3nn3d.eps} \caption{Division of the full lattice into eight sublattices $0, 1, \dots, 7$, depending on whether each coordinate is odd or even.
The arrows show the orientation of the three axes $x$, $y$ and $z$.} \label{sublat_label} \end{figure} To characterize the phases quantitatively, we define sublattice densities $\rho_i^j$ as the volume fraction of plates of type $j=x,y,z$ whose heads occupy sites of sublattice $i=0,\ldots,7$. We also define three particle densities $\rho^j$, eight sublattice densities $\rho_i$, and total density $\rho$ as \begin{align} \rho^j & =\sum_{i=0}^7 \rho_i^j,~j=x,y,z, \nonumber \\ \rho_i & =\sum_{j=x,y,z}\rho_i^j,~i=0,\dots,7, \\ \rho &= \sum_{i=0}^7 \rho_i. \nonumber \end{align} To quantify the breaking of translational invariance in the different directions, it is convenient to define the quantities \begin{eqnarray} \ell_x&=&\frac{1}{L^3}\sum_{x,y,z}\phi(x,y,z)(-1)^x,\nonumber\\ \ell_y&=&\frac{1}{L^3}\sum_{x,y,z}\phi(x,y,z)(-1)^y,\\ \ell_z&=&\frac{1}{L^3}\sum_{x,y,z}\phi(x,y,z)(-1)^z,\nonumber \end{eqnarray} where $\phi(x,y,z)$ is $1$ if the site is occupied by the head of a plate and zero otherwise. The square of the layering order parameter, which characterizes the layered phase, may be defined as \begin{equation} \Lambda^2=\ell_x^2+\ell_y^2+\ell_z^2. \label{eq:lambda_sq} \end{equation} The columnar vector $\vec{C}$ with components $({c}_x, {c}_y, {c}_z)$ may be written as \begin{eqnarray} {c}_x&=&\frac{1}{L^3}\sum_{x,y,z}(-1)^{y+z}\phi(x,y,z),\nonumber\\ {c}_y&=&\frac{1}{L^3}\sum_{x,y,z}(-1)^{x+z}\phi(x,y,z),\\ {c}_z&=&\frac{1}{L^3}\sum_{x,y,z}(-1)^{x+y}\phi(x,y,z).\nonumber \end{eqnarray} The square of the columnar order parameter may be defined as \begin{equation} \Gamma^2={c}_x^2+{c}_y^2+{c}_z^2. \end{equation} We also define the square of the order parameter $\omega$ to characterize the sublattice phase \begin{equation} \omega^2=\ell_x^2\ell_y^2\ell_z^2. 
\end{equation} To capture the breaking of the symmetry between the numbers of the three types of plates, we define a nematic order parameter $\Pi$ as \begin{equation} \Pi^2=\big({\rho^z-\frac{\rho^y}{2}-\frac{\rho^x}{2}}{\big)}^2+\frac{3}{4}\big({\rho^y}-{\rho^x}{\big)}^2. \label{eq:qn} \end{equation} When $\Pi$ is non-zero, the symmetry between the three types of particles is broken. In a sublattice-ordered phase, we expect $\omega^2$, $\Lambda^2$ and $\Gamma^2$ to all tend to nonzero values in the thermodynamic limit. In contrast, in the layered phase, we expect $\omega^2$ to tend to zero as $1/L^6$ and $\Gamma^2$ to tend to zero as $1/L^3$ in the thermodynamic limit, while $\Lambda^2$ tends to a nonzero limit. \begin{figure*} \includegraphics[width=2.0\columnwidth]{op_square_L_inv_2.eps} \caption{Variation of the square of the (a) layered order parameter $\langle\Lambda^2 \rangle$, (b) columnar order parameter $\langle\Gamma^2 \rangle$ and (c) sublattice order parameter $\langle\omega^2 \rangle$ as a function of $L^{-1}$ for different values of $s_p$.} \label{fig:op_sq_Linv} \end{figure*} In Fig.~\ref{fig:op_sq_Linv}(a-c), we display $\Lambda^2$, $\Gamma^2$ and $\omega^2$ as a function of $L^{-1}$ for $s_p=0.300$, $s_p=0.360$ and $s_p=0.420$. The quantity $\Lambda^2$ decays to zero as $L^{-3}$ for $s_p=0.300$ and takes non-zero values for $s_p=0.360$ and $s_p=0.420$. The quantity $\Gamma^2$ decays to zero as $L^{-3}$ for both $s_p=0.300$ and $s_p=0.360$, and takes a non-zero value for $s_p=0.420$. Similarly, $\omega^2$ also decays to zero for both $s_p=0.300$ and $s_p=0.360$, but the decays obey different power laws, $L^{-9}$ and $L^{-6}$ respectively. For $s_p=0.420$, $\omega^2$ takes non-zero values. Taken together, these behaviours allow us to conclude that the system is successively in a disordered, layered and sublattice-ordered phase for $s_p=0.300$, $s_p=0.360$, and $s_p=0.420$ respectively. This establishes the presence of the three phases described in our introductory discussion.
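The order parameters defined above translate directly into code. The following Python sketch (the function names are ours; \texttt{phi} is the head-occupancy field $\phi$ on an $L\times L\times L$ lattice) evaluates the squared layering, columnar, sublattice and nematic order parameters for a single configuration:

```python
import numpy as np

def order_parameters(phi):
    """Squared layering, columnar and sublattice order parameters from
    the head-occupancy field phi[x, y, z] (1 on plate heads, else 0)."""
    L = phi.shape[0]
    x, y, z = np.indices(phi.shape)
    sx, sy, sz = (-1.0) ** x, (-1.0) ** y, (-1.0) ** z
    norm = L ** 3
    lx, ly, lz = (np.sum(phi * s) / norm for s in (sx, sy, sz))
    cx = np.sum(phi * sy * sz) / norm
    cy = np.sum(phi * sx * sz) / norm
    cz = np.sum(phi * sx * sy) / norm
    Lambda2 = lx**2 + ly**2 + lz**2      # layering
    Gamma2 = cx**2 + cy**2 + cz**2       # columnar
    omega2 = lx**2 * ly**2 * lz**2       # sublattice
    return Lambda2, Gamma2, omega2

def nematic_sq(rho_x, rho_y, rho_z):
    """Squared nematic order parameter Pi^2 from the three plate densities."""
    return (rho_z - rho_y / 2 - rho_x / 2) ** 2 + 0.75 * (rho_y - rho_x) ** 2
```

For instance, a configuration with heads only on even-$x$ planes gives a nonzero $\Lambda^2$ with vanishing $\Gamma^2$ and $\omega^2$, as expected for a layered state.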
For a bird's eye view of the phase diagram as a function of plate fugacity, we plot the fugacity dependence of the various order parameters in Fig.~\ref{combined_translational}. We clearly observe a layered phase ($\Lambda^2 \neq 0$, $\Gamma^2=0$, $\omega^2=0$) and a sublattice phase ($\Lambda^2 \neq 0$, $\Gamma^2\neq 0$, $\omega^2\neq0$). The variation of $\Pi^2$ as a function of $s_p$ is also shown in Fig.~\ref{combined_translational}. $\Pi^2$ is zero in both the disordered and sublattice phases, and takes nonzero values only in the layered phase, which indicates asymmetric densities of the three types of particles in the layered phase. \begin{figure} \includegraphics[width=\columnwidth]{op_sq_all_range_with_error.eps} \caption{Variation of the square of the (a) translational order parameters $\langle\Lambda^2\rangle$, $\langle\Gamma^2\rangle$, and (b) nematic order parameter $\langle\Pi^2\rangle$ with the activity of plates $s_p$. The data are for system sizes $L=80, 100, 120, 150$.} \label{combined_translational} \end{figure} \subsection{Disordered phase} The characterization of the disordered phase is straightforward. All order parameters vanish in the thermodynamic limit in this low-density phase. The plates form a disordered fluid, with their heads uniformly distributed: the sublattice densities are all equal, as are the densities of the three types of plates, {\em i.e.}, \begin{eqnarray} \rho_i &=& \frac{\rho}{8}, \quad i=0, 1, \ldots, 7,\nonumber \\ \rho^j &=& \frac{\rho}{3}, \quad j=x, y, z \; . \nonumber \end{eqnarray} \subsection{\label{subsec:layered} Layered phase} With increasing density, we observe that the system undergoes a transition into the layered phase described in the Introduction. In Fig.~\ref{time_pf_lay}, we display the time evolution of the sublattice densities when the system is in a layered phase with layering in the $x$-direction. Fig.~\ref{time_pf_lay}(a) compares the densities of the three types of plates.
It is clear that the density of $x$-plates is suppressed compared to $y$ and $z$-plates, when the layering is in the $x$-direction, i.e., $\rho^y\approx \rho^z \gg \rho^x$. At the same time, Fig.~\ref{time_pf_lay}(b)--(d) show that while the heads of $x$-plates occupy all sublattices equally, the heads of $y$ and $z$-plates preferentially occupy planes with odd $x$ (in this case), contributing to $\rho_1$, $\rho_3$, $\rho_5$, and $\rho_7$. These observations lead us to the basic picture of the layered phase described earlier in the Introduction. Evidence for power-law columnar order within the occupied slabs is discussed separately in Sec.~\ref{sec:layered}. \begin{figure} \includegraphics[width=\columnwidth]{time_profile_layer.eps} \caption{The temporal evolution of different thermodynamic quantities is shown for an equilibrated layered phase at activity $s_p=0.380$ and for system size $L=120$. (a) The three plate densities $\rho^x, \rho^y, \rho^z$. The eight sublattice densities $\rho_i$ for (b) $x$-plates, (c) $y$-plates, and (d) $z$-plates, where the subscripts $i=0,\dots,7$ denote the different sublattices and the superscripts $x$, $y$, $z$ denote the different types of plates.} \label{time_pf_lay} \end{figure} \subsection{\label{subsec:sublattice} Sublattice-ordered phase} At higher densities including full packing, we observe a sublattice-ordered phase. In this phase, all three types of plates are equivalent, but translational invariance is broken in all three directions, as in a solid. \begin{figure} \includegraphics[width=\columnwidth]{time_profile_sublattice.eps} \caption{The temporal evolution of different thermodynamic quantities is shown for an equilibrated sublattice phase at activity $s_p=0.460$ and for system size $L=120$. 
The eight sublattice densities (a) $\rho_i$ summing over the three types of particles, and for individual particle types (b) $x$-plates, (c) $y$-plates and (d) $z$-plates, where the subscripts $i=0,\dots,7$ denote the different sublattices and the superscripts $x$, $y$, $z$ denote the different types of plates.} \label{time_pf_sub} \end{figure} In Fig.~\ref{time_pf_sub}, we display the time evolution of the sublattice densities when the system is in a sublattice-ordered phase. Out of the eight sublattices, one is occupied preferentially. At the same time, there is a solid-like sublattice ordering, as can be seen from Fig.~\ref{time_pf_sub}(a). The sublattice densities for each type of plate are shown in Fig.~\ref{time_pf_sub}(b)--(d). For each type of plate, two sublattices are preferred, as in a columnar phase. The preferred sublattice densities are $[\rho_2, \rho_3]$, $[\rho_1, \rho_3]$ and $[\rho_3, \rho_7]$ for $x$, $y$ and $z$-plates respectively. The time profile of the total sublattice densities $\rho_i$ splits into four levels [see Fig.~\ref{time_pf_sub}(a)]. The top and bottom levels are $\rho_3$ and $\rho_4$ respectively. The two intermediate levels are each three-fold degenerate: the higher one contains $\rho_1, \rho_2, \rho_7$ and the lower one contains $\rho_0, \rho_5, \rho_6$. The pattern of the levels may be understood from the right panel of Fig.~\ref{sublat_label}, where the sublattice division is shown schematically: the levels are ordered by the shortest distance between sublattice-$3$ (the most occupied) and each of the other sublattices, with the density decreasing as this distance increases. One could imagine the sublattice phase as follows. Consider a collection of $2\times 2 \times 2$ cubes that are arranged in a periodic manner to favor one sublattice.
If the cubes are now replaced by a pair of plates of the same kind (each cube can thus be replaced by parallel plates in three ways), then the phase that is obtained is similar to the sublattice phase that we see in the system of hard plates. Unlike in the layered phase, the densities of the three types of plates are equal. For the fully packed case, this picture gives a lower bound of $(1/8)\log 3$ on the entropy per site. \section{\label{sec:transitions}Phase transitions} We now study the nature of the two phase transitions we observe: from the disordered to the layered phase, and from the layered to the sublattice-ordered phase. \subsubsection{Disordered to layered phase transition} As noted already, it is convenient to focus on the squared order parameter $\Lambda^2$ as defined in Eq.~(\ref{eq:lambda_sq}) to probe the symmetry breaking accompanying the layering transition. In the disordered phase, $\Lambda^2 \to 0$ in the thermodynamic limit, while the thermodynamic limit of $\Lambda^2$ in the layered phase is nonzero. As we have already seen in Fig.~\ref{combined_translational}(a), the first transition encountered as one increases the activity $s_p$ from small values is a transition from a disordered to a layered phase, signalled by a threshold at which $\Lambda^2$ develops a nonzero value in the thermodynamic limit. \begin{figure} \includegraphics[width=\columnwidth]{disordered_layered_with_error_2.eps} \caption{Data for Binder cumulant $U_{\Lambda}$ near the disordered-layered transition. (a) $U_{\Lambda}$ for different system sizes intersect close to $s_p^{DL} \approx 0.323$.
(b) $U_{\Lambda}$ for different system sizes collapse onto a single curve when the parameter is scaled as in Eq.~(\ref{eq:uscaling}) with exponent $\nu=0.704$.} \label{transition_1} \end{figure} For a more detailed understanding of the disordered-layered transition, we also measure the Binder cumulant $U_{\Lambda}$ associated with $\Lambda^2$, \begin{equation} U_\Lambda=1-\frac{9}{15}\frac{\langle \Lambda^4\rangle}{\langle \Lambda^2\rangle ^2}.\label{eq:define_binder} \end{equation} From the standard finite-size scaling theory of continuous phase transitions, we expect the Binder cumulant to obey a scaling form near the critical point: \begin{equation} U_\Lambda(\epsilon, L) \simeq f_\Lambda (\epsilon L^{1/\nu}),\label{eq:uscaling} \end{equation} where $\epsilon=s_p-s_c$ is the deviation from the critical point, $\nu$ is the correlation-length critical exponent, and $f_\Lambda$ is the scaling function. The nature of the symmetry breaking associated with the layering transition suggests that the finite-size scaling of this Binder cumulant should be governed by the scaling behavior of the O(3) universality class with cubic anisotropy, for which the critical exponents are known to be $\nu=0.704$, $\beta=0.362$, and $\gamma=1.389$~\cite{carmona-2000,caselle-1998}. The variation of $U_\Lambda$ with $s_p$ for different system sizes is shown in Fig.~\ref{transition_1}(a). The data for different system sizes cross each other at the critical point $s_p^{DL} \approx 0.323$. The corresponding critical density is $\rho^{DL}\approx0.940$. The data for the Binder cumulant for different $L$ collapse to a reasonable accuracy onto a single scaling curve when the variable $\epsilon$ is scaled as in Eq.~(\ref{eq:uscaling}) with the theoretical value $\nu=0.704$, as shown in Fig.~\ref{transition_1}(b).
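The cumulant estimate and the scaling collapse can be sketched in a few lines (a hedged illustration: \texttt{binder\_lambda} consumes Monte Carlo samples of $\Lambda^2$, and the function names are ours):

```python
import numpy as np

def binder_lambda(lambda2_samples):
    """Binder cumulant U_Lambda = 1 - (9/15) <Lambda^4>/<Lambda^2>^2
    estimated from Monte Carlo samples of Lambda^2."""
    s = np.asarray(lambda2_samples, dtype=float)
    m2 = np.mean(s)            # <Lambda^2>
    m4 = np.mean(s ** 2)       # <Lambda^4> since Lambda^4 = (Lambda^2)^2
    return 1.0 - (9.0 / 15.0) * m4 / m2**2

def collapse_variable(s_p, s_c, L, nu=0.704):
    """Scaling variable eps * L^(1/nu) used for the data collapse."""
    return (s_p - s_c) * L ** (1.0 / nu)
```

Plotting $U_\Lambda$ against \texttt{collapse\_variable} for several $L$ should merge the curves near the transition if the assumed universality class is correct.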
The quality of this data collapse bears out our initial theoretical expectation that the transition is in the universality class of the three-dimensional Heisenberg model with cubic anisotropy~\cite{carmona-2000,caselle-1998}. \subsubsection{Layered to sublattice phase transition} In this section, we study the nature of the second transition, from the layered to the sublattice phase. Suitable order parameters are $\Gamma^2$ and $\Pi^2$, defined in Sec.~\ref{subsec:characterisation} and Eq.~(\ref{eq:qn}) respectively. The associated Binder cumulants may be defined as \begin{eqnarray} U_\Gamma&=&1-\frac{1}{2}\frac{\langle \Gamma^4\rangle}{\langle \Gamma^2\rangle ^2},\label{eq:binder_gamma}\\ U_\Pi&=&1-\frac{1}{2}\frac{\langle \Pi^4\rangle}{\langle \Pi^2\rangle ^2}.\label{eq:binder_pi} \end{eqnarray} We show that the transition is first-order in nature. The variation of $\langle \Gamma^2\rangle$ and $\langle \Pi^2\rangle$ with $s_p$, for different system sizes, has already been displayed in Fig.~\ref{combined_translational}(a) and (b) respectively; both order parameters have a sharp variation across the transition point, and the data for different system sizes intersect each other, with the curves becoming steeper with increasing system size. These are signatures of a first-order transition. In Fig.~\ref{transition_2_hist}(a), (b) and (c), we also display the measured histograms of the total plate density $\rho$ and the order parameters $\Gamma^2$ and $\Pi^2$. To increase our signal-to-noise ratio, we measure these histograms by averaging the time series of each observable over bins of $51$ successive measurements and then recording the histogram of the resulting bin averages. As is clear from this figure, these histograms have the double-peaked distribution characteristic of phase coexistence at a first-order transition. However, the jump in the density across the transition is quite small (of the order $10^{-4}$) and therefore quite difficult to detect directly in simulations.
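The bin-averaging procedure used for these histograms can be sketched as follows (the bin size default and the helper name are illustrative):

```python
import numpy as np

def binned_histogram(series, bin_size=51, n_bins=50):
    """Histogram of bin averages: average the time series over blocks of
    `bin_size` successive measurements, then histogram the block averages.
    This suppresses fast fluctuations and sharpens the two coexistence peaks."""
    series = np.asarray(series, dtype=float)
    n_blocks = len(series) // bin_size
    blocks = series[: n_blocks * bin_size].reshape(n_blocks, bin_size).mean(axis=1)
    hist, edges = np.histogram(blocks, bins=n_bins, density=True)
    return hist, edges
```

A double-peaked output of this function, for the density or for $\Gamma^2$ and $\Pi^2$, signals phase coexistence.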
More evidence in support of the first-order nature of this transition is provided by the Binder cumulants of the order parameters. The variations of the Binder cumulants $U_\Gamma$ and $U_\Pi$ are shown in Fig.~\ref{transition_2_binder}(a) and (b) respectively, for different system sizes. Both cumulants have non-monotonic behavior near the transition and become negative in its vicinity; this is another characteristic signature of a first-order transition. We thus conclude that the layered to sublattice-ordered transition is first-order. \begin{figure*} \includegraphics[width=2.0\columnwidth]{histogram_2nd_trans.eps} \caption{Probability distributions of the (a) total density $\rho$, (b) $\Gamma^2$ and (c) $\Pi^2$ near the layered to sublattice transition for $L=150$.} \label{transition_2_hist} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{binder_2nd_trans.eps} \caption{Variation of the Binder cumulants (a) $U_{\Gamma}$ and (b) $U_{\Pi}$ as a function of $s_p$ for different $L$.} \label{transition_2_binder} \end{figure} \section{\label{sec:layered}Correlations in layered phase} \begin{figure} \includegraphics[width=\columnwidth]{correlation_inter_plane_3.eps} \caption{(a) Variation of the normalized layer correlation function $G_n(p, L)$ with interlayer separation $p$. Variation of the unnormalized layer correlation function $G(p,L)$ for (b) $p=0$ and (c) $p=4$, and (d) the in-plane two-point correlation function $C(r=L/4,p=0)$, as functions of $L$.} \label{layered_correlation} \end{figure} In this section we characterize the correlations in the layered phase.
To examine the intra-slab and inter-slab correlations, we define the in-plane columnar order parameter $(\ell_x(z), \ell_y(z))$ of a layer $z$ as \begin{eqnarray} \ell_x(z)&=&\frac{1}{L^2}\sum_{x,y=0}^{L-1}(-1)^x \phi(x,y,z),\nonumber\\ \ell_y(z)&=&\frac{1}{L^2}\sum_{x,y=0}^{L-1}(-1)^y \phi(x,y,z), \end{eqnarray} where $\phi(x,y,z)=1$ if the site is occupied by the head of a plate, and zero otherwise. The inter-slab correlation $G(p,L)$ for two slabs separated by a distance $p$ in the layering direction is defined as \begin{equation} G(p,L)=\frac{1}{L} \sum_{z^\prime=0}^{L-1}[\ell_x(z^\prime) \ell_x(z^\prime+p) + \ell_y(z^\prime) \ell_y(z^\prime+p)]. \end{equation} The variation of the normalized correlation function $G_n(p,L)=G(p,L)/G(0,L)$ in the layered phase is shown in Fig.~\ref{layered_correlation}(a) for different system sizes. It is clear that it decays exponentially with $p$. We conclude that the interaction between the slabs is weak and decays rapidly with inter-slab distance in the layered phase. The variation of $G(0,L)$ and $G(4,L)$ as a function of $L$ is shown in Fig.~\ref{layered_correlation}(b) and (c) respectively. For large $L$, we see that these approach the behavior \begin{equation} G(p,L)\sim L^{-2} \;\;\; {\rm for} \;\;\; p=0,4 \; . \end{equation} To understand the significance of this observation, we need to also study the correlation function of a local two-dimensional columnar order parameter field $\psi(r)$. To this end, we first note that each occupied slab, viewed along the layering axis, can be thought of as a two-dimensional system of hard squares and dimers; the plates with normals perpendicular to the layering axis play the role of dimers in this mapping, while plates with normal along the layering axis are viewed as hard squares.
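The in-plane order parameters and the inter-slab correlation just defined may be evaluated as in the following sketch (periodic boundary conditions along the layering direction are assumed, and the function name is ours):

```python
import numpy as np

def interslab_correlation(phi, p):
    """G(p, L): correlate the in-plane order parameters (ell_x(z), ell_y(z))
    of layers separated by p along the layering axis z.
    phi[x, y, z] = 1 on plate heads, else 0; periodic in z."""
    L = phi.shape[0]
    x, y = np.indices(phi.shape[:2])
    ell_x = np.einsum('xyz,xy->z', phi, (-1.0) ** x) / L**2
    ell_y = np.einsum('xyz,xy->z', phi, (-1.0) ** y) / L**2
    # average over z' of ell(z') . ell(z' + p), using np.roll for periodicity
    return np.mean(ell_x * np.roll(ell_x, -p) + ell_y * np.roll(ell_y, -p))
```

An exponential decay of \texttt{interslab\_correlation(phi, p)/interslab\_correlation(phi, 0)} with $p$ is the signature of weakly coupled slabs.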
With this in hand, we employ the definition of $\psi$ used in Ref.~\cite{2015-rdd-prl-columnar} for a two-dimensional system of dimers and hard squares, and measure the connected two-point correlation function $C(r, p=0)$ of $\psi(r)$ within each occupied slab. In Fig.~\ref{layered_correlation}(d), we display the $L$ dependence of $C(L/4,0)$ in the layered phase. As is clear from this figure, our data are consistent with $C(L/4,0) \sim 1/L^\eta$, with an exponent $\eta > 2$ that depends on the plate fugacity. \begin{figure} \includegraphics[width=\columnwidth]{vacancy_charge.eps} \caption{(a) Variation of the absolute value of the total vacancy charge $|\Delta|$ in a $k\times L$ rectangular box on an occupied layer with $kL^{-1}$, for different $L$ in a layered phase with $s_p=0.375$. (b) Power-law scaling of the saturation charge $|\Delta_s|$ with $L$ for different $s_p$ in the layered phase.} \label{fig:charge} \end{figure} This appearance of critical correlations in the layered phase at nonzero vacancy density is quite surprising at first sight. We understand it as a consequence of the constraints on the motion and relative positions of vacancies near full packing. Consider the time evolution of a system of hard plates at high density, with plates only able to move to nearby empty spaces without violating the hard-core constraint; such a starting point is appropriate since the density of plates in the layered phase is quite high. In such a system, vacancies can only move in pairs, which can be thought of as dipoles. In the layered phase, the system splits into occupied slabs that are weakly coupled to each other, as is evident from the exponential falloff of the inter-slab correlations [Fig.~\ref{layered_correlation}(a)]. Each occupied slab, viewed along the layering axis, is a system of hard squares and dimers on a two-dimensional square lattice.
If individual vacancies move freely, such a two-dimensional system cannot support power-law order~\cite{2015-rdd-prl-columnar}. However, and this is key, dipolar defects do not destroy power-law order in this equivalent two-dimensional system. Motivated by this line of thought, we have monitored the total ``charge'' in a single layer (each such single layer forms the top or the bottom layer of an occupied slab). This total charge is defined as \begin{equation} |\Delta|=|\sum_{x,y \in {\rm strip\ of\ width\ } k}(-1)^{x+y}\delta_{\sigma,1} |, \end{equation} where we have denoted the layering direction as $z$ (thus each layer is periodic in the $x$ and $y$ directions), the vacancy field $\sigma$ at a site is $1$ if the site is empty and $0$ if it is touched by a plate, and the sum is taken over a strip that wraps around one periodic direction (perpendicular to the layering axis) of an $L \times L$ layer of our sample, and has finite width $k$ in the other periodic direction (again, perpendicular to the layering axis). The variation of the average absolute charge as a function of $k/L$ for different $L$ and fixed $s_p=0.375$ is shown in Fig.~\ref{fig:charge}(a). It reaches a saturation value $|\Delta_s(L)|$ in the vicinity of $k/L = 0.5$, and is symmetric about $k/L=1/2$ because of the periodic boundary conditions obeyed by the layer. The $L$ dependence of $|\Delta_s(L)/L|$ is shown in Fig.~\ref{fig:charge}(b), and is seen to be consistent with a ``perimeter-law'' scaling. This perimeter-law scaling admits a natural interpretation, namely that individual vacancies in any layer are bound into pairs, with each vacancy on an $A$ sublattice site paired with a nearby vacancy on the $B$ sublattice. Since charges are bound into dipolar pairs, $|\Delta_s(L)|$ naturally displays perimeter-law scaling with $L$. However, the exponentially decaying correlations between slabs throw up another question.
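The staggered strip charge may be computed as in the following sketch (which periodic direction the strip wraps around is a convention here, and the function name is ours):

```python
import numpy as np

def strip_charge(sigma, k):
    """|Delta|: absolute staggered vacancy charge in a strip of width k of an
    L x L layer. sigma[x, y] = 1 on vacant sites; the strip spans all of y
    (wrapping the periodic direction) and x = 0 .. k-1."""
    x, y = np.indices(sigma.shape)
    stag = (-1.0) ** (x + y)        # (-1)^(x+y) staggering factor
    return abs(np.sum((stag * sigma)[:k, :]))
```

A tightly bound vacancy pair contributes canceling $\pm 1$ charges once both members lie inside the strip, which is why bound pairs lead to perimeter-law rather than area-law growth of $|\Delta_s(L)|$.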
What prevents the power-law columnar order parameters of two adjacent occupied slabs from locking together? In the corresponding layered phase of fully-packed hard plates with anisotropic fugacities, discussed in parallel work~\cite{Geetpaper}, the same question arises and has an interesting answer: plates which straddle neighbouring occupied slabs occur in bound pairs, which can be viewed as quadrupolar defects. As a result, the corresponding coupling between slabs is irrelevant whenever the power-law exponent within the slab satisfies $\eta > 1/2$. When vacancies are allowed, as is the case here, there is a crucial difference: two adjacent occupied slabs can be coupled by single plates that straddle the two slabs. This is because a pair of vacancies can ``cut'' the string that binds two such plates into a quadrupolar defect in the fully-packed case~\cite{Geetpaper}. Thus, in the present case, the question reduces to whether this dipole-dipole coupling between neighbouring occupied slabs is a relevant coupling. From the scaling dimension of this dipole-dipole coupling, we see that this coupling is irrelevant whenever the transverse power-law exponent satisfies $\eta > 2$. Reassuringly, the measured value of $\eta$ throughout our layered phase does indeed satisfy this inequality. Thus it is the dipolar character of the defects within occupied slabs, and the irrelevance of the dipolar couplings between slabs, that together lead to a stable critical layered phase in this system. This critical layered phase is perhaps the most surprising aspect of the results presented here. \iffalse Additionally, as argued in parallel work~\cite{Geetpaper}, when the system is in a spontaneously layered phase, two neighboring occupied slabs can only be straddled by a pair of parallel adjacent plates in the thermodynamic limit (more precisely, other configurations are entropically subdominant in the thermodynamic limit).
Taken together, these two features of the system imply that in the equivalent two-dimensional hard square and dimer gas describing an occupied slab, there are no isolated ``charged'' defects. Indeed, the defects that dominate in the thermodynamic limit of the equivalent two-dimensional system have a dipolar character, and the coupling between neighboring occupied slabs is quadrupolar in nature, as described in Ref.~\cite{Geetpaper}. \fi \section{\label{sec:discussion}Summary and discussion} In this paper we studied the phases and phase transitions (Fig.~\ref{schematic}) in a system of $2\times2\times1$ hard plates on the three-dimensional cubic lattice using Monte Carlo simulations. Three types of plates are possible depending on their orientation, and our focus has been on the isotropic case with equal fugacity for all three types of plates. The system undergoes two phase transitions with increasing particle density: first, a continuous transition from the disordered phase to a layered phase that survives up to fairly high densities, and second, a first-order transition from the layered phase to a sublattice-ordered phase that is stable at~\cite{Geetpaper} and near full packing. In the sublattice-ordered phase, the system displays two-fold breaking of translational symmetry along all three Cartesian axes. Each type of plate has columnar order, and breaks translation symmetry in the two directions perpendicular to its axis [see Fig.~\ref{time_pf_sub}]. In the layered phase, the density of one type of plate is lower relative to the other two, and there is two-fold translation symmetry breaking along one spontaneously chosen Cartesian axis, with occupied slabs (with a higher density of plates contained entirely within them) separated from each other by one lattice spacing as one moves along this layering axis. Remarkably, correlations within an occupied slab decay as an oscillatory power law, with the wavevector corresponding to power-law columnar order within the slab.
On the other hand, the correlations between different occupied slabs decrease exponentially with the separation between them. As mentioned earlier in Sec.~\ref{sec:layered} (see also Ref.~\cite{Geetpaper}), this vacancy-driven physics of hard plates on the cubic lattice is particularly interesting from a vantage point that uses the fully-packed system as a reference and views the vacancies as defects introduced into a fully-packed configuration. This is best appreciated by contrasting the constraints on the position and mobility of individual vacancies in this system with the corresponding constraints (or lack thereof) in systems of $k$-mers ($k>2$)~\cite{2007-gd-epl-on,2013-krds-pre-nematic,dhar2021entropy} or dimers ($k=2$)~\cite{1961-k-physica-statistics,1961-tf-pm-dimer,2003-hkms-prl-coulomb,2017-naq-arxiv-polyomino}. Consider removing a single dimer from a fully-packed dimer model on the bipartite square or cubic lattice. This introduces two vacancies, one on the $A$ sublattice and the other on the $B$ sublattice of the bipartite lattice. As the dimers move around while obeying the hard-core constraint on their positions, the two vacancies can separate from each other and move individually via hops to next-nearest-neighbor sites. In other words, the only constraint on them is that the two vacancies must occupy opposite sublattices. Turning to long rigid rods of length $k$ with $k>2$, the situation is not very different: consider the $k$ vacancies created by the removal of a single rod from the fully-packed system. Apart from some constraints on the sublattices of sites that can be simultaneously occupied by these $k$ vacancies, these vacancies can move around and separate from one another. This should be contrasted with the constraints faced by the four vacancies that are created when a single hard plate is removed from the fully-packed system on the cubic lattice.
These vacancies are only free to move as two nearest-neighbor pairs, and then only in directions perpendicular to the pairing axis. This is a key distinction between the present problem and systems of long rods. Indeed, the problem studied here has a stable sublattice-ordered phase at densities close to full packing, while in the case of long rods, the sublattice-ordered phase is unstable close to full packing because of a sliding instability. In bipartite dimer models, each dimer can be thought of as a dipole, and the fully-packed limit is understood in terms of a coarse-grained height action that describes the potential field in a system of fluctuating dipoles. This provides a natural description of the Coulomb correlations of bipartite dimer models~\cite{Henley_coulombphasereview,Alet_2006pre_interactingdimers,Papa_Fradkin_interactingdimers,2015-rdd-prl-columnar,Huse_etal_3ddimers,Desai_Pujari_Damle_bilayerdimers}. Isolated vacancies correspond to charged monopoles in this description. Any nonzero density of vacancies then corresponds to a nonzero density of free charges, which introduces a finite correlation length and destroys the Coulomb liquid phase. Although less is known, the effect of a small density of vacancies on fully-packed $k$-mers is expected to be quite similar in two dimensions, since the two-dimensional full-packing limit again admits a multi-component height description, and isolated vacancies correspond to vector charges~\cite{2007-gd-epl-on,2013-krds-pre-nematic} within this description. In contrast, since vacancies in our fully-packed plate system can only move in pairs, there is no ``free charge'' associated with them. Instead, pairs of vacancies in a layer are more appropriately thought of as dipolar defects in the coarse-grained effective field theory~\cite{Geetpaper} for a layer.
Thus, our results, particularly the transition to the spontaneously layered phase and the critical correlations of the occupied slabs, can be viewed as a direct consequence of this restricted motion of vacancy defects in the hard-plate lattice gas; this point of view is particularly appropriate since the transition to the critical layered phase occurs at a very small vacancy density of $\rho_{\rm vac}^{\rm crit} = 0.026$. Finally, we note that although numerous analytical~\cite{1973-a-prl-phase,1974-s-pra-ordered,2018-dgj-araiv-plate}, experimental~\cite{1980-ys-prl-observation,2004-l-nature-missing} and computer simulation~\cite{2000-bz-jcp-thermotropic,2008-bmorz-jpcm-computer,2018-dtdrd-prl-hard} studies indicate the presence of a biaxial nematic phase (in which the system exhibits orientational order along all three internal axes of the particle) in systems of anisotropic plate-like objects in three dimensions, there has been some debate regarding the existence of this phase. In this paper, for the particular case of $2 \times 2 \times 1$ hard plates on the cubic lattice, we have not found any biaxial nematic phase. It would be very interesting to study the existence and stability of such a biaxial nematic phase for other lattice models of plates which cover more than one elementary face of the cubic lattice. A system of rectangular plates with different aspect ratios having hard-core and/or attractive interactions would also be a promising candidate for future study. \iffalse A two-dimensional section of the system of hard plates corresponds to a problem of hard squares and dimers. This model, when the activities of dimers and squares can be varied independently, has a very rich phase diagram including two lines of critical points meeting at a point~\cite{2015-rdd-prl-columnar,2017-mr-pre-columnar}.
Thus, if the activities of the three kinds of plates in three dimensions can be independently varied, a very rich phase diagram can be expected, especially at full packing, where regions of power-law correlated phases should exist. We will describe the phases of the fully packed regime in another paper~\cite{Geetpaper}. \fi \section*{Acknowledgments} We thank K. Ramola and N. Vigneshwar for helpful discussions. The simulations were carried out on the high performance computing machine Nandadevi at the Institute of Mathematical Sciences, and on the computational facilities provided by the University of Warwick Scientific Computing Research Technology Platform. Some of this work contributed to the Ph.D. thesis of DM submitted to the Homi Bhabha National Institute (HBNI). GR was supported by the TQM unit of the Okinawa Institute of Science and Technology during the final stages of this work. KD was supported at the TIFR by DAE, India and in part by a J.C. Bose Fellowship (JCB/2020/000047) of SERB, DST India, and by the Infosys-Chandrasekharan Random Geometry Center (TIFR). D.D.'s work was partially supported by Grant No. DST-SR-S2/JCB-24/2005 of the Government of India, and partially by a Senior Scientist Fellowship of the National Academy of Sciences of India. {\em Author contributions} DM performed the computations with assistance from GR. KD, RR, and DD conceived and directed this work, and finalized the manuscript using detailed inputs from DM.
\chapter{Notation and conventions} Let $\mathcal B\subset\mathbb{R}^d$ with $d\geq 2$. Throughout the whole thesis, the symbol ${\rm 1\mskip-4mu l}_{\mathcal B}$ denotes the characteristic function of $\mathcal B$. The notation $C_c^\infty (\mathcal B)$ stands for the space of infinitely differentiable functions on $\mathbb{R}^d$ having compact support in $\mathcal B$. The space $\mathcal D^{\prime}(\mathcal B)$ is the space of distributions on $\mathcal B$. We also use the notation $C^0_w([0,T];\mathcal X)$, with $\mathcal X$ a Banach space, to refer to the space of functions which are continuous in time with values in $\mathcal X$ endowed with its weak topology. \\ Given $p\in[1,+\infty]$, by $L^p(\mathcal B)$ we mean the classical space of Lebesgue measurable functions $g$ such that $|g|^p$ is integrable over the set $\mathcal B$ (with the usual modification for the case $p=+\infty$). We also use the notation $L_T^p(L^q)$ to indicate the space $L^p\big([0,T];L^q(\mathcal B)\big)$, with $T>0$. Given $k \geq 0$, we denote by $W^{k,p}(\mathcal B)$ the Sobolev space of functions which belong to $L^p(\mathcal B)$ together with all their derivatives up to order $k$. When $p=2$, we use the notations $W^{k,2}(\mathcal B)$ and $H^k(\mathcal B)$ interchangeably. We denote by $\dot{W}^{k,p}(\mathcal B)$ the corresponding homogeneous Sobolev spaces, i.e. $\dot{W}^{k,p}(\mathcal B) = \{ g \in L^1_{\rm loc}(\mathcal B)\, : \, D^\alpha g \in L^p(\mathcal B),\ |\alpha| = k \}$. Recall that $\dot{W}^{k,p}$ is the completion of $C^\infty_c(\overline{\mathcal B})$ with respect to the $L^p$ norm of the $k$-th order derivatives. Moreover, the notation $B^s_{p,r}(\mathcal B)$ stands for the Besov spaces on $\mathcal B$, which are interpolation spaces between the Sobolev ones. \\ The symbol $\mathcal{M}^+(\mathcal B)$ denotes the cone of non-negative Borel measures on $\mathcal B$. 
For the sake of simplicity, we will omit the set $\mathcal B$ from the notation, pointing it out explicitly only when needed. In the whole thesis, the symbols $c$ and $C$ will denote generic multiplicative constants, which may change from line to line, and which do not depend on the small parameter $\varepsilon$. Sometimes, we will explicitly point out the quantities these constants depend on, by putting them inside brackets.\\ In addition, we agree to write $f\sim g$ whenever we have $c\, g\leq f \leq C\, g$, and $f\lesssim g$ if $f\leq Cg$. Let $\big(f_\varepsilon\big)_{0<\varepsilon\leq1}$ be a family of functions in a normed space $Y$. If this family is bounded in $Y$, we use the notation $\big(f_\varepsilon\big)_{\varepsilon} \subset Y$. \medbreak As we will see in the sequel (we refer in particular to Chapters \ref{chap:multi-scale_NSF} and \ref{chap:BNS_gravity}), one of the main features of our asymptotic analysis is that the limit flow will be \emph{two-dimensional} and \emph{horizontal} along the plane orthogonal to the rotation axis. Let us then introduce some notation to better describe this phenomenon. Let $\Omega$ be a domain in $\mathbb{R}^3$. We decompose $\vec x\in\Omega$ into $\vec x=(x^h,x^3)$, with $ x^h\in\mathbb{R}^2$ denoting its horizontal component. Analogously, for a vector field $\vec v=(v^1,v^2,v^3)\in\mathbb{R}^3$, we set $\vec v^h=(v^1,v^2)$ and we define the differential operators $\nabla_h$ and ${\rm div}\,_{\!h}$ as the usual operators, but acting only with respect to $x^h$. In addition, we define the operator $\nabla^\perp_h\,:=\,\bigl(-\partial_2\,,\,\partial_1\bigr)$. Finally, we introduce the Helmholtz projection $\mathbb{H}[\vec{v}]$ of a vector field $\vec{v}\in L^p(\Omega; \mathbb{R}^3)$ onto the subspace of divergence-free vector fields. 
It is defined by the decomposition \begin{equation*} \vec{v} = \mathbb{H}[\vec{v}] + \nabla_x \Psi\, , \end{equation*} where $\Psi \in \dot W^{1,p}(\Omega)$ is the unique solution of $$\int_{\Omega} \nabla_x \Psi \cdot \nabla_{x} \varphi \dx = \int_{\Omega} \vec{v} \cdot \nabla_x \varphi \dx \quad\mbox{for all } \varphi \in C^\infty_c (\overline{\Omega}),$$ which formally means $\Delta \Psi = {\rm div}\, \vec{v}$ in $\Omega$ and $\mathbb{H}[\vec{v}] \cdot \n |_{ \partial \Omega } = 0$. The symbol $\mathbb{H}_h$ denotes instead the Helmholtz projection on $\mathbb{R}^2$. Observe that, in the sense of Fourier multipliers, one has $\mathbb{H}_h\vec f\,=\,-\nabla_h^\perp(-\Delta_h)^{-1}{\rm curl}_h\vec f$. Moreover, since we will deal with a periodic problem in the $x^{3}$-variable, we also introduce the following decomposition: for a vector field $X$, we write \begin{equation} \label{eq:decoscil} X(x)=\langle X\rangle (x^{h})+\dbtilde{X}(x)\quad\qquad \text{ with }\quad \langle X\rangle(x^{h})\,:=\,\frac{1}{\left|\mathbb{T}^1\right|}\int_{\mathbb{T}^1}X(x^{h},x^{3})\, \dx^{3}\,, \tag{OSC} \end{equation} where $\mathbb{T}^1\,:=\,[-1,1]/\sim$ is the one-dimensional flat torus (here $\sim$ denotes the equivalence relation which identifies $-1$ and $1$) and $\left|\mathbb{T}^1\right|$ denotes its Lebesgue measure. Notice that $\dbtilde{X}$ has zero vertical average, and therefore we can write $\dbtilde{X}(x)=\partial_{3}\dbtilde{Z}(x)$ with $\dbtilde{Z}$ having zero vertical average as well. \chapter{Contributions of the thesis} \section*{The Navier-Stokes-Fourier problem: some physical insight} In this thesis, we devote ourselves to the study of the behaviour of fluid flows characterized by large time and space scales. Typical examples of such flows are currents in the atmosphere and the ocean, but there are of course many other situations where such fluids occur beyond the Earth, like flows on stars or other celestial bodies. 
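Before proceeding, let us illustrate the horizontal Helmholtz projection $\mathbb{H}_h$ introduced in the notation above. The following minimal numerical sketch (purely illustrative and not part of the thesis; the function name and the periodic 2-D box are our own assumptions) realizes, via the FFT, the Fourier-multiplier form $\mathbb{H}_h\vec f\,=\,-\nabla_h^\perp(-\Delta_h)^{-1}{\rm curl}_h\vec f$, which amounts to multiplying $\widehat{\vec f}(k)$ by the matrix ${\rm Id}-k\,k^T/|k|^2$:

```python
import numpy as np

def helmholtz_h(vx, vy, box=2.0 * np.pi):
    """Horizontal Helmholtz (Leray) projection on a periodic 2-D box.

    In Fourier variables this realizes
        H_h f = -grad_h^perp (-Delta_h)^{-1} curl_h f,
    i.e. multiplication of f-hat by Id - k k^T / |k|^2, which removes
    the gradient (irrotational) part of the field.
    """
    n = vx.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0  # avoid 0/0; the mean (k = 0) mode has no gradient part
    fx, fy = np.fft.fft2(vx), np.fft.fft2(vy)
    proj = (kx * fx + ky * fy) / k2  # (k . f-hat) / |k|^2
    px, py = fx - kx * proj, fy - ky * proj
    return np.fft.ifft2(px).real, np.fft.ifft2(py).real

# Example: project away the gradient part of a mixed field.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
# divergence-free part (cos y, 0) plus gradient part grad(sin x cos y)
vx = np.cos(Y) + np.cos(X) * np.cos(Y)
vy = -np.sin(X) * np.sin(Y)
px, py = helmholtz_h(vx, vy)
```

On exactly resolved Fourier modes the projection is exact up to round-off: here `px` recovers the divergence-free part $(\cos y,0)$, `py` vanishes, and the gradient part $\nabla_h(\sin x\,\cos y)$ is removed.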
At those scales, the effects of the rotation of the ambient space (which, in the case of oceans or the atmosphere, is the Earth) are not negligible, and the fluid motion is influenced by the action of a strong Coriolis force. There are two other features that characterize the dynamics of these flows, usually called geophysical flows (see \cite{C-R}, \cite{Ped} and \cite{Val}, for instance): the compressibility or incompressibility of the fluid and the stratification effects (i.e. density variations, essentially due to gravity). The relevance of these attributes is ``measured'' by introducing, in the mathematical model, three positive non-dimensional parameters which, for geophysical flows, are assumed to be small. Those parameters are: \begin{itemize} \item the \emph{Mach} number $Ma$, which sets the size of isentropic departures from incompressible flows: the smaller $Ma$ is, the weaker the compressibility effects are; \item the \emph{Froude} number $Fr$, which measures the importance of the stratification effects in the dynamics: the smaller $Fr$ is, the stronger the gravitational effects are; \item the \emph{Rossby} number $Ro$, which is related to the rotation of the ambient system: when $Ro$ is very small, the effects of the fast rotation are predominant in the dynamics. \end{itemize} We adopt a simplifying assumption (often made in physical and mathematical studies), which consists in restricting our attention to flows at mid-latitudes, i.e. flows which take place far enough from the poles and the equatorial zone. In this context, the variations of rotational effects due to the latitude are negligible. Denote by $\varrho ,\, \vartheta\geq 0$ the density and the absolute temperature of the fluid, respectively, and by $\vec{u}\in \mathbb{R}^3$ its velocity field: the full 3-D Navier-Stokes-Fourier system, in its non-dimensional form, can be written (see e.g. 
\cite{F-N}) as \begin{equation} \label{eq_i:NSF} \begin{cases} \partial_t \varrho + {\rm div}\, (\varrho\vec{u})=0\ \\[3ex] \partial_t (\varrho\vec{u})+ {\rm div}\,(\varrho\vec{u}\otimes\vec{u}) + \dfrac{\vec{e}_3 \times \varrho\vec{u}}{Ro}\, + \dfrac{1}{Ma^2} \nabla_x p(\varrho,\vartheta) \\[1ex] \qquad \qquad \qquad \qquad \qquad \qquad \qquad \; \; \; ={\rm div}\, \mathbb{S}(\vartheta,\nabla_x\vec{u}) + \dfrac{\varrho}{Ro^2} \nabla_x F + \dfrac{\varrho}{Fr^2} \nabla_x G \\[3ex] \partial_t \bigl(\varrho s(\varrho, \vartheta)\bigr) + {\rm div}\, \bigl(\varrho s (\varrho,\vartheta)\vec{u}\bigr) + {\rm div}\,\left(\dfrac{\q(\vartheta,\nabla_x \vartheta )}{\vartheta} \right) = \sigma\,, \tag{NSF} \end{cases} \end{equation} which is set in the infinite straight 3-D strip \begin{equation} \label{eq:domain} \Omega\,:=\,\mathbb{R}^2\times\,]0,1[\,. \tag{DOM} \end{equation} In system \eqref{eq_i:NSF} above, the functions $s,\vec{q},\sigma$ are the specific entropy, the heat flux and the entropy production rate, respectively, and $\mathbb{S}$ is the viscous stress tensor, which satisfies Newton's rheological law (see Subsections \ref{sss:primsys} and \ref{sss:structural} for a more precise formulation). The Coriolis force is represented by \begin{equation} \label{def:Coriolis} \mathfrak C(\vr,\vu)\,:=\,\frac{1}{Ro}\,\vec e_3\times\vr\,\vu\,, \tag{COR} \end{equation} where $\vec e_3=(0,0,1)$ and the symbol $\times$ stands for the classical external product of vectors in $\mathbb{R}^3$. In particular, the previous definition implies that the rotation takes place around the vertical axis, and that its strength does not depend on the latitude (see e.g. \cite{C-R} and \cite{Ped} for details). 
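To fix ideas on the orders of magnitude involved, the three non-dimensional numbers can be evaluated on typical mid-latitude oceanic scales. The values below are rough textbook-style assumptions, chosen purely for illustration (they do not come from the thesis):

```python
import math

# Illustrative mid-latitude ocean scales (all values are assumptions):
U = 0.1      # typical horizontal current speed      [m/s]
L = 1.0e5    # horizontal length scale               [m]
H = 1.0e3    # vertical (depth) scale                [m]
f = 1.0e-4   # Coriolis parameter 2*Omega*sin(lat)   [1/s]
c = 1.5e3    # sound speed in sea water              [m/s]
g = 9.81     # gravitational acceleration            [m/s^2]

Ro = U / (f * L)            # Rossby number: fast rotation when small
Ma = U / c                  # Mach number: weak compressibility when small
Fr = U / math.sqrt(g * H)   # Froude number: strong gravity effects when small

print(f"Ro = {Ro:.1e}, Ma = {Ma:.1e}, Fr = {Fr:.1e}")
```

With these (assumed) values one gets $Ma \ll Fr \ll Ro \ll 1$: all three effects are singular, and in particular $Ma/Fr \ll 1$, i.e. a low-stratification regime of the kind studied below.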
We point out that, despite all those simplifications, the resulting model is already able to capture several physically relevant phenomena occurring in the dynamics of geophysical flows: the so-called \emph{Taylor-Proudman theorem}, the formation of \emph{Ekman layers} and the propagation of \emph{Poincar\'e waves}. We refer to \cite{C-D-G-G} for a more in-depth discussion. In the present thesis, we avoid boundary layer effects, i.e. the issues linked to the Ekman layers, by imposing \emph{complete-slip} boundary conditions. As established by the \emph{Taylor-Proudman theorem} in geophysics, the fast rotation imposes a certain rigidity/stability, forcing the motion to take place on planes orthogonal to the rotation axis. Therefore, the dynamics becomes purely two-dimensional and horizontal, and the fluid tends to move in vertical columns. However, such an ideal configuration is hindered by another fundamental force acting at geophysical scales, gravity, which works to restore the vertical stratification of the density. The gravitational force is described in system \eqref{eq_i:NSF} by the term \[ \mathcal G(\vr)\,:=\,\frac{1}{Fr^2}\,\vr\,\nabla_xG\, , \] where in our case $G(x)=\,G(x^3)\,=\,-\,x^3$. Moreover, the gravitational effects are weakened by the presence of the centrifugal force $$\mathfrak F(\vr):=\frac{1}{Ro^2} \, \vr\, \nabla_x F\, , $$ with $F(x)=|x^h|^2$. Such a force is an inertial force which, at mid-latitudes, slightly shifts the direction of gravity. Thus, the competition between the stabilising effects of rotation and the vertical stratification due to gravity translates, in the model, into the competition between the orders of magnitude of $Ro$ and $Fr$. 
Actually, it turns out that the gravity $\mathcal G$ acts in combination with the pressure force $$ \mathfrak P(\vr, \vartheta)\,:=\,\frac{1}{Ma^2}\,\nabla_x p(\vr, \vartheta)\,, $$ where $p$ is a known smooth function of the density and the temperature of the fluid (see Subsection \ref{sss:structural}). We notice that the terms $\mathfrak C,\, \mathcal G,\, \mathfrak P$ and $\mathfrak F$ enter the model with large prefactors; therefore, our aim is to study the system when $Ma, \, Fr$ and $Ro$ are small, in different regimes. \section*{The multi-scale analysis} At the mathematical level, in the last 30 years there has been a huge amount of work devoted to the rigorous justification, in various functional frameworks, of the reduced models considered in geophysics. Reviewing the whole literature on this topic goes far beyond the scope of this introductory part; therefore, we make the choice of reporting only the works which deal with the presence of the Coriolis force \eqref{def:Coriolis}. We also decide to postpone, to the next part, the discussion of incompressible models, since they are less pertinent to the multi-scale analysis, due to the rigidity imposed by the divergence-free constraint on the velocity field of the fluid. The framework of compressible fluid models, instead, provides a much richer setting for the multi-scale analysis of geophysical flows. In addition, we choose to focus our attention mostly on works dealing with viscous fluids and which perform the asymptotic study for general ill-prepared initial data. \subsection*{Previous results} The first results in the above direction were obtained by Feireisl, Gallagher and Novotn\'y in \cite{F-G-N}, and together with G\'erard-Varet in \cite{F-G-GV-N}, for the barotropic Navier-Stokes system (see also \cite{B-D-GV} for a preliminary study and \cite{G-SR_Mem} for the analysis of equatorial waves). 
There, the authors investigated the combined low Rossby number regime (fast rotation effects) and low Mach number regime (weak compressibility of the fluid), under the scaling \begin{align} Ro\,&=\,\varepsilon\tag{LOW RO}\\ Ma\,&=\,\varepsilon^m \qquad\qquad \mbox{ with }\quad m\geq 0 \,, \tag{LOW MA}\label{eq:scale} \end{align} where $\varepsilon\in\,]0,1]$ is a small parameter, which one lets go to $0$ in order to derive the reduced model. In the case $m=1$ in \eqref{eq:scale}, the system presents an isotropic scaling, since $Ro$ and $Ma$ act at the same order of magnitude and the pressure and rotation terms keep in balance (the so-called \emph{quasi-geostrophic balance}) in the limit. The limit system is identified as the so-called \emph{quasi-geostrophic equation} for a stream-function of the target velocity field. In \cite{F-G-GV-N}, where $m>1$ and the centrifugal force is included in addition, the pressure term instead predominates (over the Coriolis force) in the dynamics of the fluid. In this case, the limit system is described by a $2$-D incompressible Navier-Stokes system, and the difficulties generated by the anisotropy of the scaling are overcome by using dispersive estimates. Afterwards, Feireisl and Novotn\'y continued the multi-scale analysis for the same system, still without the centrifugal force term, by considering the effects of a low stratification, i.e. $Ma/Fr\rightarrow 0$ as $\varepsilon\rightarrow 0^+$ (see \cite{F-N_AMPA}, \cite{F-N_CPDE}). We refer to \cite{F_MA} for a similar study in the context of capillary models, where the choice $m=1$ was made, but the anisotropy was given by the scaling fixed for the internal forces term (the so-called Korteweg stress tensor). In addition, we have to mention \cite{F_2019} for the case of large Mach numbers with respect to the Rossby parameter, namely $0\leq m<1$ in \eqref{eq:scale}. 
Since, in that instance, the pressure gradient is not strong enough to compensate the Coriolis force, in order to find some relevant limit dynamics one has to penalise the bulk viscosity coefficient. The analysis of models presenting also heat transfer is much more recent, and began with the work \cite{K-M-N} by Kwon, Maltese and Novotn\'y. In that paper, the authors considered a multi-scale problem for the full Navier-Stokes-Fourier system with Coriolis and gravitational forces ($F=0$ therein), taking the scaling \begin{equation} \label{eq:scale-G} {Fr}\,=\,\varepsilon^n\,,\qquad\qquad\mbox{ with }\quad 1\,\leq\,n\,<\,\frac{m}{2}\,.\tag{LOW FR} \end{equation} In particular, in that paper, the choice \eqref{eq:scale-G} implied that $m>2$, and the case $n=m/2$ was left open. Similar restrictions on the parameters can be found in \cite{F-N_CPDE} for the barotropic model. Such restrictions have to be ascribed to the techniques used for proving convergence, which are based on a combination of the relative energy/relative entropy method with dispersive estimates derived from oscillatory integrals (notice that an even larger restriction, $m>10$, appears in \cite{F-G-GV-N}). On the other hand, it is worth underlining that relative energy methods allow one to get a precise rate of convergence and to consider also inviscid and non-diffusive limits (in those cases, one does not dispose of uniform bounds on $\nabla_x\vartheta$ and $\nabla_x\vec{u}$). The case $m=1$ was handled in the subsequent work \cite{K-N} by Kwon and Novotn\'y, resorting to similar techniques (there, however, the gravitational term is not penalised at all). \subsection*{Novelties} The first part of this thesis is devoted to the analysis of multi-scale problems, focusing on the full Navier-Stokes-Fourier system introduced in \eqref{eq_i:NSF}. 
In a first instance, we improve the choice of scaling \eqref{eq:scale-G} by taking the endpoint case $n=m/2$ with $m\geq1$ (this is the scaling adopted throughout Chapter \ref{chap:multi-scale_NSF}). Of course, we are still in a regime of low stratification, since $Ma/Fr\ra0$, but having $Fr=\sqrt{Ma}$ allows us to capture some additional qualitative properties of the limit dynamics. In addition, we add to the system the centrifugal force term $\nabla_x F$ (in the spirit of \cite{F-G-GV-N}), which is a source of technical troubles, due to its unboundedness. Let us now comment on all these issues in detail. First of all, in the absence of the centrifugal force, namely when $F=0$, we are able to perform the incompressible, mild stratification and fast rotation limits for the \emph{whole range} of values of $m\geq 1$, in the framework of \emph{finite energy weak solutions} to the Navier-Stokes-Fourier system \eqref{eq_i:NSF} and for general \emph{ill-prepared initial data}. In the case $m>1$, the incompressibility and stratification effects are predominant with respect to the Coriolis force: we then prove convergence to the well-known \emph{Oberbeck-Boussinesq system} (see for instance Paragraph 1.6.2 of \cite{Z} for physical insights about that system), giving a rigorous justification to this approximate model in the context of fast rotating fluids. Thus, we can state the following theorem (see Theorem \ref{th:m-geq-2} for the accurate statement). \begin{theoremnonum}\label{thm_1} Consider system \eqref{eq_i:NSF}. Let $\Omega = \mathbb{R}^2 \times\,]0,1[\,$. Let $F=|x^{h}|^{2}$ and $G=-x^{3}$. Take $n=m/2$ and either ${m\geq 2}$, or ${m>1}$ and ${F=0}$. 
Then, \begin{align*} \varrho_\ep \rightarrow 1 \\ R_{\ep}:=\frac{\varrho_\ep - 1}{\ep^m} \weakstar R \\ \vec{u}_\ep \weak \vec{U} \\ \Theta_{\ep}:=\frac{\vartheta_\ep - \bar{\vartheta}}{\ep^m} \weak \Theta \, , \end{align*} where, in accordance with the Taylor-Proudman theorem, one has $$\vec{U} = (\vec U^h,0),\quad \quad \vec U^h=\vec U^h(t,x^h),\quad \quad {\rm div}\,_h\vec U^h=0.$$ Moreover, $\Big(\vec{U}^h,\, \, R ,\, \, \Theta \Big)$ solves, in the sense of distributions, the incompressible Oberbeck-Boussinesq type system \begin{align*} & \partial_t \vec U^{h}+{\rm div}\,_{h}\left(\vec{U}^{h}\otimes\vec{U}^{h}\right)+\nabla_h\Gamma-\mu (\overline\vartheta )\Delta_{h}\vec{U}^{h}=\delta_2(m)\langle R\rangle\nabla_{h}F \\[2ex]& \partial_t\Theta\,+\,{\rm div}_h(\Theta\,\vec U^h)\,-\,\kappa(\overline\vartheta)\,\Delta\Theta\,=\,\overline\vartheta\,\vec{U}^h\cdot\nabla_h \overline{\mathcal G} \\[2ex] & \nabla_{x}\left( \partial_\varrho p(1,\overline{\vartheta})\,R\,+\,\partial_\vartheta p(1,\overline{\vartheta})\,\Theta \right)\,=\,\nabla_{x}G\,+\,\delta_2(m)\,\nabla_{x}F\, , \end{align*} where $\overline{\mathcal G}$ is the sum of the external forces, $\,G\,+\,\delta_2(m)F$, $\Gamma \in \mathcal D^\prime$ and $\delta_2(m)= 1$ if $m=2$, $\delta_2(m)=0$ otherwise. \end{theoremnonum} We point out that the target velocity field is $2$-dimensional, according to the celebrated Taylor-Proudman theorem in geophysics: in the limit of high rotation, the fluid motion tends to have a planar behaviour; it takes place on planes orthogonal to the rotation axis (i.e. horizontal planes in our model) and is essentially constant along the vertical direction. We refer to \cite{C-R}, \cite{Ped} and \cite{Val} for more details on the physical side. 
Notice however that, although the limit dynamics is purely horizontal, the limit density and temperature variations, $R$ and $\Theta$ respectively, appear to be stratified: this is the main effect of taking $n=m/2$ for the Froude number in \eqref{eq:scale-G}. This is also the main new qualitative property here, with respect to the previous studies, and it justifies calling this scaling \emph{``critical''}. When $m=1$, instead, all the forces act at the same scale, and then they balance each other asymptotically for $\varepsilon\ra0^+$. As a result, the limit motion is described by the so-called \emph{quasi-geostrophic equation} for a suitable function $q$, which is linked to $R$ and $\Theta$ (respectively, the target density and temperature variations) and to the gravity, and which plays the role of a stream-function for the limit velocity field. This quasi-geostrophic equation is coupled with a scalar transport-diffusion equation for a new quantity $\Upsilon$, mixing $R$ and $\Theta$. The precise statement of the following theorem can be found in Paragraph \ref{ss:results}. \begin{theoremnonum} Consider system \eqref{eq_i:NSF}. Let $\Omega = \mathbb{R}^2 \times\,]0,1[\,$. Let ${F=0}$ and $G=-x^3$. Take ${m=1}$ and $n=1/2$. Then, one has the same convergences as in Theorem \ref{thm_1}, and $\vec U$ satisfies the Taylor-Proudman theorem. We define $$ \Upsilon := \partial_\varrho s(1,\overline{\vartheta}) R + \partial_\vartheta s(1,\overline{\vartheta})\,\Theta$$ $$q= \partial_\varrho p(1,\overline{\vartheta}) R +\partial_\vartheta p(1,\overline{\vartheta})\Theta -G\, ; $$ then $q=q(t,x^h)$ and $ \vec{U}^{h}=\nabla_h^{\perp} q$. 
Moreover, the couple $\Big(q,\, \, \Upsilon \Big)$ satisfies, in the sense of distributions, \begin{align*} & \partial_{t}\left(q-\Delta_{h}q\right) -\nabla_{h}^{\perp}q\cdot \nabla_{h}\left( \Delta_{h}q\right) +\mu (\overline{\vartheta}) \Delta_{h}^{2}q=\langle X\rangle \\[2.5ex] & \partial_{t} \Upsilon +\nabla_h^\perp q\cdot\nabla_h\Upsilon-\kappa(\overline\vartheta) \Delta \Upsilon\,=\, \,\kappa(\overline\vartheta)\,\Delta_hq\, , \end{align*} where $\langle X\rangle$ is a suitable ``external'' force. \end{theoremnonum} This is in the spirit of the result in \cite{K-N}, but, once again, here we also capture gravitational effects in the limit, so that we can no longer say that $R$ and $\Theta$ (and then $\Upsilon$) are horizontal; on the contrary, and somewhat surprisingly, $q$ and the target velocity $\vec U$ are purely horizontal. At this point, let us make a couple of remarks. First of all, we mention that, as announced above, we are able to add to the system the effects of the centrifugal force $\nabla_x F$. Unfortunately, in this case the restriction $m\geq 2$ appears (which is still less severe than the ones imposed in \cite{F-G-GV-N}, \cite{F-N_CPDE} and \cite{K-M-N}). However, we show that such a restriction is not of a technical nature, but is hidden in the structure of the wave system (see Proposition \ref{p:target-rho_bound} and Remark \ref{slow_rho}). The result for $F\neq 0$ is analogous to the one presented above for the case $F=0$ and $m>1$: when $m>2$, the anisotropy of the scaling is too large to see any effect due to $F$ in the limit, and no qualitative differences appear with respect to the case $F=0$; when $m=2$, instead, additional terms, related to $F$, appear in the Oberbeck-Boussinesq system (see Theorem \ref{thm_1}). 
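At a purely formal level, the origin of the identification $\vec{U}^{h}=\nabla_h^{\perp} q$ in the case $m=1$ can be sketched as follows (a heuristic reading only, not the rigorous argument developed in the thesis): at leading order, the Coriolis force balances the pressure and gravitational forces,

```latex
% Formal leading-order balance for m = 1, n = 1/2 (heuristic sketch):
\begin{equation*}
\vec e_3\times\vec U\,+\,\nabla_x\Bigl(\partial_\varrho p(1,\overline{\vartheta})\,R
\,+\,\partial_\vartheta p(1,\overline{\vartheta})\,\Theta\Bigr)\,=\,\nabla_x G\,.
\end{equation*}
% Recalling the definition of q, the horizontal component of this relation reads
% (U^h)^perp + nabla_h q = 0, while the vertical component gives d_3 q = 0; hence
\begin{equation*}
q\,=\,q(t,x^h)\qquad\mbox{ and }\qquad \vec U^{h}\,=\,\nabla_h^{\perp} q\,.
\end{equation*}
```

The rigorous derivation, of course, passes through the analysis of the acoustic-Poincar\'e waves and the compactness arguments described below.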
In any case, the analysis will be considerably more complicated, since $F$ is not bounded in $\Omega$ (defined in \eqref{eq:domain} above), and this will demand an additional localisation procedure (already employed in \cite{F-G-GV-N}). We also point out that the classical existence theory of finite energy weak solutions for \eqref{eq_i:NSF} requires the physical domain to be a smooth \emph{bounded} subset of $\mathbb{R}^3$ (see \cite{F-N} for a comprehensive study). The theory was later extended in \cite{J-J-N} to cover the case of unbounded domains, and this might appear suitable for us in view of \eqref{eq:domain}. Nonetheless, the notion of weak solutions developed in \cite{J-J-N} is somehow milder than the classical one (the authors speak in fact of \emph{very weak solutions}), inasmuch as the usual weak formulation of the entropy balance, i.e. the third equation in \eqref{eq_i:NSF}, has to be replaced by an inequality in the sense of distributions. Now, such a formulation is not convenient for us because, when deriving the system of acoustic-Poincar\'e waves, we need to combine the mass conservation and the entropy balance equations. In particular, this requires having true equalities, satisfied in the (classical) weak sense. In order to overcome this problem, we resort to the technique of \emph{invading domains} (see e.g. Chapter 8 of \cite{F-N}, \cite{F-Scho} and \cite{WK}): namely, for each $\varepsilon\in\,]0,1]$, we solve system \eqref{eq_i:NSF}, with the choice $n=m/2$ for the Froude number, in a smooth bounded domain $\Omega_\varepsilon$, where $\big(\Omega_\varepsilon\big)_\varepsilon$ converges (in a suitable sense) to $\Omega$ as $\varepsilon\ra0^+$, at a rate higher than the wave propagation speed (which is proportional to $\varepsilon^{-m}$). Such an ``approximation procedure'' will require some extra work. 
In order to prove our results, and to get the improvement on the values of the different parameters, we propose a unified approach, which actually works both in the case $m>1$ (allowing us to treat the anisotropy of the scaling quite easily) and in the case $m=1$ (allowing us to treat the more complicated singular perturbation operator). This approach is based on \emph{compensated compactness} arguments, first employed by Lions and Masmoudi in \cite{L-M} for dealing with the incompressible limit of the barotropic Navier-Stokes equations, and later adapted by Gallagher and Saint-Raymond in \cite{G-SR_2006} to the case of fast rotating (incompressible homogeneous) fluids. More recent applications of that method in the context of geophysical flows can be found in \cite{F-G-GV-N}, \cite{F_JMFM}, \cite{Fan-G} and \cite{F_2019}. The quoted method does not give any quantitative convergence, but only a qualitative one. The technique is purely based on the algebraic structure of the system, which allows one to find smallness (and vanishing in the limit) of suitable non-linear quantities, and fundamental compactness properties for other quantities. These strong convergence properties are by no means evident, because the singular terms are responsible for strong oscillations in time (the so-called acoustic-Poincar\'e waves) of the solutions, which may finally prevent the convergence of the non-linearities. Nonetheless, a fine study of the system of acoustic-Poincar\'e waves actually reveals compactness (for any $m\geq1$ if $F=0$, for $m\geq 2$ if $F\neq 0$) of a special quantity $\gamma_\varepsilon$, which combines (roughly speaking) the vertical averages of the momentum $\vec{V}_\varepsilon=\varrho_\varepsilon\vec{u}_\varepsilon$ (of its vorticity, in fact) and of another function $Z_\varepsilon$, obtained as a linear combination of the density and temperature variations (see Subsections \ref{sss:term1} and \ref{ss:convergence_1} for more details in this respect). 
Similar compactness properties have been highlighted in \cite{Fan-G} for incompressible density-dependent fluids in $2$-D, and in \cite{F_2019} for treating a multi-scale problem at ``large'' Mach numbers. In the end, the strong convergence of $\big(\gamma_\varepsilon\big)_\varepsilon$ turns out to be enough to pass to the limit in the convective term, and to complete the proof of our results. To conclude this part, let us mention that we expect the same technique to enable us to treat also the case $m=1$ and $F\neq0$ (as was done in \cite{F-G-GV-N} for barotropic flows). Nonetheless, the presence of heat transfer deeply complicates the wave system, and new technical difficulties arise in the analysis of the convective term (the approach of \cite{F-G-GV-N}, devised for the case of constant temperature, does not work here). For that reason, we are not able to handle that case here, and it still remains open. Another feature that remains uncovered by our analysis is the \emph{strong stratification} regime, namely when the ratio ${Ma}/{Fr}$ is of order $O(1)$. This regime is particularly delicate for fast rotating fluids. This is in stark contrast with the results available on the derivation of the anelastic approximation, where rotation is neglected: we refer e.g. to \cite{Masm}, \cite{BGL}, \cite{F-K-N-Z} and, more recently, \cite{F-Z} (see also \cite{F-N} and references therein for a more detailed account of previous works). The reason for that has to be ascribed precisely to the competition between vertical stratification (due to gravity) and horizontal stability (which the Coriolis force tends to impose): in the strong stratification regime, vertical oscillations of the solution (seem to) persist in the limit, and the available techniques do not at present allow one to deal with this problem in its full generality. 
Nonetheless, partial results have been obtained in the case of well-prepared initial data, by means of a relative entropy method: we refer to \cite{F-L-N} for the first result, where the mean motion is derived, and to \cite{B-F-P} for an analysis of Ekman boundary layers in that framework. \section*{Going beyond the critical scaling $Fr=\sqrt{Ma}$} At this point, we are interested in going beyond the critical choice $Fr=\sqrt{Ma}$ considered in the previous paragraph, and we wish to investigate other regimes, in which the stratification has an even more important effect. For clarity of exposition, we neglect the centrifugal effects and the heat transfer process in the fluid, focusing on the classical barotropic Navier-Stokes system: \begin{equation}\label{2D_Euler_system} \begin{cases} \partial_t \varrho + {\rm div}\, (\varrho\vec{u})=0\ \\[2ex] \partial_t (\varrho\vec{u})+ {\rm div}\,(\varrho\vec{u}\otimes\vec{u}) + \dfrac{\vec{e}_3 \times \varrho\vec{u}}{Ro}\, + \dfrac{1}{Ma^2} \nabla_x p(\varrho) ={\rm div}\, \mathbb{S}(\nabla_x\vec{u}) + \dfrac{\varrho}{Fr^2} \nabla_x G\, . \tag{NSC} \end{cases} \end{equation} The more general system presented in \eqref{eq_i:NSF} can be handled at the price of additional technicalities, already discussed above (remember, in particular, the restriction on the Mach number due to the presence of the centrifugal force). The goal now is to perform the asymptotic limit for system \eqref{2D_Euler_system} in the regimes where we assume \begin{equation*} Ma=\varepsilon^m, \quad \quad Ro=\varepsilon \quad \quad \text{and}\quad \quad Fr=\varepsilon^n \end{equation*} with \begin{equation*} \mbox{ either }\qquad m\,>\,1\quad\mbox{ and }\quad m\,<\,2\,n\,\leq\,m+1\,,\qquad\qquad\mbox{ or }\qquad m\,=\,1\quad\mbox{ and }\quad \frac{1}{2}\,<\,n\,<\,1\,. 
\end{equation*} The restriction $n<1$ when $m=1$ is imposed in order to avoid a strong stratification regime: as already mentioned, it is not clear at present how to deal with this case for general ill-prepared initial data, as all the available techniques seem to break down. On the other hand, the restriction $2\,n\leq m+1$ (for $m>1$) seems to be of a technical nature. However, it comes out naturally in at least two points of our analysis (see e.g. Subsections \ref{ss:ctl1_G} and \ref{sss:term2_G}), and it is not clear to us if, and how, it is possible to bypass it and consider the remaining range of values $(m+1)/2\,<\,n\,<\,m$. Let us point out that, in our considerations, the relation $n<m$ always holds true, so we will always work in a low stratification regime. At the qualitative level, our main results will be quite similar to the ones presented in the previous part. In particular, the limit dynamics will be the same, after distinguishing the two cases $m>1$ and $m=1$ (see Theorems \ref{th:m>1} and \ref{th:m=1}). The main point we want to stress now is the fine use not only of the structure of the system, but also of the precise structure of each term, in order to pass to the limit. To be more precise, considering values of $n$ above the threshold $2n=m$ is made possible by special algebraic cancellations involving the gravity term in the system of wave equations. Such cancellations are due to the peculiar form of the gravitational force, which depends on the vertical variable only, and they do not appear, in general, if one wants to consider the action of different forces on the system. As one may easily guess, the case $2n=m+1$ is more involved: indeed, this choice of the scaling implies the presence of an additional bilinear term of order $O(1)$ in the computations; in turn, this term might not vanish in the limit, differently from what happens in the case $2n<m+1$.
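Before discussing how this additional term is handled, the admissible ranges of the parameters $(m,n)$ introduced above can be summarised in a small sanity check (a purely illustrative helper, not part of the analysis):

```python
def admissible(m: float, n: float) -> bool:
    """Check whether (m, n) lies in the admissible scaling ranges
    Ma = eps^m, Ro = eps, Fr = eps^n considered here:
    either m > 1 and m < 2n <= m + 1, or m = 1 and 1/2 < n < 1."""
    if m > 1:
        return m < 2 * n <= m + 1
    if m == 1:
        return 0.5 < n < 1
    return False

assert admissible(2, 1.5)        # 2n = m + 1: the delicate endpoint case
assert admissible(1, 0.75)
assert not admissible(1, 1)      # strong stratification: excluded
assert not admissible(2, 1.6)    # 2n > m + 1: outside the admissible range
```

Note that, in every admissible configuration, one has $n<m$, in accordance with the low stratification regime mentioned above.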
In order to see that this does not occur, and that this term indeed disappears in the limit process, one has to use the structure of the system more thoroughly, so as to control the oscillations (see equation \eqref{rel_oscillations_bis} and the computations below). \section*{The Euler system: the incompressible case} In the second part of this thesis, we shift our focus to an incompressible and inviscid system with a hyperbolic structure. More precisely, we are interested in describing the 2-D evolution of a fluid that takes place far enough from the physical boundaries. Therefore, in $\Omega:=\mathbb{R}^2$, the Euler type system reads \begin{equation}\label{full Euler_intro} \begin{cases} \partial_t \varrho +{\rm div}\, (\varrho \vu)=0\\ \partial_t (\varrho \vu)+{\rm div}\, (\varrho \vu \otimes \vu)+ \dfrac{1}{Ro}\varrho \vu^\perp+\nabla_x p=0\\ {\rm div}\, \vu =0\, ,\tag{E} \end{cases} \end{equation} where $\vu^\perp:=(-u^2, u^1)$ is the rotation by an angle $\pi/2$ of the velocity field $\vu=(u^1,u^2)$. The pressure term $\nabla_x p$ represents the Lagrangian multiplier associated to the divergence-free constraint on the velocity field. The main aim of this analysis will be to study the asymptotic behaviour of system \eqref{full Euler_intro} when $Ro=\varepsilon\rightarrow 0^+$. \subsection*{Known results} We will limit ourselves to giving a short exposition of known results dealing with density-dependent fluids. We refer instead to \cite{C-D-G-G} for an overview of the broad literature in the context of homogeneous rotating fluids (see also \cite{B-M-N_EMF} and \cite{B-M-N_AA} for the pioneering studies, concerning the homogeneous 3-D Euler and Navier-Stokes equations).
In the compressible cases discussed above, the fact that the pressure is a given function of the density brings a double advantage to the analysis: on the one hand, one can recover good uniform bounds for the oscillations (from the reference state) of the density; on the other hand, in the limit, one disposes of a stream-function relation between the densities and the velocities. On the contrary, although the incompressibility condition is physically well justified for geophysical fluids, only a few studies tackle this case. We refer to \cite{Fan-G}, in which Fanelli and Gallagher studied the fast rotation limit for viscous incompressible fluids with variable density. In the case when the initial density is a small perturbation of a constant state (the so-called \textit{slightly non-homogeneous} case), they proved convergence to a quasi-homogeneous type system. Instead, for general non-homogeneous fluids (the so-called \textit{fully non-homogeneous} case), they showed that the limit dynamics is described in terms of the vorticity and the density oscillation function, since one lacks enough regularity to prove convergence on the momentum equation itself (see more details below). We also have to mention \cite{C-F_RWA}, where the authors rigorously prove the convergence of the ideal magnetohydrodynamics (MHD) equations towards a quasi-homogeneous type system (see also \cite{C-F_Nonlin}, where a compensated compactness argument is adopted). Their method relies on a relative entropy inequality for the primitive system, which allows one to treat also the inviscid limit, but requires well-prepared initial data. \subsection*{New results} In Chapter \ref{chap:Euler}, we tackle the asymptotic analysis (for $\varepsilon\rightarrow 0^+$) of the density-dependent Euler system in the \textit{slightly non-homogeneous} context, i.e. when the initial density is a small perturbation of order $\varepsilon$ of a constant profile (say $\overline{\varrho}=1$).
These small perturbations around a constant reference state are physically justified by the so-called \textit{Boussinesq approximation} (see e.g. Chapter 3 of \cite{C-R} or Chapter 1 of \cite{Maj} in this respect). As a matter of fact, since the constant state $\overline{\varrho}=1$ is transported by a divergence-free vector field, the density can be written as $\varrho_\varepsilon=1+\varepsilon R_\varepsilon$ at any time (provided this is true at $t=0$), where one can derive good uniform bounds on $R_\varepsilon$. We also point out that, in the momentum equation of \eqref{full Euler_intro}, with the scaling $Ro=\varepsilon$, the Coriolis term can be rewritten as \begin{equation}\label{Coriolis} \frac{1}{\varepsilon}\varrho_\varepsilon \vec u_\varepsilon^\perp=\frac{1}{\varepsilon}\vec u_\varepsilon^\perp+R_\varepsilon\vec u_\varepsilon^\perp \, .\tag{2D COR} \end{equation} We notice that, thanks to the incompressibility condition, the first term on the right-hand side of \eqref{Coriolis} is actually a gradient: it can be ``absorbed'' into the pressure term, which must scale as $1/\varepsilon$. In fact, the only force that can compensate the effect of fast rotation in system \eqref{full Euler_intro} is, at geophysical scale, the pressure term: i.e. we can write $\nabla_x p_\varepsilon= (1/\varepsilon) \, \nabla_x \Pi_\varepsilon$. Let us point out that the \textit{fully non-homogeneous} case (where the initial density is a perturbation of an arbitrary state) is beyond the scope of our study. This case is more involved, and new technical difficulties arise both in the well-posedness analysis and in the asymptotic inspection. Indeed, as already highlighted in \cite{Fan-G} for the Navier-Stokes-Coriolis system, the limit dynamics is described by an underdetermined system which mixes the vorticity and the density fluctuations.
In order to depict the full limit dynamics (where the limit density variations and the limit velocities are decoupled), one had to assume stronger \textit{a priori} bounds than the ones which could be obtained by classical energy estimates. Nonetheless, the higher regularity involved is in general \textit{not} propagated uniformly in $\varepsilon$, due to the presence of the Coriolis term. In particular, the structure of the Coriolis term is more complicated than the one in \eqref{Coriolis} above, since one has $\varrho_{\varepsilon}=\overline \varrho+ \varepsilon \sigma_\varepsilon$ (with $\sigma_\varepsilon$ the fluctuation), if at the initial time we assume $\varrho_{0, \varepsilon}=\overline \varrho+ \varepsilon R_{0,\varepsilon}$, with $\overline \varrho$ the arbitrary reference state. At this point, if one plugs the previous decomposition of $\varrho_\varepsilon$ into \eqref{Coriolis}, a term of the form $(1/\varepsilon)\, \overline \varrho \ue^\perp$ appears: this term is a source of trouble when propagating the higher regularity estimates needed. Equivalently, if one tries to divide the momentum equation in \eqref{full Euler_intro} by the density $\varrho_\varepsilon$, then the previous issue is only translated into the analysis of the pressure term, which becomes $1/(\varepsilon \varrho_\varepsilon)\, \nabla_x \Pi_\varepsilon$. In light of the foregoing discussion, let us now point out the main difficulties arising in our problem. First of all, our model is an inviscid system with a hyperbolic type structure, for which we can expect \textit{no} smoothing effects and \textit{no} gain of regularity. For that reason, it is natural to look at the equations in \eqref{full Euler_intro} in a regular framework, like the $H^s$ spaces with $s>2$. The Sobolev spaces $H^s(\mathbb{R}^2)$, for $s>2$, are in fact embedded in the space $W^{1,\infty}$ of globally Lipschitz functions: this is a minimal requirement to preserve the initial regularity (see e.g.
Chapter 3 of \cite{B-C-D} and also \cite{D_JDE}, \cite{D-F_JMPA} for a broad discussion on this topic). Actually, all the Besov spaces $B^s_{p,r}(\mathbb{R}^d)$ which are embedded in $W^{1,\infty}(\mathbb{R}^d)$, a fact that occurs for $(s,p,r)\in \mathbb{R}\times [1,+\infty]^2$ such that \begin{equation}\label{Lip_assumption} s>1+\frac{d}{p} \quad \quad \quad \text{or}\quad \quad \quad s=1+\frac{d}{p} \quad \text{and}\quad r=1\, ,\tag{LIP} \end{equation} are good candidates for the well-posedness analysis (see Appendix \ref{app:Tools} for more details). However, the choice of working in $H^s\equiv B^s_{2,2}$ is dictated by the presence of the Coriolis force: we will deeply exploit the antisymmetry of this singular term. Moreover, the fluid is assumed to be incompressible, so that the pressure term is just a Lagrangian multiplier and does \textit{not} give any information on the density, unlike in the compressible case. In addition, due to the non-homogeneity, the analysis of the gradient of the pressure term is much more involved since we have to deal with an elliptic equation with \textit{non-constant} coefficients, namely \begin{equation}\label{elliptic_eq_introduction} -{\rm div}\, (A \, \nabla_x p)={\rm div}\, F \quad \text{where}\quad {\rm div}\, F:={\rm div}\, \left(\vu \cdot \nabla_x \vu+ \frac{1}{Ro} \vu^\perp \right)\quad \text{and}\quad A:=1/\varrho \, . \tag{ELL} \end{equation} The main difficulty is to get appropriate \textit{uniform} bounds (with respect to the rotation parameter) for the pressure term in the regular framework we will consider. We refer to \cite{D_JDE} and \cite{D-F_JMPA} for more details concerning the issues which arise in the analysis of the elliptic equation \eqref{elliptic_eq_introduction} in $B^s_{p,r}$ spaces. Once we have analysed the pressure term, we show that system \eqref{full Euler_intro} is locally well-posed in the $H^s$ setting. 
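To give a concrete flavour of the pressure analysis, consider the much simpler constant-coefficient case $A\equiv 1$ on a periodic box: then \eqref{elliptic_eq_introduction} reduces to a Poisson equation, which can be solved spectrally. The following sketch is purely illustrative (the source term is hand-picked, and none of this is part of the actual analysis, which deals with non-constant $A$ on the whole $\mathbb{R}^2$):

```python
import numpy as np

# Periodic 2-D grid on [0, 2*pi)^2 and integer Fourier frequencies
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(n, d=1.0 / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

# A hand-picked source F = (F1, F2), with div F = 2 cos(x) cos(y)
F1 = np.sin(X) * np.cos(Y)
F2 = np.cos(X) * np.sin(Y)
divF_hat = 1j * KX * np.fft.fft2(F1) + 1j * KY * np.fft.fft2(F2)

# Solve -Laplacian(p) = div F: in Fourier variables, |k|^2 p_hat = divF_hat;
# the zero mode of p is set to 0 (p is defined up to a constant)
p_hat = np.where(K2 > 0, divF_hat / np.where(K2 > 0, K2, 1.0), 0.0)
p = np.real(np.fft.ifft2(p_hat))

# For this source, the exact solution (with zero mean) is p = cos(x) cos(y)
assert np.allclose(p, np.cos(X) * np.cos(Y), atol=1e-10)
```

In the non-homogeneous case, of course, no such explicit inversion is available, and one has to work with the divergence form operator $-{\rm div}\,(A\,\nabla_x\,\cdot\,)$ directly.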
It is worth noticing that, in the local well-posedness theorem, all the estimates will be \textit{uniform} with respect to the rotation parameter and, in addition, the time of existence will be independent of $\varepsilon$. \begin{theoremnonum} Let $s>2$. For any $\varepsilon>0$, there exists a time $T_\varepsilon^\ast >0$ such that the system \eqref{full Euler_intro} has a unique solution $(\varrho_\varepsilon, \vu_\varepsilon, \nabla_x \Pi_\varepsilon)$ where \begin{itemize} \item $\varrho_\varepsilon$ belongs to the space $C^0([0,T_\varepsilon^\ast]\times \mathbb{R}^2)$ with $\nabla_x \varrho_\varepsilon \in C^0([0,T_\varepsilon^\ast]; H^{s-1}(\mathbb{R}^2))$; \item $\vu_\varepsilon$ and $\nabla_x \Pi_\varepsilon$ belong to the space $C^0([0,T_\varepsilon^\ast]; H^s(\mathbb{R}^2))$. \end{itemize} Moreover, $$\inf_{\varepsilon>0}T_\varepsilon^\ast >0\, .$$ \end{theoremnonum} With the local well-posedness result at hand, we perform the fast rotation limit for general \textit{ill-prepared} initial data. We show the convergence of system \eqref{full Euler_intro} towards what we call the \textit{quasi-homogeneous} incompressible Euler system \begin{equation}\label{Q-H_E_intro} \begin{cases} \partial_t R+{\rm div}\, (R\vu)=0 \\ \partial_t \vec u+{\rm div}\, \left(\vec{u}\otimes\vec{u}\right)+R\vu^\perp+ \nabla_x \Pi =0 \\ {\rm div}\, \vec u\,=\,0\,, \tag{QHE} \end{cases} \end{equation} where $R$ represents the limit of the fluctuations $R_\varepsilon$. We also point out that, in the momentum equation of \eqref{Q-H_E_intro}, a non-linear term of lower order (i.e. $R\vec u^\perp$) appears: it is a sort of remainder in the convergence of the Coriolis term, recast as in \eqref{Coriolis}.
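Incidentally, the fact that the singular part of \eqref{Coriolis} is a gradient can be made completely explicit. As a sketch of the (classical) computation, where the stream function $\psi_\varepsilon$ is introduced here only for illustration: since ${\rm div}\,\vec u_\varepsilon=0$ in $\mathbb{R}^2$, we can write $\vec u_\varepsilon\,=\,\nabla_x^\perp\psi_\varepsilon\,:=\,\bigl(-\partial_2\psi_\varepsilon,\,\partial_1\psi_\varepsilon\bigr)$, whence
\[
\vec u_\varepsilon^\perp\,=\,\bigl(-\partial_1\psi_\varepsilon,\,-\partial_2\psi_\varepsilon\bigr)\,=\,-\,\nabla_x\psi_\varepsilon\,,
\qquad\mbox{ so that }\qquad
\frac{1}{\varepsilon}\,\vec u_\varepsilon^\perp\,=\,-\,\nabla_x\left(\frac{\psi_\varepsilon}{\varepsilon}\right)\,,
\]
and this gradient can indeed be absorbed into the pressure term, which scales as $1/\varepsilon$.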
Passing to the limit in the momentum equation of \eqref{full Euler_intro} is no longer straightforward, although we are in the $H^s$ framework: the Coriolis term is responsible for strong oscillations in time of the solutions (the already quoted \textit{Poincar\'e waves}), which may prevent the convergence of the convective term towards the one of \eqref{Q-H_E_intro}. To overcome this issue, we employ the same approach mentioned above: the compensated compactness technique. Now, once the limit system is rigorously depicted, one could address its well-posedness: it is worth noticing that the global well-posedness of system \eqref{Q-H_E_intro} remains an open problem. However, roughly speaking, for $R_0$ small enough, system \eqref{Q-H_E_intro} is ``close'' to the $2$-D homogeneous incompressible Euler system, for which global well-posedness is well known. Thus, it is natural to wonder whether there exists an ``asymptotically global'' well-posedness result in the spirit of \cite{D-F_JMPA} and \cite{C-F_sub}: for small initial fluctuations $R_0$, the quasi-homogeneous system \eqref{Q-H_E_intro} behaves like the standard Euler equations, and the lifespan of its solutions tends to infinity. In particular, as already shown in \cite{C-F_sub} for the quasi-homogeneous ideal MHD system (see also references therein), we prove that the lifespan of solutions to \eqref{Q-H_E_intro} behaves like \begin{equation}\label{lifespan_Q-H} T_\delta^\ast \sim \log \log \frac{1}{\delta}\,,\tag{LIFE} \end{equation} where $\delta>0$ is the size of the initial fluctuations. This result on the time of existence of solutions to \eqref{Q-H_E_intro} draws our attention to the study of the lifespan of solutions to the primitive system \eqref{full Euler_intro}.
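Just to get a quantitative feeling for the (very slow) growth in \eqref{lifespan_Q-H}, one can tabulate $\log\log(1/\delta)$ for a few sizes $\delta$ of the initial fluctuation (plain arithmetic, with no constants tracked):

```python
import math

# Tabulate log log (1/delta): the lower bound for the lifespan grows,
# but extremely slowly, as the size delta of the fluctuation shrinks.
for delta in (1e-2, 1e-4, 1e-8, 1e-16):
    print(f"delta = {delta:.0e}:  log log (1/delta) = {math.log(math.log(1.0 / delta)):.3f}")

values = [math.log(math.log(1.0 / d)) for d in (1e-2, 1e-4, 1e-8, 1e-16)]
assert values == sorted(values)  # the bound is monotone in 1/delta...
assert values[-1] < 4            # ...but stays below 4 even for delta = 1e-16
```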
For the $3$-D \textit{homogeneous} Euler system with the Coriolis force, Dutrifoy \cite{Dut} proved that the lifespan of solutions tends to infinity in the fast rotation regime (see also \cite{Gall}, \cite{Cha} and \cite{Scro}, where the authors inspected the lifespan of solutions in the context of viscous homogeneous fluids). For system \eqref{full Euler_intro}, it is not clear to us how to find similar stabilization effects (due to the Coriolis term) in order to improve the lifespan of the solutions: for instance, to show that $T_\varepsilon^\ast\longrightarrow +\infty$ when $\varepsilon\rightarrow 0^+$. Nevertheless, independently of the rotational effects, we are able to state an ``asymptotically global'' well-posedness result in the regime of \textit{small} oscillations, in the sense of \eqref{lifespan_Q-H}: namely, when the size of the initial fluctuation $R_{0,\varepsilon}$ is small enough, of size $\delta >0$, the lifespan $T^\ast_\varepsilon$ of the corresponding solution to system \eqref{full Euler_intro} can be bounded from below by $T^\ast_\varepsilon\geq T^\ast(\delta)$, with $T^\ast (\delta)\longrightarrow +\infty$ when $\delta\rightarrow 0^+$ (see also \cite{D-F_JMPA} for a density-dependent fluid in the absence of the Coriolis force). More precisely, one has the following result.
\begin{propnonum} The lifespan $T_\varepsilon^\ast$ of the solution to the two-dimensional density-dependent incompressible Euler equations \eqref{full Euler_intro} with the Coriolis force is bounded from below by \begin{equation}\label{improv_life_fullE_intro} \frac{C}{\|\vec u_{0,\varepsilon}\|_{H^s}}\log\left(\log\left(\frac{C\, \|\vec u_{0,\varepsilon}\|_{H^s}}{\max \{\mathcal A_\varepsilon(0),\, \varepsilon \, \mathcal A_\varepsilon(0)\, \|\vec u_{0,\varepsilon}\|_{H^s}\}}+1\right)+1\right)\, ,\tag{BOUND} \end{equation} where $\mathcal A_\varepsilon (0):= \|\nabla_x R_{0,\varepsilon}\|_{H^{s-1}}+\varepsilon\, \|\nabla_x R_{0,\varepsilon}\|_{H^{s-1}}^{\lambda +1}$, for some suitable $\lambda\geq 1$. \end{propnonum} As an immediate corollary of the previous lower bound, if we consider the initial densities of the form $\varrho_{0,\varepsilon}=1+\varepsilon^{1+\alpha}R_{0,\varepsilon}$ with $\alpha >0$, then we get $T^\ast_\varepsilon\sim \log \log (1/\varepsilon)$. At this point, let us sketch the main steps to show \eqref{improv_life_fullE_intro} for the primitive system \eqref{full Euler_intro}. The key point in the proof of \eqref{improv_life_fullE_intro} is to study the lifespan of solutions in critical Besov spaces. In those spaces, we can take advantage of the fact that, when $s=0$, the $B^0_{p,r}$ norm of solutions can be bounded \textit{linearly} with respect to the Lipschitz norm of the velocity, rather than exponentially (see the works \cite{Vis} by Vishik and \cite{H-K} by Hmidi and Keraani). Since the triplet $(s,p,r)$ has to satisfy \eqref{Lip_assumption}, the lowest regularity Besov space we can reach is $B^1_{\infty,1}$. Then if $\vec u$ belongs to $B^1_{\infty,1}$, the vorticity $\omega := -\partial_2 u^1+\partial_1 u^2$ has the desired regularity to apply the quoted improved estimates by Hmidi-Keraani and Vishik (see Theorem \ref{thm:improved_est_transport} in this respect). 
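In this respect, it is useful to record how the curl acts on the singular part of the Coriolis term, rewritten as in \eqref{Coriolis}. As a sketch of the computation: with the convention ${\rm curl}\,\vec v\,:=\,-\partial_2 v^1+\partial_1 v^2$ (consistent with the definition of $\omega$ above), and using ${\rm div}\,\vec u=0$, one gets
\[
{\rm curl}\left(\frac{1}{\varepsilon}\,\vec u^\perp\right)\,=\,\frac{1}{\varepsilon}\,\Bigl(-\,\partial_2(-u^2)\,+\,\partial_1 u^1\Bigr)\,=\,\frac{1}{\varepsilon}\,{\rm div}\,\vec u\,=\,0\,.
\]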
Analysing the vorticity formulation of the system, we discover that the \emph{curl} operator cancels the singular effects produced by the Coriolis force: that cancellation is not apparent, since the skew-symmetry of the Coriolis term cannot be used in the critical framework considered. Finally, we need a continuation criterion (in the spirit of the Beale-Kato-Majda criterion, see \cite{B-K-M}) which guarantees that we can ``measure'' the lifespan of solutions in the space of lowest regularity index, namely $s=r=1$ and $p=+\infty$. That criterion is valid under the assumption that $$ \int_0^{T} \big\| \nabla_x \vec u(t) \big\|_{L^\infty} \dt < +\infty\qquad \text{with}\qquad T<+\infty\, . $$ The previous criterion ensures that the lifespan of solutions found in the critical Besov spaces is the same as in the sought Sobolev functional framework, allowing us to conclude the proof (see the considerations in Subsection \ref{ss:cont_criterion+consequences}). \medskip \subsection*{Overview of the contents of this thesis} Before moving on, we give a brief overview of the structure of the present thesis. Chapter \ref{chap:geophysics} aims to immerse the reader in the discipline of geophysical fluid dynamics, giving a brief physical justification of the mathematical models we will consider in the subsequent chapters. In Chapter \ref{chap:multi-scale_NSF} we address the study of the singular perturbation problem, given by the Navier-Stokes-Fourier equations, in the scaling which we call ``critical''. The next Chapter \ref{chap:BNS_gravity} is devoted to the improvement of the previous scaling: we will go beyond the ``critical'' threshold. Finally, in the last Chapter \ref{chap:Euler} we slightly change the model, devoting ourselves to the asymptotic analysis of the density-dependent Euler equations. In addition, we will focus on the lifespan of its solutions, proving an ``asymptotically global'' well-posedness result.
At the end of this thesis, two more sections are dedicated to future perspectives, together with an appendix containing some tools and well-known results employed throughout the manuscript. \newpage \chapter{Contributions of the thesis} \section*{The Navier-Stokes-Fourier problem: a physical overview} In this thesis, we devote ourselves to the study of the behaviour of fluids characterised by large time and space scales. Typical examples are currents in the atmosphere and the ocean, but of course there are many other fluid-related phenomena beyond the Earth, such as flows on a star or on other celestial bodies. At these scales, the effects of the rotation of the ambient system (which, in the case of the atmosphere or the ocean, is the Earth) are not negligible, and the motion of the fluid is influenced by the action of a strong Coriolis force. Two further elements characterise the dynamics of this kind of fluids, called geophysical fluids (see e.g. \cite{C-R}, \cite{Ped} and \cite{Val}): the weak compressibility of the fluid and the effects of stratification (the variations of the density, essentially due to gravity). The importance of the previous attributes is ``measured'' by introducing, in the mathematical model, three positive adimensional parameters which, for geophysical fluids, are assumed to be small. These parameters are: \begin{itemize} \item the \emph{Mach} number $Ma$, which sets the size of the isentropic deviations from incompressible flows: the smaller $Ma$, the weaker the compressibility effects; \item the \emph{Froude} number $Fr$, which measures the importance of the stratification effects in the dynamics: the smaller $Fr$, the stronger the gravitational effects; \item the \emph{Rossby} number $Ro$, which is linked to the rotation of the system: when $Ro$ is very small, fast rotation effects are predominant in the dynamics.
\end{itemize} In our context, we adopt a simplifying assumption (often made in physical and mathematical studies), which consists in restricting the study of the fluid to mid-latitudes, that is to flows taking place far enough from the poles and from the equatorial zone. In this case, the variations of the rotation effects due to the latitude are negligible. Denoting by $\varrho ,\, \vartheta\geq 0$ the density and the absolute temperature of the fluid respectively, and by $\vec{u}\in \mathbb{R}^3$ its velocity field, the 3-D Navier-Stokes-Fourier system, in its non-dimensional form, can be written (see e.g. \cite{F-N}) as \begin{equation} \label{eq_i:NSF_fr} \begin{cases} \partial_t \varrho + {\rm div}\, (\varrho\vec{u})=0\ \\[3ex] \partial_t (\varrho\vec{u})+ {\rm div}\,(\varrho\vec{u}\otimes\vec{u}) + \dfrac{\vec{e}_3 \times \varrho\vec{u}}{Ro}\, + \dfrac{1}{Ma^2} \nabla_x p(\varrho,\vartheta) \\[1ex] \qquad \qquad \qquad \qquad \qquad \qquad \qquad \; \; \; ={\rm div}\, \mathbb{S}(\vartheta,\nabla_x\vec{u}) + \dfrac{\varrho}{Ro^2} \nabla_x F + \dfrac{\varrho}{Fr^2} \nabla_x G \\[3ex] \partial_t \bigl(\varrho s(\varrho, \vartheta)\bigr) + {\rm div}\, \bigl(\varrho s (\varrho,\vartheta)\vec{u}\bigr) + {\rm div}\,\left(\dfrac {\q(\vartheta,\nabla_x \vartheta )}{\vartheta} \right) = \sigma\,, \tag{NSF} \end{cases} \end{equation} in the infinite 3-D slab: \begin{equation} \label{eq:domain_fr} \Omega\,:=\,\mathbb{R}^2\times\,]0,1[\,. \tag{DOM} \end{equation} In system \eqref{eq_i:NSF_fr} above, the functions $s,\vec{q},\sigma$ are, respectively, the specific entropy, the heat flux and the entropy production rate, and $\mathbb{S}$ is the viscous stress tensor, which satisfies Newton's rheological law (see Subsections \ref{sss:primsys} and \ref{sss:structural} for a more precise formulation).
The Coriolis force is represented by \begin{equation} \label{def:Coriolis_fr} \mathfrak C(\vr,\vu)\,:=\,\frac{1}{Ro}\,\vec e_3\times\vr\,\vu\,, \tag{COR} \end{equation} where $\vec e_3=(0,0,1)$ and the symbol $\times$ denotes the usual external product of vectors in $\mathbb{R}^3$. In particular, the previous definition implies that the rotation takes place around the vertical axis, and its strength does not depend on the latitude (see e.g. \cite{C-R} and \cite{Ped} for more details). We point out that, despite all these simplifications, the obtained model is already able to capture several phenomena which are physically typical of the dynamics of geophysical flows: the celebrated \emph{Taylor-Proudman theorem}, the formation of \emph{Ekman layers} and the propagation of \emph{Poincar\'e waves}. We refer to \cite{C-D-G-G} for a more in-depth discussion. In the present thesis, we avoid boundary layer effects, i.e. the problem linked to Ekman layers, by imposing \emph{complete slip} boundary conditions. As established by the \emph{Taylor-Proudman theorem} in geophysics, fast rotation imposes a certain rigidity/stability, forcing the motion to take place on planes orthogonal to the rotation axis. As a consequence, the dynamics becomes purely two-dimensional and horizontal, and the fluid tends to move along vertical columns. However, such an ideal configuration is hindered by another fundamental force acting at geophysical scales, gravity, which works to restore the vertical stratification of the density. The gravitational force is described in system \eqref{eq_i:NSF_fr} by the term \[ \mathcal G(\vr)\,:=\,\frac{1}{Fr^2}\,\vr\,\nabla_xG\, , \] where, in our case, $G(x)=\,G(x^3)\,=\,-\,x^3$.
Moreover, the gravitational effects are weakened by the presence of the centrifugal force $$\mathfrak F(\vr):=\frac{1}{Ro^2}\, \vr\, \nabla_x F\, ,$$ with $F(x)=|x^h| ^2$. Such a force is an inertial force which, at mid-latitudes, slightly shifts the direction of gravity. Thus, the competition between the stabilising effects, due to rotation, and the vertical stratification (due to gravity) translates, in the model, into the competition between the orders of magnitude of $Ro$ and $Fr$. In addition, it turns out that gravity $\mathcal G$ acts in combination with the pressure force: $$ \mathfrak P(\vr, \vartheta)\,:=\,\frac{1}{Ma^2}\,\nabla_x p(\vr, \vartheta)\,, $$ where $p$ is a known smooth function of the density and the temperature of the fluid (see Subsection \ref{sss:structural}). We remark that the terms $\mathfrak C,\, \mathcal G,\, \mathfrak P$ and $\mathfrak F$ enter the model with a large prefactor; accordingly, our goal is to study the systems when $Ma, \, Fr$ and $Ro$ are small, in different regimes. \section*{The multi-scale analysis} At the mathematical level, over the last 30 years a considerable number of works has been devoted to the rigorous justification, in various functional frameworks, of the reduced models considered in geophysics. Reviewing the whole literature on this subject goes well beyond the scope of this introductory part; for this reason, we choose to report only the works dealing with the presence of the Coriolis force \eqref{def:Coriolis_fr}. We also decide to postpone, to the next part, the discussion of incompressible models, as they are less relevant for the multi-scale analysis, owing to the rigidity imposed by the divergence-free constraint on the velocity field of the fluid. Compressible fluid models, on the contrary, provide a much richer framework for the multi-scale analysis of geophysical flows.
Moreover, we have chosen to focus mainly on works dealing with viscous fluids and performing the asymptotic study for ill-prepared initial data. \subsection*{Previous results} The first results, in the sense mentioned above, were obtained by Feireisl, Gallagher and Novotn\'y in \cite{F-G-N}, and together with G\'erard-Varet in \cite{F-G-GV-N}, for the barotropic Navier-Stokes system (see also \cite{B-D-GV} for a preliminary study and \cite{G-SR_Mem} for the analysis of equatorial waves). In those works, the authors studied the combined regime of low Rossby number (fast rotation effects) and low Mach number (weak compressibility of the fluid), under the scaling \begin{align} Ro\,&=\,\varepsilon\tag{LOW RO}\\ Ma\,&=\,\varepsilon^m \qquad\qquad \mbox{ with }\quad m\geq 0 \,, \tag{LOW MA} \label{eq:scale_fr} \end{align} where $\varepsilon\in\,]0,1]$ is a small parameter, which one lets tend to $0$ in order to derive the reduced model. In the case $m=1$ in \eqref{eq:scale_fr}, the system presents an isotropic scaling, since $Ro$ and $Ma$ act at the same order of magnitude and the pressure and rotation terms keep in balance (the \emph{quasi-geostrophic balance}) in the limit. The limit system is identified by the \emph{quasi-geostrophic equation} for a stream function of the velocity field. On the contrary, in \cite{F-G-GV-N}, when $m>1$ and with the centrifugal force added, the pressure term predominates (over the Coriolis force) in the dynamics of the fluid. In this case, the limit system is described by a 2-D incompressible Navier-Stokes system, and the difficulties generated by the anisotropy of scaling were overcome by using dispersive estimates.
Subsequently, Feireisl and Novotn\'y pursued the multi-scale analysis for the same system, again without the centrifugal force term, considering the effects of a low stratification, namely $Ma/Fr\rightarrow 0$ as $\varepsilon\rightarrow 0^+$ (see \cite{F-N_AMPA}, \cite{F-N_CPDE}). We refer to \cite{F_MA} for a similar study in the framework of capillary models, where the choice $m=1$ was made, but the anisotropy was given by the scaling fixed for the internal forces term (the so-called Korteweg stress tensor). Moreover, we have to mention \cite{F_2019} for the case of Mach numbers which are large with respect to the Rossby parameter, namely $0\leq m<1$ in \eqref{eq:scale_fr}. In this case, the pressure gradient is not strong enough to compensate the Coriolis force and, in order to find a relevant limit dynamics, one has to penalise the viscosity coefficient. The analysis of models which also present heat transfer is much more recent, and was initiated by the paper \cite{K-M-N} of Kwon, Maltese and Novotn\'y. In that paper, the authors considered a multi-scale approach for the full Navier-Stokes-Fourier system with the Coriolis and gravitational forces (and $F=0$), taking the scaling \begin{equation} \label{eq:scale-G_fr} {Fr}\,=\,\varepsilon^n\,,\qquad\qquad\mbox{ with }\quad 1\,\leq\,n\,<\,\frac{m}{2}\, .\tag{LOW FR} \end{equation} In particular, in that paper, the choice \eqref{eq:scale-G_fr} implied $m>2$, and the case $n=m/2$ was left open. Similar restrictions on the parameters can be found in \cite{F-N_CPDE} for the barotropic model. Those restrictions have to be ascribed to the techniques used to prove the convergence, which are based on a combination of the relative energy/relative entropy method with dispersive estimates (we note that an even stronger restriction, $m>10$, appears in \cite{F-G-GV-N}).
On the other hand, let us emphasise that relative energy methods allow one to obtain a precise rate of convergence and to consider also inviscid and non-diffusive limits (in those cases, one does not dispose of uniform bounds for $\nabla_x\vartheta$ and $\nabla_x\vec{u}$). The case $m=1$ was treated later in the paper \cite{K-N} by Kwon and Novotn\'y, resorting to similar techniques (in that work, however, the gravitational term is not penalised). \subsection*{Novelties} The first part of this thesis is devoted to the study of multi-scale problems, focusing on the full Navier-Stokes-Fourier system introduced in \eqref{eq_i:NSF_fr}. As a first step, we improve the choice of the scaling \eqref{eq:scale-G_fr} by taking the endpoint case $n=m/2$ with $m\geq1$ (this is the scaling adopted in Chapter \ref{chap:multi-scale_NSF}). Of course, we are still in a low stratification regime, since $Ma/Fr\ra0$, but the choice $Fr=\sqrt{Ma}$ allows us to capture some additional qualitative properties of the limit dynamics. Moreover, we add to the system the centrifugal force term $\nabla_x F$ (in the spirit of \cite{F-G-GV-N}), which is a source of technical problems owing to its unboundedness. Let us now comment on all these issues in detail. First of all, in the absence of the centrifugal force, i.e. for $F=0$, we are able to perform the incompressible, low stratification and fast rotation limit for \emph{the whole range} of values $m\geq 1$, in the framework of \emph{finite energy weak solutions} of the Navier-Stokes-Fourier system \eqref{eq_i:NSF_fr} and for \emph{ill-prepared initial data}.
In the case $m>1$, the incompressibility and stratification effects are predominant with respect to the Coriolis force: we then prove convergence towards the well-known \emph{Oberbeck-Boussinesq system} (see e.g. Paragraph 1.6.2 of \cite{Z} for physical insights on this system), thus giving a rigorous justification of that approximate model in the context of fast rotating fluids. We can therefore state the following theorem (see Theorem \ref{th:m-geq-2} for the precise statement). \begin{theoremnonum_fr} \label{thm_1_fr} Consider system \eqref{eq_i:NSF_fr}. Let $\Omega = \mathbb{R}^2 \times\,]0,1[\,$. Let $F=|x^{h}|^{2}$ and $G=-x^{3}$. Take $n=m/2$ and either ${m\geq 2}$, or ${m>1}$ and ${ F=0}$. Then we have the following convergences: \begin{align*} \varrho_\ep \rightarrow 1 \\ R_{\ep}:=\frac{\varrho_\ep - 1}{\ep^m} \weakstar R \\ \vec{u}_\ep \weak \vec{U} \\ \Theta_{\ep} :=\frac{\vartheta_\ep - \bar{\vartheta}}{\ep^m} \weak \Theta \, , \end{align*} where, in accordance with the Taylor-Proudman theorem, $$\vec{U} = (\vec U^h,0),\quad \quad \vec U^h=\vec U^h(t,x^h),\quad \quad {\rm div}\,_h\vec U^h=0.$$ Moreover, $\Big(\vec{U}^h,\, \, R ,\, \, \Theta \Big)$ solves, in the sense of distributions, the incompressible Oberbeck-Boussinesq type system \begin{align*} & \partial_t \vec U^{h}+{\rm div}\,_{h}\left(\vec{U}^{h}\otimes\vec{U}^{h}\right)+\nabla_h\Gamma-\mu (\overline\vartheta )\Delta_{h}\vec{U}^{h}=\delta_2(m)\langle R\rangle\nabla_{h}F \\[2ex]& \partial_t\Theta\,+\,{\rm div}_h(\Theta\,\vec U^h)\,-\,\kappa(\overline\vartheta)\,\Delta\Theta\,=\,\overline\vartheta \,\vec{U}^h\cdot\nabla_h \overline{\mathcal G} \\[2ex] & \nabla_{x}\left( \partial_\varrho p(1,\overline{\vartheta})\,R\,+\,\partial_\vartheta p(1,\overline{\vartheta})\,\Theta \right)\,=\,\nabla_{x}G\,+\,\delta_2(m)\,\nabla_{x}F\, , \end{align*} where $\overline{\mathcal G}$ is the sum of the external forces $\,G\,+\,\delta_2(m)F$, where $\Gamma \in \mathcal D^\prime$, and where $\delta_2(m)= 1$ if $m=2$, $\delta_2(m)=0$ otherwise. \end{theoremnonum_fr} We point out that, in the limit, the velocity field is $2$-dimensional, in accordance with the celebrated Taylor-Proudman theorem in geophysics: in the fast rotation limit, the motion of the fluid has a planar behaviour, it takes place on planes orthogonal to the rotation axis (i.e. horizontal planes in our model) and it is essentially constant in the vertical direction. We refer to \cite{C-R}, \cite{Ped} and \cite{Val} for more details on the physical formulation. Notice however that, although the limit dynamics is purely horizontal, the limit density and temperature variations, $R$ and $\Theta$ respectively, are stratified: this is the main effect of the choice $n=m/2$ for the Froude number in \eqref{eq:scale-G_fr}. It is also the main qualitative property which is new in our work with respect to previous studies, and which justifies the epithet of \emph{``critical''} scaling. When $m=1$, on the contrary, all the forces act at the same scale, and they balance each other asymptotically as $\varepsilon\ra0^+$. As a consequence, the limit motion is described by a \emph{quasi-geostrophic equation} for a function $q$, which is related to $R$ and $\Theta$ (respectively, the limit density and temperature variations) and to gravity, and which plays the role of a stream function for the limit velocity field. This quasi-geostrophic equation is coupled with a scalar transport-diffusion equation for a new quantity $\Upsilon$, which mixes $R$ and $\Theta$. The precise statement of the following theorem can be found in Paragraph \ref{ss:results}. \begin{theoremnonum_fr} Consider system \eqref{eq_i:NSF_fr}. Let $\Omega = \mathbb{R}^2 \times\,]0,1[\,$. Let ${F=0}$ and $G=-x^3$. Take ${m=1}$ and $n=1/2$.
Then, we have the same convergences found in Theorem \ref{thm_1_fr}, and $\vec U$ satisfies the Taylor-Proudman theorem. Moreover, let us define $$ \Upsilon := \partial_\varrho s(1,\overline{\vartheta}) R + \partial_\vartheta s(1,\overline{\vartheta})\,\Theta$$ and $$q= \partial_\varrho p(1,\overline{\vartheta}) R +\partial_\vartheta p(1,\overline{\vartheta})\Theta -G \, . $$ Then $q=q(t,x^h)$ and $ \vec{U}^{h}=\nabla_h^{\perp} q$. In addition, the couple $\Big(q,\, \, \Upsilon \Big)$ satisfies, in the sense of distributions, \begin{align*} & \partial_{t}\left(q-\Delta_{h}q\right) -\nabla_{h}^{\perp}q\cdot \nabla_{h}\left( \Delta_{h}q\right) +\mu (\overline{\vartheta}) \Delta_{h}^{2}q=\langle X\rangle \\[2.5ex] & \partial_{t} \Upsilon +\nabla_h^\perp q\cdot\nabla_h\Upsilon-\kappa(\overline\vartheta) \Delta \Upsilon\,=\, \,\kappa(\overline\vartheta)\,\Delta_hq\, , \end{align*} where $\langle X\rangle$ is a suitable ``external'' force. \end{theoremnonum_fr} This theorem is in the spirit of the result of \cite{K-N}, but here the gravitational effects are again captured in the limit, so that it is no longer possible to claim that $R$ and $\Theta$ (hence $\Upsilon$) are horizontal. On the other hand, and somewhat surprisingly, $q$ and the limit velocity $\vec U$ are purely horizontal. At this point, let us make some remarks. First of all, let us mention that, as already announced, we are able to add to the system the effects of the centrifugal force $\nabla_x F$. Unfortunately, in that case the restriction $m\geq 2$ appears (which is anyway less severe than the ones imposed in \cite{F-G-GV-N}, \cite{F-N_CPDE} and \cite{K-M-N}). However, we show that such a restriction is not of technical nature, but is instead hidden in the structure of the wave system (see Proposition \ref{p:target-rho_bound} and Remark \ref{slow_rho}).
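Before going further, let us indicate, at a purely formal level, where the stream-function relation $\vec{U}^{h}=\nabla_h^{\perp} q$ comes from (we refer to Paragraph \ref{ss:results} for the rigorous statement). When $m=1$, all the singular terms of the momentum equation act at the same scale and must balance in the limit, which yields the geostrophic balance
\[
\vec e_3\times\vec U\,+\,\nabla_{x}\left( \partial_\varrho p(1,\overline{\vartheta})\,R\,+\,\partial_\vartheta p(1,\overline{\vartheta})\,\Theta\,-\,G \right)\,=\,0\,.
\]
Taking the horizontal part of this relation gives $\big(\vec U^h\big)^\perp\,+\,\nabla_h q\,=\,0$, that is $\vec U^h=\nabla_h^\perp q$: the function $q$ thus plays the role of a stream function for the limit flow.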
The result for $F\neq 0$ is analogous to the one presented above for the case $F=0$ and $m>1$: when $m>2$, the anisotropy of the scaling is too strong to see the effects of $F$ in the limit, and no qualitative difference appears with respect to the instance $F=0$; when $m=2$, on the other hand, additional terms related to $F$ appear in the Oberbeck-Boussinesq system (see Theorem \ref{thm_1_fr}). In any case, the analysis will be considerably more involved, since $F$ is not bounded on the domain $\Omega$ (defined in \eqref{eq:domain_fr} above), and this will require an additional localisation procedure (already employed in \cite{F-G-GV-N}). Let us further point out that the classical existence theory of finite energy weak solutions to \eqref{eq_i:NSF_fr} requires the physical domain to be a smooth \emph{bounded} subset of $\mathbb{R}^3$ (see \cite{F-N} for a complete study). The theory was later extended in \cite{J-J-N} to cover the case of unbounded domains, and this might seem suited to our setting. Nevertheless, the notion of weak solutions developed in \cite{J-J-N} is somewhat weaker than the usual one (the authors speak in fact of \emph{very weak solutions}), inasmuch as the usual weak formulation of the entropy balance, i.e. the third equation of \eqref{eq_i:NSF_fr}, has to be replaced by an inequality in the sense of distributions. Such a formulation is not convenient for us, because, when deriving the system of acoustic-Poincar\'e waves, we need to combine the mass conservation and the entropy equation. In particular, this requires having true equalities, satisfied in the (usual) weak sense.
In order to overcome this problem, we resort to the technique of \emph{invading domains} (see e.g. Chapter 8 of \cite{F-N}, \cite{F-Scho} and \cite{WK}): for each $\varepsilon\in\,]0,1]$, we solve system \eqref{eq_i:NSF_fr}, with the choice $n=m/2$ for the Froude number, on a smooth domain $\Omega_\varepsilon$, where $\big(\Omega_\varepsilon\big)_\varepsilon$ converges (in a suitable sense) to $\Omega$ as $\varepsilon\ra0^+$, faster than the speed of propagation of the waves (which is proportional to $\varepsilon^{-m}$). Such an ``approximation procedure'' will require some additional work. In order to prove our results, and to obtain the improvement on the values of the various parameters, we propose a unified approach, which in fact works both in the case $m>1$ (allowing us to handle the anisotropy of the scaling quite easily) and in the case $m=1$ (allowing us to deal with the more complicated singular perturbation operator). This approach is based on \emph{compensated compactness} arguments, first employed by Lions and Masmoudi in \cite{L-M} to deal with the incompressible limit of the barotropic Navier-Stokes system, and later adapted by Gallagher and Saint-Raymond in \cite{G-SR_2006} to the case of fast rotating (incompressible and homogeneous) fluids. More recent applications of this method in the context of geophysical flows can be found in \cite{F-G-GV-N}, \cite{F_JMFM}, \cite{Fan-G} and \cite{F_2019}. The method just described gives no quantitative convergence at all, but only a qualitative one. The technique is purely based on the algebraic structure of the system, which allows one to find smallness (with vanishing in the limit) of suitable non-linear quantities, and compactness properties for other quantities.
These strong convergence properties are by no means evident, because the singular terms are responsible for strong oscillations in time of the solutions (the so-called acoustic-Poincar\'e waves), which may prevent the convergence of the non-linearities. Nonetheless, a fine study of the system of acoustic-Poincar\'e waves actually reveals the compactness (for any $m\geq1$ if $F=0$, for $m\geq 2$ if $F\neq 0$) of a special quantity $\gamma_\varepsilon$, which combines (roughly speaking) the vertical averages of the momentum $\vec{V}_\varepsilon=\varrho_\varepsilon\vec{u}_\varepsilon$ (of its vorticity, in fact) and of another function $Z_\varepsilon$, obtained as a linear combination of the density and temperature variations (see Subsections \ref{sss:term1} and \ref{ss:convergence_1} for more details on this point). Similar compactness properties were highlighted in \cite{Fan-G} for $2$-D density-dependent incompressible fluids, and in \cite{F_2019} to handle a multi-scale problem at ``large'' Mach numbers. In the end, the strong convergence of $\big(\gamma_\varepsilon\big)_\varepsilon$ turns out to be enough to pass to the limit in the convective term, and to complete the proof of our results. To conclude this part, let us remark that we expect the same technique to work also in the case $m=1$ and $F\neq0$ (this was the case in \cite{F-G-GV-N}, for barotropic flows). Nonetheless, the presence of heat transfer deeply complicates the wave system, and new technical difficulties arise in the analysis of the convective term (the approach of \cite{F-G-GV-N}, valid in the constant temperature case, does not work here). For this reason, we are not able to deal with this case here, and it remains an open question.
Another feature which is not treated in our analysis is the regime of \emph{strong stratification}, namely when the ratio ${Ma}/{Fr}$ is of order $O(1)$. This regime is particularly delicate for fast rotating fluids. This stands in sharp contrast with the results available on the derivation of the anelastic approximation, where rotation is neglected: we refer for instance to \cite{Masm}, \cite{BGL}, \cite{F-K-N-Z} and, more recently, \cite{F-Z} (see also \cite{F-N} and the references therein for a more detailed account of previous works). The reason has precisely to be ascribed to the competition between the vertical stratification (due to gravity) and the horizontal stability (which the Coriolis force tends to impose): in the strong stratification regime, the vertical oscillations of the solutions (seem to) persist in the limit, and the available techniques do not allow, at present, to handle this problem in its full generality. Nevertheless, partial results have been obtained in the case of well-prepared initial data, by means of a relative entropy method: we refer to \cite{F-L-N} for the first result, where the mean motion is derived, and to \cite{B-F-P} for an analysis of Ekman boundary layers in this framework. \newpage \section*{Beyond the critical scaling $Fr=\sqrt{Ma}$} At this point, we are interested in studying values of $Fr$ beyond the critical value $Fr=\sqrt{Ma}$ considered in the previous paragraph, and we will investigate further regimes where the stratification has an even more important effect.
For clarity of exposition, we neglect the effects of the centrifugal force and of the heat transfer process in the fluid, focusing on the classical barotropic Navier-Stokes system: \begin{equation} \begin{cases} \partial_t \varrho + {\rm div}\, (\varrho\vec{u})=0\ \\[2ex] \partial_t (\varrho\vec{u})+ {\rm div}\,(\varrho\vec{u}\otimes\vec{u}) + \dfrac{\vec{e}_3 \times \varrho\vec{u}}{Ro}\, + \dfrac{1}{Ma^2} \nabla_x p(\varrho) ={\rm div}\, \mathbb{S}(\nabla_x\vec{u}) + \dfrac{\varrho}{Fr^2} \nabla_x G\, . \tag{NSC} \end{cases} \end{equation} The more general system presented in \eqref{eq_i:NSF_fr} can be handled at the price of the additional technicalities already mentioned above (recall in particular the restriction on the Mach number due to the presence of the centrifugal force). The goal is now to perform the asymptotic limit for system \eqref{2D_Euler_system} in the regimes where one assumes \begin{equation*} Ma=\varepsilon^m, \quad \quad Ro=\varepsilon \quad \quad \text{and}\quad \quad Fr=\varepsilon^n \end{equation*} with \begin{equation*} \mbox{ either }\qquad m\,>\,1\quad\mbox{ and }\quad m\,<\,2\,n\,\leq\,m+1\,,\qquad\qquad \mbox{ or }\qquad m\,=\,1\quad \mbox{ and }\quad \frac{1}{2}\,<\,n\,<\,1\,. \end{equation*} The restriction $n<1$ when $m=1$ is imposed in order to avoid a strong stratification regime: as already mentioned, it is not clear, at present, how to handle that case for general ill-prepared initial data, since all the available techniques seem to fail there. On the other hand, the restriction $2\,n\leq m+1$ (for $m>1$) seems to be of technical nature. However, it naturally pops up in at least two points of our analysis (see e.g. Subsections \ref{ss:ctl1_G} and \ref{sss:term2_G}), and it is not clear whether, and how, it is possible to bypass it and consider the remaining range of values $(m+1)/2\,<\,n\,<\,m$.
Let us specify that, in our considerations, the relation $n<m$ always holds true, so we will work in a regime of weak stratification. At the qualitative level, our main results will be quite similar to those presented in the previous part. In particular, the limit dynamics will be the same, once the two cases $m>1$ and $m=1$ are distinguished (see Theorems \ref{th:m>1} and \ref{th:m=1}). The key point, which we emphasise now, is how to exploit in a fine way not only the structure of the system, but also the precise structure of each term, in order to pass to the limit. To be more precise, considering values of $n$ beyond the threshold $2n=m$ is made possible thanks to special algebraic cancellations involving the gravity term in the wave system. Such cancellations are due to the particular form of the gravitational force, which depends only on the vertical variable, and they do not appear, in general, if one wants to consider the action of different forces on the system. As one can easily guess, the case $2n=m+1$ is more involved: indeed, this choice of scaling entails the presence, in the computations, of an additional bilinear term of order $O(1)$; in turn, this term might not vanish in the limit, contrary to what happens in the case $2n<m+1$. In order to see that this does not occur, and that this term indeed vanishes in the limit process, one has to use the structure of the system more carefully to control the oscillations (see equation \eqref{rel_oscillations_bis} and the computations below it). \section*{The Euler system: the incompressible case} In the second part of this thesis, we change perspective by dealing with an incompressible, inviscid system having a hyperbolic structure. More precisely, we are interested in describing the 2-D evolution of a fluid which takes place far enough from the physical boundaries.
Therefore, the Euler-type system, posed in $\Omega:=\mathbb{R}^2$, reads \begin{equation} \label{full Euler_intro_fr} \begin{cases} \partial_t \varrho +{\rm div}\, (\varrho \vu)=0\\ \partial_t (\varrho \vu)+{\rm div}\, (\varrho \vu \otimes \vu)+ \dfrac{1}{Ro}\varrho \vu^\perp+\nabla_x p=0\\ {\rm div}\, \vu =0\, ,\tag{E} \end{cases} \end{equation} where $\vu^\perp:=(-u^2, u^1)$ is the rotation of angle $\pi/2$ of the velocity field $\vu=(u^1,u^2)$. The pressure term $\nabla_x p$ represents the Lagrange multiplier associated with the divergence-free constraint on the velocity field. The main scope of this analysis is to study the asymptotic behaviour of system \eqref{full Euler_intro_fr} as $Ro=\varepsilon\rightarrow 0^+$. \subsection*{Previous results} We will limit ourselves to a brief overview of the known results concerning density-dependent fluids. We rather refer to \cite{C-D-G-G} for a survey of the literature in the context of homogeneous fast rotating fluids (see also \cite{B-M-N_EMF} and \cite{B-M-N_AA} for the pioneering studies, concerning the homogeneous 3-D Euler and Navier-Stokes equations). In the compressible cases mentioned above, the fact that the pressure is a given function of the density entails a twofold advantage in the analysis: on the one hand, one can recover good uniform bounds for the oscillations (with respect to the reference state) of the density; on the other hand, in the limit, one disposes of a stream-function relation linking the densities and the velocities. By contrast, although the incompressibility condition is physically well justified for geophysical fluids, few studies address this case. We refer to \cite{Fan-G}, in which Fanelli and Gallagher studied the fast rotation limit for incompressible, viscous fluids with variable density.
In the case where the initial density is a small perturbation of a constant state (the so-called \textit{slightly non-homogeneous} case), the authors proved convergence towards a quasi-homogeneous type system. Conversely, for general non-homogeneous fluids (the so-called \textit{fully non-homogeneous} case), Fanelli and Gallagher showed that the limit dynamics is described in terms of the vorticity and of the density oscillation functions, since there was not enough regularity to prove convergence on the momentum equation itself (see more details below). One should also mention \cite{C-F_RWA}, where the authors rigorously prove the convergence of the ideal magnetohydrodynamics (MHD) equations towards a quasi-homogeneous type system (see also \cite{C-F_Nonlin}, where a compensated compactness argument is adopted). Their method relies on relative entropy inequalities for the primitive system, which also allow one to treat the inviscid limit, but require well-prepared initial data. \subsection*{New results} In Chapter \ref{chap:Euler}, we address the asymptotic analysis (for $\varepsilon\rightarrow 0^+$) in the case of a density-dependent Euler system in the \textit{slightly non-homogeneous} context, i.e. when the initial density is a small perturbation of order $\varepsilon$ of a constant profile (say $\overline{\varrho}=1$). These small perturbations around a constant reference state are physically justified by the \textit{Boussinesq approximation} (see e.g. Chapter 3 of \cite{C-R} or Chapter 1 of \cite{Maj} in this respect).
Indeed, since the constant state $\overline{\varrho}=1$ is transported by a divergence-free vector field, the density can be written as $\varrho_\varepsilon=1+\varepsilon R_\varepsilon$ for all times (provided this holds at $t=0$), and good uniform bounds on $R_\varepsilon$ can be stated. We also point out that, in the momentum equation of \eqref{full Euler_intro_fr}, with the scaling $Ro=\varepsilon$, the Coriolis term can be rewritten as \begin{equation}\label{Coriolis_fr} \frac{1}{\varepsilon}\varrho_\varepsilon \vec u_\varepsilon^\perp=\frac{1}{\varepsilon}\vec u_\varepsilon^\perp+R_\varepsilon\vec u_\varepsilon^\perp \, .\tag{2D COR} \end{equation} We remark that, thanks to the incompressibility condition, the first term on the right-hand side of \eqref{Coriolis_fr} is in fact a gradient: it can be ``absorbed'' into the pressure term, which must then scale as $1/\varepsilon$. Indeed, the only force which can compensate the effect of fast rotation in system \eqref{full Euler_intro_fr} is, at geophysical scales, the pressure term: that is, one can write $\nabla_x p_\varepsilon= (1/\varepsilon)\, \nabla_x \Pi_\varepsilon$. Let us specify that the \textit{fully non-homogeneous} case (where the initial density is a perturbation of an arbitrary state) falls out of our study. That case is more involved, and new technical problems arise both in the well-posedness analysis and in the asymptotic inspection. Indeed, as already pointed out in \cite{Fan-G} for the Navier-Stokes-Coriolis system, the limit dynamics is described by an underdetermined system which mixes the vorticity and the density fluctuations.
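Let us record here the short computation behind the claim that the first term on the right-hand side of \eqref{Coriolis_fr} is a gradient; it is an elementary consequence of the incompressibility constraint in two space dimensions. Since ${\rm div}\,\vu_\varepsilon=0$ in $\mathbb{R}^2$, one can write $\vu_\varepsilon=\nabla_x^\perp\psi_\varepsilon=\big(-\partial_2\psi_\varepsilon,\,\partial_1\psi_\varepsilon\big)$ for some stream function $\psi_\varepsilon$, whence
\[
\vu_\varepsilon^\perp\,=\,\big(-\partial_1\psi_\varepsilon,\,-\partial_2\psi_\varepsilon\big)\,=\,-\,\nabla_x\psi_\varepsilon\,.
\]
Therefore $(1/\varepsilon)\,\vu_\varepsilon^\perp$ can indeed be absorbed into the (rescaled) pressure gradient.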
In order to describe the full limit dynamics (where the limit density variations and the limit velocities are decoupled), one had to assume \textit{a priori} bounds stronger than the ones which could be obtained by classical energy estimates. Nevertheless, the high regularity involved is, in general, \textit{not} propagated uniformly in $\varepsilon$, because of the presence of the Coriolis term. In particular, the structure of the Coriolis term is more complicated than the one shown in \eqref{Coriolis_fr}, since one has $\varrho_{\varepsilon}=\overline \varrho+ \varepsilon \sigma_\varepsilon$ (with $\sigma_\varepsilon$ the fluctuation), if initially one assumes $\varrho_{0, \varepsilon}=\overline \varrho+ \varepsilon R_{0,\varepsilon}$, with $\overline \varrho$ the arbitrary reference state. At this point, if one plugs the previous decomposition of $\varrho_\varepsilon$ into \eqref{Coriolis_fr}, a term of the form $(1/\varepsilon)\, \overline \varrho \ue^\perp$ appears: this term is a source of difficulties for propagating the needed high regularity estimates. Equivalently, if one tries to divide the momentum equation in \eqref{full Euler_intro_fr} by the density $\varrho_\varepsilon$, then the previous problem is simply transferred to the analysis of the pressure term, which becomes $1/(\varepsilon \varrho_\varepsilon)\, \nabla_x \Pi_\varepsilon$. In light of the whole discussion above, let us now point out the main difficulties arising in our problem. First of all, our model is an inviscid system of hyperbolic type, for which we can\textit{not} expect smoothing effects and gain of regularity. For this reason, it is natural to look at equations \eqref{full Euler_intro_fr} in a regular framework, such as the spaces $H^s$ with $s>2$.
The Sobolev spaces $H^s(\mathbb{R}^2)$, for $s>2$, are in fact embedded in the space $W^{1,\infty}$ of globally Lipschitz functions: this is a minimal requirement for preserving the initial regularity (see e.g. Chapter 3 of \cite{B-C-D}, and also \cite{D_JDE}, \cite{D-F_JMPA} for a broad discussion on this subject). In fact, all the Besov spaces $B^s_{p,r}(\mathbb{R}^d)$ which are included in $W^{1,\infty}(\mathbb{R}^d)$, a fact which occurs for $(s,p,r)\in \mathbb{R}\times [1,+\infty]^2$ such that \begin{equation} \label{Lip_assumption_fr} s>1+\frac{d}{p} \quad \quad \quad \text{or}\quad \quad \quad s=1+\frac{d}{p} \quad \text{and}\quad r=1\, ,\tag{LIP} \end{equation} are good candidates for the well-posedness analysis (we refer to Appendix \ref{app:Tools} for more details). However, the choice of working in $H^s\equiv B^s_{2,2}$ is dictated by the presence of the Coriolis force: we will deeply exploit the antisymmetry of this singular term. Moreover, the fluid is assumed to be incompressible, so that the pressure term is only a Lagrange multiplier and does \textit{not} give any information on the density, unlike in the compressible case. In addition, because of the non-homogeneity, the analysis of the pressure gradient is much more involved, since one has to study an elliptic equation with \textit{non-constant} coefficients, namely \begin{equation}\label{elliptic_eq_introduction_fr} -{\rm div}\, (A \, \nabla_x p)={\rm div}\, F \quad \text{where}\quad {\rm div}\, F:={\rm div}\, \left(\vu \cdot \nabla_x \vu+ \frac{1}{Ro} \vu^\perp \right)\quad \text{and}\quad A:=1/\varrho \, . \tag{ELL} \end{equation} The main difficulty is to obtain suitable \textit{uniform} bounds (with respect to the rotation parameter) for the pressure term in the regular framework we will consider.
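At a formal level, equation \eqref{elliptic_eq_introduction_fr} is obtained as follows (a standard computation, which we sketch for the reader's convenience): dividing the momentum equation of \eqref{full Euler_intro_fr} by $\varrho$ and then taking the divergence, the constraint ${\rm div}\,\vu=0$ makes the time derivative disappear, and one finds
\[
{\rm div}\,\left(\partial_t\vu\,+\,\vu\cdot\nabla_x\vu\,+\,\frac{1}{Ro}\,\vu^\perp\,+\,\frac{1}{\varrho}\,\nabla_x p\right)\,=\,0
\qquad\Longrightarrow\qquad
-\,{\rm div}\,\left(\frac{1}{\varrho}\,\nabla_x p\right)\,=\,{\rm div}\,\left(\vu\cdot\nabla_x\vu\,+\,\frac{1}{Ro}\,\vu^\perp\right)\,,
\]
which is exactly \eqref{elliptic_eq_introduction_fr}, with $A=1/\varrho$.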
We refer to \cite{D_JDE} and \cite{D-F_JMPA} for more details on the issues arising in the analysis of the elliptic equation \eqref{elliptic_eq_introduction_fr} in the spaces $B^s_{p,r}$. After analysing the pressure term, we show that system \eqref{full Euler_intro_fr} is locally well-posed in the $H^s$ framework. We notice that, in the local well-posedness theorem (below), all the estimates are \textit{uniform} with respect to the rotation parameter and, in addition, the existence time is independent of $\varepsilon$. \begin{theoremnonum_fr}\label{thm3_intro_french} Let $s>2$. For any $\varepsilon>0$, there exists a time $T_\varepsilon^\ast >0$ such that system \eqref{full Euler_intro_fr} admits a unique solution $(\varrho_\varepsilon, \vu_\varepsilon, \nabla_x \Pi_\varepsilon)$, where \begin{itemize} \item $\varrho_\varepsilon$ belongs to the space $C^0([0,T_\varepsilon^\ast]\times \mathbb{R}^2)$, with $\nabla_x \varrho_\varepsilon \in C^0([0,T_\varepsilon^\ast]; H^{s-1}(\mathbb{R}^2))$; \item $\vu_\varepsilon$ and $\nabla_x \Pi_\varepsilon$ belong to the space $C^0([0,T_\varepsilon^\ast]; H^s(\mathbb{R}^2))$. \end{itemize} Moreover, $$\inf_{\varepsilon>0}T_\varepsilon^\ast >0\, .$$ \end{theoremnonum_fr} Once the local well-posedness result of Theorem \ref{thm3_intro_french} is established, we perform the fast rotation limit for general \emph{ill-prepared} initial data. We prove the convergence of system \eqref{full Euler_intro_fr} towards what we call the \textit{quasi-homogeneous} incompressible Euler system \begin{equation} \label{Q-H_E_intro_fr} \begin{cases} \partial_t R+{\rm div}\, (R\vu)=0 \\ \partial_t \vec u+{\rm div}\, \left(\vec{u}\otimes\vec{u}\right)+R\vu^\perp+ \nabla_x \Pi =0 \\ {\rm div}\, \vec u\,=\,0\,, \tag{QHE} \end{cases} \end{equation} where $R$ represents the limit of the fluctuations $R_\varepsilon$.
Let us also point out that, in the momentum equation of \eqref{Q-H_E_intro_fr}, a lower-order non-linear term appears (i.e. $R\vec u^\perp$): it is a sort of remainder in the convergence of the Coriolis term, as defined in \eqref{Coriolis_fr}. Passing to the limit in the momentum equation of \eqref{full Euler_intro_fr} is no longer straightforward, even though we are in the $H^s$ framework: the Coriolis term is responsible for strong oscillations in time of the solutions (the so-called \textit{Poincar\'e waves}), which may prevent the convergence of the convective term towards the one in \eqref{Q-H_E_intro_fr}. To overcome this issue, we employ the same approach mentioned above: the compensated compactness technique. Moreover, it is worth noticing that the global well-posedness of system \eqref{Q-H_E_intro_fr} remains an open problem. However, \textit{roughly speaking}, for $R_0$ small enough, system \eqref{Q-H_E_intro_fr} is ``close'' to the 2-D homogeneous incompressible Euler system, for which global well-posedness is well known. Thus, it is natural to wonder whether there exists an ``asymptotically global'' well-posedness result in the spirit of \cite{D-F_JMPA} and \cite{C-F_sub}: for small initial fluctuations $R_0$, the quasi-homogeneous system \eqref{Q-H_E_intro_fr} behaves like the standard Euler equations, and the lifespan of its solutions tends to infinity. In particular, as already shown in \cite{C-F_sub} for the ideal quasi-homogeneous MHD system (see also the bibliography of \cite{C-F_sub}), one can prove that the lifespan of the solutions to \eqref{Q-H_E_intro_fr} satisfies \begin{equation} \label{lifespan_Q-H_fr} T_\delta^\ast \sim \log \log \frac{1}{\delta}\,,\tag{LIFE} \end{equation} where $\delta>0$ is the size of the initial fluctuations.
This result on the existence time of the solutions to \eqref{Q-H_E_intro_fr} draws our attention towards the study of the lifespan of the solutions to the primitive system \eqref{full Euler_intro_fr}. For the 3-D \textit{homogeneous} Euler system with Coriolis force, Dutrifoy proved in \cite{Dut} that the lifespan of the solutions tends to infinity in the fast rotation regime (see also \cite{Gall}, \cite{Cha} and \cite{Scro}, where the authors inspected the lifespan of solutions in the context of homogeneous viscous fluids). For system \eqref{full Euler_intro_fr}, it is not clear how to find similar stabilising effects (due to the Coriolis term) in order to improve the lifespan of the solutions, i.e. to show that $T_\varepsilon^\ast \longrightarrow +\infty$ as $\varepsilon\rightarrow 0^+$. Nevertheless, independently of the rotational effects, we are able to state an ``asymptotically global'' well-posedness result in the regime of \textit{small} oscillations, in the sense of \eqref{lifespan_Q-H_fr}: namely, when the size of the initial fluctuation $R_{0,\varepsilon}$ is small enough, say of size $\delta >0$, the lifespan $T^\ast_\varepsilon$ of the corresponding solution to system \eqref{full Euler_intro_fr} is bounded from below by $T^\ast_\varepsilon\geq T^\ast(\delta)$, with $T^\ast (\delta)\longrightarrow +\infty$ as $\delta\rightarrow 0^+$ (see also \cite{D-F_JMPA} for a density-dependent fluid, but in the absence of the Coriolis force). More precisely, we have the following result.
\begin{propnonum} The lifespan $T_\varepsilon^\ast$ of the solution to the two-dimensional density-dependent incompressible Euler equations \eqref{full Euler_intro_fr} with Coriolis force is bounded from below by \begin{equation} \label{improv_life_fullE_intro_fr} \frac{C}{\|\vec u_{0,\varepsilon}\|_{H^s}}\log\left(\log\left(\frac{C\, \|\vec u_{0, \varepsilon}\|_{H^s}}{\max \{\mathcal A_\varepsilon(0),\, \varepsilon \, \mathcal A_\varepsilon(0)\, \|\vec u_{0, \varepsilon}\|_{H^s}\}}+1\right)+1\right)\, ,\tag{BOUND} \end{equation} where $\mathcal A_\varepsilon (0):= \|\nabla_x R_{0,\varepsilon}\|_{H^{s-1}}+\varepsilon\, \|\nabla_x R_{0,\varepsilon}\|_{H^{s-1}}^{\lambda +1}$, for a suitable $\lambda\geq 1$. \end{propnonum} As an immediate corollary of the previous lower bound, if one considers initial densities of the form $\varrho_{0,\varepsilon}=1+\varepsilon^{1+\alpha}R_{0,\varepsilon}$ with $\alpha >0$, then one obtains $T^\ast_\varepsilon\sim \log \log (1/\varepsilon)$. Next, we schematically illustrate the main steps of the proof of \eqref{improv_life_fullE_intro_fr} for the primitive system \eqref{full Euler_intro_fr}. The key point in the proof of \eqref{improv_life_fullE_intro_fr} is to study the lifespan of the solutions in critical Besov spaces. In those spaces, one can take advantage of the fact that, when $s=0$, the $B^0_{p,r}$ norm of the solutions can be bounded \textit{linearly} with respect to the Lipschitz norm of the velocity, rather than exponentially (see the works \cite{Vis} by Vishik and \cite{H-K} by Hmidi and Keraani). Since the triplet $(s,p,r)$ has to satisfy \eqref{Lip_assumption_fr}, the lowest-regularity Besov space we can reach is $B^1_{\infty,1}$.
Consequently, if $\vec u$ belongs to $B^1_{\infty,1}$, the vorticity $\omega := -\partial_2 u^1+\partial_1 u^2$ has the regularity required to apply the improved estimates of Hmidi-Keraani and Vishik (see Theorem \ref{thm:improved_est_transport} in this respect). Analysing the vorticity formulation of the system, one discovers that the \emph{curl} operator cancels the singular effects produced by the Coriolis force: this cancellation is not apparent, since the antisymmetry property of the Coriolis term is not available in the critical framework under consideration. Finally, we need a continuation criterion (in the spirit of the Beale-Kato-Majda criterion, see \cite{B-K-M}) which guarantees that we can ``measure'' the lifespan of the solutions in the space with the lowest regularity indices, $ s=r=1$ and $p=+\infty$. This criterion holds under the assumption that $$ \int_0^{T} \big\| \nabla_x \vec u(t) \big\|_{L^\infty} \dt < +\infty\qquad \text{with}\qquad T<+\infty\, . $$ The previous criterion ensures that the lifespan of the solutions found in the critical Besov spaces is the same as in the desired Sobolev functional framework, which allows us to conclude the proof (see the considerations in Subsection \ref{ss:cont_criterion+consequences}). \medskip \subsection*{Overview of the contents of this thesis} Before going further, we give a brief overview of the structure of the present thesis. Chapter \ref{chap:geophysics} aims at ``immersing'' the reader in the discipline of geophysical fluid dynamics, giving a succinct physical justification of the mathematical models that we will consider in the next chapters. In Chapter \ref{chap:multi-scale_NSF} we address the study of the singular perturbation problem, given by the Navier-Stokes-Fourier equations, in the scaling that we call ``critical''.
Chapter \ref{chap:BNS_gravity} is devoted to improving the previous scaling: we will go beyond the ``critical'' threshold. Finally, in the last chapter \ref{chap:Euler} we slightly modify the model and work on the asymptotic analysis of the density-dependent Euler equations. Moreover, we focus on the study of the lifespan of their solutions, proving an ``asymptotically global'' well-posedness result. At the end of this thesis, there are two further sections devoted to future perspectives, together with an appendix containing some tools and well-known results used in the manuscript. \newpage \null \thispagestyle{empty} \newpage \mainmatter \chapter{Some geophysical considerations}\label{chap:geophysics} \begin{quotation} The scope of this chapter is to introduce the mathematical features of geophysical flows. The main reference is the book \cite{C-R} (see also \cite{K-C-D}, \cite{Ped}, \cite{Val}, \cite{Z}). We will briefly discuss the physical motivations of the mathematical models we will consider in the next chapters. The equations presented in the following paragraphs will be derived mainly from physical considerations. For this reason, the functions which appear in the sequel have to be considered smooth. \medbreak Let us give an overview of the chapter. First of all, after a brief introduction (see Section \ref{sect:intro}), we present the two main characters which influence the dynamics of geophysical fluids: the rotation (see Section \ref{sec:rotation}) and the stratification (we refer to Section \ref{sec:strat}). In Section \ref{sec:budgets} we show how to derive, from the physical point of view, the budget equations, while Section \ref{sec:Bous_app} is devoted to the Boussinesq approximation. Next, in Section \ref{sec:scales_motion}, we perform a scale analysis and define some important dimensionless numbers.
Section \ref{sec:TP_thm} is dedicated to the celebrated Taylor-Proudman theorem. We conclude this chapter by discussing stratified and quasi-incompressible fluids (see Section \ref{sec:strat_q-i}) and by rewriting the Navier-Stokes system in its dimensionless form (Section \ref{sec:NS_dimenless}). \end{quotation} \section{The geophysical fluid flows}\label{sect:intro} Geophysical fluid dynamics (GFD) studies the naturally occurring flows on large scales that mostly take place on Earth, but also on other planets or stars. The discipline encompasses the motion of both fluid phases: liquids (e.g. the waters of the ocean) and gases (e.g. the air in the Earth's atmosphere or in other planets' atmospheres). In addition, it is on large scales that the common features of atmospheric and oceanic dynamics come to light. In most problems concerning GFD, either the ambient rotation (of the Earth, planets, stars), or density differences (warm and cold air masses, fresh and saline water), or both assume a relevant importance. Typical problems arising in geophysical fluid dynamics are, for example, the variability of the atmosphere (weather and climate dynamics) and of the ocean (waves, vortices and currents), vortices on other planets (Jupiter's Great Red Spot, see Figure \ref{fig:Jupiter}), and convection in stars. The effects of rotation and those of stratification distinguish GFD from traditional fluid mechanics. The fact that the ambient is rotating (e.g. the Earth's rotation around its axis) introduces in the equations two acceleration terms that, in view of Newton's second law of motion, can be interpreted as forces: the Coriolis force and the centrifugal force. On the one hand, although the centrifugal effects are more palpable (on a planetary scale), they play a negligible role in the dynamics. On the other hand, the less intuitive Coriolis force turns out to be a crucial character in describing the behaviour of geophysical motions.
The major effect of the Coriolis force is to impose a certain vertical rigidity on the fluid: if the Coriolis effect is strong enough, we can observe that a homogeneous flow displaces itself in vertical columns: particles along the same vertical move together and retain their alignment over long periods of time (e.g. currents in the Western North Atlantic). This property is known as the \emph{Taylor-Proudman theorem}. The result was first derived by S. S. Hough in 1897, but was named after the works (in 1916-1917) of G. I. Taylor and J. Proudman. Five years later, G. I. Taylor verified this property experimentally. \begin{figure}[htbp] \centering \includegraphics[scale=0.15]{Pictures/Great_Red_Spot} \caption{Jupiter's Great Red Spot (1979)}\label{fig:Jupiter} \end{figure} In large-scale atmospheric and oceanic flows, the previous state of perfect vertical rigidity is not realized, due to the fact that the rotation is not sufficiently fast and due to the appearance of stratification, i.e. density variations. The cause of those vertical effects is attributable to the presence of the gravitational force, which tends to lower the denser regions of the fluid and to raise the lighter ones. Under equilibrium conditions, the fluid is stably stratified in stacked horizontal layers of decreasing density. However, fluid motions disturb this equilibrium, which gravity tends to restore. We conclude this part by pointing out that the advances in GFD considerably affect our everyday life. The progress in the ability to predict with some confidence the paths of hurricanes has led to the creation of warning systems that have saved and will save numerous lives in sea and coastal areas (think, for instance, of Hurricane Frances in 2004, see Figure \ref{fig:Hurricane}).
\begin{figure}[htbp] \centering \includegraphics[scale=0.15]{Pictures/Hurricane_Frances} \caption{Hurricane Frances (2004)}\label{fig:Hurricane} \end{figure} Another fundamental aspect is that the combined dynamics of atmosphere and oceans contribute to the global climate. The behaviour of the atmosphere modulates, for example, agricultural success, and the ocean currents affect navigation, fisheries and the disposal of pollution. Thus, understanding and reliably predicting geophysical events and trends are scientific, economic, humanitarian and political priorities. \newpage \section{First character: the rotation} \label{sec:rotation} We are now interested in determining on which scales the ambient rotation is no longer negligible in the fluid dynamics. For that reason, we introduce the following criterion, involving the velocity scale $U$ and the length scale $L$ of the motion. If a particle moving at speed $U$ covers the distance $L$ in a time larger than or comparable to a rotation period (of the Earth, for example), we can expect the trajectory to be influenced by the ambient rotation. Therefore, we write $$ \overline\varepsilon:=\frac{\text{time of one revolution}}{\text{time taken by a particle to cover }L \text{ at } U}=\frac{2\pi/\underline{\Omega}}{L/U}=\dfrac{2\pi U}{\underline{\Omega}L} \, ,$$ where $\underline{\Omega}:=\frac{2\pi}{\text{time of one revolution}}$ is the ambient rotation rate. If $\overline \varepsilon\lesssim 1$, then we can conclude that the rotation is important. In geophysical flows the previous inequality holds: for instance, an ocean current typically flows at 10 cm/s over a distance of 10 km, and a wind blows at 10 m/s in an anticyclonic formation 1000 km wide. \subsection{The Coriolis force} In this paragraph, we give a short mathematical account of the rotating frame of reference. To simplify the exposition, we focus on the two-dimensional case.
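Before proceeding, the criterion $\overline \varepsilon \lesssim 1$ can be checked numerically on the two examples quoted above. A minimal sketch, assuming Earth's rotation rate $\underline{\Omega}\approx 7.29\cdot 10^{-5}$ rad/s:

```python
import math

# Criterion epsilon_bar = 2*pi*U / (Omega*L) for the two examples in the
# text; OMEGA is Earth's rotation rate (assumed value, in rad/s).
OMEGA = 7.29e-5

def epsilon_bar(U, L):
    """Ratio of the rotation period to the advective time L/U."""
    return 2.0 * math.pi * U / (OMEGA * L)

eps_ocean = epsilon_bar(U=0.1, L=10e3)     # ocean current: 10 cm/s over 10 km
eps_wind = epsilon_bar(U=10.0, L=1000e3)   # wind: 10 m/s over 1000 km

print(f"ocean: {eps_ocean:.2f}, wind: {eps_wind:.2f}")
```

Both ratios come out slightly below unity, confirming that, at these scales, rotation cannot be neglected.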
Let $X^1$ and $X^2$ be the axes of the inertial framework of reference and $x^1$, $x^2$ be those of the rotating framework with angular velocity $\underline{\Omega}$. We denote by $\vec I$, $\vec J$ and $\vec i$, $\vec j$ the corresponding unit vectors (see Figure \ref{fig:ref} below). \begin{figure}[htbp] \centering \includegraphics[scale=0.2]{Pictures/reference_syst} \caption{Inertial framework versus rotating framework}\label{fig:ref} \end{figure} Then, it follows that \begin{equation*} \begin{split} \vec I&=\vec i \cos (\underline{\Omega}t)-\vec j\sin (\underline{\Omega}t)\\ \vec J&=\vec i \sin (\underline{\Omega} t)+\vec j \cos(\underline{\Omega}t) \end{split} \end{equation*} and the position vector is defined as \begin{equation}\label{definition_r} \begin{split} \vec r&=X^1\, \vec I+X^2\, \vec J\\ &=x^1\, \vec i+x^2\, \vec j\, . \end{split} \end{equation} Thus, it is easy to find that \begin{equation*} \begin{split} x^1&=X^1\cos (\underline{\Omega}t)+X^2\sin (\underline{\Omega}t)\\ x^2&=-X^1\sin (\underline{\Omega}t)+X^2\cos (\underline{\Omega}t)\, . \end{split} \end{equation*} At this point, taking the first derivative in time yields: \begin{equation*} \begin{split} \frac{dx^1}{dt}&=\frac{dX^1}{dt}\cos (\underline{\Omega}t)+\frac{dX^2}{dt}\sin (\underline{\Omega}t)+\underline{\Omega}x^2\\ \frac{dx^2}{dt}&=-\frac{dX^1}{dt}\sin (\underline{\Omega}t)+\frac{dX^2}{dt}\cos (\underline{\Omega}t)-\underline{\Omega}x^1\, . \end{split} \end{equation*} The previous expressions are the components of the relative velocity: \begin{equation*} \vec u =\frac{dx^1}{dt}\, \vec i+\frac{dx^2}{dt}\, \vec j=u^1\, \vec i+u^2 \, \vec j\, . \end{equation*} Similarly the absolute velocity is defined as \begin{equation*} \vec U=\frac{dX^1}{dt}\, \vec I+\frac{dX^2}{dt}\, \vec J\, . 
\end{equation*} Rewriting the absolute velocity in terms of the rotating framework, we get \begin{equation*} \begin{split} \vec U&=\left(\frac{dX^1}{dt}\cos (\underline{\Omega}t)+\frac{dX^2}{dt}\sin (\underline{\Omega}t)\right)\, \vec i+ \left(-\frac{dX^1}{dt}\sin (\underline{\Omega}t)+\frac{dX^2}{dt}\cos (\underline{\Omega}t)\right)\, \vec j\\ &=U^1\, \vec i+ U^2\, \vec j\, . \end{split} \end{equation*} Then, comparing the absolute and relative velocities, one has \begin{equation*} \begin{split} U^1&=u^1-\underline{\Omega}x^2\\ U^2&=u^2+\underline{\Omega}x^1\, . \end{split} \end{equation*} This means that the absolute velocity is the relative velocity plus the entrainment velocity caused by the ambient rotation. In a similar manner, we can deduce that \begin{equation*} \frac{d^2x^1}{dt^2}=\left(\frac{d^2X^1}{dt^2}\cos (\underline{\Omega}t)+\frac{d^2X^2}{dt^2}\sin (\underline{\Omega}t)\right)+ 2\underline{\Omega}U^2-|\underline{\Omega}|^2x^1 \end{equation*} and \begin{equation*} \frac{d^2x^2}{dt^2}=\left(-\frac{d^2X^1}{dt^2}\sin (\underline{\Omega}t)+\frac{d^2X^2}{dt^2}\cos (\underline{\Omega}t)\right)- 2\underline{\Omega}U^1-|\underline{\Omega}|^2x^2\, . \end{equation*} In terms of acceleration, we have \begin{equation*} \begin{split} \vec a &=\frac{d^2x^1}{dt^2}\, \vec i+\frac{d^2x^2}{dt^2}\, \vec j=a^1\, \vec i+a^2\, \vec j\\ \vec A&=\frac{d^2X^1}{dt^2}\, \vec I+\frac{d^2X^2}{dt^2}\, \vec J=A^1\, \vec i+A^2\, \vec j \end{split} \end{equation*} and so \begin{equation*} \begin{split} A^1&=a^1-2\underline{\Omega}u^2-|\underline{\Omega}|^2x^1\\ A^2&=a^2+2\underline{\Omega}u^1-|\underline{\Omega}|^2x^2\, . \end{split} \end{equation*} Now, we notice that the absolute acceleration differs from the relative one by two contributions: the term proportional to $\underline{\Omega}$ and to the relative velocity, called the \emph{Coriolis acceleration}, and the term proportional to $|\underline{\Omega}|^2$ and to the relative coordinates, i.e.
the \emph{centrifugal acceleration}. The centrifugal force acts as an outward pull, whereas the Coriolis force depends on the relative speed. In three dimensions, one can repeat the above computations, deriving \begin{equation*} \begin{split} \vec U&=\vec u+ \underline{\vec \Omega}\times \vec r\\ \vec A&=\vec a+2 \underline{\vec \Omega}\times \vec u+ \underline{\vec \Omega}\times \left( \underline{\vec \Omega}\times \vec r\right)\, , \end{split} \end{equation*} where the symbol $\times$ stands for the cross product of vectors in $\mathbb{R}^3$ and $\underline{\vec \Omega}=\underline{\Omega}\, \vec k$ (with $\vec k$ the unit vector in the third dimension). This means that, in order to take a time derivative with respect to the inertial frame, we have to apply the operator $$ \frac{d}{dt}+\underline{\vec \Omega}\times $$ in the rotating frame of reference. \subsection{The centrifugal force} Unlike the Coriolis force, which is proportional to the velocity (as we have seen above), the centrifugal force depends on the rotation rate and on the distance of the particle from the rotation axis. The centrifugal force is responsible for the slightly flattened shape of the planets. \begin{figure}[htbp] \centering \includegraphics[scale=0.2]{Pictures/Centrifugal_Force} \caption{The effects of the centrifugal force}\label{fig:centrifugal-force} \end{figure} For example, due to the centrifugal effects, the terrestrial equatorial radius is 6378 km, slightly greater than the polar radius of 6357 km. Moreover, the centrifugal effects exert an outward pull on particles, which nevertheless do not fly off into space, thanks to gravity. However, the centrifugal force affects gravity: it shifts the direction of gravity away from the Earth's centre, thus weakening the gravitational effects.
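The two-dimensional kinematic relations derived in the previous subsection can be verified symbolically; a minimal sketch with SymPy, whose variable names mirror the notation of the text:

```python
import sympy as sp

# Check that differentiating x^1, x^2 (rotating-frame coordinates)
# reproduces the claimed relative-velocity components, including the
# entrainment terms +Omega*x^2 and -Omega*x^1.
t, Om = sp.symbols('t Omega', real=True)
X1, X2 = sp.Function('X1')(t), sp.Function('X2')(t)

x1 = X1 * sp.cos(Om * t) + X2 * sp.sin(Om * t)
x2 = -X1 * sp.sin(Om * t) + X2 * sp.cos(Om * t)

u1_claimed = X1.diff(t) * sp.cos(Om * t) + X2.diff(t) * sp.sin(Om * t) + Om * x2
u2_claimed = -X1.diff(t) * sp.sin(Om * t) + X2.diff(t) * sp.cos(Om * t) - Om * x1

assert sp.simplify(x1.diff(t) - u1_claimed) == 0
assert sp.simplify(x2.diff(t) - u2_claimed) == 0
print("velocity transformation verified")
```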
\section{Second character: the stratification effects}\label{sec:strat} As mentioned above, geophysical fluids consist of fluid masses of different densities, which the gravitational action tends to arrange in horizontal layers. The motion, on the contrary, disturbs this equilibrium, raising the denser zones and lowering the lighter ones. In this way, the potential energy increases at the expense of the kinetic energy, and the flow thus slows down. Therefore, the importance of stratification can be evaluated in terms of potential and kinetic energies. We denote by $\Delta \varrho$ the scale of density variations and by $H$ its height scale. We perturb the stratification, raising a fluid particle of density $\varrho_0+\Delta \varrho$ over the height $H$ and lowering a fluid element of density $\varrho_0$ over the same $H$. The corresponding change in potential energy, per unit of volume, is \begin{equation*} (\varrho_0 +\Delta \varrho)gH-\varrho_0gH=gH\,\Delta \varrho\, . \end{equation*} Now, we define \begin{equation*} \overline \sigma:=\dfrac{\frac{1}{2}\varrho_0|U|^2}{gH\, \Delta \varrho}\, , \end{equation*} i.e. the ratio of the kinetic energy (per unit volume) $\frac{1}{2}\varrho_0|U|^2$ to the potential energy. Therefore, if $\overline \sigma\lesssim 1$, the stratification effects cannot be ignored in the dynamics of the fluid. In geophysical flows, an interesting situation is when rotation and stratification effects are both important, i.e. $\overline \varepsilon\sim 1$ and $\overline \sigma \sim 1$. This implies that \begin{equation*} L\sim \frac{U}{\underline{\Omega}}\quad \quad \text{and}\quad \quad U\sim \sqrt{\frac{\Delta \varrho}{\varrho_0}gH}\, . \end{equation*} In this way, we have a fundamental length scale $$ L\sim \frac{1}{\underline{\Omega}}\sqrt{\frac{\Delta \varrho}{\varrho_0}gH}\, .
$$ On the Earth, the typical length and velocity scales for the atmosphere and oceans are, respectively, \begin{equation*} \begin{split} L_{\rm atm}\sim 500\; {\rm km}\quad \quad &\text{and}\quad \quad U_{\rm atm}\sim 30\; {\rm m/s}\\ L_{\rm oc}\sim 60\; {\rm km}\quad \quad &\text{and}\quad \quad U_{\rm oc}\sim 4\; {\rm m/s}\, . \end{split} \end{equation*} We point out that, in general, the oceanic motions are slower and slightly more confined than the atmospheric flows. \section{Mass, momentum, energy and entropy budgets}\label{sec:budgets} The object of this section is to establish, via physical considerations, the equations governing the motion of geophysical flows. \subsection{The continuity equation} We consider an infinitesimal cube with volume $\Delta V=\Delta x^1 \Delta x^2 \Delta x^3$ that is fixed in space. \begin{figure}[htbp] \centering \includegraphics[scale=0.3]{Pictures/Cube} \caption{Mass conservation in an infinitesimal cube}\label{fig:cube} \end{figure} The fluid crosses the cube in the $x^1$-direction, passing through the faces in the $x^2$-$x^3$ plane of area $\Delta A=\Delta x^2 \Delta x^3$. The net accumulation of fluid (inflow minus outflow) in the $x^1$-direction is: \begin{equation*} \Delta x^2 \Delta x^3 \left[(\varrho u^1)(x^1,x^2,x^3)-(\varrho u^1)(x^1+\Delta x^1, x^2,x^3)\right]=-\frac{\partial (\varrho u^1)}{\partial x^1}(x^1,x^2,x^3)\, \Delta x^1 \Delta x^2 \Delta x^3\, , \end{equation*} where $\varrho$ is the density of the fluid (in kg/$\rm m^3$) and $u^1$ (in m/s) is the first component of the flow velocity $\vec u=(u^1,u^2,u^3)$. Similarly, for the $x^2$ and $x^3$-directions, we have $$ -\left[\frac{\partial(\varrho u^2)}{\partial x^2}+\frac{\partial (\varrho u^3)}{\partial x^3}\right]\Delta x^1 \Delta x^2 \Delta x^3\, . $$ This net accumulation of fluid must be accompanied by an increase of fluid mass within the volume, represented by $$ \frac{\partial \varrho}{\partial t}\Delta x^1 \Delta x^2 \Delta x^3\, .
$$ Since the mass is conserved, one has \begin{equation*} \Delta x^1 \Delta x^2 \Delta x^3 \left[\frac{\partial \varrho}{\partial t}+\frac{\partial (\varrho u^1)}{\partial x^1}+\frac{\partial (\varrho u^2)}{\partial x^2}+\frac{\partial (\varrho u^3)}{\partial x^3}\right]=0 \end{equation*} and, therefore, \begin{equation}\label{chap1:mass_conservation} \partial_t \varrho +{\rm div}\, (\varrho \vec u)=0\, . \end{equation} The previous equation \eqref{chap1:mass_conservation} is the so-called \emph{mass continuity equation}. Sturm in \cite{Sturm} reports that Leonardo da Vinci had already derived a simplified form of the statement of mass conservation in the 15th century. However, the three-dimensional form is credited to Leonhard Euler (1707-1783). \subsection{The momentum budget} At this point, we are interested in performing budgets on momentum and energy. We sketch the approach for the momentum in 3-D. We consider the momentum $\varrho u^1$, which can be changed by forces and by the in- and out-flow of momentum. The momentum fluxes of the budget can be calculated as in the case of mass, except that $\varrho$ is now replaced by $\varrho u^1$. The forces applied to the infinitesimal cube in the $x^1$-direction are instead: \begin{equation*} \begin{split} &p(x^1,x^2,x^3)\Delta x^2 \Delta x^3-p(x^1+\Delta x^1, x^2,x^3) \Delta x^2 \Delta x^3\\ &- \mathbb S^{x^1 x^2}(x^1,x^2,x^3) \Delta x^1 \Delta x^3+ \mathbb S^{x^1 x^2}(x^1, x^2+\Delta x^2,x^3)\Delta x^1 \Delta x^3\\ &- \mathbb S^{x^1 x^1}(x^1,x^2,x^3) \Delta x^2 \Delta x^3+ \mathbb S^{x^1 x^1}(x^1+\Delta x^1, x^2,x^3)\Delta x^2 \Delta x^3\\ &- \mathbb S^{x^1 x^3}(x^1,x^2,x^3) \Delta x^1 \Delta x^2+ \mathbb S^{x^1 x^3}(x^1, x^2,x^3+\Delta x^3)\Delta x^1 \Delta x^2\, , \end{split} \end{equation*} where the viscous stresses $\mathbb S$ depend on the nature of the matter.
We have also assumed that $\mathbb S^{x^j x^k}=\mathbb S^{x^k x^j}$ (with $j \neq k$): if these stresses did not have the same intensity, the cube would be subjected to an unbalanced torque. \medskip \begin{figure}[htbp] \centering \includegraphics[scale=0.3]{Pictures/momentum} \caption{Two-dimensional situation with forces acting on the fluid parcel}\label{fig:moementum_budget} \end{figure} Therefore, with these forces and the in- and out-flow of momentum, we derive (for the $x^1$-direction): \begin{equation*} \frac{\partial}{\partial t}(\varrho u^1)+\frac{\partial}{\partial x^1}(\varrho u^1 u^1)+\frac{\partial}{\partial x^2}(\varrho u^1 u^2)+\frac{\partial}{\partial x^3}(\varrho u^1 u^3)=-\frac{\partial p}{\partial x^1}+\frac{\partial \mathbb S^{x^1 x^1}}{\partial x^1}+\frac{\partial \mathbb S^{x^1 x^2}}{\partial x^2}+\frac{\partial \mathbb S^{x^1 x^3}}{\partial x^3}\, . \end{equation*} Arguing similarly for the $x^2$ and $x^3$-directions (taking into account the gravitational force $-\varrho g\, \vec k$), we obtain the momentum equation: \begin{equation}\label{chap1:mom_equation} \partial_t (\varrho \vec u)+{\rm div}\, (\varrho \vec u \otimes \vec u)=-\nabla p + {\rm div}\, \mathbb S -\varrho g\, \vec k\, , \end{equation} where $\vec k=\vec e_3=(0,0,1)$ is the unit vector directed along the vertical axis. Since we are interested in fluids which are strongly affected by the ambient rotation, we have to make use of the relation $$ \left(\frac{\partial}{\partial t}\vec u\right)_{\rm inertial}=\frac{\partial}{\partial t}\vec u+2\underline{\vec \Omega}\times \vec u+\underline{\vec \Omega}\times \left(\underline{\vec \Omega}\times \vec r\right)\, .
$$ In the previous relation one can recognize: \begin{itemize} \item the Coriolis acceleration $2\underline{\vec \Omega}\times \vec u$; \item the centrifugal acceleration $\underline{\vec \Omega}\times \left( \underline{\vec \Omega}\times \vec r\right)=-\frac{1}{2}\nabla \left(\left|\underline{\vec \Omega}\times \vec r\right|^2\right)$, where $\vec r$ is the position defined in \eqref{definition_r}. \end{itemize} Then, recalling \eqref{chap1:mom_equation}, we finally get the momentum equation in the rotating frame: \begin{equation*}\label{chap1:mom_equation_rot} \partial_t (\varrho \vec u)+{\rm div}\, (\varrho \vec u \otimes \vec u)+2\underline{\vec \Omega}\times \varrho \vec u+\nabla p = {\rm div}\, \mathbb S -\varrho g\, \vec k+\frac{1}{2}\varrho \nabla \left(\left|\underline{\vec \Omega}\times \vec r\right|^2\right) \end{equation*} and in general \begin{equation}\label{chap1:mom_equation_gen} \partial_t (\varrho \vec u)+{\rm div}\, (\varrho \vec u \otimes \vec u)+2\underline{\vec \Omega}\times \varrho \vec u+\nabla p = {\rm div}\, \mathbb S +\varrho \vec f\, , \end{equation} where $\varrho \vec f$ is called the \emph{body force}. \subsection{The energy budget} The energy density $\mathcal E$ can be written as $$ \mathcal E := \frac{1}{2}\varrho |\vec u|^2+\varrho e\, , $$ where the function $e$ denotes the specific internal energy. Taking $\underline{\vec \Omega}= \underline{\Omega}\, \vec k$ and multiplying the momentum equation \eqref{chap1:mom_equation_gen} by $\vec u$, we deduce the kinetic energy balance: \begin{equation*} \partial_t \left(\frac{1}{2}\varrho |\vec u|^2\right)+{\rm div}\, \left(\frac{1}{2}\varrho |\vec u|^2 \vec u\right)={\rm div}\, (\mathbb T \vec u)- \mathbb T:\nabla \vec u +\varrho \vec f\cdot \vec u\, , \end{equation*} where we have defined $\mathbb A:\mathbb B:=\sum_{j,k=1}^3 A^{jk}B^{kj}$ and the stress tensor $$ \mathbb T:=\mathbb S-p\, {\rm Id}\, $$ according to Stokes' law. We recall that ${\rm Id}\,$ represents the identity matrix.
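The passage from the kinetic to the internal energy balance rests on the algebraic identity $\mathbb T:\nabla \vec u=\mathbb S:\nabla \vec u-p\, {\rm div}\, \vec u$, with $\mathbb T=\mathbb S-p\, {\rm Id}$. A minimal numerical sketch of this identity (the variable names are illustrative):

```python
import random

# Check that (S - p*Id) : G = S : G - p * div_u, with the contraction
# A:B = sum_{j,k} A^{jk} B^{kj} defined as in the text and G = grad(u).
random.seed(0)

def contract(A, B):
    return sum(A[j][k] * B[k][j] for j in range(3) for k in range(3))

S = [[random.random() for _ in range(3)] for _ in range(3)]  # viscous stress
G = [[random.random() for _ in range(3)] for _ in range(3)]  # velocity gradient
p = random.random()                                          # pressure
T = [[S[j][k] - p * (1 if j == k else 0) for k in range(3)] for j in range(3)]
div_u = sum(G[j][j] for j in range(3))  # Id : G is the trace of G

assert abs(contract(T, G) - (contract(S, G) - p * div_u)) < 1e-12
print("T : grad(u) = S : grad(u) - p div(u) verified")
```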
On the other hand, by virtue of the \emph{First law of thermodynamics}, the changes in the energy of the system are due only to external sources, i.e. \begin{equation}\label{chap1:energy_relation} \partial_t(\varrho e)+{\rm div}\, (\varrho e \vec u)+ {\rm div}\, \vec q=\mathbb S:\nabla \vec u- p\, {\rm div}\, \vec u +\varrho Q\, , \end{equation} where $\varrho Q$ represents the volumetric rate of internal energy production, and $\vec q$ is the internal energy flux. Therefore, the energy balance reads: \begin{equation*} \partial_t \mathcal E + {\rm div}\, (\mathcal E \vec u)+ {\rm div}\, (\vec q -\mathbb S \vec u+p\vec u)=\varrho \vec f\cdot \vec u +\varrho Q\, . \end{equation*} \subsection{The entropy budget}\label{subsec:entropy_bud} In accordance with the \emph{Second law of thermodynamics}, the quantities $p$, $e$ and $s$ are linked through Gibbs' relation \begin{equation*} \vartheta D_{\varrho,\vartheta}s=D_{\varrho, \vartheta}e +p\, D_{\varrho, \vartheta}\left(\frac{1}{\varrho}\right)\, , \end{equation*} where $D_{\varrho, \vartheta}$ stands for the differential with respect to the density $\varrho$ and the temperature $\vartheta$, and $s$ is the specific entropy. Accordingly, the internal energy balance \eqref{chap1:energy_relation} can be rewritten in the form of an entropy balance \begin{equation*} \partial_t(\varrho s)+{\rm div}\, (\varrho s \vec u)+ {\rm div}\, \left(\frac{\vec q}{\vartheta}\right)=\sigma +\frac{\varrho}{\vartheta}Q\, , \end{equation*} with the entropy production rate $$ \sigma :=\frac{1}{\vartheta}\left(\mathbb S:\nabla \vec u-\frac{\vec q\cdot \nabla \vartheta}{\vartheta}\right)\, . $$ Moreover, the \emph{Second law of thermodynamics} postulates that $\sigma$ must be non-negative for any admissible thermodynamic process. \section{Boussinesq approximation}\label{sec:Bous_app} In most geophysical flows, the density of the fluid presents ``small'' oscillations around a mean value.
Indeed, variations in density within one ocean basin rarely exceed 0.3\% (instead, in the atmosphere, density variations due to the winds are usually no more than 5\%). So, it appears physically justifiable to assume that the fluid density $\varrho$ does not depart much from a reference state $\overline \varrho$, i.e. \begin{equation*} \varrho=\overline \varrho+\varrho^\prime (t, x^1,x^2,x^3)\quad \quad \text{with}\quad \quad |\varrho^\prime|\ll \overline\varrho\, . \end{equation*} Therefore, the continuity equation \eqref{chap1:mass_conservation} becomes \begin{equation*} \overline \varrho \left(\frac{\partial u^1}{\partial x^1}+\frac{\partial u^2}{\partial x^2}+\frac{\partial u^3}{\partial x^3}\right)+\varrho^\prime \left(\frac{\partial u^1}{\partial x^1}+\frac{\partial u^2}{\partial x^2}+\frac{\partial u^3}{\partial x^3}\right)+ \left(\frac{\partial \varrho^\prime}{\partial t}+u^1\frac{\partial \varrho^\prime}{\partial x^1}+u^2\frac{\partial \varrho^\prime}{\partial x^2}+u^3\frac{\partial \varrho^\prime}{\partial x^3}\right)=0\, . \end{equation*} Since $|\varrho^\prime|\ll \overline \varrho$, only the first group of terms has to be retained so that $$ \frac{\partial u^1}{\partial x^1}+\frac{\partial u^2}{\partial x^2}+\frac{\partial u^3}{\partial x^3}=0\, . $$ Physically, the statement means that we are dealing with an incompressible fluid. \section{Scales of motion and dimensionless numbers}\label{sec:scales_motion} We perform in this section an analysis of scales characterizing the geophysical flows. First of all, we compare the time, length and velocity scales with respect to the ambient rotation rate $\underline{\Omega}$. Typically, one has $$ T\gtrsim \frac{1}{\underline{\Omega}}\quad \quad \text{and}\quad \quad \frac{U}{L}\lesssim \underline{\Omega}\, . 
$$ It is generally not required to discriminate between the two horizontal directions and velocities: we indeed assign the same length scale $L$ and the same velocity scale $U$ to the horizontal components. The same, however, cannot be said of the vertical direction. Geophysical flows are in fact confined to domains which are wider than they are thick: the aspect ratio $H/L$ is small. If we assume the \emph{Boussinesq approximation}, the terms in the continuity equation (in its reduced form) have orders of magnitude $$ \frac{U}{L}\; , \quad \quad \frac{U}{L}\; , \quad \quad \frac{W}{H} \; .$$ By geophysical considerations, the vertical velocity scale must be constrained by $$ W\lesssim \frac{H}{L}U $$ and, by virtue of $H\ll L$, one has $W\ll U$. In other words, large-scale geophysical flows are shallow ($H \ll L$) and nearly two-dimensional ($W\ll U$). At this point, we consider the momentum equation \eqref{chap1:mom_equation_gen} under the Boussinesq approximation, in which (only for clarity of exposition) we take ${\rm div}\, \mathbb S=\nu \Delta \vec u$, $\vec f=-g\vec k$ and $\underline{\vec \Omega}=\underline{\Omega}\cos \varphi \, \vec j+\underline{\Omega}\sin \varphi \, \vec k$, where $\varphi$ is the latitude.
Then, the equation reads \begin{equation}\label{chap1:system_momentum_3D} \begin{cases} \frac{\partial u^1}{\partial t}+u^1\frac{\partial u^1}{\partial x^1}+u^2\frac{\partial u^1}{\partial x^2}+u^3\frac{\partial u^1}{\partial x^3}+f_\ast u^3-fu^2=-\frac{1}{\overline \varrho}\frac{\partial p}{\partial x^1}+\nu \left(\frac{\partial^2 u^1}{\partial x^1 \partial x^1}+\frac{\partial^2 u^1}{\partial x^2 \partial x^2}+\frac{\partial^2 u^1}{\partial x^3 \partial x^3}\right)\\ \frac{\partial u^2}{\partial t}+u^1\frac{\partial u^2}{\partial x^1}+u^2\frac{\partial u^2}{\partial x^2}+u^3\frac{\partial u^2}{\partial x^3}+f u^1=-\frac{1}{\overline \varrho}\frac{\partial p}{\partial x^2}+\nu \left(\frac{\partial^2 u^2}{\partial x^1 \partial x^1}+\frac{\partial^2 u^2}{\partial x^2 \partial x^2}+\frac{\partial^2 u^2}{\partial x^3 \partial x^3}\right)\\ \frac{\partial u^3}{\partial t}+u^1\frac{\partial u^3}{\partial x^1}+u^2\frac{\partial u^3}{\partial x^2}+u^3\frac{\partial u^3}{\partial x^3}-f_\ast u^1=-\frac{1}{\overline \varrho}\frac{\partial p}{\partial x^3}-\frac{g\varrho}{\overline \varrho}+\nu \left(\frac{\partial^2 u^3}{\partial x^1 \partial x^1}+\frac{\partial^2 u^3}{\partial x^2 \partial x^2}+\frac{\partial^2 u^3}{\partial x^3 \partial x^3}\right)\, , \end{cases} \end{equation} with $f:=2\underline{\Omega}\sin \varphi$ and $f_\ast := 2 \underline{\Omega}\cos \varphi$. Let us consider the $x^1$-momentum: the terms scale sequentially as \begin{equation}\label{chap1:scales_mom} \frac{U}{T}\; , \; \quad \frac{U^2}{L}\; ,\; \quad \frac{U^2}{L}\; ,\;\quad \frac{WU}{H}\; , \; \quad \underline{\Omega}W\; ,\; \quad \underline{\Omega}U\; ,\; \quad \frac{P}{\overline \varrho L}\; , \; \quad \nu \frac{U}{L^2}\; , \; \quad \nu \frac{U}{L^2}\; , \; \quad \nu \frac{U}{H^2}\, . \end{equation} Due to the fact that $W\ll U$, the term $\underline{\Omega}W$ is always smaller than $\underline{\Omega}U$ and can be safely neglected. 
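To fix ideas, the orders of magnitude in \eqref{chap1:scales_mom} can be evaluated with the typical atmospheric scales quoted before; a rough sketch, in which the depth $H$ and the kinematic viscosity $\nu$ are assumed values chosen only for illustration:

```python
# Orders of magnitude of the x^1-momentum terms, using the atmospheric
# scales quoted earlier; OMEGA, H and nu are assumed values.
OMEGA = 7.29e-5          # Earth's rotation rate [rad/s]
U, L = 30.0, 500e3       # horizontal velocity [m/s] and length [m] scales
H = 10e3                 # vertical scale [m] (assumed tropospheric depth)
W = (H / L) * U          # vertical velocity bound W <~ (H/L) U
T = L / U                # advective time scale
nu = 1.5e-5              # kinematic viscosity of air [m^2/s] (assumed)

terms = {
    "U/T": U / T, "U^2/L": U**2 / L, "WU/H": W * U / H,
    "Omega*W": OMEGA * W, "Omega*U": OMEGA * U, "nu*U/H^2": nu * U / H**2,
}
for name, value in terms.items():
    print(f"{name:9s} ~ {value:.1e}")
```

The Coriolis term $\underline{\Omega}U$ and the advective terms come out comparable, while $\underline{\Omega}W$ and the viscous contribution are orders of magnitude smaller, in agreement with the discussion above.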
Moreover, since rotation is of fundamental importance in geophysical fluid dynamics, the pressure term will scale as the Coriolis force, i.e. $$ \frac{P}{\overline \varrho L}=\underline{\Omega}U\, . $$ On physical grounds, the last three terms in \eqref{chap1:scales_mom} are also small: $$ \nu\frac{U}{L^2}\, ,\, \nu\frac{U}{H^2}\lesssim \underline{\Omega}U\, . $$ Similar arguments apply to the $x^2$-momentum equation. Now, let us analyse the vertical momentum in \eqref{chap1:system_momentum_3D}, which scales as \begin{equation*} \frac{W}{T}\; , \quad \quad \frac{UW}{L}\; , \quad \quad \frac{UW}{L}\; ,\quad \quad \frac{W^2}{H}\; ,\quad \quad \underline{\Omega}U\; ,\quad \quad \frac{P}{\overline \varrho H}\; , \quad \quad\frac{g\Delta \varrho}{\overline \varrho}\; , \quad \quad \nu \frac{W}{L^2}\; , \quad \quad \nu \frac{W}{L^2}\; , \quad \quad \nu \frac{W}{H^2}\, . \end{equation*} Regarding the first term $W/T$, one has $$ \frac{W}{T}\lesssim \underline{\Omega}W\ll \underline{\Omega}U\, . $$ The next three terms are treated analogously. Now, $$ \frac{\underline{\Omega}U}{\frac{P}{\overline \varrho H}}\sim \frac{H}{L}\ll 1\, . $$ The previous relation establishes the smallness of the fifth term. As already seen before: $$ \nu\frac{W}{L^2}\, ,\,\nu\frac{W}{H^2}\ll \underline{\Omega}U\, . $$ In the end, only two terms remain, leading to the so-called \emph{hydrostatic balance} $$ 0=-\frac{1}{\overline \varrho}\frac{\partial p}{\partial x^3}-\frac{g \varrho}{\overline \varrho}\, , $$ and we observe that, in the absence of stratification, $p$ is nearly $x^3$-independent. At this point, our main goal is to introduce some important dimensionless numbers. In the previous scale analysis the term $\underline{\Omega}U$ was central.
A division of \eqref{chap1:scales_mom} by $\underline{\Omega}U$ allows us to compare the importance of the other terms with that of the Coriolis force, yielding (for the $x^1$-momentum): \begin{equation*} \frac{1}{\underline{\Omega} T}\; , \quad \quad \frac{U}{\underline{\Omega}L}\; , \quad \quad \frac{U}{\underline{\Omega}L}\; ,\quad \quad \frac{WL}{UH}\frac{U}{\underline{\Omega}L}\; ,\quad \quad 1\; ,\quad \quad \frac{P}{\overline \varrho \underline{\Omega}LU}\; , \quad \quad \frac{\nu}{\underline{\Omega}L^2}\; , \quad \quad \frac{\nu}{\underline{\Omega}L^2}\; , \quad \quad \frac{\nu}{\underline{\Omega}H^2}\, . \end{equation*} The first ratio \begin{equation*} Ro_T:=\frac{1}{\underline{\Omega}T} \end{equation*} is called the \emph{temporal Rossby number}. \newpage The next one is the so-called \emph{Rossby number} \begin{equation*} Ro:=\frac{U}{\underline{\Omega}L}\, , \end{equation*} which compares advection to the Coriolis force: it is at most of the order of unity. In addition, the last ratio measures the importance of vertical friction and is called the \emph{Ekman number} $$ Ek:=\frac{\nu}{\underline{\Omega}H^2}\, . $$ If we now focus our attention on the $x^3$-momentum equation, we also have the two terms $$ \frac{P}{\overline \varrho H}\; ,\quad \quad \frac{g\Delta \varrho}{\overline \varrho}\, . $$ Taking the ratio between these two quantities, we obtain $$ \frac{gH\Delta \varrho}{P}=\frac{gH\Delta \varrho}{\overline \varrho \underline{\Omega}LU}=Ro \, \frac{gH\Delta \varrho}{\overline \varrho U^2}\, . $$ This leads to another dimensionless number: the \emph{Richardson number} $$ Ri:=\frac{gH\Delta \varrho}{\overline \varrho U^2}\, . $$ \section{The Taylor-Proudman theorem}\label{sec:TP_thm} Let us now focus on rapidly rotating fluids, ignoring frictional and density-variation effects, i.e. $$ Ro_T\ll 1\; , \quad \quad Ro\ll 1\; ,\quad \quad Ek\ll 1\, .
$$ Therefore, we get \begin{equation}\label{T-P_system} \begin{cases} -fu^2=-\frac{1}{\overline \varrho}\frac{\partial p}{\partial x^1}\\ fu^1=-\frac{1}{\overline \varrho}\frac{\partial p}{\partial x^2}\\ 0=-\frac{1}{\overline \varrho}\frac{\partial p}{\partial x^3}\\ {\rm div}\, \vec u=0\, . \end{cases} \end{equation} If we now take the vertical derivative in the first and second equations of \eqref{T-P_system}, we infer that $$ -f\frac{\partial u^2}{\partial x^3}=-\frac{1}{\overline \varrho}\frac{\partial}{\partial x^3}\left(\frac{\partial p}{\partial x^1}\right)=0 $$ and $$ f\frac{\partial u^1}{\partial x^3}=-\frac{1}{\overline \varrho}\frac{\partial}{\partial x^3}\left(\frac{\partial p}{\partial x^2}\right)=0\, . $$ The previous relations mean $\partial_{3}u^1=\partial_{3}u^2=0$. This is the celebrated \emph{Taylor-Proudman theorem}. Physically, it states that the horizontal velocity field has no vertical shear and that particles on the same vertical move together (forming the \emph{Taylor columns}). \begin{figure}[htbp] \centering \includegraphics[scale=0.3]{Pictures/Taylor_column} \caption{Side view of a Taylor column experiment}\label{fig:Taylor_column_exp} \end{figure} \section{Stratified and quasi-incompressible fluids}\label{sec:strat_q-i} \subsection{The Brunt-V\"ais\"al\"a frequency} Until now, we have devoted our attention to the effects of rotation, leaving stratification aside. We therefore first introduce a basic measure of stratification, the \emph{Brunt-V\"ais\"al\"a frequency}, and then the accompanying dimensionless ratio, the \emph{Froude number}. We consider a fluid in static equilibrium. We take a fluid parcel (of volume $V$) at height $x^3$ above a reference level, with density $\varrho (x^3)$, and we displace it to the higher level $x^3+h$, where the ambient density is $\varrho (x^3+h)$; see Figure \ref{fig:B-V_frequency} below.
If the fluid is incompressible, by Archimedes' buoyancy principle the parcel is subject to the net (upward) force $$ g\left( \varrho (x^3+h)-\varrho(x^3)\right) V\, . $$ Thus, Newton's law yields $$ \varrho(x^3)V\frac{d^2 h}{d t^2}=g\left( \varrho (x^3+h)-\varrho(x^3)\right) V\, . $$ Using a Taylor expansion for the term on the right-hand side and under the Boussinesq approximation, one gets $$ \frac{d^2h}{dt^2}-\frac{g}{\overline \varrho}\frac{d\varrho}{dx^3}h=0\, . $$ If the density is decreasing with height ($d\varrho/dx^3<0$), we can define the \emph{Brunt-V\"ais\"al\"a frequency} by $$ N^2:=-\frac{g}{\overline \varrho}\frac{d\varrho}{dx^3}\, . $$ \begin{figure}[htbp] \centering \includegraphics[scale=0.2]{Pictures/B-V_frequency} \caption{Fluid parcel in a stratified environment}\label{fig:B-V_frequency} \end{figure} Physically this means that, when the parcel is displaced upward, it is heavier than the surrounding fluid and is therefore subject to a downward force. The parcel, going down, acquires a vertical velocity and, when it reaches the original level, keeps moving downward (due to inertia). At this point, the parcel is surrounded by a heavier ambient fluid, so it is pushed back upward, and oscillations persist around the equilibrium level. \subsection{The measurement of stratification: the Froude number} In this paragraph, we illustrate how to derive the physical Froude number. \begin{figure}[htbp] \centering \includegraphics[scale=0.3]{Pictures/Froude_number} \caption{Deep oceanic currents over an irregular bottom}\label{fig:Froude_num} \end{figure} We consider a stratified fluid of thickness $H$ and frequency $N$. We suppose it moves with speed $U$ over an obstacle of length $L$ and height $\Delta x^3$ (see Figure \ref{fig:Froude_num} above). One can think of deep oceanic currents over an irregular seabed. The obstacle forces the fluid to move also vertically, which requires some additional gravitational energy.
Stratification will act to minimize such a vertical displacement, forcing the flow to go around the obstacle. To climb the obstacle the fluid needs a vertical velocity $$ W=\frac{\Delta x^3}{T}=\frac{U\Delta x^3}{L}\, . $$ At this point, the vertical displacement produces a density variation $$\Delta \varrho =\left|\frac{d\varrho}{dx^3}\right|\Delta x^3=\frac{\overline \varrho N^2}{g}\Delta x^3 \, .$$ As a consequence, one has also a pressure disturbance that, due to the hydrostatic balance, is $$ \Delta P=gH\Delta \varrho=\overline \varrho N^2H\Delta x^3\, , $$ which in turn causes a change in the fluid velocity \begin{equation}\label{chap1:rel_fluid_vel} \frac{U^2}{L}=\frac{\Delta P}{\overline \varrho L} \, . \end{equation} Therefore, the last relation \eqref{chap1:rel_fluid_vel} tells us that $U^2=N^2H\Delta x^3$. If we now take the ratio $\frac{W/H}{U/L}$, we obtain $$ \frac{W/H}{U/L}=\frac{\Delta x^3}{H}=\frac{U^2}{N^2H^2}\, . $$ We note that, if $U$ is less than $NH$, then $W/H$ is less than $U/L$. This implies that the vertical motion cannot fully accommodate the horizontal displacement: the fluid is then deflected horizontally. In addition, the stronger the stratification, the smaller $U$ is compared to $NH$, and thus $W/H$ compared to $U/L$. For that reason, to measure the stratification, we define the \emph{Froude number} $$ Fr:=\frac{U}{NH}\, . $$ \subsection{The Mach number}\label{subsect:Mach_number} To define the Mach number, we consider a flow in which the density changes induced by the pressure are isentropic, i.e. $$ \frac{\partial p}{\partial x^i}=\overline c^2\frac{\partial \varrho}{\partial x^i} \quad \quad \quad \text{for }\; i=1,2,3 \; ,$$ where $\overline c$ is the sound speed. Therefore, the continuity equation reads: $$ {\rm div}\, \vec u=-\frac{1}{\varrho}\frac{D\varrho}{Dt}=-\frac{1}{\varrho \overline c^2}\frac{Dp}{Dt} \, ,$$ with $D/Dt=\partial/\partial t+\vec u\cdot \nabla$ the material derivative.
Using the following dimensionless variables (for $i,j=1,2,3$) \begin{equation*} x^i_\ast=\frac{x^i}{L}\; ,\quad \quad t_\ast=\frac{Ut}{L}\; ,\quad \quad u^j_\ast=\frac{u^j}{U}\; , \quad \quad p_\ast=\frac{p}{\overline \varrho U^2}\; , \quad \quad \varrho_\ast=\frac{\varrho}{\overline \varrho}\, , \end{equation*} where $\overline \varrho$ is a reference density, we obtain $$ {\rm div}\,_\ast \vec u_\ast=-\frac{U^2}{\overline c^2}\frac{1}{\varrho_\ast}\frac{Dp_\ast}{Dt_\ast}\, , $$ with ${\rm div}\,_\ast= L\, {\rm div}\,$. Then, we can define the \emph{Mach number} as $$ Ma:= \frac{U}{\overline c}\, , $$ which sets the size of isentropic departures from incompressible flow: flows are considered incompressible when $Ma<0.3$. \section{The dimensionless Navier-Stokes equations}\label{sec:NS_dimenless} We start now from the Navier-Stokes momentum equation for incompressible flows (${\rm div}\, \vec u=0$) of the following form \begin{equation}\label{Sect:rel_NS} \varrho \left(\frac{D}{Dt}\vec u\right)_{\rm inertial}=-\nabla p+\varrho \vec g+\nu \Delta \vec u\, . \end{equation} Recalling the connection between the inertial and the rotating frames, one has $$ \left(\frac{D}{Dt}\vec u\right)_{\rm inertial}=\frac{D}{Dt}\vec u +2\underline{\vec \Omega}\times \vec u+\underline{\vec \Omega}\times \left(\underline{\vec \Omega}\times \vec r\right)=\frac{D}{Dt}\vec u +2\underline{\vec \Omega}\times \vec u- \nabla \left(\frac{1}{2}\left|\underline{\vec \Omega}\times \vec r\right|^2\right)\, ,$$ where $\underline{\vec \Omega}=\underline{\Omega}\vec k$ ($\underline{\Omega}$ being the associated scalar magnitude) and we will call $F:=\frac{1}{2}\left|\underline{\vec \Omega}\times \vec r\right|^2$. Moreover, we recall that the effects of compressibility can be recovered from the continuity equation (see the previous Subsection \ref{subsect:Mach_number}).
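As a rough sanity check, the dimensionless numbers introduced so far can be evaluated numerically. The values of $U$, $L$, $H$, $\underline{\Omega}$, $\nu$, $N$ and $\overline c$ below are assumed, typical oceanic figures chosen purely for illustration (they do not appear in the text):

```python
# Illustrative evaluation of the dimensionless numbers defined above.
# All numerical values are assumed typical oceanic figures.
Omega = 7.3e-5   # rotation rate [rad/s]
U, L, H = 0.1, 1e5, 1e3   # velocity [m/s], horizontal / vertical lengths [m]
T = L / U        # advective time scale [s]
nu = 1e-6        # kinematic viscosity [m^2/s]
N = 1e-2         # Brunt-Vaisala frequency [1/s]
c_bar = 1.5e3    # sound speed in water [m/s]

Ro_T = 1.0 / (Omega * T)    # temporal Rossby number
Ro   = U / (Omega * L)      # Rossby number
Ek   = nu / (Omega * H**2)  # Ekman number
Fr   = U / (N * H)          # Froude number
Ma   = U / c_bar            # Mach number

print(f"Ro_T = {Ro_T:.3e}, Ro = {Ro:.3e}, Ek = {Ek:.3e}")
print(f"Fr = {Fr:.3e}, Ma = {Ma:.3e}")

# Rapidly rotating, strongly stratified, effectively incompressible regime:
assert Ro < 1 and Ro_T < 1 and Ek < 1
assert Fr < 1 and Ma < 0.3
```

With these assumed values the flow sits well inside the regime studied in Section \ref{sec:TP_thm} ($Ro_T,Ro,Ek\ll1$) and satisfies the incompressibility criterion $Ma<0.3$.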
So, relation \eqref{Sect:rel_NS} can be rendered dimensionless by defining (for $i,j=1,2,3$) \begin{equation*} x^i_\ast=\frac{x^i}{L}\; ,\quad \quad t_\ast=\underline{\Omega}t\; ,\quad \quad u^j_\ast=\frac{u^j}{U}\; , \quad \quad p_\ast=\frac{p}{\varrho U^2}\; , \quad \quad g^j_\ast=\frac{g^j}{g}\, , \end{equation*} where $g$ is the acceleration of gravity. Therefore, we get \begin{equation}\label{system_NSFC_nodim} St\frac{\partial \vec u_\ast}{\partial t_\ast}+\vec u_\ast \cdot \nabla_\ast \vec u_\ast +\frac{2}{Ro}\vec e_3\times \vec u_\ast =-\nabla_\ast p_\ast +\frac{1}{Fr^2}\vec g_\ast+\frac{1}{Re}\Delta_\ast \vec u_\ast+\frac{1}{Ro^2}\nabla_\ast F_\ast \end{equation} where $\nabla_\ast =L\nabla$. In the previous equation the symbols $St$ and $Re$ stand for the \emph{Strouhal} and \emph{Reynolds} numbers (see \cite{K-C-D} for more details). \let\cleardoublepage\clearpage \part{The Navier-Stokes-Coriolis equations} \let\cleardoublepage\clearpage \chapter{A multi-scale limit}\label{chap:multi-scale_NSF} \begin{quotation} In this chapter we address the singular perturbation problem given by the full Navier-Stokes-Fourier system, which (for the reader's convenience) we recall to be \begin{equation}\label{chap2:syst_NSFC} \begin{cases} \partial_t \varrho + {\rm div}\, (\varrho\vec{u})=0\ \\[3ex] \partial_t (\varrho\vec{u})+ {\rm div}\,(\varrho\vec{u}\otimes\vec{u}) + \dfrac{\vec{e}_3 \times \varrho\vec{u}}{Ro}\, + \dfrac{1}{Ma^2} \nabla_x p(\varrho,\vartheta) \\[1ex] \qquad \qquad \qquad \qquad \qquad \qquad \qquad \; \; \; ={\rm div}\, \mathbb{S}(\vartheta,\nabla_x\vec{u}) + \dfrac{\varrho}{Ro^2} \nabla_x F + \dfrac{\varrho}{Fr^2} \nabla_x G \\[3ex] \partial_t \bigl(\varrho s(\varrho, \vartheta)\bigr) + {\rm div}\, \bigl(\varrho s (\varrho,\vartheta)\vec{u}\bigr) + {\rm div}\,\left(\dfrac{\q(\vartheta,\nabla_x \vartheta )}{\vartheta} \right) = \sigma\,, \end{cases} \end{equation} with $St=1$ and $Re=1$ (see also system \eqref{system_NSFC_nodim} in Section
\ref{sec:NS_dimenless}). The contents of this chapter are included in the article \cite{DS-F-S-WK}. \medskip Let us now give an outline of the chapter. In Section \ref{s:result} we collect our assumptions and we state our main results. In Section \ref{s:sing-pert} we study the singular perturbation problem, stating uniform bounds on our family of weak solutions and establishing constraints that the limit points have to satisfy. Section \ref{s:proof} is devoted to the proof of the convergence result for $m\geq2$ and $F\neq0$, employing the compensated compactness technique. In the last Section \ref{s:proof-1}, with the same approach, we prove the convergence result for $m=1$ and $F=0$; actually, in the absence of the centrifugal force, the same argument shows convergence for any $m>1$. \end{quotation} \section{The Navier-Stokes-Fourier system} \label{s:result} In this section, we formulate our working hypotheses (see Subsection \ref{ss:FormProb}) and we state our main results (in Subsection \ref{ss:results}). \subsection{Setting of the problem} \label{ss:FormProb} In this subsection, we present the rescaled Navier-Stokes-Fourier system with Coriolis, centrifugal and gravitational forces, which we are going to consider in our study, and we formulate the main working hypotheses. The material of this part is mostly classical: unless otherwise specified, we refer to \cite{F-N} for details. Paragraph \ref{sss:equilibrium} contains some original contributions concerning the analysis of the equilibrium states under our hypotheses on the specific form of the centrifugal and gravitational forces. \subsubsection{Primitive system}\label{sss:primsys} To begin with, let us introduce the ``primitive system'', i.e.
the rescaled compressible Navier-Stokes-Fourier system \eqref{chap2:syst_NSFC}, supplemented with the scaling \begin{equation} \label{eq_i:scales} Ro=\varepsilon \, , \quad Ma=\varepsilon^m \quad \text{and}\quad Fr=\varepsilon^{m/2}\,, \qquad\qquad \mbox{ for some }\quad m\geq 1\, , \end{equation} where $\varepsilon\in\,]0,1]$ is a small parameter. Thus, the system consists of the continuity equation (conservation of mass), the momentum equation, the entropy balance and the total energy balance: respectively, \begin{align} & \partial_t \vre + {\rm div}\, (\vre\ue)=0 \label{ceq}\tag{NSF$_{\ep}^1$} \\ & \partial_t (\vre\ue)+ {\rm div}\,(\vre\ue\otimes\ue) + \frac{1}{\ep}\,\vec{e}_3 \times \vre\ue + \frac{1}{\ep^{2m}} \nabla_x p(\vre,\tem) \label{meq}\tag{NSF$_{\ep}^2$} \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad ={\rm div}\, \mathbb{S}(\tem,\nabla_x\ue) + \frac{\vre}{\ep^2} \nabla_x F + \frac{\vre}{\ep^m} \nabla_x G \nonumber \\ & \partial_t \bigl(\vre s(\vre, \tem)\bigr) + {\rm div}\, \bigl(\vre s (\vre,\tem)\ue\bigr) + {\rm div}\,\left(\frac{\q(\tem,\nabla_x \tem )}{\tem} \right) = \sigma_\ep \label{eiq}\tag{NSF$_{\ep}^3$} \\ & \frac{d}{dt} \int_{\Omega_\varepsilon} \left( \frac{\ep^{2m}}{2} \vre|\ue|^2 + \vre e(\vre,\tem) - {\ep^m} \vre G - {\ep^{2(m-1)}} \vre F \right) dx = 0\,. \label{eeq}\tag{NSF$_{\ep}^4$} \end{align} The unknowns are the mass density $\vre=\vre(t,x)\geq0$ of the fluid, its velocity field $\ue=\ue(t,x)\in\mathbb{R}^3$ and its absolute temperature $\tem=\tem(t,x)\geq0$, where $t\in \; ]0,T[$ and $x\in \Omega_\varepsilon$; here $\Omega_\varepsilon$ denotes, for $\varepsilon \in \; ]0,1]$ fixed, the bounded domain \begin{equation}\label{dom} \Omega_\ep := {B}_{L_\varepsilon} (0) \times\,]0,1[\;,\qquad\qquad\mbox{ where }\qquad L_\varepsilon\,:=\,\frac{1}{\ep^{m+ \delta}}\,L_0 \end{equation} for $\delta >0$ and for some $L_0>0$ fixed. Here above, we have denoted by ${B}_{l}(x_0)$ the Euclidean ball of center $x_0$ and radius $l$ in $\mathbb{R}^2$.
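For the reader's convenience, one can check term by term that the scaling \eqref{eq_i:scales} indeed produces the singular coefficients appearing in \eqref{meq}:

```latex
% Substituting Ro = \ep, Ma = \ep^m and Fr = \ep^{m/2} into \eqref{chap2:syst_NSFC}:
\frac{1}{Ro}\,=\,\frac{1}{\ep}\,,\qquad
\frac{1}{Ma^{2}}\,=\,\frac{1}{\ep^{2m}}\,,\qquad
\frac{1}{Ro^{2}}\,=\,\frac{1}{\ep^{2}}\,,\qquad
\frac{1}{Fr^{2}}\,=\,\frac{1}{\ep^{m}}\,,
```

which are exactly the factors in front of the Coriolis term, the pressure gradient, the centrifugal force and the gravitational force in \eqref{meq}; likewise, the weights $\ep^{2m}$, $\ep^m$ and $\ep^{2(m-1)}$ in the energy balance \eqref{eeq} correspond to $Ma^2$, $Ma^2/Fr^2$ and $Ma^2/Ro^2$ respectively.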
Notice that, roughly speaking, we have the property $$\Omega_\ep\, \longrightarrow \, \Omega := \mathbb{R}^2 \times\,]0,1[\, \quad \mbox{ as } \ep \to 0^+\,.$$ \begin{remark} \label{smooth domains} We explicitly point out that, throughout all the chapter, we tacitly assume \emph{rounded corners} in \eqref{dom}. In this way, we can apply the standard weak solutions existence theory developed in \cite{F-N}, which requires $C^{2,\nu}$ regularity, with $\nu \in (0,1)$, of the spatial domain. \end{remark} The pressure $p$, the specific internal energy $e$ and the specific entropy $s$ are given scalar valued functions of $\vr$ and $\temp$ which are related through Gibbs' equation \begin{equation}\label{gibbs} \temp D_{\varrho,\vartheta} s = D_{\varrho,\vartheta} e + p D_{\varrho, \vartheta} \left( \frac{1}{\vr}\right), \end{equation} where the symbol $D_{\varrho,\vartheta}$ stands for the differential with respect to the variables $\vr$ and $\vartheta$ (see also Subsection \ref{subsec:entropy_bud}). The viscous stress tensor in \eqref{meq} is given by Newton's rheological law \begin{equation}\label{S} \mathbb{S}(\tem,\nabla_x \ue) = \mu(\tem)\left( \nabla_x\ue \,+\, ^t\nabla_x \ue \,-\, \frac{2}{3}{\rm div}\, \ue \, {\rm Id}\, \right) \,+\, \eta(\tem) {\rm div}\,\ue \, {\rm Id}\,\,, \end{equation} for two suitable coefficients $\mu$ and $\eta$ (we refer to Paragraph \ref{sss:structural} below for the precise hypotheses); here the superscript $t$ denotes transposition. Moreover, the entropy production rate $\sigma_\ep$ in \eqref{eiq} satisfies \begin{equation}\label{ss} \sigma_\ep \geq \frac{1}{\tem} \left({\ep^{2m}} \mathbb{S}(\tem,\nabla_x\ue) : \nabla_x \ue - \frac{\q(\tem,\nabla_x \tem )\cdot \nabla_x \tem}{\tem} \right). \end{equation} The heat flux $\q$ in \eqref{eiq} is determined by Fourier's law \begin{equation}\label{q} \q(\tem,\nabla_x \tem)= - \kappa(\tem) \nabla_x \tem , \end{equation} where $\kappa>0$ is the heat-conduction coefficient.
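As an illustration of Newton's rheological law \eqref{S}, the following minimal sketch (with arbitrary, illustrative values for $\mu$, $\eta$ and the velocity gradient, carrying no physical meaning) verifies numerically that the resulting tensor is symmetric and that the $\mu$-part is trace-free by construction:

```python
import numpy as np

# Minimal numerical sketch of Newton's rheological law (eq. (S));
# mu, eta and grad_u are arbitrary illustrative values.
def stress(grad_u, mu, eta):
    """S = mu (grad u + grad u^T - (2/3) div u Id) + eta div u Id."""
    div_u = np.trace(grad_u)
    Id = np.eye(3)
    return mu * (grad_u + grad_u.T - (2.0 / 3.0) * div_u * Id) + eta * div_u * Id

rng = np.random.default_rng(0)
grad_u = rng.standard_normal((3, 3))  # a generic velocity gradient
mu, eta = 1.5, 0.2

S = stress(grad_u, mu, eta)

# S is symmetric, ...
assert np.allclose(S, S.T)
# ... and trace(S) = 3 * eta * div u: the shear (mu) part is trace-free.
assert np.isclose(np.trace(S), 3.0 * eta * np.trace(grad_u))
```

In particular, for an incompressible field (${\rm div}\,\ue=0$) the bulk-viscosity contribution drops out entirely, which is consistent with the incompressible momentum equation \eqref{Sect:rel_NS} of the previous chapter.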
The term $\vec{e}_3\times\varrho_\varepsilon\vec u_\varepsilon$ takes into account the (strong) Coriolis force acting on the fluid. Next, we turn our attention to the centrifugal and gravitational forces, $F$ and $G$ respectively. We assume that they are of the form \begin{equation}\label{assFG} F(x) = \left|x^h\right|^2\qquad\mbox{ and }\qquad G(x)= -x^3\,. \end{equation} The precise expression of $F$ and $G$ will be useful in Paragraph \ref{sss:equilibrium} below (and even more in Chapter \ref{chap:BNS_gravity}), but the previous assumptions are certainly not optimal from the point of view of the weak solutions theory. \begin{remark} For the existence theory of weak solutions to our system, it would be enough to assume that $F\in W^{1,\infty}_{\rm loc}\bigl(\mathbb{R}^2 \times\,]0,1[\,\bigr)$ satisfies $$ F(x) \geq 0,\quad \ F(x_1,x_2,- x_3) = F(x_1,x_2,x_3),\quad \ | \nabla_x F(x) | \leq c\, (1 + |x^h| )\; \; \mbox{ for all }\; \; x\in \mathbb{R}^2\times\,]0,1[\,, $$ and that $G\in W^{1,\infty}\bigl(\mathbb{R}^2 \times\,]0,1[\,\bigr)$. \end{remark} The system is supplemented with \emph{complete slip boundary conditions}, namely \begin{equation}\label{bc1-2} \big(\ue \cdot \n_\varepsilon\big) _{|\partial \Omega_\varepsilon} = 0, \quad \mbox{ and } \quad \bigl([ \mathbb{S} (\tem, \nabla_x \ue) \n_\varepsilon ] \times \n_\varepsilon\bigr)_{|\partial\Omega_\varepsilon} = 0\,, \end{equation} where $\vec{n}_\varepsilon$ denotes the outer normal to the boundary $\partial\Omega_\varepsilon$. We also suppose that the boundary of the physical space is \emph{thermally isolated}, i.e. one has \begin{equation}\label{bc3} \big(\q\cdot\n_\varepsilon\big)_{|\partial {\Omega_\varepsilon}}=0\,.
\end{equation} \begin{remark} \label{r:speed-waves} Notice that, as $\delta>0$ in \eqref{dom} and the speed of sound is proportional to $\ep^{-m}$, hypothesis \eqref{dom} guarantees that the lateral part $\partial B_{L_\varepsilon}(0)\times\,]0,1[\,$ of the boundary $\partial\Omega_\ep$ becomes irrelevant when one considers the behaviour of acoustic waves on some compact set of the physical space. We refer to Subsections \ref{ss:acoustic} and \ref{ss:unifbounds_1} for details about this point. \end{remark} \subsubsection{Structural restrictions} \label{sss:structural} Now we need to impose structural restrictions on the thermodynamical functions $p$, $e$, $s$ as well as on the diffusion coefficients $\mu$, $\eta$, $\kappa$. We start by setting, for some real number $a>0$, \begin{equation}\label{pp1} p(\varrho,\vartheta)= p_M(\varrho,\vartheta) + \frac{a}{3} \vartheta^{4}\,,\qquad\qquad\mbox{ where }\qquad p_M(\varrho,\vartheta)\,=\,\vartheta^{5/2} P\left(\frac{\varrho}{\vartheta^{3/2}}\right)\,. \end{equation} The first component $p_M$ in relation \eqref{pp1} corresponds to the standard molecular pressure of a general \textit{monoatomic} gas (see Section 1.4 of \cite{F-N}), while the second one represents the thermal radiation. Here above, \begin{equation}\label{pp2} P\in C^1 [0,\infty)\cap C^2(0,\infty),\qquad P(0)=0,\qquad P'(Z)>0\quad \mbox{ for all }\,Z\geq 0\,, \end{equation} which in particular implies the positive compressibility condition \begin{equation}\label{p_comp} \partial_\vr p (\vr,\temp)>0. \end{equation} Additionally, we assume that $\partial_\temp e(\vr,\temp)$ is positive and bounded (see below): this translates into the condition \begin{equation}\label{pp3} 0<\frac{ \frac{5}{3} P(Z) - P'(Z)Z }{ Z }< c \qquad\qquad\mbox{ for all }\; Z > 0\,.
\end{equation} In view of \eqref{pp3}, we have that $Z \mapsto P(Z) / Z^{5/3}$ is a decreasing function and in addition we assume \begin{equation}\label{pp4} \lim\limits_{Z\to +\infty} \frac{P(Z)}{Z^{5/3}} = P_\infty >0\,. \end{equation} According to Gibbs' relation \eqref{gibbs}, the specific internal energy and the specific entropy can be written in the following forms: $ e(\vr,\temp) = e_M(\vr,\temp) + a\frac{\temp^{4}}{\vr}\,, \quad\quad s(\vr,\temp)= S\left(\frac{\vr}{\temp^{3/2}}\right) + \frac{4}{3} a \frac{\temp^{3}}{\vr}\,, $ where we have set \begin{equation}\label{ss1s} e_M(\vr,\temp)=\frac{3}{2} \frac{\temp^{5/2}}{\vr} P\left( \frac{\vr}{\temp^{3/2}} \right)\qquad\mbox{ and }\qquad S'(Z) = -\frac{3}{2} \frac{ \frac{5}{3} P(Z) - Z P'(Z)}{Z^2}\quad \mbox{ for all }\, Z>0\,. \end{equation} The diffusion coefficients $\mu$ (shear viscosity), $\eta$ (bulk viscosity) and $\kappa$ (heat conductivity) are assumed to be continuously differentiable functions of the temperature $\temp \in [0,\infty[\,$, satisfying the following growth conditions for all $\temp\geq 0$: \begin{equation}\label{mu} 0<\underline\mu(1+\temp) \leq \mu(\temp) \leq \overline\mu (1 + \temp), \quad 0 \leq \eta(\temp) \leq \overline\eta(1 + \temp), \quad 0 < \underline\kappa (1 + \temp^3) \leq \kappa(\temp) \leq \overline\kappa(1+\temp^3), \end{equation} where $\underline\mu$, $\overline\mu$, $\overline\eta$, $\underline\kappa$ and $\overline\kappa$ are positive constants. Let us remark that the above assumptions may not be optimal from the point of view of the existence theory. \begin{remark}\label{rmk:pressure_choice} We point out that the choice of taking the pressure as above (a formulation which describes the pressure of a \textit{monoatomic} gas) is dictated by the fact that we follow the ``solid'' existence theory developed by Feireisl and Novotn\'y in the book \cite{F-N}.
Any other formulation for the pressure, for which one possesses an existence theory, is allowed in our analysis (see e.g. \cite{B-D_JMPA} for the polytropic gas case, and references therein): as we will see in the next chapter, if $\vartheta$ is constant, one can take in \eqref{pp4} any $\gamma >d/2$ as exponent (where $d$ is the dimension). We refer to \cite{Lions_2}, \cite{F-N-P} and references therein, in this respect (see also \cite{PL_Lions} for a first result in that context). \end{remark} \subsubsection{Analysis of the equilibrium states} \label{sss:equilibrium} For each scaled system \eqref{ceq} to \eqref{eeq}, the so-called \emph{equilibrium states} consist of a static density $\vret$ and a constant temperature $\tems>0$ satisfying \begin{equation}\label{prF} \nabla_x p(\vret,\tems) = \ep^{2(m-1)} \vret \nabla_x F + \ep^m \vret \nabla_x G \quad \mbox{ in } \Omega. \end{equation} For later use, it is convenient to state \eqref{prF} on the whole set $\Omega$. Notice that, \textsl{a priori}, it is not known that the target temperature has to be constant: this follows from the fact that $\nabla_x \temp_\ep$ needs to vanish as $\ep \to 0^+$ (see Section 4.2 of \cite{F-N} for more comments about this). Equation \eqref{prF} identifies $\widetilde{\varrho}_\varepsilon$ up to an additive constant: normalizing it to $0$, and taking the target density to be $1$, we get \begin{equation} \label{eq:target-rho} \Pi(\vret)\,=\,\widetilde{F}_\varepsilon\,:=\, \ep^{2(m-1)} F + \ep^m G\,,\qquad\qquad \mbox{ where }\qquad \Pi(\varrho) = \int_1^{\varrho} \frac{\partial_\varrho p(z,\overline{\vartheta})}{z} {\rm d}z\,.
\end{equation} From this relation, we immediately get the following properties: \begin{itemize} \item[(i)] when $m>1$, or $m=1$ and $F=0$, for any $x\in\Omega$ one has $\widetilde{\varrho}_\varepsilon(x)\longrightarrow 1$ in the limit $\varepsilon\ra0^+$; \item[(ii)] for $m=1$ and $F\neq0$, $\bigl(\widetilde{\varrho}_\varepsilon\bigr)_\varepsilon$ converges pointwise to $\widetilde{\varrho}$, where $$ \widetilde\varrho\quad\mbox{ is a solution of the problem}\qquad \Pi\bigl(\widetilde{\varrho}(x)\bigr)\,=\,F(x)\,,\ \mbox{ with }\ x\in\Omega\,. $$ In particular, $\widetilde{\varrho}$ is non-constant, of class $C^2(\Omega)$ (keep in mind assumptions \eqref{pp2} and \eqref{p_comp} above) and it depends just on the horizontal variables due to \eqref{assFG}. \end{itemize} We are now going to study the equilibrium densities $\widetilde\varrho_\varepsilon$ in more detail. In order to keep the discussion as general as possible, we are going to consider both cases (i) and (ii) listed above, even though our results will concern only case (i). The first problem we have to face is that the right-hand side of \eqref{eq:target-rho} may be negative: this means that $\widetilde{\varrho}_\varepsilon$ can go below the value $1$ in some regions of $\Omega$. Nonetheless, the next statement excludes the presence of vacuum. \begin{lemma} \label{l:target-rho_pos} Let the centrifugal force $F$ and the gravitational force $G$ be given by \eqref{assFG}. Let $\bigl(\widetilde{\varrho}_\varepsilon\bigr)_{0<\varepsilon\leq1}$ be a family of static solutions to equation \eqref{eq:target-rho} on $\Omega$. Then, there exist an $\varepsilon_0>0$ and a $\rho_*\in\,]0,1[\,$ such that $\widetilde{\varrho}_\varepsilon\geq\rho_*$ for all $\varepsilon\in\,]0,\varepsilon_0]$ and all $x\in\Omega$. \end{lemma} \begin{proof} Let us consider the case $m>1$ (hence $F\neq0$) first.
Suppose, by contradiction, that there exists a sequence $\bigl(\varepsilon_n,x_n\bigr)_n$ such that $0\,\leq\,\widetilde{\varrho}_{\varepsilon_n}(x_n)\,\leq\,1/n$. We first observe that the sequence $\bigl(x_n\bigr)_n$ cannot be bounded: otherwise, relation \eqref{eq:target-rho}, evaluated at $x_n$, would immediately imply that $\widetilde{\varrho}_{\varepsilon_n}(x_n)$ converges to $1$. In any case, since $1/n<1$ for $n\geq2$ and $x^3\in\; ]0,1[\, $, we deduce that $$ -\,(\varepsilon_n)^m\,\leq\,\widetilde{F}_{\varepsilon_n}(x_n)\,=\,(\varepsilon_n)^{2(m-1)}\,|\,(x_n)^h\,|^2\,-\,(\varepsilon_n)^m\,(x_n)^3\,<\,0\,, $$ which in particular implies that $\widetilde{F}_{\varepsilon_n}(x_n)$ has to go to $0$ as $\varepsilon_n\ra0^+$. As a consequence, since $\Pi(1)=0$, by the mean value theorem (see e.g. Chapter 5 of \cite{Rud}) and \eqref{eq:target-rho} we get $$ \widetilde{F}_{\varepsilon_n}(x_n)\,=\,\Pi\bigl(\widetilde{\varrho}_{\varepsilon_n}(x_n)\bigr)\,=\,\Pi'(z_n)\,\bigl(\widetilde{\varrho}_{\varepsilon_n}(x_n)-1\bigr)\,=\, \frac{\partial_\varrho p\bigl(z_n,\overline{\vartheta}\bigr)}{z_n}\,\bigl(\widetilde{\varrho}_{\varepsilon_n}(x_n)-1\bigr)\,\longrightarrow\,0\,, $$ for some $z_n\in\,]\widetilde{\varrho}_{\varepsilon_n}(x_n),1[\,\subset\,]0,1[\,$, for all $n\in\mathbb{N}$. In turn, this relation, combined with \eqref{p_comp}, implies that $\widetilde{\varrho}_{\varepsilon_n}(x_n)\longrightarrow 1$, which is in contradiction with the fact that it has to be $\leq1/n$ for any $n\in\mathbb{N}$. The case $m=1$ and $F=0$ can be treated in a similar way. Let us now assume that $m=1$ and $F\neq0$: relation \eqref{eq:target-rho} in this case becomes \begin{equation} \label{eq:target_m=1} \Pi\bigl(\widetilde{\varrho}_\varepsilon(x)\bigr)\,=\,|\,x^h\,|^2\,-\,\varepsilon\,x^3\,.
\end{equation} We observe that the right-hand side of this identity is negative on the set $\left\{0\,\leq\,|\,x^h\,|^2\,\leq\,\varepsilon\,x^3\right\}$. By definition \eqref{eq:target-rho}, this is equivalent to having $\widetilde{\varrho}_\varepsilon(x)\leq1$. In particular, the smallest value of $\widetilde{\varrho}_\varepsilon(x)$ is attained for $x=x^0=(0,0,1)$, for which $\Pi\bigl(\widetilde{\varrho}_\varepsilon(x^0)\bigr)=-\varepsilon$. On the other hand, fixing $x^0_\varepsilon$ such that $|\,(x^0_\varepsilon)^h\,|^2=\varepsilon$ and $(x^0_\varepsilon)^3=1$, we have $\Pi\bigl(\widetilde{\varrho}_\varepsilon(x_\varepsilon^0)\bigr)=0$, and then $\widetilde{\varrho}_\varepsilon(x^0_\varepsilon)=1$. Therefore, by the mean value theorem again we get \begin{align*} -\,\varepsilon\;=\;\Pi\bigl(\widetilde{\varrho}_\varepsilon(x^0)\bigr)\,-\,\Pi\bigl(\widetilde{\varrho}_\varepsilon(x^0_\varepsilon)\bigr)\,&=\, \frac{\partial_\varrho p\bigl(\widetilde{\varrho}_\varepsilon(y_\varepsilon),\overline{\vartheta}\bigr)}{\widetilde{\varrho}_\varepsilon(y_\varepsilon)}\, \bigl(\widetilde{\varrho}_\varepsilon(x^0)\,-\,\widetilde{\varrho}_\varepsilon(x^0_\varepsilon)\bigr)\,=\,\frac{\partial_\varrho p\bigl(\widetilde{\varrho}_\varepsilon(y_\varepsilon),\overline{\vartheta}\bigr)}{\widetilde{\varrho}_\varepsilon(y_\varepsilon)}\, \bigl(\widetilde{\varrho}_\varepsilon(x^0)\,-\,1\bigr), \end{align*} for some suitable point $y_\varepsilon=\bigl((x_\varepsilon)^h,1\bigr)$ lying on the line connecting $x^0=(0,0,1)$ with $x^0_\varepsilon$. From this equality and the structural hypothesis \eqref{p_comp}, since $\widetilde{\varrho}_\varepsilon(x^0)\,-\,1<0$ (due to the fact that $\Pi\bigl(\widetilde{\varrho}_\varepsilon(x^0)\bigr)<0$), we deduce that $\widetilde{\varrho}_\varepsilon(y_\varepsilon)>0$ for all $\varepsilon>0$.
On the other hand, \eqref{eq:target_m=1} says that, for $x^3$ fixed, the function $\Pi\circ\widetilde{\varrho}_\varepsilon$ is radially increasing on $\mathbb{R}^2$: then, in particular $\widetilde{\varrho}_\varepsilon(y_\varepsilon)\leq\widetilde{\varrho}_\varepsilon(x^0_\varepsilon)=1$. Finally, thanks to these relations and the regularity properties \eqref{pp1} and \eqref{pp2}, we see that $$ \widetilde{\varrho}_\varepsilon(x^0)\,=\,1\,-\,\varepsilon\, \frac{\widetilde{\varrho}_\varepsilon(y_\varepsilon)}{\partial_\varrho p\bigl(\widetilde{\varrho}_\varepsilon(y_\varepsilon),\overline{\vartheta}\bigr)} $$ remains strictly positive, at least for $\varepsilon$ small enough. \qed \end{proof} \medbreak For simplicity, and without any loss of generality, we assume from now on that $\varepsilon_0=1$ in Lemma \ref{l:target-rho_pos}. Next, denoting as above by $B_l(0)$ the ball in the horizontal variables $x^h\in\mathbb{R}^2$ of center $0$ and radius $l>0$, we define the cylinder \emph{with smoothed corners} $ \mathbb B_{L} := \left\{ x\in \Omega \ : \ |x^h| < L \right\}=B_L(0)\times\, ]0,1[\, . $ We can now state the next boundedness property for the family $\bigl(\widetilde{\varrho}_\varepsilon\bigr)_\varepsilon$. \begin{lemma} \label{l:target-rho_bound} Let $m\geq1$. Let $F$ and $G$ satisfy \eqref{assFG}. Then, for any $l>0$, there exists a constant $C(l)>1$ such that for all $\varepsilon\in\,]0,1]$ one has \begin{equation} \label{est:target-rho_in} \widetilde{\varrho}_\varepsilon\,\leq\,C(l)\qquad\qquad\mbox{ on }\qquad \overline{\mathbb{B}}_{l}\,. \end{equation} If $F=0$, then there exists $C>1$ such that, for all $\varepsilon\in\,]0,1]$ and all $x\in\Omega$, one has $\left|\widetilde\varrho_\varepsilon(x)\right|\leq C$. \end{lemma} \begin{proof} Let us focus on the case $m>1$ and $F\neq 0$ for a while. In order to see \eqref{est:target-rho_in}, we proceed in two steps.
First of all, we fix $\varepsilon$ and we show that $\widetilde{\varrho}_\varepsilon$ is bounded on the previous set. Assume it is not: then there exists a sequence $\bigl(x_n\bigr)_{n}\subset \overline{\mathbb{B}}_{l}$ such that $\widetilde{\varrho}_\varepsilon(x_n)\geq n$. But then, thanks to hypothesis \eqref{pp3}, we can write $$ \Pi\bigl(\widetilde{\varrho}_\varepsilon(x_n)\bigr)\,\geq\,\int^n_1\frac{\partial_\varrho p(z,\overline{\vartheta})}{z}{\rm\,d}z\,\geq\,C(\overline{\vartheta})\, \int^{n/\overline{\vartheta}^{3/2}}_{1/\overline{\vartheta}^{3/2}}\frac{P(Z)}{Z^2}{\rm\,d}Z\,, $$ and, by use of \eqref{pp4}, it is easy to see that the last integral diverges to $+\infty$ for $n\rightarrow+\infty$. On the other hand, on the set $\overline{\mathbb{B}}_{l}$, the function $\widetilde{F}_\varepsilon$ is uniformly bounded by the constant $l^2+1$, and, recalling formula \eqref{eq:target-rho}, these two facts are in contradiction with each other. So, we have proved that $\widetilde{\varrho}_\varepsilon\,\leq\,C(\varepsilon,l)$ on the set $\overline{\mathbb{B}}_{l}$. But, thanks to point (i) below \eqref{eq:target-rho}, the pointwise convergence of $\widetilde{\varrho}_\varepsilon$ to $1$ becomes uniform on the previous set, so that the constant $C(\varepsilon,l)$ can be dominated by a new constant $C(l)$, depending only on the fixed $l$. Let us now take $m=1$ and $F\neq0$. We start by observing that, again, the following property holds true: for any $\varepsilon$ and any $l>0$ fixed, one has $\widetilde{\varrho}_\varepsilon\,\leq\,C(\varepsilon,l)$ in $\overline{\mathbb{B}}_{l}$. Furthermore, by point (ii) below \eqref{eq:target-rho} we have that $\widetilde\varrho\in C^2(\Omega)$, and then $\widetilde\varrho$ is locally bounded: for any $l>0$ fixed, we have $\widetilde{\varrho}\,\leq\,C(l)$ on the set $\overline{\mathbb{B}}_{l}$.
On the other hand, the pointwise convergence of $\bigl(\widetilde{\varrho}_\varepsilon\bigr)_\varepsilon$ towards $\widetilde\varrho$ becomes uniform on the compact set $\overline{\mathbb{B}}_{l}$: combining these facts, we infer that, in the previous bound for $\widetilde{\varrho}_\varepsilon$, we can replace $C(\varepsilon,l)$ by a constant $C(l)$ which is uniform in $\varepsilon$. Consider now the case $F=0$, and any value $m\geq1$. In this case, relation \eqref{eq:target-rho} becomes $$ \Pi\big(\widetilde\varrho_\varepsilon\big)\,=\,\varepsilon^m\,G,\qquad\text{which implies}\qquad \left|\Pi\big(\widetilde\varrho_\varepsilon\big)\right|\,\leq\,C\quad\mbox{ in }\;\Omega\,. $$ At this point, as a consequence of the structural assumptions \eqref{pp1}, \eqref{pp3} and \eqref{pp4}, we observe that $\Pi(z)\longrightarrow+\infty$ for $z\rightarrow+\infty$. Then, $\widetilde\varrho_\varepsilon$ must be uniformly bounded in $\Omega$. This completes the proof of the lemma. \qed \end{proof} \medbreak We conclude this paragraph by showing some additional bounds, which will be relevant in the sequel. \begin{proposition} \label{p:target-rho_bound} Let $F\neq0$. For any $l>0$, on the cylinder $\overline{\mathbb{B}}_{l}$ one has, for any $\varepsilon\in\,]0,1]$: \begin{enumerate}[(1)] \item $ \left|\widetilde{\varrho}_\varepsilon(x)\,-\,1\right|\,\leq\,C(l)\,\varepsilon^m\, \text{ if }m\geq2$; \item $ \left|\widetilde{\varrho}_\varepsilon(x)\,-\,1\right|\,\leq\,C(l)\,\varepsilon^{2(m-1)}\, \text{ if }1<m<2 $; \item $ \left|\widetilde{\varrho}_\varepsilon(x)\,-\,\widetilde{\varrho}(x)\right|\,\leq\,C(l)\,\varepsilon\, \text{ if }m=1$. \end{enumerate} When $F=0$ and $m\geq1$, instead, one has $\left|\widetilde{\varrho}_\varepsilon(x)\,-\,1\right|\,\leq\,C\,\varepsilon^m$, for a constant $C>0$ which is uniform in $x\in\Omega$ and in $\varepsilon\in\,]0,1]$. \end{proposition} \begin{proof} Assume first that $F\neq0$, and let $m\geq2$.
Thanks to Lemma \ref{l:target-rho_bound}, the estimate on $\left|\widetilde{\varrho}_\varepsilon(x)\,-\,1\right|$ easily follows by applying the mean value theorem (see again e.g. Chapter 5 of \cite{Rud}) to equation \eqref{eq:target-rho}, and noticing that, for any fixed $l>0$, $$ \sup_{z\in[\rho_*,C(l)]}\left|\frac{z}{\partial_\varrho p(z,\overline{\vartheta})}\right|\,<\,+\infty\,. $$ According to the hypothesis $m\geq2$, we have $2(m-1)\geq m$. The claimed bound then follows. The proof of the inequality for $1<m<2$ is analogous, using this time that $2(m-1)\leq m$. In order to prove the inequality for $m=1$, we consider the equations satisfied by $\widetilde{\varrho}_\varepsilon$ and $\widetilde{\varrho}$: we have $$ \Pi\bigl(\widetilde{\varrho}_\varepsilon(x)\bigr)\,=\,|\,x^h\,|^2\,-\,\varepsilon\,x^3\qquad\qquad\mbox{ and }\qquad\qquad \Pi\bigl(\widetilde{\varrho}(x)\bigr)\,=\,|\,x^h\,|^2\,. $$ Now, we take the difference and we apply again the mean value theorem, finding $$ \Pi'\big(z_\varepsilon(x)\big)\,\big(\widetilde{\varrho}_\varepsilon(x)\,-\,\widetilde{\varrho}(x)\big)\,=\,-\varepsilon\,x^3\,, $$ for some $z_\varepsilon(x)\in\,]\widetilde{\varrho}_\varepsilon(x),\widetilde{\varrho}(x)[\,$ (or with the endpoints exchanged, depending on $x$). By Lemma \ref{l:target-rho_bound}, we have uniform (in $\varepsilon$) bounds on the set $\overline{\mathbb{B}}_{l}$, depending on $l$, for $\widetilde{\varrho}_\varepsilon(x)$ and $\widetilde{\varrho}(x)$: then, from the previous identity, on this cylinder we find $$ \left|\widetilde{\varrho}_\varepsilon(x)\,-\,\widetilde{\varrho}(x)\right|\,\leq\,C(l)\,\varepsilon\,. $$ The bounds in the case $F=0$ can be shown in an analogous way. The proposition is now completely proved. \qed \end{proof} \medbreak From now on, we will focus on the following cases: \begin{equation} \label{eq:choice-m} \mbox{ either }\quad m\geq2\,,\qquad\quad \mbox{ or }\quad\qquad m\geq1\quad\mbox{ and }\quad F=0\,.
\end{equation} Notice that in all those cases, the target density profile $\widetilde\varrho$ is constant, namely $\widetilde\varrho\equiv1$. \subsubsection{Initial data and finite energy weak solutions} \label{sss:data-weak} We address the singular perturbation problem described in Paragraph \ref{sss:primsys} for general \emph{ill-prepared initial data}, in the framework of \emph{finite energy weak solutions}, whose theory was developed in \cite{F-N}. Since we work with weak solutions based on dissipation estimates and on the control of the entropy production rate, we need to assume that the initial data are close to the equilibrium states $(\vret,\tems)$ that we have just identified. Namely, we consider initial densities and temperatures of the following form: \begin{equation}\label{in_vr} \vrez = \vret + \ep^m \vrez^{(1)} \qquad\qquad\mbox{ and }\qquad\qquad \temz = \tems + \ep^m \Theta_{0,\varepsilon}\,. \end{equation} For later use, let us also introduce the following decomposition of the initial densities: \begin{equation} \label{eq:in-dens_dec} \varrho_{0,\varepsilon}\,=\,1\,+\,\varepsilon^m\,R_{0,\varepsilon}\qquad\qquad\mbox{ with }\qquad R_{0,\varepsilon}\,=\,\varrho_{0,\varepsilon}^{(1)}\,+\,\widetilde r_\varepsilon\,,\qquad \widetilde r_\varepsilon\,:=\,\frac{\widetilde\varrho_\varepsilon-1}{\varepsilon^m}\,. \end{equation} Notice that $\widetilde r_\varepsilon$ is in fact a datum of the system, since it only depends on $p$, $F$ and $G$.
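To fix ideas, let us illustrate the decomposition \eqref{eq:in-dens_dec} on a simple model case, which we use for illustration purposes only: take $F=0$ and the perfect gas pressure law $p(\varrho,\vartheta)\,=\,\varrho\,\vartheta$, with the normalization $\Pi(z)\,=\,\int_1^z\frac{\partial_\varrho p(w,\overline\vartheta)}{w}\,{\rm d}w\,=\,\overline\vartheta\,\log z$. Then relation \eqref{eq:target-rho} reduces to $\overline\vartheta\,\log\widetilde\varrho_\varepsilon\,=\,\varepsilon^m\,G$, whence $$ \widetilde\varrho_\varepsilon\,=\,\exp\left(\frac{\varepsilon^m\,G}{\overline\vartheta}\right)\qquad\qquad\mbox{ and }\qquad\qquad \widetilde r_\varepsilon\,=\,\frac{1}{\varepsilon^m}\left(\exp\left(\frac{\varepsilon^m\,G}{\overline\vartheta}\right)\,-\,1\right)\,\longrightarrow\,\frac{G}{\overline\vartheta} $$ uniformly in $\Omega$ as $\varepsilon\rightarrow0^+$, since $G$ is bounded. In particular, in this model case the family $\bigl(\widetilde r_\varepsilon\bigr)_\varepsilon$ is uniformly bounded, in accordance with the last statement of Proposition \ref{p:target-rho_bound}.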
We suppose $\vrez^{(1)}$ and $\Theta_{0,\varepsilon}$ to be bounded measurable functions satisfying the controls \begin{align} \sup_{\varepsilon\in\,]0,1]}\left\| \vrez^{(1)} \right\|_{(L^2\cap L^\infty)(\Omega_\varepsilon)}\,\leq \,c\,,\qquad \sup_{\varepsilon\in\,]0,1]}\left(\left\|\Theta_{0,\varepsilon}\right\|_{L^\infty(\Omega_\varepsilon)}\,+\,\left\| \sqrt{\widetilde\varrho_\varepsilon}\,\Theta_{0,\varepsilon}\right\|_{L^2(\Omega_\varepsilon)}\right)\,\leq\, c\,,\label{hyp:ill_data} \end{align} together with the mean-free conditions $ \int_{\Omega_\varepsilon} \vrez^{(1)} \dx = 0\qquad\qquad\mbox{ and }\qquad\qquad \int_{\Omega_\varepsilon}\Theta_{0,\varepsilon} \dx = 0\,. $ As for the initial velocity fields, we will assume instead the following uniform bounds: \begin{equation} \label{hyp:ill-vel} \sup_{\varepsilon\in\,]0,1]}\left(\left\| \sqrt{\widetilde\varrho_\varepsilon} \vec{u}_{0,\ep} \right\|_{L^2(\Omega_\varepsilon)}\,+\, \left\| \vec{u}_{0,\ep} \right\|_{L^\infty(\Omega_\varepsilon)}\right)\, \leq\, c\,. \end{equation} \begin{remark} \label{r:ill_data} In view of Lemma \ref{l:target-rho_pos}, the conditions in \eqref{hyp:ill_data} and \eqref{hyp:ill-vel} imply in particular that $$ \sup_{\varepsilon\in\,]0,1]}\left(\left\| \Theta_{0,\varepsilon}\right\|_{L^2(\Omega_\varepsilon)}\,+\,\left\| \vec{u}_{0,\ep} \right\|_{L^2(\Omega_\varepsilon)}\right)\,\leq\,c\,. $$ \end{remark} Thanks to the previous uniform estimates, up to extraction, we can assume that \begin{equation} \label{conv:in_data} \varrho^{(1)}_0\,:=\,\lim_{\varepsilon\ra0}\varrho^{(1)}_{0,\varepsilon}\;,\qquad R_0\,:=\,\lim_{\varepsilon\ra0}R_{0,\varepsilon}\;,\qquad \Theta_0\,:=\,\lim_{\varepsilon\ra0}\Theta_{0,\varepsilon}\;,\qquad \vec{u}_0\,:=\,\lim_{\varepsilon\ra0}\vec{u}_{0,\varepsilon}\,, \end{equation} where we agree that the previous limits are taken in the weak-$*$ topology of $L_{\rm loc}^\infty(\Omega)\,\cap\,L_{\rm loc}^2(\Omega)$. 
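Notice that the claim of Remark \ref{r:ill_data} follows from a direct computation: since Lemma \ref{l:target-rho_pos} provides a uniform lower bound $\widetilde\varrho_\varepsilon\,\geq\,\rho_*\,>\,0$, we can write $$ \left\|\Theta_{0,\varepsilon}\right\|^2_{L^2(\Omega_\varepsilon)}\,=\,\int_{\Omega_\varepsilon}\frac{1}{\widetilde\varrho_\varepsilon}\,\widetilde\varrho_\varepsilon\left|\Theta_{0,\varepsilon}\right|^2\dx\,\leq\,\frac{1}{\rho_*}\,\left\|\sqrt{\widetilde\varrho_\varepsilon}\,\Theta_{0,\varepsilon}\right\|^2_{L^2(\Omega_\varepsilon)}\,\leq\,\frac{c^2}{\rho_*}\,, $$ and an analogous computation, based this time on \eqref{hyp:ill-vel}, gives the uniform $L^2$ bound for $\vec u_{0,\varepsilon}$.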
\medbreak Let us now specify what we mean by a \emph{finite energy weak solution} (see \cite{F-N} for details). First of all, the equations have to be satisfied in a distributional sense: \begin{equation}\label{weak-con} -\int_0^T\int_{\Omega_\varepsilon} \left( \vre \partial_t \varphi + \vre\ue \cdot \nabla_x \varphi \right) \dxdt = \int_{\Omega_\varepsilon} \vrez \varphi(0,\cdot) \dx, \end{equation} for any $\varphi\in C^\infty_c([0,T[\,\times \overline\Omega_\varepsilon)$; \begin{align} &\int_0^T\!\!\!\int_{\Omega_\varepsilon} \left( - \vre \ue \cdot \partial_t \vec\psi - \vre [\ue\otimes\ue] : \nabla_x \vec\psi + \frac{1}{\ep} \, \vec{e}_3 \times (\vre \ue ) \cdot \vec\psi - \frac{1}{\ep^{2m}} p(\vre,\tem) {\rm div}\, \vec\psi \right) \dxdt \label{weak-mom} \\ & =\int_0^T\!\!\!\int_{\Omega_\varepsilon} \left(- \mathbb{S}(\vartheta_\varepsilon,\nabla_x\vec u_\varepsilon) : \nabla_x \vec\psi + \left(\frac{1}{\ep^2} \vre \nabla_x F + \frac{1}{\ep^m} \vre \nabla_x G \right)\cdot \vec\psi \right) \dxdt + \int_{\Omega_\varepsilon}\vrez \uez \cdot \vec\psi (0,\cdot) \dx, \nonumber \end{align} for any test function $\vec\psi\in C^\infty_c([0,T[\,\times \overline\Omega_\varepsilon; \mathbb{R}^3)$ such that $\big(\vec\psi \cdot \n_\varepsilon\big)_{|\partial {\Omega_\varepsilon}} = 0$; \begin{align} \int_0^T\!\!\!\int_{\Omega_\varepsilon} & \Bigl( - \vre s(\vre,\tem) \partial_t \varphi - \vre s(\vre,\tem) \ue \cdot \nabla_x \varphi \Bigr) \dxdt \label{weak-ent} \\ & - \int_0^T\int_{\Omega_\varepsilon} \frac{\q(\vartheta_\varepsilon,\nabla_x\vartheta_\varepsilon)}{\tem} \cdot \nabla_x \varphi \dxdt - \langle \sigma_\ep; \varphi \rangle_{ [{\cal{M}}; C^0]([0,T]\times \overline\Omega_\varepsilon)} = \int_{\Omega_\varepsilon} \vrez s(\vrez,\temz) \varphi (0,\cdot) \dx, \nonumber \end{align} for any $\varphi\in C^\infty_c([0,T[\,\times \overline\Omega_\varepsilon)$, with $\sigma_\ep \in {\cal{M}}^+ ([0,T]\times \overline\Omega_\varepsilon)$.
In addition, we require that the energy identity \begin{align} \int_{\Omega_\varepsilon} & \left( \frac{1}{2} \vre |\ue|^2 + \frac{1}{\ep^{2m}} \vre e(\vre,\tem) - \frac{1}{\ep^2} \vre F - \frac{1}{\ep^m} \vre G \right) (t) \dx \label{weak-eng} \\ &= \int_{\Omega_\varepsilon} \left( \frac{1}{2} \vrez |\uez|^2 + \frac{1}{\ep^{2m}} \vrez e(\vrez,\temz) - \frac{1}{\ep^2} \varrho_{0,\ep} F -\frac{1}{\ep^m} \varrho_{0,\ep} G \right) \dx \nonumber \end{align} holds true for almost every $t\in\,]0,T[\,$. Notice that this is the integrated version of \eqref{eeq}. Under the previous assumptions (collected in Paragraphs \ref{sss:primsys} and \ref{sss:structural} and here above), at any \emph{fixed} value of the parameter $\varepsilon\in\,]0,1]$, the existence of a global in time finite energy weak solution $(\varrho_\varepsilon,\vec u_\varepsilon,\vartheta_\varepsilon)$ to system \eqref{ceq} to \eqref{eeq}, related to the initial datum $(\varrho_{0,\varepsilon},\vec u_{0,\varepsilon},\vartheta_{0,\varepsilon})$, has been proved in e.g. \cite{F-N} (see Theorems 3.1 and 3.2 therein). Moreover, the following regularity of solutions $( \vre, \ue, \tem )$ can be obtained, which justifies all the integrals appearing in \eqref{weak-con} to \eqref{weak-eng}: for any $T>0$ fixed, one has \begin{equation*} \vre \in C^0_{w}\big([0,T];L^{5/3}(\Omega_\ep)\big),\quad \vre \in L^q\big((0,T)\times \Omega_\ep\big) \ \mbox{ for some } q>\frac{5}{3}\,, \qquad \ue \in L^2\big([0,T]; W^{1,2}(\Omega_\ep;\mathbb{R}^3)\big)\,. \end{equation*} In addition, the mapping $t \mapsto (\vre\ue)(t,\cdot)$ is weakly continuous, and one has $(\vre)_{|t=0} = \vrez$ together with $(\vre\ue)_{|t=0}= \vrez\uez$. Finally, the absolute temperature $\tem$ is a measurable function, $\tem>0$ a.e. 
in $\mathbb{R}_+ \times \Omega_\ep$, and given any $T>0$, one has \begin{equation*} \tem \in L^2\big([0,T]; W^{1,2}(\Omega_\ep)\big)\cap L^{\infty}\big([0,T]; L^4 (\Omega_\ep)\big), \quad \log \tem \in L^2\big([0,T]; W^{1,2}(\Omega_\ep)\big)\,. \end{equation*} Notice that, in view of \eqref{ceq}, the total mass is conserved in time, in the following sense: for almost every $t\in[0,+\infty[\,$, one has \begin{equation} \label{eq:mass_conserv} \int_{\Omega_\varepsilon}\bigl(\vre(t)\,-\,\vret\bigr)\,\dx\,=\,0\,. \end{equation} Let us now remark that, since the entropy production rate is a non-negative measure, and in particular it may possess jumps, the total entropy $\vre s(\vre,\tem)$ may not be weakly continuous in time. To avoid this problem, we introduce a time lifting $\Sigma_\ep$ of the measure $\sigma_\ep$ (see Paragraph 5.4.7 in \cite{F-N} for details) by the following formula: \begin{equation}\label{lift0} \langle \Sigma_\ep , \varphi \rangle = \langle \sigma_\ep , I[\varphi] \rangle, \quad \mbox{ where }\quad I [\varphi] (t,x) = \int _0^t \varphi (\tau,x) {\rm\, d}\tau\quad\mbox{for any } \varphi \in L^1(0,T; C^0(\overline\Omega_\varepsilon)). \end{equation} The time lifting $\Sigma_\ep$ can be identified with an abstract function $\Sigma_\ep \in L^{\infty}_{w}(0,T; {\cal{M}}^+(\overline{\Omega}_\varepsilon))$, where $L_{w}^\infty$ stands for ``weakly measurable'', and $\Sigma_\varepsilon$ is defined by the relation \begin{equation*} \langle \Sigma_\ep(\tau),\varphi \rangle = \lim\limits_{\delta \to 0^+} \langle \sigma_\ep , \psi_\delta\varphi \rangle, \quad \mbox{ with } \quad \psi_\delta(t) = \left\{\begin{array}{cc} 0 & {\mbox{for }} t\in [0,\tau), \\ \frac{1}{\delta}(t - \tau) & {\mbox{for }} t\in (\tau, \tau + \delta), \\ 1 & {\mbox{for }} t \geq \tau +\delta. \end{array}\right. \end{equation*} In particular, the measure $\Sigma_\ep(\tau)$ is well-defined for any $\tau\in[0,T]$, and the mapping $\tau \to \Sigma_\ep(\tau)$ is non-increasing in the sense of measures.
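Let us illustrate the time lifting on a simple example, which is not needed in the sequel. Take $\sigma_\ep\,=\,\delta_{t_0}\otimes\mu$, for some $t_0\in\,]0,T[\,$ and some $\mu\in{\cal{M}}^+(\overline\Omega_\varepsilon)$: then formula \eqref{lift0} gives $$ \langle \Sigma_\ep , \varphi \rangle\,=\,\int_{\overline\Omega_\varepsilon}\int_0^{t_0}\varphi(\tau,x)\,{\rm d}\tau\,{\rm d}\mu(x)\,, $$ while the second formula yields $\Sigma_\ep(\tau)\,=\,\mu$ for $\tau<t_0$ and $\Sigma_\ep(\tau)\,=\,0$ for $\tau\geq t_0$. Hence the jump of the entropy production at time $t_0$ is turned into a jump of the abstract function $\Sigma_\ep$, and the mapping $\tau \to \Sigma_\ep(\tau)$ is indeed non-increasing in the sense of measures.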
Then, the weak formulation of the entropy balance can be equivalently rewritten as \begin{equation*} \begin{split} & \int_{\Omega_\varepsilon} \left[ \vre s(\vre,\tem)(\tau)\varphi(\tau) - \vrez s(\vrez,\temz)\varphi(0) \right] \dx + \langle \Sigma_\ep(\tau),\varphi(\tau) \rangle - \langle \Sigma_\ep(0),\varphi(0) \rangle\\ & = \int_0^\tau \langle \Sigma_\ep,\partial_t \varphi \rangle \dt + \int_0^\tau \int_{\Omega_\varepsilon} \left( \vre s(\vre,\tem) \partial_t \varphi + \vre s(\vre,\tem)\ue \cdot \nabla_x \varphi + \frac{\q(\vartheta_\varepsilon,\nabla_x\vartheta_\varepsilon)}{\tem} \cdot \nabla_x \varphi \right) \dxdt, \end{split} \end{equation*} for any $\varphi\in C^\infty_c([0,T]\times \overline\Omega_\varepsilon)$, and the mapping $ t \to \vre s(\vre,\tem)(t,\cdot) + \Sigma_\ep(t) \ $ is continuous with values in ${\cal{M}}^+(\overline{\Omega}_\varepsilon)$, provided that ${\cal{M}}^+$ is endowed with the weak-$*$ topology. \begin{remark} We explicitly point out that the previous properties are \emph{not} uniform in the small parameter $\varepsilon$. In order to deduce uniform properties on our family of weak solutions $\bigl(\varrho_\varepsilon,\vec u_\varepsilon,\vartheta_\varepsilon\bigr)_\varepsilon$, we ``measure'' the energy of the solutions with respect to the energy at the equilibrium states $\left(\widetilde\varrho_\varepsilon,0,\overline\vartheta\right)$. \end{remark} \medbreak To conclude this part, we introduce the ballistic free energy function $ H_{\tems}(\vr,\temp)\,:=\,\vr \bigl( e(\vr,\temp) - \tems s(\vr,\temp) \bigr)\,, $ and we define the \emph{relative entropy functional} (for details, see in particular Chapters 1, 2 and 4 of \cite{F-N}) $ \mathcal E\left(\rho,\theta\;|\;\widetilde\varrho_\varepsilon,\overline\vartheta\right)\,:=\,H_{\tems}(\rho,\theta) - (\rho - \vret)\,\partial_\varrho H_{\tems}(\vret,\tems) - H_{\tems}(\vret,\tems)\,. 
$ First of all, we notice that, by \eqref{eq:target-rho} and Gibbs' relation \eqref{gibbs}, equation \eqref{prF} can be rewritten as \begin{equation*} \partial_\varrho H_{\overline{\vartheta}} (\vret,\tems) \, =\, \ep^{2(m-1)} F + \ep^m G \end{equation*} in $\Omega_\varepsilon$ (up to an additive constant, which we have normalized to $0$). Then, combining the total energy balance \eqref{weak-eng}, the entropy equation \eqref{weak-ent} and the mass conservation \eqref{eq:mass_conserv}, we obtain the following total dissipation balance, for any $\varepsilon>0$ fixed: \begin{align} &\hspace{-0.7cm} \int_{\Omega_\varepsilon}\frac{1}{2}\vre|\ue|^2(t) \dx\,+\,\frac{1}{\ep^{2m}}\int_{\Omega_\varepsilon}\mathcal E\left(\varrho_\varepsilon,\vartheta_\varepsilon\;|\;\widetilde\varrho_\varepsilon,\overline\vartheta\right) \dx + \frac{\tems}{\ep^{2m}}\sigma_\ep \left[[0,t]\times \overline\Omega_\varepsilon\right] \label{est:dissip} \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \,\leq\, \int_{\Omega_\varepsilon}\frac{1}{2}\vrez|\uez|^2 \dx\,+\, \frac{1}{\ep^{2m}}\int_{\Omega_\varepsilon}\mathcal E\left(\varrho_{0,\varepsilon},\vartheta_{0,\varepsilon}\;|\;\widetilde\varrho_\varepsilon,\overline\vartheta\right) \dx\,. \nonumber \end{align} Inequality \eqref{est:dissip} will be the only tool to derive uniform estimates for the family of weak solutions we consider. As a matter of fact, we will establish in Lemma \ref{l:initial-bound} below that, under the previous assumptions on the initial data, the quantity on the right-hand side of \eqref{est:dissip} is uniformly bounded for any $\varepsilon\in\,]0,1]$. \subsection{Main results}\label{ss:results} \medbreak We can now state our main results. The first statement concerns the case when low Mach number effects are predominant over the fast rotation, i.e. $m>1$. For technical reasons which will become clear in the course of the proof, when $F\neq0$ we need to take $m\geq2$.
We also underline that the limit dynamics of $\vec{U}$ is purely horizontal (see \eqref{eq_lim_m:momentum} below) on the plane $\mathbb{R}^2\times \{0\}$, in accordance with the celebrated Taylor-Proudman theorem. Nonetheless, the equations that involve $R$ and $\Theta$ (see \eqref{eq_lim_m:temp} and \eqref{eq_lim_m:Boussinesq} below) also depend on the vertical variable. \begin{theorem}\label{th:m-geq-2} For any $\varepsilon\in\,]0,1]$, let $\Omega_\ep$ be the domain defined by \eqref{dom} and $\Omega = \mathbb{R}^2 \times\,]0,1[\,$. Let $p$, $e$, $s$ satisfy Gibbs' relation \eqref{gibbs} and structural hypotheses from \eqref{pp1} to \eqref{ss1s}, and suppose that the diffusion coefficients $\mu$, $\eta$, $\kappa$ enjoy growth conditions \eqref{mu}. Let $G\in W^{1,\infty}(\Omega)$ be given as in \eqref{assFG}. Take either $m\geq 2$ and $F\in W_{loc}^{1,\infty}(\Omega)$ as in \eqref{assFG}, or $m>1$ and $F=0$. \\ For any fixed value of $\varepsilon \in \; ]0,1]$, let initial data $\left(\varrho_{0,\varepsilon},\vec u_{0,\varepsilon},\vartheta_{0,\varepsilon}\right)$ verify the hypotheses fixed in Paragraph \ref{sss:data-weak}, and let $\left( \vre, \ue, \tem\right)$ be a corresponding weak solution to system \eqref{ceq} to \eqref{eeq}, supplemented with structural hypotheses from \eqref{S} to \eqref{q} and with boundary conditions \eqref{bc1-2} and \eqref{bc3}. Assume that the total dissipation balance \eqref{est:dissip} is satisfied. Let $\left(R_0,\vec u_0,\Theta_0\right)$ be defined as in \eqref{conv:in_data}.
Then, for any $T>0$, one has the following convergence properties: \begin{align*} \varrho_\ep \rightarrow 1 \qquad\qquad \mbox{ in } \qquad &L^{\infty}\big([0,T]; L_{\rm loc}^{5/3}(\Omega )\big) \\ R_\varepsilon:=\frac{\varrho_\ep - 1}{\ep^m} \weakstar R \qquad\qquad \mbox{ weakly-$*$ in }\qquad &L^\infty\bigl([0,T]; L^{5/3}_{\rm loc}(\Omega)\bigr) \\ \Theta_\varepsilon:=\frac{\vartheta_\ep - \bar{\vartheta}}{\ep^m} \weak \Theta \qquad \mbox{ and }\qquad \vec{u}_\ep \weak \vec{U} \qquad\qquad \mbox{ weakly in }\qquad &L^2\big([0,T];W_{\rm loc}^{1,2}(\Omega)\big)\,, \end{align*} where $\vec{U} = (\vec U^h,0)$, with $\vec U^h=\vec U^h(t,x^h)$ such that ${\rm div}_h\vec U^h=0$. In addition, the triplet $\Big(\vec{U}^h,\, \, R ,\, \, \Theta \Big)$ is a weak solution to the incompressible Oberbeck-Boussinesq system in $\mathbb{R}_+ \times \Omega$: \begin{align} & \partial_t \vec U^{h}+{\rm div}_h\left(\vec{U}^{h}\otimes\vec{U}^{h}\right)+\nabla_h\Gamma-\mu (\overline\vartheta )\Delta_{h}\vec{U}^{h}=\delta_2(m)\langle R\rangle\nabla_{h}F \label{eq_lim_m:momentum} \\ & c_p(1,\overline\vartheta)\,\Bigl(\partial_t\Theta\,+\,{\rm div}_h(\Theta\,\vec U^h)\Bigr)\,-\,\kappa(\overline\vartheta)\,\Delta\Theta\,=\, \overline\vartheta\,\alpha(1,\overline\vartheta)\,\vec{U}^h\cdot\nabla_h\big({\,G\,+\,\delta_2(m)F}\big) \label{eq_lim_m:temp} \\ & \nabla_{x}\Big( \partial_\varrho p(1,\overline{\vartheta})\,R\,+\,\partial_\vartheta p(1,\overline{\vartheta})\,\Theta \Big)\,=\,\nabla_{x}G\,+\,\delta_2(m)\,\nabla_{x}F\, , \label{eq_lim_m:Boussinesq} \end{align} supplemented with the initial conditions $ \vec{U}_{|t=0}=\mathbb{H}_h\left(\langle\vec{u}^h_{0}\rangle\right)\quad \text{ and }\quad \Theta_{|t=0}\,=\,\frac{\overline\vartheta}{c_p(1,\overline\vartheta)}\,\Big(\partial_\varrho s(1,\overline\vartheta)\,R_0\,+\,\partial_\vartheta s(1,\overline\vartheta)\,\Theta_0\,+\, \alpha(1,\overline\vartheta)\,{\big(\,G\,+\,\delta_2(m)F\big)}\Big) $ and the boundary condition $\nabla_x \Theta 
\cdot\vec{n}_{|\partial\Omega}\,=\,0$, where $\vec{n}$ is the outer normal to $\partial\Omega\,=\,\{x_3=0\}\cup\{x_3=1\}$. \\ In \eqref{eq_lim_m:momentum}, $\Gamma$ is a distribution in $\mathcal D'(\mathbb{R}_+\times\mathbb{R}^2)$ and we have set $\delta_2(m)=1$ if $m=2$, $\delta_2(m)=0$ otherwise. In \eqref{eq_lim_m:temp}, we have defined $$ c_p(\varrho,\vartheta)\,:=\,\partial_\vartheta e(\varrho,\vartheta)\,+\,\alpha(\varrho,\vartheta)\,\frac{\vartheta}{\varrho}\,\partial_\vartheta p(\varrho,\vartheta)\,,\qquad \alpha(\varrho,\vartheta)\,:=\,\frac{1}{\varrho}\,\frac{\partial_\vartheta p(\varrho,\vartheta)}{\partial_\varrho p(\varrho,\vartheta)}\,. $$ \end{theorem} \begin{remark}\label{r:lim delta theta} We notice that, after defining $$ \Upsilon := \partial_\varrho s(1,\overline{\vartheta})R + \partial_\vartheta s(1,\overline{\vartheta})\,\Theta\qquad\mbox{ and }\qquad \Upsilon_0\,:=\,\partial_\varrho s(1,\overline\vartheta)\,R_0\,+\,\partial_\vartheta s(1,\overline\vartheta)\,\Theta_0\,, $$ from equation \eqref{eiq} one would get, in the limit $\varepsilon\ra0^+$, the equation \begin{equation} \label{eq:Upsilon} \partial_{t} \Upsilon +{\rm div}_h\left( \Upsilon \vec{U}^{h}\right) -\frac{\kappa(\overline\vartheta)}{\overline\vartheta} \Delta \Theta =0\,, \qquad\qquad \Upsilon_{|t=0}\,=\,\Upsilon_0\,, \end{equation} which is closer to the formulation of the target system given in \cite{K-M-N} and \cite{K-N}. From \eqref{eq:Upsilon} one easily recovers \eqref{eq_lim_m:temp} by using \eqref{eq_lim_m:Boussinesq}. Formulation \eqref{eq_lim_m:temp} is in the spirit of Chapters 4 and 5 of \cite{F-N}. \end{remark} The case $m=1$ realizes the \emph{quasi-geostrophic balance} in the limit. Namely, the Mach and Rossby numbers have the same order of magnitude, and they remain in balance throughout the asymptotic process. The next statement is devoted to this case. For technical reasons, in this instance we have to assume $F=0$.
Indeed, when $F\neq 0$, the coexistence of centrifugal effects and heat transfer significantly complicates the wave system, and new technical difficulties arise. \begin{theorem} \label{th:m=1_F=0} For any $\varepsilon\in\,]0,1]$, let $\Omega_\ep$ be the domain defined by \eqref{dom} and $\Omega = \mathbb{R}^2 \times\,]0,1[\,$. Let $p$, $e$, $s$ satisfy \eqref{gibbs} and the structural hypotheses from \eqref{pp1} to \eqref{ss1s}, and suppose that the diffusion coefficients $\mu$, $\eta$, $\kappa$ enjoy \eqref{mu}. Let $F=0$ and $G\in W^{1,\infty}(\Omega)$ be as in \eqref{assFG}. Take $m=1$. \\ For any fixed value of $\varepsilon$, let initial data $\left(\varrho_{0,\varepsilon},\vec u_{0,\varepsilon},\vartheta_{0,\varepsilon}\right)$ verify the hypotheses fixed in Paragraph \ref{sss:data-weak}, and let $\left( \vre, \ue, \tem\right)$ be a corresponding weak solution to system \eqref{ceq} to \eqref{eeq}, supplemented with structural hypotheses from \eqref{S} to \eqref{q} and with boundary conditions \eqref{bc1-2} and \eqref{bc3}. Assume that the total dissipation balance \eqref{est:dissip} is satisfied. Let $\left(R_0,\vec u_0,\Theta_0\right)$ be defined as in \eqref{conv:in_data}. Then, for any $T>0$, the convergence properties stated in the previous theorem still hold true: namely, one has \begin{align*} \varrho_\ep \rightarrow 1 \qquad\qquad \mbox{ in } \qquad &L^{\infty}\big([0,T]; L_{\rm loc}^{5/3}(\Omega )\big) \\ R_\varepsilon:=\frac{\varrho_\ep - 1}{\ep} \weakstar R \qquad\qquad \mbox{ weakly-$*$ in }\qquad &L^\infty\bigl([0,T]; L^{5/3}_{\rm loc}(\Omega)\bigr) \\ \Theta_\varepsilon:=\frac{\vartheta_\ep - \bar{\vartheta}}{\ep} \weak \Theta \qquad \mbox{ and }\qquad \vec{u}_\ep \weak \vec{U} \qquad\qquad \mbox{ weakly in }\qquad &L^2\big([0,T];W_{\rm loc}^{1,2}(\Omega)\big)\,, \end{align*} where $\vec{U} = (\vec U^h,0)$, with $\vec U^h=\vec U^h(t,x^h)$ such that ${\rm div}_h\vec U^h=0$.
Moreover, let us introduce the real number $\mathcal A>0$ by the formula \begin{equation} \label{def:A} \mathcal{A}\,=\,\partial_\varrho p(1,\overline{\vartheta})\,+\,\frac{\left|\partial_\vartheta p(1,\overline{\vartheta})\right|^2}{\partial_\vartheta s(1,\overline{\vartheta})}\,, \end{equation} and define $$ \Upsilon\, := \,\partial_\varrho s(1,\overline{\vartheta}) R + \partial_\vartheta s(1,\overline{\vartheta})\,\Theta\qquad\mbox{ and }\qquad q\,:=\, \partial_\varrho p(1,\overline{\vartheta})R +\partial_\vartheta p(1,\overline{\vartheta})\Theta -G-1/2\,. $$ Then we have $$ q\,=\,q(t,x^h)\,=\,\partial_\varrho p(1,\overline{\vartheta})\langle R\rangle\,+\,\partial_\vartheta p(1,\overline{\vartheta})\langle\Theta\rangle\qquad\mbox{ and }\qquad \vec{U}^{h}=\nabla_h^{\perp} q\,. $$ Moreover, the couple $\Big(q,\Upsilon \Big)$ satisfies (in the weak sense) the quasi-geostrophic type system \begin{align} & \partial_{t}\left(\frac{1}{\mathcal{A}}q-\Delta_{h}q\right) -\nabla_{h}^{\perp}q\cdot \nabla_{h}\left( \Delta_{h}q\right) +\mu (\overline{\vartheta}) \Delta_{h}^{2}q\,=\,\frac{1}{\mathcal A}\,\langle X\rangle \label{eq_lim:QG} \\ & c_p(1,\overline\vartheta)\Big(\partial_{t} \Upsilon +\nabla_h^\perp q\cdot\nabla_h\Upsilon \Big)-\kappa(\overline\vartheta) \Delta \Upsilon\,=\, \kappa(\overline\vartheta)\,\alpha(1,\overline\vartheta)\,\Delta_hq\,, \label{eq_lim:transport} \end{align} supplemented with the initial conditions $$ \left(\frac{1}{\mathcal{A}}q-\Delta_{h}q\right)_{|t=0}=\left(\langle R_0\rangle+\frac{1}{2\mathcal A}\right)-{\rm curl}_h\langle\vec u^h_{0}\rangle\,,\quad \Upsilon_{|t=0}=\partial_\varrho s(1,\overline\vartheta)R_0+\partial_\vartheta s(1,\overline\vartheta)\Theta_0 $$ and the boundary condition \begin{equation} \label{eq:bc_limit_2} \nabla_x\big(\Upsilon\,+\,\alpha(1,\overline\vartheta)\,G\big) \cdot\vec{n}_{|\partial\Omega}\,=\,0\,, \end{equation} where $\vec{n}$ is the outer normal to the boundary $\partial\Omega\,=\,\{x_3=0\}\cup\{x_3=1\}$. 
In \eqref{eq_lim:QG}, we have defined \begin{equation}\label{def:X} X:=\mathcal B\frac{\kappa(\overline\vartheta)}{c_p(1,\overline \vartheta)}\left(\Delta \Upsilon\,+\,\alpha(1,\overline\vartheta)\,\Delta_hq -\frac{1}{\kappa(\overline\vartheta)}\nabla^\perp_hq \cdot \nabla_h \Upsilon\right) \quad \quad \text{with}\quad \quad \mathcal B := \frac{\partial_\vartheta p(1,\overline \vartheta)}{\partial_\vartheta s(1,\overline \vartheta)}\, . \end{equation} \end{theorem} \begin{remark} \label{r:limit_1} Observe that $q$ and $\Upsilon$ can equivalently be chosen to describe the target problem. Indeed, straightforward computations show that \begin{align*} R\,&=\,-\,\frac{1}{\beta}\,\big(\partial_\vartheta p(1,\overline\vartheta)\,\Upsilon\,-\,\partial_\vartheta s(1,\overline\vartheta)\,q\,-\,\partial_\vartheta s(1,\overline\vartheta)\,G\big) \\ \Theta\,&=\,\frac{1}{\beta}\,\big(\partial_\varrho p(1,\overline\vartheta)\,\Upsilon\,-\,\partial_\varrho s(1,\overline\vartheta)\,q\,-\,\partial_\varrho s(1,\overline\vartheta)\,G\big)\,, \end{align*} where we have set $\beta\,=\,\partial_\varrho p(1,\overline\vartheta)\,\partial_\vartheta s(1,\overline\vartheta)\,-\,\partial_\vartheta p(1,\overline\vartheta)\,\partial_\varrho s(1,\overline\vartheta)$. In particular, equation \eqref{eq_lim:transport} can be deduced from \eqref{eq:Upsilon}, which is valid also when $m=1$, using the expression of $\Theta$ and the fact that $$ \beta\,=\,c_p(1,\overline\vartheta)\,\frac{\partial_\varrho p(1,\overline\vartheta)}{\overline\vartheta}\,. $$ Here we have chosen to formulate the target entropy balance equation in terms of $\Upsilon$ (as in \cite{K-N}) rather than $\Theta$ (as in Theorem \ref{th:m-geq-2} above), because the equation for $\Upsilon$ looks simpler (indeed, the equation for $\Theta$ would make a term in $\partial_tq$ appear). The price to pay is the non-homogeneous boundary condition \eqref{eq:bc_limit_2}, which may look somewhat unpleasant.
\end{remark} As pointed out for Theorem \ref{th:m-geq-2}, we notice that, although the function $q$ is defined in terms of $G$, the dynamics described by \eqref{eq_lim:QG} is purely horizontal. On the contrary, dependence on $x^3$ and vertical derivatives appear in \eqref{eq_lim:transport}. \begin{remark} \label{r:energy} We have not investigated here the well-posedness of the target problems, formulated in Theorems \ref{th:m-geq-2} and \ref{th:m=1_F=0}. Very likely, when $F=0$, by standard energy methods (see e.g. \cite{C-D-G-G}, \cite{F-G-N}, \cite{DeA-F}) it is possible to prove that those systems are well-posed in the energy space, globally in time. \\ Yet, it is not clear to us that the solutions identified in the previous theorems are (the unique) finite energy weak solutions to the target problems. \end{remark} \section{Analysis of the singular perturbation} \label{s:sing-pert} The purpose of this section is twofold. First of all, in Subsection \ref{ss:unif-est} we establish uniform bounds and further properties for our family of weak solutions. Then, we study the singular operator underlying the primitive equations \eqref{ceq} to \eqref{eeq}, and determine constraints that the limit points of our family of weak solutions have to satisfy (see Subsection \ref{ss:ctl1}). \subsection{Uniform bounds}\label{ss:unif-est} This section is devoted to establishing uniform bounds on the sequence $\bigl(\varrho_\varepsilon,\vec u_\varepsilon,\vartheta_\varepsilon\bigr)_\varepsilon$. Since the Coriolis term does not contribute to the total energy balance of the system, most of the bounds can be proven as in the case without rotation; we refer to \cite{F-N} for details. First of all, let us introduce some preliminary material. \subsubsection{Preliminaries} \label{sss:unif-prelim} Let us recall here some basic notations and results, which we will need in the proof of our convergence results. We refer to Sections 4, 5 and 6 of \cite{F-N} for more details.
Let us introduce the so-called ``essential'' and ``residual'' sets. Recall that the positive constant $\rho_*$ has been defined in Lemma \ref{l:target-rho_pos}. Following the approach of \cite{F-N}, we define $ {\cal{O}}_{\ess}\,: = \, \left[2\rho_*/3\, ,\, 2 \right]\, \times\, \left[\tems/2\,,\, 2 \tems\right]\,,\qquad {\cal{O}}_{\res}\,: =\, \,]0,+\infty[\,^2\setminus {\cal{O}}_{\ess}\,. $ Then, we fix a smooth function $\mathfrak{b} \in C^\infty_c \bigl( \,]0,+\infty[\,\times\,]0,+\infty[\, \bigr)$ such that $0\leq \mathfrak b\leq 1, \ \mathfrak b\equiv1$ on the set $ {\cal{O}}_{\ess}$, and we introduce the decomposition into essential and residual parts of a measurable function $h$ as follows: \begin{equation*}\label{ess-def} h = [h]_{\ess} + [h]_{\res},\qquad\mbox{ with }\quad [ h]_{\ess} := \mathfrak b(\vre,\tem) h\,,\quad \ [h]_{\res} = \bigl(1-\mathfrak b(\vre,\tem)\bigr)h\,. \end{equation*} We also introduce the sets $\mathcal{M}^\varepsilon_{\ess}$ and $\mathcal{M}^\varepsilon_{\res}$, defined as $$\mathcal{M}^\varepsilon_{\ess} := \left\{ (t,x) \in\, ]0,T[\, \times\, \Omega_\varepsilon \ : \ \bigl(\varrho_\ep(t,x),\vartheta_\ep(t,x)\bigr) \in {\cal{O}}_{\ess} \right\}\qquad\mbox{ and }\qquad \mathcal{M}^\varepsilon_{\res} := \big(\,]0,T[\,\times\,\Omega_\varepsilon\,\big) \setminus \mathcal{M}^\varepsilon_{\ess}\,,$$ and their version at fixed time $t\geq0$, i.e. $$\mathcal{M}^\varepsilon_{\ess} [t] := \{ x \in \Omega_\varepsilon \ : (t,x) \in \mathcal{M}^\varepsilon_{\ess} \}\qquad \mbox{ and }\qquad \mathcal{M}^\varepsilon_{\res}[t] := \Omega_\varepsilon \setminus \mathcal{M}^\varepsilon_{\ess}[t]\,.$$ The next result, which will be useful in the following subsection, is the analogue, in our context, of Lemma 5.1 in \cite{F-N}. Here we need to pay attention to the fact that, when $F\neq0$, the estimates for the equilibrium states (recall Proposition \ref{p:target-rho_bound}) are not uniform on the whole $\Omega_\varepsilon$.
\begin{lemma}\label{l:H} Fix $m\geq1$ and let $\vret$ and $\tems$ be the static states identified in Paragraph \ref{sss:equilibrium}. Under the previous assumptions, and with the notations introduced above, we have the following properties. Let $F\neq0$. For all $l>0$, there exist $\varepsilon(l)$ and positive constants $c_j\,=\,c_j(\rho_*,\overline\vartheta,l)$, with $j=1,2,3$, such that, for all $0<\varepsilon\leq\varepsilon(l)$, the next properties hold true, for all $x\in\overline{\mathbb B}_{l}$: \begin{enumerate}[(a)] \item for all $(\rho,\theta)\,\in\,\mathcal O_{\ess}$, one has $$ c_1\,\left(\left|\rho-\widetilde\varrho_\varepsilon(x)\right|^2\,+\,\left|\theta-\overline\vartheta\right|^2\right)\,\leq\,\mathcal E\left(\rho,\theta\;|\;\widetilde\varrho_\varepsilon(x),\overline\vartheta\right)\,\leq\, c_2\,\left(\left|\rho-\widetilde\varrho_\varepsilon(x)\right|^2\,+\,\left|\theta-\overline\vartheta\right|^2\right)\,; $$ \item for all $(\rho,\theta)\,\in\,\mathcal O_{\res}$, one has $$ \mathcal E\left(\rho,\theta\;|\;\widetilde\varrho_\varepsilon(x),\overline\vartheta\right)\,\geq\,c_3\,. $$ \end{enumerate} When $F=0$, the previous constants $\big(c_j\big)_{j=1,2,3}$ can be chosen to be independent of $l>0$. \end{lemma} \begin{proof} Let us start by considering the case $F\neq0$. Fix $m\geq 1$. In view of Lemma \ref{l:target-rho_pos} and Proposition \ref{p:target-rho_bound}, for all $l>0$ fixed, there exists $\varepsilon(l)$ such that, for all $\varepsilon\leq\varepsilon(l)$, we have $\widetilde\varrho_\varepsilon(x)\,\in\,[\rho_*,3/2]\,\subset\,\mathcal O_{\ess}$ for all $x\in \overline{\mathbb B}_{l}$. 
With this inclusion at hand, the first inequality is an immediate consequence of the decomposition \begin{align*} \mathcal E\left(\rho,\theta\;|\;\widetilde\varrho_\varepsilon,\overline\vartheta\right)\,&=\,\Bigl(H_{\tems}(\rho,\theta)-H_{\tems}(\rho,\overline\vartheta)\Bigr)\,+\, \Bigl(H_{\tems}(\rho,\overline\vartheta) - H_{\tems}(\widetilde\varrho_\varepsilon,\overline\vartheta) - (\rho - \vret)\,\partial_\varrho H_{\tems}(\vret,\tems)\Bigr) \\ &=\,\partial_\vartheta H_{\overline\vartheta}(\rho,\eta)\,\bigl(\theta-\overline\vartheta\bigr)\,+\, \frac{1}{2}\partial^2_{\varrho\vrho}H_{\overline\vartheta}(z_\varepsilon,\overline\vartheta)\,\bigl(\rho-\widetilde\varrho_\varepsilon\bigr)^2\,, \end{align*} for some suitable $\eta$ belonging to the interval connecting $\theta$ and $\overline\vartheta$, and $z_\varepsilon$ belonging to the interval connecting $\rho$ and $\widetilde\varrho_\varepsilon$. Indeed, it is enough to use formulas (2.49) and (2.50) of \cite{F-N}, together with the fact that we are in the essential set. Next, thanks again to the property $\widetilde\varrho_\varepsilon(x)\,\in\,[\rho_*,3/2]\,\subset\,\mathcal O_{\ess}$, we can conclude, exactly as in relation (6.69) of \cite{F-N}, that $$ \inf_{(\rho,\theta)\in\mathcal O_\res}\mathcal E\left(\rho,\theta\;|\;\widetilde\varrho_\varepsilon,\overline\vartheta\right)\,\geq\, \inf_{(\rho,\theta)\in\partial\mathcal O_\ess}\mathcal E\left(\rho,\theta\;|\;\widetilde\varrho_\varepsilon,\overline\vartheta\right)\,\geq\,c\,>\,0\,. $$ The case $F=0$ follows by similar arguments, using the fact that the various constants in Lemma \ref{l:target-rho_bound} and Proposition \ref{p:target-rho_bound} are uniform in $\Omega$. This completes the proof of the lemma. \qed \end{proof} \subsubsection{Uniform estimates for the family of weak solutions} \label{sss:uniform} With the total dissipation balance \eqref{est:dissip} and Lemma \ref{l:H} at hand, we can derive uniform bounds for our family of weak solutions.
Since this derivation is somewhat classical, we limit ourselves to recalling the main inequalities and sketching the proofs; we refer the reader to Chapters 5, 6 and 8 of \cite{F-N} for details. To begin with, we remark that, owing to the assumptions fixed in Paragraph \ref{sss:data-weak} on the initial data and to the structural hypotheses of Paragraphs \ref{sss:primsys} and \ref{sss:structural}, the right-hand side of \eqref{est:dissip} is \emph{uniformly bounded} for all $\varepsilon\in\,]0,1]$. \begin{lemma} \label{l:initial-bound} Under the assumptions fixed in Paragraphs \ref{sss:primsys}, \ref{sss:structural} and \ref{sss:data-weak}, there exists an absolute constant $C>0$ such that, for all $\varepsilon\in\,]0,1]$, one has $$ \int_{\Omega_\varepsilon} \frac{1}{2}\vrez|\uez|^2\,\dx + \frac{1}{\ep^{2m}}\int_{\Omega_\varepsilon}\mathcal E\left(\varrho_{0,\varepsilon},\vartheta_{0,\varepsilon}\;|\;\widetilde\varrho_\varepsilon,\overline\vartheta\right)\,\dx\,\leq\,C\,. $$ \end{lemma} \begin{proof} The boundedness of the first term on the left-hand side is an obvious consequence of \eqref{hyp:ill-vel} and \eqref{hyp:ill_data} for the density. So, let us show how to control the term containing $\mathcal E\left(\varrho_{0,\varepsilon},\vartheta_{0,\varepsilon}\;|\;\widetilde\varrho_\varepsilon,\overline\vartheta\right)$.
Owing to Taylor's formula, one has \begin{align*} \mathcal E\left(\varrho_{0,\varepsilon},\vartheta_{0,\varepsilon}\;|\;\widetilde\varrho_\varepsilon,\overline\vartheta\right)\,&=\, \partial_\vartheta H_{\overline\vartheta}(\varrho_{0,\varepsilon},\eta_{0,\varepsilon})\,\bigl(\vartheta_{0,\varepsilon}-\overline\vartheta\bigr)\,+\,\frac{1}{2}\, \partial^2_{\varrho\vrho}H_{\overline\vartheta}(z_{0,\varepsilon},\overline\vartheta)\,\bigl(\varrho_{0,\varepsilon}-\widetilde\varrho_\varepsilon\bigr)^2\,, \end{align*} where we can write $\eta_{0,\varepsilon}(x)\,=\,\overline\vartheta\,+\,\varepsilon^m\,\lambda_\varepsilon(x)\,\Theta_{0,\varepsilon}$ and $z_{0,\varepsilon}\,=\,\widetilde\varrho_\varepsilon\,+\,\varepsilon^m\,\zeta_\varepsilon(x)\,\varrho^{(1)}_{0,\varepsilon}$, with both the families $\bigl(\lambda_\varepsilon\bigr)_\varepsilon$ and $\bigl(\zeta_\varepsilon\bigr)_\varepsilon$ belonging to $L^\infty(\Omega_\varepsilon)$, uniformly in $\varepsilon$ (in fact, $\lambda_\varepsilon(x)$ and $\zeta_{\varepsilon}(x)$ belong to the interval $\,]0,1[\,$ for all $x\in\Omega_\varepsilon$). Notice that $\big(\eta_{0,\varepsilon}\big)_\varepsilon\,\subset\,L^\infty(\Omega_\varepsilon)$ and that $\eta_{0,\varepsilon}\geq c_1>0$ and $z_{0,\varepsilon}\geq c_2>0$ (at least for $\varepsilon$ small enough). By the structural hypotheses fixed in Paragraph \ref{sss:structural} (and in particular Gibbs' law), we get (see also formula (2.50) in \cite{F-N}) \begin{align} \label{eq:d_th-H_th} \partial_\vartheta H_{\overline\vartheta}(\varrho_{0,\varepsilon},\eta_{0,\varepsilon})\,&=\,4\,a\,\eta_{0,\varepsilon}^2\,\bigl(\eta_{0,\varepsilon}-\overline\vartheta\bigr)\,+\, \frac{\varrho_{0,\varepsilon}}{\eta_{0,\varepsilon}}\,\bigl(\eta_{0,\varepsilon}-\overline\vartheta\bigr)\,\partial_\vartheta e_M(\varrho_{0,\varepsilon},\eta_{0,\varepsilon})\,.
\end{align} In view of condition \eqref{pp3}, we gather that $\left|\partial_\vartheta e_M\right|\,\leq\,c$; therefore, from hypotheses \eqref{hyp:ill_data} and Remark \ref{r:ill_data} it is easy to deduce that $$ \frac{1}{\varepsilon^{2m}}\int_{\Omega_\varepsilon}\partial_\vartheta H_{\overline\vartheta}(\varrho_{0,\varepsilon},\eta_{0,\varepsilon})\,\bigl(\vartheta_{0,\varepsilon}-\overline\vartheta\bigr)\dx\,\leq\,C\,. $$ Moreover, by \eqref{pp1} we get (keep in mind formula (2.49) of \cite{F-N}) $$ \partial^2_{\varrho\vrho}H_{\overline\vartheta}(z_{0,\varepsilon},\overline\vartheta)\,=\,\frac{1}{z_{0,\varepsilon}}\,\partial_\varrho p_M(z_{0,\varepsilon},\overline\vartheta)\,=\,\frac{1}{\sqrt{\overline\vartheta}}\,\frac{1}{Z_{0,\varepsilon}}\,P'(Z_{0,\varepsilon})\,, $$ where we have set $Z_{0,\varepsilon}\,=\,z_{0,\varepsilon}\,\overline\vartheta^{-3/2}$. Now, thanks to \eqref{pp3} again and to the fact that $z_{0,\varepsilon}$ is strictly positive, we can estimate, for some positive constants which depend also on $\overline\vartheta$, $$ \frac{1}{Z_{0,\varepsilon}}\,P'(Z_{0,\varepsilon})\,\leq\,C\,\frac{P(Z_{0,\varepsilon})}{Z_{0,\varepsilon}^2}\,\leq\,C\left(\frac{P(Z_{0,\varepsilon})}{Z_{0,\varepsilon}^2}\,{\rm 1\mskip-4mu l}_{\{0\leq Z_{0,\varepsilon}\leq1\}}\,+\, \frac{P(Z_{0,\varepsilon})}{Z_{0,\varepsilon}^{5/3}}\,{\rm 1\mskip-4mu l}_{\{Z_{0,\varepsilon}\geq1\}}\right)\,\leq\,C\,, $$ where we have used also \eqref{pp4}. Hence, we can check that $$ \frac{1}{2\varepsilon^{2m}}\int_{\Omega_\varepsilon}\partial^2_{\varrho\vrho}H_{\overline\vartheta}(z_{0,\varepsilon},\overline\vartheta)\,\bigl(\varrho_{0,\varepsilon}-\widetilde\varrho_\varepsilon\bigr)^2\dx\,\leq\,C\,. $$ This inequality completes the proof of the lemma.
\qed \end{proof} \medbreak Owing to the previous lemma, from \eqref{est:dissip} we gather, for any $T>0$, the estimates \begin{align} \sup_{t\in[0,T]} \| \sqrt{\vre}\ue\|_{L^2(\Omega_\varepsilon;\mathbb{R}^3)}\, &\leq\,c \label{est:momentum} \\ \| \sigma_\ep\|_{{\mathcal{M}}^+ ([0,T]\times\overline\Omega_\varepsilon )}\, &\leq \,\ep^{2m}\, c\,. \label{est:sigma} \end{align} Fix now any $l>0$. Employing Lemma~\ref{l:H} (and keeping track of the dependence of constants only on $l$), we deduce \begin{align} \sup_{t\in[0,T]} \left\| \left[ \dfrac{\vre - \vret}{\ep^m}\right]_\ess (t) \right\|_{L^2(\mathbb B_{l})}\,+\, \sup_{t\in[0,T]} \left\| \left[ \dfrac{\tem - \tems}{\ep^m}\right]_\ess (t) \right\|_{L^2(\mathbb B_{l})}\,&\leq\, c(l)\,. \label{est:rho_ess} \end{align} In addition, we infer also that the measure of the ``residual set'' is small: more precisely, we have \begin{equation}\label{est:M_res-measure} \sup_{t\in[0,T]} \int_{\mathbb B_{l}} {\rm 1\mskip-4mu l}_{\mathcal{M}^\varepsilon_\res[t]} \dx\,\leq \,\ep^{2m}\, c(l)\,. \end{equation} Indeed, by Lemma \ref{l:H}, the relative entropy is bounded from below by a positive constant on the residual set; hence \eqref{est:M_res-measure} follows from the total dissipation balance \eqref{est:dissip} and Lemma \ref{l:initial-bound}. {\begin{remark}\label{rmk:cut-off} When $F=0$, thanks to Lemma \ref{l:target-rho_bound} and Proposition \ref{p:target-rho_bound}, one can see that estimates \eqref{est:rho_ess} and \eqref{est:M_res-measure} hold on the whole $\Omega_\varepsilon$, without any need of taking the localisation on the cylinders $\mathbb{B}_l$. From this observation, it is easy to see that, when $F=0$, we can replace $\mathbb{B}_l$ with the whole $\Omega_\varepsilon$ in all the following estimates. \end{remark}} Now, we fix $l>0$.
We estimate $$ \int_{\mathbb{B}_{l}}\left|\left[\varrho_\varepsilon\,\log\varrho_\varepsilon\right]_\res\right|\dx\,=\, \int_{\mathbb{B}_{l}}\left|\varrho_\varepsilon\,\log\varrho_\varepsilon\right|\,{\rm 1\mskip-4mu l}_{\{0\leq\varrho_\varepsilon\leq2\rho_*/3\}}\dx\,+\, \int_{\mathbb{B}_{l}}\left|\varrho_\varepsilon\,\log\varrho_\varepsilon\right|\,{\rm 1\mskip-4mu l}_{\{\varrho_\varepsilon\geq2\}}\dx\,. $$ Thanks to \eqref{est:M_res-measure}, the former term on the right-hand side is easily controlled by $\varepsilon^{2m}$, up to a suitable multiplicative constant also depending on $l$. As for the latter term, we have to argue in a different way. Owing to inequalities \eqref{pp2}, \eqref{p_comp}, \eqref{pp3} and \eqref{pp4}, we get that $\partial^2_{\varrho\vrho} H_{\overline\vartheta}(\varrho,\overline\vartheta)\geq C/\varrho$; therefore, by direct integration we find \begin{align*} C\,\varrho_\varepsilon\,\log\varrho_\varepsilon\,-\,C\left(\varrho_\varepsilon-1\right)\,&\leq\,H_{\overline\vartheta}(\varrho_\varepsilon,\overline\vartheta)\,-\,H_{\overline\vartheta}(1,\overline\vartheta)\,-\, \partial_\varrho H_{\overline\vartheta}(1,\overline\vartheta)(\varrho_\ep-1) \\ &\leq\,\mathcal E\left(\varrho_\varepsilon,\vartheta_\varepsilon\;|\;\widetilde\varrho_\varepsilon,\overline\vartheta\right)\,+\,\mathcal E\left(\widetilde\varrho_\varepsilon,\overline\vartheta\;|\;1,\overline\vartheta\right)\,+\, \Big( \partial_\varrho H_{\overline\vartheta}(\widetilde\varrho_\varepsilon,\overline\vartheta) - \partial_\varrho H_{\overline\vartheta}(1,\overline\vartheta)\Big) \big( \varrho_\varepsilon - \widetilde\varrho_\varepsilon \big)\, , \end{align*} since an expansion analogous to \eqref{eq:d_th-H_th} allows us to gather that $H_{\overline\vartheta}(\varrho_\varepsilon,\overline\vartheta)\,-\,H_{\overline\vartheta}(\varrho_\varepsilon,\vartheta_\varepsilon)\,\leq\,0$.
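For the reader's convenience, let us briefly justify this last property; the computation below is only a sketch, relying on Gibbs' relation $\vartheta\,\partial_\vartheta s\,=\,\partial_\vartheta e$ (at fixed $\varrho$) and on the positivity of the specific heat $\partial_\vartheta e$, both guaranteed by the structural hypotheses of Paragraph \ref{sss:structural}. Recalling that $H_{\overline\vartheta}(\varrho,\vartheta)\,=\,\varrho\,\bigl(e(\varrho,\vartheta)-\overline\vartheta\,s(\varrho,\vartheta)\bigr)$, one computes
% Sign of the temperature derivative of the Helmholtz function:
\[
\partial_\vartheta H_{\overline\vartheta}(\varrho,\vartheta)\,=\,\varrho\,\partial_\vartheta e(\varrho,\vartheta)\,-\,\overline\vartheta\,\varrho\,\partial_\vartheta s(\varrho,\vartheta)\,=\,
\varrho\,\partial_\vartheta e(\varrho,\vartheta)\;\frac{\vartheta\,-\,\overline\vartheta}{\vartheta}\,,
\]
so the map $\vartheta\mapsto H_{\overline\vartheta}(\varrho,\vartheta)$ is decreasing for $\vartheta<\overline\vartheta$ and increasing for $\vartheta>\overline\vartheta$; its minimum is attained at $\vartheta=\overline\vartheta$, whence $H_{\overline\vartheta}(\varrho_\varepsilon,\overline\vartheta)\,-\,H_{\overline\vartheta}(\varrho_\varepsilon,\vartheta_\varepsilon)\,\leq\,0$.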
On the one hand, using \eqref{est:dissip}, Proposition \ref{p:target-rho_bound} and \eqref{est:M_res-measure} one deduces \begin{equation*} \label{est:rho-log_prelim} \left|\int_{\mathbb{B}_{l}\cap\mathcal{M}^\varepsilon_\res[t]}\left(\mathcal E\left(\varrho_\varepsilon,\vartheta_\varepsilon\;|\;\widetilde\varrho_\varepsilon,\overline\vartheta\right)\,+\, \mathcal E\left(\widetilde\varrho_\varepsilon,\overline\vartheta\;|\;1,\overline\vartheta\right)\,+\,\Big( \partial_\varrho H_{\overline\vartheta}(\widetilde\varrho_\varepsilon,\overline\vartheta) - \partial_\varrho H_{\overline\vartheta}(1,\overline\vartheta)\Big) \big( \varrho_\varepsilon - \widetilde\varrho_\varepsilon \big)\,\right)\dx \right|\,\leq\,C\,\varepsilon^{2m}\,. \end{equation*} On the other hand, $\varrho_\varepsilon\log\varrho_\varepsilon-\left(\varrho_\varepsilon-1\right)\,\geq\,\varrho_\varepsilon\left(\log\varrho_\varepsilon-1\right)\,\geq\,(1/2)\,\varrho_\varepsilon\,\log\varrho_\varepsilon$ whenever $\varrho_\varepsilon\geq e^2$. Hence, since we have $$ \int_{\mathbb{B}_{l}}\left|\varrho_\varepsilon\,\log\varrho_\varepsilon\right|\,{\rm 1\mskip-4mu l}_{\{2\leq\varrho_\varepsilon\leq e^2\}}\dx\,\leq\,C\,\varepsilon^{2m} $$ owing to \eqref{est:M_res-measure} again, we finally infer that, for any fixed $l>0$, \begin{equation} \label{est:rho*log-rho} \sup_{t\in[0,T]}\int_{\mathbb{B}_{l}}\left|\left[\varrho_\varepsilon\,\log\varrho_\varepsilon\right]_\res(t)\right|\dx\,\leq\,c(l)\,\varepsilon^{2m}\,.
\end{equation} Owing to inequality \eqref{est:rho*log-rho}, we deduce (exactly as in \cite{F-N}, see estimates (6.72) and (6.73) therein) that \begin{align} \sup_{t\in [0,T]} \int_{\mathbb{B}_{l}} \bigl( \left| [ \vre e(\vre,\tem)]_\res\right| + \left| [ \vre s(\vre,\tem)]_\res\right|\bigr)\,\dx\,&\leq\,\ep^{2m}\, c (l)\,, \label{est:e-s_res} \end{align} which in particular implies (again, we refer to Section 6.4.1 of \cite{F-N} for details) the following bounds: \begin{align} \sup_{t\in [0,T]} \int_{\mathbb{B}_{l}} [ \vre]^{5/3}_\res (t)\,\dx \,+\,\sup_{t\in [0,T]} \int_{\mathbb{B}_{l}} [ \tem]^{4}_\res (t)\, \dx \,&\leq\,\ep^{2m}\, c (l)\,. \label{est:rho_res} \end{align} Let us move further. In view of \eqref{S}, \eqref{ss}, \eqref{q} and \eqref{mu}, relation \eqref{est:sigma} implies \begin{align} \int_0^T \left\| \nabla_x \ue +\, ^t\nabla_x \ue - \frac{2}{3} {\rm div}\, \ue \, {\rm Id}\, \right\|^2_{L^2(\Omega_\varepsilon;\mathbb{R}^{3\times3})}\, \dt\, &\leq\, c \label{est:Du} \\ \int_0^T \left\| \nabla_x \left(\frac{\tem - \tems}{\ep^m}\right) \right\|^2_{L^2(\Omega_\varepsilon;\mathbb{R}^3)}\, \dt\, +\, \int_0^T \left\| \nabla_x \left(\frac{\log(\tem) - \log(\tems)}{\ep^m}\right) \right\|^2_{L^2(\Omega_\varepsilon;\mathbb{R}^3)} \,\dt\, &\leq\, c\,. \label{est:D-theta} \end{align} {Thanks to the previous inequalities and \eqref{est:M_res-measure}, we can argue as in Subsection 8.2 of \cite{F-N}: by suitable generalizations of the Poincar\'e and Korn-Poincar\'e inequalities respectively (see Propositions \ref{app:poincare_prop} and \ref{app:korn-poincare_prop}), for all $l>0$ we gather also} \begin{align} \int_0^T \left\| \frac{\tem - \tems}{\ep^m} \right\|^2_{W^{1,2}({\mathbb{B}_l})}\, \dt \,+\, \int_0^T \left\| \frac{\log(\tem) - \log(\tems)}{\ep^m} \right\|^2_{W^{1,2}({\mathbb{B}_l})}\, \dt\,&\leq\,c(l) \label{est:theta-Sob} \\ \int_0^T \left\| \ue \right\|^2_{W^{1,2}(\mathbb{B}_l; \mathbb{R}^3)} \dt\,&\leq\,c(l)\,.
\label{est:u-H^1} \end{align} Finally, we also find that \begin{align} \int^T_0\left\|\left[\frac{\varrho_\varepsilon\,s(\varrho_\varepsilon,\vartheta_\varepsilon)}{\varepsilon^m}\right]_{\res}\right\|^{2}_{L^{30/23}(\mathbb{B}_{l})}\dt\,+\, \int^T_0\left\|\left[\frac{\varrho_\varepsilon\,s(\varrho_\varepsilon,\vartheta_\varepsilon)}{\varepsilon^m}\right]_{\res}\, \vec{u}_\varepsilon\right\|^{2}_{L^{30/29}(\mathbb{B}_{l})}\dt\,&\leq\,c(l) \label{est:rho-s_res} \\ \int^T_0\left\|\frac{1}{\varepsilon^m}\,\left[\frac{\kappa(\vartheta_\varepsilon)}{\vartheta_\varepsilon}\right]_{\res}\, \nabla_{x}\vartheta_\varepsilon(t)\right\|^{2}_{L^{1}(\mathbb{B}_l)}\dt\,&\leq\,c(l)\,. \label{est:Dtheta_res} \end{align} The argument for proving \eqref{est:rho-s_res} and \eqref{est:Dtheta_res} is similar to the one employed in the proof of Proposition 5.1 of \cite{F-N}, but here it is important to get bounds for the $L^2$ norm in time (see also Remark \ref{r:bounds} below). Indeed, we have that \begin{equation} \label{5.58_book} \left[\varrho_\varepsilon\,s(\varrho_\varepsilon,\vartheta_\varepsilon)\right]_{\res}\leq C\, \left[ \varrho_\varepsilon +\varrho_\varepsilon \, |\log \varrho_\varepsilon |\, +\varrho_\varepsilon \, |\log \vartheta_\varepsilon -\log \overline \vartheta|+\vartheta_{\varepsilon}^{3}\, \right]_{\res} \end{equation} and thanks to the previous uniform bounds \eqref{est:rho_res} and \eqref{est:theta-Sob}, one has that $\big(\left[\varrho_\varepsilon \right]_{\res}\big)_\varepsilon\subset L_{T}^{\infty}( L_{\rm loc}^{5/3})$, $\big(\left[\varrho_\varepsilon \, |\log \varrho_\varepsilon |\,\right]_{\res}\big)_\varepsilon\subset L_{T}^{\infty}( L_{\rm loc}^{q})$ for all $1\leq q< 5/3$ (see relation (5.60) in \cite{F-N}), $\big(\left[\varrho_\varepsilon \, |\log \vartheta_\varepsilon -\log \overline \vartheta|\, \right]_{\res}\big)_\varepsilon\subset L_{T}^{2}( L_{\rm loc}^{30/23})$ and finally $\big(\left[\vartheta_{\varepsilon}^{3}\, \right]_{\res}\big)_\varepsilon\subset
L_{T}^{\infty}( L_{\rm loc}^{4/3})$. Let us recall that the inclusion symbol means that the sequences are uniformly bounded in the respective spaces. Then, it follows that the first term in \eqref{est:rho-s_res} is uniformly bounded in $L_T^{2}(L_{\rm loc}^{30/23})$. Next, multiplying \eqref{5.58_book} by $|\ue|$ we obtain \begin{equation*} \left[\varrho_\varepsilon\,s(\varrho_\varepsilon,\vartheta_\varepsilon)\right]_{\res}|\ue|\leq C\, \left[ \varrho_\varepsilon\,|\ue| +\varrho_\varepsilon \, |\log \varrho_\varepsilon |\, |\ue|\, +\varrho_\varepsilon \, |\log \vartheta_\varepsilon -\log \overline \vartheta|\,|\ue| +\vartheta_{\varepsilon}^{3}\,|\ue| \, \right]_{\res} \end{equation*} and using the uniform bounds \eqref{est:rho_res} and \eqref{est:u-H^1}, we have that $\big(\left[\varrho_\varepsilon\ue\right]_{\res}\big)_\varepsilon\subset L_T^{2}(L_{\rm loc}^{30/23})$. Now, we look at the second term. We know that $\big(\left[\varrho_\varepsilon \, |\log \varrho_\varepsilon |\, \right]_{\res}\big)_\varepsilon\subset L_{T}^{\infty}( L_{\rm loc}^{q})$ for all $1\leq q< 5/3$ and $\ue \in L_T^{2}(L_{\rm loc}^{6})$ (thanks to Sobolev embeddings, see Theorem \ref{app:sob_embedd_thm}). Then, we take $q$ such that $1/p:=1/q+1/6<1$ and so $$ \big(\left[\varrho_\varepsilon \, |\log \varrho_\varepsilon |\, \ue\,\right]_{\res}\big)_\varepsilon\subset L_T^{2}(L_{\rm loc}^{p})\, . $$ Keeping \eqref{est:rho_res}, \eqref{est:theta-Sob} and \eqref{est:momentum} in mind and using that $$\left[\varrho_\varepsilon \, |\log \vartheta_\varepsilon -\log \overline \vartheta|\, \vec{u}_\varepsilon\, \right]_{\res}= \left[\sqrt{\varrho_\varepsilon}\, |\log \vartheta_\varepsilon -\log \overline \vartheta|\, \sqrt{\varrho_\varepsilon}\, \vec{u}_{\varepsilon}\, \right]_{\res}\,,$$ we obtain that the third term is uniformly bounded in $L_{T}^{2}( L_{\rm loc}^{30/29})$. Using again the uniform bounds, we see that the last term is uniformly bounded in $L_T^{2}(L_{\rm loc}^{12/11})$. Thus, we obtain \eqref{est:rho-s_res}.
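Let us also make explicit, under the bounds established above, the arithmetic behind the integrability indices $30/23$ and $30/29$. By \eqref{est:rho_res} and the essential-residual decomposition, $\bigl(\varrho_\varepsilon\bigr)_\varepsilon$ is uniformly bounded in $L^\infty_T(L^{5/3}_{\rm loc})$, while \eqref{est:theta-Sob} and the Sobolev embedding $W^{1,2}\hookrightarrow L^6$ give uniform bounds for $\bigl(\log\vartheta_\varepsilon-\log\overline\vartheta\bigr)_\varepsilon$ in $L^2_T(L^6_{\rm loc})$. H\"older's inequality then yields
% Exponent bookkeeping for the index 30/23:
\[
\frac35\,+\,\frac16\,=\,\frac{23}{30}\qquad\Longrightarrow\qquad
\Bigl(\varrho_\varepsilon\,\bigl|\log\vartheta_\varepsilon-\log\overline\vartheta\bigr|\Bigr)_\varepsilon\,\subset\,L^2_T\bigl(L^{30/23}_{\rm loc}\bigr)\,,
\]
whereas, writing the third term as $\sqrt{\varrho_\varepsilon}\,\bigl|\log\vartheta_\varepsilon-\log\overline\vartheta\bigr|\,\sqrt{\varrho_\varepsilon}\,|\vec u_\varepsilon|$ and using that $\bigl(\sqrt{\varrho_\varepsilon}\bigr)_\varepsilon\subset L^\infty_T(L^{10/3}_{\rm loc})$ and, by \eqref{est:momentum}, $\bigl(\sqrt{\varrho_\varepsilon}\,\vec u_\varepsilon\bigr)_\varepsilon\subset L^\infty_T(L^2)$,
% Exponent bookkeeping for the index 30/29:
\[
\frac{3}{10}\,+\,\frac16\,+\,\frac12\,=\,\frac{29}{30}\qquad\Longrightarrow\qquad
\Bigl(\varrho_\varepsilon\,\bigl|\log\vartheta_\varepsilon-\log\overline\vartheta\bigr|\,|\vec u_\varepsilon|\Bigr)_\varepsilon\,\subset\,L^2_T\bigl(L^{30/29}_{\rm loc}\bigr)\,.
\]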
\\ To get \eqref{est:Dtheta_res}, we use instead the following estimate (see Proposition 5.1 of \cite{F-N}): \begin{equation*} \left[\frac{\kappa(\vartheta_\varepsilon )}{\vartheta_\varepsilon}\right]_{\res}\left|\frac{\nabla_{x}\vartheta_\varepsilon}{\varepsilon^{m}}\right| \leq C \left(\left|\frac{\nabla_{x}(\log \vartheta_\varepsilon )}{\varepsilon^m}\right|+\left[\vartheta_{\varepsilon}^{2}\right]_{\res}\left|\frac{\nabla_{x}\vartheta_{\varepsilon}}{\varepsilon^m}\right|\right)\,. \end{equation*} Owing to the previous uniform bounds, the former term is uniformly bounded in $L_{T}^{2}( L_{\rm loc}^{2})$ and the latter one is uniformly bounded in $L_{T}^{2}( L_{\rm loc}^{1})$. So, we obtain the estimate \eqref{est:Dtheta_res}. \begin{remark} \label{r:bounds} In contrast with \cite{F-N}, here we have made the integrability indices in \eqref{est:rho-s_res} and \eqref{est:Dtheta_res} explicit. In particular, having the $L^2$ norm in time will turn out to be fundamental for the compensated compactness argument, see Lemma \ref{l:source_bounds} below. \end{remark} \subsection{Constraints on the limit dynamics}\label{ss:ctl1} In this subsection, we establish some properties that the limit points of the family $\bigl(\varrho_\varepsilon,\vec u_\varepsilon,\vartheta_\varepsilon\bigr)_\varepsilon$ have to satisfy. These are static relations, which do not characterise the limit dynamics yet. \subsubsection{Preliminary considerations} \label{sss:constr_prelim} To begin with, we propose an extension of Proposition 5.2 of \cite{F-N}, which will be heavily used in the sequel. There are two novelties here: firstly, for the sake of generality, we will consider a non-constant density profile $\widetilde\varrho$ in the limit (although this property is not used in our analysis); in addition, due to the centrifugal force, when $F\neq0$ our result needs a localization procedure on compact sets. \begin{proposition} \label{p:prop_5.2} Let $m\geq1$ be fixed.
Let $\widetilde\varrho_\varepsilon$ and $\overline\vartheta$ be the static solutions identified and studied in Paragraph \ref{sss:equilibrium}, and take $\widetilde\varrho$ to be the pointwise limit of the family $\left(\widetilde\varrho_\varepsilon\right)_\varepsilon$ (in particular, $\widetilde\varrho\equiv1$ if $m>1$, or if $m=1$ and $F=0$). Let $(\varrho_\ep)_\ep$ and $(\vartheta_\ep)_\ep$ be sequences of non-negative measurable functions, and define $$ R_\varepsilon\,:=\,\frac{\varrho_\ep - \widetilde\varrho}{\ep^m}\qquad\mbox{ and }\qquad \Theta_\varepsilon\,:=\,\frac{\vartheta_\ep - \overline\vartheta}{\ep^m}\,. $$ Suppose that, in the limit $\varepsilon\ra0^+$, one has the convergence properties \begin{equation}\label{hyp_p5.2:1} \left[R_\varepsilon\right]_{\rm ess}\, \weakstar\, R \quad\mbox{ and }\quad \left[\Theta_\varepsilon\right]_{\rm ess}\, \weakstar\, \Theta\qquad\quad \mbox{ in the weak-$*$ topology of} \ L^\infty\bigl([0,T];L^2(K)\bigr)\,, \end{equation} for any compact $K\subset \Omega$, and that, for any $L>0$, one has \begin{equation}\label{hyp_p5.2:2} \sup_{t\in[0,T]} \int_{\mathbb B_{L}} {\rm 1\mskip-4mu l}_{\mathcal{M}^\ep_{\rm res} [t]}\,dx\, \leq \,c(L)\,\ep^{2m}\,. \end{equation} Then, for any given function $G \in C^1(\overline{\mathcal{O}}_{\rm ess})$, one has the convergence $ \frac{[G(\varrho_\ep,\vartheta_\ep)]_{\ess} - G(\widetilde\varrho,\overline\vartheta)}{\ep^m}\, \weakstar\,\partial_\varrho G(\widetilde\varrho,\overline\vartheta)\,R\, +\,\partial_\vartheta G(\widetilde\varrho,\overline\vartheta)\,\Theta \qquad \mbox{ in the weak-$*$ topology of} \ L^\infty\bigl([0,T];L^2(K)\bigr)\,, $ for any compact $K\subset \Omega$. \end{proposition} \begin{proof} The case $\widetilde\varrho\equiv1$ follows by a straightforward adaptation of the proof of Proposition 5.2 of \cite{F-N}. So, let us immediately focus on the case $m=1$ and $F\neq0$, so that the target profile $\widetilde\varrho$ is non-constant.
We start by observing that, by virtue of \eqref{hyp_p5.2:2} and Lemma \ref{l:target-rho_bound}, the estimates $$ \frac{1}{\varepsilon}\,\left\|\left[G(\widetilde\varrho,\overline\vartheta)\right]_{\rm res}\right\|_{L^1(\mathbb B_L)}\,\leq\,C(L)\,\varepsilon\qquad\mbox{ and }\qquad \frac{1}{\varepsilon}\,\left\|\left[G(\widetilde\varrho,\overline\vartheta)\right]_{\rm res}\right\|_{L^2(\mathbb B_L)}\,\leq\,C(L) $$ hold true, for any $L>0$ fixed. Combining those bounds with hypothesis \eqref{hyp_p5.2:1}, after taking $L>0$ so large that $K\subset\mathbb B_L$, we see that it is enough to prove the convergence \begin{equation} \label{conv:to-prove} \int_{K}\left[\frac{G(\varrho_\ep,\vartheta_\ep)- G(\widetilde\varrho,\overline\vartheta)}{\ep}\,-\, \partial_\varrho G(\widetilde\varrho,\overline\vartheta)\,R_\varepsilon\,-\,\partial_\vartheta G(\widetilde\varrho,\overline\vartheta)\,\Theta_\varepsilon\right]_{\ess}\,\psi\,\dx\,\longrightarrow\,0 \end{equation} for any compact $K$ fixed and any $\psi\in L^1\bigl([0,T];L^2(K)\bigr)$. Next, we remark that, whenever $G\in C^2(\overline{\mathcal O}_{\rm ess})$, we have \begin{align} &\left|\left[\frac{G(\varrho_\ep,\vartheta_\ep)- G(\widetilde\varrho,\overline\vartheta)}{\ep}\,-\, \partial_\varrho G(\widetilde\varrho,\overline\vartheta)\,R_\varepsilon-\partial_\vartheta G(\widetilde\varrho,\overline\vartheta)\,\Theta_\varepsilon\right]_{\ess}\right|\,\leq \label{est:prelim_5.2} \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \leq\,C\,\varepsilon\,\left\|{\rm Hess}(G)\right\|_{L^\infty(\overline{\mathcal O}_{\rm ess})}\left(\left[R_\varepsilon\right]_{\rm ess}^2+\left[\Theta_\varepsilon\right]_{\rm ess}^2\right), \nonumber \end{align} where we have denoted by ${\rm Hess}(G)$ the Hessian matrix of the function $G$ with respect to its variables $(\varrho,\vartheta)$. 
In particular, \eqref{est:prelim_5.2} implies the estimate \begin{align} \left\|\left[\frac{G(\varrho_\ep,\vartheta_\ep)- G(\widetilde\varrho,\overline\vartheta)}{\ep}\,-\, \partial_\varrho G(\widetilde\varrho,\overline\vartheta)\,R_\varepsilon\,-\,\partial_\vartheta G(\widetilde\varrho,\overline\vartheta)\,\Theta_\varepsilon\right]_{\ess}\right\|_{L^\infty_T(L^1(K))}\, \leq\,C\,\varepsilon\,. \label{est:prop_5.2} \end{align} Property \eqref{conv:to-prove} then follows from \eqref{est:prop_5.2}, after noticing that both terms $\left[G(\varrho_\ep,\vartheta_\ep)- G(\widetilde\varrho,\overline\vartheta)\right]_\ess/\varepsilon$ and $\left[\partial_\varrho G(\widetilde\varrho,\overline\vartheta)\,R_\varepsilon+\partial_\vartheta G(\widetilde\varrho,\overline\vartheta)\,\Theta_\varepsilon\right]_{\ess}$ are uniformly bounded in $L_T^\infty\bigl(L^2(K)\bigr)$. Finally, when $G$ is just $C^1(\overline{\mathcal O}_{\rm ess})$, we approximate it by a family of smooth functions $\bigl(G_n\bigr)_{n\in\mathbb{N}}$, converging to $G$ in $C^1(\overline{\mathcal O}_{\rm ess})$. Obviously, for each $n$, convergence \eqref{conv:to-prove} holds true for $G_n$. Moreover, we have $$ \left|\left[\frac{G(\varrho_\ep,\vartheta_\ep)- G(\widetilde\varrho,\overline\vartheta)}{\ep}\right]_{\rm ess}- \left[\frac{G_n(\varrho_\ep,\vartheta_\ep)- G_n(\widetilde\varrho,\overline\vartheta)}{\ep}\right]_{\rm ess}\right|\,\leq\,C\, \left\|G\,-\,G_n\right\|_{C^1(\overline{\mathcal O}_{\rm ess})}\left(\left[R_\varepsilon\right]_{\rm ess}+\left[\Theta_\varepsilon\right]_{\rm ess}\right)\,, $$ and a similar bound holds for the terms involving the partial derivatives of $G$. In particular, these controls entail that the remainders, created when replacing $G$ by $G_n$ in \eqref{conv:to-prove}, are uniformly small in $\varepsilon$, whenever $n$ is sufficiently large. This completes the proof of the proposition.
\qed \end{proof} \medbreak From now on, we will focus on the two cases \eqref{eq:choice-m}: either $m\geq2$ and possibly $F\neq0$, or $m\geq1$ and $F=0$. We explain this in the next remark. \begin{remark} \label{slow_rho} If $1<m<2$ and $F\neq 0$, the structure of the wave system (see Paragraph 4.1.1) is much more complicated, since the centrifugal force term becomes singular; in turn, this prevents us from proving that the quantity $\gamma_\varepsilon$ (see details below) is compact, a fact which is a key point in the convergence step. On the other hand, the idea of combining the centrifugal force term with $\gamma_\varepsilon$, in order to gain compactness of a new quantity, does not seem to work either, because, owing to temperature variations (and differently from [15] where the temperature was constant), there is no direct relation between the centrifugal force and the pressure term. \end{remark} Recall that, in both cases presented in \eqref{eq:choice-m}, the limit density profile is always constant, say $\widetilde\varrho\equiv1$. Let us fix an arbitrary positive time $T>0$, which we keep fixed until the end of this paragraph. Thanks to \eqref{est:rho_ess}, \eqref{est:rho_res} and Proposition \ref{p:target-rho_bound}, we get \begin{equation}\label{rr1} \| \vre - 1 \|_{L^\infty_T(L^2 + L^{5/3}(K))}\,\leq\, \ep^m\,c(K) \qquad \mbox{ for all }\; K \subset \Omega\quad\mbox{ compact.} \end{equation} In particular, keeping in mind the notations introduced in \eqref{in_vr} and \eqref{eq:in-dens_dec}, we can define \begin{equation} \label{def_deltarho} R_\varepsilon\,:= \frac{\varrho_\ep -1}{\ep^m} = \,\varrho_\varepsilon^{(1)}\,+\,\widetilde{r}_\varepsilon\;,\qquad\quad\mbox{ where }\quad \varrho_\varepsilon^{(1)}(t,x)\,:=\,\frac{\vre-\widetilde{\varrho}_\varepsilon}{\ep^m}\quad\mbox{ and }\quad \widetilde{r}_\varepsilon(x)\,:=\,\frac{\widetilde{\varrho}_\varepsilon-1}{\ep^m}\,. 
\end{equation} Thanks to \eqref{est:rho_ess}, \eqref{est:rho_res} and Proposition \ref{p:target-rho_bound}, the previous quantities verify the following bounds: \begin{equation}\label{uni_varrho1} \sup_{\varepsilon\in\,]0,1]}\left\|\varrho_\varepsilon^{(1)}\right\|_{L^\infty_T(L^2+L^{5/3}({\mathbb{B}_{l}}))}\,\leq\, c \qquad\qquad\mbox{ and }\qquad\qquad \sup_{\varepsilon\in\,]0,1]}\left\| \widetilde{r}_\varepsilon \right\|_{L^{\infty}(\mathbb{B}_{l})}\,\leq\, c \,. \end{equation} As usual, here the radius $l>0$ is fixed (and the constants $c$ depend on it). In addition, in the case $F=0$, there is no need to localise in $\mathbb{B}_l$, and one gets instead \begin{equation*} \sup_{\varepsilon\in\,]0,1]}\left\|\varrho_\varepsilon^{(1)}\right\|_{L^\infty_T(L^2+L^{5/3}(\Omega_\varepsilon))}\,\leq\, c \qquad\qquad\mbox{ and }\qquad\qquad \sup_{\varepsilon\in\,]0,1]}\left\| \widetilde{r}_\varepsilon \right\|_{L^{\infty}(\Omega_\varepsilon)}\,\leq\,\sup_{\varepsilon\in\,]0,1]}\left\| \widetilde{r}_\varepsilon \right\|_{L^{\infty}(\Omega)}\,\leq\, c \,. \end{equation*} In view of the previous properties, there exist $\varrho^{(1)}\in L^\infty_T(L^{5/3}_{\rm loc})$ and $\widetilde{r}\in L^\infty_{\rm loc}$ such that (up to the extraction of a suitable subsequence) \begin{equation} \label{conv:rr} \varrho_\varepsilon^{(1)}\,\weakstar\,\varrho^{(1)}\qquad\quad \mbox{ and }\qquad\quad \widetilde{r}_\varepsilon\,\weakstar\,\widetilde{r}\,, \end{equation} where we understand that limits are taken in the weak-$*$ topology of the respective spaces. Therefore, \begin{equation}\label{conv:r} R_\varepsilon\, \weakstar\,R\,:=\,\varrho^{(1)}\,+\,\widetilde{r}\qquad\qquad\qquad \mbox{ weakly-$*$ in }\quad L^\infty\bigl([0,T]; L^{5/3}_{\rm loc}(\Omega)\bigr)\,. \end{equation} Observe that $\widetilde r$ can be interpreted as a datum of our problem.
Moreover, owing to Proposition~\ref{p:target-rho_bound} and \eqref{est:rho_ess}, we also get $ \left[R_\varepsilon \right]_{\rm ess}\weakstar R\qquad\qquad \mbox{ weakly-$*$ in }\quad L^\infty\bigl([0,T]; L^2_{\rm loc}(\Omega)\bigr)\,. $ In a very similar way, we also find that \begin{align} \Theta_\varepsilon\,:=\,\frac{\vartheta_\varepsilon\,-\,\overline{\vartheta}}{\varepsilon^m}\,&\rightharpoonup\,\Theta \qquad\qquad\mbox{ in }\qquad L^2\bigl([0,T];W^{1,2}_{\rm loc}(\Omega)\bigr) \label{conv:theta} \\ \vec{u}_\varepsilon\,&\weak\,\vec{U}\qquad\qquad\mbox{ in }\qquad L^2\bigl([0,T];W_{\rm loc}^{1,2}(\Omega)\bigr)\,. \label{conv:u} \end{align} Let us now infer some properties that these weak limits have to satisfy, starting with the case of anisotropic scaling, namely, in view of \eqref{eq:choice-m}, either $m\geq2$, or $m>1$ and $F=0$. \subsubsection{The case of anisotropic scaling} \label{ss:constr_2} When $m\geq 2$, or $m>1$ and $F=0$, the system presents multiple scales, which act and interact at the same time; however, the low Mach number limit has a predominant effect. As established in the next proposition, this fact imposes some rigid constraints on the target profiles. \begin{proposition} \label{p:limitpoint} Let $m\geq2$, or $m>1$ and $F=0$ in \eqref{ceq} to \eqref{eeq}. Let $\left( \vre, \ue, \tem\right)_{\varepsilon}$ be a family of weak solutions, related to initial data $\left(\varrho_{0,\varepsilon},\vec u_{0,\varepsilon},\vartheta_{0,\varepsilon}\right)_\varepsilon$ verifying the hypotheses of Paragraph \ref{sss:data-weak}. Let $(R, \vec{U},\Theta )$ be a limit point of the sequence $\left( R_\varepsilon, \ue,\Theta_\varepsilon\right)_{\varepsilon}$, as identified in Subsection \ref{sss:constr_prelim}.
Then, \begin{align} &\vec{U}\,=\,\,\Big(\vec{U}^h\,,\,0\Big)\,,\qquad\qquad \mbox{ with }\qquad \vec{U}^h\,=\,\vec{U}^h(t,x^h)\quad \mbox{ and }\quad {\rm div}\,_{\!h}\,\vec{U}^h\,=\,0 \label{eq:anis-lim_1} \\[1ex] &\nabla_x\Big(\partial_\varrho p(1,\overline{\vartheta})\,R\,+\,\partial_\vartheta p(1,\overline{\vartheta})\,\Theta\Big)\,=\,\nabla_x G\,+\,\delta_2(m)\nabla_x F \qquad\qquad\mbox{ a.e. in }\;\,\mathbb{R}_+\times \Omega \label{eq:anis-lim_2} \\[1ex] &\partial_{t} \Upsilon +{\rm div}\,_{h}\left( \Upsilon \vec{U}^{h}\right) -\frac{\kappa(\overline\vartheta)}{\overline\vartheta} \Delta \Theta =0\,,\qquad\qquad \mbox{ with }\qquad \Upsilon\,:=\,\partial_\varrho s(1,\overline{\vartheta})R + \partial_\vartheta s(1,\overline{\vartheta})\,\Theta\,, \label{eq:anis-lim_3} \end{align} where the last equation is supplemented with the initial condition $\Upsilon_{|t=0}=\partial_\varrho s(1,\overline\vartheta)\,R_0\,+\,\partial_\vartheta s(1,\overline\vartheta)\,\Theta_0$. \end{proposition} \begin{proof} Let us focus here on the case $m\geq 2$ and $F\neq 0$. A similar analysis yields the result also in the case $m>1$, provided we take $F=0$. First of all, let us consider the weak formulation of the mass equation \eqref{ceq}: for any test function $\varphi\in C_c^\infty\bigl(\mathbb{R}_+\times\Omega\bigr)$, denoting $[0,T]\times K\,=\,{\rm Supp}\, \, \varphi$, with $\varphi(T,\cdot)\equiv0$, we have $$ -\int^T_0\int_K\bigl(\varrho_\varepsilon-1\bigr)\,\partial_t\varphi \dxdt\,-\,\int^T_0\int_K\varrho_\varepsilon\,\vec{u}_\varepsilon\,\cdot\,\nabla_{x}\varphi \dxdt\,=\, \int_K\bigl(\varrho_{0,\varepsilon}-1\bigr)\,\varphi(0,\,\cdot\,)\dx\,. 
$$ We can easily pass to the limit in this equation, thanks to the strong convergence $\varrho_\varepsilon\longrightarrow1$ provided by \eqref{rr1} and the weak convergence of $\vec{u}_\varepsilon$ in $L_T^2\bigl(L^6_{\rm loc}\bigr)$ (by \eqref{conv:u} and Sobolev embeddings): we find $$ -\,\int^T_0\int_K\vec{U}\,\cdot\,\nabla_{x}\varphi \dxdt\,=\,0\, , $$ for any test function $\varphi \, \in C_c^\infty\bigl([0,T[\,\times\Omega\bigr)$, which in particular implies \begin{equation} \label{eq:div-free} {\rm div}\, \U = 0 \qquad\qquad\mbox{ a.e. in }\; \,\mathbb{R}_+\times \Omega\,. \end{equation} Let us now consider the momentum equation \eqref{meq}, in its weak formulation \eqref{weak-mom}. First of all, we test the momentum equation on $\varepsilon^m\,\vec\phi$, for a smooth compactly supported $\vec\phi$. By use of the uniform bounds we got in Subsection \ref{ss:unif-est}, it is easy to see that the only terms which do not converge to $0$ are the ones involving the pressure and the gravitational force; in the endpoint case $m=2$, we also have the contribution of the centrifugal force. Hence, let us focus on them, and more precisely on the quantity \begin{align} \Xi\,:&=\,\frac{\nabla_x p(\varrho_\varepsilon,\vartheta_\varepsilon)}{\varepsilon^m}\,-\,\varepsilon^{m-2}\,\varrho_\varepsilon\nabla_x F\,-\,\varrho_\varepsilon\nabla_x G \label{eq:mom_rest_1}\\ &=\frac{1}{\varepsilon^m}\nabla_x\left(p(\varrho_\varepsilon,\vartheta_\varepsilon)\,-\,p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\right)\,-\, \varepsilon^{m-2}\,\left(\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon\right)\nabla_x F\,-\,\left(\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon\right)\nabla_x G\,, \nonumber \end{align} where we have used relation \eqref{prF}. 
By uniform bounds and \eqref{conv:r}, the second and third terms in the right-hand side of \eqref{eq:mom_rest_1} converge to $0$, when tested against any smooth compactly supported $\vec\phi$; notice that this is true actually for any $m>1$. On the other hand, for the first item we can use the decomposition $$ \frac{1}{\varepsilon^m}\,\nabla_x\left(p(\varrho_\varepsilon,\vartheta_\varepsilon)\,-\,p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\right)\,=\, \frac{1}{\varepsilon^m}\,\nabla_x\left(p(\varrho_\varepsilon,\vartheta_\varepsilon)\,-\,p(1,\overline{\vartheta})\right)\,-\, \frac{1}{\varepsilon^m}\,\nabla_x\left(p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\,-\,p(1,\overline{\vartheta})\right)\,. $$ Due to the smallness of the residual set \eqref{est:M_res-measure} and to estimate \eqref{est:rho_res}, decomposing $p$ into essential and residual part and then applying Proposition \ref{p:prop_5.2}, we get the convergence $$ \frac{1}{\varepsilon^m}\,\nabla_x\left(p(\varrho_\varepsilon,\vartheta_\varepsilon)\,-\,p(1,\overline{\vartheta})\right)\;\stackrel{*}{\rightharpoonup}\; \nabla_x\left(\partial_\varrho p(1,\overline{\vartheta})\,R\,+\,\partial_\vartheta p(1,\overline{\vartheta})\,\Theta\right) $$ in $L_T^\infty(H^{-1}_{\rm loc})$, for any $T>0$. On the other hand, a Taylor expansion of $p(\,\cdot\,,\overline{\vartheta})$ up to the second order around $1$ gives, together with Proposition \ref{p:target-rho_bound}, the bound $$ \left\|\frac{1}{\varepsilon^m}\,\left(p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\,-\,p(1,\overline{\vartheta})\right)\,-\, \partial_\varrho p(1,\overline{\vartheta})\,\widetilde{r}_\varepsilon\right\|_{L^\infty(K)}\,\leq\,C(K)\,\varepsilon^m\, , $$ for any compact set $K\subset\Omega$. 
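For the reader's convenience, let us sketch the computation behind the previous bound (under the smoothness of $p$ assumed in Paragraph \ref{sss:structural}). A second order Taylor expansion of $p(\,\cdot\,,\overline{\vartheta})$ around $1$ gives
$$
p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\,-\,p(1,\overline{\vartheta})\,=\,\partial_\varrho p(1,\overline{\vartheta})\,\left(\widetilde{\varrho}_\varepsilon-1\right)\,+\,O\!\left(\left|\widetilde{\varrho}_\varepsilon-1\right|^2\right)\,;
$$
keeping in mind that $\widetilde{\varrho}_\varepsilon-1\,=\,\varepsilon^m\,\widetilde{r}_\varepsilon$, with $\bigl(\widetilde{r}_\varepsilon\bigr)_\varepsilon$ bounded in $L^\infty(K)$ by Proposition \ref{p:target-rho_bound}, after dividing by $\varepsilon^m$ the remainder turns out to be of order $\varepsilon^{m}$ in $L^\infty(K)$, as claimed.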
From the previous estimate we deduce that $\left(p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\,-\,p(1,\overline{\vartheta})\right)/\varepsilon^m\,\longrightarrow\, \partial_\varrho p(1,\overline{\vartheta})\,\widetilde{r}$ in e.g. $\mathcal{D}'\bigl(\mathbb{R}_+\times\Omega\bigr)$. Putting all these facts together and keeping in mind relation \eqref{conv:r}, thanks to \eqref{eq:mom_rest_1} we finally find the celebrated \emph{Boussinesq relation} \begin{equation} \label{eq:rho-theta} \nabla_x\left(\partial_\varrho p(1,\overline{\vartheta})\,\varrho^{(1)}\,+\,\partial_\vartheta p(1,\overline{\vartheta})\,\Theta\right)\,=\,0 \qquad\qquad\mbox{ a.e. in }\; \mathbb{R}_+\times \Omega\,. \end{equation} \begin{remark} \label{r:F-G} Notice that, dividing \eqref{prF} by $\varepsilon^m$ and passing to the limit in it, one gets the identity $$ \partial_\varrho p(1,\overline{\vartheta})\,\nabla_x\widetilde{r} \,=\,\nabla_x G\,+\,\delta_2(m)\nabla_x F\,, $$ where we have set $\delta_2(m)=1$ if $m=2$, $\delta_2(m)=0$ otherwise. Hence, relation \eqref{eq:rho-theta} is equivalent to equality \eqref{eq:anis-lim_2}, which might be more familiar to the reader (see formula (5.10) in Chapter 5 of \cite{F-N}). \end{remark} Up to now, the contribution of the fast rotation in the limit has not been seen: this is due to the fact that the incompressible limit takes place faster than the high rotation limit, because $m>1$. Roughly speaking, the rotation term enters into the singular perturbation operator as a ``lower order'' part; nonetheless, being singular, it does impose some conditions on the limit dynamics. To make this rigorous, we test \eqref{meq} on $\varepsilon\,\vec\phi$, where this time we take $\vec\phi\,=\,{\rm curl}\,\vec\psi$, for some smooth compactly supported $\vec\psi\,\in C^\infty_c\bigl([0,T[\,\times\Omega\bigr)$. 
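Notice that the choice $\vec\phi\,=\,{\rm curl}\,\vec\psi$ has the effect of killing all the gradient terms: indeed, for any smooth function $h$, an integration by parts yields
$$
\int_\Omega\nabla_xh\,\cdot\,{\rm curl}\,\vec\psi\,\dx\,=\,-\,\int_\Omega h\;{\rm div}\,\bigl({\rm curl}\,\vec\psi\bigr)\,\dx\,=\,0\,,
$$
owing to the identity ${\rm div}\,{\rm curl}\,\equiv\,0$ and to the compact support of $\vec\psi$.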
Once again, by uniform bounds we infer that the $\partial_t$ term, the convective term and the viscosity term all converge to $0$ when $\varepsilon\ra0^+$. As for the pressure and the external forces, we repeat the same manipulations as before: making use of relation \eqref{prF} again, we are led to work on $$ \int^T_0\int_K\left(\frac{1}{\varepsilon^{2m-1}}\nabla_x\left(p(\varrho_\varepsilon,\vartheta_\varepsilon)\,-\,p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\right)\,-\, \frac{\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon}{\varepsilon}\,\nabla_x F\,-\,\frac{\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon}{\varepsilon^{m-1}}\nabla_x G\right)\cdot\vec\phi\,\dx\,dt\,, $$ where the compact set $K\subset\Omega$ is such that ${\rm Supp}\,\vec\phi\subset[0,T[\,\times K$, and $\varepsilon>0$ is small enough. According to \eqref{rr1}, the two forcing terms converge to $0$, in the limit for $\varepsilon\ra0^+$; on the other hand, the first term (which has no chance to be bounded uniformly in $\varepsilon$) simply vanishes, due to the fact that $\vec\phi\,=\,{\rm curl}\,\vec\psi$. Finally, using a priori bounds and properties \eqref{conv:r} and \eqref{conv:u}, it is easy to see that the rotation term converges to $\int^T_0\int_K\vec{e}_3\times\vec{U}\cdot\vec\phi$. In the end, passing to the limit for $\varepsilon\ra0^+$ we find $$ \mathbb{H}\left(\vec{e}_3\times\vec{U}\right)\,=\,0\qquad\qquad\text{and so}\qquad\qquad \vec{e}_3\times\vec{U}\,=\,\nabla_x\Phi\, $$ for some potential function $\Phi$. From this relation, which in components reads \begin{equation}\label{limit_U_components} \begin{pmatrix} -U^2 \\ U^1\\ 0 \end{pmatrix} =\begin{pmatrix} \partial_1 \Phi \\ \partial_2 \Phi \\ \partial_3 \Phi \end{pmatrix} \, , \end{equation} we deduce that $\Phi=\Phi(t,x^h)$, i.e. $\Phi$ does not depend on $x^3$, and that the same property is inherited by $\vec{U}^h\,=\,\bigl(U^1,U^2\bigr)$, i.e. $\vec{U}^h\,=\,\vec{U}^h(t,x^h)$.
Furthermore, from \eqref{limit_U_components}, it is also easy to see that the $2$-D flow given by $\vec{U}^h$ is incompressible, namely ${\rm div}\,_{\!h}\,\vec{U}^h\,=\,0$. Combining this fact with \eqref{eq:div-free}, we infer that $\partial_3 U^3\,=\,0$; on the other hand, thanks to the boundary condition \eqref{bc1-2} we must have $\bigl(\vec{U}\cdot\vec{n}\bigr)_{|\partial\Omega}\,=\,0$. Keeping in mind that $\partial\Omega\,=\,\bigl(\mathbb{R}^2\times\{0\}\bigr)\cup\bigl(\mathbb{R}^2\times\{1\}\bigr)$, we finally get $U^3\,\equiv\,0$, whence \eqref{eq:anis-lim_1} follows. Next, we observe that we are now in a position to pass to the limit in the weak formulation \eqref{weak-ent} of \eqref{eiq}. The argument being analogous to the one used in \cite{F-N} (see Paragraph 5.3.2), we only sketch it. First of all, testing \eqref{eiq} on $\varphi/\varepsilon^m$, for some $\varphi\in C^\infty_c\bigl([0,T[\,\times\Omega\bigr)$, and using \eqref{ceq}, for $\varepsilon>0$ small enough we get \begin{align} &-\int^T_0\!\!\int_K\varrho_\varepsilon\left(\frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(1,\overline{\vartheta})}{\varepsilon^m}\right)\partial_t\varphi - \int^T_0\!\!\int_K\varrho_\varepsilon\left(\frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(1,\overline{\vartheta})}{\varepsilon^m}\right)\vec{u}_\varepsilon\cdot\nabla_x\varphi \label{weak:entropy} \\ &+\int^T_0\!\!\int_K\frac{\kappa(\vartheta_\varepsilon)}{\vartheta_\varepsilon}\,\frac{1}{\varepsilon^m}\,\nabla_x\vartheta_\varepsilon\cdot\nabla_x\varphi- \frac{1}{\varepsilon^m}\,\langle\sigma_\varepsilon,\varphi\rangle_{[\mathcal{M}^+,C^0]([0,T]\times K)} = \int_K\varrho_{0,\varepsilon}\left(\frac{s(\varrho_{0,\varepsilon},\vartheta_{0,\varepsilon})-s(1,\overline{\vartheta})}{\varepsilon^m}\right)\varphi(0)\,.
\nonumber \end{align} To begin with, let us decompose \begin{align} &\varrho_\varepsilon\left(\frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(1,\overline{\vartheta})}{\varepsilon^m}\right) = \label{eq:dec_rho-s} \\ &=[\varrho_\varepsilon]_{\ess}\left(\frac{[s(\varrho_\varepsilon,\vartheta_\varepsilon)]_{\ess}-s(1,\overline{\vartheta})}{\varepsilon^m}\right) + \left[\frac{\varrho_\varepsilon}{\varepsilon^m}\right]_{\res}\left([s(\varrho_\varepsilon,\vartheta_\varepsilon)]_{\ess}-s(1,\overline{\vartheta})\right) + \left[\frac{\varrho_\varepsilon\,s(\varrho_\varepsilon,\vartheta_\varepsilon)}{\ep^m}\right]_{\res}\,. \nonumber \end{align} Thanks to \eqref{est:rho_res}, we discover that the second term in the right-hand side strongly converges to $0$ in $L_T^\infty(L^{5/3}_{\rm loc})$. Also the third term converges to $0$ in the space $ L_T^2(L^{30/23}_{\rm loc})$, as a consequence of \eqref{est:M_res-measure} and \eqref{est:rho-s_res}. Notice that these terms converge to $0$ even when multiplied by $\vec{u}_\varepsilon$: to see this, it is enough to put \eqref{est:M_res-measure}, \eqref{est:rho-s_res}, \eqref{est:u-H^1} and the previous properties together. As for the first term in the right-hand side of \eqref{eq:dec_rho-s}, Propositions \ref{p:prop_5.2} and \ref{p:target-rho_bound} and estimate \eqref{rr1} imply that it weakly converges to $\partial_\varrho s(1,\overline{\vartheta})\,R\,+\,\partial_\vartheta s(1,\overline{\vartheta})\,\Theta$, where $R$ and $\Theta$ are defined respectively in \eqref{conv:r} and \eqref{conv:theta}. 
On the other hand, an application of the Div-Curl Lemma (see Theorem \ref{app:div-curl_lem}) gives $$ [\varrho_\varepsilon]_{\ess}\left(\frac{[s(\varrho_\varepsilon,\vartheta_\varepsilon)]_{\ess}-s(1,\overline{\vartheta})}{\varepsilon^m}\right)\,\vec{u}_\varepsilon\,\rightharpoonup\, \Bigl(\partial_\varrho s(1,\overline{\vartheta})\,R\,+\,\partial_\vartheta s(1,\overline{\vartheta})\,\Theta\Bigr)\,\vec{U} $$ in the space $L_T^2(L^{3/2}_{\rm loc})$. In addition, from \eqref{est:sigma} we deduce that $$ \frac{1}{\varepsilon^m}\,\langle\sigma_\varepsilon,\varphi\rangle_{[\mathcal{M}^+,C^0]([0,T]\times K)}\,\longrightarrow\,0 $$ when $\varepsilon\ra0^+$. Finally, a separation into essential and residual part of the coefficient $\kappa(\vartheta_\varepsilon)/\vartheta_\varepsilon$, together with \eqref{mu}, \eqref{est:rho_ess}, \eqref{est:rho_res}, \eqref{est:theta-Sob} and \eqref{est:Dtheta_res} gives $$ \frac{\kappa(\vartheta_\varepsilon)}{\vartheta_\varepsilon}\,\frac{1}{\varepsilon^m}\,\nabla_x\vartheta_\varepsilon\,\rightharpoonup\, \frac{\kappa(\overline\vartheta)}{\overline\vartheta}\,\nabla_x\Theta\qquad\qquad\mbox{ in }\qquad L^2\bigl([0,T];L^{1}_{\rm loc}(\Omega)\bigr)\,. $$ In the end, we have proved that equation \eqref{weak:entropy} converges, for $\varepsilon\ra0^+$, to equation \begin{align} &-\int^T_0\int_\Omega\Bigl(\partial_\varrho s(1,\overline{\vartheta})R + \partial_\vartheta s(1,\overline{\vartheta})\,\Theta\Bigr)\left(\partial_t\varphi + \vec{U}\cdot\nabla_x\varphi\right)\dxdt \,+ \label{eq:ent_bal_lim_1} \\ &\qquad\qquad+ \int^T_0\int_\Omega\frac{\kappa(\overline\vartheta)}{\overline\vartheta} \nabla_x\Theta\cdot\nabla_x\varphi\dxdt = \int_\Omega\Bigl(\partial_\varrho s(1,\overline{\vartheta})\,R_0\,+\,\partial_\vartheta s(1,\overline{\vartheta})\,\Theta_0\Bigr)\,\varphi(0)\dx\, , \nonumber \end{align} for all $\varphi \in C_c^\infty([0,T[\,\times\Omega)$, with $T>0$ any arbitrary time. 
Relation \eqref{eq:ent_bal_lim_1} means that the quantity $\Upsilon$, defined in \eqref{eq:anis-lim_3}, is a weak solution of that equation, related to the initial datum $\Upsilon_0:=\partial_\varrho s(1,\overline\vartheta)\,R_0\,+\,\partial_\vartheta s(1,\overline\vartheta)\,\Theta_0$. Equation \eqref{eq:anis-lim_3} is in fact an equation for $\Theta$ only; keep in mind Remark \ref{r:lim delta theta}. \qed \end{proof} \subsubsection{The case of isotropic scaling} \label{ss:constr_1} We focus now on the case of isotropic scaling, namely $m=1$. Recall that, in this instance, we also set $F=0$. In this case, the fast rotation and weak compressibility effects are of the same order; in turn, this allows one to reach the so-called \emph{quasi-geostrophic balance} in the limit (see equation \eqref{eq:streamq} below). \begin{proposition} \label{p:limit_iso} Take $m=1$ and $F=0$ in system \eqref{ceq} to \eqref{eeq}. Let $\left( \vre, \ue, \tem\right)_{\varepsilon}$ be a family of weak solutions to \eqref{ceq} to \eqref{eeq}, associated with initial data $\left(\varrho_{0,\varepsilon},\vec u_{0,\varepsilon},\vartheta_{0,\varepsilon}\right)$ verifying the hypotheses fixed in Paragraph \ref{sss:data-weak}. Let $(R, \vec{U},\Theta )$ be a limit point of the sequence $\left(R_{\varepsilon} , \ue,\Theta_\varepsilon\right)_{\varepsilon}$, as identified in Subsection \ref{sss:constr_prelim}. Then, \begin{align} &\vec{U}\,=\,\,\Big(\vec{U}^h\,,\,0\Big)\,,\qquad\qquad \mbox{ with }\qquad \vec{U}^h\,=\,\vec{U}^h(t,x^h)\quad \mbox{ and }\quad {\rm div}\,_{\!h}\,\vec{U}^h\,=\,0 \nonumber \\[1ex] &\vec{U}^h\,=\,\nabla^\perp_hq \;\mbox{ a.e.
in }\;\,]0,T[\, \times \Omega\, , \quad\mbox{ with } \label{eq:streamq} \\[1ex] & q\,=\,q(t,x^h)\,:=\,\partial_\varrho p(1,\overline{\vartheta})R+\partial_\vartheta p(1,\overline{\vartheta})\Theta-G-1/2 \label{eq:for q} \\[1ex] &\partial_{t} \Upsilon +{\rm div}_h\left( \Upsilon \vec{U}^{h}\right) -\frac{\kappa(\overline\vartheta)}{\overline\vartheta} \Delta \Theta =0\,,\qquad\quad\mbox{ with }\qquad \Upsilon_{|t=0}\,=\,\Upsilon_0\,, \nonumber \end{align} where $ \Upsilon$ and $\Upsilon_0$ are the same quantities defined in Proposition \ref{p:limitpoint}. \end{proposition} \begin{proof} Arguing as in the proof of Proposition \ref{p:limitpoint}, it is easy to pass to the limit in the continuity equation and in the entropy balance. In particular, we obtain again equations \eqref{eq:div-free} and \eqref{eq:ent_bal_lim_1}. The only changes concern the analysis of the momentum equation, written in its weak formulation \eqref{weak-mom}. We start by testing it on $\varepsilon\,\vec\phi$, for a smooth compactly supported $\vec\phi$. Similarly to what has been done above, the uniform bounds of Subsection \ref{ss:unif-est} allow us to say that the only quantity which does not vanish in the limit is the sum of the terms involving the Coriolis force, the pressure and the gravitational force: $$ \vec{e}_{3}\times \varrho_{\varepsilon}\ue\,+\frac{\nabla_x \left( p(\varrho_\varepsilon,\vartheta_\varepsilon)-p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\right)}{\varepsilon}\,-\, \left(\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon \right)\nabla_x G\,=\,\mathcal O(\varepsilon)\,. $$ From this relation, following the same computations performed in the proof of Proposition \ref{p:limitpoint}, in the limit $\varepsilon\ra0^+$ we obtain that $$ \vec{e}_{3}\times \vec{U}+\nabla_x\left(\partial_\varrho p(1,\overline{\vartheta})\,\varrho^{(1)}\,+\,\partial_\vartheta p(1,\overline{\vartheta})\,\Theta\right)\,=\,0 \qquad\qquad\mbox{ a.e. in }\; \mathbb{R}_+\times \Omega\,.
$$ After defining $q$ as in \eqref{eq:for q}, i.e. $$ q\,:=\,\partial_\varrho p(1,\overline{\vartheta})R+\partial_\vartheta p(1,\overline{\vartheta})\Theta-G-1/2 $$ and keeping Remark \ref{r:F-G} in mind, this equality can be equivalently written as $$ \vec{e}_{3}\times \vec{U}+\nabla_xq\,=\,0 \qquad\qquad\mbox{ a.e. in }\; \mathbb{R}_+ \times \Omega\,. $$ As done in the proof of Proposition \ref{p:limitpoint}, from this relation we immediately deduce that $q=q(t,x^h)$ and $\vec{U}^h=\vec{U}^h(t,x^h)$. In addition, we get $\vec{U}^h=\nabla^\perp_hq$, whence we gather that $q$ can be viewed as a stream function for $\vec U^h$. Using \eqref{eq:div-free}, we infer that $\partial_{3}U^{3}=0$, which in turn implies that $U^{3}\equiv0$, thanks to \eqref{bc1-2}. The proposition is thus proved. \qed \end{proof} \begin{remark} \label{r:q} Notice that $q$ is defined up to an additive constant. We fix it to be $-1/2$, in order to compensate the vertical mean of $G$ and have a cleaner expression for $\langle q\rangle$ (see Theorem \ref{th:m=1_F=0}). As a matter of fact, $\langle q\rangle$ is the natural quantity to look at; see also Subsection \ref{ss:limit_1} in this respect. \end{remark} \section{Convergence in the presence of the centrifugal force}\label{s:proof} In this section we complete the proof of Theorem \ref{th:m-geq-2}, in the case when $m\geq2$ and $F\neq0$. In the case $m>1$ and $F=0$, some arguments of the proof slightly change, due to the absence of the (unbounded) centrifugal force: we refer to Section \ref{s:proof-1} below for more details. \medbreak The uniform bounds of Subsection \ref{ss:unif-est} allow us to pass to the limit in the mass and entropy equations, but they are not enough for proving convergence in the weak formulation of the momentum equation: the main difficulty lies in identifying the weak limit of the convective term $\varrho_\varepsilon\,\vec u_\varepsilon\otimes\vec u_\varepsilon$.
For this, we need to control the strong oscillations in time of the solutions: this is the aim of Subsection \ref{ss:acoustic}. In Subsection \ref{ss:convergence}, by using a compensated compactness argument together with the Aubin-Lions Theorem (see Theorem \ref{app:Aubin_thm}), we establish strong convergence of suitable quantities related to the velocity fields. This property, which deeply relies on the structure of the wave system, allows us to pass to the limit in our equations (see Subsection \ref{ss:limit}). \subsection{Analysis of the acoustic waves} \label{ss:acoustic} The goal of the present subsection is to describe the oscillations of the solutions. First of all, we recast our equations into a wave system; there we also implement a localisation procedure, due to the presence of the centrifugal force. Then, we establish uniform bounds for the quantities appearing in the wave system. Finally, we apply a regularisation in space for all the quantities, in preparation for the computations of Subsection \ref{ss:convergence}. \subsubsection{Formulation of the acoustic equation} \label{sss:wave-eq} Let us define $$ \vec{V}_\varepsilon\,:=\,\varrho_\varepsilon\vec{u}_\varepsilon\,. $$ We start by writing the continuity equation in the form \begin{equation} \label{eq:wave_mass} \varepsilon^m\,\partial_t\varrho^{(1)}_\varepsilon\,+\,{\rm div}\,\vec{V}_\varepsilon\,=\,0\,. \end{equation} Of course, this relation, as well as the other ones which will follow, has to be read in the weak form.
Using continuity equation and resorting to the time lifting \eqref{lift0} of the measure $\sigma_\varepsilon$, straightforward computations lead us to the following form of the entropy balance: $$ \varepsilon^m\partial_t\!\left(\varrho_\varepsilon\,\frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}- \frac{1}{\varepsilon^m}\Sigma_\varepsilon\right)\,=\,\varepsilon^m\,{\rm div}\,\!\!\left(\frac{\kappa(\vartheta_\varepsilon)}{\vartheta_\varepsilon} \frac{\nabla_x\vartheta_\varepsilon}{\varepsilon^m}\right)+s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta}){\rm div}\,\!\!\left(\varrho_\varepsilon\,\vec{u}_\varepsilon\right)- {\rm div}\,\!\!\left(\varrho_\varepsilon s(\varrho_\varepsilon,\vartheta_\varepsilon)\vec{u}_\varepsilon\right), $$ where, with a little abuse of notation, we use the identification $\int_{\Omega_\varepsilon}\Sigma_\varepsilon\,\varphi\,dx\,=\,\langle\Sigma_\varepsilon,\varphi\rangle_{[\mathcal{M}^+,C^0]}$. Next, since $\widetilde{\varrho}_\varepsilon$ is smooth (recall relation \eqref{eq:target-rho} above), the previous equation can be finally written as \begin{align} &\varepsilon^m\,\partial_t\left(\varrho_\varepsilon\,\frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\,-\, \frac{1}{\varepsilon^m}\Sigma_\varepsilon\right)\,= \label{eq:wave_entropy} \\ &\qquad\quad=\, \varepsilon^m\,\biggl({\rm div}\,\!\left(\frac{\kappa(\vartheta_\varepsilon)}{\vartheta_\varepsilon}\,\frac{\nabla_x\vartheta_\varepsilon}{\varepsilon^m}\right)\,-\, \varrho_\varepsilon\,\vec{u}_\varepsilon\,\cdot\,\frac{1}{\varepsilon^m}\,\nabla_x s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\,-\, {\rm div}\,\!\left(\varrho_\varepsilon\,\frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\,\vec{u}_\varepsilon\right)\biggr)\,. 
\nonumber \end{align} Now, we turn our attention to the momentum equation. By \eqref{prF} we find \begin{align} &\varepsilon^m\,\partial_t\vec{V}_\varepsilon\,+\,\nabla_x\left(\frac{p(\varrho_\varepsilon,\vartheta_\varepsilon)-p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\right)\,+\,\varepsilon^{m-1}\,\vec{e}_3\times \vec V_\varepsilon\,=\, \varepsilon^{2(m-1)}\frac{\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon}{\varepsilon^m}\nabla_x F\,+ \label{eq:wave_momentum} \\ &\qquad\qquad\qquad\qquad\qquad +\,\varepsilon^m\left({\rm div}\,\mathbb{S}\!\left(\vartheta_\varepsilon,\nabla_x\vec{u}_\varepsilon\right)\,-\,{\rm div}\,\!\left(\varrho_\varepsilon\vec{u}_\varepsilon\otimes\vec{u}_\varepsilon\right)\,+\, \frac{\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon}{\varepsilon^m}\nabla_x G\right)\,. \nonumber \end{align} At this point, let us introduce two real numbers $\mathcal{A}$ and $\mathcal{B}$, such that the following relations are satisfied: \begin{equation} \label{relnum} \mathcal{A}\,+\,\mathcal{B}\,\partial_\varrho s(1,\overline{\vartheta})\,=\,\partial_\varrho p(1,\overline{\vartheta})\qquad\mbox{ and }\qquad \mathcal{B}\,\partial_\vartheta s(1,\overline{\vartheta})\,=\,\partial_\vartheta p(1,\overline{\vartheta})\,. \end{equation} Due to Gibbs' law \eqref{gibbs} and the structural hypotheses of Paragraph \ref{sss:structural} (see also Chapter 8 of \cite{F-N} and \cite{F-Scho}), we notice that $\mathcal A$ is given by formula \eqref{def:A}, and $\mathcal A>0$. 
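For the reader's convenience, let us also point out that, owing to Gibbs' law \eqref{gibbs} and the structural hypotheses of Paragraph \ref{sss:structural}, one has $\partial_\vartheta s(1,\overline{\vartheta})\,>\,0$; hence relations \eqref{relnum} uniquely determine the two numbers, namely
$$
\mathcal{B}\,=\,\frac{\partial_\vartheta p(1,\overline{\vartheta})}{\partial_\vartheta s(1,\overline{\vartheta})}
\qquad\qquad\mbox{ and }\qquad\qquad
\mathcal{A}\,=\,\partial_\varrho p(1,\overline{\vartheta})\,-\,\frac{\partial_\vartheta p(1,\overline{\vartheta})}{\partial_\vartheta s(1,\overline{\vartheta})}\,\partial_\varrho s(1,\overline{\vartheta})\,.
$$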
Taking a linear combination of \eqref{eq:wave_mass} and \eqref{eq:wave_entropy}, with coefficients respectively $\mathcal{A}$ and $\mathcal{B}$, and keeping in mind equation \eqref{eq:wave_momentum}, we finally get the wave system \begin{equation} \label{eq:wave_syst} \left\{\begin{array}{l} \varepsilon^m\,\partial_tZ_\varepsilon\,+\,\mathcal{A}\,{\rm div}\,\vec{V}_\varepsilon\,=\,\varepsilon^m\,\left({\rm div}\,\vec{X}^1_\varepsilon\,+\,X^2_\varepsilon\right) \\[1ex] \varepsilon^m\,\partial_t\vec{V}_\varepsilon\,+\,\nabla_x Z_\varepsilon\,+\,\varepsilon^{m-1}\,\vec{e}_3\times \vec V_\varepsilon\,=\,\varepsilon^m\,\left({\rm div}\,\mathbb{Y}^1_\varepsilon\,+\,\vec{Y}^2_\varepsilon\,+\,\nabla_x Y^3_\varepsilon\right)\,, \qquad\big(\vec{V}_\varepsilon\cdot\vec n\big)_{|\partial\Omega_\varepsilon}\,=\,0\,, \end{array} \right. \end{equation} where we have defined the quantities \begin{eqnarray*} Z_\varepsilon & := & \mathcal{A}\,\varrho^{(1)}_\varepsilon\,+\,\mathcal{B}\,\left(\varrho_\varepsilon\, \frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\,-\,\frac{1}{\varepsilon^m}\Sigma_\varepsilon\right) \\ \vec{X}^1_\varepsilon & := & \mathcal{B}\left(\frac{\kappa(\vartheta_\varepsilon)}{\vartheta_\varepsilon}\,\frac{\nabla_x\vartheta_\varepsilon}{\varepsilon^m}\,-\, \varrho_\varepsilon\,\frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\,\vec{u}_\varepsilon\right) \\ X^2_\varepsilon & := & -\,\mathcal{B}\,\varrho_\varepsilon\,\vec{u}_\varepsilon\,\cdot\,\frac{1}{\varepsilon^m}\,\nabla_x s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta}) \\ \mathbb{Y}^1_\varepsilon & := & \mathbb{S}\!\left(\vartheta_\varepsilon,\nabla\vec{u}_\varepsilon\right)\,-\,\varrho_\varepsilon\vec{u}_\varepsilon\otimes\vec{u}_\varepsilon \\ \vec{Y}^2_\varepsilon & := & \frac{\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon}{\varepsilon^m}\nabla_x 
G\,+\, \varepsilon^{m-2}\,\frac{\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon}{\varepsilon^m}\nabla_x F \\ Y^3_\varepsilon & := &\frac{1}{\varepsilon^{m}}\left( \mathcal{A}\,\frac{\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon}{\varepsilon^m}\,+\mathcal{B}\,\varrho_\varepsilon\, \frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\,-\,\mathcal{B}\,\frac{1}{\varepsilon^m}\Sigma_\varepsilon\,-\, \frac{p(\varrho_\varepsilon,\vartheta_\varepsilon)-p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\right)\,. \end{eqnarray*} We remark that system \eqref{eq:wave_syst} has to be read in the weak sense: for any $\varphi\in C_c^\infty\bigl([0,T[\,\times\overline\Omega_\varepsilon\bigr)$, one has $$ -\,\varepsilon^m\,\int^T_0\int_{\Omega_\varepsilon} Z_\varepsilon\,\partial_t\varphi\,-\,\mathcal{A}\,\int^T_0\int_{\Omega_\varepsilon} \vec{V}_\varepsilon\cdot\nabla_x\varphi\,=\, \varepsilon^{m}\int_{\Omega_\varepsilon} Z_{0,\varepsilon}\,\varphi(0)\,+\,\varepsilon^m\,\int^T_0\int_{\Omega_\varepsilon}\left(-\,\vec{X}^1_\varepsilon\cdot\nabla_x\varphi\,+\,X^2_\varepsilon\,\varphi\right)\,, $$ and also, for any $\vec{\psi}\in C_c^\infty\bigl([0,T[\,\times\overline\Omega_\varepsilon;\mathbb{R}^3\bigr)$ such that $\big(\vec\psi \cdot \n_\varepsilon\big)_{|\partial {\Omega_\varepsilon}} = 0$, one has \begin{align*} &-\,\varepsilon^m\,\int^T_0\int_{\Omega_\varepsilon}\vec{V}_\varepsilon\cdot\partial_t\vec{\psi}\,-\,\int^T_0\int_{\Omega_\varepsilon} Z_\varepsilon\,{\rm div}\,\vec{\psi}\,+\,\varepsilon^{m-1}\int^T_0\int_{\Omega_\varepsilon} \vec{e}_3\times\vec V_\varepsilon\cdot\vec\psi \\ &\qquad\qquad\qquad\qquad =\,\varepsilon^{m}\int_{\Omega_\varepsilon}\vec{V}_{0,\varepsilon}\cdot\vec{\psi}(0)\,+\,\varepsilon^m\,\int^T_0\int_{\Omega_\varepsilon}\left(-\,\mathbb{Y}^1_\varepsilon:\nabla_x\vec{\psi}\,+\,\vec{Y}^2_\varepsilon\cdot\vec{\psi}\,-\, Y^3_\varepsilon\,{\rm 
div}\,\vec{\psi}\right)\,, \end{align*} where we have set \begin{equation} \label{def:wave-data} Z_{0,\varepsilon}\,=\,\mathcal{A}\,\varrho^{(1)}_{0,\varepsilon}\,+\,\mathcal{B}\,\left(\varrho_{0,\varepsilon}\, \frac{s(\varrho_{0,\varepsilon},\vartheta_{0,\varepsilon})-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\right) \qquad\mbox{ and }\qquad \vec{V}_{0,\varepsilon}\,=\,\varrho_{0,\varepsilon}\,\vec{u}_{0,\varepsilon}\,. \end{equation} At this point, analogously to \cite{F-G-GV-N}, for any fixed $l>0$, let us introduce a smooth cut-off \begin{align} &\chi_l\in C^\infty_c(\mathbb{R}^2)\quad \mbox{ radially decreasing}\,,\qquad \mbox{ with }\quad 0\leq\chi_l\leq1\,, \label{eq:cut-off} \\ &\mbox{such that }\quad \chi_l\equiv1 \ \mbox{ on }\ \mathbb{B}_l\,, \quad \chi_l\equiv0 \ \mbox{ out of }\ \mathbb{B}_{2l}\,,\quad \left|\nabla_{h}\chi_l(x^h)\right|\,\leq\,C(l)\; \; \text{for any}\;x^h\in\mathbb{R}^2\,. \nonumber \end{align} Then we define \begin{equation} \label{def:L-W} \Lambda_{\varepsilon,l}\,:=\,\chi_l\,Z_\varepsilon\,=\,\chi_l\,\mathcal{A}\,\varrho^{(1)}_\varepsilon\,+\,\chi_l\,\mathcal{B}\,\left(\varrho_\varepsilon\, \frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\,-\,\frac{1}{\varepsilon^m}\Sigma_\varepsilon\right) \quad\mbox{ and }\quad \vec W_{\varepsilon,l}\,:=\,\chi_l\,\vec V_\varepsilon\,. \end{equation} For notational convenience, in what follows we keep using the notation $\Lambda_{\varepsilon}$ and $\vec W_\varepsilon$ instead of $\Lambda_{\varepsilon,l}$ and $\vec W_{\varepsilon,l}\, $, tacitly meaning the dependence on $l$. 
So system \eqref{eq:wave_syst} becomes \begin{equation} \label{eq:wave_syst2} \left\{\begin{array}{l} \varepsilon^m\,\partial_t\Lambda_\varepsilon\,+\,\mathcal{A}\,{\rm div}\,\vec{W}_\varepsilon\,=\ \varepsilon^m f_\varepsilon \\[1ex] \varepsilon^m\,\partial_t\vec{W}_\varepsilon\,+\,\nabla_x \Lambda_\varepsilon\,+\,\varepsilon^{m-1}\,\vec{e}_3\times \vec W_\varepsilon\,=\, \varepsilon^m \vec{G}_\varepsilon\,, \qquad\big(\vec{W}_\varepsilon\cdot\vec n\big)_{|\partial\Omega_\varepsilon}\,=\,0\,, \end{array} \right. \end{equation} where we have defined $f_\varepsilon\,:=\,{\rm div}\,\vec{F}^1_\varepsilon\,+\,F^2_\varepsilon\;$ and $\;\vec G_\varepsilon\,:=\,{\rm div}\,\mathbb{G}^1_\varepsilon\,+\,\vec{G}^2_\varepsilon\,+\,\nabla_x G^3_\varepsilon$, with \begin{align*} & \vec{F}^1_\varepsilon\,=\,\chi_l\,\vec{X}^1_\varepsilon\qquad\qquad\mbox{ and }\qquad\qquad F^2_\varepsilon\,=\,\chi_l X^2_\varepsilon\,-\,\vec{X}^1_\varepsilon\cdot\nabla_{x}\chi_l\,+\,\mathcal{A}\,\vec V_\varepsilon\cdot \nabla_{x}\chi_l\, \\ & \mathbb{G}^1_\varepsilon\,=\,\chi_l\,\mathbb{Y}^1_\varepsilon\;,\qquad \vec G^2_\varepsilon\,=\,\chi_l\,\vec Y^2_\varepsilon\,+\,\left(\frac{Z_\varepsilon}{\varepsilon^{m}}-Y^3_\varepsilon \right)\,\nabla_{x}\chi_l\,-\,^t\mathbb{Y}^1_\varepsilon\cdot \nabla_{x}\chi_l\qquad\mbox{ and }\qquad G^3_\varepsilon\,=\,\chi_l\,Y^3_\varepsilon\,. \end{align*} \subsubsection{Uniform bounds} \label{sss:w-bounds} Here we use estimates of Subsection \ref{ss:unif-est} in order to show uniform bounds for the solutions and the data in the wave equation \eqref{eq:wave_syst2}. We start by dealing with the ``unknowns'' $\Lambda_\varepsilon$ and $\vec W_\varepsilon$. \begin{lemma} \label{l:S-W_bounds} Let $\bigl(\Lambda_\varepsilon\bigr)_\varepsilon$ and $\bigl(\vec W_\varepsilon\bigr)_\varepsilon$ be defined as above. 
Then, for any $T>0$ and all $\ep \in \, ]0,1]$, one has $ \| \Lambda_\varepsilon\|_{L^\infty_T(L^2+L^{5/3}+L^1+\mathcal{M}^+)} \leq c(l)\, ,\quad\quad \| \vec W_\varepsilon\|_{L^2_T(L^2+L^{30/23})} \leq c(l) \, . $ \end{lemma} \begin{proof} We start by writing $\vec W_\varepsilon\,=\,\vec W^1_\varepsilon\,+\,\vec W_\varepsilon^2$, where $$ \vec W^1_\varepsilon\,:=\,\chi_l\,[\varrho_\varepsilon]_{\ess}\,\vec u_\varepsilon\qquad\qquad\mbox{ and }\qquad\qquad \vec W^2_\varepsilon\,:=\,\chi_l\,[\varrho_\varepsilon]_{\res}\,\vec u_\varepsilon\,. $$ Since the density and temperature are uniformly bounded on the essential set, by \eqref{est:u-H^1} we infer that $\vec W_\varepsilon^1$ is uniformly bounded in $L_T^2(L^2)$. On the other hand, by \eqref{est:rho_res} and \eqref{est:u-H^1} again, we easily deduce that $\vec W_\varepsilon^2$ is uniformly bounded in $L_T^2(L^p)$, where $1/p\,=\,3/5\,+\,1/6\,=\,23/30$, i.e. $p\,=\,30/23$. The claim about $\vec W_\varepsilon$ is hence proved. Let us now consider $\Lambda_\varepsilon$, defined in \eqref{def:L-W}, i.e. $$ \Lambda_\varepsilon=\Lambda_{\varepsilon,l}\,:=\,\chi_l\,Z_\varepsilon\,=\,\chi_l\,\mathcal{A}\,\varrho^{(1)}_\varepsilon\,+\,\chi_l\,\mathcal{B}\,\left(\varrho_\varepsilon\, \frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\,-\,\frac{1}{\varepsilon^m}\Sigma_\varepsilon\right)\, . $$ First of all, owing to the bounds $\left\|\Sigma_\varepsilon\right\|_{L^\infty_T(\mathcal{M}^+)}\,\leq\,C\,\|\sigma_\varepsilon\|_{\mathcal{M}^+_{t,x}}$ and \eqref{est:sigma}, we have that $$ \left\|\frac{1}{\varepsilon^{2m}}\,\chi_l\,\Sigma_\varepsilon\right\|_{L^\infty_T(\mathcal{M}^+)} \leq c(l)\,, $$ uniformly in $\varepsilon>0$.
Next, we can write the following decomposition: $$ \varrho_\varepsilon\,\chi_l\,\frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\,=\, \frac{1}{\varepsilon^m}\,\chi_l\,\left(\varrho_\varepsilon\,s(\varrho_\varepsilon,\vartheta_\varepsilon)\,-\,\widetilde{\varrho}_\varepsilon\,s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\right)\,-\, \chi_l\,\varrho_\varepsilon^{(1)}\,s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\,, $$ where the latter term in the right-hand side is bounded in $L^\infty_T(L^2+L^{5/3})$ in view of \eqref{uni_varrho1} and Proposition~\ref{p:target-rho_bound}. Concerning the former term, we can write it as \begin{equation}\label{eq:ub_1} \frac{1}{\varepsilon^m}\chi_l \left(\varrho_\ep s(\varrho_\varepsilon,\vartheta_\varepsilon)-\varrho_\ep s( \widetilde{\varrho}_\varepsilon,\overline\vartheta)\right)= \frac{1}{\varepsilon^m}\chi_l \bigl[\varrho_\ep s(\varrho_\varepsilon,\vartheta_\varepsilon)-\varrho_\ep s(\widetilde{\varrho}_\varepsilon,\overline\vartheta)\bigr]_{\ess}+ \frac{1}{\varepsilon^m}\chi_l \bigl[\varrho_\ep s(\varrho_\varepsilon,\vartheta_\varepsilon)\bigr]_{\res}\,, \end{equation} since the support of $\chi_l\varrho_\ep s(\widetilde{\varrho}_\varepsilon,\overline\vartheta)$ is contained in the essential set by Proposition \ref{p:target-rho_bound}, for small enough $\varepsilon$ (depending on the fixed $l>0$). By \eqref{est:e-s_res}, the last term on the right-hand side of \eqref{eq:ub_1} is uniformly bounded in $L^\infty_T(L^1)$; as for the first term, a Taylor expansion at the first order, together with inequality \eqref{est:rho_ess} and the structural restrictions on $s$, immediately yields its uniform boundedness in $L^\infty_T(L^2)$. The lemma is hence completely proved. \qed \end{proof} \medbreak In the next lemma, we establish bounds for the source terms in the system of acoustic waves \eqref{eq:wave_syst2}. 
\begin{lemma} \label{l:source_bounds} For any $T>0$ fixed, let us define the following spaces: \begin{itemize} \item $ \mathcal X_1\,:=\,L^2\Bigl([0,T];\big(L^2+L^{1}+L^{3/2}+L^{30/23}+L^{30/29}\big)(\Omega)\Bigr)$; \item $\mathcal X_2\,:=\,L^2\Bigl([0,T];\big(L^2+L^1+L^{4/3}\big)(\Omega)\Bigr)$; \item $\mathcal X_3\,:=\,\mathcal X_2\,+\,L^\infty\Bigl([0,T];\big(L^2+L^{5/3}+L^1\big)(\Omega)\Bigr)$; \item $\mathcal X_4\,:=\,L^\infty\Bigl([0,T];\big(L^2+L^{5/3}+L^1+\mathcal{M}^+\big)(\Omega)\Bigr)$. \end{itemize} Then, for any $l>0$ fixed, one has the following bounds, uniformly in $\varepsilon\in\,]0,1]$: $$ \left\|\vec F^1_\varepsilon\right\|_{\mathcal X_1}\,+\,\left\|F^2_\varepsilon\right\|_{\mathcal X_1}\,+\,\left\|\mathbb{G}^1_\varepsilon\right\|_{\mathcal X_2}\,+\,\left\|\vec G^2_\varepsilon\right\|_{\mathcal X_3}\,+\, \left\|G^3_\varepsilon\right\|_{\mathcal X_4}\,\leq\,C(l)\,. $$ In particular, the sequences $\bigl( f_\varepsilon\bigr)_\varepsilon$ and $\bigl(\vec{G}_\varepsilon\bigr)_\varepsilon$, defined in system \eqref{eq:wave_syst2}, are uniformly bounded in the space $L^{2}\big([0,T];W^{-1,1}(\Omega)\big)$, thus in $L^{2}\big([0,T];H^{-s}(\Omega)\big)$, for all $s>5/2$. \end{lemma} \begin{proof} We start by dealing with $\vec F^1_\varepsilon$. By relations \eqref{est:D-theta} and \eqref{est:Dtheta_res}, it is easy to see that $$ \left\|\frac{1}{\varepsilon^m}\,\chi_l\,\frac{\kappa(\vartheta_\varepsilon)}{\vartheta_\varepsilon}\,\nabla_{x}\vartheta_\varepsilon\right\|_{L^2_T(L^2+L^1)}\,\leq\,c(l)\,. $$ On the other hand, the analysis of the term $$ \varrho_\varepsilon\,\chi_l\,\frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\,\vec u_\varepsilon $$ is based on an analogous decomposition as used in the proof of Lemma \ref{l:S-W_bounds} and on uniform bounds of Paragraph \ref{sss:uniform}: these facts allow us to bound it in $L^2_T(L^{3/2}+L^{30/23}+L^{30/29})$. 
The bounds for $F^2_\varepsilon$ easily follow from the previous ones and Lemma \ref{l:S-W_bounds} (indeed, the analysis for $\vec{W}_\varepsilon$ applies also to the terms of the form $\varrho_\varepsilon\vec{u}_\varepsilon$ which appear in the definition of $F^2_\varepsilon$), provided we show that $$ \frac{1}{\varepsilon^m}\,\left|\chi_l\,\nabla_{x}\widetilde{\varrho}_\varepsilon\right|\,\leq\,C(l)\,. $$ The previous bound immediately follows from the equation $$ \nabla_{x}\widetilde{\varrho}_\varepsilon\,=\,\frac{\widetilde{\varrho}_\varepsilon}{\partial_\varrho p(\widetilde{\varrho}_\varepsilon,\overline\vartheta)}\,\left(\varepsilon^{2(m-1)}\,\nabla_{x} F\,+\,\varepsilon^m\,\nabla_{x} G\right)\,, $$ which derives from \eqref{prF}. Hence, by Proposition \ref{p:target-rho_bound} and the definitions given in \eqref{assFG}, we get $$ \frac{1}{\varepsilon^m}\,\left|\chi_l\,\nabla_{x}\widetilde{\varrho}_\varepsilon\right|\,\leq\,C(l)\,\left(\varepsilon^{2(m-1)-m}+1\right)\,\leq\,C(l)\,. $$ The bound on $\mathbb{G}^1_\varepsilon$ is an immediate consequence of \eqref{est:Du} and \eqref{est:momentum}. Let us focus now on the term $\vec G^2_\varepsilon$. The control of the term $\,^t\mathbb{Y}^1_\varepsilon\cdot\nabla_{x}\chi_l$ is the same as above. The control of $\chi_l\vec Y^2_\varepsilon$, instead, gives rise to a bound in $L^\infty_T(L^2+L^{5/3})$: this is easily seen once we write $$ \chi_l\,\vec Y^2_\varepsilon\,=\,\chi_l\,\varrho_\varepsilon^{(1)}\,\nabla_{x} G\,+\,\varepsilon^{m-2}\,\chi_l\,\varrho_\varepsilon^{(1)}\,\nabla_{x}F $$ and we use \eqref{uni_varrho1} and \eqref{assFG}. 
Finally, we have the equality \begin{equation*} \begin{split} \nabla_x \chi_l\,\left( \frac{Z_\varepsilon}{\varepsilon^{m}}-Y^3_\varepsilon \right)&=\nabla_x \chi_{l}\,\left(\frac{p(\varrho_\varepsilon,\vartheta_\varepsilon)-p(\widetilde{\varrho}_\varepsilon,\overline\vartheta)}{\varepsilon^m}\right)\\ &=\nabla_x \chi_{l}\left[\frac{p(\varrho_\varepsilon,\vartheta_\varepsilon)-p(\widetilde{\varrho}_\varepsilon,\overline\vartheta)}{\varepsilon^m}\right]_\ess+\nabla_x \chi_{l}\left[\frac{p(\varrho_\varepsilon,\vartheta_\varepsilon)}{\varepsilon^m}\right]_\res. \end{split} \end{equation*} The second term in the last line is uniformly bounded in $L^\infty_T(L^1)$, in view of \eqref{est:rho_res}. For the first term, instead, we can proceed as in \eqref{eq:ub_1}. At this point, we switch our attention to the term $G^3_\varepsilon$, whose analysis is more involved. By definition, we have \begin{equation*} \begin{split} \chi_l\,Y^3_\varepsilon &:= \frac{1}{\varepsilon^{m}}\,\chi_l\,\left( \mathcal{A}\,\frac{\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon}{\varepsilon^m}\,+\mathcal{B}\,\varrho_\varepsilon\, \frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\,-\,\mathcal{B}\,\frac{1}{\varepsilon^m}\Sigma_\varepsilon\,-\, \frac{p(\varrho_\varepsilon,\vartheta_\varepsilon)-p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\right) \\ &=\frac{1}{\varepsilon^{m}}\,\chi_l\,\left( \mathcal{A}\,\frac{\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon}{\varepsilon^m}\,+\mathcal{B}\, \frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\,- \frac{p(\varrho_\varepsilon,\vartheta_\varepsilon)-p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\right)\\ &\qquad-\,\mathcal{B}\,\frac{1}{\varepsilon^{2m}}\,\chi_l\,\Sigma_\varepsilon\,+\,\mathcal{B}\,\chi_l\,\left(\frac{\varrho_\varepsilon 
-1}{\varepsilon^{m}}\right)\, \frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\,, \end{split} \end{equation*} with $\mathcal A$ and $\mathcal B$ (see definition \eqref{relnum} above) such that $$ \mathcal{A}\,+\,\mathcal{B}\,\partial_\varrho s(1,\overline{\vartheta})\,=\,\partial_\varrho p(1,\overline{\vartheta})\qquad\mbox{ and }\qquad \mathcal{B}\,\partial_\vartheta s(1,\overline{\vartheta})\,=\,\partial_\vartheta p(1,\overline{\vartheta})\,. $$ Next, we use a Taylor expansion to write \begin{equation*} \begin{split} s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})&=s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(1,\overline{\vartheta})+s(1,\overline \vartheta)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\\ &=\partial_\varrho \, s(1,\overline \vartheta )\, (\varrho_\varepsilon -1)+\partial_\vartheta \, s(1,\overline \vartheta )\, (\vartheta_\varepsilon -\overline \vartheta)+\frac{1}{2}\,{\rm Hess}(s)[\xi_1 ,\eta_1]\begin{pmatrix} \varrho_\varepsilon -1 \\ \vartheta_\varepsilon -\overline\vartheta \end{pmatrix}\cdot\begin{pmatrix} \varrho_\varepsilon -1 \\ \vartheta_\varepsilon -\overline\vartheta \end{pmatrix} \\ &+\partial_\varrho \, s(1,\overline \vartheta )\, (1-\widetilde{\varrho}_\varepsilon )+\frac{1}{2}\, \partial_{\varrho \varrho}^2\,s(\xi_{2},\overline\vartheta)\, (\widetilde{\varrho}_\varepsilon -1)^{2}\\ &=\partial_\varrho \, s(1,\overline \vartheta )\, (\varrho_\varepsilon - \widetilde{\varrho}_\varepsilon)+\partial_\vartheta \, s(1,\overline \vartheta )\, (\vartheta_\varepsilon -\overline \vartheta)\\ &+\frac{1}{2}\left({\rm Hess}(s)[\xi_1 ,\eta_1]\begin{pmatrix} \varrho_\varepsilon -1 \\ \vartheta_\varepsilon -\overline\vartheta \end{pmatrix}\cdot\begin{pmatrix} \varrho_\varepsilon -1 \\ \vartheta_\varepsilon -\overline\vartheta \end{pmatrix}+\partial_{\varrho \varrho}^2\,s(\xi_{2},\overline \vartheta)\, 
(\widetilde{\varrho}_\varepsilon -1)^{2}\right)\,, \end{split} \end{equation*} where $\xi_1,\xi_2,\eta_1$ are suitable points between $1$ and $\varrho_\varepsilon$, $1$ and $\widetilde{\varrho}_\varepsilon$, $\overline \vartheta$ and $ \vartheta_\varepsilon $ respectively, and we have denoted by ${\rm Hess}(s)[\xi,\eta]$ the Hessian matrix of the function $s$ with respect to its variables $\big(\varrho,\vartheta\big)$, computed at the point $(\xi,\eta)$. Analogously, for the pressure term we have \begin{equation*} \begin{split} p(\varrho_\varepsilon,\vartheta_\varepsilon)-p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta}) &=\partial_\varrho \, p(1,\overline \vartheta )\, (\varrho_\varepsilon - \widetilde{\varrho}_\varepsilon)+\partial_\vartheta \, p(1,\overline \vartheta )\, (\vartheta_\varepsilon -\overline \vartheta)\\ &+\frac{1}{2}\left({\rm Hess}(p)[\xi_3 ,\eta_2]\begin{pmatrix} \varrho_\varepsilon -1 \\ \vartheta_\varepsilon -\overline\vartheta \end{pmatrix}\cdot\begin{pmatrix} \varrho_\varepsilon -1 \\ \vartheta_\varepsilon -\overline\vartheta \end{pmatrix}+\partial_{\varrho \varrho}^2\,p(\xi_{4},\overline \vartheta)\, (\widetilde{\varrho}_\varepsilon -1)^{2}\right)\,, \end{split} \end{equation*} where $\xi_3,\xi_4,\eta_2$ are still between $1$ and $\varrho_\varepsilon$, $1$ and $\widetilde{\varrho}_\varepsilon$, $\overline \vartheta$ and $ \vartheta_\varepsilon $ respectively. 
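Let us point out explicitly the algebraic mechanism at play here: collecting the first-order terms of the two expansions above in the combination $\mathcal{A}\,(\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon)\,+\,\mathcal{B}\,\big(s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\big)\,-\,\big(p(\varrho_\varepsilon,\vartheta_\varepsilon)-p(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})\big)$ appearing in $\chi_l\,Y^3_\varepsilon$, we find
\[
\left(\mathcal{A}\,+\,\mathcal{B}\,\partial_\varrho s(1,\overline{\vartheta})\,-\,\partial_\varrho p(1,\overline{\vartheta})\right)\left(\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon\right)\,+\,
\left(\mathcal{B}\,\partial_\vartheta s(1,\overline{\vartheta})\,-\,\partial_\vartheta p(1,\overline{\vartheta})\right)\left(\vartheta_\varepsilon-\overline{\vartheta}\right)\,,
\]
and both coefficients vanish owing to the relations in \eqref{relnum} recalled above.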
Using now \eqref{relnum}, we find that the first order terms cancel out, and we are left with \begin{equation*} \begin{split} \chi_l\,Y^3_\varepsilon &=\frac{\mathcal{B}}{2\varepsilon^{2m}}\,\chi_l\,\left( \,{\rm Hess}(s)[\xi_1 ,\eta_1]\begin{pmatrix} \varrho_\varepsilon -1 \\ \vartheta_\varepsilon -\overline\vartheta \end{pmatrix}\cdot\begin{pmatrix} \varrho_\varepsilon -1 \\ \vartheta_\varepsilon -\overline\vartheta \end{pmatrix}+\partial_{\varrho \varrho}^2\,s(\xi_{2},\overline \vartheta)\, (\widetilde{\varrho}_\varepsilon -1)^{2}\right)\\ &-\,\frac{1}{2\varepsilon^{2m}}\,\chi_l\,\left({\rm Hess}(p)[\xi_3 ,\eta_2]\begin{pmatrix} \varrho_\varepsilon -1 \\ \vartheta_\varepsilon -\overline\vartheta \end{pmatrix}\cdot\begin{pmatrix} \varrho_\varepsilon -1 \\ \vartheta_\varepsilon -\overline\vartheta \end{pmatrix}+\partial_{\varrho \varrho}^2\,p(\xi_{4},\overline \vartheta)\, (\widetilde{\varrho}_\varepsilon -1)^{2}\right)\\ &-\,\frac{\mathcal B}{\varepsilon^{2m}}\,\chi_l\,\Sigma_\varepsilon\,+\,\mathcal{B}\,\chi_l\,\left(\frac{\varrho_\varepsilon -1}{\varepsilon^{m}}\right)\, \frac{s(\varrho_\varepsilon,\vartheta_\varepsilon)-s(\widetilde{\varrho}_\varepsilon,\overline{\vartheta})}{\varepsilon^m}\, . \end{split} \end{equation*} Thanks to the uniform bounds established in Paragraph \ref{sss:uniform} and the decomposition into essential and residual parts, the claimed control in the space $\mathcal X_4$ follows. 
\qed \end{proof} \subsubsection{Regularization and description of the oscillations}\label{sss:w-reg} Following \cite{F-N_AMPA} and \cite{F-N_CPDE} (see also \cite{Ebin}), it is convenient to reformulate our problem \eqref{ceq} to \eqref{eeq}, supplemented with complete slip boundary conditions \eqref{bc1-2} and \eqref{bc3}, in a completely equivalent way, in the domain $$ \widetilde{\Omega}_\varepsilon\,:=\,{B}_{L_\varepsilon} (0) \times \mathbb{T}^1\,,\qquad\qquad\mbox{ with }\qquad\mathbb{T}^1\,:=\,[-1,1]/\sim\,, $$ where $\sim$ denotes the equivalence relation which identifies $-1$ and $1$. For this, it is enough to extend $\varrho_\varepsilon$, $\vartheta_\varepsilon$ and $\vec u_\varepsilon^h$ as even functions with respect to $x^{3}$, and $u_\varepsilon^3$ and $G$ as odd functions. Correspondingly, we consider also the wave system \eqref{eq:wave_syst2} to be satisfied in the new domain $\widetilde\Omega_\varepsilon$. It goes without saying that the uniform bounds established above hold true also when replacing $\Omega$ with $\widetilde\Omega$, where we have set $$\widetilde\Omega\,:=\,\mathbb{R}^2 \times \mathbb{T}^1\,.$$ Notice that the wave speed in \eqref{eq:wave_syst2} is proportional to $\varepsilon^{-m}$, while, in view of assumption \eqref{dom}, the domains $\widetilde\Omega_\varepsilon$ are expanding at speed proportional to $\varepsilon^{-m-\delta}$, for some $\delta>0$. Therefore, for any finite time $T>0$ and sufficiently small $\ep>0$, no interactions of the acoustic-Poincar\'e waves with the boundary of $\widetilde\Omega_\varepsilon$ take place (see also Remark \ref{r:speed-waves} in this respect). Thanks to this fact and the spatial localisation given by the cut-off function $\chi_l$, we can assume that \eqref{eq:wave_syst2} is satisfied (still in a weak sense) on the whole $\widetilde\Omega$.
Now, for any $M\in\mathbb{N}$ let us consider the low-frequency cut-off operator ${S}_{M}$ of a Littlewood-Paley decomposition, as introduced in \eqref{eq:S_j}. We define \begin{equation*} \Lambda_{\varepsilon ,M}={S}_{M}\Lambda_{\varepsilon}\qquad\qquad \text{ and }\qquad\qquad \vec{W}_{\varepsilon ,M}={S}_{M}\vec{W}_{\varepsilon}\, . \end{equation*} The following result holds true. Recall that we are omitting from the notation the dependence of all quantities on $l>0$, due to multiplication by the cut-off function $\chi_l$ fixed above. \begin{proposition} \label{p:prop approx} For any $T>0$, we have the following convergence properties, in the limit $M\rightarrow +\infty $: \begin{equation}\label{eq:approx var} \begin{split} &\sup_{0<\varepsilon\leq1}\, \left\|\Lambda_{\varepsilon }-\Lambda_{\varepsilon ,M}\right\|_{L^{\infty}([0,T];H^{s})}\longrightarrow 0\quad \forall s<-3/2-\delta \\ &\sup_{0<\varepsilon\leq1}\, \left\|\vec{W}_{\varepsilon }-\vec{W}_{\varepsilon ,M}\right\|_{L^{\infty}([0,T];H^{s})}\longrightarrow 0\quad \forall s<-4/5-\delta\,, \end{split} \end{equation} for any $\delta >0$. Moreover, for any $M>0$, the couple $(\Lambda_{\varepsilon ,M},\vec W_{\varepsilon ,M})$ satisfies the approximate wave equations \begin{equation}\label{eq:approx wave} \left\{\begin{array}{l} \varepsilon^m\,\partial_t\Lambda_{\varepsilon ,M}\,+\,\mathcal{A}\,{\rm div}\,\vec{W}_{\varepsilon ,M}\,=\,\varepsilon^m\,f_{\varepsilon ,M} \\[1ex] \varepsilon^m\,\partial_t\vec{W}_{\varepsilon ,M}\,+\varepsilon^{m-1}\,e_{3}\times \vec{W}_{\varepsilon ,M}+\,\nabla_x \Lambda_{\varepsilon,M}\,=\,\varepsilon^m\,\vec G_{\varepsilon ,M}\, , \end{array} \right. 
\end{equation} where $(f_{\varepsilon ,M})_{\varepsilon}$ and $(\vec G_{\varepsilon ,M})_{\varepsilon}$ are families of smooth (in the space variables) functions satisfying, for any $s\geq0$, the uniform bounds \begin{equation}\label{eq:approx force} \sup_{0<\varepsilon\leq1}\, \left\|f_{\varepsilon ,M}\right\|_{L^{2}([0,T];H^{s})}\,+\,\sup_{0<\varepsilon\leq1}\,\left\|\vec G_{\varepsilon ,M}\right\|_{L^{2}([0,T];H^{s})}\,\leq\, C(l,s,M)\,, \end{equation} where the constant $C(l,s,M)$ depends on the fixed values of $l>0$, $s\geq 0$ and $M>0$, but not on $\varepsilon>0$. \end{proposition} \begin{proof} Thanks to Lemma \ref{lemma_sobolev_H^s}, properties \eqref{eq:approx var} are straightforward consequences of the uniform bounds established in Subsection \ref{sss:w-bounds}. Next, applying the operator ${S}_{M}$ to \eqref{eq:wave_syst2} immediately gives us system \eqref{eq:approx wave}, where we have set \begin{equation*} f_{\varepsilon ,M}:={S}_{M}\left({\rm div}\,\vec{F}^1_\varepsilon\,+\,F^2_\varepsilon\right)\qquad \text{ and }\qquad \vec G_{\varepsilon ,M}:={S}_{M}\left({\rm div}\,\mathbb{G}^1_\varepsilon\,+\,\vec{G}^2_\varepsilon\,+\,\nabla_x G^3_\varepsilon\right)\,. \end{equation*} Thanks to Lemma \ref{l:source_bounds} and \eqref{eq:LP-Sob} (and also Lemma \ref{lemma_sobolev_H^s}), it is easy to verify inequality \eqref{eq:approx force}. \qed \end{proof} \medbreak We also have an important decomposition for the approximated velocity fields and their ${\rm curl}\,$.
\begin{proposition} \label{p:prop dec} For any $M>0$ and any $\varepsilon\in\,]0,1]$, the following decompositions hold true: \begin{equation*} \vec{W}_{\varepsilon ,M}\,=\, \varepsilon^{m}\vec{t}_{\varepsilon ,M}^{1}+\vec{t}_{\varepsilon ,M}^{2}\qquad\mbox{ and }\qquad {\rm curl}\, \vec{W}_{\varepsilon ,M}=\varepsilon^{m}\vec{T}_{\varepsilon ,M}^{1}+\vec{T}_{\varepsilon ,M}^{2}\,, \end{equation*} where, for any $T>0$ and $s\geq 0$, one has \begin{align*} &\left\|\vec{t}_{\varepsilon ,M}^{1}\right\|_{L^{2}([0,T];H^{s})}+\left\|\vec{T}_{\varepsilon ,M}^{1}\right\|_{L^{2}([0,T];H^{s})}\leq C(l,s,M) \\ &\left\|\vec{t}_{\varepsilon ,M}^{2}\right\|_{L^{2}([0,T];H^{1})}+\left\|\vec{T}_{\varepsilon ,M}^{2}\right\|_{L^{2}\left([0,T];L^2\right)}\leq C(l)\,, \end{align*} for suitable positive constants $C(l,s,M)$ and $C(l)$, which are uniform with respect to $\varepsilon\in\,]0,1]$. \end{proposition} \begin{proof} We start by defining \begin{equation} \label{eq:t-T} \vec{t}_{\varepsilon,M}^{1}\,:=\,{S}_{M}\left(\chi_{l}\left(\frac{\varrho_\varepsilon -1}{\varepsilon^{m}}\right)\vec{u}_{\varepsilon}\right) \qquad\mbox{ and }\qquad \vec{t}_{\varepsilon,M}^{2}\,:=\,{S}_{M}\left(\chi_{l}\vec{u}_{\varepsilon}\right)\,. \end{equation} Then, it is apparent that $\vec{W}_{\varepsilon ,M}\,=\,\varepsilon^m\vec t_{\varepsilon,M}^{1}\,+\,\vec t_{\varepsilon,M}^{2}$. The decomposition of ${\rm curl}\, \vec W_{\varepsilon,M}$ is also easy to get, if we set $\vec T_{\varepsilon,M}^j\,:=\,{\rm curl}\, \vec t_{\varepsilon,M}^j$, for $j=1,2$. We have to prove uniform bounds for all those terms. But this is an easy verification, thanks to the $L^\infty_T(L^{5/3}_{\rm loc})$ bound on $R_\varepsilon$ and the $L^2_T(H^{1}_{\rm loc})$ bound on $\vec{u}_{\varepsilon}$, for any fixed time $T>0$ (recall the estimates obtained in Subsection \ref{ss:unif-est} above). 
On the one hand, for the estimate \begin{equation*} \left\|\vec{t}_{\varepsilon ,M}^{1}\right\|_{L_{T}^{2}(H^{s})}+\left\|\vec{T}_{\varepsilon ,M}^{1}\right\|_{L_{T}^{2}(H^{s})}\leq C(l,s,M)\, , \end{equation*} it is sufficient to employ relation \eqref{eq:LP-Sob} and Lemma \ref{lemma_sobolev_H^s}. On the other hand, we have \begin{equation*} \begin{split} \left\| S_{M}\left( \chi_{l} \ue \right) \right\|_{H^{1}}^{2}&\leq C \sum_{j=- 1}^{M-1}2^{2j}\left\| \Delta_{j} \left(\chi_l \ue\right)\right\|_{L^{2}}^{2}\\ &\leq C\|\chi_l \ue\|_{L^2}^2+C\sum_{j=0}^{M-1}\left\| \Delta_{j}\nabla_{x} \left(\chi_l \ue \right)\right\|_{L^{2}}^{2}\leq C(l)\, , \end{split} \end{equation*} and the estimate for $\vec{T}_{\varepsilon ,M}^{2}$ follows from analogous computations. This completes the proof. \qed \end{proof} \subsection{Convergence of the non-linear convective term} \label{ss:convergence} In this subsection we show convergence of the convective term, by using a compensated compactness argument. Namely, we manipulate this term, by performing algebraic computations on the wave system formulated above. As a consequence, we derive two key pieces of information: on the one hand, we see that some non-linear terms are small remainders (in the sense specified by relations \eqref{eq:test-f} and \eqref{eq:remainder} below); on the other hand, we derive a compactness property for a new quantity, called $\gamma_{\varepsilon,M}$. The first step is to reduce the study to the case of smooth vector fields $\vec{W}_{\varepsilon ,M}$. \begin{lemma} \label{lem:convterm} Let $T>0$. 
For any $\vec{\psi}\in C_c^\infty\bigl([0,T[\,\times\widetilde\Omega;\mathbb{R}^3\bigr)$, we have \begin{equation*} \lim_{M\rightarrow +\infty} \limsup_{\varepsilon \rightarrow 0^+}\left|\int_{0}^{T}\int_{\widetilde\Omega} \varrho_\varepsilon\,\vec{u}_\varepsilon\otimes \vec{u}_\varepsilon: \nabla_{x}\vec{\psi}\, \dxdt- \int_{0}^{T}\int_{\widetilde\Omega} \vec{W}_{\varepsilon ,M}\otimes \vec{W}_{\varepsilon,M}: \nabla_{x}\vec{\psi}\, \dxdt\right|=0\, . \end{equation*} \end{lemma} \begin{proof} Let $\vec \psi\in C_c^\infty\bigl(\mathbb{R}_+\times\widetilde\Omega;\mathbb{R}^3\bigr)$, with ${\rm Supp}\,\vec\psi\subset[0,T]\times K$, for some compact set $K\subset\widetilde\Omega$. Then, we take $l>0$ in \eqref{eq:cut-off} so large that $K\subset \widetilde{\mathbb{B}}_{l}\,:=\,B_l(0)\times\mathbb{T}^1$. Therefore, using \eqref{def_deltarho}, we get $$ \int_{0}^{T}\int_{\widetilde\Omega} \varrho_\varepsilon\,\vec{u}_\varepsilon\otimes \vec{u}_\varepsilon: \nabla_{x}\vec{\psi}\,=\, \int_{0}^{T}\int_{K}(\chi_l\,\vec{u}_\varepsilon)\otimes\vec{u}_\varepsilon:\nabla_{x}\vec{\psi}\,+\varepsilon^{m}\int_{0}^{T}\int_{K}R_\varepsilon\,\vec{u}_\varepsilon\otimes \vec{u}_\varepsilon:\nabla_{x}\vec{\psi}\,. $$ As a consequence of the uniform bounds $\big(\vec{u}_{\varepsilon}\big)_\varepsilon\subset L^{2}_{T}(L^{6}_{\rm loc})$ and $\big(R_{\varepsilon}\big)_\varepsilon\subset L^{\infty}_{T}(L_{\rm loc}^{5/3})$ (recall \eqref{uni_varrho1} above), the second integral in the right-hand side is of order $\varepsilon^{m}$. As for the first one, using \eqref{eq:t-T}, we can write $$ \int_{0}^{T}\int_{K}(\chi_l\,\vec{u}_\varepsilon)\otimes \vec{u}_\varepsilon:\nabla_{x}\vec{\psi}\,=\,\int_{0}^{T}\int_{K}\vec{t}^2_{\varepsilon,M}\otimes\vec{u}_\varepsilon:\nabla_{x}\vec{\psi} +\int_{0}^{T}\int_{K} \,({\rm Id}\,-{S}_{M})(\chi_l\,\vec{u}_\varepsilon)\otimes\vec{u}_\varepsilon: \nabla_{x}\vec{\psi}\,. 
$$ Observe that, in view of characterisation \eqref{eq:LP-Sob}, one has the property (see also Lemma \ref{lemma_sobolev_H^s}) \begin{equation*} \left\|({\rm Id}\,-{S}_{M})(\chi_l\,\vec{u}_\varepsilon)\right\|_{L_{T}^{2}(L^{2})}\,\leq\,C\,2^{-M}\,\left\|\nabla_{x}(\chi_l\,\vec{u}_\varepsilon)\right\|_{L_{T}^{2}(L^{2})}\,\leq\,C(l)\,2^{-M}\,. \end{equation*} Therefore, it is enough to consider the first term in the right-hand side of the last relation: we have $$ \int_{0}^{T}\int_{K}\vec{t}^2_{\varepsilon,M}\otimes \vec{u}_\varepsilon:\nabla_{x}\vec{\psi}\,=\,\int_{0}^{T}\int_K\vec{t}^2_{\varepsilon,M}\otimes\vec{t}^2_{\varepsilon,M}:\nabla_{x}\vec{\psi}\,+\, \int_{0}^{T}\int_{K} \,\vec{t}^2_{\varepsilon,M}\otimes({\rm Id}\,-{S}_{M})(\chi_l\,\vec{u}_\varepsilon): \nabla_{x}\vec{\psi}\,, $$ where, for the same reason as before, we gather that \begin{equation*} \lim_{M\rightarrow +\infty}\limsup_{\varepsilon \rightarrow 0^+}\left|\int_{0}^{T}\int_{K}\vec{t}^2_{\varepsilon,M}\otimes ({\rm Id}\,-{S}_{M})(\chi_l\,\vec{u}_\varepsilon): \nabla_{x}\vec{\psi}\right|=0\, . \end{equation*} It remains to consider the integral $$ \int_{0}^{T}\int_K\vec{t}^2_{\varepsilon,M}\otimes\vec{t}^2_{\varepsilon,M}:\nabla_{x}\vec{\psi}\,=\,\int_{0}^{T}\int_{K} \vec{W}_{\varepsilon ,M}\otimes \vec t^2_{\varepsilon,M}: \nabla_{x}\vec{\psi} -\varepsilon^{m}\int_{0}^{T}\int_{K}\vec t^1_{\varepsilon,M}\otimes \vec t^2_{\varepsilon,M}: \nabla_{x}\vec{\psi}\,, $$ where we notice that, owing to Proposition \ref{p:prop dec}, the latter term in the right-hand side is of order $\varepsilon^{m}$, so it vanishes in the limit. As a last step, we write $$ \int_{0}^{T}\int_{K} \vec{W}_{\varepsilon ,M}\otimes \vec t^2_{\varepsilon,M}: \nabla_{x}\vec{\psi}\,=\, \int_{0}^{T}\int_{K} \vec{W}_{\varepsilon ,M}\otimes \vec W_{\varepsilon,M}: \nabla_{x}\vec{\psi}\,-\,\varepsilon^m\int_{0}^{T}\int_{K} \vec{W}_{\varepsilon ,M}\otimes \vec t^1_{\varepsilon,M}: \nabla_{x}\vec{\psi}\,.
$$ Using Lemma \ref{l:S-W_bounds} together with Bernstein's inequalities of Lemma \ref{l:bern}, we see that the latter integral in the right-hand side is of order $\varepsilon^{m}$. This concludes the proof of the lemma. \qed \end{proof} \medbreak From now on, in order to avoid the appearance of (irrelevant) multiplicative constants everywhere, we suppose that the torus $\mathbb{T}^1$ has been normalised so that its Lebesgue measure is equal to $1$. In view of the previous lemma and of Proposition \ref{p:limitpoint}, for any test-function \begin{equation} \label{eq:test-f} \vec\psi\in C_c^\infty\big([0,T[\,\times\widetilde\Omega;\mathbb{R}^3\big)\qquad\qquad \mbox{ such that }\qquad {\rm div}\,\vec\psi=0\quad\mbox{ and }\quad \partial_3\vec\psi=0\,, \end{equation} we have to pass to the limit in the term \begin{align*} -\int_{0}^{T}\int_{\widetilde\Omega} \vec{W}_{\varepsilon ,M}\otimes \vec{W}_{\varepsilon ,M}: \nabla_{x}\vec{\psi}\,&=\,\int_{0}^{T}\int_{\widetilde\Omega} {\rm div}\,\left(\vec{W}_{\varepsilon ,M}\otimes \vec{W}_{\varepsilon ,M}\right) \cdot \vec{\psi}\,. \end{align*} Notice that the integration by parts above is well-justified, since all the quantities inside the integrals are smooth. 
At this point, we observe that, resorting to the notation in \eqref{eq:decoscil} presented in the introductory part, we can write $$ \int_{0}^{T}\int_{\widetilde\Omega} {\rm div}\,\left(\vec{W}_{\varepsilon ,M}\otimes \vec{W}_{\varepsilon ,M}\right) \cdot \vec{\psi}\,=\, \int_{0}^{T}\int_{\mathbb{R}^2} \left(\mathcal{T}_{\varepsilon ,M}^{1}+\mathcal{T}_{\varepsilon, M}^{2}\right)\cdot\vec{\psi}^h\,, $$ where we have defined the terms \begin{equation} \label{def:T1-2} \mathcal T^1_{\varepsilon,M}\,:=\, {\rm div}_h\left(\langle \vec{W}_{\varepsilon ,M}^{h}\rangle\otimes \langle \vec{W}_{\varepsilon ,M}^{h}\rangle\right)\qquad \mbox{ and }\qquad \mathcal T^2_{\varepsilon,M}\,:=\, {\rm div}_h\left(\langle \dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\otimes \dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\rangle \right)\,. \end{equation} So, it is enough to focus on each of them separately. For notational convenience, from now on we will generically denote by $\mathcal{R}_{\varepsilon ,M}$ any remainder term, that is any term satisfying the property \begin{equation} \label{eq:remainder} \lim_{M\rightarrow +\infty}\limsup_{\varepsilon \rightarrow 0^+}\left|\int_{0}^{T}\int_{\widetilde\Omega}\mathcal{R}_{\varepsilon ,M}\cdot \vec{\psi}\, \dxdt\right|=0\, , \end{equation} for all test functions $\vec{\psi}\in C_c^\infty\bigl([0,T[\,\times\widetilde\Omega;\mathbb{R}^3\bigr)$ as in \eqref{eq:test-f}. \subsubsection{The analysis of the $\mathcal{T}_{\varepsilon ,M}^{1}$ term}\label{sss:term1} We start by dealing with $\mathcal T^1_{\varepsilon,M}$. 
Standard computations give \begin{align} \mathcal{T}_{\varepsilon ,M}^{1}\,&=\,{\rm div}_h\left(\langle \vec{W}_{\varepsilon ,M}^{h}\rangle\otimes \langle \vec{W}_{\varepsilon ,M}^{h}\rangle\right)= {\rm div}_h\langle \vec{W}_{\varepsilon ,M}^{h}\rangle\, \langle \vec{W}_{\varepsilon ,M}^{h}\rangle+\langle \vec{W}_{\varepsilon ,M}^{h}\rangle \cdot \nabla_{h}\langle \vec{W}_{\varepsilon ,M}^{h}\rangle \label{eq:T1} \\ &={\rm div}_h\langle \vec{W}_{\varepsilon ,M}^{h}\rangle\, \langle \vec{W}_{\varepsilon ,M}^{h}\rangle+\frac{1}{2}\, \nabla_{h}\left(\left|\langle \vec{W}_{\varepsilon ,M}^{h}\rangle\right|^{2}\right)+ {\rm curl}_h\langle \vec{W}_{\varepsilon ,M}^{h}\rangle\,\langle \vec{W}_{\varepsilon ,M}^{h}\rangle^{\perp}\,. \nonumber \end{align} Notice that we can forget about the second term, because it is a perfect gradient and we are testing against divergence-free test functions. For the first term, we take advantage of system \eqref{eq:approx wave}: averaging the first equation with respect to $x^{3}$ and multiplying it by $\langle \vec{W}^h_{\varepsilon ,M}\rangle$, we arrive at $$ {\rm div}_h\langle \vec{W}_{\varepsilon ,M}^{h}\rangle\,\langle \vec{W}_{\varepsilon ,M}^{h}\rangle\,=\,-\frac{\varepsilon^{m}}{\mathcal{A}}\partial_t\langle \Lambda_{\varepsilon ,M}\rangle \langle \vec{W}_{\varepsilon ,M}^{h}\rangle+ \frac{\varepsilon^{m}}{\mathcal{A}} \langle f_{\varepsilon ,M}\rangle \langle \vec{W}_{\varepsilon ,M}^{h}\rangle\,=\, \frac{\varepsilon^{m}}{\mathcal{A}}\langle \Lambda_{\varepsilon ,M}\rangle \partial_t \langle \vec{W}_{\varepsilon ,M}^{h}\rangle +\mathcal{R}_{\varepsilon ,M}\,. $$ We remark that the term containing the total time derivative is in fact a remainder.
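For the reader's convenience, we recall the elementary two-dimensional identity which has been used in \eqref{eq:T1}: for any smooth planar vector field $\vec v\,=\,\big(v^1,v^2\big)$, denoting $\vec v^{\perp}\,=\,\big(-v^2,v^1\big)$ and ${\rm curl}_h\vec v\,=\,\partial_1 v^2\,-\,\partial_2 v^1$, one has
\[
\big(\vec v\cdot\nabla_h\big)\vec v\,=\,\frac{1}{2}\,\nabla_h\left(\left|\vec v\right|^2\right)\,+\,\big({\rm curl}_h\vec v\big)\,\vec v^{\perp}\,.
\]
This can be checked componentwise: for instance, for the first component one computes $\frac{1}{2}\,\partial_1\big(|v^1|^2+|v^2|^2\big)\,-\,v^2\,\big(\partial_1 v^2-\partial_2 v^1\big)\,=\,v^1\,\partial_1 v^1\,+\,v^2\,\partial_2 v^1$.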
We use now the horizontal part of \eqref{eq:approx wave}, where we take the vertical average and then multiply by $\langle \Lambda_{\varepsilon ,M}\rangle$: we gather \begin{align*} \frac{\varepsilon^{m}}{\mathcal{A}}\langle \Lambda_{\varepsilon ,M}\rangle \partial_t \langle \vec{W}_{\varepsilon ,M}^{h}\rangle &= -\frac{1}{\mathcal{A}} \langle \Lambda_{\varepsilon ,M}\rangle \nabla_{h}\langle \Lambda_{\varepsilon ,M}\rangle+\frac{\varepsilon^{m}}{\mathcal{A}}\langle \Lambda_{\varepsilon ,M}\rangle \langle \vec G_{\varepsilon ,M}^{h}\rangle- \frac{\varepsilon^{m-1}}{\mathcal{A}}\langle \Lambda_{\varepsilon ,M}\rangle\langle \vec{W}_{\varepsilon ,M}^{h}\rangle^{\perp}\\ &=-\frac{\varepsilon^{m-1}}{\mathcal{A}}\langle \Lambda_{\varepsilon ,M}\rangle\langle \vec{W}_{\varepsilon ,M}^{h}\rangle^{\perp}-\frac{1}{2\mathcal{A}} \nabla_{h}\left( \left| \langle \Lambda_{\varepsilon ,M}\rangle \right|^{2}\right)+\mathcal{R}_{\varepsilon ,M}\\ &=-\frac{\varepsilon^{m-1}}{\mathcal{A}}\langle \Lambda_{\varepsilon ,M}\rangle\langle \vec{W}_{\varepsilon ,M}^{h}\rangle^{\perp}+\mathcal{R}_{\varepsilon ,M}\, , \end{align*} where we repeatedly exploited the properties proved in Proposition \ref{p:prop approx} and we included in the remainder term also the perfect gradient. Inserting this relation into \eqref{eq:T1}, we find \begin{equation*} \mathcal{T}_{\varepsilon ,M}^{1}= \gamma_{\varepsilon,M}\,\langle\vec{W}_{\varepsilon,M}^{h}\rangle^{\perp}+\mathcal{R}_{\varepsilon,M}\,, \qquad\qquad\mbox{ with }\qquad \gamma_{\varepsilon, M}:={\rm curl}_h\langle \vec{W}_{\varepsilon ,M}^{h}\rangle\,-\,\frac{\varepsilon^{m-1}}{\mathcal{A}}\langle \Lambda_{\varepsilon ,M}\rangle\,. \end{equation*} We observe that, for passing to the limit in $\mathcal{T}_{\varepsilon ,M}^{1}$, there is no other way than finding some strong convergence property for $\vec W_{\varepsilon,M}$. 
Such a property is in fact hidden in the structure of the wave system \eqref{eq:approx wave}: in order to exploit it, some work on the term $\gamma_{\varepsilon,M}$ is needed. We start by rewriting the vertical average of the first equation in \eqref{eq:approx wave} as \begin{equation*} \frac{\varepsilon^{2m-1}}{\mathcal{A}}\,\partial_t \langle \Lambda_{\varepsilon ,M} \rangle\,+\,\varepsilon^{m-1}{\rm div}_h \langle \vec{W}_{\varepsilon ,M}^{h}\rangle\,=\,\frac{\varepsilon^{2m-1}}{\mathcal{A}}\, \langle f_{\varepsilon ,M}^{h}\rangle\,. \end{equation*} On the other hand, taking the vertical average of the horizontal components of \eqref{eq:approx wave} and then applying ${\rm curl}_h$, we obtain the relation \begin{equation*} \varepsilon^m\,\partial_t{\rm curl}_h\langle \vec{W}_{\varepsilon ,M}^{h}\rangle\,+\varepsilon^{m-1}\,{\rm div}_h\langle \vec{W}_{\varepsilon ,M}^{h}\rangle\, =\,\varepsilon^m {\rm curl}_h\langle\vec G_{\varepsilon ,M}^{h}\rangle\, . \end{equation*} Taking the difference of the last two equations, after dividing both of them by $\varepsilon^m$, we discover that \begin{equation} \label{eq:gamma} \partial_{t}\gamma_{\varepsilon,M}\,=\,{\rm curl}_h\langle \vec G_{\varepsilon ,M}^{h}\rangle\,-\,\frac{\varepsilon^{m-1}}{\mathcal{A}}\,\langle f_{\varepsilon ,M}^{h}\rangle \, . \end{equation} Thanks to estimate \eqref{eq:approx force} in Proposition \ref{p:prop approx}, we infer that (for any $M>0$ fixed) the family $\left(\partial_{t}\,\gamma_{\varepsilon,M}\right)_{\varepsilon}$ is uniformly bounded (with respect to $\varepsilon$) in e.g. $L_{T}^{2}(L^{2})$. On the other hand, thanks to Lemma \ref{l:S-W_bounds} and Sobolev embeddings, we have that (for any $M>0$ fixed) the sequence $(\gamma_{\varepsilon,M})_{\varepsilon}$ is uniformly bounded (with respect to $\varepsilon$) in the space $L_{T}^{2}(H^{1})$.
Since the embedding $H_{\rm loc}^{1}\hookrightarrow L_{\rm loc}^{2}$ is compact, the Aubin-Lions Theorem (see again the Appendix \ref{appendixA}) implies that, for any $M>0$ fixed, the family $(\gamma_{\varepsilon,M})_{\varepsilon}$ is compact in $L_{T}^{2}(L_{\rm loc}^{2})$. Then, it converges strongly (up to extracting a subsequence) to a tempered distribution $\gamma_M$ in the same space. Of course, by definition of $\gamma_{\varepsilon,M}$ (and whenever $m>1$), this tells us that also $\big({\rm curl}_h\langle\vec W_{\varepsilon,M}^h\rangle\big)_\varepsilon$ is compact in $L^2_T(L^2_{\rm loc})$. Now, we have that $\gamma_{\varepsilon ,M}$ converges strongly to $\gamma_M$ in $L_{T}^{2}(L_{\rm loc}^{2})$ and $\langle \vec{W}_{\varepsilon ,M}^{h}\rangle$ converges weakly to $\langle \vec{W}_{M}^{h}\rangle$ in $L_{T}^{2}(L_{\rm loc}^{2})$ (owing to Proposition \ref{p:prop dec}, for instance). Then, we deduce that \begin{equation*} \gamma_{\varepsilon,M}\langle \vec{W}_{\varepsilon ,M}^{h}\rangle^{\perp}\longrightarrow \gamma_M \langle \vec{W}_{M}^{h}\rangle^{\perp}\qquad \text{ in }\qquad \mathcal{D}^{\prime}\big(\mathbb{R}_+\times\mathbb{R}^2\big)\,. \end{equation*} Observe that, by definition of $\gamma_{\varepsilon,M}$, we must have $\gamma_M={\rm curl}_h\langle \vec{W}_{M}^{h}\rangle$. On the other hand, by Proposition \ref{p:prop dec} and \eqref{eq:t-T}, we know that $\langle \vec{W}_{M}^{h}\rangle= \langle{S}_{M}(\chi_l\vec{U}^{h})\rangle$. In the end, we have proved that, for any $T>0$ and any test-function $\vec \psi$ as in \eqref{eq:test-f}, one has the convergence (at any $M\in\mathbb{N}$ fixed, when $\varepsilon\ra0^+$) \begin{equation} \label{eq:limit_T1} \int_{0}^{T}\int_{\mathbb{R}^2}\mathcal{T}_{\varepsilon ,M}^{1}\cdot\vec{\psi}^h\,\dx^h\dt\,\longrightarrow\, \int^T_0\int_{\mathbb{R}^2}{\rm curl}_h\langle{S}_{M}(\chi_l\vec{U}^{h})\rangle\; \langle{S}_{M}\big(\chi_l(\vec{U}^{h})^{\perp}\big)\rangle\cdot\vec\psi^h\,\dx^h\dt\,. 
\end{equation} \subsubsection{Dealing with the term $\mathcal{T}_{\varepsilon ,M}^{2}$}\label{sss:term2} Let us now consider the term $\mathcal{T}_{\varepsilon ,M}^{2}$, defined in \eqref{def:T1-2}. By the same computation as above, we infer that \begin{align} \mathcal{T}_{\varepsilon ,M}^{2}\, &=\,\langle {\rm div}_h (\dbtilde{\vec{W}}_{\varepsilon ,M}^{h})\;\;\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\rangle+\frac{1}{2}\, \langle \nabla_{h}| \dbtilde{\vec{W}}_{\varepsilon ,M}^{h}|^{2} \rangle+ \langle {\rm curl}_h\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\,\left( \dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\right)^{\perp}\rangle\, . \label{eq:T2} \end{align} Let us introduce the quantities $$ \dbtilde{\Phi}_{\varepsilon ,M}^{h}\,:=\,( \dbtilde{\vec{W}}_{\varepsilon ,M}^{h})^{\perp}-\partial_{3}^{-1}\nabla_{h}^{\perp}\dbtilde{\vec{W}}_{\varepsilon ,M}^{3}\qquad\mbox{ and }\qquad \dbtilde{\omega}_{\varepsilon ,M}^{3}\,:=\,{\rm curl}_h \dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\,. $$ Then we can write \begin{equation*} \left( {\rm curl}\, \dbtilde{\vec{W}}_{\varepsilon ,M}\right)^{h}\,=\,\partial_3 \dbtilde{\Phi}_{\varepsilon ,M}^{h}\qquad \text{ and }\qquad \left( {\rm curl}\, \dbtilde{\vec{W}}_{\varepsilon ,M}\right)^{3}\,=\,\dbtilde{\omega}_{\varepsilon ,M}^{3}\,. \end{equation*} In addition, from the momentum equation in \eqref{eq:approx wave}, where we take the mean-free part and then the ${\rm curl}\,$, we deduce the equations \begin{equation} \label{eq:eq momentum term2} \begin{cases} \varepsilon^{m}\partial_t\dbtilde{\Phi}_{\varepsilon ,M}^{h}-\varepsilon^{m-1}\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}=\varepsilon^m\left(\partial_{3}^{-1}{\rm curl}\,\dbtilde{\vec G}_{\varepsilon ,M} \right)^{h}\\[1ex] \varepsilon^{m}\partial_t\dbtilde{\omega}_{\varepsilon ,M}^{3}+\varepsilon^{m-1}{\rm div}_h\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}=\varepsilon^m\,{\rm curl}_h\dbtilde{\vec G}_{\varepsilon ,M}^{h}\, .
\end{cases} \end{equation} Making use of the relations above and of Propositions \ref{p:prop approx} and \ref{p:prop dec}, we get \begin{equation*} \begin{split} {\rm curl}_h\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\;\left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\right)^{\perp}&=\dbtilde{\omega}_{\varepsilon ,M}^{3}\left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\right)^{\perp}\,=\, \varepsilon \partial_t\!\left( \dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\dbtilde{\omega}_{\varepsilon ,M}^{3}- \varepsilon\dbtilde{\omega}_{\varepsilon ,M}^{3}\left(\left(\partial_{3}^{-1}{\rm curl}\,\dbtilde{\vec G}_{\varepsilon ,M}\right)^{h}\right)^\perp \\ &=-\varepsilon \left( \dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\partial_t\dbtilde{\omega}_{\varepsilon ,M}^{3}+\mathcal{R}_{\varepsilon ,M}= \left( \dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\,{\rm div}_h\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}+\mathcal{R}_{\varepsilon ,M}\, . \end{split} \end{equation*} Hence, including also the gradient term into the remainders, from \eqref{eq:T2} we arrive at \begin{align*} \mathcal{T}_{\varepsilon ,M}^{2}\,&=\,\langle {\rm div}_h\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\,\left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right) \rangle+\mathcal{R}_{\varepsilon ,M} \\ &=\,\langle {\rm div}\, \dbtilde{\vec{W}}_{\varepsilon ,M}\left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right) \rangle - \langle \partial_3 \dbtilde{\vec{W}}_{\varepsilon ,M}^{3}\left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right) \rangle+\mathcal{R}_{\varepsilon ,M}\, . \end{align*} The second term on the right-hand side of the last line is actually another remainder. 
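To see this, the following elementary observation is useful: since $\left(\vec v^{\perp}\right)^{\perp}=-\,\vec v$ for any $\vec v\in\mathbb{R}^2$ and, consequently, $\left(\nabla_{h}^{\perp}f\right)^{\perp}=-\,\nabla_{h}f$ for any scalar function $f$, the definition of $\dbtilde{\Phi}_{\varepsilon ,M}^{h}$ implies the identity \begin{equation*} \dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\,+\,\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\,=\,\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\,-\,\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}\,+\,\partial_{3}^{-1}\nabla_{h}\dbtilde{\vec{W}}_{\varepsilon ,M}^{3}\,=\,\partial_{3}^{-1}\nabla_{h}\dbtilde{\vec{W}}_{\varepsilon ,M}^{3}\,. \end{equation*}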
Indeed, using the definition of the function $\dbtilde{\Phi}_{\varepsilon ,M}^{h}$ and the fact that the test function $\vec\psi$ does not depend on $x^3$, one has \begin{equation*} \begin{split} \partial_3 \dbtilde{\vec{W}}_{\varepsilon ,M}^{3}\left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)&=\partial_3 \left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{3}\left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)\right) - \dbtilde{\vec{W}}_{\varepsilon ,M}^{3}\, \partial_3\left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)\\ &=\mathcal{R}_{\varepsilon ,M}-\frac{1}{2}\nabla_{h}\left|\dbtilde{\vec{W}}_{\varepsilon ,M}^{3}\right|^{2}=\mathcal{R}_{\varepsilon ,M}\, . \end{split} \end{equation*} As for the first term, instead, we use the first equation in \eqref{eq:approx wave} to obtain \begin{equation*} \begin{split} {\rm div}\, \dbtilde{\vec{W}}_{\varepsilon ,M}\left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)&=-\frac{\varepsilon^{m}}{\mathcal{A}} \partial_t \dbtilde{\Lambda}_{\varepsilon ,M}\left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)+\mathcal{R}_{\varepsilon ,M}\\ &=\frac{\varepsilon^{m}}{\mathcal{A}} \dbtilde{\Lambda}_{\varepsilon ,M}\, \partial_t\left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)+\mathcal{R}_{\varepsilon ,M}\, . 
\end{split} \end{equation*} Now, equations \eqref{eq:approx wave} and \eqref{eq:eq momentum term2} immediately yield that \begin{equation*} \frac{\varepsilon^{m}}{\mathcal{A}} \dbtilde{\Lambda}_{\varepsilon ,M}\, \partial_t\left(\dbtilde{\vec{W}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)= \mathcal{R}_{\varepsilon ,M}-\frac{1}{\mathcal{A}}\dbtilde{\Lambda}_{\varepsilon ,M}\, \nabla_{h}\left(\dbtilde{\Lambda}_{\varepsilon ,M}\right)= \mathcal{R}_{\varepsilon ,M}-\frac{1}{2\mathcal{A}}\nabla_{h}\left|\dbtilde{\Lambda}_{\varepsilon ,M}\right|^{2}=\mathcal{R}_{\varepsilon ,M}\,. \end{equation*} This relation finally implies that $\mathcal{T}_{\varepsilon ,M}^{2}\,=\,\mathcal R_{\varepsilon,M}$ is a remainder, in the sense of relation \eqref{eq:remainder}: for any $T>0$ and any test-function $\vec \psi$ as in \eqref{eq:test-f}, one has the convergence (at any $M\in\mathbb{N}$ fixed, when $\varepsilon\ra0^+$) \begin{equation} \label{eq:limit_T2} \int_{0}^{T}\int_{\mathbb{R}^2}\mathcal{T}_{\varepsilon ,M}^{2}\cdot\vec{\psi}^h\,\dx^h\dt\,\longrightarrow\,0\,. \end{equation} \begin{remark}\label{r:T1-T2} Due to the presence of the term $\vec Y^2_\varepsilon$ in \eqref{eq:wave_syst}, the choice $m\geq2$ is fundamental. However, as soon as $F=0$, our analysis applies also in the case when $1<m<2$. \end{remark} \subsection{The limit dynamics} \label{ss:limit} With the convergences established in \eqref{conv:r} to \eqref{conv:u} and in Subsection \ref{ss:convergence}, we can pass to the limit in equation \eqref{weak-mom}. Since all the integrals will be made on $\mathbb{R}^2$ (in view of the choice of the test functions in \eqref{eq:test-f} above), we can safely come back to the notation on $\Omega$ instead of $\widetilde\Omega$. 
To begin with, we take a test-function $\vec\psi$ as in \eqref{eq:test-f}, specifically $$ \vec{\psi}=\big(\nabla_{h}^{\perp}\phi,0\big)\,,\qquad\qquad\mbox{ with }\qquad \phi\in C_c^\infty\big([0,T[\,\times\mathbb{R}^2\big)\,,\quad \phi=\phi(t,x^h)\,. $$ For such a $\vec\psi$, all the gradient terms vanish identically, as well as all the contributions due to the vertical component of the equation. Hence, after using also \eqref{prF}, equation \eqref{weak-mom} becomes \begin{align} \int_0^T\!\!\!\int_{\Omega} & \left( -\vre \ue^h \cdot \partial_t \vec\psi^h -\vre \ue^h\otimes\ue^h : \nabla_h \vec\psi^h + \frac{1}{\ep}\vre\big(\ue^{h}\big)^\perp\cdot\vec\psi^h\right)\, \dxdt \label{eq:weak_to_conv}\\ & =\int_0^T\!\!\!\int_{\Omega} \left(-\mathbb{S}(\vartheta_\varepsilon,\nabla_x\vec\ue): \nabla_x \vec\psi+\frac{1}{\varepsilon^{2}}(\varrho_\varepsilon -\widetilde{\varrho}_\varepsilon)\, \nabla_x F\cdot \vec\psi\right)\,\dxdt+ \int_{\Omega}\vrez \uez \cdot \vec\psi(0,\cdot)\dx\,. \nonumber \end{align} Making use of the uniform bounds of Subsection \ref{ss:unif-est}, we can pass to the limit in the $\partial_t$ term, in the viscosity term and in the centrifugal force. Moreover, our assumptions imply that $\varrho_{0,\varepsilon}\vec{u}_{0,\varepsilon}\rightharpoonup \vec{u}_0$ in $L_{\rm loc}^2$. Let us consider now the Coriolis term. We can write \begin{align*} \int_0^T\!\!\!\int_{\Omega}\frac{1}{\ep}\vre\big(\ue^{h}\big)^\perp\cdot\vec\psi^h\,&=\,\int_0^T\!\!\!\int_{\mathbb{R}^2}\frac{1}{\ep}\langle\vre \ue^{h}\rangle \cdot \nabla_{h}\phi\,=\, -\varepsilon^{m-1}\int_0^T\!\!\!\int_{\mathbb{R}^2}\langle R_\varepsilon\rangle\, \partial_t\phi\,-\,\varepsilon^{m-1}\int_{\mathbb{R}^2}\langle R_{0,\varepsilon}\rangle\, \phi(0,\cdot )\,, \end{align*} which of course converges to $0$ when $\varepsilon\ra0^+$. 
Notice that the second equality derives from the mass equation \eqref{weak-con}, tested against $\phi$: namely, \begin{equation*} -\varepsilon^m\int_0^T\!\!\!\int_{\mathbb{R}^2}\langle\frac{\varrho_\varepsilon-1}{\varepsilon^m}\rangle\, \partial_t\phi\,-\,\int_0^T\!\!\!\int_{\mathbb{R}^2}\langle \varrho_\varepsilon \ue^h\rangle \cdot \nabla_{h}\phi\,=\, \varepsilon^m\int_{\mathbb{R}^2}\langle\frac{\varrho_{0,\varepsilon}-1}{\varepsilon^m}\rangle\, \phi(0,\cdot )\,. \end{equation*} It remains to deal with the convective term $\varrho_\varepsilon \ue^h \otimes \ue^h$. For this, we take advantage of Lemma \ref{lem:convterm} and relations \eqref{eq:limit_T1} and \eqref{eq:limit_T2}. Next, we remark that, since $\vec U^h\in L^2_T(H_{\rm loc}^1)$ by \eqref{conv:u}, from \eqref{est:sobolev} we gather the strong convergence $S_M(\chi_l\vec U^h)\longrightarrow\chi_l\vec{U}^{h}$ in $L_{T}^{2}(H^{s})$ for any $s<1$ and any $l>0$ fixed, in the limit for $M\rightarrow +\infty$. Therefore, in the term on the right-hand side of \eqref{eq:limit_T1}, we can reverse the chain of equalities in \eqref{eq:T1}, and then pass to the limit also for $M\rightarrow+\infty$. Using that $\chi_l\equiv1$ on ${\rm Supp}\,\vec\psi$ by construction, we finally get the convergence (for $\varepsilon\ra0^+$) \begin{equation*} \int_0^T\int_{\Omega} \vre \ue^h\otimes\ue^h : \nabla_h \vec\psi^h\, \longrightarrow\, \int_0^T\int_{\mathbb{R}^2}\vec{U}^h\otimes\vec{U}^h : \nabla_h \vec\psi^h\,.
\end{equation*} In the end, letting $\varepsilon \rightarrow 0^+$ in \eqref{eq:weak_to_conv}, we may infer that \begin{align*} &\int_0^T\!\!\!\int_{\mathbb{R}^2} \left(\vec{U}^{h}\cdot \partial_{t}\vec\psi^h+\vec{U}^{h}\otimes \vec{U}^{h}:\nabla_{h}\vec\psi^h\right)\, \dx^h\dt\\ &\qquad\qquad= \int_0^T\!\!\!\int_{\mathbb{R}^2} \left(\mu(\overline\vartheta )\nabla_{h}\vec{U}^{h}:\nabla_{h}\vec\psi^h-\delta_2(m)\langle\varrho^{(1)}\rangle\nabla_{h}F\cdot \vec\psi^h\right)\, \dx^h\dt- \int_{\mathbb{R}^2}\langle\vec{u}_{0}^{h}\rangle\cdot \vec\psi^h(0,\cdot)\dx^h\,, \end{align*} where $\delta_2(m)=1$ if $m=2$, $\delta_2(m)=0$ otherwise. At this point, Remark \ref{r:F-G} applied to the case $m=2$ yields the equality $\partial_\varrho p(1,\overline\vartheta)\,\nabla_h\langle\widetilde r\rangle\,=\,\nabla_hF$. Therefore, keeping in mind that $R=\varrho^{(1)}+\widetilde r$, we get $$ \langle\varrho^{(1)}\rangle\nabla_{h}F\,=\,\langle R\rangle\nabla_{h}F\,-\,\langle\widetilde r\rangle\nabla_{h}F\,=\,\langle R\rangle\nabla_{h}F\,-\,\frac{\partial_\varrho p(1,\overline\vartheta)}{2}\,\nabla_h\left|\langle\widetilde r\rangle\right|^2\,. $$ Of course, the perfect gradient disappears from the weak formulation. Using this observation in the target momentum equation written above, we finally deduce \eqref{eq_lim_m:momentum}. This completes the proof of Theorem \ref{th:m-geq-2}, in the case when $m\geq2$ and $F\neq0$. When $m>1$ and $F=0$, most of the arguments above still apply. We refer to the next section for more details. \section{Proof of the convergence in the case when $F=0$} \label{s:proof-1} In the present section we prove the convergence result in the case $F=0$. For the sake of brevity, we focus on the case $m=1$, completing in this way the proof of Theorem \ref{th:m=1_F=0}. The case $m>1$ follows by a similar analysis, using at the end the compensated compactness argument presented in Subsection \ref{ss:convergence} (recall also Remark \ref{r:T1-T2} above).
\subsection{Analysis of the acoustic-Poincar\'e waves}\label{ss:unifbounds_1} We start by remarking that system \eqref{ceq} to \eqref{eeq} can be recast in the form \eqref{eq:wave_syst}, with $m=1$: with the same notation introduced in Paragraph \ref{sss:wave-eq}, and after setting $X_\varepsilon\,:=\,{\rm div}\,\vec{X}^1_\varepsilon\,+\,X^2_\varepsilon$ and $\vec Y_\varepsilon\,:=\,{\rm div}\,\mathbb{Y}^1_\varepsilon\,+\,\vec{Y}^2_\varepsilon\,+\,\nabla_x Y^3_\varepsilon$, we have \begin{equation} \label{eq:wave_m=1} \left\{\begin{array}{l} \varepsilon\,\partial_tZ_\varepsilon\,+\,\mathcal{A}\,{\rm div}\,\vec{V}_\varepsilon\,=\,\varepsilon\,X_\varepsilon \\[1ex] \varepsilon\,\partial_t\vec{V}_\varepsilon\,+\,\nabla_x Z_\varepsilon\,+\,\vec{e}_3\times \vec V_\varepsilon\,=\,\varepsilon\,\vec Y_\varepsilon\,,\qquad\qquad\big(\vec{V}_\varepsilon\cdot\vec n\big)_{|\partial\Omega_\varepsilon}\,=\,0\,, \end{array} \right. \end{equation} where $\bigl(Z_\varepsilon\bigr)_\varepsilon$ and $\bigl(\vec V_\varepsilon\bigr)_\varepsilon$ are defined as in Paragraph \ref{sss:wave-eq}. This system is supplemented with the initial datum $\big(Z_{0,\varepsilon},\vec V_{0,\varepsilon}\big)$, where these two functions are defined as in relation \eqref{def:wave-data} above. \subsubsection{Uniform bounds and regularisation} In the next lemma, we establish uniform bounds for $Z_\varepsilon$ and $\vec V_\varepsilon$. Its proof is an easy adaptation of the one given in Lemma \ref{l:S-W_bounds}, hence omitted. One has to use the fact that, since $F=0$, all the bounds obtained in the previous sections hold now on the whole $\Omega_\varepsilon$, with constants which are uniform in $\varepsilon\in\,]0,1]$; therefore, one can dispense with the cut-off functions $\chi_l$.
\begin{lemma} \label{l:S-W_bounds_1} For any $T>0$, we have $ \sup_{\varepsilon\in\,]0,1]}\| Z_\varepsilon\|_{L^\infty_T((L^2+L^{5/3}+L^1+\mathcal{M}^+)(\Omega_\varepsilon))} \leq c\, ,\quad\quad \sup_{\varepsilon\in\,]0,1]}\| \vec V_\varepsilon\|_{L^2_T((L^2+L^{30/23})(\Omega_\varepsilon))} \leq c \, . $ \end{lemma} \medbreak Now, we state the analogue of Lemma \ref{l:source_bounds} for $m=1$ and $F=0$. \begin{lemma} \label{l:source_bounds_1} For $\varepsilon\in\,]0,1]$, let us introduce the following spaces: \begin{enumerate}[(i)] \item $\mathcal X^\varepsilon_1\,:=\,L^2_{\rm loc}\Bigl(\mathbb{R}_+;\big(L^2+L^{1}+L^{3/2}+L^{30/23}+L^{30/29}\big)(\Omega_\varepsilon)\Bigr)$; \item $\mathcal X^\varepsilon_2\,:=\,L^2_{\rm loc}\Bigl(\mathbb{R}_+;\big(L^2+L^1+L^{4/3}\big)(\Omega_\varepsilon)\Bigr)$; \item $\mathcal X^\varepsilon_3\,:=\,L^\infty_{\rm loc}\Bigl(\mathbb{R}_+;\big(L^2+L^{5/3}\big)(\Omega_\varepsilon)\Bigr)$; \item $\mathcal X^\varepsilon_4\,:=\,L^\infty_{\rm loc}\Bigl(\mathbb{R}_+;\big(L^2+L^{5/3}+L^1+\mathcal{M}^+\big)(\Omega_\varepsilon)\Bigr)$. \end{enumerate} Then, one has the following uniform bound, for a constant $C>0$ independent of $\varepsilon\in\,]0,1]$: $$ \left\|\vec X^1_\varepsilon\right\|_{\mathcal X_1^\varepsilon}\,+\,\left\|X^2_\varepsilon\right\|_{\mathcal X_1^\varepsilon}\,+\,\left\|\mathbb Y^1_\varepsilon\right\|_{\mathcal X_2^\varepsilon}\,+\, \left\|\vec{Y}^2_\varepsilon\right\|_{\mathcal X_3^\varepsilon}\,+\,\left\|Y^3_\varepsilon\right\|_{\mathcal X_4^\varepsilon}\,\leq\,C\,. $$ In particular, one has that the sequences $(X_\varepsilon)_\varepsilon$ and $(\vec Y_\varepsilon)_\varepsilon$, defined in system \eqref{eq:wave_m=1}, verify\footnote{For any $s\in\mathbb{R}$, we denote by $\lfloor s \rfloor$ the integer part of $s$, i.e.
the greatest integer smaller than or equal to $s$.} $$ \left\|X_\varepsilon\right\|_{L^2_T(H^{-\lfloor s \rfloor-1}(\Omega_\varepsilon))}\,+\,\left\|\vec Y_\varepsilon\right\|_{L^2_T(H^{-\lfloor s \rfloor-1}(\Omega_\varepsilon))}\,\leq\,C ,$$ for all $s>5/2$ and for a constant $C>0$ independent of $\varepsilon\in\,]0,1]$. \end{lemma} \begin{proof} The proof follows the main lines of the proof of Lemma \ref{l:source_bounds}. Here, we limit ourselves to pointing out that we have a slightly better control on $\vec Y^2_\varepsilon\,=\,\varrho_\varepsilon^{(1)}\,\nabla_{x} G$, whose boundedness in $\mathcal X^\varepsilon_3$ follows from \eqref{assFG} and the estimate analogous to \eqref{uni_varrho1} for the case $F=0$. \qed \end{proof} \medbreak The next step consists in regularising all the terms appearing in \eqref{eq:wave_m=1}. Here we have to pay attention: since the domains $\Omega_\varepsilon$ are bounded, we cannot use the Littlewood-Paley operators $S_M$ directly. Rather than multiplying by a cut-off function $\chi_l$ as done in the previous section (a procedure which would create more complicated forcing terms in the wave system), we use here the arguments of Chapter 8 of \cite{F-N} (see also \cite{F-Scho}, \cite{WK}), based on finite propagation speed properties for \eqref{eq:wave_m=1}. First of all, similarly to Paragraph \ref{sss:w-reg} above, we extend our domains $\Omega_\varepsilon$ and $\Omega$ by periodicity in the third variable and denote $$ \widetilde\Omega_\varepsilon\,:=\,{B}_{L_\varepsilon} (0) \times \mathbb{T}^1\qquad\qquad\mbox{ and }\qquad\qquad \widetilde\Omega\,:=\,\mathbb{R}^2 \times \mathbb{T}^1\,. $$ Thanks to the complete slip boundary conditions \eqref{bc1-2} and \eqref{bc3}, system \eqref{ceq} to \eqref{eeq} can be equivalently reformulated in $\widetilde\Omega_\varepsilon$. Analogously, the wave system \eqref{eq:wave_m=1} can be recast in $\widetilde\Omega_\varepsilon$ in a completely equivalent way.
From now on, we will focus on the equations satisfied on the domain $\widetilde\Omega_\varepsilon$. Next, we fix a smooth radially decreasing function $\omega\in{C}^\infty_c(\mathbb{R}^3)$, such that $0\leq\omega\leq1$, $\omega(x)=0$ for $|x|\geq1$ and $\int_{\mathbb{R}^3}\omega(x) \dx=1$. We then define the mollifying kernel $\big(\omega_M\big)_{M\in\mathbb{N}}$ by the formula $$ \omega_M(x)\,:=\,2^{3M}\,\omega\!\left(2^Mx\right)\qquad\qquad \text{for any}\;\,M\in\mathbb{N}\,\; \text{and any}\;\,x\in\mathbb{R}^3\,. $$ Then, for any tempered distribution $\mathfrak S=\mathfrak S(t,x)$ on $\mathbb{R}_+\times\widetilde\Omega$ and any $M\in\mathbb{N}$, we define $$ \mathfrak S_M\,:=\,\omega_M\,*\,\mathfrak S\,, $$ where the convolution is taken only with respect to the space variables. Applying the mollifier $\omega_M$ to \eqref{eq:wave_m=1}, we deduce that $Z_{\varepsilon,M}\,:=\,\omega_M*Z_\varepsilon$ and $\vec V_{\varepsilon,M}\,:=\,\omega_M*\vec V_\varepsilon$ satisfy the regularised wave system \begin{equation} \label{eq:reg-wave} \left\{\begin{array}{l} \varepsilon\,\partial_tZ_{\varepsilon,M}\,+\,\mathcal{A}\,{\rm div}\,\vec{V}_{\varepsilon,M}\,=\,\varepsilon\,X_{\varepsilon,M} \\[1ex] \varepsilon\,\partial_t\vec{V}_{\varepsilon,M}\,+\,\nabla_x Z_{\varepsilon,M}\,+\,\vec{e}_3\times \vec V_{\varepsilon,M}\,=\,\varepsilon\,\vec Y_{\varepsilon,M} \end{array} \right. \end{equation} in the domain $\mathbb{R}_+\times\widetilde\Omega_{\varepsilon,M}$, where we have defined \begin{equation} \label{def:O_e-M} \widetilde\Omega_{\varepsilon,M}\,:=\,\left\{x\in\widetilde\Omega_\varepsilon\;:\quad{\rm dist}(x,\partial\widetilde\Omega_\varepsilon)\geq2^{-M} \right\}\,.
\end{equation} Since the mollification commutes with standard derivatives, we notice that $X_{\varepsilon,M}\,=\,{\rm div}\,\vec{X}^1_{\varepsilon,M}\,+\,X^2_{\varepsilon,M}$ and $\vec Y_{\varepsilon,M}\,=\,{\rm div}\,\mathbb{Y}^1_{\varepsilon,M}\,+\,\vec{Y}^2_{\varepsilon,M}\,+\,\nabla_x Y^3_{\varepsilon,M}$. Moreover, system \eqref{eq:reg-wave} is supplemented with the initial data $$ Z_{0,\varepsilon,M}\,:=\,\omega_M*Z_{0,\varepsilon}\qquad\qquad\mbox{ and }\qquad\qquad \vec V_{0,\varepsilon,M}\,:=\,\omega_M*\vec V_{0,\varepsilon}\,. $$ In accordance with Lemmas \ref{l:S-W_bounds_1} and \ref{l:source_bounds_1}, by standard properties of mollifying kernels (see Theorem \ref{app:thm_mollifiers}), we get the following properties: for all $k\in\mathbb{N}$, one has \begin{align*} \left\|Z_{\varepsilon,M}\right\|_{L^\infty_T(H^k(\widetilde\Omega_{\varepsilon,M}))}\,+\,\left\|\vec V_{\varepsilon,M}\right\|_{L^2_T(H^k(\widetilde\Omega_{\varepsilon,M}))}\,\leq\,C(k,M) \\ \left\|X_{\varepsilon,M}\right\|_{L^2_T(H^k(\widetilde\Omega_{\varepsilon,M}))}\,+\,\left\|\vec Y_{\varepsilon,M}\right\|_{L^2_T(H^k(\widetilde\Omega_{\varepsilon,M}))}\,\leq\,C(k,M)\,, \end{align*} for some positive constants $C(k,M)$, only depending on the fixed $k$ and $M$. Of course, the constants blow up when $M\rightarrow+\infty$, but they are uniform for $\varepsilon\in\,]0,1]$. We have the following statement, analogous to Proposition \ref{p:prop dec} above. Its proof is also similar, hence omitted. In addition, we notice that the strong convergence follows from standard properties of the mollifying kernel. 
\begin{proposition} \label{p:dec_1} For any $M>0$ and any $\varepsilon\in\,]0,1]$, we have \begin{equation*} \vec{V}_{\varepsilon ,M}\,=\, \varepsilon\,\vec{v}_{\varepsilon ,M}^{(1)}\,+\,\vec{v}_{\varepsilon ,M}^{(2)}\,, \end{equation*} together with the following bounds: for any $T>0$, any compact set $K\subset\widetilde\Omega$ and any $s\in\mathbb{N}$, one has (for $\varepsilon>0$ small enough, depending only on $K$) \begin{align*} \left\|\vec{v}_{\varepsilon ,M}^{(1)}\right\|_{L^{2}([0,T];H^{s}(K))}\,\leq\,C(K,s,M) \qquad\qquad\mbox{ and }\qquad\qquad \left\|\vec{v}_{\varepsilon ,M}^{(2)}\right\|_{L^{2}([0,T];H^{1}(K))}\,\leq\,C(K)\,, \end{align*} for suitable positive constants $C(K,s,M)$ and $C(K)$ depending only on the quantities in the brackets, but uniform with respect to $\varepsilon\in\,]0,1]$. \end{proposition} In particular, we deduce the following fact: for any $T>0$ and any compact $K\subset\widetilde\Omega$, there exist $\varepsilon_K>0$ and $M_K\in\mathbb{N}$ such that, for all $\varepsilon\in\,]0,\varepsilon_K]$ and all $M\geq M_K$, there are positive constants $C(K)$ and $C(K,M)$ for which \begin{equation} \label{est:V_e-M_conv} \left\|\vec V_\varepsilon\,-\,\vec V_{\varepsilon,M}\right\|_{L^2_T(L^2(K))}\,\leq\,C(K,M)\,\varepsilon\,+\,C(K)\,2^{-M}\,. \end{equation} \subsubsection{Finite propagation speed and consequences} In this paragraph we show that, for the scopes of our study, we can safely assume that system \eqref{eq:reg-wave} is set in the whole $\widetilde\Omega$ and it is supplemented with compactly supported initial data and external forces. Take smooth initial data $\mathcal Z_0$ and $\vec{\mathcal V_0}$ and forces $\mathfrak X$ and $\vec{\mathcal Y}$. 
Consider, in $\mathbb{R}_+\times\widetilde\Omega$, the wave system \begin{equation} \label{eq:wave_Omega} \left\{\begin{array}{l} \varepsilon\,\partial_t\mathcal Z\,+\,\mathcal{A}\,{\rm div}\,\vec{\mathcal V}\,=\,\varepsilon\,\mathfrak X \\[1ex] \varepsilon\,\partial_t\vec{\mathcal V}\,+\,\nabla_x\mathcal Z\,+\,\vec{e}_3\times \vec{\mathcal V}\,=\,\varepsilon\,\vec{\mathcal Y}\,, \end{array} \right. \end{equation} supplemented with initial data $\mathcal Z_{|t=0}\,=\,\mathcal Z_0$ and $\vec{\mathcal V}_{|t=0}\,=\,\vec{\mathcal V_0}$. System \eqref{eq:wave_Omega} is a symmetrizable (in the sense of Friedrichs) first-order hyperbolic system with a skew-symmetric $0$-th order term. Therefore, classical arguments based on energy methods (see e.g. Chapter 3 of \cite{M-2008} and Chapter 7 of \cite{Ali}) allow us to establish the properties of finite propagation speed and domain of dependence for solutions to \eqref{eq:wave_Omega}. Namely, set $\lambda\,:=\,\sqrt{\mathcal A}/\varepsilon$ to be the propagation speed of acoustic-Poincar\'e waves. Let $\mathfrak B$ be a cylinder included in $\widetilde\Omega$. Then one has the following two properties. \begin{enumerate}[(i)] \item \emph{Domain of dependence}: assume that $$ {\rm Supp}\,\mathcal Z_0\,,\;{\rm Supp}\,\vec{\mathcal V_0}\,\subset\,\mathfrak B\,,\qquad\qquad\qquad {\rm Supp}\,\mathfrak X(t)\,,\;{\rm Supp}\,\vec{\mathcal Y}(t)\,\subset\,\mathfrak B\quad\mbox{ for a.a. }t\in[0,T]\,; $$ then the corresponding solution $\big(\mathcal Z,\vec{\mathcal V}\big)$ to \eqref{eq:wave_Omega} is \emph{identically zero} outside the cone $$ \Big\{(t,x)\in\,]0,T[\,\times\,\widetilde\Omega\;:\quad {\rm dist}\big(x,\mathfrak B\big)\,<\,\lambda\,t\Big\}\,.
$$ \item \emph{Finite propagation speed}: define the set $$ \mathfrak B_{\lambda T}\,:=\,\Big\{x\in\widetilde\Omega\;:\quad {\rm dist}\big(x,\mathfrak B\big)\,<\,\lambda\,T\Big\} $$ and assume that $$ {\rm Supp}\,\mathcal Z_0\,,\;{\rm Supp}\,\vec{\mathcal V_0}\,\subset\,\mathfrak B_{\lambda T}\,,\qquad\qquad\qquad {\rm Supp}\,\mathfrak X(t)\,,\;{\rm Supp}\,\vec{\mathcal Y}(t)\,\subset\,\mathfrak B_{\lambda T}\quad\mbox{ for a.a. }t\in[0,T]\,; $$ then the solution $\big(\mathcal Z,\vec{\mathcal V}\big)$ is \emph{uniquely determined} by the data inside the cone $$ \mathcal C_{\lambda T}\,:=\,\Big\{(t,x)\in\,]0,T[\,\times\mathfrak B_{\lambda T}\;:\quad {\rm dist}\big(x,\partial\mathfrak B_{\lambda T}\big)\,>\,\lambda\,t\Big\}\,, $$ and in particular in the space-time cylinder $\,]0,T[\,\times\,\mathfrak B$. \end{enumerate} \medbreak Next, fix any test-function $\vec\psi\in C^\infty_c\big(\mathbb{R}_+\times\widetilde\Omega;\mathbb{R}^3\big)$, and let $T>0$ and the compact set $K\subset\widetilde\Omega$ be such that ${\rm Supp}\,\vec\psi\subset[0,T[\,\times K$. Take a cylindrical neighborhood $\mathfrak B$ of $K$ in $\widetilde\Omega$. It goes without saying that there exist an $\varepsilon_0=\varepsilon_0(\mathfrak B)\in\,]0,1]$ and an $M_0=M_0(\mathfrak B)\in\mathbb{N}$ such that \begin{equation}\label{cyl-neigh} \overline{\mathfrak B}\,\subset\subset\,\widetilde\Omega_{\varepsilon,M}\qquad\qquad \mbox{ for all }\qquad 0<\varepsilon\leq\varepsilon_0\quad\mbox{ and }\quad M\geq M_0\,, \end{equation} where the set $\widetilde\Omega_{\varepsilon,M}$ has been defined in \eqref{def:O_e-M} above.
Take now a cut-off function $\mathfrak h\in C^\infty_c(\widetilde\Omega)$ such that $\mathfrak h\equiv1$ on $\mathfrak B$ (and hence on $K$), and solve problem \eqref{eq:wave_Omega} with compactly supported data $$ \mathcal Z_0\,=\,\mathfrak h\,Z_{0,\varepsilon,M}\,,\qquad \vec{\mathcal V_0}\,=\,\mathfrak h\,\vec V_{0,\varepsilon,M}\,,\qquad \mathfrak X\,=\,\mathfrak h\,X_{\varepsilon,M}\,,\qquad \vec{\mathcal Y}\,=\,\mathfrak h\,\vec{Y}_{\varepsilon,M}\,. $$ We point out that all the data are now localised around the compact set $K$. Owing to assumption \eqref{dom}, the domains $\widetilde{\Omega}_{\varepsilon,M}$ are expanding at speed proportional to $\varepsilon^{-(1+\delta)}$, whereas, in view of finite propagation speed, the support of the solution is expanding at speed proportional to $\varepsilon^{-1}$ (keep in mind also Remark \ref{r:speed-waves}). Thus, thanks to the inclusion \eqref{cyl-neigh}, the previous discussion implies that, up to taking a smaller $\varepsilon_0$, for any $\varepsilon\leq\varepsilon_0$ the corresponding solution $\big(\mathcal Z,\vec{\mathcal V}\big)$ of \eqref{eq:wave_Omega} has support inside a cylinder $\widetilde\mathbb{B}_L\,:=\,B_L(0)\times\mathbb{T}^1\subset\widetilde\Omega_\varepsilon$, for some $L=L(T,K,\lambda)>0$, and it must coincide with the solution $\big(Z_{\varepsilon,M},\vec V_{\varepsilon,M}\big)$ of \eqref{eq:reg-wave} on the set $\,]0,T[\,\times\,\mathfrak B$, for all $0<\varepsilon\leq \varepsilon_0$ and all $M\geq M_0$. In particular, for all $0<\varepsilon\leq \varepsilon_0$ and all $M\geq M_0$ we have $$ \mathcal Z\,\equiv\,Z_{\varepsilon,M}\quad\mbox{ and }\quad \vec{\mathcal V}\,\equiv\,\vec V_{\varepsilon,M}\qquad\qquad\mbox{ on }\qquad {\rm Supp}\,\vec\psi\,.
$$ The previous argument shows that, without loss of generality, we can assume that the regularised wave system \eqref{eq:reg-wave} is verified on the whole $\widetilde\Omega$, with compactly supported initial data and forces, and with solutions supported on some cylinder $\widetilde\mathbb{B}_L$. In particular, we can safely work with system \eqref{eq:reg-wave} and its smooth solutions $\big(Z_{\varepsilon,M},\vec V_{\varepsilon,M}\big)$ in the computations below. \subsection{Convergence of the convective term} \label{ss:convergence_1} Here we tackle the convergence of the convective term, employing again a compensated compactness argument. The first step is to reduce the study to the case of smooth vector fields $\vec{V}_{\varepsilon ,M}$. Arguing as in Lemma \ref{lem:convterm}, and using Proposition \ref{p:dec_1} and property \eqref{est:V_e-M_conv}, one can easily prove the following approximation result. Again, the proof is omitted. \begin{lemma} \label{lem:convterm_1} Let $T>0$. For any $\vec{\psi}\in C_c^\infty\bigl([0,T[\,\times\widetilde\Omega;\mathbb{R}^3\bigr)$, we have \begin{equation*} \lim_{M\rightarrow +\infty} \limsup_{\varepsilon \rightarrow 0^+}\left|\int_{0}^{T}\int_{\widetilde\Omega} \varrho_\varepsilon\,\vec{u}_\varepsilon\otimes \vec{u}_\varepsilon: \nabla_{x}\vec{\psi}\, \dxdt- \int_{0}^{T}\int_{\widetilde\Omega} \vec{V}_{\varepsilon ,M}\otimes \vec{V}_{\varepsilon,M}: \nabla_{x}\vec{\psi}\,\dxdt\right|=0\, . \end{equation*} \end{lemma} \medbreak Consider now a test-function $\vec\psi\in C_c^\infty\big([0,T[\,\times\widetilde\Omega;\mathbb{R}^3\big)$ such that ${\rm div}\,\vec\psi=0$ and $\partial_3\vec\psi=0$.
Thanks to the previous lemma, it is enough to pass to the limit in the smooth term \begin{align*} -\int_{0}^{T}\int_{\widetilde\Omega} \vec{V}_{\varepsilon ,M}\otimes \vec{V}_{\varepsilon ,M}: \nabla_{x}\vec{\psi}\,&=\, \int_{0}^{T}\int_{\widetilde\Omega}{\rm div}\,\left(\vec{V}_{\varepsilon ,M}\otimes \vec{V}_{\varepsilon ,M}\right) \cdot \vec{\psi}\,=\, \int_{0}^{T}\int_{\mathbb{R}^2} \left(\mathcal{T}_{\varepsilon ,M}^{1}+\mathcal{T}_{\varepsilon, M}^{2}\right)\cdot\vec{\psi}^h\,, \end{align*} where, for simplicity, we agree that the torus $\mathbb{T}^1$ has been normalised so that its Lebesgue measure is equal to $1$ and, analogously to \eqref{def:T1-2}, we have introduced the quantities $$ \mathcal T^1_{\varepsilon,M}\,:=\, {\rm div}_h\left(\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\otimes \langle \vec{V}_{\varepsilon ,M}^{h}\rangle\right)\qquad \mbox{ and }\qquad \mathcal T^2_{\varepsilon,M}\,:=\, {\rm div}_h\left(\langle \dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\otimes \dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\rangle \right)\,. $$ We notice that the analysis of $\mathcal{T}_{\varepsilon ,M}^{2}$ is similar to the one performed in Paragraph \ref{sss:term2}, up to taking $m=1$ and replacing $\vec{W}_{\varepsilon ,M}$ and $\Lambda_{\varepsilon ,M}$ by $\vec{V}_{\varepsilon ,M}$ and $Z_{\varepsilon ,M}$ respectively. Indeed, it deeply relies on system \eqref{eq:eq momentum term2}, which remains unchanged when $m=1$. Also in this case, we find \eqref{eq:limit_T2}. Therefore, we can focus on the term $\mathcal{T}_{\varepsilon ,M}^{1}$ only. Its study presents some differences with respect to Paragraph \ref{sss:term1}, so let us give the full details. 
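The starting point of the analysis of $\mathcal{T}_{\varepsilon ,M}^{1}$ is the classical two-dimensional decomposition of convective terms, which we recall here for the reader's convenience: for any smooth vector field $v=(v^1,v^2)$, denoting ${\rm curl}_h\,v\,:=\,\partial_1v^2\,-\,\partial_2v^1$ and $v^\perp\,:=\,(-v^2,v^1)$, a direct componentwise computation shows that
\[
{\rm div}_h\left(v\otimes v\right)\,=\,\left({\rm div}_h\,v\right)v\,+\,\left(v\cdot\nabla_h\right)v
\qquad\mbox{ and }\qquad
\left(v\cdot\nabla_h\right)v\,=\,\nabla_h\!\left(\frac{|v|^2}{2}\right)\,+\,\left({\rm curl}_h\,v\right)v^\perp\,.
\]
We are going to apply these identities to the choice $v\,=\,\langle \vec{V}_{\varepsilon ,M}^{h}\rangle$.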
To begin with, like in \eqref{eq:T1}, we have \begin{equation*} \mathcal{T}_{\varepsilon ,M}^{1}\,=\, {\rm div}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\;\; \langle \vec{V}_{\varepsilon ,M}^{h}\rangle+\frac{1}{2}\, \nabla_{h}\left(\left|\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\right|^{2}\right)+ {\rm curl}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\;\;\langle \vec{V}_{\varepsilon ,M}^{h}\rangle^{\perp}\,. \end{equation*} Of course, we can forget about the second term, because it is a perfect gradient. For the first term, we use system \eqref{eq:reg-wave}: averaging the first equation with respect to $x^{3}$ and multiplying it by $\langle \vec{V}^h_{\varepsilon ,M}\rangle$, we get \begin{equation*} {\rm div}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\;\;\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,=\,-\frac{\varepsilon}{\mathcal{A}}\partial_t\langle Z_{\varepsilon ,M}\rangle \langle \vec{V}_{\varepsilon ,M}^{h}\rangle+ \frac{\varepsilon}{\mathcal{A}} \langle X_{\varepsilon ,M}\rangle \langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,=\, \frac{\varepsilon}{\mathcal{A}}\langle Z_{\varepsilon ,M}\rangle \partial_t \langle \vec{V}_{\varepsilon ,M}^{h}\rangle +\mathcal{R}_{\varepsilon ,M}\,. \end{equation*} We now use the horizontal part of \eqref{eq:reg-wave}, multiplied by $\langle Z_{\varepsilon ,M}\rangle$, and we gather \begin{equation*} \begin{split} \frac{\varepsilon}{\mathcal{A}}\langle Z_{\varepsilon ,M}\rangle \partial_t \langle \vec{V}_{\varepsilon ,M}^{h}\rangle &=-\frac{1}{\mathcal{A}} \langle Z_{\varepsilon ,M}\rangle \nabla_{h}\langle Z_{\varepsilon ,M}\rangle- \frac{1}{\mathcal{A}}\langle Z_{\varepsilon ,M}\rangle\langle \vec{V}_{\varepsilon ,M}^{h}\rangle^{\perp}+\frac{\varepsilon}{\mathcal{A}}\langle Z_{\varepsilon ,M}\rangle \langle \vec Y_{\varepsilon ,M}^{h}\rangle\\ &=-\frac{1}{\mathcal{A}}\langle Z_{\varepsilon ,M}\rangle\langle \vec{V}_{\varepsilon ,M}^{h}\rangle^{\perp}+\mathcal{R}_{\varepsilon ,M}\, . 
\end{split} \end{equation*} This latter relation yields that \begin{equation*} \mathcal{T}_{\varepsilon ,M}^{1}\,=\,\left({\rm curl}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle-\frac{1}{\mathcal{A}}\langle Z_{\varepsilon ,M}\rangle \right)\langle \vec{V}_{\varepsilon ,M}^{h}\rangle^{\perp}+\mathcal{R}_{\varepsilon ,M} . \end{equation*} Now we use the horizontal part of \eqref{eq:reg-wave}: averaging it with respect to the vertical variable and applying the operator ${\rm curl}_h$, we find \begin{equation*} \varepsilon\,\partial_t{\rm curl}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,+\,{\rm div}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle \,=\,\varepsilon\, {\rm curl}_h\langle \vec Y_{\varepsilon ,M}^{h}\rangle\, . \end{equation*} Taking the difference of this equation with the first one in \eqref{eq:reg-wave}, we discover that \begin{equation*} \partial_t\gamma_{\varepsilon,M} \,=\,{\rm curl}_h\langle \vec Y_{\varepsilon ,M}^{h}\rangle\,-\,\frac{1}{\mathcal{A}}\,\langle X_{\varepsilon ,M}\rangle\,,\qquad\qquad \mbox{ with }\qquad \gamma_{\varepsilon, M}:={\rm curl}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,-\,\frac{1}{\mathcal{A}}\langle Z_{\varepsilon ,M}\rangle\,. \end{equation*} An argument analogous to the one used after \eqref{eq:gamma} above, based on Aubin-Lions Theorem, shows also in this case that $(\gamma_{\varepsilon,M})_{\varepsilon}$ is compact in $L_{T}^{2}(L_{\rm loc}^{2})$. Then, this sequence converges strongly (up to extraction of a suitable subsequence) to a tempered distribution $\gamma_M$ in the same space. 
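For the reader's convenience, let us recall the functional-analytic mechanism behind this step. The Aubin-Lions Theorem states that, given Banach spaces $X\,\hookrightarrow\hookrightarrow\,B\,\hookrightarrow\,Y$, with the first embedding compact, one has the compact embedding
\[
\Big\{f\,\in\,L^2\big(0,T;X\big)\;\Big|\quad \partial_tf\,\in\,L^2\big(0,T;Y\big)\Big\}\,\hookrightarrow\hookrightarrow\,L^2\big(0,T;B\big)\,.
\]
Here, it is applied locally in space, with e.g. $X=H^1$, $B=L^2$ and $Y=H^{-s}$ for some $s>0$ large enough: the uniform bounds on $\big(\gamma_{\varepsilon,M}\big)_\varepsilon$, together with the expression of $\partial_t\gamma_{\varepsilon,M}$ given by the previous relation, then yield the claimed compactness in $L^2_T(L^2_{\rm loc})$.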
Since $\gamma_{\varepsilon ,M}\longrightarrow \gamma_M$ strongly in $L_{T}^{2}(L_{\rm loc}^{2})$ and $\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\rightharpoonup\langle \vec{V}_{M}^{h}\rangle$ weakly in $L_{T}^{2}(L_{\rm loc}^{2})$, we deduce that \begin{equation*} \gamma_{\varepsilon,M}\,\langle \vec{V}_{\varepsilon ,M}^{h}\rangle^{\perp}\,\longrightarrow\, \gamma_M\, \langle \vec{V}_{M}^{h}\rangle^{\perp}\qquad \text{ in }\qquad \mathcal{D}^{\prime}\big(\mathbb{R}_+\times\mathbb{R}^2\big), \end{equation*} where $\langle \vec{V}_{M}^{h}\rangle=\langle{\omega}_{M}*\vec{U}^{h}\rangle$ and $ \gamma_M={\rm curl}_h\langle{\omega}_{M}*\vec{U}^{h}\rangle-(1/\mathcal{A})\langle Z_{M}\rangle$. Notice that, in view of \eqref{conv:rr}, \eqref{conv:theta}, \eqref{est:sigma}, Proposition \ref{p:prop_5.2} and the definitions given in \eqref{relnum}, we have $$ Z_M\,=\,\partial_\varrho p(1,\overline\vartheta)\,\omega_M*\varrho^{(1)}\,+\,\partial_\vartheta p(1,\overline\vartheta)\,\omega_M*\Theta\,=\,\omega_M*q\,, $$ where $q$ is the quantity defined in \eqref{eq:for q}. Owing to the regularity of the target velocity $\vec U^h$, we can pass to the limit also for $M\rightarrow+\infty$, thus finding that \begin{equation} \label{eq:limit_T1-1} \int^T_0\!\!\!\int_{\widetilde\Omega}\varrho_\varepsilon\,\vec{u}_\varepsilon\otimes \vec{u}_\varepsilon: \nabla_{x}\vec{\psi}\, \dxdt\,\longrightarrow\, \int^T_0\!\!\!\int_{\mathbb{R}^2}\big(\vec U^h\otimes\vec U^h:\nabla_h\vec\psi^h\,+\,\frac{1}{\mathcal A}\,q\,(\vec U^h)^\perp\cdot\vec\psi^h\big)\dx^h\dt\, , \end{equation} for all test functions $\vec\psi$ such that ${\rm div}\,\vec\psi=0$ and $\partial_3\vec\psi=0$. Recall the convention $|\mathbb{T}^1|=1$. Notice that, since $\vec U^h=\nabla_h^\perp q$, the last term in the integral on the right-hand side is actually zero. \subsection{End of the proof} \label{ss:limit_1} Thanks to the previous analysis, we are now ready to pass to the limit in equation \eqref{weak-mom}. 
As done above, we take a test function $\vec\psi$ such that
$$
\vec{\psi}=\big(\nabla_{h}^{\perp}\phi,0\big)\,,\qquad\qquad\mbox{ with }\qquad \phi\in C_c^\infty\big([0,T[\,\times\mathbb{R}^2\big)\,,\quad \phi=\phi(t,x^h)\,.
$$
Notice that ${\rm div}\,\vec\psi=0$ and $\partial_3\vec\psi=0$. Then, all the gradient terms and all the contributions coming from the vertical component of the momentum equation vanish identically, when tested against such a $\vec\psi$. In particular, we have
$$
\int_0^T\int_{\Omega}\frac{1}{\varepsilon}\varrho_\varepsilon \nabla_x G\cdot \vec \psi\dxdt\equiv 0\, .
$$
So, equation \eqref{weak-mom} reduces to\footnote{Remark that, in view of our choice of the test functions, we can safely come back to the notation on $\Omega$ instead of $\widetilde\Omega$.}
\begin{align*}
\int_0^T\!\!\!\int_{\Omega} \left( -\vre \ue \cdot \partial_t \vec\psi -\vre \ue\otimes\ue : \nabla \vec\psi + \frac{1}{\ep}\vre\big(\ue^{h}\big)^\perp\cdot\vec\psi^h+\mathbb{S}(\vartheta_\varepsilon,\nabla_x\vec\ue): \nabla_x \vec\psi\right) =\int_{\Omega}\vrez \uez \cdot \vec\psi(0,\cdot)\,.
\end{align*}
As done in Subsection \ref{ss:limit}, we can limit ourselves to consider the rotation and convective terms only.
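Let us briefly justify the cancellation of the gradient terms mentioned above: for a generic smooth and sufficiently decaying scalar field $\Pi$, since $\vec\psi\,=\,\big(\nabla_h^\perp\phi,0\big)$ with $\phi$ independent of $x^3$, an integration by parts gives
\[
\int_0^T\!\!\!\int_{\Omega}\nabla_x\Pi\cdot\vec\psi\,\dxdt\,=\,\int_0^T\!\!\!\int_{\Omega}\nabla_h\Pi\cdot\nabla_h^\perp\phi\,\dxdt\,=\,-\,\int_0^T\!\!\!\int_{\Omega}\Pi\;{\rm div}_h\big(\nabla_h^\perp\phi\big)\,\dxdt\,=\,0\,,
\]
owing to the identity ${\rm div}_h\,\nabla_h^\perp\,\equiv\,0$; an analogous observation takes care of the contributions coming from the vertical component of the equation.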
As for the former term, we start by using the mass equation in \eqref{ceq} tested against $\phi$: we get (recalling also \eqref{def_deltarho}) \begin{equation*} \begin{split} -\int_0^T\!\!\!\int_{\mathbb{R}^2} \left( \langle R_{\varepsilon}\rangle\, \partial_{t}\phi +\frac{1}{\varepsilon}\, \langle\varrho_{\varepsilon}\ue^{h}\rangle\cdot \nabla_{h}\phi\right)= \int_{\mathbb{R}^2}\langle R_{0,\varepsilon }\rangle\, \phi (0,\cdot ) \, , \end{split} \end{equation*} whence we deduce that \begin{align*} \int_0^T\!\!\!\int_{\Omega}\frac{1}{\ep}\vre\big(\ue^{h}\big)^\perp\cdot\vec\psi^h\,&=\,\int_0^T\!\!\!\int_{\mathbb{R}^2}\frac{1}{\ep}\langle\vre \ue^{h}\rangle \cdot \nabla_{h}\phi\, \\ &=\,-\,\int_0^T\!\!\!\int_{\mathbb{R}^2}\langle R_\varepsilon\rangle\, \partial_t\phi\,-\,\int_{\mathbb{R}^2}\langle R_{0,\varepsilon}\rangle\, \phi(0,\cdot )\,. \end{align*} Letting now $\varepsilon \rightarrow 0^+$, thanks to the previous relation and \eqref{eq:limit_T1-1}, we finally gather \begin{align*} &-\int_0^T\!\!\!\int_{\mathbb{R}^2} \left(\vec{U}^{h}\cdot \partial_{t}\nabla_{h}^{\perp} \phi+ \vec{U}^{h}\otimes \vec{U}^{h}:\nabla_{h}(\nabla_{h}^{\perp}\phi )+\langle R\rangle\, \partial_t \phi \right)\, \dx^h\dt\\ &=-\int_0^T\!\!\!\int_{\mathbb{R}^2} \mu (\overline\vartheta )\nabla_{h}\vec{U}^{h}:\nabla_{h}(\nabla_{h}^{\perp}\phi ) \, \dx^h\, \dt+\int_{\mathbb{R}^2}\left(\langle\vec{u}_{0}^{h}\rangle\cdot \nabla _{h}^{\perp}\phi (0,\cdot )+ \langle R_{0}\rangle\, \phi (0,\cdot )\right) \, \dx^h\, . 
\end{align*} Now, keeping in mind the relation for $R$ in Remark \ref{r:limit_1}, we have \begin{equation*} \begin{split} R\,&=\,-\,\frac{1}{\beta}\,\big(\partial_\vartheta p(1,\overline\vartheta)\,\Upsilon\,-\,\partial_\vartheta s(1,\overline\vartheta)\,q\,-\,\partial_\vartheta s(1,\overline\vartheta)\,G\big)\\ &=-\frac{1}{\mathcal A}\left(\mathcal B \Upsilon-q-G\right)\, , \end{split} \end{equation*} where we have also employed the definitions of $\mathcal A$ and $\mathcal B$ in \eqref{relnum}. Thus, we can write \begin{equation*} \begin{split} -\int_0^T\!\!\!\int_{\mathbb{R}^2} \langle R\rangle\, \partial_t \phi \, \dx^h\dt&=\frac{1}{\mathcal A}\int_0^T\!\!\!\int_{\mathbb{R}^2}\left(\mathcal B \, \langle \Upsilon \rangle -q+\frac{1}{2}\right) \, \partial_t \phi \, \dx^h\dt\\ &=\frac{1}{\mathcal A}\int_0^T\!\!\!\int_{\mathbb{R}^2}\Big(\mathcal B \, \langle \Upsilon \rangle -q\Big) \, \partial_t \phi \, \dx^h\dt- \int_{\mathbb{R}^2}\frac{1}{2\mathcal A}\, \phi (0,\cdot )\dx^h\, . \end{split} \end{equation*} At the end, noticing that $\Upsilon$ solves (in the sense of distributions) the transport-diffusion equation \eqref{eq_lim:transport}, we get \begin{align*} &-\int_0^T\!\!\!\int_{\mathbb{R}^2} \left(\vec{U}^{h}\cdot \partial_{t}\nabla_{h}^{\perp} \phi+ \vec{U}^{h}\otimes \vec{U}^{h}:\nabla_{h}(\nabla_{h}^{\perp}\phi )+\frac{1}{\mathcal A}\, q\, \partial_t \phi \right)\, \dx^h\dt\\ &=-\int_0^T\!\!\!\int_{\mathbb{R}^2} \mu (\overline\vartheta )\nabla_{h}\vec{U}^{h}:\nabla_{h}(\nabla_{h}^{\perp}\phi ) \, \dx^h\, \dt+\frac{1}{\mathcal A}\int_0^T\int_{\mathbb{R}^2} \langle X \rangle \, \phi \dx^h \dt\\ &+\int_{\mathbb{R}^2}\left(\langle\vec{u}_{0}^{h}\rangle\cdot \nabla _{h}^{\perp}\phi (0,\cdot )+ \left(\langle R_{0}\rangle+\frac{1}{2\mathcal A}\right)\phi (0,\cdot )\right) \, \dx^h\, , \end{align*} where we have defined $X$ as in \eqref{def:X}. Theorem \ref{th:m=1_F=0} is finally proved. 
\chapter{On the influence of gravity}\label{chap:BNS_gravity}

\begin{quotation}
In this chapter, we continue the investigation we began in Chapter \ref{chap:multi-scale_NSF}, regarding the multi-scale analysis of mathematical models for geophysical flows. Our focus here is on the effect of gravity in regimes of \emph{low stratification} which, however, go beyond the choice of the scaling that, in light of previous results, we call ``critical''. For clarity of exposition, we consider the barotropic Navier-Stokes system with the Coriolis force, i.e.
\begin{equation}\label{chap3:BNSC}
\begin{cases}
\partial_t \varrho + {\rm div}\, (\varrho\vec{u})=0\
\\[2ex]
\partial_t (\varrho\vec{u})+ {\rm div}\,(\varrho\vec{u}\otimes\vec{u}) + \dfrac{\vec{e}_3 \times \varrho\vec{u}}{Ro}\, + \dfrac{1}{Ma^2} \nabla_x p(\varrho) ={\rm div}\, \mathbb{S}(\nabla_x\vec{u}) + \dfrac{\varrho}{Fr^2} \nabla_x G\, ,
\end{cases}
\end{equation}
where we will take $Ro=\varepsilon$, $Ma=\varepsilon^m$ and $Fr=\varepsilon^n$, with $m$ and $n$ in suitable ranges (see \eqref{eq:scale-our} below in this respect). The results presented in this chapter are contained in \cite{DS-F-S-WK_sub}.

\medbreak
Before moving on, let us give a recapitulation of the contents of the chapter. In Section \ref{s:result_G} we collect our assumptions and state the main theorems. In Section \ref{s:energy} we show the main consequences of the finite energy condition on the family of weak solutions we are going to consider. Namely, we derive uniform bounds in suitable norms, which allow us to extract weak-limit points, and we explore the constraints those limit points have to satisfy. In Sections \ref{s:proof_G} and \ref{s:proof-1_G}, we complete the proof of our main results, showing convergence in the weak formulation of the equations in the cases $m>1$ and $m=1$, respectively.
\end{quotation}

\section{Setting of the problem and main results} \label{s:result_G}

In this section, we introduce the primitive system and formulate our working framework (see Subsection \ref{ss:FormProb_G}); then we state our main results (in Subsection \ref{ss:results_G}).

\subsection{The barotropic Navier-Stokes-Coriolis system} \label{ss:FormProb_G}

As already said in the introductory part, in this chapter we assume that the motion of the fluid is described by system \eqref{chap2:syst_NSFC} with constant density and without the centrifugal force. Thus, given a small parameter $\varepsilon\in\,]0,1]$, the barotropic Navier-Stokes system with Coriolis and gravitational forces (see system \eqref{chap3:BNSC} in this respect) reads as follows:
\begin{align}
& \partial_t \vre + {\rm div}\, (\vre\ue)=0 \label{ceq_G}\tag{NSC$_{\ep}^1$} \\
& \partial_t (\vre\ue)+ {\rm div}\,(\vre\ue\otimes\ue) + \frac{1}{\ep}\,\vec{e}_3 \times \vre\ue + \frac{1}{\ep^{2m}} \nabla_x p(\vre) ={\rm div}\, \mathbb{S}(\nabla_x\ue) + \frac{\vre}{\ep^{2n}} \nabla_x G\, , \label{meq_G}\tag{NSC$_{\ep}^2$}
\end{align}
where we recall that $m$ and $n$ satisfy
\begin{equation}\label{eq:scale-our}
\mbox{ either }\qquad m\,>\,1\quad\mbox{ and }\quad m\,<\,2\,n\,\leq\,m+1\,,\qquad\qquad\mbox{ or }\qquad m\,=\,1\quad\mbox{ and }\quad \frac{1}{2}\,<\,n\,<\,1\,.
\end{equation}
As in the previous Chapter \ref{chap:multi-scale_NSF}, here the unknowns in equations \eqref{ceq_G}-\eqref{meq_G} are the density $\vre=\vre(t,x)\geq0$ of the fluid and its velocity field $\ue=\ue(t,x)\in\mathbb{R}^3$, where $t\in\mathbb{R}_+$ but now $x\in \Omega:=\mathbb{R}^2 \times\; ]0,1[\,$.
The viscous stress tensor in \eqref{meq_G} is given by Newton's rheological law, which we recall here:
\begin{equation}\label{S_G}
\mathbb{S}(\nabla_x \ue) = \mu\left( \nabla_x\ue + \,^t\nabla_x \ue - \frac{2}{3}{\rm div}\, \ue\, {\rm Id}\, \right) + \eta\, {\rm div}\,\ue \, {\rm Id}\,\,,
\end{equation}
where the coefficients $\mu>0$ and $\eta\geq 0$ no longer depend on the temperature. As for the gravitational force, we recall its formulation (see \eqref{assFG} in this regard):
\begin{equation}\label{assG}
G(x)= -x^3\,.
\end{equation}
The precise expression of $G$ will be useful in some computations below, although some generalisations are certainly possible.

As done in the previous Chapter \ref{chap:multi-scale_NSF}, the system is supplemented with \emph{complete slip boundary conditions}, namely
\begin{align}
\big(\ue \cdot \n\big) _{|\partial \Omega} = 0 \quad &\mbox{ and } \quad \bigl([ \mathbb{S} (\nabla_x \ue) \n ] \times \n\bigr)_{|\partial\Omega} = 0\,, \label{bc1-2_G}
\end{align}
where $\vec{n}$ denotes the outer normal to the boundary $\partial\Omega\,=\,\{x^3=0\}\cup\{x^3=1\}$. Notice that this is a true simplification, because it avoids complications due to the presence of Ekman boundary layers when passing to the limit $\varepsilon\ra0^+$.

\begin{remark} \label{r:period-bc}
As is well-known (see e.g. Subsection \ref{sss:w-reg} and \cite{Ebin}), equations \eqref{ceq_G}-\eqref{meq_G}, supplemented with the complete slip boundary conditions \eqref{bc1-2_G}, can be recast as a periodic problem with respect to the vertical variable, in the new domain
$$
\Omega\,=\,\mathbb{R}^2\,\times\,\mathbb{T}^1\,,\qquad\qquad\mbox{ with }\qquad\mathbb{T}^1\,:=\,[-1,1]/\sim\,,
$$
where $\sim$ denotes the equivalence relation which identifies $-1$ and $1$. Indeed, the equations are invariant if we extend $\varrho$ and $\vec u^h$ as even functions with respect to $x^3$, and $u^3$ as an odd function.
In what follows, we will always assume that such modifications have been performed on the initial data, and that the corresponding solutions keep the same symmetry properties.
\end{remark}

Now we need to impose structural restrictions on the pressure function $p$. We assume that
\begin{equation}\label{pp1_G}
p\in C^1 [0,\infty)\cap C^2(0,\infty),\qquad p(0)=0,\qquad p'(\varrho )>0\quad \mbox{ for all }\,\varrho\geq 0\, .
\end{equation}
In addition to \eqref{pp1_G}, we require that (remember also Remark \ref{rmk:pressure_choice})
\begin{equation}\label{pp2_G}
\text{there exists}\quad \;\gamma\,>\,\frac{3}{2}\quad\mbox{ such that }\qquad \lim\limits_{\varrho \to +\infty} \frac{p^\prime(\varrho)}{\varrho^{\gamma -1}} = p_\infty >0\, .
\end{equation}
Without loss of generality, we can suppose that $p$ has been renormalised so that $p^\prime (1)=1$.
\begin{remark}
For a more detailed discussion about the choice $\gamma>\overline \gamma:=3/2$, which is fundamental for the existence theory, we refer the reader to \cite{F-N-P} by Feireisl, Novotný and Petzeltová, and references therein. In particular, we remark that in two space dimensions $\overline \gamma$ can be lowered down to $1$.
\end{remark}

\subsubsection{Equilibrium states} \label{sss:equilibrium_G}

Next, we focus our attention on the so-called \emph{equilibrium states}. For each fixed value of $\varepsilon\in\,]0,1]$, the equilibria of system \eqref{ceq_G}-\eqref{meq_G} consist of static densities $\vret$ satisfying
\begin{equation}\label{prF_G}
\nabla_x p(\vret) = \ep^{2(m-n)} \vret \nabla_x G \qquad \mbox{ in }\; \Omega\,.
\end{equation}
Equation \eqref{prF_G} identifies $\widetilde{\varrho}_\varepsilon$ up to an additive constant: taking the target density to be $1$, we get
\begin{equation} \label{eq:target-rho_G}
H^\prime(\vret)=\, \ep^{2(m-n)} G + H^\prime (1)\,,\qquad\qquad \mbox{ where }\qquad H(\varrho) = \varrho \int_1^{\varrho} \frac{ p(z)}{z^2} {\rm d}z\,.
\end{equation}
Notice that the definition of $H$ in \eqref{eq:target-rho_G} implies that
\begin{equation*}
H^{\prime \prime}(\varrho)=\frac{p^\prime (\varrho)}{\varrho} \quad \text{ and }\quad H^{\prime \prime}(1)=1\, .
\end{equation*}
Therefore, we infer that, whenever $m\geq1$ and $m>n$ as in the present chapter, for any $x\in\Omega$ one has $\widetilde{\varrho}_\varepsilon(x)\longrightarrow 1$ in the limit $\varepsilon\ra0^+$.

More precisely, the next statement collects all the necessary properties of the static states. It corresponds to Lemma \ref{l:target-rho_pos} and Proposition \ref{p:target-rho_bound} of Chapter \ref{chap:multi-scale_NSF}.
\begin{proposition} \label{p:target-rho_bound_G}
Let the gravitational force $G$ be given by \eqref{assG}. Let $\bigl(\widetilde{\varrho}_\varepsilon\bigr)_{0<\varepsilon\leq1}$ be a family of static solutions to equation \eqref{prF_G} in $\Omega$. Then, there exist $\varepsilon_0>0$ and $\rho_*\in\,]0,1[\,$ such that $\widetilde{\varrho}_\varepsilon\geq\rho_*$ for all $\varepsilon\in\,]0,\varepsilon_0]$ and all $x\in\Omega$. In addition, for any $\varepsilon\in\,]0,\varepsilon_0]$, one has:
\begin{equation*}
\left|\widetilde{\varrho}_\varepsilon(x)\,-\,1\right|\,\leq\,C\,\varepsilon^{2(m-n)}\, ,
\end{equation*}
for a constant $C>0$ which is uniform in $x\in\Omega$ and in $\varepsilon\in\,]0,\varepsilon_0]$.
\end{proposition}

Without loss of generality, we can assume that $\varepsilon_0=1$ in Proposition \ref{p:target-rho_bound_G}.
\medbreak
In light of this analysis, it is natural to try to solve system \eqref{ceq_G}-\eqref{meq_G} in $\Omega$, supplemented with the \emph{far field conditions}
\begin{equation} \label{ff}
\varrho_{\varepsilon}\longrightarrow \vret \qquad \mbox{ and } \qquad \ue \longrightarrow 0 \qquad\qquad \text{ as }\quad |x|\rightarrow +\infty \, .
\end{equation}

\subsubsection{Initial data and finite energy weak solutions} \label{sss:data-weak_G}

In view of the boundary conditions \eqref{ff} ``at infinity'', we assume that the initial data are close (in a suitable sense) to the equilibrium states $\vret$ that we have just identified. Namely, we consider initial densities of the following form:
\begin{equation}\label{in_vr_G}
\vrez = \vret + \ep^m \vrez^{(1)}.
\end{equation}
For later use, let us also introduce the following decomposition of the initial densities:
\begin{equation} \label{eq:in-dens_dec_G}
\varrho_{0,\varepsilon}\,=\,1\,+\,\varepsilon^{2(m-n)}\,R_{0,\varepsilon}\qquad\qquad\mbox{ with }\qquad R_{0,\varepsilon}\,=\,\widetilde r_\varepsilon\,+\,\varepsilon^{2n-m}\, \varrho_{0,\varepsilon}^{(1)}\,,\qquad \widetilde r_\varepsilon\,:=\,\frac{\widetilde\varrho_\varepsilon-1}{\varepsilon^{2(m-n)}}\,,
\end{equation}
where again $\widetilde r_\varepsilon$ is a datum of the system. We suppose the density perturbations $\vrez^{(1)}$ to be measurable functions satisfying the control
\begin{align}
\sup_{\varepsilon\in\,]0,1]}\left\| \vrez^{(1)} \right\|_{(L^2\cap L^\infty)(\Omega)}\,\leq \,c\,,\label{hyp:ill_data_G}
\end{align}
together with the ``mean-free condition''
$
\int_{\Omega} \vrez^{(1)} \dx = 0\,.
$
As for the initial velocity fields, we assume the following uniform bound:
\begin{equation} \label{hyp:ill-vel_G}
\sup_{\varepsilon\in\,]0,1]}\left\| \sqrt{\widetilde\varrho_\varepsilon} \vec{u}_{0,\ep} \right\|_{L^2(\Omega)}\, \leq\, c\,, \quad \text{which also implies}\quad \sup_{\varepsilon\in\,]0,1]}\,\left\| \vec{u}_{0,\ep} \right\|_{L^2(\Omega)}\,\leq\,c\,.
\end{equation} Thanks to the previous uniform estimates, up to extraction, we can identify the limit points \begin{align} \varrho^{(1)}_0\,:=\,\lim_{\varepsilon\ra0}\varrho^{(1)}_{0,\varepsilon}\qquad\text{ weakly-$\ast$ in }&\qquad L^\infty \cap L^2\label{conv:in_data_vrho}\\ \vec{u}_0\,:=\,\lim_{\varepsilon\ra0}\vec{u}_{0,\varepsilon}\qquad \text{ weakly in }&\qquad L^2\label{conv:in_data_vel}\,. \end{align} \medbreak At this point, let us specify better what we mean by \emph{finite energy weak solution} (see \cite{F-N} for details). \begin{definition} \label{d:weak} Let $\Omega=\mathbb{R}^2 \times \, ]0,1[$. Fix $T>0$ and $\varepsilon>0$. Let $(\varrho_{0,\varepsilon}, \vec u_{0,\varepsilon})$ be an initial datum satisfying \eqref{in_vr_G} to \eqref{hyp:ill-vel_G}. We say that the couple $(\varrho_\varepsilon, \vec u_\varepsilon)$ is a \emph{finite energy weak solution} of the system \eqref{ceq_G}-\eqref{meq_G} in $\,]0,T[\,\times \Omega$, supplemented with the boundary conditions \eqref{bc1-2_G} and far field conditions \eqref{ff}, related to the initial datum $(\varrho_{0,\varepsilon}, \vec u_{0,\varepsilon})$, if the following conditions hold: \begin{enumerate}[(i)] \item the functions $\varrho_\varepsilon$ and $\ue$ belong to the class \begin{equation*} \varrho_\varepsilon\geq 0\,,\;\;\; \varrho_\varepsilon - \widetilde{\varrho}_\varepsilon\,\in L^\infty\big(\,]0,T[\,; L^2+L^\gamma (\Omega)\big)\,,\;\;\; \ue \in L^2\big(\,]0,T[\,;H^1(\Omega)\big),\;\;\; \big(\ue \cdot \n\big) _{|\partial \Omega} = 0; \end{equation*} \item the equations have to be satisfied in a distributional sense: \begin{equation}\label{weak-con_G} -\int_0^T\int_{\Omega} \left( \vre \partial_t \varphi + \vre\ue \cdot \nabla_x \varphi \right) \dxdt = \int_{\Omega} \vrez \varphi(0,\cdot) \dx \end{equation} for any $\varphi\in C^\infty_c([0,T[\,\times \overline\Omega)$ and \begin{align} &\int_0^T\!\!\!\int_{\Omega} \left( - \vre \ue \cdot \partial_t \vec\psi - \vre [\ue\otimes\ue] : \nabla_x 
\vec\psi + \frac{1}{\ep} \, \vec{e}_3 \times (\vre \ue ) \cdot \vec\psi - \frac{1}{\ep^{2m}} p(\vre) {\rm div}\, \vec\psi \right) \dxdt \label{weak-mom_G} \\
& =\int_0^T\!\!\!\int_{\Omega} \left(- \mathbb{S}(\nabla_x\vec u_\varepsilon) : \nabla_x \vec\psi + \frac{1}{\ep^{2n}} \vre \nabla_x G\cdot \vec\psi \right) \dxdt + \int_{\Omega}\vrez \uez \cdot \vec\psi (0,\cdot) \dx \nonumber
\end{align}
for any test function $\vec\psi\in C^\infty_c([0,T[\,\times \overline\Omega; \mathbb{R}^3)$ such that $\big(\vec\psi \cdot \n \big)_{|\partial {\Omega}} = 0$;
\item the energy inequality holds for almost every $t\in (0,T)$:
\begin{align}
&\hspace{-0.7cm} \int_{\Omega}\frac{1}{2}\vre|\ue|^2(t) \dx\,+\,\frac{1}{\ep^{2m}}\int_{\Omega}\mathcal E\left(\varrho_\varepsilon,\widetilde\varrho_\varepsilon\right)(t) \dx + \int_0^t\int_{\Omega} \mathbb S(\nabla_x \ue):\nabla_x \ue \, \dx {\rm d}\tau \label{est:dissip_G} \\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \,\leq\, \int_{\Omega}\frac{1}{2}\vrez|\uez|^2 \dx\,+\, \frac{1}{\ep^{2m}}\int_{\Omega}\mathcal E\left(\varrho_{0,\varepsilon},\widetilde\varrho_\varepsilon\right) \dx\, , \nonumber
\end{align}
where $\mathcal E\left(\rho,\widetilde\varrho_\varepsilon\right)\,:=\,H(\rho) - (\rho - \vret)\, H^\prime(\vret) - H(\vret)$ is the \emph{relative internal energy} of the fluid.
\end{enumerate}
The solution is \emph{global} if the previous conditions hold for all $T>0$.
\end{definition}

Under the assumptions fixed above, for any \emph{fixed} value of the parameter $\varepsilon\in\,]0,1]$, the existence of a global in time finite energy weak solution $(\varrho_\varepsilon,\vec u_\varepsilon)$ to system \eqref{ceq_G}-\eqref{meq_G}, related to the initial datum $(\varrho_{0,\varepsilon},\vec u_{0,\varepsilon})$, in the sense of the previous definition, can be proved as in the classical case; see e.g. \cite{Lions_2}, \cite{Feireisl}.
Notice that the mapping $t \mapsto (\vre\ue)(t,\cdot)$ is weakly continuous, and one has $(\vre)_{|t=0} = \vrez$ together with $(\vre\ue)_{|t=0}= \vrez\uez$. We remark also that, in view of \eqref{ceq_G}, the total mass is conserved in time: for almost every $t\in[0,+\infty[\,$, one has \begin{equation*} \label{eq:mass_conserv_G} \int_{\Omega}\bigl(\vre(t)\,-\,\vret\bigr)\,\dx\,=\,0\,. \end{equation*} To conclude, as already highlighted in Chapter \ref{chap:multi-scale_NSF}, in our framework of finite energy weak solutions, inequality \eqref{est:dissip_G} will be the only tool to derive uniform estimates for the family of weak solutions we are going to consider. \subsection{Main theorems}\label{ss:results_G} We can now state our main results. We point out that, due to the scaling \eqref{eq:scale-our}, the relation $m>n$ is always true, so we will always be in a low stratification regime. The first statement concerns the case when the effects linked to the pressure term are predominant in the dynamics (with respect to the fast rotation), i.e. $m>1$. \begin{theorem}\label{th:m>1} Let $\Omega= \mathbb{R}^2 \times\,]0,1[\,$ and $G\in W^{1,\infty}(\Omega)$ be as in \eqref{assG}. Take $m>1$ and $m+1\geq 2n >m$. For any fixed value of $\varepsilon \in \; ]0,1]$, let initial data $\left(\varrho_{0,\varepsilon},\vec u_{0,\varepsilon}\right)$ verify the hypotheses fixed in Paragraph \ref{sss:data-weak_G}, and let $\left( \vre, \ue\right)$ be a corresponding weak solution to system \eqref{ceq_G}-\eqref{meq_G}, supplemented with the structural hypotheses \eqref{S_G} on $\mathbb{S}(\nabla_x \ue)$ and with boundary conditions \eqref{bc1-2_G} and far field conditions \eqref{ff}. Let $\vec u_0$ be defined as in \eqref{conv:in_data_vel}. 
Then, for any $T>0$, one has that
\begin{align*}
\varrho_\ep \rightarrow 1 \qquad\qquad &\mbox{ strongly in } \qquad L^{\infty}\big([0,T]; L_{\rm loc}^{\min\{2,\gamma\}}(\Omega )\big) \\
\vec{u}_\ep \weak \vec{U} \qquad\qquad &\mbox{ weakly in }\qquad L^2\big([0,T];H^{1}(\Omega)\big)\,,
\end{align*}
where $\vec{U} = (\vec U^h,0)$, with $\vec U^h=\vec U^h(t,x^h)$ such that ${\rm div}_h\vec U^h=0$. In addition, the vector field $\vec{U}^h $ is a weak solution to the following homogeneous incompressible Navier-Stokes system in $\mathbb{R}_+ \times \mathbb{R}^2$,
\begin{align}
& \partial_t \vec U^{h}+{\rm div}_h\left(\vec{U}^{h}\otimes\vec{U}^{h}\right)+\nabla_h\Gamma-\mu \Delta_{h}\vec{U}^{h}=0\, , \label{eq_lim_m:momentum_G}
\end{align}
for a suitable pressure function $\Gamma\in\mathcal D'(\mathbb{R}_+\times\mathbb{R}^2)$ and related to the initial condition
$
\vec{U}_{|t=0}=\mathbb{H}_h\left(\langle\vec{u}^h_{0}\rangle\right)\, .
$
\end{theorem}

When $m=1$, the Mach and Rossby numbers have the same order of magnitude, and the so-called \emph{quasi-geostrophic balance} is kept in the limit. The next statement is devoted to this isotropic case.
\begin{theorem} \label{th:m=1}
Let $\Omega = \mathbb{R}^2 \times\,]0,1[\,$ and $G\in W^{1,\infty}(\Omega)$ be as in \eqref{assG}. Take $m=1$ and $1/2<n<1$. For any fixed value of $\varepsilon \in \; ]0,1]$, let initial data $\left(\varrho_{0,\varepsilon},\vec u_{0,\varepsilon}\right)$ verify the hypotheses fixed in Paragraph \ref{sss:data-weak_G}, and let $\left( \vre, \ue\right)$ be a corresponding weak solution to system \eqref{ceq_G}-\eqref{meq_G}, supplemented with the structural hypotheses \eqref{S_G} on $\mathbb{S}(\nabla_x \ue)$ and with boundary conditions \eqref{bc1-2_G} and far field conditions \eqref{ff}. Let $\left(\varrho^{(1)}_0,\vec u_0\right)$ be defined as in \eqref{conv:in_data_vrho} and \eqref{conv:in_data_vel}.
Then, for any $T>0$, one has the following convergence properties:
\begin{align*}
\varrho_\ep \rightarrow 1 \qquad\qquad &\mbox{ strongly in } \qquad L^{\infty}\big([0,T]; L_{\rm loc}^{\min\{2,\gamma\}}(\Omega )\big) \\
\varrho^{(1)}_\varepsilon:=\frac{\varrho_\ep - \widetilde{\varrho}_\varepsilon}{\ep} \weakstar \varrho^{(1)} \qquad\qquad &\mbox{ weakly-$*$ in }\qquad L^{\infty}\big([0,T]; L^{2}+L^{\gamma}(\Omega )\big) \\
\vec{u}_\ep \weak \vec{U} \qquad\qquad &\mbox{ weakly in }\qquad L^2\big([0,T];H^{1}(\Omega)\big)\,,
\end{align*}
where, as above, $\vec{U} = (\vec U^h,0)$, with $\vec U^h=\vec U^h(t,x^h)$ such that ${\rm div}_h\vec U^h=0$. Moreover, one has the balance $\vec U^h=\nabla_h^\perp \varrho^{(1)}$, and $\varrho^{(1)}$ satisfies (in the weak sense) the quasi-geostrophic equation
\begin{align}
& \partial_{t}\left(\varrho^{(1)}-\Delta_{h}\varrho^{(1)}\right) -\nabla_{h}^{\perp}\varrho^{(1)}\cdot \nabla_{h}\left( \Delta_{h}\varrho^{(1)}\right) +\mu \Delta_{h}^{2}\varrho^{(1)}\,=\,0\,, \label{eq_lim:QG_G}
\end{align}
supplemented with the initial condition
$$
\left(\varrho^{(1)}-\Delta_{h}\varrho^{(1)}\right)_{|t=0}=\langle \varrho_0^{(1)}\rangle-{\rm curl}_h\langle\vec u^h_{0}\rangle\, .
$$
\end{theorem}

\section{Consequences of the energy inequality} \label{s:energy}

In Definition \ref{d:weak}, we have postulated that the family of weak solutions $\big(\varrho_\varepsilon,\vu_\varepsilon\big)_\varepsilon$ considered in Theorems \ref{th:m>1} and \ref{th:m=1} satisfies the energy inequality \eqref{est:dissip_G}. In this section, we take advantage of the energy inequality to infer uniform bounds for $\big(\varrho_\varepsilon,\vu_\varepsilon\big)_\varepsilon$: this will be done in Subsection \ref{ss:unif-est_G}. Thanks to those bounds, we can extract (in Subsection \ref{ss:ctl1_G}) weak-limit points of the sequence of solutions and deduce some properties these limit points have to satisfy.
\subsection{Uniform bounds and weak limits}\label{ss:unif-est_G} This subsection is devoted to establishing uniform bounds on the sequence $\bigl(\varrho_\varepsilon,\vec u_\varepsilon\bigr)_\varepsilon$. This can be done as in the classical case (see e.g. \cite{F-N} for details), since again the Coriolis term does not contribute to the total energy balance of the system. However, for the reader's convenience, let us present some details. To begin with, let us recall the partition of the space domain $\Omega$ into the so-called ``essential'' and ``residual'' sets. For this, for $t>0$ and for all $\varepsilon\in\,]0,1]$, we define the sets $$ \Omega_\ess^\varepsilon(t)\,:=\,\left\{x\in\Omega\;\big|\quad \varrho_\varepsilon(t,x)\in\left[1/2\,\rho_*\,,\,2\right]\right\}\,,\qquad\Omega^\varepsilon_\res(t)\,:=\,\Omega\setminus\Omega^\varepsilon_\ess(t)\,, $$ where the positive constant $\rho_*>0$ has been defined in Proposition \ref{p:target-rho_bound_G}. Next, we observe that \[ \Big[\mathcal E\big(\rho(t,x),\widetilde\varrho_\varepsilon(x)\big)\Big]_\ess\,\sim\,\left[\rho-\widetilde\varrho_\varepsilon(x)\right]_\ess^2 \qquad\quad \mbox{ and }\qquad\quad \Big[\mathcal E\big(\rho(t,x),\widetilde\varrho_\varepsilon(x)\big)\Big]_\res\,\geq\,C\left(1\,+\,\big[\rho(t,x)\big]_\res^\gamma\right)\,, \] where $\vret$ is the static density state identified in Paragraph \ref{sss:equilibrium_G}. Here above, the multiplicative constants are all strictly positive and may depend on $\rho_*$, and we agree to write $A\sim B$ whenever there exists a ``universal'' constant $c>0$ such that $(1/c)B\leq A\leq c\, B$.
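For the reader's convenience, let us sketch the first equivalence. Assuming, as is customary, that $\mathcal E$ is the relative energy functional associated with the pressure, namely $\mathcal E(\rho,r)\,=\,H(\rho)\,-\,H(r)\,-\,H'(r)\,(\rho-r)$, with $H''(\rho)\,=\,p'(\rho)/\rho$, a second-order Taylor expansion yields
\[
\mathcal E\big(\rho,\widetilde\varrho_\varepsilon\big)\,=\,\frac{1}{2}\,H''(z)\,\big(\rho\,-\,\widetilde\varrho_\varepsilon\big)^2\,,
\]
for a suitable point $z$ between $\rho$ and $\widetilde\varrho_\varepsilon$. On the essential set, both $\rho$ and $\widetilde\varrho_\varepsilon$ belong to a compact subset of $\,]0,+\infty[\,$ (keep in mind Proposition \ref{p:target-rho_bound_G}), on which $H''$ is bounded from above and from below away from zero; the first equivalence then follows.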
Thanks to the previous observations, we easily see that, under the assumptions fixed in Section \ref{s:result_G} on the initial data, the right-hand side of \eqref{est:dissip_G} is \emph{uniformly bounded} for all $\varepsilon\in\,]0,1]$: specifically, we have $$ \int_{\Omega} \frac{1}{2}\vrez|\uez|^2\,\dx + \frac{1}{\ep^{2m}}\int_{\Omega}\mathcal E\left(\varrho_{0,\varepsilon}, \, \widetilde\varrho_\varepsilon\right)\,\dx\,\leq\,C\,. $$ Owing to the previous inequalities and the finite energy condition \eqref{est:dissip_G} on the family of weak solutions, it is quite standard to derive, for any time $T>0$ fixed and any $\varepsilon\in\,]0,1]$, the following estimates, that we recall here for the reader's convenience: \begin{align} \sup_{t\in[0,T]} \| \sqrt{\vre}\ue\|_{L^2(\Omega;\, \mathbb{R}^3)}\, &\leq\,c \label{est:momentum_G} \\ \sup_{t\in[0,T]} \left\| \left[ \dfrac{\vre - \vret}{\ep^m}\right]_\ess (t) \right\|_{L^2(\Omega)}\,&\leq\, c \label{est:rho_ess_G} \\ \sup_{t\in[0,T]} \int_{\Omega} {\rm 1\mskip-4mu l}_{\mathcal{M}^\varepsilon_\res[t]} \,dx\,&\leq \, c\,\ep^{2m} \label{est:M_res-measure_G}\\ \sup_{t\in [0,T]} \int_{\Omega} [ \vre]^{\gamma}_\res (t)\,\dx \, \,&\leq\,c\,\ep^{2m} \label{est:rho_res_G} \\ \int_0^T \left\| \nabla_x \ue +\, ^t\nabla_x \ue - \frac{2}{3} {\rm div}\, \ue\, {\rm Id}\, \right\|^2_{L^2(\Omega ;\, \mathbb{R}^{3\times3})}\, \dt\, &\leq\, c\, . \label{est:Du_G} \end{align} We refer to \cite{F-N} (see also \cite{F-G-N}, \cite{F-G-GV-N}, \cite{F_2019}) for the details of the computations. Owing to \eqref{est:Du_G} and a generalisation of the Korn-Poincar\'e inequality (see Proposition \ref{app:korn-poincare_prop} in the Appendix), we gather that $\big(\nabla\vu_\varepsilon\big)_\varepsilon\,\subset\,L^2_T(L^2)$.
On the other hand, by arguing as in \cite{F-G-N}, we can use \eqref{est:momentum_G}, \eqref{est:M_res-measure_G} and \eqref{est:rho_res_G} to deduce that also $\big(\vu_\varepsilon\big)_\varepsilon\,\subset\,L^2_T(L^2)$. Putting those bounds together, we finally infer that \begin{equation}\label{bound_for_vel} \int_0^T \left\|\ue \right\|^2_{H^{1}(\Omega ;\, \mathbb{R}^{3})}\, \dt\,\leq \, c\, . \end{equation} In particular, there exists $\vU\,\in\,L^2_{\rm loc}\big(\mathbb{R}_+;H^1(\Omega;\mathbb{R}^3)\big)$ such that, up to a suitable extraction (not relabelled here), we have \begin{equation} \label{conv:u_G} \vu_\varepsilon\,\rightharpoonup\,\vU\qquad\qquad \mbox{ in }\quad L^2_{\rm loc}\big(\mathbb{R}_+;H^1(\Omega;\mathbb{R}^3)\big)\,. \end{equation} Let us move further and consider the density functions. The previous estimates suggest that a finer decomposition of the densities is needed. As a matter of fact, for any time $T>0$ fixed, we have \begin{equation}\label{rr1_G} \| \vre - 1 \|_{L^\infty_T(L^2 + L^{\gamma} + L^\infty)}\,\leq\,c\, \ep^{2(m-n)}\,. \end{equation} In order to prove \eqref{rr1_G}, we start by writing \begin{equation}\label{rel:density_1} |\varrho_\varepsilon-1|\,\leq\,|\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon|+|\widetilde{\varrho}_\varepsilon-1|\,. \end{equation} From \eqref{est:rho_ess_G}, we infer that $\big[\vr_\varepsilon\,-\,\widetilde{\vr}_\varepsilon\big]_\ess$ is of order $O(\varepsilon^m)$ in $L^\infty_T(L^2)$. For the residual part of the same term, we can use \eqref{est:rho_res_G} to deduce that it is of order $O(\varepsilon^{2m/\gamma})$ in $L^\infty_T(L^\gamma)$. Observe that, if $1<\gamma<2$, the dominant order is $O(\varepsilon^m)$, whereas, in the case $\gamma\geq2$, by use of \eqref{est:rho_res_G} and \eqref{est:M_res-measure_G} again, it is easy to get \begin{equation} \label{est:res_g>2} \left\|\left[\vr_\varepsilon\,-\,\widetilde{\vr}_\varepsilon\right]_\res\right\|_{L^\infty_T(L^2)}^2\,\leq\,C\,\varepsilon^{2m}\,.
\end{equation} Finally, we apply Proposition \ref{p:target-rho_bound_G} to control the last term in the right-hand side of \eqref{rel:density_1}. In the end, estimate \eqref{rr1_G} is proved. This having been established, and keeping in mind the notation introduced in \eqref{in_vr_G} and \eqref{eq:in-dens_dec_G}, we can introduce the density oscillation functions \[ R_\varepsilon\,:=\, \frac{\varrho_\ep -1}{\ep^{2(m-n)}}\, =\,\widetilde{r}_\varepsilon\,+\,\varepsilon^{2n-m}\,\varrho_\varepsilon^{(1)}\,, \] where we have defined \begin{equation} \label{def_deltarho_G} \varrho_\varepsilon^{(1)}(t,x)\,:=\,\frac{\vre-\widetilde{\varrho}_\varepsilon}{\ep^m}\qquad\mbox{ and }\qquad \widetilde{r}_\varepsilon(x)\,:=\,\frac{\widetilde{\varrho}_\varepsilon-1}{\ep^{2(m-n)}}\,. \end{equation} Thanks again to \eqref{est:rho_ess_G}, \eqref{est:rho_res_G} and Proposition \ref{p:target-rho_bound_G}, we see that the previous quantities verify the following uniform bounds, for any time $T>0$ fixed: \begin{equation}\label{uni_varrho1_G} \sup_{\varepsilon\in\,]0,1]}\left\|\varrho_\varepsilon^{(1)}\right\|_{L^\infty_T(L^2+L^{\gamma}({\Omega}))}\,\leq\, c \qquad\qquad\mbox{ and }\qquad\qquad \sup_{\varepsilon\in\,]0,1]}\left\| \widetilde{r}_\varepsilon \right\|_{L^{\infty}(\Omega)}\,\leq\, c \,. \end{equation} In view of the previous properties, there exist $\varrho^{(1)}\in L^\infty_T(L^2+L^{\gamma})$ and $\widetilde{r}\in L^\infty$ such that (up to the extraction of a new suitable subsequence) \begin{equation} \label{conv:rr_G} \varrho_\varepsilon^{(1)}\,\weakstar\,\varrho^{(1)}\qquad\qquad \mbox{ and }\qquad\qquad \widetilde{r}_\varepsilon\,\weakstar\,\widetilde{r} \end{equation} in the weak-$*$ topology of the respective spaces. In particular, we get \begin{equation*} R_\varepsilon\, \weakstar\,\widetilde{r} \qquad\qquad\qquad \mbox{ weakly-$*$ in }\qquad L^\infty\bigl([0,T]; L^{\min\{\gamma,2\}}_{\rm loc}(\Omega)\bigr)\, . 
\end{equation*} \begin{remark} \label{r:g>2} Observe that, owing to \eqref{est:res_g>2}, when $\gamma\geq2$ we get \[ \sup_{\varepsilon\in\,]0,1]}\left\|\varrho_\varepsilon^{(1)}\right\|_{L^\infty_T(L^2)}\,\leq\, c\, . \] Therefore, in that case we actually have that $\varrho^{(1)}\,\in\,L^\infty_T(L^2)$ and $\varrho_\varepsilon^{(1)}\,\stackrel{*}{\rightharpoonup}\,\varrho^{(1)}$ in that space. Analogously, when $\gamma\geq2$ we also get \[ \| \vre - 1 \|_{L^\infty_T(L^2 + L^\infty)}\,\leq\,c\, \ep^{2(m-n)}. \] \end{remark} \subsection{Constraints on the limit}\label{ss:ctl1_G} In this subsection, we establish some properties that the limit points of the family $\bigl(\varrho_\varepsilon,\vec u_\varepsilon \bigr)_\varepsilon$, which have been identified above, have to satisfy. We first need a preliminary result about the decomposition of the pressure function, which will be useful in the following computations. \begin{lemma}\label{lem:manipulation_pressure} Let $(m,n)\in \mathbb{R}^2$ verify the condition $m+1\,\geq\,2n\,>\,m\geq 1$. Let $p$ be the pressure term satisfying the structural hypotheses \eqref{pp1_G} and \eqref{pp2_G}. Then, for any $\varepsilon\in\,]0,1]$, one has \begin{equation}\label{p_decomp_lemma} \begin{split} \frac{1}{\varepsilon^{2m}}\,\nabla_x\Big(p(\varrho_\varepsilon)\,-\,p(\widetilde{\varrho}_\varepsilon)\Big)\,=\, \frac{1}{\varepsilon^m}\nabla_x \Big(p^\prime (1)\varrho_\varepsilon^{(1)}\Big)\,+\, \frac{1}{\varepsilon^{2n-m}}\,\nabla_x \Pi_\varepsilon\, , \end{split} \end{equation} where the functions $\vr_\varepsilon^{(1)}$ have been introduced in \eqref{def_deltarho_G} and for all $T>0$ the family $\big(\Pi_\varepsilon\big)_\varepsilon$ verifies the uniform bound \begin{equation}\label{unif-bound-Pi} \left\|\Pi_\varepsilon\right\|_{L^\infty_T(L^1+L^2+L^\gamma)}\,\leq\,C\,. \end{equation} When $\gamma\geq2$, one can dispense with the space $L^\gamma$ in the previous control of $\big(\Pi_\varepsilon\big)_\varepsilon$.
\end{lemma} \begin{proof} First of all, we write \begin{equation}\label{rel_p} \begin{split} \frac{1}{\varepsilon^{2m}}\,\nabla_x\Big(p(\varrho_\varepsilon)\,-\,p(\widetilde{\varrho}_\varepsilon)\Big)\,&=\,\frac{1}{\varepsilon^{2m}}\,\nabla_x\Big(p(\varrho_\varepsilon)\,-\,p(\widetilde{\varrho}_\varepsilon)-p^\prime(\widetilde{\varrho}_\varepsilon)(\varrho_\varepsilon -\widetilde{\varrho}_\varepsilon ) \Big)\\ &\qquad\qquad+\,\frac{1}{\varepsilon^{m}}\,\nabla_x\Big(\big(p^\prime(\widetilde{\varrho}_\varepsilon)\,-\,p'(1)\big)\,\vr_\varepsilon^{(1)}\Big)\,+\, \frac{1}{\varepsilon^m}\nabla_x \Big(p^\prime (1)\varrho_\varepsilon^{(1)}\Big)\,. \end{split} \end{equation} We start by analysing the first term on the right-hand side of \eqref{rel_p}. For the essential part, we can employ a Taylor expansion to write $$ \left[p(\varrho_\varepsilon)-p(\widetilde{\varrho}_\varepsilon)-p^\prime(\widetilde{\varrho}_\varepsilon)(\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon)\right]_\ess=\left[\frac{1}{2}\,p^{\prime \prime}(z_\varepsilon )(\varrho_\varepsilon - \widetilde{\varrho}_\varepsilon)^2\right]_\ess\, , $$ where $z_\varepsilon$ is a suitable point between $\varrho_\varepsilon$ and $\widetilde{\varrho}_\varepsilon$. Thanks to the uniform bound \eqref{est:rho_ess_G}, we have that this term is of order $O(\varepsilon^{2m})$ in $L^\infty_T(L^1)$, for any $T>0$ fixed. For the residual part, we can use \eqref{est:M_res-measure_G} and \eqref{est:rho_res_G}, together with the boundedness of the profiles $\widetilde\vr_\varepsilon$ (keep in mind Proposition \ref{p:target-rho_bound_G}), to deduce that \[ \left\|\left[p(\varrho_\varepsilon)-p(\widetilde{\varrho}_\varepsilon)-p^\prime(\widetilde{\varrho}_\varepsilon)(\varrho_\varepsilon-\widetilde{\varrho}_\varepsilon)\right]_\res\right\|_{L^\infty_T(L^1)}\,\leq C\,\varepsilon^{2m}\,. \] We refer to e.g. Lemma 4.1 of \cite{F_2019} for details.
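Let us only point out the key computation behind the residual estimate. Assuming, in accordance with \eqref{pp1_G} and \eqref{pp2_G}, the growth condition $p(\rho)\,\leq\,C\,\big(1+\rho^\gamma\big)$, all the terms in the residual part can be reduced to quantities controlled by \eqref{est:M_res-measure_G} and \eqref{est:rho_res_G}. For instance, by the H\"older inequality,
\[
\int_\Omega \big[\varrho_\varepsilon\big]_\res\,\dx\,\leq\,\left(\int_\Omega \big[\varrho_\varepsilon\big]^\gamma_\res\,\dx\right)^{1/\gamma}\,\left(\int_\Omega {\rm 1\mskip-4mu l}_{\mathcal{M}^\varepsilon_\res[t]}\,\dx\right)^{1-1/\gamma}\,\leq\,C\,\varepsilon^{2m/\gamma}\,\varepsilon^{2m\left(1-1/\gamma\right)}\,=\,C\,\varepsilon^{2m}\,,
\]
uniformly in $t\in[0,T]$; the remaining terms are handled analogously, using also the boundedness of the profiles $\widetilde\varrho_\varepsilon$.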
In a similar way, a Taylor expansion for the second term on the right-hand side of \eqref{rel_p} gives $$ \big( p^\prime(\widetilde{\varrho}_\varepsilon)-p^\prime(1)\big)\varrho^{(1)}_\varepsilon\,=\,p''(\eta_\varepsilon )( \widetilde{\varrho}_\varepsilon -1)\varrho^{(1)}_\varepsilon\, , $$ where $\eta_\varepsilon$ is a suitable point between $\widetilde{\varrho}_\varepsilon$ and 1. Owing to Proposition \ref{p:target-rho_bound_G} again and to bound \eqref{uni_varrho1_G}, we infer that this term is of order $O(\varepsilon^{2(m-n)})$ in $L^\infty_T(L^2+L^\gamma)$, for any time $T>0$ fixed. Then, defining \begin{equation}\label{nablaPiveps} \Pi_\varepsilon:= \frac{1}{\varepsilon^{2(m-n)}}\left[\frac{p(\varrho_\varepsilon )-p(\widetilde{\varrho}_\varepsilon)}{\varepsilon^m}-p^\prime(1)\varrho_\varepsilon^{(1)}\right] \end{equation} we have the control \eqref{unif-bound-Pi}. The final statement concerning the case $\gamma\geq2$ easily follows from Remark \ref{r:g>2}. This completes the proof of the lemma. \qed \end{proof} \medbreak Notice that the last term appearing in \eqref{p_decomp_lemma} is singular in $\varepsilon$. This is in stark contrast with the situation considered in previous works, see e.g. \cite{F-G-N}, \cite{F-G-GV-N}, \cite{F-N_CPDE} and \cite{F_2019}. However, its gradient structure will play a fundamental role in the computations below. This having been pointed out, we can now analyse the constraints on the weak-limit points $\big(\vr^{(1)},\vec U\big)$, identified in relations \eqref{conv:u_G} and \eqref{conv:rr_G} above. \subsubsection{The case of large values of the Mach number: $m>1$} \label{ss:constr_2_G} We start by considering the case of anisotropic scaling, namely $m>1$ and $m+1\geq 2n>m$. Notice that, in particular, one has $m>n$. \begin{proposition} \label{p:limitpoint_G} Let $m>1$ and $m+1\geq 2n>m$ in \eqref{ceq_G}-\eqref{meq_G}. 
Let $\left( \vre, \ue \right)_{\varepsilon}$ be a family of weak solutions, related to initial data $\left(\varrho_{0,\varepsilon},\vec u_{0,\varepsilon}\right)_\varepsilon$ verifying the hypotheses of Paragraph \ref{sss:data-weak_G}. Let $(\varrho^{(1)}, \vec{U} )$ be a limit point of the sequence $\left(\varrho_\varepsilon^{(1)}, \ue\right)_{\varepsilon}$, as identified in Subsection \ref{ss:unif-est_G}. Then, \begin{align} &\vec{U}\,=\,\,\Big(\vec{U}^h\,,\,0\Big)\,,\qquad\qquad \mbox{ with }\qquad \vec{U}^h\,=\,\vec{U}^h(t,x^h)\quad \mbox{ and }\quad {\rm div}\,_{\!h}\,\vec{U}^h\,=\,0\,, \label{eq:anis-lim_1_G} \\[1ex] &\nabla_x \varrho^{(1)}\,=\, 0 \qquad\qquad\mbox{ in }\;\,\mathcal D^\prime(\mathbb{R}_+\times \Omega)\,. \label{eq:anis-lim_2_G} \end{align} \end{proposition} \begin{proof} First of all, let us consider the weak formulation of the mass equation \eqref{ceq_G}: for any test function $\varphi\in C_c^\infty\bigl(\mathbb{R}_+\times\Omega\bigr)$, denoting $[0,T]\times K\,:=\,{\rm Supp}\, \, \varphi$, with $\varphi(T,\cdot)\equiv0$, we have $$ -\int^T_0\int_K\bigl(\varrho_\varepsilon-1\bigr)\,\partial_t\varphi \dxdt\,-\,\int^T_0\int_K\varrho_\varepsilon\,\vec{u}_\varepsilon\,\cdot\,\nabla_{x}\varphi \dxdt\,=\, \int_K\bigl(\varrho_{0,\varepsilon}-1\bigr)\,\varphi(0,\,\cdot\,)\dx\,. $$ We can easily pass to the limit in this equation, thanks to the strong convergence $\varrho_\varepsilon\longrightarrow1$, provided by \eqref{rr1_G}, and the weak convergence of $\vec{u}_\varepsilon$ in $L_T^2\bigl(L^6_{\rm loc}\bigr)$, provided by \eqref{conv:u_G} and Sobolev embeddings. Notice that one always has $1/\gamma\,+\,1/6\,\leq\,1$. In this way, we find $$ -\,\int^T_0\int_K\vec{U}\,\cdot\,\nabla_{x}\varphi \dxdt\,=\,0\, , $$ for any test function $\varphi \, \in C_c^\infty\bigl(\mathbb{R}_+\times\Omega\bigr)$ taken as above. The previous relation in particular implies \begin{equation} \label{eq:div-free_G} {\rm div}\, \U = 0 \qquad\qquad\mbox{ a.e. 
in }\; \,\mathbb{R}_+\times \Omega\,. \end{equation} Next, we test the momentum equation \eqref{meq_G} on $\varepsilon^m\,\vec\phi$, for a smooth compactly supported $\vec\phi$. Using the uniform bounds established in Subsection \ref{ss:unif-est_G}, it is easy to see that the time derivative term, the viscosity term and the convective term converge to $0$, in the limit $\varepsilon\ra0^+$. Since $m>1$, also the Coriolis term vanishes when $\varepsilon\ra0^+$. It remains to consider the pressure and gravity terms in the weak formulation \eqref{weak-mom_G} of the momentum equation: using relation \eqref{prF_G}, we see that we can couple them to write \begin{align} \frac{1}{\varepsilon^{2m}}\,\nabla_x p(\varrho_\varepsilon)-\, \frac{1}{\varepsilon^{2n}}\,\varrho_\varepsilon\nabla_x G\,=\,\frac{1}{\varepsilon^{2m}}\nabla_x\Big(p(\varrho_\varepsilon)\,-\,p(\widetilde{\varrho}_\varepsilon)\Big)-\,\varepsilon^{m-2n}\varrho_\varepsilon^{(1)}\nabla_x G\,. \label{eq:mom_rest_1_G} \end{align} By \eqref{uni_varrho1_G} and the fact that $m>n$, we readily see that the last term in the right-hand side of \eqref{eq:mom_rest_1_G} converges to $0$, when tested against any smooth compactly supported $\varepsilon^m\,\vec\phi$. At this point, we use Lemma \ref{lem:manipulation_pressure} to treat the first term on the right-hand side of \eqref{eq:mom_rest_1_G}. So, taking $\vec\phi \in C^\infty_c([0,T[\, \times \Omega)$ (for some $T>0$), we test the momentum equation against $\varepsilon^m\,\vec\phi$ and, using \eqref{conv:rr_G}, in the limit $\varepsilon\ra0^+$ we find that $$ \int_0^T \int_\Omega p'(1) \vr^{(1)} {\rm div}\, \vec\phi \dxdt = 0\, .$$ Recalling that $p^\prime (1)=1$, the previous relation implies \eqref{eq:anis-lim_2_G} for $\vr^{(1)}$. In particular, that relation implies that $\varrho^{(1)}(t,x)\,=\,c(t)$ for almost all $(t,x)\in\mathbb{R}_+\times\Omega$, for a suitable function $c=c(t)$ depending only on time.
Now, in order to see effects due to the fast rotation in the limit, we need to ``filter out'' the contribution coming from the low Mach number. To this end, we test \eqref{meq_G} on $\varepsilon\,\vec\phi$, where this time we take $\vec\phi\,=\,{\rm curl}\,\vec\psi$, for some smooth compactly supported $\vec\psi\,\in C^\infty_c\bigl([0,T[\,\times\Omega\bigr)$, with $T>0$. Once again, by uniform bounds we infer that the $\partial_t$ term, the convective term and the viscosity term all converge to $0$ when $\varepsilon\ra0^+$. As for the pressure and the gravitational force, we argue as in \eqref{eq:mom_rest_1_G}: since the structure of $\vec\phi$ kills any gradient term, we are left with the convergence of the integral $$ \int^T_0\int_\Omega\varepsilon^{m-2n+1}\varrho_\varepsilon^{(1)}\nabla_x G\cdot\vec\phi\,\dxdt\,\longrightarrow\, \delta_0(m-2n+1)\int^T_0\int_\Omega\varrho^{(1)}\nabla_x G\cdot\vec\phi\,\dxdt\,, $$ where $\delta_0(\zeta)\,=\,1$ if $\zeta=0$, $\delta_0(\zeta)\,=\,0$ otherwise. Finally, arguing as done for the mass equation, we see that the Coriolis term converges to the integral $\int^T_0\int_\Omega\vec{e}_3\times\vec{U}\cdot\vec\phi$. Consider first the case $m+1>2n$. Passing to the limit for $\varepsilon\ra0^+$, we find that $\mathbb{H}\left(\vec{e}_3\times\vec{U}\right)\,=\,0$, which implies that $\vec{e}_3\times\vec{U}\,=\,\nabla_x\Phi$, for some potential function $\Phi$. From this relation, one easily deduces that $\Phi=\Phi(t,x^h)$, i.e. $\Phi$ does not depend on $x^3$, and that the same property is inherited by $\vec{U}^h\,=\,\bigl(U^1,U^2\bigr)$, i.e. one has $\vec{U}^h\,=\,\vec{U}^h(t,x^h)$. Furthermore, since $\vec U^h\,=\,-\,\nabla_h^\perp\Phi$, we get that ${\rm div}\,_{\!h}\,\vec{U}^h\,=\,0$.
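Let us briefly justify this last chain of deductions. With the convention $\nabla_h^\perp\,:=\,(-\partial_2\,,\,\partial_1)$, writing the relation $\vec e_3\times\vec U\,=\,\nabla_x\Phi$ componentwise gives
\[
-\,U^2\,=\,\partial_1\Phi\,,\qquad\qquad U^1\,=\,\partial_2\Phi\,,\qquad\qquad 0\,=\,\partial_3\Phi\,.
\]
The last equality implies $\Phi\,=\,\Phi(t,x^h)$, while the first two give $\vec U^h\,=\,\big(\partial_2\Phi\,,\,-\,\partial_1\Phi\big)\,=\,-\,\nabla_h^\perp\Phi$; in particular, ${\rm div}\,_{\!h}\,\vec U^h\,=\,\partial_1\partial_2\Phi\,-\,\partial_2\partial_1\Phi\,=\,0$.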
At this point, we combine this fact with \eqref{eq:div-free_G} to infer that $\partial_3 U^3\,=\,0$; but, thanks to the boundary condition \eqref{bc1-2_G}, we must have $\bigl(\vec{U}\cdot\vec{n}\bigr)_{|\partial\Omega}\,=\,0$, which implies that $U^3$ has to vanish at the boundary of $\Omega$. Thus, we finally deduce that $U^3\,\equiv\,0$, whence \eqref{eq:anis-lim_1_G} follows (see also the proof of Proposition \ref{p:limitpoint} in this regard). Now, let us focus on the case when $m+1=2n$. The previous computations show that, when $\varepsilon\ra0^+$, we get \begin{equation}\label{eq:streamfunction_1} \vec{e}_{3}\times \vec{U}+\varrho^{(1)}\nabla_x G\,=\,\nabla_x\Phi \qquad\qquad\mbox{ in }\; \mathcal D^\prime(\mathbb{R}_+\times \Omega)\,, \end{equation} for a new suitable function $\Phi$. However, owing to \eqref{eq:anis-lim_2_G}, we can say that $\varrho^{(1)}\nabla_x G\,=\,\nabla_x\big(\vr^{(1)}\,G\big)$; hence, the previous relations can be recast as $\vec{e}_3\times\vec U\,=\,\nabla_x\widetilde\Phi$, for a new scalar function $\widetilde\Phi$. Therefore, the same analysis as above applies, allowing us to deduce \eqref{eq:anis-lim_1_G} also in the case $m+1=2n$. \qed \end{proof} \subsubsection{The case $m=1$} \label{ss:constr_1_G} Now we focus on the case $m=1$. In this case, the fast rotation and weak compressibility effects are of the same order: this allows one to reach the so-called \emph{quasi-geostrophic balance} in the limit. \begin{proposition} \label{p:limit_iso_G} Take $m=1$ and $1/2<n<1$ in system \eqref{ceq_G}-\eqref{meq_G}. Let $\left( \vre, \ue\right)_{\varepsilon}$ be a family of weak solutions to \eqref{ceq_G}-\eqref{meq_G}, associated with initial data $\left(\varrho_{0,\varepsilon},\vec u_{0,\varepsilon}\right)$ verifying the hypotheses fixed in Paragraph \ref{sss:data-weak_G}.
Let $(\varrho^{(1)}, \vec{U} )$ be a limit point of the sequence $\left(\varrho^{(1)}_{\varepsilon} , \ue\right)_{\varepsilon}$, as identified in Subsection \ref{ss:unif-est_G}. Then, \begin{align} \vr^{(1)}\,=\,\vr^{(1)}(t,x^h)\quad\mbox{ and }\quad\vec{U}\,=\,\,\Big(\vec{U}^h\,,\,0\Big)\,,\quad \mbox{ with }\quad \vec{U}^h\,=\,\nabla^\perp_h \varrho^{(1)} \;\mbox{ a.e. in }\;\mathbb{R}_+ \times \mathbb{R}^2\,. \label{eq:for q_G} \end{align} In particular, one has $\vec U^h\,=\,\vec U^h(t,x^h)$ and ${\rm div}\,_{\!h}\vec U^h\,=\,0$. \end{proposition} \begin{proof} Arguing as in the proof of Proposition \ref{p:limitpoint_G}, it is easy to pass to the limit in the continuity equation. In particular, we obtain again relation \eqref{eq:div-free_G} for $\vec U$. Only the analysis of the momentum equation changes a bit with respect to the previous case $m>1$. Now, since the most singular terms are of order $\varepsilon^{-1}$ (keep in mind Lemma \ref{lem:manipulation_pressure}), we test the weak formulation \eqref{weak-mom_G} of the momentum equation against $\varepsilon\,\vec\phi$, where $\vec \phi$ is a smooth compactly supported function. Similarly to what was done above, the uniform bounds of Subsection \ref{ss:unif-est_G} allow us to infer that the only quantity which does not vanish in the limit is the sum of the terms involving the Coriolis force, the pressure and the gravitational force: more precisely, using also Lemma \ref{lem:manipulation_pressure}, we have $$ \vec{e}_{3}\times \varrho_{\varepsilon}\ue\,+\frac{\nabla_x \Big( p(\varrho_\varepsilon)-p(\widetilde{\varrho}_\varepsilon)\Big)}{\varepsilon}\,-\, \varepsilon^{2(1-n)}\varrho_\varepsilon^{(1)}\nabla_x G\,=\,\mathcal O(\varepsilon)\, , $$ in the sense of $\mathcal D'(\mathbb{R}_+\times\Omega)$.
Following the same computations performed in the proof of Proposition \ref{p:limitpoint_G}, in the limit $\varepsilon\ra0^+$ it is easy to get that $$ \vec{e}_{3}\times \vec{U}+\nabla_x\left(p^\prime (1) \varrho^{(1)}\right)\,=\,0\qquad\qquad\mbox{ in }\; \mathcal D'\big(\mathbb{R}_+\times \Omega\big)\,. $$ After recalling that $p^\prime (1)=1$, this equality can be equivalently written as $$ \vec{e}_{3}\times \vec{U}+\nabla_x \varrho^{(1)}\,=\,0 \qquad\qquad\mbox{ a.e. in }\; \mathbb{R}_+ \times \Omega\,. $$ Notice that $\vec U$ is in fact in $L^2_{\rm loc}(\mathbb{R}_+;L^2)$, therefore so is $\nabla_x \vr^{(1)}$; hence the previous relation is indeed satisfied almost everywhere in $\mathbb{R}_+\times\Omega$. At this point, we can repeat the same argument used in the proof of Proposition \ref{p:limitpoint_G} to deduce \eqref{eq:for q_G}. The proposition is thus proved. \qed \end{proof} \section{Convergence in the case $m>1$}\label{s:proof_G} In this section, we complete the proof of Theorem \ref{th:m>1}. Namely, we show convergence in the weak formulation of the primitive system, in the case when $m>1$ and $m+1\geq 2n>m$. In Proposition \ref{p:limitpoint_G}, we have already seen how to pass to the limit in the mass equation. However, problems arise when tackling the convergence in the momentum equation. Indeed, the analysis carried out so far is not enough to identify the weak limit of the convective term $\varrho_\varepsilon\,\vec u_\varepsilon\otimes\vec u_\varepsilon$, which is highly non-linear. To prove that this term converges to the expected limit $\vec U\otimes\vec U$, the key point is to control the fast oscillations in time of the solutions, generated by the singular terms in the momentum equation. For this, we will use a compensated compactness argument and exploit the algebraic structure of the wave system underlying the primitive equations \eqref{ceq_G}-\eqref{meq_G}.
In Subsection \ref{ss:acoustic_G}, we start by giving a quite accurate description of those fast oscillations. Then, using that description, we are able, in Subsection \ref{ss:convergence_G}, to establish two fundamental properties: on the one hand, strong convergence of a suitable quantity related to the velocity fields; on the other hand, the other terms, which do not involve that quantity, tend to vanish when $\varepsilon\ra0^+$. In turn, this allows us to complete, in Subsection \ref{ss:limit_G}, the proof of the convergence. \subsection{Analysis of the strong oscillations} \label{ss:acoustic_G} The goal of the present subsection is to describe the fast oscillations in time of the solutions. First of all, we recast our equations into a wave system. Then, we establish uniform bounds for the quantities appearing in the wave system. Finally, we apply a regularisation in space procedure for all the quantities, which is preparatory in view of the computations of Subsection \ref{ss:convergence_G}. \subsubsection{Formulation of the acoustic wave system} \label{sss:wave-eq_G} We introduce the quantity $$ \vec{V}_\varepsilon\,:=\,\varrho_\varepsilon\vec{u}_\varepsilon\,. $$ Then, straightforward computations show that we can recast the continuity equation in the form \begin{equation} \label{eq:wave_mass_G} \varepsilon^m\,\partial_t\varrho^{(1)}_\varepsilon\,+\,{\rm div}\,\vec{V}_\varepsilon\,=\,0\,, \end{equation} where $\varrho^{(1)}_\varepsilon$ is defined in \eqref{def_deltarho_G}. 
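For the sake of completeness, let us detail those computations. Since the static profile $\widetilde\varrho_\varepsilon$ does not depend on time, writing $\varrho_\varepsilon\,=\,\widetilde\varrho_\varepsilon\,+\,\varepsilon^m\,\varrho^{(1)}_\varepsilon$ (which is simply definition \eqref{def_deltarho_G}) in the continuity equation \eqref{ceq_G}, we find
\[
0\,=\,\partial_t\varrho_\varepsilon\,+\,{\rm div}\,\big(\varrho_\varepsilon\,\vec u_\varepsilon\big)\,=\,\varepsilon^m\,\partial_t\varrho^{(1)}_\varepsilon\,+\,{\rm div}\,\vec V_\varepsilon\,,
\]
which is exactly equation \eqref{eq:wave_mass_G}.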
Next, thanks to Lemma \ref{lem:manipulation_pressure} and the static relation \eqref{prF_G}, we can derive the following form of the momentum equation: \begin{align} \varepsilon^m\,\partial_t\vec{V}_\varepsilon\,+\,\varepsilon^{m-1}\,\vec{e}_3\times \vec V_\varepsilon\,+p^\prime(1)\,\nabla_x \varrho_{\varepsilon}^{(1)}\,&=\, \varepsilon^{2(m-n)}\left(\varrho_\varepsilon^{(1)}\nabla_x G\,-\,\nabla_x\Pi_\varepsilon \right) \label{eq:wave_momentum_G} \\ &\qquad\qquad +\,\varepsilon^m\,\Big({\rm div}\,\mathbb{S}\!\left(\nabla_x\vec{u}_\varepsilon\right)\,-\,{\rm div}\,\!\left(\varrho_\varepsilon\vec{u}_\varepsilon\otimes\vec{u}_\varepsilon\right) \Big)\,. \nonumber \end{align} Then, if we define \begin{equation}\label{def_f-g} \vec f_\varepsilon :={\rm div}\,\big(\mathbb{S}\!\left(\nabla_x\vec{u}_\varepsilon\right)\,-\,\varrho_\varepsilon\vec{u}_\varepsilon\otimes\vec{u}_\varepsilon\big)\qquad \mbox{ and }\qquad \vec g_\varepsilon :=\varrho_\varepsilon^{(1)}\nabla_x G\,-\,\nabla_x\Pi_\varepsilon\,, \end{equation} recalling that we have normalised the pressure function so that $p^\prime (1)=1$, we can rewrite the primitive system \eqref{ceq_G}-\eqref{meq_G} in the following form: \begin{equation} \label{eq:wave_syst_G} \left\{\begin{array}{l} \varepsilon^m\,\partial_t \varrho^{(1)}_\varepsilon\,+\,{\rm div}\,\vec{V}_\varepsilon\,=\,0 \\[1ex] \varepsilon^m\,\partial_t\vec{V}_\varepsilon\,+\,\nabla_x \varrho_\varepsilon^{(1)}\,+\,\varepsilon^{m-1}\,\vec{e}_3\times \vec V_\varepsilon\,=\,\varepsilon^m\,\vec f_\varepsilon +\varepsilon^{2(m-n)}\vec g_\varepsilon\,. \end{array} \right. 
\end{equation} We remark that system \eqref{eq:wave_syst_G} has to be read in the weak sense: for any $\varphi\in C_c^\infty\bigl([0,T[\,\times \overline\Omega\bigr)$, one has $$ -\,\varepsilon^m\,\int^T_0\int_{\Omega} \varrho^{(1)}_\varepsilon\,\partial_t\varphi\,-\,\int^T_0\int_{\Omega} \vec{V}_\varepsilon\cdot\nabla_x\varphi\,=\, \varepsilon^{m}\int_{\Omega} \varrho^{(1)}_{0,\varepsilon}\,\varphi(0)\,\,, $$ and also, for any $\vec{\psi}\in C_c^\infty\bigl([0,T[\,\times \overline\Omega;\mathbb{R}^3\bigr)$ such that $(\vec \psi \cdot \vec n)_{|\partial \Omega}=0$, one has \begin{align*} &\hspace{-0.5cm} -\,\varepsilon^m\,\int^T_0\int_{\Omega}\vec{V}_\varepsilon\cdot\partial_t\vec{\psi}\,-\,\int^T_0\int_{\Omega} \varrho^{(1)}_\varepsilon\,{\rm div}\,\vec{\psi}\,+\,\varepsilon^{m-1}\int^T_0\int_{\Omega} \vec{e}_3\times\vec V_\varepsilon\cdot\vec\psi \\ &\qquad\qquad\qquad\qquad\qquad =\,\varepsilon^{m}\int_{\Omega}\varrho_{0,\varepsilon}\,\vec{u}_{0,\varepsilon}\cdot\vec{\psi}(0)\,+\,\varepsilon^m\,\int^T_0\int_{\Omega} \vec f_\varepsilon \cdot\vec{\psi}+\,\varepsilon^{2(m-n)}\,\int^T_0\int_{\Omega} \vec g_\varepsilon \cdot\vec{\psi}\,. \end{align*} We now use the estimates of Subsection \ref{ss:unif-est_G} in order to establish uniform bounds for the solutions and the data appearing in the wave system \eqref{eq:wave_syst_G}. We start by dealing with the ``unknown'' $\vec V_\varepsilon$. Splitting $\vec V_\varepsilon$ into essential and residual parts, one obtains, for all $T>0$, \begin{equation}\label{eq:V_bounds} \|\vec V_\varepsilon\|_{L^\infty_T(L^2+L^{2\gamma/(\gamma +1)})}\leq c\, . \end{equation} In the next lemma, we establish bounds for the source terms in the system of acoustic waves \eqref{eq:wave_syst_G}.
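Before doing so, let us briefly sketch the proof of \eqref{eq:V_bounds}. Writing $\vec V_\varepsilon\,=\,\sqrt{\varrho_\varepsilon}\,\big(\sqrt{\varrho_\varepsilon}\,\vec u_\varepsilon\big)$, on the essential set one has $\big[\sqrt{\varrho_\varepsilon}\big]_\ess\,\leq\,\sqrt{2}$, so the essential part of $\vec V_\varepsilon$ is uniformly bounded in $L^\infty_T(L^2)$, owing to \eqref{est:momentum_G}. For the residual part, the H\"older inequality with exponents $2\gamma$ and $2$ yields
\[
\left\|\big[\vec V_\varepsilon\big]_\res\right\|_{L^{2\gamma/(\gamma+1)}}\,\leq\,\left\|\big[\sqrt{\varrho_\varepsilon}\big]_\res\right\|_{L^{2\gamma}}\,\left\|\sqrt{\varrho_\varepsilon}\,\vec u_\varepsilon\right\|_{L^{2}}\,\leq\,C\,\varepsilon^{m/\gamma}\,,
\]
where we have used \eqref{est:momentum_G} again, together with \eqref{est:rho_res_G} to control $\big\|\big[\sqrt{\varrho_\varepsilon}\big]_\res\big\|_{L^{2\gamma}}^{2\gamma}\,=\,\int_\Omega\big[\varrho_\varepsilon\big]^\gamma_\res\,\dx\,\leq\,c\,\varepsilon^{2m}$.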
\begin{lemma} \label{l:source_bounds_G} Write $\vec f_\varepsilon\,=\,{\rm div}\, \widetilde{\vec f}_\varepsilon$ and $\vec g_\varepsilon\,=\,\vec g^1_\varepsilon\,-\,\nabla_x\Pi_\varepsilon$, where we have defined the quantities $\widetilde{\vec f}_\varepsilon:=\mathbb{S}\!\left(\nabla_x\vec{u}_\varepsilon\right)-\varrho_\varepsilon \ue \otimes \ue$, $\vec g^1_\varepsilon\,:=\,\vr_\varepsilon^{(1)}\,\nabla_xG$ and the functions $\Pi_\varepsilon$ have been introduced in \eqref{nablaPiveps} of Lemma \ref{lem:manipulation_pressure}. For any $T>0$ fixed, one has the uniform embedding properties \[ \big(\widetilde{\vec f}_\varepsilon\big)_\varepsilon\,\subset\,L^2_T(L^2+L^1)\qquad\mbox{ and }\qquad \big(\vec g^1_\varepsilon\big)_\varepsilon\,\subset\,L^2_T(L^2+L^\gamma)\,. \] In the case $\gamma\geq2$, we may get rid of the space $L^\gamma$ in the control of $\big(\vec g^1_\varepsilon\big)_\varepsilon$. In particular, the sequences $\bigl(\vec f_\varepsilon\bigr)_\varepsilon$ and $\bigl(\vec g_\varepsilon\bigr)_\varepsilon$, defined in system \eqref{eq:wave_syst_G}, are uniformly bounded in the space $L^{2}\big([0,T];H^{-s}(\Omega)\big)$, for all $s>5/2$. \end{lemma} \begin{proof} From \eqref{est:momentum_G}, \eqref{est:Du_G} and \eqref{bound_for_vel}, we immediately infer the uniform bound for the family $\big(\widetilde{\vec f}_\varepsilon\big)_\varepsilon$ in $L^2_T(L^1+L^2)$, from which we deduce also the uniform boundedness of $\big(\vec f_\varepsilon\big)_\varepsilon$ in $L^2_T(H^{-s})$, for any $s>5/2$ (see Theorem \ref{app:thm_dual_Sob} in this respect). Next, for bounding $\big(\vec g^1_\varepsilon\big)_\varepsilon$ we simply use \eqref{uni_varrho1_G}, together with Remark \ref{r:g>2} when $\gamma\geq2$. Keeping in mind the bounds established in Lemma \ref{lem:manipulation_pressure}, the uniform estimate for $\big(\vec g_\varepsilon\big)_\varepsilon$ follows. 
\qed \end{proof} \subsubsection{Regularization and description of the oscillations}\label{sss:w-reg_G} As already mentioned in Remark \ref{r:period-bc}, in order to apply the Littlewood-Paley theory, it is convenient to reformulate problem \eqref{ceq_G}-\eqref{meq_G} in the new domain (which we keep calling $\Omega$, with a little abuse of notation) $$ \Omega:= \mathbb{R}^2 \times \mathbb{T}^1,\quad \quad \text{with}\quad \quad \mathbb{T}^1:=[-1,1]/\sim\, . $$ In addition, to avoid the appearance of (irrelevant) multiplicative constants in the computations, we suppose that the torus $\mathbb{T}^1$ has been renormalised so that its Lebesgue measure is equal to 1. Now, for any $M\in\mathbb{N}$ we consider the low-frequency cut-off operator ${S}_{M}$ of a Littlewood-Paley decomposition, as introduced in \eqref{eq:S_j} of Section \ref{app:LP}. Then, we define \begin{equation}\label{def_reg_vrho-V} \varrho^{(1)}_{\varepsilon ,M}={S}_{M}\varrho^{(1)}_{\varepsilon}\qquad\qquad \text{ and }\qquad\qquad \vec{V}_{\varepsilon ,M}={S}_{M}\vec{V}_{\varepsilon}\, . \end{equation} The previous regularised quantities satisfy the following properties. \begin{proposition} \label{p:prop approx_G} For any $T>0$, we have the following convergence properties, in the limit $M\rightarrow +\infty $: \begin{equation}\label{eq:approx var_G} \begin{split} &\sup_{0<\varepsilon\leq1}\, \left\|\varrho^{(1)}_{\varepsilon }-\vrm\right\|_{L^{\infty}([0,T];H^{-s})}\longrightarrow 0\qquad \forall\,s>\max\left\{0,3\left(\frac{1}{\gamma}\,-\,\frac12\right)\right\}\\ &\sup_{0<\varepsilon\leq1}\, \left\|\vec{V}_{\varepsilon }-\vec{V}_{\varepsilon ,M}\right\|_{L^{\infty}([0,T];H^{-s})}\longrightarrow 0\qquad \forall\,s>\frac{3}{2\,\gamma}\,.
\end{split} \end{equation} Moreover, for any $M>0$, the couple $(\vrm,\vec V_{\varepsilon ,M})$ satisfies the approximate wave equations \begin{equation}\label{eq:approx wave_G} \left\{\begin{array}{l} \varepsilon^m\,\partial_t \vrm \,+\,\,{\rm div}\,\vec{V}_{\varepsilon ,M}\,=\,0 \\[1ex] \varepsilon^m\,\partial_t\vec{V}_{\varepsilon ,M}\,+\varepsilon^{m-1}\,e_{3}\times \vec{V}_{\varepsilon ,M}+\,\nabla_x \vrm\,=\,\varepsilon^m\,\vec f_{\varepsilon ,M}\,+\varepsilon^{2(m-n)} \vec g_{\varepsilon,M} \, , \end{array} \right. \end{equation} where $(\vec f_{\varepsilon ,M})_{\varepsilon}$ and $(\vec g_{\varepsilon ,M})_{\varepsilon}$ are families of smooth (in the space variables) functions satisfying, for any $s\geq0$, the uniform bounds \begin{equation}\label{eq:approx force_G} \sup_{0<\varepsilon\leq1}\, \left\|\vec f_{\varepsilon ,M}\right\|_{L^{2}([0,T];H^{s})}\,+\,\sup_{0<\varepsilon\leq1}\,\left\|\vec g_{\varepsilon ,M}\right\|_{L^{\infty}([0,T];H^{s})}\,\leq\, C(s,M)\,, \end{equation} where the constant $C(s,M)$ depends on the fixed values of $s\geq 0$ and $M>0$, but not on $\varepsilon>0$. \end{proposition} \begin{proof} Thanks to characterization \eqref{eq:LP-Sob} of $H^{s}$, properties \eqref{eq:approx var_G} are straightforward consequences of the uniform bounds established in Subsection \ref{ss:unif-est_G}. For instance, let us consider the functions $\vr^{(1)}_\varepsilon$: when $\gamma\geq2$, owing to Remark \ref{r:g>2} one has $\big(\vr^{(1)}_\varepsilon\big)_\varepsilon\,\subset\,L^\infty_T(L^2)$, and then we use estimate \eqref{est:sobolev} from Section \ref{app:LP}. When $1<\gamma<2$, instead, we first apply the dual Sobolev embedding (see Theorem \ref{app:thm_dual_Sob}) to infer that $\big(\vr^{(1)}_\varepsilon\big)_\varepsilon\,\subset\,L^\infty_T(H^{-\sigma})$, with $\sigma\,=\,\sigma(\gamma)\,=\,3\big(1/\gamma-1/2\big)$, and then we use \eqref{est:sobolev} again.
The bounds for the momentum $\big(\vec V_\varepsilon\big)_\varepsilon$ can be deduced by a similar argument, after observing that $2\gamma/(\gamma+1)<2$ always. Next, applying the operator ${S}_{M}$ to \eqref{eq:wave_syst_G} immediately gives us system \eqref{eq:approx wave_G}, where we have set \begin{equation*} \vec f_{\varepsilon ,M}:={S}_{M}\vec f_\varepsilon \qquad \text{ and }\qquad \vec g_{\varepsilon ,M}:={S}_{M}\vec g_\varepsilon\,. \end{equation*} Thanks to Lemma \ref{l:source_bounds_G} and \eqref{eq:LP-Sob}, it is easy to verify inequality \eqref{eq:approx force_G}. \qed \end{proof} \medbreak At this point, we will also need the following important decomposition for the momentum vector fields $\vec V_{\varepsilon,M}$ and their ${\rm curl}\,$. \begin{proposition} \label{p:prop dec_G} For any $M>0$ and any $\varepsilon\in\,]0,1]$, the following decompositions hold true: \begin{equation*} \vec{V}_{\varepsilon ,M}\,=\, \varepsilon^{2(m-n)}\vec{t}_{\varepsilon ,M}^{1}+\vec{t}_{\varepsilon ,M}^{2}\qquad\mbox{ and }\qquad {\rm curl}\, \vec{V}_{\varepsilon ,M}=\varepsilon^{2(m-n)}\vec{T}_{\varepsilon ,M}^{1}+\vec{T}_{\varepsilon ,M}^{2}\,, \end{equation*} where, for any $T>0$ and $s\geq 0$, one has \begin{align*} &\left\|\vec{t}_{\varepsilon ,M}^{1}\right\|_{L^{2}([0,T];H^{s})}+\left\|\vec{T}_{\varepsilon ,M}^{1}\right\|_{L^{2}([0,T];H^{s})}\leq C(s,M) \\ &\left\|\vec{t}_{\varepsilon ,M}^{2}\right\|_{L^{2}([0,T];H^{1})}+\left\|\vec{T}_{\varepsilon ,M}^{2}\right\|_{L^{2}\left([0,T];L^2\right)}\leq C\,, \end{align*} for suitable positive constants $C(s,M)$ and $C$, which are uniform with respect to $\varepsilon\in\,]0,1]$.
\end{proposition} \begin{proof} We decompose $\vec{V}_{\varepsilon ,M}\,=\,\varepsilon^{2(m-n)}\vec t_{\varepsilon,M}^{1}\,+\,\vec t_{\varepsilon,M}^{2}$, where we define \begin{equation} \label{eq:t-T_G} \vec{t}_{\varepsilon,M}^{1}\,:=\,{S}_{M}\left(\frac{\varrho_\varepsilon -1}{\varepsilon^{2(m-n)}}\, \vec{u}_{\varepsilon}\right) \qquad\mbox{ and }\qquad \vec{t}_{\varepsilon,M}^{2}\,:=\,{S}_{M} \vec{u}_{\varepsilon}\,. \end{equation} The decomposition of ${\rm curl}\, \vec V_{\varepsilon,M}$ follows after setting $\vec T_{\varepsilon,M}^j\,:=\,{\rm curl}\, \vec t_{\varepsilon,M}^j$, for $j=1,2$. We have to prove uniform bounds for all those terms, by using the estimates established in Subsection \ref{ss:unif-est_G} above. First of all, we have that $\big(\vu_\varepsilon\big)_\varepsilon\,\subset\,L^2_T(H^1)$, for any $T>0$ fixed. Then, we immediately gather the sought bounds for the vector fields $\vec t_{\varepsilon,M}^2$ and $\vec T_{\varepsilon,M}^2$. For the families $\big(\vec t_{\varepsilon,M}^1\big)_\varepsilon$ and $\big(\vec T_{\varepsilon,M}^1\big)_\varepsilon$, instead, we have to use the bounds provided by \eqref{rr1_G} and (when $\gamma\geq2$) Remark \ref{r:g>2}. In turn, we see that for any $T>0$, \[ \left(\frac{\varrho_\varepsilon -1}{\varepsilon^{2(m-n)}}\, \vec{u}_{\varepsilon}\right)_\varepsilon\,\subset\,L^2_T(L^1+L^2+L^{6\gamma/(\gamma+6)})\,\hookrightarrow\, L^2_T(H^{-\sigma})\,, \] for some $\sigma>0$ large enough. Therefore, the claimed bounds follow thanks to the regularising effect of the operators $S_M$. The proof of the proposition is thus completed. \qed \end{proof} \subsection{Analysis of the convective term} \label{ss:convergence_G} In this subsection we show the convergence of the convective term. The first step is to reduce its analysis to the case of smooth vector fields $\vec{V}_{\varepsilon ,M}$. \begin{lemma} \label{lem:convterm_G} Let $T>0$.
For any $\vec{\psi}\in C_c^\infty\bigl([0,T[\,\times\Omega;\mathbb{R}^3\bigr)$, we have \begin{equation*} \lim_{M\rightarrow +\infty} \limsup_{\varepsilon \rightarrow 0^+}\left|\int_{0}^{T}\int_{\Omega} \varrho_\varepsilon\,\vec{u}_\varepsilon\otimes \vec{u}_\varepsilon: \nabla_{x}\vec{\psi}\, \dxdt- \int_{0}^{T}\int_{\Omega} \vec{V}_{\varepsilon ,M}\otimes \vec{V}_{\varepsilon,M}: \nabla_{x}\vec{\psi}\, \dxdt\right|=0\, . \end{equation*} \end{lemma} \begin{proof} The proof is very similar to that of Lemma \ref{lem:convterm} from Chapter \ref{chap:multi-scale_NSF}; for this reason, we only outline it. One starts by using the decomposition $\vr_\varepsilon\,=\,1\,+\,\varepsilon^{2(m-n)}\,R_\varepsilon$ to reduce (owing to the uniform bounds of Subsection \ref{ss:unif-est_G}) the convective term to the ``homogeneous counterpart'': for any test function $\vec\psi\in C^\infty_c\big(\mathbb{R}_+\times\Omega;\mathbb{R}^3\big)$, one has \[ \lim_{\varepsilon \rightarrow 0^+}\left|\int_{0}^{T}\int_{\Omega} \varrho_\varepsilon\,\vec{u}_\varepsilon\otimes \vec{u}_\varepsilon: \nabla_{x}\vec{\psi}\, \dxdt- \int_{0}^{T}\int_{\Omega}\vec{u}_\varepsilon\otimes\vec{u}_\varepsilon:\nabla_{x}\vec{\psi}\,\dxdt\right|\,=\,0\,. \] Notice that, here, one has to use that $\gamma\geq 3/2$. After that, we write $\vu_\varepsilon\,=\,S_M \vu_\varepsilon\,+\,({\rm Id}\,-S_M)\vu_\varepsilon\,=\,\vec t^2_{\varepsilon,M}\,+\,({\rm Id}\,-S_M)\vu_\varepsilon$. Using Proposition \ref{p:prop dec_G} and the fact that $\left\|({\rm Id}\,-{S}_{M})\,\vec{u}_\varepsilon\right\|_{L_{T}^{2}(L^{2})}\leq C\,2^{-M}\|\nabla_x\vec u_\varepsilon\|_{L^2_T(L^2)}\,\leq C\,2^{-M}$, which holds in view of estimate \eqref{est:sobolev} from Section \ref{app:LP} and the uniform bound \eqref{bound_for_vel}, one can conclude.
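For the reader's convenience, let us sketch the standard Littlewood-Paley computation behind the smallness of the high-frequency part (we use the dyadic blocks $\Delta_j$ and Bernstein's inequalities, with the conventions of Section \ref{app:LP}):
\[
\left\|({\rm Id}\,-S_M)\,\vec u_\varepsilon\right\|_{L^2}^2\,\approx\,\sum_{j\geq M}\left\|\Delta_j\vec u_\varepsilon\right\|_{L^2}^2\,\leq\,C\,\sum_{j\geq M}2^{-2j}\left\|\nabla_x\Delta_j\vec u_\varepsilon\right\|_{L^2}^2\,\leq\,C\,2^{-2M}\left\|\nabla_x\vec u_\varepsilon\right\|_{L^2}^2\,.
\]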
\qed \end{proof} \medbreak From now on, for notational convenience, we generically denote by $\mathcal{R}_{\varepsilon ,M}$ any remainder term, that is, any term satisfying the property \begin{equation} \label{eq:remainder_G} \lim_{M\rightarrow +\infty}\limsup_{\varepsilon \rightarrow 0^{+}}\left|\int_{0}^{T}\int_{\Omega}\mathcal{R}_{\varepsilon ,M}\cdot \vec{\psi}\, \dxdt\right|=0\, , \end{equation} for all test functions $\vec{\psi}\in C_c^\infty\bigl([0,T[\,\times\Omega;\mathbb{R}^3\bigr)$ lying in the kernel of the singular perturbation operator, namely (in view of Proposition \ref{p:limitpoint_G}) such that \begin{equation} \label{eq:test-f_G} \vec\psi\in C_c^\infty\big([0,T[\,\times\Omega;\mathbb{R}^3\big)\qquad\qquad \mbox{ with }\qquad {\rm div}\,\vec\psi=0\quad\mbox{ and }\quad \partial_3\vec\psi=0\,. \end{equation} Notice that, in order to pass to the limit in the weak formulation of the momentum equation and derive the limit system, it is enough to use test functions $\vec\psi$ as above. Thus, for $\vec\psi$ as in \eqref{eq:test-f_G}, we have to pass to the limit in the term \begin{align*} -\int_{0}^{T}\int_{\Omega} \vec{V}_{\varepsilon ,M}\otimes \vec{V}_{\varepsilon ,M}: \nabla_{x}\vec{\psi}\,&=\,\int_{0}^{T}\int_{\Omega} {\rm div}\,\left(\vec{V}_{\varepsilon ,M}\otimes \vec{V}_{\varepsilon ,M}\right) \cdot \vec{\psi}\,. \end{align*} Notice that the integration by parts above is well-justified, since all the quantities inside the integrals are now smooth with respect to the space variable.
Owing to the structure of the test function, and resorting to the notation \eqref{eq:decoscil} set in the introductory part, we remark that we can write $$ \int_{0}^{T}\int_{\Omega} {\rm div}\,\left(\vec{V}_{\varepsilon ,M}\otimes \vec{V}_{\varepsilon ,M}\right) \cdot \vec{\psi}\,=\, \int_{0}^{T}\int_{\mathbb{R}^2} \left(\mathcal{T}_{\varepsilon ,M}^{1}+\mathcal{T}_{\varepsilon, M}^{2}\right)\cdot\vec{\psi}^h\,, $$ where we have defined the terms \begin{equation} \label{def:T1-2_G} \mathcal T^1_{\varepsilon,M}\,:=\, {\rm div}_h\left(\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\otimes \langle \vec{V}_{\varepsilon ,M}^{h}\rangle\right)\qquad \mbox{ and }\qquad \mathcal T^2_{\varepsilon,M}\,:=\, {\rm div}_h\left(\langle \dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\otimes \dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\rangle \right)\,. \end{equation} In the next two paragraphs, we will deal with each one of those terms separately. We borrow most of the arguments from Chapter \ref{chap:multi-scale_NSF} (see also \cite{F-G-GV-N}, \cite{F_2019} for a similar approach). However, the special structure of the gravity force will play a key role here, in order (loosely speaking) to compensate the stronger singularity due to our scaling $2n>m$. Finally, we point out that, in what follows, all the equalities (which will involve the derivative in time) will hold in the sense of distributions. \subsubsection{Convergence of the vertical averages}\label{sss:term1_G} We start by dealing with $\mathcal T^1_{\varepsilon,M}$.
It is standard to write \begin{align} \mathcal{T}_{\varepsilon ,M}^{1}\,&=\,{\rm div}_h\left(\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\otimes \langle \vec{V}_{\varepsilon ,M}^{h}\rangle\right)= {\rm div}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\, \langle \vec{V}_{\varepsilon ,M}^{h}\rangle+\langle \vec{V}_{\varepsilon ,M}^{h}\rangle \cdot \nabla_{h}\langle \vec{V}_{\varepsilon ,M}^{h}\rangle \label{eq:T1_G} \\ &={\rm div}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\, \langle \vec{V}_{\varepsilon ,M}^{h}\rangle+\frac{1}{2}\, \nabla_{h}\left(\left|\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\right|^{2}\right)+ {\rm curl}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,\langle \vec{V}_{\varepsilon ,M}^{h}\rangle^{\perp}\,. \nonumber \end{align} Notice that the second term is a perfect gradient, so it vanishes when tested against divergence-free test functions. Hence, we can treat it as a remainder, in the sense of \eqref{eq:remainder_G}. For the first term in the second line of \eqref{eq:T1_G}, instead, we take advantage of system \eqref{eq:approx wave_G}: averaging the first equation with respect to $x^{3}$ and multiplying it by $\langle \vec{V}^h_{\varepsilon ,M}\rangle$, we arrive at $$ {\rm div}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,=\,-\varepsilon^m\partial_t\langle \vrm\rangle \langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,=\, \varepsilon^m\langle \vrm\rangle \partial_t \langle \vec{V}_{\varepsilon ,M}^{h}\rangle +\mathcal{R}_{\varepsilon ,M}\,. $$ We remark that the term containing the total time derivative is in fact a remainder, thanks to the factor $\varepsilon^m$ in front of it.
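For completeness, let us verify the (standard) vector identity used in \eqref{eq:T1_G}. For a smooth 2-D vector field $\vec a=(a^1,a^2)$, under the conventions ${\rm curl}_h\vec a\,=\,\partial_1a^2-\partial_2a^1$ and $\vec a^\perp\,=\,(-a^2,a^1)$ (which we assume throughout), one has
\[
\left(\vec a\cdot\nabla_h\right)\vec a\,=\,\frac12\,\nabla_h|\vec a|^2\,+\,\left({\rm curl}_h\vec a\right)\vec a^\perp\,.
\]
Indeed, for the first component the right-hand side equals $a^1\partial_1a^1+a^2\partial_1a^2-a^2\left(\partial_1a^2-\partial_2a^1\right)\,=\,a^1\partial_1a^1+a^2\partial_2a^1$, and the computation for the second component is analogous; combining this with the Leibniz rule ${\rm div}_h\left(\vec a\otimes\vec a\right)\,=\,\left({\rm div}_h\vec a\right)\vec a\,+\,\left(\vec a\cdot\nabla_h\right)\vec a$ gives exactly \eqref{eq:T1_G}.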
Now, we use the horizontal part of \eqref{eq:approx wave_G}, where we first take the vertical average and then multiply by $\langle \vrm\rangle$: since $m>1$, we gather \begin{align*} &\varepsilon^m\langle \vrm \rangle \partial_t \langle \vec{V}_{\varepsilon ,M}^{h}\rangle \\ &\qquad\quad= - \langle \vrm\rangle \nabla_{h}\langle \vrm \rangle- \varepsilon^{m-1}\langle \vrm \rangle\langle \vec{V}_{\varepsilon ,M}^{h}\rangle^{\perp} +\varepsilon^{m}\langle \vrm \rangle \langle \vec f_{\varepsilon ,M}^{h}\rangle+\varepsilon^{2(m-n)}\langle \vrm \rangle \langle \vec g_{\varepsilon ,M}^{h}\rangle\\ &\qquad\quad=-\varepsilon^{m-1}\langle \vrm \rangle\langle \vec{V}_{\varepsilon ,M}^{h}\rangle^{\perp}+\mathcal{R}_{\varepsilon ,M}\, , \end{align*} where we have repeatedly exploited the properties proved in Proposition \ref{p:prop approx_G} and we have included in the remainder term also the perfect gradient. Inserting this relation into \eqref{eq:T1_G} yields \begin{equation*} \mathcal{T}_{\varepsilon ,M}^{1}= \left({\rm curl}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,-\,\varepsilon^{m-1}\langle \vrm \rangle\right) \langle\vec{V}_{\varepsilon,M}^{h}\rangle^{\perp}+\mathcal{R}_{\varepsilon,M}\,. \end{equation*} Observe that the first summand appearing on the right-hand side of the previous relation is bilinear. Thus, in order to pass to the limit in it, one needs some strong convergence properties. As a matter of fact, in the next computations we will work on the regularised wave system \eqref{eq:approx wave_G} to show that the quantity \[ \Gamma_{\varepsilon, M}:={\rm curl}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,-\,\varepsilon^{m-1}\langle \vrm \rangle \] is \emph{compact} in some suitable space. In particular, as $m>1$, also ${\rm curl}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle$ is compact.
In order to see this, we write the vertical average of the first equation in \eqref{eq:approx wave_G} as \begin{equation*} \varepsilon^{2m-1}\,\partial_t \langle \vrm \rangle\,+\,\varepsilon^{m-1}{\rm div}\,_{h} \langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,=0\,. \end{equation*} Next, we take the vertical average of the horizontal components of \eqref{eq:approx wave_G} and then we apply ${\rm curl}_h$: one obtains \begin{equation*} \varepsilon^m\,\partial_t{\rm curl}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,+\varepsilon^{m-1}\,{\rm div}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\, =\,\varepsilon^m {\rm curl}_h\langle\vec f_{\varepsilon ,M}^{h}\rangle+\varepsilon^{2(m-n)} {\rm curl}_h\langle\vec g_{\varepsilon ,M}^{h}\rangle\, . \end{equation*} At this point, we recall the definition \eqref{def_f-g} of $\vec g_\varepsilon$, and we see that ${\rm curl}_h\langle\vec g_{\varepsilon ,M}^{h}\rangle\,\equiv\,0$. This property is absolutely fundamental, since it allows us to cancel the last term in the previous relation, which otherwise would have represented an obstacle to obtaining compactness for $\Gamma_{\varepsilon,M}$. Indeed, thanks to this observation, we can sum up the last two equations to get \begin{equation} \label{eq:gamma_G} \partial_{t}\Gamma_{\varepsilon,M}\,=\,{\rm curl}_h\langle \vec f_{\varepsilon ,M}^{h}\rangle\, . \end{equation} Using estimate \eqref{eq:approx force_G} in Proposition \ref{p:prop approx_G}, we discover that, for any $M>0$ fixed, the family $\left(\partial_{t}\,\Gamma_{\varepsilon,M}\right)_{\varepsilon}$ is uniformly bounded (with respect to $\varepsilon$) in e.g. the space $L_{T}^{2}(L^{2})$. On the other hand, we have that, again for any $M>0$ fixed, the sequence $(\Gamma_{\varepsilon,M})_{\varepsilon}$ is uniformly bounded (with respect to $\varepsilon$) e.g. in the space $L_{T}^{2}(H^{1})$.
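For the reader's convenience, we recall the version of the Aubin-Lions Theorem we are going to use (in the classical formulation due to Simon, stated here under assumptions adapted to our setting): given Banach spaces
\[
X\,\hookrightarrow\hookrightarrow\,B\,\hookrightarrow\,Y\,,
\]
with the first embedding compact, if a family $\big(u_\varepsilon\big)_\varepsilon$ is uniformly bounded in $L^p\big([0,T];X\big)$, for some $1\leq p<+\infty$, and $\big(\partial_tu_\varepsilon\big)_\varepsilon$ is uniformly bounded in $L^1\big([0,T];Y\big)$, then $\big(u_\varepsilon\big)_\varepsilon$ is relatively compact in $L^p\big([0,T];B\big)$. Below, we apply this statement with $p=2$, $X=H^1(K)$ and $B=Y=L^2(K)$, for any compact set $K\subset\mathbb{R}^2$.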
Since the embedding $H_{\rm loc}^{1}\hookrightarrow L_{\rm loc}^{2}$ is compact, the Aubin-Lions Theorem implies that, for any $M>0$ fixed, the family $(\Gamma_{\varepsilon,M})_{\varepsilon}$ is compact in $L_{T}^{2}(L_{\rm loc}^{2})$. Then, up to extraction of a suitable subsequence (not relabelled here), that family converges strongly to a tempered distribution $\Gamma_M$ in the same space. Now, we have that $\Gamma_{\varepsilon ,M}$ converges strongly to $\Gamma_M$ in $L_{T}^{2}(L_{\rm loc}^{2})$ and $\langle \vec{V}_{\varepsilon ,M}^{h}\rangle$ converges weakly to $\langle \vec{V}_{M}^{h}\rangle$ in $L_{T}^{2}(L_{\rm loc}^{2})$ (owing to Proposition \ref{p:prop dec_G}, for instance). Then, we deduce \begin{equation*} \Gamma_{\varepsilon,M}\langle \vec{V}_{\varepsilon ,M}^{h}\rangle^{\perp}\longrightarrow \Gamma_M \langle \vec{V}_{M}^{h}\rangle^{\perp}\qquad \text{ in }\qquad \mathcal{D}^{\prime}\big(\mathbb{R}_+\times\mathbb{R}^2\big)\,. \end{equation*} Observe that, by definition of $\Gamma_{\varepsilon,M}$, we must have $\Gamma_M={\rm curl}_h\langle \vec{V}_{M}^{h}\rangle$. On the other hand, owing to Proposition \ref{p:prop dec_G} and \eqref{eq:t-T_G}, we know that $\langle \vec{V}_{M}^{h}\rangle= \langle{S}_{M}\vec{U}^{h}\rangle$. Therefore, in the end we have proved that, for $m>1$ and $m+1\geq 2n >m$, one has the convergence (at any $M\in\mathbb{N}$ fixed, when $\varepsilon\ra0^+$) \begin{equation} \label{eq:limit_T1_G} \int_{0}^{T}\int_{\mathbb{R}^2}\mathcal{T}_{\varepsilon ,M}^{1}\cdot\vec{\psi}^h\dx^h\dt\,\longrightarrow\, \int^T_0\int_{\mathbb{R}^2}{\rm curl}_h\langle{S}_{M}\vec{U}^{h}\rangle\; \langle{S}_{M}(\vec{U}^{h})^{\perp}\rangle\cdot\vec\psi^h\dx^h\dt\, , \end{equation} for any $T>0$ and for any test-function $\vec \psi$ as in \eqref{eq:test-f_G}. \subsubsection{Vanishing of the oscillations}\label{sss:term2_G} We now focus on the term $\mathcal{T}_{\varepsilon ,M}^{2}$, defined in \eqref{def:T1-2_G}. Recall that $m>1$. 
In what follows, we consider separately the two cases $m+1>2n$ and $m+1=2n$. As a matter of fact, in the case when $m+1=2n$, a bilinear term involving $\vec g_{\varepsilon,M}$ has no power of $\varepsilon$ in front of it, so it is not clear that it converges to $0$ and, in fact, it might persist in the limit, giving rise to an additional term in the target system. To overcome this issue and show that this actually does not happen, we deeply exploit the structure of the wave system to recover a quantitative smallness for that term (namely, in terms of positive powers of $\varepsilon$). \subsubsection*{The case $m+1>2n$}\label{sss:term2_osc} Starting from the definition of $\mathcal T_{\varepsilon,M}^2$, the same computations as above yield \begin{align} \mathcal{T}_{\varepsilon ,M}^{2}\, &=\,\langle {\rm div}_h (\dbtilde{\vec{V}}_{\varepsilon ,M}^{h})\;\;\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\rangle+\frac{1}{2}\, \langle \nabla_{h}| \dbtilde{\vec{V}}_{\varepsilon ,M}^{h}|^{2} \rangle+ \langle {\rm curl}_h\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\,\left( \dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\right)^{\perp}\rangle\, . \label{eq:T2_G} \end{align} Let us now introduce the quantities $$ \dbtilde{\Phi}_{\varepsilon ,M}^{h}\,:=\,( \dbtilde{\vec{V}}_{\varepsilon ,M}^{h})^{\perp}-\partial_{3}^{-1}\nabla_{h}^{\perp}\dbtilde{\vec{V}}_{\varepsilon ,M}^{3}\qquad\mbox{ and }\qquad \dbtilde{\omega}_{\varepsilon ,M}^{3}\,:=\,{\rm curl}_h \dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\,. $$ Then we can write \begin{equation*} \left( {\rm curl}\, \dbtilde{\vec{V}}_{\varepsilon ,M}\right)^{h}\,=\,\partial_3 \dbtilde{\Phi}_{\varepsilon ,M}^{h}\qquad \text{ and }\qquad \left( {\rm curl}\, \dbtilde{\vec{V}}_{\varepsilon ,M}\right)^{3}\,=\,\dbtilde{\omega}_{\varepsilon ,M}^{3}\,. 
\end{equation*} In addition, from the momentum equation in \eqref{eq:approx wave_G}, where we take the mean-free part and then the ${\rm curl}\,$, we deduce the equations \begin{equation} \label{eq:eq momentum term2_G} \begin{cases} \varepsilon^{m}\partial_t\dbtilde{\Phi}_{\varepsilon ,M}^{h}-\varepsilon^{m-1}\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}=\varepsilon^m\left(\partial_{3}^{-1}{\rm curl}\,\dbtilde{\vec f}_{\varepsilon ,M} \right)^{h}+\varepsilon^{2(m-n)}\left(\partial_{3}^{-1}{\rm curl}\,\dbtilde{\vec g}_{\varepsilon ,M} \right)^{h}\\[1ex] \varepsilon^{m}\partial_t\dbtilde{\omega}_{\varepsilon ,M}^{3}+\varepsilon^{m-1}{\rm div}_h\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}=\varepsilon^m\,{\rm curl}_h\dbtilde{\vec f}_{\varepsilon ,M}^{h}\, . \end{cases} \end{equation} Making use of the relations above, recalling the definitions in \eqref{def_f-g}, and thanks to Propositions \ref{p:prop approx_G} and \ref{p:prop dec_G}, we can write \begin{equation}\label{rel_oscillations} \begin{split} {\rm curl}_h\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\;\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\right)^{\perp}&=\dbtilde{\omega}_{\varepsilon ,M}^{3}\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\right)^{\perp}\,\\ &=\varepsilon \partial_t\!\left( \dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\dbtilde{\omega}_{\varepsilon ,M}^{3}- \varepsilon\dbtilde{\omega}_{\varepsilon ,M}^{3}\left(\left(\partial_{3}^{-1}{\rm curl}\,\dbtilde{\vec f}_{\varepsilon ,M}\right)^{h}\right)^\perp\\ &\qquad\qquad\qquad\qquad\qquad\qquad -\varepsilon^{m+1-2n}\, \dbtilde{\omega}_{\varepsilon ,M}^{3}\left(\left(\partial_{3}^{-1}{\rm curl}\,\dbtilde{\vec g}_{\varepsilon ,M}\right)^{h}\right)^\perp \\ &=-\varepsilon \left( \dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\partial_t\dbtilde{\omega}_{\varepsilon ,M}^{3}+\mathcal{R}_{\varepsilon ,M}= \left( \dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\,{\rm div}_h\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}+\mathcal{R}_{\varepsilon ,M}\, . 
\end{split} \end{equation} We point out that, thanks to the scaling $m+1>2n$, we could include in the remainder also the last term appearing in the second equality, which was of order $O(\varepsilon^{m+1-2n})$. Hence, putting the gradient term into $\mathcal R_{\varepsilon,M}$, from \eqref{eq:T2_G} we arrive at \begin{align*} \mathcal{T}_{\varepsilon ,M}^{2}\,&=\,\langle {\rm div}_h\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\,\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right) \rangle+\mathcal{R}_{\varepsilon ,M} \\ &=\,\langle {\rm div}\, \dbtilde{\vec{V}}_{\varepsilon ,M}\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right) \rangle - \langle \partial_3 \dbtilde{\vec{V}}_{\varepsilon ,M}^{3}\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right) \rangle+\mathcal{R}_{\varepsilon ,M}\, . \end{align*} At this point, the computations mainly follow the same lines of \cite{F-G-GV-N} (see also \cite{F_2019}). First of all, we notice that, in the last line, the second term on the right-hand side is another remainder. 
Indeed, using the definition of the function $\dbtilde{\Phi}_{\varepsilon ,M}^{h}$ and the fact that the test-function $\vec\psi$ does not depend on $x^3$, one has \begin{equation*} \begin{split} \partial_3 \dbtilde{\vec{V}}_{\varepsilon ,M}^{3}\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)&=\partial_3 \left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{3}\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)\right) - \dbtilde{\vec{V}}_{\varepsilon ,M}^{3}\, \partial_3\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)\\ &=\mathcal{R}_{\varepsilon ,M}-\frac{1}{2}\nabla_{h}\left|\dbtilde{\vec{V}}_{\varepsilon ,M}^{3}\right|^{2}=\mathcal{R}_{\varepsilon ,M}\, . \end{split} \end{equation*} Next, in order to deal with the first term, we use the first equation in \eqref{eq:approx wave_G} to obtain \begin{equation*} \begin{split} {\rm div}\, \dbtilde{\vec{V}}_{\varepsilon ,M}\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)&=-\varepsilon^{m} \partial_t \dbtilde{\varrho}^{(1)}_{\varepsilon ,M}\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)+\mathcal{R}_{\varepsilon ,M}\\ &=\varepsilon^{m} \dbtilde{\varrho}_{\varepsilon,M}^{(1)}\, \partial_t\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)+\mathcal{R}_{\varepsilon ,M}\, .
\end{split} \end{equation*} Now, equations \eqref{eq:approx wave_G} and \eqref{eq:eq momentum term2_G} immediately yield that \begin{equation*} \varepsilon^{m}\dbtilde{\varrho}^{(1)}_{\varepsilon ,M}\, \partial_t\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}+\left(\dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\right)= \mathcal{R}_{\varepsilon ,M}-\dbtilde{\varrho}_{\varepsilon ,M}^{(1)}\, \nabla_{h}\dbtilde{\varrho}^{(1)}_{\varepsilon ,M}= \mathcal{R}_{\varepsilon ,M}-\frac{1}{2}\nabla_{h}\left|\dbtilde{\varrho}_{\varepsilon ,M}^{(1)}\right|^{2}=\mathcal{R}_{\varepsilon ,M}\,. \end{equation*} This relation finally implies that $\mathcal{T}_{\varepsilon ,M}^{2}\,=\,\mathcal R_{\varepsilon,M}$ is a remainder, in the sense of relation \eqref{eq:remainder_G}: for any $T>0$ and any test-function $\vec \psi$ as in \eqref{eq:test-f_G}, one has the convergence (at any $M\in\mathbb{N}$ fixed, when $\varepsilon\ra0^{+}$) \begin{equation} \label{eq:limit_T2_G} \int_{0}^{T}\int_{\mathbb{R}^2}\mathcal{T}_{\varepsilon ,M}^{2}\cdot\vec{\psi}^h\dx^h\dt\,\longrightarrow\,0\,. \end{equation} \subsubsection*{The case $m+1=2n$}\label{sss:term2_osc_bis} In the case $m+1=2n$, most of the previous computations may be reproduced exactly in the same way. 
The only (fundamental) change concerns relation \eqref{rel_oscillations}: since now $m+1-2n=0$, that equation reads \begin{equation}\label{rel_oscillations_bis} \begin{split} {\rm curl}_h\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\;\left(\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\right)^{\perp}\,=\,\left( \dbtilde{\Phi}_{\varepsilon ,M}^{h}\right)^{\perp}\,{\rm div}_h\dbtilde{\vec{V}}_{\varepsilon ,M}^{h}\,- \dbtilde{\omega}_{\varepsilon ,M}^{3}\left(\left(\partial_{3}^{-1}{\rm curl}\,\dbtilde{\vec g}_{\varepsilon ,M}\right)^{h}\right)^\perp+\mathcal{R}_{\varepsilon ,M}\,, \end{split} \end{equation} and, repeating the same computations performed for $\mathcal T^2_{\varepsilon, M}$ in the previous paragraph, we have \begin{equation*}\label{T^2-bis} \mathcal T^2_{\varepsilon, M}= \mathcal R_{\varepsilon, M}-\langle\dbtilde{\omega}_{\varepsilon ,M}^{3}\left(\left(\partial_{3}^{-1}{\rm curl}\,\dbtilde{\vec g}_{\varepsilon ,M}\right)^{h}\right)^\perp \rangle\, . \end{equation*} Hence, the main difference with respect to the previous case is that we have to take care of the term $\dbtilde{\omega}_{\varepsilon ,M}^{3}\left(\left(\partial_{3}^{-1}{\rm curl}\,\dbtilde{\vec g}_{\varepsilon ,M}\right)^{h}\right)^\perp $, which is non-linear and of order $O(1)$, so it may potentially give rise to oscillations which persist in the limit. In order to show that this does not happen, we make use of definition \eqref{def_f-g} of $\vec g_{\varepsilon}$ to compute \begin{align*} \left({\rm curl}\,\dbtilde{\vec g}_{\varepsilon ,M}\right)^{h,\perp}\,&=\,\left({\rm curl}\, \left(\dbtilde{\varrho}_{\varepsilon, M}^{(1)}\nabla_x G-\nabla_x\dbtilde{\Pi}_{\varepsilon,M}\right)\right)^{h,\perp} \\ &=\, \begin{pmatrix} -\partial_2 \dbtilde{\varrho}^{(1)}_{\varepsilon, M} \\ \partial_1 \dbtilde{\varrho}^{(1)}_{\varepsilon ,M} \\ 0 \end{pmatrix}^{h,\perp}\,=-\,\nabla_h \dbtilde{\varrho}_{\varepsilon,M}^{(1)}\, .
\end{align*} From this relation, in turn we get \begin{equation}\label{T^2} \mathcal T^2_{\varepsilon, M}\,=\,\mathcal R_{\varepsilon, M}\,+\,\langle \dbtilde{\omega}_{\varepsilon ,M}^{3}\, \partial_3^{-1}\nabla_h \dbtilde{\varrho}_{\varepsilon,M}^{(1)} \rangle \, . \end{equation} Now, we have to employ the potential part of the momentum equation in \eqref{eq:approx wave_G}, which has not been used so far. Taking the oscillating component of the solutions, we obtain \begin{equation*} \nabla_h \dbtilde{\varrho}_{\varepsilon,M}^{(1)}\,=-\, \varepsilon^m\,\partial_t\dbtilde{\vec{V}}^h_{\varepsilon ,M}\,-\varepsilon^{m-1} (\dbtilde{\vec{V}}^h_{\varepsilon ,M})^\perp+\varepsilon^m\,\dbtilde{\vec f}^h_{\varepsilon ,M}\,+\varepsilon^{2(m-n)} \dbtilde{\vec g}^h_{\varepsilon,M}= -\, \varepsilon^m\,\partial_t\dbtilde{\vec{V}}^h_{\varepsilon ,M}\,+ \mathcal R_{\varepsilon,M}\,. \end{equation*} Inserting this relation into \eqref{T^2} and using \eqref{eq:eq momentum term2_G}, we finally gather \begin{equation*} \mathcal T^2_{\varepsilon, M}=-\varepsilon^m \langle \dbtilde{\omega}_{\varepsilon ,M}^{3}\, \partial_t\partial_3^{-1}\dbtilde{\vec{V}}^h_{\varepsilon ,M} \rangle +\mathcal R_{\varepsilon,M}= \varepsilon^m \langle \partial_t \dbtilde{\omega}_{\varepsilon ,M}^{3}\, \partial_3^{-1}\dbtilde{\vec{V}}^h_{\varepsilon ,M} \rangle +\mathcal R_{\varepsilon,M}=\mathcal R_{\varepsilon,M}\, , \end{equation*} because we have taken $m>1$. This relation finally implies that, also in the case when $m+1=2n$, $\mathcal{T}_{\varepsilon ,M}^{2}$ is a remainder: for any $T>0$ and any test-function $\vec \psi$ as in \eqref{eq:test-f_G}, one has the convergence \eqref{eq:limit_T2_G}. \subsection{The limit system} \label{ss:limit_G} Thanks to the computations of the previous subsections, we can now pass to the limit in equation \eqref{weak-mom_G}. Recall that $m>1$ and $m+1\geq 2n >m$ here. 
To begin with, we take a test-function $\vec\psi$ as in \eqref{eq:test-f_G}, specifically \begin{equation} \label{eq:test-2} \vec{\psi}=\big(\nabla_{h}^{\perp}\phi,0\big)\,,\qquad\qquad\mbox{ with }\qquad \phi\in C_c^\infty\big([0,T[\,\times\mathbb{R}^2\big)\,,\quad \phi=\phi(t,x^h)\,. \end{equation} We point out that, since all the integrals will be performed on $\mathbb{R}^2$ (in view of the choice of the test functions in \eqref{eq:test-2} above), we can work on the domain $\Omega=\mathbb{R}^2 \times \, ]0,1[\, $. In addition, for such $\vec\psi$ as in \eqref{eq:test-2}, all the gradient terms vanish identically, as well as all the contributions due to the vertical component of the equation. In particular, we do not see any contribution of the pressure and gravity terms: equation \eqref{weak-mom_G} becomes \begin{align} \int_0^T\!\!\!\int_{\Omega} & \left( -\vre \ue^h \cdot \partial_t \vec\psi^h -\vre \ue^h\otimes\ue^h : \nabla_h \vec\psi^h + \frac{1}{\ep}\vre\big(\ue^{h}\big)^\perp\cdot\vec\psi^h\right)\, \dxdt \label{eq:weak_to_conv_G}\\ &\qquad\qquad\qquad\qquad =-\int_0^T\!\!\!\int_{\Omega} \mathbb{S}(\nabla_x\vec\ue): \nabla_x \vec\psi\dxdt+ \int_{\Omega}\vrez \uez \cdot \vec\psi(0,\cdot)\dx\,. \nonumber \end{align} Making use of the uniform bounds of Subsection \ref{ss:unif-est_G}, we can pass to the limit in the $\partial_t$ term and in the viscosity term. Moreover, our assumptions imply that $\varrho_{0,\varepsilon}\vec{u}_{0,\varepsilon}\rightharpoonup \vec{u}_0$ in e.g. $L_{\rm loc}^2$.
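Notice also that, for $\vec\psi$ as in \eqref{eq:test-2}, the vanishing of all the gradient terms is a direct consequence of an integration by parts: for any (sufficiently smooth) scalar function $p$, one has
\[
\int_0^T\!\!\!\int_{\Omega}\nabla_xp\cdot\vec\psi\,\dxdt\,=\,-\int_0^T\!\!\!\int_{\Omega}p\;{\rm div}\,\vec\psi\,\dxdt\,=\,0\,,
\]
since ${\rm div}\,\vec\psi\,=\,{\rm div}_h\nabla_h^\perp\phi\,=\,\partial_1(-\partial_2\phi)+\partial_2\partial_1\phi\,=\,0$ and $\vec\psi$ has no vertical component.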
Next, the Coriolis term can be arranged in a standard way: using the structure of $\vec\psi$ and the mass equation \eqref{weak-con_G}, we can write \begin{align*} \int_0^T\!\!\!\int_{\Omega}\frac{1}{\ep}\vre\big(\ue^{h}\big)^\perp\cdot\vec\psi^h\,&=\,\int_0^T\!\!\!\int_{\mathbb{R}^2}\frac{1}{\ep}\langle\vre \ue^{h}\rangle \cdot \nabla_{h}\phi\,=\, -\varepsilon^{m-1}\int_0^T\!\!\!\int_{\mathbb{R}^2}\langle \varrho^{(1)}_\varepsilon\rangle\, \partial_t\phi\,-\,\varepsilon^{m-1}\int_{\mathbb{R}^2}\langle \varrho^{(1)}_{0,\varepsilon}\rangle\, \phi(0,\cdot )\,, \end{align*} which of course converges to $0$ when $\varepsilon\ra0^+$. It remains to tackle the convective term $\varrho_\varepsilon \ue^h \otimes \ue^h$. For it, we take advantage of Lemma \ref{lem:convterm_G} and relations \eqref{eq:limit_T1_G} and \eqref{eq:limit_T2_G}, but we still have to take care of the convergence for $M\rightarrow+\infty$ in \eqref{eq:limit_T1_G}. We start by reversing equalities \eqref{eq:T1_G} in the term on the right-hand side of \eqref{eq:limit_T1_G}: thus, we have to pass to the limit for $M\rightarrow+\infty$ in \[ \int^T_0\int_{\mathbb{R}^2}\vec U_M^h\otimes\vec U_M^h : \nabla_h \vec\psi^h\,\dx^h\,\dt\,. \] Now, we remark that, since $\vec U^h\in L^2_T(H^1)$ by \eqref{conv:u_G}, from \eqref{est:sobolev} we gather the strong convergence $S_M \vec U^h\longrightarrow \vec{U}^{h}$ in $L_{T}^{2}(H^{s})$ for any $s<1$, in the limit for $M\rightarrow +\infty$. Then, passing to the limit for $M\rightarrow+\infty$ in the previous relation is an easy task: we finally get, for $\varepsilon\ra0^+$, \begin{equation*} \int_0^T\int_{\Omega} \vre \ue^h\otimes\ue^h : \nabla_h \vec\psi^h\, \longrightarrow\, \int_0^T\int_{\mathbb{R}^2}\vec{U}^h\otimes\vec{U}^h : \nabla_h \vec\psi^h\,.
\end{equation*} In the end, we have shown that, letting $\varepsilon \rightarrow 0^+$ in \eqref{eq:weak_to_conv_G}, one obtains \begin{align*} &\int_0^T\!\!\!\int_{\mathbb{R}^2} \left(\vec{U}^{h}\cdot \partial_{t}\vec\psi^h+\vec{U}^{h}\otimes \vec{U}^{h}:\nabla_{h}\vec\psi^h\right)\, \dx^h \dt= \int_0^T\!\!\!\int_{\mathbb{R}^2} \mu \nabla_{h}\vec{U}^{h}:\nabla_{h}\vec\psi^h \, \dx^h \dt- \int_{\mathbb{R}^2}\langle\vec{u}_{0}^{h}\rangle\cdot \vec\psi^h(0,\cdot)\dx^h, \end{align*} for any test-function $\vec\psi$ as in \eqref{eq:test-f_G}. This implies \eqref{eq_lim_m:momentum_G}, concluding the proof of Theorem \ref{th:m>1}. \section{Proof of the convergence for $m=1$} \label{s:proof-1_G} In the present section, we complete the proof of the convergence in the case $m=1$ and $1/2<n<1$. We will use again the compensated compactness argument depicted in Subsection \ref{ss:convergence_G}, and in fact most of the computations apply also in this case. \subsection{The acoustic-Poincar\'e waves system}\label{ss:unifbounds_1_G} When $m=1$, the wave system \eqref{eq:wave_syst_G} takes the form \begin{equation} \label{eq:wave_m=1_G} \left\{\begin{array}{l} \varepsilon\,\partial_t \varrho_\varepsilon^{(1)}\,+\,{\rm div}\,\vec{V}_\varepsilon\,=\,0 \\[1ex] \varepsilon\,\partial_t\vec{V}_\varepsilon\,+\,\nabla_x \varrho^{(1)}_\varepsilon\,+\,\,\vec{e}_3\times \vec V_\varepsilon\,=\,\varepsilon\,\vec f_\varepsilon+\varepsilon^{2(1-n)}\vec g_\varepsilon\, \end{array} \right. \end{equation} where $\bigl(\varrho^{(1)}_\varepsilon\bigr)_\varepsilon$ and $\bigl(\vec V_\varepsilon\bigr)_\varepsilon$ are defined as in Paragraph \ref{sss:wave-eq_G}. This system is supplemented with the initial datum $\big(\varrho^{(1)}_{0,\varepsilon},\vr_{0,\varepsilon}\vec u_{0,\varepsilon}\big)$. 
Next, we regularise all the quantities by applying the Littlewood-Paley cut-off operator $S_M$ to \eqref{eq:wave_m=1_G}: we deduce that $\varrho^{(1)}_{\varepsilon,M}$ and $\vec V_{\varepsilon,M}$, defined as in \eqref{def_reg_vrho-V}, satisfy the regularised wave system \begin{equation} \label{eq:reg-wave_G} \left\{\begin{array}{l} \varepsilon\,\partial_t \varrho_{\varepsilon,M}^{(1)}\,+\,{\rm div}\,\vec{V}_{\varepsilon,M}\,=\,0 \\[1ex] \varepsilon\,\partial_t\vec{V}_{\varepsilon,M}\,+\,\nabla_x \varrho^{(1)}_{\varepsilon,M}\,+\,\,\vec{e}_3\times \vec V_{\varepsilon,M}\,=\,\varepsilon\,\vec f_{\varepsilon,M}+\varepsilon^{2(1-n)}\vec g_{\varepsilon,M}\, , \end{array} \right. \end{equation} in the domain $\mathbb{R}_+\times\Omega$, where we recall that $\vec f_{\varepsilon,M}:=S_M \vec f_\varepsilon$ and $\vec g_{\varepsilon,M}:=S_M \vec g_\varepsilon$. It goes without saying that a result similar to Proposition \ref{p:prop approx_G} holds true also in this case. As is apparent from the wave system \eqref{eq:wave_m=1_G} and its regularised version, when $m=1$ the pressure term and the Coriolis term are in balance, since they are of the same order. This represents the main change with respect to the case $m>1$, and it comes into play in the compensated compactness argument. Therefore, although most of the computations can be repeated verbatim from the previous section, let us present the main points of the argument. \subsection{Handling the convective term when $m=1$} \label{ss:convergence_1_G} Let us take care of the convergence of the convective term when $m=1$. First of all, it is easy to see that Lemma \ref{lem:convterm_G} still holds true.
Therefore, given a test-function $\vec\psi\in C_c^\infty\big([0,T[\,\times\Omega;\mathbb{R}^3\big)$ such that ${\rm div}\,\vec\psi=0$ and $\partial_3\vec\psi=0$, we have to pass to the limit in the term \begin{align*} -\int_{0}^{T}\int_{\Omega} \vec{V}_{\varepsilon ,M}\otimes \vec{V}_{\varepsilon ,M}: \nabla_{x}\vec{\psi}\,&=\, \int_{0}^{T}\int_{\Omega}{\rm div}\,\left(\vec{V}_{\varepsilon ,M}\otimes \vec{V}_{\varepsilon ,M}\right) \cdot \vec{\psi}\,=\, \int_{0}^{T}\int_{\mathbb{R}^2} \left(\mathcal{T}_{\varepsilon ,M}^{1}+\mathcal{T}_{\varepsilon, M}^{2}\right)\cdot\vec{\psi}^h\,, \end{align*} where we agree again that the torus $\mathbb{T}^1$ has been normalised so that its Lebesgue measure is equal to $1$ and we have adopted the same notation as in \eqref{def:T1-2_G}. At this point, we notice that the analysis of $\mathcal{T}_{\varepsilon ,M}^{2}$ can be performed as in Paragraph \ref{sss:term2_G}, because we have $m+1>2n$, i.e. $n<1$. \emph{Mutatis mutandis}, we find relation \eqref{eq:limit_T2_G} also in the case $m=1$. Let us now deal with the term $\mathcal{T}_{\varepsilon ,M}^{1}$. Arguing as in Paragraph \ref{sss:term1_G}, we may write it as \begin{equation*} \mathcal{T}_{\varepsilon ,M}^{1}\,=\,\left({\rm curl}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle-\langle \varrho^{(1)}_{\varepsilon ,M}\rangle \right)\langle \vec{V}_{\varepsilon ,M}^{h}\rangle^{\perp}+\mathcal{R}_{\varepsilon ,M} . \end{equation*} Now we use the horizontal part of \eqref{eq:reg-wave_G}: averaging it with respect to the vertical variable and applying the operator ${\rm curl}_h$, we find \begin{equation*} \varepsilon\,\partial_t{\rm curl}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,+\,{\rm div}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle \,=\, \varepsilon\, {\rm curl}_h\langle \vec f_{\varepsilon ,M}^{h}\rangle\, . 
\end{equation*} Taking the difference of this equation with the first one in \eqref{eq:reg-wave_G}, we discover that \begin{equation*} \partial_t\widetilde\Gamma_{\varepsilon,M} \,=\,{\rm curl}_h\langle \vec f_{\varepsilon ,M}^{h}\rangle\,,\qquad\qquad \mbox{ where }\qquad \widetilde\Gamma_{\varepsilon, M}:={\rm curl}_h\langle \vec{V}_{\varepsilon ,M}^{h}\rangle\,-\,\langle \varrho^{(1)}_{\varepsilon ,M}\rangle\,. \end{equation*} An argument analogous to the one used after \eqref{eq:gamma_G} above, based on Aubin-Lions Theorem, shows that $\big(\widetilde\Gamma_{\varepsilon,M}\big)_{\varepsilon}$ is compact in e.g. $L_{T}^{2}(L_{\rm loc}^{2})$. Then, this sequence converges strongly (up to extraction of a suitable subsequence, not relabelled here) to a tempered distribution $\widetilde\Gamma_M$ in the same space. Using the previous property, we may deduce that \begin{equation*} \widetilde\Gamma_{\varepsilon,M}\,\langle \vec{V}_{\varepsilon ,M}^{h}\rangle^{\perp}\,\longrightarrow\, \widetilde\Gamma_M\, \langle \vec{V}_{M}^{h}\rangle^{\perp}\qquad \text{ in }\qquad \mathcal{D}^{\prime}\big(\mathbb{R}_+\times\mathbb{R}^2\big), \end{equation*} where we have $\langle \vec{V}_{M}^{h}\rangle=\langle S_M\vec{U}^{h}\rangle$ and $\widetilde\Gamma_M={\rm curl}_h\langle S_M \vec{U}^{h}\rangle-\langle S_M\varrho^{(1)}\rangle$. Owing to the regularity of the target velocity $\vec U^h$, we can pass to the limit also for $M\rightarrow+\infty$, as detailed in Subsection \ref{ss:limit_G} above. Thus, we find \begin{equation} \label{eq:limit_T1-1_G} \int^T_0\!\!\!\int_{\Omega}\varrho_\varepsilon\,\vec{u}_\varepsilon\otimes \vec{u}_\varepsilon: \nabla_{x}\vec{\psi}\, \dxdt\,\longrightarrow\, \int^T_0\!\!\!\int_{\mathbb{R}^2}\big(\vec U^h\otimes\vec U^h:\nabla_h\vec\psi^h\,-\, \varrho^{(1)}\,(\vec U^h)^\perp\cdot\vec\psi^h\big)\dx^h\dt, \end{equation} for all test functions $\vec\psi$ such that ${\rm div}\,\vec\psi=0$ and $\partial_3\vec\psi=0$. 
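For the reader's convenience, let us point out the elementary identity behind the previous computation: with the conventions ${\rm curl}_h\,\vec v\,=\,\partial_1 v_2-\partial_2 v_1$ and $\vec v^{\,\perp}=(-v_2,v_1)$ (which we assume here), one has
\[
{\rm curl}_h\big(\vec v^{\,\perp}\big)\,=\,\partial_1 v_1\,+\,\partial_2 v_2\,=\,{\rm div}_h\,\vec v\,,\qquad\qquad {\rm curl}_h\nabla_h f\,=\,0\,.
\]
Thus, applying ${\rm curl}_h$ to the vertically averaged horizontal part of \eqref{eq:reg-wave_G} kills the pressure term and turns the Coriolis term into ${\rm div}_h\langle\vec V^h_{\varepsilon,M}\rangle$, which is precisely the quantity appearing in the first equation of \eqref{eq:reg-wave_G} after vertical average.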
Recall the convention $|\mathbb{T}^1|=1$. Notice that, since $\vec U^h=\nabla_h^\perp \varrho^{(1)}$ when $m=1$ (keep in mind Proposition \ref{p:limit_iso_G}), the last term in the integral on the right-hand side is actually zero. \subsection{End of the study} \label{ss:limit_1_G} Thanks to the previous analysis, we are now ready to pass to the limit in equation \eqref{weak-mom_G} also in the case when $m=1$. For this, we take a test-function $\vec\psi$ as in \eqref{eq:test-2}; notice in particular that ${\rm div}\,\vec\psi=0$ and $\partial_3\vec\psi=0$. Then, once again all the gradient terms and all the contributions coming from the vertical component of the momentum equation vanish identically, when tested against such a $\vec\psi$. Recall that all the integrals will be performed on $\mathbb{R}^2$. So, equation \eqref{weak-mom_G} reduces to \begin{align*} \int_0^T\!\!\!\int_{\Omega} \left( -\vre \ue \cdot \partial_t \vec\psi -\vre \ue\otimes\ue : \nabla \vec\psi + \frac{1}{\ep}\vre\big(\ue^{h}\big)^\perp\cdot\vec\psi^h+\mathbb{S}(\nabla_x\vec\ue): \nabla_x \vec\psi\right) =\int_{\Omega}\vrez \uez \cdot \vec\psi(0,\cdot)\,.
\end{align*} For the rotation term, we can test the first equation in \eqref{eq:wave_m=1_G} against $\phi$ to get \begin{equation*} \begin{split} -\int_0^T\!\!\!\int_{\mathbb{R}^2} \left( \langle \varrho^{(1)}_{\varepsilon}\rangle\, \partial_{t}\phi +\frac{1}{\varepsilon}\, \langle\varrho_{\varepsilon}\ue^{h}\rangle\cdot \nabla_{h}\phi\right)= \int_{\mathbb{R}^2}\langle \varrho^{(1)}_{0,\varepsilon }\rangle\, \phi (0,\cdot ) \, , \end{split} \end{equation*} whence we deduce that \begin{align*} \int_0^T\!\!\!\int_{\Omega}\frac{1}{\ep}\vre\big(\ue^{h}\big)^\perp\cdot\vec\psi^h\,&=\,\int_0^T\!\!\!\int_{\mathbb{R}^2}\frac{1}{\ep}\langle\vre \ue^{h}\rangle \cdot \nabla_{h}\phi\,=\,-\,\int_0^T\!\!\!\int_{\mathbb{R}^2}\langle \varrho^{(1)}_\varepsilon\rangle\, \partial_t\phi\,-\,\int_{\mathbb{R}^2}\langle \varrho^{(1)}_{0,\varepsilon}\rangle\, \phi(0,\cdot )\,. \end{align*} In addition, the convergence of the convective term has been performed in \eqref{eq:limit_T1-1_G}, and for the other terms, we can argue as in Subsection \ref{ss:limit_G}. Hence, letting $\varepsilon \rightarrow 0^+$ in the equation above, we get \begin{align*} &-\int_0^T\!\!\!\int_{\mathbb{R}^2} \left(\vec{U}^{h}\cdot \partial_{t}\nabla_{h}^{\perp} \phi+ \vec{U}^{h}\otimes \vec{U}^{h}:\nabla_{h}(\nabla_{h}^{\perp}\phi )+\varrho^{(1)}\, \partial_t \phi \right)\dx^h \dt\\ &\qquad\qquad=-\int_0^T\!\!\!\int_{\mathbb{R}^2} \mu \nabla_{h}\vec{U}^{h}:\nabla_{h}(\nabla_{h}^{\perp}\phi ) \dx^h \dt+\int_{\mathbb{R}^2}\left(\langle\vec{u}_{0}^{h}\rangle\cdot \nabla _{h}^{\perp}\phi (0,\cdot )+ \langle \varrho^{(1)}_{0}\rangle\phi (0,\cdot )\right) \dx^h\, , \end{align*} which is the weak formulation of equation \eqref{eq_lim:QG_G}. In the end, also Theorem \ref{th:m=1} is proved. 
\part{The Euler equations} \chapter{The fast rotation limit for Euler}\label{chap:Euler} \begin{quotation} We conclude this thesis by describing the behaviour of a fluid which, in contrast with Chapters \ref{chap:multi-scale_NSF} and \ref{chap:BNS_gravity}, evolves far from the physical boundaries. An example of such flows is given by currents in the middle of the oceans, i.e. far enough from the surface and the bottom. In this context, we can assume the following physical approximations: \begin{itemize} \item the fluid is incompressible; \item the density is a small perturbation of a constant profile (the \textit{Boussinesq} approximation); \item the domain is the $\mathbb{R}^2$ plane: the motion of a $3$-D highly rotating fluid is, in a first approximation, planar (the \textit{Taylor-Proudman} theorem). \end{itemize} Then, the system reads: \begin{equation}\label{full Euler} \begin{cases} \partial_t \varrho +{\rm div}\, (\varrho \vu)=0\\ \partial_t (\varrho \vu)+{\rm div}\, (\varrho \vu \otimes \vu)+ \dfrac{1}{Ro}\varrho \vu^\perp+\nabla p=0\\ {\rm div}\, \vu =0 \end{cases} \end{equation} in the domain $\Omega =\mathbb{R}^2$. Since in this chapter there will be no more competition between the horizontal and vertical scales, for notational convenience, we will drop everywhere (in the differential operators) the subscript $x$. With respect to the previous systems (see \eqref{chap2:syst_NSFC} and \eqref{chap3:BNSC} in this respect), the Euler equations \eqref{full Euler} form an incompressible system without viscosity effects and with a hyperbolic structure. For that reason, we will need different analysis techniques (see Appendix \ref{app:Tools}) and, in addition, the functional framework will change (we will now work in regular spaces) in order to preserve the initial regularity. The topics presented here are part of the pre-print \cite{Sbaiz}. \medbreak Let us now give a summary of the chapter.
In Section \ref{s:result_E}, we collect our assumptions and we state our main results. In Section \ref{s:well-posedness_original_problem}, we investigate the well-posedness issues in the Sobolev spaces $H^s$ for any $s>2$. In Section \ref{s:sing-pert_E}, we study the singular perturbation problem, establishing constraints that the limit points have to satisfy and proving the convergence to the quasi-homogeneous Euler system thanks to a \textit{compensated compactness} technique. In Section \ref{s:well-posedness_Q-H} we review, for the quasi-homogeneous limiting system, the results presented in \cite{C-F_RWA} and \cite{C-F_sub}, and we explicitly derive the lifespan of solutions to the limit equations (see relation \eqref{T-ast:improved} in this regard). In the last section, we deal with the lifespan analysis for system \eqref{full Euler} and we point out some consequences of the continuation criterion we have established (see in particular Subsection \ref{ss:cont_criterion+consequences}). \end{quotation} \section{The density-dependent Euler problem} \label{s:result_E} In this section, we formulate our working setting (Subsection \ref{ss:FormProb_E}) and we state the main results (Subsection \ref{ss:results_E}). \subsection{Formulation} \label{ss:FormProb_E} In this subsection, we present the rescaled density-dependent Euler equations with the Coriolis force, which we are going to consider in our study, and we formulate the main working hypotheses. To begin with, let us introduce the ``primitive system'', that is the rescaled incompressible Euler system \eqref{full Euler}, supplemented with the scaling $Ro=\varepsilon$, where $\varepsilon \in \,]0,1]$ is a small parameter. 
Thus, the system consists of the continuity equation (conservation of mass), the momentum equation and the divergence-free condition, respectively: \begin{equation}\label{Euler_eps} \begin{cases} \partial_t \varrho_{\varepsilon} +{\rm div}\, (\varrho_{\varepsilon} \vu_{\varepsilon})=0\\ \partial_t (\varrho_{\varepsilon} \vu_{\varepsilon})+{\rm div}\, (\varrho_{\varepsilon} \vu_{\varepsilon} \otimes \vu_{\varepsilon})+ \dfrac{1}{\varepsilon}\varrho_{\varepsilon} \vu_{\varepsilon}^{\perp}+\dfrac{1}{\varepsilon}\nabla \Pi_\varepsilon=0\\ {\rm div}\, \vu_{\varepsilon} =0\, . \end{cases} \end{equation} We point out that, here above in \eqref{Euler_eps}, the domain $\Omega$ is the plane $\mathbb{R}^2$ and the unknowns are $\varrho_\varepsilon\in \mathbb{R}_+$ and $\vec u_\varepsilon\in \mathbb{R}^2$. In \eqref{Euler_eps}, the pressure term has to scale like $1/\varepsilon$, since it is the only force able to compensate the effect of the fast rotation at the geophysical scale. From now on, in order to ensure that the Lipschitz condition \eqref{Lip_cond} holds, we fix $$ s>2\, . $$ We assume that the initial density is a small perturbation of a constant profile. Namely, we consider initial densities of the following form: \begin{equation}\label{in_vr_E} \vrez = 1 + \ep \, R_{0,\varepsilon} \, , \end{equation} where we suppose $R_{0,\varepsilon}$ to be a bounded measurable function satisfying the controls \begin{align} &\sup_{\varepsilon\in\,]0,1]}\left\| R_{0,\varepsilon} \right\|_{L^\infty(\mathbb{R}^2)}\,\leq \, C \label{hyp:ill_data_R_0}\\ &\sup_{\varepsilon\in\,]0,1]}\left\| \nabla R_{0,\varepsilon} \right\|_{H^{s-1}(\mathbb{R}^2)}\,\leq \, C \label{hyp:ill_data_nablaR_0} \end{align} and the initial mass density is bounded and bounded away from zero, i.e.
for all $\varepsilon \in\;]0,1]$: \begin{equation}\label{assumption_densities} 0<\underline{\varrho}\leq \varrho_{0,\varepsilon}(x) \,\leq \, \overline{\varrho}\, , \qquad x\in \mathbb{R}^2 \end{equation} where $\underline{\varrho},\overline{\varrho}>0$ are positive constants. As for the initial velocity fields, due to the functional framework needed for the well-posedness analysis, we require the following uniform bound \begin{equation}\label{hyp:data_u_0} \sup_{\varepsilon\in\,]0,1]}\left\| \vu_{0,\varepsilon} \right\|_{H^s(\mathbb{R}^2)}\,\leq \, C\, . \end{equation} Thanks to the previous uniform estimates, we can assume (up to passing to subsequences) that there exist $R_0 \in W^{1,\infty}(\mathbb{R}^2)$, with $\nabla R_0\in H^{s-1}(\mathbb{R}^2)$, and $\vec u_0\in H^s(\mathbb{R}^2)$ such that \begin{equation}\label{init_limit_points} \begin{split} R_0:=\lim_{\varepsilon \rightarrow 0}R_{0,\varepsilon} \quad &\text{in}\quad L^\infty (\mathbb{R}^2) \\ \nabla R_0:=\lim_{\varepsilon \rightarrow 0}\nabla R_{0,\varepsilon}\quad &\text{in}\quad H^{s-1} (\mathbb{R}^2)\\ \vu_0:=\lim_{\varepsilon \rightarrow 0}\vu_{0,\varepsilon}\quad &\text{in}\quad H^s (\mathbb{R}^2)\, , \end{split} \end{equation} where we agree that the previous limits are taken in the corresponding weak-$\ast$ topology. \subsection{Main statements}\label{ss:results_E} \medbreak We can now state our main results. We recall the notation $\big(f_\varepsilon\big)_{\varepsilon} \subset X$ to denote that the family $\big(f_\varepsilon\big)_{\varepsilon}$ is uniformly (in $\varepsilon$) bounded in $X$. The following theorem establishes the local well-posedness of system \eqref{Euler_eps} in the Sobolev spaces $B^s_{2,2}\equiv H^s$ (see Section \ref{s:well-posedness_original_problem}) and gives a lower bound for the lifespan of solutions (see Section \ref{s:lifespan_full}).
\begin{theorem}\label{W-P_fullE} For any $\varepsilon \in\, ]0,1]$, let initial densities $\varrho_{0,\varepsilon}$ be as in \eqref{in_vr_E} and satisfy the controls \eqref{hyp:ill_data_R_0} to \eqref{assumption_densities}. Let $\vu_{0,\varepsilon}$ be divergence-free vector fields such that $\vu_{0,\varepsilon} \in H^s(\mathbb{R}^2)$ for $s>2$. \\ Then, for any $\varepsilon>0$, there exists a time $T_\varepsilon^\ast >0$ such that the system \eqref{Euler_eps} has a unique solution $(\varrho_\varepsilon, \vu_\varepsilon, \nabla \Pi_\varepsilon)$ where \begin{itemize} \item $\varrho_\varepsilon$ belongs to the space $C^0([0,T_\varepsilon^\ast]\times \mathbb{R}^2)$ with $\nabla \varrho_\varepsilon \in C^0([0,T_\varepsilon^\ast]; H^{s-1}(\mathbb{R}^2))$; \item $\vu_\varepsilon$ belongs to the space $C^0([0,T_\varepsilon^\ast]; H^s(\mathbb{R}^2))$; \item $\nabla \Pi_\varepsilon$ belongs to the space $C^0([0,T_\varepsilon^\ast]; H^s(\mathbb{R}^2))$. \end{itemize} Moreover, the lifespan $T_\varepsilon^\ast$ of the solution to the two-dimensional density-dependent incompressible Euler equations with the Coriolis force is bounded from below by \begin{equation}\label{improv_life_fullE} \frac{C}{\|\vec u_{0,\varepsilon}\|_{H^s}}\log\left(\log\left(\frac{C\, \|\vec u_{0,\varepsilon}\|_{H^s}}{\max \{\mathcal A_\varepsilon(0),\, \varepsilon \, \mathcal A_\varepsilon(0)\, \|\vec u_{0,\varepsilon}\|_{H^s}\}}+1\right)+1\right)\, , \end{equation} where $\mathcal A_\varepsilon (0):= \|\nabla R_{0,\varepsilon}\|_{H^{s-1}}+\varepsilon\, \|\nabla R_{0,\varepsilon}\|_{H^{s-1}}^{\lambda +1}$, for some suitable $\lambda\geq 1$. 
In particular, one has $$\inf_{\varepsilon>0}T_\varepsilon^\ast >0\, .$$ \end{theorem} Looking at \eqref{improv_life_fullE}, we stress the fact that the fast rotational effects are not enough to state a global well-posedness result for system \eqref{Euler_eps}, in the sense that $T^\ast_\varepsilon$ does not tend to $+\infty$ when $\varepsilon\rightarrow 0^+$. Now, once we have stated the local in time well-posedness for system \eqref{Euler_eps} in the Sobolev spaces $H^s$, in Section \ref{s:sing-pert_E} we address the singular perturbation problem describing, in a rigorous way, the limit dynamics depicted by the quasi-homogeneous incompressible Euler system \eqref{system_Q-H_thm} below. \begin{theorem}\label{thm:limit_dynamics} Let $s>2$. For any fixed value of $\varepsilon \in \; ]0,1]$, let initial data $\left(\varrho_{0,\varepsilon},\vec u_{0,\varepsilon}\right)$ verify the hypotheses fixed in Paragraph \ref{ss:FormProb_E}, and let $\left( \vre, \ue\right)$ be a corresponding solution to system \eqref{Euler_eps}. Let $\left(R_0,\vec u_0\right)$ be defined as in \eqref{init_limit_points}. Then, one has the following convergence properties: \begin{align*} \varrho_\ep \rightarrow 1 \qquad\qquad \mbox{ in } \qquad &L^\infty\big([0,T^\ast]; L^\infty(\mathbb{R}^2 )\big)\,, \\ R_\varepsilon:=\frac{\varrho_\ep - 1}{\ep} \weakstar R \qquad\qquad \mbox{ in }\qquad &L^\infty\bigl([0,T^\ast]; L^\infty(\mathbb{R}^2)\bigr)\,, \\ \nabla R_\varepsilon \weakstar \nabla R \qquad\qquad \mbox{ in }\qquad &L^\infty\bigl([0,T^\ast]; H^{s-1}(\mathbb{R}^2)\bigr)\,, \\ \vec{u}_\ep \weakstar \vec{u} \qquad\qquad \mbox{ in }\qquad &L^\infty\big([0,T^\ast];H^s(\mathbb{R}^2)\big)\, .
\end{align*} In addition, $\Big(R\, ,\, \vec{u} \Big)$ is a solution to the quasi-homogeneous incompressible Euler system in $[0,T^\ast] \times \mathbb{R}^2$: \begin{equation}\label{system_Q-H_thm} \begin{cases} \partial_t R+{\rm div}\, (R\vu)=0 \\ \partial_t \vec u+{\rm div}\, \left(\vec{u}\otimes\vec{u}\right)+R\vu^\perp+ \nabla \Pi =0 \\ {\rm div}\, \vec u\,=\,0\, , \end{cases} \end{equation} where $\nabla \Pi$ is a suitable pressure term belonging to $L^\infty\big([0,T^\ast];H^s(\mathbb{R}^2)\big)$. \end{theorem} \begin{remark} Since system \eqref{system_Q-H_thm} is well-posed in the previous functional setting (see Theorem \ref{thm:well-posedness_Q-H-Euler} below), we get the convergence of the whole sequence of weak solutions to the solutions of the target equations on the large time interval where the weak solutions to the primitive equations exist. \end{remark} Thus, in the limit, we have found that the dynamics is prescribed by the quasi-homogeneous incompressible Euler system \eqref{system_Q-H_thm}, for which we state the local well-posedness in $H^s$ (see Section \ref{s:well-posedness_Q-H}). It is worth remarking that the global well-posedness issue for this system is still an open problem. \begin{theorem}\label{thm:well-posedness_Q-H-Euler} Take $s>2$. Let $\big(R_0,u_0 \big)$ be initial data such that $R_0\in L^{\infty}(\mathbb{R}^2)$ and $\vu_0 \,\in H^s(\mathbb{R}^2)$, with $\nabla R_0\in H^{s-1}(\mathbb{R}^2)$ and ${\rm div}\, \vu_0\,=\,0$. Then, there exists a time $T^\ast > 0$ such that, on $[0,T^\ast]\times\mathbb{R}^2$, problem \eqref{system_Q-H_thm} has a unique solution $(R,\vu, \nabla \Pi)$ with the following properties: \begin{itemize} \item $R\in C^0\big([0,T^\ast]\times \mathbb{R}^2\big)$ and $\nabla R\in C^0\big([0,T^\ast];H^{s-1}(\mathbb{R}^2)\big)$; \item $\vu$ belongs to $C^0\big([0,T^\ast]; H^{s}(\mathbb{R}^2)\big)$; \item the pressure term $\nabla \Pi$ is in $C^0\big([0,T^\ast];H^s(\mathbb{R}^2)\big)$.
\end{itemize} In addition, the lifespan $T^\ast>0$ of the solution $(R, \vu, \nabla \Pi)$ to the $2$-D quasi-homogeneous Euler system \eqref{system_Q-H_thm} enjoys the following lower bound: \begin{equation}\label{improved_low_bound} T^\ast\geq \frac{C}{\|\vu_0\|_{H^s}}\log\left(\log \left(C\frac{\|\vu_0\|_{H^s}}{\|R_0\|_{L^\infty}+\|\nabla R_0\|_{H^{s-1}}}+1\right)+1\right)\, , \end{equation} where $C>0$ is a ``universal'' constant, independent of the initial datum. \end{theorem} The proof of the previous ``asymptotically global'' well-posedness result is presented in Subsection \ref{ss:improved_lifespan}. \section{Well-posedness for the original problem}\label{s:well-posedness_original_problem} This section is devoted to the well-posedness issue in the $H^s$ spaces stated in Theorem \ref{W-P_fullE}. We recall that, due to the Littlewood-Paley theory, we have the equivalence between $H^s$ and $B^s_{2,2}$ spaces. We also underline that in this section we keep $\varepsilon \in \; ]0,1] $ fixed. However, we will keep track of it, explicitly pointing out the dependence on the Rossby number in all the computations, in order to get controls that are uniform with respect to the $\varepsilon$-parameter. The choice of keeping the dependence on the rotational parameter explicit is motivated by the fact that we will perform the fast rotation limit (see Section \ref{s:sing-pert_E} below). First of all, since $\varrho_\varepsilon$ is a small perturbation of a constant profile, we set \begin{equation}\label{def_a_veps} \alpha_\varepsilon :=\frac{1}{\varrho_\varepsilon}-1=\varepsilon a_\varepsilon\quad \text{with}\quad a_\varepsilon:=-R_\varepsilon/\varrho_\varepsilon \, .
\end{equation} The choice of looking at $\alpha_\varepsilon$ is dictated by the fact that we will extensively exploit the elliptic equation \begin{equation}\label{general_elliptic_est} -{\rm div}\, \left((1+\alpha_\varepsilon) \nabla \Pi_\varepsilon \right)=\, \varepsilon\, {\rm div}\, \left(\ue \cdot \nabla \ue +\frac{1}{\varepsilon}\vec u_\varepsilon^\perp \right)\, , \end{equation} which is obtained by taking the divergence of the momentum equation. Now, using the divergence-free condition, we can rewrite the system \eqref{Euler_eps} in the following way (see also Lemma 3 in \cite{D-F_JMPA}): \begin{equation}\label{Euler-a_eps_1} \begin{cases} \partial_t a_{\varepsilon} +\vu_\varepsilon \cdot \nabla a_\varepsilon=0\\ \partial_t \vu_{\varepsilon}+ \vu_{\varepsilon} \cdot \nabla \vu_{\varepsilon}+ \dfrac{1}{\varepsilon} \vu_{\varepsilon}^{\perp}+(1+\varepsilon a_\varepsilon)\dfrac{1}{\varepsilon}\nabla \Pi_\varepsilon=0\\ {\rm div}\, \vu_{\varepsilon} =0\, , \end{cases} \end{equation} with the initial condition $(a_\varepsilon, \ue)_{|t=0}=(a_{0,\varepsilon},\vu_{0,\varepsilon})$. We start by presenting the proof of existence of solutions at the claimed regularity. For that scope, we follow a standard procedure: first, we construct a sequence of smooth approximate solutions. Next, we deduce uniform bounds (with respect to the approximation parameter and also to $\varepsilon$) for those regular solutions. Finally, by use of those uniform bounds and an energy method, together with an interpolation argument, we are able to take the limit in the approximation parameter and gather the existence of a solution to the original problem. We end this Section \ref{s:well-posedness_original_problem} by proving uniqueness of solutions in the claimed functional setting, by means of a relative entropy method.
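Before constructing the solutions, let us verify the algebraic identity in \eqref{def_a_veps}: writing $\varrho_\varepsilon\,=\,1+\varepsilon\,R_\varepsilon$ (by the very definition of $R_\varepsilon$ in Theorem \ref{thm:limit_dynamics}), we compute
\[
\alpha_\varepsilon\,=\,\frac{1}{\varrho_\varepsilon}\,-\,1\,=\,\frac{1-\varrho_\varepsilon}{\varrho_\varepsilon}\,=\,-\,\varepsilon\,\frac{R_\varepsilon}{\varrho_\varepsilon}\,=\,\varepsilon\,a_\varepsilon\,.
\]
In particular, as long as the bounds \eqref{assumption_densities} are propagated in time (which is the case, the density being transported by a divergence-free velocity field), $\alpha_\varepsilon$ remains of size $O(\varepsilon)$ whenever $R_\varepsilon$ is bounded in $L^\infty$.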
\subsection{Construction of smooth approximate solutions}\label{sec:construction_smooth_sol} For any $n\in \mathbb{N}$, let us define \begin{equation*} (a_{0,\varepsilon}^n, \vec u_{0,\varepsilon}^n ):= (S_n a_{0,\varepsilon}, S_n \vec u_{0,\varepsilon})\, , \end{equation*} where $S_{n}$ is the low frequency cut-off operator introduced in \eqref{eq:S_j} in Section \ref{app:LP}. We stress also the fact that $a_{0,\varepsilon}\in C^0_{\rm loc}$, since $a_{0,\varepsilon}$ and $\nabla a_{0,\varepsilon}$ are in $L^\infty$. Then, for any $n\in \mathbb{N}$, we have the density functions $a_{0,\varepsilon}^n\in L^\infty$. Moreover, one has that $\nabla a_{0,\varepsilon}^n$ and $\vec u_{0,\varepsilon}^n$ belong to $H^\infty:=\bigcap_{\sigma \in \mathbb{R}} H^\sigma$ which is embedded (for a suitable topology on $H^\infty$) in the space $C_b^\infty$ of $C^\infty$ functions which are globally bounded together with all their derivatives. In addition, by standard properties of mollifiers (see Section \ref{section:molli} of the Appendix \ref{appendixA}), one has the following strong convergences \begin{equation}\label{conv_in_data} \begin{split} a^n_{0,\varepsilon}\rightarrow a_{0,\varepsilon}\quad &\text{in}\quad C_{\rm loc}^{0} \\ \nabla a^n_{0,\varepsilon}\rightarrow \nabla a_{0,\varepsilon}\quad &\text{in}\quad H^{s-1}\\ \vu_{0,\varepsilon}^n\rightarrow \vu_{0,\varepsilon} \quad &\text{in}\quad H^s \, . \end{split} \end{equation} This having been established, we are going to define a sequence of approximate solutions to system \eqref{Euler-a_eps_1} by induction. First of all, we set $(a_\varepsilon^0,\vu_\varepsilon^0, \nabla \Pi_\varepsilon^0)=(a^0_{0,\varepsilon},\vu^0_{0,\varepsilon},0)$. Then, for all $\sigma \in \mathbb{R}$, we have that $\nabla a_{\varepsilon}^0 ,\vec u_{\varepsilon}^0 \in H^\sigma$ and $a^0_\varepsilon \in L^\infty$ with ${\rm div}\, \vec u_{\varepsilon}^0=0$. 
Next, assume that the couple $(a_\varepsilon^n, \ue^n)$ is given such that, for all $\sigma \in \mathbb{R}$, \begin{equation*} a^n_\varepsilon \in C^0(\mathbb{R}_+;L^\infty)\, ,\quad \nabla a_{\varepsilon}^n ,\vec u_{\varepsilon}^n \in C^0(\mathbb{R}_+; H^\sigma)\quad \text{and}\quad {\rm div}\, \vec u_{\varepsilon}^n=0\, . \end{equation*} First of all, we define $a_\varepsilon^{n+1}$ as the unique solution to the linear transport equation \begin{equation}\label{mass_eq} \partial_t a_{\varepsilon}^{n+1} +\vu_\varepsilon^n \cdot \nabla a_\varepsilon^{n+1}=0 \quad \text{with}\quad {(a_\varepsilon^{n+1})}_{|t=0}=a_{0,\varepsilon}^{n+1}\, . \end{equation} Since, by the inductive hypothesis and embeddings, $\ue^n$ is divergence-free, smooth and uniformly bounded with all its derivatives, we can deduce that $a_\varepsilon^{n+1} \in L^\infty(\mathbb{R}_+; L^\infty )$. Moreover, from \begin{equation*} \partial_t\, \partial_i a_\varepsilon^{n+1} +\ue^{n}\cdot \nabla\, \partial_i a_\varepsilon^{n+1} =-\partial_i \ue^n \cdot \nabla a_\varepsilon^{n+1} \quad \text{with}\quad {(\partial_ia_\varepsilon^{n+1})}_{|t=0}=\partial_i a_{0,\varepsilon}^{n+1} \quad \text{for}\; i=1,2 \end{equation*} and thanks to Theorem \ref{thm_transport}, we can propagate all the $H^{\sigma}$ norms of the initial datum. We deduce that $a_\varepsilon^{n+1}\in C^0(\mathbb{R}_+;L^\infty)$ and $\nabla a_\varepsilon^{n+1}\in C^0(\mathbb{R}_+;H^{\sigma})$ for any $\sigma \in \mathbb{R}$. Next, we consider the approximate linear iteration \begin{equation}\label{approx_iteration_1} \begin{cases} \partial_t \vu_{\varepsilon}^{n+1}+ \vu_{\varepsilon}^n \cdot \nabla \vu_{\varepsilon}^{n+1}+ \dfrac{1}{\varepsilon} \vu_{\varepsilon}^{\perp, n+1}+(1+\varepsilon a_\varepsilon^{n+1})\dfrac{1}{\varepsilon}\nabla \Pi_\varepsilon^{n+1}=0\\ {\rm div}\, \vu_{\varepsilon}^{n+1} =0\\ (\vu_\varepsilon^{n+1})_{|t=0}=\vu_{0,\varepsilon}^{n+1}\, .
\end{cases} \end{equation} At this point, one can solve the previous linear problem finding a unique solution $\vu_\varepsilon^{n+1}\in C^0(\mathbb{R}_+; H^\sigma )$ for any $\sigma \in \mathbb{R}$ and the pressure term $\nabla \Pi_\varepsilon^{n+1}$ can be uniquely determined (we refer to \cite{D_AT} for details in this respect). \subsection{Uniform estimates for the approximate solutions}\label{ss:unif_est} We now have to show (by induction) uniform bounds for the sequence $(a_\varepsilon^n, \ue^n, \nabla \Pi_\varepsilon^n)_{n\in \mathbb{N}}$ we have constructed above. We start by finding uniform estimates for $a_\varepsilon^{n+1}$. Thanks to equation \eqref{mass_eq} and the divergence-free condition on $\ue^n$, we can propagate the $L^\infty$ norm for any $t\geq 0$: \begin{equation}\label{eq:transport_density} \|a_\varepsilon^{n+1}(t)\|_{L^\infty} \leq\|a_{0,\varepsilon}^{n+1}\|_{L^\infty}\leq C \|a_{0,\varepsilon}\|_{L^\infty}\, . \end{equation} At this point we want to estimate $\nabla a_\varepsilon^{n+1}$ in $H^{s-1}$. We have for $i=1,2$: \begin{equation*} \partial_t\, \partial_i a_\varepsilon^{n+1} +\ue^{n}\cdot \nabla\, \partial_i a_\varepsilon^{n+1} =-\partial_i \ue^n \cdot \nabla a_\varepsilon^{n+1} \, . \end{equation*} Taking the non-homogeneous dyadic blocks $ \Delta_j$, we obtain \begin{equation*} \partial_t \Delta_j \, \partial_i a_\varepsilon^{n+1}+\ue^n\cdot \nabla \Delta_j\, \partial_i a_\varepsilon^{n+1}=[\ue^n\cdot \nabla,\Delta_j]\, \partial_i a_\varepsilon^{n+1} -\Delta_j\left(\partial_i \ue^n \cdot \nabla a_\varepsilon^{n+1} \right)\, . 
\end{equation*} Multiplying the previous relation by $ \Delta_j\, \partial_i a_\varepsilon^{n+1}$, we have \begin{equation*} \| \Delta_j\, \partial_i a_\varepsilon^{n+1}(t)\|_{L^2}\leq \| \Delta_j\, \partial_i a_{0,\varepsilon}^{n+1}\|_{L^2}+C \int_0^t\left(\left\|[\ue^n\cdot \nabla, \Delta_j] \, \partial_i a_\varepsilon^{n+1}\right\|_{L^2}+\|\Delta_j\left(\partial_i \ue^n \cdot \nabla a_\varepsilon^{n+1} \right) \|_{L^2} \right)\, \detau \, . \end{equation*} We apply now the second commutator estimate stated in Lemma \ref{l:commutator_est} to the former term in the integral on the right-hand side, getting \begin{equation*} 2^{j(s-1)}\left\|[\ue^n\cdot \nabla, \Delta_j]\, \partial_i a_\varepsilon^{n+1}\right\|_{L^2}\leq C\, c_j(t)\left(\|\nabla \ue^n\|_{L^\infty}\|\partial_i a_\varepsilon^{n+1}\|_{ H^{s-1}}+\|\nabla \ue^n\|_{H^{s-1}}\|\partial_i a_\varepsilon^{n+1} \|_{L^\infty}\right)\, , \end{equation*} where $(c_j(t))_{j\geq -1}$ is a sequence in the unit ball of $\ell^2$. Instead, the latter term can be bounded in the following way: \begin{equation*}\label{eq:nabla-u_nabla-a} 2^{j(s-1)}\|\Delta_j \left(\partial_i \ue^n \cdot \nabla a_\varepsilon^{n+1} \right)\|_{L^2}\leq C\, c_j(t)\, \|\nabla \ue^n\|_{H^{s-1}}\|\nabla a_\varepsilon^{n+1} \|_{H^{s-1}}\, . \end{equation*} Then, due to the embedding $H^\sigma(\mathbb{R}^2)\hookrightarrow L^\infty (\mathbb{R}^2)$ for $\sigma>1$, one has \begin{equation*} 2^{j(s-1)}\| \Delta_j\, \nabla a_\varepsilon^{n+1}(t)\|_{L^2}\leq 2^{j(s-1)}\| \Delta_j \nabla a_{0,\varepsilon}^{n+1}\|_{L^2}+\int_0^t C\, c_j(\tau )\left(\| \ue^n\|_{ H^s}\|\nabla a_\varepsilon^{n+1}\|_{ H^{s-1}}\right)\, \detau\, . 
\end{equation*} At this point, after summing on indices $j\geq -1$, thanks to the Minkowski inequality (for which we refer to Proposition \ref{app:mink}) combined with a Gr\"onwall type argument (see Section \ref{app:sect_gron}), we finally obtain \begin{equation}\label{est:a^(n+1)} \sup_{0\leq t\leq T}\|\nabla a_\varepsilon^{n+1} (t)\|_{H^{s-1}}\leq \|\nabla a_{0,\varepsilon}^{n+1}\|_{H^{s-1}}\, \exp \left(\int_0^T C \, \|\ue^n(t)\|_{ H^s}\, \dt\right)\, . \end{equation} Now, we have to estimate the velocity field $\ue^{n+1}$ and for that purpose we start with the $L^2$ estimate. We take the momentum equation in the original form: \begin{equation}\label{mom_eq_original_prob} \varrho_\varepsilon^{n+1}\left(\partial_t\ue^{n+1}+\ue^n\cdot \nabla \ue^{n+1}\right)+\frac{1}{\varepsilon}\varrho_\varepsilon^{n+1}\ue^{\perp, n+1}+\frac{1}{\varepsilon}\nabla \Pi_\varepsilon^{n+1}=0 \, , \end{equation} where we construct $\varrho_\varepsilon^{n+1}:=1/(1+\varepsilon a_\varepsilon^{n+1})$ starting from $a_\varepsilon^{n+1}$. Notice that $\varrho_\varepsilon^{n+1}$ satisfies the transport equation \begin{equation*} \partial_t \varrho_\varepsilon^{n+1}+\ue^{n}\cdot \nabla \varrho_\varepsilon^{n+1}=0\, . \end{equation*} At this point, we test equation \eqref{mom_eq_original_prob} against $\ue^{n+1}$. We integrate by parts on $\mathbb{R}^2$, deriving the following estimate: $$ \int_{\mathbb{R}^2} \varrho_\varepsilon^{n+1}\partial_t|\ue^{n+1}|^2+\int_{\mathbb{R}^2}\varrho_\varepsilon^{n+1}\ue^n\cdot \nabla |\ue^{n+1}|^2=0 \, ,$$ which implies (making use of the transport equation for $\varrho_\varepsilon^{n+1}$) \begin{equation*} \left\|\sqrt{\varrho_\varepsilon^{n+1}(t)}\, \ue^{n+1}(t)\right\|_{L^2}\leq \left\|\sqrt{\varrho_{0,\varepsilon}^{n+1}} \, \vec u_{0,\varepsilon}^{n+1}\right\|_{L^2}\, . 
\end{equation*} From the previous bound, due to the assumption \eqref{assumption_densities}, we can deduce the preservation of the $L^2$ norm for the velocity field $\ue^{n+1}$: \begin{equation*} \left\| \ue^{n+1}(t)\right\|_{L^2}\leq C\left\| \vec u_{0,\varepsilon}^{n+1}\right\|_{L^2}\leq C\left\| \vec u_{0,\varepsilon}\right\|_{L^2}\, . \end{equation*} Taking now the operator $ \Delta_j$ in the momentum equation in \eqref{approx_iteration_1}, we obtain \begin{equation*} \partial_t \Delta_j \ue^{n+1}+\ue^n\cdot \nabla \Delta_j \ue^{n+1}=[\ue^n\cdot \nabla, \Delta_j]\ue^{n+1}-\frac{1}{\varepsilon}\Delta_j\ue^{\perp ,n+1}-\Delta_j\left[\left(1+\varepsilon a_\varepsilon^{n+1}\right)\frac{1}{\varepsilon}\nabla \Pi_\varepsilon^{n+1}\right] \end{equation*} and multiplying again by $\Delta_j\ue^{n+1}$, we have cancellations so that \begin{equation}\label{eq:vel_est_dyadic} \left\|\Delta_j\ue^{n+1}(t)\right\|_{L^2}\leq \left\|\Delta_j\vec u_{0,\varepsilon}^{n+1}\right\|_{L^2}+C\int_0^t \left(\left\|[\ue^n\cdot \nabla, \Delta_j]\ue^{n+1}\right\|_{L^2}+\left\|\Delta_j\left( a_\varepsilon^{n+1}\nabla \Pi_\varepsilon^{n+1}\right)\right\|_{L^2}\right) \, \detau\, . \end{equation} As done before, we employ here the commutator estimates of Lemma \ref{l:commutator_est} in order to have \begin{equation*} \begin{split} 2^{js} \left\|[\ue^n\cdot \nabla, \Delta_j]\ue^{n+1}\right\|_{L^2}&\leq C\,c_j\, \left(\|\nabla \ue^n\|_{L^\infty}\|\ue^{n+1}\|_{ H^s}+\|\nabla \ue^{n+1}\|_{L^\infty}\|\ue^n\|_{ H^s}\right)\\ &\leq C\,c_j\, \left(\| \ue^n\|_{ H^s}\|\ue^{n+1}\|_{ H^s}\right)\, . \end{split} \end{equation*} For the latter term on the right-hand side of \eqref{eq:vel_est_dyadic}, we take advantage of the Bony decomposition (see Paragraph \ref{app_paradiff}) and we apply Proposition \ref{prop:app_fine_tame_est}. 
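Before stating the resulting bound, let us briefly recall the mechanism (a sketch, with the notation of Paragraph \ref{app_paradiff}): the Bony decomposition splits the product into two paraproducts and a remainder,

```latex
\begin{equation*}
a_\varepsilon^{n+1}\,\nabla \Pi_\varepsilon^{n+1}\,=\,T_{a_\varepsilon^{n+1}}\nabla \Pi_\varepsilon^{n+1}\,+\,T_{\nabla \Pi_\varepsilon^{n+1}}\, a_\varepsilon^{n+1}\,+\,R\big(a_\varepsilon^{n+1},\nabla \Pi_\varepsilon^{n+1}\big)\, .
\end{equation*}
```

The first paraproduct is bounded in $H^s$ in terms of $\|a_\varepsilon^{n+1}\|_{L^\infty}\,\|\nabla \Pi_\varepsilon^{n+1}\|_{H^s}$, while in the remaining two terms the derivatives fall mostly on $a_\varepsilon^{n+1}$, so they are controlled through $\|\nabla a_\varepsilon^{n+1}\|_{H^{s-1}}$.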
We may infer that \begin{equation*} \begin{split} \left\|a_\varepsilon^{n+1}\nabla \Pi_\varepsilon^{n+1}\right\|_{ H^s}\leq C\left(\|a_\varepsilon^{n+1} \|_{L^\infty}+\|\nabla a_\varepsilon^{n+1}\|_{ H^{s-1}}\right)\|\nabla \Pi_\varepsilon^{n+1}\|_{ H^s}\, . \end{split} \end{equation*} To conclude, we still have to find a uniform bound for the pressure term. For that, we apply the ${\rm div}\,$ operator in \eqref{approx_iteration_1}. Thus, we aim at solving the elliptic problem \begin{equation}\label{eq:elliptic_problem} -{\rm div}\, \left(\left(1+\varepsilon a_\varepsilon^{n+1}\right)\nabla \Pi_\varepsilon^{n+1}\right)=\, \varepsilon\, {\rm div}\, (\ue^n \cdot \nabla \ue^{n+1} )- {\rm curl}\, \ue^{n+1}\, . \end{equation} Thanks to the assumption \eqref{assumption_densities} and Lemma \ref{lem:Lax-Milgram_type}, we can obtain \begin{equation}\label{est:Pi_L^2} \begin{split} \|\nabla \Pi_\varepsilon^{n+1}\|_{L^2}&\leq C\left(\varepsilon \, \|\ue^n\cdot \nabla \ue^{n+1}\|_{L^2}+\|\ue^{\perp,n+1}\|_{L^2}\right)\\ &\leq C\left(\varepsilon \, \|\ue^n\|_{L^2} \| \ue^{n+1}\|_{H^s}+\|\ue^{n+1}\|_{L^2}\right)\, . \end{split} \end{equation} Now, we apply the spectral cut-off operator $ \Delta_j$ to \eqref{eq:elliptic_problem}. We get \begin{equation*} -\, {\rm div}\, \left( A_\varepsilon \Delta_j\nabla \Pi_\varepsilon^{n+1}\right)={\rm div}\, \left(\left[ \Delta_j,A_\varepsilon\right]\nabla \Pi_\varepsilon^{n+1}\right)+\, {\rm div}\, \Delta_j F_\varepsilon\, , \end{equation*} for all $j\geq 0$, where we have defined $A_\varepsilon:=\left(1+\varepsilon a_\varepsilon^{n+1}\right)$ and $F_\varepsilon:=\varepsilon \ue^n \cdot \nabla \ue^{n+1}+\ue^{\perp,n+1}$.
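Let us also sketch how the first inequality in \eqref{est:Pi_L^2} arises: noticing that ${\rm div}\, \ue^{\perp,n+1}=-\, {\rm curl}\, \ue^{n+1}$, the right-hand side of \eqref{eq:elliptic_problem} equals ${\rm div}\, F_\varepsilon$ with $F_\varepsilon:=\varepsilon \ue^n \cdot \nabla \ue^{n+1}+\ue^{\perp,n+1}$; hence, testing \eqref{eq:elliptic_problem} against $\Pi_\varepsilon^{n+1}$ and integrating by parts, we get

```latex
\begin{equation*}
\int_{\mathbb{R}^2}\left(1+\varepsilon a_\varepsilon^{n+1}\right)|\nabla \Pi_\varepsilon^{n+1}|^2\, \dx\,=\,-\int_{\mathbb{R}^2}F_\varepsilon\cdot \nabla \Pi_\varepsilon^{n+1}\, \dx\,\leq\,\|F_\varepsilon\|_{L^2}\,\|\nabla \Pi_\varepsilon^{n+1}\|_{L^2}\, .
\end{equation*}
```

Since, by \eqref{assumption_densities}, the quantity $1+\varepsilon a_\varepsilon^{n+1}$ is bounded away from zero, the left-hand side controls $\|\nabla \Pi_\varepsilon^{n+1}\|_{L^2}^2$, and \eqref{est:Pi_L^2} follows.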
Hence multiplying both sides by $ \Delta_j \Pi_\varepsilon^{n+1}$ and integrating over $\mathbb{R}^2$, we have \begin{equation*} -\int_{\mathbb{R}^2}\Delta_j\Pi_\varepsilon^{n+1} {\rm div}\, \left( A_\varepsilon \Delta_j\nabla \Pi_\varepsilon^{n+1}\right) \, \dx= \int_{\mathbb{R}^2}\Delta_j\Pi_\varepsilon^{n+1} {\rm div}\, \left( \left[\Delta_j, A_\varepsilon\right] \nabla \Pi_\varepsilon^{n+1}\right) \, \dx+\int_{\mathbb{R}^2}\Delta_j\Pi_\varepsilon^{n+1} {\rm div}\, \Delta_j F_\varepsilon \, \dx. \end{equation*} Since for $j\geq 0$ we have $\|\Delta_j \nabla \Pi_\varepsilon^{n+1}\|_{L^2}\sim 2^j\|\Delta_j \Pi_\varepsilon^{n+1}\|_{L^2}$ (according to Lemma \ref{l:bern}) and using H\"older's inequality (see Proposition \ref{app:hold_ineq}) for the right-hand side, we obtain for all $j\geq 0$: \begin{equation*} 2^{j}\| \Delta_j \nabla \Pi_\varepsilon^{n+1}\|^2_{L^2}\leq C\| \Delta_j \nabla \Pi_\varepsilon^{n+1}\|_{L^2}\left( \| {\rm div}\, \left[ \Delta_j,A_\varepsilon \right]\nabla \Pi_\varepsilon^{n+1}\|_{L^2}+\|{\rm div}\, \Delta_j F_\varepsilon\|_{L^2} \right)\, . \end{equation*} To deal with the former term on the right-hand side, we take advantage of the following commutator estimate (see Lemma \ref{l:commutator_pressure}): \begin{equation*} \| {\rm div}\, \left[ \Delta_j,A_\varepsilon \right]\nabla \Pi_\varepsilon^{n+1}\|_{L^2}\leq C\, c_j \, 2^{-j(s-1)}\|\nabla A_\varepsilon\|_{H^{s-1}}\|\nabla \Pi_\varepsilon^{n+1}\|_{H^{s-1}}\, , \end{equation*} for a suitable sequence $(c_j)_{j}$ belonging to the unit sphere of $\ell^2$. After multiplying by $2^{j(s-1)}$, we get \begin{equation*} 2^{js}\| \Delta_j \nabla \Pi_\varepsilon^{n+1}\|_{L^2}\leq C\left(c_j\, \|\nabla A_\varepsilon\|_{H^{s-1}}\|\nabla \Pi_\varepsilon^{n+1}\|_{H^{s-1}}+2^{j(s-1)}\|{\rm div}\, \Delta_j F_\varepsilon\|_{L^2} \right)\, . 
\end{equation*} Taking the $\ell^2$ norm of both sides and adding the low-frequency block related to $\Delta_{-1}\nabla \Pi_\varepsilon^{n+1}$, we obtain \begin{equation*} \|\nabla \Pi_\varepsilon^{n+1}\|_{H^s}\leq C\left( \|\nabla A_\varepsilon\|_{H^{s-1}}\|\nabla \Pi_\varepsilon^{n+1}\|_{H^{s-1}}+\|{\rm div}\, F_\varepsilon\|_{H^{s-1}}+\|\Delta_{-1}\nabla \Pi_\varepsilon^{n+1}\|_{L^2} \right)\, . \end{equation*} We observe that $\|\Delta_{-1}\nabla \Pi_\varepsilon^{n+1}\|_{L^2}\leq C\|\nabla \Pi_\varepsilon^{n+1}\|_{L^2}$ and \begin{equation*} \|\nabla \Pi_\varepsilon^{n+1}\|_{H^{s-1}}\leq C\|\nabla \Pi_\varepsilon^{n+1}\|_{L^2}^{1/s}\|\nabla \Pi_\varepsilon^{n+1}\|_{H^s}^{1-1/s}\, . \end{equation*} Therefore, \begin{equation*} \|\nabla \Pi_\varepsilon^{n+1}\|_{H^s}\leq C\left( \|\nabla A_\varepsilon\|_{H^{s-1}}\|\nabla \Pi_\varepsilon^{n+1}\|_{L^2}^{1/s}\|\nabla \Pi_\varepsilon^{n+1}\|_{H^{s}}^{1-1/s}+\|{\rm div}\, F_\varepsilon\|_{H^{s-1}}+\|\nabla \Pi_\varepsilon^{n+1}\|_{L^2} \right)\, . \end{equation*} Then, applying Young's inequality (see Proposition \ref{app:young_ineq}), we finally infer that \begin{equation}\label{est_Pi_H^s_1} \|\nabla \Pi_\varepsilon^{n+1}\|_{H^s}\leq C\left( \left(1+\|\nabla A_\varepsilon\|_{H^{s-1}}\right)^s \|\nabla \Pi_\varepsilon^{n+1}\|_{L^2}+\|{\rm div}\, F_\varepsilon\|_{H^{s-1}} \right)\, . \end{equation} It remains to analyse the term ${\rm div}\, F_\varepsilon$, where $F_\varepsilon:=\varepsilon \ue^n \cdot \nabla \ue^{n+1}+\ue^{\perp,n+1}$. Due to the divergence-free conditions, we can write \begin{equation*} {\rm div}\, (\ue^n \cdot \nabla \ue^{n+1} )=\nabla \ue^{n}:\nabla \ue^{n+1} \end{equation*} and, as $H^{s-1}$ is an algebra, the term ${\rm div}\, (\ue^n \cdot \nabla \ue^{n+1} )$ is in $H^{s-1}$, with \begin{equation}\label{est:div_u} \|{\rm div}\, (\ue^n \cdot \nabla \ue^{n+1} )\|_{H^{s-1}}\leq C \|\ue^n\|_{H^s}\|\ue^{n+1}\|_{H^s}\, .
\end{equation} Putting \eqref{est:Pi_L^2} and \eqref{est:div_u} into \eqref{est_Pi_H^s_1}, we find that \begin{equation}\label{est_Pi_final} \begin{split} \|\nabla \Pi_\varepsilon^{n+1}\|_{ H^{s}}&\leq C \left(1+\varepsilon \|\nabla a_\varepsilon^{n+1}\|_{ H^{s-1}}\right)^s \left(\varepsilon \|\ue^n\|_{L^2}\|\ue^{n+1}\|_{ H^s}+\|\ue^{\perp,n+1}\|_{L^2}\right)\\ &+C\, \left(\varepsilon \|\ue^{n}\|_{ H^{s}}\|\ue^{n+1}\|_{ H^{s}} +\|\ue^{\perp,n+1}\|_{ H^{s}}\right)\\ &\leq C \left(1+\varepsilon \|\nabla a_\varepsilon^{n+1}\|_{ H^{s-1}}\right)^s \left(\varepsilon \|\ue^n\|_{H^s}+1\right)\|\ue^{n+1}\|_{ H^s}\, , \end{split} \end{equation} which implies the $L^\infty_T(H^s)$ estimate for the pressure term: \begin{equation}\label{est:Pi^(n+1)} \|\nabla \Pi_\varepsilon^{n+1}\|_{L^{\infty}_T H^{s}}\leq C \left(1+\varepsilon \|\nabla a_\varepsilon^{n+1}\|_{L^\infty_T H^{s-1}}\right)^s\left(\varepsilon \|\ue^n\|_{L^\infty_T H^s}+1\right)\|\ue^{n+1}\|_{L^\infty_TH^s}\, . \end{equation} Combining all the previous estimates together with a Gr\"onwall type inequality (again we refer to Section \ref{app:sect_gron}), we finally obtain an estimate for the velocity field: \begin{equation}\label{est:u^(n+1)} \sup_{0\leq t\leq T}\|\ue^{n+1}(t)\|_{H^s}\leq \|\vec u_{0,\varepsilon}^{n+1}\|_{H^s}\exp \left(\int_0^T A_n(t)\, \dt\right)\, , \end{equation} where \begin{equation*} \begin{split} A_n(t)=C\left(\|a_\varepsilon^{n+1} (t)\|_{L^\infty}+\|\nabla a_\varepsilon^{n+1}(t)\|_{ H^{s-1}}\right)\left(1+\varepsilon \|\nabla a_\varepsilon^{n+1}(t)\|_{ H^{s-1}}\right)^s\left(\varepsilon \|\ue^n(t)\|_{H^{s}}+1\right)+C\|\ue^n(t)\|_{ H^s}\, . \end{split} \end{equation*} We point out that the above constants $C$ depend neither on $n$ nor on $\varepsilon$. The aim in what follows is to obtain uniform estimates by induction.
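Let us sketch this last step: plugging the pressure bound \eqref{est_Pi_final} into \eqref{eq:vel_est_dyadic}, summing over $j\geq -1$ and using the Minkowski inequality as before, one arrives at an integral inequality of the form

```latex
\begin{equation*}
\|\ue^{n+1}(t)\|_{H^s}\,\leq\,\|\vec u_{0,\varepsilon}^{n+1}\|_{H^s}\,+\,\int_0^t A_n(\tau)\, \|\ue^{n+1}(\tau)\|_{H^s}\, \detau\, ,
\end{equation*}
```

to which the Gr\"onwall argument of Section \ref{app:sect_gron} applies directly, yielding \eqref{est:u^(n+1)}.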
Thanks to the assumptions stated in Paragraph \ref{ss:FormProb_E}, we can suppose that the initial data satisfy \begin{equation*} \|a_{0,\varepsilon}\|_{L^\infty}\leq \frac{C_0}{2}\, ,\quad \quad \|\nabla a_{0,\varepsilon}\|_{H^{s-1}}\leq \frac{C_1}{2} \quad \quad \text{and}\quad \quad \|\vec{u}_{0,\varepsilon}\|_{H^s}\leq \frac{C_2}{2} \, , \end{equation*} for some $C_0, C_1, C_2>0$. Due to the relation \eqref{eq:transport_density} we immediately infer that, for all $n\geq 0$, \begin{equation*} \|a_\varepsilon^{n+1} \|_{L^\infty_t L^\infty}\leq C\|a_{0,\varepsilon}\|_{L^\infty}\leq C\, C_0 \quad \text{for all }t\in \mathbb{R}_+. \end{equation*} At this point, the aim is to show (by induction) that the following uniform bounds hold for all $n\geq 0$: \begin{equation}\label{eq:unifbounds} \begin{split} &\|\nabla a_\varepsilon^{n+1}\|_{L^\infty_{T^\ast}H^{s-1}}\leq C_1\\ &\|\ue^{n+1}\|_{L^\infty_{T^\ast}H^s}\leq C_2\\ &\|\nabla \Pi_\varepsilon^{n+1}\|_{L^\infty_{T^\ast}H^s}\leq C_3\, , \end{split} \end{equation} provided that $T^\ast$ is sufficiently small. The previous estimates in \eqref{eq:unifbounds} obviously hold for $n=0$. We now prove them for the step $n+1$, assuming that the controls in \eqref{eq:unifbounds} hold for $n$. From \eqref{est:a^(n+1)}, \eqref{est:u^(n+1)} and \eqref{est:Pi^(n+1)} we obtain \begin{align*} &\|\nabla a_\varepsilon^{n+1}\|_{L^\infty_{T}H^{s-1}}\leq \frac{C_1}{2}\exp \Big(CTC_2\Big)\\ &\|\ue^{n+1}\|_{L^\infty_{T}H^s}\leq \frac{C_2}{2}\exp \Big(CT(C_0+C_1)\left(1+\varepsilon C_1\right)^s(\varepsilon C_2+1)C_2 \Big)\\ &\|\nabla \Pi_\varepsilon^{n+1}\|_{L^\infty_{T}H^s}\leq C(\varepsilon C_2+1)\left(1+\varepsilon C_1\right)^s\|\ue^{n+1}\|_{L^\infty_{T}H^s}\, . \end{align*} So we can choose $T^{\ast}$ such that $\exp \Big(\max\{C_0+C_1,\, 1\}\, C\, T^{\ast}\left(1+ C_1\right)^s(1+ C_2)\, C_2 \Big)\leq 2$. Notice that $T^\ast$ does not depend on $\varepsilon$.
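As a quick verification that the bounds close with this choice of $T^\ast$ (recall that $\varepsilon\in\,]0,1]$), observe for instance that

```latex
\begin{equation*}
\|\ue^{n+1}\|_{L^\infty_{T^\ast}H^s}\,\leq\,\frac{C_2}{2}\, \exp \Big(C\, T^\ast(C_0+C_1)\left(1+C_1\right)^s(1+C_2)\, C_2\Big)\,\leq\,\frac{C_2}{2}\cdot 2\,=\,C_2\, ,
\end{equation*}
```

and similarly $\|\nabla a_\varepsilon^{n+1}\|_{L^\infty_{T^\ast}H^{s-1}}\leq C_1$; the bound on the pressure then holds with, for instance, $C_3:=C\,(1+C_2)\left(1+C_1\right)^sC_2$.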
Thus, by induction, \eqref{eq:unifbounds} holds for the step $n+1$, and therefore it is true for any $n\in \mathbb{N}$. \subsection{Convergence}\label{ss:conv_H^s} To prove the convergence, we will estimate the difference between two iterations at different steps. First of all, let us define \begin{equation*} \widetilde{a}_\varepsilon^{n}:=a_\varepsilon^n -a^n_{0,\varepsilon}\, , \end{equation*} which satisfies the transport equation \begin{equation*} \begin{cases} \partial_t \widetilde{a}^n_\varepsilon+\ue^{n-1}\cdot \nabla \widetilde{a}_\varepsilon^n=-\ue^{n-1}\cdot \nabla a_{0,\varepsilon}^n\\ \widetilde{a}{_{\varepsilon}^n}_{|t=0}=0\, . \end{cases} \end{equation*} Hence, since the right-hand side is uniformly bounded (with respect to $n$) in $L^1_{\rm loc}(\mathbb{R}_+;L^2)$, from classical results for transport equations we get that $(\widetilde{a}^n_\varepsilon)_{n\in \mathbb{N}}$ is uniformly bounded in $C^0([0,T];L^2)$. Now, we want to prove that the sequence $(\widetilde{a}^n_\varepsilon, \ue^n, \nabla \Pi^n_\varepsilon)_{n\in \mathbb{N}}$ is a Cauchy sequence in $C^0([0,T];L^2)$. So, let us define, for $(n,l)\in \mathbb{N}^2$, the following quantities: \begin{align} &\delta {a}_\varepsilon^{n,l}:=a_\varepsilon^{n+l}-a_\varepsilon^n \nonumber\\ &\delta\widetilde{a}_\varepsilon^{n,l} :=\widetilde{a}_\varepsilon^{n+l}-\widetilde{a}_\varepsilon^{n}=\delta a_\varepsilon^{n,l}-\delta a_{0,\varepsilon}^{n,l}\, , \quad \text{ where }\quad \delta {a}_{0,\varepsilon}^{n,l}:=a_{0,\varepsilon}^{n+l}-a_{0,\varepsilon}^n \nonumber\\ &\delta \ue^{n,l}:=\ue^{n+l}-\ue^n \label{general_uniform_bounds}\\ &\delta \Pi_\varepsilon^{n,l}:=\Pi_\varepsilon^{n+l}-\Pi_\varepsilon^n \nonumber \, . \end{align} Of course, we have that ${\rm div}\, \delta \vec u_\varepsilon^{n,l}=0$ for any $(n,l)\in \mathbb{N}^2$.
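Concerning the uniform $C^0([0,T];L^2)$ bound on $\widetilde{a}^n_\varepsilon$ claimed above, let us sketch the underlying estimate: since ${\rm div}\, \ue^{n-1}=0$, a basic $L^2$ energy estimate for the transport equation gives

```latex
\begin{equation*}
\|\widetilde{a}^{n}_\varepsilon(t)\|_{L^2}\,\leq\,\int_0^t\left\|\ue^{n-1}(\tau)\cdot \nabla a_{0,\varepsilon}^{n}\right\|_{L^2}\, \detau\,\leq\,\int_0^t\|\ue^{n-1}(\tau)\|_{L^2}\,\left\|\nabla a_{0,\varepsilon}^{n}\right\|_{L^\infty}\, \detau\, ,
\end{equation*}
```

and the right-hand side is bounded uniformly in $n$, owing to \eqref{eq:unifbounds} and to the embedding $H^{s-1}(\mathbb{R}^2)\hookrightarrow L^\infty(\mathbb{R}^2)$.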
The previous quantities defined in \eqref{general_uniform_bounds} solve the system \begin{equation}\label{syst_convergence_analysis} \begin{cases} \partial_t \delta \widetilde{a}_{\varepsilon}^{n,l} +\vu_\varepsilon^{n+l-1} \cdot \nabla \delta \widetilde{a}_\varepsilon^{n,l}=-\delta\ue^{n-1,l}\cdot \nabla a_\varepsilon^n-\ue^{n+l-1}\cdot \nabla \delta a^{n,l}_{0,\varepsilon}\\[1ex] \partial_t\delta \vu_{\varepsilon}^{n,l}+ \vu_{\varepsilon}^{n+l-1} \cdot \nabla \delta\vu_{\varepsilon}^{n,l}=-\delta \ue^{n-1,l} \cdot \nabla \ue^n \\[0.5ex] \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad - \dfrac{1}{\varepsilon} (\delta \vu_{\varepsilon}^{n,l})^\perp-(1+\varepsilon a_\varepsilon^{n+l})\dfrac{1}{\varepsilon}\nabla \delta \Pi_\varepsilon^{n,l}-\delta a_\varepsilon^{n,l} \nabla \Pi_\varepsilon^n \\[2ex] {\rm div}\, \delta \vu_{\varepsilon}^{n,l} =0 \\[1ex] (\delta \widetilde{a}_\varepsilon^{n,l},\delta \vu_\varepsilon^{n,l})_{|t=0}=(0,\delta \vu_{0,\varepsilon}^{n,l})\, . \end{cases} \end{equation} We perform an energy estimate for the first equation in \eqref{syst_convergence_analysis}, getting \begin{equation*} \|\delta \widetilde{a}_\varepsilon^{n,l}(t)\|_{L^2}\leq C\int_0^t\left(\|\nabla a_\varepsilon^n\|_{L^\infty}\|\delta \vu_\varepsilon^{n-1,l}\|_{L^2}+\|\ue^{n+l-1}\|_{L^2}\|\nabla \delta a_{0,\varepsilon}^{n,l}\|_{L^\infty}\right)\, \detau\, . 
\end{equation*} Moreover, from the momentum equation multiplied by $\delta \ue^{n,l}$, integrating by parts over $\mathbb{R}^2$, we obtain \begin{equation*} \begin{split} \int_{\mathbb{R}^2}\frac{1}{2}\partial_t\, |\delta \ue^{n,l}|^2=-\int_{\mathbb{R}^2}(\delta \ue^{n-1,l}\cdot \nabla \ue^{n})\cdot\delta \ue^{n,l}-\int_{\mathbb{R}^2}(a_\varepsilon^{n+l}\, \nabla \delta \Pi_\varepsilon^{n,l})\cdot \delta \ue^{n,l} -\int_{\mathbb{R}^2}(\delta a_\varepsilon^{n,l}\, \nabla \Pi_\varepsilon^{n})\cdot \delta \ue^{n,l}\, , \end{split} \end{equation*} which implies \begin{equation*} \begin{split} \|\delta \ue^{n,l}(t)\|_{L^2}&\leq C \|\delta \vec u_{0,\varepsilon}^{n,l}\|_{L^2}+C\int_0^t\left(\|\nabla \ue^{n}(\tau)\|_{L^\infty}\|\delta \ue^{n-1,l}(\tau)\|_{L^2}+\|a_\varepsilon^{n+l}(\tau)\|_{L^\infty}\|\nabla \delta \Pi_\varepsilon^{n,l}(\tau)\|_{L^2}\right)\, \detau\\ &+C\int_0^t\left(\|\delta \widetilde{a}_\varepsilon^{n,l}(\tau )\|_{L^2}+\|\delta a_{0,\varepsilon}^{n,l}\|_{L^\infty}\right)\|\nabla \Pi_\varepsilon^{n}(\tau)\|_{L^2 \cap L^\infty} \, \detau\, , \end{split} \end{equation*} where we have also employed the fact that $\delta a_\varepsilon^{n,l} =\delta \widetilde{a}_\varepsilon^{n,l}+\delta a_{0,\varepsilon}^{n,l}$.
Finally, for the pressure term we take the ${\rm div}\,$ operator in the momentum equation of system \eqref{syst_convergence_analysis}, obtaining \begin{equation*} -{\rm div}\, \left((1+\varepsilon a_\varepsilon^{n+l})\frac{1}{\varepsilon}\nabla \delta \Pi_\varepsilon^{n,l}\right)={\rm div}\, \left(\delta \vu_{\varepsilon}^{n,l} \cdot \nabla \vu_{\varepsilon}^{n+l-1} +\delta \ue^{n-1,l} \cdot \nabla \ue^n + \frac{1}{\varepsilon} (\delta \vu_{\varepsilon}^{n,l})^\perp+\delta a_\varepsilon^{n,l} \nabla \Pi_\varepsilon^n \right), \end{equation*} so that we have \begin{equation}\label{Pi_cauchy} \begin{split} \|\nabla \delta \Pi_\varepsilon^{n,l}\|_{L^2}&\leq C\varepsilon \left(\|\delta \ue^{n-1,l}\|_{L^2}\|\nabla \ue^n\|_{L^\infty}+\|\delta a_\varepsilon^{n,l}\, \nabla \Pi_\varepsilon^n\|_{L^2}\right)\\ &+C\varepsilon\|\delta \ue^{n,l}\|_{L^2}\|\nabla \ue^{n+l-1}\|_{L^\infty}+C\|(\delta \ue^{n,l})^\perp\|_{L^2}\\ &\leq C\varepsilon \left(\|\delta \ue^{n-1,l}\|_{L^2}\|\nabla \ue^n\|_{L^\infty}+\|\delta \widetilde{a}_\varepsilon^{n,l}\|_{L^2}\|\nabla \Pi_\varepsilon^n\|_{L^\infty}+\|\delta a_{0,\varepsilon}^{n,l}\|_{L^\infty}\|\nabla \Pi_\varepsilon^n\|_{L^2}\right)\\ &+C\varepsilon\|\delta \ue^{n,l}\|_{L^2}\|\nabla \ue^{n+l-1}\|_{L^\infty} +C\|\delta \ue^{n,l}\|_{L^2}\, .
\end{split} \end{equation} At this point, applying the Gr\"onwall lemma (see Lemma \ref{app:lem_gron}) and using the bounds established in Paragraph \ref{ss:unif_est}, we deduce that, for $t\in [0,T^\ast]$: \begin{equation*} \begin{split} \|\delta \widetilde{a}_\varepsilon^{n,l}(t)\|_{L^2}+\|\delta \ue^{n,l}(t)\|_{L^2}&\leq C_{T^\ast}\left(\|\nabla \delta a_{0,\varepsilon}^{n,l}\|_{L^\infty}+\| \delta a_{0,\varepsilon}^{n,l}\|_{L^\infty}+\|\delta \vec u_{0,\varepsilon}^{n,l}\|_{L^2}\right)\\ &+C_{T^\ast}\int_0^t\left(\|\delta \widetilde{a}_\varepsilon^{n-1,l}(\tau)\|_{L^2}+\|\delta \ue^{n-1,l}(\tau)\|_{L^2}\right) \detau\, , \end{split} \end{equation*} where the constant $C_{T^\ast}$ depends on $T^\ast$ and on the initial data. After setting \begin{equation*} F_0^n:=\sup_{l\geq 0}\left(\|\nabla \delta a_{0,\varepsilon}^{n,l}\|_{L^\infty}+\| \delta a_{0,\varepsilon}^{n,l}\|_{L^\infty}+\|\delta \vec u_{0,\varepsilon}^{n,l}\|_{L^2}\right) \end{equation*} and \begin{equation*} G^n(t):=\sup_{l\geq 0}\sup_{[0,t]}\left(\|\delta \widetilde{a}_\varepsilon^{n,l}(\tau)\|_{L^2}+\|\delta \ue^{n,l}(\tau)\|_{L^2}\right), \end{equation*} by induction we may conclude that, for all $t\in [0,T^\ast]$, \begin{equation*} \begin{split} G^n(t)\leq C_{T^\ast}\sum_{k=0}^{n-1}\frac{(C_{T^\ast}T^\ast)^k}{k!}F_0^{n-k}+\frac{\left(C_{T^\ast}T^\ast\right)^n}{n!}G^0(t)\, . \end{split} \end{equation*} Now, bearing \eqref{conv_in_data} in mind, we have \begin{equation*} \lim_{n\rightarrow +\infty}F_0^n=0\, . \end{equation*} Hence, we may infer that \begin{equation}\label{est_fin_conv} \lim_{n\rightarrow +\infty}\sup_{l\geq 0}\sup_{t\in [0,T^\ast]}\left(\|\delta \widetilde{a}_\varepsilon^{n,l}(t)\|_{L^2}+\|\delta \ue^{n,l}(t)\|_{L^2}\right)=0\, .
\end{equation} Property \eqref{est_fin_conv} implies that both $(\widetilde{a}_\varepsilon^n)_{n\in \mathbb{N}}$ and $(\ue^n)_{n\in \mathbb{N}}$ are Cauchy sequences in $C^0([0,T^\ast];L^2)$: therefore, such sequences converge to some functions $\widetilde{a}_\varepsilon$ and $\ue$ in the same space. Taking advantage of the previous computations in \eqref{Pi_cauchy}, we also have that $(\nabla \Pi_\varepsilon^n)_{n\in \mathbb{N}}$ converges to a function $\nabla \Pi_\varepsilon$ in $C^0([0,T^\ast];L^2)$. Now, we define $a_\varepsilon:= a_{0,\varepsilon} +\widetilde{a}_\varepsilon$. Hence, $ a_\varepsilon -a_{0,\varepsilon}$ is in $C^0([0,T^\ast];L^2)$. Moreover, as $(\nabla a_\varepsilon^n)_{n\in \mathbb{N}}$ is uniformly bounded in $L^\infty ([0,T^\ast];H^{s-1})$ and Sobolev spaces have the \textit{Fatou property} (we refer to Proposition \ref{proposition_Fatou} in this respect), we deduce that $\nabla a_\varepsilon$ belongs to the same space. In addition, since $(a^n_\varepsilon )_{n\in \mathbb{N}}$ is uniformly bounded in $L^\infty ([0,T^\ast]\times \mathbb{R}^2)$, we also have that $a_\varepsilon\in L^\infty ([0,T^\ast]\times \mathbb{R}^2)$. Analogously, as $(\ue^n)_{n\in \mathbb{N}}$ and $(\nabla \Pi_\varepsilon^n)_{n\in \mathbb{N}}$ are uniformly bounded in $L^\infty ([0,T^\ast];H^s)$, we deduce that $\ue$ and $\nabla \Pi_\varepsilon$ belong to $L^\infty ([0,T^\ast];H^s)$. Due to an interpolation argument, we see that the above sequences converge strongly in every intermediate space $C^0([0,T^\ast];H^\sigma)$, for all $\sigma <s$. This is enough to pass to the limit in the equations satisfied by $(a_\varepsilon^n,\ue^n,\nabla \Pi_\varepsilon^n)_{n\in \mathbb{N}}$. Hence, $(a_\varepsilon, \ue,\nabla \Pi_\varepsilon)$ satisfies the original problem \eqref{Euler-a_eps_1}. This having been established, we look at the time continuity of $a_\varepsilon$.
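The interpolation argument mentioned above can be sketched as follows: for any $0<\sigma<s$ one has

```latex
\begin{equation*}
\|f\|_{H^\sigma}\,\leq\,\|f\|_{L^2}^{1-\sigma/s}\,\|f\|_{H^s}^{\sigma/s}\, ,
\end{equation*}
```

so convergence in $C^0([0,T^\ast];L^2)$, combined with the uniform $L^\infty([0,T^\ast];H^s)$ bounds, yields strong convergence in $C^0([0,T^\ast];H^\sigma)$ for every $\sigma<s$; choosing $\sigma>1$, the embedding $H^\sigma(\mathbb{R}^2)\hookrightarrow L^\infty(\mathbb{R}^2)$ then allows us to pass to the limit in all the nonlinear terms.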
We exploit the transport equation: $$\partial_t a_{\varepsilon}=-\vu_\varepsilon \cdot \nabla a_\varepsilon\, ,$$ noticing that the term on the right-hand side belongs to $L^\infty_{T^\ast}(L^\infty)$. Thus, we can deduce that $\partial_t a_\varepsilon \in L^\infty_{T^\ast}(L^\infty)$. Moreover, by embeddings, we already know that $\nabla a_\varepsilon \in L^\infty_{T^\ast}(L^\infty)$. The previous two relations imply that $a_\varepsilon \in W^{1,\infty}_{T^\ast}(L^\infty)\cap L^\infty_{T^\ast}(W^{1,\infty})$. This gives us the desired regularity property $a_\varepsilon \in C^0([0,T^\ast]\times \mathbb{R}^2)$. In addition, looking at the momentum equation in \eqref{Euler-a_eps_1} and employing Theorem \ref{thm_transport}, one obtains the claimed time regularity property for $\ue$. At this point, the time regularity for the pressure term $\nabla \Pi_\varepsilon$ is recovered from the elliptic problem \eqref{general_elliptic_est}. \subsection{Uniqueness}\label{ss:uniqueness_H^s} We conclude this section by showing the uniqueness of solutions in our framework. We start by stating a uniqueness result, which is a consequence of a standard stability result based on energy methods. Since the proof is similar to the convergence argument of the previous paragraph, we will omit it (see e.g. \cite{D_JDE} for details). We recall that, in what follows, the parameter $\varepsilon>0$ is fixed. \begin{theorem} \label{th:uniq} Let $\left(\varrho_\varepsilon^{(1)}, \ue^{(1)}, \nabla \Pi_\varepsilon^{(1)}\right)$ and $\left(\varrho_\varepsilon^{(2)}, \ue^{(2)}, \nabla \Pi_\varepsilon^{(2)}\right)$ be two solutions to the Euler system \eqref{Euler_eps} associated with the initial data $\left(\varrho_{0,\varepsilon}^{(1)}, \vec u_{0,\varepsilon}^{(1)}\right)$ and $\left(\varrho_{0,\varepsilon}^{(2)}, \vec u_{0,\varepsilon}^{(2)}\right)$.
Assume that, for some $T>0$, one has the following properties: \begin{enumerate}[(i)] \item the densities $\varrho_\varepsilon^{(1)}$ and $\varrho_\varepsilon^{(2)}$ are bounded and bounded away from zero; \item the quantities $\delta \varrho_\varepsilon:=\varrho_\varepsilon^{(2)}-\varrho_\varepsilon^{(1)}$ and $\delta \ue:=\ue^{(2)}-\ue^{(1)}$ belong to the space $C^1\big([0,T];L^2(\mathbb{R}^2)\big)$; \item $\nabla \ue^{(1)}$, $\nabla \varrho_\varepsilon^{(1)}$ and $\nabla \Pi_\varepsilon^{(1)}$ belong to $ L^1\big([0,T];L^\infty(\mathbb{R}^2)\big)$. \end{enumerate} Then, for all $t\in[0,T]$, we have the stability inequality: \begin{equation}\label{stability_ineq} \|\delta \varrho_\varepsilon (t)\|_{L^2}+ \left\|\sqrt{\varrho_\varepsilon^{(2)}(t)}\, \delta \ue (t)\right\|_{L^2}\leq \left(\|\delta \varrho_{0,\varepsilon}\|_{L^2}+ \left\|\sqrt{\varrho_{0,\varepsilon}^{(2)}}\, \delta \vec u_{0,\varepsilon}\right\|_{L^2}\right)\, e^{CA(t)}\, , \end{equation} for a universal constant $C>0$, where we have defined \begin{equation*}\label{eq:def_A(t)} A(t):=\int_0^t\left(\left\|\frac{\nabla \varrho_\varepsilon^{(1)}}{\sqrt{\varrho_\varepsilon^{(2)}}}\right\|_{L^\infty}+\left\|\frac{\nabla \Pi_\varepsilon^{(1)}}{\varrho_\varepsilon^{(1)}\sqrt{\varrho_\varepsilon^{(2)}}}\right\|_{L^\infty}+\|\nabla \ue^{(1)}\|_{L^\infty}\right)\, \detau\, . \end{equation*} \end{theorem} It is worth noticing that, adapting the \textit{relative entropy} arguments presented in Subsection 4.3 of \cite{C-F_RWA}, we can replace (in the statement above) the $C^1_T(L^2)$ requirement for $\delta \varrho_\varepsilon$ and $\delta \ue$ with the $C^0_T (L^2)$ regularity. However, one needs to impose an additional $L^2$ assumption on the densities. In this way, we obtain a weak-strong uniqueness type result, which we prove in the next theorem.
Concerning weak-strong results for density-dependent fluids, we refer to \cite{Ger}, where Germain exhibited a weak-strong uniqueness property within a class of (weak) solutions to the compressible Navier-Stokes system satisfying a relative entropy inequality with respect to a (hypothetical) strong solution of the same problem (see also the work \cite{F-N-S} by Feireisl, Novotn\'y and Sun). Moreover, in \cite{F-J-N}, the authors established the weak-strong uniqueness property in the class of finite energy weak solutions, thus extending the classical results of Prodi \cite{Pr} and Serrin \cite{Ser} to the class of compressible flows. Before presenting the proof of the weak-strong uniqueness result, we state the definition of a \textit{finite energy weak solution} to system \eqref{Euler_eps}, in the case when $\varrho_{0,\varepsilon}-1\in L^2(\mathbb{R}^2)$. We also recall that our densities have the form $\varrho_\varepsilon=1+\varepsilon R_\varepsilon$. \begin{definition}\label{weak_sol_L^2} Let $T>0$ and $\varepsilon \in \; ]0,1]$ be fixed. Let $(\varrho_{0,\varepsilon},\vec u_{0,\varepsilon})$ be an initial datum fulfilling the assumptions in Paragraph \ref{ss:FormProb_E}.
We say that $(\varrho_\varepsilon, \ue)$ is a \textit{finite energy weak solution} to system \eqref{Euler_eps} in $[0,T]\times \mathbb{R}^2$, related to the previous initial datum, if: \begin{itemize} \item $\varrho_\varepsilon \in L^\infty([0,T]\times \mathbb{R}^2)$ and $\varrho_\varepsilon -1\in C^0([0,T];L^2(\mathbb{R}^2))$; \item $\ue \in L^\infty([0,T];L^2(\mathbb{R}^2))\cap C_w^0([0,T];L^2(\mathbb{R}^2))$; \item the mass equation is satisfied in the weak sense: \begin{equation*} \int_0^T\int_{\mathbb{R}^2}\Big(\varrho_\varepsilon \, \partial_t \varphi+ \varrho_\varepsilon \ue \cdot \nabla \varphi\Big) \, \dxdt +\int_{\mathbb{R}^2}\varrho_{0,\varepsilon}\varphi(0, \cdot )\dx =\int_{\mathbb{R}^2}\varrho_\varepsilon (T)\varphi(T,\cdot)\dx \, , \end{equation*} for all $\varphi\in C_c^\infty ([0,T]\times \mathbb{R}^2;\mathbb{R})$; \item the divergence-free condition ${\rm div}\, \ue =0$ is satisfied in $\mathcal{D}^\prime (]0,T[\, \times \mathbb{R}^2)$; \item the momentum equation is satisfied in the weak sense: \begin{equation*} \begin{split} \int_0^T\int_{\mathbb{R}^2}\Big(\varrho_\varepsilon \ue \cdot \partial_t \vec \psi +[\varrho_\varepsilon\ue \otimes \ue] :\nabla \vec \psi - \frac{1}{\varepsilon}\varrho_\varepsilon \ue^\perp \cdot \vec \psi \Big) \dxdt+ \int_{\mathbb{R}^2}\varrho_{0,\varepsilon}& \vec u_{0,\varepsilon}\cdot \vec \psi (0)\dx \\ &=\int_{\mathbb{R}^2}\varrho_\varepsilon(T)\vec u_\varepsilon(T)\cdot \vec \psi(T)\dx, \end{split} \end{equation*} for any $\vec \psi \in C_c^\infty([0,T]\times \mathbb{R}^2;\mathbb{R}^2)$ such that ${\rm div}\, \vec \psi=0$; \item for almost every $t\in [0,T]$, the following two energy balances hold true: \begin{equation*} \int_{\mathbb{R}^2} \varrho_\varepsilon (t) |\ue (t)|^2\dx\leq \int_{\mathbb{R}^2} \varrho_{0,\varepsilon} |\vec u_{0,\varepsilon}|^2\dx \quad \text{and}\quad \int_{\mathbb{R}^2}(\varrho_{\varepsilon}(t)-1)^2\dx\leq \int_{\mathbb{R}^2}(\varrho_{0,\varepsilon}-1)^2\dx\, .
\end{equation*} \end{itemize} \end{definition} \begin{theorem} \label{th:uniq_bis} Let $\varepsilon\in \, ]0,1]$ be fixed. Let $\left(\varrho_\varepsilon^{(1)}, \ue^{(1)}\right)$ and $\left(\varrho_\varepsilon^{(2)}, \ue^{(2)}\right)$ be two finite energy weak solutions to the Euler system \eqref{Euler_eps} as in Definition \ref{weak_sol_L^2} with initial data $\left(\varrho_{0,\varepsilon}^{(1)}, \vec u_{0,\varepsilon}^{(1)}\right)$ and $\left(\varrho_{0,\varepsilon}^{(2)}, \vec u_{0,\varepsilon}^{(2)}\right)$. Assume that, for some $T>0$, one has the following properties: \begin{enumerate}[(i)] \item $\nabla \ue^{(1)}$ and $\nabla \varrho_\varepsilon^{(1)}$ belong to $ L^1\big([0,T];L^\infty(\mathbb{R}^2)\big)$; \item $\nabla \Pi_\varepsilon^{(1)}$ is in $ L^1\big([0,T];L^\infty(\mathbb{R}^2)\cap L^2(\mathbb{R}^2)\big)$. \end{enumerate} Then, for all $t\in[0,T]$, we have the stability inequality \eqref{stability_ineq}. \end{theorem} \begin{proof} We start by defining, for $i=1,2$: \begin{equation*} R_\varepsilon^{(i)}:=\frac{\varrho_\varepsilon^{(i)}-1}{\varepsilon}\quad \text{and}\quad R_{0,\varepsilon}^{(i)}:=\frac{\varrho_{0,\varepsilon}^{(i)}-1}{\varepsilon}\, , \end{equation*} and we notice that, owing to the continuity equation in \eqref{Euler_eps} and the divergence-free conditions ${\rm div}\, \vec u_\varepsilon^{(i)}=0$, one has \begin{equation}\label{ws:transport_R} \partial_tR_\varepsilon^{(i)}+{\rm div}\, (R_\varepsilon^{(i)}\vec u_\varepsilon^{(i)})=0 \quad \text{with}\quad R_\varepsilon^{(i)}(0)=R_{0,\varepsilon}^{(i)}. \end{equation} For simplicity of notation, we fix $\varepsilon=1$ throughout this proof. Moreover, let us assume for a while that the couple $(R^{(1)},\vec u^{(1)})$ is a pair of smooth functions such that $R^{(1)},\vec u^{(1)}\in C^\infty_c(\mathbb{R}_+\times \mathbb{R}^2)$ and ${\rm div}\, \vec u^{(1)}=0$, with the support of $R^{(1)}$ and $\vec u^{(1)}$ included in $[0,T]\times \mathbb{R}^2$.
First of all, we use $\vec u^{(1)}$ as a test function in the weak formulation of the momentum equation, finding that \begin{align} \int_{\mathbb{R}^2} \varrho^{(2)}(T)\vec u^{(2)}(T)\cdot \vec u^{(1)}(T) \dx&= \int_{\mathbb{R}^2}\varrho_0^{(2)}\vec u_0^{(2)}\cdot \vec u_0^{(1)} \dx +\int_0^T\int_{\mathbb{R}^2}\varrho^{(2)}\vec u^{(2)}\cdot \partial_t\vec u^{(1)} \dxdt \label{mom-eq-reg-1}\\ &+\int_0^T\int_{\mathbb{R}^2}(\varrho^{(2)}\vec u^{(2)}\otimes \vec u^{(2)}):\nabla \vec u^{(1)} \dxdt+\int_0^T\int_{\mathbb{R}^2}\varrho^{(2)}\vec u^{(2)}\cdot (\vec u^{(1)})^\perp \dxdt \nonumber\, , \end{align} where we have also noted that $ (\vec u^{(2)})^\perp\cdot \vec u^{(1)}=- \vec u^{(2)}\cdot (\vec u^{(1)})^\perp$. Next, testing the mass equation against $|\vec u^{(1)}|^2/2$, we obtain \begin{equation}\label{mom-eq-reg-2} \begin{split} \frac{1}{2}\int_{\mathbb{R}^2}\varrho^{(2)}(T)|\vec u^{(1)}(T)|^2\dx&=\frac{1}{2}\int_{\mathbb{R}^2}\varrho_0^{(2)}|\vec u_0^{(1)}|^2\dx +\int_0^T\int_{\mathbb{R}^2}\varrho^{(2)}\vec u^{(1)}\cdot \partial_t \vec u^{(1)} \dxdt\\ &+\frac{1}{2}\int_0^T\int_{\mathbb{R}^2}\varrho^{(2)}\vec u^{(2)}\cdot \nabla |\vec u^{(1)}|^2 \dxdt\\ &=\frac{1}{2}\int_{\mathbb{R}^2}\varrho_0^{(2)}|\vec u_0^{(1)}|^2\dx +\int_0^T\int_{\mathbb{R}^2}\varrho^{(2)}\vec u^{(1)}\cdot \partial_t \vec u^{(1)} \dxdt\\ &+\frac{1}{2}\int_0^T\int_{\mathbb{R}^2}(\varrho^{(2)}\vec u^{(2)}\otimes\vec u^{(1)}): \nabla \vec u^{(1)} \dxdt. \end{split} \end{equation} Recall also that the energy inequality reads \begin{equation*} \frac{1}{2}\int_{\mathbb{R}^2}\varrho^{(2)}(T)|\vec u^{(2)}(T)|^2\dx \leq\dfrac{1}{2}\int_{\mathbb{R}^2}\varrho_0^{(2)}|\vec u_0^{(2)}|^2\dx \, . \end{equation*} Now, we take care of the density oscillations $R^{(i)}$. 
We test the transport equation \eqref{ws:transport_R} for $R^{(2)}$ against $R^{(1)}$, getting \begin{align} \int_{\mathbb{R}^2}R^{(2)}(T)R^{(1)}(T)\dx&=\int_{\mathbb{R}^2}R_0^{(2)}R_0^{(1)}\dx \label{transp-eq-reg-1}\\ &+\int_0^T\int_{\mathbb{R}^2}R^{(2)}\partial_tR^{(1)}\dxdt+\int_0^T\int_{\mathbb{R}^2}R^{(2)}\vec u^{(2)}\cdot \nabla R^{(1)}\dxdt \nonumber . \end{align} Recalling Definition \ref{weak_sol_L^2}, we have the following energy balance: \begin{equation*} \int_{\mathbb{R}^2}|R^{(2)}(T)|^2\dx \leq\int_{\mathbb{R}^2}|R^{(2)}_0|^2\dx\, . \end{equation*} At this point, testing $\partial_t1+{\rm div}\,(1\, \vec u^{(2)})=0$ against $|R^{(1)}|^2/2$, we may infer that \begin{equation}\label{transp-eq-reg-2} \frac{1}{2}\int_{\mathbb{R}^2} |R^{(1)}(T)|^2\dx=\frac{1}{2}\int_{\mathbb{R}^2}|R_0^{(1)}|^2 \dx +\int_0^T\int_{\mathbb{R}^2}R^{(1)}\partial_tR^{(1)}\dxdt+\int_0^T\int_{\mathbb{R}^2}R^{(1)}\vec u^{(2)}\cdot \nabla R^{(1)}\dxdt . \end{equation} Now, for notational convenience, let us define \begin{equation*} \delta R:=R^{(2)}-R^{(1)}\quad \text{and}\quad \delta \vec u:=\vec u^{(2)}-\vec u^{(1)}\, . \end{equation*} Putting all the previous relations together, we obtain \begin{align} \frac{1}{2}\int_{\mathbb{R}^2}\Big(\varrho^{(2)}(T)|\delta \vec u(T)|^2+|\delta R(T)|^2\Big) \dx&\leq \frac{1}{2}\int_{\mathbb{R}^2}\Big(\varrho_0^{(2)}|\delta \vec u_0|^2+|\delta R_0|^2\Big) \dx-\int_0^T\int_{\mathbb{R}^2}\varrho^{(2)}\vec u^{(2)}\cdot (\vec u^{(1)})^\perp\dxdt \nonumber\\ &-\int_0^T\int_{\mathbb{R}^2}\Big(\varrho^{(2)}\delta \vec u\cdot \partial_t\vec u^{(1)}+ \partial_tR^{(1)}\, \delta R\Big)\dxdt \label{rel_en_ineq}\\ &-\int_0^T\int_{\mathbb{R}^2}\Big((\varrho^{(2)}\vec u^{(2)}\otimes \delta \vec u) :\nabla \vec u^{(1)}+\delta R \, \vec u^{(2)}\cdot \nabla R^{(1)}\Big)\dxdt \nonumber\, . 
\end{align} Next, we remark that we can write $$ (\varrho^{(2)}\vec u^{(2)}\otimes \delta \vec u) :\nabla \vec u^{(1)}=(\varrho^{(2)}\vec u^{(2)}\cdot \nabla \vec u^{(1)})\cdot \delta \vec u $$ and that we have $\vec u^{(2)}\cdot (\vec u^{(1)})^\perp=\delta \vec u\cdot (\vec u^{(1)})^\perp$ by orthogonality. Therefore, relation \eqref{rel_en_ineq} can be recast as \begin{equation*}\label{rel_en_ineq_1} \begin{split} \frac{1}{2}\int_{\mathbb{R}^2}\Big(\varrho^{(2)}(T)|\delta \vec u(T)|^2+|\delta R(T)|^2\Big) \dx&\leq \frac{1}{2}\int_{\mathbb{R}^2}\Big(\varrho_0^{(2)}|\delta \vec u_0|^2+|\delta R_0|^2\Big) \dx\\ &-\int_0^T\int_{\mathbb{R}^2}\varrho^{(2)}\Big(\partial_t \vec u^{(1)}+\vec u^{(2)}\cdot \nabla \vec u^{(1)}+(\vec u^{(1)})^\perp\Big)\cdot \delta \vec u\, \dxdt \\ &-\int_0^T\int_{\mathbb{R}^2}\Big(\partial_tR^{(1)}+\vec u^{(2)}\cdot \nabla R^{(1)}\Big)\, \delta R\, \dxdt\, . \end{split} \end{equation*} At this point, we add and subtract the quantities $\pm (\varrho^{(2)}\vec u^{(1)}\cdot \nabla \vec u^{(1)})\cdot \, \delta \vec u\pm \varrho^{(2)}\frac{1}{\varrho^{(1)}}\nabla \Pi^{(1)}\cdot \, \delta \vec u$ and $\pm (\vec u^{(1)}\cdot \nabla R^{(1)}) \, \delta R$, yielding \begin{equation}\label{rel_en_ineq_2} \begin{split} \frac{1}{2}\int_{\mathbb{R}^2}\Big(\varrho^{(2)}(T)|\delta \vec u(T)|^2+|\delta R(T)|^2\Big) \dx&\leq \frac{1}{2}\int_{\mathbb{R}^2}\Big(\varrho_0^{(2)}|\delta \vec u_0|^2+|\delta R_0|^2\Big) \dx\\ &-\int_0^T\int_{\mathbb{R}^2}\Big(\varrho^{(2)}\delta \vec u\cdot \nabla \vec u^{(1)} +\delta R\frac{1}{\varrho^{(1)}}\nabla \Pi^{(1)}\Big)\cdot \delta \vec u \, \dxdt \\ &-\int_0^T\int_{\mathbb{R}^2}(\delta \vec u\cdot \nabla R^{(1)})\, \delta R \dxdt\, , \end{split} \end{equation} where we have used the fact that $(R^{(1)},\, \vec u^{(1)})$ is a solution to the Euler system and $\int_{\mathbb{R}^2}\nabla \Pi^{(1)}\cdot \delta \vec u \, \dx=0$.
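To make explicit the passage from \eqref{rel_en_ineq_2} to a Gr\"onwall-type inequality, let us sketch (as one possible bookkeeping) how a term on the right-hand side is absorbed into the energy. For instance, by the Cauchy-Schwarz and Young inequalities, the pressure term satisfies
\begin{equation*}
\left|\int_{\mathbb{R}^2}\delta R\,\frac{1}{\varrho^{(1)}}\nabla \Pi^{(1)}\cdot \delta \vec u\, \dx\right|\,\leq\, \left\|\frac{1}{\sqrt{\varrho^{(2)}}\,\varrho^{(1)}}\nabla \Pi^{(1)}\right\|_{L^\infty}\|\delta R\|_{L^2}\,\big\|\sqrt{\varrho^{(2)}}\,\delta \vec u\big\|_{L^2}\,\leq\, \frac{1}{2}\left\|\frac{1}{\sqrt{\varrho^{(2)}}\,\varrho^{(1)}}\nabla \Pi^{(1)}\right\|_{L^\infty}\left(\|\delta R\|_{L^2}^2+\big\|\sqrt{\varrho^{(2)}}\,\delta \vec u\big\|_{L^2}^2\right) .
\end{equation*}
The remaining terms on the right-hand side are estimated in the same fashion.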
Therefore, setting $\mathcal E(T):=\|\sqrt{\varrho^{(2)}(T)}\delta \vec u(T)\|^2_{L^2}+\|\delta R(T)\|^2_{L^2}$, from relation \eqref{rel_en_ineq_2} we can deduce that \begin{equation*}\label{rel_en_inq_3} \mathcal E(T)\leq \mathcal E(0)+C\int_0^T\left(\|\nabla \vec u^{(1)}\|_{L^\infty}+\left\|\frac{1}{\sqrt{\varrho^{(2)}}\varrho^{(1)}}\nabla \Pi^{(1)}\right\|_{L^\infty}+\left\|\frac{1}{\sqrt{\varrho^{(2)}}}\nabla R^{(1)}\right\|_{L^\infty}\right)\mathcal E(t) \dt\, . \end{equation*} An application of Gr\"onwall's lemma (we refer to Section \ref{app:sect_gron}) yields the desired stability inequality \eqref{stability_ineq}. In order to obtain the result under the regularity stated in the theorem, we argue by density. Thanks to the regularity (stated in Definition \ref{weak_sol_L^2}) of weak solutions to the Euler equations \eqref{Euler_eps} and assumption $(i)$ of the theorem, all the terms appearing in relations \eqref{mom-eq-reg-1} and \eqref{mom-eq-reg-2} are well-defined, provided that, in addition, $\partial_t \vec u^{(1)}\in L^1_T(L^2)$. However, this condition on the time derivative of the velocity field $\vec u^{(1)}$ comes for free from the momentum equation \begin{equation}\label{vec_1} \partial_t \vec u^{(1)}=-\left(\vec u^{(1)}\cdot \nabla \vec u^{(1)}+(\vec u^{(1)})^\perp + \frac{1}{\varrho^{(1)}}\nabla \Pi^{(1)}\right)\, . \end{equation} Since $\vec u^{(1)}\in L^\infty_T(L^2)$ with $\nabla \vec u^{(1)}\in L^1_T(L^\infty)$ and $\varrho^{(1)}\in L^\infty_T(L^\infty)$, condition $(ii)$ implies that the right-hand side of \eqref{vec_1} is in $L^1_T(L^2)$. Recalling the regularity in Definition \ref{weak_sol_L^2} of $\vec u^{(1)}$, one gets $\vec u^{(1)}\in W^{1,1}_T(L^2)$ and hence $\vec u^{(1)}\in C^0_T(L^2)$. Analogously, in order to justify the computations in \eqref{transp-eq-reg-1} and \eqref{transp-eq-reg-2}, besides the previous regularity conditions, one needs the additional assumption $\partial_t R^{(1)}\in L^1_T(L^2)$.
Once again, one can take advantage of the continuity equation \eqref{ws:transport_R} to obtain the required regularity for $\partial_t R^{(1)}$. Finally, condition $(ii)$ is necessary to make sense of relation \eqref{rel_en_ineq_2}. This concludes the proof of the theorem. \qed \end{proof} \section{Asymptotic analysis} \label{s:sing-pert_E} The main goal of this section is to show the convergence when $\varepsilon\rightarrow 0^+$: we achieve it employing a \textit{compensated compactness} technique. We point out that, in the sequel, the time $T>0$ is fixed by the existence theory developed in Section \ref{s:well-posedness_original_problem}. We will show that \eqref{full Euler} converges towards a limit system, represented by the quasi-homogeneous incompressible Euler equations: \begin{equation}\label{QH-Euler_syst} \begin{cases} \partial_tR+\vu \cdot \nabla R=0\\ \partial_t \vu +\vu \cdot \nabla \vu +R\vu^\perp +\nabla \Pi=0\\ {\rm div}\, \vu=0\, . \end{cases} \end{equation} The previous system consists of a transport equation for the quantity $R$ (that can be interpreted as the deviation with respect to the constant density profile) and an Euler type equation for the limit velocity field $\vec u$. In Section \ref{s:well-posedness_original_problem}, we have proved that the sequence $(\varrho_\varepsilon, \ue, \nabla \Pi_\varepsilon)_\varepsilon$ is uniformly bounded (with respect to $\varepsilon$) in suitable spaces. Next, thanks to the uniform bounds, we extract weak limit points, for which one has to find some constraints: the singular terms have to vanish at the limit (see Subsection \ref{sss:unif-prelim_E}). Finally, after performing the \textit{compensated compactness} arguments, we describe the limit dynamics (see Paragraph \ref{ss:wave_system} below). The choice of using this technique derives from the fact that the oscillations in time of the solutions are out of control (see Subsection \ref{ss:wave_system}). 
To overcome this issue, rather than employing the standard $H^s$ estimates, we take advantage of the weak formulation of the problem. We test the equations against divergence-free test functions: this will lead to useful cancellations. In particular, we avoid studying the pressure term. In the end, we close the argument by noticing that the weak limit solutions are actually regular solutions. \subsection{Preliminaries and constraint at the limit} \label{sss:unif-prelim_E} We start this subsection by recalling the uniform bounds established in Section \ref{s:well-posedness_original_problem}. The fluctuations $R_{\varepsilon}$ satisfy the controls \begin{align*} &\sup_{\varepsilon\in\,]0,1]}\left\| R_{\varepsilon} \right\|_{L^\infty_T(L^\infty)}\,\leq \, C\\ &\sup_{\varepsilon\in\,]0,1]}\left\| \nabla R_{\varepsilon} \right\|_{L^\infty_T(H^{s-1})}\,\leq \, C \, , \end{align*} where $R_\varepsilon:=(\varrho_\varepsilon-1)/\varepsilon$ as above. As for the velocity fields, we have obtained the following uniform bound: \begin{equation*} \sup_{\varepsilon\in\,]0,1]}\left\| \vu_{\varepsilon} \right\|_{L^\infty_T(H^s)}\,\leq \, C\, . \end{equation*} Thanks to the previous uniform estimates, we can assume (up to passing to subsequences) that there exist $R\in L^\infty_T( W^{1,\infty})$, with $\nabla R\in L^\infty_T(H^{s-1})$, and $\vec u \in L^\infty_T(H^s)$ such that \begin{equation}\label{limit_points} \begin{split} R:=\lim_{\varepsilon \rightarrow 0^+}R_{\varepsilon} \quad &\text{in}\quad L^\infty_T(L^\infty) \\ \nabla R:=\lim_{\varepsilon \rightarrow 0^+}\nabla R_{\varepsilon}\quad &\text{in}\quad L^\infty_T( H^{s-1})\\ \vu:=\lim_{\varepsilon \rightarrow 0^+}\vu_{\varepsilon}\quad &\text{in}\quad L^\infty_T(H^s) \, , \end{split} \end{equation} where we agree that the previous limits are taken in the corresponding weak-$\ast$ topology.
\begin{remark}\label{rmk:conv_rho_1} It is evident that $\varrho_\varepsilon -1=O(\varepsilon)$ in $L^\infty_T (L^{\infty})$ and therefore that $\varrho_\varepsilon \ue$ converges weakly-$\ast$ to $\vec u$, e.g. in the space $L^\infty_T( L^2)$. \end{remark} Next, we notice that the solutions given by Theorem \ref{W-P_fullE} are \textit{strong} solutions. In particular, they satisfy in a weak sense the mass equation and the momentum equation, respectively: \begin{equation}\label{weak-con_E} -\int_0^T\int_{\mathbb{R}^2} \left( \vre \partial_t \varphi + \vre\ue \cdot \nabla_x \varphi \right) \dxdt = \int_{\mathbb{R}^2} \vrez \varphi(0,\cdot) \dx\, , \end{equation} for any $\varphi\in C^\infty_c([0,T[\,\times \mathbb{R}^2;\mathbb{R})$; \begin{align} &\int_0^T\!\!\!\int_{\mathbb{R}^2} \left( - \vre \ue \cdot \partial_t \vec\psi - \vre [\ue\otimes\ue] : \nabla_x \vec\psi + \frac{1}{\ep} \, \vre \ue^\perp \cdot \vec\psi \right) \dxdt = \int_{\mathbb{R}^2}\vrez \uez \cdot \vec\psi (0,\cdot) \dx\, , \label{weak-mom_E} \end{align} for any test function $\vec\psi\in C^\infty_c([0,T[\,\times \mathbb{R}^2; \mathbb{R}^2)$ such that ${\rm div}\, \vec\psi=0$. Moreover, the divergence-free condition on $\ue$ is satisfied in $\mathcal D^\prime (]0,T[\times \mathbb{R}^2)$. Before going on, in the following lemma, we characterize the limit of the quantity $R_\varepsilon \ue$. We recall that $R_\varepsilon$ satisfies \begin{equation}\label{eq_transport_R_veps} \partial_t R_\varepsilon =-{\rm div}\, (R_\varepsilon \ue)\, , \quad \quad (R_\varepsilon)_{|t=0}=R_{0,\varepsilon}. \end{equation} \begin{lemma}\label{l:reg_Ru_veps} Let $(R_\varepsilon)_\varepsilon$ be uniformly bounded in $L^\infty_T(L^\infty(\mathbb{R}^2))$ with $(\nabla R_\varepsilon)_\varepsilon\subset L^\infty_T(H^{s-1}(\mathbb{R}^2))$, and let the velocity fields $(\ue)_\varepsilon$ be uniformly bounded in $L^\infty_T(H^s(\mathbb{R}^2))$.
Moreover, for any $\varepsilon \in\; ]0,1]$, assume that the couple $(R_\varepsilon, \vec u_\varepsilon)$ solves the transport equation \eqref{eq_transport_R_veps}. Let $(R, \vec u)$ be the limit point identified in \eqref{limit_points}. Then, up to an extraction: \begin{enumerate} \item[(i)] $R_\varepsilon \rightarrow R$ in $C^0_T(C^0_{\rm loc}(\mathbb{R}^2))$; \item[(ii)] the product $R_\varepsilon \ue$ converges to $R\vec u$ in the distributional sense. \end{enumerate} \end{lemma} \begin{proof} We look at the transport equation \eqref{eq_transport_R_veps} for $R_\varepsilon$. We apply Proposition \ref{prop:app_fine_tame_est} to the term on the right-hand side, obtaining \begin{equation*} \|R_\varepsilon \ue\|_{H^s}\leq C\left(\|R_\varepsilon\|_{L^\infty}\|\ue \|_{H^s}+\|\nabla R_\varepsilon\|_{H^{s-1}}\|\ue\|_{L^\infty}\right)\, . \end{equation*} By embeddings, this implies that the sequence $(\partial_t R_\varepsilon)_\varepsilon$ is uniformly bounded e.g. in $L^\infty_T (L^\infty)$ and so $(R_\varepsilon)_\varepsilon$ is bounded in $W^{1,\infty}_T(L^\infty)$ uniformly in $\varepsilon$. On the other hand, we know that $(\nabla R_{\varepsilon})_\varepsilon$ is bounded in $L^\infty_T (L^\infty)$. Then, by the Ascoli-Arzel\`a theorem (see Theorem \ref{app:th_A-A}), we gather that the family $(R_\varepsilon)_\varepsilon$ is compact in e.g. $C^0_T (C_{\rm loc}^{0})$ and hence we deduce the strong convergence property, up to passing to a suitable subsequence (not relabelled here), \begin{equation*} R_\varepsilon \rightarrow R \quad \text{in} \quad C^0([0,T]\, ;C_{\rm loc}^{0})\, . \end{equation*} Finally, since $(\ue)_\varepsilon$ converges weakly-$\ast$ to $\vec u$ e.g. in $L^\infty_T( L^2)$, we get $R_\varepsilon \ue \weakstar R \vec u$ in the space $L^\infty_T( L_{\rm loc}^2)$. \qed \end{proof} Now, as anticipated in the introduction of this section, we have to highlight the constraint that the limit points have to satisfy.
We have to point out that this condition does not fully characterize the limit dynamics (see Subsection \ref{ss:wave_system} below). The only singular term (of order $\varepsilon^{-1}$) appearing in the equations is the Coriolis force. Then, we test the momentum equation in \eqref{weak-mom_E} against $\varepsilon \vec \psi$ with $\vec \psi \in C_c^\infty([0,T[\, \times \mathbb{R}^2;\mathbb{R}^2)$ such that ${\rm div}\, \vec \psi =0$. Keeping in mind the assumptions on the initial data, and due to the fact that $(\varrho_\varepsilon \ue )_\varepsilon$ is uniformly bounded in e.g. $L^\infty_T(L^2) $ and so is $(\varrho_\varepsilon \ue \otimes \ue)_\varepsilon$ in $L^\infty_T(L^1)$, it follows that all the terms in equation \eqref{weak-mom_E}, apart from the Coriolis operator, go to $0$ in the limit $\varepsilon\rightarrow 0^+$. Therefore, we infer that, for any $\vec \psi \in C_c^\infty([0,T[\, \times \mathbb{R}^2;\mathbb{R}^2)$ such that ${\rm div}\, \vec \psi =0$, \begin{equation*} \lim_{\varepsilon \rightarrow 0^+}\int_0^T\int_{\mathbb{R}^2}\varrho_\varepsilon \ue^\perp \cdot \vec \psi \dx \dt=\int_0^T \int_{\mathbb{R}^2}\vec u^\perp \cdot \vec \psi \dx \dt=0\, . \end{equation*} This property tells us that $\vec u^\perp =\nabla \pi$, for some suitable function $\pi$. However, this relation does \textit{not} add more information on the limit dynamics, since we already know that the divergence-free condition ${\rm div}\, \ue=0$ is satisfied for all $\varepsilon>0$. \subsection{Wave system and convergence}\label{ss:wave_system} The goal of the present subsection is to describe the oscillations of solutions in order to show convergence to the limit system. The Coriolis term is responsible for strong oscillations in time of the solutions, which may prevent the convergence. To overcome this issue, we implement a strategy based on \textit{compensated compactness} arguments.
Namely, we perform algebraic manipulations on the wave system (see \eqref{wave system} below), in order to derive compactness properties for the quantity $\widetilde\gamma_\varepsilon:={\rm curl}\, (\varrho_\varepsilon \ue)$. This will be enough to pass to the limit in the momentum equation (and, in particular, in the convective term). Let us define \begin{equation*} \vec{V}_\varepsilon:=\varrho_\varepsilon \ue \, , \end{equation*} that is uniformly bounded in $L^\infty_T(H^s)$, due to Proposition \ref{prop:app_fine_tame_est}. Now, using the fact that $\varrho_\varepsilon=1+\varepsilon R_\varepsilon$, we recast the continuity equation in the following way: \begin{equation}\label{eq:wave_mass_E} \varepsilon\partial_t R_\varepsilon+{\rm div}\, \vec V_\varepsilon=0\, . \end{equation} In light of the uniform bounds and convergence properties stated in Lemma \ref{l:reg_Ru_veps}, we can easily pass to the limit in the previous formulation (or rather in \eqref{eq_transport_R_veps}) finding \begin{equation}\label{eq:limit_transp_R} \partial_t R+{\rm div}\,(R \vec u)=0\, . \end{equation} We decompose $$ \varrho_\varepsilon \ue^\perp=\ue^\perp +\varepsilon\, R_\varepsilon \ue^\perp $$ and from the momentum equation one can deduce \begin{equation}\label{eq:wave_mom} \varepsilon \partial_t \vec V_\varepsilon +\nabla \Pi_\varepsilon +\ue^\perp= \varepsilon f_\varepsilon\, , \end{equation} where we have defined \begin{equation}\label{def_f} f_\varepsilon:=-{\rm div}\, (\varrho_\varepsilon \ue \otimes \ue)-R_\varepsilon \ue^\perp\, . \end{equation} In this way, we can rewrite system \eqref{Euler_eps} in the wave form \begin{equation}\label{wave system} \begin{cases} \varepsilon\partial_t R_\varepsilon+{\rm div}\, \vec V_\varepsilon=0\\ \varepsilon \partial_t \vec V_\varepsilon +\nabla \Pi_\varepsilon +\ue^\perp= \varepsilon f_\varepsilon\, . 
\end{cases} \end{equation} Applying again Proposition \ref{prop:app_fine_tame_est}, one can show that the terms $\varrho_\varepsilon \ue \otimes \ue$ and $R_\varepsilon \ue^\perp$ are uniformly bounded in $L^\infty_T(H^s)$. Thus, it follows that $(f_\varepsilon)_\varepsilon \subset L^\infty_T(H^{s-1})$. However, the uniform bounds in Section \ref{s:well-posedness_original_problem} are not enough for proving convergence in the weak formulation of the momentum equation. Indeed, on the one hand, those controls allow us to pass to the limit in the $\partial_t$ term and in the initial datum; on the other hand, the non-linear term and the Coriolis force are out of control. We postpone the convergence analysis of the Coriolis force to Paragraph \ref{ss:limit_E} below, and now we focus on the convective term ${\rm div}\, (\varrho_\varepsilon \ue \otimes \ue)$ in \eqref{weak-mom_E}. We proceed as follows: first of all, we reduce our study to the constant density case (see Lemma \ref{l:approx_convective_term} below); next, we apply the \textit{compensated compactness} argument. \begin{lemma}\label{l:approx_convective_term} Let $T>0$. For any test function $\vec \psi \in C_c^\infty([0,T[\times \mathbb{R}^2;\mathbb{R}^2)$, we get \begin{equation}\label{eq_approx_convective_term} \limsup_{\varepsilon\rightarrow 0^+}\left|\int_0^T \int_{\mathbb{R}^2}\varrho_\varepsilon \ue\otimes \ue :\nabla \vec \psi \, \dxdt- \int_0^T\int_{\mathbb{R}^2} \ue \otimes \ue :\nabla \vec \psi \, \dxdt\right|=0\, . \end{equation} \end{lemma} \begin{proof} Let $\vec \psi \in C_c^\infty([0,T[\times \mathbb{R}^2;\mathbb{R}^2)$ with ${\rm Supp}\, \vec \psi \subset [0,T]\times K$ for some compact set $K\subset \mathbb{R}^2$. Therefore, we can write \begin{equation*} \int_0^T \int_{K}\varrho_\varepsilon \ue\otimes \ue :\nabla \vec \psi \, \dxdt= \int_0^T\int_{K} \ue \otimes \ue :\nabla \vec \psi \, \dxdt+\varepsilon \int_0^T\int_{K} R_\varepsilon \ue \otimes \ue :\nabla \vec \psi \, \dxdt\, .
\end{equation*} As a consequence of the uniform bounds, e.g. $(\ue)_\varepsilon \subset L^\infty_T(H^s)$ and $(R_\varepsilon)_\varepsilon\subset L_T^\infty(L^\infty)$, the second integral on the right-hand side is of order $\varepsilon$. \qed \end{proof} Thanks to Lemma \ref{l:approx_convective_term}, we are reduced to studying the convergence (with respect to $\varepsilon$) of the integral \begin{equation*} -\int_0^T\int_{\mathbb{R}^2} \ue \otimes \ue :\nabla \vec \psi \, \dxdt=\int_0^T\int_{\mathbb{R}^2} {\rm div}\,(\ue \otimes \ue) \cdot \vec \psi \, \dxdt\, . \end{equation*} Owing to the divergence-free condition, we can write: \begin{equation}\label{eq:conv_term_rel} {\rm div}\, (\ue \otimes \ue )=\ue \cdot \nabla \ue =\frac{1}{2}\nabla |\ue|^2+\omega_\varepsilon \, \ue^\perp\, , \end{equation} where we have denoted $\omega_\varepsilon:={\rm curl}\, \ue=-\partial_2u_\varepsilon^1+\partial_1u_\varepsilon^2$. Notice that the former term, since it is a perfect gradient, vanishes identically when tested against $\vec \psi$ such that ${\rm div}\, \vec \psi=0$. As for the latter term, we take advantage of equation \eqref{eq:wave_mom}. Taking the ${\rm curl}\,$, we get \begin{equation}\label{relation_gamma} \partial_t \widetilde\gamma_\varepsilon={\rm curl}\, f_\varepsilon \, , \end{equation} where we have set $\widetilde\gamma_\varepsilon:= {\rm curl}\, \vec V_\varepsilon$ with $\vec V_\varepsilon:= \varrho_\varepsilon \ue$. We recall also that $f_\varepsilon$, defined in \eqref{def_f}, is uniformly bounded in the space $L^\infty_T(H^{s-1})$. Then, relation \eqref{relation_gamma} implies that the family $(\partial_t \widetilde\gamma_\varepsilon)_\varepsilon$ is uniformly bounded in $L^\infty_T(H^{s-2})$. As a result, we get $(\widetilde\gamma_\varepsilon)_\varepsilon \subset W^{1,\infty}_T(H^{s-2})$. On the other hand, the sequence $(\nabla \widetilde\gamma_\varepsilon) _\varepsilon$ is also uniformly bounded in $L^\infty_T(H^{s-2})$.
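Before proceeding, let us record two elementary computations, both relying on the convention ${\rm curl}\,\vec v=-\partial_2v^1+\partial_1v^2$ together with the (assumed) convention $\vec v^\perp=(-v^2,v^1)$. First, the identity \eqref{eq:conv_term_rel} can be checked componentwise: for the first component, one has
\begin{equation*}
\frac{1}{2}\partial_1|\ue|^2+\omega_\varepsilon\,\big(\ue^\perp\big)^1\,=\,u_\varepsilon^1\partial_1u_\varepsilon^1+u_\varepsilon^2\partial_1u_\varepsilon^2-u_\varepsilon^2\big(\partial_1u_\varepsilon^2-\partial_2u_\varepsilon^1\big)\,=\,u_\varepsilon^1\partial_1u_\varepsilon^1+u_\varepsilon^2\partial_2u_\varepsilon^1\,=\,\big(\ue\cdot\nabla\ue\big)^1\, ,
\end{equation*}
and similarly for the second component. Second, taking the ${\rm curl}$ of \eqref{eq:wave_mom} removes both terms which are not of order $\varepsilon$, since ${\rm curl}\,\nabla \Pi_\varepsilon=0$ and
\begin{equation*}
{\rm curl}\,\big(\ue^\perp\big)\,=\,-\partial_2(-u_\varepsilon^2)+\partial_1u_\varepsilon^1\,=\,{\rm div}\,\ue\,=\,0\, ,
\end{equation*}
which explains why \eqref{relation_gamma} contains only the uniformly bounded source term ${\rm curl}\, f_\varepsilon$.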
At this point, the Ascoli-Arzel\`a theorem (we refer to Theorem \ref{app:th_A-A} in this regard) gives compactness of $(\widetilde\gamma_\varepsilon)_\varepsilon$ in e.g. $C^0_T(H^{s-2}_{\rm loc})$. Then, it converges (up to extracting a subsequence) to a tempered distribution $\widetilde\gamma$ in the same space. Thus, it follows that \begin{equation*} \widetilde\gamma_\varepsilon \longrightarrow \widetilde\gamma \quad \text{in}\quad C^0_T(H_{\rm loc}^{s-2})\,. \end{equation*} But since we already know the convergence $\vec{V}_\varepsilon:=\varrho_\varepsilon \ue \weakstar \vec u$ in e.g. $L^\infty_T(L^2)$, it follows that $\widetilde\gamma_\varepsilon:= {\rm curl}\, \vec V_\varepsilon \longrightarrow \omega:={\rm curl}\, \vec u$ in $\mathcal D^\prime$, hence $\widetilde\gamma ={\rm curl}\, \vec u=\omega$. Finally, writing $\varrho_\varepsilon=1+\varepsilon R_\varepsilon$, we obtain $$ \widetilde\gamma_\varepsilon:={\rm curl}\, (\varrho_\varepsilon \ue)=\omega_\varepsilon+\varepsilon {\rm curl}\, (R_\varepsilon \ue)\, , $$ where the family $({\rm curl}\, (R_\varepsilon \ue))_\varepsilon$ is uniformly bounded in $L^\infty_T(H^{s-1})$. From this relation and the previous analysis, we deduce the strong convergence (up to an extraction), for $\varepsilon\rightarrow 0^+$: \begin{equation*} \omega_\varepsilon\longrightarrow \omega \qquad \text{ in }\qquad L^\infty_T(H^{s-2}_{\rm loc})\,. \end{equation*} In the end, we have proved the following convergence result for the convective term ${\rm div}\, (\vec u_\varepsilon \otimes \ue)$. \begin{lemma}\label{lemma:limit_convective_term} Let $T>0$.
Up to passing to a suitable subsequence, one has the following convergence for $\varepsilon\rightarrow 0^+$: \begin{equation*}\label{limit_convective_term} \int_0^T\int_{\mathbb{R}^2} \ue \otimes \ue :\nabla \vec \psi \, \dxdt\longrightarrow \int_0^T\int_{\mathbb{R}^2} \omega\, \vu^\perp \cdot \vec \psi\, \dxdt\, , \end{equation*} for any test function $\vec \psi \in C_c^\infty([0,T[\times \mathbb{R}^2;\mathbb{R}^2)$ such that ${\rm div}\, \vec \psi=0$. \end{lemma} As a consequence of the previous lemma, reversing the equalities in \eqref{eq:conv_term_rel}, for the convective term $\varrho_\varepsilon \ue \otimes \ue$ we find that \begin{equation}\label{lim_conv-term} \int_0^T \int_{\mathbb{R}^2}\varrho_\varepsilon \ue\otimes \ue :\nabla \vec \psi \, \dxdt \longrightarrow \int_0^T\int_{\mathbb{R}^2} \vu \otimes \vu :\nabla \vec \psi \, \dxdt \end{equation} for $\varepsilon \rightarrow 0^+$ and for all smooth divergence-free test functions $\vec \psi$. \subsection{Description of the limit system} \label{ss:limit_E} With the convergence established in Paragraph \ref{ss:wave_system}, we can pass to the limit in the momentum equation. To begin with, we take a test function $\vec\psi$ such that \begin{equation}\label{def:test_function} \vec \psi =\nabla^\perp \varphi\quad \quad \text{with}\quad \quad \varphi \in C_c^\infty ([0,T[\times \mathbb{R}^2;\mathbb{R})\, . \end{equation} For such a $\vec\psi$, all the gradient terms vanish identically. First of all, we recall the momentum equation in its weak formulation: \begin{equation}\label{weak_mom_limit} \int_0^T\!\!\!\int_{\mathbb{R}^2} \left( - \vre \ue \cdot \partial_t \vec\psi - \vre [\ue\otimes\ue] : \nabla_x \vec\psi + \frac{1}{\ep} \, \vre \ue^\perp \cdot \vec\psi \right) \dxdt = \int_{\mathbb{R}^2}\vrez \uez \cdot \vec\psi (0,\cdot) \dx\, .
\end{equation} Making use of the uniform bounds, we can pass to the limit in the $\partial_t$ term, and thanks to our assumptions and embeddings we have $\varrho_{0,\varepsilon}\vu_{0,\varepsilon}\weak \vu_0$ in e.g. $L_{\rm loc}^{2}$. Let us consider now the Coriolis term. We can write: \begin{equation*} \int_0^T\int_{\mathbb{R}^2}\frac{1}{\varepsilon}\varrho_\varepsilon \ue^\perp \cdot \vec \psi\, \dxdt=\int_0^T\int_{\mathbb{R}^2}R_\varepsilon \ue^\perp \cdot \vec \psi\, \dxdt+\int_0^T\int_{\mathbb{R}^2}\frac{1}{\varepsilon} \ue^\perp \cdot \vec \psi\, \dxdt\, . \end{equation*} Since $\ue$ is divergence-free, the latter term vanishes when tested against such $\vec \psi$ defined as in \eqref{def:test_function}. On the other hand, again thanks to Lemma \ref{l:reg_Ru_veps}, one can get \begin{equation*} \int_0^T\int_{\mathbb{R}^2}R_\varepsilon \ue^\perp \cdot \vec \psi\, \dxdt\longrightarrow \int_0^T\int_{\mathbb{R}^2}R \vu^\perp \cdot \vec \psi\, \dxdt\, . \end{equation*} In the end, letting $\varepsilon \rightarrow 0^+$ in \eqref{weak_mom_limit}, we gather (remembering also \eqref{lim_conv-term}) \begin{equation*} \int_0^T\!\!\!\int_{\mathbb{R}^2} \left( - \vu \cdot \partial_t \vec\psi - \vu \otimes\vu : \nabla_x \vec\psi + \, R \vu^\perp \cdot \vec\psi \right) \dxdt = \int_{\mathbb{R}^2} \vu_0 \cdot \vec\psi (0,\cdot) \dx\, , \end{equation*} for any test function $\vec \psi$ defined as in \eqref{def:test_function}. From this relation, we immediately obtain that \begin{equation*} \partial_t \vu +{\rm div}\, (\vu \otimes \vu) +R\vu^\perp +\nabla \Pi=0\, , \end{equation*} for a suitable pressure term $\nabla \Pi$. This term appears as a result of the weak formulation of the problem. It can be viewed as a Lagrange multiplier associated with the divergence-free constraint on $\vu$. Finally, the quantity $R$ satisfies the transport equation found in \eqref{eq:limit_transp_R}.
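For the reader's convenience, let us also justify with a one-line computation why test functions of the form \eqref{def:test_function} make all the gradient terms disappear. With the (assumed) convention $\nabla^\perp \varphi:=(-\partial_2\varphi,\partial_1\varphi)$, one has ${\rm div}\,(\nabla^\perp\varphi)=-\partial_1\partial_2\varphi+\partial_2\partial_1\varphi=0$, whence, after an integration by parts,
\begin{equation*}
\int_{\mathbb{R}^2}\nabla \Pi\cdot \vec\psi\, \dx\,=\,\int_{\mathbb{R}^2}\nabla \Pi\cdot \nabla^\perp\varphi\, \dx\,=\,-\int_{\mathbb{R}^2}\Pi\; {\rm div}\,\big(\nabla^\perp\varphi\big)\, \dx\,=\,0\, .
\end{equation*}
A similar integration by parts, combined with the divergence-free condition on $\ue$, gives $\int_{\mathbb{R}^2}\ue^\perp\cdot\nabla^\perp\varphi\,\dx=\int_{\mathbb{R}^2}\ue\cdot\nabla\varphi\,\dx=0$, consistently with the treatment of the singular part of the Coriolis term above.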
We conclude this paragraph by devoting our attention to the analysis of the regularity of $\nabla \Pi$. We apply the ${\rm div}\,$ operator to the momentum equation in \eqref{QH-Euler_syst}, deducing that $\Pi$ satisfies \begin{equation}\label{time_reg_pressure} -\Delta \Pi= {\rm div}\, G \qquad \text{where}\qquad G:= \vec u \cdot \nabla \vec u+R\vec u^\perp\, . \end{equation} On the one hand, Lemma \ref{lem:Lax-Milgram_type} gives \begin{equation*} \|\nabla \Pi\|_{L^2}\leq C\|G\|_{L^2}\leq C\left(\|\vec u\|_{L^2}\|\nabla \vec u\|_{L^\infty}+\|R\|_{L^\infty}\|\vec u\|_{L^2}\right)\, . \end{equation*} This implies that $\nabla \Pi\in L^\infty_T(L^2)$. On the other hand, owing to the divergence-free condition on $\vec u$, we have \begin{equation*} \|\Delta \Pi\|_{H^{s-1}}\leq C\left(\|\vec u\|^2_{H^s}+\|R\|_{L^\infty}\|\vec u\|_{H^s}+\|\nabla R\|_{H^{s-1}}\|\vec u\|_{L^\infty}\right)\, , \end{equation*} where we have also used Proposition \ref{prop:app_fine_tame_est}. In the end, we deduce that $\Delta \Pi \in L^\infty_T(H^{s-1})$. Thus, we conclude that $\nabla \Pi\in L^\infty_T(H^s)$. At this point, employing classical results on solutions to transport equations in Sobolev spaces, we may infer the claimed $C^0$ time regularity of $\vec u$ and $R$. Moreover, thanks to the fact that $R$ and $\vec u$ are both continuous in time, from the elliptic equation \eqref{time_reg_pressure} we get that also $\nabla \Pi \in C^0_T(H^s)$. \section{Well-posedness for the quasi-homogeneous system}\label{s:well-posedness_Q-H} In this section, for the reader's convenience, we review the well-posedness theory of the quasi-homogeneous Euler system \eqref{system_Q-H_thm}, in particular the ``asymptotically global'' well-posedness result presented in \cite{C-F_sub}. In the first Subsection \ref{sub:local_well-pos_H^S}, we sketch the local well-posedness theorem for system \eqref{system_Q-H_thm} in the $H^s$ framework.
Actually, equations \eqref{system_Q-H_thm} are locally well-posed in all $B^s_{p,r}$ Besov spaces, under the condition \eqref{Lip_cond}. We refer to \cite{C-F_RWA}, where the authors apply the standard Littlewood-Paley machinery to the quasi-homogeneous ideal MHD system to recover local in time well-posedness in the spaces $B^s_{p,r}$ for any $1<p<+\infty$. The case $p=+\infty$ was reached in \cite{C-F_sub} with a different approach, based on the vorticity formulation of the momentum equation (see also Subsection \ref{ss:W-P_Besov} for more details concerning the ``critical'' case $p=+\infty$). In Subsection \ref{ss:improved_lifespan}, we explicitly derive the lower bound for the lifespan of solutions to system \eqref{system_Q-H_thm}. The reason for detailing the derivation of \eqref{T-ast:improved} for $T^\ast$ is that it is much simpler than the one presented in \cite{C-F_sub}, where (due to the presence of the magnetic field) the lifespan behaves like the fifth iterated logarithm of the norms of the initial oscillation $R_0$ and the initial magnetic field. In addition, that lower bound (see \eqref{T-ast:improved} below) improves the standard lower bound coming from the hyperbolic theory, where the lifespan is bounded from below by the inverse of the norm of the initial data. \subsection{Local well-posedness in $H^s$ spaces}\label{sub:local_well-pos_H^S} In this subsection, we state the well-posedness result for system \eqref{system_Q-H_thm} in the $H^s$ functional framework with $s>2$, in which we have analysed the well-posedness issue for system \eqref{Euler_eps}. We limit ourselves to presenting the proof, by energy methods, of the uniqueness of solutions (see Subsection \ref{s:uniq_QH-E}) and the implications of the continuation criterion (see Subsection \ref{ss:subsect_cont_cri}): in order to show these, we need some preparatory material, stated in Subsection \ref{s:existence_QH-E}. \begin{theorem}\label{thm:well-posedness_Q-H-Euler_bis} Take $s>2$.
Let $\big(R_0,\vec u_0 \big)$ be initial data such that $R_0\in L^{\infty}$, with $\nabla R_0\in H^{s-1}$, and such that the divergence-free vector field $\vu_0$ belongs to $H^s$. Then, there exists a time $T^\ast > 0$ such that, on $[0,T^\ast]\times\mathbb{R}^2$, problem \eqref{system_Q-H_thm} has a unique solution $(R,\vu, \nabla \Pi)$ with: \begin{itemize} \item $R\in C^0\big([0,T^\ast]\times \mathbb{R}^2\big)$ and $\nabla R\in C^0_{T^\ast}(H^{s-1}(\mathbb{R}^2))$; \item $\vu$ and $\nabla \Pi$ belong to $C^0_{T^\ast}( H^{s}(\mathbb{R}^2))$. \end{itemize} Moreover, if $(R, \vec u, \nabla \Pi)$ is a solution to \eqref{system_Q-H_thm} on $[0,T^\ast[\, \times \mathbb{R}^2$ ($T^\ast<+\infty$) with the properties described above, and \begin{equation*} \int_0^{T^\ast} \big\| \nabla \vu(t) \big\|_{L^\infty} \dt < +\infty\,, \end{equation*} then the triplet $(R, \vu, \nabla \Pi)$ can be continued beyond $T^\ast$ into a solution of system \eqref{system_Q-H_thm} with the same regularity. \end{theorem} \subsubsection{Uniqueness by an energy argument}\label{s:uniq_QH-E} Uniqueness in our functional framework is a consequence of the following stability result, whose proof is based on an energy method for the difference of two solutions to the quasi-homogeneous Euler system \eqref{system_Q-H_thm}. We present here the classical proof with the $C^1_T$ regularity assumption on the time variable (see condition $(i)$ in the theorem below). In order to relax this requirement, one has to argue as done in Theorem \ref{th:uniq_bis}, with the additional $L^2$ integrability condition on the densities. \begin{theorem}\label{thm:stability_criterion} Let $(R_1, \vec u_1)$ and $(R_2, \vec u_2)$ be two solutions to the quasi-homogeneous Euler system \eqref{system_Q-H_thm}.
Assume that, for some $T>0$, one has the following properties: \begin{enumerate}[(i)] \item the two quantities $\delta R\,:=\,R_1-R_2$ and $\delta \vec u\,:=\,\vec u_1-\vec u_2$ belong to the space $C^1\big([0,T];L^2(\mathbb{R}^2)\big)$; \item $\vec u_1 \in L^1\big([0,T];W^{1, \infty}(\mathbb{R}^2)\big)$ and $\nabla R_1 \in L^1\big([0,T];L^\infty(\mathbb{R}^2)\big)$. \end{enumerate} Then, for all $t\in[0,T]$, we have the stability inequality: \begin{equation}\label{stab_est_QH-E} \|\delta R(t)\|_{L^2}^2+\|\delta \vu(t)\|_{L^2}^2\leq C \left(\|\delta R_0\|_{L^2}^2+\|\delta \vu_0\|_{L^2}^2\right) \, e^{CB(t)}\, , \end{equation} for a universal constant $C>0$, where we have defined \begin{equation}\label{Def_A} B(t):= \displaystyle \int_0^t\left(\|\nabla R_1(\tau )\|_{L^\infty}+\| \vu_1(\tau )\|_{W^{1,\infty}}\right)\, \detau\, . \end{equation} \end{theorem} \begin{proof} First of all, we take the difference of the two systems \eqref{system_Q-H_thm} solved by the triplets $(R_1,\vu_1, \nabla \Pi_1)$ and $(R_2,\vu_2, \nabla \Pi_2)$, obtaining \begin{equation}\label{system_diff_QH-E} \begin{cases} \partial_t \delta R+\vu_2 \cdot \nabla \delta R=-\delta \vu \cdot \nabla R_1\\ \partial_t \delta \vu+\vu_2 \cdot \nabla \delta \vu + R_2\, \delta \vu^\perp+\nabla \delta \Pi =-\delta \vu \cdot \nabla \vu_1 -\delta R\, \vu_1^\perp\\ {\rm div}\, \delta \vec u=0\, , \end{cases} \end{equation} where $\delta \Pi := \Pi_1-\Pi_2$. We start by testing the first equation of \eqref{system_diff_QH-E} against $\delta R$ and we get \begin{equation*} \frac{1}{2}\frac{d}{dt}\|\delta R\|_{L^2}^2=-\int_{\mathbb{R}^2}(\delta \vu \cdot \nabla R_1)\, \delta R\leq \frac{1}{2}\|\nabla R_1\|_{L^\infty}\left(\|\delta R\|_{L^2}^2+\|\delta \vu\|_{L^2}^2\right)\, . 
\end{equation*} Next, testing the second equation against $\delta \vu$, due to the divergence-free conditions ${\rm div}\, \vu_1={\rm div}\, \vu_2=0$, we gather \begin{equation*} \frac{1}{2}\frac{d}{dt}\|\delta \vu\|_{L^2}^2=-\int_{\mathbb{R}^2}(\delta \vu \cdot \nabla \vu_1) \cdot\, \delta \vu-\int_{\mathbb{R}^2}(\delta R\; \vu_1^\perp)\cdot \delta \vu\leq \|\nabla \vu_1\|_{L^\infty}\|\delta \vu\|_{L^2}^2+\frac{1}{2}\| \vu_1\|_{L^\infty}\left(\|\delta R\|_{L^2}^2+\|\delta \vu\|_{L^2}^2\right) . \end{equation*} Putting the previous inequalities together, we finally infer \begin{equation*} \frac{1}{2}\frac{d}{dt}\left(\|\delta R\|_{L^2}^2+\|\delta \vu\|_{L^2}^2\right)\leq C \left(\|\nabla R_1\|_{L^\infty}+\| \vu_1\|_{W^{1,\infty}}\right)\left(\|\delta R\|_{L^2}^2+\|\delta \vu\|_{L^2}^2\right)\, . \end{equation*} An application of Gr\"onwall's lemma gives us the stability estimate \eqref{stab_est_QH-E}, i.e. \begin{equation*} \|\delta R(t)\|_{L^2}^2+\|\delta \vu(t)\|_{L^2}^2\leq C \left(\|\delta R_0\|_{L^2}^2+\|\delta \vu_0\|_{L^2}^2\right) \, e^{CB(t)}\, , \end{equation*} for a universal constant $C>0$ and $B(t)$ defined as in \eqref{Def_A}. \qed \end{proof} At this point, the uniqueness in the claimed framework (see Theorem \ref{thm:well-posedness_Q-H-Euler_bis}) follows from the previous statement. Let us sketch the proof. We take an initial datum $(R_0, \vec u_0)$ satisfying the assumptions in Theorem \ref{thm:well-posedness_Q-H-Euler_bis}. We consider two solutions $(R_1, \vec u_1)$ and $(R_2, \vec u_2)$ of system \eqref{system_Q-H_thm}, related to the initial datum $(R_0, \vec u_0)$. Moreover, those solutions have to fulfill the regularity properties stated in the quoted theorem. Now, due to embeddings, we have only to detail how the previous solutions satisfy condition $(i)$ in Theorem \ref{thm:stability_criterion}. We focus on the regularity of $\delta R$, since similar arguments apply to $\delta \vec u$.
We look at the first equation in \eqref{system_diff_QH-E}: $\delta R$ is transported by the divergence-free vector field $\vec u_2$, with, in addition, an ``external force'' $g:=-\delta \vec u \cdot \nabla R_1$. Thanks to the regularity properties presented in Theorem \ref{thm:well-posedness_Q-H-Euler_bis} and embeddings, we know that $\delta\vec u\in C^0_T(L^2)$ and $R_1\in C^0_T(W^{1,\infty})$. Thus, one can deduce that $g\in C^0_T(L^2)$. Therefore, from classical results for transport equations, we get that $\delta R\in C^1_T(L^2)$, as claimed. In the end, recalling that at the initial time $(\delta R, \delta \vec u)_{|t=0}=0$, we can apply Theorem \ref{thm:stability_criterion} to infer that $\|(\delta R, \delta \vec u)\|_{L^\infty_T(L^2)}=0$. This implies the desired uniqueness. \subsubsection{A priori estimates}\label{s:existence_QH-E} We start by bounding the $L^p$ norms of the solutions. First, since $R$ is transported by $\vu$ we have, for any $t\geq 0$, \begin{equation*} \|R(t)\|_{L^\infty}=\|R_0\|_{L^\infty}\, . \end{equation*} In addition, an energy estimate for the momentum equation in \eqref{system_Q-H_thm} yields \begin{equation}\label{eq:L^2_velocity} \|\vu(t)\|_{L^2}\leq \|\vu_0\|_{L^2}\, . \end{equation} Making use of the dyadic blocks $\Delta_j$, for $i=1,2$ we find \begin{equation}\label{QH-Euler_vor_dyadic} \begin{cases} \partial_t\Delta_j\, \partial_iR+\vu \cdot \nabla \Delta_j\, \partial_i R=[\vu\cdot \nabla,\Delta_j]\, \partial_i R-\Delta_j(\partial_i \vec u \cdot \nabla R)\\ \partial_t \Delta_j \vec u +\vu \cdot \nabla \Delta_j \vu + \Delta_j \nabla \Pi=[\vu\cdot \nabla, \Delta_j]\vu-\Delta_j (R\vu^\perp)\, .
\end{cases} \end{equation} Following the same lines as in Subsection \ref{ss:unif_est}, we can write \begin{equation*} \begin{split} 2^{j(s-1)}\|\Delta_j\nabla R(t)\|_{L^2}+2^{js}\|\Delta_j \vu(t)\|_{L^2}&\leq C\left(2^{j(s-1)}\|\Delta_j \nabla R_0\|_{L^2}+2^{js}\|\Delta_j \vu_0\|_{L^2}\right)\\ &+C\int_0^t c_j(\tau)\|\vu(\tau) \|_{H^s}\|R(\tau)\|_{L^\infty}\, \detau \\ &+C\int_0^t c_j(\tau)\left(\|\vu(\tau) \|_{H^s}\|\nabla R(\tau)\|_{H^{s-1}}+\|\vu(\tau) \|_{H^s}^2\right) \, \detau \, , \end{split} \end{equation*} for suitable sequences $(c_j(t))_{j\geq -1}$ belonging to the unit sphere of $\ell^2$. Now, we define for all $t\geq0$: \begin{equation}\label{def_E(t)} \widetilde E(t):=\|R(t)\|_{L^\infty}+\|\nabla R(t)\|_{H^{s-1}}+\|\vu(t)\|_{H^s}\, . \end{equation} Thanks to the previous bounds, employing Minkowski's inequality (see Section \ref{sec:assorted_ineq}), we gather \begin{equation*} \widetilde E(t)\leq C\, \widetilde E(0)+C\int_0^t \widetilde E(\tau)^2 \, \detau\, . \end{equation*} At this point, the goal is to close the estimate, bounding the integral on the right-hand side on a small time interval. To this purpose, we define the time $T^\ast >0$ such that \begin{equation}\label{def_T} T^\ast :=\sup \left\{t>0:\int_0^t \widetilde E(\tau)^2 \, \detau \leq \widetilde E(0) \right\}\, . \end{equation} Then, we deduce $\widetilde E(t)\leq C\, \widetilde E(0)$ for all times $t\in [0,T^\ast ]$ and for some positive constant $C=C(s)$. \subsubsection{The continuation criterion}\label{ss:subsect_cont_cri} This subsection is devoted to the implications of the continuation result (Proposition \ref{th:cont-crit} below) for solutions to system \eqref{system_Q-H_thm}. The proof is omitted, since it is an easy adaptation of the more complex case we will present in Subsection \ref{subsec:cont_crit_besov}.
\begin{proposition} \label{th:cont-crit} Let $T > 0$ and let $(R, \vu)$ be a solution to system \eqref{system_Q-H_thm} on $[0,T[\,\times\mathbb{R}^2$, enjoying the properties described in the previous Theorem \ref{thm:well-posedness_Q-H-Euler_bis} for all $t<T$. Assume that \begin{equation}\label{cond_crit-cont} \int_0^{T} \big\| \nabla \vu(t) \big\|_{L^\infty} \dt < +\infty\,. \end{equation} Then, $$ \sup_{0\leq t<T}\left(\|R\|_{L^\infty}+\|\nabla R\|_{H^{s-1}}+\|\vec u\|_{H^s}\right)<+\infty \, . $$ \end{proposition} As an immediate corollary we have that if $T<+\infty$, then the couple $(R, \vu)$ can be continued beyond $T$ into a solution of system \eqref{system_Q-H_thm} with the same regularity. As a matter of fact, Proposition \ref{th:cont-crit} ensures that $\|R\|_{L^\infty_T(L^\infty)}$, $\|\nabla R\|_{L^\infty_T(H^{s-1})}$ and $\|\vec u\|_{L^\infty_T({H^s})}$ are finite. From the previous Subsection \ref{s:existence_QH-E}, we know that there exists a time $\overline \tau$ depending on $s$, $\|R\|_{L^\infty_T(L^\infty)}$, $\|\nabla R\|_{L^\infty_T(H^{s-1})}$, $\|\vec u\|_{L^\infty_T({H^s})}$ and on the norm of the data such that for all $\widetilde{T}<T$, the quasi-homogeneous system with data $\big(R(\widetilde{T}),\vec u(\widetilde{T})\big)$ has a unique solution until time $\overline \tau$. Now, taking $\widetilde{T}=T-\overline \tau/2$, we get a continuation of $(R,\vec u)$ up to time $T+\overline \tau/2$. \subsection{Well-posedness in Besov spaces} \label{ss:W-P_Besov} The main goal of this subsection is to review the lifespan estimate presented in \cite{C-F_sub} (for the MHD system) in order to get \eqref{improved_low_bound} (we refer also to Subsection \ref{ss:improved_lifespan} for the details of the proof). To do so, one has to work in critical Besov spaces where one can take advantage of the improved estimates for linear transport equations \textit{à la} Hmidi-Keraani-Vishik.
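For the reader's convenience, let us recall the typical form of such an improved estimate (we refer to Theorem \ref{thm:improved_est_transport} for the precise statement used in this work): for a smooth solution $f$ of the transport equation $\partial_t f+\vec v\cdot \nabla f=g$, with divergence-free velocity field $\vec v$, one has \begin{equation*} \|f(t)\|_{B^0_{\infty,1}}\leq C\left(\|f(0)\|_{B^0_{\infty,1}}+\int_0^t\|g(\tau)\|_{B^0_{\infty,1}}\, \detau\right)\left(1+\int_0^t\|\nabla \vec v(\tau)\|_{L^\infty}\, \detau\right)\, , \end{equation*} so that the growth of the $B^0_{\infty,1}$ norm is \textit{linear} in $\int_0^t\|\nabla \vec v\|_{L^\infty}\, \detau$, instead of exponential as in the classical theory.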
In order to ensure that the condition \eqref{Lip_cond} is satisfied, the lowest regularity space we can reach is $B^1_{\infty,1}$. In addition, since $\vec u\in B^1_{\infty,1}$, we have that the $B^0_{\infty,1}$ norm of the ${\rm curl}\, \vu$ can be bounded \textit{linearly} with respect to $\|\nabla \vu\|_{L^1_t(L^\infty)}$, instead of exponentially as in classical $B^s_{p,r}$ estimates (see Theorem \ref{thm:improved_est_transport}). Finally, we construct a ``bridge'' between $H^s$ and $B^1_{\infty,1}$ Besov spaces establishing a continuation criterion, in the spirit of the one by Beale-Kato-Majda in \cite{B-K-M} (see Subsection \ref{subsec:cont_crit_besov}). We start by proving the local well-posedness result for system \eqref{system_Q-H_thm} in $B^s_{\infty,r}$ and, in particular, in the end-point space $B^1_{\infty,1}$. In this regard, Subsection \ref{subsec:aprioriBesov} is devoted to the \emph{a priori} estimates, presenting also the standard lower bound (coming from the hyperbolic theory) for the lifespan of solutions. Next, we construct the smooth approximate solutions (in Subsection \ref{ss:approx_sol_QH-E}) showing the uniform bounds for those regular solutions in Subsection \ref{ss:unif_bounds_QH-E}, and sketching the convergence (in the regularisation parameter $n$) argument in Subsection \ref{subsect:conv_argument}. \begin{theorem}\label{thm:W-P_besov_spaces_p-infty} Let $(s,r)\in \mathbb{R}\times [1,+\infty]$ such that $s>1$ or $s=r=1$. Let $(R_0, \vec u_0)$ be an initial datum such that $R_0\in B^s_{\infty, r}(\mathbb{R}^2)$ and the divergence-free vector field $\vec u_0\in L^2(\mathbb{R}^2)\cap B^s_{\infty,r}(\mathbb{R}^2)$. 
Then, there exists a time $T^\ast>0$ such that system \eqref{system_Q-H_thm} has a unique solution $(R, \vec u)$ with the following regularity properties, if $r<+\infty$: \begin{itemize} \item $R\in C^0([0,T^\ast];B^s_{\infty,r}(\mathbb{R}^2))\cap C^1([0,T^\ast];B^{s-1}_{\infty,r}(\mathbb{R}^2)) $; \item $\vec u$ and $\nabla \Pi$ belong to $C^0([0,T^\ast];B^s_{\infty,r}(\mathbb{R}^2))\cap C^1([0,T^\ast];L^2(\mathbb{R}^2)\cap B^{s-1}_{\infty,r}(\mathbb{R}^2))$. \end{itemize} In the case when $r=+\infty$, we need to replace $C^0([0,T^\ast];B^s_{\infty,r}(\mathbb{R}^2))$ by the space $C_w^0([0,T^\ast];B^s_{\infty,r}(\mathbb{R}^2))$. \end{theorem} We highlight that the physically relevant $L^2$ condition on $\vec u$, in the previous theorem, is necessary to control the low frequency part of the solution, so as to reconstruct the velocity from its ${\rm curl}\,$ (see Lemma \ref{l:rel_curl} below). \subsubsection{A priori estimate in $B^s_{\infty,r}$}\label{subsec:aprioriBesov} To begin with, we prove a general relation between a function and its ${\rm curl}\,$ that will be useful in the sequel. \begin{lemma}\label{l:rel_curl} Assume $f\in (L^2 \cap B^s_{\infty ,r})(\mathbb{R}^2)$ to be divergence-free. Denote by ${\rm curl}\, f:=-\partial_2f^1+\partial_1f^2$ its ${\rm curl}\,$ in $\mathbb{R}^2$. Then, we have \begin{equation}\label{eq:rel_curl} \|f\|_{L^2\cap B^s_{\infty ,r}}\sim\|f\|_{L^2}+\|{\rm curl}\, f\|_{B^{s-1}_{\infty, r}}\, . \end{equation} \end{lemma} \begin{proof} Using the divergence-free condition ${\rm div}\, f=0$, we can write the \textit{Biot-Savart law}: \begin{equation*} f^1=(-\Delta )^{-1}\partial_2\, {\rm curl}\, f \quad \quad \text{and}\quad \quad f^2=-(-\Delta )^{-1}\partial_1\, {\rm curl}\, f\, . 
\end{equation*} From that, we deduce \begin{equation*} \|f\|_{B^s_{\infty ,r}}\sim \left\|\Delta_{-1}(-\Delta )^{-1}\sum_{i=1}^2(-1)^i\partial_i\, {\rm curl}\, f\right\|_{L^\infty}+\left\| \mathbbm{1}_{\{\nu \geq 0\}}\, 2^{\nu s}\|\Delta_{\nu}(-\Delta )^{-1}\sum_{i=1}^2(-1)^i\partial_i\, {\rm curl}\, f\|_{L^\infty}\right\|_{\ell^r}\, . \end{equation*} On the one hand, if $\nu \geq 0$ we know that $\Delta_{\nu}{\rm curl}\, f$ is spectrally supported in an annulus, on which the symbol of $(-\Delta )^{-1}\partial_i $ is smooth. Hence, by employing the Bernstein inequalities of Lemma \ref{l:bern}, we gather \begin{equation*} 2^{\nu s}\|\Delta_{\nu}(-\Delta )^{-1}\sum_{i=1}^2(-1)^i\partial_i\, {\rm curl}\, f\|_{L^\infty}\sim \, 2^{(s-1)\nu}\|\Delta_{\nu}{\rm curl}\, f\|_{L^\infty}\, . \end{equation*} On the other hand, using the fact that the symbol of $(-\Delta )^{-1}\nabla {\rm curl}\,$ is homogeneous of degree zero and bounded on the unit sphere, Bernstein inequalities yield \begin{equation*} \|\Delta_{-1}(-\Delta )^{-1}\sum_{i=1}^2(-1)^i\partial_i\, {\rm curl}\, f\|_{L^\infty}\leq C \|\Delta_{-1}(-\Delta )^{-1}\nabla {\rm curl}\, f\|_{L^\infty}\leq C\|f\|_{L^2}\, . \end{equation*} Therefore, \begin{equation*} \|f\|_{B^s_{\infty ,r}}\leq C\left( \|f\|_{L^2}+\|{\rm curl}\, f\|_{B_{\infty, r}^{s-1}}\right)\, . \end{equation*} The reverse inequality is immediate, since Bernstein inequalities give $\|{\rm curl}\, f\|_{B^{s-1}_{\infty,r}}\leq C\, \|f\|_{B^s_{\infty,r}}$. This completes the proof of the lemma. \qed \end{proof} \medskip In the sequel of this subsection, we will show \textit{a priori} estimates for smooth solutions in the relevant norms. We start by recalling that the $L^2$ norm of the velocity field is preserved. In other words, we have: \begin{equation}\label{L2_velocity} \|\vec u(t)\|_{L^2}=\|\vec u_0\|_{L^2}\, . \end{equation} Thanks to Lemma \ref{l:rel_curl}, in order to bound $\vu$ in $B^s_{\infty ,r}$, it will be enough to focus on estimates for ${\rm curl}\, \vu$ in $B^{s-1}_{\infty ,r}$.
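Let us also briefly justify the \textit{Biot-Savart law} used in the proof of Lemma \ref{l:rel_curl}. Owing to the divergence-free condition $\partial_1f^1=-\partial_2f^2$, a direct computation gives \begin{equation*} \Delta f^1=\partial_1\partial_1f^1+\partial_2\partial_2f^1=-\partial_1\partial_2f^2+\partial_2\partial_2f^1=-\partial_2\left(\partial_1f^2-\partial_2f^1\right)=-\partial_2\, {\rm curl}\, f\, , \end{equation*} whence $f^1=(-\Delta)^{-1}\partial_2\, {\rm curl}\, f$; the formula for $f^2$ follows in the same way.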
Hence, we apply the ${\rm curl}\,$ operator to the second equation in system \eqref{system_Q-H_thm} to get \begin{equation}\label{QH-Euler_vorticity_Bes} \begin{cases} \partial_tR+\vu \cdot \nabla R=0\\ \partial_t \omega +\vu \cdot \nabla \omega=-{\rm div}\, (R\vu)\, , \end{cases} \end{equation} where we recall $\omega :={\rm curl}\, \vu=-\partial_2u^1+\partial_1u^2$. Now, since $R$ is transported by $\vu$ we have, for any $t\geq 0$, \begin{equation*} \|R(t)\|_{L^\infty}=\|R_0\|_{L^\infty}\leq \|R_0\|_{B^s_{\infty ,r}}\, . \end{equation*} At this point we apply the dyadic blocks $\Delta_j$ to the system \eqref{QH-Euler_vorticity_Bes} and we find \begin{equation}\label{QH-Euler_vor_dyadic_Bes} \begin{cases} \partial_t\Delta_j R+\vu \cdot \nabla \Delta_j R=[\vu\cdot \nabla,\Delta_j]R\\ \partial_t \Delta_j \omega +\vu \cdot \nabla \Delta_j \omega=[\vu\cdot \nabla, \Delta_j]\omega-\Delta_j{\rm div}\, (R\vu)\, . \end{cases} \end{equation} For the term ${\rm div}\, (R\vec u)$, we have \begin{equation}\label{est_force term_Bes} \|{\rm div}\, (R\vu)\|_{B^{s-1}_{\infty,r}}\leq C\, \| R\vu\|_{B^{s}_{\infty,r}}\leq C \left(\|R\|_{L^\infty}\|\vu \|_{B^s_{\infty ,r}}+\|\vu\|_{L^\infty}\|R\|_{B^s_{\infty ,r}}\right)\, . 
\end{equation} Next, employing the commutator estimates (see Lemma \ref{l:commutator_est}), we get \begin{equation}\label{eq:commutator_R_Bes} \begin{split} 2^{js}\|[\vu\cdot \nabla, \Delta_j]R\|_{L^\infty}&\leq C\, c_j(t)\,\left( \|\nabla \vu \|_{L^\infty}\|R\|_{B^s_{\infty,r}}+ \|\nabla \vu \|_{B^{s-1}_{\infty,r}}\|\nabla R\|_{L^\infty}\right)\\ &\leq C\, c_j(t)\, \|\vu \|_{B^s_{\infty,r}}\|R\|_{B^s_{\infty,r}} \end{split} \end{equation} and \begin{equation}\label{eq:commutator_omega_Bes} \begin{split} 2^{j(s-1)}\|[\vu\cdot \nabla, \Delta_j]\omega\|_{L^\infty}&\leq C\, c_j(t)\,\left( \|\nabla \vu \|_{L^\infty}\|\omega\|_{B^{s-1}_{\infty,r}}+ \|\nabla \vu \|_{B^{s-1}_{\infty,r}}\|\omega\|_{L^\infty}\right)\\ &\leq C\, c_j(t)\, \|\vu \|_{B^s_{\infty,r}}^2\, , \end{split} \end{equation} for suitable sequences $(c_j(t))_{j\geq -1}$ belonging to the unit sphere of $\ell^r$. \begin{remark} We point out that we need the second estimate in Lemma \ref{l:commutator_est} to deal with \eqref{eq:commutator_omega_Bes} in the cases $s<2$, or $s=2$ with $r\neq 1$. In those cases, the Besov space $B^{s-1}_{\infty,r}$ is not contained in the Lipschitz space $W^{1,\infty}$. \end{remark} Summing up estimates \eqref{est_force term_Bes}, \eqref{eq:commutator_R_Bes} and \eqref{eq:commutator_omega_Bes}, one may derive \begin{equation}\label{Bound_E} \begin{split} 2^{js}\|\Delta_jR(t)\|_{L^\infty}+2^{j(s-1)}\|\Delta_j \omega(t)\|_{L^\infty}&\leq C\left(2^{js}\|\Delta_jR_0\|_{L^\infty}+2^{j(s-1)}\|\Delta_j \omega_0\|_{L^\infty}\right)\\ &+C\int_0^t c_j(\tau)\left(\|\vu(\tau) \|_{B^s_{\infty,r}}\|R(\tau)\|_{B^s_{\infty,r}}+\|\vu(\tau) \|_{B^s_{\infty,r}}^2\right) \, \detau. \end{split} \end{equation} At this point, we define for all $t\geq0$: \begin{equation*} E(t):=\|R(t)\|_{B^s_{\infty,r}}+\|\vu(t)\|_{L^2}+\|\omega (t)\|_{B^{s-1}_{\infty,r}}\, .
\end{equation*} Thanks to the $L^2$ estimate \eqref{L2_velocity} and the bound \eqref{Bound_E}, employing the Minkowski inequality, one may infer that \begin{equation*} E(t)\leq C\, E(0)+C\int_0^t E(\tau)^2 \, \detau\, . \end{equation*} We define now $T^\ast >0$ such that \begin{equation*} T^\ast =\sup \left\{t>0:\int_0^tE(\tau)^2 \, \detau \leq E(0) \right\}\, . \end{equation*} Then, we deduce $E(t)\leq C\, E(0)$ for all times $t\in [0,T^\ast ]$ and for some positive constant $C=C(s,r,d)$. Therefore, for all $t\in [0,T^\ast]$, we gather \begin{equation*} \int_0^tE(\tau)^2 \, \detau \leq CtE(0)^2\, . \end{equation*} By using the definition of $T^\ast$ and Lemma \ref{l:rel_curl}, we finally deduce that \begin{equation}\label{est:T-star} T^\ast \geq \frac{C}{\|R_0\|_{B^s_{\infty,r}}+\|\vu_0\|_{L^2\cap B^s_{\infty,r}}}\, . \end{equation} Indeed, if $T^\ast<+\infty$, by definition of $T^\ast$ one has $\int_0^{T^\ast}E(\tau)^2 \, \detau=E(0)$, so that $E(0)\leq CT^\ast E(0)^2$, whence $T^\ast\geq 1/\big(C\, E(0)\big)$; Lemma \ref{l:rel_curl} then allows us to control $E(0)$ by the norms of the initial data. In other words, we have shown that one can close the estimates on a small time interval $[0,T^\ast]$, with $T^\ast$ bounded from below by \eqref{est:T-star}. \subsubsection{Construction of approximate solutions} \label{ss:approx_sol_QH-E} Since the material in this subsection is standard and already presented in Subsection \ref{sec:construction_smooth_sol} for system \eqref{Euler_eps}, we will only sketch it. For any $n\in \mathbb{N}$, let \begin{equation*} (R^n_0, \vu_0^n):=(S_n R_0,\, S_n \vu_0)\, , \end{equation*} where $S_n$ is the low frequency cut-off operator as in \eqref{eq:S_j}. By the assumption $\vu_0\in L^2$, we have $\vu_0^n \in H^\infty$ and similarly $R_0^n \in C^\infty_b$. Moreover, one has \begin{equation}\label{conv-properties} \begin{split} R_0^n\rightarrow R_0 \quad &\text{in}\quad B^s_{\infty,r}\\ \vu_0^n\rightarrow \vu_0 \quad &\text{in}\quad L^2 \cap B^s_{\infty,r}\, . \end{split} \end{equation} Now, we will define the sequence of approximate solutions. First of all, we take $(R^0,\vu^0)=(R^0_0,\vu^0_0)$.
Then, for all $\sigma \in \mathbb{R}$ we get $R^0\in C^0(\mathbb{R}_+;B^\sigma_{\infty,r})$ and $\vu^0\in C^0(\mathbb{R}_+;H^\sigma)$ with ${\rm div}\, \vu^0=0$. Next, we assume that $(R^n,\vu^n)$ is given such that, for all $\sigma \in \mathbb{R}$, \begin{equation*} R^n\in C^0(\mathbb{R}_+;B^{\sigma}_{\infty,r}),\quad \vu^n\in C^0(\mathbb{R}_+;H^\sigma)\quad \text{and}\quad {\rm div}\, \vu^n=0\, . \end{equation*} We start by defining $R^{n+1}$ as the unique solution to the linear transport equation \begin{equation}\label{approx_R_QH-E} \begin{cases} \partial_tR^{n+1}+\vu^n \cdot \nabla R^{n+1}=0\\ R^{n+1}_{|t=0}=R_0^{n+1}\, , \end{cases} \end{equation} and we deduce that $R^{n+1}\in C^0(\mathbb{R}_+;B^\sigma_{\infty, r})$ for all $\sigma \in \mathbb{R}$. Next, we solve the linear transport equation with the divergence-free condition: \begin{equation}\label{approx_u_QH-E} \begin{cases} \partial_t\vu^{n+1}+\vu^n \cdot \nabla \vu^{n+1}+\nabla \Pi^{n+1}=-R^{n+1}\vu^{\perp ,n}\\ {\rm div}\, \vu ^{n+1}=0\\ \vu^{n+1}_{|t=0}=\vu_0^{n+1}\, . \end{cases} \end{equation} We point out that the right-hand side term belongs to $L^1_{\rm loc}(\mathbb{R}_+;H^\sigma)$ for any $\sigma \in \mathbb{R}$. At this point, one can solve the previous linear problem by energy methods (see Propositions 3.2 and 3.4 in \cite{D_AT}) to find a unique solution $\vu^{n+1}\in C^0(\mathbb{R}_+;H^\sigma)$. \subsubsection{Uniform bounds}\label{ss:unif_bounds_QH-E} We now show uniform bounds for the sequence $(R^n,\vu^n)_{n\in \mathbb{N}}$ constructed in the previous Paragraph \ref{ss:approx_sol_QH-E}.
We argue by induction and prove that there exists a time $T^\ast>0$ such that, for all $n\in \mathbb{N}$ and $t\in [0,T^\ast]$, one has \begin{align} &\|R^n(t)\|_{L^\infty}\leq C\|R_0\|_{L^\infty}\label{eq:induc_R}\\ &\|R^n(t)\|_{B^s_{\infty,r}}+\|\vu^n(t)\|_{L^2 \cap B^s_{\infty,r}}\leq CK_0\, e^{CK_0t}\label{eq:induc_u}\, , \end{align} where the constant $C>0$ depends neither on the data nor on the solutions, and where we have defined \begin{equation*} K_0:=\|R_0\|_{B^s_{\infty,r}}+\|\vu_0\|_{L^2 \cap B^s_{\infty,r}}\, . \end{equation*} It is clear that the couple $(R^0,\vu^0)$ satisfies the previous bounds. Assume now that $(R^n,\vu^n)$ verifies \eqref{eq:induc_R} and \eqref{eq:induc_u} on some interval $[0,T^\ast]$. Then, we have to prove the same properties for the step $n+1$. We start by bounding $R^{n+1}$. We deduce that, for any $t\geq 0$, \begin{equation*} \|R^{n+1}(t)\|_{L^\infty}=\|R^{n+1}_0\|_{L^\infty}\leq C\|R_0\|_{L^\infty}\leq C\|R_0\|_{B^s_{\infty ,r}}\, . \end{equation*} Next, employing an energy estimate for the velocity field, one can get \begin{equation*} \begin{split} \|\vu^{n+1}(t)\|_{L^2}&\leq \|\vu^{n+1}_0\|_{L^2}+C\int_0^t\|R^{n+1}(\tau)\vu^{\perp, n}(\tau)\|_{L^2}\, \detau\\ &\leq C \|\vu_0\|_{L^2}+C\|R_0\|_{B^s_{\infty ,r}}\int_0^t\|\vu^{n}(\tau)\|_{L^2}\, \detau\, . \end{split} \end{equation*} At this point, to get uniform bounds for the Besov norms, we resort to the vorticity formulation: \begin{equation*} \partial_t \omega^{n+1}+\vu^n\cdot \nabla \omega^{n+1}=\mathcal{L}(\nabla \vu^n,\nabla \vu^{n+1})-{\rm div}\, (R^{n+1}\vu^n)\, , \end{equation*} where \begin{equation}\label{eq:def_L} \mathcal{L}(\nabla \vu^n,\nabla \vu^{n+1})=\sum_{k=1}^2\partial_2 u_k^n\, \partial_k u_1^{n+1}-\partial_1 u_k^n\, \partial_k u_2^{n+1}\, .
\end{equation} Since the bound for ${\rm div}\, (R^{n+1}\vu^n)$ is analogous to the one performed in \eqref{est_force term_Bes}, it remains to bound $\mathcal{L}(\nabla \vu^n,\nabla \vu^{n+1})$ in $B^{s-1}_{\infty,r}$. \begin{lemma}\label{lem:L} Let $(\vec{v},\vec{w})$ be a couple of divergence-free vector fields in $B^s_{\infty,r}$. Then, one has \begin{equation*} \|\mathcal{L}(\nabla \vec{v},\nabla \vec{w})\|_{B^{s-1}_{\infty,r}}\leq C\left(\|\nabla \vec{v}\|_{L^\infty}\|\vec{w}\|_{B^s_{\infty, r}}+\|\nabla \vec{w}\|_{L^\infty}\|\vec{v}\|_{B^s_{\infty, r}}\right)\, . \end{equation*} \end{lemma} \begin{proof} The estimate easily follows from Corollary \ref{cor:tame_est} if $s>1$. Then, we have to show the bound when $\nabla \vec{v}$ and $\nabla \vec{w}$ are in $B^0_{\infty ,1}$ which is not an algebra. Due to the fact that $\vec v$ and $\vec w$ are divergence-free, we can write \begin{equation}\label{rel_L} \mathcal{L}(\nabla \vec{v},\nabla \vec{w})=\sum_{k=1}^2 \partial_k(w^1\, \partial_2v^k)-\partial_k(w^2\, \partial_1v^k)\, . \end{equation} Now, making use of Bony decomposition (we refer to Section \ref{app_paradiff} for more details), we have \begin{equation*} \mathcal{L}(\nabla \vec{v},\nabla \vec{w})=\mathcal{L}_{\mathcal{T}}(\nabla \vec v,\nabla \vec w)+\mathcal{L}_{\mathcal{R}}(\nabla \vec{v},\nabla \vec{w})\, , \end{equation*} where \begin{equation*} \mathcal{L}_{\mathcal{T}}(\nabla \vec{v},\nabla \vec{w}):=\sum_{k=1}^2\mathcal{T}_{\partial_kw^1}(\partial_2v^k)+\mathcal{T}_{\partial_2v^k}(\partial_k w^1)-\mathcal{T}_{\partial_kw^2}(\partial_1v^k)-\mathcal{T}_{\partial_1v^k}(\partial_k w^2) \end{equation*} and \begin{equation*} \mathcal{L}_{\mathcal{R}}(\nabla \vec{v},\nabla \vec{w})=\sum_{k=1}^2 \mathcal{R}(\partial_kw^1,\, \partial_2v^k)-\mathcal{R}(\partial_kw^2,\, \partial_1v^k)\, . 
\end{equation*} On the one hand, thanks to Proposition \ref{T-R}, we can estimate the paraproducts in the following way: \begin{equation*} \|\mathcal{T}_{\nabla \vec{v}}(\nabla \vec{w})\|_{B^0_{\infty,1}}+\|\mathcal{T}_{\nabla \vec{w}}(\nabla \vec{v})\|_{B^0_{\infty,1}}\leq C\left( \|\nabla \vec{v}\|_{L^\infty}\|\nabla \vec{w}\|_{B^0_{\infty,1}}+\|\nabla \vec{w}\|_{L^\infty}\|\nabla \vec{v}\|_{B^0_{\infty,1}}\right)\, . \end{equation*} On the other hand, due to relation \eqref{rel_L} we may write \begin{equation*} \mathcal{L}_{\mathcal{R}}(\nabla \vec{v},\nabla \vec{w})=\sum_{k=1}^2 \partial_k\mathcal{R}(w^1,\, \partial_2v^k)-\partial_k\mathcal{R}(w^2,\, \partial_1v^k)\, . \end{equation*} Now, again thanks to Proposition \ref{T-R} we have \begin{equation*} \|\partial_k\mathcal{R}(w^2,\, \partial_1v^k)\|_{B^0_{\infty,1}}\leq C\, \|\mathcal{R}(w^2,\, \partial_1v^k)\|_{B^1_{\infty,1}}\leq C\|\nabla \vec w\|_{B^0_{\infty,\infty}}\|\vec v\|_{B^1_{\infty,1}}\leq C\|\nabla \vec w\|_{L^\infty}\|\vec v\|_{B^1_{\infty,1}} \end{equation*} since $L^\infty\hookrightarrow B^0_{\infty,\infty}$. Similar arguments apply to $\|\partial_k\mathcal{R}(w^1,\, \partial_2v^k)\|_{B^0_{\infty,1}}$. Then, one has \begin{equation*} \|\mathcal{L}(\nabla \vec{v},\nabla \vec{w})\|_{B^{0}_{\infty,1}}\leq C\left(\|\nabla \vec{v}\|_{L^\infty}\|\vec{w}\|_{B^1_{\infty, 1}}+\|\nabla \vec{w}\|_{L^\infty}\|\vec{v}\|_{B^1_{\infty, 1}}\right)\, . \end{equation*} This concludes the proof in the case $s=1$. \qed \end{proof} \medskip Therefore, applying Lemma \ref{lem:L} with $\vec v=\vu^n$ and $\vec w=\vu^{n+1}$, one can get \begin{equation*} \|\mathcal{L}(\nabla \vu^n,\nabla \vu^{n+1})\|_{B^{s-1}_{\infty,r}}\leq C\left(\|\nabla \vu^n\|_{L^\infty}\|\vu^{n+1}\|_{B^s_{\infty, r}}+\|\nabla \vu^{n+1}\|_{L^\infty}\|\vu^n\|_{B^s_{\infty, r}}\right)\, .
\end{equation*} At this point, one can proceed exactly as in the proof of the \textit{a priori} estimates, finding that \begin{equation*} \begin{split} 2^{js}\|\Delta_jR^{n+1}(t)\|_{L^\infty}+2^{j(s-1)}\|\Delta_j \omega^{n+1}(t)\|_{L^\infty}&\leq C\left(2^{js}\|\Delta_jR^{n+1}_0\|_{L^\infty}+2^{j(s-1)}\|\Delta_j \omega^{n+1}_0\|_{L^\infty}\right)\\ &+C\int_0^t c_j(\tau)\left(\|\vu^{n+1} \|_{B^s_{\infty,r}}+\|R^{n+1}\|_{B^s_{\infty,r}}\right)\|\vu^n \|_{B^s_{\infty,r}} \, \detau, \end{split} \end{equation*} where the sequence $(c_j(t))_{j\geq -1}$ belongs to the unit sphere of $\ell^r$. Now, we define for all $t\geq 0$: \begin{equation*} \overline E^{n+1}(t):=\|R^{n+1}(t)\|_{B^s_{\infty, r}}+ \|\vu^{n+1}(t)\|_{L^2\cap B^s_{\infty,r}}\, . \end{equation*} Thus, recalling Lemma \ref{l:rel_curl}, from the previous inequalities we obtain \begin{equation*} \overline E^{n+1}(t)\leq C\, \overline E^{n+1}(0)+C\int_0^t \overline E^{n+1}(\tau)\|\vu^n(\tau)\|_{L^2\cap B^s_{\infty,r}}\, \detau. \end{equation*} An application of Gr\"onwall's lemma, together with the fact that $\overline E^{n+1}(0)\leq CK_0$, gives \begin{equation*} \overline E^{n+1}(t)\leq CK_0\exp \left(C\int_0^t\|\vu^n(\tau)\|_{L^2\cap B^s_{\infty,r}}\detau \right), \end{equation*} where $K_0:=\|R_0\|_{B^s_{\infty,r}}+\|\vu_0\|_{L^2 \cap B^s_{\infty,r}}$, as defined above. Next, from the inductive assumption \eqref{eq:induc_u}, we get \begin{equation*} \int_0^t\|\vu^n(\tau)\|_{L^2\cap B^s_{\infty,r}}\detau \leq e^{CK_0t}-1 \end{equation*} and we notice that for $0\leq x\leq1$ one has $e^x-1\leq x+x^2\leq 2x$. So, if we choose $T^\ast>0$ such that $CK_0T^\ast\leq 1$, we have \begin{equation*} \overline E^{n+1}(t)\leq CK_0\exp (e^{CK_0t}-1)\leq CK_0\exp (CK_0t)\quad \quad \text{for}\; \; t\in [0,T^\ast]\, . \end{equation*} In this way we have completed the proof of the uniform bounds.
\subsubsection{Convergence}\label{subsect:conv_argument} We now show convergence of the sequence $(R^n,\vu^n)_{n\in \mathbb{N}}$ towards a solution $(R,\vu)$ of the original problem. The proof follows the arguments already performed in Subsection \ref{ss:conv_H^s}: we limit ourselves to highlighting only the main steps. We define \begin{equation*} \widetilde{R}^n:=R^n-R^n_0 \end{equation*} which satisfies \begin{equation*} \begin{cases} \partial_t \widetilde{R}^{n+1}=-\vu^n\cdot \nabla R^{n+1}\\ \widetilde{R}^{n+1}_{|t=0}=0\, . \end{cases} \end{equation*} Thus, one can check that $(\widetilde{R}^n)_{n\in \mathbb{N}}$ is uniformly bounded in $C^0([0,T];L^2)$. Now, we will prove that $(\widetilde{R}^n,\vu^n)_{n\in \mathbb{N}}$ is a Cauchy sequence in $C^0([0,T];L^2)$. For any couple $(n,l)\in \mathbb{N}^2$, we introduce \begin{equation*} \begin{split} &\delta \widetilde{R}^{n,l}:=\widetilde{R}^{n+l}-\widetilde{R}^{n}\\ &\delta R^{n,l}:=R^{n+l}-R^n\\ &\delta \vu^{n,l}:=\vu^{n+l}-\vu^n\\ &\delta \Pi^{n,l}:=\Pi^{n+l}-\Pi^n\, , \end{split} \end{equation*} and we have that ${\rm div}\, \delta \vec u^{n,l}=0$ for any $(n,l)\in \mathbb{N}^2$. Taking the difference between the $(n+l)$-iterate and the $n$-iterate, we may find \begin{equation}\label{syst_approx_conv} \begin{cases} \partial_t \delta \widetilde{R}^{n,l}+\vu^{n+l-1}\cdot \nabla \delta \widetilde{R}^{n,l}=-\delta \vu^{n-1,l}\cdot \nabla R^n-\vu^{n+l-1} \cdot \nabla \delta R_0^{n,l}\\ \partial_t \delta \vu^{n,l} +\vu^{n+l-1}\cdot \nabla \delta \vu^{n,l}+\nabla \delta \Pi^{n,l}=-\delta \vu^{n-1,l}\cdot \nabla \vu^n -R^{n+l}(\delta \vu^{\perp})^{n-1,l}-\delta R^{n,l}\vu^{\perp ,n-1}\, , \end{cases} \end{equation} supplemented with the initial data $(\delta \widetilde{R}^{n,l} ,\delta \vu^{n,l})_{|t=0}=(0,\delta \vu^{n,l}_0)$.
An energy estimate for the first equation of \eqref{syst_approx_conv} yields \begin{equation*} \|\delta \widetilde{R}^{n,l}(t)\|_{L^2}\leq C\int_0^t \|\delta \vu ^{n-1,l}\|_{L^2}\|\nabla R^n\|_{L^\infty}+\|\vu^{n+l-1}\|_{L^2}\|\nabla \delta R_0^{n,l}\|_{L^\infty}\, \detau\, , \end{equation*} and similarly from the second equation we obtain \begin{equation*} \begin{split} \|\delta \vu^{n,l}(t)\|_{L^2}&\leq C\|\delta \vu_0^{n,l}\|_{L^2}+C\int_0^t\left(\|\delta \vu^{n-1,l}\|_{L^2}\|\nabla \vu^n\|_{L^\infty}+\|\delta \vu^{n-1,l}\|_{L^2}\|R^{n+l}\|_{L^\infty}\right)\, \detau\\ &+C\int_0^t\left(\|\delta \widetilde{R}^{n,l}\|_{L^2}+\|\delta R^{n,l}_0\|_{L^\infty}\right)\|\vu^{n-1}\|_{L^2\cap L^\infty}\, \detau \, . \end{split} \end{equation*} Employing the uniform bounds established in Paragraph \ref{ss:unif_bounds_QH-E} and the embeddings, we note that \begin{equation*} \sup_{t\in [0,T^\ast]}\left(\|\nabla R^n(t)\|_{L^\infty}+\|\nabla \vu^n(t)\|_{L^\infty}+\| R^{n+l}(t)\|_{L^\infty}\right)+\int_0^{T^\ast}\|\vu^{n+l-1}\|_{L^2}+\|\vu^{n-1}\|_{L^2\cap L^\infty}\dt \leq C_{T^\ast}, \end{equation*} for a constant $C_{T^\ast}$ which depends only on $T^\ast$ and on the initial data. Therefore, thanks to the Gr\"onwall lemma, we get \begin{equation*} \|\delta \widetilde{R}^{n,l}(t)\|_{L^2}+\|\delta \vu^{n,l}(t)\|_{L^2}\leq C_{T^\ast}\left(\|\delta R^{n,l}_0\|_{W^{1,\infty}}+\|\delta \vu_0^{n,l}\|_{L^2}+\int_0^t\|\delta \vu^{n-1,l}(\tau)\|_{L^2}\detau \right) , \end{equation*} for all $t\in [0,T^\ast]$.
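Notice that the last inequality has the typical recursive form \begin{equation*} a_n(t)\leq A_n+C_{T^\ast}\int_0^t a_{n-1}(\tau)\, \detau\, , \end{equation*} where $a_n$ stands for (the supremum in time, uniformly in $l$, of) the quantity $\|\delta \widetilde{R}^{n,l}\|_{L^2}+\|\delta \vu^{n,l}\|_{L^2}$ and $A_n$ for the terms depending on the initial data. A straightforward induction then yields \begin{equation*} a_n(t)\leq \sum_{k=0}^{n-1}A_{n-k}\, \frac{\big(C_{T^\ast}t\big)^k}{k!}+a_0(t)\, \frac{\big(C_{T^\ast}t\big)^n}{n!}\, , \end{equation*} which is the mechanism behind the convergence argument below.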
As already done in Subsection \ref{ss:conv_H^s}, after setting \begin{equation*} F_0^n:=\sup_{l\geq 0}\left(\|\delta R^{n,l}_0\|_{W^{1,\infty}}+\|\delta \vu_0^{n,l}\|_{L^2}\right)\quad \quad \text{ and }\quad \quad G^n(t):=\sup_{l\geq 0}\sup_{[0,t]}\left(\|\delta \widetilde{R}^{n,l}\|_{L^2}+\|\delta \vec u^{n,l}\|_{L^2}\right)\, , \end{equation*} we may infer that, for all $t\in [0,T^\ast]$, \begin{equation*} G^n(t)\leq C_{T^\ast}\sum_{k=0}^{n-1}\frac{(C_{T^\ast}T^\ast)^k}{k!}F_0^{n-k}+\frac{(C_{T^\ast} T^\ast)^n}{n!}G^0(t)\, , \end{equation*} and, bearing in mind \eqref{conv-properties}, we have \begin{equation*} \lim_{n\rightarrow +\infty}F^n_0=0\, . \end{equation*} Hence, \begin{equation*} \lim_{n\rightarrow +\infty}\sup_{l\geq 0}\sup_{t\in [0,T^\ast]}\left(\|\delta \widetilde{R}^{n,l}(t)\|_{L^2}+\|\delta \vu^{n,l}(t)\|_{L^2}\right)=0\, . \end{equation*} This property implies that $(\widetilde{R}^n)_{n\in \mathbb{N}}$ and $(\vu^n)_{n\in \mathbb{N}}$ are Cauchy sequences in $C^0_{T^\ast}(L^2)$. Hence, they converge to some functions $\widetilde{R}$ and $\vu$ in the same space. Define $R:=\widetilde{R}+R_0$. We notice that, owing to the embedding $L^2\hookrightarrow B^{-1}_{\infty,2}$, and thanks to the uniform bounds and to an interpolation argument, the sequence $(\vu^n)_{n\in \mathbb{N}}$ strongly converges in any intermediate space $L^\infty_{T^\ast} (B^\sigma_{\infty,r})$ with $\sigma <s$ and in particular in $L^\infty([0,T^\ast]\times \mathbb{R}^2)$. Moreover, we have that $R^n=\widetilde{R}^n+R^n_0$ strongly converges to $R$ in $L^\infty_{T^\ast}(L^2_{\rm loc})$. This is enough to pass to the limit in the weak formulation of \eqref{approx_R_QH-E} and \eqref{approx_u_QH-E} finding that $(R,\vu)$ is a weak solution to the original problem for a suitable pressure term $\nabla \Pi$. The regularity for $(R,\vu)$ in $B^s_{\infty,r}$ follows from the uniform bounds and the Fatou property in Besov spaces.
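For the sake of clarity, we recall the Fatou property we have just used: if $(f_n)_{n\in \mathbb{N}}$ is a bounded sequence of $B^s_{\infty,r}$ which converges to some $f$ in the space $\mathcal{S}'$ of tempered distributions, then $f\in B^s_{\infty,r}$, together with the bound \begin{equation*} \|f\|_{B^s_{\infty,r}}\leq C \liminf_{n\rightarrow +\infty}\|f_n\|_{B^s_{\infty,r}}\, . \end{equation*}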
Moreover, an argument similar to the one performed in Subsection \ref{ss:conv_H^s} applies here to show the desired regularity for the pressure term, after noticing that \begin{equation*} \|\nabla \Pi\|_{L^2\cap B^s_{\infty,r}}\sim \|\nabla \Pi\|_{L^2}+\|\Delta \Pi\|_{B^{s-1}_{\infty,r}}\, . \end{equation*} Finally, employing classical results for transport equations in Besov spaces (recall Theorem \ref{thm_transport}), we can get the claimed time continuity of $R$ with values in $B^s_{\infty,r}$, of $\vu$ with values in $L^2\cap B^s_{\infty,r}$ and of $\nabla \Pi$ with values in $L^2\cap B^s_{\infty,r}$. In addition, the sought regularity properties for the time derivatives $\partial_t R$ and $\partial_t \vu$ follow from an analysis of system \eqref{system_Q-H_thm}. \subsubsection{Continuation criterion in Besov spaces}\label{subsec:cont_crit_besov} We conclude this section by showing the following continuation criterion for solutions of system \eqref{system_Q-H_thm} in $B^s_{\infty,r}$, where the couple $(s,r)$ satisfies the Lipschitz condition \eqref{Lip_cond}. \begin{proposition}\label{prop:cont_criterion_Bes} Let $(R_0,\vu_0)\in B^s_{\infty,r}\times (L^2\cap B^s_{\infty,r})$ with ${\rm div}\, \vu_0=0$. Given a time $T>0$, let $(R,\vu)$ be a solution of \eqref{system_Q-H_thm} on $[0,T[$ that belongs to $L^\infty_t(B^s_{\infty,r})\times L^\infty_t(L^2\cap B^s_{\infty,r})$ for any $t\in [0,T[$. If we assume that \begin{equation}\label{cont_cond_Bes} \int_0^T\|\nabla \vu \|_{L^\infty}\, \dt<+\infty\, , \end{equation} then $(R, \vu)$ can be continued beyond $T$ into a solution of \eqref{system_Q-H_thm} with the same regularity. Moreover, the lifespan of a solution $(R,\vu)$ to system \eqref{system_Q-H_thm} does not depend on $(s,r)$ and, in particular, the lifespan of solutions in Theorem \ref{thm:well-posedness_Q-H-Euler} is the same as the lifespan in $B^1_{\infty,1}\times \left( L^2 \cap B^1_{\infty,1}\right)$.
\end{proposition} \begin{proof} It is enough to show that, under condition \eqref{cont_cond_Bes}, the solution $(R,\vu)$ remains bounded in the space $L^\infty_T(B^s_{\infty,r})\times L^\infty_T(L^2\cap B^s_{\infty,r})$. Recalling the \textit{a priori} estimates for the non-linear terms in system \eqref{QH-Euler_vor_dyadic_Bes}, we have \begin{equation*} \begin{split} 2^{js}\|[\vu\cdot \nabla, \Delta_j]R\|_{L^\infty}\leq C\, c_j(t)\,\left( \|\nabla \vu \|_{L^\infty}+ \|\nabla R\|_{L^\infty}\right)\left( \|R\|_{B^s_{\infty,r}}+ \| \vu \|_{B^{s}_{\infty,r}}\right) \end{split} \end{equation*} and \begin{equation*} \begin{split} 2^{j(s-1)}\|[\vu\cdot \nabla, \Delta_j]\omega\|_{L^\infty}\leq C c_j(t)\, \|\nabla \vu\|_{L^\infty}\|\vu\|_{B^s_{\infty,r}}\, , \end{split} \end{equation*} where we have used the fact that $\|\omega\|_{L^\infty}\leq C\|\nabla \vu\|_{L^\infty}$ and $\|\omega\|_{B^{s-1}_{\infty,r}}\leq C\| \vu\|_{B^s_{\infty,r}}$. Moreover, from relation \eqref{est_force term_Bes}, we obtain \begin{equation*} \begin{split} \|{\rm div}\, (R\vu)\|_{B^{s-1}_{\infty,r}}\leq C \left(\|R\|_{L^\infty}+\|\vu\|_{L^\infty}\right) \left(\|\vu \|_{B^s_{\infty ,r}}+\|R\|_{B^s_{\infty ,r}}\right)\, . \end{split} \end{equation*} Summing the previous bounds, for all $0\leq t\leq T$, we get \begin{equation*} \begin{split} \|R(t)\|_{B^s_{\infty,r}}+\|\omega (t)\|_{B^{s-1}_{\infty,r}}&\leq C \left(\|R_0\|_{B^s_{\infty,r}}+\|\omega_0\|_{B^{s-1}_{\infty,r}}\right)\\ &+C\int_0^t\left(\|\nabla \vu\|_{L^\infty}+\|R\|_{W^{1,\infty}}+\|\vu\|_{L^\infty}\right)\left( \|R\|_{B^s_{\infty,r}}+ \| \vu \|_{B^{s}_{\infty,r}}\right) \, \detau \, . \end{split} \end{equation*} At this point, we have to find estimates for $\|\vu\|_{L^\infty}$ and $\|R\|_{W^{1,\infty}}$. 
To deal with $\|\vu\|_{L^\infty}$, we separate low and high frequencies, deducing \begin{equation*} \|\vu\|_{L^\infty}\leq \|\Delta_{-1}\vu\|_{L^\infty}+\sum_{j\geq 0}\|\Delta_j \vu\|_{L^\infty} \leq C\|\vu_0\|_{L^2} +\sum_{j\geq 0}\|\Delta_j \vu\|_{L^\infty} \, , \end{equation*} where we have also employed the Bernstein inequalities (see Lemma \ref{l:bern}). For the high frequency terms, we can write \begin{equation*} \sum_{j\geq 0}\|\Delta_j \vu\|_{L^\infty}\leq C\sum_{j\geq 0}2^{-j}\|\Delta_j \nabla \vu\|_{L^\infty}\leq C\|\nabla \vu\|_{L^\infty}\, . \end{equation*} Therefore, \begin{equation}\label{u_low-high_freq} \|\vu\|_{L^\infty}\leq C\left(\|\vu_0\|_{L^2}+\|\nabla \vu\|_{L^\infty}\right)\, . \end{equation} Now, we focus on the bound for $\|R\|_{W^{1,\infty}}$. On the one hand, $\|R(t)\|_{L^\infty}=\|R_0\|_{L^\infty}$, for all $t\geq 0$. On the other hand, differentiating the continuity equation, we obtain \begin{equation}\label{R_Lipschitz} \|\nabla R\|_{L^\infty_T(L^\infty)}\leq \|\nabla R_0\|_{L^\infty}\exp \left(C\int_0^T\|\nabla \vu\|_{L^\infty}\, \dt \right)\, . \end{equation} Thus, using the previous relations and recalling equation \eqref{eq:rel_curl}, we finally have \begin{equation*} \begin{split} \|R(t)\|_{B^s_{\infty,r}}+\|\vu(t)\|_{L^2\cap B^s_{\infty,r}}&\leq C\left(\|R_0\|_{B^s_{\infty,r}}+\|\vu_0\|_{L^2\cap B^s_{\infty,r}}\right)\\ &+C\int_0^t\left(\|\nabla \vu \|_{L^\infty}+\|R_0\|_{W^{1,\infty}}+\|\vu_0\|_{L^2}\right)\left( \|R\|_{B^s_{\infty,r}}+ \| \vu \|_{L^2\cap B^{s}_{\infty,r}}\right) \, \detau \, . \end{split} \end{equation*} In the end, employing Gr\"onwall's type inequalities, we may conclude that, under the assumption \eqref{cont_cond_Bes}, \begin{equation*} \sup_{t\in [0,T]}\left(\|R(t)\|_{B^s_{\infty,r}}+\|\vu(t)\|_{L^2\cap B^s_{\infty,r}}\right)<+\infty\, .
\end{equation*} \qed \end{proof} \subsection{The asymptotically global well-posedness result}\label{ss:improved_lifespan} In this paragraph we focus on finding the asymptotic behaviour (in the regime of small oscillations for the densities) of the lifespan of solutions to system \eqref{system_Q-H_thm}. Namely, for small fluctuations $R_0$ of size $\delta>0$, the lifespan of solutions to this system tends to infinity when $\delta\rightarrow 0^+$. To show this, we have to take advantage of the \textit{linear} estimate in Theorem \ref{thm:improved_est_transport} for transport equations in Besov spaces with zero regularity index. For that reason, it is important to work with the vorticity formulation of \eqref{system_Q-H_thm}, since $\omega \in B^0_{\infty,1}$. Thanks to the continuation criterion presented in Proposition \ref{prop:cont_criterion_Bes}, it is enough to bound the lifespan in the lowest-regularity space $B^1_{\infty,1}$. To begin with, we recall relation \eqref{eq:rel_curl}, i.e. \begin{equation*} \|f\|_{L^2\cap B^s_{\infty ,r}}\sim\|f\|_{L^2}+\|{\rm curl}\, f\|_{B^{s-1}_{\infty, r}}\, . \end{equation*} Therefore, due to the previous relation, we can define (for $t\geq 0$) \begin{equation}\label{def_mathcal_E} \mathcal{E}(t):= \|\vu(t)\|_{L^2}+\|\omega(t)\|_{B^0_{\infty,1}}\sim \|\vu(t)\|_{L^2\cap B^1_{\infty,1}}\, . \end{equation} Since the $L^2$ norm of the velocity field is preserved, to control $\vu$ in $B^1_{\infty ,1}$, it will be enough to find estimates for ${\rm curl}\, \vu$ in $B^{0}_{\infty ,1}$. Hence, we apply again the ${\rm curl}\,$ operator to the second equation in system \eqref{system_Q-H_thm} to get the system \eqref{QH-Euler_vorticity_Bes}, i.e. \begin{equation*}\label{QH-Euler_vorticity} \begin{cases} \partial_tR+\vu \cdot \nabla R=0\\ \partial_t \omega +\vu \cdot \nabla \omega=-{\rm div}\, (R\vu)\, .
\end{cases} \end{equation*} Making use of Theorem \ref{thm:improved_est_transport}, we obtain \begin{equation*} \|\omega (t)\|_{B^0_{\infty,1}}\leq C \left( \|\omega_0\|_{B^0_{\infty,1}}+\int_0^t\|{\rm div}\, (R\vu)\|_{B^0_{\infty,1}}\detau\right)\left(1+\int_0^t\|\nabla \vu\|_{L^\infty}\detau\right)\, . \end{equation*} Now, we look at the bound for ${\rm div}\, (R\vu)$, finding that \begin{equation*} \|{\rm div}\, (R\vu)\|_{B^0_{\infty,1}}\leq C \left(\|R\|_{L^\infty}\|\vu\|_{B^1_{\infty,1}}+\|\vu\|_{L^\infty}\|R\|_{B^1_{\infty,1}}\right)\leq C \|R\|_{B^1_{\infty,1}}\mathcal{E}(\tau)\, . \end{equation*} Then, we deduce \begin{equation}\label{eq:en_est_1} \mathcal{E}(t)\leq C \left(\mathcal E(0)+\int_0^t\mathcal E(\tau)\|R(\tau)\|_{B^1_{\infty,1}}\, \detau \right)\left(1+\int_0^t\mathcal E(\tau)\, \detau\right)\, . \end{equation} At this point, Theorem \ref{thm_transport} implies that \begin{equation*} \|R(t)\|_{B^1_{\infty,1}}\leq \|R_0\|_{B^1_{\infty,1}}\exp \left(C\int_0^t \mathcal E(\tau)\, \detau\right)\, . \end{equation*} Plugging this bound into \eqref{eq:en_est_1} gives \begin{equation*} \mathcal E(t)\leq C \left(1+\int_0^t\mathcal E(\tau)\, \detau\right)\left(\mathcal E(0)+\|R_0\|_{B^1_{\infty,1}}\int_0^t \mathcal E(\tau)\exp \left(\int_0^\tau \mathcal E(s)\, \ds \right)\, \detau \right)\, . \end{equation*} We now define \begin{equation}\label{def_T^ast} T^\ast:=\sup \left\{t>0:\|R_0\|_{B^1_{\infty,1}}\int_0^t \mathcal E(\tau)\exp \left(\int_0^\tau \mathcal E(s)\, \ds \right)\, \detau \leq \mathcal E(0)\right\}\, . \end{equation} Then, for all $t\in [0,T^\ast]$, we deduce \begin{equation*} \mathcal E(t)\leq C\left(1+\int_0^t\mathcal E(\tau)\, \detau\right)\mathcal E(0) \end{equation*} and thanks to Gr\"onwall's lemma we infer \begin{equation}\label{eq:en_est_2} \mathcal E(t)\leq C\mathcal E(0)\, e^{C\mathcal E(0)t}\, , \end{equation} for a suitable constant $C>0$. It remains to find a control on the integral of $\mathcal E(t)$.
Indeed, integrating \eqref{eq:en_est_2} in time, we have \begin{equation*} \int_0^t\mathcal E(\tau) \, \detau \leq \int_0^tC\,\mathcal E(0)\, e^{C\mathcal E(0)\tau}\, \detau= e^{C\mathcal E(0)t}-1 \end{equation*} and, employing \eqref{eq:en_est_2} once more, we get \begin{equation}\label{estimate_integral_energy} \begin{split} \|R_0\|_{B^1_{\infty,1}}\int_0^t \mathcal E(\tau)\exp \left(\int_0^\tau \mathcal E(s)\, \ds \right)\, \detau &\leq C\|R_0\|_{B^1_{\infty,1}}\int_0^t \mathcal E(0)\, e^{C\mathcal E(0)\tau}\exp \left(e^{C\mathcal E(0)\tau}-1 \right)\, \detau \\ &\leq C\|R_0\|_{B^1_{\infty,1}}\left(\exp \left(e^{C\mathcal E(0)t}-1\right)-1\right)\, . \end{split} \end{equation} Finally, by definition \eqref{def_T^ast} of $T^\ast$, we can argue that \begin{equation*} \mathcal E(0)\leq C\|R_0\|_{B^1_{\infty,1}}\left(\exp \left(e^{C\mathcal E(0)T^\ast}-1\right)-1\right)\, , \end{equation*} which gives the following lower bound for the lifespan of solutions: \begin{equation*} T^\ast\geq \frac{C}{\mathcal E(0)}\log\left(\log \left(C\frac{\mathcal E(0)}{\|R_0\|_{B^1_{\infty,1}}}+1\right)+1\right)\, . \end{equation*} From there, recalling the definition \eqref{def_mathcal_E} of $\mathcal E(0)$, we have \begin{equation}\label{T-ast:improved} T^\ast\geq \frac{C}{\|\vu_0\|_{L^2\cap B^1_{\infty,1}}}\log\left(\log \left(C\frac{\|\vu_0\|_{L^2\cap B^1_{\infty,1}}}{\|R_0\|_{B^1_{\infty,1}}}+1\right)+1\right)\, , \end{equation} for a suitable constant $C>0$. This is the claimed lower bound stated in Theorem \ref{thm:well-posedness_Q-H-Euler}. \section{The lifespan of solutions to the primitive problem}\label{s:lifespan_full} The main goal of this section is to present an ``asymptotically global'' well-posedness result for system \eqref{Euler-a_eps_1}, when the size of the fluctuations of the densities goes to zero, in the spirit of Subsection \ref{ss:improved_lifespan}. We start by showing a continuation-type criterion for system \eqref{Euler-a_eps_1} and discussing the related consequences (see Subsection \ref{ss:cont_criterion+consequences} below for details).
We conclude this section presenting the asymptotic behaviour of the lifespan of solutions to system \eqref{Euler-a_eps_1}: the lifespan may be very large, if the size of the non-homogeneities $a_{0,\varepsilon}$ defined in \eqref{def_a_veps} is small (see relation \eqref{asymptotic_time} below). We point out that it is \textit{not} clear at all that global existence holds in the fast rotation regime without any smallness assumption on the size of the non-homogeneities. \subsection{The continuation criterion and consequences}\label{ss:cont_criterion+consequences} In this paragraph, we start by presenting a continuation-type result in Sobolev spaces for system \eqref{Euler-a_eps_1}, in the spirit of the Beale-Kato-Majda continuation criterion \cite{B-K-M}. The proof is an adaptation of the arguments in \cite{B-L-S} by Bae, Lee and Shin. \begin{proposition}\label{prop:cont_criterion_original_prob} Take $\varepsilon \in\; ]0,1]$ fixed. Let $(a_{0,\varepsilon},\vu_{0,\varepsilon})\in L^\infty \times H^s$ with $\nabla a_{0,\varepsilon}\in H^{s-1} $ and ${\rm div}\, \vu_{0,\varepsilon}=0$. Given a time $T>0$, let $(a_\varepsilon,\vu_\varepsilon, \nabla \Pi_\varepsilon)$ be a solution of \eqref{Euler-a_eps_1} on $[0,T[$ that belongs to $L^\infty_t(L^\infty)\times L^\infty_t(H^s)\times L^\infty_t(H^s)$, with $\nabla a_\varepsilon \in L^\infty_t(H^{s-1}) $, for any $t\in [0,T[$. If we assume that \begin{equation}\label{cont_cond_orig_prob} \int_0^T\|\nabla \vu_\varepsilon \|_{L^\infty}\, \dt<+\infty\, , \end{equation} then $(a_\varepsilon, \vu_\varepsilon,\nabla \Pi_\varepsilon)$ can be continued beyond $T$ into a solution of \eqref{Euler-a_eps_1} with the same regularity. \end{proposition} \begin{proof} As already pointed out in the proof of Proposition \ref{prop:cont_criterion_Bes}, it is enough to show that $$ \sup_{0\leq t<T}\left(\|\vec u_\varepsilon\|_{H^s}+\|\nabla a_\varepsilon\|_{H^{s-1}}\right)<+\infty \, .
$$ Since $\varepsilon \in \, ]0,1]$ is fixed and does not play any role in the following proof, for notational convenience, we set it equal to 1. We start by recalling that, from the continuity equation of \eqref{Euler-a_eps_1}, one gets \begin{equation}\label{cont_eq_nabla_a} \partial_t \partial_i a+\vec u\cdot \nabla \partial_i a =-\partial_i \vec u \cdot \nabla a\quad \quad \text{for}\; i=1,2\, . \end{equation} So, applying the operator $\Delta_j$ to the above relation and using the divergence-free condition ${\rm div}\, \vec u=0$, one has $$ \partial_t \Delta_j \partial_i a+\vec u \cdot \nabla \Delta_j \partial_i a=-\Delta_j\left(\partial_i \vec u \cdot \nabla a\right)+[\vec u\cdot \nabla,\Delta_j]\partial_i a \, .$$ Therefore, thanks to the commutator estimates (see Lemma \ref{l:commutator_est}), one may argue that \begin{equation*} 2^{j(s-1)}\|[\vec u \cdot \nabla,\Delta_j]\partial_i a\|_{L^2}\leq C\, c_j(t)\left( \|\nabla \vec u\|_{L^\infty}+\|\nabla a\|_{L^\infty}\right)\left(\|\nabla a\|_{H^{s-1}}+\|\vec u\|_{H^s}\right)\, , \end{equation*} where $(c_j(t))_{j\geq -1}$ is a sequence in the unit ball of $\ell^2$, and due to Corollary \ref{cor:tame_est} one has \begin{equation*} \|\partial_i \vec u\cdot \nabla a\|_{H^{s-1}}\leq C\left(\|\nabla \vec u\|_{L^\infty}\|\nabla a\|_{H^{s-1}}+\|\nabla \vec u\|_{H^{s-1}}\|\nabla a\|_{L^\infty}\right)\, . \end{equation*} At this point, we recall the bounds for the momentum equation in system \eqref{Euler-a_eps_1}. First of all, we apply the non-homogeneous dyadic blocks $\Delta_j$, getting \begin{equation*} \partial_t \Delta_j \vec u+\vec u\cdot \nabla \Delta_j \vec u+\Delta_j \vec u^\perp +\nabla \Delta_j \Pi +\Delta_j\left(a\nabla \Pi\right)=[\vec u \cdot \nabla,\Delta_j]\vec u\,. 
\end{equation*} Then, we obtain \begin{equation*} 2^{js}\|[\vec u\cdot \nabla, \Delta_j]\vec u\|_{L^2}\leq C \, c_j(t)\|\nabla \vec u\|_{L^\infty}\|\vec u\|_{H^s} \end{equation*} with $(c_j(t))_{j\geq -1}$ a sequence in the unit ball of $\ell^2$, and in addition we have \begin{equation*} \|a\nabla \Pi\|_{H^s}\leq C \left(\|a\|_{L^\infty}\|\nabla \Pi\|_{H^s}+\|\nabla \Pi\|_{L^\infty}\|\nabla a\|_{H^{s-1}}\right)\, . \end{equation*} Summing up the previous inequalities, for all $t\in [0,T[$ we may infer that \begin{equation*} \begin{split} \|\nabla a (t)\|_{H^{s-1}}+\|\vec u(t)\|_{H^s}&\leq \left(\|\nabla a_0\|_{H^{s-1}}+\|\vec u_0\|_{H^s}\right)+C\int_0^t\|a\|_{L^\infty}\|\nabla \Pi\|_{H^s}\detau\\ &+C\int_0^t \left(\|\nabla a\|_{L^\infty}+\|\nabla \vec u\|_{L^\infty}+\|\nabla \Pi\|_{L^\infty}\right)\left(\|\nabla a \|_{H^{s-1}}+\|\vec u\|_{H^s}\right)\detau\, . \end{split} \end{equation*} To close the proof, under the hypotheses of the proposition, we have to bound $\|\nabla a\|_{L^1_T(L^\infty)}$, $\|\nabla \Pi\|_{L^1_T(L^\infty)}$ and $\|\nabla \Pi\|_{L^1_T(H^s)}$. From the continuity equation \eqref{cont_eq_nabla_a}, we obtain \begin{equation*} \|\nabla a\|_{L^\infty_T(L^\infty)}\leq C\|\nabla a_0\|_{L^\infty}\exp\left(\int_0^T\|\nabla \vec u\|_{L^\infty}\detau\right)\, . \end{equation*} Now, we focus on the estimate for $\|\nabla \Pi\|_{H^s}$. We recall again the elliptic equation \begin{equation}\label{ellip_eq_cont_thm} -{\rm div}\, (A\nabla \Pi)={\rm div}\, F\, \quad \text{where}\quad F:= \vu \cdot \nabla \vu+ \vu^\perp \quad \text{and}\quad A:=1/\varrho\, . \end{equation} From the previous relation, it is a standard matter to deduce that (see also Proposition \ref{app:prop7_danchin}): \begin{equation*} \|\nabla \Pi\|_{H^s}\leq C\left(\|\nabla \vec u\|_{L^\infty}\|\vec u\|_{H^s}+\|\vec u\|_{H^s}+\|\nabla a\|_{L^\infty}\|\nabla \Pi\|_{H^{s-1}}+\|\nabla \Pi \|_{L^\infty}\|\nabla a\|_{H^{s-1}}\right)\, .
\end{equation*} Using an interpolation argument, one has $$ \|\nabla \Pi\|_{H^{s-1}}\leq C\|\nabla \Pi\|_{L^2}^{1/s}\|\nabla \Pi\|_{H^s}^{1-1/s} $$ and, due to Young's inequality, we end up with \begin{equation*} \|\nabla a\|_{L^\infty}\|\nabla \Pi\|_{H^{s-1}}\leq C\left(\|\nabla a\|_{L^\infty}^s \|\nabla \Pi\|_{L^2}+\left(1-\frac{1}{s}\right)\|\nabla \Pi\|_{H^s}\right)\, . \end{equation*} In addition, we already know that $$ \|\nabla \Pi\|_{L^2}\leq C \left(\|\vec u\|_{L^2}\|\nabla \vec u\|_{L^\infty}+\|\vec u\|_{L^2}\right)\leq C \left(\|\vec u_0\|_{L^2}\|\nabla \vec u\|_{L^\infty}+\|\vec u_0\|_{L^2}\right)\leq C \left(\|\nabla \vec u\|_{L^\infty}+1\right) \, .$$ As is apparent, the constant term on the right-hand side will be irrelevant in the next computations: hence, it will be omitted, supposing e.g. that $\|\nabla \vec u\|_{L^\infty}>1$. At this point, we only have to take care of the $L^\infty$ bound for the pressure term. Thanks to an application of the Gagliardo-Nirenberg (see Theorem \ref{app:thm_G-N}) and Young inequalities, we get \begin{equation*} \|\nabla \Pi\|_{L^\infty}\leq C \|\Delta \Pi\|_{L^4}^{2/3}\|\nabla \Pi\|_{L^2}^{1/3}\leq C\left(\|\Delta \Pi\|_{L^4}+\|\nabla \Pi\|_{L^2}\right)\leq C\left(\|\Delta \Pi\|_{L^4}+\|\nabla \vec u\|_{L^\infty}\right)\, .
\end{equation*} Again from the elliptic equation \eqref{ellip_eq_cont_thm}, one can find \begin{equation*} \Delta \Pi=-\varrho \nabla a\cdot \nabla \Pi-\varrho \, {\rm div}\, \left( \vec u\cdot \nabla \vec u\right)-\varrho\, {\rm div}\, \vec u^\perp \end{equation*} and then \begin{equation*} \begin{split} \|\Delta \Pi\|_{L^4}&\leq C\left(\|\varrho\|_{L^\infty}\|\nabla a\|_{L^\infty}\|\nabla \Pi\|_{L^4}+\|\varrho\|_{L^\infty}\|\nabla \vec u\|_{L^\infty}\|\nabla \vec u\|_{L^4}+\|\varrho\|_{L^\infty}\|\nabla \vec u\|_{L^4}\right)\\ &\leq C\left(\|\nabla a\|_{L^\infty}\|\nabla \Pi\|_{L^4}+\|\nabla \vec u\|_{L^\infty}\|\nabla \vec u\|_{L^4}+\|\nabla \vec u\|_{L^4}\right)\, , \end{split} \end{equation*} where we have employed the fact that the densities are bounded from above. Once again, due to the Gagliardo-Nirenberg inequality, we obtain $$ \|\nabla \Pi\|_{L^4}\leq C\|\Delta \Pi\|_{L^4}^{1/3}\|\nabla \Pi\|_{L^2}^{2/3}\, . $$ So, \begin{equation*} \|\Delta \Pi\|_{L^4}\leq C\left(\|\nabla a\|_{L^\infty}\|\Delta \Pi\|_{L^4}^{1/3}\|\nabla \Pi\|_{L^2}^{2/3}+\|\nabla \vec u\|_{L^\infty}\|\nabla \vec u\|_{L^4}+\|\nabla \vec u\|_{L^4}\right)\, . \end{equation*} Therefore, Young's inequality implies that \begin{equation*} \begin{split} \|\Delta \Pi\|_{L^4}&\leq C\left(\|\nabla a\|_{L^\infty}^{3/2}\|\nabla \Pi\|_{L^2}+\|\nabla \vec u\|_{L^\infty}\|\nabla \vec u\|_{L^4}+\|\nabla \vec u\|_{L^4}\right)\\ &\leq C\left(\|\nabla a\|_{L^\infty}^{3/2}\|\nabla \vec u\|_{L^\infty}+\|\nabla \vec u\|_{L^\infty}\|\nabla \vec u\|_{L^4}+\|\nabla \vec u\|_{L^4}\right)\, . \end{split} \end{equation*} In the end, we infer that \begin{equation*} \|\nabla \Pi\|_{L^\infty}\leq C\left(\|\nabla a\|_{L^\infty}^{3/2}\|\nabla \vec u\|_{L^\infty}+\|\nabla \vec u\|_{L^\infty}\|\nabla \vec u\|_{L^4}+\|\nabla \vec u\|_{L^4}+\|\nabla \vec u\|_{L^\infty}\right)\, .
\end{equation*} Now, we have to estimate $\|\nabla \vec u\|_{L^4}$: to do so, we take advantage of the vorticity formulation $$ \partial_t \omega+\vec u\cdot \nabla \omega=-\nabla a \wedge \nabla \Pi\, , $$ where $\nabla a \wedge \nabla \Pi:=\partial_1 a\, \partial_2 \Pi-\partial_2 a \, \partial_1 \Pi$. From that formulation, we get the following bound for all $t\in [0,T[$ : \begin{equation*} \|\omega(t)\|_{L^4}\leq \|\omega_0\|_{L^4}+C\int_0^t\|\nabla a\|_{L^\infty}\|\nabla \Pi\|_{L^4}\detau\, . \end{equation*} As done in the previous computations, we can deduce that \begin{equation*} \begin{split} \|\omega(t)\|_{L^4}&\leq \|\omega_0\|_{L^4}+C\int_0^t \|\nabla a\|_{L^\infty}\|\Delta \Pi\|_{L^4}^{1/3}\|\nabla \vec u\|_{L^\infty}^{2/3}\detau\\ &\leq \|\omega_0\|_{L^4}+C\int_0^t \|\nabla a\|_{L^\infty}\left(\|\nabla a\|_{L^\infty}^{3/2}\|\nabla \vec u\|_{L^\infty}+\|\nabla \vec u\|_{L^\infty}\|\nabla \vec u\|_{L^4}+\|\nabla \vec u\|_{L^4}\right)^{1/3}\|\nabla \vec u\|_{L^\infty}^{2/3}\detau\\ &\leq \|\omega_0\|_{L^4}+C\int_0^t \left(\|\nabla a\|_{L^\infty}^{3/2}\|\nabla \vec u\|_{L^\infty}+\|\nabla a\|_{L^\infty}\|\nabla \vec u\|_{L^\infty}\|\omega\|_{L^4}^{1/3}+\|\nabla a\|_{L^\infty}\|\nabla \vec u\|_{L^\infty}^{2/3}\|\omega\|_{L^4}^{1/3}\right)\detau. \end{split} \end{equation*} At this point, we apply Young's inequality to infer: \begin{itemize} \item[(i)] $\|\nabla a\|_{L^\infty}^{3/2}\|\nabla \vec u\|_{L^\infty}\leq C\left(\|\nabla a\|_{L^\infty}\|\nabla \vec u\|_{L^\infty}+\|\nabla a\|_{L^\infty}^{2}\|\nabla \vec u\|_{L^\infty}\right)$; \item[(ii)] $\|\nabla a\|_{L^\infty}\|\nabla \vec u\|_{L^\infty}\|\omega\|_{L^4}^{1/3}\leq C\left(\|\nabla a\|_{L^\infty}\|\nabla \vec u\|_{L^\infty}\|\omega\|_{L^4}+\|\nabla a\|_{L^\infty}\|\nabla \vec u\|_{L^\infty}\right)$; \item[(iii)] $\|\nabla a\|_{L^\infty}\|\nabla \vec u\|_{L^\infty}^{2/3}\|\omega\|_{L^4}^{1/3}\leq C\left(\|\nabla a\|_{L^\infty}\|\nabla \vec u\|_{L^\infty}+\|\nabla a\|_{L^\infty}\|\omega\|_{L^4}\right)$.
\end{itemize} \medskip In the end, for all $t\in [0,T[\, $, \begin{equation*} \begin{split} \|\omega(t)\|_{L^4}&\leq \|\omega_0\|_{L^4}+C\int_0^t\left(\|\nabla a\|_{L^\infty}\|\nabla \vec u\|_{L^\infty}\|\omega\|_{L^4}+\|\nabla a\|_{L^\infty}\|\omega\|_{L^4}\right) \detau\\ &+C\int_0^t\left(\|\nabla a\|_{L^\infty}^2\|\nabla \vec u\|_{L^\infty}+\|\nabla a\|_{L^\infty}\|\nabla \vec u\|_{L^\infty}\right)\detau \, . \end{split} \end{equation*} Hence, thanks to Gr\"onwall's lemma, we get $$ \|\omega(t)\|_{L^4}\leq \exp\left[C\int_0^t \|\nabla a\|_{L^\infty}\left(\|\nabla \vec u\|_{L^\infty}+1\right) \right]\left[\|\omega_0\|_{L^4}+C\int_0^t\|\nabla a\|_{L^\infty}\|\nabla \vec u\|_{L^\infty}\left(\|\nabla a\|_{L^\infty}+1\right) \right] .$$ Recalling condition \eqref{cont_cond_orig_prob}, the proposition is thus proved. \qed \end{proof} \medskip At this point, we discuss some consequences of the previous result. In particular, it is enough to control $\vec u_\varepsilon$ in $L^\infty_T(L^2\cap B^1_{\infty,1})$ in order to ensure the existence of the solution up to time $T$. Indeed, if we are able to control the norm $\|\ue \|_{L^\infty_T(L^2\cap B^1_{\infty,1})}$, then we are able to bound $\|\nabla \ue \|_{L^\infty_T(L^\infty)}$. This will imply \eqref{cont_cond_orig_prob} and, therefore, the solution will exist until time $T$. Let us give some details. First of all, we have \begin{equation*} \|\nabla \ue \|_{L^\infty_T(L^\infty)}\leq C\, \| \ue \|_{L^\infty_T(B^1_{\infty,1})}\, . \end{equation*} As already pointed out in Lemma \ref{l:rel_curl}, to control the $B^1_{\infty,1}$ norm of $\ue$ it is enough to have an $L^2$ estimate for $\ue$ and a $B^0_{\infty, 1}$ estimate for its ${\rm curl}\,$. Those estimates are the topic of the next Subsection \ref{ss:asym_lifespan}, provided that the time $T>0$ is defined as in \eqref{def_T^ast_veps} below.
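Schematically, the chain of controls just described can be summarized as follows (the second inequality being the content of Lemma \ref{l:rel_curl}): \begin{equation*} \int_0^T\|\nabla \ue\|_{L^\infty}\, \dt\, \leq\, T\, C\, \|\ue\|_{L^\infty_T(B^1_{\infty,1})}\, \leq\, T\, C\left(\|\ue\|_{L^\infty_T(L^2)}+\|{\rm curl}\, \ue\|_{L^\infty_T(B^0_{\infty,1})}\right)\, , \end{equation*} so that a bound on the right-hand side yields precisely condition \eqref{cont_cond_orig_prob}.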
Therefore, $\|\nabla \ue\|_{L^\infty_T(L^\infty)}<+\infty$ and so $$ \int_0^T \|\nabla \ue\|_{L^\infty}\, \dt<+\infty\, ,$$ for such $T>0$. Finally, we note that we have already shown the existence and uniqueness of solutions to system \eqref{Euler-a_eps_1} in the Sobolev spaces $H^s$ with $s>2$ (see Section \ref{s:well-posedness_original_problem}) and, thanks to Proposition \ref{p:embed}, those spaces are continuously embedded in the space $B^1_{\infty,1}$. Thus, the solution will exist until $T$. \subsection{The asymptotic lifespan}\label{ss:asym_lifespan} In this paragraph we focus our attention on the lifespan $T^\ast_\varepsilon$ of solutions $(\varrho_\varepsilon, \vec u_\varepsilon, \nabla \Pi_\varepsilon)$ to the primitive system \eqref{Euler-a_eps_1}. We point out that, if we consider the initial densities as in \eqref{in_vr_E}, i.e. $\varrho_{0,\varepsilon}=1+\varepsilon R_{0,\varepsilon}$, it is not clear to us how to show that $T_\varepsilon^\ast \longrightarrow +\infty$ when $\varepsilon\rightarrow 0^+$. Nevertheless, on the one hand, as soon as the densities are of the form $\varrho_{0,\varepsilon}=1+\varepsilon^{1+\alpha}R_{0,\varepsilon}$ (with $\alpha >0$), we obtain that $T^\ast_\varepsilon \sim \log \log \left(1/\varepsilon\right)$. On the other hand, independently of the rotational effects, we can state an ``asymptotically global'' well-posedness result in the regime of small oscillations: namely, we get $T^\ast_\varepsilon \geq T^\ast(\delta)$, with $T^\ast (\delta)\longrightarrow +\infty$ when the size $\delta >0$ of $R_{0,\varepsilon}$ goes to $0^+$ (this is coherent with the result in \cite{D-F_JMPA} for a density-dependent fluid in the absence of the Coriolis force). Therefore, the main goal of this subsection is to prove estimate \eqref{improv_life_fullE} of Theorem \ref{W-P_fullE}. First of all, we have to take advantage of the vorticity formulation of system \eqref{Euler-a_eps_1}.
To do so, we apply the ${\rm curl}\,$ operator to the momentum equation, obtaining \begin{equation}\label{curl_eq} \partial_t \omega_\varepsilon +\ue \cdot \nabla \omega_\varepsilon+ \nabla a_\varepsilon\, \wedge\, \nabla \Pi_\varepsilon=0\, , \end{equation} where we recall $\omega_\varepsilon:={\rm curl}\, \ue$ and $\nabla a_\varepsilon\, \wedge\, \nabla \Pi_\varepsilon:=\partial_1 a_\varepsilon \,\partial_2\Pi_\varepsilon-\partial_2 a_\varepsilon \partial_1\Pi_\varepsilon$. We notice that the vorticity formulation is the key point to bypass the issues coming from the Coriolis force, whose singular effects disappear in \eqref{curl_eq}. Next, we make use of Theorem \ref{thm:improved_est_transport} and we deduce that \begin{equation}\label{transp_curl} \| \omega_\varepsilon \|_{B^0_{\infty,1}}\leq C \left(\| \omega_{0,\varepsilon}\|_{B^0_{\infty,1}}+\int_0^t \|\nabla a_\varepsilon\, \wedge\, \nabla \Pi_\varepsilon \|_{B^0_{\infty,1}}\detau \right)\left(1+\int_0^t\|\nabla \ue\|_{L^\infty} \detau\right)\, . \end{equation} We start by bounding the $B^0_{\infty,1}$ norm of $\nabla a_\varepsilon\, \wedge\, \nabla \Pi_\varepsilon$. We observe that \begin{equation}\label{l-p_dec_wedge} \begin{split} \partial_1a_\varepsilon\, \partial_2 \Pi_\varepsilon-\partial_2a_\varepsilon\, \partial_1\Pi_\varepsilon&=\mathcal T_{\partial_1a_\varepsilon}\partial_2 \Pi_\varepsilon-\mathcal T_{\partial_2a_\varepsilon}\partial_1 \Pi_\varepsilon+\mathcal T_{\partial_2\Pi_\varepsilon}\partial_1 a_\varepsilon-\mathcal T_{\partial_1\Pi_\varepsilon}\partial_2 a_\varepsilon\\ &+\partial_1\mathcal R(a_\varepsilon -\Delta_{-1}a_\varepsilon,\, \partial_2 \Pi_\varepsilon)-\partial_2\mathcal R(a_\varepsilon -\Delta_{-1}a_\varepsilon,\, \partial_1 \Pi_\varepsilon)\\ &+\mathcal R(\partial_1\Delta_{-1}a_\varepsilon,\, \partial_2 \Pi_\varepsilon)+\mathcal R(\partial_2\Delta_{-1}a_\varepsilon,\, \partial_1 \Pi_\varepsilon)\, . 
\end{split} \end{equation} Applying Proposition \ref{T-R} directly to the terms involving the paraproduct $\mathcal T$, we have \begin{equation*} \|\mathcal T_{\nabla a_\varepsilon}\nabla \Pi_\varepsilon\|_{B^0_{\infty,1}}+\|\mathcal T_{\nabla \Pi_\varepsilon}\nabla a_\varepsilon\|_{B^0_{\infty,1}}\leq C \left(\|\nabla a_\varepsilon\|_{L^\infty}\|\nabla \Pi_\varepsilon\|_{B^0_{\infty,1}}+\|\nabla a_\varepsilon\|_{B^0_{\infty,1}}\|\nabla \Pi_\varepsilon\|_{L^\infty}\right)\, . \end{equation*} Next, we have to deal with the remainders $\mathcal R$. We start by bounding the $B^0_{\infty,1}$ norm of $\partial_1\mathcal R(a_\varepsilon -\Delta_{-1}a_\varepsilon,\, \partial_2 \Pi_\varepsilon)$. One has: \begin{equation*} \begin{split} \|\partial_1\mathcal R(a_\varepsilon -\Delta_{-1}a_\varepsilon,\, \partial_2 \Pi_\varepsilon)\|_{B^0_{\infty,1}}&\leq C\|\mathcal R(a_\varepsilon -\Delta_{-1}a_\varepsilon,\, \partial_2 \Pi_\varepsilon)\|_{B^1_{\infty,1}}\\ &\leq C\left(\|\nabla \Pi_\varepsilon\|_{B^0_{\infty,\infty}}\|(\text{Id}-\Delta_{-1})\, a_\varepsilon\|_{B^1_{\infty,1}}\right)\\ &\leq C \left(\|\nabla \Pi_\varepsilon\|_{L^\infty}\|\nabla a_\varepsilon\|_{B^0_{\infty,1}}\right)\, , \end{split} \end{equation*} where we have employed the localization properties of the Littlewood-Paley decomposition. In a similar way, one can argue for $\partial_2\mathcal R(a_\varepsilon -\Delta_{-1}a_\varepsilon,\, \partial_1 \Pi_\varepsilon)$. It remains to bound $\mathcal R(\partial_1\Delta_{-1}a_\varepsilon,\, \partial_2 \Pi_\varepsilon)$, and analogously one can treat the term $\mathcal R(\partial_2\Delta_{-1}a_\varepsilon,\, \partial_1 \Pi_\varepsilon)$ in \eqref{l-p_dec_wedge}. 
We obtain that \begin{equation*} \|\mathcal R(\partial_1\Delta_{-1}a_\varepsilon,\, \partial_2 \Pi_\varepsilon)\|_{B^0_{\infty,1}}\leq C \|\mathcal R(\partial_1\Delta_{-1}a_\varepsilon,\, \partial_2 \Pi_\varepsilon)\|_{B^1_{\infty,1}}\leq C\left(\|\nabla \Pi_\varepsilon\|_{L^\infty}\|\partial_1\Delta_{-1}a_\varepsilon\|_{B^1_{\infty,1}}\right)\, . \end{equation*} Employing the spectral properties of operator $\Delta_{-1}$, one has that \begin{equation*} \|\partial_1\Delta_{-1}a_\varepsilon\|_{B^1_{\infty,1}}\leq C\|\Delta_{-1}\nabla a_\varepsilon\|_{L^\infty}\, . \end{equation*} Then, $$ \|\mathcal R(\partial_1\Delta_{-1}a_\varepsilon,\, \partial_2 \Pi_\varepsilon)\|_{B^0_{\infty,1}}\leq C \left(\|\nabla \Pi_\varepsilon\|_{L^\infty}\|\nabla a_\varepsilon\|_{B^0_{\infty,1}}\right)\, .$$ Finally, we get \begin{equation*} \|\nabla a_\varepsilon\, \wedge\, \nabla \Pi_\varepsilon \|_{B^0_{\infty,1}}\leq C\left(\|\nabla a_\varepsilon\|_{L^\infty}\|\nabla \Pi_\varepsilon\|_{B^0_{\infty,1}}+\|\nabla a_\varepsilon\|_{B^0_{\infty,1}}\|\nabla \Pi_\varepsilon\|_{L^\infty}\right)\, . \end{equation*} So plugging the previous estimate in \eqref{transp_curl}, one gets \begin{equation*} \| \omega_\varepsilon \|_{B^0_{\infty,1}}\leq C \left(\| \omega_{0,\varepsilon}\|_{B^0_{\infty,1}}+\int_0^t \|\nabla a_\varepsilon\|_{B^0_{\infty,1}} \| \nabla \Pi_\varepsilon \|_{B^0_{\infty,1}}\detau \right)\left(1+\int_0^t\|\nabla \ue\|_{L^\infty} \detau\right)\, . \end{equation*} At this point, we define \begin{equation}\label{def_energy_and_A} E_\varepsilon(t):=\|\ue (t)\|_{L^2\cap B^1_{\infty,1}}\qquad \text{and}\qquad \mathcal A_\varepsilon(t):=\|\nabla a_\varepsilon (t)\|_{B^0_{\infty,1}}\, . \end{equation} In this way, we have \begin{equation}\label{energy} E_\varepsilon(t)\leq C \left(E_\varepsilon (0)+\int_0^t \mathcal A_\varepsilon(\tau) \|\nabla \Pi_\varepsilon(\tau)\|_{B^0_{\infty,1}}\detau \right)\left(1+\int_0^t E_\varepsilon (\tau )\detau \right)\, . 
\end{equation} Next, we recall that, for $i=1,2$: \begin{equation*} \partial_t\, \partial_i a_\varepsilon+\ue \cdot \nabla \, \partial_i a_\varepsilon=-\partial_i \ue \cdot \nabla a_\varepsilon \end{equation*} and, due to the divergence-free condition on $\vec u_\varepsilon$, we can write \begin{equation*} \partial_i \ue \cdot \nabla a_\varepsilon=\sum_j\partial_i \ue^j \, \partial_ja_\varepsilon=\sum_j\Big(\partial_i(\ue^j\, \partial_j a_\varepsilon)-\partial_j(\ue^j\, \partial_ia_\varepsilon)\Big)\, . \end{equation*} So, using Proposition \ref{T-R} and the fact that \begin{equation*} \|\partial_i \mathcal R(\ue^j,\, \partial_j a_\varepsilon)\|_{B^0_{\infty,1}}\leq C\, \|\mathcal R(\ue^j,\, \partial_j a_\varepsilon)\|_{B^1_{\infty,1}}\leq C\, \|\nabla a_\varepsilon\|_{B^0_{\infty,1}}\|\ue\|_{B^1_{\infty,1}}\, , \end{equation*} we may finally get \begin{equation*} \|\partial_i \ue \cdot \nabla a_\varepsilon\|_{B^0_{\infty,1}}\leq C \|\nabla a_\varepsilon\|_{B^0_{\infty,1}}\|\ue \|_{B^1_{\infty,1}}\, . \end{equation*} Thus, \begin{equation*} \|\nabla a_\varepsilon (t)\|_{B^0_{\infty,1}}\leq \|\nabla a_{0, \varepsilon}\|_{B^0_{\infty,1}}\exp \left(C\int_0^t \| \ue\|_{B^1_{\infty,1}}\detau \right)\, . \end{equation*} Therefore, recalling \eqref{def_energy_and_A}, one has \begin{equation}\label{est_A} \mathcal A_\varepsilon(t)\leq \mathcal A_\varepsilon (0) \exp \left( C\int_0^t E_\varepsilon(\tau) \detau\right)\, . \end{equation} The next goal is to bound the pressure term in $B^0_{\infty,1}$. Actually, we shall bound its $B^1_{\infty,1}$ norm. Similarly to the analysis performed in Subsection \ref{ss:unif_est} for the $H^s$ norm (see e.g.
inequality \eqref{est_Pi_H^s_1} in this respect), there exists some exponent $\lambda \geq 1$ such that \begin{equation}\label{est_pres} \|\nabla \Pi_\varepsilon\|_{B^1_{\infty,1}}\leq C\left(\left(1+\varepsilon \|\nabla a_\varepsilon \|_{B^0_{\infty,1}}^\lambda \right)\|\nabla \Pi_\varepsilon \|_{L^2}+\varepsilon\, \|\varrho_\varepsilon\, {\rm div}\, (\ue \cdot \nabla \ue)\|_{B^0_{\infty,1}}+\|\varrho_\varepsilon \, {\rm div}\, \ue^\perp\|_{B^0_{\infty,1}}\right)\, . \end{equation} The $L^2$ estimate for the pressure term follows in a similar way to the one performed in \eqref{est:Pi_L^2}, i.e. \begin{equation}\label{L^2-est-pressure} \|\nabla \Pi_\varepsilon\|_{L^2}\leq C\, \varepsilon \, \|\ue \|_{L^2}\|\nabla \ue \|_{L^\infty}+\|\ue \|_{L^2}\, . \end{equation} Next, as shown above in the bound for $\|\partial_i \ue \cdot \nabla a_\varepsilon\|_{B^0_{\infty,1}}$, combining Bony's decomposition with the fact that ${\rm div}\, (\ue \cdot \nabla \ue)=\nabla \ue :\nabla \ue$, we may infer: \begin{equation*} \|{\rm div}\, (\ue \cdot \nabla \ue)\|_{B^0_{\infty,1}}\leq C\, \|\ue \|^2_{B^1_{\infty,1}}\, . \end{equation*} Our next goal is to estimate the $B^1_{\infty,1}$ norm of the density. To do so, we make use of the following proposition, whose proof can be found in \cite{D_JDE}. \begin{proposition} Let $I$ be an open interval of $\mathbb{R}$ and $\overline F:I\rightarrow \mathbb{R}$ a smooth function.
Then, for any compact subset $J\subset I$, $s>0$ and $(p,r)\in [1,+\infty]^2$, there exists a constant $C$ such that for any function $g$ valued in $J$ and with gradient in $B^{s-1}_{p,r}$, we have $\nabla \big(\overline F(g)\big)\in B^{s-1}_{p,r}$ and $$\left\|\nabla \big(\overline F(g)\big)\right\|_{B^{s-1}_{p,r}}\leq C\left\|\nabla g\right\|_{B^{s-1}_{p,r}}\, .$$ \end{proposition} Then, from the definition of $B^1_{\infty,1}$ and the previous proposition, the $B^1_{\infty,1}$ estimate for $\varrho_\varepsilon$ reads: \begin{equation*} \|\varrho_\varepsilon\|_{B^1_{\infty,1}}\leq C\, \left( \overline\varrho+\varepsilon\, \|\nabla a_\varepsilon\|_{B^0_{\infty,1}}\right)\, . \end{equation*} Finally, plugging the $L^2$ estimate \eqref{L^2-est-pressure} and all the above inequalities in \eqref{est_pres}, one may conclude that \begin{equation}\label{est_pres_1} \begin{split} \|\nabla \Pi_\varepsilon\|_{B^1_{\infty,1}}&\leq C \left(1+\varepsilon\, \|\nabla a_\varepsilon \|_{B^0_{\infty,1}}^\lambda \right)\left(\varepsilon\, \|\ue\|_{L^2}\|\nabla \ue\|_{B^0_{\infty,1}}+\|\ue \|_{L^2}\right)\\ &+C \left(1+\varepsilon\, \|\nabla a_\varepsilon \|_{B^0_{\infty,1}} \right)\left(\varepsilon\, \| \ue\|_{B^1_{\infty,1}}^2+\|\ue \|_{B^1_{\infty,1}}\right)\\ &\leq C (1+\varepsilon\, \mathcal A_\varepsilon^\lambda)(\varepsilon\, E_\varepsilon^2+E_\varepsilon)+C(1+\varepsilon \mathcal A_\varepsilon)(\varepsilon\, E_\varepsilon^2+E_\varepsilon)\\ &\leq C (\varepsilon \, E_\varepsilon^2+E_\varepsilon)(1+\varepsilon\,\mathcal A_\varepsilon +\varepsilon\, \mathcal A_\varepsilon^\lambda)\\ &\leq C (\varepsilon \, E_\varepsilon^2+E_\varepsilon)(1 +\varepsilon\, \mathcal A_\varepsilon^\lambda)\, . 
\end{split} \end{equation} We insert now in \eqref{energy} the estimates found in \eqref{est_pres_1} and in \eqref{est_A}, deducing that \begin{equation}\label{est_E} \begin{split} &E_\varepsilon(t)\leq C\left(E_\varepsilon(0)+\mathcal B_\varepsilon(0)\int_0^t \exp \left(C \int_0^\tau E_\varepsilon (s)\, \ds\right) \left(\varepsilon\, E_\varepsilon^2(\tau)+ E_\varepsilon(\tau)\right)\, \detau \right)\\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times \left(1+\int_0^t E_\varepsilon (\tau) \detau \right) , \end{split} \end{equation} where we have set $\mathcal B_\varepsilon (0):=\mathcal A_\varepsilon(0)+\varepsilon\, \mathcal A_\varepsilon(0)^{\lambda +1} $. At this point, we define $T_\varepsilon^\ast>0$ such that \begin{equation}\label{def_T^ast_veps} T_\varepsilon^\ast:= \sup \left\{t>0 : \, \mathcal B_\varepsilon(0)\int_0^t \exp \left(C \int_0^\tau E_\varepsilon (s)\, \ds\right) \left(\varepsilon\, E_\varepsilon^2(\tau)+ E_\varepsilon(\tau)\right)\, \detau \leq E_\varepsilon(0) \right\}\, . \end{equation} So, from \eqref{est_E} and using Gr\"onwall's inequality, we obtain that \begin{equation*} E_\varepsilon(t)\leq C\, E_\varepsilon(0)e^{CtE_\varepsilon(0)}\, , \end{equation*} for all $t\in [0,T_\varepsilon^\ast]$. The previous estimate implies that, for all $t\in [0,T_\varepsilon^\ast]$, one has \begin{equation*} \int_0^t E_\varepsilon(\tau) \detau \leq e^{CtE_\varepsilon(0)}-1\, . \end{equation*} Analogously to inequality \eqref{estimate_integral_energy} in Subsection \ref{ss:improved_lifespan}, we can argue that \begin{equation*} \mathcal B_\varepsilon(0)\int_0^t \exp \left(C \int_0^\tau E_\varepsilon (s)\, \ds\right) E_\varepsilon(\tau)\, \detau\leq C \mathcal B_\varepsilon(0) \left(\exp \left(e^{CtE_\varepsilon(0)}-1\right)-1\right)\, . 
\end{equation*} Then, it remains to control $$ \varepsilon \mathcal B_\varepsilon(0) \int_0^t \exp \left(C \int_0^\tau E_\varepsilon (s)\, \ds\right) E_\varepsilon^2(\tau)\, \detau\, . $$ For this term, we may infer that \begin{equation*} \begin{split} \varepsilon \mathcal B_\varepsilon(0)\int_0^t \exp \left(C \int_0^\tau E_\varepsilon (s)\, \ds\right) E_\varepsilon^2(\tau)\, \detau &\leq C\, \varepsilon \,\mathcal B_\varepsilon(0) \int_0^t E_\varepsilon^2(0)e^{C\tau E_\varepsilon(0)}\, \exp\left(e^{C \tau E_\varepsilon(0)}-1\right)\, \detau\\ &\leq C\, \varepsilon\, \mathcal B_\varepsilon(0) E_\varepsilon (0) \left(\exp \left(e^{CtE_\varepsilon(0)}-1\right)-1\right)\, . \end{split} \end{equation*} In the end, by definition \eqref{def_T^ast_veps} of $T^\ast_\varepsilon$, we deduce \begin{equation}\label{asymptotic_time} T^\ast_\varepsilon\geq \frac{C}{E_\varepsilon(0)}\log\left(\log\left(\frac{CE_\varepsilon(0)}{\max \{\mathcal B_\varepsilon(0),\, \varepsilon \, \mathcal B_\varepsilon(0)E_\varepsilon(0)\}}+1\right)+1\right), \end{equation} for a suitable constant $C>0$. This concludes the proof of Theorem \ref{W-P_fullE}. \chapter*{Future perspectives}\addcontentsline{toc}{chapter}{Future perspectives} We conclude this manuscript by pointing out some possible future goals. First of all, we would like to continue the studies started in Chapter \ref{chap:multi-scale_NSF} and Chapter \ref{chap:BNS_gravity} in two different directions: \begin{itemize} \item on the one hand, we will investigate the regimes not covered yet, i.e. either $m=1$ with the centrifugal effects or $(m+1)/2<n<m$; \item on the other hand, we will focus on the well-posedness analysis for the Oberbeck-Boussinesq limiting system. \end{itemize} In a second step, we would like to dedicate ourselves to the analysis of a different system that describes the evolution of temperature on the ocean surface: the so-called \emph{surface quasi-geostrophic system}.
Specifically, we are interested in the asymptotic analysis in regimes of fast rotational effects, and we would like to inspect the well-posedness of such a system on manifolds, such as the sphere. To conclude, we would like to devote more time to the lifespan of solutions to the Euler equations, in order to study more deeply the stabilizing effects due to the rotation.
\section{Introduction} Analysis of error terms in asymptotic formulas is of considerable importance in various fields of mathematics. For example, consider the von Mangoldt function \[\Lambda(n)=\begin{cases} \log p \quad &\mbox{ if } \ n=p^r, \ r\in \mathbb{N}, \text{ and } p \text{ prime,}\\ 0 & \mbox{ otherwise.} \end{cases} \] The Prime Number Theorem says that \[\sum_{n\leq x}\Lambda(n)=x + \Delta(x),\] where $\Delta(x)$ is $o(x)$. It is also known that the famous Riemann Hypothesis is equivalent to (see \cite{PNT_Under_RH}; also Theorem~\ref{thm:landu_omegapm} below) \begin{equation}\label{PNT_Under_RH} \Delta(x)=O\left(x^{\frac{1}{2}}\log^2 x\right). \end{equation} The following result, proved by Hardy and Littlewood \cite{HardyLittlewoodPNTOmegapm}, shows that such an upper bound for $\Delta(x)$ is optimal in terms of the power of $x$: \begin{align*} \limsup \frac{\Delta(x)}{x^\frac{1}{2}\log\log\log x} > 0 \quad \text{ and } \quad \liminf \frac{\Delta(x)}{x^\frac{1}{2}\log\log\log x} < 0. \end{align*} A weaker result by Landau \cite{Landau} gives \begin{align*} \limsup \frac{\Delta(x)}{x^\frac{1}{2}} > 0 \quad \text{ and } \quad \liminf \frac{\Delta(x)}{x^\frac{1}{2}} < 0. \end{align*} However, Landau's method has wide applications, and it is flexible enough to yield some measure-theoretic results. In Landau's method, the existence of a complex pole with real part $\frac{1}{2}$ serves as a criterion for the existence of the above limits. In this paper, we shall investigate a quantitative version of Landau's result by obtaining the Lebesgue measure of the sets where $\Delta(x)>\lambda x^{1/2}$ and $\Delta(x)<-\lambda x^{1/2}$, for some $\lambda>0$. We shall show that a large Lebesgue measure for the set where $|\Delta(x)|>\lambda x^{1/2}$, for some $\lambda>0$, can replace the criterion of the existence of a complex pole in Landau's method. These ideas will become clear later in this paper.
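As a concrete numerical illustration of the quantities just introduced (this sketch is ours, not part of the paper's argument), $\Lambda(n)$, the Chebyshev sum $\psi(x)=\sum_{n\le x}\Lambda(n)$ and the error term $\Delta(x)=\psi(x)-x$ can be tabulated directly for small $x$:

```python
from math import log

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^r for some prime p and r >= 1, else 0."""
    if n < 2:
        return 0.0
    p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest prime factor
    while n % p == 0:
        n //= p
    return log(p) if n == 1 else 0.0  # nonzero only for prime powers

def psi(x):
    """Chebyshev function: psi(x) = sum of Lambda(n) over n <= x."""
    return sum(von_mangoldt(n) for n in range(2, int(x) + 1))

def delta(x):
    """Error term in the Prime Number Theorem: Delta(x) = psi(x) - x."""
    return psi(x) - x
```

For instance, $\psi(10)=3\log 2+2\log 3+\log 5+\log 7\approx 7.83$, so $\Delta(10)<0$; the fluctuations studied in this paper concern how often $|\Delta(x)|$ exceeds $\lambda x^{1/2}$.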
\subsection*{Outline} In general, consider a sequence of real numbers $\{a_n\}_{n=1}^{\infty}$ having Dirichlet series \[D(s)=\sum_{n=1}^{\infty}\frac{a_n}{n^s}\] that converges in some half-plane. The Perron summation formula \cite[II.2.1]{TenenAnPr} uses analytic properties of $D(s)$ to give \[\sum^*_{n\leq x}a_n=\mathcal{M}(x)+\Delta(x),\] where $\mathcal{M}(x)$ is the main term, $\Delta(x)$ is the error term (which will be specified later) and $\sum^*$ is defined as \begin{equation*} \sum^*_{n\leq x} a_n = \begin{cases} \sum_{n\leq x} a_n \ & \text{if } x\notin \mathbb N \\ \sum_{n< x} a_n +\frac{1}{2} a_x \ &\text{if } x\in\mathbb N. \end{cases} \end{equation*} Standard measures of fluctuations (in this case, of $\Delta(x)$) are $\Omega$ and $\Omega_{\pm}$ estimates, which are defined as follows. \begin{defi} Let $f_1(x)$ be a real-valued function and $f_2(x)$ be a positive monotonically increasing function. We say that $f_1(x)=\Omega (f_2(x))$ if \[ \limsup_{x\rightarrow \infty} \frac{|f_1(x)|}{f_2(x)}>0. \] Also $f_1(x)=\Omega_{\pm} (f_2(x))$ if \[ \limsup_{x\rightarrow \infty} \frac{f_1(x)}{f_2(x)}>0 \quad \mbox{and}\quad \liminf_{x\rightarrow \infty} \frac{f_1(x)}{f_2(x)}<0 .\] \end{defi} \noindent In this paper, we obtain bounds for the Lebesgue measures of the sets on which $\Omega$ and $\Omega_{\pm}$ results hold. To obtain $\Omega$ and $\Omega_{\pm}$ estimates, we shall analyze the Mellin transform of $\Delta(x)$. \begin{defi}\label{def:mellin_transform} For a complex variable $s$, the Mellin transform $A(s)$ of $\Delta(x)$ is defined as: \[A(s)=\int_{1}^{\infty}\frac{\Delta(x)}{x^{s+1}}\mathrm{d} x.\] \end{defi} \noindent In general, $A(s)$ is holomorphic in some half-plane. In Section~\ref{sec:analytic_continuation}, we shall discuss a method to continue $A(s)$ meromorphically.
In particular, we prove in Theorem~\ref{thm:analytic_continuation_mellin_transform} that under some natural assumptions \[A(s)=\int_{\mathscr{C}}\frac{D(\eta)}{\eta(s-\eta)}\mathrm{d} \eta,\] where the contour $\mathscr{C}$ is as in Definition \ref{def:contour} and $s$ lies to the right of $\mathscr{C}$. This result complements Theorem~\ref{thm:omega_pm_main} of Section~\ref{sec:preliminaries} in its applications. In Section~\ref{sec:preliminaries}, we revisit Landau's method and obtain some measure-theoretic results. If $A(s)$ has a pole at $\sigma_0+it_0$ for some $t_0\neq 0$, and has no real pole for $\text{Re}(s)\ge \sigma_0$, then Landau's method (Theorem~\ref{thm:landu_omegapm}) gives \[ \Delta(x)=\Omega_{\pm} (x^{\sigma_0}). \] We also discuss a result of Kaczorowski and Szyd\l o \cite{KaczMeasure} on $E_2(x)$, where \[ \int_0^x \left|\zeta\left(\frac{1}{2}+it \right)\right|^4 \mathrm{d} t = x P(\log x) + E_2(x)\] and $P$ is a certain polynomial of degree $4$. Motohashi \cite{Motohashi1} proved that \[ E_2(x) \ll x^{2/3+\epsilon}, \] and further in \cite{Motohashi2} he showed that \[ E_2(x)=\Omega_{\pm}(\sqrt x). \] The result of Kaczorowski and Szyd\l o mentioned above says that there exist constants $\lambda_0, \nu>0$ such that \begin{align*} &&\mu\{1\le x\le T: E_2(x)>\lambda_0\sqrt x \} = \Omega(T/{(\log T)^{\nu}})& \\ &\text{ and }&\mu\{1\le x\le T: E_2(x)<-\lambda_0\sqrt x \} = \Omega(T/{(\log T)^{\nu}})& \ \text{ as } T\rightarrow \infty, \end{align*} where $\mu$ is the Lebesgue measure \footnote{Throughout this paper, $\mu$ will denote the Lebesgue measure on the real line $\mathbb R$.}. These results not only prove $\Omega_{\pm}$ bounds, but also give quantitative estimates for the occurrences of such fluctuations. This result of Kaczorowski and Szyd\l o has been generalized by Bhowmik, Ramar\'e and Schlage-Puchta \cite{gautami} to localize fluctuations of $\Delta(x)$ to a dyadic range.
Let \begin{align*} &\sum_{n\leq x} G_k(n) = \frac{x^k}{k!} -k \sum_\rho \frac{x^{k-1+\rho}}{\rho(1+\rho)\cdots(k-1+\rho)} + \Delta_k(x), \end{align*} where the Goldbach numbers $G_k(n)$ are defined as \[\quad G_k(n)=\sum_{\substack{n_1,\ldots n_k \\ n_1+\cdots+n_k=n}}\Lambda(n_1)\cdots\Lambda(n_k),\] and $\rho$ runs over the nontrivial zeros of the Riemann zeta function $\zeta(s)$. Bhowmik, Ramar\'e and Schlage-Puchta proved that, under the Riemann Hypothesis, \begin{align*} &&\mu\{T\le x\le 2T: \Delta_k(x)>(\mathfrak c_k + \mathfrak c_k')x^{k-1} \} = \Omega(T/{(\log T)^{6}})& \\ &\text{ and }&\mu\{T\le x\le 2T: \Delta_k(x)<(\mathfrak c_k-\mathfrak c_k') x^{k-1} \} = \Omega(T/{(\log T)^{6}})& \ \text{ as } T\rightarrow \infty, \end{align*} where $k\geq 2$ and $\mathfrak{c}_k, \mathfrak{c}_k'$ are well-defined real numbers depending on $k$, with $\mathfrak{c}_k'>0$. In this paper, we obtain analogous results for other functions. In Theorem~\ref{thm:omega_pm_main}, we further generalize this theorem of Bhowmik, Ramar\'e and Schlage-Puchta so that it has more applications. Moreover, we carry forward this idea to study the influence of measures on the $\Omega$ and $\Omega_{\pm}$ results. In Section~\ref{sec:stronger_omega}, we obtain an $\Omega$ bound for the second moment of $\Delta(x)$ in a special case, namely \[\int_T^{2T}\Delta^2(x)\mathrm{d} x=\Omega(T^{2\alpha + 1 + \epsilon}),\] for any $\epsilon>0$ and for some $\alpha>0$. This is an adaptation of a technique due to Balasubramanian, Ramachandra and Subbarao \cite{BaluRamachandraSubbarao}. In particular, we obtain \[\Delta(x)=\Omega(x^{\alpha}).\] We also derive an $\Omega$ bound for the measure of the set \[\mathcal{A}(\alpha, T):=\{x: x\in[T, 2T], |\Delta(x)|>x^{\alpha}\}.\] In Section~\ref{sec:measure_analysis}, we establish a connection between $\mu(\mathcal{A}(\alpha, T))$ and fluctuations of $\Delta(x)$.
In Proposition~\ref{prop:refine_omega_from_measure}, we see that \[\mu(\mathcal{A}(\alpha, T))\ll T^{1-\delta} \ \text{ implies } \ \Delta(x)=\Omega(x^{\alpha + \delta/2}).\] However, Theorem~\ref{thm:omega_pm_measure} gives that \[ \mu(\mathcal{A}(\alpha, T))=\Omega(T^{1-\delta}) \ \text{ implies } \ \Delta(x) =\Omega_\pm(x^{\alpha-\delta}),\] provided $A(s)$ does not have a real pole for $\text{Re} (s) \geq \alpha-\delta$. In particular, this says that either we can improve on the $\Omega$ result or we can obtain a tight $\Omega_\pm$ result for $\Delta(x)$. We formulate our results in a way that makes them applicable in wide generality. The nature of the problems to which the methods of this paper apply is formalized in various assumptions. A summary of the applications of the results obtained in this paper is given below. \subsection*{Applications} We conclude the introduction to this paper by mentioning a few applications. \subsubsection*{Error Term of a Twisted Divisor Function} For a fixed $\theta\neq 0$, we consider \begin{equation}\label{eq:tau-n-theta_def} \tau(n, \theta)=\sum_{d\mid n}d^{i\theta}\ . \end{equation} This function is used in \cite[Chapter 4]{DivisorsHallTenen} to measure the clustering of divisors. The Dirichlet series of $|\tau(n, \theta)|^2$ can be expressed in terms of the Riemann zeta function as \begin{equation}\label{eq:dirichlet_series_tauntheta} D(s)=\sum_{n=1}^{\infty}\frac{|\tau(n, \theta)|^2}{n^s}=\frac{\zeta^2(s)\zeta(s+i\theta)\zeta(s-i\theta)}{\zeta(2s)} \quad\quad \text{for}\quad \text{Re}(s)>1. \end{equation} In \cite[Theorem 33]{DivisorsHallTenen}, Hall and Tenenbaum proved that \begin{equation}\label{eq:formmula_tau_ntheta} \sum_{n\leq x}^*|\tau(n, \theta)|^2=\omega_1(\theta)x\log x + \omega_2(\theta)x\cos(\theta\log x) +\omega_3(\theta)x + \Delta(x), \end{equation} where the $\omega_i(\theta)$ are explicit constants depending only on $\theta$.
They also showed that \begin{equation}\label{eq:upper_bound_delta} \Delta(x)=O_\theta(x^{1/2}\log^6x). \end{equation} Here the main term comes from the residues of $D(s)$ at $s=1, 1\pm i\theta$. All other poles of $D(s)$ come from the zeros of $\zeta(2s)$. Using a pole on the line $\text{Re}(s)=1/4$, Landau's method gives \[\Delta(x)=\Omega_{\pm}(x^{1/4}).\] In order to apply the method of Bhowmik, Ramar\'e and Schlage-Puchta, we need \[ \int_T^{2T} \Delta^2(x) \mathrm{d} x \ll T^{2\sigma_0+1+\epsilon}, \] for any $\epsilon >0$ and $\sigma_0=1/4$; such an estimate is not possible due to Corollary~\ref{coro:balu_ramachandra1}. The generalization of this method in Theorem~\ref{thm:omega_pm_main} can be applied to get \begin{align*} \mu \left( \mathcal{A}_j\cap [T, 2T]\right)=\Omega\left(T^{1/2}(\log T)^{-12}\right) \quad \text{ for } j=1, 2, \end{align*} where the sets $\mathcal{A}_j$ for $\Delta(x)$ are defined as \begin{align*} &&\mathcal{A}_1&=\left\{x: \Delta(x)>(\lambda(\theta)-\epsilon)x^{1/4}\right\}&\\ &\text{and}&\mathcal{A}_2&=\left\{ x : \Delta(x)<(-\lambda(\theta)+\epsilon)x^{1/4}\right\},& \end{align*} for any $\epsilon>0$ and $\lambda(\theta)>0$ as in (\ref{eqn:lambda_theta}). Under the Riemann Hypothesis, however, we show in (\ref{result:tau_n_theta_omegapm_underRH}) that the above $\Omega$ bounds can be improved to \begin{align*} \mu\left(\mathcal{A}_j\right) =\Omega\left(T^{3/4-\epsilon}\right),\quad \text{ for } j=1, 2 \end{align*} and for any $\epsilon>0$. Fix a constant $c_1>0$ and define \[\alpha(T) =\frac{3}{8}-\frac{c_1}{(\log T)^{1/8}}.\] In Corollary~\ref{coro:balu_ramachandra2}, we prove that \[\Delta(T)=\Omega\left(T^{\alpha(T)}\right).\] In Proposition~\ref{Balu-Ramachandra-measure}, we give an $\Omega$ estimate for the measure of the sets involved in the above bound: \[ \mu(\mathcal{A}\cap [T,2T])=\Omega\left(T^{2\alpha(T)}\right),\] where \[\mathcal{A}=\{x: |\Delta(x)|\ge Mx^{\alpha}\}\] for a constant $M>0$.
In Theorem~\ref{thm:tau_theta_omega_pm}, we show that \[ \text{either } \ \Delta(x)=\Omega\left(x^{ \alpha(x)+\delta/2}\right) \ \text{ or } \ \Delta(x)=\Omega_{\pm}\left(x^{3/8-\delta'}\right), \] for $0<\delta<\delta'<1/8$. We may conjecture that \[ \Delta(x)=O(x^{3/8+\epsilon}) \ \text{ for any } \epsilon>0.\] Theorem~\ref{thm:tau_theta_omega_pm} and this conjecture imply that \[\Delta(x)=\Omega_{\pm}(x^{3/8-\epsilon})\ \text{ for any } \epsilon>0.\] \subsubsection*{Square-Free Divisors} Let $\Delta(x)$ be the error term in the asymptotic formula for the summatory function of the number of square-free divisors: \begin{align*} \Delta(x)=\sum_{n\leq x}^* 2^{\omega(n)}-\frac{x\log x}{\zeta(2)}+\left(\frac{2\zeta'(2)}{\zeta^2(2)} - \frac{2\gamma - 1}{\zeta(2)}\right)x, \end{align*} where $\omega(n)$ denotes the number of distinct prime divisors of $n$. It is known that $\Delta(x)\ll x^{1/2}$ (see \cite{holder}). Let $\lambda_1>0$ and the sets $\mathcal{A}_j$, for $j=1, 2,$ be defined as in Section~\ref{subsec:sqfree_divisors}: \begin{align*} \mathcal{A}_1&=\left\{x: \Delta(x)>(\lambda_1-\epsilon)x^{1/4}\right\},\\ \mathcal{A}_2&=\left\{ x : \Delta(x)<(-\lambda_1+\epsilon)x^{1/4}\right\}. \end{align*} In (\ref{eq:sqfree_divisors_omegaset_uc}), we show that \begin{equation*} \mu\left(\mathcal{A}_j\cap[T, 2T]\right)=\Omega\left(T^{1/2}\right), \text{ for } j=1, 2. \end{equation*} Under the Riemann Hypothesis, however, we prove the following $\Omega$ bounds in (\ref{eq:sqfree_divisors_omegaset}): \begin{equation*} \mu\left(\mathcal{A}_j\cap[T, 2T]\right)=\Omega\left(T^{1-\epsilon}\right), \text{ for } j=1, 2 \end{equation*} and for any $\epsilon>0$.
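Recall that $2^{\omega(n)}$ counts the square-free divisors of $n$, so its summatory function is easy to tabulate by brute force. The following sketch (our own, purely illustrative) can be used to compare the partial sums against the main term $x\log x/\zeta(2)+cx$ for small $x$:

```python
def small_omega(n):
    """omega(n): the number of distinct prime factors of n."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def squarefree_divisor_count(n):
    """2^omega(n): the number of square-free divisors of n."""
    return 2 ** small_omega(n)

def partial_sum(x):
    """sum_{n <= x} 2^omega(n)."""
    return sum(squarefree_divisor_count(n) for n in range(1, x + 1))
```

For example, $\sum_{n\le 10}2^{\omega(n)}=1+2+2+2+2+4+2+2+2+4=23$.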
\subsubsection*{The Error Term in the Prime Number Theorem} Let $\Delta(x)$ be the error term in the Prime Number Theorem: \[\Delta(x)=\sum_{n\leq x}^*\Lambda(n)-x.\] We know from Landau's theorem \cite{Landau} that \[\Delta(x)=\Omega_\pm\left(x^{1/2}\right)\] and from the theorem of Hardy and Littlewood \cite{HardyLittlewoodPNTOmegapm} that \[\Delta(x)=\Omega_\pm\left(x^{1/2}\log\log\log x\right).\] We define \begin{align*} \mathcal{A}_1&=\left\{x: \Delta(x)>(\lambda_2-\epsilon)x^{1/2}\right\},\\ \mathcal{A}_2&=\left\{ x : \Delta(x)<(-\lambda_2+\epsilon)x^{1/2}\right\}, \end{align*} where $\lambda_2>0$ is as in Section~\ref{subsec:pnt_error}. If we assume the Riemann Hypothesis, then the theorem of Bhowmik, Ramar\'e and Schlage-Puchta (see Theorem~\ref{thm:kaczorowski} below), along with (\ref{PNT_Under_RH}), gives \[ \mu\left(\mathcal{A}_j\cap[T, 2T]\right)=\Omega\left(\frac{T}{\log^4 T}\right) \text{ for } j=1, 2.\] However, as an application of Corollary~\ref{cor:measure_omega_pm_from_upper_bound}, we prove the following weaker bound unconditionally: \[ \mu\left(\mathcal{A}_j\cap[T, 2T]\right)=\Omega\left(T^{1-\epsilon}\right), \text{ for } j=1, 2\] and for any $\epsilon>0$. \subsubsection*{Non-isomorphic Abelian Groups} Let $a_n$ be the number of non-isomorphic abelian groups of order $n$, with corresponding Dirichlet series \[ \sum_{n=1}^{\infty}\frac{a_n}{n^s} = \prod_{k=1}^{\infty}\zeta(ks)\ \text{ for } \text{Re}(s)>1. \] Let $\Delta(x)$ be defined as \begin{equation*} \Delta(x)=\sum_{n\leq x}^*a_n - \sum_{k=1}^{6}\Big(\prod_{j \neq k}\zeta(j/k)\Big) x^{1/k}. \end{equation*} It is an open problem to show that \begin{equation}\label{conj1} \Delta(x)\ll x^{1/6+\epsilon}\quad \text{for any} \ \epsilon>0. \end{equation} The best known upper bound for $\Delta(x)$ is due to O. Robert and P.
Sargos \cite{sargos}, which gives \[\Delta(x)\ll x^{1/4+\epsilon}.\] Balasubramanian and Ramachandra \cite{BaluRamachandra2} proved that \begin{equation*} \int_T^{2T}\Delta^2(x)\mathrm{d} x=\Omega(T^{4/3}\log T). \end{equation*} Following the proof of Proposition \ref{Balu-Ramachandra-measure}, we get \begin{equation*} \mu\left( \{ T\le x\le 2T: |\Delta(x)|\ge \lambda_3 x^{1/6}(\log x)^{1/2}\} \right) =\Omega(T^{5/6-\epsilon}), \end{equation*} for some $\lambda_3>0$ and for any $\epsilon>0$. Balasubramanian and Ramachandra \cite{BaluRamachandra2} also obtained \[ \Delta(x)=\Omega_{\pm}(x^{92/1221}).\] Sankaranarayanan and Srinivas \cite{srini} improved this to \[ \Delta(x)=\Omega_{\pm}\left(x^{1/10}\exp\left(c\sqrt{\log x}\right)\right),\] for some constant $c>0$. It has been conjectured that \[\Delta(x)=\Omega_\pm(x^{1/6-\delta}),\] for any $\delta>0$. In Proposition~\ref{prop:abelian_group}, we prove that either \[\int_T^{2T}\Delta^4(x)\mathrm{d} x=\Omega( T^{5/3+\delta} )\text{ or }\Delta(x)=\Omega_\pm(x^{1/6-\delta}),\] for any $0<\delta< 1/42$. The conjectured upper bound (\ref{conj1}) of $\Delta(x)$ gives \[\int_T^{2T}\Delta^4(x)\mathrm{d} x \ll T^{5/3+\delta}.\] This along with Proposition~\ref{prop:abelian_group} implies that \[\Delta(x)=\Omega_\pm(x^{1/6-\delta}),\] for any $0<\delta <1/42$. \subsection*{Acknowledgment} We thank Gautami Bhowmik for initiating us on this topic and providing us with a preprint of \cite{gautami}. We thank R. Balasubramanian for his insightful comments and useful discussions. We thank A. Ivi\'c, O. Ramar\'e and K. Srinivas for carefully reading the manuscript and for their suggestions, which made the paper more readable and up to date. \section{Mellin Transform of the Error Term}\label{sec:analytic_continuation} Recall that we have a sequence of real numbers $\{a_n\}_{n=1}^{\infty}$, with its Dirichlet series $D(s)$.
We also have \[\sum_{n\leq x}^*a_n=\mathcal{M}(x)+\Delta(x),\] where $\mathcal{M}(x)$ is the main term and $\Delta(x)$ is the error term. The following set of assumptions describes $\mathcal{M}(x)$ and $D(s)$ in wide generality. \begin{asump}\label{as:for_continuation_mellintran} Suppose there exist real numbers $\sigma_1$ and $\sigma_2$ satisfying $0<\sigma_1<\sigma_2$, such that \begin{enumerate} \item[(i)] $D(s)$ is absolutely convergent for $\text{Re}(s)> \sigma_2$. \item[(ii)] $D(s)$ can be meromorphically continued to the half-plane $\text{Re}(s)>\sigma_1$ with only finitely many poles $\rho$ of $D(s)$ satisfying \[ \sigma_1<\text{Re}(\rho)\leq\sigma_2. \] We shall denote this set of poles by $\mathcal{P}$. \item[(iii)] The main term $\mathcal{M}(x)$ is the sum of the residues of $\frac{D(s)x^s}{s}$ at the poles in $\mathcal P$: \[\mathcal{M}(x)=\sum_{\rho\in \mathcal{P}}\operatorname{Res}_{s=\rho}\left( \frac{D(s)x^s}{s}\right).\] \end{enumerate} \end{asump} \begin{note}\label{note:initial_assumption_consequences} We may also observe: \begin{enumerate} \item[(i)] For any $\epsilon>0$, we have \[|a_n|, |\mathcal{M}(x)|, |\Delta(x)|, \left|\sum_{n\leq x}a_n\right| \ll x^{\sigma_2+\epsilon}. \] \item[(ii)] The main term $\mathcal{M}(x)$ is a finite sum of terms of the form $x^{\nu_{2,j}}(\log x)^{\nu_{3,j}}$: \[\mathcal{M}(x)=\sum_{j\in\mathscr{J}}\nu_{1, j}x^{\nu_{2, j}}(\log x)^{\nu_{3, j}},\] where $\nu_{1, j}$ are complex numbers, $\nu_{2, j}$ are real numbers with $\sigma_1<\nu_{2, j}\leq \sigma_2$, $\nu_{3, j}$ are non-negative integers, and $\mathscr J$ is a finite index set. \end{enumerate} \end{note} \noindent Now we shall discuss a method to obtain a meromorphic continuation of $A(s)$ (see Definition~\ref{def:mellin_transform}) by expressing it as a contour integral involving $D(s)$. Below, we define our required contour $\mathscr{C}$. \begin{defi}\label{def:contour} Let $\sigma_1$ and $\sigma_2$ be as in Assumptions~\ref{as:for_continuation_mellintran}, and let $T_0>0$ be large enough that $|\text{Im}(\rho)|<T_0$ for every $\rho\in\mathcal{P}$.
Choose a positive real number $\sigma_3$ such that $\sigma_3>\sigma_2$. We define the contour $\mathscr{C}$, as in Figure~\ref{fg:contour_c0}, as the union of the following five line segments: \[\mathscr{C}=L_1\cup L_2\cup L_3 \cup L_4 \cup L_5 ,\] where \begin{align*} L_1=&\{\sigma_3+iv: T_0 \leq v < \infty\}, &L_2=\{u+iT_0: \sigma_1\leq u\leq \sigma_3 \}, \\ L_3=&\{\sigma_1+iv: -T_0 \leq v \leq T_0\}, &L_4=\{u-iT_0: \sigma_1\leq u\leq \sigma_3 \}, \\ L_5=&\{\sigma_3+iv: -\infty< v \leq -T_0\}. \\ \end{align*} \end{defi} \begin{center} \begin{figure} \begin{tikzpicture}[yscale=0.8] \draw [<->][dotted] (0, -4.4)--(0, 4.4); \node at (-0.5, 2) {$T_0$}; \draw [thick] (-0.1, 2 )--(0.1,2); \node at (-0.3, 0.3) {$0$}; \node at (-0.5, -2) {$-T_0$}; \draw [thick] (-0.1,-2 )--(0.1, -2); \draw [<->][dotted] (5, 0)--(-2, 0); \draw [dotted] (2, -4.4)--(2, 4.4); \node at (1.7, 0.3) {$\sigma_1$}; \draw [dotted] (3, -4.4)--(3, 4.4); \node at (2.7, 0.3) {$\sigma_2$}; \draw [dotted] (4, -4.4)--(4, 4.4); \node at (3.7, 0.3) {$\sigma_3$}; \draw [thick] [postaction={decorate, decoration={ markings, mark= between positions 0.1 and 0.99 step 0.2 with {\arrow[line width=1.2pt]{>},}}}] (4, -4.4)--(4, -2)--(2, -2)--(2, 2)--(4, 2)--(4, 4.4); \node [below left] at (1.8, 1.6) {$\mathscr{C}$}; \end{tikzpicture} \caption{Contour $\mathscr{C}$}\label{fg:contour_c0} \end{figure} \end{center} In the above definition, the set of poles of $D(s)$ that lie to the right of $\mathscr{C}$ is exactly the set $\mathcal{P}$. The main theorem of this section gives the analytic continuation of $A(s)$ as follows: \begin{thm}\label{thm:analytic_continuation_mellin_transform} Under the conditions in Assumptions~\ref{as:for_continuation_mellintran}, we have \[A(s)=\int_{\mathscr{C}}\frac{D(\eta)}{\eta(s-\eta)}\mathrm{d}\eta,\] when $s$ lies to the right of $\mathscr{C}$. \end{thm} \noindent We shall use several preparatory lemmas to prove the above theorem.
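Before turning to the proof, Definition~\ref{def:mellin_transform} can be sanity-checked numerically in the simplest case $a_n\equiv 1$, where $D(s)=\zeta(s)$, $\mathcal M(x)=x$ and, away from the integers, $\Delta(x)=\lfloor x\rfloor-x$. The classical identity $\int_1^\infty(\lfloor x\rfloor-x)x^{-s-1}\,\mathrm{d} x=\zeta(s)/s-1/(s-1)$ for $\text{Re}(s)>1$ can then be verified by integrating exactly over each interval $[n,n+1]$. The sketch below is our own illustration and does not reflect the paper's normalization choices:

```python
from math import fsum

def mellin_of_delta(s, N=20000):
    """Numerically evaluate A(s) = integral over [1, infinity) of
    (floor(x) - x) x^{-s-1} dx for real s > 1, by summing the
    closed-form integral over each interval [n, n+1]."""
    def piece(n):
        # integral over [n, n+1] of (n - x) x^{-s-1} dx, in closed form
        a = n * (n ** (-s) - (n + 1) ** (-s)) / s
        b = (n ** (-(s - 1)) - (n + 1) ** (-(s - 1))) / (s - 1)
        return a - b
    return fsum(piece(n) for n in range(1, N + 1))

# Compare with zeta(s)/s - 1/(s-1) at s = 3, using Apery's constant zeta(3).
ZETA_3 = 1.2020569031595942
```

For instance, `mellin_of_delta(3)` agrees with $\zeta(3)/3-1/2\approx -0.0993$ to many digits; the tail beyond $N$ contributes only $O(N^{-3})$.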
Our first lemma gives an integral expression for $\Delta(x)$. \begin{lem} The error term $\Delta(x)$ can be expressed as the following integral: \begin{equation*} \Delta(x) = \int_{\mathscr{C}}\frac{D(\eta)x^{\eta}}{\eta}{\mathrm{d} \eta}. \end{equation*} \end{lem} \begin{proof} By Perron's formula, $\sum_{n\leq x}^*a_n$ is given by the integral of $\frac{D(\eta)x^\eta}{\eta}$ over the vertical line $\text{Re}(\eta)=\sigma_3$; shifting this line to the contour $\mathscr{C}$ collects the residues at the poles in $\mathcal{P}$, which by Assumptions~\ref{as:for_continuation_mellintran} sum to $\mathcal{M}(x)$. Subtracting $\mathcal{M}(x)$ gives the stated expression. \end{proof} \noindent As a consequence of the above lemma, we get: \begin{equation}\label{eq:A_in_eta_x} A(s)= \int_1^{\infty}\int_{\mathscr{C}}\frac{D(\eta)x^{\eta}}{\eta}{\mathrm{d}\eta}\frac{\mathrm{d} x}{x^{s+1}}. \end{equation} Now we shall justify the interchange of the $\eta$- and $x$-integrals in (\ref{eq:A_in_eta_x}), which will help us to continue $A(s)$ meromorphically. \begin{defi} Define the following complex-valued function $B(s)$: \begin{align*} B(s)&:=\int_{\mathscr{C}}\frac{D(\eta)}{\eta}\int_1^{\infty}\frac{\mathrm{d} x}{x^{s-\eta+1}}{\mathrm{d}\eta}\\ &=\int_{\mathscr{C}}\frac{D(\eta){\mathrm{d}\eta}}{(s-\eta)\eta},\quad \quad \text{ for } \text{Re}(s)>\text{Re}(\eta). \end{align*} \end{defi} \noindent Observe that $B(s)$ is well-defined and analytic, as the integral defining it is absolutely convergent. \begin{defi} For a positive integer $N$, define the contour $\mathscr{C}(N)$ as: \[\mathscr{C}(N)=\{\eta \in \mathscr{C}: |\text{Im}(\eta)|\leq N\}.\] \end{defi} \begin{defi} Integrating over $\mathscr{C}(N)$, define $B_N(s)$ as: \begin{align*} B_N(s)&=\int_{\mathscr{C}(N)}\frac{D(\eta)}{\eta}\int_1^{\infty}\frac{\mathrm{d} x}{x^{s-\eta+1}}{\mathrm{d} \eta}\\ &=\int_{\mathscr{C}(N)}\frac{D(\eta){\mathrm{d}\eta}}{(s-\eta)\eta},\quad \quad \text{ for } \text{Re}(s)>\text{Re}(\eta).
\end{align*} \end{defi} \noindent With the above definitions, we prove: \begin{lem}\label{lem:FubiniForExchange} The functions $B$ and $B_N$ satisfy the following identities: \begin{align} \label{eq:limitAn_prime} B(s)&= \lim_{N\rightarrow \infty}B_N(s)\\ \label{eq:fubini_onAn_prime} &=\lim_{N\rightarrow \infty}\int_1^{\infty}\int_{\mathscr{C}(N)}\frac{D(\eta)x^{\eta}}{\eta}{\mathrm{d}\eta}\frac{\mathrm{d} x}{x^{s+1}}. \end{align} \end{lem} \begin{proof} Assume that $N>T_0$. To show (\ref{eq:limitAn_prime}), note: \begin{align*} |B(s)-B_N(s)|&\leq\left|\int_{\mathscr{C}-\mathscr{C}(N)} \frac{D(\eta){\mathrm{d}\eta}}{(s-\eta)\eta}\right| \ll \left| \int_{\sigma_3+iN}^{\sigma_3+i\infty}\frac{D(\eta){\mathrm{d}\eta}}{(s-\eta)\eta} + \int_{\sigma_3-i\infty}^{\sigma_3-iN}\frac{D(\eta){\mathrm{d}\eta}}{(s-\eta)\eta}\right|\\ & \ll \int_N^{\infty}\frac{\mathrm{d} v}{v^2} \ll \frac{1}{N}\quad (\text{substituting }\eta=\sigma_3+iv). \end{align*} This completes the proof of (\ref{eq:limitAn_prime}). We shall prove (\ref{eq:fubini_onAn_prime}) using a theorem of Fubini and Tonelli \cite[Theorem~B.3.1,~(b)]{DeitmarHarmonic}. To show that the integrals commute, we need to show that one of the iterated integrals in (\ref{eq:fubini_onAn_prime}) converges absolutely. Note: \begin{align*} \int_{\mathscr{C}(N)}\int_1^{\infty}\left|\frac{D(\eta)}{\eta x^{s-\eta+1}}\right| \mathrm{d} x|\mathrm{d}\eta| \ll \int_{\mathscr{C}(N)}\left|\frac{D(\eta)}{\eta (\text{Re}(s)-\text{Re}(\eta))}\right| |\mathrm{d}\eta|< \infty, \end{align*} as $\mathscr{C}(N)$ is a finite contour. Thus (\ref{eq:fubini_onAn_prime}) follows.
\end{proof}
\noindent Define
\[B'_N(s):= \int_1^{\infty}\int_{\mathscr{C}(N)}\frac{D(\eta)x^{\eta}}{\eta}{\mathrm{d}\eta}\frac{\mathrm{d} x}{x^{s+1}}.\]
Hence, (\ref{eq:fubini_onAn_prime}) of Lemma~\ref{lem:FubiniForExchange} can be restated as
\[\lim_{N\rightarrow\infty}B_N'(s)=B(s).\]
Now to show $A(s)=B(s)$, it is enough to show that
\[\lim_{N\rightarrow \infty} \int_1^{\infty}\int_{\mathscr{C}-\mathscr{C}(N)} \frac{D(\eta)x^{\eta}}{\eta}{\mathrm{d}\eta}\frac{\mathrm{d} x}{x^{s+1}}=0.\]
To interchange the limit with the integral over $x$, we would need uniform convergence of the inner integral, which would force the above limit to be zero; however, we do not have such uniform convergence. It is easy to see from Perron's formula that the problem arises when $x$ is an integer. To handle this problem, we shall divide the integral into two parts, with one part being a neighborhood of the integers.
\begin{defi} For $\delta=\frac{1}{\sqrt N}$ (where $N\geq 2$), we construct the following set as a neighborhood of the integers:
\[\mathcal{S}(\delta):=[1, 1+\delta]\cup\Big(\bigcup_{m\geq 2}[m-\delta, m+\delta]\Big).\]
\end{defi}
\noindent Write
\begin{equation}\label{eq:A_min_BN} A(s)-B'_N(s)=J_{1, N}(s) + J_{2, N}(s) - J_{3, N}(s), \end{equation}
where
\begin{align*} &J_{1, N}(s)= \int_{\mathcal{S}(\delta)^c}\int_{\mathscr{C}-\mathscr{C}(N)}\frac{D(\eta)x^\eta}{\eta}\mathrm{d}\eta \frac{\mathrm{d} x}{x^{s+1}},&&\\ &J_{2, N}(s)= \int_{\mathcal{S}(\delta)}\int_{\sigma_3-i\infty}^{\sigma_3+i\infty}\frac{D(\eta)x^\eta}{\eta}\mathrm{d}\eta \frac{\mathrm{d} x}{x^{s+1}},&&\\ &J_{3, N}(s)= \int_{\mathcal{S}(\delta)}\int_{\sigma_3-iN}^{\sigma_3+iN}\frac{D(\eta)x^\eta}{\eta}\mathrm{d}\eta \frac{\mathrm{d} x}{x^{s+1}}.&&\\ \end{align*}
In the next three lemmas, we shall show that each $J_{i, N}(s) \rightarrow 0$ as $N \rightarrow \infty$.
\begin{lem}\label{lem:J1lim} For $\text{Re}(s)=\sigma>\sigma_3+1$, we have the limit
\begin{equation*} \lim_{N\rightarrow \infty} J_{1, N}(s)= 0.
\end{equation*}
\end{lem}
\begin{proof} Using Perron's formula \cite[Theorem~II.2.2]{TenenAnPr} for $x\in \mathcal{S}(\delta)^c$, we have
\begin{align*} \left| \int_{\mathscr{C}-\mathscr{C}(N)}\frac{D(\eta)x^\eta}{\eta}\mathrm{d}\eta \right|& \ll x^{\sigma_3}\sum_{n=1}^{\infty} \frac{|a_n|}{n^{\sigma_3}(1+N|\log(x/n)|)} \\ &\ll \frac{x^{\sigma_3}}{N}\sum_{n=1}^{\infty}\frac{|a_n|}{n^{\sigma_3}} + \frac{1}{N}\sum_{x/2\leq n \leq 2x}\frac{x|a_n|}{|x-n|}\left(\frac{x}{n}\right)^{\sigma_3}\\ &\ll \frac{x^{\sigma_3}}{N} + \frac{x^{\sigma_3+1+\epsilon}}{\delta N} \ll \frac{x^{\sigma_3+1+\epsilon}}{\sqrt N}, \end{align*}
as $\delta=\frac{1}{\sqrt N}$. From the above calculation, we see that
\[|J_{1, N}|\ll \frac{1}{\sqrt N}\int_{1}^{\infty}x^{\sigma_3-\sigma+\epsilon}dx\ll \frac{1}{\sqrt N},\]
where we choose $\epsilon<\sigma-\sigma_3-1$, which is possible since $\sigma=\text{Re}(s)>\sigma_3+1$. This proves the required result.
\end{proof}
\begin{lem}\label{lem:J2lim} For $\text{Re}(s)=\sigma>\sigma_3$,
\[\lim_{N\rightarrow \infty} J_{2, N}(s) = 0. \]
\end{lem}
\begin{proof} Recall that
\begin{equation*} \sum_{n\leq x}^*a_n = \begin{cases} \sum_{n<x} a_n + a_x/2 &\mbox{ if } x \in \mathbb{N}, \\ \sum_{n\leq x}a_n &\mbox{ if } x \notin \mathbb{N}. \end{cases} \end{equation*}
By Note~\ref{note:initial_assumption_consequences},
$$\sum_{n\leq x}^*a_n \ll x^{\sigma_3}.$$
Using this bound, we calculate an upper bound for $J_{2, N}$ as follows:
\begin{align*} &\left|\int_{\mathcal{S}(\delta)}\int_{\sigma_3-i\infty}^{\sigma_3+i\infty}\frac{D(\eta)x^\eta}{\eta}\mathrm{d}\eta\frac{\mathrm{d} x}{x^{s+1}}\right| \leq\int_{\mathcal{S}(\delta)}\frac{\left|\sum_{n\leq x}^* a_n\right|}{x^{\sigma+1}}\mathrm{d} x\\ &\ll \int_{\mathcal{S}(\delta)} x^{\sigma_3-\sigma-1}\mathrm{d} x \ll \int_1^{1+\delta}x^{\sigma_3-\sigma-1}\mathrm{d} x + \sum_{m=2}^{\infty} \int_{m-\delta}^{m+\delta}x^{\sigma_3-\sigma-1}\mathrm{d} x .
\end{align*}
This gives
\[|J_{2, N}(s)|\ll \delta + \sum_{m\geq 2}\left(\frac{1}{(m-\delta)^{\sigma-\sigma_3}}-\frac{1}{(m+\delta)^{\sigma-\sigma_3}}\right).\]
Using the mean value theorem, for all $m\geq 2$ there exists a real number $\overline{m}\in[m-\delta, m+\delta]$ such that
\begin{align*} |J_{2, N}(s)| \ll \delta + \sum_{m\geq 2}\frac{\delta}{\overline{m}^{\sigma-\sigma_3+1}} \ll \delta=\frac{1}{\sqrt N}, \quad \text{ since } \sigma> \sigma_3. \end{align*}
This implies that $J_{2, N}$ goes to zero as $N\rightarrow\infty$.
\end{proof}
\begin{lem}\label{lem:J3lim} For $\sigma>\sigma_3$, we have
\[\lim_{N\rightarrow \infty} J_{3, N}(s) = 0.\]
\end{lem}
\begin{proof} Consider
\begin{align*} J_{3, N}(s) &= \int_{\mathcal{S}(\delta)} \int_{\sigma_3-iN}^{\sigma_3+iN}\frac{D(\eta)x^\eta}{\eta}\mathrm{d}\eta \frac{\mathrm{d} x}{x^{s+1}}. \end{align*}
This double integral is absolutely convergent for $\text{Re}(s)>\sigma_3$. Using the Fubini--Tonelli theorem \cite[Theorem~B.3.1,~(b)]{DeitmarHarmonic}, we can interchange the integrals:
\begin{align*} J_{3, N}(s)&=\int_{\sigma_3-iN}^{\sigma_3+iN}\frac{D(\eta)}{\eta}\int_{\mathcal{S}(\delta)}x^{\eta-s-1}{\mathrm{d} x}~{\mathrm{d}\eta}\\ &=\int_{\sigma_3-iN}^{\sigma_3+iN} \frac{D(\eta)}{\eta}\left\{\int_1^{1+\delta}\frac{x^{\eta}}{x^{s+1}}\mathrm{d} x +\sum_{m\geq 2}\int_{m-\delta}^{m+\delta}\frac{x^{\eta}}{x^{s+1}}\mathrm{d} x\right\}\mathrm{d}\eta. \end{align*}
For any $\theta_1, \theta_2 $ such that $0<\theta_1<\theta_2<\infty$, we have
\begin{align*} \int_{\theta_1}^{\theta_2}x^{\eta-s-1}\mathrm{d} x = \frac{1}{s-\eta}\left\{ \frac{1}{\theta_1^{s-\eta}}-\frac{1}{\theta_2^{s-\eta}} \right\} = \frac{\theta_2-\theta_1}{\overline{\theta}^{s-\eta+1}}, \end{align*}
for some $\overline{\theta}\in [\theta_1,\theta_2]$.
Applying the above formula to $J_{3, N}(s)$, we get
\begin{align*} J_{3, N}(s)&=\int_{\sigma_3-iN}^{\sigma_3+iN} \frac{D(\eta)}{\eta}\left\{\frac{\delta}{\overline{1}^{s-\eta+1}}+\sum_{m\geq 2} \frac{2\delta}{\overline{m}^{s-\eta+1}}\right\}\mathrm{d}\eta, \end{align*}
where $\overline{1}\in [1, 1+\delta]$ and $\overline{m}\in [m-\delta, m+\delta]$ for all integers $m\geq2$. We can interchange the series and the integral, as the series is absolutely convergent. So we have
\begin{align*} J_{3, N}(s)&\ll \delta \sum_{m\geq 1} \int_{-N}^{N}\frac{1}{(1+|v|)\overline{m}^{\sigma-\sigma_3+1}}\mathrm{d} v \quad \text{( substituting }\eta=\sigma_3+iv\text{ )}\\ &\ll \delta \log N \sum_{m\geq 1} \frac{1}{\overline{m}^{\sigma-\sigma_3+1}} \ll \frac{\log N}{\sqrt N}. \end{align*}
Here we used the fact that for $\sigma>\sigma_3$, the series
\[\sum_{m\geq 1}\frac{1}{\overline{m}^{\sigma-\sigma_3+1}}\]
is absolutely convergent. This proves the required result.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:analytic_continuation_mellin_transform}}] From equation (\ref{eq:A_min_BN}) and Lemmas~\ref{lem:J1lim}, \ref{lem:J2lim} and \ref{lem:J3lim}, we get
\[A(s)=\lim_{N\rightarrow \infty}B'_N(s),\]
when $\text{Re}(s)>\sigma_3+1$. From Lemma~\ref{lem:FubiniForExchange}, we have
\[B(s)=\lim_{N\rightarrow \infty}B'_N(s).\]
This gives that $A(s)$ and $B(s)$ are equal for $\text{Re}(s)>\sigma_3+1$. By analytic continuation, $A(s)$ and $B(s)$ are equal for any $s$ that lies to the right of $\mathscr{C}$.
\end{proof}
\begin{rmk} Though Theorem~\ref{thm:analytic_continuation_mellin_transform} is significant for its elegance and generality, there are alternative and easier ways to meromorphically continue $A(s)$ in many special cases (see \cite{AnderOsci}).
\end{rmk}
In the next section, we shall use the meromorphic continuation of $A(s)$ derived in Theorem~\ref{thm:analytic_continuation_mellin_transform} to obtain $\Omega_\pm$ results for $\Delta(x)$.
\section{The Oscillation Theorem Of Landau }\label{sec:preliminaries}
We begin with a criterion for functions that do not change sign. This theorem appears in \cite{AnderOsci} and is attributed to Landau \cite{Landau}.
\begin{thm}[Landau]\label{thm:landau_representation_integral} Let $f(x)$ be a piecewise continuous function defined on $[1, \infty)$, bounded on every compact interval, that does not change sign for $x>x_0$, for some $1<x_0<\infty$. Define
\[F(s):=\int_{1}^{\infty}\frac{f(x)}{x^{s+1}}\mathrm{d} x,\]
and assume that the above integral is absolutely convergent in some half plane. Further, assume that we have an analytic continuation of $F(s)$ in a region containing the following part of the real line:
\[l(\sigma_0, \infty):=\{\sigma + i0: \sigma > \sigma_0\}.\]
Then the integral representing $F(s)$ is absolutely convergent for $\text{Re}(s)>\sigma_0$, and hence $F(s)$ is an analytic function in this region.
\end{thm}
Landau's theorem gives a criterion for when a function does not oscillate. We shall use it indirectly, by contradiction, to show sign changes of $\Delta(x)$. Consider the Mellin transformation $A(s)$ of $\Delta(x)$. We need the following assumptions to apply Landau's theorem.
\begin{asump}\label{as:for_landau} Suppose there exists a real number $\sigma_0$, $0<\sigma_0<\sigma_1$, such that $A(s)$ has the following properties:
\begin{itemize} \item[(i)] There exists $t_0\neq 0$ such that
\begin{equation*} \lambda:=\limsup\limits_{\sigma\searrow\sigma_0}(\sigma-\sigma_0)|A(\sigma+it_0)|>0 . \end{equation*}
\item[(ii)] We also have
\begin{align*} l_s&:=\limsup\limits_{\sigma\searrow\sigma_0}(\sigma-\sigma_0)A(\sigma) < \infty,\\ l_i&:= \liminf\limits_{\sigma\searrow\sigma_0}(\sigma-\sigma_0)A(\sigma) > -\infty.
\end{align*}
\item[(iii)] The limits $l_i, l_s$ and $\lambda$ satisfy
\[l_i+\lambda>0\quad \text{and}\quad l_s-\lambda<0.\]
\item[(iv)] We can analytically continue $A(s)$ in a region containing the real line $l(\sigma_0, \infty)$.
\end{itemize}
\end{asump}
\begin{rmk} From Assumptions~\ref{as:for_landau}~(i), we see that $\sigma_0+it_0$ is a singularity of $A(s)$.
\end{rmk}
We construct the following sets for further use.
\begin{defi}\label{def:A1A2} With $l_s, l_i$ and $\lambda$ as in Assumptions~\ref{as:for_landau}, and for an $\epsilon$ such that $0<\epsilon<\min(\lambda+l_i, \lambda-l_s)$, define
\begin{align*} &\mathcal{A}_1:=\{x:x \in [1, \infty), \Delta(x)>(l_i+\lambda-\epsilon)x^{\sigma_0}\}\\ \text{ and }\quad&\mathcal{A}_2:=\{x:x \in [1, \infty), \Delta(x)<(l_s-\lambda+\epsilon)x^{\sigma_0}\}. \end{align*}
\end{defi}
\subsection{$\Omega_\pm$ Results}
Under Assumptions~\ref{as:for_landau} and using methods from \cite{KaczMeasure}, we can derive the following measure theoretic theorem.
\begin{thm}\label{thm:landu_omegapm} Let the conditions in Assumptions~\ref{as:for_landau} hold. Then for any real number $M>1$, we have
\begin{align*} \mu(\mathcal{A}_j\cap[M, \infty))>0, \ \text{ for } j=1, 2. \end{align*}
In particular, we have
\begin{align*} \Delta(x)=\Omega_\pm(x^{\sigma_0}). \end{align*}
\end{thm}
\begin{proof} We prove the theorem only for $\mathcal{A}_1$, as the proof for $\mathcal{A}_2$ is similar. Define
\begin{align*} &g(x):=\Delta(x)-(l_i+\lambda-\epsilon)x^{\sigma_0}, \quad &&G(s):=\int_{1}^{\infty}\frac{g(x)}{x^{s+1}}{\mathrm{d} x};\\ &g^+(x):=\max(g(x), 0), \quad &&G^+(s):= \int_{1}^{\infty}\frac{g^+(x)}{x^{s+1}}{\mathrm{d} x};\\ &g^-(x):=\max(-g(x), 0), \quad &&G^-(s):= \int_{1}^{\infty}\frac{g^-(x)}{x^{s+1}}{\mathrm{d} x}. \end{align*}
With the above notations, we have
\begin{align*} &g(x)=g^+(x)-g^-(x) \\ \text{ and }\quad &G(s)=G^+(s)-G^-(s).
\end{align*}
Note that
\begin{align*} G(s)&=A(s) - \int_1^{\infty}(l_i+\lambda-\epsilon)x^{\sigma_0-s-1}{\mathrm{d} x} \\ &=A(s) + \frac{l_i+\lambda-\epsilon}{\sigma_0-s},\quad \text{for } \text{Re}(s)> \sigma_0. \end{align*}
So $G(s)$ is analytic wherever $A(s)$ is, except possibly for a pole at $\sigma_0$. This gives
\begin{align}\label{eq:G_lim_sigma0t0} \limsup\limits_{\sigma\searrow\sigma_0}(\sigma-\sigma_0)|G(\sigma+it_0)| =\limsup\limits_{\sigma\searrow\sigma_0}(\sigma-\sigma_0)|A(\sigma+it_0)|=\lambda. \end{align}
We shall use the above limit to prove our theorem. We proceed by contradiction. Assume that there exists an $M$ such that
\[\mu(\mathcal{A}_1\cap[M, \infty))=0.\]
This implies
\[G^+(s)=\int_{1}^{\infty}\frac{g^+(x)}{x^{s+1}}{\mathrm{d} x} = \int_{1}^{M}\frac{g^+(x)}{x^{s+1}}{\mathrm{d} x} \]
converges for every $s$, and so defines an entire function. By Assumptions~\ref{as:for_landau}, $A(s)$ and $G(s)$ can be analytically continued on the line $l(\sigma_0, \infty)$. As $G(s)$ and $G^+(s)$ are analytic on $l(\sigma_0, \infty)$, $G^-(s)$ is also analytic on $l(\sigma_0, \infty)$. The integral for $G^-(s)$ is absolutely convergent for $\text{Re}(s)>\sigma_3+1$, and $g^-(x)$ is a nonnegative piecewise continuous function bounded on every compact set. Hence we can apply Theorem~\ref{thm:landau_representation_integral} to $G^-(s)$, and conclude that
\[G^-(s)=\int_{1}^{\infty}\frac{g^-(x)}{x^{s+1}}{\mathrm{d} x} \]
is absolutely convergent for $\text{Re}(s)>\sigma_0$. From the above discussion, we summarize that the Mellin transformations of $g, g^+$ and $g^-$ converge absolutely for $\text{Re}(s)>\sigma_0$. As a consequence, we see that $G(\sigma), G^+(\sigma)$ and $G^-(\sigma)$ are finite real numbers for $\sigma>\sigma_0$.
For $\sigma>\sigma_0$, we compare $G^+(\sigma)$ and $G^-(\sigma)$ in the following two cases.
\begin{itemize} \item [Case 1: ] $G^+(\sigma)<G^-(\sigma)$.
In this case, $|G(\sigma)|=-G(\sigma)$, and since $|G(\sigma+it_0)|\leq G^+(\sigma)+G^-(\sigma)=|G(\sigma)|+2G^+(\sigma)$ with $G^+(\sigma)$ bounded, we get
\begin{align*} (\sigma-\sigma_0)|G(\sigma+ it_0)|&\leq -(\sigma-\sigma_0)G(\sigma) + O(\sigma-\sigma_0)\\ &=-(\sigma-\sigma_0)A(\sigma) + l_i+\lambda-\epsilon + O(\sigma-\sigma_0). \end{align*}
So we have
\begin{equation*} \limsup\limits_{\sigma\searrow\sigma_0}(\sigma-\sigma_0)|G(\sigma+it_0)| \leq l_i+\lambda-\epsilon - \liminf\limits_{\sigma\searrow\sigma_0}(\sigma-\sigma_0)A(\sigma) \leq \lambda-\epsilon. \end{equation*}
This contradicts (\ref{eq:G_lim_sigma0t0}).
\item[Case 2: ] $G^+(\sigma)\geq G^-(\sigma)$.
We have
\begin{align*} (\sigma-\sigma_0)|G(\sigma+it_0)|&\leq 2(\sigma-\sigma_0)G^+(\sigma)\\ &=O(\sigma-\sigma_0) \quad (G^+(\sigma)\text{ being a bounded integral}). \end{align*}
Thus
\[ \limsup\limits_{\sigma\searrow\sigma_0}(\sigma-\sigma_0)|G(\sigma+it_0)|=0.\]
This contradicts (\ref{eq:G_lim_sigma0t0}) again.
\end{itemize}
Thus $\mu(\mathcal{A}_1\cap[M, \infty))>0$ for any $M>1$, which completes the proof.
\end{proof}
\subsection{Measure Theoretic $\Omega_\pm$ Results}
The results of the last section show that $\mathcal{A}_1$ and $\mathcal{A}_2$ are unbounded, but they do not tell us how the sizes of these sets grow. An answer to this question was given by Kaczorowski and Szyd\l o in \cite[Theorem~4]{KaczMeasure}.
\begin{thm}[Kaczorowski and Szyd\l o \cite{KaczMeasure}]\label{thm:kaczorowski_correct} Let the conditions in Assumptions~\ref{as:for_landau} hold. Also assume that for a non-decreasing positive continuous function $h$ satisfying
\[h(x)\ll x^{\epsilon},\]
we have
\begin{equation}\label{eq:normDelta} \int_{T}^{2T}\Delta^2(x){\mathrm{d} x}\ll T^{2\sigma_0 + 1}h(T). \end{equation}
Then as $T\rightarrow \infty$,
\[\mu\left( \mathcal{A}_j\cap[1, T]\right)=\Omega\Big(\frac{T}{h(T)}\Big)\quad \text{ for } j=1, 2.\]
\end{thm}
The above theorem of Kaczorowski and Szyd\l o has been generalized by Bhowmik, Ramar\'e and Schlage-Puchta by localizing the fluctuations of $\Delta(x)$ to $[T, 2T]$. The proof of this theorem follows from \cite[Theorem~2]{gautami}.
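Before stating the localized version, we illustrate the phenomenon numerically with a toy model (this sketch is illustration only and is not part of any proof). Take $\Delta(x)=x^{\sigma_0}\cos(t_0\log x)$, whose Mellin transform $\frac{1}{2}\left(\frac{1}{s-\sigma_0-it_0}+\frac{1}{s-\sigma_0+it_0}\right)$ has simple poles at $\sigma_0\pm it_0$, the situation of Assumptions~\ref{as:for_landau}. The following Python sketch (all parameter values are illustrative choices) estimates the measure of the analogues of $\mathcal{A}_1$ and $\mathcal{A}_2$ inside one dyadic block $[T, 2T]$ and checks that both are a positive proportion of $T$:

```python
import math

# Toy error term Delta(x) = x^{sigma0} cos(t0 log x); its Mellin transform
# has a pair of complex poles at sigma0 +/- i*t0.  We estimate the Lebesgue
# measure of
#   A1 = {x in [T,2T] : Delta(x) >  c x^{sigma0}}  and
#   A2 = {x in [T,2T] : Delta(x) < -c x^{sigma0}}
# on a uniform grid, and check that both are a positive proportion of T.
sigma0, t0, c = 0.5, 14.134725, 0.5   # illustrative values; t0 ~ Im of zeta's first zero
T, npts = 1000.0, 200_000
dx = T / npts                          # grid spacing on [T, 2T]

mu1 = mu2 = 0.0
for k in range(npts):
    x = T + (k + 0.5) * dx
    phase = math.cos(t0 * math.log(x))  # equals Delta(x) / x^{sigma0}
    if phase > c:
        mu1 += dx
    elif phase < -c:
        mu2 += dx

# Both sets occupy a definite proportion of the block [T, 2T].
assert mu1 > 0.05 * T and mu2 > 0.05 * T
```

Since $\cos(t_0\log x)$ completes about $t_0\log 2/(2\pi)$ oscillations on every dyadic block, both sets have measure comparable to $T$, matching the qualitative content of Theorem~\ref{thm:landu_omegapm}.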
\begin{thm}[Bhowmik, Ramar\'e and Schlage-Puchta \cite{gautami}]\label{thm:kaczorowski} Let the assumptions in Theorem~\ref{thm:kaczorowski_correct} hold. Then as $T\rightarrow \infty$,
\[\mu\left( \mathcal{A}_j\cap[T, 2T]\right)=\Omega\Big(\frac{T}{h(T)}\Big)\quad \text{ for } j=1, 2.\]
\end{thm}
In the above two theorems, condition (\ref{eq:normDelta}) is very strong. For example, if $\Delta(x)$ is the error term in approximating $\sum_{n\leq x}|\tau(n, \theta)|^2$, we cannot apply Theorem~\ref{thm:kaczorowski}. In our next theorem, we generalize Theorem~\ref{thm:kaczorowski} by replacing condition (\ref{eq:normDelta}) with weaker bounds that allow a flexible choice of $h$.
\begin{thm}\label{thm:omega_pm_main} Let the conditions in Assumptions~\ref{as:for_landau} hold. Assume that there is an analytic continuation of $A(s)$ in a region containing the real line $l(\sigma_0, \infty)$. Let $h_1$ and $h_2$ be two positive functions such that
\begin{equation}\label{eq:second_moment_error} \int_{[T, 2T]\cap \mathcal{A}_j} \frac{\Delta^2(x)}{x^{2\sigma_0+1}}\mathrm{d} x\ll h_j(T),\quad\text{ for } j=1, 2. \end{equation}
Then as $T \rightarrow \infty$,
\begin{equation}\label{eq:omega_pm_measure} \mu(\mathcal{A}_j\cap[T, 2T])=\Omega\Big(\frac{T}{h_j(T)}\Big),\quad\text{ for } j=1, 2. \end{equation}
\end{thm}
\begin{proof} We prove the theorem for the measure of $\mathcal{A}_1$; the proof is similar for $\mathcal{A}_2$. We define $g, g^+, g^-, G, G^+$ and $G^-$ as in the proof of Theorem~\ref{thm:landu_omegapm} of Section~\ref{sec:preliminaries}. Assume that
\begin{equation}\label{eq:measure_contradic_A1} \mu\left(\mathcal{A}_1\cap[T, 2T]\right)=o\left(\frac{T}{h_1(T)}\right). \end{equation}
This implies that for any $\varepsilon>0$, there exists an integer $k(\varepsilon)>0$ such that
\begin{equation}\label{eq:bound_from_littleo_assumption} \frac{h_1(2^k)\mu(\mathcal{A}_1\cap[2^k, 2^{k+1}])}{2^k}<\varepsilon, \end{equation}
for all $k>k(\varepsilon)$.
Using the above assumption, we may obtain an upper bound for $G^+(\sigma)$ as follows:
\begin{align*} G^+(\sigma)&=\int_{\mathcal{A}_1}\frac{g^+(x)\mathrm{d} x}{x^{\sigma+1}} \leq \sum_{k\geq 0}\int_{\mathcal{A}_1\cap[2^k, 2^{k+1}]}\frac{\Delta(x)\mathrm{d} x}{x^{\sigma+1}}\\ &( \text{as } 0\leq g^+(x)\leq \Delta(x) \text{ on } \mathcal{A}_1, \text{ by Assumptions~\ref{as:for_landau}~(iii)})\\ &\leq \sum_{k\geq 0}\left(\int_{\mathcal{A}_1\cap[2^k, 2^{k+1}]}\frac{\Delta^2(x)\mathrm{d} x}{x^{2\sigma_0+1}}\right)^{1/2}\cdot \left(\frac{\mu(\mathcal{A}_1\cap[2^k, 2^{k+1}])}{2^{k(2(\sigma-\sigma_0)+1)}}\right)^{1/2} \quad (\text{by the Cauchy--Schwarz inequality})\\ &\leq c_2 \sum_{k\geq 0}\left(\frac{h_1(2^k)\mu(\mathcal{A}_1\cap[2^k, 2^{k+1}])}{2^{k(2(\sigma-\sigma_0)+1)}}\right)^{1/2}\\ &\leq c_3(\varepsilon) + \varepsilon^{1/2}\sum_{k\geq k(\varepsilon)}\frac{1}{2^{k(\sigma-\sigma_0)}} \quad (\text{Using } (\ref{eq:bound_from_littleo_assumption})).\\ \end{align*}
In the above inequalities, $c_2$ and $c_3(\varepsilon)$ are some constants, and $c_3(\varepsilon)$ depends on $\varepsilon$. We summarize the above calculation as
\begin{equation}\label{eq:bound_G+} G^+(\sigma)\leq c_3(\varepsilon) + \varepsilon^{1/2}\frac{2^{-(\sigma-\sigma_0)}}{(\sigma-\sigma_0)\log2} . \end{equation}
Therefore
\[G^+(s)=\int_1^{\infty}\frac{g^+(x)\mathrm{d} x}{x^{s+1}}\]
is absolutely convergent for $\text{Re}(s)>\sigma_0$, and so it is analytic in this region. But
\[G^-(s)=G(s)-G^+(s),\]
and $G$ is analytic on $l(\sigma_0, \infty)$. So $G^-$ is also analytic on $l(\sigma_0, \infty)$. Using Theorem~\ref{thm:landau_representation_integral}, applied to the nonnegative function $g^-$, we get that
\[G^-(s)=\int_1^{\infty}\frac{g^-(x)\mathrm{d} x}{x^{s+1}}\]
is absolutely convergent for $\text{Re}(s)>\sigma_0$. Absolute convergence of the integrals of $G^+$ and $G^-$ implies that the Mellin transformation of $g(x)$, given by
\[ G(s)=\int_1^{\infty}\frac{g(x)\mathrm{d} x}{x^{s+1}},\]
is also absolutely convergent for $\text{Re}(s)>\sigma_0$.
As a consequence, we get $G(\sigma), G^+(\sigma)$, and $G^-(\sigma)$ are real numbers for $\sigma>\sigma_0$. As indicated in Case 1 of the proof of Theorem~\ref{thm:landu_omegapm}, we cannot have
\[G^+(\sigma)<G^-(\sigma),\]
when $\sigma > \sigma_0$. So we always have
\[G^+(\sigma)\geq G^-(\sigma).\]
From this inequality, we shall deduce a contradiction to (\ref{eq:G_lim_sigma0t0}). Since $|G(\sigma+it)|\leq G^+(\sigma)+G^-(\sigma)\leq 2G^+(\sigma)$, the bound (\ref{eq:bound_G+}) gives
\[\limsup\limits_{\sigma\searrow\sigma_0}(\sigma-\sigma_0)|G(\sigma+it_0)| \leq \limsup\limits_{\sigma\searrow\sigma_0}2(\sigma-\sigma_0)G^+(\sigma) \ll\varepsilon^{1/2},\]
with an absolute implied constant, for any $\varepsilon>0$. Letting $\varepsilon\rightarrow 0$, this contradicts (\ref{eq:G_lim_sigma0t0}); thus, our assumption (\ref{eq:measure_contradic_A1}) is wrong. Hence
\begin{equation} \mu\left( \mathcal{A}_1\cap[T, 2T]\right)=\Omega\left(\frac{T}{h_1(T)}\right). \end{equation}
\end{proof}
\begin{coro}\label{cor:measure_omega_pm_from_upper_bound} Let the conditions of Theorem~\ref{thm:omega_pm_main} hold. If we have a monotonic positive function $h$ such that
\begin{equation} \Delta(x)=O(h(x)), \end{equation}
then
\begin{equation} \mu(\mathcal{A}_j\cap[T, 2T])=\Omega\left(\frac{T^{1+2\sigma_0}}{h^2(T)+h^2(2T)}\right)\quad \text{ for } j=1, 2. \end{equation}
\end{coro}
\begin{coro}\label{coro:omega_pm_secondmoment} Similar to Corollary~\ref{cor:measure_omega_pm_from_upper_bound}, we assume the conditions of Theorem~\ref{thm:omega_pm_main}. Then we have
\begin{equation}\label{eq:omega_pm_secondmoment} \int_{[T, 2T]\cap \mathcal{A}_j}\Delta^2(x)\mathrm{d} x = \Omega(T^{2\sigma_0 + 1}) \quad \text{ for } j=1, 2. \end{equation}
\end{coro}
\begin{proof} The proof of this Corollary follows from an observation in the proof of Theorem~\ref{thm:omega_pm_main}.
We shall prove this Corollary for $\mathcal{A}_1$; the proof for $\mathcal{A}_2$ is similar. Note that, as an important part of the proof of Theorem~\ref{thm:omega_pm_main}, we showed that the integral for $G^+(s)$ is absolutely convergent for $\text{Re}(s)>\sigma_0$ by assuming that the conclusion (\ref{eq:omega_pm_measure}) fails; we then obtained a contradiction, which proved (\ref{eq:omega_pm_measure}). Now we proceed in a similar manner by assuming (\ref{eq:omega_pm_secondmoment}) is false. So we have
\begin{equation}\label{eq:second_moment_contra} \int_{[T, 2T]\cap \mathcal{A}_1}\Delta^2(x)\mathrm{d} x = o(T^{2\sigma_0 + 1}). \end{equation}
So, for an arbitrarily small constant $\varepsilon>0$ and $\sigma=\text{Re}(s)>\sigma_0$, we have
\begin{align*} & |G^+(s)|\leq \int_{\mathcal{A}_1}\frac{g^+(x)\mathrm{d} x}{x^{\sigma+1}} \leq \sum_{k\geq 0}\int_{\mathcal{A}_1\cap[2^k, 2^{k+1}]}\frac{\Delta(x)\mathrm{d} x}{x^{\sigma+1}}\\ &\leq \sum_{k\geq 0}\frac{1}{2^{k(\sigma-\sigma_0)}}\left(\int_{\mathcal{A}_1\cap[2^k, 2^{k+1}]}\frac{\Delta^2(x)\mathrm{d} x}{x^{2\sigma_0+1}}\right)^{1/2}\\ &\leq c_4(\varepsilon) + \varepsilon\sum_{k\geq k(\varepsilon)}\frac{1}{2^{k(\sigma-\sigma_0)}}, \\ \end{align*}
where $c_4(\varepsilon)$ is a positive constant depending on $\varepsilon$. From this, we obtain that the integral defining $G^+(s)$ is absolutely convergent for $\text{Re}(s)>\sigma_0$. Now, arguments similar to those in the proof of Theorem~\ref{thm:omega_pm_main} yield a contradiction to (\ref{eq:G_lim_sigma0t0}), so the assumption (\ref{eq:second_moment_contra}) is false.
\end{proof}
We may also verify that Corollary~\ref{coro:omega_pm_secondmoment} implies Corollary~\ref{cor:measure_omega_pm_from_upper_bound}.
\begin{rmk} Observe that in Theorem~\ref{thm:kaczorowski}, the analytic continuation of $A(s)$, for $\text{Re}(s)>\sigma_0$, is implied by (\ref{eq:normDelta}), while in Theorem~\ref{thm:omega_pm_main} we need to assume an analytic continuation. For the analytic continuation of $A(s)$, we shall use Theorem~\ref{thm:analytic_continuation_mellin_transform} of the previous section.
Demonstrations of these techniques are given in the following applications.
\end{rmk}
\subsection{Applications}
Here we give three applications of Theorem~\ref{thm:omega_pm_main}. In the first application, we consider the error term that appears in an asymptotic formula for $\sum_{n\leq x}^*|\tau(n, \theta)|^2$. Theorem~\ref{thm:kaczorowski} is not applicable to this example. In the second application, we consider the error term that appears in an asymptotic formula for the average number of square-free divisors $d^{(2)}(n)$. In this example, Theorem~\ref{thm:kaczorowski} is applicable under the Riemann Hypothesis, whereas Theorem~\ref{thm:omega_pm_main} gives a weaker measure theoretic $\Omega_\pm$ result unconditionally. In the third example, we obtain some results on the error term of the Prime Number Theorem. Though Theorem~\ref{thm:kaczorowski} is applicable in this case under the Riemann Hypothesis, we prove a slightly weaker result unconditionally by applying Corollary~\ref{cor:measure_omega_pm_from_upper_bound}.
\subsubsection{The Twisted Divisor Function}
Let us write
\[a_n=|\tau(n, \theta)|^2\quad \text{ for }\quad \theta\neq 0,\]
where $\tau(n, \theta)$ is defined in (\ref{eq:tau-n-theta_def}), and
\[D(s)=\sum_{n=1}^{\infty}\frac{a_n}{n^{s}}=\frac{\zeta^2 (s)\zeta(s+i\theta)\zeta(s-i\theta)}{\zeta(2s)}\]
is its Dirichlet series, which converges absolutely for $\text{Re}(s)>1$. We define $\Delta(x)$ as in (\ref{eq:formmula_tau_ntheta}). An upper bound for $\Delta(x)$ (as in (\ref{eq:upper_bound_delta})) can be computed using Perron's formula and fourth moment estimates of $\zeta(s)$ at $\text{Re}(s)=\frac{1}{2}$ (see \cite[Theorem~33]{DivisorsHallTenen}).
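The identity $D(s)=\zeta^2(s)\zeta(s+i\theta)\zeta(s-i\theta)/\zeta(2s)$ can be checked numerically at the level of Dirichlet coefficients. The following Python sketch (illustration only, not part of the argument; it assumes the standard definition $\tau(n,\theta)=\sum_{d\mid n}d^{i\theta}$ from (\ref{eq:tau-n-theta_def})) compares $|\tau(n,\theta)|^2$ with the Dirichlet convolution $\mathbf{1}*\mathbf{1}*n^{i\theta}*n^{-i\theta}*g$, where $g(m^2)=\mu(m)$ and $g$ vanishes off squares (the coefficients of $1/\zeta(2s)$):

```python
import cmath

N = 200        # compare coefficients for n <= N
theta = 1.0    # illustrative value of theta

def dirichlet_conv(f, g):
    # (f*g)(n) = sum_{d|n} f(d) g(n/d); arrays are 1-indexed, index 0 unused
    h = [0j] * (N + 1)
    for d in range(1, N + 1):
        for m in range(1, N // d + 1):
            h[d * m] += f[d] * g[m]
    return h

def mobius(n):
    # Moebius function by trial division
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    if n > 1:
        res = -res
    return res

one   = [0j] + [1 + 0j] * N
pow_p = [0j] + [cmath.exp(1j * theta * cmath.log(n)) for n in range(1, N + 1)]   # n^{i theta}
pow_m = [0j] + [cmath.exp(-1j * theta * cmath.log(n)) for n in range(1, N + 1)]  # n^{-i theta}

inv_zeta2 = [0j] * (N + 1)   # coefficients of 1/zeta(2s): mu(m) at n = m^2
m = 1
while m * m <= N:
    inv_zeta2[m * m] = mobius(m)
    m += 1

rhs = dirichlet_conv(dirichlet_conv(one, one), dirichlet_conv(pow_p, pow_m))
rhs = dirichlet_conv(rhs, inv_zeta2)

def tau_theta(n):
    # tau(n, theta) = sum over divisors d of n of d^{i theta}
    return sum(cmath.exp(1j * theta * cmath.log(d)) for d in range(1, n + 1) if n % d == 0)

lhs = [0j] + [abs(tau_theta(n)) ** 2 for n in range(1, N + 1)]
err = max(abs(lhs[n] - rhs[n]) for n in range(1, N + 1))
assert err < 1e-9
```

This is an instance of Ramanujan's identity for $\sum_n \sigma_a(n)\sigma_b(n)n^{-s}$ with $a=i\theta$, $b=-i\theta$.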
Define a contour $\mathscr{C}$ as given in Figure~\ref{fg:contour_c0_taun}:
\begin{align*} \mathscr{C}=&\left(\frac{5}{4}-i\infty, \frac{5}{4}-i(\theta+1)\right]\cup \left[\frac{5}{4}-i(\theta+1), \frac{3}{4}-i(\theta+1)\right]\\ &\cup \left[\frac{3}{4}-i(\theta+1), \frac{3}{4}+i(\theta+1)\right]\cup \left[\frac{3}{4}+i(\theta+1), \frac{5}{4}+i(\theta+1)\right]\\ &\cup \left[\frac{5}{4}+i(\theta+1), \frac{5}{4}+i\infty \right). \end{align*}
From Perron's formula, we can show that
\[\Delta(x)=\int_{\mathscr{C}}\frac{D(\eta)x^\eta}{\eta}\mathrm{d} \eta.\]
Using Theorem~\ref{thm:analytic_continuation_mellin_transform}, we have
\[A(s)=\int_{1}^{\infty}\frac{\Delta(x)}{x^{s+1}}\mathrm{d} x=\int_{\mathscr{C}}\frac{D(\eta)}{\eta(s-\eta)}\mathrm{d} \eta,\]
when $s$ lies to the right of the contour $\mathscr{C}$.
Denote by $2s_0$ the nontrivial zero of $\zeta(s)$ with least positive imaginary part; approximately,
\begin{equation}\label{s0} 2s_0=\frac{1}{2}+i14.134\ldots . \end{equation}
Define the contour $\mathscr{C}(s_0)$, as in Figure~\ref{fg:contour_c0s0}, such that $s_0$ and every real number $s\geq1/4$ lie to the right of this contour. A meromorphic continuation of $A(s)$ to all $s$ lying to the right of $\mathscr{C}(s_0)$ is given by
\begin{equation}\label{eq:analytic_cont1/4} A(s)=\int_{\mathscr{C}(s_0)}\frac{D(\eta)}{\eta(s-\eta)}\mathrm{d} \eta + \frac{\underset{\eta=s_0}{\text{Res}}D(\eta)}{s_0(s-s_0)}.
\end{equation} \begin{center} \begin{figure} \begin{tikzpicture}[yscale=0.8] \draw [<->][dotted] (-1, -4.4)--(-1, 4.4); \node at (-1.2, 0.3) {$0$}; \draw [<->][dotted] (4, 0)--(-3, 0); \draw [dotted] (3, -4.4)--(3, 4.4); \fill (3, 0) circle[radius=2pt]; \node at (2.8, 0.3) {$1$}; \fill (3, 1.8) circle[radius=2pt]; \node at (2.48, 1.8) {\scriptsize{$1+i\theta$}}; \fill (3, -1.8) circle[radius=2pt]; \node at (2.48, -1.8) {\scriptsize{$1-i\theta$}}; \node at (1.8, 0.3) {$\frac{3}{4}$}; \node at (3.35, 0.3) {$\frac{5}{4}$}; \draw [thick] (3.5,-0.1 )--(3.5, 0.1); \node at (-1.8, -2.2) {$-(1+\theta)$}; \draw [thick] (-1.1,-2.2)--(-0.9, -2.2); \node at (-1.55, 2.2) {$1+\theta$}; \draw [thick] (-1.1,2.2)--(-0.9, 2.2); \draw [thick] [postaction={decorate, decoration={ markings, mark= between positions 0.09 and 0.98 step 0.21 with {\arrow[line width=1pt]{>},}}}] (3.5, -4.4)--(3.5, -2.2)--(2, -2.2)--(2, 2.2)--(3.5, 2.2)--(3.5, 4.4); \node [below left] at (1.55, 2) {$\mathscr{C}$}; \end{tikzpicture} \caption{Contour $\mathscr{C}$, for $D(s)=\sum_{n=1}^{\infty}\frac{|\tau(n, \theta)|^2}{n^s}$.}\label{fg:contour_c0_taun} \end{figure} \end{center} From (\ref{eq:analytic_cont1/4}) we calculate the following two limits: \begin{equation}\label{eqn:lambda_theta} \lambda(\theta):= \lim_{\sigma\searrow0}\sigma|A(\sigma+s_0)| = |s_0|^{-1} \left|\underset{\eta=s_0}{\text{Res}}D(\eta)\right|>0, \end{equation} and \begin{equation*} \lim_{\sigma\searrow0}\sigma A(\sigma+ 1/4)=0. \end{equation*} For a fixed $\epsilon_0>0$, let \begin{align*} \mathcal{A}_1&=\left\{x: \Delta(x)>(\lambda(\theta)-\epsilon_0)x^{1/4}\right\}\\ \text{and} \quad \mathcal{A}_2&=\left\{ x : \Delta(x)<(-\lambda(\theta)+\epsilon_0)x^{1/4}\right\}. 
\end{align*}
\begin{center} \begin{figure} \begin{tikzpicture}[yscale=0.8] \draw [<->][dotted] (-1, -4.4)--(-1, 4.4); \node at (-1.2, 0.3) {$0$}; \draw [<->][dotted] (4, 0)--(-3, 0); \draw [dotted] (3, -4.4)--(3, 4.4); \fill (3, 0) circle[radius=2pt]; \node at (2.8, 0.3) {$1$}; \fill (3, 1.8) circle[radius=2pt]; \node at (2.48, 1.8) {\scriptsize{$1+i\theta$}}; \fill (3, -1.8) circle[radius=2pt]; \node at (2.48, -1.8) {\scriptsize{$1-i\theta$}}; \draw [dotted] (0, -4.4)--(0, 4.4); \node at (-0.28, 0.3) {$\frac{1}{4}$}; \fill (0, 3) circle[radius=2pt]; \node at (-0.28, 3) {$s_0$}; \draw [thick] [postaction={decorate, decoration={ markings, mark= between positions 0.03 and 0.99 step 0.147 with {\arrow[line width=1pt]{>},}}}] (3.5, -4.4)--(3.5, -2.5)--(2, -2.5)--(2, -0.5)--(-0.6, -0.5)--(-0.6, 3.8)--(0.4, 3.8)--(0.4, 0.65) --(2, 0.65)--(2, 2.5)--(3.5, 2.5)--(3.5, 4.4); \node [below left] at (-0.55, 2) {$\mathscr{C}(s_0)$}; \end{tikzpicture} \caption{Contour $\mathscr{C}(s_0)$}\label{fg:contour_c0s0} \end{figure} \end{center}
The upper bound on $\Delta$ from (\ref{eq:upper_bound_delta}) and Corollary~\ref{cor:measure_omega_pm_from_upper_bound} give
\begin{equation}\label{result:tau_n_theta_omegapm} \mu\left(\mathcal{A}_j\cap[T, 2T]\right)=\Omega\left(T^{1/2}(\log T)^{-12}\right) \text{ for } j=1, 2. \end{equation}
Under the Riemann Hypothesis, Theorem~\ref{thm:omega_pm_main} and Proposition~\ref{prop:upper_bound_second_moment_twisted_divisor} give
\begin{equation}\label{result:tau_n_theta_omegapm_underRH} \mu\left(\mathcal{A}_j\cap[T, 2T]\right)=\Omega\left(T^{3/4-\epsilon}\right) \text{ for } j=1, 2. \end{equation}
From Corollary~\ref{coro:omega_pm_secondmoment}, we get
\begin{equation}\label{result:tau_n_theta_secondmoment} \int_{\mathcal{A}_j\cap[T, 2T]}\Delta^2(x)\mathrm{d} x = \Omega\left(T^{3/2}\right) \text{ for } j=1, 2.
\end{equation}
\subsubsection{Square Free Divisors}\label{subsec:sqfree_divisors}
Let $a_n=2^{\omega(n)}$, where $\omega(n)$ denotes the number of distinct prime factors of $n$; equivalently, $a_n$ is the number of square-free divisors of $n$. We write
\begin{align*} \sum_{n\leq x}^* 2^{\omega(n)}=\mathcal{M}(x) + \Delta(x), \end{align*}
where
\[\mathcal{M}(x)= \frac{x\log x}{\zeta(2)}+\left(-\frac{2\zeta'(2)}{\zeta^2(2)} + \frac{2\gamma - 1}{\zeta(2)}\right)x,\]
and by a theorem of H{\"o}lder \cite{holder}
\begin{equation}\label{eq:sqfree_divisors_ub_unconditional} \Delta(x)\ll x^{1/2}. \end{equation}
Under the Riemann Hypothesis, Baker \cite{baker} has improved the above upper bound to
\[\Delta(x)\ll x^{4/11}.\]
We may check that the corresponding Dirichlet series satisfies
\[D(s)=\sum_{n=1}^{\infty}\frac{2^{\omega(n)}}{n^s}=\frac{\zeta^2(s)}{\zeta(2s)},\]
which provides a meromorphic continuation of $D(s)$. Let $A(s)$ be the Mellin transform of $\Delta(x)$ at $s$, and let $s_0$ be as in (\ref{s0}). Arguments similar to those in the previous application show that $A(s)$ has no real pole for $\text{Re}(s)\geq 1/4$, and yield the following limits:
\begin{equation*} \lambda_1:= \lim_{\sigma\searrow0}\sigma|A(\sigma+s_0)| = |s_0|^{-1} \left|\underset{\eta=s_0}{\text{Res}}D(\eta)\right|>0 \end{equation*}
and
\begin{equation*} \lim_{\sigma\searrow0}\sigma A(\sigma+ 1/4)=0. \end{equation*}
For a fixed $\epsilon_0>0$, let
\begin{align*} \mathcal{A}_1&=\left\{x: \Delta(x)>(\lambda_1-\epsilon_0)x^{1/4}\right\}\\ \text{and} \quad \mathcal{A}_2&=\left\{ x : \Delta(x)<(-\lambda_1+\epsilon_0)x^{1/4}\right\}. \end{align*}
Using Corollary~\ref{cor:measure_omega_pm_from_upper_bound} and (\ref{eq:sqfree_divisors_ub_unconditional}), we get
\begin{equation}\label{eq:sqfree_divisors_omegaset_uc} \mu\left(\mathcal{A}_j\cap[T, 2T]\right)=\Omega\left(T^{1/2}\right) \text{ for } j=1, 2.
\end{equation}
However, assuming the Riemann Hypothesis and arguing as in Proposition~\ref{prop:upper_bound_second_moment_twisted_divisor}, we may show that
\[\int_{T}^{2T}\Delta^2(x)\,\mathrm{d} x\ll T^{3/2+\epsilon}\ \text{ for any } \epsilon>0.\]
This upper bound along with Theorem~\ref{thm:omega_pm_main} gives
\begin{equation}\label{eq:sqfree_divisors_omegaset} \mu\left(\mathcal{A}_j\cap[T, 2T]\right)=\Omega\left(T^{1-\epsilon}\right), \text{ for } j=1, 2 \end{equation}
and for any $\epsilon>0$.
\subsubsection{The Prime Number Theorem Error}\label{subsec:pnt_error}
Now we consider the error term in the Prime Number Theorem:
\[\Delta(x)=\sum_{n\leq x}^*\Lambda(n)-x.\]
Let
\[\lambda_2=|2s_0|^{-1}, \]
where $2s_0$ is the first nontrivial zero of $\zeta(s)$ and is the same as in the previous applications. As an application of Corollary~\ref{cor:measure_omega_pm_from_upper_bound}, we shall prove the following proposition:
\begin{prop}\label{prop:pnt} We denote
\begin{align*} \mathcal{A}_1&=\left\{x: \Delta(x)>(\lambda_2-\epsilon_0)x^{1/2}\right\}\\ \text{and} \quad \mathcal{A}_2&=\left\{ x : \Delta(x)<(-\lambda_2+\epsilon_0)x^{1/2}\right\}, \end{align*}
for a fixed $\epsilon_0$ such that $0<\epsilon_0<\lambda_2$. Then
\[\mu\left(\mathcal{A}_j\cap[T, 2T]\right)=\Omega\left(T^{1-\epsilon}\right), \text{ for } j=1, 2 \]
and for any $\epsilon>0$.
\end{prop}
\begin{proof} Here we apply Corollary~\ref{cor:measure_omega_pm_from_upper_bound} in the same way as in the previous applications, so we shall skip the details. If the Riemann Hypothesis is true, then Theorem~\ref{thm:kaczorowski} and (\ref{PNT_Under_RH}) give
\[\mu\left(\mathcal{A}_j\cap[T, 2T]\right)=\Omega\left(\frac{T}{\log^4 T}\right), \text{ for } j=1, 2; \]
this implies the proposition.
But if the Riemann Hypothesis is false, then there exists a constant $\mathfrak{a}$, with $1/2<\mathfrak{a}\leq1$, such that \[\mathfrak{a}=\sup\{\sigma:\zeta(\sigma+it)=0\}.\] Using the Perron summation formula, we may show that \[\Delta(x)\ll x^{\mathfrak{a}+\epsilon},\] for any $\epsilon>0$. Also, for any arbitrarily small $\delta>0$, there exists $\sigma'$ with $\mathfrak{a}-\delta<\sigma'<\mathfrak{a}$ such that $\zeta(\sigma'+it')=0$ for some real number $t'$. If $\lambda'':=|\sigma'+it'|^{-1}$, then by Corollary~\ref{cor:measure_omega_pm_from_upper_bound} we get \begin{align*} \mu\left(\left\{x\in[T, 2T]:\Delta(x)>(\lambda''/2)x^{\sigma'}\right\}\right)&=\Omega\left(T^{1-2\delta-2\epsilon}\right)\\ \text{ and } \quad \mu\left(\left\{x\in[T, 2T]:\Delta(x)<-(\lambda''/2)x^{\sigma'}\right\}\right)&=\Omega\left(T^{1-2\delta-2\epsilon}\right). \end{align*} As $\delta$ and $\epsilon$ are arbitrarily small and $\sigma'>1/2$, the above $\Omega$ bounds imply the proposition. \end{proof} \begin{rmk} Results similar to Proposition~\ref{prop:pnt} can be obtained for the error terms in the asymptotic formulas for partial sums of the M\"obius function and for partial sums of the indicator function of square-free numbers. \end{rmk} \begin{rmk} In Sections~\ref{subsec:sqfree_divisors}~and~\ref{subsec:pnt_error}, we saw that $\mu(\mathcal{A}_j)$ are large. Now suppose that $\mu(\mathcal{A}_1\cup\mathcal{A}_2)$ is large; what can we then say about the individual sizes of $\mathcal{A}_j$? We may guess that $\mu(\mathcal{A}_1)$ and $\mu(\mathcal{A}_2)$ are both large and almost equal, but this may be very difficult to prove. In Section~\ref{sec:measure_analysis}, we shall show that if $\mu(\mathcal{A}_1\cup\mathcal{A}_2)$ is large, then both $\mathcal{A}_1$ and $\mathcal{A}_2$ are nonempty. In the next section, we obtain an $\Omega$ bound for $\mu(\mathcal{A}_1\cup\mathcal{A}_2)$, with $\sigma_0=3/8$ and $\Delta(x)$ being the error term in (\ref{eq:formmula_tau_ntheta}).
\end{rmk} \section{An Omega Theorem For The Twisted Divisor Function}\label{sec:stronger_omega} In \cite{BaluRamachandra1} and \cite{BaluRamachandra2}, Balasubramanian and Ramachandra introduced a method to obtain a lower bound for \[ \int_T^{T^{\mathfrak b}} \frac{ |\Delta(x)|^2}{x^{2\alpha+1}} \mathrm{d} x\] in terms of the second moment of $D(s)$, for some $\mathfrak b>0$ and $\alpha>0$. A nondecreasing lower bound then yields \[ \Delta(x)=\Omega(x^{\alpha-\epsilon}), \ \text{for any } \epsilon>0 .\] In these papers, they considered the error terms in asymptotic formulas for partial sums of certain arithmetic functions, such as the number of square-free divisors and the counting function of non-isomorphic abelian groups. This method requires the Riemann Hypothesis to be assumed in certain cases. Balasubramanian, Ramachandra and Subbarao \cite{BaluRamachandraSubbarao} modified this technique to treat the error term in the asymptotic formula for the counting function of $k$-full numbers without assuming the Riemann Hypothesis. This method has been used by several authors, including \cite{Nowak} and \cite{srini}. In this section, we consider the Dirichlet series $$D(s) =\sum_{n \ge 1} \frac{|\tau(n,\theta)|^2}{n^s}= \frac{\zeta^2(s)\zeta(s+i\theta)\zeta(s-i\theta)}{\zeta(2s)}, $$ for $\text{Re}(s)>1$. In accordance with the notation of the last section, we write $$\sum_{n\le x} |\tau(n,\theta)|^2 =\mathcal{M}(x) + \Delta(x),$$ where the main term $\mathcal{M}(x)=\omega_1(\theta)x\log x + \omega_2(\theta)x\cos(\theta\log x) +\omega_3(\theta)x$ comes from the poles of $D(s)$ at $s=1, 1+i\theta$ and $s=1-i\theta$. Adopting the method of Balasubramanian, Ramachandra and Subbarao, we derive the following theorem.
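Before stating the theorem, the summatory function under study can be illustrated concretely. The Python sketch below (purely illustrative, with our own naming; it uses trial division and is far too slow for serious computation) evaluates $\tau(n,\theta)=\sum_{d\mid n}d^{i\theta}$ and the partial sums $\sum_{n\le x}|\tau(n,\theta)|^2$; for $\theta=0$ the latter reduces to $\sum_{n\le x}d(n)^2$.

```python
import cmath
import math

def tau_theta(n, theta):
    # tau(n, theta) = sum over divisors d of n of d^{i*theta}
    #               = sum_{d | n} exp(i * theta * log d);
    # for theta = 0 this is just the divisor function d(n).
    return sum(cmath.exp(1j * theta * math.log(d))
               for d in range(1, n + 1) if n % d == 0)

def summatory_abs_sq(x, theta):
    # Partial sum sum_{n <= x} |tau(n, theta)|^2, whose main term is
    # M(x) = w1(theta) x log x + w2(theta) x cos(theta log x) + w3(theta) x.
    return sum(abs(tau_theta(n, theta)) ** 2 for n in range(1, x + 1))
```

By the triangle inequality $|\tau(n,\theta)|\le d(n)$, which is the bound used repeatedly below when estimating $|b(n)|$ by powers of the divisor function.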
\begin{thm}\label{omega_integral} For any $c>0$ and for a sufficiently large $T$ depending on $c$, we have \begin{equation}\label{lb-increasing} \int_T^{\infty} \frac{|\Delta(x)|^2}{x^{2\alpha+1}}e^{-2x/y} \mathrm{d} x \gg_c\exp\left( c(\log T)^{7/8} \right), \end{equation} where \[ \alpha(T)=\frac{3}{8} -\frac{c}{(\log T)^{1/8} }\] and $y=T^{\mathfrak b}$ for a constant $\mathfrak b\ge 8$. \end{thm} \noindent In particular, this implies \[ \Delta(x)=\Omega \left(x^{3/8}\exp(-c(\log x)^{7/8})\right), \] for some suitable $c>0$. In order to prove the theorem, we need several lemmas, which form the content of this section. We begin with a fixed $\delta_0 \in (0,1/16]$, for which we shall choose a numerical value at the end of this section. \begin{defi} For $T>1$, let $Z(T)$ be the set of all $\gamma$ such that \begin{enumerate} \item $T\le \gamma \le 2T$, \item either $\zeta(\beta_1+i\gamma)=0$ for some $\beta_1\ge \frac{1}{2}+\delta_0$ \\ or $\zeta(\beta_2+i 2\gamma)=0$ for some $\beta_2\ge \frac{1}{2} +\delta_0$. \end{enumerate} Let \[ I_{\gamma,k} = \{ T\le t \le 2T: |t-\gamma| \le k\log^2 T \} \text{ for } k=1, 2.\] We finally define \[J_k(T)=[T,2T] \setminus \cup_{\gamma\in Z(T)} I_{\gamma,k}. \] \end{defi} \begin{lem}\label{size-J(T)} With the above definition, we have for $k=1,2$ \[ \mu(J_k(T)) = T +O\left( T^{1-\delta_0/4} \log^3 T \right). \] \end{lem} \begin{proof} We shall use an estimate on the function $N(\sigma, T)$, which is defined as \[N(\sigma, T):=\left|\{\sigma'+it:\sigma'\ge\sigma,\ 0<t\leq T,\ \zeta(\sigma'+it)=0\}\right|.\] Selberg \cite[Page~237]{Titchmarsh} proved that $$N(\sigma, T) \ll T^{1-\frac{1}{4}(\sigma -\frac{1}{2})} \log T, \ \text{ for } \ \sigma>1/2.$$ Now the lemma follows from the above upper bound on $N(\sigma, T)$, and the observation that $$\mu\left(\cup_{\gamma\in Z(T)} I_{\gamma,k}\right) \ll N\left(\frac{1}{2}+ \delta_0, T\right)\log^2 T.$$ \end{proof} The next lemma closely follows Theorem 14.2 of \cite{Titchmarsh}, but does not depend on the Riemann Hypothesis.
\begin{lem}\label{estimate-on-J(T)} For $t\in J_1(T)$ and $\sigma= 1/2+\delta$ with $\delta_0 < \delta < 1/4-{\delta_0}/2$, we have $$|\zeta(\sigma+it)|^{\pm 1} \ll \exp\left(\log\log t \left(\frac{\log t}{\delta_0}\right)^{\frac{1-2\delta}{1-2\delta_0}}\right)$$ and \[ |\zeta(\sigma+2it)|^{\pm 1} \ll \exp\left(\log\log t \left(\frac{\log t}{\delta_0}\right)^{\frac{1-2\delta}{1-2\delta_0}}\right).\] \end{lem} \begin{proof} We prove the first statement; the second can be proved similarly. Let $1 <\sigma' \le \log t$. We consider two concentric circles centered at $\sigma'+it$, with radii $\sigma'-1/2-\delta_0/2$ and $\sigma' -1/2- \delta_0$. Since $t\in J_1(T)$ and the radius of the circle is $\ll \log t$, we conclude that \[ \zeta(z)\neq 0 \ \text{ for } \ |z-\sigma'-it | \le \sigma' - \frac{1}{2} - \frac{\delta_0}{2} \] and also $\zeta(z)$ has polynomial growth in this region. Thus on the larger circle, $\log |\zeta(z)| \le c_5\log t$, for some constant $c_5>0$. By the Borel--Carath\'eodory theorem, \[ |z-\sigma'-it | \le \sigma' - \frac{1}{2} - \delta_0 \ \text{ implies } \ |\log \zeta(z)| \le \frac{c_6\sigma'}{\delta_0} \log t, \] for some $c_6>0$. Let $1/2+\delta_0< \sigma < 1$, and $\xi>0$ be such that $1+\xi< \sigma'$. We consider three concentric circles centered at $\sigma'+it$ with radii $r_1=\sigma'-1-\xi$, $r_2=\sigma'-\sigma$ and $r_3=\sigma'-1/2-\delta_0$, and call them $\mathcal C_1, \mathcal C_2$ and $\mathcal C_3$ respectively. Let $$M_i = \sup_{z\in \mathcal C_i} |\log \zeta(z)|.$$ From the above bound on $|\log\zeta(z)|$, we get $$M_3 \le \frac{c_6\sigma'}{\delta_0} \log t.$$ Suitably enlarging $c_6$, we see that \[ M_1 \le \frac{c_6}{\xi}.\] Hence we can apply Hadamard's three-circle theorem to conclude that \[ M_2 \le M_1^{1-\nu} M_3^\nu, \ \text{ for } \ \nu=\frac{\log(r_2/r_1)}{\log(r_3/r_1)}. \] Thus \[ M_2 \le \left( \frac{c_6}{\xi} \right)^{1-\nu}\left(\frac{c_6\sigma' \log t}{\delta_0}\right)^\nu.
\] It is easy to see that \[ \nu=2-2\sigma + \frac{4\delta_0(1-\sigma)}{1+2\xi-2\delta_0} +O(\xi) + O\left( \frac{1}{\sigma'}\right). \] Now we put \[ \xi=\frac{1}{\sigma'}=\frac{1}{\log\log t}.\] Hence \[ M_2 \le \frac{c_6 \log^\nu t \log\log t}{\delta_0^\nu} =\frac{c_7 \log\log t}{\delta_0^\nu}(\log t)^{2-2\sigma+\frac{4\delta_0(1-\sigma)}{1+2\xi-2\delta_0} }, \] for some $c_7>0$. We observe that \[ 2-2\sigma+\frac{4\delta_0(1-\sigma)}{1+2\xi-2\delta_0} < 2-2\sigma+\frac{4\delta_0(1-\sigma)}{1-2\delta_0} =\frac{1-2\delta}{1-2\delta_0}. \] So we get \[ |\log \zeta(\sigma +it) | \le c_7 \log\log t \left(\frac{\log t}{\delta_0}\right)^{\frac{1-2\delta}{1-2\delta_0}},\] and hence the lemma. \end{proof} We put $y=T^{\mathfrak b}$, for a constant $\mathfrak b \ge 8$. Now suppose that $$\int_T^{\infty} \frac{|\Delta(u)|^2}{u^{2\alpha+1}}e^{-u/y}\,du \ge \log^2 T,$$ for sufficiently large $T$. Then clearly $$\Delta(u) =\Omega( u^{\alpha}).$$ Our next result explores the situation when such an inequality does not hold. \begin{prop}\label{main-prop} Let $\delta_0<\delta<\frac{1}{4}-\frac{\delta_0}{2}$. For $1/4+\delta/2 < \alpha <1/2$, suppose that \begin{equation}\label{assumption} \int_T^{\infty} \frac{|\Delta(u)|^2}{u^{2\alpha+1}}e^{-u/y}\,du \le \log^2 T, \end{equation} for a sufficiently large $T$. Then we have $$\int_{\substack{\text{Re}(s)=\alpha \\ t\in J_2(T)}} \frac{|D(s)|^2}{|s|^2}\,\mathrm{d} t \ll 1 + \int_T^{\infty} \frac{|\Delta(u)|^2}{u^{2\alpha+1}}e^{-2u/y}\, du.$$ \end{prop} Before embarking on a proof, we need the following technical lemmas. \begin{lem}\label{gamma} For $0\le \text{Re}(z) \le 1$ and $|\text{Im}(z)|\ge \log^2T$, we have \begin{equation}\label{gamma1} \int_T^{\infty} e^{-u/y}u^{-z} du =\frac{T^{1-z}}{1-z} + O(T^{-\mathfrak b'}) \end{equation} and \begin{equation}\label{gamma2} \int_T^{\infty} e^{-u/y} u^{-z}\log u\ du =\frac{T^{1-z}}{1-z}\log T + O(T^{-\mathfrak b'}), \end{equation} where $\mathfrak b'>0$ depends only on $\mathfrak b$.
\end{lem} \begin{proof} By the change of variable $v= u/y$, we get \[ \int_T^{\infty} \frac{e^{-u/y}}{u^z} du = y^{1-z} \int_{T/y}^{\infty} e^{-v} v^{-z} dv. \] Integrating the right-hand side by parts, \[ \int_{T/y}^{\infty} e^{-v} v^{-z} dv = \frac{e^{-T/y}}{1-z}\left( \frac{T}{y} \right)^{1-z} + \frac{1}{1-z} \int_{T/y}^{\infty} e^{-v} v^{1-z} dv. \] It is easy to see that \[ \int_{T/y}^{\infty} e^{-v} v^{1-z} dv = \Gamma(2-z) + O\left( \left(\frac{T}{y} \right)^{2-\text{Re}(z)}\right). \] Hence (\ref{gamma1}) follows using $e^{-T/y} =1+O(T/y)$ and Stirling's formula along with the assumption that $ |\text{Im}(z)|\ge \log^2T$. The proof of (\ref{gamma2}) proceeds along the same lines and uses the fact that \[ \int_{T/y}^{\infty} e^{-v} v^{1-z}\log v\ dv = \Gamma'(2-z) + O\left( \left(\frac{T}{y} \right)^{2-\text{Re}(z)}\log T\right). \] Then we apply Stirling's formula for $\Gamma'(s)$ instead of $\Gamma(s)$. \end{proof} \begin{lem}\label{initial-estimates} Under the assumption (\ref{assumption}), there exists $T_0$ with $T\le T_0 \le 2T$ such that \begin{equation*} \frac{\Delta(T_0)e^{-T_0/y}}{T_0^{\alpha}} \ll \log^2 T, \end{equation*} \begin{equation*} \text{and}\quad\frac{1}{y}\int_{T_0}^{\infty}\frac{\Delta(u)e^{-u/y}}{u^{\alpha}} du \ll \log T. \end{equation*} \end{lem} \begin{proof} The assumption (\ref{assumption}) implies that \begin{eqnarray*} \log^2T &\ge & \int_T^{2T} \frac{|\Delta(u)|^2}{u^{2\alpha+1}}e^{-u/y}du \\ &=& \int_T^{2T} \frac{|\Delta(u)|^2}{u^{2\alpha}}e^{-2u/y}\frac{e^{u/y}}{u} du \\ &\ge & (\log 2)\min_{T\le u\le 2T}\left(\frac{|\Delta(u)|}{u^{\alpha}}e^{-u/y}\right)^2, \end{eqnarray*} which proves the first assertion.
To prove the second assertion, we use the previous assertion and the Cauchy--Schwarz inequality along with assumption (\ref{assumption}) to get \begin{eqnarray*} \left( \int_{T_0}^{\infty}\frac{|\Delta(u)|}{u^{\alpha}}e^{-u/y}du \right)^2 &\le & \left( \int_{T_0}^{\infty}\frac{|\Delta(u)|^2}{u^{2\alpha+1}}e^{-u/y}du \right) \left( \int_{T_0}^{\infty} u e^{-u/y}du \right) \\ &\ll & y^2 \log^2 T. \end{eqnarray*} This completes the proof of this lemma. \end{proof} We now recall a mean value theorem due to Montgomery and Vaughan \cite{MontgomeryVaughan}. \begin{notation} For a real number $\theta$, let $\|\theta\|:=\min_{n\in \mathbb Z}|\theta -n|.$ \end{notation} \begin{thm}[Montgomery and Vaughan \cite{MontgomeryVaughan}]\label{mean-value} Let $a_1,\cdots, a_N$ be arbitrary complex numbers, and let $\lambda_1,\cdots,\lambda_N$ be distinct real numbers such that \[\delta = \min_{m\neq n}| \lambda_m-\lambda_n|>0.\] Then \[ \int_0^T \left| \sum_{n\le N} a_n \exp(i\lambda_n t) \right|^2 dt =\left(T +O\left(\frac{1}{\delta}\right)\right)\sum_{n\le N} |a_n|^2.\] \end{thm} \begin{lem}\label{mean-value-estimate} For $T\le T_0\le 2T$ and $\text{Re}(s)=\alpha$, we have $$\int_T^{2T} \left| \sum_{n\le T_0}\frac{|\tau(n,\theta)|^2}{n^s}e^{-n/y}\right|^2 t^{-2} dt \ll 1.$$ \end{lem} \begin{proof} Using Theorem~\ref{mean-value}, we get \begin{eqnarray*} &&\int_T^{2T} \left| \sum_{n\le T_0}\frac{|\tau(n,\theta)|^2}{n^s}e^{-n/y}\right|^2 t^{-2} dt \\ &\le & \frac{1}{T^2} \left( T \sum_{n\le T_0} |b(n)|^2 + O\left( \sum_{n\le T_0} n|b(n)|^2\right)\right), \end{eqnarray*} where \[ b(n)=\frac{|\tau(n,\theta)|^2}{n^{\alpha}}e^{-n/y}.\] Thus \[ \sum_{n\le T_0} |b(n)|^2 \le \sum_{n\le T_0}\frac{d(n)^4}{n^{2\alpha}} \ll T_0^{1-2\alpha+\epsilon}\] and \[ \sum_{n\le T_0} n|b(n)|^2 \le \sum_{n\le T_0}\frac{d(n)^4}{n^{2\alpha-1}} \ll T_0^{2-2\alpha + \epsilon}\] for any $\epsilon>0$, since the divisor function $d(n)\ll n^\epsilon$ for any $\epsilon>0$. This completes the proof as $\alpha>0$.
\end{proof} \begin{lem}\label{mean-value-error} For $\text{Re}(s)=\alpha$ and $T\le T_0 \le 2T$, we have $$\int_T^{2T} \left| \sum_{n\ge 0}\int_0^1 \frac{\Delta(n+x+T_0) e^{-(n+x+T_0)/y}}{(n+x+T_0)^{s+1}} \mathrm{d} x \right|^2 \mathrm{d} t \ll \int_T^{\infty} \frac{|\Delta(x)|^2}{x^{2\alpha+1}}e^{-2x/y} \mathrm{d} x.$$ \end{lem} \begin{proof} Using the Cauchy--Schwarz inequality, we get \begin{align*} & \left| \sum_{n\ge 0}\int_0^1 \frac{\Delta(n+x+T_0)}{(n+x+T_0)^{s+1}} e^{-(n+x+T_0)/y} \mathrm{d} x \right|^2 \\ \le& \int_0^1 \left| \sum_{n\ge 0} \frac{\Delta(n+x+T_0)}{(n+x+T_0)^{s+1}}e^{-(n+x+T_0)/y} \right|^2 \mathrm{d} x. \end{align*} Hence \begin{align*} &\int_T^{2T} \left| \int_0^1 \sum_{n\ge 0}\frac{\Delta(n+x+T_0) e^{-(n+x+T_0)/y}}{(n+x+T_0)^{s+1}} \mathrm{d} x \right|^2 \mathrm{d} t \\ \le& \int_T^{2T}\int_0^1 \left| \sum_{n\ge 0} \frac{\Delta(n+x+T_0)}{(n+x+T_0)^{s+1}} e^{-(n+x+T_0)/y} \right|^2 \mathrm{d} x \mathrm{d} t \\ =& \int_0^1 \int_T^{2T}\left| \sum_{n\ge 0} \frac{\Delta(n+x+T_0)}{(n+x+T_0)^{s+1}} e^{-(n+x+T_0)/y} \right|^2 \mathrm{d} t \mathrm{d} x. \end{align*} From Theorem \ref{mean-value}, we get \begin{eqnarray*} &&\int_T^{2T}\left| \sum_{n\ge 0} \frac{\Delta(n+x+T_0)}{(n+x+T_0)^{s+1}}e^{-(n+x+T_0)/y} \right|^2 \mathrm{d} t\\ &=& T\sum_{n\ge 0}\frac{ |\Delta(n+x+T_0)|^2}{(n+x+T_0)^{2\alpha+2}} e^{-2(n+x+T_0)/y} + O\left( \sum_{n\ge 0} \frac{ |\Delta(n+x+T_0)|^2}{(n+x+T_0)^{2\alpha+1}}e^{-2(n+x+T_0)/y}\right)\\ &\ll & \sum_{n\ge 0} \frac{ |\Delta(n+x+T_0)|^2}{(n+x+T_0)^{2\alpha+1}}e^{-2(n+x+T_0)/y}. \end{eqnarray*} Hence \begin{eqnarray*} &&\int_T^{2T} \left| \sum_{n\ge 0}\int_0^1 \frac{\Delta(n+x+T_0) e^{-(n+x+T_0)/y}}{(n+x+T_0)^{s+1}} \mathrm{d} x \right|^2 \mathrm{d} t \\ &\ll & \int_0^1 \sum_{n\ge 0} \frac{ |\Delta(n+x+T_0)|^2}{(n+x+T_0)^{2\alpha+1}}e^{-2(n+x+T_0)/y}\mathrm{d} x \ll \int_T^{\infty} \frac{|\Delta(x)|^2}{x^{2\alpha+1}}e^{-2x/y} \mathrm{d} x, \end{eqnarray*} completing the proof.
\end{proof} \begin{proof}[\textbf{Proof of Proposition \ref{main-prop}.}] For $s=\alpha+it$ with $1/4 +\delta /2 < \alpha < 1/2$ and $t\in J_2(T)$, we have \begin{eqnarray*} \sum_{n=1}^\infty \frac{|\tau(n,\theta)|^2}{n^s} e^{-n/y} &=& \frac{1}{2\pi i} \int_{2-i\infty}^{2+i\infty} D(s+w) \Gamma(w) y^w \mathrm{d} w \\ &=& \frac{1}{2\pi i} \int_{2-i\log^2T}^{2+i\log^2T} D(s+w) \Gamma(w) y^w \mathrm{d} w +O\left( y^2\int_{\log^2T}^{\infty} |D(s+w)||\Gamma(w)|\mathrm{d} w \right). \end{eqnarray*} The above error term is estimated to be $o(1)$. We move the line of integration to $$\left[\frac{1}{4}+\frac{\delta}{2} -\alpha-i\log^2T, \ \frac{1}{4}+\frac{\delta}{2} -\alpha+i\log^2 T\right].$$ Let $\delta'=1/4+\delta/2 -\alpha$. In this region $\text{Re}(2s+2w)=1/2+\delta$. So we can apply Lemma~\ref{estimate-on-J(T)} to conclude that $D(s+w) =O(T^\kappa)$, for some constant $\kappa>0$. Thus the integrals along horizontal lines are $o(1)$. Since the only pole inside this contour is at $w=0$, we get \begin{eqnarray*} \sum_{n=1}^{\infty} \frac{|\tau(n,\theta)|^2}{n^s} e^{-n/y} = D(s) + \frac{1}{2\pi i} \int_{\delta'-i\log^2 T}^{\delta'+i\log^2 T} D(s+w)\Gamma(w)y^w \mathrm{d} w +o(1). \end{eqnarray*} Since $\delta' <0$, the remaining integral can be shown to be $o(1)$ for $\mathfrak b\geq 8$. Using $T_0$ as in Lemma \ref{initial-estimates}, we now divide the sum into two parts: $$D(s)= \sum_{n\le T_0} \frac{|\tau(n, \theta)|^2}{n^s} e^{-n/y} + \sum_{n > T_0} \frac{|\tau(n, \theta)|^2 }{n^s} e^{-n/y}+o(1).$$ To estimate the second sum, we write \begin{eqnarray*} \sum_{n > T_0} \frac{|\tau(n, \theta)|^2}{n^s} e^{-n/y} &=& \int_{T_0}^{\infty} \frac{ e^{-x/y}}{x^s} \mathrm{d} \left(\sum_{n\le x}|\tau(n, \theta)|^2 \right)\\ &=& \int_{T_0}^{\infty} \frac{ e^{-x/y}}{x^s} \mathrm{d} (\mathcal{M}(x)+\Delta(x))\\ &=& \int_{T_0}^{\infty} \frac{ e^{-x/y}}{x^s} \mathcal{M}'(x)\mathrm{d} x + \int_{T_0}^{\infty} \frac{ e^{-x/y}}{x^s} \mathrm{d} (\Delta(x)).
\end{eqnarray*} Recall that \[ \mathcal{M}(x)=\omega_1(\theta)x\log x + \omega_2(\theta)x\cos(\theta\log x) +\omega_3(\theta)x,\] thus \[ \mathcal{M}'(x)=\omega_1(\theta)\log x + \omega_2(\theta)\cos(\theta\log x) -\theta\omega_2(\theta)\sin(\theta\log x)+\omega_1(\theta)+\omega_3(\theta).\] Observe that \[ \int_{T_0}^{\infty}\frac{ e^{-x/y}}{x^s}\cos(\theta\log x) \mathrm{d} x =\frac{1}{2} \int_{T_0}^{\infty} \frac{ e^{-x/y}}{x^{s+i\theta}} \mathrm{d} x +\frac{1}{2} \int_{T_0}^{\infty} \frac{ e^{-x/y}}{x^{s-i\theta}} \mathrm{d} x.\] Applying Lemma~\ref{gamma}, we conclude that \[ \int_{T_0}^{\infty} \frac{ e^{-x/y}}{x^s} \mathcal{M}'(x)\mathrm{d} x = o(1).\] Integrating the second integral by parts, we get \begin{eqnarray*} \int_{T_0}^{\infty} \frac{ e^{-x/y}}{x^s} \mathrm{d} (\Delta(x)) &=& -\frac{e^{-T_0/y} \Delta(T_0)}{T_0^s} \\ &+& \frac{1}{y}\int_{T_0}^\infty \frac{ e^{-x/y}}{x^s}\Delta(x) \mathrm{d} x +s\int_{T_0}^{\infty}\frac{ e^{-x/y}}{x^{s+1}}\Delta(x) \mathrm{d} x. \end{eqnarray*} Applying Lemma~\ref{initial-estimates}, we get \begin{eqnarray*} \sum_{n > T_0} \frac{|\tau(n, \theta)|^2}{n^s} e^{-n/y} &=& s\int_{T_0}^{\infty}\frac{\Delta(x) e^{-x/y}}{x^{s+1}} \mathrm{d} x + O(\log T) \\ &=& s\sum_{n\ge 0} \int_{0}^{1}\frac{\Delta(n+x+T_0) e^{-(n+x+T_0)/y}}{(n+x+T_0)^{s+1}} \mathrm{d} x +O(\log T). \end{eqnarray*} Hence we have $$D(s)= \sum_{n\le T_0} \frac{|\tau(n, \theta)|^2}{n^s} e^{-n/y} +s\sum_{n\ge 0} \int_{0}^{1}\frac{\Delta(n+x+T_0) e^{-(n+x+T_0)/y}}{(n+x+T_0)^{s+1}} \mathrm{d} x +O(\log T) .$$ Squaring both sides and then integrating on $J_2(T)$, we get \begin{align*} \int_{J_2(T)} \frac{|D(\alpha+it)|^2}{|\alpha+it|^2} \mathrm{d} t & \ll \int_T^{2T} \left| \sum_{n\le T_0} \frac{|\tau(n, \theta)|^2}{n^s} e^{-n/y} \right|^2 \frac{ \mathrm{d} t}{t^2} \\ & + \int_T^{2T} \left| \sum_{n\ge 0} \int_{0}^{1}\frac{\Delta(n+x+T_0) e^{-(n+x+T_0)/y}}{(n+x+T_0)^{s+1}} \mathrm{d} x \right|^2 \mathrm{d} t.
\end{align*} The proposition now follows using Lemma \ref{mean-value-estimate} and Lemma \ref{mean-value-error}. \end{proof} We are now ready to prove our main theorem of this section. \begin{proof}[\textbf{Proof of Theorem \ref{omega_integral}}] We argue by contradiction. Suppose that (\ref{lb-increasing}) fails for some $c>0$. Then, given any $N_0>1$, there exists $T>N_0$ such that \[\int_T^{\infty} \frac{|\Delta(x)|^2}{x^{2\alpha+1}}e^{-2x/y} \mathrm{d} x \ll \exp\left( c(\log T)^{7/8} \right).\] This gives \[ \int_T^{\infty} \frac{|\Delta(x)|^2}{x^{2\beta+1}}e^{-2x/y} \mathrm{d} x \ll 1, \] where \[ \beta=\frac{3}{8}-\frac{c}{2(\log T)^{1/8}} . \] We apply Proposition \ref{main-prop} to get \begin{equation}\label{contra} \int_{J_2(T)} \frac{|D(\beta+it)|^2}{|\beta+it|^2} \mathrm{d} t \ll 1. \end{equation} Now we compute a lower bound for the last integral over $J_2(T)$. Write the functional equation for $\zeta(s)$ as $$\zeta(s) =\pi^{1/2-s}\frac{\Gamma((1-s)/2)}{\Gamma(s/2)}\zeta(1-s).$$ Using Stirling's formula for the $\Gamma$-function, we get $$|\zeta(s)|=\pi^{1/2-\beta}t^{1/2-\beta}|\zeta(1-s)|\left(1+O\left(\frac{1}{T}\right)\right),$$ for $s=\beta+it$. This implies $$|D(\beta+it)|=t^{2-4\beta}\frac{|\zeta(1-\beta+it)^2\zeta(1-\beta-it-i\theta) \zeta(1-\beta-it+i\theta)|}{|\zeta(2\beta+i2t)|}.$$ Let $\delta_0=1/16$, and \[\beta=\frac{3}{8} -\frac{c}{2(\log T)^{1/8} }=\frac{1}{2}-\delta \] with \[\delta=\frac{1}{8}+\frac{c}{2(\log T)^{1/8} }.\] Then using Lemma \ref{estimate-on-J(T)}, we get \[ |\zeta(1-\beta+it)| = \left|\zeta\left(\frac{1}{2}+\delta+it\right)\right| \gg \exp\left(-\log\log t \left(\frac{\log t}{\delta_0}\right)^{\frac{1-2\delta}{1-2\delta_0}}\right).\] For $t\in J_2(T)$ we observe that $t\pm\theta \in J_1(T)$, and so the same bounds hold for $\zeta(1-\beta+it+i\theta)$ and $\zeta(1-\beta+it -i\theta)$.
Further \[ |\zeta(2\beta+i2t)| = \left|\zeta\left(\frac{1}{2}+\left(\frac{1}{2}-2\delta\right)+i2t\right)\right| \ll \exp\left(\log\log t \left(\frac{\log t}{\delta_0}\right)^{\frac{4\delta}{1-2\delta_0}}\right).\] Combining these bounds, we get \[ |D(\beta+it)| \gg t^{2-4\beta} \exp\left(-5\log\log t \left(\frac{\log t}{\delta_0}\right)^{\frac{1-2\delta}{1-2\delta_0}}\right).\] Therefore \begin{eqnarray*} \int_{J_2(T)} |D(\beta+it)|^2 \mathrm{d} t &\gg & T^{4-8\beta} \exp\left(-10\log\log T\left(\frac{\log T}{\delta_0}\right)^{\frac{1-2\delta}{1-2\delta_0}}\right) \mu(J_2(T)) \\ &\gg & T^{5-8\beta}\exp\left(-10\log\log T \left(\frac{\log T}{\delta_0}\right)^{\frac{1-2\delta}{1-2\delta_0}}\right), \end{eqnarray*} where we use Lemma \ref{size-J(T)} to show that $\mu(J_2(T))\gg T$. Since $|\beta+it|^2\ll T^2$ on $J_2(T)$, putting in the values of $\delta$ and $\delta_0$ chosen above, we get $$\int_{J_2(T)} \frac{|D(\beta+it)|^2}{|\beta+it|^2} \mathrm{d} t \gg \exp\left(3c(\log T)^{7/8}\right),$$ since $\frac{1-2\delta}{1-2\delta_0}< 7/8$. This contradicts (\ref{contra}), and hence the theorem follows. \end{proof} The following definition is required to state the corollaries. \begin{defi} An infinite unbounded subset $\mathcal{S}$ of non-negative real numbers is called an $\textbf{X}\text{-Set}$. \end{defi} \noindent The following two corollaries are immediate. \begin{coro}\label{coro:balu_ramachandra1} For any $c>0$ there exists an $\textbf{X}\text{-Set}$ $\mathcal{S}$, such that for sufficiently large $T$ depending on $c$ there exists an \[ X \in \left[ T, \frac{T^{\mathfrak b}}{2}\log^2 T\right]\cap \mathcal{S}, \] for which we have \[ \int_X^{2X} \frac{|\Delta(x)|^2}{x^{2\alpha+1}} dx \ge \exp\left( (c-\epsilon)(\log X)^{7/8}\right)\] with $\alpha$ as in Theorem \ref{omega_integral} and for any $\epsilon>0$.
\end{coro} \begin{coro}\label{coro:balu_ramachandra2} For any $c>0$ there exists an $\textbf{X}\text{-Set}$ $\mathcal{S}$, such that for sufficiently large $T$ depending on $c$ there exists an \[ x \in \left[ T, \frac{T^{\mathfrak b}}{2}\log^2 T\right]\cap \mathcal{S}, \] for which we have \[ \Delta(x) \ge x^{3/8} \exp\left( - c(\log x)^{7/8}\right). \] \end{coro} \noindent We can now prove a ``measure version'' of the above result. \begin{prop}\label{Balu-Ramachandra-measure} For any $c>0$, let \[\alpha(x)=\frac{3}{8}- \frac{c}{(\log x)^{1/8} }\] and $\mathcal{A}=\{x: |\Delta(x)|\gg x^{\alpha(x)} \}$. Then for every sufficiently large $X$ depending on $c$, we have \[ \mu(\mathcal{A}\cap [X,2X])=\Omega(X^{2\alpha(X)}).\] \end{prop} \begin{proof} Suppose that the conclusion does not hold; then \[ \mu(\mathcal{A}\cap [X,2X]) \ll X^{2\alpha(X)}.\] Thus for every sufficiently large $X$, we get \[ \int_{\mathcal{A}\cap [X,2X] }\frac{|\Delta(x)|^2}{x^{2\alpha+1}}dx \ll X^{2\alpha}\frac{ M(X)}{X^{2\alpha+1}}=\frac{ M(X)}{X},\] where $\alpha=\alpha(X)$ and $M(X)=\sup_{X\le x \le 2X} |\Delta(x)|^2$. Using a dyadic partition, we can prove \[ \int_{\mathcal{A}\cap [T,y] }\frac{|\Delta(x)|^2}{x^{2\alpha+1}}dx \ll \frac{M_0(T)}{T}\log T, \] where \[ M_0(T) =\sup_{T\le x\le y} |\Delta(x)|^2 \] and $y=T^{\mathfrak b}$ for some $\mathfrak b>0$ and $T$ sufficiently large. This gives \[ \int_T^{\infty} \frac{|\Delta(x)|^2}{x^{2\alpha+1}}e^{-2x/y} dx \ll \frac{M_0(T)}{T}\log T. \] Along with (\ref{lb-increasing}), this implies \[ M_0(T)\gg T\exp\left( \frac{c}{2}(\log T)^{7/8} \right).\] Thus \[|\Delta(x)| \gg x^{\frac{1}{2}} \exp\left(\frac{c}{4}(\log x)^{7/8}\right),\] for some $x\in [T,y].$ This contradicts the fact that $|\Delta(x)| \ll x^{\frac{1}{2}} (\log x)^6$. \end{proof} \subsection{Optimality of the Omega Bound for the Second Moment} The following proposition shows the optimality of the omega bound in Corollary~\ref{coro:balu_ramachandra1}.
\begin{prop}\label{prop:upper_bound_second_moment_twisted_divisor} Under the Riemann Hypothesis (RH), we have \[\int_{X}^{2X}\Delta^2(x)\mathrm{d} x\ll X^{7/4+\epsilon},\] for any $\epsilon>0$. \end{prop} \begin{proof} Perron's formula gives \begin{equation*} \Delta(x)=\frac{1}{2\pi}\int_{-T}^{T}\frac{D(3/8+it)x^{3/8+it}}{3/8+it}\mathrm{d} t + O(x^\epsilon), \end{equation*} for any $\epsilon>0$ and for $T=X^2$ with $x\in[X, 2X]$. Using this expression for $\Delta(x)$, we write its second moment as \begin{align*} &\int_{X}^{2X}\Delta^2(x)\mathrm{d} x = \int_{X}^{2X}\int_{-T}^{T}\int_{-T}^{T}\frac{D(3/8+ it_1)D(3/8+ it_2)}{(3/8+it_1)(3/8+ it_2)}x^{3/4+ i(t_1+t_2)}\mathrm{d} x \ \mathrm{d} t_1 \mathrm{d} t_2 \\ &\hspace{2.5 cm} + O\left(X^{1+\epsilon}\left(1+\sup_{x\in[X, 2X]}|\Delta(x)|\right)\right)\\ &\ll X^{7/4}\int_{-T}^{T}\int_{-T}^{T}\left|\frac{D(3/8+ it_1)D(3/8+ it_2)}{(3/8+it_1)(3/8+it_2)(7/4+ i(t_1+t_2))}\right|\mathrm{d} t_1 \mathrm{d} t_2 + O(X^{3/2+\epsilon}). \end{align*} In the above calculation, we have used the fact that $\Delta(x)\ll x^{\frac{1}{2}+\epsilon}$ as in (\ref{eq:upper_bound_delta}). Also note that for complex numbers $a, b$, we have $|ab|\leq \frac{1}{2}(|a|^2 + |b|^2)$. We use this inequality with \[a=\frac{|D(3/8+it_1)|}{|3/8+it_1|\sqrt{|7/4+i(t_1+t_2)|}} \ \text{ and } \ b=\frac{|D(3/8+it_2)|}{|3/8+it_2|\sqrt{|7/4+i(t_1+t_2)|}},\] to get \begin{align*} \int_{X}^{2X}\Delta^2(x)\mathrm{d} x &\ll X^{7/4}\int_{-T}^{T}\int_{-T}^{T}\left|\frac{D(3/8+ it_2)}{(3/8+it_2)}\right|^2\frac{\mathrm{d} t_1}{|7/4+ i(t_1+t_2)|} \mathrm{d} t_2 + O(X^{3/2+\epsilon})\\ &\ll X^{7/4}\log X\int_{-T}^{T}\left|\frac{D(3/8+ it_2)}{(3/8+it_2)}\right|^2 \mathrm{d} t_2 + O(X^{3/2+\epsilon}). \end{align*} Under RH, $|D(3/8+it_2)|\ll |t_2|^{\frac{1}{2}+\epsilon}$. So we have \begin{align*} \int_{X}^{2X}\Delta^2(x)\mathrm{d} x \ll X^{7/4+\epsilon}\ \text{ for any } \epsilon>0.
\end{align*} \end{proof} \section{Influence Of Measure}\label{sec:measure_analysis} In this section, we study the influence of the measure of the set on which $\Omega$-results hold. The following theorem illustrates the methods of this section; it will be proved in Section~\ref{subsec:twisted_divisor_omega_plus_minus}. \begin{thm}\label{thm:tau_theta_omega_pm} Let $\Delta(x)$ be the error term of the summatory function of the twisted divisor function as defined in (\ref{eq:formmula_tau_ntheta}). For $c>0$, let \[\alpha(x)=\frac{3}{8}-\frac{c}{(\log x)^{1/8}} .\] Let $\delta$ and $\delta'$ be such that \[0<\delta<\delta'<\frac{1}{8}.\] Then either \[\Delta(x)=\Omega\left(x^{ \alpha(x)+ \frac{\delta}{2}}\right) \ \text{ or } \ \Delta(x)=\Omega_{\pm}\left(x^{\frac{3}{8}-\delta'}\right).\] \end{thm} Throughout this section, we assume the conditions and notations given in Assumptions~\ref{as:for_continuation_mellintran}. Further, we fix the following notation for this section. \begin{notations} For $i=0, 1, 2$, let $\alpha_i(T)$ denote a positive monotonic function such that $\alpha_i(T)$ converges to a constant as $T\rightarrow \infty$. For example, in some cases $\alpha_i(T)$ could be $1-1/\log T$, which tends to $1$ as $T\rightarrow \infty$. For $i=0, 1$, let $h_i(T)$ be positive monotonically increasing functions such that $h_i(T)\rightarrow\infty$ as $T\rightarrow\infty$. For a real-valued and non-negative function $f$, we denote \[\mathcal{A}(f(x)):=\{x\geq 1: |\Delta(x)|>f(x)\}.\] \end{notations} \subsection{Refining Omega Result from Measure} Now we hypothesize a situation in which there is a lower-bound estimate for the second moment of the error term. \begin{asump}\label{as:measure_to_omega} Let $\mathcal{S}$ be an $\textbf{X}\text{-Set}$. Define a real-valued positive bounded function $\alpha(T)$ on $\mathcal{S}$, such that \[0\leq \alpha(T)<M<\infty\] for some constant $M$.
For a fixed $T$, we write \[\mathcal{A}_T:=[T/2, T]\cap\mathcal{A}(c_8 x^{\alpha(x)}), \ \text{ for } c_8>0.\] For all $T\in \mathcal{S}$ and for constants $c_9, \ c_{10} > 0$, assume the following bounds hold: \begin{enumerate} \item[(i)] \[\int_{\mathcal{A}_T}\frac{\Delta^2(x)}{x^{2\alpha+1}}\mathrm{d}x>c_9,\] \item[(ii)] \[\mu(\mathcal{A}_T)<c_{10}h_0(T),\quad \text{ and }\] \item[(iii)] the function \[x^{\alpha+1/2}h_0^{-1/2}(x)\] is monotonically increasing for $x\in [T/2, T]$. \end{enumerate} \end{asump} We note that the first assumption indicates an $\Omega$-estimate. The next two assumptions indicate that the measure of the set on which the $\Omega$ estimate holds is not \lq too big\rq. \begin{prop}\label{prop:refine_omega_from_measure} Suppose there exists an $\textbf{X}\text{-Set}$ $\mathcal{S}$ having properties as described in Assumptions~\ref{as:measure_to_omega}. Let the constant $c_{11}$ be given by \[c_{11}:=\sqrt{\frac{c_9}{2^{2M+1}c_{10}}}.\] Then there exists a $T_0$ such that for all $T>T_0$ and $T\in \mathcal{S}$, we have \[|\Delta(x)|>c_{11} x^{\alpha+1/2}h_0^{-1/2}(x)\] for some $x\in [T/2, T]$. In particular \[\Delta(x)=\Omega(x^{\alpha+1/2}h_0^{-1/2}(x)).\] \end{prop} \begin{proof} If the statement of the above proposition is not true, then for all $x\in [T/2, T]$ we have \[|\Delta(x)|\leq c_{11} x^{\alpha + 1/2}h_0^{-1/2}(x).\] From this, we may derive an upper bound for the second moment of $\Delta(x)$: \begin{align*} \int_{\mathcal{A}_T}\frac{\Delta^2(x)}{x^{2\alpha+1}}\mathrm{d}x &\leq \frac{c_{11}^2 T^{2\alpha+1}\mu(\mathcal{A}_T\cap[T/2, T])}{h_0(T)(T/2)^{2\alpha+1}} \\ &\leq c_{11}^2 2^{2M + 1} c_{10} = c_9. \end{align*} The above bound contradicts (i) of Assumptions~\ref{as:measure_to_omega}. This proves the proposition. \end{proof} \subsection{Omega Plus-Minus Result from Measure} In this subsection, we prove an $\Omega_\pm$ result for $\Delta(x)$ when $\mu(\mathcal{A}_T)$ is big. We formalize the conditions in the following assumptions.
\begin{asump}\label{as:measure_omega_plus_minus} Suppose the conditions in Assumptions~\ref{as:for_continuation_mellintran} hold. Let $l$ be an integer such that \[l>\max(\sigma_2, 1),\] and let $\alpha_1(T)$ be a monotonic function satisfying the inequality \[0<\alpha_1(T)\leq \sigma_1.\] Furthermore: \begin{enumerate} \item[(i)] the Dirichlet series $D(\sigma+it)$ has no pole when $\alpha_1(T)\le \sigma\le \sigma_1$; \item[(ii)] if $|t|\leq T^{2l}$ and $\alpha_1(T)\le \sigma\le \sigma_1$, we have \[|D(\sigma + it)|\leq c_{12} (|t|+1)^{l-1}\] for some constant $c_{12}>0$. \end{enumerate} \end{asump} \begin{asump}\label{as:measure_omega_plus_minus_weak} Suppose there exists $\epsilon>0$ such that the following holds: \begin{flushleft} if $D(\sigma+it)$ has no pole for $\alpha_1(T)-\epsilon< \sigma \le \sigma_1$ and $|t|\leq 2T^{2l}$, then there exists a constant $c_{13}>0$ depending on $\epsilon$ such that \[|D(\sigma + it)|\leq c_{13} (|t|+1)^{l-1}, \] when $\alpha_1(T)\le \sigma \le \sigma_1$ and $|t|\leq T^{2l}$. \end{flushleft} \end{asump} Assumptions~\ref{as:measure_omega_plus_minus_weak} says that if there are no poles of $D(s)$ in $\alpha_1(T)-\epsilon< \sigma \le \sigma_1$, then $D(s)$ has polynomial growth in a certain region. \begin{lem}\label{lem:perron_for_omegapm_measure} Under the conditions in Assumptions~\ref{as:measure_omega_plus_minus}, we have \[\Delta(x) =\int_{\alpha_1-iT^{2l}}^{\alpha_1+iT^{2l}}\frac{D(\eta)x^\eta}{\eta}\mathrm{d}\eta + O(T^{-1}),\] for all $x\in [T/2, 5T/2]$. \end{lem} \begin{proof} This follows from the Perron summation formula.
\end{proof} \begin{lem}[Balasubramanian and Ramachandra \cite{BaluRamachandra2}]\label{lem:ramachandra_trick} Let $T\ge 1,$ $\delta_0>0$ and $f(x)$ be a real-valued integrable function such that \[f(x)\geq0 \quad \text{ for } x\in [T-\delta_0T, \ 2T+ \delta_0T].\] Then for $\delta>0$ and for a positive integer $l$ satisfying $\delta l\leq \delta_0,$ we have \[\int_T^{2T}f(x)\mathrm{d} x \leq \frac{1}{(\delta T)^l}\underset{l~\mathrm{ times }\quad}{\int_0^{\delta T}\cdots\int_0^{\delta T}} \int_{T-\sum_{1}^l y_i}^{2T+\sum_{1}^l y_i}f(x)\mathrm{d}{x}~\mathrm{d} y_1 \ldots \mathrm{d} y_l.\] \end{lem} \begin{proof} For $0\le y_i \le \delta T$, $i=1,2,\ldots,l$, \begin{align*} \int_T^{2T}f(x)\mathrm{d} x \leq \int_{T-\sum_{1}^l y_i}^{2T+\sum_{1}^l y_i}f(x)\mathrm{d}{x}, \end{align*} as $f(x)\ge 0$ in \[\left[T-\sum_{1}^l y_i, 2T+\sum_{1}^l y_i\right]\subseteq [T-\delta_0T, 2T+ \delta_0T].\] This gives \begin{align*} &\frac{1}{(\delta T)^l}\underset{l~\mathrm{ times }\quad}{\int_0^{\delta T}\cdots\int_0^{\delta T}} \int_{T-\sum_{1}^l y_i}^{2T+\sum_{1}^l y_i}f(x)\mathrm{d}{x}~\mathrm{d} y_1 \ldots \mathrm{d} y_l \\ & \geq \frac{1}{(\delta T)^l}\underset{l~\mathrm{ times }\quad}{\int_0^{\delta T}\cdots\int_0^{\delta T}} \int_{T}^{2T}f(x)\mathrm{d}{x}~\mathrm{d} y_1 \ldots \mathrm{d} y_l = \int_{T}^{2T}f(x)\mathrm{d}{x}. \end{align*} \end{proof} The next theorem shows that if $\Delta(x)$ does not change sign, then the set on which the $\Omega$-estimate holds cannot be \lq too big\rq. \begin{thm}\label{thm:upper_bound_measure} Suppose the conditions in Assumptions~\ref{as:measure_omega_plus_minus} hold. Let $h_1(T)$ be a monotonically increasing function such that $h_1(T)\rightarrow\infty$. Let $\alpha_2(T)$ be a bounded positive monotonic function, such that \begin{align*} & 0<\alpha_1(T)<\alpha_2(T)\leq \sigma_1, \text{ and} \\ & \frac{h_1(T)}{T^{\alpha_1}}\rightarrow \infty \text{ as } T\rightarrow \infty. 
\end{align*} If there exists a constant $x_0$ such that $\Delta(x)$ does not change sign on $\mathcal{A}(h_1(x))\cap[x_0, \infty)$, then \[\mu(\mathcal{A}(x^{\alpha_2(x)})\cap [T, 2T])\leq 4h_1(5T/2)T^{1-\alpha_2(T)}+ O(1 + T^{1-\alpha_2(T) + \alpha_1(T)})\] for $T\geq 2x_0$. \end{thm} \begin{proof} Trivially, we have \[\mu(\mathcal{A}(x^{\alpha_2})\cap [T, 2T])\leq \int_T^{2T}\frac{|\Delta(x)|}{x^{\alpha_2}}\mathrm{d} x .\] Using Lemma~\ref{lem:ramachandra_trick} on the above inequality, we get \[\mu(\mathcal{A}(x^{\alpha_2})\cap [T, 2T])\leq \frac{1}{(\delta T)^l}\underset{l~\mathrm{ times }\quad}{\int_0^{\delta T}\cdots\int_0^{\delta T}} \int_{T-\sum_{1}^l y_i}^{2T+\sum_{1}^l y_i}\frac{|\Delta(x)|}{x^{\alpha_2}}\mathrm{d}{x}~\mathrm{d} y_1 \ldots \mathrm{d} y_l,\] where $\delta=\frac{1}{2l}.$ Let $\chi$ denote the characteristic function of the complement of $\mathcal{A}(h_1(x))$: \[\chi(x)=\begin{cases} 1 \quad \mbox{ if } x \notin \mathcal{A}(h_1(x)),\\ 0 \quad \mbox{ if } x \in \mathcal{A}(h_1(x)). \end{cases} \] For $T\geq 2x_0$, $\Delta(x)$ does not change sign on $$\left[T-\sum_{1}^l y_i, \ 2T+\sum_{1}^l y_i\right]\cap\mathcal{A}(h_1(x)),$$ as $0\le y_i\le \delta T$ for all $i=1,\ldots,l$. So we can write the above inequality as \begin{align} \label{eq:measure_omega_pm_first_bound} \notag \mu(\mathcal{A}(x^{\alpha_2})\cap [T, 2T]) &\leq \frac{2}{(\delta T)^l} \underset{l~\mathrm{ times }\quad}{\int_0^{\delta T}\cdots\int_0^{\delta T}} \int_{T-\sum_{1}^l y_i}^{2T+\sum_{1}^l y_i}\frac{|\Delta(x)|}{x^{\alpha_2}}\chi(x)\mathrm{d}{x}~\mathrm{d} y_1 \ldots \mathrm{d} y_l\\ &+ \frac{1}{(\delta T)^l} \left| \underset{l~\mathrm{ times }\quad}{\int_0^{\delta T}\cdots\int_0^{\delta T}} \int_{T-\sum_{1}^l y_i}^{2T+\sum_{1}^l y_i}\frac{\Delta(x)}{x^{\alpha_2}}\mathrm{d}{x}~\mathrm{d} y_1 \ldots \mathrm{d} y_l\right|. 
\end{align} Since $x\notin \mathcal{A}(h_1(x))$ implies $|\Delta(x)|\le h_1(x)$, we get \begin{align}\label{eq:measure_omega_pm_trivial_part} \notag \frac{2}{(\delta T)^l}& \underset{l~\mathrm{ times }\quad}{\int_0^{\delta T}\cdots\int_0^{\delta T}} \int_{T-\sum_{1}^l y_i}^{2T+\sum_{1}^l y_i}\frac{|\Delta(x)|}{x^{\alpha_2}}\chi(x)\mathrm{d}{x}~\mathrm{d} y_1 \ldots \mathrm{d} y_l \\ &\leq 4 h_1(5T/2) T^{1-\alpha_2}. \end{align} We use the integral expression for $\Delta(x)$ given in Lemma~\ref{lem:perron_for_omegapm_measure}, and get \begin{align}\label{eq:measure_omega_pm_perron_part} \notag & \frac{1}{(\delta T)^l} \left| \underset{l~\mathrm{ times }\quad}{\int_0^{\delta T}\cdots\int_0^{\delta T}} \int_{T-\sum_{1}^l y_i}^{2T+\sum_{1}^l y_i}\frac{\Delta(x)}{x^{\alpha_2}}\mathrm{d}{x}~\mathrm{d} y_1 \ldots \mathrm{d} y_l\right|\\ \notag &= \frac{1}{(\delta T)^l}\left| \underset{l~\mathrm{ times }\quad}{\int_0^{\delta T}\cdots\int_0^{\delta T}} \int_{T-\sum_{1}^l y_i}^{2T+\sum_{1}^l y_i} \int_{\alpha_1-iT^{2l}}^{\alpha_1+iT^{2l}}\frac{D(\eta)x^{\eta-\alpha_2}}{\eta}\mathrm{d}\eta~\mathrm{d}{x}~\mathrm{d} y_1 \ldots \mathrm{d} y_l\right| +O(1) \\ \notag &\ll 1 + \frac{1}{(\delta T)^l}\left| \int_{\alpha_1-iT^{2l}}^{\alpha_1+iT^{2l}}\frac{D(\eta)}{\eta} \underset{l~\mathrm{ times }\quad}{\int_0^{\delta T}\cdots\int_0^{\delta T}} \int_{T-\sum_{1}^l y_i}^{2T+\sum_{1}^l y_i} x^{\eta-\alpha_2}\mathrm{d}{x}~\mathrm{d} y_1 \ldots \mathrm{d} y_l~\mathrm{d}\eta \right| \\ \notag &\ll 1 + \frac{1}{(\delta T)^l}\left| \int_{\alpha_1-iT^{2l}}^{\alpha_1+iT^{2l}} \frac{D(\eta)(2T+l\delta T)^{\eta-\alpha_2 + l + 1}}{\eta\prod_{j=1}^{l+1}(\eta-\alpha_2+j)}\mathrm{d}\eta\right|\\ &\ll 1 + \frac{T^{\alpha_1-\alpha_2+l + 1}}{(\delta T)^l} \int_{-T^{2l}}^{T^{2l}}\frac{(1+|t|)^{l-1}}{(1+ |t|)^{l+2}}\mathrm{d} t \ll 1 + T^{1-\alpha_2+\alpha_1}. \end{align} The theorem follows from (\ref{eq:measure_omega_pm_first_bound}), (\ref{eq:measure_omega_pm_trivial_part}) and 
(\ref{eq:measure_omega_pm_perron_part}). \end{proof} \begin{thm}\label{thm:omega_pm_measure} Consider $\alpha_1(T), \alpha_2(T), \sigma_1, h_1(T)$ as in Theorem~\ref{thm:upper_bound_measure}, and $\mathcal P$ as in Assumptions~\ref{as:for_continuation_mellintran}. Suppose that $D(s)$ does not have a real pole in $[\alpha_1-\epsilon_0, \infty)\setminus\mathcal{P}$, for some $\epsilon_0>0$. Suppose there exists an $\textbf{X}\text{-Set}$ $\mathcal{S}$ such that for all $T\in \mathcal{S}$ \[\mu(\mathcal{A}(x^{\alpha_2})\cap [T, 2T])> 5h_1(5T/2)T^{1-\alpha_2}.\] Then: \begin{enumerate} \item[(i)] under Assumptions~\ref{as:measure_omega_plus_minus}, we have \[\Delta(x)=\Omega_\pm(h_1(x))\] (in this case, $\Delta(x)$ changes sign in $[T/2, 5T/2]\cap \mathcal{A}(h_1(x))$ for all sufficiently large $T\in \mathcal{S}$); \item[(ii)] under Assumptions~\ref{as:measure_omega_plus_minus_weak}, we have \[\Delta(x)=\Omega_\pm(x^{\alpha_1-\epsilon}), \quad \text{for any } \ \epsilon>0.\] \end{enumerate} \end{thm} \begin{proof} If the conditions in Assumptions~\ref{as:measure_omega_plus_minus} hold, then (i) follows from Theorem~\ref{thm:upper_bound_measure}. To prove (ii), choose an $\epsilon$ such that $0<\epsilon<\epsilon_0$. Now suppose $\eta_0=\sigma+it$ is a pole of $D$ with $\sigma\geq\alpha_1(T)-\epsilon$ and $|t|\leq 2T^{2l}$; then by Theorem~\ref{thm:landu_omegapm} \[\Delta(x)=\Omega_\pm(x^{\alpha_1-\epsilon}).\] If there are no poles in the region described above, then we are in the set-up of Assumptions~\ref{as:measure_omega_plus_minus}, and get \[\Delta(x)=\Omega_\pm(h_1(x)).\] We have \[T^{\alpha_1(T)}=o(h_1(T)),\] which gives \[\Delta(x)=\Omega_\pm(x^{\alpha_1-\epsilon}).\] This completes the proof of (ii). \end{proof} \subsection{Applications}\vspace{0.05mm} We now give some examples demonstrating applications of Theorem~\ref{thm:omega_pm_measure}. 
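Before turning to the examples, a quick numerical illustration of the first application below may help fix ideas. The following sketch (our own illustration, assuming Python with numpy; the truncation point $N$ is arbitrary) tabulates the divisor function with a sieve and checks that the error in Dirichlet's asymptotic formula is far smaller than the main term:

```python
import numpy as np

GAMMA = 0.5772156649015329  # Euler--Mascheroni constant

N = 10_000
# Sieve the divisor function d(n) for n <= N:
# every i contributes a divisor to each of its multiples.
d = np.zeros(N + 1, dtype=np.int64)
for i in range(1, N + 1):
    d[i::i] += 1

x = float(N)
# Dirichlet: sum_{n <= x} d(n) = x log x + (2*gamma - 1) x + Delta(x).
# (We ignore the half-weight convention of the starred sum at integer x;
# it only shifts Delta by d(N)/2.)
main = x * np.log(x) + (2 * GAMMA - 1) * x
delta = d[1:].sum() - main

# Delta(x) = O(sqrt(x)); in fact it fluctuates roughly on the scale x^{1/4}.
assert abs(delta) < x ** 0.5
```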
\subsubsection{Error term of the divisor function} Let $d(n)$ denote the number of divisors of $n$: \[d(n)=\sum_{d|n}1.\] Dirichlet \cite[Theorem~320]{HardyWright} showed that \[\sum_{n\leq x}^*d(n) = x\log x + (2\gamma -1)x + \Delta(x), \] where $\gamma$ is the Euler--Mascheroni constant and \[\Delta(x)=O(\sqrt{x}).\] The best known upper bound on $\Delta(x)$ is due to Huxley \cite{HuxleyDivisorProblem}: \[\Delta(x)=O(x^{131/416}).\] On the other hand, Hardy \cite{HardyDirichletDivisor} showed that \begin{align*} \Delta(x)&=\Omega_+((x\log x)^{1/4}\log\log x),\\ \Delta(x)&=\Omega_-(x^{1/4}). \end{align*} There are many improvements of Hardy's result. Some notable results are due to K. Corr{\'a}di and I. K{\'a}tai \cite{CorradiKatai}, J. L. Hafner \cite{Hafner}, and K. Soundararajan \cite{Sound}. Below, we shall show that $\Delta(x)$ is $\Omega_\pm(x^{1/4})$ as a consequence of Theorem~\ref{thm:omega_pm_measure} and results of Ivi\'c and Tsang (see below). Moreover, we shall show that such fluctuations occur in $[T, 2T]$ for every sufficiently large $T$. \noindent Ivi\'c \cite{Ivic_second_moment_divisor_problem} proved that for a positive constant $c_{14}$, \[\int_{T}^{2T}\Delta^2(x)\mathrm{d} x \sim c_{14} T^{3/2}.\] A similar result for the fourth moment of $\Delta(x)$ was proved by Tsang \cite{Tsang}: \[\int_T^{2T}\Delta^4(x)\mathrm{d} x\sim c_{15} T^2,\] for a positive constant $c_{15}$. Let $\mathcal{A}$ denote the following set: \[\mathcal{A}:=\left\{x:|\Delta(x)|>\sqrt{\frac{c_{14}}{6}}\, x^{1/4}\right\}.\] For sufficiently large $T$, using the result of Ivi\'c \cite{Ivic_second_moment_divisor_problem}, we get \begin{align*} \int_{[T, 2T]\cap \mathcal{A}} \frac{\Delta^2(x)}{x^{3/2}}\mathrm{d} x &=\int_T^{2T}\frac{\Delta^2(x)}{x^{3/2}}\mathrm{d} x -\int_{[T, 2T]\cap\mathcal{A}^c}\frac{\Delta^2(x)}{x^{3/2}}\mathrm{d} x\\ & \geq \frac{1}{4T^{3/2}}\int_T^{2T}\Delta^2(x)\mathrm{d} x -\frac{c_{14}}{6} \\ & \geq \frac{c_{14}}{5} -\frac{c_{14}}{6} \geq \frac{c_{14}}{30}. 
\end{align*} Using the Cauchy--Schwarz inequality and the result due to Tsang \cite{Tsang}, we get \begin{align*} \int_{[T, 2T]\cap \mathcal{A}} \frac{\Delta^2(x)}{x^{3/2}}\mathrm{d} x &\leq \left(\int_{[T, 2T]\cap\mathcal{A}}\frac{\Delta^4(x)}{x^2}\mathrm{d} x\right)^{1/2} \left(\int_{[T, 2T]\cap\mathcal{A}}\frac{1}{x}\mathrm{d} x\right)^{1/2}\\ & \leq \left(\frac{c_{15}\mu([T, 2T]\cap \mathcal{A})}{T}\right)^{1/2}. \end{align*} The above lower and upper bounds on the second moment of $\Delta$ give the following lower bound for the measure of $\mathcal{A}$: \begin{equation*} \mu([T, 2T]\cap \mathcal{A})> \frac{c_{14}^2}{901 c_{15}}T, \end{equation*} for all $T\geq T_0$. Now, Theorem~\ref{thm:omega_pm_measure} applies with the following choices: \[\alpha_1(T)=1/5, \quad \alpha_2(T)=1/4, \quad h_1(T)=\frac{c_{14}^2}{9000c_{15}}T^{1/4}.\] Finally, using Theorem~\ref{thm:omega_pm_measure}, we get that for all $T\geq T_0$ there exist $x_1, x_2 \in [T, 2T]$ such that \begin{align*} \Delta(x_1)> h_1(x_1) \ \text{ and } \ \Delta(x_2)< - h_1(x_2). \end{align*} In particular, we get \begin{align*} \Delta(x)=\Omega_\pm(x^{1/4}). \end{align*} \subsubsection{Error term of a twisted divisor function}\label{subsec:twisted_divisor_omega_plus_minus} \ Recall that in (\ref{eq:formmula_tau_ntheta}) and (\ref{eq:upper_bound_delta}), we defined $\Delta(x)$ as the error term that occurs while approximating $\sum_{n\leq x}^*|\tau(n, \theta)|^2$. Also recall that the corresponding Dirichlet series is given by \[D(s)=\sum_{n=1}^{\infty}\frac{|\tau(n, \theta)|^2}{n^s}=\frac{\zeta^2(s)\zeta(s+i\theta)\zeta(s-i\theta)}{\zeta(2s)}.\] Here the main term $\mathcal{M}(x)$ comes from the poles at $1$ and $1\pm i\theta$. Now we assume a zero-free region for $D(\sigma+it)$, and estimate the growth of $D(\sigma+it)$ in that region. 
\begin{lem}\label{lem:polynomial_growth_critical_strip_2} Let $\delta$ and $\sigma$ be such that \begin{align*} 0<\delta < \frac{1}{8}, \mbox{ and } \quad \frac{3}{8}-\delta \leq \sigma < \frac{1}{2}. \end{align*} If $D(\sigma+it)$ does not have a pole in the above-mentioned range of $\sigma$, then for \[\frac{3}{8} -\delta +\frac{\delta}{2(1 + \log\log (3 + |t|))}<\sigma < \frac{1}{2},\] we have \[D(\sigma + it)\ll_{\delta, \theta} |t|^{2-2\sigma+\epsilon}\] for any positive constant $\epsilon$. \end{lem} \begin{proof} Let $s=\sigma+it$ with $3/8-\delta\leq\sigma<1/2$. Recall that \[D(s)=\frac{\zeta^2(s)\zeta(s+i\theta)\zeta(s-i\theta)}{\zeta(2s)}.\] Using the functional equation, we write \begin{equation}\label{eq:functional_eq_tau_n_theta} D(s)=\mathcal{X}(s)\frac{\zeta^2(1-s)\zeta(1-s-i\theta)\zeta(1-s+i\theta)}{\zeta(2s)}, \end{equation} where the order of $\mathcal{X}(s)$ (which can be obtained from Stirling's formula for $\Gamma$) is \begin{equation}\label{eq:upperbound_chi} \mathcal{X}(\sigma+it)\asymp t^{2-4\sigma}. \end{equation} Using Stirling's formula and the Phragm\'en--Lindel\"of principle, we get \begin{equation*} |\zeta(1-s)|\ll |t|^{\sigma/2}\log t. \end{equation*} So we get \begin{equation}\label{eq:upperbound_numerator_tauntheta} |\zeta^2(1-s)\zeta(1-s-i\theta)\zeta(1-s+i\theta)|\ll t^{2\sigma}(\log t)^4. \end{equation} An upper bound for $|\zeta(2s)|^{-1}$ can be calculated in a similar way as in Lemma~\ref{estimate-on-J(T)}: \begin{equation}\label{eq:upperbound_denominator_tauntheta} |\zeta(2s)|^{-1}\ll \exp\left(c_{16}(\log\log t) (\log t)^{\frac{4(1-2\sigma)}{1+8\delta}}\right), \end{equation} for a suitable constant $c_{16}>0$ depending on $\delta$. The bound in the lemma follows from (\ref{eq:functional_eq_tau_n_theta}), (\ref{eq:upperbound_chi}), (\ref{eq:upperbound_numerator_tauntheta}) and (\ref{eq:upperbound_denominator_tauntheta}). \end{proof} Now we complete the proof of Theorem \ref{thm:tau_theta_omega_pm}. 
\begin{proof}[Proof of Theorem \ref{thm:tau_theta_omega_pm}] Let $M$ be any large positive constant, and define \[\mathcal{A}:=\mathcal{A}(Mx^{\alpha(x)}).\] Then from Corollary~\ref{coro:balu_ramachandra1}, we have \[\int_{[T, 2T]\cap \mathcal{A}}\frac{\Delta^2(x)}{x^{2\alpha(T)+1}} \mathrm{d} x \gg \exp\left(c(\log T)^{7/8}\right).\] Assuming \begin{equation}\label{eq:upper_bound_measure_asump} \mu([T, 2T]\cap \mathcal{A})\leq T^{1-\delta} \quad \text{for} \ T>T_0, \end{equation} Proposition~\ref{prop:refine_omega_from_measure} gives \[\Delta(x)=\Omega(x^{\alpha(x) +\delta/2})\] as $h_0(T)=T^{1-\delta}$, which is the first part of the theorem. But if (\ref{eq:upper_bound_measure_asump}) does not hold, then we have \[\mu([T, 2T]\cap \mathcal{A})> T^{1-\delta} \] for $T$ in an $\textbf{X}\text{-Set}$. We choose \[h_1(T)=T^{\frac{3}{8}-\frac{2c}{(\log T)^{1/8}}-\delta}, \ \alpha_1(T)=\frac{3}{8}-\frac{3c}{(\log T)^{1/8}}-\delta, \ \alpha_2(T)=\alpha(T).\] Let $\delta''$ be such that $\delta<\delta''<\delta'$. If $D(\sigma + it)$ does not have a pole for $\sigma>3/8-\delta''$, then by Lemma~\ref{lem:polynomial_growth_critical_strip_2}, $D(\alpha_1(T) + it)$ has polynomial growth. So Assumptions~\ref{as:measure_omega_plus_minus_weak} hold. Since \[T^{1-\delta}>5h_1(5T/2)T^{1-\alpha_2(T)},\] case (ii) of Theorem~\ref{thm:omega_pm_measure} gives \[\Delta(x)=\Omega_{\pm}\left(x^{\frac{3}{8}-\frac{3c}{(\log x)^{1/8}}-\delta''}\right).\] The second part of the theorem follows by the choice $\delta''<\delta'$. \end{proof} \subsubsection{Average order of non-isomorphic abelian groups}\label{sec:sub_abelian_group} \ Let $a_n$ denote the number of non-isomorphic abelian groups of order $n$. The Dirichlet series $D(s)$ is given by \[D(s)=\sum_{n=1}^{\infty}\frac{a_n}{n^s} =\prod_{k=1}^{\infty}\zeta(ks),\quad \text{Re}(s)>1.\] The meromorphic continuation of $D(s)$ has poles at $1/k$ for every positive integer $k$. 
Let the main term $\mathcal{M}(x)$ be \[\mathcal{M}(x)=\sum_{k=1}^{6}\Big( \prod_{j\neq k} \zeta(j/k) \Big)x^{1/k},\] and the error term $\Delta(x)$ be \[\Delta(x)=\sum_{n\leq x}^* a_n - \mathcal{M}(x).\] Balasubramanian and Ramachandra \cite{BaluRamachandra2} proved that \begin{equation*} \int_T^{2T}\Delta^2(x)\mathrm{d} x=\Omega(T^{4/3}\log T), \text{ and } \Delta(x)=\Omega_{\pm}(x^{92/1221}). \end{equation*} Sankaranarayanan and Srinivas \cite{srini} improved the $\Omega_\pm$ bound to \[ \Delta(x)=\Omega_{\pm}\left(x^{1/10}\exp\left(c\sqrt{\log x}\right)\right)\] for some constant $c>0$. An upper bound for the second moment of $\Delta(x)$ was first given by Ivi\'c \cite{Ivic_abelian_group}, and then improved by Heath-Brown \cite{Brown} to \[\int_T^{2T}\Delta^2(x)\mathrm{d} x\ll T^{4/3}(\log T)^{89}.\] This bound of Heath-Brown is best possible in terms of the power of $T$. For the fourth moment, however, the analogous statement \[\int_T^{2T}\Delta^4(x)\mathrm{d} x\ll T^{5/3}(\log T)^C,\] which would be best possible in terms of the power of $T$, is an open problem. Another open problem is to show that \[\Delta(x)=\Omega_\pm(x^{1/6-\delta}),\] for any $\delta>0$. In the next proposition, we shall show that at least one of these two statements is true. \begin{prop}\label{prop:abelian_group} Let $\delta$ be such that $0<\delta<1/42$. Then either \[\int_T^{2T}\Delta^4(x)\mathrm{d} x=\Omega( T^{5/3+\delta} ),\] or \[\Delta(x)=\Omega_\pm(x^{1/6-\delta}).\] \end{prop} \begin{proof} If the first statement is false, then we have \[\int_T^{2T}\Delta^4(x)\mathrm{d} x\leq c_{17} T^{5/3+\delta}, \] for some constant $c_{17}$ depending on $\delta$ and for all $T\geq T_0$. Let $\mathcal{A}$ be defined by: \[\mathcal{A}=\{x: |\Delta(x)|>c_{18}x^{1/6}\}, \quad c_{18}>0.\] By the result of Balasubramanian and Ramachandra \cite{BaluRamachandra2}, we have an $\textbf{X}\text{-Set}$ $\mathcal{S}$, such that \[\int_{[T, 2T]\cap\mathcal{A}}\Delta^2(x)\mathrm{d} x \geq c_{19}T^{4/3}(\log T)\] for $T\in \mathcal{S}$. 
Using the Cauchy--Schwarz inequality, we get \begin{align*} c_{19}T^{4/3}(\log T)&\leq \int_{[T, 2T]\cap\mathcal{A}}\Delta^2(x)\mathrm{d} x \\ &\leq \left(\int_T^{2T}\Delta^4(x)\mathrm{d} x\right)^{1/2}(\mu(\mathcal{A}\cap[T, 2T]))^{1/2}\\ &\leq c_{17}^{1/2}T^{5/6+\delta/2}(\mu(\mathcal{A}\cap[T, 2T]))^{1/2}. \end{align*} This gives, for a suitable positive constant $c_{20}$, \[\mu(\mathcal{A}\cap[T, 2T])\geq c_{20}T^{1-\delta}(\log T)^2.\] Now we use Theorem~\ref{thm:omega_pm_measure}(i) with \[\alpha_2=\frac{1}{6}, \quad \alpha_1=\frac{13}{84}-\frac{\delta}{2}, \quad \mbox{ and } \quad h_1(T)=T^{1/6-\delta}.\] So we get \[\Delta(x)=\Omega_\pm(x^{1/6-\delta}).\] This completes the proof. \end{proof} \bibliographystyle{abbrv}
\section{Introduction} Collaborative Filtering (CF) methods are a class of recommendation techniques that use the \emph{past} interactions of other users to filter items for a single user. Broadly speaking, CF methods are categorized into memory-based and model-based methods. Memory-based methods are known for their simplicity and competitive performance \cite{volkovs2015effective}. Recently, they have been successfully used for session-based recommendations~\cite{Jannach:2017:RNN:3109859.3109872}, and they are still used as a part of the recommendation solution in industry~\cite{slack}. Memory-based methods like user-kNN and item-kNN extract user (or item) similarities, which are used to form user (or item) neighborhoods by taking the $k$-nearest neighbors. These neighborhoods are then used to filter items for a user. Calculating the similarity effectively is of great importance in these methods. One of the most commonly used similarity metrics is cosine similarity. Formally, the cosine similarity between two users $x$ and $y$ can be defined as: \begin{equation} \label{eq:cosine} \sigma = \frac{\sum_{i=1}^{n}x_iy_i}{ \sqrt{ \sum_{i=1}^{n} x_i^2} \sqrt{ \sum_{i=1}^{n} y_i^2} }, \end{equation} where $n$ is the total number of samples (items in this case) and $x_i$ and $y_i$ represent the preferences of user $x$ and user $y$ on the $i$-th item, respectively. The similarity between two items is defined in a similar manner. If the data is centered, then the cosine similarity is equivalent to the empirical correlation, which is calculated by: \begin{equation} \label{eq:pearson} \sigma = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{ \sqrt{ \sum_{i=1}^{n} (x_i-\bar{x})^2} \sqrt{ \sum_{i=1}^{n} (y_i-\bar{y})^2} }, \end{equation} where $\bar{x}$ is the sample mean, i.e., $\frac{1}{n}\sum_{i=1}^{n}x_i$, and analogously for $\bar{y}$. 
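As a concrete illustration of this equivalence (a minimal sketch in Python/numpy with hypothetical rating vectors, not taken from any dataset), centering the data first makes Equation \ref{eq:cosine} reproduce Equation \ref{eq:pearson}:

```python
import numpy as np

def cosine_sim(x, y):
    # Cosine similarity: inner product over the product of norms.
    return x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

def pearson(x, y):
    # Empirical correlation: cosine similarity of the centered vectors.
    return cosine_sim(x - x.mean(), y - y.mean())

x = np.array([5.0, 3.0, 0.0, 1.0])  # hypothetical ratings of user x
y = np.array([4.0, 0.0, 0.0, 1.0])  # hypothetical ratings of user y

# On raw (uncentered) data the two measures differ,
# but on centered data cosine similarity equals the empirical correlation.
assert not np.isclose(cosine_sim(x, y), pearson(x, y))
assert np.isclose(pearson(x, y), np.corrcoef(x, y)[0, 1])
```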
The empirical correlation, and hence the cosine similarity, is a good estimate of the true correlation when the number of samples is large. However, in practice the number of users is of the same order as the number of items, and the ratio of the number of users to the number of items is not very small compared to 1. In this case, the empirical correlations are dominated by noise, and care should be taken while using them as similarities. The correlations between users (or items) can be viewed as an empirical correlation matrix where each entry denotes the empirical correlation of the entities represented by its index, e.g., the entry at index $(1,5)$ of the user empirical correlation matrix would be the correlation between user 1 and user 5. Results from random matrix theory (RMT) can then be used to understand the structure of the eigenvalues and eigenvectors of this empirical correlation matrix. The main contributions of this paper are as follows: \begin{itemize} \item We analyze the structure and spectral properties of the Pearson and cosine similarity. \item We argue that cosine similarity possesses the desirable property of eigenvalue shrinkage. \item We quantify the overestimation of the largest eigenvalue in cosine similarity. \item We show that the theoretical results regarding the distribution of eigenvalues of random matrices can be used to clean the noise from the empirical user/item correlation matrix. \end{itemize} \section{Preliminaries of RMT}\label{PRMT} RMT theorems make statements about the spectral properties of large random correlation matrices\footnote{RMT theorems are also applicable to other general matrices.}. They apply in the case when an $n \times m$ random matrix $\mathbf{X}$ with zero-mean, independent and identically distributed (i.i.d.) entries is such that $m,n \rightarrow \infty$ and the ratio $m/n \rightarrow q \in (0,1]$. 
Interestingly, the eigenvalue distribution of the empirical correlation matrix of $\mathbf{X}$ is known exactly under these conditions and is given by the Mar\v{c}enko--Pastur law (MP-law): \begin{equation} \label{RMT} \rho_{\mathbf{X}}(\lambda) = \frac{1}{2 \pi q \lambda} \sqrt{(\lambda_{max}-\lambda)(\lambda-\lambda_{min})}, \end{equation} where the eigenvalue $\lambda \in [\lambda_{min}, \lambda_{max}]$, $\lambda_{max} = (1+\sqrt{q})^2$, and $\lambda_{min} = (1-\sqrt{q})^2$. This result implies that there should be no eigenvalues outside the interval $[\lambda_{min}, \lambda_{max}]$ for a random noise correlation matrix. A plot of the density of Equation \ref{RMT} is shown in Figure \ref{fig:RMTrand}, along with the eigenvalue distribution of a random item correlation matrix formed by randomly permuting the entries of each column of a user-item feedback matrix. As we can see, the histogram follows the theoretical MP-law distribution quite accurately. \section{Cleaning the correlation matrix}\label{cleaning} Using the fact that a pure-noise correlation matrix has an eigenvalue distribution that follows the MP-law in the limiting case, we can clean the user (or item) correlation matrix by comparing its empirical eigenvalue distribution with that of the MP-law. If the bulk of the eigenvalues are within the range $[\lambda_{min}, \lambda_{max}]$ and their distribution resembles the MP-law, then it is most probably due to noise and can be ignored. A simple strategy is to remove all eigenvalues within the RMT ``noise bulk'' range, i.e., $[\lambda_{min},\lambda_{max}]$, by setting them to 0, and to retain the rest of the eigenvalues. However, in practice the eigenvalue distribution in the noise bulk range does not follow the MP-law exactly. Therefore, a cutoff point near $\lambda_{max}$ is used instead of $\lambda_{max}$. This cutoff point $\lambda_{cut}$ is usually searched for within a range near $\lambda_{max}$. 
This strategy is known as eigenvalue clipping \cite{bouchaud2009financial}. \begin{figure} \centering \includegraphics[scale=0.2]{fig/OnlineRetailRandomItemView} \caption{The solid line shows the plot of the MP-law density from Equation \ref{RMT}. The histogram obtained from eigenvalues of a random matrix follows the MP-law distribution. } \label{fig:RMTrand} \end{figure} \subsection{Eigenvalue spreading} The empirical correlation estimator of Equation \ref{eq:pearson}, also known as the Pearson or the sample correlation matrix, is a common estimator of the true user or item correlation matrix. When we have a much larger number of data cases compared to the number of features, i.e., $q \rightarrow 0$, this estimator approaches the true correlation matrix. However, when the number of data cases and the number of features are of the same order, i.e., $q = O(1)$, the MP-law states that the empirical correlation estimate becomes a noisy estimate of the true correlation matrix. This is because if the true correlation matrix is an identity matrix (pure noise), then the distribution of the eigenvalues of the empirical correlation is not a single spike at 1; rather, it is spread out as shown in Figure \ref{fig:RMTrand}. This spreading out depends on $q$ itself and is given by the MP-law stated in Equation \ref{RMT}. The spectrum gets more spread out (noisier) as $q$ increases. This tells us that when we have a data sample in the regime $q =O(1)$, the small eigenvalues are \emph{smaller} and the large eigenvalues are \emph{larger} compared to the corresponding eigenvalues of the true correlation matrix. Therefore, the cleaning strategy should take this into account and shrink the estimated eigenvalues appropriately. \subsection{Zero-mean assumption} The Pearson estimator is more general, as it does not assume that the data is zero-mean, which is often the case in practice. 
However, the data in collaborative filtering are large and sparse, and applying the Pearson correlation estimator on this data would imply making the large user-item matrix $\mathbf{X}$ dense (by removing the mean from each entry of the matrix). This is problematic from both the memory and computational points of view. The MP-law is stated for zero-mean data. The Pearson estimator standardizes the data to make it zero-mean; therefore, we can use the MP-law results. In this subsection, we show that we can also use the findings from the MP-law when the data is not zero-mean. This is because any matrix $\mathbf{X}$ can be written as: \begin{equation} \label{eq:mean} \tilde{\mathbf{X}} = \mathbf{X} - \mathbf{M}, \end{equation} where $\tilde{\mathbf{X}}$ is the demeaned version of $\mathbf{X}$ and $\mathbf{M} = \mathbf{1}_n \mathbf{m}$ is an $n \times m$ matrix in which each row is equal to the vector $\mathbf{m}$. Here, $\mathbf{m}$ is a $1 \times m$ row vector that contains the column means of the corresponding columns of $\mathbf{X}$, and $\mathbf{1}_n$ is an $n \times 1$ vector of all 1's. Then we can rewrite the Pearson correlation estimator as: \begin{equation} \label{eq:pearsonnew} \mathbf{E}_p = \frac{1}{n}\tilde{\mathbf{X}}^T\tilde{\mathbf{X}} = \frac{1}{n}(\mathbf{X}^T\mathbf{X} - \mathbf{M}^T\mathbf{M}), \end{equation} where, w.l.o.g. and for simplicity of notation, we assume that the data has unit variance. It is trivial to see that $\mathbf{M}^T\mathbf{M}$ is of rank 1 and has a single non-zero eigenvalue $\xi$, which is positive. 
We know from the subadditivity property of rank that: \begin{align} \label{eq:rankUB} rank(\mathbf{X}^T\mathbf{X})&=rank(\tilde{\mathbf{X}}^T\tilde{\mathbf{X}} + \mathbf{M}^T\mathbf{M} )\\ &\leq rank(\tilde{\mathbf{X}}^T\tilde{\mathbf{X}}) + rank(\mathbf{M}^T\mathbf{M}),\\ & \leq N +1, \end{align} where $rank(\tilde{\mathbf{X}}^T\tilde{\mathbf{X}})=N$. It can also be shown \cite{944751} that, since $rank( \mathbf{M}^T\mathbf{M})=1$, \begin{equation} \label{eq:rankLB} rank(\mathbf{X}^T\mathbf{X})=rank(\tilde{\mathbf{X}}^T\tilde{\mathbf{X}} + \mathbf{M}^T\mathbf{M} ) \geq N -1. \end{equation} Therefore, the rank of the correlation matrix ($\frac{1}{n}\mathbf{X}^T\mathbf{X}$) of the data will change by at most 1, if at all, compared with the rank of the correlation matrix of the demeaned data. As we will see next, the eigenvalue $\xi$ is positive and large, so it will \emph{only} affect the top eigenvalues of the correlation matrix of the original data. In Figure \ref{fig:diff1} we plot the \emph{difference} in the eigenvalue magnitudes of the user correlation matrices of the original data and the demeaned data for the Movielens1M dataset, where the eigenvalues of both matrices are sorted in ascending order of magnitude. We can see a huge positive spike at the largest eigenvalue, signifying that the largest eigenvalue of the original data correlation matrix is overestimated, and a couple of relatively negligible spikes. From the discussion in the previous subsection, the largest eigenvalue of the demeaned data correlation matrix is already overestimated, and the effect of not removing the mean exaggerates it further. Therefore, the effect of not removing the mean from the data is that the largest eigenvalue of the correlation matrix is overestimated. 
\begin{figure}[] \includegraphics[scale=.20]{fig/difffinal} \caption{The magnitude of the difference in the corresponding eigenvalues of the original data correlation matrix and the demeaned data correlation matrix is shown on the y-axis, against the ID of the eigenvalue on the x-axis.} \label{fig:diff1} \end{figure} In the context of recommender systems, where the data are sparse and large, this means that we can operate on the sparse data matrices by correcting for this overestimation. Moreover, since not demeaning the data effectively just changes the top eigenvalue, we can still use the eigenvalue clipping strategy and other insights based on the MP-law. \subsection{Quantifying the overestimation} Interestingly, this overestimation can be quantified by the eigenvalue of $\frac{1}{n}\mathbf{M}^T\mathbf{M}$. The sum of the differences shown in Figure \ref{fig:diff1} is exactly equal to $\xi$. This is trivially true, since the trace of the data correlation matrix is preserved. We do not need to perform the eigenvalue decomposition of $\frac{1}{n}\mathbf{M}^T\mathbf{M}$ to get $\xi$. This is because, firstly, the non-zero eigenvalue of a rank-1 matrix is equal to its trace, by the following argument: $\frac{1}{n}\mathbf{M}^T\mathbf{M}=uv^T$ is an $m \times m$ rank-1 matrix, where $u,v$ are $m \times 1$ vectors. Since $m > 1$, the matrix is singular and has 0 as an eigenvalue. If $\mu$ is the eigenvector associated with $\xi$, then: \begin{align} (uv^T) \mu & = \xi \mu,\\ u(v^T \mu) / \xi & = \mu, \end{align} and since $(v^T \mu) / \xi$ is a scalar, $u$ is also an eigenvector associated with $\xi$. Then, it follows that $u(v^T u) = \xi u$, and as $u \neq 0$ we have $\xi = (v^T u) = \sum_{i=1}^{m}v_iu_i = Tr(\frac{1}{n}\mathbf{M}^T\mathbf{M}) $. Secondly, the trace of $\frac{1}{n}\mathbf{M}^T\mathbf{M}$ is non-zero by the construction of the matrix $\mathbf{M}$. 
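These facts can be checked numerically. The sketch below (our own sanity check, assuming numpy; the matrix sizes and the uniform entries are arbitrary, and the unit-variance normalization is skipped since the trace identity holds regardless) verifies that the eigenvalue differences sum to $\xi$ and that essentially all of the difference sits in the largest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 100                      # data cases x features (arbitrary sizes)
X = rng.random((n, m))               # non-negative entries, hence non-zero mean

mean = X.mean(axis=0)                # the row vector m
Xt = X - mean                        # demeaned data \tilde{X}
M = np.outer(np.ones(n), mean)       # rank-1 mean matrix

C_raw = X.T @ X / n                  # correlation matrix of the original data
C_dem = Xt.T @ Xt / n                # correlation matrix of the demeaned data

ev_raw = np.sort(np.linalg.eigvalsh(C_raw))
ev_dem = np.sort(np.linalg.eigvalsh(C_dem))

xi = np.trace(M.T @ M) / n           # = sum_i m_i^2, the eigenvalue of M^T M / n
diff = ev_raw - ev_dem

# Trace preservation: the differences sum to xi ...
assert np.isclose(diff.sum(), xi)
# ... every eigenvalue weakly increases (a PSD rank-1 update) ...
assert (diff > -1e-8).all()
# ... and almost all of the difference sits in the largest eigenvalue.
assert diff[-1] > 0.9 * xi
```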
The matrix $\frac{1}{n}\mathbf{M}^T\mathbf{M}$ is dense, and when $m$ is large, computing this matrix becomes infeasible. However, we notice that we are only interested in the diagonal of the above matrix and not the complete matrix. Therefore, the above trace can be calculated efficiently by: \begin{equation} Tr(\frac{1}{n} \mathbf{M}^T\mathbf{M}) = \sum_{i=1}^{m} n \tilde{m}_i^2, \label{eqn:overestimate} \end{equation} where $\tilde{m}_i = m_i/ \sqrt{n}$ and $m_i$ is the $i$-th element of $\mathbf{m}$. Equation \ref{eqn:overestimate} gives us an efficient way to quantify the overestimation in the top eigenvalue of $\mathbf{X}^T\mathbf{X}$ \footnote{The discussion so far generalizes to the case when the columns of $\mathbf{X}$ do not have unit variance, by dividing each column of $\mathbf{X}$ and $\mathbf{M}$ by the standard deviation of the corresponding column of $\mathbf{X}$.}. \subsection{Eigenvalue shrinkage} Before we outline our cleaning procedure, we briefly discuss cosine similarity. Cosine similarity assumes that the data is zero-mean; however, this is not true in general. Moreover, based on our previous discussion, it does not correct for this by rescaling the largest eigenvalue. However, when we plot the difference in the eigenvalues of the cosine similarity and the Pearson correlation, we find some interesting results. As seen in Figure \ref{fig:diffc1}, we have a large spike at the top eigenvalue as before, which is expected, since cosine similarity does not remove the mean. This is followed by some oscillations, some of which are negative. This can be due to the difference in the variances. Finally, and more importantly, unlike before, the difference between the magnitudes of the eigenvalues of cosine similarity and Pearson correlation for all the other top eigenvalues is not very close to 0. In fact, we can see a gradual negative slope in the zoomed-in plot in Figure \ref{fig:diffc1}, which was not visible before. 
\begin{figure}[] \includegraphics[scale=.20]{fig/diffC1f} \caption{The magnitude of the difference in the corresponding eigenvalues of the Pearson correlation matrix and cosine similarity matrix is shown. The negative slope, highlighted by the red box, signifies the shrinkage property of cosine similarity.} \label{fig:diffc1} \end{figure} This negative slope signifies that the top eigenvalues of cosine similarity (except the maximum eigenvalue) are shrunk compared to the eigenvalues of the Pearson correlation. Therefore, cosine similarity implicitly performs eigenvalue shrinkage. The reason for this shrinkage is that the column variances of the data calculated in the Pearson correlation and cosine similarity are not the same. This can be seen from the denominators of Equation \ref{eq:cosine} and Equation \ref{eq:pearson}. When this is the case, we cannot write a simple expression like Equation \ref{eq:pearsonnew}, since both matrices on the right-hand side will have different column variances (the $\mathbf{M}^T \mathbf{M}$ matrix comes from the Pearson correlation). Consequently, the simple analysis that followed will not hold; hence the effect of not removing the mean is more complex, in this case taking the form of shrinkage of the top eigenvalues except the maximum eigenvalue. \subsection{Cleaning algorithm} Below we outline a linear-time and memory-efficient similarity matrix cleaning strategy that explicitly shrinks the top eigenvalue, inherits the shrinkage property of cosine similarity for other eigenvalues\footnote{This shrinkage (both explicit and inherent) is not present in vanilla SVD/PCA.}, and removes noise by clipping the smaller eigenvalues. \begin{algorithm} \caption{Clean-KNN($\mathbf{X}$,$F$)}\label{algo} \footnotesize \begin{description} \item[Inputs:] Sparse user-item matrix $\mathbf{X}$, number of top eigenvalues $F$.
\\ \end{description} \begin{algorithmic}[1] \Procedure{Learn Item-Item Similarity}{} \State\hskip-\ALG@thistlm \emph{One-pass over non-zero entries:} \State Calculate column mean vector $\mathbf{m}$; \State Calculate column sum vector $\mathbf{\sigma}$; \State\hskip-\ALG@thistlm \emph{One-pass over the non-zero entries} $x_{ij}$ \emph{of} $\mathbf{X}$: \State $\mathbf{X'}=[x_{ij}/ \sigma_j]_{ij}$, divide each $x_{ij}$ by its column sum $\sigma_j$ to form $\mathbf{X'}$;\label{alg.normalize} \State\hskip-\ALG@thistlm \textit{Get the top} $F$ \textit{singular value matrix} $\mathbf{S}$ \textit{and right-singular vector matrix} $\mathbf{V}$: \State $[\mathbf{V},\mathbf{S}]\gets$ svds$(\mathbf{X'}, F)$ via the Lanczos algorithm in roughly $O(n_{nz})$ time; \label{alg.svds} \State\hskip-\ALG@thistlm \emph{Adjust maximum eigenvalue}: \State $\mathbf{m} \gets \mathbf{m}./(\mathbf{\sigma}.\sqrt{n})$; \State $s_{top}^2 \gets s_{top}^2- \sum_{i=1}^{m}n m_i^2$; $\;\; \lambda_{top}=\sqrt{s_{top}^2}$; \label{alg.shrink} \State\hskip-\ALG@thistlm \emph{Get the cleaned, low-dimensional similarity representation: } \State $\mathbf{S} \gets \mathbf{V} \times (\mathbf{S}.^2)$;$\;\; \mathbf{V} \gets \mathbf{V}$;\label{alg.lds} \State\hskip-\ALG@thistlm For items $i$ and $j$, the similarity/correlation is $c_{ij} = S_i \times V_{j}^T$. \EndProcedure \end{algorithmic} \end{algorithm} where ``$.$'' denotes element-wise operation on vectors and matrices, $S_i$ and $V_j$ denote the $i$-th and $j$-th rows of the respective matrices, $s_{top}$ is the largest singular value, $\lambda_{top}$ is the largest eigenvalue, and $n_{nz}$ is the number of non-zeros. Clean-KNN starts by calculating the mean and sum of each column of $\mathbf{X}$, and then it normalizes $\mathbf{X}$ in line \ref{alg.normalize} to form $\mathbf{X'}$. This is so that $\mathbf{X'}^T \mathbf{X'}$ is equal to the cosine similarity matrix of $\mathbf{X}$.
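The core of Algorithm \ref{algo} can be sketched in Python as follows (our own illustrative code, using SciPy's sparse \texttt{svds} in place of MATLAB's; here we normalize by the column $L_2$ norms, so that $\mathbf{X'}^T\mathbf{X'}$ is exactly the cosine similarity matrix of $\mathbf{X}$ -- for a binary implicit-feedback matrix this is the square root of the column sum):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

def clean_knn(X, F):
    """Sketch of Clean-KNN: returns (S, V) with similarity c_ij = S[i] @ V[j]."""
    n, _ = X.shape
    col_mean = np.asarray(X.mean(axis=0)).ravel()
    # Column normalizer sigma (L2 norm), so X'^T X' is the cosine similarity.
    sigma = np.sqrt(np.asarray(X.power(2).sum(axis=0)).ravel())
    sigma[sigma == 0] = 1.0                      # guard against empty columns
    Xp = X @ sp.diags(1.0 / sigma)               # x'_ij = x_ij / sigma_j
    _, s, Vt = svds(Xp, k=F)                     # Lanczos-based truncated SVD
    lam = s**2                                   # eigenvalues of X'^T X'
    # Shrink the top eigenvalue by the overestimation Tr((1/n) M^T M):
    m_tilde = col_mean / (sigma * np.sqrt(n))
    lam[np.argmax(lam)] -= np.sum(n * m_tilde**2)
    V = Vt.T                                     # m x F right-singular vectors
    S = V * lam                                  # scale each column by its eigenvalue
    return S, V

# Tiny usage example on synthetic sparse data.
X = sp.random(50, 20, density=0.2, format="csr", random_state=0)
S, V = clean_knn(X, F=5)
sim_01 = S[0] @ V[1]                             # similarity of items 0 and 1
```

Only the $m \times F$ factors $S$ and $V$ are kept, so the dense $m \times m$ similarity matrix is never formed.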
Since, for real matrices, the squares of the \emph{singular values} of $\mathbf{X'}$ are equal to the \emph{eigenvalues} of $\mathbf{X'}^T \mathbf{X'}$ while the eigenvectors are the same, Clean-KNN just calculates the right-singular vectors and singular values of $\mathbf{X'}$ in line \ref{alg.svds}. In line \ref{alg.shrink}, the top eigenvalue is shrunk according to Equation \ref{eqn:overestimate}. Finally, we get the low-dimensional similarity representation in line \ref{alg.lds}. We note that Clean-KNN can also be used for user-user similarity by transposing $\mathbf{X}$. \section{Experiments}\label{EXP} We aim to answer the following questions via quantitative evaluation: i) Is noise removed by removing the bulk of the eigenvalues? ii) Does the shrinkage of $\lambda_{top}$ improve performance? For our experiments we used the Movielens1M dataset\footnote{\scriptsize \url{https://grouplens.org/datasets/movielens/1m/}} (ML1M) and converted it to implicit feedback\footnote{We focused on implicit feedback since it is closer to real user behavior and is the focus of most research; however, our results generalize to explicit feedback.} by ignoring the rating magnitudes. We used four evaluation metrics, namely recall@50 (R@50), normalized discounted cumulative gain (NDCG@50), area under the curve (AUC), and diversity@50 (D@50). D@N is the total number of distinct items in the top-N lists across all users. \subsection{Baselines and Parameters} Weighted user-KNN (WUkNN) and weighted item-KNN (WIkNN) were used as the base recommenders, with the similarity function defined by Equation \ref{eq:cosine}. We also compare our performance with a well-known item recommender, SLIM \cite{ning2011slim}, and the vanilla SVD recommender (svds in MATLAB), which used the same number of factors $F$ as Clean-KNN. We performed 5-fold cross-validation to select the parameters.
We searched for $\lambda_{cut}$ by incrementing $F$ by $10$ when $10 \leq F \leq 100$, and in increments of $100$ afterwards, until we reached close to $\lambda_{max}$. \section{Results}\label{Result} The results are shown in Table \ref{tab:mainResult}. It is worth mentioning here that we do not aim to provide state-of-the-art results; rather, we aim to gain insights into the similarity metrics used by memory-based methods and demonstrate the effects of these insights on the performance. We note that Clean-KNN improves performance over the vanilla kNN. We also see that it is better than vanilla SVD with the same number of factors. \subsection{Is noise removed?} The table is divided into subsections by dashed horizontal lines. In each subsection we want to highlight two scenarios: (a) the best base KNN recommender, and (b) the noise-removed Clean-KNN recommender of Algorithm \ref{algo}. We can see that the performance of scenario (b) is better than scenario (a). This signifies that most of the removed eigenvalues did not carry much useful information and hence can be categorized as noise. \subsection{Does shrinkage of $\lambda_{top}$ help?} To answer this question we have to compare a base user or item-KNN recommender with a recommender that contains all the eigenvalues but shrinks the top eigenvalue according to Equation \ref{eqn:overestimate}. Note that this recommender is created to illustrate the effectiveness of the shrinkage procedure. The performance of this recommender is shown in Table \ref{tab:mainResult} and labeled as (c). We see that the performance of scenario (c) is always better than scenario (a). This confirms that just by shrinking $\lambda_{top}$ we get improved performance. In addition, scenario (c) is still outperformed by scenario (b), which confirms the utility of the clipping strategy.
\sisetup{detect-weight=true,detect-inline-weight=math} \begin{table}[] \sisetup{round-mode=places} \centering \caption{Performance of Clean-KNN w.r.t. four metrics shows that it outperforms its vanilla counterparts.} \label{tab:mainResult} {\small \begin{tabular}{lS[round-precision=3]S[round-precision=3]S[round-precision=3]l} \hline {\textbf{\scriptsize{Movielens1M}}} & {\textbf{\scriptsize{NDCG@50}}} & {\textbf{\scriptsize{AUC} }} & {\textbf{\scriptsize{R@50}}} &{\textbf{ \scriptsize{D@50}}} \\ \hline \scriptsize{(a)WUKNN}\tiny{($k=500$)} & 0.34513 &0.90509 &0.34577 &661 \\ \scriptsize{(b)\textbf{Clean-UKNN}}\tiny{($k=500, F=400$)} & 0.36077 &0.91152& 0.36395 &761 \\ \scriptsize{(c)Shrink-UKNN}\tiny{($k=500$)} &0.35826 &0.91105 &0.3608 &720 \\ \hdashline \scriptsize{(a)WIKNN}\tiny{($k=500$)} & 0.35629 &0.9115& 0.35458& 1668 \\ \scriptsize{(b)\textbf{Clean-IKNN}}\tiny{($k=500, F=400$)} &0.36834 &0.9193 &0.37833 &2187 \\ \scriptsize{(c)Shrink-IKNN}\tiny{($k=500$)} & 0.36938 &0.91664& 0.3678 &1730 \\ \hdashline \scriptsize{SVD}\tiny{($F=400$)} & 0.23632& 0.77011 &0.24779 &2242 \\ \scriptsize{SLIM}\tiny{($L_1=10^{-2}, L_2=10^{-3},k=500$)} &0.29339 &0.88235& 0.30035 &534 \\ \hline \end{tabular} } \end{table} \section{Conclusion}\label{conclusion} Memory-based recommenders are one of the earliest recommendation techniques and are still deployed in the industry today in conjunction with other methods. In this paper, we analyzed the spectral properties of the Pearson and cosine similarities. We used insights from the MP law to show that these empirical similarities suffer from noise and eigenvalue spreading. We showed that cosine similarity naturally performs eigenvalue shrinkage, but it overestimates $\lambda_{top}$. We then provided a linear-time and memory-efficient cleaning strategy, Clean-KNN, that removes noise and corrects for the overestimation of $\lambda_{top}$.
Through empirical evaluation, we showed that this cleaning strategy is effective and results in better performance, in terms of accuracy and diversity, compared to the vanilla kNN recommenders. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \noindent Understanding the stellar initial mass function (IMF) has major implications for a variety of astrophysical problems. The IMF plays a central role in converting the observed properties of galaxies (e.g., luminosity and color) into physically meaningful quantities like stellar mass and star-formation rate. An assumed universality of the IMF has contributed to our current observationally based paradigm of how galaxies formed and evolved throughout the history of the Universe, what fraction of the Universe's mass is tied up in stellar baryons, and the number of compact objects in the Universe. A clear understanding of the form and universality of the IMF in external galaxies is therefore a central goal in modern astrophysics (see, e.g., Bastian {et\,al.}\ 2010 for a review). In recent years, there has been mounting evidence indicating that the stellar IMFs for elliptical galaxies vary with galaxy mass. For example, Cappellari {et\,al.}\ (2012) used detailed dynamical models and two-dimensional stellar kinematic maps of 260 early-type galaxies from the ATLAS$^{\textup{3D}}$ project to show that the mass-to-light ratios ($M/L$) for elliptical galaxies increase with increasing stellar velocity dispersion, $\sigma$, consistent with a scenario where the IMF changes with galaxy mass. Such a finding has been independently noted in galaxy lensing measurements of $M/L$ for a variety of $\sigma$ (e.g., Auger {et\,al.}\ 2010; Treu {et\,al.}\ 2010). Specifically, these findings indicate that relatively low-mass, early-type galaxies (\hbox{$\sigma \lower.5ex\hbox{\ltsima} 100$~km~s$^{-1}$}) have $M/L$ values consistent with standard Milky Way-like IMFs (Kroupa 2001; Chabrier 2003).
However, relatively massive ellipticals (\hbox{$\sigma \approx 300$~km~s$^{-1}$}) have mass-to-light ratios ($M/L$) that are larger than those predicted from standard IMFs, and can be consistent with either ``bottom-heavy'' IMFs (\hbox{$\alpha \approx 2.8$}, yielding more low-mass stars with higher $M/L$) or ``top-heavy'' IMFs (\hbox{$\alpha \approx 1.5$}, yielding more remnants with lower luminosities; Cappellari et al. 2012). \begin{table*} \begin{center} \footnotesize \caption{Properties of Low-Mass Elliptical Galaxy Sample} \begin{tabular}{lccccccccccccccc} \hline\hline & $D$ & $\sigma$ & $a$ & $b$ & $\log L_K$ & $N_{\textup{H}}$ & \multicolumn{2}{c}{\textit{HST} ACS Data} & $t_{\rm exp}$ & $N_{\rm LMXB}$ & $N_{\rm X, GC}$ & $N_{\rm X, bkg}$ \\ \multicolumn{1}{c}{Source Name} & (Mpc) & (km~s$^{-1}$) & \multicolumn{2}{c}{(arcmin)} & ($\log$ $L_{K,\odot}$) & ($10^{20}$ cm$^{-2}$) & (Blue Filter) & (Red Filter) & (ks) & (field) & (GCs) & (Background) \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) \\ \hline NGC 4339 \ldots\ldots\ldots\ldots & 16.0 & 100.0 & 1.3 & 1.1 & 10.3$^{\dag}$ & 1.62 & F606W & --- & 33.6 & 2$^{\dag}$ & 0$^{\dag}$ & 1$^{\dag}$ \\ NGC 4387 \dotfill & 17.9 & 97.0 & 0.9 & 0.6 & 10.2 & 2.73 & F475W & F850LP & 38.7 & 1 & 0 & 1 \\ NGC 4458\dotfill & 16.4 & 85.0 & 0.9 & 0.7 & 10.0 & 2.63 & F475W & F850LP & 34.5 & 2 & 0 & 1 \\ NGC 4550\dotfill & 15.5 & 110.0 & 1.3 & 0.5 & 10.2 & 2.60 & F475W & F850LP & 25.8 & 6 & 1 & 1 \\ NGC 4551\dotfill & 16.1 & 95.0 & 1.1 & 0.7 & 10.2 & 2.59 & F475W & F850LP & 26.6 & 0 & 1 & 0 \\ NGC 7457$^*$\dotfill & 12.9 & 78.0 & 2.6 & 1.4 & 10.3 & 5.49 & F475W & F850LP & 37.7 & 1 & 0 & 2 \\ \hline Total Sample\dotfill & \multicolumn{9}{c}{ } & {\bf 12} & 2 & 6 \\ \hline \end{tabular} \end{center} {\scriptsize NOTE.---{\itshape Col.(1)}: Target galaxy name. {\itshape Col.(2)}: Distance as given by Cappellari {et\,al.}\ (2013). 
{\itshape Col.(3)}: Velocity dispersion from either Halliday {et\,al.}\ (2001) or Cappellari {et\,al.}\ (2006). {\itshape Col.(4)} and {\itshape (5)}: 2MASS-based $K$-band galaxy major and minor axes from Jarrett {et\,al.}\ (2003). {\itshape Col.(6)}: Logarithm of the $K$-band luminosity. {\itshape Col.(7)}: Neutral hydrogen Galactic column density. {\itshape Col.(8)} and {\itshape (9)}: The available {\itshape HST\/}\ imaging (via either ACS or WFPC2) for ``blue'' and ``red'' filters, respectively, which were used to identify and characterize optical counterparts to \hbox{X-ray}\ sources. {\itshape Col.(10)}: Total {\itshape Chandra\/}\ exposure time. All galaxies were imaged using ACIS-S and had coverage over the entire galactic extents as defined in {\itshape Col.~(4)} and {\itshape (5)}. {\itshape Col.(11)--(13)}: The number of field LMXBs ({\itshape Col.(11)}), GC LMXBs ({\itshape Col.(12)}), and background or central AGN candidates ({\itshape Col.(13)}) within the galactic extent of each galaxy (as defined in {\itshape Col.(4)} and {\itshape (5)}) that had \hbox{0.5--7~keV} fluxes exceeding that of a $L_{\rm X} = 10^{38}$~erg~s$^{-1}$\ source at the distance provided in {\itshape Col.(2)}. \hbox{X-ray}\ sources were classified using {\itshape HST\/}\ data and the procedure outlined in \S2.2 of P14.\\ \noindent $^*$NGC 7457 was included in both the P14~sample and this study. However, due to the availability of new {\itshape HST\/}\ data for NGC 7457 in this study, results based on our analysis were used throughout this paper.\\ \noindent \dag~For NGC 4339, we considered 74\% of the reported $K$-band luminosity that corresponded to the region of the galaxy covered by the {\itshape HST\/}\ WFPC2 footprint detailed in \S2. For {\itshape Col.(11)--(13)}, we only considered X-ray sources detected in this covered region.
} \end{table*} Van Dokkum \& Conroy (2010, 2011, 2012) and Conroy \& van~Dokkum (2012) showed that the spectra of massive ellipticals have strong Na~I and Ca~II absorption features that are indicative of a large population of very low-mass stars ($\lower.5ex\hbox{\ltsima}$0.3~$M_\odot$; Saglia {et\,al.}\ 2002), favoring the bottom-heavy IMF interpretation for these galaxies. These features had been known for quite some time (see, e.g., Hardy \& Couture 1988; Cenarro {et\,al.}\ 2003), but higher-quality data and spectral stellar synthesis modeling in recent years have given more confidence to the interpretation that the stellar IMFs in these galaxies are likely to be bottom-heavy and consistent with $\alpha \approx 2.8$. Despite the above evidence for a bottom-heavy IMF in massive ellipticals, some inconsistencies remain. For example, Smith \& Lucey (2013) found that a bottom-heavy IMF is incompatible with the stellar mass obtained via strong lensing for a $\sigma \approx 300$~km~s$^{-1}$ galaxy. Smith (2014) has also shown, intriguingly, that there is no correlation between results based on near-IR indices and on dynamics, on a galaxy-by-galaxy basis. Additionally, Weidner {et\,al.}\ (2013) showed that a time-invariant and bottom-heavy IMF was incompatible with the observed chemical enrichment of massive ellipticals, and underpredicted the number of stellar remnants (in the form of LMXBs) observed in globular clusters (GCs). To resolve this, they proposed a time-dependent IMF model that would evolve from a top-heavy to bottom-heavy form. In this scenario, the star-formation histories of massive ellipticals would have included early phases of intense starburst activity that produced many massive stars (and a top-heavy IMF), which chemically enriched the galaxies and created turbulence in their interstellar media.
Subsequent star formation within these environments would be more greatly fragmented and lead to the formation of preferentially smaller stars and bottom-heavy IMFs. Other authors have taken somewhat different approaches to resolving the above incompatibility issue. Ferreras {et\,al.}\ (2015) focused on the functional form of the IMF and tested a model where the low-mass and high-mass slopes of a broken power-law IMF were independently varied. They found that regardless of the IMF slope parameters, a time-independent IMF (independent of galaxy mass) could not simultaneously reproduce the observed chemical enrichment and gravity sensitive spectral features in massive ellipticals. Finally, Mart\'in-Navarro {et\,al.}\ (2014) explored a radial and time-dependent form of the IMF. They argued that in massive ellipticals ($\sigma \approx 300$~km~s$^{-1}$), the radial trends in chemical enrichment and gravity sensitive spectral features could be well described by a time-dependent IMF, like that proposed by Weidner {et\,al.}\ (2013), in the central regions of these galaxies. In contrast, a Milky Way-like IMF was proposed in the galactic outskirts, based on the lack of these spectral signatures relative to the galactic interior. The IMF was then claimed to be a local phenomenon, reflecting a two-stage formation history for these galaxies. Other evidence for radial trends in spectral features follows similar rationales (La~Barbera {et\,al.}\ 2016a, 2016b). The studies above have helped to constrain variations in the low-mass end of the elliptical-galaxy IMF ($\lower.5ex\hbox{\ltsima}$0.5~$M_\odot$) with galaxy mass but do not place strong constraints on variations of the high-mass end of the IMF. Tracing the high-mass end of the IMF requires studying the most massive stars in a galaxy; however, these stars evolve quickly into black holes (BHs) and neutron stars (NSs), making them difficult to observe.
Fortunately, these stellar remnants are found in binary star systems, where a lower-mass and longer-lived companion star eventually becomes a ``donor'' star, losing mass to the compact object. The system itself radiates nearly all of its energy in the form of X-rays, making these previously undetectable high-mass remnants available to direct observation. Therefore, an efficient and effective way to constrain variations in the high-mass end of the IMF ($\lower.5ex\hbox{\gtsima}$8~$M_\odot$) with galaxy mass is to test for corresponding variations in the prevalence of these low-mass \hbox{X-ray}\ binary (LMXB) systems in elliptical galaxies. With this in mind, Peacock {et\,al.}\ (2014; hereafter, P14) calculated (using Maraston (2005) population synthesis calculations) that a bottom-heavy single power-law IMF, with $\alpha \approx 2.8$, would yield a factor of \hbox{$\approx$3} times fewer LMXBs per unit $K$-band luminosity ($N/L_K$) over a standard Kroupa~(2001) IMF, implying a dramatic decline in $N/L_K$ with increasing $\sigma$. Using archival {\itshape Chandra X-ray Observatory} ({\itshape Chandra\/}\/) and {\itshape Hubble Space Telescope} ({\itshape HST\/}\/) data of eight nearby elliptical galaxies, P14~found instead that $N/L_K$ was constant with elliptical galaxy luminosity and velocity dispersion; however, their test included only one low-mass elliptical galaxy at $\sigma < 150$~km~s$^{-1}$, where $N/L_K$ was expected to be largest. In this paper, we augment the P14~study of eight elliptical galaxies by adding {\itshape Chandra\/}\ constraints for five new low-mass ellipticals ($\sigma =$~78--110~\ifmmode{~{\rm km~s^{-1}}}\else{~km s$^{-1}$}\fi) that are predicted to have standard IMFs that differ from those of the massive ellipticals (see, e.g., Cappellari {et\,al.}\ 2012).
With this expanded sample of 13 galaxies, we are able to place robust constraints on how $N/L_K$ varies across the full range of velocity dispersion ($\sigma =$~80--300~km~s$^{-1}$) where significant variations in $N/L_K$ are plausibly expected. \section{Sample and Data Reduction} Our low-mass elliptical galaxy sample was derived from the samples of Halliday {et\,al.}\ (2001) and Cappellari {et\,al.}\ (2006), which contain velocity dispersion data for several nearby ellipticals. We limited our sample to galaxies with $D < 20$~Mpc, so that we could easily resolve LMXB populations, and $\sigma \lower.5ex\hbox{\ltsima} 110$~km~s$^{-1}$, to focus on the galaxies that are likely to have standard IMFs that differ from the already well-studied high-mass ellipticals. In order to guard against potential contamination from young high-mass \hbox{X-ray}\ binaries (HMXBs), we further restricted our sample to galaxies that had negligible star-formation rate signatures (SFR $<$ 0.001 $M_\odot$\ yr$^{-1}$), as measured by 24$\mu$m {\itshape Spitzer\/}\ data (Temi {et\,al.}\ 2009). Our final sample of six low-mass elliptical galaxies is summarized in Table~1. We conducted new {\itshape Chandra\/}\ observations for four of the six galaxies to reach 0.5--7~keV point-source detection limits of $L_{\textup{X}} \approx 10^{38}$ erg~s$^{-1}$ (chosen for consistency with the sample from P14), after combining with archival {\itshape Chandra\/}\ data. All observations were conducted using ACIS-S, which covers the entire $K$-band-defined areal footprints of all of the galaxies in our sample (see Table~1 and Jarrett {et\,al.}\ 2003). The only exception was NGC 4339, which only had partial $HST$ coverage. For this galaxy, we utilized only the background-subtracted $K$-band flux within the overlapping $HST$ footprint, and only LMXBs detected in this region were included in our analyses.
This resulted in our using only 74\% of the $K$-band luminosity reported in Col.(6) when computing $N/L_K$ for this galaxy. Data reduction for our sample of galaxies closely follows the procedure outlined in \S\S2.1 and 2.2 of Lehmer {et\,al.}\ (2013), with our reduction performed using \texttt{CIAO v.~4.7} and \texttt{CALDB v.~4.6.7}. We reprocessed event lists from level~1 to level~2 using the script {\ttfamily chandra\_repro}, which identifies and removes events from bad pixels and columns, and filters event lists to include only good time intervals without significant flares and non-cosmic-ray events corresponding to the standard ASCA grade set (grades 0, 2, 3, 4, 6). For galaxies with more than one observation, we combined event lists using the script {\ttfamily merge\_obs}. We constructed images in three \hbox{X-ray}\ bands: 0.5--2~keV, 2--7~keV, and 0.5--7~keV. Using our \hbox{0.5--7~keV} images, we utilized {\ttfamily wavdetect} at a false-positive probability threshold of $10^{-5}$ to create point source catalogs. We converted \hbox{0.5--7~keV} point-source count-rates to fluxes assuming an absorbed power-law spectrum with a photon index of $\Gamma = 1.5$ and Galactic extinction (see Col.~(7) in Table~1). We treated the hot gas component as negligible within the small area of the sources, as these galaxies have very little diffuse emission in total, typical of low-mass ellipticals like those studied here (e.g., O'Sullivan {et\,al.}\ 2001). Our choice of photon index reproduces well the mean 2--7~keV to 0.5--2~keV count-rate ratio of the detected point sources in our sample, a value calculated using stacking analyses (see, e.g., Lehmer {et\,al.}\ 2016 for details). The total number of sources within each galaxy footprint that had fluxes brighter than the $L_{\textup{X}} = 10^{38}$ erg~s$^{-1}$ limit spans the range 1--8 (see Col.~(11)--(13) of Table~1).
Our test of the variation in the IMF with galaxy mass is sensitive to the field LMXB population that forms within the galactic stellar populations. To search for potential contaminants from unrelated \hbox{X-ray}-detected objects, including, e.g., background active galactic nuclei (AGN), foreground Galactic stars, central low-luminosity AGN, and LMXBs formed through dynamical processes in GCs, we made use of archival {\itshape HST\/}\ data (Col.(8) and (9) in Table~1). Contaminating source populations were identified following the procedure outlined in \S2.2 of P14, which makes use of {\itshape HST\/}\ source colors and morphologies to identify and classify counterparts. Regardless of their nature, we rejected \hbox{X-ray}\ sources with optical counterparts from further consideration. In total, we rejected 8 of the \hbox{X-ray}\ sources within the galactic footprints (see Col.~(12) and (13) of Table~1) that had optical counterparts, which left 12 candidate field LMXBs with $L_{\rm X} > 10^{38}$~erg~s$^{-1}$\ in our sample. \begin{figure*} \figurenum{1} \centerline{ \includegraphics[height=8cm]{fig_1a.ps} \hfill \includegraphics[height=8cm]{fig_1b.ps} } \vspace{0.1in} \caption{ ($a$)~The number of field LMXBs with $L_{\rm X} \ge 10^{38}$~erg~s$^{-1}$\ per $10^{10}$ $L_{K,\odot}$\ versus velocity dispersion, $\sigma$, for galaxies from this study ({\it black filled circles}) and from P14~({\it open circles}). The red filled triangle represents the mean value from the six galaxies in this study (see Table~1). The red open triangle represents the mean value of the seven high-mass galaxies studied in P14. Error bars are 1$\sigma$ and were computed following Gehrels (1986). The blue dot-dash curve and red dashed line represent, respectively, the variable and constant IMF scenarios tested by P14. 
The bold white curve represents our best model with our posterior (see \S4) and interpolates between the Kroupa IMF at $\sigma \le$~95~km~s$^{-1}$ and an IMF with $\alpha_1 =$ $3.84$, $\alpha_2 =$ $2.14$, and $\gamma_{\textup{X}} =$ $1.30$\ ($10^{10}$~$L_{K,\odot}$)$^{-1}$ at $\sigma \ge 300$~km~s$^{-1}$ (see \S3 for details). The gray shaded region is our 1$\sigma$ confidence region about our best model. ($b$)--($d$) Confidence contours for fitting parameters $\alpha_1$, $\alpha_2$, and $\gamma_{\rm X}$ (see \S3). The green contours correspond to the likelihood computed using only data from our LMXB study, while the black contours utilize our LMXB data with a prior based on the study by La~Barbera {et\,al.}\ (2013; see \S4). In each case, the solid and broken (dotted or dashed) curves represent the 1$\sigma$ and 2$\sigma$ cuts, respectively. The gray shaded region represents the subset of parameter space that is disallowed based on the M/L constraints discussed in \S3, which require each model to have an $R_{(M/L),\,i} < 3.0$. This results in our study discarding $\alpha_1 \ge 3.9$ and $\alpha_2 \le 1.6$, and gives rise to the sharp cut-offs displayed in the contours. Finally, it is important to note that before the application of the prior, $\alpha_1$ is essentially unconstrained over the broad parameter space that we explored in this study. The green open and black filled points are our best model parameter values before and after the use of the prior, respectively, while the red filled stars are the reference Kroupa values.
Values for the best fit before the prior (green open points) are $\alpha_1 =$ $2.07$, $\alpha_2 =$ $2.20$, and $\gamma_{\textup{X}} =$ $1.30$\ ($10^{10}$~$L_{K,\odot}$)$^{-1}$; best fit values after the application of the prior (black filled points) are cited above in ($a$); red filled stars are the Kroupa IMF parameters with $\alpha_1 = 1.3$, $\alpha_2 = 2.3$, and $\gamma_{\textup{X}} =$ $1.30$\ ($10^{10}$~$L_{K,\odot}$)$^{-1}$.} \end{figure*} \section{Results} In Figure~1(a), we show the number of field LMXBs with $L_{\rm X} > 10^{38}$~erg~s$^{-1}$\ per unit $K$-band luminosity, $N_{38}/L_K$, versus velocity dispersion, $\sigma$, for our low-mass elliptical sample combined with the P14~sample. The ``variable'' and ``invariant'' IMF models from P14~are shown in Figure~1(a) with blue and red curves, respectively. The variable model assumes that the IMF makes a transition from Kroupa at $\sigma = 95$~km~s$^{-1}$ (i.e., $\log \sigma = 1.98$) to a single power-law with slope $\alpha = 2.8$ at $\sigma = 300$~km~s$^{-1}$ (i.e., $\log \sigma = 2.48$; see details below), while the invariant model assumes a Kroupa IMF at all $\sigma$. At first inspection, it seems that the specific frequency of LMXBs indicates that the IMFs of elliptical galaxies are consistent with a single ``universal'' IMF. However, it is important to note that the LMXB population is tracing only the remnant population from stars with $m \lower.5ex\hbox{\gtsima} 8$~$M_\odot$. All that can be reliably inferred from the apparent constancy of $N_{38}/L_K$ across $\sigma$ is that the single power-law slope of the IMF for stars of $m > 0.5$~$M_\odot$~does not undergo strong variations for ellipticals of all velocity dispersions. In fact, it may be possible that the low-mass end does vary strongly with velocity dispersion, as found in the literature (see references in \S1).
With this in mind, we sought to extend P14~by exploring the space of acceptable parameters that could constrain how the IMF might vary, by constructing a suite of IMF models and comparing their predicted $N_{38}/L_K$ versus $\sigma$ tracks with our constraints shown in Figure~1(a). In this process, we followed a similar, but more generalized procedure to that outlined in \S4.2 of P14. Below, we summarize our model. We began by considering a broken power-law form for an IMF corresponding to high-mass ellipticals: \begin{equation} \begin{split} \frac{dN}{dm} = N_{0} \left \{ \begin{array}{ll} 2^{(\alpha_2 - \alpha_1)}\, m^{-\alpha_{1}} & 0.1\,M_\odot < m < 0.5\,M_\odot \\ m^{-\alpha_{2}} & m > 0.5\,M_\odot, \\ \end{array} \right. \end{split} \end{equation} \noindent where for a Kroupa IMF, $\alpha_1$ = 1.3 and $\alpha_2$ = 2.3. $N_{0}$ is a constant of normalization, which, in our procedure, normalizes the 0.1--100~$M_\odot$\ integrated IMF to 1~$M_\odot$. Therefore, $N_{0}$ varies with $\alpha_1$ and $\alpha_2$. Next, we constructed a grid of IMFs over $\alpha_1 =$~1--5 and $\alpha_2 =$~1--3.5 in steps of 0.005 (801 and 501 values, respectively), thus resulting in a grid of \hbox{$n$ = 401,301} unique IMFs. For the $i^{\rm th}$ IMF, we quantified the $K$-band mass-to-light ratio, $(M/L_K)_i$, by running the stellar population synthesis code {\ttfamily P\'EGASE} (Fioc \& Rocca-Volmerange~1997, 1999), adopting for consistency the assumption in P14~of a single-burst star-formation history of age 10~Gyr and solar metallicity. In this procedure, $M$ represents the total initial stellar mass and therefore does not vary with age. We defined the ratio of the $i^{\rm th}$ mass-to-light ratio to that of the Kroupa case as \begin{equation} R_{(M/L),\,i} = \frac{(M/L_K)_{i}}{(M/L_K)_{\textup{kro}}} = \frac{L_{K, {\rm kro}}}{L_{K, i}}.
\end{equation} \noindent In order to keep $R_{(M/L)}$ consistent with observations of how the dynamical and stellar $M/L$ varies with $\sigma$, we required that $R_{(M/L),\,i} < 3.0$ (e.g., Cappellari {et\,al.}\ 2012; Conroy {et\,al.}\ 2012). This requirement on $R_{(M/L)}$ resulted in the discarding of models with specific combinations of $\alpha_1$ and $\alpha_2$ (notably, models with $\alpha_1 \ge 3.9$ or $\alpha_2 \le 1.6$ were rejected; see Fig.~1(d)). This constraint limited the number of models considered to $n = 198,298$ total models, which we utilize hereafter. Because the prevalence of the LMXB population is sensitive to the underlying compact object population of NSs and BHs, which are remnants of $>$~8~$M_\odot$\ stars, we calculated the number of stars per solar mass that become compact objects for a given IMF by integrating the IMF from 8--100~$M_\odot$: \begin{equation} N_{{\rm CO}, i} = N_{0,\,i} \int_{8}^{100} m^{-\alpha_{2,i}} dm, \end{equation} \noindent where $N_{0,\,i}$ is the $i^{\rm th}$ normalization factor. These quantities allow us to compute the $K$-band luminosity normalized ratio of expected NSs and BHs generated by the $i^{\textup{th}}$ IMF to that of a Kroupa IMF: \begin{equation} R_{\textup{CO},i} \equiv \frac{(N_{{\rm CO}}/L_{K})_i}{(N_{\textup{CO}}/L_{K})_{\rm kro}} =\frac{N_{{\rm CO}, i}}{N_{\textup{CO, kro}}} R_{(M/L),\,i}. \end{equation} From these quantities, we construct a variable IMF model that varies smoothly with $\sigma$, bridging the Kroupa IMF of low-mass ellipticals to the high-mass elliptical IMF (i.e., the $i^{\rm th}$ IMF). We define our variable IMF function over the range $\sigma$ = \hbox{95--300~km~s$^{-1}$}, within which we require that the IMF varies as a function of $\sigma$ following the Cappellari {et\,al.}\ (2013) relation of $(M/L)_\sigma \propto \sigma^{0.72}$.
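For a given $(\alpha_1, \alpha_2)$, the normalization $N_0$ and the integral in Equation~(3) have closed forms, so $N_{\rm CO}$ and $R_{\rm CO}$ are cheap to evaluate on the grid. A minimal sketch (our own illustrative code; the mass-to-light ratio $R_{(M/L),\,i}$ must still be supplied from the {\ttfamily P\'EGASE} models):

```python
import numpy as np

def imf_norm(a1, a2, m_lo=0.1, m_br=0.5, m_hi=100.0):
    """N0 normalizing the 0.1-100 Msun mass-integrated IMF to 1 Msun.
    dN/dm = N0 * 2**(a2 - a1) * m**-a1 below the break, N0 * m**-a2 above."""
    def mass_seg(p, lo, hi):                 # integral of m * m**-p dm
        q = 2.0 - p
        return np.log(hi / lo) if q == 0 else (hi**q - lo**q) / q
    total = 2.0**(a2 - a1) * mass_seg(a1, m_lo, m_br) + mass_seg(a2, m_br, m_hi)
    return 1.0 / total

def n_compact(a1, a2):
    """Equation (3): number of 8-100 Msun stars (remnant progenitors) per Msun."""
    q = 1.0 - a2
    return imf_norm(a1, a2) * (100.0**q - 8.0**q) / q

def r_co(a1, a2, r_ml):
    """Equation (4): compact objects per unit L_K relative to Kroupa;
    r_ml is the externally supplied (M/L) ratio R_(M/L),i."""
    return n_compact(a1, a2) / n_compact(1.3, 2.3) * r_ml

# By construction, a Kroupa IMF with R_(M/L) = 1 gives R_CO = 1.
assert np.isclose(r_co(1.3, 2.3, 1.0), 1.0)
```

Steepening only the high-mass slope (e.g., $\alpha_2 = 2.8$) reduces `n_compact` by roughly a factor of 3 relative to Kroupa, consistent with the P14 expectation quoted in \S1.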
Under these assumptions, we can quantify the fraction of the variable IMF that is composed of the $i^{\rm th}$ IMF as a function of $\sigma$, such that galaxies with $\sigma \le$~95~km~s$^{-1}$ have no contribution from the $i^{\rm th}$ IMF and are composed solely of a Kroupa IMF, while galaxies with $\sigma \ge$~300~km~s$^{-1}$ are composed solely of the $i^{\rm th}$ IMF component: \begin{equation} F(\sigma) = \left\{ \begin{array}{ll} 0 & (\sigma \le 95~{\rm km~s^{-1}}) \\[6pt] \frac{\sigma^{0.72} - 95^{0.72}}{300^{0.72} - 95^{0.72}} & (95~{\rm km~s^{-1}} < \sigma < 300~{\rm km~s^{-1}}) \\[8pt] 1 & (\sigma \ge 300~{\rm km~s^{-1}}). \\ \end{array} \right. \end{equation} \noindent We define the complementary fraction $F_{\textup{kro}}$, \begin{equation} F_{\textup{kro}}(\sigma) = 1 - F(\sigma), \end{equation} which is then the fraction of the variable IMF that is composed of a Kroupa IMF. We combine Equations~(4), (5), and (6) and define a composite function: \begin{equation} \begin{split} R_{\textup{comp},i}(\sigma) & \equiv \frac{(N_{{\rm CO}}/L_{K})_{\sigma,i}}{(N_{\textup{CO}}/L_{K})_{\rm kro}} = R_{\textup{CO},i} F(\sigma) + F_{\textup{kro}}(\sigma) \\[10pt] & = 1 - \left( 1 - R_{\textup{CO},i} \right)F(\sigma), \end{split} \end{equation} which represents the $\sigma$-dependent number of compact objects per unit $K$-band luminosity relative to the Kroupa IMF. We then arrive at a function that can be used to predict the number of observed LMXBs per unit $K$-band light: \begin{equation} \begin{split} \left( \frac{N_{\textup{X}}}{L_{K}} \right)_i & = \xi_i \left(\frac{N_{\textup{CO, kro}}}{L_{K, \textup{kro}}} \right) R_{\textup{comp},i}(\sigma) \\ & = \gamma_{{\rm X},i} R_{\textup{comp},i}(\sigma) .
\end{split} \end{equation} Here $\xi_i$ represents the luminosity-dependent fraction of the compact object population that is actively involved in an LMXB phase, derived using the $i^{\rm th}$ model. $\gamma_{\textup{X},i}$ is a fitted scaling factor for the $i^{\rm th}$ model that allows us to express this quantity as an observed frequency of LMXBs by correcting for the portion of $N_{\rm CO}$ that is not in LMXBs (e.g., BH--BH pairs, BH--NS pairs, etc.). We note that we assume that $\xi$ is independent of $\sigma$, which simplifies the computation of our IMF models but may not be physically accurate. A more detailed treatment requires \hbox{X-ray}\ binary population synthesis modeling for various IMFs (e.g., Fragos {et\,al.}\ 2013), which would involve variations in the mass-ratio distribution with IMF. However, such treatments are likely to introduce many parameters into the analysis for which there are no plausible physical models for varying binarity, and for which there are no solid empirical constraints, making the benefit of such modeling uncertain. Such work is beyond the scope of this paper (see \S4). Using Equation~(8), and the constraints on $N_{38}/L_K$ presented in Figure~1(a), we computed maximum-likelihood values for all 198,298 IMFs in our grid using the Cash statistic ({\ttfamily cstat}; Cash~1979). This procedure resulted in a normalized likelihood cube with three dimensions: $\alpha_1$, $\alpha_2$, and $\gamma_{\textup{X}}$. From our grid of models, the maximum-likelihood model and 1$\sigma$ errors are $\alpha_1 =\firstalphaone^{+1.9}_{-1.1}$, $\alpha_2 = \firstalphatwo^{+0.17}_{-0.24}$, and $\gamma_{\textup{X}} = \firstgamma^{+0.55}_{-0.45}$ ($10^{10}$~\lksol)$^{-1}$\ (see the 68\% and 95\% green contours in Fig.~1(b)--1(d)).
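To make the fitting step concrete, here is a minimal sketch (our own illustration, with hypothetical $\gamma_{\rm X}$, $R_{\rm CO}$, and $L_K$ values) of the interpolation function of Equation~(5), the composite ratio of Equation~(7), and the Cash statistic used for the fits:

```python
import numpy as np

def F(sigma):
    """Eq. (5): weight of the high-mass-galaxy IMF component."""
    s = np.clip(np.asarray(sigma, dtype=float), 95.0, 300.0)
    return (s**0.72 - 95.0**0.72) / (300.0**0.72 - 95.0**0.72)

def R_comp(sigma, R_co):
    """Eq. (7): sigma-dependent CO number per unit L_K relative to Kroupa."""
    return 1.0 - (1.0 - R_co) * F(sigma)

def cash_stat(n_obs, mu):
    """Cash (1979) statistic for Poisson counts, up to an n_obs-only constant."""
    n_obs = np.asarray(n_obs, dtype=float)
    mu = np.asarray(mu, dtype=float)
    return 2.0 * np.sum(mu - n_obs * np.log(mu))

# Hypothetical usage: predicted LMXB counts per galaxy, mu = gamma_X * R_comp * L_K
sigma_gal = np.array([100.0, 150.0, 200.0, 250.0, 300.0])   # km/s (made up)
mu = 2.0 * R_comp(sigma_gal, R_co=0.8) * 1.5                # gamma_X, L_K made up
```

In the actual analysis this statistic is evaluated on the observed $N_{38}$ counts for every $(\alpha_1, \alpha_2, \gamma_{\rm X})$ grid point to build the likelihood cube.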
As we suspected, LMXBs provide stringent constraints on variations of the IMF for intermediate- to high-mass stars (i.e., $m > 8$~$M_\odot$), and, to the extent that we assume a single power law over the range $m > 0.5$~$M_\odot$, a stringent constraint on $\alpha_2$. However, LMXBs provide essentially no useful constraints on the low-mass IMF slope. Indeed, from Figure~1(a) we see that the ``invariant'' case (red curve) is formally acceptable and within the 1$\sigma$ threshold of our maximum-likelihood model, while the interpolation to a single power-law IMF with $\alpha = 2.8$ (blue curve) is not consistent with the LMXB observations. With an enlarged galaxy sample, we not only confirm the result of P14~with better statistics, but also extend it by showing that it is possible to have an IMF that varies from Kroupa for low-mass ellipticals to bottom-heavy for high-mass ellipticals, as has been reported in the literature. In addition, the LMXB data suggest that the single power-law slope above $0.5$~$M_\odot$\ for such a varying IMF is unlikely to change much with velocity dispersion. In the next section, we utilize the formalism above, combined with a prior on how the IMF mass fraction due to low-mass stars can vary with velocity dispersion, to better constrain the values of $\alpha_1$, $\alpha_2$, and $\gamma_{\textup{X}}$. \section{Discussion and Conclusions} As discussed in \S3, LMXBs provide strong constraints on the variation with velocity dispersion of the IMF for stars with $m > 0.5$~$M_\odot$; however, variations in the low-mass end of the IMF are not well constrained by LMXBs alone. The green contours in Figure~1(b)--1(d) show our 1 and 2$\sigma$ confidence intervals for $\alpha_1$, $\alpha_2$, and $\gamma_{\rm X}$ using only LMXB data.
It is clear that these data are unable to constrain $\alpha_1$ well; however, the model slope above $0.5$~$M_\odot$, $\alpha_2$, and the normalization factor $\gamma_{\rm X}$ are strongly constrained. In order to better constrain $\alpha_1$, we utilized a prior derived by La Barbera {et\,al.}\ (2013), who concluded that the gravity-sensitive near-IR absorption features observed in high-mass elliptical galaxy ($\sigma \approx$~300~km~s$^{-1}$) spectra require $\approx$70--90\% of the total initial stellar mass to be contained in stars with $m < 0.5$~$M_\odot$, as integrated directly from the IMF. From Equation~(1), we can compute the low-mass fraction for each of the $n$ IMF models defined above following \begin{equation} \begin{split} f_i(<0.5~M_\odot) & = \frac{M(<0.5~M_\odot)_i}{M_\odot} \\ & = \frac{N_{0,i} 2^{\alpha_{2,i}-\alpha_{1,i}}}{M_\odot} \int_{0.1}^{0.5} m^{-\alpha_{1,i}+1} dm. \end{split} \end{equation} Using the low-mass fractions calculated via Equation~(9), we assigned a flat prior equal to 1 for $0.7 < f_i < 0.9$ and 0 elsewhere for our grid of IMF models. Multiplying this prior by our likelihood cube (see \S3) and renormalizing resulted in a posterior probability distribution, which we display as black contours in Figure~1(b)--1(d). The resulting posterior probability provides a stringent constraint on $\alpha_1$ for massive ellipticals, resulting in best-model values of $\alpha_1 = \bestalphaone^{+0.09}_{-0.48}$, $\alpha_2 = \bestalphatwo^{+0.20}_{-0.85}$, and $\gamma_{\textup{X}} = \bestgamma^{+0.65}_{-0.35}$ ($10^{10}$~\lksol)$^{-1}$. The white curve, and gray 1$\sigma$ envelope, displayed in Figure~1(a) shows our best model, which varies smoothly from a Kroupa IMF for low-mass ellipticals to a broken power-law IMF for high-mass ellipticals. The high-mass galaxy IMF component has a steep slope of $\alpha_1 =$ $3.84$\ for stars $< 0.5$~$M_\odot$, and a slope of $\alpha_2 =$ $2.14$\ for stars $\ge 0.5$~$M_\odot$.
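Because the low-mass fraction of Equation~(9) is analytic, the prior can be evaluated directly on the grid. A self-contained sketch (our own illustration, not the authors' code):

```python
import numpy as np

def _pl_int(a, b, p):
    """Integral of m**p dm from a to b (handles p == -1)."""
    if np.isclose(p, -1.0):
        return np.log(b / a)
    return (b**(p + 1.0) - a**(p + 1.0)) / (p + 1.0)

def low_mass_fraction(a1, a2, m_lo=0.1, m_br=0.5, m_hi=100.0):
    """Eq. (9): initial-mass fraction in m < 0.5 Msun stars for a unit-mass IMF."""
    c = 2.0**(a2 - a1)                      # continuity factor at 0.5 Msun
    mass_lo = c * _pl_int(m_lo, m_br, 1.0 - a1)
    mass_hi = _pl_int(m_br, m_hi, 1.0 - a2)
    n0 = 1.0 / (mass_lo + mass_hi)          # normalizes the total to 1 Msun
    return n0 * mass_lo

def lmf_prior(a1, a2):
    """Flat La Barbera et al. (2013) prior: 1 if 0.7 < f < 0.9, else 0."""
    f = low_mass_fraction(a1, a2)
    return 1.0 if 0.7 < f < 0.9 else 0.0
```

For a Kroupa IMF this gives $f \approx 0.27$, well outside the 0.7--0.9 window appropriate to high-mass ellipticals, while the best-fit high-mass-galaxy slopes $(\alpha_1, \alpha_2) = (3.84, 2.14)$ land inside it.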
We note that while $\alpha_2$ is slightly flatter than the slope given by Kroupa, it still falls squarely within the uncertainty of the Kroupa IMF ($\alpha_2 = 2.3 \pm 0.7$; Kroupa~2001). By construction, this result is consistent with both the IMF for massive ellipticals being bottom-heavy, as inferred in the literature (see references in \S1), and the IMF not varying significantly across the mass spectrum at the high-mass end, as inferred from the LMXB populations (see also P14). Additionally, the variable IMF has been constructed to yield a total (including stellar remnants) mass-to-light ratio that varies with velocity dispersion following the observational constraints from Cappellari {et\,al.}\ (2013). In absolute terms, the stellar mass-to-light ratio of our best-fit model varies from $M_\star/L_K$ = $0.679$\ at $\sigma = 90$~km~s$^{-1}$ to $M_\star/L_K$ = $3.10$\ at $\sigma = 300$~km~s$^{-1}$. The initial mass-to-light ratios (initial stellar mass over $K$-band luminosity) range from $M/L_K$ = $1.48$\ at $\sigma = 90$~km~s$^{-1}$ to $M/L_K$ = $4.12$\ at $\sigma = 300$~km~s$^{-1}$, resulting in a ratio $R_{M/L} = 2.78$. These values are below the limits placed on the dynamical mass-to-light ratio in the literature (e.g., Cappellari et al. 2013; Conroy et al. 2013). We also noted in \S1 that recent studies have found evidence for radial gradients in the IMFs of massive ellipticals, in which a bottom-heavy IMF appears to be appropriate for the inner regions of the galaxy, while a more Kroupa-like IMF is appropriate in the outer regions (see, e.g., Mart\'in-Navarro et al. 2015; La Barbera et al. 2016a, 2016b). We tested whether the LMXB population showed evidence for such radial gradients by calculating the average $N_{38}/L_{K}$ values for LMXBs located within and outside of the effective radii, $r_e$, of the massive elliptical galaxy population. We utilized $r_e$ values from Cappellari et al.
(2013) to divide the $L_{X} > 10^{38}$~erg~s$^{-1}$ LMXB catalogs from Peacock et al.~(2014) into inner- and outer-LMXB populations, and used the 2MASS $K$-band images to calculate the fraction of $L_K$ within and outside of $r_e$. We found consistent values of $N_{38}/L_{K} = 1.4 \pm 0.5$ and $1.7 \pm 0.6$ for the regions within and outside $r_e$, respectively, indicating that such gradients do not have a significant effect on our results. We emphasize that our likelihoods are broadly insensitive to $\alpha_1$. We relied on La Barbera et al. (2013) as a meaningful prior and showed that their results can be reconciled with our analysis, with a statistically insignificant change in the derived values for the other two parameters, $\alpha_2$ and $\gamma_{\rm X}$. A different choice of meaningful prior would necessarily entail a different posterior probability density for $\alpha_1$; however, the essential constraint that this study places on $\alpha_2$ would not change significantly. If a slight flattening of the high-mass slope of the generalized IMF is common in high-mass elliptical galaxies (as required if we accept that the bottom-heavy IMF claims in the literature are correct), then several additional predictions result. We would expect a factor of $\sim$2 higher supernova rate in the high-redshift versions of these galaxies than their present-day stellar masses imply. A more severe effect might be seen on the rate of long-duration gamma-ray bursts (GRBs), which are often thought to come from only the most massive stellar explosions. Presently, we have limited information about very high-redshift GRBs, and almost no information about very high-redshift supernovae (SNe). The information that exists suggests a GRB rate not easily explained by detectable star-forming galaxies (Tanvir et al.
2012), but such a rate could be explained by having most of the star formation take place in relatively small galaxies, below the {\itshape HST\/}~detection threshold. We would expect similar increases in the rates of double neutron star mergers producing short GRBs and gravitational wave sources, as well as an enhancement, albeit a somewhat lower one, of the rate of Type Ia SNe, since more white dwarfs would be produced and they would be skewed more heavily toward the massive end of the white dwarf spectrum. It is also important to note that if the slope of the high-mass end of the IMF changes by about 0.1, then the ratio of core-collapse supernovae from $\approx$40~$M_\odot$\ stars to that from $\approx$8~$M_\odot$\ stars changes by about 20\%. The yields from the most massive core-collapse supernovae can differ quite dramatically from those of the least massive ones (e.g., Kobayashi {et\,al.}\ 2006). As a result, interstellar medium abundances may provide a complementary test of the IMF. Because additional metal enrichment comes from thermonuclear supernovae, classical novae, and mass loss, a detailed treatment of this problem must be undertaken before a clear prediction can be made. Future \hbox{X-ray}\ and high-resolution optical observations, and new population synthesis analyses, could substantially improve constraints on the IMF variation with velocity dispersion. In particular, new {\itshape Chandra\/}\ and {\itshape HST\/}\ observations of low- to intermediate-mass ellipticals would be helpful in ruling out a scenario where the high-mass end of the IMF is constant across all $\sigma$. Deeper {\itshape Chandra\/}\ observations of the low-mass galaxies in this study would have a similar effect, allowing us to probe the more numerous population of low-luminosity LMXBs.
In a forthcoming paper, Peacock {et\,al.}\ (in prep.) will examine the $L_{\rm X} \lower.5ex\hbox{\gtsima} 10^{37}$~erg~s$^{-1}$\ field LMXB population in the low-mass elliptical NGC~7457 and compare it with similar binaries detected in deep observations of massive ellipticals. The most effective test of IMF variations in elliptical galaxies will require simultaneous modeling of near-IR spectroscopic data along with LMXB constraints, using the combination of stellar population synthesis and \hbox{X-ray}\ binary population synthesis modeling (e.g., Fragos {et\,al.}\ 2013; Madau \& Fragos~2016), in which the IMF, stellar ages, metallicities, and other physical parameters that influence \hbox{X-ray}\ binary formation are all modeled self-consistently. Such a population synthesis framework will be an important future step for advancing our knowledge of how the IMF varies among elliptical galaxies. Additionally, a further understanding of the details of the stellar populations in the near-IR must be developed to ensure that the claims of a bottom-heavy IMF are reliable. At the present time, stellar evolution codes used for these purposes have limited or no treatment of unusual classes of stars such as interacting binaries and the products of binary interactions. Such stars are likely to be relatively unimportant in the optical bands, where the models have been best calibrated; however, given that the largest-radius stars are the most likely to interact, these stars may be increasingly important toward the reddest parts of the spectral energy distribution, where red giants and asymptotic giant branch stars dominate. For instance, it has already been shown that S-type stars might appear with different frequencies in larger and smaller elliptical galaxies, and can potentially mimic the effects of a bottom-heavy IMF on the Wing-Ford band and the Na~I D line, although not on Ca~II (Maccarone 2014).
Given the coincidence needed between changes in both the high- and low-mass ends of the IMF to reproduce the results we see here, as well as the subtlety of the features that have been used to suggest the bottom-heavy IMF, further work investigating the level of systematic effects from stars not included in standard stellar population synthesis models would be well motivated. \acknowledgements D.A.C., B.D.L., and R.T.E. gratefully acknowledge support from {\itshape Chandra\/}\ \hbox{X-ray}\ Center (CXC) grant GO4-15090A. D.A.C. acknowledges support from USRA and GSFC, as well as generous support from the University of Arkansas. A.K. acknowledges CXC grants GO4-15090C and GO5-16084B, and T.M. acknowledges GO4-15090B. M.P. and S.Z. acknowledge support from NASA ADAP grant NNX15AI71G, and M.P. additionally acknowledges CXC grant GO5-16084A and the {\itshape Hubble Space Telescope} ({\itshape HST\/}\/) grant HST-GO-13942.001-A.
\section{\@startsection {section}{1}{\z@}% {-3.5ex \@plus -1ex \@minus -.2ex {2.3ex \@plus.2ex}% {\normalfont\large\bfseries}} \renewcommand\subsection{\@startsection{subsection}{2}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\bfseries}} \linespread{1.3} \newcommand{\hs}[1]{\mbox{hs$[#1]$}} \newcommand{\w}[1]{\mbox{$\mathcal{W}_\infty[#1]$}} \newcommand{\bif}[2]{\small\left(\!\!\begin{array}{c}#1 \\#2\end{array}\!\!\right)} \newcommand{\rom}[1]{\mathrm{#1}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\labell}[1]{\label{#1}} \newcommand{\labels}[1]{\label{#1}} \newcommand{\reef}[1]{(\ref{#1})} \newcommand{\unarray}[2]{\left(\!\!\begin{array}{c}#1\\#2\end{array}\!\!\right)} \newcommand{\matrixarray}[4]{\left(\!\!\begin{array}{cc}#1&#2\\#3&#4\end{array}\!\!\right)} \newcommand{\innerproductwith}[1]{\left<#1\right|} \newcommand\re[1]{{\text{Re}\,}\big(#1\big)} \newcommand\im[1]{{\text{Im}\,}\big(#1\big)} \def{e.g.}\ {{e.g.}\ } \def{ i.e.}\ {{ i.e.}\ } \newcommand{\textrm{Tr}}{\textrm{Tr}} \def${\cal S}${{{\mathcal C}{\mathcal S}}} \def\widetilde{\widetilde} \def\mathop{\text{Res}}\limits{\mathop{\text{Res}}\limits} \def{1\over 2}{{1\over 2}} \newcommand{{\text d}}{{\text d}} \newcommand{\textrm{diag}}{\textrm{diag}} \def{(-1)}{{(-1)}} \def{(0)}{{(0)}} \def{(1)}{{(1)}} \def{(2)}{{(2)}} \def{(3)}{{(3)}} \def{(4)}{{(4)}} \def\+{{(+)}} \def\-{{(-)}} \def{(\mp)}{{(\mp)}} \def{(\pm)}{{(\pm)}} \def\boldsymbol{\boldsymbol} \def\mathcal{V}{\text{vol}} \def{\text{AdS}}{{\text{AdS}}} \def\scriptscriptstyle{\scriptscriptstyle} \def${\mathscr I}^+${${\mathscr I}^+$} \def{\cal R}{{\cal R}} \def{\epsilon}{{\epsilon}} \def${\cal S}${${\cal S}$} \def\Sigma {\Sigma } \def\sigma {\sigma } \def\sigma^{0} {\sigma^{0} } \def\Psi^{0} {\Psi^{0} } \def\partial{\partial} \def{\bar z}{{\bar z}} \def{\bar w}{{\bar w}} \def8\pi G_N {\mathcal T}{8\pi G_N {\mathcal T}} \def{(0)}{{(0)}} \def{(1)}{{(1)}} \def{(2)}{{(2)}} \def{\cal L}{{\cal L}} \def{\cal 
O}}\def\cv{{\cal V}{{\cal O}}\def\cv{{\cal V}} \def\nabla{\nabla} \def${\mathscr I}^+_+${${\mathscr I}^+_+$} \def\<{\langle } \def\>{\rangle } \def{\bar w}{{\bar w}} \def\mathcal{H}{\mathcal{H}} \def\begin{eqnarray}{\begin{eqnarray}} \def\end{eqnarray}{\end{eqnarray}} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\begin{align}{\begin{align}} \def\end{align}{\end{align}} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\begin{eqnarray}{\begin{eqnarray}} \def\end{eqnarray}{\end{eqnarray}} \def\begin{eqnarray*}{\begin{eqnarray*}} \def\mathcal{O}{\mathcal{O}} \def\mbox{sech}{\mbox{sech}} \def\tilde{n}{\tilde{n}} \def\tau_0{\tau_0} \def\mathcal{W}{\mathcal{W}} \def\nabla{\nabla} \def\mathfrak{u}{\mathfrak{u}} \defi_1 \cdots i_s{i_1 \cdots i_s} \def\mathcal{I}^+{\mathcal{I}^+} \def\dim#1{\lbrack\!\lbrack #1 \rbrack\!\rbrack } \def\chi\!\cdot\!\chi{\chi\!\cdot\!\chi} \defa{a} \def\bar{c}{\bar{c}} \def\partial{\partial} \def\kappa{\kappa} \def\text{\begin{footnotesize}$~{}^\circ_\circ~$\end{footnotesize}}{\text{\begin{footnotesize}$~{}^\circ_\circ~$\end{footnotesize}}} \def\scriptscriptstyle{\scriptscriptstyle} \def{e.g.}\ {{e.g.}\ } \def{ i.e.}\ {{ i.e.}\ } \def\bar{\theta}{\bar{\theta}} \def\textrm{d}{\textrm{d}} \def\bar{\mu}{\bar{\mu}} \def{\hat {\cal A}}{{\hat {\cal A}}} \def{\cal M}{{\cal M}} \def{\mathbb R}{{\mathbb R}} \def\[{\big[} \def\]{\big]} \def\partial{\partial} \def\boldsymbol{p}{\boldsymbol{p}} \def\mathcal{V}{\mathcal{V}} \def{\vec x}{{\vec x}} \def\partial{\partial} \def$\cal P_O${$\cal P_O$} \setcounter{tocdepth}{1} \begin{document} \begin{titlepage} \unitlength = 1mm \ \\ \vskip 3cm \begin{center} {\LARGE{\textsc{A $d$-Dimensional Stress Tensor for Mink$_{d+2}$ Gravity}}} \vspace{0.8cm} Daniel Kapec$^\dagger$ and Prahar Mitra$^{\dagger *}$ \vspace{1cm} {\it $^\dagger$Center for the Fundamental Laws of Nature, Harvard University,\\ Cambridge, MA 02138, USA} {\it $^*$School of Natural Sciences, 
Institute for Advanced Study,\\ Princeton, NJ 08540, USA} \vspace{0.8cm} \begin{abstract} We consider the tree-level scattering of massless particles in $(d+2)$-dimensional asymptotically flat spacetimes. The ${\mathcal S}$-matrix elements are recast as correlation functions of local operators living on a space-like cut ${\mathcal M}_d$ of the null momentum cone. The Lorentz group $SO(d+1,1)$ is nonlinearly realized as the Euclidean conformal group on ${\mathcal M}_d$. Operators of non-trivial spin arise from massless particles transforming in non-trivial representations of the little group $SO(d)$, and distinguished operators arise from the soft-insertions of gauge bosons and gravitons. The leading soft-photon operator is the shadow transform of a conserved spin-one primary operator $J_a$, and the subleading soft-graviton operator is the shadow transform of a conserved spin-two symmetric traceless primary operator $T_{ab}$. The universal form of the soft-limits ensures that $J_a$ and $T_{ab}$ obey the Ward identities expected of a conserved current and energy momentum tensor in a Euclidean CFT$_d$, respectively. \end{abstract} \vspace{1.0cm} \end{center} \end{titlepage} \pagestyle{empty} \pagestyle{plain} \pagenumbering{arabic} \tableofcontents \section{Introduction } The light-like boundary of asymptotically flat spacetimes, ${\mathcal I} = {\mathcal I}^+ \cup {\mathcal I}^-$, is a null cone with a (possibly singular) vertex at spatial infinity. Massless excitations propagating in such a spacetime pass through ${\mathcal I}$ at isolated points on the celestial sphere. Guided by the holographic principle, one might hope that the ${\mathcal S}$-matrix for the scattering of massless particles in asymptotically flat spacetimes in $(d+2)$-dimensions might be reexpressed as a collection of correlation functions of local operators on the celestial sphere $S^{d}$ at null infinity, with operator insertions at the points where the particles enter or exit the spacetime. 
The Lorentz group would then be realized as the group of conformal motions of the celestial sphere, and the Lorentz covariance of the ${\mathcal S}$-matrix would guarantee that the local operators have well-defined transformation laws under the action of the Euclidean conformal group $SO(d+1,1)$. On these general grounds one expects the massless ${\mathcal S}$-matrix to display some of the features of a $d$-dimensional Euclidean conformal field theory (CFT$_d$). It has recently become possible to make some of these statements more precise in four dimensions, due in large part to Strominger's infrared triangle that relates soft theorems, asymptotic symmetry groups and memory effects \cite{Barnich:2010eb,Strominger:2013lka,Strominger:2013jfa,He:2014laa,Adamo:2014yya,Geyer:2014lca,Kapec:2014opa,He:2014cra,Lysov:2014csa,Campiglia:2014yka,Strominger:2014pwa,Kapec:2014zla,Mohd:2014oja,Campiglia:2015yka,Kapec:2015vwa,He:2015zea,Campiglia:2015qka,Kapec:2015ena,Strominger:2015bla,Campiglia:2015kxa,Dumitrescu:2015fej,Kapec:2016aqd,Campiglia:2016jdj,Campiglia:2016hvg,Conde:2016csj,Gabai:2016kuf,Campiglia:2016efb,Kapec:2016jld,Conde:2016rom,Pasterski:2016qvg,He:2017fsb,Strominger:2017zoo,Campiglia:2017dpg,Nande:2017dba,Pasterski:2017kqt,Kapec:2017tkm,Pasterski:2017ylz,Mao:2017wvx,Choi:2017bna,Laddha:2017vfh}. While the specific details of a putative holographic formulation are expected to be model dependent, it should be possible to make robust statements (primarily regarding symmetries) based on universal properties of the ${\mathcal S}$-matrix. One interesting class of universal statements about the ${\mathcal S}$-matrix concerns the so-called soft-limits \cite{Bloch:1937pw,Low:1954kd,GellMann:1954kc,Low:1958sn,Weinberg:1965nx,Burnett:1967km,Gross:1968in,Jackiw:1968zza} of scattering amplitudes. 
In the limit when the wavelength of an external gauge boson or graviton becomes much larger than any scale in the scattering process, the ${\mathcal S}$-matrix factorizes into a universal soft operator (controlled by the soft particle and the quantum numbers of the hard particles) acting on the amplitude without the soft insertion. This sort of factorization is reminiscent of a Ward identity, and indeed in four dimensions the soft-photon, soft-gluon, and soft-graviton theorems have been recast in the form of Ward identities for conserved operators in a putative CFT$_2$ \cite{Nair:1988bq,Strominger:2013lka,He:2015zea,Cheung:2016iub,Kapec:2016jld,He:2017fsb,Nande:2017dba}. Most importantly for the present work, in \cite{Kapec:2016jld,He:2017fsb, Cheung:2016iub} an operator was constructed from the subleading soft-graviton theorem whose insertion into the four dimensional ${\mathcal S}$-matrix reproduces the Virasoro Ward identities of a CFT$_2$ energy momentum tensor. The subleading soft-graviton theorem holds in all dimensions \cite{Cachazo:2014fwa,Schwab:2014xua,Bern:2014oka,Broedel:2014fsa,Sen:2017xjn,Sen:2017nim,Chakrabarti:2017ltl}, so it should be possible to construct an analogous operator in any dimension. We will see that this is indeed the case, and that the construction is essentially fixed by Lorentz (conformal) invariance. The organization of this paper is as follows. In section \ref{sec:kinematics} we establish our conventions for massless particle kinematics and describe the map from the $(d+2)$-dimensional ${\mathcal S}$-matrix to a set of $d$-dimensional ``celestial correlators'' defined on a space-like cut of the null momentum cone. Section \ref{sec:lorentztransform} describes the realization of the Euclidean conformal group on these correlation functions in terms of the embedding space formalism. 
Section \ref{sec:currents} outlines the construction of conserved currents -- namely the conserved $U(1)$ current and the stress tensor -- in the boundary theory and their relations to the leading soft-photon and subleading soft-graviton theorems. Section \ref{sec:comments} concludes with a series of open questions. In appendix \ref{app:coordinates}, we briefly discuss the bulk space-time interpretation of our results and their relations to previous work. \section{Massless Particle Kinematics }\label{sec:kinematics} The basic observable in asymptotically flat quantum gravity is the ${\mathcal S}$-matrix element \begin{equation} {\mathcal S} = \langle \; \text{out} \; |\; \text{in} \; \rangle \end{equation} between an incoming state on past null infinity $({\mathcal I}^-)$ and an outgoing state on future null infinity $({\mathcal I}^+)$. The perturbative scattering states in asymptotically flat spacetimes are characterized by collections of well separated, non-interacting particles.\footnote{In four dimensions, the probability to scatter into a state with a finite number of gauge bosons or gravitons is zero due to infrared divergences \cite{Bloch:1937pw}. In higher dimensions, infrared divergences are absent and one can safely consider the usual Fock space basis of scattering states.} Each massless particle is characterized by a null momentum $p^\mu$ and a representation of the little group $SO(d)$, as well as a collection of other quantum numbers such as charge, flavor, etc. Null momenta are constrained to lie on the future light cone ${\mathcal C}^+$ of the origin in momentum space ${\mathbb R}^{d+1,1}$, \begin{equation}\label{cpdef} {\mathcal C}^+= \{ p^\mu \in \mathbb{R}^{d+1,1} \,\big|\, p^2=0 \, , p^0 > 0 \} \; . 
\end{equation} A convenient parametrization for the momentum, familiar from the embedding space formalism in conformal field theories \cite{SimmonsDuffin:2012uy,Penedones:2016voo}, is given by \begin{equation} \label{momentum} p^\mu(\omega,x)=\omega \Omega(x)\hat{p}^\mu(x) \; , \hspace{.5 in} \hat{p}^\mu(x)= \left(\frac{1+x^2}{2},x^a,\frac{1-x^2}{2} \right) \; , \hspace{.25 in} \omega \geq 0\;, \;\; x^a \in \mathbb{R}^d \; , \end{equation} where $x^2= x^a x_a = \delta_{ab}x^a x^b $. The metric on this null cone is degenerate and is given by \begin{equation} ds^2_{{\mathcal C}^+} =dp^\mu dp_\mu= 0 d\omega^2 + \omega^2\Omega(x)^2 dx^a dx_a \; . \end{equation} For a fixed $\omega$, this parametrization specifies a $d$-dimensional space-like cut ${\mathcal M}_d$ of the future light cone with a conformally flat metric induced from the flat Lorentzian metric on $\mathbb{R}^{d+1,1}$: \begin{equation}\label{Mdmetric} ds_{{\mathcal M}_d}^2 = \Omega(x)^2 dx^a dx_a \;. \end{equation} $\Omega(x)$ defines the conformal factor on ${\mathcal M}_d$. In most of what follows we will choose $\Omega(x)=1$ for computational simplicity, although the generalization to an arbitrary conformally flat Euclidean cut is straightforward.\footnote{Other choices of $\Omega(x)$ have also proved useful in previous analyses. In particular, the authors of \cite{He:2014laa,Kapec:2015ena,Cachazo:2014fwa,He:2014cra,Strominger:2013jfa} choose $\Omega(x)=2(1+x^2)^{-1}$, yielding the round metric on $S^d$. For non-constant $\Omega(x)$, $d$-dimensional partial derivatives are simply promoted to covariant derivatives, and powers of the Laplacian are replaced by their conformally covariant counterparts, the GJMS operators \cite{JLMS:JLMS0557}. 
} The $(d+2)$-dimensional Lorentz-invariant measure takes the form \begin{equation} \int \frac{d^{d+1}p}{p^0} = \int d^dx\int d\omega \omega^{d-1}\; , \end{equation} while the Lorentzian inner product is given by \begin{equation} - 2 \hat{p}(x_1)\cdot \hat{p}(x_2)=(x_1-x_2)^2 \; . \end{equation} Massless particles of spin $s$ can be described by symmetric traceless fields \begin{equation} \Phi_{\mu_1 \dots \mu_s}(X) = \sum_{a_i}\int \frac{d^{d+1}p}{(2\pi)^{d+1}}\frac{1}{2\omega_p}{\varepsilon}_{\mu_1 \ldots \mu_s}^{a_1\ldots a_s}(p) \left[ {\mathcal O}_{a_1\dots a_s}(p)e^{ip \cdot X} + {\mathcal O}^\dagger_{a_1\dots a_s}(p) e^{-ip\cdot X} \right] \end{equation} satisfying the equations \begin{equation} \Box_X\Phi_{\mu_1 \dots \mu_s}(X) = 0 \; , \hspace{.5 in} \partial_{\mu_1 }\Phi^{\mu_1}{}_{\mu_2 \dots \mu_s}(X)=0 \; . \end{equation} Under gauge transformations, \begin{equation} \Phi_{\mu_1 \dots \mu_s}(X) ~ \to~ \Phi_{\mu_1 \dots \mu_s}(X) + \partial_{(\mu_1} \lambda_{\mu_2 \dots \mu_s)}(X) \; . \end{equation} We will work in the gauge \begin{equation} n^{\mu_{1}}\Phi_{\mu_1 \dots \mu_s}(X) = 0 \; , \hspace{.5 in} n^\mu = (1,0^a,-1) \; . \end{equation} A natural basis for the vector representation of the little group $SO(d)$ is given in terms of the $d$ polarization vectors \begin{equation} \label{polarization} {\varepsilon}_a^\mu(x)\equiv \partial_a \hat{p}^\mu(x) = \big( x_a , \delta_a^b, - x_a \big) \;. \end{equation} These are orthogonal to both $n$ and ${\hat p}$ and satisfy \begin{equation} \begin{split} {\varepsilon}_a(x) \cdot {\varepsilon}_b (x) = \delta_{ab}\; , \hspace{.5 in} {\varepsilon}^a_\mu (x) {\varepsilon}^\nu_a (x) = \Pi^\nu_\mu (x) \equiv \delta^\nu_\mu + n_\mu {\hat p}^\nu(x) + n^\nu {\hat p}_\mu(x) \; . \end{split} \end{equation} We also note the property \begin{equation} \begin{split} {\hat p} (x) \cdot {\varepsilon}_a (x') &= x_a - x'_a \; . 
\end{split} \end{equation} The polarization tensors for higher spin representations of the little group can be constructed from the spin-1 polarization forms. For instance, the graviton's polarization tensor is given by \begin{equation} {\varepsilon}_{\mu \nu}^{ab}(x)=\frac12 \left[ {\varepsilon}_\mu^a(x){\varepsilon}_\nu^b(x) + {\varepsilon}_\mu^b (x){\varepsilon}_\nu^a(x)\right] - \frac{1}{d} \delta^{ab} \Pi_{\mu\nu}(x) \; . \end{equation} The Fock space of massless scattering states is generated by the algebra of single particle creation and annihilation operators satisfying the standard commutation relations \begin{equation} \big[ {\mathcal O}_a(p),{\mathcal O}_b (p')^\dagger \big] = (2\pi)^{d+1} \delta_{a b } (2p^0) \delta^{(d+1)}(\vec{p} - \vec{p}\,' ) \; . \end{equation} We can rewrite this relation in terms of our parametrization \eqref{momentum} of the momentum light cone \begin{equation} \big[ {\mathcal O}_a(\omega,x), {\mathcal O}_b (\omega', x')^\dagger \big] = 2 (2\pi)^{d+1} \delta_{ab} \omega^{1-d} \delta( \omega - \omega' ) \delta^{(d)}(x-x') \; . \end{equation} Note that in terms of the commutation relations of local operators defined on the light cone, the energy direction actually appears space-like rather than time-like.\footnote{This is also the case for the null direction on ${\mathcal I}$ in asymptotic quantization.} The creation and annihilation operators can be viewed as operators inserted at specific points of ${\mathcal M}_d$ carrying an additional quantum number $\omega$, so that the ${\mathcal S}$-matrix takes the form of a conformal correlator of primary operators. 
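The algebraic identities above are easy to verify numerically. The following sketch (our own illustration, taking $d=4$ as an example and the metric $\eta = \mathrm{diag}(-1,+1,\dots,+1)$ with component ordering $(p^0, p^a, p^{d+1})$) checks that $\hat p(x)$ is null, that $-2\hat p(x_1)\cdot\hat p(x_2) = (x_1-x_2)^2$, and the stated polarization-vector relations:

```python
import numpy as np

d = 4                                    # example boundary dimension
eta = np.diag([-1.0] + [1.0] * (d + 1))  # mostly-plus metric on R^{d+1,1}

def phat(x):
    """Null-cone embedding: phat^mu(x) = ((1+x^2)/2, x^a, (1-x^2)/2)."""
    x2 = float(x @ x)
    return np.concatenate(([(1 + x2) / 2], x, [(1 - x2) / 2]))

def eps(x, a):
    """Polarization vector eps_a^mu = d_a phat^mu = (x_a, delta_a^b, -x_a)."""
    e = np.zeros(d + 2)
    e[0], e[1 + a], e[d + 1] = x[a], 1.0, -x[a]
    return e

def dot(u, v):
    """Lorentzian inner product on R^{d+1,1}."""
    return float(u @ eta @ v)

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(d), rng.standard_normal(d)
```

With these definitions, $\hat p(x)\cdot\hat p(x)=0$, $\varepsilon_a(x)\cdot\varepsilon_b(x)=\delta_{ab}$, and $\hat p(x)\cdot\varepsilon_a(x') = x_a - x'_a$ all hold identically in $x$, as a short computation with the explicit components confirms.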
In other words, the amplitude with $m$ incoming and $n-m$ outgoing particles\footnote{Here, we have suppressed all other quantum numbers that label the one-particle states, such as the polarization vectors, flavor indices, or charge quantum numbers.} \begin{equation} {\mathcal A}_{n,m}=\langle p_{m+1}, \dots , p_{n} | p_1, \dots , p_m\rangle \; \end{equation} can be equivalently represented as a correlation function on ${\mathcal M}_d$ \begin{equation} {\mathcal A}_{n}= \avg{ {\mathcal O}_1(p_1) \dots {\mathcal O}_{n}(p_n ) }_{{\mathcal M}_d} = \avg{ {\mathcal O}_1(\omega_1, x_1) \dots {\mathcal O}_{n}(\omega_n,x_{n}) }_{{\mathcal M}_d} \; . \end{equation} In this representation, outgoing states have $\omega >0$ and ingoing states have $\omega <0$. In the rest of this paper, we will freely interchange the notation $p_i \Leftrightarrow (\omega_i,x_i)$ to describe the insertion of local operators. In appendix \ref{app:coordinates}, we demonstrate that the space-like cut ${\mathcal M}_d$ of the momentum cone is naturally identified with the cross sectional cuts of $\mathcal{I}^+$. We construct the bulk coordinates whose limiting metric on future null infinity is that of a null cone with cross-sectional metric \eqref{Mdmetric}. This provides a holographic interpretation of our construction, recasting scattering amplitudes in asymptotically flat spacetimes in terms of a ``boundary'' theory that lives on $\mathcal{I}^+$. \section{Lorentz Transformations and the Conformal Group }\label{sec:lorentztransform} In this section, we make explicit the map from $(d+2)$-dimensional momentum space ${\mathcal S}$-matrix elements to conformal correlators on the Euclidean manifold ${\mathcal M}_d$. The setup is mathematically similar to the embedding space formalism, although in this case the $(d+2)$-dimensional ``embedding space'' is the physical momentum space rather than merely an auxiliary ambient construction. 
The Lorentz group $SO(d+1,1)$ with generators $M_{\mu \nu}$ acts linearly on momentum space vectors $p^\mu\in \mathbb{R}^{d+1,1}$. The isomorphism with the conformal group of ${\mathcal M}_d$ is given by the identifications \begin{equation}\label{map} J_{ab}=M_{ab}\; , \qquad T_a = M_{0,a} - M_{d+1,a}\; , \qquad D = M_{d+1,0}\; , \qquad K_a = M_{0,a} + M_{d+1,a}\; . \end{equation} The $J_{ab}$ generate $SO(d)$ rotations, $D$ is the dilation operator, and $T_a$ and $K_a$ are the generators of translations and special conformal transformations, respectively. These operators satisfy the familiar conformal algebra \begin{equation} \begin{split} [J_{ab}, J_{cd}] &= i(\delta_{ac}J_{bd} + \delta_{bd}J_{ac} - \delta_{bc}J_{ad} - \delta_{ad}J_{bc}) \; , \\ [J_{ab},T_c] &= i(\delta_{ac}T_b - \delta_{bc}T_a) \; ,\\ [J_{ab}, K_c] &= i(\delta_{ac}K_b - \delta_{bc}K_a)\; ,\\ [D,T_a] &= -iT_a \; ,\\ [D,K_a] &= i K_a \; ,\\ [T_a,K_b] &=-2i(\delta_{ab}D+J_{ab})\; . \end{split} \end{equation} Equation \eqref{map} describes the precise map between the linear action of Lorentz transformations on $p^\mu$ and nonlinear conformal transformations on $(\omega,x)$. The latter amount to transformations $x \to x'(x)$ for which \begin{equation} \frac{\partial x'^{c}}{\partial x^a}\frac{\partial x'^{d}}{\partial x^b}\delta_{cd}= \gamma(x)^2 \delta_{ab} \end{equation} along with \begin{equation} \omega \to \omega'= \frac{\omega}{\gamma(x)} \;. 
\end{equation} Using \eqref{map}, the conformal properties of the operators ${\mathcal O}(\omega,x)$ (we suppress spin labels for convenience) follow from their Lorentz transformations, \begin{align} [{\mathcal O} (p), M_{\mu \nu}]&= {\mathcal L}_{\mu \nu}{\mathcal O} (p) + {\mathcal S}_{\mu \nu} \cdot {\mathcal O}(p) \; , \;\;\;\;\; [{\mathcal O} (p), P_\mu]=-p_\mu {\mathcal O} (p) \; , \end{align} where \begin{equation} {\mathcal L}_{\mu \nu}= - i \left( p_\mu \frac{\partial}{\partial p^\nu} - p_\nu \frac{\partial}{\partial p^\mu} \right) \end{equation} and ${\mathcal S}_{\mu \nu}$ denotes the spin-$s$ representation of the Lorentz group. We would like to rewrite these relations in a way that manifests the action on ${\mathcal M}_d$. First, we note that \begin{equation} \frac{\partial \omega}{\partial p^a}=\frac{2x_a}{1+x^2} \; , \;\;\;\; \omega \frac{\partial x^a}{\partial p^b}=\delta_b^a-\frac{2x^ax_b}{1+x^2} \; , \;\;\;\; \frac{\partial \omega}{\partial p^{d+1}}=\frac{2}{1+x^2} \; , \;\;\;\; \omega \frac{\partial x^a}{\partial p^{d+1}}=-\frac{2x^a}{1+x^2}\; . \end{equation} It follows that \begin{equation} \begin{split}\label{Lmnexp} &{\mathcal L}_{0,a}=ix_a [\omega \partial_{\omega} -x^b\partial_b] + \frac{i}{2}(1+x^2)\partial_a \; , \hspace{.75 in}{\mathcal L}_{0,d+1}= i[\omega \partial_{\omega} -x^a\partial_a] \; ,\\ &{\mathcal L}_{a,d+1}=-ix_a[\omega\partial_{\omega}-x^b\partial_b]+\frac{i}{2}(1-x^2)\partial_a \; , \hspace{.5 in}{\mathcal L}_{ab}=-i[x_a\partial_b -x_b\partial_a] \; . \end{split} \end{equation} The action of the spin matrix ${\mathcal S}_{\mu \nu}$ can be conveniently expressed in terms of the polarization vectors (\ref{polarization}). For instance, the action on a spin-1 state is given by \begin{equation} [{\mathcal S}_{\mu \nu}]_a{}^{b}=-i \big[ \varepsilon_{\mu a}\varepsilon_{\nu}^{b} -\varepsilon_{\nu a}\varepsilon_{\mu}^{b} \big] + \varepsilon^\rho_a \mathcal{L}_{\mu \nu}\varepsilon_\rho^{b} \; . 
\end{equation} In general one finds \begin{equation}\label{Smnexp} {\mathcal S}_{0,d+1}=0\;, \hspace{.5 in}{\mathcal S}_{0a}={\mathcal S}_{d+1,a}=x^b{\mathcal S}_{ab} \;, \end{equation} where ${\mathcal S}_{ab}$ is the representation of the massless little group $SO(d)$. The action of the conformal group on the creation and annihilation operators is then given by \begin{equation}\label{Commutators} \begin{split} &[{\mathcal O}(\omega,x), T_a]= i\partial_a {\mathcal O}(\omega,x) \; ,\\ &[{\mathcal O}(\omega,x), J_{ab}] = -i ( x_a\partial_b -x_b\partial_a ) {\mathcal O}(\omega,x) + {\mathcal S}_{ab} \cdot {\mathcal O}(\omega,x) \; ,\\ &[{\mathcal O}(\omega,x), D]=i ( x^a\partial_a -\omega \partial_{\omega} ) {\mathcal O}(\omega,x) \; ,\\ &[{\mathcal O}(\omega,x), K_a] = i ( x^2\partial_a - 2 x_a x^b \partial_b + 2x_a\omega \partial_{\omega} ) {\mathcal O}(\omega,x) + 2x^b {\mathcal S}_{ab} \cdot {\mathcal O}(\omega,x) \; . \end{split} \end{equation} We recognize these commutation relations as the defining properties of a spin-$s$ conformal primary operator, with a non-standard dilation eigenvalue \begin{equation} \Delta = -\omega \partial_{\omega} \; . \end{equation} That $\Delta$ is realized as a derivative reflects the fact that energy eigenstates do not diagonalize the dilation operator, which instead translates the space-like cut ${\mathcal M}_d$ of the momentum cone along its null direction.\footnote{It is possible to obtain standard conformal primary operators via a Mellin transform, ${\mathcal O}(\Delta,x) = \int_{{\mathscr C}} d\omega \omega^{\Delta-1} {\mathcal O}(\omega,x)$ for some contour ${\mathscr C}$ in the complex $\omega$ plane.} \section{Conserved Currents and Soft Theorems}\label{sec:currents} The operator content and correlation functions of the theory living on ${\mathcal M}_d$ are highly dependent on the spectrum and interactions of the $(d+2)$-dimensional theory under consideration.
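As a consistency check of the identifications \eqref{map} and the commutator algebra displayed in the previous section, the generators can be realized as explicit matrices in the vector representation and the relations verified numerically. The sketch below works at $d=3$; the matrix convention for $(M_{\mu\nu})^\rho{}_\sigma$ is our assumption, chosen so that the commutators come out with the signs quoted above:

```python
import numpy as np

# Vector representation of so(d+1,1) on momenta p^mu, mu = 0, 1..d, d+1 (here d = 3).
# Sign convention (an assumption; conventions vary):
#   (M_{mu nu})^rho_sigma = i (eta_{mu sigma} delta^rho_nu - eta_{nu sigma} delta^rho_mu)
d = 3
dim = d + 2
eta = np.diag([-1.0] + [1.0] * (d + 1))

def M(mu, nu):
    out = np.zeros((dim, dim), dtype=complex)
    for rho in range(dim):
        for sig in range(dim):
            out[rho, sig] = 1j * (eta[mu, sig] * (rho == nu) - eta[nu, sig] * (rho == mu))
    return out

def comm(A, B):
    return A @ B - B @ A

# The identifications of the map: a, b = 1..d label directions along M_d
J = {(a, b): M(a, b) for a in range(1, d + 1) for b in range(1, d + 1)}
T = {a: M(0, a) - M(d + 1, a) for a in range(1, d + 1)}
K = {a: M(0, a) + M(d + 1, a) for a in range(1, d + 1)}
Dil = M(d + 1, 0)

rng = range(1, d + 1)
for a in rng:
    assert np.allclose(comm(Dil, T[a]), -1j * T[a])      # [D, T_a] = -i T_a
    assert np.allclose(comm(Dil, K[a]), 1j * K[a])       # [D, K_a] = +i K_a
    for b in rng:
        # [T_a, K_b] = -2i (delta_ab D + J_ab)
        assert np.allclose(comm(T[a], K[b]), -2j * ((a == b) * Dil + J[(a, b)]))
        for c in rng:
            # [J_ab, T_c] and [J_ab, K_c] rotate the vector index
            assert np.allclose(comm(J[(a, b)], T[c]), 1j * ((a == c) * T[b] - (b == c) * T[a]))
            assert np.allclose(comm(J[(a, b)], K[c]), 1j * ((a == c) * K[b] - (b == c) * K[a]))
            for e in rng:
                # [J_ab, J_ce] = i (d_ac J_be + d_be J_ac - d_bc J_ae - d_ae J_bc)
                rhs = 1j * ((a == c) * J[(b, e)] + (b == e) * J[(a, c)]
                            - (b == c) * J[(a, e)] - (a == e) * J[(b, c)])
                assert np.allclose(comm(J[(a, b)], J[(c, e)]), rhs)
print("conformal algebra verified in the vector representation")
```

Every relation in the displayed algebra, including $[T_a,K_b]=-2i(\delta_{ab}D+J_{ab})$, is checked exhaustively over the index ranges.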
However, the universal properties of the $(d+2)$-dimensional ${\mathcal S}$-matrix are expected to translate into general, model independent features of the ``boundary theory''. In this section, we explore the consequences of the universal soft factorization properties of ${\mathcal S}$-matrix elements. Soft factorization formulas closely resemble Ward identities, and indeed many soft theorems are known to be intimately related to symmetries of the $\mathcal{S}$-matrix. The existence of the soft theorems should therefore enable one to construct associated conserved currents for the theory living on $\mathcal{M}_d$. The appropriate currents were constructed for $d=2$ in \cite{Strominger:2013lka,He:2015zea,Kapec:2016jld,He:2017fsb,Nande:2017dba,Cheung:2016iub}. Here, we generalize these results to $d>2$. \subsection{Leading Soft-Photon Theorem and the Conserved $U(1)$ Current }\label{softphoton} The leading soft-photon theorem is a universal statement about the behavior of ${\mathcal S}$-matrix elements in the limit that an external photon's momentum tends to zero. It is model independent, exists in any dimension, and states that \begin{equation}\label{lsp} \avg{ {\mathcal O}_a(q) {\mathcal O}_1(p_1) \dots {\mathcal O}_n(p_n) } ~ \stackrel{q^0 \to 0}{ \longrightarrow} ~ \sum_{k=1}^n Q_k \frac{{\varepsilon}_a \cdot p_k}{q \cdot p_k} \avg{ {\mathcal O}_1(p_1) \dots {\mathcal O}_n(p_n) } \; , \end{equation} where ${\mathcal O}_a(\omega,x)$ creates an outgoing photon of momentum $p(\omega,x)$ and polarization ${\varepsilon}^\mu_a(x)$, and $Q_k$ is the charge of the $k$-th particle. We will first define the ``leading soft-photon operator'' \begin{equation} S_a(x) = \lim_{\omega \to 0} \omega {\mathcal O}_a(\omega,x) \;. \end{equation} This is a conformal primary operator with $(\Delta,s)=(1,1)$. 
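The geometric origin of the logarithmic insertions that follow can be seen directly: under the parametrization of the momentum cone used above (assumed here, as in the earlier sketch, to be $\hat p^\mu(x)=\tfrac12(1+x^2,2x^a,1-x^2)$), the Weinberg soft factor in \eqref{lsp} collapses to a pure gradient on ${\mathcal M}_d$. A sympy sketch at $d=2$:

```python
import sympy as sp

# The soft factor eps_a . p_k / (q . p_k) as a gradient on M_d (a sketch at d = 2).
# Assumption: phat^mu(x) = (1/2)(1 + x^2, 2 x^a, 1 - x^2).
d = 2
eta = sp.diag(-1, *([1] * (d + 1)))

def dot(u, v):
    return sum(eta[m, m] * u[m] * v[m] for m in range(d + 2))

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
w, wk = sp.symbols('omega omega_k', positive=True)

def phat(u1, u2):
    usq = u1**2 + u2**2
    return [sp.Rational(1, 2) * (1 + usq), u1, u2, sp.Rational(1, 2) * (1 - usq)]

q = [w * c for c in phat(x1, x2)]              # soft photon momentum, puncture x
pk = [wk * c for c in phat(y1, y2)]            # hard momentum, puncture x_k = y
eps1 = [sp.diff(c, x1) for c in phat(x1, x2)]  # polarization eps_{a=1}^mu(x)

soft_factor = dot(eps1, pk) / dot(q, pk)
grad_log = sp.diff(sp.log((x1 - y1)**2 + (x2 - y2)**2), x1)
# eps_a . p_k / (q . p_k) = (1/omega) d_a log (x - x_k)^2, so the omega-weighted
# soft limit inserts a total derivative of a logarithm.
assert sp.simplify(soft_factor - grad_log / w) == 0
print("soft factor = (1/omega) d_a log (x - x_k)^2")
```

The energy dependence of the hard leg drops out of the ratio, which is why the resulting insertion depends only on the punctures $x$, $x_k$ and the charges.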
Insertions of this operator are controlled by the leading soft-photon theorem \eqref{lsp} and take the form \begin{equation} \begin{split}\label{Sins} \avg{ S_a(x) {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n,x_n) } &= \partial_a \sum_{k=1}^n Q_k \log \big[(x-x_k)^2\big] \avg{ {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n, x_n) } . \end{split} \end{equation} Note that $S_a(x)$ satisfies \begin{equation} \begin{split}\label{Saprop} \partial_a S_b - \partial_b S_a = 0 \; \end{split} \end{equation} identically, without contact terms. In even dimensions,\footnote{From this point on, we will consider only the even dimensional case in order to avoid discussion of fractional powers of the Laplacian.} the leading soft-photon theorem is known \cite{He:2014cra,Kapec:2014zla,Mohd:2014oja,Campiglia:2015qka,Kapec:2015ena} to be completely equivalent to the invariance of the ${\mathcal S}$-matrix under a group of angle dependent $U(1)$ gauge transformations with non-compact support. In $d=2$, this symmetry is generated by the action of a holomorphic boundary current $J_z$ satisfying the appropriate Kac-Moody Ward identities (see \cite{He:2015zea}). In higher dimensions, one consequently expects to encounter a conformal primary operator $J_a(x)$ with $(\Delta,s) = (d-1,1)$ satisfying the Ward identity \begin{equation} \begin{split}\label{Jins} \avg{ \partial^b J_b(y) {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n,x_n) } &= \sum_{k=1}^n Q_k \delta^{(d)} ( y - x_k ) \avg{ {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n, x_n) } \; . \end{split} \end{equation} Our goal is to construct the conserved current $J_a(x)$ from the soft operator $S_a(x)$. The inverse problem -- constructing $S_a(x)$ from an operator $J_a(x)$ satisfying (\ref{Jins}) -- is easily solved. 
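The manipulations that follow rest on two elementary kernel identities: the mixed double derivative of the logarithmic kernel produces the conformally covariant tensor $\delta_{ab}-2(x-y)_a(x-y)_b/(x-y)^2$, and in even $d$ powers of the Laplacian convert the logarithmic kernel into a power law. Both can be checked symbolically (a sketch at $d=3$ and $d=4$):

```python
import sympy as sp

# Two elementary kernel identities (a sketch):
x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', real=True)
y1, y2, y3 = sp.symbols('y1 y2 y3', real=True)

# (a) d = 3: d_{y^b} d_{x^a} log (x-y)^2 = -2 I_ab(x-y) / (x-y)^2,
#     with I_ab(z) = delta_ab - 2 z_a z_b / z^2
xs, ys = [x1, x2, x3], [y1, y2, y3]
r2 = sum((xi - yi)**2 for xi, yi in zip(xs, ys))
for a in range(3):
    for b in range(3):
        deriv = sp.diff(sp.log(r2), xs[a], ys[b])
        I_ab = (1 if a == b else 0) - 2 * (xs[a] - ys[a]) * (xs[b] - ys[b]) / r2
        assert sp.simplify(deriv + 2 * I_ab / r2) == 0

# (b) d = 4: one power of (-Box) converts the log kernel into a power law,
#     (1/((4 pi)^2 Gamma(2))) (-Box) d_a log x^2 = (Gamma(2)/(2 pi^2)) x_a / |x|^4
xs4 = [x1, x2, x3, x4]
xsq = sum(xi**2 for xi in xs4)
lhs = -sum(sp.diff(sp.log(xsq), x1, xi, xi) for xi in xs4) / (16 * sp.pi**2)
rhs = x1 / (2 * sp.pi**2 * xsq**2)
assert sp.simplify(lhs - rhs) == 0
print("kernel identities verified")
```

Identity (a) underlies the shadow-transform form of the soft operator derived next; identities of type (b) are what make a local relation between the $\Delta=1$ and $\Delta=d-1$ operators possible in even dimensions.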
Integrating both sides of \eqref{Jins} against $\partial_a \log [ ( x - y )^2 ]$, we find \begin{equation} \begin{split} & \int d^d y \partial_a \log [ ( x - y )^2 ] \avg{ \partial^b J_b(y) {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n,x_n) } \\ &\qquad \qquad \qquad \qquad = \partial_a \sum_{k=1}^n Q_k \log [ ( x - x_k )^2 ] \avg{ {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n, x_n) } \\ &\qquad \qquad \qquad \qquad = \avg{ S_a(x) {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n,x_n) } \;. \end{split} \end{equation} We therefore identify\footnote{Note that this integral expression is insensitive to improvement terms of the form $J_a\to J_a + \partial^bK_{[ba]}$ which do not affect the Ward identity (\ref{Jins}). } \begin{equation} \begin{split} S_a(x) = \int d^d y \partial_a \log [ ( x - y )^2 ] \partial^b J_b(y) = 2 \int d^d y \frac{ {\mathcal I}_{ab} ( x - y ) }{ ( x - y )^2 } J^b(y) \;, \end{split} \end{equation} where ${\mathcal I}_{ab}(x-y)$ is the conformally covariant tensor \begin{equation} \begin{split} {\mathcal I}_{ab}(x-y) = \delta_{ab} - 2 \frac{ ( x - y )_a ( x - y )_b }{ ( x - y)^2 } \; . \end{split} \end{equation} This nonlocal relationship between the $\Delta=1$ primary $S_a$ and the $\Delta=d-1$ primary $J_a$ is known as a shadow transform. For a spin-$s$ operator of dimension $\Delta$, the shadow operator is given by \cite{SimmonsDuffin:2012uy} \begin{equation} \begin{split} {\widetilde {\mathcal O}}_{a_1 \dots a_s} (x) = \delta_{a_1 \dots a_s}^{b_1 \dots b_s} \int d^d y \frac{ {\mathcal I}_{b_1 c_1} ( x - y ) \dots {\mathcal I}_{b_s c_s}(x-y) }{ [ ( x - y)^2 ]^{d-\Delta} } {\mathcal O}^{c_1 \dots c_s} (y) \; .
\end{split} \end{equation} Here, $\delta_{a_1 \dots a_s}^{b_1 \dots b_s}$ is the invariant identity tensor in the spin-$s$ representation, \begin{equation} \begin{split} \delta_{a_1 \dots a_s}^{b_1 \dots b_s} = \delta^{\{a_1}_{\{b_1} \delta^{a_2}_{b_2} \dots \delta^{a_s\}}_{b_s\} } \; , \end{split} \end{equation} where the notation $\{ \, ,\}$ denotes the symmetric traceless projection on the indicated indices. The shadow transform is the unique integral transform that maps conformal primary operators with $(\Delta,s)$ onto conformal primary operators with $ ( d - \Delta , s )$. Given that $S_a$ has $(\Delta,s)=(1,1)$ while $J_a$ has $(\Delta,s)=(d-1,1)$ it seems natural to expect the appearance of the shadow transform. The shadow transform is, up to normalization \cite{SimmonsDuffin:2012uy, Rejon-Barrera:2015bpa}, its own inverse\footnote{The spatial integrals involved here are formally divergent and are regulated by the $i{\epsilon}$-prescription.} \begin{equation} \begin{split}\label{shadowinverse} {\widetilde {\widetilde {\mathcal O}}}_{a_1 \dots a_s} (x) = c(\Delta,s) {\mathcal O}_{a_1 \dots a_s}(x) \;, \qquad c ( \Delta,s) = \frac{ \pi^d (\Delta-1)(d-\Delta-1) \Gamma \left( \frac{d}{2} - \Delta \right) \Gamma \left( \Delta - \frac{d}{2} \right) }{ ( \Delta - 1 + s ) ( d - \Delta - 1 + s ) \Gamma ( \Delta )\Gamma ( d - \Delta ) } \; . \end{split} \end{equation} Using this, we can immediately write \begin{equation} \begin{split} S_a(x) = 2 {\widetilde J}_a(x) \; , \qquad J_a(x) = \frac{1}{ 2 c ( 1 , 1 ) } {\widetilde S}_a(x) \; . \end{split} \end{equation} Interestingly, the property \eqref{Saprop} allows one to obtain a local relation between $J_a(x)$ and $S_a(x)$ \begin{equation} \begin{split} J_a (x) &= \frac{1}{ ( 4 \pi )^{d/2} \Gamma ( d/2 ) } (-\Box)^{\frac{d}{2}-1} S_a(x) \; . 
\end{split} \end{equation} It is straightforward to verify that insertions of $J_a(x)$ are given by \begin{equation} \langle J_a(x) {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n,x_n)\rangle =\frac{\Gamma ( d / 2 )}{2\pi^{d/2}} \sum_{k=1}^n Q_k\frac{(x-x_k)_a}{ | x - x_k |^d } \avg{ {\mathcal O}_1(\omega_1,x_1)\dots {\mathcal O}_n(\omega_n,x_n) } \; \end{equation} and satisfy (\ref{Jins}). In summary, we find that the leading soft-photon theorem in any dimension implies the existence of a conserved current $J_a(x)$ on the spatial cut ${\mathcal M}_d$. This current is constructed as the shadow transform of the soft-photon operator $S_a(x)$. This correspondence is reminiscent of a similar construction in AdS/CFT, where again the presence of a massless bulk gauge field produces a dual conserved boundary current. There have been attempts to make this analogy more precise using a so-called ``holographic reduction'' of Minkowski space \cite{deBoer:2003vf,Cheung:2016iub}. The $(d+2)$-dimensional Minkowski space can be foliated by hyperboloids (and a null cone), each of which is invariant under the action of the Lorentz group. Inside the light cone, this amounts to a foliation using a family of Euclidean ${\text{AdS}}_{d+1}$, all sharing an asymptotic boundary given by the celestial sphere.\footnote{This construction seems more natural in momentum space, where it is only the interior of the light cone which is physically relevant, and one never needs to discuss the time-like de Sitter hyperboloids lying outside the light cone.} Performing a Kaluza-Klein reduction on the (time-like, non-compact) direction transverse to the ${\text{AdS}}_{d+1}$ slices decomposes the gauge field $A(X)$ in Minkowski space into a continuum of ${\text{AdS}}_{d+1}$ gauge fields $A_\omega (x)$ with masses $\sim \omega$ ($\omega$ is the so-called \emph{Milne} energy).
The $\omega \to 0$ gauge field -- equivalent to the soft limit -- is massless in ${\text{AdS}}_{d+1}$ and therefore induces a conserved current on the $d$-dimensional boundary. The holographic dictionary suggests that in the boundary theory, one has a coupling of the form \begin{equation} \begin{split} \int d^d x S^a (x) J_a(x) \; . \end{split} \end{equation} The discussion here suggests that a deeper holographic connection (beyond the simple existence of conserved currents) may exist between the theory on ${\mathcal M}_d$ and dynamics in Minkowski space. While intriguing, much remains to be done in order to elucidate this relationship. The hypothetical boundary theory is expected to have many peculiar properties, one of which we discuss in section \ref{sec:leadingsoftgraviton}. \subsection{Subleading Soft-Graviton Theorem and the Stress Tensor} In the previous section, we demonstrated that the presence of gauge fields in Minkowski space controls the global symmetry structure of the putative theory on $\mathcal{M}_d$. As in AdS/CFT, more interesting features arise when we couple the bulk theory to gravity and consider gravitational perturbations. Flat space graviton scattering amplitudes also display universal behavior in the infrared that is model independent and holds in any dimension. Of particular interest here is the subleading soft-graviton theorem, which states\footnote{We work in units such that $\sqrt{8\pi G}=1$.} \begin{equation}\label{subleadingsoftgravitontheorem} \lim_{\omega \to 0}(1+\omega \partial_\omega)\langle {\mathcal O}_{a b }(q) {\mathcal O}_1(p_1) \dots {\mathcal O}_n(p_n)\rangle =- i\sum_{k=1}^n \frac{{\varepsilon}^{\mu \nu}_{a b }p_{k\,\mu} q_\rho}{p_k\cdot q}{\mathcal J}_k^{\rho \nu}\langle {\mathcal O}_1(p_1) \dots {\mathcal O}_n(p_n)\rangle \;. 
\end{equation} Here ${\mathcal O}_{a b }(q)$ creates a graviton with momentum $q$ and polarization ${\varepsilon}_{\mu \nu}^{a b }(q)$, and ${\mathcal J}_{k \rho \nu}$ is the total angular momentum operator for the $k$-th particle. The operator $(1+\omega \partial_\omega)$ projects out the Weinberg pole \cite{Weinberg:1965nx}, yielding a finite $\omega \to 0$ limit. Returning to the analogy with ${\text{AdS}}_{d+1}/\text{CFT}_d$, one might expect that the bulk soft-graviton is associated to a boundary stress tensor, just as the bulk soft-photon is related to a boundary $U(1)$ current. In a quantum field theory, the stress tensor generates the action of spacetime (conformal) isometries on local operators. As we saw in (\ref{Commutators}), the angular momentum operator ${\mathcal J}_{\rho \nu}$ generates these transformations on the local operators on $\mathcal{M}_d$. Therefore, it is natural to suspect that the bulk subleading soft-graviton operator \begin{equation} B_{a b }(x)=\lim_{\omega \to 0}(1+\omega \partial_\omega) {\mathcal O}_{a b }(\omega,x) \; \end{equation} is related to the boundary stress tensor. Such a relationship was derived in four dimensions ($d=2$) in \cite{He:2017fsb,Kapec:2016jld,Cheung:2016iub}. In this section, we generalize the construction to $d> 2$. Insertions of $B_{ab}(x)$ are controlled by the subleading soft-graviton theorem \eqref{subleadingsoftgravitontheorem} and take the form (see \eqref{Lmnexp} and \eqref{Smnexp} for the explicit forms of the orbital and spin angular momentum operators) \begin{equation} \begin{split} \langle B_{ab}(x){\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n,x_n)\rangle &= \sum_{k=1}^n \left[ {\mathcal P}^c{}_{ab}(x-x_k)\partial_{x_k^c} + \frac{1}{d} \partial_c {\mathcal P}^c{}_{ab}(x-x_k)\omega_k \partial_{\omega_k} \right. \\ &\left. 
-\frac{i}{2}\partial^{[c}{\mathcal P}^{d]}{}_{ab}(x-x_k){\mathcal S}_{kcd} \right]\langle {\mathcal O}_1(\omega_1,x_1)\dots {\mathcal O}_n(\omega_n,x_n)\rangle \; , \end{split} \end{equation} where \begin{equation} {\mathcal P}^c{}_{ab}(x)=\frac12 \left[x_a\delta_b^c +x_b \delta_a^c +\frac{2}{d}x^c \delta_{ab}-\frac{4}{x^2}x^cx_ax_b\right] \; . \end{equation} One can check that \begin{equation} \begin{split} \partial_{\{c}{\mathcal P}_{d\}ab}(x)= {\mathcal I}_{\{\underline{a}\{c} (x) {\mathcal I}_{d\}\underline{b}\}} (x) \; . \end{split} \end{equation} As in section \ref{softphoton}, it is easiest to first determine $B_{ab}$ in terms of $T_{ab}$. Recall that the Ward identities for the energy momentum tensor of a CFT$_d$ take the form \cite{DiFrancesco:1997nk} \begin{align} \label{div} \avg{ \partial^dT_{dc}(y) {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n,x_n) } &= -\sum_{k=1}^n \delta^{(d)}(y-x_k)\partial_{x_k^c}\avg{ {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n,x_n) } \; ,\\ \label{trace} \avg{ T^c{}_{c}(y) {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n,x_n) } &= \sum_{k=1}^n \delta^{(d)}(y-x_k)\omega_k\partial_{\omega_k}\avg{ {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n,x_n)} \; , \\ \label{antisym} \avg{ T^{[cd]}(y) {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n,x_n) } &= -\frac{i}{2}\sum_{k=1}^n \delta^{(d)}(y-x_k){\mathcal S}_k^{cd}\avg{ {\mathcal O}_1(\omega_1,x_1) \dots {\mathcal O}_n(\omega_n,x_n) } \; . 
\end{align} Integrating (\ref{div}) against $- {\mathcal P}^c{}_{ab}(x-y)$, (\ref{trace}) against $\frac{1}{d} \partial_c {\mathcal P}^c{}_{ab}(x-y)$, and (\ref{antisym}) against $\partial^c {\mathcal P}^d{}_{ab}(x-y)$, then summing, one finds \begin{equation} \begin{split}\label{Babrel} B_{ab}(x) &= - \int d^d y \partial_{\{c} {\mathcal P}_{d\}ab} (x - y) T^{cd}(y) \\ &= -\int d^dy {\mathcal I}_{\{\underline{a}\{c} (x-y) {\mathcal I}_{d\}\underline{b}\}} (x-y)T^{cd}(y) \\ &= - {\widetilde T}_{\{ ab \}}(x) \;. \end{split} \end{equation} Once again, the soft operator appears as the shadow transform of a conserved current. The relationship could have been guessed from the outset based on the dimensions of $B_{ab}$ and $T_{\{ab\}}$.\footnote{Note that only the symmetric traceless part of the stress tensor appears in this dictionary since the graviton lies in the symmetric traceless representation of the little group. The trace term may be related to soft-dilaton theorems. } Having derived \eqref{Babrel}, we can now invert the shadow transform to find \begin{equation} \begin{split}\label{Tabrel} T_{\{ ab \}}(x) =- \frac{1}{ c ( 0 , 2 ) } {\widetilde B}_{ab}(x) \; . \end{split} \end{equation} The shadow relationship between the soft operator $B_{ab}$ and the energy momentum tensor is again suggestive of a coupling \begin{equation} \int d^dx B^{ab}(x)T_{ab}(x) \end{equation} in some hypothetical dual formulation of asymptotically flat gravity: the soft-graviton creates an infinitesimal change in the boundary metric, sourcing the operator $T_{ab}$. In \cite{Kapec:2016jld}, it was viewed as a puzzle that the energy momentum tensor appears non-local when written in terms of the soft-modes of the four-dimensional gravitational field. Here we see that this is essentially the consequence of a linear response calculation, and that the non-locality is actually the only one allowed by conformal symmetry.
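The derivative identity for ${\mathcal P}^c{}_{ab}$ quoted above can be verified symbolically by implementing the symmetric traceless projections explicitly; a sketch at $d=3$:

```python
import sympy as sp
from itertools import product

# Symbolic check (at d = 3): the (c,d)-symmetric-traceless part of d_c P_{d,ab}
# equals I_{ac} I_{bd} projected symmetric-traceless on both index pairs.
d = 3
xs = sp.symbols('x1:4', real=True)
xsq = sum(xi**2 for xi in xs)

def delta(i, j):
    return 1 if i == j else 0

def P(c, a, b):
    return sp.Rational(1, 2) * (xs[a] * delta(b, c) + xs[b] * delta(a, c)
           + sp.Rational(2, d) * xs[c] * delta(a, b) - 4 * xs[c] * xs[a] * xs[b] / xsq)

def I(a, b):
    return delta(a, b) - 2 * xs[a] * xs[b] / xsq

def st_pair(T, i, j):
    # symmetric traceless projection on the index pair sitting in slots (i, j)
    def out(*idx):
        idx = list(idx)
        swapped = idx[:]
        swapped[i], swapped[j] = idx[j], idx[i]
        sym = sp.Rational(1, 2) * (T(*idx) + T(*swapped))
        tr = 0
        for e in range(d):
            tidx = idx[:]
            tidx[i] = tidx[j] = e
            tr += T(*tidx)
        return sym - sp.Rational(1, d) * delta(idx[i], idx[j]) * tr
    return out

lhs = st_pair(lambda c, dd, a, b: sp.diff(P(dd, a, b), xs[c]), 0, 1)
rhs = st_pair(st_pair(lambda c, dd, a, b: I(a, c) * I(b, dd), 0, 1), 2, 3)

for c, dd, a, b in product(range(d), repeat=4):
    assert sp.simplify(lhs(c, dd, a, b) - rhs(c, dd, a, b)) == 0
print("derivative identity for P verified at d = 3")
```

Note that ${\mathcal P}^c{}_{ab}$ is already symmetric and traceless in $(a,b)$, so only the $(c,d)$ projection needs to be applied on the left-hand side; the loop checks all index combinations.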
As in \cite{Kapec:2016jld}, it is possible to derive a local differential equation for $T_{\{ab\}}$ in even dimensions. We first define the following derivative operator \begin{equation} \begin{split} {\mathbb D}^a O_{ab} &\equiv \frac{1}{2(4\pi)^{d/2}\Gamma(d/2+1)} \big[(-\Box)^{d/2}\partial^a O_{ab} + \frac{d}{d-1}\partial_b (-\Box)^{d/2-1}\partial^e\partial^f O_{ef} \big] \; . \end{split} \end{equation} One can check that \begin{equation} {\mathbb D}^a {\mathcal P}^c{}_{ab} = -\delta_b^c \delta^{(d)}(x) \; . \end{equation} Then, acting on the first equation of \eqref{Babrel} with ${\mathbb D}^a$, we find \begin{equation} \begin{split} {\mathbb D}^a B_{ab}(x) &= \partial^a T_{\{ab\}}(y) \; . \end{split} \end{equation} \subsection{Leading Soft-Graviton Theorem and Momentum Conservation}\label{sec:leadingsoftgraviton} In the previous two subsections, we have avoided the discussion of currents related to the leading soft-graviton theorem. This soft theorem is associated to spacetime translational invariance (and more generally to BMS supertranslations \cite{Strominger:2013jfa,Campiglia:2015kxa,He:2014laa,Kapec:2015vwa}). For scattering amplitudes in the usual plane wave basis, this symmetry is naturally enforced by a momentum conserving Dirac delta function. In our discussion above, we have chosen to make manifest the Lorentz transformation properties of the scattering amplitude. Consequently, translation invariance, or global momentum conservation, is unwieldy in our formalism. In fact, it must somehow appear as a non-local constraint on the correlation functions on ${\mathcal M}_d$, since arbitrary operator insertions corresponding to arbitrary configurations of incoming and outgoing momenta will in general violate momentum conservation. The difficulty can also be seen at the level of the symmetry algebra. 
Momentum conservation cannot arise simply as a global $\mathbb{R}^{d+1,1}$ symmetry of the CFT$_d$, since the associated conserved charges do not commute with the conformal (Lorentz) group. In light of this it is not clear that our construction can really be viewed as a local conformal field theory living on ${\mathcal M}_d$.\footnote{However, it is also not clear that we should expect a local QFT dual to asymptotically flat quantum gravity. The flat space Bekenstein-Hawking entropy is always super-Hagedorn in $d\geq4$. The high energy density of states grows faster than in any local theory.} We have tried, unsuccessfully, to find a natural set of operators whose shadow reproduces the leading soft-graviton theorem\footnote{It has also been suggested \cite{Shu-Heng} that translational invariance of the $\mathcal{S}$-matrix is realized through null state relations of boundary correlators rather than through local current operators, since the former are typically non-local constraints on CFT correlation functions.} \begin{equation} \lim_{\omega \to 0}\omega \langle {\mathcal O}_{a b }(q) {\mathcal O}_1(p_1)\dots {\mathcal O}_n(p_n) \rangle = \omega \sum_{k=1}^n \frac{{\varepsilon}_{\mu \nu}^{a b }p_{k}^\mu p_k^\nu}{p_k\cdot q} \langle {\mathcal O}_1(p_1)\dots {\mathcal O}_n(p_n) \rangle \; . \end{equation} The soft operator \begin{equation} G_{ab}(x)=\lim_{\omega \to 0}\omega {\mathcal O}_{a b }(\omega,x) \end{equation} has insertions given by \begin{equation} \langle G_{ab}(x){\mathcal O}_1(\omega_1,x_1)\dots {\mathcal O}_n(\omega_n,x_n) \rangle = \frac{2}{d}\sum_{k=1}^n \omega_k {\mathcal I}_{ab}^{(d)}(x - x_k)\langle {\mathcal O}_1(\omega_1,x_1)\dots {\mathcal O}_n(\omega_n,x_n) \rangle \; , \end{equation} where \begin{equation} {\mathcal I}_{ab}^{(d)} (x ) = \delta_{ab} - d \frac{x_ax_b}{x^2} \; . 
\end{equation} One finds that the operator \begin{equation} U_a = \frac{ 1 }{ ( 4 \pi ) ^{d/2} \Gamma \left( d/ 2 \right) ( d - 1 ) } (-\Box)^{\frac{d}{2}-1}\partial^bG_{ba} \end{equation} satisfies \begin{equation}\label{Uains} \langle \partial^a U_{a}(x) {\mathcal O}_1(\omega_1,x_1)\dots {\mathcal O}_n(\omega_n,x_n) \rangle = - \sum_{k=1}^n \omega_k \delta^{(d)} ( x - x_k ) \langle {\mathcal O}_1(\omega_1,x_1)\dots {\mathcal O}_n(\omega_n,x_n) \rangle \; . \end{equation} Thus, $U_a(x)$ satisfies the current Ward identity corresponding to ``energy'' conservation. However, since $\omega$ is not a scalar charge, the current $U_a(x)$ is not a primary operator (though it has a well-defined scaling dimension $\Delta = d$). Acting on \eqref{Uains} with $ - \frac{2}{d} \int d^d x {\mathcal I}_{ab}^{(d)}( y - x)$, one finds \begin{equation} \begin{split}\label{Gabrel} G_{ab}(x) &= - \frac{2}{d} \int d^dy \, {\mathcal I}_{ab}^{(d)}(x-y)\partial^cU_c(y) \;. \\ \end{split} \end{equation} Unlike the $U(1)$ current and the stress tensor, the leading soft-graviton ``current'' is not related to the soft operator $G_{ab}$ through a shadow transform. It may be possible to interpret \eqref{Gabrel} as some other (conformally) natural non-local transform of $U_c(x)$, but we do not pursue this here.\footnote{The shadow transform $(\Delta,s) \to (d-\Delta,s)$ is related to the ${\mathbb Z}_2$ symmetry of the quadratic and quartic Casimirs of the conformal group $c_2 = \Delta(d-\Delta) + s(2-d-s) $, $c_4 = -s(2-d-s)(\Delta-1)(d-\Delta-1)$. $c_2$ and $c_4$ are also invariant under another ${\mathbb Z}_2$ symmetry under which $(\Delta,s)\to (1-s,1-\Delta)$. 
Equation \eqref{Gabrel} may be the integral representation of a shadow transform followed by the second ${\mathbb Z}_2$ transform which maps $(d+1,0) \to (-1,0) \to (1,2)$.} \section{Conclusion}\label{sec:comments} In this paper we have taken steps to recast the $(d+2)$-dimensional ${\mathcal S}$-matrix as a collection of celestial correlators, but many open questions remain. Our analysis relied on symmetry together with the universal behavior of the $\mathcal{S}$-matrix in certain kinematic regimes. It would be interesting to analyze the consequences of other universal properties of the $\mathcal{S}$-matrix for the celestial correlators. The analytic structure and unitarity of the $\mathcal{S}$-matrix should be encoded in properties of these correlation functions, although the mechanism may be subtle. It seems likely that the collinear factorization of the $\mathcal{S}$-matrix could be used to define some variant of the operator product expansion for local operators on the light cone. Although this paper only addressed single soft insertions, double soft limits, appropriately defined, could determine the OPEs of the conserved currents and stress tensors. We expect supergravity soft theorems to yield a variety of interesting operators, including a supercurrent. Finally, the interplay of momentum conservation with the CFT$_d$ structure requires further clarification. We leave these questions to future work. \section*{Acknowledgements} We are grateful to Sabrina Pasterski, Abhishek Pathak, Ana-Maria Raclariu, Shu-Heng Shao, Andrew Strominger, Xi Yin, and Sasha Zhiboedov for discussions. This work was supported in part by DOE grant DE-SC0007870 and the Fundamental Laws Initiative at Harvard. PM gratefully acknowledges support from DOE grant DE-SC0009988.
\section{#1}}} \textheight=21.5cm \textwidth=16cm \oddsidemargin .1cm \evensidemargin .1cm \topmargin= .0cm \headsep 0pt \def(\roman{enumi}){(\roman{enumi})} \arraycolsep 1pt \font\twlgot =eufm10 scaled \magstep1 \font\egtgot =eufm8 \font\sevgot =eufm7 \font\twlmsb =msbm10 scaled \magstep1 \font\egtmsb =msbm8 \font\sevmsb =msbm7 \newfam\gotfam \def\fam\gotfam\twlgot{\fam\gotfam\twlgot} \textfont\gotfam\twlgot \scriptfont\gotfam\egtgot \scriptscriptfont\gotfam\sevgot \def\protect\pgot{\protect\fam\gotfam\twlgot} \newfam\msbfam \textfont\msbfam\twlmsb \scriptfont\msbfam\egtmsb \scriptscriptfont\msbfam\sevmsb \def\protect\pBbb{\protect\pBbb} \def\pBbb{\relax\ifmmode\expandafter\Bb\else\typeout{You cann't use Bbb in text mode}\fi} \def\Bb #1{{\fam\msbfam\relax#1}} \def\thebibliography#1{\bigskip\section*{\centering References\\}\bigskip\list {\arabic{enumi}.}{\settowidth\labelwidth{#1}\leftmargin\labelwidth \advance\leftmargin\labelsep \usecounter{enumi}} \def\hskip .11em plus .33em minus .07em{\hskip .11em plus .33em minus .07em} \sloppy\clubpenalty4000\widowpenalty4000 \sfcode`\.=1000\relax} \newcommand{\Sigma}{\Sigma} \newcommand{\under}[2]{\mathrel{\mathop{#1}\limits_{\scriptstyle #2}}} \def\op#1{\mathop{\fam0 #1}\limits} \newcommand{{\rm id\,}}{{\rm id\,}} \newcommand{{\rm pr\,}}{{\rm pr\,}} \newcommand{{\rm dim\,}}{{\rm dim\,}} \newcommand{{\rm Id\,}}{{\rm Id\,}} \def{\rm Ker\,}{{\rm Ker\,}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{eqnarray*}}{\begin{eqnarray*}} \newcommand{\end{eqnarray*}}{\end{eqnarray*}} \newcommand{\begin{eqalph}}{\begin{eqalph}} \newcommand{\end{eqalph}}{\end{eqalph}} \newcommand{{\cal L}}{{\cal L}} \newcommand{{\cal E}}{{\cal E}} \newcommand{{\cal H}}{{\cal H}} \newcommand{{\cal F}}{{\cal F}} \newcommand{{\cal T}}{{\cal T}} \newcommand{\bold L}{\bold L} \newcommand{\bold R}{\bold R} \newcommand{\alpha}{\alpha} \newcommand{\beta}{\beta} \newcommand{\lambda}{\lambda} 
\newcommand{\Lambda}{\Lambda} \newcommand{\phi}{\phi} \newcommand{\Phi}{\Phi} \newcommand{\pi}{\pi} \newcommand{\omega}{\omega} \newcommand{\Omega}{\Omega} \newcommand{\mu}{\mu} \newcommand{\nu}{\nu} \newcommand{\gamma}{\gamma} \newcommand{\Gamma}{\Gamma} \newcommand{\epsilon}{\epsilon} \newcommand{\varepsilon}{\varepsilon} \newcommand{\theta}{\theta} \newcommand{\rho}{\rho} \newcommand{\sigma}{\sigma} \newcommand{\wedge}{\wedge} \newcommand{\widetilde}{\widetilde} \newcommand{\widehat}{\widehat} \newcommand{\overline}{\overline} \newcommand{\partial}{\partial} \newcommand{\,\rule{1.5mm}{0.2mm}\rule{0.2mm}{4mm}\,\,}{\,\rule{1.5mm}{0.2mm}\rule{0.2mm}{4mm}\,\,} \newcommand{\rule{1.5mm}{0.6mm}\rule{0.6mm}{4mm}\,}{\rule{1.5mm}{0.6mm}\rule{0.6mm}{4mm}\,} \newcommand{\colon\ }{\colon\ } \newcounter{eqalph} \newcounter{equationa} \newenvironment{proposition}[1]{{{\it Proposition #1:}}}{} \newenvironment{lemma}[1]{{{\it Lemma #1:}}}{} \newenvironment{definition}[1]{{{\it Definition #1:}}}{} \newenvironment{proof}{{\bf Proof.}}{} \newenvironment{remark}{{\bf Remark.}}{} \def\arabic{equationa}\alph{equation}{\arabic{equation}} \newenvironment{eqalph}{\stepcounter{equation} \setcounter{equationa}{\value{equation}} \setcounter{equation}{0} \def\arabic{equationa}\alph{equation}{\arabic{equationa}\alph{equation}} \begin{eqnarray}}{\end{eqnarray} \setcounter{equation}{\value{equationa}}} \hyphenation{ma-ni-fold La-gran-gi-ans di-men-si-o-nal -di-men-si-o-nal La-gran-gi-an Ha-mil-to-ni-an} \begin{document} \hbox{} \centerline{\bf\large MULTIMOMENTUM HAMILTONIAN FORMALISM} \medskip \centerline{\bf\large IN FIELD THEORY.
GEOMETRIC SUPPLEMENTARY} \bigskip \centerline{\bf Gennadi Sardanashvily} \medskip \centerline{Department of Theoretical Physics, Moscow State University} \centerline{117234 Moscow, Russia} \centerline{E-mail: [email protected]} \vskip1cm The well-known geometric approach to field theory is based on the description of classical fields as sections of fibred manifolds, e.g. bundles with a structure group in gauge theory. In this approach, Lagrangian and Hamiltonian formalisms, including the multimomentum Hamiltonian formalism, are phrased in terms of jet manifolds \cite{car,gia,got,kol,kup,6sar,sard,lsar}. Then, configuration and phase spaces of fields are finite-dimensional. Though jet manifolds have been widely used in the theory of differential operators, the calculus of variations and differential geometry, these powerful mathematical methods remain almost unknown to physicists. This Supplementary to our previous article \cite{lsar} aims to summarize the necessary requisites on jet manifolds and general connections \cite{man,sard,sau}. \bigskip All morphisms throughout are differentiable mappings of class $C^\infty$. Manifolds are real, Hausdorff, finite-dimensional, second-countable and connected. We use the conventional symbols $\otimes$, $\vee$ and $\wedge$ for the tensor, symmetric and exterior products respectively. The interior product (contraction) is denoted by $\rfloor$. The symbols $\partial^A_B$ mean the partial derivatives with respect to coordinates with indices $^B_A$. Given a manifold $M$ with an atlas of local coordinates $(z^\lambda)$, the tangent bundle $TM$ of $M$ (resp. the cotangent bundle $T^*M$ of $M$) is provided with the atlas of the induced coordinates $(z^\lambda, \dot z^\lambda)$ (resp. $(z^\lambda,\dot z_\lambda)$) relative to the holonomic bases $\partial_\lambda$ (resp. $dz^\lambda$).
If $f:M\to M'$ is a manifold mapping, by \[ Tf: TM\to TM' \qquad \dot z'^\lambda= \frac{\partial f^\lambda}{\partial z^\alpha}\dot z^\alpha, \] is meant the morphism tangent to $f$. By ${\rm pr\,}_1$ and ${\rm pr\,}_2$, we denote the canonical surjections \[ {\rm pr\,}_1:A\times B\to A, \qquad {\rm pr\,}_2:A\times B\to B. \] We consider the following types of manifold mappings $f:M\to M'$ for which the tangent morphism $Tf$ to $f$ has maximal rank: immersions, submersions, and local diffeomorphisms, when $f$ is both an immersion and a submersion. Recall that a mapping $f:M\to M'$ is called an immersion (resp. a submersion) at a point $z\in M$ when the tangent morphism $Tf$ to $f$ is an injection (resp. a surjection) of the tangent space $T_zM$ to $M$ at $z$ into the tangent space $T_{f(z)}M'$. A manifold mapping $f$ of $M$ is termed an immersion (resp. a submersion) if it is an immersion (resp. a submersion) at all points of $M$. A triple $ f:M\to M'$ is called a submanifold (resp. a fibred manifold) if $f$ is both an immersion and an injection (resp. both a submersion and a surjection) of $M$ to $M'$. A submanifold which also is a topological subspace is called an imbedded submanifold. Every open subset $U$ of a manifold $M$ is endowed with the manifold structure such that the canonical injection $i_U: U\hookrightarrow M$ is an imbedding. \section{Fibred manifolds} Throughout the work, by $Y$ is meant a fibred manifold \begin{equation} \pi :Y\to X \label{1.1} \end{equation} over an $n$-dimensional base $X$. We use the symbols $y$ and $x$ for points of $Y$ and $X$ respectively. The total space $Y$ of a fibred manifold $Y\to X$, by definition, is provided with an atlas of fibred coordinates \begin{eqnarray} && (x^\lambda, y^i),\qquad x^\lambda\circ\pi =x^\lambda, \nonumber\\ && x^\lambda \to {x'}^\lambda(x^\mu), \qquad y^i \to {y'}^i(x^\mu,y^j),\label{1.2} \end{eqnarray} where $(x^\lambda)$ are coordinates of $X$. They are compatible with the fibration (\ref{1.1}).
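The functoriality $T(g\circ f)=Tg\circ Tf$ of the tangent morphism is easy to check numerically. The following sketch uses hypothetical maps $f$, $g$ of ${\bf R}^2$ and approximates the Jacobians by central differences; it is an illustration only, not part of the formalism.

```python
import math

# Chain-rule/functoriality check T(g∘f) = Tg∘Tf for hypothetical maps f, g,
# with the Jacobian matrices approximated by central differences.
def f(z):
    x, y = z
    return (x * y, x + math.sin(y))

def g(z):
    u, v = z
    return (u + v * v, math.exp(u) - v)

def jacobian(h, z, eps=1e-6):
    n = len(z)
    cols = []
    for a in range(n):
        zp, zm = list(z), list(z)
        zp[a] += eps
        zm[a] -= eps
        fp, fm = h(tuple(zp)), h(tuple(zm))
        cols.append([(fp[i] - fm[i]) / (2 * eps) for i in range(len(fp))])
    # cols[a][i] = d h^i / d z^a; rearrange to rows i, columns a
    return [[cols[a][i] for a in range(n)] for i in range(len(cols[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

z0 = (0.3, 0.7)
J_gf = jacobian(lambda z: g(f(z)), z0)                  # T(g∘f) at z0
J_chain = matmul(jacobian(g, f(z0)), jacobian(f, z0))   # Tg(f(z0)) · Tf(z0)
assert all(abs(J_gf[i][j] - J_chain[i][j]) < 1e-5
           for i in range(2) for j in range(2))
```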
A fibred manifold $Y\to X$ is called locally trivial if there exists a fibred coordinate atlas of $Y$ over an open covering $\{\pi^{-1}(U_\xi)\}$ of $Y$ where $\{U_\xi\}$ is an open covering of the base $X$. In other words, all points of a fibre of $Y$ belong to the same coordinate chart. By a differentiable fibre bundle (or simply a bundle), we mean the locally trivial fibred manifold (\ref{1.1}) provided with a family of equivalent bundle atlases \[ \Psi = \{U_\xi, \psi_\xi \}, \qquad \psi_\xi :\pi^{-1}(U_\xi)\to U_\xi\times V, \] where $V$ is a standard fibre of $Y$. Recall that two bundle atlases are called equivalent if their union also is a bundle atlas. If $Y\to X$ is a bundle, the fibred coordinates (\ref{1.2}) of $Y$ are assumed to be bundle coordinates associated with a bundle atlas $\Psi$ of $Y$, that is, \begin{equation} y^i(y)=(\phi^i\circ{\rm pr\,}_2\circ \psi_\xi)(y), \qquad \pi (y)\in U_\xi, \label{1.4} \end{equation} where $\phi^i$ are coordinates of the standard fibre $V$ of $Y$. Given fibred manifolds $Y\to X$ and $ Y'\to X'$, by a fibred morphism is meant a fibre-to-fibre manifold mapping $\Phi : Y\to Y'$ over a manifold mapping $f: X\to X'$. If $f={\rm Id\,}_X$, the fibred morphism is termed a fibred morphism $\op\to_X$ over $X$. In particular, let $X_X$ denote the fibred manifold ${\rm Id\,}_X: X\hookrightarrow X$. Given a fibred manifold $Y\to X$, a fibred morphism $X_X\to Y$ over $X$ is a global section of $Y\to X$. It is a closed imbedded submanifold. Let $N$ be an imbedded submanifold of $X$. A fibred morphism $N_N\to Y$ over $N\hookrightarrow X$ is called a section of $Y\to X$ over $N$. For each point $x\in X$, a fibred manifold, by definition, has a section over an open neighborhood of $x$.
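As a minimal illustration, consider the trivial bundle $Y=X\times{\bf R}$ over $X={\bf R}$; a global section is then a map $x\mapsto(x,s(x))$ obeying $\pi\circ s={\rm Id\,}_X$. A sketch with a hypothetical section:

```python
# A global section of the trivial bundle Y = X x R over X = R is a map
# x -> (x, s(x)) with pi ∘ s = Id_X; the section s below is hypothetical.
def pi(y):
    x, _ = y
    return x

def s(x):
    return (x, x ** 3 - x)        # a smooth global section of Y -> X

for x in (-1.0, 0.0, 0.5, 2.5):
    assert pi(s(x)) == x          # the defining property of a section
```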
\begin{remark} In accordance with a well-known theorem, if a fibred manifold $Y\to X$ has a global section, every section of $Y$ over a closed imbedded submanifold $N$ of $X$ extends to a global section of $Y$ due to the properties which are required of a manifold. \end{remark} If a fibred morphism $Y\to Y'$ over $X$ is a submanifold, $Y\to X$ is called a fibred submanifold of $Y'\to X$. A fibred imbedding and a fibred diffeomorphism are usually called a monomorphism and an isomorphism respectively. Given a fibred manifold $Y\to X$, every manifold mapping $f : X' \to X $ yields the pullback $f^*Y\to X'$ comprising the pairs \[ \{(y,x')\in Y\times X' \mid \quad \pi(y) =f(x')\} \] together with the surjection $ (y,x')\to x'$. Every section $s$ of the fibred manifold $Y\to X$ defines the corresponding pullback section \[ (f^*s )(x') = ((s\circ f)(x'),x'), \qquad x'\in X', \] of the fibred manifold $f^*Y\to X'$. In particular, if a mapping $f$ is a submanifold, the pullback $f^*Y$ is called the restriction $Y\mid_{f(X')}$ of the fibred manifold $Y$ to the submanifold $f(X')\subset X$. The product of fibred manifolds $\pi:Y\to X$ and $\pi':Y'\to X$ over $X$, by definition, is the total space of the pullbacks \[ \pi^*Y'={\pi'}^*Y=Y\op\times_X Y'. \] A composite fibred manifold (or simply a composite manifold) is defined to be a composition of surjective submersions \begin{equation} \pi_{\Sigma X}\circ\pi_{Y\Sigma}:Y\to \Sigma\to X. \label{1.34} \end{equation} It is the fibred manifold $Y\to X$ provided with a particular class of coordinate atlases: \begin{equation} ( x^\lambda ,\sigma^m,y^i) \label{1.36} \end{equation} \[ {x'}^\lambda=f^\lambda(x^\mu), \qquad {\sigma'}^m=f^m(x^\mu,\sigma^n), \qquad {y'}^i=f^i(x^\mu,\sigma^n,y^j), \] where $(x^\mu,\sigma^m)$ are fibred coordinates of the fibred manifold $\Sigma\to X$. In particular, let $TY\to Y$ be the tangent bundle of a fibred manifold $Y\to X$. We have the composite manifold \begin{equation} TY\op\to^{T\pi} TX\to X.
\label{1.6} \end{equation} Given the fibred coordinates (\ref{1.2}) of $Y$, the corresponding induced coordinates of $TY$ are $(x^\lambda,y^i,\dot x^\lambda, \dot y^i)$. The tangent bundle $TY\to Y$ of a fibred manifold $Y$ has the subbundle \[ VY = {\rm Ker\,} T\pi \] which is called the vertical tangent bundle of $Y$. This subbundle is provided with the induced coordinates $(x^\lambda,y^i,\dot y^i).$ The vertical cotangent bundle $V^*Y\to Y$ of $Y$, by definition, is the vector bundle dual to the vertical tangent bundle $VY\to Y$; note that it is not a subbundle of $T^*Y$. With $VY$ and $V^*Y$, we have the following exact sequences of bundles over a fibred manifold $Y\to X$: \begin{eqalph} && 0\to VY\hookrightarrow TY\op\to_Y Y\op\times_X TX\to 0, \label{1.8a} \\ && 0\to Y\op\times_X T^*X\hookrightarrow T^*Y\to V^*Y\to 0. \label{1.8b} \end{eqalph} For the sake of simplicity, we shall denote the products \[ Y\op\times_X TX, \qquad Y\op\times_X T^*X \] by the symbols $TX$ and $T^*X$ respectively. Different splittings \[ Y\op\times_X TX\op\to_Y TY, \qquad V^*Y\to T^*Y \] of the exact sequences (\ref{1.8a}) and (\ref{1.8b}), by definition, correspond to different connections on a fibred manifold $Y$. At the same time, there is the canonical bundle monomorphism \begin{equation} \op\wedge^nT^*X\op\otimes_YV^*Y\op\hookrightarrow_Y\op\wedge^{n+1}T^*Y.\label{86} \end{equation} Let $\Phi:Y\to Y'$ be a fibred morphism over $f$. The tangent morphism $T\Phi$ to $\Phi$ reads \begin{equation} (\dot{x'}^\lambda,\dot{y'}^i)\circ T\Phi =(\partial_\mu f^\lambda\dot x^\mu,\partial_\mu\Phi^i\dot x^\mu +\partial_j\Phi ^i\dot y^j). \label{1.7} \end{equation} It is both a linear bundle morphism over $\Phi$ and a fibred morphism over the tangent morphism $Tf$ to $f$.
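The coordinate expression (\ref{1.7}) can be checked by pushing the tangent vector of a curve through a fibred morphism. The morphism and curve below are hypothetical (one base and one fibre coordinate), and all derivatives are approximated by central differences.

```python
import math

# Numeric check of the tangent morphism (1.7): differentiate the image of a
# curve t -> (x(t), y(t)) under a hypothetical fibred morphism Phi over f
# and compare with the coordinate expression.
def f(x):            # base mapping
    return x * x

def Phi(x, y):       # fibre part of the fibred morphism over f
    return x * y + math.sin(x)

def x_c(t): return t               # a curve in the base
def y_c(t): return t ** 3          # its fibre component

e, t0 = 1e-6, 0.5
x0, y0 = x_c(t0), y_c(t0)
xdot = (x_c(t0 + e) - x_c(t0 - e)) / (2 * e)
ydot = (y_c(t0 + e) - y_c(t0 - e)) / (2 * e)

# direct differentiation of the image curve
xpdot = (f(x_c(t0 + e)) - f(x_c(t0 - e))) / (2 * e)
ypdot = (Phi(x_c(t0 + e), y_c(t0 + e)) - Phi(x_c(t0 - e), y_c(t0 - e))) / (2 * e)

# coordinate expression (1.7) with the analytic partials of f and Phi:
# df/dx = 2x,  dPhi/dx = y + cos x,  dPhi/dy = x
assert abs(xpdot - 2.0 * x0 * xdot) < 1e-5
assert abs(ypdot - ((y0 + math.cos(x0)) * xdot + x0 * ydot)) < 1e-5
```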
Its restriction to the vertical tangent subbundle $VY$ yields the vertical tangent morphism \begin{eqnarray} &&V\Phi:VY\to VY', \nonumber\\ &&\dot{y'}^i\circ V\Phi =\partial_j\Phi^i\dot y^j.\label{2} \end{eqnarray} Vertical tangent bundles of fibred manifolds utilized in field theory almost always have the following simple structure. One says that a fibred manifold $Y\to X$ has vertical splitting if there exists a linear isomorphism \begin{equation} \alpha : VY\op\to_Y Y\op\times_X \overline Y \label{1.9} \end{equation} where $\overline Y\to X$ is a vector bundle. The fibred coordinates (\ref{1.2}) of $Y$ are called adapted to the vertical splitting (\ref{1.9}) if the induced coordinates of the vertical tangent bundle $VY$ take the form \[ (x^\mu,y^i,\dot y^i = \overline y^i\circ\alpha) \] where $(x^\mu,y^i,\overline y^i)$ are bundle coordinates of $\overline Y$. In this case, transition functions $\dot y^i\to\dot y'^i$ between induced coordinate charts are independent of the coordinates $y^i$. In particular, a vector bundle $Y\to X$ has the canonical vertical splitting \begin{equation} VY=Y\op\times_X Y. \label{1.10} \end{equation} An affine bundle $Y$ modelled on a vector bundle $\overline Y$ has the canonical vertical splitting \begin{equation} VY=Y\op\times_X\overline Y.\label{48} \end{equation} Moreover, linear bundle coordinates of a vector bundle and affine bundle coordinates of an affine bundle are always adapted to these canonical vertical splittings. We shall refer to the following fact. \begin{lemma}{1.1} Let $Y$ and $Y'$ be fibred manifolds over $X$ and $\Phi:Y\to Y'$ a fibred morphism over $X$. Let $V\Phi$ be the vertical tangent morphism to $\Phi$. If $Y'$ admits vertical splitting $VY'=Y'\times\overline Y,$ then there exists a linear bundle morphism \begin{equation} \overline V\Phi:VY\op\to_Y Y\times \overline Y \label{64} \end{equation} over $Y$ given by the coordinate expression \[ {\overline y'}^i\circ\overline V\Phi =\partial_j\Phi^i\dot y^j.
\] \end{lemma} By differential forms (or simply forms) on a fibred manifold, we shall mean exterior, tangent-valued and pullback-valued forms. Recall that a tangent-valued $r$-form on a manifold $M$ is defined to be a section \[ \phi = \phi_{\lambda_1\dots\lambda_r}^\mu dz^{\lambda_1}\wedge\dots\wedge dz^{\lambda_r}\otimes\partial_\mu \] of the bundle $\op\wedge^r T^*M\otimes TM.$ In particular, tangent-valued 0-forms are vector fields on $M$. \begin{remark} There is a 1:1 correspondence between the tangent-valued 1-forms on $M$ and the linear bundle morphisms $TM\to TM$ or $T^*M\to T^*M$ over $M$ \begin{eqnarray} &&\theta:M\to T^*M\otimes TM,\label{29}\\ &&\theta: T_zM\ni t\mapsto t\rfloor\theta(z)\in T_zM,\nonumber\\ &&\theta: T^*_zM\ni t^*\mapsto \theta(z)\rfloor t^*\in T^*_zM.\nonumber \end{eqnarray} For instance, ${\rm Id\,}_{TM}$ corresponds to the canonical tangent-valued 1-form on $M$: \[ \theta_M = dz^\lambda\otimes \partial_\lambda, \qquad \partial_\lambda \rfloor \theta_M = \partial_\lambda. \] \end{remark} Let $\op\Lambda^r{{\cal T}}^*(M)$ be the sheaf of exterior $r$-forms on $M$ and ${\cal T}(M)$ the sheaf of vector fields on $M$. Tangent-valued $r$-forms on a manifold $M$ constitute the sheaf $\op\Lambda^r{{\cal T}}^*(M)\otimes{\cal T}(M)$. It is brought into a sheaf of graded Lie algebras with respect to the Fr\"olicher-Nijenhuis (F-N) bracket.
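For tangent-valued 0-forms, i.e. vector fields, the F-N bracket reduces to the Lie bracket $[u,v]^\mu=u^\nu\partial_\nu v^\mu-v^\nu\partial_\nu u^\mu$. Its derivation property $[u,v]f=u(vf)-v(uf)$ can be verified numerically; the fields $u$, $v$ and the function $f$ below are hypothetical, with all derivatives taken by central differences.

```python
import math

e = 1e-5

def u(z):
    x, y = z
    return (y, -x)                     # a rotation field on R^2

def v(z):
    x, y = z
    return (x * x, math.sin(y))

def d(h, z, a, step=e):
    # central-difference partial derivative of a scalar function h
    zp, zm = list(z), list(z)
    zp[a] += step
    zm[a] -= step
    return (h(tuple(zp)) - h(tuple(zm))) / (2 * step)

def bracket(z):
    # coordinate expression of the F-N bracket for two 0-forms (vector fields)
    n = len(z)
    return tuple(
        sum(u(z)[b] * d(lambda w: v(w)[m], z, b)
            - v(z)[b] * d(lambda w: u(w)[m], z, b) for b in range(n))
        for m in range(n))

def f(z):
    x, y = z
    return math.exp(x) * math.cos(y)

def apply_field(w, h, z, step):
    # (wh)(z) = w^a(z) d_a h(z)
    return sum(w(z)[a] * d(h, z, a, step) for a in range(len(z)))

z0 = (0.4, -0.2)
lhs = sum(bracket(z0)[a] * d(f, z0, a) for a in range(2))   # ([u,v]f)(z0)
rhs = (apply_field(u, lambda w: apply_field(v, f, w, 1e-5), z0, 1e-4)
       - apply_field(v, lambda w: apply_field(u, f, w, 1e-5), z0, 1e-4))
assert abs(lhs - rhs) < 1e-4
```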
The F-N bracket is defined to be the sheaf morphism \[ \op\Lambda^r {{\cal T}}^*(M)\otimes{\cal T}(M)\times \op\Lambda^s {{\cal T}}^*(M)\otimes{\cal T}(M) \to \op\Lambda^{r+s} {{\cal T}}^*(M)\otimes{\cal T}(M), \] \begin{eqnarray*} && [\phi ,\sigma] = -(-1)^{\mid\phi\mid\mid\sigma\mid}[\sigma,\phi] =[\alpha\otimes u,\beta\otimes v] \\ &&\qquad= \alpha\wedge\beta\otimes [u,v] + \alpha\wedge {\bf L}_u\beta\otimes v - (-1)^{rs}\beta\wedge {\bf L}_v\alpha \otimes u \\ &&\qquad +(-1)^r(v\rfloor \alpha)\wedge d\beta\otimes u- (-1)^{rs+s}(u\rfloor\beta)\wedge d\alpha\otimes v, \end{eqnarray*} \[ \alpha\in\op\Lambda^r{{\cal T}}^*(M),\quad \beta\in \op\Lambda^s{{\cal T}}^*(M), \quad u,v\in {\cal T}(M), \] where ${\bf L}_u$ and ${\bf L}_v$ are the Lie derivatives of exterior forms. Its coordinate expression reads \begin{eqnarray*} && [\phi,\sigma] = (\phi_{\lambda_1\dots\lambda_r}^\nu\partial_\nu\sigma_{\lambda_{r+1}\dots\lambda_{r+s}}^\mu \\ && \qquad -(-1)^{rs}\sigma_{\lambda_1\dots\lambda_s}^\nu\partial_\nu\phi_{\lambda_{s+1}\dots \lambda_{r+s}}^\mu-r\phi_{\lambda_1\dots\lambda_{r-1}\nu}^\mu \partial_{\lambda_r}\sigma_{\lambda_{r+1}\dots\lambda_{r+s}}^\nu \\ && \qquad +(-1)^{rs}s\sigma_{\lambda_1\dots\lambda_{s-1}\nu}^\mu\partial_{\lambda_s} \phi_{\lambda_{s+1}\dots\lambda_{r+s}}^\nu) dz^{\lambda_1}\wedge\dots\wedge dz^{\lambda_{r+s}}\otimes\partial_\mu. \end{eqnarray*} Given a tangent-valued form $\phi$, the Nijenhuis differential is defined to be the sheaf morphism \begin{equation} d_\phi : \sigma\mapsto d_\phi\sigma = [\phi,\sigma]. \label{33} \end{equation} In particular, if $\phi=u$ is a vector field, we have the Lie derivative of tangent-valued forms \begin{eqnarray*} && {\bf L}_u\sigma=[u,\sigma] =(u^\nu\partial_\nu\sigma_{\lambda_1\dots\lambda_s}^\mu - \sigma_{\lambda_1\dots\lambda_s}^\nu\partial_\nu u^\mu \\ && \qquad +s\sigma^\mu_{\lambda_1\dots\lambda_{s-1}\nu}\partial_{\lambda_s}u^\nu) dx^{\lambda_1}\wedge\dots\wedge dx^{\lambda_s}\otimes\partial_\mu.
\end{eqnarray*} The Nijenhuis differential (\ref{33}) can be extended to exterior forms $\sigma$ by the rule \begin{eqnarray} &&d_\phi\sigma=\phi\rfloor d\sigma +(-1)^rd(\phi\rfloor\sigma) =(\phi_{\lambda_1\dots\lambda_r}^\nu\partial_\nu\sigma_{\lambda_{r+1}\dots\lambda_{r+s}} \nonumber \\ &&\qquad +(-1)^{rs}s\sigma_{\lambda_1\dots\lambda_{s-1}\nu}\partial_{\lambda_s} \phi_{\lambda_{s+1}\dots\lambda_{r+s}}^\nu) dz^{\lambda_1}\wedge\dots\wedge dz^{\lambda_{r+s}}. \label{32} \end{eqnarray} In particular, if $\phi=\theta_M$, the familiar exterior differential \[ d_{\theta_M}\sigma=d\sigma \] is reproduced. On a fibred manifold $Y\to X$, we consider the following particular subsheaves of exterior and tangent-valued forms: \begin{itemize}\begin{enumerate} \item exterior horizontal forms $\phi : Y\to\op\wedge^r T^*X$; \item tangent-valued horizontal forms \begin{eqnarray*} && \phi : Y\to\op\wedge^r T^*X\op\otimes_Y TY,\\ && \phi =dx^{\lambda_1}\wedge\dots\wedge dx^{\lambda_r}\otimes (\phi_{\lambda_1\dots\lambda_r}^\mu\partial_\mu + \phi_{\lambda_1\dots\lambda_r}^i \partial_i); \end{eqnarray*} \item tangent-valued projectable horizontal forms \[ \phi =dx^{\lambda_1}\wedge\dots\wedge dx^{\lambda_r}\otimes (\phi_{\lambda_1\dots\lambda_r}^\mu(x)\partial_\mu + \phi_{\lambda_1\dots\lambda_r}^i(y) \partial_i); \] \item vertical-valued horizontal forms \begin{eqnarray*} &&\phi : Y\to\op\wedge^r T^*X\op\otimes_Y VY,\\ &&\phi =\phi_{\lambda_1\dots\lambda_r}^i dx^{\lambda_1}\wedge\dots\wedge dx^{\lambda_r}\otimes\partial_i. \end{eqnarray*} \end{enumerate}\end{itemize} Vertical-valued horizontal 1-forms on $Y\to X$ are termed soldering forms. By pullback-valued forms on a fibred manifold $Y\to X$, we mean the morphisms \begin{eqnarray} &&Y\to \op\wedge^r T^*Y\op\otimes_Y TX, \label{1.11} \\ &&Y\to \op\wedge^r T^*Y\op\otimes_Y T^*X.
\label{87} \end{eqnarray} Let us emphasize that the forms (\ref{1.11}) are not tangent-valued forms and the forms (\ref{87}) are not exterior forms on $Y$. In particular, we shall refer to the pullback $\pi^*\theta_X$ of the canonical form $\theta_X$ on the base $X$ by $\pi$ onto $Y$. This is a pullback-valued horizontal 1-form on $Y\to X$ which we denote by the same symbol \begin{equation} \theta_X:Y\to T^*X\op\otimes_Y TX, \qquad \theta_X =dx^\lambda\otimes\partial_\lambda. \label{12} \end{equation} Horizontal $n$-forms on a fibred manifold $Y\to X$ are called horizontal densities. We denote \[ \omega=dx^1\wedge\dots\wedge dx^n. \] \section{Jet manifolds} This Section briefly reviews some notions of the higher order jet formalism and illustrates them with those of the first and second order jet machinery. We use the multi-index $\Lambda$, $\mid\Lambda\mid=k$ for symmetrized collections $(\lambda_1...\lambda_k)$. By $\Lambda+\lambda$ is meant the symmetrized collection $(\lambda_1...\lambda_k\lambda)$. \begin{definition}{1.2} The $k$-order jet space $J^kY$ of a fibred manifold $Y\to X$ is defined to comprise all equivalence classes $j^k_xs$, $x\in X$, of sections $s$ of $Y$ so that sections $s$ and $s'$ belong to the same class $j^k_xs$ if and only if \[ \partial_\Lambda s^i(x)=\partial_\Lambda {s'}^i(x), \qquad 0\leq \mid\Lambda\mid \leq k. \] \end{definition} In other words, sections of $Y\to X$ are identified by the first $k+1$ terms of their Taylor series at points of $X$. Given fibred coordinates (\ref{1.2}) of $Y$, the $k$-order jet space $J^kY$ is provided with atlases of the adapted coordinates \[ (x^\lambda, y^i_\Lambda),\qquad 0\leq \mid\Lambda\mid \leq k. \] They bring $J^kY$ into a finite-dimensional smooth manifold satisfying the conditions which we require of a manifold. It possesses the composite fibration \[ J^kY\to J^{k-1}Y\to ... \to Y\to X.
\] In particular, the first order jet manifold (or simply the jet manifold) $J^1Y$ of $Y$ consists of the equivalence classes $j^1_xs$, $x\in X$, of sections $s$ of $Y$ so that different sections $s$ and $s'$ belong to the same class $j^1_xs$ if and only if \[ Ts\mid _{T_xX} =Ts'\mid_{T_xX}. \] In other words, sections $s\in j^1_xs$ are identified by their values $s^i(x)={s'}^i(x)$ and values of their partial derivatives $\partial_\mu s^i(x)=\partial_\mu{s'}^i(x)$ at the point $x$ of $X$. There are the natural surjections \begin{eqnarray} &&\pi_1:J^1Y\ni j^1_xs\mapsto x\in X, \label{1.14}\\ &&\pi_{01}:J^1Y\ni j^1_xs\mapsto s(x)\in Y. \label{1.15} \end{eqnarray} We have the composite manifold \[ \pi_1=\pi\circ\pi_{01}:J^1Y\to Y \to X. \] The surjection (\ref{1.14}) is a fibred manifold. The surjection (\ref{1.15}) is a bundle. If $Y\to X$ is a bundle, so is the surjection (\ref{1.14}). The first order jet manifold $J^1Y$ of $Y$ is provided with the adapted coordinate atlases \begin{eqnarray} &&(x^\lambda,y^i,y_\lambda^i),\label{49}\\ &&(x^\lambda,y^i,y_\lambda^i)(j^1_xs)=(x^\lambda,s^i(x),\partial_\lambda s^i(x)),\nonumber\\ &&{y'}^i_\lambda = (\frac{\partial{y'}^i}{\partial y^j}y_\mu^j + \frac{\partial{y'}^i}{\partial x^\mu})\frac{\partial x^\mu}{\partial{x'}^\lambda}. \label{50} \end{eqnarray} A glance at the transformation law (\ref{50}) shows that the bundle $J^1Y\to Y$ is an affine bundle. We call it the jet bundle. It is modelled on the vector bundle \begin{equation} T^*X \op\otimes_Y VY\to Y.\label{23} \end{equation} The second order jet manifold $J^2Y$ of a fibred manifold $Y\to X$ is defined to be the union of all equivalence classes $j_x^2s$ of sections $s$ of $Y\to X$ such that sections $s\in j^2_xs$ are identified by their values and values of their first and second order partial derivatives at the point $x\in X$. 
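The affine transformation law (\ref{50}) can be verified numerically in the simplest case of a one-dimensional base and fibre. The coordinate change ($x'=e^x$, $y'=xy+\sin x$) and the section $s$ below are hypothetical, chosen only so that the fibre coordinate genuinely mixes with the base.

```python
import math

# Numeric check of the jet coordinate transformation law (50) for a
# hypothetical change of fibred coordinates x' = exp(x), y' = x*y + sin(x).
def xprime(x): return math.exp(x)
def yprime(x, y): return x * y + math.sin(x)

def s(x):
    return x ** 3 - x            # a section in the old coordinates

def d(h, t, e=1e-6):
    return (h(t + e) - h(t - e)) / (2 * e)

x0 = 0.7
y0 = s(x0)
y1 = d(s, x0)                    # jet coordinate y_lambda of J^1 s at x0

def sprime(xp):
    x = math.log(xp)             # inverse coordinate change on the base
    return yprime(x, s(x))

lhs = d(sprime, xprime(x0))      # y'_lambda computed directly

dyp_dy = x0                      # ∂y'/∂y at (x0, y0)
dyp_dx = y0 + math.cos(x0)       # ∂y'/∂x at (x0, y0)
dx_dxp = 1.0 / math.exp(x0)      # ∂x/∂x' = (dx'/dx)^{-1}
rhs = (dyp_dy * y1 + dyp_dx) * dx_dxp   # the law (50)
assert abs(lhs - rhs) < 1e-5
```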
The second order jet manifold $J^2Y$ is endowed with the adapted coordinates \[ (x^\lambda ,y^i, y^i_\lambda,y^i_{\lambda\mu}=y^i_{\mu\lambda}), \] \[ y^i_\lambda (j_x^2s)=\partial_\lambda s^i(x),\qquad y^i_{\lambda\mu}(j_x^2s)=\partial_\mu\partial_\lambda s^i(x). \] Let $Y$ and $Y'$ be fibred manifolds over $X$ and $\Phi:Y\to Y'$ a fibred morphism over a diffeomorphism $f$ of $X$. It yields the $k$-order jet prolongation \[ J^k\Phi: J^kY\ni j^k_xs\mapsto j^k_{f(x)}(\Phi\circ s\circ f^{-1}) \in J^kY' \] of $\Phi$. In particular, the first order jet prolongation (or simply the jet prolongation) of $\Phi$ reads \[ J^1\Phi : J^1Y \to J^1Y', \] \begin{equation} J^1\Phi : j_x^1s\mapsto j_{f(x)}^1(\Phi\circ s\circ f^{-1}),\label{26} \end{equation} \[ {y'}^i_\lambda\circ J^1\Phi=\partial_\lambda(\Phi^i\circ f^{-1}) +\partial_j(\Phi^iy^j_\lambda \circ f^{-1}). \] It is both an affine bundle morphism over $\Phi$ and a fibred morphism over the diffeomorphism $f$. Every section $s$ of a fibred manifold $Y\to X$ admits the $k$-order jet prolongation to the section \[ (J^ks)(x)\op=^{\rm def} j^k_xs \] of the fibred manifold $J^kY\to X$. In particular, its first order jet prolongation to the section $J^1s$ of the fibred jet manifold $J^1Y\to X$ reads \begin{equation} (x^\lambda,y^i,y_\lambda^i)\circ J^1s= (x^\lambda,s^i(x),\partial_\lambda s^i(x)).\label{27} \end{equation} Every vector field \[ u = u^\lambda\partial_\lambda + u^i\partial_i \] on a fibred manifold $Y\to X$ has the jet lift to the projectable vector field \begin{eqnarray} &&\overline u =r\circ J^1u: J^1Y\to TJ^1Y,\nonumber \\ && \overline u = u^\lambda\partial_\lambda + u^i\partial_i + (\partial_\lambda u^i+y^j_\lambda\partial_ju^i - y_\mu^i\partial_\lambda u^\mu)\partial_i^\lambda, \label{1.21} \end{eqnarray} on the fibred jet manifold $J^1Y\to X$.
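The coefficient of $\partial_i^\lambda$ in (\ref{1.21}) can be recovered numerically by transporting a jet along the first order flow of a projectable vector field and applying the transformation law (\ref{50}); the field $u=a(x)\partial_x+b(x,y)\partial_y$ below is hypothetical.

```python
import math

# Recover the coefficient of the jet lift (1.21) by differentiating the
# flow-transported jet coordinate at t = 0; u = a(x) d_x + b(x,y) d_y is
# a hypothetical projectable vector field.
def a(x): return math.sin(x)
def b(x, y): return x * y + y * y

x0, y0, yx0 = 0.3, 0.8, -0.4        # a point of J^1Y

def lifted_jet(t, e=1e-6):
    # transport the jet through the first order flow
    # x -> x + t*a(x),  y -> y + t*b(x,y), using the law (50)
    dyp_dy = 1.0 + t * (b(x0, y0 + e) - b(x0, y0 - e)) / (2 * e)
    dyp_dx = t * (b(x0 + e, y0) - b(x0 - e, y0)) / (2 * e)
    dxp_dx = 1.0 + t * (a(x0 + e) - a(x0 - e)) / (2 * e)
    return (dyp_dy * yx0 + dyp_dx) / dxp_dx

t = 1e-5
numeric = (lifted_jet(t) - lifted_jet(-t)) / (2 * t)

e = 1e-6
da = (a(x0 + e) - a(x0 - e)) / (2 * e)
db_dx = (b(x0 + e, y0) - b(x0 - e, y0)) / (2 * e)
db_dy = (b(x0, y0 + e) - b(x0, y0 - e)) / (2 * e)
formula = db_dx + yx0 * db_dy - yx0 * da    # the coefficient in (1.21)
assert abs(numeric - formula) < 1e-5
```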
To construct it, we use the canonical fibred morphism \[ r: J^1TY\to TJ^1Y, \] \[ (x^\lambda,y^i,y^i_\lambda,\dot x^\lambda, \dot y^i, \dot y^i_\lambda)\circ r = (x^\lambda,y^i,y^i_\lambda,\dot x^\lambda,\dot y^i,(\dot y^i)_\lambda-y^i_\mu\dot x^\mu_\lambda), \] where $J^1TY$ is the jet manifold of the fibred manifold $TY\to X$. In particular, there exists the canonical isomorphism \begin{equation} VJ^1Y=J^1VY, \qquad \dot y^i_\lambda=(\dot y^i)_\lambda, \label{1.22} \end{equation} where $J^1VY$ is the jet manifold of the fibred manifold $VY\to X$ and $VJ^1Y$ is the vertical tangent bundle of the fibred manifold $J^1Y\to X$. As a consequence, the jet lift (\ref{1.21}) of a vertical vector field $u$ on a fibred manifold $Y\to X$ coincides with its first order jet prolongation \[ \overline u=J^1u=u^i\partial_i +(\partial_\lambda u^i+y^j_\lambda\partial_ju^i)\partial^\lambda_i \] to a vertical vector field on the fibred jet manifold $J^1Y\to X$. Given a second order jet manifold $J^2Y$ of $Y$, we have (i) the fibred morphism \[ r_2: J^2TY\to TJ^2Y, \] \[ (\dot y^i_\lambda, \dot y^i_{\lambda\alpha})\circ r_2 = ((\dot y^i)_\lambda-y^i_\mu\dot x^\mu_\lambda, (\dot y^i)_{\lambda\alpha} -y^i_\mu\dot x^\mu_{\lambda\alpha} - y^i_{\lambda\mu}\dot x^\mu_\alpha), \] and (ii) the canonical isomorphism \[ VJ^2Y=J^2VY \] where $J^2VY$ is the second order jet manifold of the fibred manifold $VY\to X$ and $VJ^2Y$ is the vertical tangent bundle of the fibred manifold $J^2Y\to X$. As a consequence, every vector field $u$ on a fibred manifold $Y\to X$ admits the second order jet lift to the projectable vector field \[ \overline u =r_2\circ J^2u: J^2Y\to TJ^2Y.
\] In particular, if \[ u = u^\lambda\partial_\lambda + u^i\partial_i \] is a projectable vector field on $Y$, its second order jet lift reads \begin{eqnarray} && \overline u = u^\lambda\partial_\lambda + u^i\partial_i + (\partial_\lambda u^i+y^j_\lambda\partial_ju^i - y_\mu^i\partial_\lambda u^\mu)\partial_i^\lambda+ \nonumber \\ && \qquad [(\partial_\alpha +y^j_\alpha\partial_j +y^j_{\beta\alpha}\partial^\beta_j) (\partial_\lambda +y^k_\lambda\partial_k)u^i -y^i_\mu\partial_\alpha\partial_\lambda u^\mu - y^i_{\lambda\mu}\partial_\alpha u^\mu]\partial_i^{\lambda\alpha}. \label{80} \end{eqnarray} Given a $k$-order jet manifold $J^kY$ of $Y$, we have the fibred morphism \[ r_k: J^kTY\to TJ^kY \] and the canonical isomorphism \[ VJ^kY=J^kVY \] where $J^kVY$ is the $k$-order jet manifold of the fibred manifold $VY\to X$ and $VJ^kY$ is the vertical tangent bundle of the fibred manifold $J^kY\to X$. As a consequence, every vector field $u$ on a fibred manifold $Y\to X$ has the $k$-order jet lift to the projectable vector field \begin{eqnarray} &&\overline u =r_k\circ J^ku: J^kY\to TJ^kY, \nonumber \\ && \overline u = u^\lambda\partial_\lambda + u^i\partial_i + u_\Lambda^i\partial_i^\Lambda, \nonumber\\ && u_{\Lambda+\lambda}^i = \widehat\partial_\lambda u_\Lambda^i - y_{\Lambda+\mu}^i\partial_\lambda u^\mu, \label{84} \end{eqnarray} on $J^kY$ where \begin{equation} \widehat\partial_\lambda = (\partial_\lambda + y^i_{\Sigma+\lambda}\partial_i^\Sigma), \qquad 0\leq \mid\Sigma\mid \leq k. \label{85} \end{equation} The expression (\ref{84}) is the $k$-order generalization of the expressions (\ref{1.21}) and (\ref{80}). The algebraic structure of a bundle $Y\to X$ also has a jet prolongation to the jet bundle $J^1Y\to X$ due to the jet prolongations of the corresponding morphisms. If $Y$ is a vector bundle, $J^1Y\to X$ does as well.
Let $Y$ be a vector bundle and $\langle\rangle$ the linear fibred morphism \begin{eqnarray*} &&\langle\rangle: Y\op\times_X Y^*\op\to_X X\times{\bf R},\\ && r\circ\langle\rangle = y^iy_i. \end{eqnarray*} The jet prolongation of $\langle\rangle$ is the linear fibred morphism \begin{eqnarray*} &&J^1\langle\rangle :J^1Y\op\times_X J^1Y^* \op\to_X T^*X\times{\bf R},\\ && \dot x_\mu \circ J^1\langle\rangle = y^i_\mu y_i +y^iy_{i\mu}. \end{eqnarray*} Let $Y\to X$ and $Y'\to X$ be vector bundles and $\otimes$ the bilinear fibred morphism \begin{eqnarray*} && \otimes :Y\op\times_X Y' \op\to_X Y\op\otimes_X Y', \\ && y^{ik}\circ\otimes = y^iy^k. \end{eqnarray*} The jet prolongation of $\otimes$ is the bilinear fibred morphism \begin{eqnarray*} && J^1\otimes : J^1Y\op\times_X J^1Y'\op\to_X J^1(Y\op\otimes_X Y'), \\ && y^{ik}_\mu\circ J^1\otimes =y^i_\mu y^k + y^i y^k_\mu. \end{eqnarray*} If $Y$ is an affine bundle modelled on a vector bundle $\overline Y$, then $J^1Y\to X$ is an affine bundle modelled on the vector bundle $J^1\overline Y\to X$. \begin{proposition}{1.3} There exist the following bundle monomorphisms: \begin{itemize}\begin{enumerate} \item the contact map \begin{eqnarray} &&\lambda:J^1Y\op\to_Y T^*X \op\otimes_Y TY,\label{18}\\ &&\lambda=dx^\lambda\otimes\widehat{\partial}_\lambda=dx^\lambda \otimes (\partial_\lambda + y^i_\lambda \partial_i),\nonumber \end{eqnarray} \item the complementary morphism \begin{eqnarray} &&\theta_1:J^1Y \op\to_Y T^*Y\op\otimes_Y VY,\label{24}\\ &&\theta_1=\widehat{d}y^i \otimes \partial_i=(dy^i- y^i_\lambda dx^\lambda)\otimes\partial_i. \nonumber \end{eqnarray} \end{enumerate}\end{itemize}\end{proposition} These monomorphisms enable us to treat jets as familiar tangent-valued forms.
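In particular, composed with the jet prolongation $J^1s$ of a section, the frame $\widehat\partial_\lambda$ of the contact map (\ref{18}) acts as the total derivative of a function along $s$. A numeric sketch with a hypothetical function $\phi$ and section $s$:

```python
import math

# Along J^1 s, the operator \hat\partial_x = \partial_x + y_x \partial_y of
# the contact map (18) is the total derivative d/dx of phi(x, s(x));
# phi and s below are hypothetical.
def phi(x, y): return math.exp(x) * y + math.cos(y)
def s(x): return x * x - 1.0

e, x0 = 1e-6, 0.9
y0 = s(x0)
yx0 = (s(x0 + e) - s(x0 - e)) / (2 * e)      # jet coordinate y_x of J^1 s

dphi_dx = (phi(x0 + e, y0) - phi(x0 - e, y0)) / (2 * e)
dphi_dy = (phi(x0, y0 + e) - phi(x0, y0 - e)) / (2 * e)

hat_d = dphi_dx + yx0 * dphi_dy              # (\partial_x + y_x \partial_y) phi
direct = (phi(x0 + e, s(x0 + e)) - phi(x0 - e, s(x0 - e))) / (2 * e)
assert abs(hat_d - direct) < 1e-5
```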
The canonical morphisms (\ref{18}) and (\ref{24}) define the bundle monomorphisms \begin{eqnarray} && \widehat\lambda: J^1Y\op\times_X TX\ni\partial_\lambda\mapsto\widehat{\partial}_\lambda \in J^1Y\op\times_Y TY, \label{30} \\ &&\widehat\theta_1: J^1Y\op\times_Y V^*Y\ni dy^i\mapsto\widehat dy^i\in J^1Y\op\times_Y T^*Y, \label{1.18} \end{eqnarray} The morphism (\ref{30}) yields the canonical horizontal splitting of the pullback \begin{equation} J^1Y\op\times_Y TY=\widehat\lambda(TX)\op\oplus_{J^1Y} VY,\label{1.20} \end{equation} \[ \dot x^\lambda\partial_\lambda +\dot y^i\partial_i =\dot x^\lambda(\partial_\lambda +y^i_\lambda\partial_i) + (\dot y^i-\dot x^\lambda y^i_\lambda)\partial_i. \] Accordingly, the morphism (\ref{1.18}) yields the dual canonical horizontal splitting of the pullback \begin{equation} J^1Y\op\times_Y T^*Y=T^*X\op\oplus_{J^1Y} \widehat\theta_1(V^*Y),\label{34} \end{equation} \[ \dot x_\lambda dx^\lambda +\dot y_i dy^i =(\dot x_\lambda + \dot y_iy^i_\lambda)dx^\lambda + \dot y_i(dy^i- y^i_\lambda dx^\lambda). \] In other words, over $J^1Y$, we have the canonical horizontal splittings of the tangent and cotangent bundles $TY$ and $T^*Y$ and the corresponding splittings of the exact sequences (\ref{1.8a}) and (\ref{1.8b}). 
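The splitting (\ref{1.20}) is a pointwise statement of linear algebra and can be illustrated directly; the numbers below are arbitrary.

```python
# Decompose a tangent vector (xdot, ydot) of Y at a point with jet
# coordinate y_lam, following the splitting (1.20); values are arbitrary.
xdot, ydot, y_lam = 1.3, -0.7, 2.0

horizontal = (xdot, xdot * y_lam)        # xdot * (d_x + y_lam d_y)
vertical = (0.0, ydot - xdot * y_lam)    # (ydot - xdot*y_lam) d_y

assert horizontal[0] + vertical[0] == xdot
assert abs(horizontal[1] + vertical[1] - ydot) < 1e-12

# A vector tangent to the graph of a section (ydot = y_lam * xdot)
# has vanishing vertical part.
vert_part = (y_lam * xdot) - xdot * y_lam
assert abs(vert_part) < 1e-12
```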
In particular, one gets the canonical horizontal splittings of \begin{itemize}\begin{enumerate} \item a projectable vector field \begin{eqnarray} && u =u^\lambda\partial_\lambda +u^i\partial_i=u_H +u_V \nonumber\\ && \quad =u^\lambda (\partial_\lambda +y^i_\lambda \partial_i)+(u^i - u^\lambda y^i_\lambda)\partial_i, \label{31} \end{eqnarray} \item an exterior 1-form \begin{eqnarray*} &&\sigma =\sigma_\lambda dx^\lambda + \sigma^idy^i\\ &&\quad =(\sigma_\lambda + y^i_\lambda\sigma_i)dx^\lambda + \sigma_i(dy^i- y^i_\lambda dx^\lambda), \end{eqnarray*} \item a tangent-valued projectable horizontal form \begin{eqnarray*} &&\phi = dx^{\lambda_1}\wedge\dots\wedge dx^{\lambda_r}\otimes (\phi_{\lambda_1\dots\lambda_r}^\mu\partial_\mu + \phi_{\lambda_1\dots\lambda_r}^i\partial_i)\\ &&\quad = dx^{\lambda_1}\wedge\dots\wedge dx^{\lambda_r}\otimes [\phi_{\lambda_1\dots\lambda_r}^\mu (\partial_\mu +y^i_\mu \partial_i) \\ &&\quad\qquad+(\phi_{\lambda_1\dots\lambda_r}^i - \phi_{\lambda_1\dots\lambda_r}^\mu y^i_\mu)\partial_i], \end{eqnarray*} \item the canonical 1-form \begin{eqnarray} &&\theta_Y=dx^\lambda\otimes\partial_\lambda + dy^i\otimes\partial_i \nonumber\\ &&\quad =\lambda + \theta_1=dx^\lambda\otimes\widehat{\partial}_\lambda+\widehat d y^i\otimes\partial_i\nonumber \\ &&\quad\qquad = dx^\lambda\otimes(\partial_\lambda + y^i_\lambda \partial_i) + (dy^i-y^i_\lambda dx^\lambda)\otimes\partial_i.\label{35} \end{eqnarray} \end{enumerate}\end{itemize} As an immediate consequence of the splitting (\ref{35}), we get the canonical horizontal splitting of the exterior differential \begin{equation} d=d_{\theta_Y}=d_H+d_V=d_\lambda + d_{\theta_1}. \label{1.19} \end{equation} Its components $d_H$ and $d_V$ act on the pullbacks of horizontal exterior forms \[ \phi_{\lambda_1\dots\lambda_r}(y)dx^{\lambda_1}\wedge\dots\wedge dx^{\lambda_r} \] on a fibred manifold $Y\to X$ by $\pi_{01}$ onto $J^1Y$. 
In this case, $d_H$ acts as the total differential \begin{eqnarray*} && d_H\phi_{\lambda_1\dots\lambda_r}(y)dx^{\lambda_1}\wedge\dots\wedge dx^{\lambda_r}\\ &&\qquad =(\partial_\mu + y^i_\mu\partial_i) \phi_{\lambda_1\dots\lambda_r}(y)dx^\mu\wedge dx^{\lambda_1}\wedge\dots\wedge dx^{\lambda_r}, \end{eqnarray*} whereas $d_V$ is the vertical differential \begin{eqnarray*} && d_V\phi_{\lambda_1\dots\lambda_r}(y)dx^{\lambda_1}\wedge\dots\wedge dx^{\lambda_r}\\ &&\qquad =\partial_i \phi_{\lambda_1\dots\lambda_r}(y)(dy^i-y^i_\mu dx^\mu)\wedge dx^{\lambda_1}\wedge\dots\wedge dx^{\lambda_r}. \end{eqnarray*} If $\phi=\widetilde\phi\omega$ is an exterior horizontal density on $Y\to X$, we have \[ d\phi=d_V\phi=\partial_i\widetilde\phi dy^i\wedge\omega. \] There exist the following second order generalizations of the contact map (\ref{18}) and the complementary morphism (\ref{24}) to the second order jet manifold $J^2Y$: \begin{eqnarray} (i)\quad &&\lambda:J^2Y\op\to_{J^1Y} T^*X \op\otimes_{J^1Y} TJ^1Y,\nonumber\\ &&\lambda=dx^\lambda\otimes\widehat{\partial}_\lambda=dx^\lambda \otimes (\partial_\lambda + y^i_\lambda \partial_i + y^i_{\mu\lambda}\partial_i^\mu),\label{54}\\ (ii)\quad &&\theta_1:J^2Y \op\to_{J^1Y}T^*J^1Y\op\otimes_{J^1Y} VJ^1Y,\nonumber\\ &&\theta_1=(dy^i- y^i_\lambda dx^\lambda)\otimes\partial_i + (dy^i_\mu- y^i_{\mu\lambda} dx^\lambda)\otimes\partial_i^\mu.\label{55} \end{eqnarray} The contact map (\ref{54}) defines the canonical horizontal splitting of the exact sequence \[ 0\to VJ^1Y\op\hookrightarrow_{J^1Y} TJ^1Y\op\to_{J^1Y} J^1Y\op\times_X TX\to 0. \] In particular, we get the canonical horizontal splitting of a projectable vector field $\overline u$ on $J^1Y$ over $J^2Y$: \begin{eqnarray} &&\overline u=u_H+u_V = u^\lambda(\partial_\lambda+ y^i_\lambda\partial_i+y^i_{\mu\lambda}\partial_i^\mu) \nonumber \\ && \qquad +[(u^i-y^i_\lambda u^\lambda)\partial_i+(u^i_\mu- y^i_{\mu\lambda}u^\lambda)\partial^\mu_i]. 
\label{79} \end{eqnarray} Using the morphisms (\ref{54}) and (\ref{55}), we can also obtain the horizontal splitting of the canonical tangent-valued 1-form $\theta_{J^1Y}$ on $J^1Y$ and, as a result, the horizontal splitting of the exterior differential, which are similar to the horizontal splittings (\ref{35}) and (\ref{1.19}): \begin{eqnarray} &&\theta_{J^1Y}=dx^\lambda\otimes\partial_\lambda + dy^i\otimes\partial_i + dy^i_\mu\otimes\partial_i^\mu=\lambda +\theta_1, \nonumber \\ && d= d_{\theta_{J^1Y}}=d_\lambda +d_{\theta_1}= d_H +d_V. \label{56} \end{eqnarray} The contact maps (\ref{18}) and (\ref{54}) are the particular cases of the monomorphism \begin{eqnarray} &&\lambda: J^{k+1}Y\to T^*X\op\otimes_{J^kY} TJ^kY,\nonumber\\ &&\lambda =dx^\lambda\otimes(\partial_\lambda + y^i_{\Lambda+\lambda}\partial_i^\Lambda), \qquad 0\leq \mid\Lambda\mid \leq k. \label{82} \end{eqnarray} The $k$-order contact map (\ref{82}) sets up the canonical horizontal splitting of the exact sequence \[ 0\to VJ^kY\hookrightarrow TJ^kY\to J^kY\op\times_X TX\to 0. \] In particular, we get the canonical horizontal splitting of a projectable vector field $\overline u$ on $J^kY\to X$ over $J^{k+1}Y$: \[ \overline u=u_H+u_V= u^\lambda (\partial_\lambda + y^i_{\Lambda+\lambda}\partial_i^\Lambda) + (u^i_\Lambda - y^i_{\Lambda+\lambda}u^\lambda)\partial_i^\Lambda, \qquad 0\leq \mid\Lambda\mid \leq k . \] This splitting is the $k$-order generalization of the splittings (\ref{31}) and (\ref{80}). \begin{definition}{1.4} The repeated jet manifold $J^1J^1Y$ is defined to be the first order jet manifold of the fibred jet manifold $J^1Y\to X$. 
\end{definition} Given the coordinates (\ref{50}) of $J^1Y$, the repeated jet manifold $J^1J^1Y$ is provided with the adapted coordinates \begin{equation} (x^\lambda ,y^i,y^i_\lambda ,y_{(\mu)}^i,y^i_{\lambda\mu}).\label{51} \end{equation} There are the following two bundles: \begin{eqnarray} (i) \quad&&\pi_{11}:J^1J^1Y\to J^1Y, \label{S1}\\ &&y_\lambda^i\circ\pi_{11} = y_\lambda^i,\nonumber\\ (ii) \quad&&J^1\pi_{01}:J^1J^1Y\to J^1Y,\label{S'1}\\ && y_\lambda^i\circ J^1\pi_{01} = y_{(\lambda)}^i.\nonumber \end{eqnarray} Their affine difference over $Y$ yields the Spencer bundle morphism \[ \delta=J^1\pi_{01} - \pi_{11} :J^1J^1Y\op\to_Y T^*X\op\otimes_Y VY, \] \[ (x^\lambda ,y^i,\dot x_\lambda\otimes \dot y^i)\circ\delta =(x^\lambda, y^i,y_{(\lambda)}^i-y^i_\lambda). \] The kernel of this morphism is the sesquiholonomic affine subbundle \begin{equation} \widehat J^2Y\to J^1Y\label{52} \end{equation} of the bundles (\ref{S1}) and (\ref{S'1}). This subbundle is characterized by the coordinate condition $y^i_{(\lambda)}= y^i_\lambda.$ It is modelled on the vector bundle \[ \op\otimes^2 T^*X\op\otimes_{J^1Y} VY. \] Given the coordinates (\ref{51}) of $J^1J^1Y$, the sesquiholonomic jet manifold $\widehat J^2Y$ is provided with the adapted coordinates $(x^\lambda ,y^i, y^i_\lambda ,y^i_{\lambda\mu})$. The second order jet manifold $J^2Y$ is the affine subbundle of the bundle (\ref{52}) given by the coordinate condition $y^i_{\lambda\mu}=y^i_{\mu\lambda}.$ It is modelled on the vector bundle \[ \op\vee^2 T^*X\op\otimes_{J^1Y} VY. \] We have the following affine bundle monomorphisms \[ J^2Y\hookrightarrow \widehat J^2Y\hookrightarrow J^1J^1Y \] over $J^1Y$ and the canonical splitting \begin{equation} \widehat J^2Y =J^2Y\op\oplus_{J^1Y} (\op\wedge^2 T^*X \op\otimes_Y VY), \label{1.23} \end{equation} \[ y^i_{\lambda\mu} = \frac12(y^i_{\lambda\mu}+y^i_{\mu\lambda}) + \frac12(y^i_{\lambda\mu}-y^i_{\mu\lambda}). 
\] Let $\Phi$ be a fibred morphism of a fibred manifold $Y\to X$ to a fibred manifold $Y'\to X$ over a diffeomorphism of $X$. Let $J^1\Phi$ be its first order jet prolongation (\ref{26}). One can consider the first order jet prolongation $J^1J^1\Phi$ of the fibred morphism $J^1\Phi$. The restriction of the morphism $J^1J^1\Phi$ to the second order jet manifold $J^2Y$ of $Y$ coincides with the second order jet prolongation $J^2\Phi$ of a fibred morphism $\Phi$. In particular, the repeated jet prolongation $J^1J^1s$ of a section $s$ of $Y\to X$ is a section of the fibred manifold $J^1J^1Y\to X$. It takes its values into $J^2Y$ and coincides with the second order jet prolongation $J^2s$ of $s$: \[ (J^1J^1s)(x)=(J^2s)(x)=j^2_xs. \] Given the fibred jet manifold $J^kY\to X$, let us consider the repeated jet manifold $J^1J^kY$ provided with the adapted coordinates $(x^\mu, y^i_\Lambda, y^i_{\Lambda\lambda})$. Just as in the case of $k=2$, there exist two fibred morphisms of $J^1J^kY$ to $J^1J^{k-1}Y$ over $X$. Their difference over $J^{k-1}Y$ is the $k$-order Spencer morphism \[ J^1J^kY\to T^*X\op\otimes_{J^{k-1}Y} VJ^{k-1}Y \] where $ VJ^{k-1}Y$ is the vertical tangent bundle of the fibred manifold $J^{k-1}Y\to X$. Its kernel is the sesquiholonomic subbundle $\widehat J^{k+1}Y$ of the bundle $J^1J^kY\to J^kY$. It is endowed with the coordinates $(x^\mu, y^i_\Lambda, y^i_{\Lambda\mu})$, $\mid\Lambda\mid=k$. \begin{proposition}{1.5} There exist the fibred monomorphisms \begin{equation} J^kY\hookrightarrow\widehat J^kY\hookrightarrow J^1J^{k-1}Y \label{76} \end{equation} and the canonical splitting \begin{equation} \widehat J^{k+1}Y= J^{k+1}Y\op\oplus_{J^kY}(T^*X\wedge\op\vee^{k-1}T^*X\op\otimes_Y VY),\label{75} \end{equation} \[ (x^\mu,y^i_\Lambda, y^i_{\Lambda\mu}) =(x^\mu,y^i_\Lambda, y^i_{(\Lambda\mu)}+y^i_{[\Lambda\mu]}). \] \end{proposition} We have the following integrability condition. 
\begin{lemma}{1.6} Let $\overline s$ be a section of the fibred manifold $J^kY\to X$. Then, the following conditions are equivalent: \begin{itemize}\begin{enumerate} \item $\overline s=J^ks$ where $s$ is a section of $Y\to X$, \item $J^1\overline s:X\to \widehat J^{k+1}Y$, \item $J^1\overline s:X\to J^{k+1}Y$. \end{enumerate}\end{itemize} \end{lemma} Building on Proposition 1.5 and Lemma 1.6, we now describe the reduction of higher order differential operators to first order ones. Let $Y$ and $Y'$ be fibred manifolds over $X$ and $J^kY$ the $k$-order jet manifold of $Y$. \begin{definition}{1.7} A fibred morphism \begin{equation} {\cal E}: J^kY\op\to_X Y' \label{70} \end{equation} is called the $k$-order differential operator (of class $C^\infty$) on $Y$. It sends every section $s$ of the fibred manifold $Y$ to the section ${\cal E}\circ J^ks$ of the fibred manifold $Y'$. \end{definition} \begin{proposition}{1.8} Given a fibred manifold $Y$, every first order differential operator \begin{equation} {\cal E}'': J^1J^{k-1}Y\op\to_X Y'\label{72} \end{equation} on $J^{k-1}Y$ defines the $k$-order differential operator ${\cal E}={\cal E}''\mid_{J^kY}$ on $Y$. Conversely, any $k$-order differential operator (\ref{70}) on $Y$ can be represented by the restriction ${\cal E}''\mid_{J^kY}$ of some first order differential operator (\ref{72}) on $J^{k-1}Y$ to the $k$-order jet manifold $J^kY$. \end{proposition} \begin{proof} Because of the monomorphism (\ref{76}), every fibred morphism $J^kY\to Y'$ can be extended to a fibred morphism $J^1J^{k-1}Y\to Y'.$ \end{proof} In particular, every $k$-order differential operator (\ref{70}) yields the morphism \begin{equation} {\cal E}'={\cal E}\circ{\rm pr\,}_2: \widehat J^kY\op\to_X Y' \label{77} \end{equation} where ${\rm pr\,}_2:\widehat J^kY\to J^kY$ is the surjection corresponding to the canonical splitting (\ref{75}). 
It follows that, for every section $s$ of a fibred manifold $Y\to X$, we have the equality \[ {\cal E}'\circ J^1J^{k-1}s= {\cal E}\circ J^ks. \] We call ${\cal E}'$ (\ref{77}) the sesquiholonomic differential operator and consider extensions of a $k$-order differential operator ${\cal E}$ (\ref{70}) to first order differential operators (\ref{72}) only through its extension to the sesquiholonomic differential operator (\ref{77}). Let $\overline s$ be a section of the fibred $(k-1)$-order jet manifold $J^{k-1}Y\to X$ such that its first order jet prolongation $J^1\overline s$ takes its values into the sesquiholonomic jet manifold $\widehat J^kY$. By virtue of Lemma 1.6, there exists a section $s$ of $Y\to X$ such that $\overline s=J^{k-1}s$ and \begin{equation} {\cal E}'\circ J^1\overline s= {\cal E}\circ J^ks.\label{S5} \end{equation} Let $Y'\to X$ be a composite manifold $Y'\to Y\to X$ where $Y'\to Y$ is a vector bundle. Let the $k$-order differential operator (\ref{70}) on a fibred manifold $Y$ be a fibred morphism over $Y$. By ${\rm Ker\,}{\cal E}$ is meant the preimage ${\cal E}^{-1}(\widehat 0(Y))$ where $\widehat 0$ is the canonical zero section of $Y'\to Y$. We say that a section $s$ of $Y$ satisfies the corresponding system of $k$-order differential equations if \begin{equation} J^ks(X)\subset {\rm Ker\,}{\cal E}. \label{73} \end{equation} Let a $k$-order differential operator ${\cal E}$ on $Y\to X$ be extended to a first order differential operator ${\cal E}''$ on $J^{k-1}Y$. Let $\overline s$ be a section of the fibred manifold $J^{k-1}Y\to X$. We shall say that $\overline s$ is a sesquiholonomic solution of the corresponding system of first order differential equations if \begin{equation} J^1\overline s(X)\subset {\rm Ker\,}{\cal E}''\cap \widehat J^kY. 
\label{74} \end{equation} \begin{proposition}{1.9} The system of the $k$-order differential equations (\ref{73}) and the system of the first order differential equations (\ref{74}) are equivalent to each other. \end{proposition} \begin{proof} By virtue of the relation (\ref{S5}), every solution $s$ of the equations (\ref{73}) yields the solution \begin{equation} \overline s=J^{k-1}s \label{78} \end{equation} of the equations (\ref{74}). Conversely, every solution of the equations (\ref{74}) takes the form (\ref{78}) where $s$ is a solution of the equations (\ref{73}). \end{proof} \section{General Connections} Most of the existing differential geometric methods in field theory are based on principal bundles and principal connections. We follow the general notion of connections as sections of jet bundles $J^1Y\to Y$ without appealing to transformation groups. Given a fibred manifold $Y\to X$, the canonical horizontal splittings (\ref{1.20}) and (\ref{34}) of the tangent and cotangent bundles $TY$ and $T^*Y$ of $Y$ over the jet bundle $J^1Y\to Y$ enable us to get the splittings of the exact sequences (\ref{1.8a}) and (\ref{1.8b}) by means of a section of this jet bundle. \begin{definition}{1.10} A first order jet field (or simply a jet field) on a fibred manifold $Y\to X$ is defined to be a section $\Gamma$ of the affine jet bundle $J^1Y\to Y$. 
A first order connection $\Gamma$ on a fibred manifold $Y$ is defined to be a global jet field \begin{eqnarray} &&\Gamma :Y\to J^1Y,\nonumber\\ &&(x^\lambda ,y^i,y^i_\lambda)\circ\Gamma =(x^\lambda,y^i,\Gamma^i_\lambda (y)).\label{61} \end{eqnarray} \end{definition} By means of the contact map $\lambda$ (\ref{18}), every connection $\Gamma$ (\ref{61}) on a fibred manifold $Y\to X$ can be represented by a projectable tangent-valued horizontal 1-form $\lambda\circ\Gamma$ on $Y$ which we denote by the same symbol \begin{eqnarray} &&\Gamma =dx^\lambda\otimes(\partial_\lambda +\Gamma^i_\lambda (y)\partial_i), \label{37}\\ &&{\Gamma'}^i_\lambda = (\frac{\partial{y'}^i}{\partial y^j}\Gamma_\mu^j + \frac{\partial{y'}^i}{\partial x^\mu})\frac{\partial x^\mu}{\partial{x'}^\lambda}.\nonumber \end{eqnarray} Substituting a connection $\Gamma$ (\ref{37}) into the canonical horizontal splittings (\ref{1.20}) and (\ref{34}), we obtain the familiar horizontal splittings \begin{eqnarray} &&\dot x^\lambda\partial_\lambda +\dot y^i\partial_i = \dot x^\lambda (\partial_\lambda +\Gamma^i_\lambda\partial_i) + (\dot y^i-\dot x^\lambda\Gamma^i_\lambda)\partial_i, \nonumber\\ &&\dot x_\lambda dx^\lambda +\dot y_idy^i = (\dot x_\lambda +\Gamma^i_\lambda\dot y_i)dx^\lambda + \dot y_i(dy^i-\Gamma^i_\lambda dx^\lambda) \label{9} \end{eqnarray} of the tangent and cotangent bundles $TY$ and $T^*Y$ with respect to a connection $\Gamma$ on $Y$. Conversely, every horizontal splitting (\ref{9}) determines a tangent-valued form (\ref{37}) and, consequently, a global jet field on $Y\to X$. Since the affine jet bundle $J^1Y\to Y$ is modelled on the vector bundle (\ref{23}), connections on a fibred manifold $Y$ constitute an affine space modelled on the linear space of soldering forms on $Y$. 
It follows that, if $\Gamma$ is a connection and \[ \sigma=\sigma^i_\lambda dx^\lambda\otimes\partial_i \] is a soldering form on a fibred manifold $Y$, then \[ \Gamma+\sigma=dx^\lambda\otimes[\partial_\lambda+(\Gamma^i_\lambda +\sigma^i_\lambda)\partial_i] \] is a connection on $Y$. Conversely, if $\Gamma$ and $\Gamma'$ are connections on a fibred manifold $Y$, then \[ \Gamma-\Gamma'=(\Gamma^i_\lambda -{\Gamma'}^i_\lambda)dx^\lambda\otimes\partial_i \] is a soldering form on $Y$. For instance, let $Y\to X$ be a vector bundle. A linear connection on $Y$ reads \begin{equation} \Gamma=dx^\lambda\otimes[\partial_\lambda+\Gamma^i{}_{j\lambda}(x)y^j\partial_i]. \label{8} \end{equation} One introduces the following basic forms involving a connection $\Gamma$ and a soldering form $\sigma$ on a fibred manifold $Y$: (i) the curvature of $\Gamma$: \begin{eqnarray} &&R =\frac12 d_\Gamma\Gamma =\frac12 R^i_{\lambda\mu} dx^\lambda\wedge dx^\mu\otimes\partial_i= \nonumber \\ &&\quad \frac12 (\partial_\lambda\Gamma^i_\mu -\partial_\mu\Gamma^i_\lambda +\Gamma^j_\lambda\partial_j\Gamma^i_\mu -\Gamma^j_\mu\partial_j\Gamma^i_\lambda) dx^\lambda\wedge dx^\mu\otimes\partial_i; \label{13} \end{eqnarray} (ii) the torsion of $\Gamma$ with respect to $\sigma$: \begin{eqnarray} &&\Omega = d_\sigma\Gamma =d_\Gamma\sigma =\frac 12 \Omega^i_{\lambda\mu} dx^\lambda \wedge dx^\mu\otimes\partial_i= \nonumber\\ &&\quad (\partial_\lambda\sigma^i_\mu +\Gamma^j_\lambda\partial_j\sigma^i_\mu -\partial_j\Gamma^i_\lambda\sigma^j_\mu) dx^\lambda\wedge dx^\mu\otimes \partial_i; \label{14} \end{eqnarray} (iii) the soldering curvature of $\sigma$: \begin{eqnarray} &&\varepsilon=\frac12 d_\sigma\sigma=\frac12 \varepsilon^i_{\lambda\mu}dx^\lambda\wedge dx^\mu\otimes\partial_i= \nonumber\\ &&\quad \frac12 (\sigma^j_\lambda\partial_j \sigma^i_\mu - \sigma^j_\mu\partial_j \sigma^i_\lambda) dx^\lambda\wedge dx^\mu\otimes \partial_i. 
\label{15} \end{eqnarray} In particular, the curvature (\ref{13}) of the linear connection (\ref{8}) reads \begin{eqnarray} && R^i_{\lambda\mu}(y)=R^i{}_{j\lambda\mu}(x)y^j, \nonumber\\ &&R^i{}_{j\lambda\mu}=\partial_\lambda\Gamma^i{}_{j\mu} -\partial_\mu\Gamma^i{}_{j\lambda} +\Gamma^k{}_{j\lambda}\Gamma^i{}_{k\mu} -\Gamma^k{}_{j\mu}\Gamma^i{}_{k\lambda}.\label{25} \end{eqnarray} A connection $\Gamma$ on a fibred manifold $Y\to X$ yields the affine bundle morphism \begin{eqnarray} &&D_\Gamma:J^1Y\ni z\mapsto z-\Gamma(\pi_{01}(z))\in T^*X\op\otimes_Y VY, \label{38} \\ &&D_\Gamma =(y^i_\lambda -\Gamma^i_\lambda)dx^\lambda\otimes\partial_i. \nonumber \end{eqnarray} It is called the covariant differential. The corresponding covariant derivative of sections $s$ of $Y$ reads \begin{equation} \nabla_\Gamma s=D_\Gamma\circ J^1s=(\partial_\lambda s^i- (\Gamma\circ s)^i_\lambda)dx^\lambda\otimes\partial_i. \label{39} \end{equation} A section $s$ of a fibred manifold $Y$ is called an integral section for a connection $\Gamma$ on $Y$ if \begin{equation} \Gamma\circ s=J^1s,\label{40} \end{equation} that is, $ \nabla_\Gamma s=0$. Now, we consider some particular properties of linear connections on vector bundles. Let $Y\to X$ be a vector bundle and $\Gamma$ a linear connection on $Y$. On the dual vector bundle $Y^*\to X$, there exists the dual connection $\Gamma^*$ to $\Gamma$, given by the coordinate expression \[ \Gamma^*_{i\lambda}=-\Gamma^j{}_{i\lambda}(x)y_j. \] For instance, a linear connection $K$ on the tangent bundle $TX$ of a manifold $X$ and the dual connection $K^*$ to $K$ on the cotangent bundle $T^*X$ read \begin{eqnarray} && K^\alpha_\lambda=K^\alpha{}_{\nu\lambda}(x)\dot x^\nu,\nonumber\\ &&K^*_{\alpha\lambda}=-K^\nu{}_{\alpha\lambda}(x)\dot x_\nu. \label{408} \end{eqnarray} Let $Y$ and $Y'$ be vector bundles over $X$. 
Given linear connections $\Gamma$ and $\Gamma'$ on $Y$ and $Y'$ respectively, the tensor product connection $\Gamma\otimes\Gamma'$ on the tensor product \[ Y\op\otimes_X Y'\to X \] is defined. It takes the coordinate form \[ (\Gamma\otimes\Gamma')^{ik}_\lambda=\Gamma^i{}_{j\lambda}y^{jk}+{\Gamma'}^k{}_{j\lambda}y^{ij}. \] The construction of the dual connection and the tensor product connection can be extended to connections on composite manifolds (\ref{1.34}) when $Y\to\Sigma$ is a vector bundle. Let $Y\to\Sigma\to X$ be the composite manifold (\ref{1.34}). Let $J^1\Sigma$, $J^1Y_\Sigma$ and $J^1Y$ be the first order jet manifolds of $\Sigma\to X$, $Y\to \Sigma$ and $Y\to X$ respectively. Given fibred coordinates $(x^\lambda, \sigma^m, y^i)$ (\ref{1.36}) of $Y$, the corresponding adapted coordinates of the jet manifolds $J^1\Sigma$, $J^1Y_\Sigma$ and $J^1Y$ are \begin{eqnarray} &&( x^\lambda ,\sigma^m, \sigma^m_\lambda),\nonumber\\ &&( x^\lambda ,\sigma^m, y^i, \widetilde y^i_\lambda, y^i_m),\nonumber\\ &&( x^\lambda ,\sigma^m, y^i, \sigma^m_\lambda ,y^i_\lambda) .\label{47} \end{eqnarray} We say that a connection \[ A=dx^\lambda\otimes[\partial_\lambda+\Gamma^m_\lambda (\sigma) \partial_m + A^i_\lambda (y)\partial_i], \qquad \sigma=\pi_{Y\Sigma}(y), \] on a composite manifold $Y\to\Sigma\to X$ is projectable to a connection \[ \Gamma= dx^\lambda\otimes[\partial_\lambda+\Gamma^m_\lambda (\sigma)\partial_m] \] on the fibred manifold $\Sigma\to X$, if there exists the commutative diagram \[ \begin{array}{rcccl} & {J^1Y} & \op\longrightarrow^{J^1\pi_{Y\Sigma}} & {J^1\Sigma} & \\ {_A} &\put(0,-10){\vector(0,1){20}} & & \put(0,-10){\vector(0,1){20}} & {_\Gamma} \\ & {Y} & \op\longrightarrow_{\pi_{Y\Sigma}} & {\Sigma} & \end{array} \] Let $Y\to \Sigma$ be a vector bundle and \[ A=dx^\lambda\otimes[\partial_\lambda+\Gamma^m_\lambda (\sigma) \partial_m + A^i{}_{j\lambda}(\sigma )y^j\partial_i] \] a connection on $Y$ which is a linear bundle morphism over $\Gamma$. 
Let $Y^*\to\Sigma\to X$ be a composite manifold where $Y^*\to \Sigma$ is the vector bundle dual to $Y\to \Sigma$. On $Y^*\to X$, there exists the dual connection \begin{equation} A^*=dx^\lambda\otimes[\partial_\lambda+\Gamma^m_\lambda (\sigma) \partial_m - A^j{}_{i\lambda}(\sigma )y_j\partial^i] \label{65} \end{equation} projectable to $\Gamma$. Let $Y\to\Sigma\to X$ and $Y'\to\Sigma\to X$ be composite manifolds where $Y\to\Sigma$ and $Y'\to\Sigma$ are vector bundles. Let $A$ and $A'$ be connections on $Y$ and $Y'$ respectively which are projectable to the same connection $\Gamma$ on the fibred manifold $\Sigma\to X$. On the tensor product \[ Y\op\otimes_\Sigma Y'\to X, \] there exists the tensor product connection \begin{equation} A\otimes A'=dx^\lambda\otimes(\partial_\lambda+\Gamma^m_\lambda \partial_m+(A^i{}_{j\lambda}y^{jk}+{A'}^k{}_{j\lambda}y^{ij})\partial_{ik})\label{66} \end{equation} projectable to $\Gamma$. In particular, let $Y\to X$ be a fibred manifold and $\Gamma$ a connection on $Y$. The vertical tangent morphism $V\Gamma$ to $\Gamma$ defines the connection \begin{eqnarray} && V\Gamma :VY\to VJ^1Y=J^1VY,\nonumber \\ && V\Gamma =dx^\lambda\otimes(\partial_\lambda +\Gamma^i_\lambda\frac{\partial}{\partial y^i}+\partial_j\Gamma^i_\lambda\dot y^j \frac{\partial}{\partial \dot y^i}), \label{43} \end{eqnarray} on the composite manifold $VY\to Y\to X$ due to the canonical isomorphism (\ref{1.22}). 
The connection $V\Gamma$ is projectable to the connection $\Gamma$ on $Y$, and it is a linear bundle morphism over $\Gamma$: \[ \begin{array}{rcccl} & {VY} & \op\longrightarrow^{V\Gamma} & {J^1VY} & \\ &\put(0,10){\vector(0,-1){20}} & & \put(0,10){\vector(0,-1){20}} & \\ & {Y} & \op\longrightarrow^\Gamma & {J^1Y} & \end{array} \] The connection (\ref{43}) yields the connection \begin{equation} V^*\Gamma =dx^\lambda\otimes(\partial_\lambda +\Gamma^i_\lambda\frac{\partial}{\partial y^i}-\partial_j\Gamma^i_\lambda \dot y_i \frac{\partial}{\partial \dot y_j}) \label{44} \end{equation} on the composite manifold $ V^*Y\to Y\to X$ which is the dual connection to $V\Gamma$ over $\Gamma$. Now, we consider second-order connections. \begin{definition}{1.11} A second order jet field (resp. a second order connection) $\overline\Gamma$ on a fibred manifold $Y\to X$ is defined to be a first order jet field (resp. a first order connection) on the fibred jet manifold $J^1Y\to X$, i.e. this is a section (resp. a global section) of the bundle (\ref{S1}). \end{definition} In the coordinates (\ref{51}) of the repeated jet manifold $J^1J^1Y$, a second order jet field $\overline\Gamma$ is given by the expression \[ (y^i_\lambda,y^i_{(\mu)},y^i_{\lambda\mu})\circ\overline\Gamma= (y^i_\lambda,\overline\Gamma^i_{(\mu)},\overline\Gamma^i_{\lambda\mu}). \] Using the contact map (\ref{54}), one can represent it by the horizontal 1-form \begin{equation} \overline\Gamma=dx^\mu\otimes (\partial_\mu +\overline\Gamma^i_{(\mu)}\partial_i+\overline\Gamma^i_{\lambda\mu}\partial^\lambda_i)\label{58} \end{equation} on the fibred jet manifold $J^1Y\to X.$ A second order jet field $\overline\Gamma$ on $Y$ is termed a sesquiholonomic (resp. holonomic) second order jet field if it takes its values into the subbundle $\widehat J^2Y$ (resp. $J^2Y$) of $J^1J^1Y$. 
We have the coordinate equality $\overline\Gamma^i_{(\mu)}=y^i_\mu$ for a sesquiholonomic second order jet field and the additional equality $\overline\Gamma^i_{\lambda\mu}=\overline\Gamma^i_{\mu\lambda}$ for a holonomic second order jet field. Given a first order connection $\Gamma$ on a fibred manifold $Y\to X$, one can construct a second order connection on $Y$, that is, a connection on the fibred jet manifold $J^1Y\to X$ as follows. The first order jet prolongation $J^1\Gamma$ of the connection $\Gamma$ on $Y$ is a section of the bundle (\ref{S'1}), but not the bundle $\pi_{11}$ (\ref{S1}). Let $K^*$ be a linear symmetric connection (\ref{408}) on the cotangent bundle $T^*X$ of $X$: \[ K^*_{\lambda\mu}=-K^\alpha{}_{\lambda\mu}\dot x_\alpha, \qquad K^\alpha{}_{\lambda\mu}=K^\alpha{}_{\mu\lambda}. \] There exists the affine fibred morphism \[ r_K: J^1J^1Y\to J^1J^1Y, \qquad r_K\circ r_K={\rm Id\,}_{J^1J^1Y}, \] \[ (y^i_\lambda ,y_{(\mu)}^i,y^i_{\lambda\mu})\circ r_K= (y^i_{(\lambda)} ,y_\mu^i,y^i_{\mu\lambda}+ K^\alpha{}_{\lambda\mu}(y^i_\alpha - y^i_{(\alpha)})). \] One can verify the following transformation relations of the coordinates (\ref{51}): \begin{eqnarray*} &&{y'}^i_\mu\circ r_K={y'}^i_{(\mu)}, \qquad {y'}^i_{(\mu)}\circ r_K={y'}^i_\mu, \\ &&{y'}^i_{\lambda\mu}\circ r_K={y'}^i_{\mu\lambda} +{K'}^\alpha{}_{\lambda\mu}({y'}^i_\alpha - {y'}^i_{(\alpha)}). \end{eqnarray*} Hence, given a first order connection $\Gamma$ on a fibred manifold $Y\to X$, we have the second order connection \[ J\Gamma = r_K\circ J^1\Gamma, \] \begin{equation} J\Gamma=dx^\mu\otimes [\partial_\mu+\Gamma^i_\mu\partial_i +(\partial_\lambda\Gamma^i_\mu+ \partial_j\Gamma^i_\mu y^j_\lambda - K^\alpha{}_{\lambda\mu} (y^i_\alpha-\Gamma^i_\alpha)) \partial_i^\lambda],\label{59} \end{equation} on $Y$. 
This is an affine morphism \[ \begin{array}{rcccl} & {J^1Y} & \op\longrightarrow^{J\Gamma} & {J^1J^1Y} & \\ {_{\pi_{01}}} &\put(0,10){\vector(0,-1){20}} & & \put(0,10){\vector(0,-1){20}} & {_{\pi_{11}}} \\ & {Y} & \op\longrightarrow_{\Gamma} & {J^1Y} & \end{array} \] over the first order connection $\Gamma$. Note that the curvature $R$ (\ref{13}) of a first order connection $\Gamma$ on a fibred manifold $Y\to X$ induces the soldering form \begin{equation} \overline\sigma_R=R^i_{\lambda\mu}dx^\mu\otimes\partial^\lambda_i \label{60} \end{equation} on the fibred jet manifold $J^1Y\to X$. Also the torsion (\ref{14}) of a first order connection $\Gamma$ with respect to a soldering form $\sigma$ on $Y\to X$ and the soldering curvature (\ref{15}) of $\sigma$ define soldering forms on $J^1Y\to X$.
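The curvature formula (\ref{25}) admits a quick symbolic sanity check. The following sketch is ours, not part of the text; it uses SymPy and assumes, purely for illustration, a rank-1 vector bundle over $\mathbb R^2$ with linear connection coefficients $\Gamma^1{}_{1\lambda}=A_\lambda(x)$. In this abelian case the quadratic terms of (\ref{25}) cancel and $R^1{}_{1\lambda\mu}$ reduces to $\partial_\lambda A_\mu-\partial_\mu A_\lambda$:

```python
# Our own consistency check of the curvature components (25);
# the rank-1, two-dimensional setting is an illustrative assumption.
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
xs = [x0, x1]
A = [sp.Function('A0')(x0, x1), sp.Function('A1')(x0, x1)]  # Gamma^1_{1 lambda}

def R(lam, mu):
    # R^1_{1 lm} = d_l Gamma^1_{1 m} - d_m Gamma^1_{1 l}
    #              + Gamma^1_{1 l} Gamma^1_{1 m} - Gamma^1_{1 m} Gamma^1_{1 l}
    return (sp.diff(A[mu], xs[lam]) - sp.diff(A[lam], xs[mu])
            + A[lam]*A[mu] - A[mu]*A[lam])

F01 = sp.diff(A[1], x0) - sp.diff(A[0], x1)   # "field strength" of A
assert sp.simplify(R(0, 1) - F01) == 0        # quadratic terms cancel
assert sp.simplify(R(0, 1) + R(1, 0)) == 0    # antisymmetry in (lambda, mu)
```

For bundles of rank greater than one the coefficients $\Gamma^i{}_{j\lambda}$ are matrices and the quadratic terms of (\ref{25}) survive as a commutator.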
\section{Introduction} \label{sec:intro} The general adversary bound has proven to be a powerful concept in quantum computing. Originally formulated as a lower bound on quantum query complexity \cite{Hoyer2007}, it has been shown to be tight with respect to the quantum query complexity of evaluating any function, and in fact is tight with respect to the more general problem of state conversion \cite{Randothers2}. The general adversary bound is in some sense the culmination of a series of adversary methods \cite{Ambainis2000,Ambainis2003}. While the adversary method in its various forms has been useful in finding lower bounds on quantum query complexity \cite{Ambainis2010,Hoyer2008,Reichardtorigin}, the general adversary bound itself can be difficult to apply, as the quantity for even simple, few-bit functions must usually be calculated numerically \cite{Hoyer2007,Reichardtorigin}. One of the nicest properties of the general adversary bound is that it behaves well under composition \cite{Randothers2}. This fact has been used to lower bound the query complexity of composed total functions, and to create optimal algorithms for composed total functions \cite{Reichardtorigin}. In this work we extend one of the composition results to partial boolean functions, and use it to obtain an {\it{upper}} bound on query complexity by upper bounding the general adversary bound. Generally, finding an upper bound on the general adversary bound is just as difficult as finding an algorithm, as they are dual problems \cite{Randothers2}. However, using the composition property of the general adversary bound, given an algorithm for a boolean function $f$ composed $d$ times, we upper bound the general adversary bound of $f$. Due to the tightness of the general adversary bound and query complexity, this procedure gives an upper bound on the query complexity of $f$, but because it is nonconstructive, it doesn't give any hint as to what the corresponding algorithm for $f$ might look like. 
The procedure is a bit counter-intuitive: we obtain information about an algorithm for a simpler function by creating an algorithm for a more complicated function. This is similar in spirit to the tensor-product trick, where an inequality between two terms is proved by considering tensor powers of those terms\footnote{See Terence Tao's blog, {\it{What's New}} ``Tricks Wiki article: The tensor power trick," http://terrytao.wordpress.com/2008/08/25/tricks-wiki-article-the-tensor-product-trick/}. We describe a class of oracle problems called \textsc{Constant-Fault Direct Trees} (introduced by Zhan et al. \cite{us}), for which this method proves the existence of an $O(1)$-query algorithm, where the previous best known query complexity is polynomial in the size of the problem. While this method does not give an explicit algorithm, we show that a span program algorithm achieves this bound. We show that a special case of \textsc{Constant-Fault Direct Trees} can be solved in a single query using an algorithm based on the quantum Haar transform. The quantum Haar transform has appeared as a subroutine in other algorithms \cite{Park2007,Hoyer2001}, and a 3-dimensional wavelet transform is the workhorse of an algorithm due to Liu \cite{Liu2009}. We describe a new problem, the \textsc{Haar Problem}, that also can be solved with the quantum Haar transform. While the \textsc{Haar Problem} requires only $1$ quantum query, it requires $\Omega(\log n)$ classical queries (where the oracle is an $n$-bit function). The \textsc{Haar Problem} is somewhat like period finding and may have interesting applications. \section{A Nonconstructive Upper Bound on Query Complexity} \label{sec:proofsec} Our theorem for creating a nonconstructive upper bound on query complexity relies on the tightness of the general adversary bound with respect to query complexity, and the properties of the general adversary bound under composition. 
The actual definition of the general adversary bound is not necessary for our purposes, but can be found in \cite{Hoyer2008}. Our theorem applies to boolean functions. A function $f$ is boolean if $f:S\rightarrow\{0,1\}$ with $S\subseteq\{0,1\}^n$. Given a boolean function $f$ and a natural number $d$, we define $f^d$, ``$f$ composed $d$ times," recursively as $f^d=f\circ(f^{d-1},\dots,f^{d-1})$, where $f^1=f$. Now we can state the main theorem: \begin{theorem} \label{thm:maintheorem} Suppose we have a (possibly partial) boolean function $f$ that is composed $d$ times, $f^d$, and a quantum algorithm for $f^d$ that requires $O(J^d)$ queries. Then $Q(f)=O(J)$, where $Q(f)$ is the bounded-error quantum query complexity of $f$. \end{theorem} \noindent (For background on bounded-error quantum query complexity and quantum algorithms, see \cite{Ambainis2000}.) There are seemingly similar results in the literature; for example, Reichardt proves in \cite{Reichardtrefl} that the query complexity of a function composed $d$ times, when raised to the $1/d^{th}$ power, is equal to the adversary bound of the function, in the limit that $d$ goes to infinity. This result is meant to give understanding of the exact query complexity of a function, whereas our result is a tool for upper bounding query complexity, possibly without gaining any knowledge of the exact query complexity of the function. One might think that \thm{maintheorem} is useless because an algorithm for $f^d$ usually comes from composing an algorithm for $f$, and one expects the query complexity of the algorithm for $f^d$ to be at least $J^d$ if $J$ is the query complexity of the algorithm for $f$. Luckily for us, this is not always correct. If there is a quantum algorithm for $f$ that uses $J$ queries, where $J$ is not optimal (i.e. is larger than the true bounded error quantum query complexity of $f$), then the number of queries used when the algorithm is composed $d$ times can be much less than $J^d$. 
If this is the case, and if the non-optimal algorithm for $f$ is the best known, \thm{maintheorem} promises the existence of an algorithm for $f$ that uses fewer queries than the best known algorithm, but, as \thm{maintheorem} is nonconstructive, it gives no hint as to what the algorithm looks like. We need two lemmas to prove \thm{maintheorem}: \begin{restatable}[Based on Lee et al. \cite{Randothers2}]{lemma}{lemonee} \label{lem:lemone} For any boolean function $f:S\rightarrow\{0,1\}$ with $S\subseteq\{0,1\}^n$ and natural number $d$, \begin{equation} \label{eq:advcomp} {\rm{ADV}}^{\pm}(f^d)\geq ({\rm{ADV}}^{\pm}(f))^d. \end{equation} \end{restatable} \noindent The proof of this lemma is in \app{comp}. H\o yer et al. \cite{Hoyer2007} prove \lem{lemone} for {\it{total}} boolean functions\footnote{While the statement of Theorem 11 in \cite{Hoyer2007} seems to apply to partial functions, it is mis-stated; their proof actually assumes total functions.}, and the result is extended to more general total functions in \cite{Randothers2}. Our contribution is to extend the result to partial functions. While \thm{maintheorem} still holds for total functions, the example we will describe later in the paper requires it to hold for partial functions. \begin{lemma} \emph{(Lee, et al. \cite{Randothers2})} For any function $f:S\rightarrow E$, with $S\subseteq D^n$, and $E, D$ finite sets, the bounded-error quantum query complexity of $f$, $Q(f)$, satisfies \begin{equation} Q(f)=\Theta({\rm{ADV}}^{\pm}(f)). \end{equation} \label{lem:lem2} \end{lemma} We now prove \thm{maintheorem}: \begin{proof} Given an algorithm for $f^d$ that requires $O(J^d)$ queries, by \lem{lem2}, \begin{equation} \label{eq:eq1} {\rm{ADV}}^{\pm}(f^{d})=O(J^d). \end{equation} Combining Eq. \eq{eq1} and \lem{lemone}, we have \begin{equation} ({\rm{ADV}}^{\pm}(f))^d=O(J^d). \end{equation} Raising both sides to the $1/d^{th}$ power, we obtain \begin{equation} {\rm{ADV}}^{\pm}(f)=O(J). 
\end{equation} At this point, we have the critical upper bound on the general adversary bound of $f$. Finally, using \lem{lem2} again, we have \begin{equation} Q(f)=O(J). \end{equation} \end{proof} \section{Example where the General Adversary Upper Bound is Useful} \label{sec:example} In this section we will describe a function, called the \textsc{1-Fault Nand Tree}, for which \thm{maintheorem} gives a better upper bound on query complexity than any known quantum algorithm. The \textsc{1-Fault Nand Tree} was proposed by Zhan et al. \cite{us} to obtain a superpolynomial speed-up for a boolean formula with a promise on the inputs, and is a specific type of \textsc{Constant-Fault Direct Tree}, which is mentioned in \sec{intro}. We will first define a \textsc{Nand Tree}, and then explain the promise of the \textsc{1-Fault Nand Tree}. The \textsc{Nand Tree} is a complete, binary tree of depth $n$, where each node is assigned a bit value. The leaves are assigned arbitrary values, and any internal node $v$ is given the value \textsc{nand}$(val(v_1),val(v_2))$, where $v_1$ and $v_2$ are $v$'s children, and $val(v_i)$ denotes the value of that node. To evaluate the \textsc{Nand Tree}, one must find the value of the root given an oracle for the values of the leaves. (The \textsc{Nand Tree} is equivalent to solving \textsc{nand}$^n$, although the composition we will use for \thm{maintheorem} is not the composition of the \textsc{nand} function, but of the \textsc{Nand Tree} as a whole.) For arbitrary inputs, Farhi et al. showed that there exists an optimal algorithm in the Hamiltonian model to solve the \textsc{Nand Tree} in $O(2^{0.5n})$ time \cite{FarhiNAND1}, and this was subsequently extended to a standard discrete algorithm with quantum query complexity $O(2^{0.5n})$ \cite{Childs2007,Reichardt2010}. Classically, the best algorithm requires $2^{0.753n}$ queries \cite{Saks1986}. 
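As a concrete reference point, the bottom-up evaluation of a \textsc{Nand Tree} can be sketched classically in a few lines (a plain evaluator for checking small instances, not the quantum algorithm; the function names are ours):

```python
def nand(a, b):
    # NAND of two bits
    return 1 - (a & b)

def eval_nand_tree(leaves):
    """Evaluate a complete binary NAND tree bottom-up from its 2^n leaf values."""
    level = list(leaves)
    while len(level) > 1:
        level = [nand(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

For example, `eval_nand_tree([0, 1, 1, 1])` evaluates a depth-2 tree and returns $1$.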
Here, we will consider the \textsc{1-Fault Nand Tree}, for which there is a promise on the values of the inputs. \begin{definition}\emph{(\textsc{1-Fault Nand Tree} \cite{us})} \label{defi:faulttree} Consider a \textsc{Nand Tree} of depth $n$ (as described above). Then to each node $v$, with child nodes $v_1$ and $v_2$, we assign an integer $\kappa(v)$ such that: \begin{itemize} \item $\kappa(v)=0$ for leaf nodes. \item $\kappa(v)=\max_{i} \kappa(v_i)$, if $val(v_1)=val(v_2)$ \item Otherwise $val(v_1)\neq val(v_2)$. Let $v_i$ be the node such that $val(v_i)=0$. Then $\kappa(v)=1+\kappa(v_i)$. \end{itemize} A tree satisfies the $1$-fault condition if $\kappa(v)\leq1$ for any node $v$ in the tree. \end{definition} \noindent{\bf{Notation:}} When $val(v_1)\neq val(v_2)$ we call the node $v$ a {\bf{fault}}. (Since \textsc{nand}$(0,1)$ $=1$, fault nodes must have value $1$, although not all $1$-valued nodes are faults.) The $1$-fault condition is a limit on the number and location of faults within the tree. In a \textsc{1-Fault Nand Tree}, if a path moving from the root to a leaf encounters any fault node and then passes through the $0$-valued child of the fault node, there can be no further fault nodes on the path. An example of a \textsc{1-Fault Nand Tree} is given in \fig{NANDtree}. \begin{figure}[!ht] \center\includegraphics[width=2.7in]{NAND1full.pdf} \caption{An example of a \textsc{1-Fault Nand Tree} of depth 4. Fault nodes are highlighted by a double circle. The node $v$ is a fault since one of its children ($v_1$) has value $0$, and one ($v_2$) has value $1$. Among $v_1$ and its children, there are no further faults, as required by the $1$-fault condition. At $v_2$, we can have faults below the $1$-valued child of $v_2$, but there can be no faults below the $0$-valued child. } \label{fig:NANDtree} \end{figure} Zhan et al. \cite{us} propose a quantum algorithm for an $n$ level \textsc{1-Fault Nand Tree} that requires $O(n^2)$ queries to an oracle for the leaves. 
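The recursive definition of $\kappa$ translates directly into code. The following sketch (helper names are ours) computes $\kappa$ for a complete tree given its leaf values and checks the $1$-fault condition:

```python
def value_and_kappa(leaves):
    """Return (val(v), kappa(v)) for the subtree with the given leaf values,
    following the recursive definition of kappa above."""
    if len(leaves) == 1:
        return leaves[0], 0
    mid = len(leaves) // 2
    v1, k1 = value_and_kappa(leaves[:mid])
    v2, k2 = value_and_kappa(leaves[mid:])
    val = 1 - (v1 & v2)  # NAND of the two children
    if v1 == v2:
        return val, max(k1, k2)
    # v is a fault: increment kappa along the 0-valued child
    return val, 1 + (k1 if v1 == 0 else k2)

def max_kappa(leaves):
    """Largest kappa over all nodes; the tree is 1-fault iff this is <= 1."""
    if len(leaves) == 1:
        return 0
    mid = len(leaves) // 2
    _, k = value_and_kappa(leaves)
    return max(k, max_kappa(leaves[:mid]), max_kappa(leaves[mid:]))
```

Note that $\kappa$ of a fault node ignores its $1$-valued child, so the condition must be checked at every node, not just at the root.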
However, when the \textsc{1-Fault Nand Tree} is composed $\log n$ times, they apply their algorithm and find it only requires $O(n^3)$ queries. (Here we see an example where the number of queries required by an algorithm composed $d$ times does not scale exponentially in $d$, which is critical for applying \thm{maintheorem}.) By applying \thm{maintheorem} to the algorithm for the \textsc{1-Fault Nand Tree} composed $\log n$ times, we find that an upper bound on the query complexity of the \textsc{1-Fault Nand Tree} is $O(1)$. This is a large improvement over $O(n^2)$ queries. Zhan et al. prove $\Omega(\rm{poly}\log n)$ is a lower bound on the classical query complexity of \textsc{1-Fault Nand Trees}. An identical argument can be used to show that \textsc{Constant-Fault Nand Trees} (from \defi{faulttree}, trees satisfying $\kappa(v)\leq c$ with $c$ a constant) have query complexity $O(1)$. In fact, Zhan et al. find algorithms for a broad range of trees, where instead of \textsc{nand}, the evaluation tree is made up of a type of boolean function they call a {\it{direct}} function. A direct function is a generalization of a monotonic boolean function, and includes functions like majority, threshold, and their negations. For the exact definition, which involves span programs, see \cite{us}. Applying \thm{maintheorem} to their algorithm for trees made of direct functions proves the existence of $O(1)$ query algorithms for \textsc{Constant-Fault Direct Trees} (an explicit definition of \textsc{Constant-Fault Direct Trees} is given in \app{spanapp}). The best quantum algorithm of Zhan et al. requires $O(n^2)$ queries, and again they prove $\Omega(\rm{poly}\log n)$ is a lower bound on the classical query complexity of \textsc{Constant-Fault Direct Trees}. The structure of \textsc{Constant-Fault Direct Trees} can be quite complex, and it is not obvious that there should be an $O(1)$ query algorithm. 
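The $O(1)$ bound can be checked with a one-line computation: if evaluating the tree composed $d=\log_2 n$ times costs $O(n^3)=O(J^d)$ queries, then $J=(n^3)^{1/\log_2 n}=2^3$, a constant independent of $n$. A quick numerical sanity check (function name ours):

```python
import math

def implied_base_bound(total_queries, d):
    """If f^d has an O(J^d)-query algorithm, Theorem 1 bounds Q(f) = O(J)
    with J = total_queries**(1/d)."""
    return total_queries ** (1.0 / d)

# For the 1-Fault NAND Tree: n^3 queries for d = log2(n) compositions.
bounds = [implied_base_bound(n ** 3, math.log2(n)) for n in (16, 256, 2 ** 16)]
# each entry equals (2^(3 log2 n))^(1/log2 n) = 2^3, i.e. approximately 8, for every n
```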
Inspired by the knowledge of the algorithm's existence, thanks to \thm{maintheorem}, we found a span program algorithm for \textsc{Constant-Fault Direct Trees} that requires $O(1)$ queries. In the next section we will briefly describe this algorithm (details can be found in \app{spanapp}). However, as with many span program algorithms, it is hard to gain intuition about the algorithm. Thus in later sections we will describe a quantum algorithm based on the Haar transform that solves the \textsc{1-Fault Nand Tree} in $1$ query in the special case that there is exactly one fault on every path from the root to a leaf, and those faults all occur at the same level. \section{Quantum Algorithms for \textsc{Constant-Fault Direct Trees}} \subsection{Span Program Algorithm} Span programs are linear algebraic representations of boolean functions, which have an intimate relationship with quantum algorithms. In particular, Reichardt proves \cite{Reichardtrefl} that given a span program $P$ for a function $f$, there is a quantity associated with the span program, called the witness size, such that one can create a quantum algorithm for $f$ whose query complexity $Q(f)$ satisfies \begin{equation} Q(f)=O(\textsc{witness size}(P)). \label{eq:wsizeqq} \end{equation} Thus, creating a span program for a function is equivalent to creating a quantum query algorithm. There have been many iterations of span program quantum algorithms, due to Reichardt and others \cite{Randothers2,Reichardtrefl,Reichardtorigin}. In \cite{us}, Zhan et al. create span programs for direct boolean functions using the span program formulation described in Definition 2.1 in \cite{Reichardtrefl}, one of the earliest versions (we will not go into the details of span programs in this paper). 
Using the more recent advancements in span program technology, we show here: \begin{restatable}{theorem}{kfault}\label{thm:kfault} Given an evaluation tree composed of the direct boolean function $f$, with the promise that the tree satisfies the $k$-fault condition, ($k$ a natural number), there is a quantum algorithm that evaluates the tree using $O(w^k)$ queries, where $w$ is a constant that depends on $f$. In particular, for a \textsc{Constant-Fault Direct Tree}, ($k$ a constant), the algorithm requires $O(1)$ queries. \end{restatable} \noindent Properties of direct boolean functions and precise definitions for the $k$-fault condition can be found in \app{spanapp}, as well as the proof of \thm{kfault}. The proof combines the properties of the witness size of direct boolean functions with a more current version of span program algorithms, due to Reichardt \cite{Reichardtrefl}. (For more details on direct boolean functions, see \cite{us}.) Thus, while \thm{maintheorem} promises the existence of $O(1)$ query quantum algorithms for \textsc{Constant-Fault Direct Trees}, \thm{kfault} gives an explicit $O(1)$ query quantum algorithm for these problems. \subsection{Quantum Haar Transform Algorithm} \label{sec:Haar} In this section we will describe a quantum algorithm for solving the \textsc{1-Fault Nand Tree} in a single query when there is exactly one fault node in each path from the root to a leaf, and all those faults occur at the same level, as in \fig{tiger}. We call this problem the \textsc{Haar Tree}. Let's consider the values of the leaves on a \textsc{Haar Tree}. When there are no faults in a \textsc{Nand Tree}, as in \fig{no}, then all even depth nodes have the same value as the root, and all odd depth nodes have the opposite value. Since faults can only occur at nodes with value $1$ (since \textsc{nand}$(0,1)=1$), the level of the tree containing faults must occur at even depth if the root has value $1$ or at odd depth if the root has value $0$. 
Thus if all the faults are at height $h$ (so their depth is $n-h$), then the value of the root is $\textsc{parity}(n-h+1)$. \begin{figure}[h!] \centering \subfloat[\textsc{Nand Tree} with no faults]{\label{fig:no}\includegraphics[width=0.45\textwidth]{NANDvsimple.pdf}} \subfloat[\textsc{Nand Tree} with one fault per path]{\label{fig:tiger}\includegraphics[width=0.45\textwidth]{NAND1f2.pdf}} \caption{Figure (a) shows a \textsc{Nand Tree} with no faults, and Figure (b) shows a \textsc{Haar Tree}. In Figure (a), at each depth, all nodes have the same value, depending on the parity of the level. In Figure (b), since the root is $0$, the level of faults occurs at odd depth. (Faults are double circled.) The first half of the leaves descending from a fault node have one value, and the next half have the opposite value.} \label{fig:animals1} \end{figure} Now consider the leaves descending from a fault node $v$ when there are no further faults at any nodes descending from $v$ (as in \fig{tiger}). If $v$ is at height $h$, then it has $2^h$ leaves descending from it. Because one of $v$'s children has value $0$, and one has value $1$, the $2^{h-1}$ leaves descending from one child will all have the same value, $b$, and the $2^{h-1}$ leaves descending from the other child will have the value $\lnot b$. For a \textsc{Haar Tree}, since we are promised all faults are at the same height $h$, the values of the leaves will come in blocks of $2^h$, where within each block, the first $2^{h-1}$ leaves will have one value, and the next $2^{h-1}$ leaves will have the negation of the value in the first set of leaves. We can now reformulate the \textsc{Haar Tree} outside of the context of boolean evaluation trees. We define a new problem, the \textsc{Haar Problem}, to which the \textsc{Haar Tree} reduces. For the \textsc{Haar Problem}, one is given access to an oracle for a function $x:\{0,\dots,2^n-1\}\rightarrow\{0,1\}.$ We call the $i^{th}$ output of the oracle $x_i$. 
The function $x$ is promised to have a certain form: there exists an integer $h^*\in\{1,\dots,n\}$ and boolean variables $b_l$ for $l\in\{1,\dots,2^{n-h^*}\}$ such that \begin{equation} x_i=\begin{cases} b_l, & \text{if }2^{h^*}(l-1)\leq i< 2^{h^*}(l-\frac{1}{2})\\ \lnot b_l, & \text{if }2^{h^*}(l-\frac{1}{2})\leq i < 2^{h^*}l. \end{cases} \end{equation} \noindent See \fig{rrfunc} for an example of a \textsc{Haar Problem} oracle. \begin{figure}[!ht] \centering \includegraphics[width=2.5in]{rrfunction3.pdf} \caption{An example of an oracle function for the \textsc{Haar Problem} with $n=5$ (so $i$ is an integer, $0\leq i < 32$) and $h^*=2$ (so the function is divided into blocks of length $2^2=4$). We have emphasized the blocks by separating them using vertical lines. In each block the first two outputs have value $1$ and the next two have value $0$, or vice versa.} \label{fig:rrfunc} \end{figure} The \textsc{Haar Problem} is almost like period finding. We are promised that the function is divided into blocks of length $2^{h^*}$, and we need to find the length of these blocks. But instead of the output being the same in each block, each block has one degree of freedom: within the $l^{th}$ block, there is a choice of $b_l=0$ or $b_l=1$, where the first half of the outputs have value $b_l$, and second half have value $\lnot b_l$. Note that any oracle for the \textsc{Haar Problem} is also an oracle for the \textsc{Haar Tree}; to solve the \textsc{Haar Tree}, simply solve the \textsc{Haar Problem}, and then calculate $\textsc{parity}(n-h^*+1)$. The quantum algorithm for solving the \textsc{Haar Problem} requires making a measurement in the Haar wavelet basis \cite{Haar1910,easywave}. The Haar basis is based on the following step-like function: \begin{align} \psi(t) = \left\{ \begin{array}{lll} 1 & \mbox{if }0\leq t<1/2 \\ -1 & \mbox{if } 1/2\leq t <1\\ 0 & \mbox{otherwise. } \end{array} \right. 
\end{align} On the $2^n$ dimensional Hilbert space, with standard basis states $\{|i\rangle\}$, $i\in\{0,\dots,2^n-1\}$, the (un-normalized) Haar basis consists of the states $\{|\phi_0\rangle, |\psi_{h,l}\rangle\}$: \begin{align} |\phi_0\rangle&=\sum_{i=0}^{2^n-1}|i\rangle,& |\psi_{h,l}\rangle&=\sum_{i=0}^{2^n-1}\psi(2^{-h}i-(l-1))|i\rangle \end{align} where $h\in\{1,\dots,n\}$ and $l\in\{1,\dots,2^{n-h}\}$. Several Haar basis states for $n=3$ are shown in \fig{HaarStates}. \begin{figure}[h!] \centering \subfloat[Haar States]{\label{fig:HaarStates}\includegraphics[width=0.7\textwidth]{HaarStates2.pdf}} \subfloat[Example of $|\xi_x\rangle$]{\label{fig:fexample}\includegraphics[width=0.3\textwidth]{ex1a.pdf}} \caption{Figure (a) shows four of the eight un-normalized Haar basis states for $n=3$. The $x$-axis depicts the standard basis states $\{|0\rangle,|1\rangle,\dots, |7\rangle\}$, while the $y$-axis shows the un-normalized amplitude corresponding to each basis state. The line graphs represent the underlying functions $\psi(2^{-h}i-(l-1))$ that give the states their form, while the amplitudes themselves are represented by dots. Figure (b) shows $|\xi_x\rangle$ for $x$ with $n=3$, $h^*=2$, $b_1=1$, and $b_2=0$, plotted as the un-normalized amplitude of each standard basis state.} \label{fig:animals} \end{figure} We suppose that we have access to a phase-flip oracle $O_x$ such that $O_x|i\rangle=(-1)^{x_i}|i\rangle$ where $\{x_i\}$ satisfy the promise of the \textsc{Haar Problem} oracle. Then the following algorithm solves the \textsc{Haar Problem} in one query: \begin{align*} (1) & \hspace{.5cm}\text{Create an equal superposition of standard basis states: } |\xi\rangle= \frac{1}{\sqrt{2^n}}\sum_{i=0}^{2^n-1}|i\rangle\nonumber \\ (2) &\hspace{.5cm} \text{ Apply the phase flip oracle, giving } |\xi_x\rangle= \frac{1}{\sqrt{2^n}}\sum_{i=0}^{2^n-1}(-1)^{x_i}|i\rangle \nonumber \\ (3) &\hspace{.5cm} \text{ Measure } |\xi_x\rangle \mbox{ in the Haar basis. 
If the state $|\psi_{h,l}\rangle$ is measured, return $h$.} \nonumber \\ \end{align*} It is especially easy to see why the algorithm works graphically. Suppose we are given an oracle $x$ with $n=3$ and $h^*=2$. Then $|\xi_x\rangle$ (the state in step (2) of the algorithm) is a superposition of all standard basis states, with amplitudes as shown, for example, in \fig{fexample}. One can see by comparing the graphs in \fig{HaarStates} and \fig{fexample} that the amplitudes completely destructively interfere for the inner product of $|\xi_x\rangle$ and any Haar basis states except $\{|\psi_{2,l}\rangle\}$ (since here $h^*=2$). Classically, the \textsc{Haar Problem} can be solved in $\tilde{\Theta}(\log n)$ queries, where $\tilde{\Theta}$ indicates tightness up to $\log\log$ factors. The proof of this fact, as well as a description of a subset of inputs on which the \textsc{1-Fault Nand Tree} becomes classically easy, can be found in \app{ClassicalBound}. \subsection{Extensions and Related Problems} There are other oracle problems whose algorithms naturally involve the quantum Haar transform. In the \textsc{Haar Problem}, the oracle has the property that when the phase flip oracle operation is applied to an equal superposition of standard basis states, the outcome is a superposition of non-overlapping Haar basis states. All Haar basis states in this superposition have the form $|\psi_{h^*,l}\rangle$. One can design a new oracle such that when the phase flip operation is applied, the outcome is still a superposition of non-overlapping Haar basis states, but now all Haar basis states in the superposition share a new common feature. For example, they could all have the form $|\psi_{h_j,l}\rangle$ where $h_j$ is promised to either be even or odd. In this case, the goal would be to determine whether $\{h_j\}$ are even or odd, and a single quantum query in the Haar basis will give the answer. 
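Returning to the basic \textsc{Haar Problem}, the destructive-interference argument can be verified with a small classical simulation of the three steps (helper names are ours; this checks the linear algebra, it is not the quantum circuit):

```python
def haar_state(n, h, l):
    """Amplitudes of the un-normalized Haar basis state |psi_{h,l}> on 2^n points."""
    def psi(t):
        if 0 <= t < 0.5:
            return 1
        if 0.5 <= t < 1:
            return -1
        return 0
    return [psi(2 ** (-h) * i - (l - 1)) for i in range(2 ** n)]

def solve_haar(x):
    """Recover h* from a Haar Problem oracle x: phase-flip the uniform
    superposition, then project onto the Haar basis.  Only the states
    |psi_{h*,l}> have nonzero overlap."""
    n = len(x).bit_length() - 1
    xi = [(-1) ** bit for bit in x]
    for h in range(1, n + 1):
        for l in range(1, 2 ** (n - h) + 1):
            if sum(a * b for a, b in zip(xi, haar_state(n, h, l))) != 0:
                return h
```

For the oracle of \fig{fexample} ($n=3$, $h^*=2$, $b_1=1$, $b_2=0$), `solve_haar([1, 1, 0, 0, 0, 0, 1, 1])` returns $2$.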
This new promise problem (determining whether $\{h_j\}$ are even or odd) is equivalent to solving a \textsc{1-Fault Nand Tree} where each path from the root to the leaves contains exactly one fault, but those faults are now not all on the same level. The \textsc{Haar Problem} is closely related to the \textsc{Parity Problem} introduced by Bernstein and Vazirani \cite{Bernstein1993}. Let $x:\{0,1\}^n\rightarrow\{0,1\}$ such that $x_i=i\cdot k$ where $k\in \{0,1\}^n$. Then the \textsc{Parity Problem} is: given an oracle for $x$, find $k$. The \textsc{Parity Problem} can also be solved in a single quantum query. Notice that any oracle that satisfies the promise required by the \textsc{Parity Problem} also satisfies the promise required by the \textsc{Haar Problem} (although the converse is not true). The algorithm for the \textsc{Parity Problem} is similar to the quantum Haar transform algorithm described in \sec{Haar}, except in step (3), one measures in the Hadamard basis rather than the Haar basis, and obtains the output $k$. It is not hard to show that the Bernstein and Vazirani algorithm can also be used to solve the \textsc{Haar Problem}; the value of $h^*$ is the location of the first non-zero bit of the outcome of the \textsc{Parity Problem}, counting from least significant to most significant bits. While the \textsc{Haar} and \textsc{Parity Problems} are similar, the \textsc{Haar Problem} has a less stringent promise, and is slightly more natural, when viewed as finding the period of a function with some freedom within each period. \section{Conclusions and Future Work} We describe a method for upper bounding the quantum query complexity of boolean functions using the general adversary bound. Using this method, we show that \textsc{Constant-Fault Direct Trees} can always be solved in $O(1)$ queries. Furthermore, we create an algorithm with a matching upper bound using improved span program technology. 
For the more restricted case of the \textsc{Haar Tree} we give a single query algorithm using a reduction to the \textsc{Haar Problem}. The \textsc{Haar Problem} is a new oracle problem that can be solved in a single quantum query using the quantum Haar transform, but which requires $\Omega (\log n)$ classical queries to solve. This problem seems to fall somewhere in between the \textsc{Parity Problem} of Bernstein and Vazirani \cite{Bernstein1993} and period finding. Period finding has been shown to have useful applications, most notably in factoring \cite{Shor1994}. Thus we hope that a new application for the \textsc{Haar Problem} or the quantum Haar transform can be found. In particular, the fact that the quantum Haar transform can be used to find the length of blocks in the \textsc{Haar Problem}, while ignoring the extra degree of freedom in each block, seems like a useful property. We would like to find other examples where \thm{maintheorem} is useful, although we suspect that \textsc{Constant-Fault Direct Trees} are a somewhat unique case. It is clear from the span program algorithm described in \app{spanapp} that \thm{maintheorem} will not be useful for composed functions where the base function is created using span programs. However, there could be other types of quantum walk algorithms, for example, to which \thm{maintheorem} might be applied. In any case, this work suggests that new ways of upper bounding the general adversary bound could give us a second window into quantum query complexity beyond algorithms. \section{Acknowledgements} Thanks to Rajat Mittal for generously explaining the details of the composition theorem for the general adversary bound. Thanks to the anonymous FOCS reviewer for pointing out problems with the previous version, and also for encouraging me to find a constant query span program algorithm. Thanks to Bohua Zhan, Avinatan Hassidim, Eddie Farhi, Andy Lutomirski, Paul Hess, and Scott Aaronson for helpful discussions. 
This work was supported by NSF Grant No. DGE-0801525, {\em IGERT: Interdisciplinary Quantum Information Science and Engineering} and by the U.S. Department of Energy under cooperative research agreement Contract Number DE-FG02-05ER41360.
\section{Introduction} In this paper we obtain convergence rates for the homogenization of the Poisson problem in a bounded domain of $\mathbb{R}^d$, $d \geq 3$, that is perforated by many small random holes $H^\varepsilon$. We impose Dirichlet boundary conditions on the boundary of the domain $D$ and of the holes $H^\varepsilon$. We assume that, for $\varepsilon > 0$, the random set $H^\varepsilon$ is generated by a rescaled \textit{marked point process $(\Phi, \mathcal{R})$}, where $\Phi$ is either the lattice $\mathbb{Z}^d$ or a Poisson point process of intensity $\lambda > 0$. The associated marks $\mathcal{R}= \{ \rho_z \}_{z \in \Phi}$ are independent and identically distributed random variables that satisfy the moment condition \begin{align}\label{integrability.radii.intro} \mathbb{E} \bigl[ \rho^{d-2+\beta} \bigr] < +\infty, \ \ \ \beta > 0. \end{align} More precisely, given $(\Phi, \mathcal{R})$ and a bounded and smooth domain $D\subset \mathbb{R}^d$, we define \begin{align}\label{def.holes} H^\varepsilon:= \bigcup_{z \in \Phi \cap (\frac{1}{\varepsilon}D)} B_{(\eps^\frac{d}{d-2} \rho_{z}) \wedge 1} (\varepsilon z), \ \ \ \ \ \ D^\varepsilon:= D \backslash H^\varepsilon \end{align} with $(\frac{1}{\varepsilon}D):= \{ x \in \mathbb{R}^d \, \colon \, \varepsilon x \in D \}$. 
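The normalization in \eqref{def.holes} is chosen so that the total capacity of the holes stays of order one: there are $\sim\varepsilon^{-d}$ centres in $\frac{1}{\varepsilon}D$, and each ball of radius $\eps^\frac{d}{d-2}\rho_z$ carries capacity $\sim(\eps^\frac{d}{d-2}\rho_z)^{d-2}=\varepsilon^d\rho_z^{d-2}$. A Monte Carlo sketch of this law-of-large-numbers effect in the lattice case $\Phi=\mathbb{Z}^d$ with $d=3$ (the Pareto mark distribution is purely illustrative, and the dimensional constant $c_d$ is dropped):

```python
import random

def capacity_density(eps, d=3, sample_rho=lambda: random.paretovariate(3.0)):
    """Sum of (eps^(d/(d-2)) * rho_z)^(d-2) over the ~eps^-d lattice centres in
    the unit cube; by the strong law of large numbers this tends to
    E[rho^(d-2)] as eps -> 0 (E[rho] = 1.5 for Pareto(3) marks and d = 3)."""
    n = round(1.0 / eps)  # centres per side of the unit cube
    total = 0.0
    for _ in range(n ** d):
        rho = sample_rho()
        total += (eps ** (d / (d - 2)) * rho) ** (d - 2)
    return total
```

For $d=3$ a Pareto law with exponent $3$ has finite moments of order $<3$, so it satisfies \eqref{integrability.radii.intro} for any $\beta<2$.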
As shown in \cite{GHV}, if $\beta=0$ in \eqref{integrability.radii.intro}, then for every $f \in H^{-1}(D)$ and $\P$-almost every realization of the random set $H^\varepsilon$, the solutions to \begin{align}\label{P.eps} \begin{cases} -\Delta u_\varepsilon = f \ \ \ \ &\text{in $D^\varepsilon$}\\ u_\varepsilon = 0 \ \ \ \ &\text{on $\partial D^\varepsilon$} \end{cases} \end{align} converge weakly in $H^1_0(D)$ to the solution of the homogenized problem \begin{align}\label{P.hom} \begin{cases} -\Delta u + C_0 u = f \ \ \ \ &\text{in $D$}\\ u = 0 \ \ \ \ &\text{on $\partial D$.} \end{cases} \end{align} The constant $C_0 > 0$ is the limit of the density of harmonic capacity generated by the set $H^\varepsilon$: If $S^{d-1}$ denotes the $(d-1)$-dimensional unit sphere, then \begin{equation}\label{strange.term} C_0 := c_d\begin{cases} \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr] \ \ \ \ & \text{if $\Phi = \mathbb{Z}^d$}\\ \lambda \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr] & \text{if $\Phi = \mathop{Poi}(\lambda)$} \end{cases}, \ \ \ \ c_d:= (d-2) \mathcal{H}^{d-1}(S^{d-1}). \end{equation} In this paper, we strengthen the condition of \cite{GHV} from $\beta=0$ to $\beta > 0$ in \eqref{integrability.radii.intro} and study the convergence rates of $u_\varepsilon$ to the homogenized solution $u$. \bigskip By the Strong Law of Large Numbers, assumption \eqref{integrability.radii.intro} with $\beta=0$ is minimal in order to ensure that for $\P$-almost every realization of $H^\varepsilon$, its density of capacity admits a finite limit. However, it does not prevent the balls in $H^\varepsilon$ from having radii that are much bigger than the typical size $\eps^\frac{d}{d-2}$. This gives rise to clustering phenomena with overwhelming probability. 
In particular, for $\beta < d -2$, the expected number of balls of $H^\varepsilon$ that intersect, namely such that their radius $\eps^\frac{d}{d-2} \rho_z$ is bigger than the typical distance $\varepsilon$ between the centres, is of order $\varepsilon^{-d+2+\beta}$ (over an expected total of $\varepsilon^{-d}$ balls). The same holds also under assumption \eqref{integrability.radii.intro} for $\beta < \frac{(d-2)^2}{2}$, with the expected number of overlapping balls being of order $\varepsilon^{-d +2 +\frac{2}{d-2}\beta}$. The presence of balls that overlap is the main challenge in the proof of the qualitative homogenization statement obtained in \cite{GHV} and is one of the challenges of the current paper. It requires a careful treatment of the set $H^\varepsilon$ to ensure that the presence of long chains of overlapping balls does not destroy the homogenization process. For a more detailed discussion on this issue we refer to the introductory section in \cite{GHV} and to Subsection \ref{sub.ideas} of the present paper. \bigskip The main results contained in this paper provide an annealed (i.e. averaged in probability) estimate for the $H^1$-norm of the homogenization error $u_\varepsilon- W_\varepsilon u$. The function $W_\varepsilon$ is a suitable corrector function that is related to the so-called \textit{oscillating test function} \cite{Cioranescu_Murat,Tartar}. We assume that $\Phi$ is the lattice $\mathbb{Z}^d$ or that it is a Poisson point process in dimension $d=3$. If $\mathbb{E} \bigl[ \cdot \bigr]$ denotes the expectation under the probability measure associated to the process $(\Phi, \mathcal{R})$, we show that\footnote{In the case of $\Phi$ being a Poisson point process, there is a factor $\log\varepsilon$ on the right-hand side. 
We refer to Theorem \ref{t.main} for the precise statement.} \begin{align}\label{thm.rough} \mathbb{E}\bigl[ \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}^2 \bigr]^{\frac 1 2} \leq C\begin{cases} \varepsilon^{\frac{d}{d^2- 4}\beta} \ \ \ &\text{if $\beta \leq d-2$}\\ \varepsilon^{\frac{d}{d+2}} \ \ \ &\text{if $\beta > d-2$.} \end{cases} \end{align} We stress that in the case of periodic holes, namely when $\Phi=\mathbb{Z}^d$ and $\rho_z \equiv r > 0$ for all $z \in \mathbb{Z}^d$, the optimal rate on the right-hand side of \eqref{thm.rough} is $\varepsilon$ \cite{Kacimi_Murat}. \bigskip The main quantity that governs the decay of the homogenization error $u_\varepsilon - W_\varepsilon u$ is the convergence of the capacity density of $H^\varepsilon$ to the constant term $C_0$ defined in \eqref{strange.term}. In the periodic case mentioned in the previous paragraph, the term $C_0= c_d r^{d-2}$ is ``very'' close to the density of capacity of $H^\varepsilon$ already at scale $\varepsilon$. Heuristically, indeed, if $A \subset D$ we have \begin{align}\label{density.capacity} \capacity(A \cap H^\varepsilon) \simeq \sum_{z \in \varepsilon \mathbb{Z}^d \cap A} \capacity(B_{\eps^\frac{d}{d-2} r}(z)) \simeq |A| \varepsilon^{-d} c_d (\eps^\frac{d}{d-2} r)^{d-2} \stackrel{\eqref{strange.term}}{=} C_0 |A|, \end{align} and this chain of identities is true as long as $|A|$ is at least of order $\varepsilon$. On the other hand, in our setting, this identity is expected to hold at scales that are larger than $\varepsilon$ due to the fluctuations of the process $(\Phi, \mathcal{R})$. For a more detailed explanation of the exponents in \eqref{thm.rough}, we refer to Subsection \ref{sub.ideas}. {We also remark that the threshold $d-2$ in the parameter $\beta$ obtained in \eqref{thm.rough} is related to the $L^2$-nature of the norm considered for the homogenization error. 
Roughly speaking, the norm considered in \eqref{thm.rough} requires a control on the expectation of the square of the capacity generated by the balls in $H^\varepsilon$.} \bigskip Starting with \cite{Cioranescu_Murat} and \cite{papvar.tinyholes}, there is a large amount of literature devoted to the homogenization of \eqref{P.eps}, both for deterministic \cite{DalMasoGarroni.punctured, HoeferVelazquez.reflections} and random holes $H^\varepsilon$ \cite{CaffarelliMellet, CasadoDiaz, MarchenkoKhruslov}; similar problems have also been studied in the case of the fractional Laplacian $(-\Delta)^s$, \cite{CaffarelliMellet.fractional, Focardi.fractional} or for nonlinear elliptic operators \cite{CasadoDiaz.nonlinear, Zhikov}. All the models considered in the deterministic case contain assumptions that ensure that, for $\varepsilon$ small enough, the holes in $H^\varepsilon$ do not overlap. In the random models mentioned above, the previous property is required as well, at least for $\P$-almost every realization and $\varepsilon>0$ small enough. For a complete and more detailed description of these works, we refer to the introduction of \cite{GHV}. \smallskip We also mention that the analogue of \eqref{P.eps} for a Stokes (and Navier-Stokes) system with no-slip boundary conditions on the holes $H^\varepsilon$ has been considered in \cite{AllaireARMA1990a, Allaire_arma2, SanchezP82} in the periodic case and then extended to more general configurations of holes (see, e.g., \cite{Desvillettes2008, Hillairet2018, Hillairet2017}). In the case of the Stokes operator, the limit equation contains an additional zero-th order term similar to $C_0$ in \eqref{P.hom}. Under the same assumptions of this paper, the analogue of the homogenization result contained in \cite{GHV} has been proven for a Stokes system in \cite{GH1, GH_pressure}. 
\smallskip In the periodic case, quantitative rates of convergence for \eqref{P.eps} to \eqref{P.hom} have been first obtained in \cite{Kacimi_Murat}. In \cite{Russel, Wang_Xu_Zhang}, similar results have been obtained with $-\Delta$ replaced by (also nonlinear) oscillating elliptic operators. When the holes are randomly distributed, the first quantitative result on the convergence of $u_\varepsilon$ to $u$ has been obtained in \cite{Figari_Orlandi}. In this paper, the authors study the analogue of \eqref{P.eps} for the operator $-\Delta + \lambda$ in an unbounded domain of $\mathbb{R}^3$, that is perforated by $m$ spherical holes of identical radius $\sim m^{-1}$. The centres of the holes are independent and distributed according to a compactly supported and continuous potential $V$. If $u_m$ denotes the analogue of $u_\varepsilon$, when the massive term $\lambda$ is big enough (compared to $V$), the authors provide rates of convergence for the $L^2$-norm of the difference $u_m -u$ in the limit $m \to +\infty$. Furthermore, they prove the Gaussianity of the fluctuations of $u_m$ around the homogenized solution $u$ in the CLT-scaling. In \cite{Jonas_Richard}, this result has been obtained in the same setting of \cite{Figari_Orlandi} without any constraint on the massive term $\lambda$. \bigskip \subsection*{Organization of the paper.} This paper is organized as follows: In Section \ref{s.main} we introduce the setting and state the main results. In Subsection \ref{sub.ideas}, we provide an overview of the main challenges and ideas used to prove Theorem \ref{t.main}. In Section \ref{s.periodic} we prove Theorem \ref{t.main}, case $(a)$, while in Section \ref{s.poi.d} we show how to extend the argument of the previous section when $\Phi$ is a Poisson point process in $\mathbb{R}^3$ (Theorem \ref{t.main}, case $(b)$). Finally, Section \ref{Aux} contains some auxiliary results that are used in the proofs of the main results. 
\section{Setting and main results}\label{s.main} Let $d \geq 3$ and $D \subset \mathbb{R}^d$ be a bounded and smooth domain that is star-shaped with respect to the origin. For $\varepsilon > 0$, we define the random set of holes $H^\varepsilon$ and the punctured set $D^\varepsilon$ as in \eqref{def.holes}. \smallskip We assume that the union of balls $H^\varepsilon$ is generated by a marked point process $(\Phi, \mathcal{R})$ on $\mathbb{R}^d \times \mathbb{R}_+$. In other words, we generate the centres of the balls in $H^\varepsilon$ via a point process $\Phi$. To each point $z\in \Phi$, we associate a mark $\rho_z \geq 0$ that determines the radius of the ball. We refer to \cite[Chapter 9, Definitions 9.1.I - 9.1.IV]{Daley.Jones.book2} for an extensive and rigorous definition of marked point processes and their associated measures on $\mathbb{R}^d \times \mathbb{R}_+$. We denote by $(\Omega, \mathcal{F}, \mathbb{P})$ the probability space associated with $(\Phi, \mathcal{R})$, so that the random sets in \eqref{def.holes} and the random field solving \eqref{P.eps} may be written as $H^\varepsilon= H^\varepsilon(\omega)$, $D^\varepsilon=D^\varepsilon(\omega)$ and $u_\varepsilon(\omega; \cdot)$, respectively. The set of realizations $\Omega$ may be seen as the set of atomic measures $\sum_{n \in \mathbb{N}} \delta_{(z_n, \rho_n)}$ in $\mathbb{R}^d \times \mathbb{R}_+$ or, equivalently, as the set of (unordered) collections $\{ (z_n , \rho_n) \}_{ n\in \mathbb{N}} \subset \mathbb{R}^d \times \mathbb{R}_+$. \smallskip Throughout this paper we assume that $(\Phi, \mathcal{R})$ satisfies the following conditions: \begin{itemize} \item[(i)] $\Phi$ is either the lattice $\mathbb{Z}^d$ or $\Phi = \mathop{Poi}(\lambda)$, i.e.
a Poisson point process of intensity $\lambda>0$; \smallskip \item[(ii)] The marks $\{ \rho_z\}_{z \in \Phi}$ are independent and identically distributed: if $\P_{\mathcal{R}}$ denotes the marginal of the marks with respect to the process $\Phi$, then the $n$-point correlation function may be written as the product \begin{align} f_n ( (z_1, \rho_1), \cdots, (z_n, \rho_n)) = \prod_{i=1}^n f_1((z_i, \rho_i)), \ \ \ \ f_1((z, \rho)) = f(\rho). \end{align} \smallskip \item[(iii)] The marks $\mathcal{R}$ have finite $(d-2+\beta)$-moment, namely the density function $f$ in (ii) satisfies \begin{align}\label{integrability.radii} \mathbb{E}_{\rho}\bigl[ \rho^{d-2+\beta}\bigr] := \int_0^{+\infty} \rho^{d-2 +\beta} f(\rho) \d \rho \leq 1, \ \ \ \ \text{with $\beta > 0$.} \end{align} \end{itemize} We stress that conditions (i)-(ii) yield that $(\Phi, \mathcal{R})$ is stationary. In the case $\Phi=\mathop{Poi}(\lambda)$, the process $(\Phi, \mathcal{R})$ is stationary with respect to the action of the group of translations $\{\tau_x \}_{x\in \mathbb{R}^d}$. This means that the probability measure $\P$ is invariant under the action of the transformation $\tau_x: \Omega \to \Omega$, \, $\omega= \{ ( z_i, \rho_{z_i}) \}_{i \in \mathbb{N}} \mapsto \tau_x \omega := \{ ( z_i + x, \rho_{z_i}) \}_{i \in \mathbb{N}}$. In the case $\Phi=\mathbb{Z}^d$ the same holds under the action of the group $\{\tau_z \}_{z\in \mathbb{Z}^d}$. \bigskip \subsection*{Notation.} When no ambiguity occurs, we skip the argument $\omega \in \Omega$ in the notation for $H^\varepsilon(\omega), D^\varepsilon(\omega)$ and $u_\varepsilon(\omega ; \cdot)$, as well as in all the other random objects. We denote by $\mathbb{E}\bigl[ \cdot \bigr]$ and $\mathbb{E}_{\Phi}\bigl[ \cdot \bigr]$ the expectations under the total probability measure $\mathbb{P}$ and under the probability measure $\mathbb{P}_{\Phi}$ associated with the point process $\Phi$, respectively.
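For concreteness, here is a minimal Python sketch (ours, purely illustrative; the helper name \texttt{sample\_marked\_poisson} is hypothetical) of how one realization of $(\Phi, \mathcal{R})$ satisfying (i)-(ii) may be sampled in a box when $\Phi = \mathop{Poi}(\lambda)$. The Pareto marks are one possible choice compatible with the finite-moment condition (iii), up to the normalization in \eqref{integrability.radii}:

```python
import math
import random

def sample_marked_poisson(lam, box, sample_mark, rng):
    """Sample one realization of a marked Poisson point process (Phi, R) in
    box = (lo, hi, d): the number of centres is Poisson(lam * |box|), the
    centres are i.i.d. uniform in the box, and each centre carries an
    i.i.d. mark rho_z > 0 drawn via sample_mark."""
    lo, hi, d = box
    vol = (hi - lo) ** d
    # Poisson(lam * vol) by inversion (Knuth's algorithm); fine for moderate rates
    threshold, k, p = math.exp(-lam * vol), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            break
        k += 1
    centres = [tuple(rng.uniform(lo, hi) for _ in range(d)) for _ in range(k)]
    return [(z, sample_mark(rng)) for z in centres]

rng = random.Random(0)
# Pareto marks: a law with finite (d-2+beta)-moment for every beta < 3 when d = 3
omega = sample_marked_poisson(lam=5.0, box=(0.0, 1.0, 3),
                              sample_mark=lambda r: r.paretovariate(4.0), rng=rng)
assert all(rho > 0 and all(0.0 <= c <= 1.0 for c in z) for z, rho in omega)
```

The returned list of pairs $(z, \rho_z)$ is exactly one unordered collection $\omega \in \Omega$ in the sense described above.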
For $\varepsilon > 0$ and a set $A \subset \mathbb{R}^d$, we define \begin{align}\label{notation.psi} \Phi(A):=\bigl\{ z \in \Phi \, \colon \, z \in A \bigr\}, \ \ \ \Phi^\varepsilon(A) := \bigl\{ z \in \Phi \, \colon \, \varepsilon z \in A \bigr\} \end{align} and the random variables \begin{align} N(A):= \#(\Phi(A)), \ \ \ N^\varepsilon(A):= \#(\Phi^\varepsilon(A)). \end{align} \smallskip For any $\mu \in H^{-1}(D)$, we write $\langle \, \cdot \, ; \, \cdot \, \rangle$ for the duality product with $H^1_0(D)$; we use the notation $\mathop{\mathpalette\avsuminner\relax}\displaylimits_{i \in I}$ for the averaged sum $\#(I)^{-1}\sum_{i\in I}$ and $\lesssim$ and $\gtrsim$ instead of $\leq C$ and $\geq \frac 1 C$, with the constant $C$ depending on the dimension $d$, the domain $D$ and, in the case of $\Phi=\mathop{Poi}(\lambda)$, the intensity rate $\lambda$. \bigskip \subsection{Main result} Before stating the main results, we need to define a suitable corrector function $W_\varepsilon$ that appears in the homogenization error $u_\varepsilon - W_\varepsilon u$. We stress that, also in the case of periodic holes, the solutions $u_\varepsilon$ are only expected to converge weakly in $H^1_0(D)$ to $u$. Therefore, the homogenized solution needs to be suitably modified via a corrector $W_\varepsilon$ in order to be a good approximation for $u_\varepsilon$ also in the strong topology of $H^1_0(D)$. \smallskip For $x \in \Phi^\varepsilon(D)$ we set \begin{align}\label{minimal.distance} R_{\varepsilon,x} := \frac \varepsilon 4 \min_{z \in \Phi^\varepsilon(D), \atop z \neq x} \bigl\{|z - x| \, ; \, 1 \bigr\}. \end{align} Note that, if $\Phi= \mathbb{Z}^d$, then the above quantity is always $\frac \varepsilon 4$.
For $\delta > 0$, we denote by $\Phi_{\delta}^\varepsilon(D) \subset \Phi^\varepsilon(D)$ the set \begin{align}\label{thinning.psi} \Phi_{\delta}^{\varepsilon}(D):= \biggl\{ z \in \Phi^\varepsilon(D) \, \colon \, \eps^\frac{d}{d-2}\rho_z \leq \varepsilon^{1 +\delta}, \, \, R_{\varepsilon,z} \geq 2 \sqrt d \eps^\frac{d}{d-2}\rho_z \biggr\}. \end{align} \smallskip For each $z \in \Phi^\varepsilon_\delta(D)$, let $w_{z,\varepsilon} \in H^1(B_{R_{\varepsilon,z}}(\varepsilon z))$ be the solution to \begin{align}\label{def.harmonic.annuli} \begin{cases} -\Delta w_{z,\varepsilon} = 0 \ \ \ \ &\text{in $B_{R_{\varepsilon,z}}(\varepsilon z)\backslash B_{\eps^\frac{d}{d-2}\rho_z}(\varepsilon z)$}\\ w_{z,\varepsilon} = 1 \ \ \ &\text{on $\partial B_{R_{\varepsilon,z}}(\varepsilon z)$}\\ w_{z,\varepsilon} = 0 \ \ \ &\text{on $\partial B_{\eps^\frac{d}{d-2}\rho_z}(\varepsilon z)$}. \end{cases} \end{align} We thus define \begin{align}\label{corrector} W_\varepsilon(x)=\begin{cases} w_{z,\varepsilon}(x) \ \ \ &\text{if $x\in B_{R_{\varepsilon,z}}(\varepsilon z)\backslash B_{\eps^\frac{d}{d-2}\rho_z}(\varepsilon z)$ for some $z \in \Phi^\varepsilon_\delta(D)$}\\ 0 \ \ \ &\text{if $x \in B_{\eps^\frac{d}{d-2}\rho_z}(\varepsilon z)$ for some $z \in \Phi^\varepsilon_\delta(D)$}\\ 1 \ \ \ &\text{otherwise.} \end{cases} \end{align} We stress that \eqref{thinning.psi} ensures that definitions \eqref{def.harmonic.annuli} and \eqref{corrector} are well-posed, since the set $\{ B_{R_{\varepsilon,z}}(\varepsilon z)\}_{z \in \Phi_\delta^\varepsilon(D)}$ is made of disjoint balls and, for every $z \in \Phi^\varepsilon_\delta(D)$, it holds $B_{\eps^\frac{d}{d-2} \rho_z}(\varepsilon z) \subset B_{R_{\varepsilon,z}}(\varepsilon z)$. Note that in the above definition the function $W_\varepsilon \in H^1(D)$ depends on the choice of the parameter $\delta$ used to select the subset $\Phi_\delta^\varepsilon(D)$. The optimal parameter $\delta$ will be fixed in Theorem \ref{t.main}.
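Since the annulus problem above is radially symmetric, its solution is an explicit linear combination of the radial harmonic profiles $1$ and $|x - \varepsilon z|^{-(d-2)}$. The following minimal Python sketch (ours; the helper name \texttt{annulus\_corrector} is hypothetical) evaluates this profile, with the convention that it vanishes on the boundary of the hole and equals $1$ on the outer sphere:

```python
def annulus_corrector(r, a, R, d=3):
    """Radial profile of the harmonic function on the annulus B_R \\ B_a,
    normalized so that w = 0 on |x| = a (the hole) and w = 1 on |x| = R.
    It is a linear combination of the radial harmonic functions 1 and r^{-(d-2)}."""
    assert 0 < a < r <= R and d >= 3
    p = d - 2
    return (a ** (-p) - r ** (-p)) / (a ** (-p) - R ** (-p))

# sanity check for d = 3, inner radius eps^{d/(d-2)} * rho, outer radius eps/4
eps, rho, d = 0.1, 0.5, 3
a, R = eps ** (d / (d - 2)) * rho, eps / 4
assert abs(annulus_corrector(a * (1 + 1e-12), a, R, d)) < 1e-6  # ~ 0 near the hole
assert annulus_corrector(R, a, R, d) == 1.0                     # 1 on the outer sphere
```

The profile interpolates monotonically between $0$ on the hole and $1$ on the outer sphere, which is what makes the corrector $W_\varepsilon$ continuous across the annuli.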
We finally stress that, in the periodic case $\Phi= \mathbb{Z}^d$ and $\rho_z \equiv r$, for any $\delta > 0$ and $\varepsilon$ small enough, the function $W_\varepsilon$ coincides with the \textit{oscillating test function} constructed in \cite{Cioranescu_Murat, Kacimi_Murat}. \bigskip \begin{thm}\label{t.main} Let $(\Phi, \mathcal{R})$ satisfy conditions (i)-(iii) of the previous subsection. For $\varepsilon>0$ and $f \in L^\infty(D)$, with $\|f\|_{L^\infty(D)}=1$, let $u_\varepsilon$ and $u$ be as in \eqref{P.eps} and \eqref{P.hom}, respectively. We consider the random field $W_\varepsilon$ in \eqref{corrector} with \begin{align*} \delta = \begin{cases} \frac{4}{d^2- 4} \ \ \ &\text{if $\beta \leq d-2$}\\ \frac{2}{d-2}- \frac{2d}{(d+2)\beta} \ \ \ &\text{if $\beta > d-2$.} \end{cases} \end{align*} Then \begin{itemize} \item[(a)] If $\Phi= \mathbb{Z}^d$, there exists a constant $C= C(d, D)> 0$ such that \begin{align*} \mathbb{E} \bigl[ \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}^2 \bigr]^{\frac 1 2} \leq C\begin{cases} \varepsilon^{\frac{d}{d^2- 4}\beta} \ \ \ &\text{if $\beta \leq d-2$}\\ \varepsilon^{\frac{d}{d+2}} \ \ \ &\text{if $\beta > d-2$;} \end{cases} \end{align*} \item[(b)] If $\Phi = \mathop{Poi}(\lambda)$ with $\lambda >0$ and $d=3$, there exists a constant $C=C(\lambda, D)>0$ such that \begin{align*} \mathbb{E} \bigl[ \| u_\varepsilon - \tilde W_\varepsilon u \|_{H^1_0(D)}^2 \bigr]^{\frac 1 2} \leq C\begin{cases} |\log \varepsilon|\varepsilon^{\frac{3}{5}\beta} \ \ \ &\text{if $\beta \leq 1$}\\ |\log \varepsilon| \varepsilon^{\frac{3}{5}} \ \ \ &\text{if $\beta > 1$.} \end{cases} \end{align*} \end{itemize} \end{thm} \bigskip \begin{rem} As it becomes apparent in the proof of Theorem \ref{t.main}, the choice of $W_\varepsilon$ is not unique.
The same result holds, for instance, if $W_\varepsilon$ is replaced with the oscillating test function $w_\varepsilon$ constructed in {\cite[Section 3]{GHV}} and in Subsection \ref{sub.quenched.periodic} of the present paper. The function $W_\varepsilon$, however, has a simpler and more explicit construction that may be implemented numerically more efficiently. It is, indeed, an oscillating test function restricted to the balls of $H^\varepsilon$ that do not overlap and have radius smaller than the fixed threshold $\varepsilon^{1+\delta}$. \end{rem} \bigskip \subsection{Ideas of the proofs}\label{sub.ideas} The proof of Theorem \ref{t.main} is inspired by the proof of the same result in the case of periodic holes shown in \cite{Kacimi_Murat}. The latter, in turn, upgrades the result of \cite{Cioranescu_Murat} from the qualitative statement $u_\varepsilon \rightharpoonup u$ in $H^1_0(D)$ to an estimate on the convergence of the homogenization error. Both arguments rely on the construction of suitable \textit{oscillating test functions} $\{ w_\varepsilon \}_{\varepsilon> 0} \subset H^1(D)$. In the qualitative statement of \cite{Cioranescu_Murat}, these functions allow one to pass to the limit $\varepsilon \downarrow 0$ in the weak formulation of \eqref{P.eps} and infer the homogenized equation \eqref{P.hom}. \smallskip The functions $\{ w_\varepsilon \}_{\varepsilon >0}$ may be constructed as $W_\varepsilon$ in \eqref{corrector}, where the set $\Phi^\varepsilon_\delta$ coincides with the whole set $\Phi=\mathbb{Z}^d$ and $w_{\varepsilon,z}= w_{\varepsilon,0}( \cdot - \varepsilon z)$. Furthermore, they are strictly related to the density of capacity generated by $H^\varepsilon$: The additional term $C_0= c_d r^{d-2}$ that appears in the homogenized equation \eqref{P.hom} is indeed the limit of the measures $-\Delta w_\varepsilon$ when tested against the functions $\rho u_\varepsilon \in H^1_0(D^\varepsilon)$, $\rho \in C^\infty_0(D)$.
It is not hard to see from \eqref{corrector} that, for functions that vanish on the holes $H^\varepsilon$, the action of $-\Delta w_\varepsilon$ reduces to the one of the periodic measure \begin{align}\label{mu.eps} \mu_\varepsilon = \sum_{z \in \mathbb{Z}^d \cap \frac{1}{\varepsilon}D} \partial_n w_{\varepsilon,z} \delta_{ \partial B_{\frac \varepsilon 4}(\varepsilon z) }, \end{align} which is concentrated on the spheres $\{ \partial B_{\frac \varepsilon 4}(\varepsilon z) \}_{z \in \mathbb{Z}^d}$. \smallskip In \cite{Kacimi_Murat}, the corrector $W_\varepsilon$ is chosen as the oscillating test function $w_\varepsilon$ itself. As a first step, it is shown that the decay of $\| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}$ boils down to controlling the convergence of the density of capacity of $H^\varepsilon$ to its limit $C_0$ (cf. \eqref{strange.term}). The latter is expressed in terms of the decay of the norm $\|\mu_\varepsilon- C_0\mathbf{1}_D\|_{H^{-1}(D)}$. As a second step, the authors appeal to a result of \cite{Kohn_Vogelius} to estimate the decay of $ \| \mu_\varepsilon- C_0\mathbf{1}_D \|_{H^{-1}(D)}$ in terms of the size $\varepsilon$ of the periodic cell $C_\varepsilon:=[-\frac \varepsilon 2; \frac \varepsilon 2]^d$ of $\mu_\varepsilon$. The crucial feature is that, up to a correction of order $\varepsilon^2$, the measure $\mu_\varepsilon -C_0$ has zero average in $C_\varepsilon$. In other words, we have \begin{align}\label{zero.average} \int_{\partial B_{\frac \varepsilon 4}(0)} \partial_n w_{\varepsilon} = \varepsilon^d \bigl( C_0 + O(\varepsilon^2)\bigr). \end{align} \bigskip In this paper we adapt the previous two-step argument to the random setting. The first main difference is strictly related to the randomness of the radii in $H^\varepsilon$ and needs to be addressed also in the case of bounded radii (i.e. if $\beta= +\infty$ in \eqref{integrability.radii}) and periodic centres.
In this case, the measure $\mu_\varepsilon$ is defined as in \eqref{mu.eps} but, on each sphere $\partial B_{\frac \varepsilon 4}(\varepsilon z)$, $z \in \mathbb{Z}^d$, the term $\partial_n w_\varepsilon$ depends on the associated random mark $\rho_z$. Therefore, in contrast with the periodic case, \eqref{zero.average} may fail in each cube $\varepsilon z + C_\varepsilon$. Nevertheless, by the Law of Large Numbers, we may expect that the average of $\mu_\varepsilon - C_0$ is close to zero over cubes of size $k\varepsilon$, $k \gg 1$, as the left-hand side in \eqref{zero.average} turns into an averaged sum of $k^d$ random variables. This motivates the introduction of a partition of the set $D$ into cubes of mesoscopic size $k\varepsilon$ (cf. Subsection \ref{sub.covering}) that plays the role of the cells $C_\varepsilon + \varepsilon z$ of the periodic case. This allows us to adapt the result by \cite{Kohn_Vogelius} and obtain \begin{align}\label{estimate.scaling} \mathbb{E}\bigl[ \| \mu_\varepsilon- C_0\mathbf{1}_D \|_{H^{-1}(D)}^2 \bigr]^{\frac 1 2} \lesssim k \varepsilon + \mathbb{E}_\rho\biggl[ \bigl(\mathop{\mathpalette\avsuminner\relax}\displaylimits_{i=1}^{k^d} \rho_i^{d-2} - \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr]\bigr)^2 \biggr]^{\frac 1 2}. \end{align} Here, the last term accounts for the difference between the average of $\mu_\varepsilon$ in each cube of size $k\varepsilon$, $k \in \mathbb{N}$, and the value $C_0$. This inequality implies an estimate of the form \begin{align} \mathbb{E}\bigl[ \| \mu_\varepsilon- C_0\mathbf{1}_D \|_{H^{-1}(D)}^2 \bigr]^{\frac 1 2} \lesssim k \varepsilon + \mathbb{E}_\rho\biggl[\bigl(\rho^{d-2} - \mathbb{E}_\rho\bigl[ \rho^{d-2} \bigr]\bigr)^2 \biggr]^{\frac 1 2} k^{-\frac d 2}. \end{align} The optimal choice of $k$ yields the exponent $\frac{d}{d+2}$ of Theorem \ref{t.main}. If $\rho_z \equiv r$ for all $z \in \mathbb{Z}^d$, then the second term vanishes and the above estimate with $k=1$ gives the optimal rate of \cite{Kacimi_Murat}.
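To make the optimal choice of $k$ explicit, the following side computation (ours, assuming that the variance $\mathbb{E}_\rho[(\rho^{d-2}-\mathbb{E}_\rho[\rho^{d-2}])^2]$ is of order one) balances the two contributions:

```latex
k \varepsilon \simeq k^{-\frac d 2}
\quad \Longleftrightarrow \quad
k \simeq \varepsilon^{-\frac{2}{d+2}},
\qquad \text{so that} \qquad
\mathbb{E}\bigl[ \| \mu_\varepsilon- C_0\mathbf{1}_D \|_{H^{-1}(D)}^2 \bigr]^{\frac 1 2}
\lesssim \varepsilon^{1-\frac{2}{d+2}} = \varepsilon^{\frac{d}{d+2}}.
```

Since $k$ is an integer, one may take for instance $k = \lceil \varepsilon^{-\frac{2}{d+2}} \rceil$.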
\smallskip In the case of centres distributed according to a Poisson point process, the argument for Theorem \ref{t.main} follows the same ideas as above; although the centres of the holes in $H^\varepsilon$ have random positions, their typical distance is indeed still of size $\varepsilon$. This feature gives rise to the additional logarithmic factor in the rate of Theorem \ref{t.main}. The main technical challenge is related to the construction of the mesoscopic partition of $D$ that allows us to obtain the analogue of \eqref{estimate.scaling}. In contrast with the case $\Phi=\mathbb{Z}^d$, indeed, there are ($\P$-sufficiently many) realizations of $H^\varepsilon$ where the support of the measure $\mu_\varepsilon$ intersects the boundary of the covering. In other words, the spheres $\{ \partial B_{\frac \varepsilon 4}(\varepsilon z) \}_{z\in \Phi^\varepsilon_\delta(D)}$ might fall across two cubes of size $\varepsilon k$ that cover $D$. This, in particular, implies that the covering does not induce a well-defined partition of the spheres on which the measure $\mu_\varepsilon$ is supported. We tackle this issue by constructing a suitable random covering. We do this by enlarging each cube of size $\varepsilon k$ so that it also includes the spheres $\partial B_{\frac \varepsilon 4}(\varepsilon z)$ that fall on its boundary (see also Figure \ref{covering.pic}). In order to obtain the desired rate of convergence, we require that the new sets have volume ``very close'' to that of the deterministic partition into cubes used in the case $\Phi= \mathbb{Z}^d$. We do so by reducing the size of the spheres that are too close to the boundary from $\varepsilon$ to $\varepsilon^{1+\kappa}$, where $\kappa =\kappa(d)>0$ is a suitable exponent. We refer to Subsection \ref{sub.covering} for the precise construction.
\smallskip A second challenge that arises in the proof of Theorem \ref{t.main} is related to the presence of overlapping holes in the case $\beta < +\infty$ in \eqref{integrability.radii}. The strategy to deal with this issue is very similar to the one used in \cite{GHV}: We construct, indeed, a suitable partition $H^\varepsilon= H^\varepsilon_b \cup H^\varepsilon_g$, where the subset $H^\varepsilon_b$ contains all the holes that overlap (cf. Lemma \ref{l.geometry.periodic}). As shown in \cite{GHV}, the contribution of $H^\varepsilon_b$ to the density of capacity is negligible in the limit $\varepsilon \downarrow 0$. As a consequence, we may modify the estimates of \cite{Kacimi_Murat} to prove that the decay of $\| u_\varepsilon- W_\varepsilon u \|_{H^1_0(D)}$ is controlled by the decay of the norm $ \| \mu_\varepsilon- C_0\mathbf{1}_D \|_{H^{-1}(D)}$, where the measure $\mu_\varepsilon$ is now only related to the union of disjoint balls $H^\varepsilon_g$. \smallskip \section{Proof of Theorem \ref{t.main}, $(a)$}\label{s.periodic} \subsection{Partition of the holes $H^\varepsilon$ and mesoscopic covering of $D$}\label{sub.covering} This section contains some technical tools that will be crucial to prove the main result: The first one is an adaptation of \cite{GHV} and provides a suitable way of dividing the holes $H^\varepsilon$ between the ones that may overlap due to the unboundedness of the marks $\{ \rho_z\}_{z \in \Phi}$ and the ones that, instead, are disjoint and have radii $\eps^\frac{d}{d-2} \rho_z$ much smaller than the distance $\varepsilon$ between the centres. \bigskip \begin{lem}\label{l.geometry.periodic} Let $\delta \in (0, \frac{2}{d-2}]$ be fixed.
There exists an $\varepsilon_0=\varepsilon_0(\delta,d)$ such that for every $\varepsilon \leq \varepsilon_0$ and $\omega \in \Omega$ we may find a partition of the realization of the holes $$ H^\varepsilon = H^\varepsilon_{g} \cup H^\varepsilon_{b} $$ with the following properties: \begin{itemize} \item There exists a subset of centres $n^\varepsilon(D) \subset \Phi^\varepsilon(D)$ such that \begin{align}\label{good.set.periodic} H^\varepsilon_g: = \bigcup_{ z \in n^\varepsilon(D)} B_{\eps^\frac{d}{d-2} \rho_z}( \varepsilon z ), \ \ \max_{z \in n^\varepsilon(D)}\eps^\frac{d}{d-2} \rho_z \leq \varepsilon^{1+\delta}; \end{align} \smallskip \item There exists a set $D^\varepsilon_b \subset \{ x \in \mathbb{R}^d \, \colon \, \mathop{dist}(x, D) \leq 2\}$ satisfying \begin{align} H^\varepsilon_{b} \subset D^\varepsilon_b, \ \ \ \capacity ( H^\varepsilon_b; D_b^\varepsilon) \lesssim \varepsilon^d \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_z^{d-2} \label{capacity.sum.periodic} \end{align} and \begin{align}\label{distance.good.bad.periodic} B_{\frac{\varepsilon}{4}}(\varepsilon z) \cap D^\varepsilon_b = \emptyset, \ \ \ \ \ \ \ \text{for every $z \in n^\varepsilon(D)$.} \end{align} \end{itemize} \end{lem} \bigskip \begin{proof}[Proof of Lemma \ref{l.geometry.periodic}] The construction of the sets $H^\varepsilon_g, H^\varepsilon_b$ and $D^\varepsilon_b$ is the one implemented in the proof of \cite[Lemma 2.2]{GHV}. We fix $\delta \in (0, \frac{2}{d-2}]$ throughout the proof. \smallskip We denote by $I^\varepsilon_b \subset \Phi^\varepsilon(D)$ the set of centres that generates the holes $H_b^\varepsilon$. We construct it in the following way: We first consider the points $z \in \Phi^\varepsilon(D)$ whose marks $\rho_z$ are bigger than $\varepsilon^{-\frac{2}{d-2}+\delta}$, namely \begin{align}\label{index.set.J} J^\varepsilon_b= \Bigl\{ z \in \Phi^\varepsilon(D) \colon \eps^\frac{d}{d-2}\rho_z \geq \varepsilon^{1+\delta}\Bigr\}.
\end{align} Given the holes \begin{align*} \tilde H^\varepsilon_b := \bigcup_{z \in J^\varepsilon_b} B_{2 (\eps^\frac{d}{d-2} \rho_z \wedge 1)}(\varepsilon z), \end{align*} we include in $I_b^\varepsilon$ also the set of points in $\Phi^\varepsilon(D) \backslash J^\varepsilon_b$ that are ``too close'' to the set $\tilde H_b^\varepsilon$, i.e. \begin{align}\label{index.set.I.tilde} \tilde I^\varepsilon_b := \left \{ z \in \Phi^\varepsilon(D) \backslash J^\varepsilon_b \colon \tilde H^\varepsilon_b \cap B_{\frac{\varepsilon}{4}}(\varepsilon z) \neq \emptyset \right\}. \end{align} We define \begin{equation} \begin{aligned}\label{definition.bad.index} I^\varepsilon_b := \tilde I^\varepsilon_b \cup J^\varepsilon_b, \ \ \ n^\varepsilon(D) := \Phi^\varepsilon(D) \backslash I^\varepsilon_b, \\ H^\varepsilon_b:= \bigcup_{z \in I^\varepsilon_b} B_{\eps^\frac{d}{d-2} \rho_z \wedge 1}(\varepsilon z), \ \ \ H^\varepsilon_g:= \bigcup_{z \in n^\varepsilon(D)} B_{\eps^\frac{d}{d-2} \rho_z}(\varepsilon z), \ \ \ D^\varepsilon_b:= \bigcup_{z \in I^\varepsilon_b} B_{2(\eps^\frac{d}{d-2} \rho_z \wedge 1)}(\varepsilon z). \end{aligned} \end{equation} \smallskip It remains to show that the sets defined above satisfy properties \eqref{good.set.periodic}-\eqref{distance.good.bad.periodic}. Property \eqref{good.set.periodic} is an immediate consequence of definition \eqref{index.set.J}. The first inclusion in \eqref{capacity.sum.periodic} follows easily from the definition of $H^\varepsilon_b$ and $D^\varepsilon_b$ in \eqref{definition.bad.index}; for the inequality in \eqref{capacity.sum.periodic} we instead appeal to the subadditivity of the capacity to bound \begin{align*} \capacity(H^\varepsilon_b; D^\varepsilon_b) \leq \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \capacity(B_{\eps^\frac{d}{d-2}\rho_z \wedge 1}(\varepsilon z); D^\varepsilon_b).
\end{align*} Moreover, by the monotonicity property $\capacity(A; B) \leq \capacity(A; C)$ for every $B \supseteq C \supseteq A$, this turns into \begin{align*} \capacity(H^\varepsilon_b; D^\varepsilon_b) \leq \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \capacity(B_{\eps^\frac{d}{d-2}\rho_z \wedge 1}(\varepsilon z); B_{2(\eps^\frac{d}{d-2}\rho_z \wedge 1)}(\varepsilon z)) \lesssim \varepsilon^d\sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_z^{d-2}, \end{align*} where in the last step we used $\capacity(B_r; B_{2r}) = c_d r^{d-2}$, applied with $r= \eps^\frac{d}{d-2}\rho_z \wedge 1$, together with the identity $(\eps^\frac{d}{d-2}\rho_z)^{d-2}= \varepsilon^d \rho_z^{d-2}$. This yields the estimate in \eqref{capacity.sum.periodic}. \smallskip To conclude the proof of this lemma, it remains to argue \eqref{distance.good.bad.periodic}: By construction (see \eqref{definition.bad.index}), it holds that \begin{align}\label{bad.set.periodic.1} D^\varepsilon_b = \tilde H_b^\varepsilon \cup \bigcup_{w \in \tilde I^\varepsilon_b}B_{2\eps^\frac{d}{d-2} \rho_w}(\varepsilon w). \end{align} On the one hand, by the definition of $n^\varepsilon(D)$ in \eqref{definition.bad.index} and \eqref{index.set.I.tilde}, for each $z \in n^\varepsilon(D)$ we have that \begin{align}\label{distance.periodic.1} \mathop{dist}(\varepsilon z; \tilde H_b^\varepsilon) \geq \frac{\varepsilon}{4}. \end{align} On the other hand, again by \eqref{index.set.J}-\eqref{index.set.I.tilde}, if $w \in \tilde I^\varepsilon_b$, then $\eps^\frac{d}{d-2} \rho_w \leq \varepsilon^{1+\delta}$, so that for every $z \in n^\varepsilon(D)$ \begin{align*} \mathop{dist}(\varepsilon z; B_{2 \eps^\frac{d}{d-2} \rho_w}(\varepsilon w)) \geq \varepsilon |z-w| - 2\varepsilon^{1+\delta} \geq \frac \varepsilon 2 |z -w| \geq \frac \varepsilon 4, \end{align*} whenever $\varepsilon$ is such that $\varepsilon^\delta < \frac 1 4$. Hence, also \begin{align*} \mathop{dist}(\varepsilon z; \bigcup_{w \in \tilde I^\varepsilon_b}B_{2\eps^\frac{d}{d-2} \rho_w}(\varepsilon w)) \geq \frac{\varepsilon}{4}. \end{align*} Combining this with \eqref{distance.periodic.1} and \eqref{bad.set.periodic.1}, we infer \eqref{distance.good.bad.periodic}. The proof of Lemma \ref{l.geometry.periodic} is complete.
\end{proof} \bigskip We now construct a suitable covering of $D$ that, as explained in Subsection \ref{sub.ideas}, plays a fundamental role in the proof of Theorem \ref{t.main}. We recall that, by our assumption, the set $D$ is any smooth domain that is star-shaped with respect to the origin. \smallskip For $k \in \mathbb{N}$ and $z \in \mathbb{Z}^d$, let \begin{equation}\label{cubes.meso} Q_{k,z}:= \varepsilon z +\frac {k\varepsilon}{2} Q, \ \ \ Q := [-1 ; 1]^d. \end{equation} Let $N_k \subset \mathbb{Z}^d$ be such that the collection $\{ Q_{k, z} \}_{z \in N_k}$ is an essentially disjoint covering of $D$. Since $D$ is bounded, we may assume that \begin{align}\label{number.N.k} \# (N_k) \lesssim (\varepsilon k)^{-d}. \end{align} Let \begin{align}\label{interior.cubes} \mathring{N}_{k}:= \biggl\{ z \in N_k \, \colon \, Q_{k, z} \subseteq D, \, \mathop{dist}(Q_{k,z}; \partial D) \geq \varepsilon \biggr\}. \end{align} Since $D$ is smooth and has compact boundary, it is easy to see that there exists $C_1=C_1(D)$ such that, whenever $k\varepsilon \leq C_1$, it holds \begin{align}\label{number.boundary.cubes} \#({N}_{k} \backslash \mathring{N}_k) \lesssim (k\varepsilon)^{-(d-1)}. \end{align} Finally, for each $z \in N_k$ we denote by $N_{k,z} \subset \Phi$ the set of points of $\Phi_\delta^\varepsilon(D)$ that, when rescaled, are contained in the cube $Q_{k,z}$, i.e. such that \begin{align}\label{number.covering} N_{k, z}:= \{ w \in \Phi^\varepsilon_\delta(D) \, \colon \, \varepsilon w \in Q_{k,z} \} \stackrel{\eqref{notation.psi}}{=} \Phi^\varepsilon_\delta(D) \cap \Phi^\varepsilon(Q_{k,z}). \end{align} Note that, since in this section we assume that $\Phi=\mathbb{Z}^d$, it follows that $$ \bigcup_{w \in N_{k,z}} Q_{1, w} \subset Q_{k,z}, \ \ \ \text{for every $z \in N_k$} $$ and that, for every $z \in \mathring{N}_{k}$, the sets $\{Q_{1,w}\}_{w \in N_{k,z}}$ provide a refinement of $Q_{k,z}$.
\bigskip \subsection{Quenched estimates for the homogenization error}\label{sub.quenched.periodic} All the results contained in this subsection are quenched, in the sense that they hold for any fixed realization of the holes $H^\varepsilon$. The main result of this section is Lemma \ref{det.estimate}, which allows us to control the norm of the homogenization error $u_\varepsilon- W_\varepsilon u$ in terms of suitable averaged sums of the random marks $\{\rho_z \}_{z \in \Phi}$. Lemma \ref{det.estimate} relies on Lemma \ref{det.KM}, which is an adaptation of \cite[Theorem 3.2]{Kacimi_Murat} and shows that controlling the error $u_\varepsilon- W_\varepsilon u$ considered in Theorem \ref{t.main} boils down to controlling the convergence to $C_0$ of the density of capacity generated by $H^\varepsilon$. This, in turn, may be controlled using the mesoscopic covering $\{ Q_{k,z} \}_{z \in N_k}$ of the previous subsection together with Lemma \ref{Kohn_Vogelius} of Section \ref{Aux}. \bigskip Before giving the statement of the first lemma, we recall the construction of the oscillating test function $w_\varepsilon \in H^1(D)$ implemented in \cite{GHV}. As mentioned in the introduction and in Subsection \ref{sub.ideas}, the main feature of this function is to vanish on the holes $H^\varepsilon$ and to ``approximate'' the density of the capacity of $H^\varepsilon$. We note that the unboundedness of the marks $\{ \rho_z\}_{z\in \Phi}$ implies that $\Phi^\varepsilon_\delta(D) \subsetneq \Phi^\varepsilon(D)$ and that the function $W_\varepsilon$ in \eqref{corrector} does not vanish on all the holes contained in $H^\varepsilon$. \smallskip Let $H^\varepsilon_g, H^\varepsilon_b$ and $D^\varepsilon_b$ be as in Lemma \ref{l.geometry.periodic}. We set \begin{align} v_\varepsilon:= \mathop{argmin}\{ \int_{D_b^\varepsilon} |\nabla u |^2 \ \colon \, u \in H^1_0(D_b^\varepsilon), \ u = 1 \ \ \text{on $\partial H_b^\varepsilon$} \}, \end{align} i.e.
the minimizer of $\capacity( H^\varepsilon_b; D^\varepsilon_b)$.\footnote{We assume that the minimizer exists. If this is not the case, then it suffices to take any function $v_\varepsilon$ in the minimizing class such that $ \int_{D_b^\varepsilon} |\nabla v_\varepsilon |^2 \leq 2 \capacity( H^\varepsilon_b; D^\varepsilon_b)$.} \smallskip We set as oscillating test function \begin{align}\label{oscillating.function} w_\varepsilon = w_\varepsilon^g \wedge w_\varepsilon^b \end{align} where $w_\varepsilon^g$ and $w_\varepsilon^b$ are defined as follows: \begin{align}\label{oscillating.bad} w_\varepsilon^b &:=\begin{cases} 1 - v_\varepsilon \ &\text{in $D^\varepsilon_b \backslash H^\varepsilon_b$}\\ 0 \ &\text{in $H^\varepsilon_b$}\\ 1 \ &\text{in $\mathbb{R}^d \backslash D^\varepsilon_b$} \end{cases} \end{align} and \begin{align}\label{oscillating.good} w_\varepsilon^g(x) :=\begin{cases} w_{z,\varepsilon}(x) \ \ \ &\text{if $ x \in B_{\frac \varepsilon 4}(\varepsilon z) \backslash B_{\eps^\frac{d}{d-2}\rho_z}(\varepsilon z)$, for some $z \in n^\varepsilon(D)$}\\ 0 \ \ \ &\text{if $ x \in B_{\eps^\frac{d}{d-2}\rho_z}(\varepsilon z)$, for some $z \in n^\varepsilon(D)$}\\ 1 \ \ \ \ \ \ &\text{ otherwise.}\\ \end{cases} \end{align} For each $z \in n^\varepsilon(D)$, the function $w_{z,\varepsilon}$ is as in \eqref{def.harmonic.annuli}. We remark that each $w_{z,\varepsilon}$ admits the explicit formulation \begin{align}\label{explicit.formula} w_{z,\varepsilon}(x) &= \frac{(\eps^\frac{d}{d-2}\rho_z)^{-(d-2)}- |x- \varepsilon z|^{-(d-2)}}{(\eps^\frac{d}{d-2}\rho_z)^{-(d-2)} -(\frac \varepsilon 4)^{-(d-2)}} \ \ \ \text{in $B_{\frac \varepsilon 4}(\varepsilon z) \backslash B_{\eps^\frac{d}{d-2}\rho_z}(\varepsilon z)$.} \end{align} \smallskip For $k \in \mathbb{N}$, let $\{ Q_{k,z}\}_{z\in N_k}$ be as in the previous subsection.
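From \eqref{explicit.formula}, the total flux of $w_{z,\varepsilon}$ through $\partial B_{\frac \varepsilon 4}(\varepsilon z)$ can be computed explicitly. The following side computation (ours; $\sigma_{d-1}$ denotes the surface measure of the unit sphere $\mathbb{S}^{d-1}$, and we use the identity $(\eps^\frac{d}{d-2}\rho_z)^{d-2}= \varepsilon^d \rho_z^{d-2}$) motivates the random variables $Y_{\varepsilon,w}$ defined in \eqref{averaged.sum} below:

```latex
\int_{\partial B_{\frac \varepsilon 4}(\varepsilon z)} \partial_n w_{z,\varepsilon}
= \frac{(d-2)\,\sigma_{d-1}}{(\eps^\frac{d}{d-2}\rho_z)^{-(d-2)} - (\frac \varepsilon 4)^{-(d-2)}}
= (d-2)\,\sigma_{d-1}\, \varepsilon^d\, \frac{\rho_z^{d-2}}{1- 4^{d-2}\varepsilon^{2} \rho_z^{d-2}}.
```

In particular, in the periodic case $\rho_z \equiv r$ this recovers \eqref{zero.average} with $C_0 = c_d r^{d-2}$ up to a correction of order $\varepsilon^2$.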
For every $z \in N_k$, we define the random variables \begin{align}\label{averaged.sum} S_{k,z}:= \frac{1}{k^d}\sum_{w \in N_{k,z}} Y_{\varepsilon, w}, \ \ \ \ \ Y_{\varepsilon,w}:= \rho_w^{d-2} \frac{1}{1- 4^{d-2}\varepsilon^{2} \rho_w^{d-2}}. \end{align} \bigskip \begin{lem}\label{det.estimate} Let $\delta \in (0, \frac{2}{d-2}]$ be fixed. Then for every $\varepsilon >0$ and $k \in \mathbb{N}$ with $ k\varepsilon \leq 1$ the following inequality holds: If $u_\varepsilon, u$ are as in Theorem \ref{t.main} and $W_\varepsilon, w_\varepsilon$ are as in \eqref{corrector} and \eqref{oscillating.function}, respectively, then \begin{align*} \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)} &\lesssim \biggl( (k \varepsilon)^2 \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)} + \varepsilon^d \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_z^{d-2}\biggr)^{\frac 1 2}\\ &\quad + \biggl(\mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in \mathring{N}_k} (S_{k,z} - \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 + (k\varepsilon)^3 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_{k}} (S_{k,z} - \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 \biggr)^{\frac 1 2}. \end{align*} \end{lem} \bigskip The next lemma is a simple adaptation of \cite{Kacimi_Murat} to our definition of the corrector $W_\varepsilon$ and of the oscillating test function $w_\varepsilon$: \begin{lem}\label{det.KM} Let $\delta \in (0, \frac{2}{d-2}]$ be fixed; let $u_\varepsilon, u, w_\varepsilon$ and $W_\varepsilon$ be as in Lemma \ref{det.estimate}.
Then \begin{align*} \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}^2 \lesssim \| w_\varepsilon - 1\|_{L^2(D)}^2 + \|\nabla(w_\varepsilon- W_\varepsilon) \|_{L^2(\mathbb{R}^d)}^2 + \| \mu_\varepsilon - C_0 \|_{H^{-1}(D)}^2, \end{align*} with \begin{align}\label{def.mu.eps} \mu_\varepsilon := \sum_{z \in \Phi_\delta^\varepsilon(D)} \partial_n w_{z,\varepsilon} \, \delta_{\partial B_{\frac \varepsilon 4}(\varepsilon z)}. \end{align} \end{lem} \bigskip \begin{proof}[Proof of Lemma \ref{det.estimate}] The statement follows from Lemma \ref{det.KM}, provided that we show that \begin{align}\label{L.2.norm} \| \nabla(w_\varepsilon - W_\varepsilon) \|_{L^2(D)}^2 + \| w_{\varepsilon} - 1 \|_{L^2(D)}^2 \lesssim \varepsilon^{d+2} \sum_{z \in n^\varepsilon(D)} \rho_z^{d-2} + \varepsilon^d \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_z^{d-2} \end{align} and that for every $\varepsilon >0$ and $k \in \mathbb{N}$ such that $k \varepsilon \leq 1$ \begin{equation} \label{H.minus} \begin{aligned} \| \mu_\varepsilon - C_0 \|_{H^{-1}(D)}^2 &\lesssim (k\varepsilon)^2 \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)} \\ &\quad \quad + \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in \mathring{N_k}} (S_{k,z} - \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 + (k\varepsilon)^3 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_{k}} (S_{k,z} - \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 . \end{aligned} \end{equation} \smallskip We first argue \eqref{L.2.norm}: By definition \eqref{oscillating.bad} of $w_\varepsilon^b$ and Lemma \ref{l.geometry.periodic}, we have that \begin{align}\label{grad.w.bad} \| \nabla w_\varepsilon^b \|_{L^2(\mathbb{R}^d)}^2 \lesssim \varepsilon^d \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_z^{d-2}.
\end{align} Since by Lemma \ref{l.geometry.periodic} the sets $\bigcup_{z \in n^\varepsilon(D)} B_{\frac \varepsilon 4}(\varepsilon z)$ and $D_b^\varepsilon$ are disjoint, we appeal to \eqref{oscillating.function} to estimate \begin{align}\label{L.2.norm.1} \| w_\varepsilon - 1 \|_{L^2(D)}^2 = \sum_{z \in n^\varepsilon(D)} \| w_\varepsilon^g - 1 \|_{L^2(B_{\frac \varepsilon 4}(\varepsilon z))}^2 + \| w_\varepsilon^b -1 \|_{L^2(D_\varepsilon^b \cap D)}^2. \end{align} The function $w_\varepsilon^g-1$ vanishes on $\bigcup_{z \in n^\varepsilon(D)} \partial B_{\frac \varepsilon 4}(\varepsilon z )$: Since the balls $\{ B_{\frac{\varepsilon}{4}}(\varepsilon z)\}_{z \in n^\varepsilon(D)}$ are all disjoint, Poincar\'e's inequality in each ball $B_{\frac \varepsilon 4}(\varepsilon z)$ yields \begin{align*} \| w_\varepsilon^g - 1 \|_{L^2(D)}^2 \lesssim \varepsilon^2 \sum_{z \in n^\varepsilon(D)} \| \nabla w_\varepsilon^g \|_{L^2(B_{\frac \varepsilon 4}(\varepsilon z))}^2. \end{align*} Using definition \eqref{oscillating.good}, we may rewrite \begin{align*} \| w_\varepsilon^g - 1 \|_{L^2(D)}^2 \lesssim \varepsilon^{d+2} \sum_{z \in n^\varepsilon(D)} \rho_z^{d-2}, \end{align*} and, inserting this into \eqref{L.2.norm.1}, also \begin{align}\label{L.2.norm.2} \| w_\varepsilon - 1 \|_{L^2(D)}^2 \lesssim \varepsilon^{d+2} \sum_{z \in n^\varepsilon(D)} \rho_z^{d-2} + \| w_\varepsilon^b -1 \|_{L^2(D_\varepsilon^b \cap D)}^2. \end{align} To conclude the proof of \eqref{L.2.norm} for $w_\varepsilon -1$, it thus remains to estimate the last term on the right-hand side. By construction (cf. \eqref{oscillating.bad}), we have $w_\varepsilon^b -1 = 0$ on $\partial D_\varepsilon^b$; appealing to Lemma \ref{l.geometry.periodic}, we also have that $D^\varepsilon_b \subset \{ x \in \mathbb{R}^d \, \colon \, \mathop{dist}(x, D) \leq 2 \}$.
We thus apply Poincar\'e's inequality in this set and conclude that \begin{align*} \| w_\varepsilon^b - 1 \|_{L^2(D_\varepsilon^b \cap D)}^2 \lesssim \|\nabla w_\varepsilon^b \|_{L^2(D_\varepsilon^b)}^2 \stackrel{\eqref{grad.w.bad}}{\lesssim} \varepsilon^d \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_z^{d-2}. \end{align*} To establish \eqref{L.2.norm} for $w_\varepsilon -1$, it only remains to combine this last inequality with \eqref{L.2.norm.2}. \smallskip We now argue \eqref{L.2.norm} for $\nabla(w_\varepsilon - W_\varepsilon)$: By definition \eqref{thinning.psi} and \eqref{good.set.periodic} of Lemma \ref{l.geometry.periodic}, it holds \begin{align}\label{inclusion.set} n^\varepsilon(D) \subset \Phi^\varepsilon_\delta(D). \end{align} Thanks to definition \eqref{oscillating.function} for $w_\varepsilon$ and the fact that, by Lemma \ref{l.geometry.periodic}, the supports of $\nabla w_g^\varepsilon$ and $\nabla w_b^\varepsilon$ are disjoint, we use the triangle inequality to infer that \begin{align}\label{comparison.periodic} \| \nabla ( w_\varepsilon - W_\varepsilon ) \|_{L^2(D)}^2 &\lesssim \| \nabla ( w_g^\varepsilon- W_\varepsilon ) \|_{L^2(D)}^2 + \| \nabla w_b^\varepsilon\|_{L^2(D)}^2\\ & \stackrel{\eqref{grad.w.bad}}{\lesssim} \| \nabla ( w_g^\varepsilon- W_\varepsilon ) \|_{L^2(D)}^2 + \varepsilon^d \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_z^{d-2}. \end{align} Comparing definition \eqref{oscillating.good} for $w_g^\varepsilon$ with definition \eqref{corrector} for $W_\varepsilon$ and using inclusion \eqref{inclusion.set}, we observe that \begin{align*} \nabla (w_g^\varepsilon - W_\varepsilon) = \sum_{z \in \Phi^\varepsilon_\delta(D) \backslash n^\varepsilon(D)} \nabla W_\varepsilon \mathbf{1}_{B_{\frac \varepsilon 4}(\varepsilon z)}.
\end{align*} Since the balls $\{ B_{\frac \varepsilon 4}(\varepsilon z) \}_{z \in \Phi^\varepsilon_\delta(D)}$ are disjoint, the previous identity and the triangle inequality imply that \begin{align*} \|\nabla (w_g^\varepsilon - W_\varepsilon)\|_{L^2(D)}^2& \lesssim \sum_{z \in \Phi^\varepsilon_\delta(D) \backslash n^\varepsilon(D)} \|\nabla w_{\varepsilon,z} \|_{L^2(B_{\frac \varepsilon 4}(\varepsilon z))}^2\\ &\stackrel{\eqref{def.harmonic.annuli}}{=} \sum_{z \in \Phi^\varepsilon_\delta(D) \backslash n^\varepsilon(D)} \capacity(B_{\eps^\frac{d}{d-2}\rho_z}(\varepsilon z) ; B_{\frac \varepsilon 4}(\varepsilon z)) \stackrel{\eqref{thinning.psi}}{\lesssim} \varepsilon^d \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)}\rho_z^{d-2}. \end{align*} Inserting this bound into \eqref{comparison.periodic} yields \eqref{L.2.norm} also for the norm of $\nabla(w_\varepsilon- W_\varepsilon)$. \bigskip We now turn to \eqref{H.minus} and claim that we may apply Lemma \ref{Kohn_Vogelius} with $M= \mu_\varepsilon$, $\mathcal{Z}= \{ \varepsilon w \}_{w \in \Phi^\varepsilon_\delta(D)}$, $\mathcal{X}=\{ \eps^\frac{d}{d-2} \rho_w\}_{w \in \Phi^\varepsilon_\delta(D)}$ and $r_w\equiv \frac \varepsilon 4$ for every $w \in \Phi^\varepsilon_\delta(D)$. As covering $\{K_j\}_{j \in J}$ we use the sets $\{ Q_{k, z}\}_{z \in N_k}$. Conditions \eqref{well.defined.cells} and \eqref{contains.balls} are satisfied thanks to \eqref{thinning.psi} and by construction (see Subsection \ref{sub.covering}), respectively. Appealing to Lemma \ref{Kohn_Vogelius}, we therefore have that \begin{align*} \| \mu_\varepsilon - m_k \|_{H^{-1}(D)}^2 \lesssim (k\varepsilon)^2 \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)}, \ \ \ m_k \stackrel{\eqref{averaged.sum}}{=} c_d \sum_{z \in N_{k}} S_{k,z} \mathbf{1}_{Q_{k, z}}.
\end{align*} By the triangle inequality and the previous estimate, we thus bound \begin{align}\label{H.minus.1} \| \mu_\varepsilon - C_0 \|_{H^{-1}(D)}^2 \lesssim (\varepsilon k)^2 \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)} + \| m_k - C_0 \|_{H^{-1}(D)}^2 \end{align} so that, to prove \eqref{H.minus}, it only remains to control the last term on the right-hand side above. We do this by observing that for each $\phi \in H^1_0(D)$ we have \begin{align} |\langle m_k - C_0 ; \phi \rangle | \simeq | \sum_{z \in N_k} (S_{k,z} -\mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr]) \int_{Q_{k,z} \cap D}\phi | \end{align} and, by the triangle inequality, also \begin{equation} \begin{aligned}\label{H.minus.5} |\langle m_k - C_0 ; \phi \rangle | &\stackrel{\eqref{interior.cubes}}{\lesssim} | \sum_{z \in \mathring{N}_{k}} (S_{k,z} -\mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr]) \int_{Q_{k,z}}\phi |\\ &\quad\quad + | \sum_{z \in N_k \backslash \mathring{N}_k} (S_{k,z} -\mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr]) \int_{Q_{k,z} \cap D}\phi |. \end{aligned} \end{equation} \smallskip We claim that \begin{align}\label{H.minus.6.b} | \sum_{z \in \mathring{N}_{k}} (S_{k,z} -\mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr]) \int_{Q_{k,z}}\phi | \lesssim \bigl( \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in \mathring{N}_k} (S_{k,z} - \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 \bigr)^{\frac 1 2} \bigl( \int_D |\nabla \phi |^2 \bigr)^{\frac 12}. \end{align} This is an easy consequence of the properties of the covering $\{ Q_{k,z} \}_{z \in N_k}$ of $D$, \eqref{number.N.k}, together with the Cauchy-Schwarz inequality and Poincar\'e's inequality for $\phi$ in $D$. \smallskip We now turn to the second term in \eqref{H.minus.5}. We note that, by definition \eqref{interior.cubes}, we have the inclusion $$ \bigcup_{z \in N_k \backslash \mathring{N}_{k}} Q_{k, z} \subset \{ x \in \mathbb{R}^d \, \colon \, \mathop{dist}( x; \partial D) \leq 4k \varepsilon \}.
$$ Since $\phi \in H^1_0(D)$ and $D$ is a smooth and bounded set, we may appeal to Poincar\'e's inequality {\color{red} \cite{???}} to bound \begin{align}\label{poincare.boundary} \bigl(\sum_{z \in N_k \backslash \mathring{N}_k} \int_{Q_{k,z}}|\phi|^2 \bigr)^{\frac 1 2} \lesssim (k \varepsilon) \bigl(\int_D |\nabla \phi|^2 \bigr)^{\frac 1 2}. \end{align} Appealing once again to the Cauchy-Schwarz inequality and using the above estimate, we control \begin{align*} | \sum_{z \in N_k \backslash \mathring{N}_{k}} (S_{k,z} -\mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr]) \int_{Q_{k,z} \cap D}\phi | \lesssim \bigl((k\varepsilon)^{d+2} \sum_{z \in N_k \backslash \mathring{N}_{k}} (S_{k,z} - \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 \bigr)^{\frac 1 2} \bigl(\int_D |\nabla \phi|^2 \bigr)^{\frac 12}. \end{align*} Hence, provided $k \varepsilon \leq 1$, we may appeal to \eqref{number.boundary.cubes} and infer that \begin{align*} | \sum_{z \in N_k \backslash \mathring{N}_{k}} (S_{k,z} -\mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr]) \int_{Q_{k,z} \cap D}\phi | &{\lesssim} \bigl((k\varepsilon)^3 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_{k}} (S_{k,z} - \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 \bigr)^{\frac 1 2} \bigl(\int_D |\nabla \phi|^2 \bigr)^{\frac 12}.
\end{align*} Combining this with \eqref{H.minus.6.b} and \eqref{H.minus.5} allows us to infer that for every $\phi \in H^1_0(D)$ \begin{align*} |\langle m_k - C_0 ; \phi \rangle | {\lesssim} \bigl((k\varepsilon)^3 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_{k}} (S_{k,z} - \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 + \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in \mathring{N}_k} (S_{k,z} - \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 \bigr)^{\frac 1 2} \bigl( \int_D |\nabla \phi |^2 \bigr)^{\frac 12}, \end{align*} or, equivalently, that \begin{align}\label{H.minus.7} \| m_k - C_0 \|_{H^{-1}(D)}^2 {\lesssim} (k\varepsilon)^3 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_{k}} (S_{k,z} - \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 + \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in \mathring{N}_k} (S_{k,z} - \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2. \end{align} This, together with \eqref{H.minus.1}, establishes \eqref{H.minus}. The proof of Lemma \ref{det.estimate} is complete. \end{proof} \bigskip \begin{proof}[Proof of Lemma \ref{det.KM}] The argument for this lemma is very similar to the one of \cite[Theorem 3.1]{Kacimi_Murat}. Since $f \in L^\infty$ and $D$ is smooth, by standard elliptic regularity we infer that $u \in W^{2,\infty}(D)$.
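Before carrying out this computation, we record the elementary algebra behind the identity \eqref{distributional.laplacian} derived below. This is our own sketch, which assumes (as in the setting of Theorem \ref{t.main}) that $-\Delta u_\varepsilon = f$ in $D^\varepsilon$ and that the homogenized limit solves $-\Delta u + C_0 u = f$ in $D$:

```latex
% Hedged sketch (our reconstruction, not part of the original argument):
% use -\Delta u_\varepsilon = f in D^\varepsilon and f = -\Delta u + C_0 u in D.
\begin{align*}
-\Delta( u_\varepsilon - w_\varepsilon u)
 &= f + (\Delta w_\varepsilon) u + 2\, \nabla w_\varepsilon \cdot \nabla u
      + w_\varepsilon \Delta u \\
 &= (C_0 + \Delta w_\varepsilon) u + 2\, \nabla w_\varepsilon \cdot \nabla u
      - (1- w_\varepsilon) \Delta u
 \ \ \ \ \text{in $D^\varepsilon$.}
\end{align*}
% This coincides with the right-hand side of \eqref{distributional.laplacian}
% after rewriting
%   2 \nabla w_\varepsilon \cdot \nabla u
%     = -2 \nabla \cdot ( (1-w_\varepsilon) \nabla u ) + 2 (1-w_\varepsilon) \Delta u.
```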
By computing the (distributional) Laplacian of $u_\varepsilon - w_\varepsilon u$ we obtain that in $D^\varepsilon$ \begin{align}\label{distributional.laplacian} -\Delta ( u_\varepsilon - &w_\varepsilon u) = (C_0 + \Delta w_\varepsilon) u - 2 \nabla \cdot ( (1-w_\varepsilon) \nabla u ) + (1-w_\varepsilon)\Delta u. \end{align} We now smuggle the term $(-\Delta W_\varepsilon) u \in H^{-1}(D)$ into the right-hand side so that the previous identity turns into \begin{align*} -\Delta &( u_\varepsilon - w_\varepsilon u) \\ &= (C_0 + \Delta W_\varepsilon) u - \Delta( W_\varepsilon- w_\varepsilon) u - 2 \nabla \cdot ( (1-w_\varepsilon) \nabla u ) + (1-w_\varepsilon)\Delta u \ \ \ \ \text{in $D^\varepsilon$.} \end{align*} We stress that, since $u \in W^{2,\infty}(D)\cap H^1_0(D)$, $u_\varepsilon \in H^1_0(D^\varepsilon)$, $w_\varepsilon \in H^1(D)$, the above equation holds in the sense that for every $\phi \in H^1_0(D^\varepsilon)$ \begin{align}\label{weak.formulation} \int \nabla \phi \cdot \nabla( u_\varepsilon - w_\varepsilon u) &= \langle C_0 + \Delta W_\varepsilon ; u \phi \rangle + \int \nabla(W_\varepsilon- w_\varepsilon) \cdot \nabla(u \phi) \\ &\quad\quad + 2\int (1-w_\varepsilon) \nabla u \cdot \nabla \phi + \int (1-w_\varepsilon) \Delta u \, \phi . \end{align} Since the balls $\{ B_{\frac \varepsilon 4}(\varepsilon z)\}_{z \in \Phi_\delta^\varepsilon(D)}$ are all mutually disjoint, by definition \eqref{corrector} and equations \eqref{def.harmonic.annuli} we have that $$ -\Delta W_\varepsilon = \sum_{z \in \Phi_\delta^\varepsilon(D)} \partial_n w_{\varepsilon,z} (\mathbf{1}_{\partial B_{\frac \varepsilon 4}(\varepsilon z)} - \mathbf{1}_{\partial B_{\eps^\frac{d}{d-2}\rho_z}(\varepsilon z)}).
$$ Since $\phi \in H^1_0(D^\varepsilon)$, and therefore vanishes on the spheres $\{\partial B_{\eps^\frac{d}{d-2}\rho_z}(\varepsilon z)\}_{z \in \Phi_\delta^\varepsilon(D)}$, the above identity implies that $$ \langle \Delta W_\varepsilon ; u\phi \rangle = - \sum_{z \in \Phi_\delta^\varepsilon(D)} \int_{\partial B_{\frac \varepsilon 4}(\varepsilon z)} \partial_n w_{\varepsilon,z} u \phi \stackrel{\eqref{def.mu.eps}}{=}- \langle \mu_\varepsilon ; u \phi \rangle. $$ Inserting this last identity in \eqref{weak.formulation}, we infer that \begin{align*} \int \nabla \phi \cdot \nabla( u_\varepsilon - w_\varepsilon u) &= \langle C_0 - \mu_\varepsilon; u \phi \rangle + \int \nabla(W_\varepsilon- w_\varepsilon) \cdot \nabla(u \phi) \\ &\quad\quad + 2\int (1-w_\varepsilon) \nabla u \cdot \nabla \phi + \int (1-w_\varepsilon) \Delta u \, \phi . \end{align*} We now choose $\phi = u_\varepsilon- w_\varepsilon u$ and apply H\"older's and Poincar\'e's inequalities to bound \begin{align*} \| u_\varepsilon - w_\varepsilon u \|_{H^1_0(D)}^2 \lesssim \| u \|_{W^{2,\infty}}^2\bigl( \| w_\varepsilon - 1\|_{L^2(D)}^2 + \| \nabla (w_\varepsilon- W_\varepsilon) \|_{L^2(D)}^2 + \| \mu_\varepsilon - C_0 \|_{H^{-1}(D)}^2 \bigr). \end{align*} To obtain the claim of Lemma \ref{det.KM} it remains to use that, by the triangle inequality and H\"older's inequality, we have \begin{align*} \|\nabla( u_\varepsilon - W_\varepsilon u) \|_{L^2(D)} \leq \|u \|_{W^{2,\infty}} \| W_\varepsilon - w_\varepsilon \|_{H^1(\mathbb{R}^d)} + \| u_\varepsilon - w_\varepsilon u \|_{H^1_0(D)} \end{align*} and that, by definitions \eqref{corrector} and \eqref{oscillating.function}, the difference $W_\varepsilon- w_\varepsilon$ is compactly supported in $\{ x \in \mathbb{R}^d \, \colon \, \mathop{dist}(x; \partial D) \leq 4 \}$ (see also Lemma \ref{l.geometry.periodic}).
\end{proof} \bigskip \subsection{Annealed estimates (Proof of Theorem \ref{t.main}, $(a)$)} In this subsection we rely on the quenched estimate of Lemma \ref{det.estimate} to prove the statement of Theorem \ref{t.main} in the case of periodic centres. The first ingredient is the following annealed bound: \bigskip \begin{lem}\label{l.capacity.average} Let $(\Phi , \mathcal{R})$ satisfy the assumptions of Theorem \ref{t.main}, $(a)$. For every $\delta \in (0, \frac{2}{d-2}]$, let $n^\varepsilon(D) \subset \Phi^\varepsilon(D)$ be the random subset constructed in Lemma \ref{l.geometry.periodic}. Then \begin{align*} \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_z^{d-2} \bigr] \lesssim \varepsilon^{(\frac{2}{d-2} -\delta)\beta}. \end{align*} \end{lem} \bigskip \begin{proof}[Proof of Theorem \ref{t.main}, $(a)$] By the assumptions on $D$, we may find a constant $c=c(D) \leq 1$ such that for every $\varepsilon > 0$ and every $k \in \mathbb{N}$ with $\varepsilon k \leq c$ the cube $Q_{k,0}$ is contained in $D$. We restrict to the values of $k\in\mathbb{N}$ satisfying the previous bound. \smallskip Combining Lemma \ref{det.estimate} and Lemma \ref{l.capacity.average}, we bound for every $\varepsilon> 0$ and $k\in \mathbb{N}$ as above \begin{align*} \mathbb{E} \bigl[ \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}^2 \bigr] &\lesssim (k\varepsilon)^2 \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)} \bigr] + \mathbb{E} \bigl[ \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in \mathring{N}_k} (S_{k,z} - \mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr] \\ &\quad\quad + (\varepsilon k)^3 \mathbb{E} \bigl[ \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_k} (S_{k,z} - \mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr] +\varepsilon^{(\frac{2}{d-2} -\delta)\beta}.
\end{align*} Since the sets $N_k, \mathring{N}_k$ are deterministic and the random variables $\{ S_{k,z}\}_{z \in \mathring{N}_k}$ are identically distributed, we infer that \begin{align*} \mathbb{E} \bigl[ \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}^2 \bigr] &\lesssim (k\varepsilon)^2 \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)} \bigr] + \mathbb{E} \bigl[(S_{k,0} - \mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr] \\ &\quad\quad + (\varepsilon k)^3 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_k} \mathbb{E} \bigl[ (S_{k,z} - \mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr] + \varepsilon^{(\frac{2}{d-2} -\delta)\beta}. \end{align*} We observe that $$ \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)} \bigr] \lesssim \mathbb{E} \bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}} \bigr]. $$ This estimate is, indeed, an easy consequence of the definition \eqref{thinning.psi} and of the fact that $\Phi^\varepsilon(D)$ is deterministic with $\#\Phi^\varepsilon(D) \lesssim \varepsilon^{-d}$. The previous two displays thus imply that \begin{align}\label{main.1} \mathbb{E} \bigl[ \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}^2 \bigr] &\lesssim (k\varepsilon)^2 \mathbb{E} \bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}} \bigr] + \mathbb{E} \bigl[(S_{k,0} - \mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr] \\ &\quad\quad + (\varepsilon k)^3 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_k} \mathbb{E} \bigl[ (S_{k,z} - \mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr] +\varepsilon^{(\frac{2}{d-2} -\delta)\beta}.
\end{align} \smallskip We now claim that if we choose $k \leq \varepsilon^{-\frac{2}{d+2}}$, then \begin{equation} \begin{aligned}\label{main.0.b} \mathbb{E} \bigl[ \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}^2 \bigr] &\lesssim (k\varepsilon)^2\mathbb{E} \bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}} \bigr] \\ & \quad\quad\quad + k^{-d} \mathop{Var}(Y_{\varepsilon,0} \mathbf{1}_{\rho< \varepsilon^{-\frac{2}{d-2} +\delta}} ) +\varepsilon^{(\frac{2}{d-2} -\delta)\beta}, \end{aligned} \end{equation} where $Y_{\varepsilon,0}$ is defined as in \eqref{averaged.sum}. We remark that for $k$ as above, we have that $\varepsilon k \to 0$ when $\varepsilon \downarrow 0$ and therefore that $\varepsilon k \leq c$ for $\varepsilon$ small enough (only depending on $D$ and $d$). We begin by showing how to conclude the proof of the theorem provided the estimate in the previous display holds. \smallskip Let us first assume that \eqref{integrability.radii} holds with $\beta \geq d-2$; in this case, we have that $$ \mathbb{E}\bigl[ \rho^{2(d-2)} \bigr] + \mathbb{E}\bigl[ Y_{\varepsilon,0}^{2} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}} \bigr] \lesssim 1 $$ and therefore that \begin{align}\label{important.estimate} \mathbb{E} \bigl[ \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}^2 \bigr] &\lesssim (k\varepsilon)^2 + k^{-d} + \varepsilon^{(\frac{2}{d-2}-\delta) \beta}. \end{align} The estimate of Theorem \ref{t.main} for $\beta \geq d-2$ follows from this inequality if we minimize the right-hand side above in $k$, i.e. if we choose $k= \lfloor \varepsilon^{-\frac{2}{d+2}} \rfloor$, and set $\delta$ as in Theorem \ref{t.main}.
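For the reader's convenience, the optimization in $k$ can be made explicit; the following is our own computation, which balances the two $k$-dependent terms in \eqref{important.estimate}:

```latex
% Hedged sketch (our computation): the two k-dependent terms in
% \eqref{important.estimate} balance when
\begin{align*}
(k\varepsilon)^2 = k^{-d}
 \ \iff \ k^{d+2} = \varepsilon^{-2}
 \ \iff \ k = \varepsilon^{-\frac{2}{d+2}},
\end{align*}
% so that, for k = \lfloor \varepsilon^{-\frac{2}{d+2}} \rfloor, both terms are of
% order \varepsilon^{\frac{2d}{d+2}} and \eqref{important.estimate} turns into
\begin{align*}
\mathbb{E} \bigl[ \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}^2 \bigr]
 \lesssim \varepsilon^{\frac{2d}{d+2}} + \varepsilon^{(\frac{2}{d-2}-\delta)\beta}.
\end{align*}
```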
\smallskip Let us now assume that $\beta < d-2$ in \eqref{integrability.radii}: In this case, we bound \begin{align} \mathop{Var}(Y_{\varepsilon,0} \mathbf{1}_{\rho< \varepsilon^{-\frac{2}{d-2}+\delta}} ) + \mathbb{E}\bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}} \bigr] \lesssim \varepsilon^{-(\frac{2}{d-2}- \delta)(d-2-\beta)} \end{align} so that \eqref{main.0.b} turns into \begin{align} \mathbb{E} \bigl[ \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}^2 \bigr] &\lesssim \bigl( (k\varepsilon)^2 + k^{-d} \bigr) \varepsilon^{-(\frac{2}{d-2}- \delta)(d-2-\beta)} + \varepsilon^{(\frac{2}{d-2}-\delta) \beta}. \end{align} Also in this case, we infer the estimate of Theorem \ref{t.main} by minimizing the right-hand side in $k$ and $\delta$, i.e. choosing $k= \lfloor \varepsilon^{-\frac{2}{d+2}} \rfloor$ and $\delta$ as in Theorem \ref{t.main}. \bigskip To complete the proof of the theorem it only remains to argue \eqref{main.0.b} from \eqref{main.1}. We first tackle the second term on the right-hand side of \eqref{main.1} and show that \begin{align}\label{main.3} \mathbb{E} \bigl[(S_{k,0} - \mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr] &\lesssim k^{-d} \mathop{Var}(Y_{\varepsilon,0} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}})+ \varepsilon^4 \mathbb{E} \bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr]^2 \\ &\quad \quad + \varepsilon^{2(\frac{2}{d-2} -\delta)\beta}. \end{align} We begin by remarking that the definitions of $\Phi^\varepsilon_{\delta}(D)$ and of $N_{k,z}$ (see \eqref{thinning.psi} and \eqref{number.covering}) imply that \begin{align}\label{N.k.lattice} N_{k,z} = \bigl\{ w \in \mathbb{Z}^d \, \colon \, \varepsilon w \in Q_{k,z} \cap D, \ \ \eps^\frac{d}{d-2}\rho_w < \varepsilon^{1+\delta} \bigr\}.
\end{align} Since $0 \in \mathring{N}_k$, this last identity allows us to rewrite \begin{align*} S_{k,0} - \mathbb{E}_\rho \bigl[\rho^{d-2} \bigr] &\stackrel{\eqref{averaged.sum}}{=} \mathop{\mathpalette\avsuminner\relax}\displaylimits_{w \in \mathbb{Z}^d \atop \varepsilon w \in Q_{k, 0}}Y_{\varepsilon,w} \mathbf{1}_{\rho_w < \varepsilon^{-\frac{2}{d-2}+\delta}} - \mathbb{E}_\rho \bigl[\rho^{d-2} \bigr] \\ &= \mathop{\mathpalette\avsuminner\relax}\displaylimits_{w \in \mathbb{Z}^d \atop \varepsilon w \in Q_{k, 0}} \bigl(Y_{\varepsilon,w}\mathbf{1}_{\rho_w < \varepsilon^{-\frac{2}{d-2}+\delta}} - \mathbb{E}_\rho \bigl[\rho^{d-2} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}} \bigr] \bigr) + \mathbb{E}_\rho \bigl[\rho^{d-2} \mathbf{1}_{\rho \geq \varepsilon^{-\frac{2}{d-2}+\delta}} \bigr]. \end{align*} Hence, \begin{align}\label{main.3.c} \mathbb{E} \bigl[ (S_{k,0} - \mathbb{E}_\rho \bigl[\rho^{d-2} \bigr] )^2\bigr] &\lesssim \mathbb{E} \bigl[ (\mathop{\mathpalette\avsuminner\relax}\displaylimits_{w \in \mathbb{Z}^d \atop \varepsilon w \in Q_{k, 0}}Y_{\varepsilon,w} \mathbf{1}_{\rho_w < \varepsilon^{-\frac{2}{d-2} +\delta}} - \mathbb{E}_\rho \bigl[\rho^{d-2}\mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}}\bigr] )^2 \bigr] \\ &\quad \quad + \mathbb{E}_\rho \bigl[\rho^{d-2} \mathbf{1}_{\rho \geq \varepsilon^{-\frac{2}{d-2}+\delta}} \bigr]^2. \end{align} Since, by Chebyshev's inequality and assumption \eqref{integrability.radii}, we have \begin{align}\label{Cheb} \mathbb{E}_\rho \bigl[\rho^{d-2} \mathbf{1}_{\rho \geq \varepsilon^{-\frac{2}{d-2}+\delta}} \bigr]\lesssim \varepsilon^{(\frac{2}{d-2} -\delta)\beta}, \end{align} we may rewrite \eqref{main.3.c} as \begin{align} \mathbb{E} \bigl[ (S_{k,0} - \mathbb{E}_\rho \bigl[\rho^{d-2} \bigr] )^2\bigr] &\lesssim \mathbb{E} \bigl[ (\mathop{\mathpalette\avsuminner\relax}\displaylimits_{w \in \mathbb{Z}^d \atop \varepsilon w \in Q_{k, 0}}Y_{\varepsilon,w} \mathbf{1}_{\rho_w < \varepsilon^{-\frac{2}{d-2}
+\delta}} - \mathbb{E}_\rho \bigl[\rho^{d-2}\mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}}\bigr] )^2 \bigr] \\ &\quad\quad + \varepsilon^{2(\frac{2}{d-2} -\delta)\beta}. \end{align} Using the independence of the random variables $\{\rho_z \}_{z \in \Phi}$, and the fact that for $\Phi =\mathbb{Z}^d$ we have $N^\varepsilon(Q_{k,0}) = k^{d}$ (cf. \eqref{notation.psi}), we decompose \begin{equation} \begin{aligned}\label{main.3.b} \mathbb{E} \bigl[ (\mathop{\mathpalette\avsuminner\relax}\displaylimits_{w \in \mathbb{Z}^d \atop \varepsilon w \in Q_{k, 0}}Y_{\varepsilon,w} \mathbf{1}_{\rho_w < \varepsilon^{-\frac{2}{d-2} +\delta}}& - \mathbb{E}_\rho \bigl[\rho^{d-2}\mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}}\bigr] )^2 \bigr] \lesssim k^{-d} \mathop{Var}(Y_{\varepsilon,0} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}}) \\ & + k^{-2d}\sum_{w \in \mathbb{Z}^d \atop \varepsilon w \in Q_{k, 0}}\sum_{\tilde w \in \mathbb{Z}^d \backslash\{ w \} \atop \varepsilon \tilde w \in Q_{k, 0}} (\mathbb{E} \bigl[ Y_{\varepsilon,0}- \mathbb{E}_\rho\bigl[ \rho^{d-2}\mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}}\bigr]\bigr])^2. \end{aligned} \end{equation} We now observe that, by \eqref{averaged.sum} and the triangle inequality, it holds \begin{align} |\mathbb{E} \bigl[ Y_{\varepsilon,0}- \mathbb{E}_\rho\bigl[ \rho^{d-2}\mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}}\bigr]\bigr]| \lesssim \varepsilon^2 \mathbb{E} \bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr]. \end{align} To obtain \eqref{main.3} it only remains to insert the inequality above into \eqref{main.3.b}.
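The bound in the last display can be checked directly from \eqref{averaged.sum}; the following is our own sketch, valid for $\varepsilon$ small enough that $4^{d-2} \varepsilon^{(d-2)\delta} \leq \frac 1 2$:

```latex
% Hedged sketch (our computation): on the event \{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}\}
% we have 4^{d-2}\varepsilon^2 \rho^{d-2} \leq 4^{d-2}\varepsilon^{(d-2)\delta} \leq 1/2, hence
\begin{align*}
\bigl| Y_{\varepsilon,0} - \rho^{d-2} \bigr| \,
  \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}}
 \stackrel{\eqref{averaged.sum}}{=}
 \frac{4^{d-2} \varepsilon^{2} \rho^{2(d-2)}}{1- 4^{d-2}\varepsilon^{2} \rho^{d-2}} \,
  \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}}
 \lesssim \varepsilon^2 \rho^{2(d-2)} \,
  \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}},
\end{align*}
% and the claimed bound follows by taking expectations.
```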
\bigskip We now turn to the remaining term in \eqref{main.1} and argue that \begin{align}\label{main.4} (\varepsilon k)^3 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_k} \mathbb{E} \bigl[ (S_{k,z} - \mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr] \lesssim (k\varepsilon)^2\mathbb{E} \bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr]. \end{align} By the triangle inequality and assumption \eqref{integrability.radii}, the left-hand side is bounded by \begin{align}\label{main.4.b} (\varepsilon k)^3 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_k} \mathbb{E} \bigl[ (S_{k,z} - \mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr] \lesssim (\varepsilon k)^3 + (\varepsilon k)^3 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_k} \mathbb{E} \bigl[ S_{k,z}^2 \bigr]. \end{align} To establish \eqref{main.4} from this it suffices to remark that, by \eqref{averaged.sum} and \eqref{N.k.lattice}, we have $$ |S_{k,z}|^2 \lesssim \mathop{\mathpalette\avsuminner\relax}\displaylimits_{w \in \mathbb{Z}^d \atop \varepsilon w \in Q_{k,z}\cap D} \rho_w^{2(d-2)}\mathbf{1}_{\rho_w < \varepsilon^{-\frac{2}{d-2} +\delta}} $$ so that this, and the fact that the random variables $\{ \rho_z \}_{z \in \Phi^\varepsilon(D)}$ are identically distributed, yields \begin{align*} \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_k} \mathbb{E} \bigl[ (S_{k,z})^2 \bigr]\lesssim \mathbb{E} \bigl[\mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \rho^{2(d-2)} \bigr]. \end{align*} Inserting this into \eqref{main.4.b} implies \eqref{main.4}.
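For completeness, the pointwise bound on $|S_{k,z}|^2$ used above can be expanded as follows. This is our own sketch, which uses the Cauchy-Schwarz inequality, the bound $\# N_{k,z} \leq k^d$, and the fact that $Y_{\varepsilon,w} \mathbf{1}_{\rho_w < \varepsilon^{-\frac{2}{d-2}+\delta}} \lesssim \rho_w^{d-2} \mathbf{1}_{\rho_w < \varepsilon^{-\frac{2}{d-2}+\delta}}$ for $\varepsilon$ small:

```latex
% Hedged sketch (our computation):
\begin{align*}
|S_{k,z}|^2
 \stackrel{\eqref{averaged.sum}}{=}
 \Bigl| \frac{1}{k^d} \sum_{w \in N_{k,z}} Y_{\varepsilon, w} \Bigr|^2
 \leq \frac{1}{k^d} \sum_{w \in N_{k,z}} Y_{\varepsilon, w}^2
 \stackrel{\eqref{N.k.lattice}}{\lesssim}
 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{w \in \mathbb{Z}^d \atop \varepsilon w \in Q_{k,z} \cap D}
 \rho_w^{2(d-2)} \mathbf{1}_{\rho_w < \varepsilon^{-\frac{2}{d-2}+\delta}} .
\end{align*}
```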
To establish \eqref{main.0.b} it remains to combine \eqref{main.4}, \eqref{main.3} and \eqref{main.1} and use that, for $k \leq \varepsilon^{-\frac{2}{d+2}}$, it holds $$ \varepsilon^4 \mathbb{E} \bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr]^2 \lesssim (\varepsilon k)^2 \mathbb{E} \bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr]. $$ The proof of Theorem \ref{t.main}, $(a)$ is complete. \end{proof} \bigskip \begin{proof}[Proof of Lemma \ref{l.capacity.average}] We resort to the construction of the set $n^\varepsilon(D)$ implemented in Lemma \ref{l.geometry.periodic}: By \eqref{definition.bad.index}, \eqref{index.set.J} and \eqref{index.set.I.tilde} in the proof of Lemma \ref{l.geometry.periodic} we decompose \begin{align}\label{capacity.bad.periodic.1} \varepsilon^d \sum_{w \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_w^{d-2} = \varepsilon^d \sum_{z \in J^\varepsilon_b} \rho_z^{d-2} + \varepsilon^d \sum_{z \in \tilde I^\varepsilon_b} \rho_z^{d-2} \end{align} and prove the statement of Lemma \ref{l.capacity.average} for each one of the two sums. We begin with the first one: Using \eqref{index.set.J} we write \begin{align*} \varepsilon^d \sum_{z \in J^\varepsilon_b} \rho_z^{d-2} = \varepsilon^d \sum_{z \in \Phi^\varepsilon(D)} \rho_z^{d-2} \mathbf{1}_{\rho_z \geq \varepsilon^{-\frac{2}{d-2} +\delta}}. \end{align*} Taking the expectation and using that $\{ \rho_z\}_{z \in \Phi^\varepsilon(D)}$ are identically distributed and that $N^\varepsilon(D) \lesssim \varepsilon^{-d}$, we immediately bound \begin{align}\label{capacity.bad.periodic.sum1} \varepsilon^d \mathbb{E} \bigl[ \sum_{z \in J^\varepsilon_b} \rho_z^{d-2} \bigr] \lesssim \mathbb{E}_\rho \bigl[ \rho^{d-2} \mathbf{1}_{\rho \geq \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr] \stackrel{\eqref{Cheb}}{\lesssim} \varepsilon^{(\frac{2}{d-2} - \delta)\beta}, \end{align} i.e.
the claim of Lemma \ref{l.capacity.average} for the first sum in \eqref{capacity.bad.periodic.1}. \smallskip We now turn to the second sum in \eqref{capacity.bad.periodic.1}: By definition \eqref{index.set.I.tilde}, if $z \in \tilde I_b^\varepsilon$, then $\rho_z \leq \varepsilon^{-\frac{2}{d-2} +\delta}$ and there exists an element $w \in J_b^\varepsilon$ such that $\varepsilon |z -w| < \varepsilon + (\eps^\frac{d}{d-2}\rho_w \wedge 1)$. This allows us to bound \begin{align}\label{capacity.bad.3} \varepsilon^d \sum_{z \in \tilde I^\varepsilon_b} \rho_z^{d-2} &\leq \varepsilon^d \sum_{w \in J^\varepsilon_b} \sum_{z \in \Phi^\varepsilon(D) \backslash \{ w\}, \atop \varepsilon |z-w| < \varepsilon + \eps^\frac{d}{d-2}\rho_w \wedge 1} \rho_z^{d-2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2} +\delta}}\\ &\leq \varepsilon^d\sum_{w \in \Phi^\varepsilon(D)} \mathbf{1}_{\rho_w \geq \varepsilon^{-\frac{2}{d-2} +\delta}} \sum_{z \in \Phi^\varepsilon(D) \backslash \{ w\}, \atop \varepsilon |z-w| < \varepsilon+ \eps^\frac{d}{d-2}\rho_w} \rho_z^{d-2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2} +\delta}} . \end{align} We now take the expectation and use that $\Phi= \mathbb{Z}^d$ and that $\{\rho_z \}_{z\in \Phi}$ are independent and identically distributed: This implies that \begin{align}\label{capacity.bad.4} \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \tilde I^\varepsilon_b} \rho_z^{d-2} \bigr]& \lesssim \mathbb{E}\bigl[ \varepsilon^d \sum_{w \in \Phi^\varepsilon(D)} \mathbf{1}_{\rho_w \geq \varepsilon^{-\frac{2}{d-2} +\delta}} \# \{ z \in \Phi^\varepsilon(D) \backslash \{ w\} \, \colon \, \varepsilon|z-w| < \varepsilon + \eps^\frac{d}{d-2}\rho_w \wedge 1\} \bigr].
\end{align} Since for every $w \in J^\varepsilon_b$ we have $$ \# \{ z \in \Phi^\varepsilon(D) \backslash \{ w\} \, \colon \, \varepsilon|z-w| < \varepsilon + \eps^\frac{d}{d-2}\rho_w \wedge 1\} \lesssim 1 + \rho_w^{d-2}, $$ we obtain that \begin{align}\label{term.I.tilde.periodic} \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \tilde I^\varepsilon_b} \rho_z^{d-2} \bigr]& \lesssim \mathbb{E}\bigl[\varepsilon^d\sum_{w \in \Phi^\varepsilon(D)} \rho_w^{d-2} \mathbf{1}_{\rho_w \geq \varepsilon^{-\frac{2}{d-2} +\delta}}\bigr] \stackrel{\eqref{index.set.J}}{=} \mathbb{E}\bigl[\varepsilon^d \sum_{w \in J^\varepsilon_b}\rho_w^{d-2}\bigr]\\ &\stackrel{\eqref{capacity.bad.periodic.sum1}}{\lesssim} \varepsilon^{(\frac{2}{d-2}-\delta)\beta}. \end{align} This, together with identity \eqref{capacity.bad.periodic.1} and \eqref{capacity.bad.periodic.sum1}, establishes Lemma \ref{l.capacity.average}. \end{proof} \section{Proof of Theorem \ref{t.main}, $(b)$}\label{s.poi.d} In this section we adapt the argument of the previous section to prove Theorem \ref{t.main} also in case $(b)$. As mentioned in Subsection \ref{sub.ideas}, the main challenge is related to the construction of a mesoscopic covering $\{ K_{k,z} \}_{z \in N_k}$ that plays the same role as the covering $\{ Q_{k,z} \}_{z \in N_k}$ of Subsection \ref{sub.covering} for Theorem \ref{t.main}. In the present case the random positions of the centres imply that there are configurations (with positive probability) in which some of the spheres $\{ \partial B_{\frac \varepsilon 4}(\varepsilon z)\}_{z\in \Phi}$ intersect the boundaries of the cubes $\{ Q_{k,z} \}_{z \in N_k}$. This prevents us from appealing to Lemma \ref{Kohn_Vogelius}, as condition \eqref{contains.balls} is not satisfied. \smallskip We stress that all the results contained in this section hold for any dimension $d \geq 3$. However, in the proof of Theorem \ref{t.main}, $(b)$ we obtain the same decay rate as in case $(a)$ only in $d=3$.
In higher dimensions we obtain a slower (but still algebraic) rate. In order to best appreciate this dimensional constraint, throughout the section we work in general dimension $d \geq 3$. \smallskip Throughout this section we set $\delta$ as in Theorem \ref{t.main} and define the parameters \begin{align}\label{right.regime.ppp} k := \lfloor \varepsilon^{-\frac{2}{d+2}} \rfloor, \ \ \ \kappa:= \frac{2}{(d-1)(d+2)}. \end{align} \subsection{Partition of the holes and mesoscopic covering of $D$}\label{sub.covering.ppp} This subsection contains an adaptation to the case of random centres of Lemma \ref{l.geometry.periodic} and of the sets $\{ Q_{k,z}\}_{z \in N_k}$. \bigskip \begin{lem}\label{l.geometry.ppp} Let $\delta$ be as in Theorem \ref{t.main}. We recall the definition \eqref{minimal.distance} of $R_{\varepsilon,z}$. For $\omega \in \Omega$, we consider a realization of the marked point process $(\Phi; \mathcal{R})$ and of the associated set of holes $H^\varepsilon$. Then there exists a partition $$ H^\varepsilon:= H^\varepsilon_{g} \cup H^\varepsilon_{b}, $$ with the following properties: \begin{itemize} \item There exists a subset of centres $n^\varepsilon(D) \subset \Phi^\varepsilon(D)$ such that \begin{align}\label{good.set.ppp} H^\varepsilon_g := \bigcup_{z \in n^\varepsilon(D)} B_{\eps^\frac{d}{d-2} \rho_z}( \varepsilon z ), \ \ \ \ \min_{z \in n^\varepsilon(D)}R_{\varepsilon, z}\geq \varepsilon^{2}, \ \ \ \ \ \max_{z \in n^\varepsilon(D)}\eps^\frac{d}{d-2} \rho_z \leq \varepsilon^{1+\delta}, \end{align} and such that \begin{align}\label{no.overlapping} 2 \sqrt d \eps^\frac{d}{d-2} \rho_z \leq R_{\varepsilon,z}, \ \ \ \ \text{for every $z \in n^\varepsilon(D)$.} \end{align} \smallskip \item There exists a set $D^\varepsilon_b(\omega) \subset \{ x \in \mathbb{R}^d \, \colon \, \mathop{dist}(x, D) \leq 2\}$ satisfying \begin{align} H^\varepsilon_{b} \subset D^\varepsilon_b, \ \ \ \capacity ( H^\varepsilon_b, D_b^\varepsilon) \lesssim \varepsilon^d
\sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_z^{d-2} \label{capacity.sum} \end{align} and for which \begin{align}\label{ppp.distance.good.bad} B_{R_{\varepsilon,z}}(\varepsilon z) \cap D^\varepsilon_b = \emptyset, \ \ \ \ \ \ \ \text{for every $z \in n^\varepsilon(D)$.} \end{align} \end{itemize} \end{lem} \begin{proof}[Proof of Lemma \ref{l.geometry.ppp}] The construction of the sets $H^\varepsilon_g, H^\varepsilon_b$ and $D^\varepsilon_b$ is very similar to the one implemented in the proof of Lemma \ref{l.geometry.periodic} and in the proof of \cite[Lemma 4.2]{GHV}. Also in this case, we denote by $I^\varepsilon_b \subset \Phi^\varepsilon(D)$ the set that generates the holes $H_b^\varepsilon$. We construct $I^\varepsilon_b$ in the following way: As in Lemma \ref{l.geometry.periodic}, we include in it the points $z \in \Phi^\varepsilon(D)$ whose mark $\rho_z$ is bigger than the threshold $\varepsilon^{-\frac{2}{d-2} +\delta}$, namely \begin{align}\label{index.set.ppp.1} J^\varepsilon_b= \Bigl\{ z \in \Phi^\varepsilon(D) \colon \rho_z \geq \varepsilon^{-\frac{2}{d-2} +\delta}\Bigr\}. \end{align} In contrast to the periodic case of Lemma \ref{l.geometry.periodic}, we also need to consider the points of $\Phi^\varepsilon(D)$ which are very close to each other: We indeed define \begin{align}\label{index.set.ppp.2} K^{\varepsilon}_{b}:= \biggl\{ z \in \Phi^\varepsilon(D) \backslash J^\varepsilon_b \, \colon \, R_{\varepsilon,z} \leq \varepsilon^2 \biggr\}. \end{align} We now include in the set $I^\varepsilon_b$ also those points whose radii are large when compared to their distance to the other points, i.e. the set \begin{align}\label{index.set.ppp.4} C^\varepsilon_b:= \Bigl\{ z \in \Phi^\varepsilon(D) \backslash (J^\varepsilon_b \cup K^\varepsilon_b) \, \colon \, 2\sqrt d \eps^\frac{d}{d-2}\rho_z \geq R_{\varepsilon,z} \Bigr\}.
\end{align} Finally, given the holes \begin{align*} \tilde H^\varepsilon_b := \bigcup_{z \in J^\varepsilon_b \cup K^\varepsilon_b \cup C^\varepsilon_b} B_{2 \eps^\frac{d}{d-2}\rho_z}(\varepsilon z), \end{align*} we consider the set of points in $\Phi^\varepsilon(D) \backslash (J^\varepsilon_b \cup K^\varepsilon_b \cup C^\varepsilon_b)$ that are close to the set $\tilde H_b^\varepsilon$, i.e. \begin{align}\label{index.set.ppp.3} \tilde I^\varepsilon_b := \left \{ z \in \Phi^\varepsilon(D) \backslash(J^\varepsilon_b \cup K^\varepsilon_b \cup C^\varepsilon_b) \colon \tilde H^\varepsilon_b \cap B_{R_{\varepsilon,z}}(\varepsilon z) \neq \emptyset \right\}. \end{align} We define \begin{equation}\label{definition.bad.index.ppp} \begin{aligned} I^\varepsilon_b := \tilde I^\varepsilon_b \cup J^\varepsilon_b \cup& K^\varepsilon_b \cup C^\varepsilon_b, \ \ \ n^\varepsilon(D) := \Phi^\varepsilon(D) \backslash I^\varepsilon_b \\ H^\varepsilon_b:= \bigcup_{z \in I^\varepsilon_b} B_{\eps^\frac{d}{d-2}\rho_z}(\varepsilon z), \ \ \ H^\varepsilon_g&:= \bigcup_{z \in n^\varepsilon(D)} B_{\eps^\frac{d}{d-2} \rho_z}(\varepsilon z), \ \ \ D^\varepsilon_b:= \bigcup_{z \in I^\varepsilon_b} B_{2\eps^\frac{d}{d-2}\rho_z}(\varepsilon z). \end{aligned} \end{equation} \bigskip It remains to show that these sets satisfy properties \eqref{good.set.ppp}--\eqref{ppp.distance.good.bad}. Properties \eqref{good.set.ppp} and \eqref{no.overlapping} are immediate consequences of definitions \eqref{index.set.ppp.1}, \eqref{index.set.ppp.2}, \eqref{index.set.ppp.4} and \eqref{definition.bad.index.ppp}. Properties \eqref{capacity.sum} and \eqref{ppp.distance.good.bad} may be proven as \eqref{capacity.sum.periodic} and \eqref{distance.good.bad.periodic} of Lemma \ref{l.geometry.periodic}. The proof of Lemma \ref{l.geometry.ppp} is complete. \end{proof} \smallskip For $k$ as in \eqref{right.regime.ppp}, let $\{ Q_{k,z} \}_{z\in N_k}$ be as in Subsection \ref{sub.covering}.
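We remark that the choice of $k$ in \eqref{right.regime.ppp} balances the two error contributions $(k\varepsilon)^2$ and $k^{-d}$ that compete in the annealed estimates of this section (cf. \eqref{main.2.ppp}): an elementary computation gives $$ (k\varepsilon)^2 \simeq \varepsilon^{2 - \frac{4}{d+2}} = \varepsilon^{\frac{2d}{d+2}}, \qquad k^{-d} \simeq \varepsilon^{\frac{2d}{d+2}}, $$ so that the two terms are of the same order.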
For every $z \in N_k$ we define the sets $N_{k,z}$ as in \eqref{number.covering}. We stress that, in this case, \eqref{number.covering} is ill-defined for the realizations of $\Phi$ such that there are points in $\Phi^\varepsilon_\delta(D)$ that fall on the boundary of the cubes $\{Q_{k,z}\}_{z \in N_k}$. This issue may be easily solved by fixing a deterministic rule that assigns each of these points to one particular cube sharing the boundary considered. We stress that none of the arguments of this section depends on this rule, since the union of the boundaries of the covering $\{Q_{k,z}\}_{z \in N_k}$ has zero Lebesgue measure. \smallskip For $z\in N_k$ and $w \in N_{k,z}$, we define the following modification of the minimal distance $R_{\varepsilon,w}$ (see Figure \ref{covering.pic1}): \begin{align}\label{def.tilde.R} \tilde R_{\varepsilon,w}:= \begin{cases} R_{\varepsilon,w} \ \ \ \ &\text{if $\varepsilon w \in Q_{k-1,z}$}\\ \varepsilon^{1+\kappa} \wedge R_{\varepsilon,w} \ \ \ &\text{if $\mathop{dist}(\varepsilon w; \partial Q_{k,z} ) \leq \varepsilon^{1+\kappa}$}\\ (2^{n-1}\varepsilon^{1+\kappa})\wedge R_{\varepsilon,w} &\text{if $\varepsilon w \notin Q_{k-1,z}$, \ $2^{n-1}\varepsilon^{1+\kappa} \leq \mathop{dist}(\varepsilon w; \partial Q_{k,z} ) \leq 2^n \varepsilon^{1+\kappa}$, \ $n \geq 1$}. \end{cases} \end{align} \begin{figure} \begin{minipage}[c]{7cm} \input{pictures/covering.2} \end{minipage} \caption{{\small The square in the thick black line is $Q_{k,z}$, while the blue one is $Q_{k-1,z}$. The dots are points of $\Phi^\varepsilon_\delta(D)$. The dashed squares correspond to the sets $\{ x \in Q_{k,z}\, \colon \, \mathop{dist}(x; \partial Q_{k,z}) > 2^n \varepsilon^{1+\kappa} \}$. Inside the blue square (i.e. for the green dots) we have $\tilde R_{\varepsilon,w}= R_{\varepsilon,w}$. In each dashed frame (i.e. for the black points), $\tilde R_{\varepsilon, w}= R_{\varepsilon,w} \wedge (2^n \varepsilon^{1+\kappa})$.
} }\label{covering.pic1} \end{figure} We aim at obtaining a (random) collection of disjoint sets $\{ K_{k,z}\}_{z\in N_k}$ having size $\simeq \varepsilon k$ and such that for every $z \in N_k$ and $w \in \Phi^\varepsilon_\delta(D)$ \begin{align} B_{\tilde R_{\varepsilon,w}}(\varepsilon w) \cap K_{k,z} = \emptyset \ \ \ \ \text{or} \ \ \ \ B_{2\tilde R_{\varepsilon,w}}(\varepsilon w) \subset K_{k,z}. \end{align} We modify $\{Q_{k,z}\}_{z \in N_k}$ as follows: For $\kappa$ as in \eqref{right.regime.ppp}, any $z \in N_k$ and $w \in N_{k,z}$, we consider the cubes \begin{align}\label{minimal.cube} \tilde Q_{\varepsilon,w}:= \varepsilon w +2[ - \tilde R_{\varepsilon,w}; \tilde R_{\varepsilon,w}]. \end{align} Note that, by definition \eqref{minimal.distance}, all the cubes above are disjoint. For every $z\in N_k$, we thus set (see Figure \ref{covering.pic}) \begin{align}\label{covering.Poisson} K_{k, z} := \bigl(Q_{k, z} \cup \bigcup_{w \in N_{k,z}} \tilde Q_{\varepsilon,w} \bigr) \backslash \bigcup_{w \in \Phi_{\delta}^\varepsilon(D) \backslash N_{k,z}} \tilde Q_{\varepsilon,w}. \end{align} \begin{figure} \begin{minipage}[c]{7cm} \input{pictures/covering} \end{minipage} \caption{{\small The construction of $K_{k,z}$ from the cube $Q_{k,z}$. The dashed grey area corresponds to the set $K_{k,z}$, while $Q_{k,z}$ is the square bounded by the thick black line. The green dots are the points of $\Phi^\varepsilon_\delta$ that fall inside the set $Q_{k-1,z}$ (here bounded by the dashed blue line). The red dots are the points that are outside of $Q_{k,z}$ but whose associated cube intersects $\partial Q_{k,z}$. The black dots are the points that are in $Q_{k,z} \backslash Q_{k-1,z}$.
Note that the cubes associated to the black and red dots are typically smaller than the ones associated to the green dots due to the cut-off in the definition of $\tilde R_{\varepsilon,w}$.}}\label{covering.pic} \end{figure} Since the cubes $\{ \tilde Q_{\varepsilon,z} \}_{z \in \Phi_\delta^\varepsilon(D)}$ are disjoint we have that \begin{equation} \begin{aligned}\label{properties.covering} &\bigcup_{z \in N_k} K_{k,z} \supseteq D, \ \ \ \mathop{diam}(K_{k,z}) \lesssim k \varepsilon,\\ & ( k - \varepsilon^{\kappa})^d \varepsilon^d \leq |K_{k,z}| \leq ( k + \varepsilon^{\kappa})^d \varepsilon^d. \end{aligned} \end{equation} We emphasize that the previous properties hold for every realization of the point process $\Phi$. The introduction of the modified random variables $\tilde R_{\varepsilon,z}$ is needed to ensure that the volume bounds in \eqref{properties.covering} hold with $\varepsilon^\kappa$ instead of $1$. This yields that the difference between the volumes of the set $K_{k,z}$ and of the cube $Q_{k,z}$ is of order $\varepsilon^{d +\kappa} k^{d-1}$ instead of $\varepsilon^d k^{d-1}$. This condition plays a crucial role in the proof of the theorem (see \eqref{main.5.ppp.2}) and gives rise to the main term that forces the dimensional constraint $d=3$ in the rates of convergence. \subsection{Quenched estimates for the homogenization error} In this subsection we adapt Lemma \ref{det.estimate} to the current setting. As in the case of Lemma \ref{det.estimate}, the next result relies on a variation of Lemma \ref{det.KM} that allows us to replace, in the definition \eqref{def.mu.eps} of $\mu_\varepsilon$, the radii $\frac{\varepsilon}{4}$ with the radii $\tilde R_{\varepsilon,z}$ defined in \eqref{def.tilde.R}.
\smallskip We define the oscillating test function $w_\varepsilon \in H^1(D)$ as done in Subsection \ref{sub.quenched.periodic}, this time using the sets $H^\varepsilon_b, H^\varepsilon_g$ and $D^\varepsilon_b$ of Lemma \ref{l.geometry.ppp} with $\delta$ as in Theorem \ref{t.main}, and $R_{\varepsilon,z}$ instead of $\frac \varepsilon 4$ in \eqref{oscillating.good}. We also define the analogues of \eqref{averaged.sum}, this time associated to the covering $\{ K_{k,z} \}_{z \in N_k}$ constructed in the previous subsection: For every $z \in N_k$ we indeed set \begin{align}\label{averaged.sum.ppp} S_{k,z}:= \frac{\varepsilon^d}{|K_{k,z}|}\sum_{w \in N_{k,z}}Y_{\varepsilon, w}, \ \ \ \ \ Y_{\varepsilon,w}:= \rho_w^{d-2} \frac{\tilde R_{\varepsilon,w}^{d-2}}{\tilde R_{\varepsilon,w}^{d-2}- \varepsilon^d \rho_w^{d-2}}. \end{align} \bigskip \begin{lem}\label{det.estimate.ppp} Let $W_\varepsilon$ be as in \eqref{corrector} and let $w_\varepsilon$ be defined as above. Then, for every $\varepsilon >0$ and $k \in \mathbb{N}$ such that $\varepsilon k \leq 1$ we have that \begin{equation} \begin{aligned} \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}& \lesssim \biggl( (k\varepsilon)^2 \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d + \varepsilon^d \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_z^{d-2} \biggr)^{\frac 1 2}\\ & \quad + \biggl( \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in \mathring{N}_k} (S_{k,z} - \lambda \mathbb{E}\bigl[ \rho^{d-2}\bigr])^2 + (k\varepsilon)^3 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_{k}} (S_{k,z} - \lambda \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 \biggr)^{\frac 1 2}\\ &\quad + \biggl( \varepsilon^{d+\frac{2d}{d+2}}\sum_{z \in N_k} \sum_{w \in N_{k,z}, \atop \varepsilon w \in Q_{k,z} \backslash Q_{k-1,z}} \rho_w^{2(d-2)}\biggr)^{\frac 12}. 
\end{aligned} \end{equation} \end{lem} \bigskip \begin{lem}\label{det.KM.ppp} Let $u_\varepsilon, u$ and $W_\varepsilon$ be as in Lemma \ref{det.estimate} and let $w_\varepsilon$ be as defined above. Then \begin{align*} \| u_\varepsilon - W_\varepsilon u \|_{H^1_0(D)}^2& \lesssim \| w_\varepsilon - 1\|_{L^2(D)}^2 + \|\nabla(w_\varepsilon- W_\varepsilon) \|_{L^2(\mathbb{R}^d)}^2\\ & \quad \quad + \|\nabla(\tilde W_\varepsilon- W_\varepsilon) \|_{L^2(\mathbb{R}^d)}^2 + \| \mu_\varepsilon - C_0 \|_{H^{-1}(D)}^2, \end{align*} where $\tilde W_\varepsilon$ is defined as in \eqref{corrector} with $R_{\varepsilon,z}$ substituted by $\tilde R_{\varepsilon,z}$. Furthermore, in this case \begin{align}\label{def.mu.eps.ppp} \mu_\varepsilon := \sum_{z \in \Phi_\delta^\varepsilon(D)} \partial_n \tilde w_{\varepsilon,z} \, \delta_{\partial B_{\tilde R_{\varepsilon,z}}(\varepsilon z)}, \end{align} with $\tilde w_{\varepsilon,z}$ as in \eqref{def.harmonic.annuli} with $\tilde R_{\varepsilon,z}$ instead of $R_{\varepsilon,z}$. 
\end{lem} { \begin{proof}[Proof of Lemma \ref{det.estimate.ppp}] Analogously to the proof of Lemma \ref{det.estimate}, we appeal to Lemma \ref{det.KM.ppp} and reduce to showing that \begin{align}\label{L.2.norm.ppp} \| \nabla( W_\varepsilon - w_\varepsilon) \|_{L^2(D)}^2 + \| w_{\varepsilon} - 1 \|_{L^2(D)}^2 &\lesssim \varepsilon^{d+2} \sum_{z \in n^\varepsilon(D)} \rho_z^{d-2} + \varepsilon^d \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_z^{d-2},\\ \|\nabla(\tilde W_\varepsilon- W_\varepsilon) \|_{L^2(\mathbb{R}^d)}^2 &\lesssim \varepsilon^{d + \frac{2d}{d+2}}\sum_{z \in N_k} \sum_{w \in N_{k,z}, \atop \varepsilon w \in Q_{k,z} \backslash Q_{k-1,z}} \rho_w^{2(d-2)} \label{comparison.correctors} \end{align} and \begin{equation} \label{H.minus.ppp} \begin{aligned} \| \mu_\varepsilon - C_0\|_{H^{-1}(D)}^2 &\lesssim (k\varepsilon)^2 \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d\\ &\quad + (k\varepsilon)^3 \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_{k}} (S_{k,z} - \lambda \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 + \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in \mathring{N}_k} (S_{k,z} - \lambda \mathbb{E}\bigl[ \rho^{d-2}\bigr] \bigr)^2. \end{aligned} \end{equation} \smallskip Inequality \eqref{L.2.norm.ppp} may be argued exactly as done for \eqref{L.2.norm} in the proof of Lemma \ref{det.estimate}, this time appealing to Lemma \ref{l.geometry.ppp} instead of Lemma \ref{l.geometry.periodic}. \smallskip We thus turn to \eqref{comparison.correctors}. We begin by remarking that $\tilde W_{\varepsilon}$ is well-defined: Indeed, by definitions \eqref{minimal.distance} and \eqref{def.tilde.R}, we have that $\tilde R_{\varepsilon,z} \leq R_{\varepsilon,z} \leq \frac \varepsilon 4$ for every $z \in \Phi^\varepsilon_\delta(D)$. Furthermore, since $\kappa < \delta$ (cf.
\eqref{right.regime.ppp} and Theorem \ref{t.main}), it follows from \eqref{thinning.psi} that $ 2 \eps^\frac{d}{d-2} \rho_z \leq \tilde R_{\varepsilon,z}$, for every $z\in \Phi^\varepsilon_\delta(D)$. Therefore, comparing the two definitions of $W_\varepsilon$ and $\tilde W_\varepsilon$, we use \eqref{def.tilde.R} to bound: \begin{align}\label{comparison.correctors.1} \| \nabla( \tilde W_\varepsilon- W_\varepsilon)\|_{L^2}^2 = \sum_{z \in N_k} \sum_{w \in N_{k,z}, \atop \varepsilon w \in Q_{k,z} \backslash Q_{k-1,z}}\| \nabla( \tilde W_\varepsilon- W_\varepsilon)\|_{L^2(B_{R_{\varepsilon,w}}(\varepsilon w))}^2. \end{align} Since, if $R_{\varepsilon,w} \neq \tilde R_{\varepsilon,w}$, then $\varepsilon^{1+\kappa}\leq \tilde R_{\varepsilon,w} \leq R_{\varepsilon,w}$, we have that \begin{align} \| \nabla( \tilde W_\varepsilon- W_\varepsilon)\|_{L^2(B_{R_{\varepsilon,w}}(\varepsilon w))}^2 &\leq \int_{B_{R_{\varepsilon,w}}(\varepsilon w) \backslash B_{\varepsilon^{1+\kappa}}(\varepsilon w)} |\nabla W_\varepsilon|^2 \\ &\quad \quad + \int_{B_{R_{\varepsilon,w}}(\varepsilon w)\backslash B_{\eps^\frac{d}{d-2} \rho_w}(\varepsilon w)}|\nabla(W_\varepsilon-\tilde W_\varepsilon)|^2.
\end{align} Appealing to \eqref{corrector}, \eqref{def.harmonic.annuli} and the adaptation of \eqref{explicit.formula} for both $\tilde W_\varepsilon$ and $W_\varepsilon$, the previous integrals may be bounded by \begin{align} \| \nabla( \tilde W_\varepsilon- W_\varepsilon)\|_{L^2(B_{R_{\varepsilon,w}}(\varepsilon w))}^2 \lesssim \varepsilon^{d} \rho_w^{2(d-2)} \varepsilon^{2-(d-2)\kappa} + \varepsilon^{d}\rho_w^{3(d-2)} \varepsilon^{2(2-(d-2)\kappa)}. \end{align} Since $w \in \Phi^\varepsilon_\delta(D)$, we have that $\rho_w \leq \varepsilon^{-\frac{2}{d-2}+\delta}$ so that \begin{align} \| \nabla( \tilde W_\varepsilon- W_\varepsilon)\|_{L^2(B_{R_{\varepsilon,w}}(\varepsilon w))}^2& \lesssim \varepsilon^{d} \rho_w^{2(d-2)} \varepsilon^{2-(d-2)\kappa} + \varepsilon^{d}\rho_w^{2(d-2)} \varepsilon^{2-2(d-2)\kappa + (d-2)\delta}\\ & \stackrel{\delta > \kappa}{ \lesssim} \varepsilon^{d} \rho_w^{2(d-2)} \varepsilon^{2-(d-2)\kappa}. \end{align} Inserting this into \eqref{comparison.correctors.1} and appealing to \eqref{right.regime.ppp} for $\kappa$ yields \eqref{comparison.correctors}. \smallskip We finally tackle \eqref{H.minus.ppp}: As done for \eqref{H.minus} of Lemma \ref{det.estimate}, we aim at applying Lemma \ref{Kohn_Vogelius}. We thus pick $\mathcal{Z}= \Phi^\varepsilon_\delta(D)$, $\mathcal{X}= \{ \eps^\frac{d}{d-2} \rho_z \}_{z \in \mathcal{Z}}$ and $\mathcal{R}:=\{ \tilde R_{\varepsilon,z}\}_{z \in \mathcal{Z}}$. As shown above in the argument for \eqref{comparison.correctors}, condition \eqref{well.defined.cells} is satisfied. Moreover, thanks to \eqref{covering.Poisson}, the collection $\{ K_{k,z} \}_{z \in N_k}$ satisfies \eqref{contains.balls}.
Hence, by Lemma \ref{Kohn_Vogelius}, we have that \begin{align*} \| \mu_\varepsilon - m_k \|_{H^{-1}}& \lesssim \max_{z \in N_k} \bigl(\mathop{diam}(K_{k,z})\bigr) \bigl(\varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)}\bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr)^{\frac 1 2}\\ &\stackrel{\eqref{properties.covering}}{\lesssim} \varepsilon k \bigl(\varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)}\bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr)^{\frac 1 2} \end{align*} with \begin{align*} m_k \stackrel{\eqref{averaged.sum.ppp}}{ =} c_d \sum_{z \in N_k} S_{k,z} \mathbf{1}_{K_{k, z}}. \end{align*} By the triangle inequality it thus only remains to control the norm $\| m_k - C_0 \|_{H^{-1}(D)}$. Using \eqref{properties.covering}, this may be done exactly as in the proof of Lemma \ref{det.estimate}. The proof of Lemma \ref{det.estimate.ppp} is complete. \end{proof} \begin{proof}[Proof of Lemma \ref{det.KM.ppp}] This lemma may be argued as done for Lemma \ref{det.KM}. The only difference is that, in \eqref{distributional.laplacian}, we smuggle in $-\Delta \tilde W_\varepsilon$ instead of $-\Delta W_\varepsilon$ and apply the triangle inequality to bound $\| \nabla (\tilde W_\varepsilon- w_\varepsilon )\|_{L^2} \leq \| \nabla (W_\varepsilon- w_\varepsilon )\|_{L^2} + \| \nabla (\tilde W_\varepsilon- W_\varepsilon )\|_{L^2}$. \end{proof} \subsection{Annealed estimates (Proof of Theorem \ref{t.main}, $(b)$)} {As in case $(a)$, the next lemma provides annealed bounds for some of the quantities appearing in the right-hand side of Lemma \ref{det.estimate.ppp}.} \begin{lem}\label{l.capacity.average.ppp} Let $n^\varepsilon(D) \subset \Phi^\varepsilon(D)$ be the (random) subset constructed in Lemma \ref{l.geometry.ppp}.
Then, there exists a constant $C=C(d, \lambda)$ such that \begin{align} \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \Phi^\varepsilon(D) \backslash n^\varepsilon(D)} \rho_z^{d-2} \bigr] \leq C \varepsilon^{(\frac{2}{d-2}-\delta)\beta}. \end{align} \end{lem} \begin{proof}[Proof of Theorem \ref{t.main}, $(b)$] We recall that $k$ satisfies \eqref{right.regime.ppp}. Combining Lemma \ref{det.estimate.ppp} and Lemma \ref{l.capacity.average.ppp}, we bound \begin{equation*} \begin{aligned} \mathbb{E} \bigl[ \| u_\varepsilon - \tilde W_\varepsilon u \|_{H^1_0(D)}^2 \bigr] &\lesssim (k\varepsilon)^2 \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr]\\ &\quad \quad + \mathbb{E} \bigl[ \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in \mathring{N}_k} (S_{k,z} - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr] + (k\varepsilon)^3 \mathbb{E} \bigl[ \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_{k}} (S_{k,z} - \lambda \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 \bigr] \\ &\quad \quad + \varepsilon^{(\frac{2}{d-2}-\delta)\beta} + \varepsilon^{d +\frac{2d}{d+2}} \mathbb{E} \bigl[ \sum_{z \in N_k} \sum_{w \in N_{k,z}, \atop \varepsilon w \in Q_{k,z} \backslash Q_{k-1,z}} \rho_w^{2(d-2)}\bigr]. \end{aligned} \end{equation*} As done in the proof of Theorem \ref{t.main}, $(a)$, this also turns into \begin{equation}\label{main.1.ppp} \begin{aligned} \mathbb{E} \bigl[ \| u_\varepsilon - \tilde W_\varepsilon u \|_{H^1_0(D)}^2 \bigr] &\lesssim (k\varepsilon)^2 \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr]\\ &\quad \quad + \mathbb{E} \bigl[(S_{k,0} - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr] + (k\varepsilon)^3 \mathbb{E} \bigl[ \mathop{\mathpalette\avsuminner\relax}\displaylimits_{z \in N_k \backslash \mathring{N}_{k}} (S_{k,z} - \lambda \mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr])^2 \bigr] \\ &\quad \quad + \varepsilon^{(\frac{2}{d-2}-\delta)\beta}+ \varepsilon^{d +\frac{2d}{d+2}} \mathbb{E} \bigl[ \sum_{z \in N_k} \sum_{w \in N_{k,z}, \atop \varepsilon w \in Q_{k,z} \backslash Q_{k-1,z}} \rho_w^{2(d-2)}\bigr]. \end{aligned} \end{equation} \smallskip We now claim that, thanks to \eqref{right.regime.ppp}, the previous estimate reduces to \begin{equation} \begin{aligned}\label{main.2.ppp} \mathbb{E} \bigl[ \| u_\varepsilon - \tilde W_\varepsilon u \|_{H^1_0(D)}^2 \bigr] &\lesssim \bigl( |\log \varepsilon| (k\varepsilon)^2 + k^{-d}\bigr) \mathbb{E} \bigl[\rho^{2(d-2)}\mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr]\\ &\quad + \varepsilon^{(\frac{2}{d-2}-\delta)\beta} + k^{-2}\varepsilon^{ \frac{4}{(d+2)(d-1)}}. \end{aligned} \end{equation} If the previous estimate holds, by the choice of $\delta$ and \eqref{right.regime.ppp}, we infer that \begin{equation*} \begin{aligned} \mathbb{E} \bigl[ \| u_\varepsilon - \tilde W_\varepsilon u \|_{H^1_0(D)}^2 \bigr] &\lesssim |\log \varepsilon | \varepsilon^{\frac{2d}{d^2-4} \beta \wedge (d-2)} + \varepsilon^{ \frac{2}{(d+2)}( \frac{2}{d-1} + 2)}. \end{aligned} \end{equation*} If $d = 3$, we have that \begin{align}\label{dimensional.constraint} \varepsilon^{ \frac{2}{(d+2)}( \frac{2}{d-1} + 2)} \lesssim |\log \varepsilon | \varepsilon^{\frac{2d}{d^2-4} \beta \wedge (d-2)}, \ \ \ \text{for every $\beta> 0$.} \end{align} This establishes Theorem \ref{t.main}, $(b)$. \bigskip To conclude the proof, we only need to obtain \eqref{main.2.ppp} from \eqref{main.1.ppp}.
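Before doing so, let us record the elementary exponent computation used in the previous step: with $k$ as in \eqref{right.regime.ppp}, the term $k^{-2}\varepsilon^{\frac{4}{(d+2)(d-1)}}$ appearing in \eqref{main.5.ppp.2} satisfies $$ k^{-2}\, \varepsilon^{\frac{4}{(d+2)(d-1)}} \simeq \varepsilon^{\frac{4}{d+2}}\, \varepsilon^{\frac{4}{(d+2)(d-1)}} = \varepsilon^{\frac{2}{d+2}\bigl( \frac{2}{d-1} + 2 \bigr)}, $$ which is exactly the second exponent in the estimate displayed above; the first exponent follows analogously from $|\log \varepsilon| (k\varepsilon)^2 + k^{-d} \simeq |\log \varepsilon| \, \varepsilon^{\frac{2d}{d+2}}$, combined with the bound on the truncated moment of $\rho$ provided by the choice of $\delta$.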
Arguing as for \eqref{main.4} in the proof of Theorem \ref{t.main}, $(a)$, we reduce to \begin{equation}\label{main.3.ppp} \begin{aligned} \mathbb{E} \bigl[ \| u_\varepsilon - \tilde W_\varepsilon u \|_{H^1_0(D)}^2 \bigr] &\lesssim (k\varepsilon)^2 \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr] + \mathbb{E} \bigl[(S_{k,0} - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr]\\ &\quad\quad + \varepsilon^{(\frac{2}{d-2}-\delta)\beta} + \varepsilon^{d +\frac{2d}{d+2}} \mathbb{E} \bigl[ \sum_{z \in N_k} \sum_{w \in N_{k,z}, \atop \varepsilon w \in Q_{k,z} \backslash Q_{k-1,z}} \rho_w^{2(d-2)}\bigr]. \end{aligned} \end{equation} This implies inequality \eqref{main.2.ppp} provided that \begin{align}\label{main.comparison} \varepsilon^{d +\frac{2d}{d+2}} \mathbb{E} \bigl[ \sum_{z \in N_k} \sum_{w \in N_{k,z}, \atop \varepsilon w \in Q_{k,z} \backslash Q_{k-1,z}} \rho_w^{2(d-2)}\bigr]\lesssim (\varepsilon k)^2 \mathbb{E}\bigl[\rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr], \end{align} \begin{align}\label{main.4.ppp} \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr] \lesssim |\log \varepsilon| \mathbb{E} \bigl[\rho^{2(d-2)}\mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr] \end{align} and \begin{align}\label{main.5.ppp.2} \mathbb{E} \bigl[(S_{k,0} - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \bigr]& \lesssim k^{-d} \mathbb{E}\bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}}\bigr]+ \varepsilon^{(\frac{2}{d-2}-\delta)\beta} + k^{-2}\varepsilon^{\frac{4}{(d+2)(d-1)}}.
\end{align} \smallskip We first argue \eqref{main.4.ppp}: Recalling the definition of the covering $\{ Q_{k,z} \}_{z \in N_k}$, we decompose $$ D \subset \bigcup_{z \in N_k} \bigl( Q_{k-1, z} \cup (Q_{k,z} \backslash Q_{k-1,z})\bigr) $$ and rewrite \begin{align} \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)}& \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr]\\ & = \mathbb{E} \bigl[ \varepsilon^d \sum_{w \in N_k}\biggl( \sum_{z \in \Phi_{\delta}^\varepsilon(D), \atop \varepsilon z \in Q_{k-1,w}} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d + \sum_{z \in \Phi_{\delta}^\varepsilon(D), \atop \varepsilon z \in Q_{k,w}\backslash Q_{k-1,w}} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \biggr) \bigr] . \end{align} Since the process $(\Phi , \mathcal{R})$ is stationary, we bound \begin{equation} \begin{aligned}\label{main.5.ppp} \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)}& \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr]\\ &\stackrel{\eqref{number.N.k}}{\lesssim} k^{-d}\mathbb{E} \bigl[ \sum_{z \in \Phi_{\delta}^\varepsilon(D), \atop \varepsilon z \in Q_{k-1,0}} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d + \sum_{z \in \Phi_{\delta}^\varepsilon(D), \atop \varepsilon z \in Q_{k,0}\backslash Q_{k-1,0}} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr]. \end{aligned} \end{equation} Let us partition the cube $Q_{k,0}$ into $k^{d}$ cubes of size $\varepsilon$ and let $Q$ be as in \eqref{cubes.meso}; the definitions of $\Phi^\varepsilon_\delta(D)$ and $\tilde R_{\varepsilon,z}$ (cf.
\eqref{thinning.psi}, \eqref{def.tilde.R}) and the stationarity of $(\Phi, \mathcal{R})$ imply that \begin{align}\label{example.product} k^{-d}\mathbb{E} \bigl[ \sum_{z \in \Phi_{\delta}^\varepsilon(D), \atop \varepsilon z \in Q_{k-1,0}} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr] \lesssim \mathbb{E} \bigl[\sum_{z \in \Phi(Q)} \rho_z^{2(d-2)} \mathbf{1}_{\rho_z \leq \varepsilon^{-\frac{2}{d-2} +\delta}} \mathbf{1}_{R_{\varepsilon,z} \geq \varepsilon^2} \bigl( \frac{\varepsilon}{R_{\varepsilon,z}}\bigr)^d \bigr]. \end{align} We now apply Lemma \ref{product.measure} with $G((x,\rho); \omega) = \bigl(\frac{\varepsilon}{R_{\varepsilon,x}}\bigr)^d \mathbf{1}_{R_{\varepsilon,x} \geq \varepsilon^2} \rho^{2(d-2)} \mathbf{1}_{\rho<\varepsilon^{-\frac{2}{d-2} +\delta}}$ to infer that \begin{align} k^{-d}\mathbb{E} \bigl[ \sum_{z \in \Phi_{\delta}^\varepsilon(D), \atop \varepsilon z \in Q_{k-1,0}} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr] \lesssim \mathbb{E}_\rho \bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho \leq \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr] \mathbb{E}_\Phi \bigl[ \bigl( \frac{\varepsilon}{R_{\varepsilon,0}}\bigr)^d \mathbf{1}_{R_{\varepsilon,0} \geq \varepsilon^2} \bigr]. \end{align} By definition of $R_{\varepsilon,z}$ (see \eqref{minimal.distance}) it follows from the properties of the Poisson point process that \begin{align}\label{log.term} \mathbb{E}_\Phi \bigl[ \mathbf{1}_{R_{\varepsilon,0} > \varepsilon^2}\bigl( \frac{\varepsilon}{R_{\varepsilon,0}}\bigr)^d \bigr] \lesssim |\log \varepsilon|.
\end{align} Hence, $$ k^{-d}\mathbb{E} \bigl[ \sum_{z \in \Phi_{\delta}^\varepsilon(D), \atop \varepsilon z \in Q_{k-1,0}} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr] \lesssim |\log \varepsilon| \mathbb{E}_\rho \bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho \leq \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr] $$ and \eqref{main.5.ppp} turns into \begin{align} \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \Phi_{\delta}^\varepsilon(D)}& \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr]\\ & \lesssim |\log\varepsilon| \mathbb{E}_\rho \bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho \leq \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr] + k^{-d}\mathbb{E}\bigl[ \sum_{z \in \Phi_{\delta}^\varepsilon(D), \atop \varepsilon z \in Q_{k,0}\backslash Q_{k-1,0}} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr]. \end{align} \smallskip We now tackle the remaining term in the inequality above and claim that, thanks to \eqref{right.regime.ppp}, we have that \begin{align}\label{main.6.ppp} k^{-d}\mathbb{E}\bigl[ \sum_{z \in \Phi_{\delta}^\varepsilon(D), \atop \varepsilon z \in Q_{k,0}\backslash Q_{k-1,0}} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr] \lesssim \mathbb{E}_\rho \bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho \leq \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr]. \end{align} Let $Q_{r}$ be the cube of size $r>0$ centred at the origin. Indeed, using \eqref{thinning.psi}, we bound \begin{align} k^{-d}\mathbb{E}\bigl[ \sum_{z \in \Phi_{\delta}^\varepsilon(D), \atop \varepsilon z \in Q_{k,0}\backslash Q_{k-1,0}} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr] \leq k^{-d}\mathbb{E}\bigl[ \sum_{z \in \Phi( Q_{k}\backslash Q_{k-1})} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \mathbf{1}_{R_{\varepsilon,z} \geq \varepsilon^2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2} + \delta}} \bigr].
\end{align} Since we may decompose the set $Q_k \backslash Q_{k-1}$ into $\lesssim k^{d-1}$ unitary cubes, we use again the stationarity of $(\Phi; \mathcal{R})$ and infer that \begin{align}\label{logarithmic.1} k^{-d}\mathbb{E}\bigl[ \sum_{z \in \Phi_{\delta}^\varepsilon(D), \atop \varepsilon z \in Q_{k,0}\backslash Q_{k-1,0}} &\rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \bigr]\\ & \leq k^{-1} \mathbb{E}\bigl[ \sum_{z \in \Phi(Q_1)} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d \mathbf{1}_{R_{\varepsilon,z} \geq \varepsilon^2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2} + \delta}} \bigr], \end{align} where $Q_1$ is any unitary cube that is contained in $Q_k \backslash Q_{k-1}$\footnote{In principle, in this last step one should distinguish between unitary cubes according to the number of faces that they share with the boundary. However, the argument shown below for the term associated to $Q_1$ may be easily adapted to any of the other cubes. }. We now decompose \begin{align} Q_1 &= \sum_{n=0}^{ \lceil -\kappa\log\varepsilon \rceil}A_n,\\ A_n:= \{ x \in Q_1 \, \colon \, 2^n \varepsilon^\kappa \leq \mathop{dist}(x ; \partial Q_k ) &\leq 2^{n+1}\varepsilon^\kappa \}, \ \ \ \ \ A_0:= \{ x \in Q_1 \, \colon \, \mathop{dist}(x ; \partial Q_k ) \leq \varepsilon^\kappa \}
\end{align} and use \eqref{def.tilde.R} to rewrite \begin{align} \mathbb{E}\bigl[ \sum_{z \in \Phi(Q_1)} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d &\mathbf{1}_{R_{\varepsilon,z} \geq \varepsilon^2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2} + \delta}} \bigr]\\ &\lesssim \sum_{n=0}^{ \lceil -\kappa\log\varepsilon \rceil} \mathbb{E}\bigl[ \sum_{z \in \Phi(A_n)} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{ R_{\varepsilon,z} \wedge 2^n\varepsilon^{1+\kappa}}\bigr)^d \mathbf{1}_{R_{\varepsilon,z} \geq \varepsilon^2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2} + \delta}} \bigr]. \end{align} We now appeal again to Lemma \ref{product.measure} as for \eqref{example.product} to reduce to \begin{align} \mathbb{E}\bigl[ \sum_{z \in \Phi(Q_1)} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d &\mathbf{1}_{R_{\varepsilon,z} \geq \varepsilon^2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2} + \delta}} \bigr]\\ &\lesssim \sum_{n=0}^{ \lceil -\kappa\log\varepsilon \rceil} \mathbb{E}_\rho\bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} + \delta}}\bigr] |A_n| \mathbb{E}\bigl[ \bigl( \frac{\varepsilon}{ R_{\varepsilon,0} \wedge 2^n\varepsilon^{1+\kappa}}\bigr)^d \mathbf{1}_{R_{\varepsilon, 0} \geq \varepsilon^2} \bigr].
\end{align} Arguing as for \eqref{log.term} and using the stationarity of $\Phi$, we infer that \begin{align} \mathbb{E}\bigl[ \sum_{z \in \Phi(Q_1)} \rho_z^{2(d-2)} \bigl( \frac{\varepsilon}{\tilde R_{\varepsilon,z}}\bigr)^d &\mathbf{1}_{R_{\varepsilon,z} \geq \varepsilon^2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2} + \delta}} \bigr]\\ &\lesssim \mathbb{E}_\rho\bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} + \delta}}\bigr] \sum_{n=0}^{ \lceil -\kappa\log\varepsilon \rceil} 2^n \varepsilon^{\kappa} \bigl(2^{-dn} \varepsilon^{-d\kappa} - \log \varepsilon \bigr)\\ &\lesssim \mathbb{E}_\rho\bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} + \delta}}\bigr] \varepsilon^{-(d-1)\kappa}. \end{align} To establish \eqref{main.6.ppp} it only remains to combine the previous inequality with \eqref{logarithmic.1} and use \eqref{right.regime.ppp}. The proof of \eqref{main.4.ppp} is therefore complete. \bigskip Inequality \eqref{main.comparison} may be obtained in a similar way to that of \eqref{main.4.ppp}: Since we may decompose the set $\bigcup_{z \in N_k} Q_{k,z}\backslash Q_{k-1,z}$ into $n \lesssim (\varepsilon k)^{-d} k^{d-1}$ disjoint cubes $\{ Q_{\varepsilon,i} \}_{i=1}^n$ of size $\varepsilon$, we use definition \eqref{thinning.psi} and the stationarity of $(\Phi, \mathcal{R})$ to bound \begin{align} \varepsilon^{d + \frac{2d}{d+2}} \mathbb{E} \bigl[ \sum_{z \in N_k} \sum_{w \in N_{k,z}, \atop \varepsilon w \in Q_{k,z} \backslash Q_{k-1,z}} \rho_w^{2(d-2)}\bigr] \lesssim k^{-1}\varepsilon^{\frac{2d}{d+2}}\mathbb{E}\bigl[ \sum_{w \in \Phi(Q)} \rho_w^{2(d-2)} \mathbf{1}_{\rho_w < \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr], \end{align} so that, again by Lemma \ref{product.measure}, we obtain \begin{align} \varepsilon^{d + \frac{2d}{d+2}} \mathbb{E} \bigl[ \sum_{z \in N_k} \sum_{w \in N_{k,z}, \atop \varepsilon w \in Q_{k,z} \backslash Q_{k-1,z}} \rho_w^{2(d-2)}\bigr] \lesssim k^{-1}\varepsilon^{\frac{2d}{d+2}}\mathbb{E}\bigl[
\rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr]. \end{align} We establish \eqref{main.comparison} after observing that, thanks to \eqref{right.regime.ppp}, it holds that $k^{-1}\varepsilon^{\frac{2d}{d+2}} \leq (\varepsilon k)^2$. \bigskip We now tackle \eqref{main.5.ppp.2}: By construction (see definition \eqref{covering.Poisson}), the (random) set $K_{k,0}$ satisfies \begin{align} \bigl\{ w \in \Phi^\varepsilon_\delta(D) \, \colon \, \varepsilon w \in K_{k,0} \bigr\}= \bigl\{ w \in \Phi^\varepsilon_\delta(D) \, \colon \, \varepsilon w \in Q_{k,0} \bigr\} = \Phi^\varepsilon_\delta(D) \cap \Phi(Q_k), \end{align} where $Q_k$ is, as above, the cube of size $k$ centred at the origin. Hence, decomposing $Q_k= \bigcup_{i=1}^{k^d}Q_i$ into unit cubes, definitions \eqref{averaged.sum.ppp} and \eqref{thinning.psi} allow us to rewrite \begin{align} S_{k,0} - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr] = \frac{\varepsilon^d}{|K_{k,0}|} \sum_{i=1}^{k^d} Z_i - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr] \end{align} with \begin{align}\label{def.Z.2} Z_i := \sum_{\Phi(Q_{i})} Y_{\varepsilon,z} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2}+\delta}} \mathbf{1}_{ R_{\varepsilon,z} \geq 2\eps^\frac{d}{d-2} \rho_z}, \ \ \ i = 1, \cdots, k^d. \end{align} We rewrite \begin{align} S_{k,0} - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr] = \frac{\varepsilon^d}{|K_{k,0}|} \sum_{i=1}^{k^d} (Z_i - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr]) + \lambda (\frac{\varepsilon^dk^d}{|K_{k,0}|} -1)\mathbb{E}_\rho \bigl[ \rho^{d-2} \bigr] \end{align} so that the triangle inequality, assumption \eqref{integrability.radii} and the quenched bounds in \eqref{properties.covering} yield \begin{align} (S_{k,0} - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr])^2 \lesssim \bigl(\mathop{\mathpalette\avsuminner\relax}\displaylimits_{i=1}^{k^d} (Z_i - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr]) \bigr)^2 + k^{-2}\varepsilon^{2\kappa}.
\end{align} Appealing to definitions \eqref{def.Z.2}, \eqref{averaged.sum.ppp} and \eqref{minimal.distance}, we observe that $Z_i$ and $Z_j$ are independent whenever $i, j$ are such that $Q_{j}$ and $Q_{i}$ are not adjacent. Hence, by taking the expectation in the previous inequality, we estimate \begin{equation} \begin{aligned}\label{main.6.ppp.c} \mathbb{E} \biggl[ (S_{k,0} - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr])^2\biggr] &\lesssim k^{-2d} \sum_{i=1}^{k^d} \sum_{j \colon Q_{j}, Q_{i} \text{\, adjacent}} \mathbb{E}\biggl[ (Z_i - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr]) (Z_j - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr]) \biggr]\\ &+ k^{-d}\sum_{i=1}^{k^d} \mathbb{E}\biggl[ Z_i - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr] \biggr]^2 + k^{-2}\varepsilon^{2\kappa}. \end{aligned} \end{equation} To establish \eqref{main.5.ppp.2} from \eqref{main.6.ppp.c} it suffices to bound \begin{align}\label{LLN.1} \mathbb{E}\biggl[ (Z_i - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr]) (Z_j - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr]) \biggr] &\lesssim \mathbb{E}\bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}}\bigr],\\ \bigl| \mathbb{E}\biggl[ Z_i - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr] \biggr] \bigr| &\lesssim \varepsilon^{(\frac{2}{d-2}-\delta)\beta}. \label{LLN.2} \end{align} \bigskip Inequality \eqref{LLN.1} immediately follows from Cauchy-Schwarz's inequality, the triangle inequality and definitions \eqref{def.Z.2} and \eqref{averaged.sum.ppp}.
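For the reader's convenience, we sketch one way in which this step may be spelled out (using, in addition, that on the event in \eqref{def.Z.2} one has $Y_{\varepsilon,z} \lesssim \rho_z^{d-2}$): since the points of $\Phi(Q_i)$ and their marks are independent, the elementary second-moment (Campbell) formula for Poisson sums gives, for any $h \geq 0$, \begin{align*} \mathbb{E}\bigl[ \bigl( \sum_{z \in \Phi(Q_i)} h(\rho_z) \bigr)^2 \bigr] = \lambda |Q_i| \, \mathbb{E}_\rho\bigl[ h(\rho)^2 \bigr] + \bigl( \lambda |Q_i| \, \mathbb{E}_\rho\bigl[ h(\rho) \bigr] \bigr)^2, \end{align*} so that, by Cauchy-Schwarz's inequality, \begin{align*} \mathbb{E}\biggl[ (Z_i - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr]) (Z_j - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr]) \biggr] \lesssim \mathbb{E}\bigl[ Z_i^2 \bigr] + \bigl( \lambda\mathbb{E}\bigl[ \rho^{d-2} \bigr] \bigr)^2 \lesssim \mathbb{E}\bigl[ \rho^{2(d-2)} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}}\bigr], \end{align*} where in the last step we also used that the truncated moment on the right-hand side is bounded away from zero, uniformly in $\varepsilon \leq 1$.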
Again by \eqref{def.Z.2} and \eqref{averaged.sum.ppp}, for every $i=1, \cdots, k^d$, we have that \begin{align} \mathbb{E}\biggl[ Z_i - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr] \biggr] &= {\mathbb{E}\bigl[ \sum_{\Phi(Q)} \rho_z^{d-2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2}+\delta}} \mathbf{1}_{R_{\varepsilon,z} \geq \varepsilon^2 \vee \eps^\frac{d}{d-2} \rho_z}\bigr]}- \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr]\\ &\quad\quad + \mathbb{E}\bigl[ \sum_{\Phi(Q)}\varepsilon^d \frac{\rho_z^{2(d-2)}}{\tilde R_{\varepsilon,z}^{d-2} - \varepsilon^d \rho_z^{d-2}} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2}+\delta}} \mathbf{1}_{R_{\varepsilon,z} \geq \varepsilon^2 \vee \eps^\frac{d}{d-2} \rho_z}\bigr]. \end{align} Observing that $$ \mathbb{E} \bigl[ \sum_{\Phi(Q)} \rho_z^{d-2} \bigr] = \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr] $$ and writing \begin{align} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2}+\delta}} \mathbf{1}_{R_{\varepsilon,z} \geq \varepsilon^2 \vee \eps^\frac{d}{d-2} \rho_z} &\geq 1- \mathbf{1}_{\rho_z \geq \varepsilon^{-\frac{2}{d-2}+\delta}} - \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2}+\delta}} \mathbf{1}_{R_{\varepsilon,z} \leq 2\eps^\frac{d}{d-2}\rho_z \vee \varepsilon^2}, \end{align} we infer that \begin{align} | \mathbb{E}\biggl[ Z_i - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr] \biggr] | & \lesssim \mathbb{E}\bigl[ \sum_{\Phi (Q)} \rho_z^{d-2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2}+\delta}} \mathbf{1}_{R_{\varepsilon,z} \leq \eps^\frac{d}{d-2} \rho_z \vee \varepsilon^2} \bigr]+ \mathbb{E}\bigl[\rho^{d-2} \mathbf{1}_{\rho \geq \varepsilon^{-\frac{2}{d-2}+\delta}}\bigr]\\ &\quad\quad + \varepsilon^d \mathbb{E}\bigl[ \sum_{\Phi(Q)} \biggl(\frac{\rho_z^{2}}{\tilde R_{\varepsilon,z}}\biggr)^{d-2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2}+\delta}} \mathbf{1}_{R_{\varepsilon,z} \geq \eps^\frac{d}{d-2} \rho_z \vee \varepsilon^2}\bigr].
\end{align} We now appeal to Lemma \ref{product.measure} with $G((x,\rho); \omega)= \rho^{d-2} \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2}+\delta}} \mathbf{1}_{R_{\varepsilon,x}\leq \eps^\frac{d}{d-2}\rho \vee \varepsilon^2}$ and to the properties of Poisson point processes to infer that {\begin{align}\label{estimate.ball.radii} \mathbb{E}\bigl[ \sum_{\Phi(Q)} \rho_z^{d-2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2}+\delta}} \mathbf{1}_{R_{\varepsilon,z} \leq \eps^\frac{d}{d-2} \rho_z \vee \varepsilon^2} \bigr]&\lesssim \mathbb{E}_\rho\biggl[\rho^{d-2} \mathbb{E}_{\Phi}\bigl[ \mathbf{1}_{R_{\varepsilon,0} < \eps^\frac{d}{d-2}\rho \vee \varepsilon^2} \bigr] \biggr]\\ &\stackrel{\eqref{integrability.radii}}{\lesssim} \varepsilon^d + \varepsilon^{(\frac{2}{d-2} - \delta)\beta}. \end{align}} Hence, \begin{align} | \mathbb{E}\bigl[ Z_i - \lambda\mathbb{E} \bigl[ \rho^{d-2} \bigr] \bigr] | & \lesssim \varepsilon^{d} + \mathbb{E}\bigl[\rho^{d-2} \mathbf{1}_{\rho \geq \varepsilon^{-\frac{2}{d-2}+\delta}}\bigr] + \mathbb{E}\bigl[ \sum_{\Phi(Q_{i})}\frac{\rho_z^{2(d-2)}}{\tilde R_{\varepsilon,z} - \eps^\frac{d}{d-2}\rho_z} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2}+\delta}} \mathbf{1}_{R_{\varepsilon,z} \geq \eps^\frac{d}{d-2} \rho_z\vee \varepsilon^2 }\bigr]\\ &\stackrel{\eqref{Cheb}}{\lesssim}\varepsilon^{(\frac{2}{d-2}-\delta)\beta} + \mathbb{E}\bigl[ \sum_{\Phi(Q_{i})} \biggl(\frac{\rho_z^{2}}{\tilde R_{\varepsilon,z}}\biggr)^{d-2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2}+\delta}} \mathbf{1}_{R_{\varepsilon,z} \geq \eps^\frac{d}{d-2} \rho_z \vee \varepsilon^2}\bigr]. \end{align} To establish \eqref{LLN.2} it remains to use \eqref{integrability.radii}, \eqref{def.tilde.R} and an argument similar to \eqref{log.term} and \eqref{logarithmic.1} to infer that \begin{align} \mathbb{E}\bigl[ \sum_{\Phi(Q_{i})} \biggl(\frac{\rho_z^{2}}{\tilde R_{\varepsilon,z}}\biggr)^{d-2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2}+\delta}} \mathbf{1}_{R_{\varepsilon,z} \geq \eps^\frac{d}{d-2} \rho_z \vee \varepsilon^2}\bigr]&\lesssim
\varepsilon^{d- 2 + (\frac{2}{d-2} -\delta)\beta +(d-2)\delta} \mathbb{E}_{\Phi}\bigl[ \tilde R_{\varepsilon,0}^{-(d-2)}\mathbf{1}_{R_{\varepsilon,0} \geq \varepsilon^2}\bigr]\\ & \lesssim \varepsilon^{(\frac{2}{d-2} -\delta)\beta +(d-2)(\delta- \kappa)}. \end{align} Since $\kappa<\delta$, this concludes the proof of \eqref{LLN.2} and, in turn, of \eqref{main.5.ppp.2}. The proof of Theorem \ref{t.main} is thus complete. \end{proof} \bigskip \begin{proof}[Proof of Lemma \ref{l.capacity.average.ppp}] The proof of this lemma follows the same lines as the argument for Lemma \ref{l.capacity.average}. We resort to the construction made in Lemma \ref{l.geometry.ppp} (cf.\ \eqref{definition.bad.index}) to decompose \begin{align}\label{capacity.bad.2} \varepsilon^d \sum_{z \in J^\varepsilon_b} \rho_z^{d-2} + \varepsilon^d \sum_{z \in K^\varepsilon_b }\rho_z^{d-2} + \varepsilon^d \sum_{z \in C^\varepsilon_b} \rho_z^{d-2} + \varepsilon^d \sum_{z \in \tilde I^\varepsilon_b} \rho_z^{d-2}. \end{align} The expectation of the first sum is bounded as follows: We use definition \eqref{index.set.ppp.1} and argue as done for \eqref{main.4.ppp} to reduce to \begin{align} \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in J^\varepsilon_b} \rho_z^{d-2} \bigr] \lesssim \mathbb{E} \bigl[ \sum_{z \in \Phi(Q)} \rho_z^{d-2}\mathbf{1}_{\rho_z \geq \varepsilon^{-\frac{2}{d-2} +\delta}}\bigr] \lesssim \mathbb{E}\bigl[\rho^{d-2} \mathbf{1}_{\rho \geq \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr] \stackrel{\eqref{Cheb}}{\lesssim} \varepsilon^{(\frac{2}{d-2}- \delta)\beta}. \end{align} The second sum in \eqref{capacity.bad.2} may be treated likewise since, by definition \eqref{index.set.ppp.3}, we have $$ \varepsilon^d \sum_{z \in K^\varepsilon_b} \rho_z^{d-2} \leq \varepsilon^d \sum_{z \in \Phi^\varepsilon(D)} \rho_z^{d-2} \mathbf{1}_{R_{\varepsilon,z} \leq \varepsilon^2}.
$$ Similarly, we use \eqref{index.set.ppp.4} to bound $$ \varepsilon^d \sum_{z \in C^\varepsilon_b} \rho_z^{d-2} \leq \varepsilon^d \sum_{z \in \Phi^\varepsilon(D)} \rho_z^{d-2} \mathbf{1}_{\rho_z < \varepsilon^{-\frac{2}{d-2} +\delta}} \mathbf{1}_{R_{\varepsilon,z} \leq 2\eps^\frac{d}{d-2}\rho_z}. $$ Taking the expectation, we bound this term by $\varepsilon^{(\frac{2}{d-2}-\delta)\beta}$ as done in \eqref{estimate.ball.radii}. Hence, it only remains to estimate the last sum in \eqref{capacity.bad.2}. As done for the same sum in \eqref{capacity.bad.periodic.1}, we use definition \eqref{index.set.ppp.3} to bound \begin{align} \mathbb{E} \bigl[ & \varepsilon^d \sum_{z \in \tilde I^\varepsilon_b} \rho_z^{d-2} \bigr]\\ & \lesssim \mathbb{E} \bigl[ \varepsilon^d \sum_{w \in \Phi^\varepsilon(D)} (\mathbf{1}_{\rho_w \geq \varepsilon^{-\frac{2}{d-2} +\delta}} + \mathbf{1}_{\rho_w < \varepsilon^{-\frac{2}{d-2} +\delta}} \mathbf{1}_{R_{\varepsilon,w} \leq 2\eps^\frac{d}{d-2}\rho_w \vee \varepsilon^2}) \sum_{z \in \Phi^\varepsilon(D)\backslash\{ w \} \, \atop \varepsilon |w-z| \leq \varepsilon + \eps^\frac{d}{d-2}\rho_w \wedge 1} \rho_z^{d-2} \bigr] \end{align} and, again by stationarity, reduce to \begin{align} \mathbb{E} \bigl[ \varepsilon^d &\sum_{z \in \tilde I^\varepsilon_b} \rho_z^{d-2} \bigr]\\ & \lesssim \mathbb{E} \bigl[ \sum_{w \in \Phi(Q)} (\mathbf{1}_{\rho_w \geq \varepsilon^{-\frac{2}{d-2} +\delta}} + \mathbf{1}_{\rho_w < \varepsilon^{-\frac{2}{d-2} +\delta}} \mathbf{1}_{R_{\varepsilon,w} \leq 2\eps^\frac{d}{d-2}\rho_w \vee \varepsilon^2}) \sum_{z \in \Phi^\varepsilon(D)\backslash\{ w \} \, \atop \varepsilon |w-z| \leq \varepsilon + \eps^\frac{d}{d-2}\rho_w \wedge 1} \rho_z^{d-2} \bigr].
\end{align} By Lemma \ref{product.measure} applied to \begin{align} G( ( x, \rho), \omega)& = (\mathbf{1}_{\rho \geq \varepsilon^{-\frac{2}{d-2} +\delta}} + \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \mathbf{1}_{R_{\varepsilon,x} \leq 2\eps^\frac{d}{d-2} \rho \vee \varepsilon^2}) \sum_{z \in \Phi^\varepsilon(D) \, \atop \varepsilon |x-z| \leq \varepsilon + \eps^\frac{d}{d-2} \rho \wedge 1} \rho_z^{d-2}, \end{align} we infer that \begin{align} \mathbb{E} \bigl[ \varepsilon^d& \sum_{z \in \tilde I^\varepsilon_b} \rho_z^{d-2} \bigr] \lesssim \mathbb{E}_\rho \biggl[ \mathbb{E}\bigl[(\mathbf{1}_{\rho \geq \varepsilon^{-\frac{2}{d-2} +\delta}} + \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \mathbf{1}_{R_{\varepsilon,0} \leq 2\eps^\frac{d}{d-2}\rho \vee \varepsilon^2}) \sum_{z \in \Phi^\varepsilon(D)\backslash\{ 0\} \, \atop \varepsilon |z| \leq \varepsilon + \eps^\frac{d}{d-2}\rho \wedge 1} \rho_z^{d-2} \bigr] \biggr]. \end{align} Since the marks $\{\rho_z\}$ are identically distributed and independent, we use \eqref{integrability.radii} to bound \begin{align} \mathbb{E} \bigl[ \varepsilon^d \sum_{z \in \tilde I^\varepsilon_b} \rho_z^{d-2} \bigr] \lesssim \mathbb{E}_\rho \biggl[ \mathbb{E}_\Phi\bigl[ (\mathbf{1}_{\rho \geq \varepsilon^{-\frac{2}{d-2} +\delta}} &+ \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \mathbf{1}_{R_{\varepsilon,0} \leq 2\eps^\frac{d}{d-2}\rho \vee \varepsilon^2}) \\ &\times \# \{ z \in \Phi \backslash\{ 0 \} \, \colon \, \varepsilon |z| \leq \varepsilon + \eps^\frac{d}{d-2}\rho \wedge 1 \} \bigr] \biggr].
\end{align} Since $$ \mathbb{E}_\Phi\bigl[ \# \{ z \in \Phi \backslash\{ 0 \} \, \colon \, \varepsilon |z| \leq \varepsilon + \eps^\frac{d}{d-2} \rho \wedge 1 \} \bigr] \lesssim (\rho^{d-2} + 1), $$ the first term on the right-hand side above is easily bounded as follows: \begin{equation} \begin{aligned}\label{capacity.5} \mathbb{E}_\rho \biggl[ \mathbf{1}_{\rho \geq \varepsilon^{-\frac{2}{d-2} +\delta}} \mathbb{E}_\Phi\bigl[ \# \{ z \in \Phi \backslash\{ 0 \} \, \colon \, &\varepsilon |z| \leq \varepsilon + \eps^\frac{d}{d-2} \rho \wedge 1 \} \bigr] \biggr] \\ & \lesssim \mathbb{E}_\rho\bigl[ \rho^{d-2} \mathbf{1}_{\rho \geq \varepsilon^{-\frac{2}{d-2} +\delta}} \bigr] \stackrel{\eqref{Cheb}}{\lesssim} \varepsilon^{(\frac{2}{d-2} - \delta)\beta}. \end{aligned} \end{equation} We now turn to the second term on the right-hand side above: We observe that we may bound \begin{align} \mathbb{E}_\rho \biggl[ \mathbb{E}_\Phi\bigl[ \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \mathbf{1}_{R_{\varepsilon,0} \leq 2\eps^\frac{d}{d-2}\rho \vee \varepsilon^2} \# \{ z \in \Phi \backslash\{ 0 \} \, \colon \, \varepsilon |z| \leq \varepsilon + \eps^\frac{d}{d-2} \rho \wedge 1 \} \bigr] \biggr] \\ \leq \mathbb{E}_\rho \biggl[ \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \mathbb{E}_\Phi\bigl[ \mathbf{1}_{R_{\varepsilon,0} \leq 2\eps^\frac{d}{d-2}\rho \vee \varepsilon^2} \# \{ z \in \Phi \backslash\{ 0 \} \, \colon \, |z| \leq 4 \} \bigr] \biggr]. \end{align} Using H\"older's inequality with exponents $\frac 3 2$ and $3$ in the inner expectation, definition \eqref{minimal.distance} and the fact that $\Phi$ is a Poisson point process, we thus bound \begin{align} \mathbb{E}_\rho& \biggl[ \mathbb{E}_\Phi\bigl[ \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \mathbf{1}_{R_{\varepsilon,0} \leq 2\eps^\frac{d}{d-2}\rho \vee \varepsilon^2} \# \{ z \in \Phi \backslash\{ 0 \} \, \colon \, \varepsilon |z| \leq \varepsilon + \eps^\frac{d}{d-2} \rho \wedge 1 \} \bigr] \biggr]
\\ &\leq \mathbb{E}_\rho \biggl[ \mathbf{1}_{\rho < \varepsilon^{-\frac{2}{d-2} +\delta}} \mathbb{E}_\Phi \bigl[ \mathbf{1}_{R_{\varepsilon,0} \leq 2\eps^\frac{d}{d-2}\rho \vee \varepsilon^2} \bigr]^{\frac 2 3} \biggr] \stackrel{\eqref{integrability.radii}}{\lesssim} \varepsilon^{\frac{2d}{3}} + \varepsilon^{\frac 2 3 (1+ 2\delta + (\frac{2}{d-2} - \delta)\beta)}. \end{align} Thanks to the choice of $\delta$ and since $d \geq 2$, the right-hand side is always bounded by $\varepsilon^{ (\frac{2}{d-2} - \delta)\beta}$. Combining this with \eqref{capacity.5} yields \begin{align} \mathbb{E} \bigl[ & \varepsilon^d \sum_{z \in \tilde I^\varepsilon_b} \rho_z^{d-2} \bigr] \lesssim \varepsilon^{(\frac{2}{d-2} - \delta)\beta}. \end{align} This concludes the proof of Lemma \ref{l.capacity.average.ppp}. \end{proof} \bigskip \section{Auxiliary results}\label{Aux} Let $\mathcal{Z}:= \{ z_i \}_{i \in I} \subset D$ be a collection of points and let $\mathcal{X}:=\{ X_i \}_{i\in I}, \mathcal{R}:=\{ r_i \}_{i\in I} \subset \mathbb{R}_+$. We assume that \begin{align}\label{well.defined.cells} 2X_i < r_i < \min_{z_j \in \mathcal{Z}, \atop z_j \neq z_i}\{ |z_j - z_i|\}, \ \ \ \text{for every $z_i \in \mathcal{Z}$.} \end{align} We define the measure \begin{align}\label{measure.M} M:= \sum_{i \in I} \partial_n v_i \delta_{\partial B_{r_i}(z_i)} \in H^{-1}(D), \end{align} where each $v_i \in H^1(B_{r_i}(z_i))$ is the solution of \eqref{def.harmonic.annuli} with $B_{\varepsilon \rho_z}(\varepsilon z)$ and $B_{R_{\varepsilon,z}}(\varepsilon z)$ replaced by $B_{X_i}(z_i)$ and $B_{r_i}(z_i)$, respectively. \smallskip The next lemma is a generalization of the result by \cite{Kohn_Vogelius} used in \cite{Kacimi_Murat} to show the analogue of Theorem \ref{t.main} in the case of periodic holes $H_\varepsilon$. \begin{lem}\label{Kohn_Vogelius} Let $\mathcal{Z}$, $\mathcal{X}$ and $\mathcal{R}$ be as above.
Then, there exists a constant $C=C(d)$ such that for every finite Lipschitz and (essentially) disjoint covering $\{ K_j\}_{j \in J}$ of $D$ satisfying \begin{align}\label{contains.balls} B_{2r_i}(z_i) \subset K_j \ \ \ \text{or} \ \ \ B_{r_i}(z_i) \cap K_j= \emptyset \ \ \ \ \text{for every $i \in I$, $j \in J$} \end{align} we have that \begin{align}\label{KV.2} \| M -m \|_{H^{-1}(D)} \leq C \max_{j \in J}\mathop{diam}(K_j) \bigl( \sum_{i \in I} X_i^{2(d-2)} r_i^{-d} \bigr)^{\frac 1 2}, \end{align} with \begin{align}\label{mean.m} m := c_d \sum_{j \in J} \bigl(\frac{1}{|K_{j}|} \sum_{i \in I, \atop z_i \in K_j}\frac{X_i^{d-2} r_i^{d-2}}{r_i^{d-2} - X_i^{d-2}} \bigr) \mathbf{1}_{K_{j}}. \end{align} Here, the constant $c_d$ is as in \eqref{strange.term}. \end{lem} \bigskip The next result is a very easy consequence of the assumptions (i)-(iii) on the marked point process $(\Phi, \mathcal{R})$. Since it is used extensively in the proof of Theorem \ref{t.main}, for the sake of a self-contained presentation, we give below the statement and its brief proof. \begin{lem}\label{product.measure} Let $A\subset \mathbb{R}^d$ be a bounded set containing the origin. Let $(\Phi; \mathcal{R})$ satisfy (i)-(iii) with $\Phi=\mathop{Poi}(\lambda)$. For every $\varepsilon > 0$ and $z \in \mathbb{R}^d$, let $R_{\varepsilon,z}$ be as in \eqref{minimal.distance}. Then for every $G: \mathbb{R}^d \times \mathbb{R}_+ \times \Omega \to \mathbb{R}$ it holds that \begin{align*} \mathbb{E} \bigl[ \sum_{z \in \Phi(A)} G( (z, \rho_z); \omega &\backslash \{(z, \rho_z)\} ) \bigr]= \lambda |A| \mathbb{E}_\rho \biggl[ \mathbb{E} \bigl[ G((0, \rho); \omega) \bigr] \biggr]. \end{align*} \end{lem} \bigskip \begin{proof}[Proof of Lemma \ref{Kohn_Vogelius}] With no loss of generality, we give the proof for $d=3$.
We start by remarking that, thanks to \eqref{contains.balls}, for every $j \in J$, there exists $\eta_j \in C^\infty_0(K_j)$ such that $\eta_j =1$ in $\bigcup_{i \in I, \atop z_i \in K_j} B_{r_i}(z_i)$. This in particular allows us to rewrite the measure $M$ in \eqref{measure.M} as \begin{align}\label{organizing.M} M= \sum_{j \in J} \eta_j M_j, \ \ \ M_j:= \sum_{i \in I, \atop z_i \in K_j} \partial_n v_i \delta_{\partial B_{r_i}(z_i)} \end{align} and use the definition of the capacitary functions $\{ v_i \}_{i\in I}$ (see also \eqref{explicit.formula}) to observe that $m$ in \eqref{mean.m} satisfies \begin{align}\label{organizing.m} m= \sum_{j\in J} m_j, \ \ \ m_j := \bigl(\frac{1}{|K_j|} \sum_{i \in I, \atop z_i \in K_j} \int_{\partial B_{r_i}(z_i)} \partial_n v_i \bigr) \mathbf{1}_{K_j}. \end{align} For every $j\in J$, we thus define $q_j \in H^1(K_j)$ as the (weak) solution to \begin{align}\label{def.q.eps} \begin{cases} -\Delta q_{j} = \eta_j M_j - m_{j} \ \ \ &\text{ in $K_{j}$}\\ \partial_n q_{j} = 0 \ \ \ &\text{ on $\partial K_{j}$} \end{cases}, \ \ \ \ \int_{K_{j}} q_{j}=0, \end{align} in the sense that for every $u \in H^1(K_j)$ $$ \int_{K_j} \nabla u \cdot \nabla q_j = \langle M_j ; \eta_j u \rangle - \int_{K_j} m_j u. $$ We stress that $q_j$ exists since $K_{j}$ is a Lipschitz domain and, thanks to \eqref{contains.balls} and \eqref{organizing.M}-\eqref{organizing.m}, the compatibility condition $$ \langle M_j ; \eta_j \rangle - \int_{K_j} m_j = 0 $$ is satisfied. \bigskip By \eqref{def.q.eps} and \eqref{organizing.M}-\eqref{organizing.m}, for any $\phi \in H^1_0(D)$ we thus have that \begin{align*} \langle M - m ; \phi \rangle = \sum_{j \in J} \int_{K_{j}} \nabla q_{j} \cdot \nabla \phi, \end{align*} and, by Cauchy-Schwarz's inequality, also \begin{align}\label{H.minus.3} \| M - m \|_{H^{-1}(D)} \leq \bigl(\sum_{j \in J} \int_{K_{j}}|\nabla q_{j}|^2 \bigr)^{\frac 1 2}.
\end{align} We now claim that for each $j \in J$ \begin{align}\label{H.minus.5.aux} \bigl( \int_{K_{j}}|\nabla q_{j}|^2 \bigr)^{\frac 1 2} \lesssim \mathop{diam}(K_j) \bigl( \sum_{i \in I, \atop z_i \in K_j} X_i^2 r_i^{-3} \bigr)^{\frac 1 2}. \end{align} This inequality and \eqref{H.minus.3} immediately yield the statement of Lemma \ref{Kohn_Vogelius}. \bigskip We argue \eqref{H.minus.5.aux} as follows: testing the equation for $q_{j}$ with $q_{j}$ itself and using that $q_j$ has zero mean (see \eqref{def.q.eps}), we obtain \begin{align*} \int_{K_{j}}|\nabla q_{j}|^2 &= \sum_{i \in I, \atop z_i \in K_j} \int_{\partial B_{r_i}(z_i)} \partial_n v_i \, q_{j}. \end{align*} By Cauchy-Schwarz's inequality, this implies that \begin{align*} \int_{K_{j}}|\nabla q_{j}|^2 \lesssim \sum_{i \in I, \atop z_i \in K_j} \bigl(\int_{\partial B_{r_i}(z_i)} |\partial_n v_i|^2 \bigr)^{\frac 1 2} \bigl(\int_{\partial B_{r_i}(z_i)}|q_{j}|^2 \bigr)^{\frac 1 2}. \end{align*} By the definition of $v_i$ (see also \eqref{explicit.formula}), we rewrite the above inequality as \begin{equation} \begin{aligned}\label{H.minus.6.bis} \int_{K_{j}}|\nabla q_{j}|^2 &\lesssim \sum_{i \in I, \atop z_i \in K_j} r_i^{-1} \bigl(\frac{X_i r_i}{r_i - X_i}\bigr) \bigl(\int_{\partial B_{r_i}(z_i)}|q_{j}|^2 \bigr)^{\frac 1 2}\\ & \stackrel{\eqref{well.defined.cells}}{\lesssim} \sum_{i \in I, \atop z_i \in K_j} r_i^{-1} X_i \bigl(\int_{\partial B_{r_i}(z_i)}|q_{j}|^2 \bigr)^{\frac 1 2}.
\end{aligned} \end{equation} By the trace embedding $L^2(\partial B_{r_i}(z_i)) \hookrightarrow H^1(B_{r_i}(z_i))$ we have \begin{align}\label{trace.embedding} \int_{\partial B_{r_i}(z_i)}|q_{j}|^2 \lesssim r_i^{-1 }\bigl( \int_{B_{r_i}(z_i)}|q_{j}|^2 + r_i^2 \int_{B_{r_i}(z_i)} |\nabla q_j|^2 \bigr) \end{align} so that this, \eqref{H.minus.6.bis} and an application of Cauchy-Schwarz's inequality imply \begin{align}\label{H.minus.6} \int_{K_{j}}|\nabla q_{j}|^2 \lesssim \bigl( \sum_{i \in I, \atop z_i \in K_j} X_i^2 r_i^{-3} \bigr)^{\frac 1 2} \bigl( \sum_{i \in I, \atop z_i \in K_j} \int_{B_{r_i}(z_i)} (|q_{j}|^2 + r_i^2 |\nabla q_j|^2) \bigr)^{\frac 1 2}\\ \stackrel{\eqref{contains.balls}-\eqref{well.defined.cells}}{\lesssim} \bigl( \sum_{i \in I, \atop z_i \in K_j} X_i^2 r_i^{-3} \bigr)^{\frac 1 2} \bigl(\int_{K_j} (|q_{j}|^2 + \mathop{diam}(K_j)^2 |\nabla q_j|^2) \bigr)^{\frac 1 2}. \end{align} Since by \eqref{def.q.eps} the function $q_j$ has zero mean, we may apply Poincar\'e-Wirtinger's inequality to conclude that \begin{align}\label{H.minus.7} \int_{K_{j}}|\nabla q_{j}|^2 \lesssim \mathop{diam}(K_j) \bigl( \sum_{i \in I, \atop z_i \in K_j} X_i^2 r_i^{-3} \bigr)^{\frac 1 2} \bigl(\int_{K_{j}}|\nabla q_{j}|^2 \bigr)^{\frac 1 2}. \end{align} This establishes \eqref{H.minus.5.aux} and, in turn, concludes the proof of Lemma \ref{Kohn_Vogelius}. \end{proof} \bigskip \begin{proof}[Proof of Lemma \ref{product.measure}] Without loss of generality we assume that $|A|=1$. By the assumptions (i)-(ii) on $(\Phi, \mathcal{R})$ we have that \begin{align} \mathbb{E}& \bigl[ \sum_{z \in \Phi(A)} G( (z, \rho_z), \omega \backslash \{(z, \rho_z)\} ) \bigr]= e^{-\lambda} \sum_{n \geq 1} \frac{\lambda^n}{n!}\\ &\times \sum_{i=1}^n \int_{(A \times \mathbb{R}_+)^n } \mathbb{E} \bigl[ G( (x_i, \rho_i), \omega \backslash \{ (x_i, \rho_i)\} ) \, | \, \Phi(A), \{\rho_z \}_{\Phi(A)} \bigr] f(\rho_1) \d\rho_1 \d x_1 \cdots f(\rho_n) \d\rho_n \d x_n.
\end{align} By symmetry, \begin{align} \mathbb{E} &\bigl[ \sum_{z \in \Phi(A) } G( (z, \rho_z), \omega \backslash \{(z, \rho_z)\} ) \bigr] = \lambda e^{-\lambda} \sum_{n \geq 1} \frac{\lambda^{n-1}}{(n-1)!} \\ &\times \int_{(A\times \mathbb{R}_+)^{n} } \mathbb{E} \bigl[ G( (x_1, \rho_1), \omega \backslash \{(x_1, \rho_1)\} ) \, | \, \Phi(A), \{\rho_z \}_{\Phi(A)} \bigr] f(\rho_1) \d\rho_1 \d x_1 \cdots f(\rho_n) \d\rho_n \d x_n. \end{align} Appealing to Fubini's theorem and relabelling the elements $\{ (x_i, \rho_i)\}_{i=1}^n$, this implies \begin{align} \mathbb{E} &\bigl[ \sum_{z \in \Phi(A) } G((z, \rho_z), \omega \backslash \{(z, \rho_z)\} ) \bigr] = \lambda \int_{A \times \mathbb{R}_+} \biggl(e^{-\lambda} \sum_{n \geq 0} \frac{\lambda^{n}}{n!} \\ &\times \int_{(A\times \mathbb{R}_+)^{n} } \mathbb{E} \bigl[ G( (x, \rho), \omega) \, | \, \Phi(A), \{\rho_z \}_{\Phi(A)} \bigr] f(\rho_1) \d\rho_1 \d x_1 \cdots f(\rho_n) \d\rho_n \d x_n \biggr) \, f(\rho) \, \d x, \end{align} i.e. \begin{align} \mathbb{E} &\bigl[ \sum_{z \in \Phi(A) } G( (z, \rho_z), \omega \backslash \{(z, \rho_z)\} ) \bigr] = \lambda \int_{A} \mathbb{E}_\rho \biggl[ \mathbb{E} \bigl[ G((x, \rho) , \omega) \bigr] \biggr] \, \d x. \end{align} Since $\Phi$ is stationary, the above identity immediately implies Lemma \ref{product.measure}. \end{proof}
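Although not needed for the argument, the identity of Lemma \ref{product.measure} lends itself to a quick Monte Carlo sanity check in its simplest instance, where $G$ depends only on the mark (so that the removal of $(z, \rho_z)$ from $\omega$ is immaterial). The snippet below is a hypothetical illustration with assumed parameters $\lambda = 3$, $A = [0,1]^2$ and $\rho \sim \mathrm{Unif}(0,1)$, for which $\lambda |A| \, \mathbb{E}_\rho[\rho^2] = 1$:

```python
import random

def sample_marked_poisson(lam, rng):
    """One sample of a Poisson(lam) point process on the unit square,
    with i.i.d. Unif(0,1) marks attached to the points."""
    # The number of points in A = [0,1]^2 is Poisson(lam * |A|), |A| = 1;
    # sample it by counting exponential inter-arrival times below 1.
    n = 0
    t = rng.expovariate(lam)
    while t < 1.0:
        n += 1
        t += rng.expovariate(lam)
    # Positions are irrelevant for G((x, rho)) = rho^2; keep marks only.
    return [rng.random() for _ in range(n)]

def empirical_lhs(lam, n_samples, seed=0):
    """Monte Carlo estimate of E[ sum_{z in Phi(A)} rho_z^2 ]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += sum(rho ** 2 for rho in sample_marked_poisson(lam, rng))
    return total / n_samples

lam = 3.0
lhs = empirical_lhs(lam, n_samples=20000)
rhs = lam * 1.0 * (1.0 / 3.0)   # lambda * |A| * E_rho[rho^2], E[U^2] = 1/3
```

With the fixed seed, the empirical average should agree with the predicted value up to Monte Carlo error.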
\section{Introduction} A myriad of tasks in machine learning and network science involve discovering structure in data. Especially as we process graphs with millions of nodes, analysis of individual nodes is untenable, while global properties of the graph ignore local details. It becomes critical to find an intermediate level of complexity, whether it be communities, cores and peripheries, or other structures. Points in metric space and nodes of graphs can be clustered, and hubs identified, using algorithms from network science. A longstanding theoretical question in machine learning has been whether an \say{ultimate} clustering algorithm is a possibility or merely a fool's errand. Largely, the question was addressed by \citet{wolpert1996lack} as a \term{No Free Lunch theorem}, a claim about the limitations of algorithms with respect to their problem domain. When an appropriate function is chosen to quantify the error (or \term{loss}), no algorithm can be superior to any other: an improvement across one subset of the problem domain is balanced by diminished performance on another subset. This is jarring at first. Are we not striving to find the best algorithms for our tasks? Yes---but by making specific assumptions about the subset of problems we expect to encounter, we can be comfortable tailoring our algorithms to those problems and sacrificing performance on remote cases.
\begin{figure} \centering \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{circles} \caption{Non-spherical clusters} \end{subfigure}% ~% \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{varied} \caption{Unequal variances} \end{subfigure} \caption{\(k\)-means clustering when certain assumptions are violated.} \label{fig:means} \end{figure} As an example, the \(k\)-means algorithm for \(k\)-clustering is widely used for its simplicity and strength, but it assumes spherical clusters, equal variance in those clusters, and similar cluster sizes (equivalent to a homoscedastic Gaussian prior). \autoref{fig:means} shows the degraded performance on problems where these assumptions are violated. Proving a No Free Lunch theorem for a particular task demands an appropriate loss function. A No Free Lunch theorem was argued for community detection \citep{peel2017ground}, using the adjusted mutual information function \citep{vinh2009information}.\footnote{Throughout this work, we assume that we evaluate against a known ground truth, as opposed to some intrinsic measure of partition properties like modularity \citep{newman2004modularity}.} However, the theorem is inexact. A No Free Lunch theorem relies on a loss function which imparts \term{generalizer-independence} (formally defined below): one which does not assume \emph{a priori} that some prediction is superior to another. The loss function used in the proof is only \emph{asymptotically} independent in the size of the input. We present a correction: by substituting an appropriate loss function, we are able to claim an exact version of the No Free Lunch theorem for community detection. The result generalizes to other set-partitioning tasks when evaluated with this loss function, including clustering, \(k\)-clustering, and graph partitioning.
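The unequal-variance failure in \autoref{fig:means} is not an artifact of Lloyd's algorithm getting stuck: on the following hypothetical one-dimensional data, even the \emph{global} optimum of the \(k\)-means objective (found by exhaustive search over bipartitions) groups the wide cluster's nearest point with the tight cluster, contradicting the intended ground truth \(\{0, 0.1, 0.2\}\) versus \(\{5, 9, 13\}\).

```python
from itertools import combinations

def sse(cluster):
    """Within-cluster sum of squared distances to the centroid."""
    if not cluster:
        return 0.0
    mu = sum(cluster) / len(cluster)
    return sum((x - mu) ** 2 for x in cluster)

def best_2_partition(points):
    """Exhaustively minimize the k-means objective over all 2-partitions."""
    best = None
    for r in range(1, len(points)):
        for left in combinations(points, r):
            right = [x for x in points if x not in left]
            cost = sse(list(left)) + sse(right)
            if best is None or cost < best[0]:
                best = (cost, set(left), set(right))
    return best

# Ground truth: a tight cluster {0, 0.1, 0.2} and a wide one {5, 9, 13}.
# The optimal objective instead groups 5.0 with the tight cluster.
_, a, b = best_2_partition([0.0, 0.1, 0.2, 5.0, 9.0, 13.0])
```

This is the homoscedastic Gaussian prior at work: the objective charges squared distance uniformly, so the fringe points of a wide cluster are cheaper to reassign than the boundary suggests.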
\section{Background} \subsection{Community detection} A number of tasks on graphs seek a partition of the graph's nodes that maximizes a score function. Situated between the microscopic node-level and the macroscopic graph-level, these partitions form a \term{mesoscopic structure}---be it a core--periphery separation, a graph coloring, or our focus: \term{community detection} (CD). Community detection has been historically ill-defined \citep{radicchi2004defining, yang2016comparative}, though the intuition is to collect nodes with high interconnectivity (or edge density) into communities with low edge density between them. The task is analogous to clustering, in which points near one another in a metric space are grouped together. To assess whether the formulation of community detection matches one's needs, one performs extrinsic evaluation against a known \term{ground truth} clustering. This ground truth can come from domain knowledge of real-world graphs or can be planted into a graph as a synthetic benchmark. After running community detection on the graph, some similarity or error measure between the computed community structure and the correct one can be computed. \paragraph{No bijection between true structure and graph} Unfortunately, ground truth communities do not imply a single graph---and vice versa. \citet{peel2017ground} go as far as to claim, \say{Searching for the ground truth partition without knowing the exact generative mechanism is an impossible task.} We can imagine the following steps for how problem instances are created, given that we have \(N = |V|\) nodes: \begin{enumerate} \item Sample (true) partition \ensuremath{\mathcal{T}}{} $\in \Omega$; \item Generate graph $G$ from \ensuremath{\mathcal{T}}{} by adding edges according to the edge-generating process $g$. \end{enumerate} where \(\Omega\) is our \term{universe}: the space of all partitions of \(N = | V |\) objects. 
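These two steps can be sketched in a few lines of code; the planted-partition (SBM-like) form of \(g\) below, along with the parameters \texttt{p\_in} and \texttt{p\_out}, is an illustrative assumption rather than a specific benchmark from the literature.

```python
import random
from itertools import combinations

def sample_partition(nodes, k, rng):
    """Step 1: sample a partition T by assigning each node to one of
    k communities uniformly at random."""
    return {v: rng.randrange(k) for v in nodes}

def generate_graph(partition, p_in, p_out, rng):
    """Step 2 (an assumed SBM-like edge process g): connect each node
    pair with probability p_in inside a community, p_out across."""
    edges = set()
    for u, v in combinations(sorted(partition), 2):
        p = p_in if partition[u] == partition[v] else p_out
        if rng.random() < p:
            edges.add((u, v))
    return edges

rng = random.Random(0)
T = sample_partition(range(8), 2, rng)
# With p_in = 1 and p_out = 0, G is exactly the union of cliques on the
# communities of T, whatever the random draws.
G = generate_graph(T, p_in=1.0, p_out=0.0, rng=rng)
```

Community detection then receives only \texttt{G} (and perhaps metadata) and must guess \texttt{T} without access to the generator.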
Given a graph $G = (V, E)$, we can imagine multiple truths \(\ensuremath{\mathcal{T}}_i \in \Omega\) that could define its edge set \(E\) by different generative processes~\(g_i: \Omega \to \Gamma \), where \(\Gamma\) is the set of all graphs with \(N\) nodes. \citet{peel2017ground} give a proof that extends from this simple example: Imagine that \(\ensuremath{\mathcal{T}}_1\) partitions the \(N\) nodes into \(N\) components (the \(N\)-partition), and \(\ensuremath{\mathcal{T}}_2\) partitions them into \(1\) component (the \(1\)-partition). Let \(g_1\) exactly specify the number of edges between each pair of communities, such that \(g_1(\ensuremath{\mathcal{T}}_1)\) is \(\ensuremath{G}\) with probability 1. Similarly, let \(g_2\) be an Erd\H{o}s--R\'enyi model such that \(g_2(\ensuremath{\mathcal{T}}_2)\) is \(\ensuremath{G}\) with nonzero probability. (\citet{peel2017ground} note that this is easily extended to graphs with more nodes.) We thus have two different ways to create a single graph; how can a method discern the correct one, without knowledge of \(g\)? Community detection is then an ill-posed inverse problem: Use a function \(f: \Gamma \to \Omega\) to produce a clustering~\(\ensuremath{\mathcal{C}} = f(G)\), which is hopefully representative of \ensuremath{\mathcal{T}}{} \citep[Appendix C]{peel2017ground}.\footnote{That is, the objective is to find \(f = g^{-1}\).} The correspondence between truths and graphs is many-to-many, so there is not a unique \ensuremath{\mathcal{T}}{} represented in the given graph. Our algorithm~\(f\) must encode our prior beliefs about the generative process~\(g\) to select from among candidates. For this reason, we must hope that the benchmark graphs that we use are representative of the generative process for graphs in our real-world applications. That is, we hope that our benchmark domain matches our practical domain.
\paragraph{Other set-partitioning tasks} While the remainder of this work focuses on community detection, our claims are relevant to other set-partitioning tasks. Notable examples are clustering (the vector space analogue to community detection), graph $k$-partitioning, and \(k\)-clustering. Metadata about the nodes and edges, such as vector coordinates, are used to guide the identification of such structure, but the tasks are all fundamentally set-partitioning problems. They can also have different universes \(\Omega\)---the latter tasks have a smaller universe than does community detection, for a given graph~\(G\): They consider only partitions with a fixed number of clusters. \subsection{No Free Lunch theorems} The \term{No Free Lunch theorem} in machine learning is a claim about the universal (in)effectiveness of learning algorithms. Every algorithm performs equally well when averaging over all possible input--output pairs. Formally, for any learning method $f$, the error (or \term{loss})~\ensuremath{\mathcal{L}}{} of the method \(f\), summed over all possible problems \(( g, \ensuremath{\mathcal{T}})\) equals a loss-specific constant \(\Lambda(\ensuremath{\mathcal{L}})\): \begin{equation} \sum_{( g, \ensuremath{\mathcal{T}} )} \ensuremath{\mathcal{L}} \left(\ensuremath{\mathcal{T}}, f\left(g\left(\ensuremath{\mathcal{T}}\right)\right)\right) = \Lambda(\ensuremath{\mathcal{L}})\text{,} \label{eqn:nfl} \end{equation} defining the edge-generative process \(g\) and partition \(\ensuremath{\mathcal{T}}\) as above. This loss is thus \emph{generalizer-independent}. To reduce loss on a particular set of problems means sacrificing performance on others---\say{\emph{there is no free lunch}} \citep{wolpert1996lack, schumacher2001no}. 
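The sum in \autoref{eqn:nfl} can be checked numerically on a toy universe (our own sketch, not from the paper): take \(N = 4\), identify each problem with its truth \(\ensuremath{\mathcal{T}}\) under a fixed deterministic \(g\), and use the generalizer-independent 0--1 loss on partitions. Any two constant-output \say{algorithms} then accrue the same total loss over all truths.

```python
# Toy numeric check of the No Free Lunch sum: with a generalizer-independent
# loss, every fixed predictor accrues the same total loss over all truths.
def partitions(items):
    """Enumerate all set partitions of `items` (their count is a Bell number)."""
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [head]] + part[i + 1:]
        yield [[head]] + part

def canon(part):
    """Order-independent representation of a set partition."""
    return frozenset(frozenset(block) for block in part)

def loss(truth, guess):
    """0-1 loss on partitions: a simple generalizer-independent loss."""
    return 0 if truth == guess else 1

omega = [canon(p) for p in partitions([0, 1, 2, 3])]  # Bell(4) = 15 truths

# Two different "algorithms", each ignoring the input and guessing a constant:
# here the 1-partition (omega[0]) and the N-partition (omega[-1]).
totals = [sum(loss(t, guess) for t in omega) for guess in (omega[0], omega[-1])]
print(totals)  # -> [14, 14]
```

Each constant predictor matches exactly one truth and misses the other fourteen, so both totals equal \(\mathcal{B}_4 - 1 = 14\): neither guess is privileged.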
Judiciously choosing which set to improve involves making assumptions about the distribution of the data: as we've mentioned, \(k\)-means is a method for \(k\)-clustering which works well on data with spherical covariance, similar cluster variances, and roughly equal cluster sizes. When these assumptions are violated, performance suffers, restoring the overall balance that the theorem demands. \subsection{Community detection as supervised learning} We follow \citet{peel2017ground} in framing the task of community detection (CD) as a learning problem. While recent algorithms, e.g.\ \citet{chen2018supervised}, have introduced learnable parameters to community detection algorithms, the CD literature's algorithms are by and large untrained. These untrained algorithms encode knowledge of the problem domain in prior beliefs. We note that our work and \citet{peel2017ground} straightforwardly handle both of these cases. In a general supervised machine learning problem, we seek to learn the function that maps an input space \(\mathbf{X}\) to an output space \(\mathbf{Y}\). We consider problem instances as sampled from random variables over each, so our goal is to learn the conditional distribution~\(p(Y \mid X)\). In the process of training on a dataset \(\mathcal{D}\), we develop a distribution over hypotheses~\(q\) which are estimates of the distribution~\(p\). In the case of most community detection algorithms, our input space is the set of graphs on \(N\) nodes \(\Gamma\), and the output space is \(\Omega\). There is no training data: \(\mathcal{D} = \varnothing\). All of our prior beliefs about \(p\) must be encoded in the prior distribution \(\Pr(q)\). That is, the model itself must contain our beliefs about the definition of community structure. Only from the encoded \(\Pr(q)\) and an observed \(x \in \mathbf{X}\) (our graph \ensuremath{G}) do we form our point estimate of the true distribution~\(p\) \citep{peel2017ground}.
However, in the case of trainable CD algorithms, we encode our beliefs in the posterior distribution~\(\Pr(q \mid \mathcal{D})\). \subsection{Loss functions and a priori superiority} How should we evaluate an algorithm's predictions? Classification accuracy won't cut it: When comparing to the ground truth, there are no specific labels (e.g.\ no notion of a specific \say{Cluster 2})---only unlabeled groups of like entities. We settle for a measure of similarity in the groupings, quantifying how much the computed partition tells us about the ground truth. A popular choice of measure is the \emph{normalized mutual information} \citep[NMI;][]{kvalseth1987entropy} between the prediction and the ground truth. While this measure has a long history in community detection, its flaws have been well-noted \citep{vinh2009information, peel2017ground,mccarthy2018normalized,mccarthy2019metrics}. It imposes a \say{geometric} structure upon the universe \(\Omega\),\footnote{To take the example of \citet{peel2017ground}, \(L^2\) loss (squared Euclidean distance) imposes a geometric structure: In the task of guessing points in the unit circle, guessing the center will garner a higher reward, on average, than any other point.} so something as simple as guessing the trivial all-singletons clustering outperforms methods that try at all to find a mesoscopic-level structure \citep{mccarthy2019metrics}. The property which NMI lacks is \emph{generalizer-independence}. Generalizer-independence is defined in terms of the generalization error, the expected loss $\Expect[L \mid p, q, \mathcal{D}]$. To satisfy this property, the generalization error must be independent of the particular true value \ensuremath{\mathcal{T}}{}. This is best expressed by \autoref{eqn:nfl}. The adjusted mutual information (AMI, defined in \autoref{prevres}) \citep{vinh2009information} is a proposed replacement for NMI which does not impose a geometric structure upon the space.
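The inflated reward for the all-singletons guess can be seen concretely in a toy example (our own construction; we use the max-entropy normalization of NMI, one common variant among several):

```python
# NMI rewards the trivial all-singletons guess over a genuine but wrong
# two-block guess. Partitions below are invented for illustration.
from math import log

def entropy(part, n):
    return -sum(len(b) / n * log(len(b) / n) for b in part)

def mutual_info(c, t, n):
    """Mutual information between two partitions of n objects."""
    total = 0.0
    for bc in c:
        for bt in t:
            k = len(set(bc) & set(bt))
            if k:
                total += k / n * log((k / n) / ((len(bc) / n) * (len(bt) / n)))
    return total

def nmi(c, t, n):
    """NMI with max-entropy normalization (one common variant)."""
    return mutual_info(c, t, n) / max(entropy(c, n), entropy(t, n))

n = 4
truth      = [[0, 1], [2, 3]]
singletons = [[0], [1], [2], [3]]   # encodes no mesoscopic structure at all
wrong      = [[0, 2], [1, 3]]       # an honest two-block attempt, but wrong

print(round(nmi(singletons, truth, n), 6))  # -> 0.5
print(round(nmi(wrong, truth, n), 6))       # -> 0.0
```

The singleton guess inherits half the maximum score simply because it refines every truth, while the genuine two-block attempt scores zero: the \say{geometric} bias in action.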
Unfortunately, this benefit is not fully realized when the expectation is computed over a space $\Psi \subset \Omega$. For the $\Psi$ used in \citet{peel2017ground}, the expected AMI across all problems is only \emph{asymptotically} generalizer-independent as the graph size grows---it is within some diminishing amount of error \(\varepsilon(N)\) of generalizer-independence, as proven by \citet{peel2017ground}. \section{Previous Result: Approximate No Free Lunch Theorem}\label{prevres} \citet{peel2017ground} frame community detection in the style of learning algorithms, letting them prove a No Free Lunch theorem for community detection. They note that the claim holds for \say{an appropriate choice of \ldots \(\mathcal{L}\)}---specifically a loss function~\(\mathcal{L}\) that is generalizer-independent---but their chosen loss function is not fully generalizer-independent. They also consider a stricter property than generalizer-independence: \term{homogeneity}. With a homogeneous loss function, the \emph{distribution} of the error (not just its expectation) is identical, regardless of the ground truth. A measure which deviates from homogeneity may have this deviation bounded by a function of the number of vertices (the graph order). \begin{lemma}[\citealp{peel2017ground}] \label{thm:old-ami-homogeneous} Adjusted mutual information (AMI) is a homogeneous loss function over the interior of the space of partitions of \(N\) objects, i.e., excluding the \(1\)-partition and the \(N\)-partition. Including these, AMI is homogeneous within~\(\frac{1}{\mathcal{B}_N}\).\footnote{ \(\mathcal{B}_N\) is the \(N\)-th Bell number, i.e., the number of partitions of a set of \(N\) nodes.} \end{lemma} \citet{wolpert1996lack} gives a generalized No Free Lunch theorem, which assumes a homogeneous loss. 
\begin{theorem}[\citealp{wolpert1996lack}] \label{thm:nfl-homogeneity} For homogeneous loss \(\mathcal{L}\), the uniform average over all distributions \(p\) of \(\Pr\left(\ell \mid p, \mathcal{D}\right)\) equals \(\frac{\Lambda(\ell)}{|\mathbf{Y}|}\). \emph{(Plainly, \say{there is no free lunch}.)} \end{theorem} \citet{peel2017ground} then use Wolpert's result with their inexactly homogeneous measure to claim a No Free Lunch result. \begin{theorem}[\citealp{peel2017ground}] \label{thm:bad-nfl} By \autoref{thm:old-ami-homogeneous} and \autoref{thm:nfl-homogeneity}, for the community detection problem with a loss function of AMI, the uniform average over all distributions \(p\) of \(\Pr(\ell \mid p, \mathcal{D})\) equals \(\frac{\Lambda(\ell)}{|\mathbf{Y}|}\). \end{theorem} But this choice of measure (AMI) is not, in fact, homogeneous over the \emph{entire} universe \(\Omega\) (\autoref{thm:old-ami-homogeneous}). A strategy that guesses either of the non-interior (i.e., boundary) partitions---the \(1\)-partition or \(N\)-partition---will yield a higher-than-average reward. There is indeed a negligible amount of free lunch---a free morsel, if you will. \section{Diagnosis: Random Models} \begin{figure} \centering \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{a2} \caption{Ground truth has cluster size pattern \(\{2, 1\}\).} \end{subfigure} ~ \begin{subfigure}[b]{0.48\linewidth} \includegraphics[width=\linewidth]{a1} \caption{Ground truth has cluster size pattern \(\{3\}\).} \end{subfigure} \caption{\model{all} and \model{perm} when clustering three nodes, for two different ground truths (circled). The top and bottom clusterings---the $1$ and $N$ clusterings---are the boundary partitions. All other partitions form the interior. 
\model{perm} changes based on the ground truth, but \model{all} stays the same.} \label{fig:random-models} \end{figure} \citet{peel2017ground} use AMI out of the box, as proposed by \citet{vinh2009information}, which involves subtracting an expected value from a raw score. Unfortunately, AMI as given takes its expectation over the wrong distribution. Because of the mismatch, the claim of homogeneity by \citet{peel2017ground} is accurate only to within \(\frac{1}{\mathcal{B}_N}\) when considering the trivial partitions into either one community or \(N\)~communities. Correcting this is arguably a pedantic demand, for two reasons: \begin{enumerate} \item The fraction \(\frac{1}{\mathcal{B}_N}\) converges to 0 superexponentially as \(N\) increases. \item The deficiency is only present when \ensuremath{\mathcal{T}}{} is one of the trivial partitions. Otherwise, AMI as used is exactly homogeneous. But the trivial partitions reflect a lack of any mesoscopic community structure. \end{enumerate} Nevertheless, we'd like to see a tight claim of generalizer-independence. To do this, we must select the proper \term{random model}, a sample space for a distribution.
AMI adjusts NMI by subtracting the expected value from both the numerator and the denominator, shown in blue: \begin{equation} \label{eqn:ami} \AMI(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) \triangleq \frac% {I(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) \mathcolor{blue}{- \Expect_{\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}'}\left[ I(\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}') \right] }}% {\mathcolor{magenta}{\max_{\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}'} I(\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}') } \mathcolor{blue}{- \Expect_{\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}'}\left[ I(\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}') \right] }} \textrm{,} \end{equation} where \(I\) is the mutual information, maximized when the specific clustering~\ensuremath{\mathcal{C}}{} equals the ground truth~\ensuremath{\mathcal{T}}{}. By inspecting \autoref{eqn:ami}, we see that AMI's value is \(1\) (the maximum) when \(\ensuremath{\mathcal{C}} = \ensuremath{\mathcal{T}}\), \(0\) in expectation, and negative when the agreement between \ensuremath{\mathcal{C}}{} and \ensuremath{\mathcal{T}}{} is worse than chance. Subtly hidden in this equation is the decision of which distribution to compute the expectation over. For decades, this distribution has been what \citet{gates2017impact} call \model{perm}: all partitions of the same \term{partition shape}\footnote{A multiset of cluster sizes, also called a decomposition pattern \citep{hauer2016decoding} or a group-size distribution \citep{lai2016corrected}. It is equivalent to an integer partition of \(N\).} as \ensuremath{\mathcal{C}}{} or \ensuremath{\mathcal{T}}{}. For example, if \ensuremath{\mathcal{C}}{} partitioned 7 nodes into clusters of sizes 2, 2, and 3, then we would compute the expected mutual information over all clusterings where one had cluster sizes of 2, 2, and 3. \citet{mccarthy2019metrics} argue that \model{perm} is inappropriate. 
To use this random model assumes that we can only produce outputs within that restricted space, when in actuality \(\Omega\) is the set of \emph{all} partitions of \(N\) nodes. Furthermore, during evaluation, we hold our ground truth fixed, rather than marginalizing over possible ground truths. Were we to instead consider a distribution over \ensuremath{\mathcal{T}}{}s, we would add noise from other possible generative processes which yield the same graph from different underlying partitions. In our average, we might be including scores on ground truths that better align with our notions of, say, core--periphery partitioning. For this reason, we take a \term{one-sided expectation}---over \(\mathcal{C}\), holding \(\mathcal{T}\) fixed. The one-sided distribution over all partitions of \(N\) nodes is called \modelone{all} \citep{gates2017impact}. This distribution is what we use for our AMI expectation, giving a measure denoted as \(\mathrm{AMI}_{\textrm{all}}^1\), which is recommended by \citet{mccarthy2019metrics}. It takes the form \begin{equation} \label{eqn:ami-all} \AMI_{\mathrm{all}}^1(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) \triangleq \frac% {I(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) \mathcolor{blue}{- \Expect_{\ensuremath{\mathcal{C}}^\prime \sim \modelone{all}}\left[ I(\ensuremath{\mathcal{C}}^\prime, \ensuremath{\mathcal{T}}) \right] }}% {\mathcolor{magenta}{\max_{\ensuremath{\mathcal{C}}'} I(\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}) } \mathcolor{blue}{- \Expect_{\ensuremath{\mathcal{C}}^\prime \sim \modelone{all}}\left[ I(\ensuremath{\mathcal{C}}^\prime, \ensuremath{\mathcal{T}}) \right] }} \textrm{.} \end{equation} The differences between \model{all} and \model{perm} are illustrated in \autoref{fig:random-models} for \(|V| = 3\). We will now show that substituting \model{all} for \model{perm}, hence using \(\AMI_{\rm all}^1\), allows for an exact No Free Lunch theorem.
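Before the formal argument in the next section, \autoref{eqn:ami-all} can be checked by brute force on a small universe (our own sketch): for every ground truth, boundary partitions included, the scores summed over all clusterings in \(\Omega\) come out identical. We enumerate the \(\mathcal{B}_4 = 15\) partitions of four nodes, and we adopt the convention \(\mathrm{AMI} := 0\) for the degenerate \(1\)-partition truth, where the numerator and the denominator both vanish.

```python
# Brute-force check that sum_C AMI_all^1(C, T) is the same for every truth T.
from math import log

def partitions(items):
    """Enumerate all set partitions of `items`."""
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [head]] + part[i + 1:]
        yield [[head]] + part

def mutual_info(c, t, n):
    total = 0.0
    for bc in c:
        for bt in t:
            k = len(set(bc) & set(bt))
            if k:
                total += k / n * log((k / n) / ((len(bc) / n) * (len(bt) / n)))
    return total

n = 4
omega = list(partitions(list(range(n))))  # all Bell(4) = 15 partitions

def sum_ami_all_one(t):
    """Sum of AMI_all^1(C, t) over every clustering C in Omega."""
    scores = [mutual_info(c, t, n) for c in omega]
    expect = sum(scores) / len(scores)          # one-sided M_all expectation
    denom = max(scores) - expect
    if denom == 0:  # 1-partition truth: every score is 0 (our convention)
        return 0.0
    return sum((s - expect) / denom for s in scores)

totals = [sum_ami_all_one(t) for t in omega]
print(max(abs(x) for x in totals) < 1e-9)  # -> True
```

Every truth yields the same total (zero, up to floating-point error), with no special treatment of the boundary partitions beyond the degenerate-denominator convention noted above.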
\section{An Exact No Free Lunch Theorem} We strengthen the No Free Lunch theorem for community detection given by \citet{peel2017ground} by using an improved loss function, \(\mathrm{AMI}_{\textrm{all}}^1\). Our proof does not distinguish the \say{boundary} partitions (the two trivial partitions) from the \say{interior} partitions (the remainder). It is entirely agnostic toward the particular ground truth \ensuremath{\mathcal{T}}{}, which is exactly what we need. We improve the previous result by moving from \model{interior} (which excludes the boundary partitions) to \model{all}. \subsection{Generalizer-independence of \(\AMI_{\mathrm{all}}^1\)} \begin{lemma} \label{thm:homogeneity} \(\AMI_{\mathrm{all}}^1\) is a generalizer-independent loss function over the \emph{entire} space \model{all} of partitions of \(N\)~objects. \end{lemma} \begin{proof} Like \citet{peel2017ground}, we must show that the sum of scores is independent of~\ensuremath{\mathcal{T}}: \begin{equation} \label{eqn:L} \forall \ensuremath{\mathcal{T}}_1, \ensuremath{\mathcal{T}}_2,\quad \sum_{\ensuremath{\mathcal{C}} \in \Omega} \AMI_{\mathrm{all}}^1 \left(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}_1\right) = \sum_{\ensuremath{\mathcal{C}} \in \Omega} \AMI_{\mathrm{all}}^1 \left(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}_2\right) \text{,} \end{equation} where \(\Omega\)~is the space of all partitions of \(N\)~objects. Unlike \citet{peel2017ground}, we take the AMI expectation over all \(\mathcal{B}_N\) clusterings in \(\Omega\) using the random model \modelone{all} \citep{gates2017impact}. To prove our claim about \autoref{eqn:L}, we note that the denominator of \(\AMI_{\mathrm{all}}^1\) is a constant with respect to \ensuremath{\mathcal{C}}~(\autoref{eqn:ami-all}), so we can factor it out of the sum and restrict our attention to the numerator.
This is because the max-term in the denominator is the constant \(\log N\) \citep{gates2017impact} and the expectation term for a given \ensuremath{\mathcal{T}}{} is independent of the particular \ensuremath{\mathcal{C}}{}. Having factored this out, we will now prove \autoref{eqn:L} by the stronger claim: \begin{equation} \label{eqn:numerators} \sum_{\ensuremath{\mathcal{C}} \in \Omega} \left[ I(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) - \Expect_{\ensuremath{\mathcal{C}}' \sim \modelone{all}}\left[ I(\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}) \right] \right] \stackrel{?}{=} 0 \quad \forall \, \ensuremath{\mathcal{T}} \end{equation} To prove \autoref{eqn:numerators}, we separate the summation's two terms: \begin{equation} \sum_{\ensuremath{\mathcal{C}} \in \Omega} \left[ I(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) \right] - \sum_{\ensuremath{\mathcal{C}} \in \Omega} \left[ \Expect_{\ensuremath{\mathcal{C}}' \sim \modelone{all}}\left[ I(\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}) \right] \right] \end{equation} The expectation is uniform over the universe \(\Omega\),\footnote{Why do we assume uniformity over \(\Omega\)? Because this is the highest-entropy (i.e., least informed) distribution---it places the fewest assumptions on the distribution.} so we can apply the law of the unconscious statistician, then push the constant probability out, to get \begin{equation} \sum_{\ensuremath{\mathcal{C}} \in \Omega} \left[ I(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) \right] - \sum_{\ensuremath{\mathcal{C}} \in \Omega} \left[ \frac{1}{|\Omega|} \sum_{\ensuremath{\mathcal{C}}' \in \Omega}\left[ I(\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}) \right] \right] \end{equation} Because the inner sum is independent of any particular \(\ensuremath{\mathcal{C}}{}'\), the outer sum is a sum of constants---one for each element in \(\Omega\). 
We can now express \autoref{eqn:numerators} as follows, where the reciprocals straightforwardly cancel out: \begin{equation} \sum_{\ensuremath{\mathcal{C}} \in \Omega} \left[ I(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) \right] - |\Omega| \frac{1}{|\Omega|} \sum_{\ensuremath{\mathcal{C}}' \in \Omega}\left[ I(\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}) \right] \equiv 0 \text{.} \end{equation} This equivalence implies that \autoref{eqn:L} is true. \end{proof} The proof is valid without loss of generality vis-\`a-vis the distribution---that is, as long as the AMI expectation is computed uniformly over the problem universe \(\Omega\), AMI is a generalizer-independent measure. This stipulation is relevant to tasks which assume a fixed number of clusters---using \model{num}---like \(k\)-clustering and graph partitioning. With generalizer-independence in hand, we can define our loss function as, say, \begin{equation} \ensuremath{\mathcal{L}} (\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) = 1 - \AMI_{\mathrm{all}}^1(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}})\text{.} \end{equation} The loss is zero when we exactly match the true clustering and positive otherwise. Having proven the generalizer-independence of \(\mathrm{AMI}_{\mathrm{all}}^1\), we now turn to a more general form of the No Free Lunch theorem, which admits not just a homogeneous loss function but any generalizer-independent loss. \begin{theorem}[\citealp{wolpert1996lack}] \label{thm:nfl-gi} For generalizer-independent loss \(\ell\), the uniform average over all \(p\), \(\mathbb{E}\left[\ell \;\middle\vert\; p, \mathcal{D}\right]\), equals \(\frac{\Lambda(\ell)}{|\mathbf{Y}|}\). (Plainly, \say{There is no free lunch.}) \end{theorem} \begin{proof} See \citet{wolpert1996lack}.
\end{proof} \begin{theorem}[No Free Lunch theorem for community detection and other set-partitioning tasks] For a set-partitioning problem with a loss function of adjusted mutual information \emph{using the appropriate random model for the task}, the uniform average over all \(p\), \(\mathbb{E}\left[\ell \;\middle\vert\; p, \mathcal{D}\right]\), equals \(\frac{\Lambda(\ell)}{|\mathbf{Y}|}\). \end{theorem} \begin{proof} \autoref{thm:homogeneity} proves that AMI \emph{using the appropriate random model} is generalizer-independent. Applying \autoref{thm:nfl-gi} completes the proof \citep{peel2017ground}. \end{proof} \subsection{Other measures} AMI stemmed from a series of efforts to improve normalized mutual information (NMI). We note that six other measures, when extended to \modelone{all} instead of \model{perm}, are also generalizer-independent: the adjusted Rand index \citep[ARI;][]{hubert1985comparing}, relative NMI \citep[rNMI;][]{zhang2015evaluating}, ratio of relative NMI \citep[rrNMI;][]{zhang2015relationship}, Cohen's \(\kappa\) \citep{liu2018evaluation}, corrected NMI \citep[cNMI;][]{lai2016corrected}, and standardized mutual information \citep[SMI;][]{romano2014standardized}. We elide the proofs because they are similar to \autoref{thm:homogeneity}. Each of the six measures satisfies the precondition for the No Free Lunch theorem when the random model matches the problem domain. Of late, a renewed push has advocated using the adjusted Rand index \citep[ARI;][]{hubert1985comparing} to evaluate community detection; in fact, ARI and AMI are specializations of the same underlying function which uses \emph{generalized} information-theoretic measures~\citep{romano2016adjusting}. Every claim in the proof works for ARI, by replacing every mutual information \(I\) term with the Rand index \(\mathrm{RI}\).
Another line of research, focusing on improving NMI, produced rNMI \citep{zhang2015evaluating}, rrNMI \citep{zhang2015relationship}, and cNMI \citep{lai2016corrected}. We note that rrNMI is identical to one-sided AMI when both are extended to \modelone{all}. Consequently, our claim above works just as well for rrNMI. Further, because we were able to ignore the denominator of AMI in our proof of \autoref{thm:homogeneity}, we can do the same for rrNMI, which gives its unnormalized variant, rNMI. This means that rNMI is a generalizer-independent measure as well, when used with the appropriate one-sided random model. The practical benefit of normalizing rNMI into rrNMI is that the normalized measure gives a more interpretable notion of success. Additionally, \autoref{thm:homogeneity} holds true for standardized mutual information (which is equivalent to standardized variation of information and standardized V-measure) \citep{romano2014standardized}, the adjusted variation of information \citep{vinh2009information}, and for Cohen's $\kappa$, advocated for CD by \citet{liu2018evaluation}. This is because each measure shares the form of AMI: an observed score minus an expectation. Finally, to show whether cNMI is generalizer-independent under the correct random model, we must show how to specialize it into a one-sided variant, because there is room for interpretation about how this should be done, even restricting our focus to \modelone{all}.
The expression for cNMI \begin{equation} \cNMI(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) \triangleq { \frac% {2\NMI(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) \mathcolor{blue}{- \Expect_{\ensuremath{\mathcal{C}}'}\left[ \NMI(\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}) \right] - \Expect_{\ensuremath{\mathcal{T}}'}\left[ \NMI(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}') \right] }}% {2 \mathcolor{blue}{- \Expect_{\ensuremath{\mathcal{C}}'}\left[ \NMI(\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{C}}) \right] - \Expect_{\ensuremath{\mathcal{T}}'}\left[ \NMI(\ensuremath{\mathcal{T}}, \ensuremath{\mathcal{T}}') \right] }} } \end{equation} depends on both \ensuremath{\mathcal{C}}{} and \ensuremath{\mathcal{T}}{} relative to the universes that contain them. Our specialization should remove dependence on the family of \ensuremath{\mathcal{T}}, so we arrive at the following expression after cancellation and noting that the NMI between a clustering and itself is 1: \begin{equation} \cNMI(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) = { \frac% {\NMI(\ensuremath{\mathcal{C}}, \ensuremath{\mathcal{T}}) \mathcolor{blue}{- \Expect_{\ensuremath{\mathcal{C}}'}\left[ \NMI(\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{T}}) \right]}}% {1 \mathcolor{blue}{- \Expect_{\ensuremath{\mathcal{C}}'}\left[ \NMI(\ensuremath{\mathcal{C}}', \ensuremath{\mathcal{C}}) \right]}} } \end{equation} As it turns out, this quasi-adjusted measure is also generalizer-independent. In general, we now have a recipe for generalizer-independent loss functions: They can be created by subtracting the expected score from the observed score. This recipe works whenever a uniform expectation can be well defined. \section{Conclusion} We now have a proof of the No Free Lunch theorem for community detection and clustering that is both complete and exact. 
We show that a corrected form of AMI, namely \(\AMI_{\mathrm{all}}^1\), computes its expectation in a way that does not advantage the boundary partitions ($1$ cluster and $N$ singleton clusters). Indeed, this expectation is over the entire universe of partitions \(\Omega\), rather than any proper subset, such as the historically common \model{perm}. We affirm the claim: \say{Any subset of problems for which an algorithm outperforms others is balanced by another subset for which the algorithm underperforms others. Thus, there is no single community detection algorithm that is best overall} \citep{peel2017ground}. It is still possible for an algorithm to perform better on a \emph{subset} of community detection problems, so we can strive toward improved results on such a subset. To have any hope of performing well, we must make explicit our assumptions about the subset of problems we expect to encounter. Some work has been done on estimating network properties to select the correct algorithm for the task at hand---a coarse way of checking assumptions \citep{peel2011estimating, yang2016comparative}. Beyond this, though, we must clarify what the problem of community detection \emph{is}; the formulation we choose will guide which subset of problem instances to prioritize and which to sacrifice. \section*{Acknowledgments} The authors thank, alphabetically by surname, Daniel Larremore, Leto Peel, David Wolpert, Patrick Xia, and Jean-Gabriel Young for discussions that improved the work. Any mistakes are the authors' alone. \bibliographystyle{acl_natbib}
\section{Introduction} Nonlinear complex wave fields are known to support quantized vortices in two and three spatial dimensions \cite{Pismen,PS1,PS2,KFC2015,SF2000,FS2001,F2009,PGFK2004}. Vortices have also been studied in discrete systems (on lattices; see, e.g., \cite{MK2001,KMFC2004,CKMF2005,KFCMB2005,LSCASS2008,KMT2011,CJKL2009,LSK2014} and citations therein). As far as weakly dissipative lattice dynamics is concerned, among the most popular mathematical models are modifications of a discrete nonlinear Schr\"odinger equation (DNLSE) \cite{CJKL2009,LSCASS2008,KMT2011,LSK2014,HT1999,KP1992,MK2001,DKMF2003,KMFC2004, CKMF2005,KFCMB2005,XZL2008,ACMSSW2010,RVSDK2012,ACM2014,AC2019,R2019}. They arise in various scientific contexts (but mostly in nonlinear optics \cite{LSCASS2008,KMT2011} and in physics of nonlinear metamaterials \cite{LSK2014}), where we have nearly identical oscillators with their normal complex variables $a_n(t)=A_n(t)\exp(-i\omega_0 t)$, and with nonlinear frequency shifts $g|A_n|^2\ll \omega_0$. The simplest form of DNLSE is \begin{equation} i(\dot A_n+\gamma \omega_0 A_n) = g|A_n|^2A_n -\frac{1}{2}\sum_{n'}c_{n,n'}A_{n'}, \label{A_n_eq} \end{equation} where the overdot denotes the time derivative. A linear damping rate $\gamma\omega_0$ takes into account dissipative effects, with small $\gamma=1/Q\ll 1$ being the inverse quality factor. Oscillators are weakly coupled by (real) coefficients $c_{n,n'}=c_{n',n}\ll \omega_0$ (if the coupling strength and/or the nonlinearity level are not weak, then more complicated forms of DNLSE appear, including nonlinearities in the coupling terms \cite{LSCASS2008,KMT2011}). In many interesting cases, the multi-index $n$ is a node ${\bf n}=(n_1,\dots,n_d)$ of a simple regular lattice in one, two, or three spatial dimensions ($d=1$, $d=2$, and $d=3$, respectively).
Besides electromagnetic artificially created structures \cite{LSK2014}, DNLSE has been successfully applied in nonlinear optics, where it describes the stationary regime of light propagation in waveguide arrays \cite{LSCASS2008} (one-dimensional (1D) and two-dimensional (2D) cases, with the time variable $t$ replaced by the propagation coordinate $z$). The coupling coefficients $c_{{\bf n},{\bf n}'}$ are often taken to be translationally invariant on the lattice and to couple only a few near neighbors. If they have a definite sign, then in the long-scale quasi-continuous limit we have either the defocusing regime (when $gc>0$) or the focusing one (when $gc<0$). Accordingly, different nonlinear coherent wave structures can take place in each case. In particular, in the most well-studied focusing regime, there are highly localized discrete solitons and discrete vortex solitons (see \cite{MK2001,KMFC2004,CKMF2005,KFCMB2005,LSCASS2008}, and references therein). In the defocusing regime, there are dark solitons, and besides that, discrete analogues of superfluid quantized vortices can be excited and interact with each other over long distances. In this work, we consider discrete vortices, but in somewhat more complicated arrangements where the coupling coefficients are not translationally invariant, $c_{{\bf n}+{\bf l},{\bf n}'+{\bf l}}\neq c_{{\bf n},{\bf n}'}$, and the corresponding terms contain differences $(A_n-A_{n'})$ instead of $(-A_{n'})$, \begin{equation} i(\dot A_n+\gamma\omega_0 A_n) = g|A_n|^2A_n +\frac{1}{2}\sum_{n'}c_{n,n'}(A_n-A_{n'}). \label{A_n_eq_diff} \end{equation} In general, equations (\ref{A_n_eq}) and (\ref{A_n_eq_diff}) are not equivalent. The exception is infinite, uniform lattices, where they are related to each other by a simple gauge transformation.
It is important that Eq.(\ref{A_n_eq_diff}), with any coefficients $c_{n,n'}$, admits a class of spatially uniform solutions, \begin{equation} A_n=A_0\exp\big[-\gamma\omega_0 t -ig|A_0|^2(1-e^{-2\gamma\omega_0 t})/(2\gamma\omega_0)\big]. \label{background} \end{equation} However, spatial nonuniformity of the couplings should strongly affect vortex dynamics on this background, since vortices are known to have highly de-localized phase gradients even when the amplitude variation (the vortex core) is localized. Continuous quantized vortices on spatially nonuniform backgrounds have been extensively studied in application to trapped Bose-Einstein condensates, where the nonuniformity is introduced by an external potential (see, e.g., Refs. \cite{SF2000,FS2001,F2009,PGFK2004,RP1994,AR2001,GP2001,R2001,A2002,AR2002,RBD2002,AD2003, AD2004,D2005,Kelvin_waves,ring_instability,v-2015,BWTCFCK2015,R2017-1,R2017-2,R2017-3, R2018-1,R2018-5,reconn-2017,top-2017,WBTCFCK2017,TWK2018,TRK2019}, and citations therein). The effects of dispersive nonuniformity have not yet been studied. Therefore the first goal of this work is to investigate such effects for vortices on discrete lattices within model (\ref{A_n_eq_diff}). For simplicity, we consider below a square lattice and interactions between the nearest neighbors in the form \begin{equation} c_{{\bf n},{\bf n}'}=f(h[{\bf n}+{\bf n}']/2), \label{f} \end{equation} where $h\ll 1$ is the lattice spacing, and $f(x,y)$ is a sign-definite function varying on scales $(\Delta x;\Delta y)\sim 1$. Eq.(\ref{A_n_eq_diff}) with nonuniform couplings was recently introduced in a formal manner as a three-dimensional (3D) discrete system supporting long-lived vortex knots \cite{R2019}, but no physical prototype was indicated there. In the present work, as a possible physical implementation approximately corresponding to this equation, we theoretically propose and analyze a specially designed electric circuit network. 
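That the uniform state of Eq.~(\ref{background}) solves Eq.~(\ref{A_n_eq_diff}) for arbitrary couplings (the coupling terms vanish identically when all $A_n$ are equal) is easy to verify numerically. The sketch below (dimensionless units, illustrative parameter values) checks that the residual of Eq.~(\ref{A_n_eq_diff}) vanishes up to finite-difference error.

```python
import cmath
import math

def A_uniform(t, A0, g, gw0):
    # Uniform background solution: amplitude decays as exp(-gw0*t),
    # phase accumulates the slowly saturating nonlinear shift
    amp = math.exp(-gw0 * t)
    ph = -g * abs(A0)**2 * (1 - math.exp(-2 * gw0 * t)) / (2 * gw0)
    return A0 * amp * cmath.exp(1j * ph)
```

Substituting this into the single-node equation $i(\dot A+\gamma\omega_0 A)=g|A|^2A$ (the coupling sum is zero on a uniform state) gives a residual limited only by the accuracy of the numerical derivative.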
Implementation of discrete nonlinear dynamic systems in the form of 1D and 2D electric networks has a long and rich history \cite{HS1973,HCS1996,SO1999,CGBFR1998,KMR1988,MBR1994,MBR1995,LM1999,MREV2001,YMB2003, EPSKB2010,PECCK2011,EPSCCK2013,SKVFDLR2017,PECLCK2019}, including even experimental simulations of the integrable Toda chain \cite{HS1973,HCS1996,SO1999,CGBFR1998}. Major attention has been devoted to modulationally unstable systems. Here we consider a network possessing the stable solutions (\ref{background}). We adopt a scheme consisting of nonlinear oscillator circuits coupled by relatively small, non-equal capacitors, as shown in Fig.\ref{scheme}. It will be derived below that both the nonlinear constant $g$ and the coupling coefficients $c_{{\bf n},{\bf n}'}$ are negative in this case, so the corresponding DNLSE is defocusing and appropriate for vortices. If, instead of small capacitances, the oscillators are coupled by large inductances, then a focusing DNLSE arises. That case has already been studied (on uniform lattices) in the context of discrete solitons, breathers, and vortex solitons \cite{MBR1994,MBR1995,EPSKB2010,PECCK2011,EPSCCK2013}. From a formal viewpoint, each inductor represents a separate degree of freedom, so our scheme is mathematically different. From a practical viewpoint, small capacitors on the links are more convenient than large inductors. \begin{figure} \begin{center} \epsfig{file=Fig1.eps, width=80mm} \end{center} \caption{Schematic representation of coupled oscillators. Only a fragment of the whole network is shown (two cells and the coupling between them).} \label{scheme} \end{figure} It should be noted that electric networks can be of macroscopic size and assembled from standard radio-engineering elements. Typical dispersive and nonlinear times can be about milliseconds, with the carrier frequency $\omega_0/2\pi$ of order 1 MHz. 
An additional convenience of the electric implementation is the ease of setting the model parameters and the controllability, including arbitrary variation of the coupling capacitances with time. Moreover, flexible wires make it possible to construct topologically nontrivial 2D discrete manifolds such as the M\"obius strip, torus, Klein bottle, projective plane, and so on. This opens wide new perspectives for studying vortices on such discretized surfaces. Another important point is that our electric scheme is equally suitable for the construction of 3D nonlinear lattices. The only practical obstacle is the very large number of elements required: to observe interesting nonlinear behavior of vortices, a 2D lattice needs about ${\mathsf N}_{\rm 2D}\sim 10^3-10^4$ individual oscillators, while for a 3D lattice the required number is ${\mathsf N}_{\rm 3D}\sim 10^5-10^6$. Therefore planar constructions seem more realistic at the moment. Since the electric model is very promising, we set a second goal in this work: to simulate the dynamics directly within the equations of motion governing the scheme in Fig.\ref{scheme}, and then compare the results with DNLSE simulations. This article is organized as follows. In section II we introduce the theoretical model and derive the corresponding DNLSE together with its parameters. Some technical details about the DNLSE are included because it is easier for theoretical analysis than the basic system of circuit equations. In section III, we analyze vortex motion in the 2D case in general terms, with orientation towards the quasi-continuous limit. Special attention is given there to coupling profiles with a barrier; this feature is new in comparison with Ref.\cite{R2019}. In section IV, we present numerical results demonstrating nontrivial behavior of interacting vortices in discrete, spatially nonuniform, weakly dissipative 2D systems. Both the DNLSE and the original system of circuit equations are simulated. 
In particular, it will be shown that, depending on parameters, vortex clusters can be stably trapped for some initial period of time by a circular barrier in the profile of the function $f$, but then, due to gradual dissipative broadening of the vortex cores, they lose stability and suddenly start to move in a complicated manner, with some of the vortices penetrating the barrier. Finally, section V contains a brief summary of the work. \section{Model description and basic equations} We begin by describing our simple scheme (see Fig.\ref{scheme}). Let each electric oscillator in the network consist of a coil with inductance $L$ and small active resistance $R_L$, connected in series to a voltage-dependent differential capacitance (varicap) $C_v(V)=dq/dV$, where $q$ is the electric charge. A reverse-biased varactor diode is implied, or another nonlinear capacitor (perhaps in parallel with an ordinary capacitor). The varicap is characterized by a large shunt resistance $R_C$ (for the leakage current). For simplicity, we assume $R_C={\rm const}$, thus neglecting nonlinearity in the dissipation. The remaining end of the coil is connected to a d.c. bias voltage $V_b$, while the remaining contact of the varicap is grounded. The voltage at the contact between the coil and the varicap is $V_b+V_n(t)$. The functional dependencies $C(V_n)=C_v(V_b+V_n)$ differ for devices fabricated under different technologies, so many expressions have been suggested to approximate them. In particular, for a reverse-biased diode in parallel with a constant capacitor, the following combined formula is able to ensure good accuracy within a sufficiently wide voltage range (see, e.g., \cite{MBR1995,HCS1996,CGBFR1998,EPSKB2010,PECCK2011}), \begin{equation} C(V_n)=C_0\Big[\mu+\frac{(1-\mu)}{(1+V_n/V_*)^\nu}+\eta e^{-\kappa V_n}\Big]/(1+\eta), \label{C_V} \end{equation} with fitting parameters $C_0=C(0)$, $V_*$, $\mu$, $\nu$, $\eta$, and $\kappa$. 
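A small sanity check of Eq.~(\ref{C_V}) is straightforward: by construction $C(0)=C_0$ for any fitting parameters, and for $\eta=0$ the capacitance decreases monotonically with increasing reverse voltage. A hedged Python sketch (all parameter values here are illustrative):

```python
import math

def C_varicap(V, C0=1.0, Vstar=1.0, mu=0.5, nu=2.0, eta=0.0, kappa=1.0):
    """Varicap capacitance model of Eq. (C_V); default parameters are illustrative."""
    return C0 * (mu + (1 - mu) / (1 + V / Vstar)**nu
                 + eta * math.exp(-kappa * V)) / (1 + eta)
```

The normalization by $(1+\eta)$ guarantees $C(0)=C_0$ regardless of the other parameters.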
Here $0<\mu<1$ takes into account an ordinary capacitor in parallel, while $0.3\lesssim \nu\lesssim 6.0$ is related to the diode (by the way, for the Toda lattice implementation, one needs to take diodes with $\nu=1$). In theoretical studies, $\eta=0$ is very often assumed. In some research works, a different kind of variable capacitor is also considered, with $C(V_n)=C_0(1+V_n^2/V_*^2)$ \cite{SKVFDLR2017}. Such a symmetric dependence is possible in devices using special nonlinear dielectric films \cite{MFSY2016}. In any case, the (additional) accumulated electrostatic energy at the varicap is given by the formula \begin{equation} W(V_n)=\int_{0}^{V_n}C(u)u du, \end{equation} while the a.c. electric charge is \begin{equation} q_n=q(V_n)=\int_{0}^{V_n}C(u) du. \end{equation} Taking the inverse relation, we have $V_n=U(q_n)$. The equation of motion for a single oscillator circuit, with dissipative terms neglected, is $L\ddot q_n+U(q_n)=0$. It will be important for our purposes that the nonlinear frequency shift can be negative in this dynamics. Of course, the fully nonlinear regime should be studied numerically, but analytical investigations may be based on the expansion \begin{equation} U(q_n)=C_0^{-1}\big[q_n +\alpha q_n^2+\beta q_n^3+\cdots \big] \end{equation} assuming relatively small amplitudes. Then the frequency shift in the weakly nonlinear regime is known to be \begin{equation} \Delta\omega=\omega_0(3\beta/8-5\alpha^2/12)q_0^2, \end{equation} with $\omega_0=2\pi/T_0=1/\sqrt{LC_0}$, and $q_0$ being the amplitude of the main harmonic. The inverse quality factor of the oscillator is apparently \begin{equation} \gamma=\left(R_L\sqrt{C_0/L}+R_C^{-1}\sqrt{L/C_0}\right)/2. \end{equation} It is presumed to be very small (as we will see below, the most interesting vortex behavior begins at $Q\gtrsim 10^4$). 
For example, with $L=5.0\times 10^{-4}$ H, $C_0=5.0\times 10^{-10}$ F, $R_L<0.1$ Ohm, and $R_C>10^7$ Ohm, we have $\omega_0=2.0\times 10^6$ rad/s, corresponding to a frequency about 0.3 MHz, and a sufficiently high quality factor $Q>10^4$. Perhaps, even smaller values of $R_L$ and larger values of $R_C$ can be achieved at reasonably low temperatures, making $Q\gtrsim 10^5$. There are also weak ordinary capacitors $C_{n,n'}\ll C_0$ inserted between points $V_n$ and $V_{n'}$. They unite individual oscillators into a whole network. Equations of motion for the united system can be derived in a very simple manner. Indeed, electric current through the coil is $I_n$, while currents through the capacitors are $C(V_n)\dot V_n$ and $C_{n,n'}(\dot V_n -\dot V_{n'})$. Leakage current parallel to varicap is $V_n/R_C$. Thus we obtain equations \begin{equation} C(V_n)\dot V_n+\sum_{n'} C_{n,n'}(\dot V_n -\dot V_{n'})+\frac{V_n}{R_C}= I_n. \label{current} \end{equation} A voltage difference at the coil is $L \dot I_n+R_L I_n$. In sum with $V_b+V_n$ it should give $V_b$. Therefore we have the second sub-set of equations, closing the system, \begin{equation} L \dot I_n + V_n + R_L I_n = 0. \label{voltage} \end{equation} It is clear that our system admits a class of $n$-independent solutions related to Eq.(\ref{background}), when each node oscillates as if there were no couplings. It is not so obvious at first glance but can be easily checked that without dissipative terms containing active resistances $R_L$ and $R_C$, equations (\ref{current})-(\ref{voltage}) correspond to a Lagrangian system with the Lagrangian function \begin{eqnarray} {\mathsf L}&=&\sum_n \frac{L}{2}\Big[C(V_n)\dot V_n +\sum_{n'} C_{n,n'}(\dot V_n -\dot V_{n'})\Big]^2 \nonumber\\ &&-\sum_n W(V_n)-\sum_{n,n'}\frac{C_{n,n'}}{4}(V_n - V_{n'})^2. \end{eqnarray} Equations of motion in the form (\ref{current})-(\ref{voltage}) are suitable enough for numerical simulations, but difficult for theoretical analysis. 
Therefore our next steps will be to rewrite the Lagrangian in terms of the charges $q_n$, and then introduce a Hamiltonian description. It is convenient to adopt a non-dimensionalization (voltage in units of $V_*$, charge in units of $C_0 V_*$, time in units of $1/\omega_0$), formally corresponding to $L=1$, $C_0=1$. Then, to first order in the small quantities $\bar c_{n,n'}=C_{n,n'}/C_0$, and retaining only the main terms in the oscillation amplitudes in the couplings, we have \begin{eqnarray} {\mathsf L}&\approx&\sum_n \Big[\frac{\dot q_n^2}{2} -\frac{q_n^2}{2}-\alpha\frac{q_n^3}{3} -\beta\frac{q_n^4}{4}\Big]\nonumber\\ &+&\frac{1}{4}\sum_{n,n'}\bar c_{n,n'}[2(\dot q_n -\dot q_{n'})^2-(q_n -q_{n'})^2]. \end{eqnarray} The canonical momenta for this Lagrangian are \begin{equation} p_n=\dot q_n+2\sum_{n'}\bar c_{n,n'}(\dot q_n -\dot q_{n'}). \end{equation} The inverse relations, again to first-order accuracy in $\bar c_{n,n'}$, are easily obtained as \begin{equation} \dot q_n\approx p_n-2\sum_{n'}\bar c_{n,n'}(p_n -p_{n'}). \end{equation} As a result, the Hamiltonian function of the weakly interacting oscillators acquires the following form, \begin{eqnarray} {\mathsf H}&\approx&\sum_n \Big[\frac{p_n^2}{2} +\frac{q_n^2}{2}+\alpha\frac{q_n^3}{3} +\beta\frac{q_n^4}{4}\Big]\nonumber\\ &-&\frac{1}{4}\sum_{n,n'}\bar c_{n,n'}[2(p_n -p_{n'})^2-(q_n -q_{n'})^2]. 
\end{eqnarray} For an oscillator taken separately, there exists a weakly nonlinear canonical transform, \begin{eqnarray} q_n&\approx&\tilde q_n -\frac{\alpha}{3}(\tilde q_n^2+2\tilde p_n^2)\nonumber\\ &+&\frac{\tilde q_n}{16}\Big[\Big(\frac{25}{9}\alpha^2\!-\!\frac{5}{2}\beta\Big)\tilde q_n^2 +\Big(\frac{13}{9}\alpha^2\!-\!\frac{9}{2}\beta\Big)\tilde p_n^2\Big], \end{eqnarray} \begin{eqnarray} p_n&\approx&\tilde p_n +\frac{2\alpha}{3}\tilde p_n\tilde q_n\nonumber\\ &-&\frac{\tilde p_n}{16}\Big[\Big(\frac{11}{9}\alpha^2\!-\!\frac{15}{2}\beta\Big)\tilde q_n^2 +\Big(\frac{47}{9}\alpha^2\!-\!\frac{3}{2}\beta\Big)\tilde p_n^2\Big], \end{eqnarray} such that the combination $a_n=(\tilde q_n +i \tilde p_n)/\sqrt{2}$ (the normal complex variable) is related to the action-angle variables $S_n$ and $\phi_n$ by the formula $a_n=\sqrt{S_n}\exp(i\phi_n)$. This transform excludes the third-order terms from the partial Hamiltonians. Neglecting again the nonlinearities in the couplings, we reduce the total Hamiltonian to the following expression: \begin{eqnarray} {\mathsf H}&\approx&\sum_n \big(|a_n|^2+\frac{g}{2}|a_n|^4\big)\nonumber\\ &-&\frac{1}{4}\sum_{n,n'}\bar c_{n,n'}(a_n -a_{n'})(a^*_n -a^*_{n'})\nonumber\\ &+&\frac{3}{8}\sum_{n,n'}\bar c_{n,n'}[(a_n -a_{n'})^2+(a^*_n -a^*_{n'})^2], \label{H_a} \end{eqnarray} where the nonlinear coefficient is $g=(3\beta/4-5\alpha^2/6)$. In terms of $a_n$, the Hamiltonian equations of motion are $i\dot a_n=\partial {\mathsf H}/\partial a_n^*$. In the main approximation, $a_n$ is proportional to $\exp(-it)$, since the nonlinearity and the couplings are weak. Therefore, the last double sum in Eq.(\ref{H_a}) contains quickly oscillating quantities which are unimportant after averaging. Introducing the slow envelopes $A_n=a_n\exp(it)$ and taking into account the linear damping (not covered by the Hamiltonian theory), we arrive at Eq.(\ref{A_n_eq_diff}), with negative $c_{n,n'}=-\bar c_{n,n'}$. 
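The cancellation of the third-order terms by this transform can be verified numerically: substituting the transform into the single-oscillator Hamiltonian, the residual $H-(\tilde q^2+\tilde p^2)/2$ must scale as the fourth power of the amplitude (the surviving quartic part is what yields the coefficient $g$ after averaging). A sketch, assuming illustrative values $\alpha=\beta=1$:

```python
def transform(qt, pt, a, b):
    # Weakly nonlinear canonical transform, truncated at cubic order in (qt, pt)
    q = (qt - (a / 3) * (qt**2 + 2 * pt**2)
         + (qt / 16) * ((25 / 9 * a**2 - 5 / 2 * b) * qt**2
                        + (13 / 9 * a**2 - 9 / 2 * b) * pt**2))
    p = (pt + (2 * a / 3) * pt * qt
         - (pt / 16) * ((11 / 9 * a**2 - 15 / 2 * b) * qt**2
                        + (47 / 9 * a**2 - 3 / 2 * b) * pt**2))
    return q, p

def H_single(q, p, a, b):
    # Single-oscillator Hamiltonian with cubic and quartic anharmonicity
    return p**2 / 2 + q**2 / 2 + a * q**3 / 3 + b * q**4 / 4
```

Halving the amplitude should reduce the residual by roughly a factor of 16, confirming that no cubic terms survive.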
The nonlinear coefficient $g$, for physically relevant parameters in Eq.(\ref{C_V}), is also negative. In particular, if $\eta=0$, then \begin{equation} g=\frac{\nu(1-\mu)}{24}[-3+\nu(1-4\mu)]. \end{equation} It is very important that a non-zero value of $\mu$, corresponding to a constant capacitor in parallel with the diode, results in a stronger negative frequency shift. For example, with $\nu=2$ and $\mu=0.5$ we have $g=-5/24$, while for $\nu=2$ and $\mu=0$ it is $g=-1/12$. \section{Analysis of vortex motion in 2D} Since our goal is to study vortices on 2D networks, it is convenient to introduce new complex variables $\psi_n(t)$ through the following substitution (compare to Ref.\cite{R2019}, where a positive frequency shift was considered): \begin{equation} A_n(t)=A_0\psi^*_n(t)\exp[-\gamma t-i\varphi(t)], \end{equation} where the real $A_0$ is a typical amplitude at $t=0$, and $\varphi(t)=gA_0^2(1-e^{-2\gamma t})/(2\gamma)$. As a result, we reduce our dissipative autonomous system to a non-autonomous Hamiltonian one, \begin{equation} i\dot \psi_n=\sum_{n'}\frac{\bar c_{n,n'}}{2}(\psi_n-\psi_{n'}) +|gA_0^2|e^{-2\gamma t}(|\psi_n|^2-1)\psi_n. \label{psi_eq} \end{equation} Let a typical value of $\bar c_{n,n'}$ be $\bar c\ll 1$. For the purposes of further analysis, we introduce a slow time $\tau=h^2\bar c t$ and small parameters, \begin{equation} \delta=\gamma/(h^2\bar c )\ll 1,\qquad \xi=(h^2 \bar c/|gA_0^2|)^{1/2}\ll 1. \end{equation} Then Eq.(\ref{psi_eq}) takes the following form, \begin{equation} i\frac {d \psi_{\bf n}}{d\tau }= \sum_{\bf n'}\frac{F_{{\bf n},{\bf n}'}}{2h^2}(\psi_{\bf n}-\psi_{\bf n'}) +\frac{e^{-2\delta \tau}}{\xi^2}(|\psi_{\bf n}|^2-1)\psi_{\bf n}, \label{psi_eq_tau} \end{equation} where ${\bf n}'$ are the nearest neighbors of ${\bf n}$ on the square lattice, $F_{{\bf n},{\bf n}'}=F(h[{\bf n}+{\bf n}']/2)$, and $F({\bf r})\sim 1$ is a non-negative function. 
In the continuous limit, the above equation reduces to a defocusing NLSE with spatially variable dispersion coefficient and time-dependent nonlinear coefficient, \begin{equation} i\psi_\tau=-\frac{1}{2}\nabla\cdot \left[F({\bf r})\nabla\psi\right] + \frac{e^{-2\delta \tau}}{\xi^2}(|\psi|^2-1)\psi. \label{psi_eq_tau_contin} \end{equation} We are interested in vortices on constant background $\psi_0=1$. It is clear from the equation above that intervals $\Delta\tau\sim 1$ are typical vortex turnover times in the system, $\xi$ is a typical relative healing length at $\tau=0$, while \begin{equation} \tilde\xi({\bf r},\tau)=\xi e^{\delta\tau}\sqrt{F({\bf r})} \label{tilde_xi} \end{equation} is a local relative vortex core width. Vortices described by Eq.(\ref{psi_eq_tau_contin}) have been analyzed in Ref.\cite{R2019} for 3D case. Applying similar analysis to 2D situation, we easily obtain that coordinates $x_j$ and $y_j$ of $N$ ``point'' vortices are canonically conjugate quantities (up to vortex signs $\sigma_j=\pm 1$). On not very long times and for small $\xi$, when $\xi_{eff}=\xi \exp(\delta\tau)\ll 1$, vortex motion is approximately described by a time-dependent Hamiltonian function (compare to Refs.\cite{R2017-1,R2017-2}), \begin{eqnarray} &&H=\sum_j\sigma_j^2 {\cal E}({\bf r}_j,\tau) +{\sum_{j, k}}'\frac{\sigma_j \sigma_k}{2}G({\bf r}_j,{\bf r}_k), \label{H_v}\\ &&{\cal E}({\bf r},\tau)\approx \frac{1}{2}G({\bf r}-{\bf e}\tilde\xi({\bf r},\tau)/2,{\bf r}+{\bf e}\tilde\xi({\bf r},\tau)/2), \end{eqnarray} where the prime means omitting diagonal terms in the double sum determining pair interactions between vortices, ${\bf e}$ is a unit vector, and a two-dimensional Green function $G({\bf r},{\bf r}_1)$ satisfies equation \begin{equation} -\nabla_{\bf r}\cdot\frac{1}{F({\bf r})}\nabla_{\bf r}G({\bf r},{\bf r}_1) = 2\pi\delta_{\rm Dirac}({\bf r}-{\bf r}_1). \label{G} \end{equation} The physical meaning of $G({\bf r},{\bf r}_1)$ can be explained as follows. 
Let $\psi=\sqrt{\rho}\exp(i\Phi)$ be the Madelung transform, and ${\bf J}=\rho F({\bf r})\nabla\Phi$ be a ``current density'' (in the hydrodynamic sense) for Eq.(\ref{psi_eq_tau_contin}). In the ``long-scale'' hydrodynamic regime, away from vortex cores we have $\rho\approx 1$ and thus $\nabla\cdot{\bf J}\approx 0$, so a stream function $\Theta$ exists for 2D vector field $F({\bf r})\nabla\Phi$. Since $\Phi$-field created by vortices is not single-valued and has singularities, it satisfies equation $\mbox{curl}_{2D}\nabla\Phi=2\pi\sum_j \sigma_j \delta_{\rm Dirac}({\bf r}-{\bf r}_j)$. Therefore we have a partial differential equation determining the stream function, \begin{equation} -\nabla_{\bf r}\cdot\frac{1}{F({\bf r})}\nabla_{\bf r}\Theta({\bf r}) = 2\pi\sum_j\sigma_j\delta_{\rm Dirac}({\bf r}-{\bf r}_j). \end{equation} So $G({\bf r},{\bf r}_1)$ is a stream function created at point ${\bf r}$ by a vortex placed in point ${\bf r}_1$. Expression (\ref{H_v}) for vortex Hamiltonian $H$ then follows from appropriately regularized ``kinetic energy'' integral \begin{equation} 2\pi H=\frac{1}{2}\int \frac{(\nabla\Theta)^2}{F({\bf r})}d^2{\bf r}. \end{equation} It follows from Eq.(\ref{G}) that \begin{equation} G({\bf r}_1,{\bf r}_2)= \tilde\theta({\bf r}_1,{\bf r}_2)- \sqrt{F({\bf r}_1)F({\bf r}_2)}\ln|{\bf r}_1-{\bf r}_2|, \end{equation} with some smooth function $\tilde \theta({\bf r}_1,{\bf r}_2)\sim 1$. Therefore the self-energy is \begin{equation} {\cal E}({\bf r},\tau)= \theta({\bf r})-\frac{1}{2} F({\bf r})\left[\ln\left(\xi\sqrt{F({\bf r})}\right)+\delta \tau\right], \end{equation} where $\theta({\bf r})=\tilde \theta({\bf r},{\bf r})/2$. 
\begin{figure} \begin{center} \epsfig{file=Fig2.eps, width=80mm} \end{center} \caption{Critical values of $\xi_{eff}$ found numerically by minimizing the Hamiltonian (\ref{H_xi_delta}), starting with a small $\xi_{eff}$ and increasing it by small steps until cluster destruction.} \label{critical_xi} \end{figure} In particular, we may take a circularly symmetric profile $F(r)$, with $r=\sqrt{x^2+y^2}$, and roughly (with logarithmic accuracy) estimate the energy of a vortex cluster in the form of a regular $N$-polygon, \begin{equation} E_N(r,\tau)\approx\frac{N}{2} F(r)[\Lambda(\tau)-(N-1)\ln(r)], \label{E_N} \end{equation} where $\Lambda(\tau)=[\ln(1/\xi)-\delta \tau]=-\ln(\xi_{eff})$ is a logarithmically large quantity. It is not difficult to understand that if $F(r)$ has a barrier at some finite $r_b$, and $N$ is not too large, then expression (\ref{E_N}) may have a minimum at some $0<r_*<r_b$. Thus, while $\xi_{eff}$ is less than a critical value, such a profile is able to trap a vortex cluster. Discreteness (finite $h$) also acts to stabilize vortex configurations because, while $\xi_{eff}\lesssim h$, the lattice tends to create local minima (at inter-node vortex center positions) of the Hamiltonian corresponding to Eq.(\ref{psi_eq_tau}), \begin{equation} \tilde{\mathsf H}=\sum_{\bf n,n'} \frac{F_{{\bf n},{\bf n}'}}{4h^2}|\psi_{\bf n}-\psi_{\bf n'}|^2 +\sum_{\bf n}\frac{e^{-2\delta \tau}}{2\xi^2}(|\psi_{\bf n}|^2-1)^2. \label{H_xi_delta} \end{equation} Fig.\ref{critical_xi} illustrates this fact for the particular case of a ``rectangular'' barrier ($F(r)=1$ if $r^2 < 1$, and $F(r)=B>1$ if $1\leq r^2 < 3$; otherwise $F=0$). There, for different $N$ and for two different values of $h$, numerical estimates are presented of how the critical value of the parameter $\xi_{eff}$ depends on the barrier height $B$. It is seen that the spatial nonuniformity of the links has a strong influence on vortex stability for $1\lesssim B\lesssim 3$. 
However, the saturation at larger $B$ still awaits explanation. So we can expect stable trapping of a few vortices of the same sign within a domain surrounded by the barrier. However, as time increases, the function $\Lambda(\tau)$ decreases, and therefore the vortex configuration should suddenly become unstable at some moment. In the next section, we numerically verify this scenario within Eq.(\ref{psi_eq_tau}), and then within Eqs.(\ref{current})-(\ref{voltage}). \begin{figure} \begin{center} \epsfig{file=Fig3a.eps, width=42mm} \epsfig{file=Fig3b.eps, width=42mm}\\ \epsfig{file=Fig3c.eps, width=42mm} \epsfig{file=Fig3d.eps, width=42mm}\\ \epsfig{file=Fig3e.eps, width=42mm} \epsfig{file=Fig3f.eps, width=42mm} \end{center} \caption{An example of evolution of two vortices in DNLSE.} \label{N2} \end{figure} \begin{figure} \begin{center} \epsfig{file=Fig4a.eps, width=42mm} \epsfig{file=Fig4b.eps, width=42mm}\\ \epsfig{file=Fig4c.eps, width=42mm} \epsfig{file=Fig4d.eps, width=42mm} \end{center} \caption{An example of evolution of three vortices in DNLSE.} \label{N3} \end{figure} \begin{figure} \begin{center} \epsfig{file=Fig5a.eps, width=42mm} \epsfig{file=Fig5b.eps, width=42mm}\\ \epsfig{file=Fig5c.eps, width=42mm} \epsfig{file=Fig5d.eps, width=42mm} \end{center} \caption{An example of evolution of four vortices in DNLSE.} \label{N4} \end{figure} \section{Numerical results} Eq.(\ref{psi_eq_tau}) has been numerically simulated using a 4th-order Runge-Kutta scheme for the time stepping. The function $F(r)$ was taken in the simple form described above, with $B=3.0$. That corresponds to using just two kinds of coupling capacitors $C_{n,n'}$. Thus we have a compact planar structure with a finite number of interacting degrees of freedom. We present numerical results for $N=2$, $N=3$, and $N=4$ vortices (Figs. \ref{N2}, \ref{N3}, and \ref{N4}, respectively, where each vortex is seen as a density depletion). 
The parameters in these numerical experiments were $h=0.12$, $\xi=0.05$, and $\delta=0.04$. As initial states, we took non-symmetric vortex configurations corresponding to numerically found local minima of the Hamiltonian (\ref{H_xi_delta}). The most regular dynamics was observed for $N=2$, perhaps because the simplified continuous counterpart (\ref{H_v}) is an integrable system in the case of two vortices (besides the Hamiltonian, the angular momentum is conserved). After an initial quasi-static period of evolution [Fig.\ref{N2}(a)], there was a stage of oscillatory motion without orbiting [Fig.\ref{N2}(b)]. Then came orbiting in the anticlockwise azimuthal direction, with gradually widening cores [Figs.\ref{N2}(c) and \ref{N2}(d)]. Finally, wide vortices comparable to the whole system size were transformed into a wave structure propagating mainly clockwise [Figs.\ref{N2}(e) and \ref{N2}(f)]. The last stage was practically in the linear regime because the effective nonlinear coefficient $\exp(-2\delta\tau)/\xi^2$ was very small at $\tau\gtrsim 100$. Vortex clusters with $3\leq N\leq 5$ passed through the same two initial stages in their evolution, but the subsequent dynamics was different. The first stage was again a stable, nearly static configuration, when the vortex centers were motionless while their cores gradually broadened according to Eq.(\ref{tilde_xi}) [Figs.\ref{N3}(a) and \ref{N4}(a)]. The second stage was oscillation of the vortices around their previous positions [Figs.\ref{N3}(b) and \ref{N4}(b)]. At the third stage, the vortices lost stability and began to move in a complicated manner, typically with one or two of them at fast ``external'' orbits [Figs.\ref{N3}(c) and \ref{N4}(c)]. At the fourth stage, the external vortices quit the lattice, producing strong short-scale non-vortical oscillations in it [Figs.\ref{N3}(d) and \ref{N4}(d)]. 
During a further evolution, some of the remaining vortices go to external orbits and leave the lattice in a similar manner, until one or two are present on a highly disturbed background (not shown). Static initial configurations with $N\geq 6$ were not found with the given parameters. However, cases $N=6$ and $N=6+1$ (hexagon plus central vortex) were successfully simulated with $h=0.04$, $\xi=0.025$, and $\delta=0.02$ (not shown). It should be noted that for this case the quality factor should be extremely high, since $\gamma/g A_0^2=\delta\xi^2\sim 10^{-5}$, while $g A_0^2\sim 0.1$. The dynamics was qualitatively similar to that described above. It is interesting to note that in the last case, the central vortex lost stability first and quickly passed to external orbit, crossing the system boundary soon after that. \begin{figure} \begin{center} \epsfig{file=Fig6a.eps, width=42mm} \epsfig{file=Fig6b.eps, width=42mm}\\ \epsfig{file=Fig6c.eps, width=42mm} \epsfig{file=Fig6d.eps, width=42mm} \end{center} \caption{An example of evolution of four vortices in the basic electric model.} \label{VI_N4} \end{figure} \begin{figure} \begin{center} \epsfig{file=Fig7a.eps, width=42mm} \epsfig{file=Fig7b.eps, width=42mm}\\ \epsfig{file=Fig7c.eps, width=42mm} \epsfig{file=Fig7d.eps, width=42mm} \end{center} \caption{Four vortices in the electric model: phases corresponding to Fig.\ref{VI_N4}. The presence of vortices is clearly seen.} \label{VI_N4-phase} \end{figure} \begin{figure} \begin{center} \epsfig{file=Fig8a.eps, width=42mm} \epsfig{file=Fig8b.eps, width=42mm}\\ \epsfig{file=Fig8c.eps, width=42mm} \epsfig{file=Fig8d.eps, width=42mm} \end{center} \caption{An example of evolution of four vortices in the electric model with smaller $h=0.06$.} \label{VI_N4new} \end{figure} Of course, the above results were obtained within DNLSE under many simplifying assumptions, and therefore they cannot be completely convincing. 
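For tracking purposes, vortex positions in a simulated complex field can be extracted by the standard phase-winding test: a vortex sits in an $h\times h$ plaquette whose summed (wrapped) phase increments along the sides equal $\pm 2\pi$ (this is the convention used below for Fig.\ref{VI_N4new-xy}). A minimal sketch, with illustrative function names and a synthetic single-vortex field as the check:

```python
import cmath
import math

def wrap(dphi):
    # Map a phase increment into the interval (-pi, pi]
    while dphi <= -math.pi:
        dphi += 2 * math.pi
    while dphi > math.pi:
        dphi -= 2 * math.pi
    return dphi

def find_vortices(psi):
    """Return (i, j, sign) for each plaquette carrying a +-2*pi phase winding."""
    Nx, Ny = len(psi), len(psi[0])
    phase = [[cmath.phase(psi[i][j]) for j in range(Ny)] for i in range(Nx)]
    out = []
    for i in range(Nx - 1):
        for j in range(Ny - 1):
            w = (wrap(phase[i + 1][j] - phase[i][j])
                 + wrap(phase[i + 1][j + 1] - phase[i + 1][j])
                 + wrap(phase[i][j + 1] - phase[i + 1][j + 1])
                 + wrap(phase[i][j] - phase[i][j + 1]))
            if abs(w) > math.pi:  # winding is either 0 or +-2*pi
                out.append((i, j, 1 if w > 0 else -1))
    return out
```

Applied to an ideal vortex field $\psi\propto x+iy$ centered between grid nodes, the routine flags exactly the plaquette containing the phase singularity.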
In order to obtain more direct evidence of vortex existence and behavior in the fully nonlinear regime, the original system of circuit equations (\ref{current})-(\ref{voltage}) has been numerically simulated using expression (\ref{C_V}) with parameters $\eta=0$, $\nu=2$, $\mu=0.5$ (and $C_0=1$, $V_*=1$). Two numerical experiments are presented below. In the first one (see Figs.\ref{VI_N4}-\ref{VI_N4-phase}), the remaining dimensionless parameters were $\bar c=0.02$, $h=0.12$, $L=1$, $R_L=10^{-4}$, $R_C=10^{4}$. At $t=0$, the partial energies of the oscillators corresponded to $I_n^2/2+W(V_n)=0.32$ (excluding vortex cores), while their ``phases'' $\Phi_n=\arctan(I_n/V_n)$ were the same as the initial phases for the DNLSE simulation. Therefore $|gA_0^2|\approx (5/24)\cdot 0.32\approx 0.067$ in these numerical experiments. The initial a.c. voltages were in the range $-0.6\lesssim V_n\lesssim 1.0$. To resolve Eqs.(\ref{current}) with respect to $\dot V_n$, a simple iterative scheme was developed, \begin{eqnarray} D_n^{(j+1)}&=&D_n^{(j)}-0.2\Big[C(V_n)D_n^{(j)}\nonumber\\ &+&\sum_{n'} C_{n,n'}(D_n^{(j)} -D_{n'}^{(j)}) +\frac{V_n}{R_C}- I_n\Big], \end{eqnarray} with $D_n^{(0)}=(I_n-V_n/R_C)/C(V_n)$. The result of the 60th iteration, $\dot V_n\approx D_n^{(60)}$, was then used in a 4th-order Runge-Kutta time stepping. The convergence of this scheme was ensured by the positive definiteness of the corresponding quadratic form, and by choosing the coefficient $0.2$ sufficiently small to have $|1-0.2\cdot C_{max}|<1$ (where $C_{max}=C(V_{min})$, and $V_{min}\approx -0.57$ is the negative root of the equation $W(V)=0.32$). In Fig.\ref{VI_N4}, the evolution of the quantities $2\varepsilon_n=I_n^2+2W(V_n)$ is shown for the case of four vortices, while in Fig.\ref{VI_N4-phase} we see the corresponding phases. In particular, Fig.\ref{VI_N4-phase} indicates unambiguously that we are dealing with vortices, not simply with some amplitude depressions. 
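The relaxation scheme above is straightforward to implement. The sketch below (with an illustrative small chain instead of the full 2D network; all variable names are illustrative) checks that after 60 iterations the returned $D_n\approx \dot V_n$ satisfies the implicit current-balance equation to good accuracy. The capacitance model assumes $\eta=0$, $\nu=2$, $\mu=0.5$, $C_0=V_*=1$, as in the text.

```python
def C_of_V(V, mu=0.5, nu=2.0):
    # Varicap capacitance, Eq. (C_V) with eta = 0 and C0 = V* = 1
    return mu + (1 - mu) / (1 + V)**nu

def solve_Vdot(V, I, c, R_C, iters=60, relax=0.2):
    """Relax D towards the solution of
    C(V_n) D_n + sum_n' c[n][n'] (D_n - D_n') + V_n/R_C = I_n."""
    N = len(V)
    # Initial guess ignores the small coupling capacitors
    D = [(I[n] - V[n] / R_C) / C_of_V(V[n]) for n in range(N)]
    for _ in range(iters):
        D = [D[n] - relax * (C_of_V(V[n]) * D[n]
                             + sum(c[n][m] * (D[n] - D[m]) for m in range(N))
                             + V[n] / R_C - I[n])
             for n in range(N)]
    return D
```

Because the coupling capacitances are small compared with $C(V_n)$, the iteration matrix is a contraction and the residual shrinks geometrically.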
Qualitatively, the system passed through the same stages as in the simplified DNLSE model. However, since here the initial phase distribution was not appropriately adjusted to the strong nonlinearity, the first (trapping) stage was not as long as in the experiment shown in Fig.\ref{N4}. Finally, in Figs.\ref{VI_N4new}-\ref{VI_N4new-xy} we present results for a smaller $h=0.06$ and for larger initial energies $I_n^2+2W(V_n)=0.81$. In this simulation $\bar c=0.04$. In general, the vortices look smoother here. As Figs.\ref{VI_N4new}a-b demonstrate, and Fig.\ref{VI_N4new-xy} confirms, the cluster was almost static until $t\sim 1000 T_0$. After that time the configuration was deformed by the emerging instability, and the vortices started intense motion. Unlike the case $h=0.12$, here no vortex exited the disc until the very end of the simulation. \begin{figure} \begin{center} \epsfig{file=Fig9a.eps, width=80mm}\\ \epsfig{file=Fig9b.eps, width=80mm} \end{center} \caption{Time dependence of the $x$ and $y$ coordinates of the four vortices, corresponding to the simulation with $h=0.06$. Conventionally, vortices are located at the centers of those $h\times h$ squares where the sum of phase increments along the sides is $2\pi$.} \label{VI_N4new-xy} \end{figure} \section{Summary} To summarize, in this work a general scheme of an electric network has been suggested which can be approximately described by a weakly dissipative defocusing discrete nonlinear Schr\"odinger equation of a special kind, where the coupling terms are not translationally invariant but spatially uniform background solutions still exist. Discrete vortices in such systems have been analyzed and then numerically simulated. The simulations have demonstrated qualitatively similar results within the DNLSE and within the original circuit equations. Of crucial importance is the quality factor of the oscillator circuits. Numerical experiments have shown that nontrivial behavior of vortices is observable with $Q\gtrsim 10^4-10^5$. 
In practice such values could be achieved at sufficiently low temperatures, where the conductivity of metals as well as the resistivity of dielectrics are both substantially higher than at room temperature. The study above is far from exhaustive. The system seems to deserve further thorough investigation, especially in its highly nonlinear regimes and under external driving (driving signals can easily be introduced into an electric network, resulting in many resonance phenomena, perhaps similar in some sense to those reported in Ref.\cite{RVSDK2012}). The author also hopes that experimentalists will be interested in conducting laboratory experiments inspired by the present theory.
\section{Introduction} Fast radio bursts (FRBs) propagate from as far as $\sim$Gpc distances through their local environments, the interstellar medium (ISM) and circumgalactic medium (CGM) of their host galaxy, the intergalactic medium (IGM) and any intervening galaxies or galaxy haloes, the halo and ISM of the Milky Way, and finally through the interplanetary medium (IPM) of our Solar System before arriving at the detector. Along their journey they experience dispersion and multi-path propagation from free electrons along the line-of-sight (LoS). The dispersion measure ${\rm DM} = \int n_e dl/(1+z)$, where $n_e$ is the electron density and $z$ is the redshift. Most FRBs are extragalactic and have DMs much larger than the expected contribution from our Galaxy, with the single possible exception being the Galactic magnetar source SGR 1935$+$2154 \citep{2020arXiv200510828B, 2020arXiv200510324T}. Many studies of FRB propagation have focused on the ``DM budget," constraining the relative contributions of intervening media to the total observed FRB DM, with particular attention paid to determining the DM contribution of the IGM, which in principle can be used to estimate the distance to an FRB without a redshift \citep[e.g.,][]{2015MNRAS.451.4277D,2019ApJ...886..135P}. For FRBs with redshifts the subsequent intergalactic ${\rm DM}(z)$ relationship can be used to measure the cosmic baryon density, as was first empirically demonstrated by \cite{2020Natur.581..391M}. Arguably the least constrained DM contribution to FRBs is that of their host galaxies, which has been estimated in only one case using Balmer line observations \citep[][]{2017ApJ...834L...7T}. \indent Understanding FRB propagation requires study of not just dispersion but also scattering. Bursts propagate along multiple ray paths due to electron density fluctuations, which leads to detectable chromatic effects like pulse broadening, scintillation, and angular broadening. 
These effects are respectively characterized by a temporal delay $\tau$, frequency bandwidth $\Delta \nu_{\rm d}$, and full width at half maximum (FWHM) of the scattered image $\theta_{\rm d}$. Scattering effects generally reduce FRB detectability, obscure burst substructure, or produce multiple images of the burst, and may contaminate emission signatures imprinted on the signal at the source \citep[e.g.,][]{2017ApJ...842...35C, 2019ApJ...876L..23H, 2020MNRAS.497.3335D}. On the other hand, scattering effects can also be used to resolve the emission region of the source \citep{2020ApJ...899L..21S} and to constrain properties of the source's local environment \citep[for a review of FRB scattering see][]{2019A&ARv..27....4P, 2019ARA&A..57..417C}. Since the relationship between $\tau$ and DM is known for Galactic pulsars \citep[e.g.,][]{1997MNRAS.290..260R, 2004ApJ...605..759B,2015ApJ...804...23K,2016arXiv160505890C}, it is a promising basis for estimating the DM contributions of FRB host galaxies based on measurements of $\tau$ (Cordes et al., in prep). In order to use scattering measurements for these applications, we need to assess how intervening media contribute to the observed scattering (i.e., a ``scattering budget"). To disentangle any scattering effects intrinsic to the host galaxy or intervening galaxies, we need to accurately constrain the scattering contribution of the Milky Way. \indent Broadly speaking, an FRB will encounter ionized gas in two main structural components of the Milky Way, the Galactic disk and the halo. The Galactic disk consists of both a thin disk, which has a scale height of about 100 pc and contains the spiral arms and most of the Galaxy's star formation \citep[e.g.,][]{2002astro.ph..7156C}, and the thick disk, which has a scale height of about 1.6 kpc and is dominated by the more diffuse, warm ($T\sim10^4$ K) ionized medium \citep[e.g.,][]{2020ApJ...897..124O}. 
The halo gas is \added{thought to be} dominated by the hot ($T\sim 10^6$ K) ionized medium, and most of this hot gas is contained within 300 kpc of the Galactic center \added{\citep[e.g.,][]{2017ApJ...835...52F}}. While the DMs and scattering measurements of Galactic pulsars and pulsars in the Magellanic Clouds predominantly trace plasma in the thin and thick disks, extragalactic sources like FRBs are also sensitive to gas in the halo. \indent In this paper we assess the contribution of galaxy haloes to the scattering of FRBs. We demonstrate how scattering measurements of FRBs can be interpreted in terms of the internal properties of the scattering media, and apply this formalism to galaxy haloes intervening LoS to FRBs. We first assess scattering from the Milky Way halo using two case studies: FRB 121102 and FRB 180916. \deleted{These FRBs are seen towards the Galactic anti-center and have highly precise localizations and host galaxy associations, in addition to precise scattering measurements.} \added{These FRBs have the most comprehensive, precise scattering measurements currently available, in addition to highly precise localizations and host galaxy associations}. Due to their location close to the Galactic plane, the emission from these sources samples both the outer spiral arm of the Galaxy and the Galactic thick disk, and the scattering observed for these FRBs is broadly consistent with the scattering expected from the spiral arm and disk. Only a minimal amount of scattering is allowed from the Galactic halo along these LoS, thus providing an upper limit on the halo's scattering contribution. We then extrapolate this analysis to two FRBs that pass through haloes other than those of their host galaxies and the Milky Way, FRB 181112 and FRB 191108. 
\indent In Section~\ref{sec:theory} we summarize the formalism relating electron density fluctuations and the observables $\tau$, DM, $\Delta \nu_{\rm d}$, and $\theta_{\rm d}$, and describe our model for the scattering contribution of the Galactic halo. A new measurement of the fluctuation parameter of the Galactic thick disk is made in Section~\ref{sec:disk} using Galactic pulsars. An overview of the scattering measurements for FRB 121102 and FRB 180916 is given in Section~\ref{sec:frbs}, including an updated constraint on the scintillation bandwidth for FRB 121102 and a comparison of the scattering predictions made by Galactic electron density models NE2001 \citep{2002astro.ph..7156C, 2003astro.ph..1598C} and YMW16 \citep{2017ApJ...835...29Y}. The FRB scattering constraints are used to place an upper limit on the fluctuation parameter of the Galactic halo in Section~\ref{sec:halo}. A brief comparison of the FRB-derived limit with scattering observed towards the Magellanic Clouds is given in Section~\ref{sec:MCs} and scattering constraints for intervening galaxy haloes are discussed in Section~\ref{sec:conc}. \section{Modeling}\label{sec:theory} \subsection{Electron Density Fluctuations and Scattering} \indent We characterize the relationship between electron density fluctuations and scattering of radio emission using an ionized cloudlet model in which clumps of gas in the medium have a volume filling factor $f$, internal density fluctuations with variance $\epsilon^2 = \langle (\delta n_e)^2 \rangle/{n_e}^2$, and cloud-to-cloud variations described by $\zeta = \langle {n_e}^2 \rangle/\langle {n_e} \rangle^2$, where $n_e$ is the local, volume-averaged mean electron density \citep[][]{1991Natur.354..121C, 2002astro.ph..7156C, 2003astro.ph..1598C}. 
We assume that internal fluctuations follow a power-law wavenumber spectrum of the form \citep[][]{cfrc87} $P_{\delta n_e}(q) = {\rm C_n^2} q^{-\beta} \exp(-(ql_{\rm i} / 2\pi)^2)$ that extends over a wavenumber range $2\pi / l_{\rm o} \le q \lesssim 2\pi / l_{\rm i}$ defined by the outer and inner scales, $l_{\rm o}, l_{\rm i}$, respectively. We adopt a wavenumber index $\beta = 11/3$ that corresponds to a Kolmogorov spectrum. Typically, $l_{\rm i} \ll l_{\rm o}$, but their magnitudes depend on the physical mechanisms driving and dissipating turbulence, which vary between different regions of the ISM. \indent Multipath propagation broadens pulses by a characteristic time $\tau$ that we relate to DM and other quantities. For a medium with homogeneous properties, the scattering time in Euclidean space is $\tau({\rm DM}, \nu) = K_\tau A_\tau \nu^{-4} \widetilde{F} G_{\rm scatt} {\rm DM}^2$ \citep[][]{2016arXiv160505890C}. The coefficient $K_\tau = \Gamma(7/6) c^3 r_{\rm e}^2 / 4$, where $c$ is the speed of light and $r_{\rm e}$ is the classical electron radius, while the factor $A_\tau \lesssim 1$ scales the mean delay to the $1/e$ delay that is typically estimated from pulse shapes. Because $A_\tau$ is medium dependent, we include it in relevant expressions symbolically rather than adopting a specific value. The scattering efficacy is determined by the fluctuation parameter $\widetilde{F} = \zeta\epsilon^2 / f (l_{\rm o}^2l_{\rm i})^{1/3}$ combined with a dimensionless geometric factor, $G_{\rm scatt}$, discussed below. \indent Evaluated with DM in pc cm$^{-3}$, the observation frequency $\nu$ in GHz, the outer scale in pc, and the inner scale in km, $\widetilde{F}$ has units of pc$^{-2/3}$ km$^{-1/3}$\ and \begin{eqnarray} \label{eq:taudmnu} \tau({\rm DM},\nu) \approx 48.03~{\rm ns} \,A_\tau \nu^{-4} \widetilde{F} G_{\rm scatt} {\rm DM}^2 . \end{eqnarray} For reference, the NE2001 model uses a similar parameter, $F = \widetilde{F} l_i^{1/3}$ \citep[Eq. 
11-13][]{2002astro.ph..7156C}, that relates the scattering measure SM to {\rm DM}\ and varies substantially between different model components (thin and thick disks, spiral arms, clumps). \indent The geometric factor $G_{\rm scatt}$ in Eq.~\ref{eq:taudmnu} depends on the location of the FRB source relative to the dominant scattering medium and is calculated using the standard Euclidean weighting $(s/d)(1-s/d)$ in the integral of ${\rm C_n^2}(s)$ along the LoS. If both the observer and source are embedded in a medium with homogeneous scattering strength, $G_{\rm scatt} = 1/3$, while $G_{\rm scatt} = 1$ if the source to observer distance $d$ is much larger than the medium's thickness and either the source or the observer is embedded in the medium. \indent For a thin scattering layer with thickness $L$ at distance $\delta d \gg L$ from the source or observer, $G_{\rm scatt} \simeq 2\delta d / L \gg 1$ because of the strong leverage effect. For thin-layer scattering of cosmological sources by, e.g., a galaxy disk or halo, $G_{\rm scatt} = 2d_{\rm sl} d_{\rm lo}/Ld_{\rm so}$ where $d_{\rm sl}, d_{\rm lo}$ and $d_{\rm so}$ are angular diameter distances for source to scattering layer, scattering layer to observer, and source to observer, respectively. The scattering time is also multiplied by a redshift factor $(1+ z_{\ell})^{-3}$ that takes into account time dilation and the higher frequency at which scattering occurs in the layer at redshift $z_{\ell}$, with ${\rm DM}_{\ell}$ representing the lens frame value. We thus have for distances in Gpc and $L$ in Mpc, \begin{eqnarray} \label{eq:taudmnuz} \tau({\rm DM},\nu, z) \approx 48.03~{\rm \mu s} \times \frac{A_\tau \widetilde{F}\, {\rm DM}_{\ell}^2 }{(1 + z_{\ell})^3 \nu^4} \left[\frac{2d_{\rm sl}d_{\rm lo}}{ Ld_{\rm so}} \right] . \end{eqnarray} If the layer's DM could be measured, it would be smaller by a factor $(1+z_{\ell})^{-1}$ in the observer's frame. 
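For concreteness, the two scalings above can be written as a short numerical sketch. The function names and example arguments are ours; the coefficients and unit conventions follow Equations~\ref{eq:taudmnu} and~\ref{eq:taudmnuz} directly:

```python
# Sketch of the scattering-time scalings of Eqs. (2.1) and (2.2);
# coefficients and unit conventions are taken from the text.

def tau_galactic(dm, nu_ghz, F_tilde, G=1.0 / 3.0, A_tau=1.0):
    """Pulse broadening in ms for a Euclidean medium (Eq. 2.1).

    dm in pc cm^-3, nu in GHz, F_tilde in pc^-2/3 km^-1/3;
    48.03 ns = 4.803e-5 ms.
    """
    return 4.803e-5 * A_tau * nu_ghz ** -4 * F_tilde * G * dm ** 2

def tau_layer(dm_l, nu_ghz, z_l, F_tilde, d_sl, d_lo, d_so, L, A_tau=1.0):
    """Thin-layer version (Eq. 2.2): angular diameter distances in Gpc,
    layer thickness L in Mpc, lens-frame dm_l in pc cm^-3; result in ms
    (48.03 us = 4.803e-2 ms)."""
    G = 2.0 * d_sl * d_lo / (L * d_so)
    return 4.803e-2 * A_tau * F_tilde * dm_l ** 2 * G / ((1.0 + z_l) ** 3 * nu_ghz ** 4)
```

The $\nu^{-4}$ frequency scaling and the $(1+z_\ell)^{-3}$ redshift suppression of an intervening layer's contribution follow immediately from these expressions.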
The pulse broadening time is related to the scintillation bandwidth $\Delta \nu_{\rm d}$ through the uncertainty principle $2\pi \tau \Delta \nu_{\rm d} = C_1$, where $C_1 = 1$ for a homogeneous medium and $C_1=1.16$ for a Kolmogorov medium \citep{1998ApJ...507..846C}. Multipath propagation is also manifested as angular broadening, $\theta_{\rm d}$, defined as the FWHM of the scattered image of a point source. The angular and pulse broadening induced by a thin screen are related to the distance between the observer and screen, which will be discussed further in Section~\ref{sec:121102}. \indent Measurements of $\tau$, $\Delta \nu_{\rm d}$, and $\theta_{\rm d}$ can include both extragalactic and Galactic components. We use the notation $\tau_{\rm MW, d}$, $\tau_{\rm MW, h}$, and $\tau_{\rm i,h}$ to refer to scattering contributed by the Galactic disk (excluding the halo), the Galactic halo, and intervening haloes, respectively, and an equivalent notation for DM. To convert between $\Delta \nu_{\rm d}$ and $\tau$ we adopt $C_1 = 1$. Wherever we use the notation $\tau$ and $\theta_{\rm d}$ we refer to the $1/e$ delay and the FWHM of the autocorrelation function that are typically measured. \subsection{Electron Density Model for the Galactic Halo}\label{sec:halomodels} Models of the Milky Way halo based on X-ray emission and oxygen absorption lines depict a dark matter halo permeated by hot ($T\sim 10^6$ K) gas with a virial radius between 200 and 300 kpc \citep[e.g.,][]{2019MNRAS.485..648P, 2020ApJ...888..105Y, 2020MNRAS.496L.106K}. Based on these models, the average DM contribution of the hot gas halo to FRBs is about 50 pc cm$^{-3}$, which implies a mean electron density $n_e \sim 10^{-4}$ cm$^{-3}$. However, the DM contribution of the Milky Way halo is still not very well constrained. 
\cite{2020MNRAS.496L.106K} compare the range of $\DM_{\rm MW,h}$ predicted by various halo models with the XMM-Newton soft X-ray background \citep{2013ApJ...773...92H} and find that the range of $\DM_{\rm MW,h}$ consistent with the XMM-Newton background spans over an order of magnitude and could be as small as about 10 pc cm$^{-3}$. Using a sample of DMs from 83 FRBs and 371 pulsars, \cite{2020ApJ...895L..49P} place a conservative upper limit on $\widehat{\DM}_{\rm MW, h}<123$ pc cm$^{-3}$, with an average value of $\widehat{\DM}_{\rm MW, h} \approx 60$ pc cm$^{-3}$. \indent Most models of the hot gas halo adopt a spherical density profile, but \cite{2020ApJ...888..105Y} and \cite{2020NatAs.tmp..214K} argue that a disk component with a scale height of about 2 kpc and a radial scale length of about 5 kpc should be added to the spherical halo based on the directional dependence of emission measure found in Suzaku and HaloSat X-ray observations \citep{2018ApJ...862...34N, 2020NatAs.tmp..214K}. In such a combined disk-spherical halo model, the disk would account for most of the observed X-ray emission attributed to the halo, while the diffuse, extended, spherical halo contains most of the baryonic mass. \cite{2020NatAs.tmp..214K} also suggest that significant, patchy variations may exist in the halo gas on scales $\sim400$ pc. The physical scales of the disk models fit to these recent X-ray observations are similar to the spatial scale of the warm ionized medium in the Galactic disk, and several orders of magnitude smaller than the spatial scales ($\sim 100$s of kpc) typical of spherical halo models. Whether such a disk component should really be attributed to the circumgalactic medium and not to the ISM of the Galactic disk is unclear. \indent \deleted{We adopt a fiducial model for the halo's density profile based on} \added{We use} the \citeauthor{2019MNRAS.485..648P} (PZ19) modified Navarro-Frenk-White (mNFW) profile \added{to model the halo density}. 
The mNFW profile adjusts the NFW profile's matter density cusp at the Galactic center with a more physical roll-off, giving a matter density of \begin{equation} \rho(y) = \frac{\rho_0}{y^{1-\alpha}(y_0 + y)^{2+\alpha}}, \end{equation} \deleted{where $y = K_c(r/r_{200})$} \added{where $y = K_c\times(r/r_{200})$}, $r$ is radial distance from the Galactic center, and $r_{200}$ is the virial radius within which the average density is 200 times the cosmological critical density. The characteristic matter density $\rho_0$ is found by dividing the total mass of the halo by the volume within the virial radius. The concentration parameter $K_c$ depends on the galaxy mass; e.g., $K_c = 7.7$ for a total Milky Way halo mass $M = 1.5 \times 10^{12} M_\odot$, and can range from $K_c = 13$ for $M = 10^{10} M_\odot$ to $K_c = 5$ for $M = 10^{14} M_\odot$ for redshifts $z<0.1$ \citep{1997ApJ...490..493N}. Like PZ19, we assume that $75\%$ of the baryonic matter in a galaxy is in the halo ($f_{\rm b} = 0.75$), and the fraction of the total matter density that is baryonic is $\Omega_{\rm b}/\Omega_{\rm m} $, the ratio of the baryonic matter density to the total matter density ($\Omega_{\rm b}/\Omega_{\rm m} = 0.16$ today). If $f_{\rm b}$ is smaller, then the electron density and the predicted scattering from a halo will be smaller. \indent The electron density profile of the halo is \deleted{$n_e(r) = f_{\rm b}(\Omega_{\rm b}/\Omega_{\rm m})\rho(r)\frac{\mu_{\rm e}}{m_{\rm p} \mu_{\rm H}}U(r)$} \added{\begin{equation}\label{eq:neh} n_e(r) \approx 0.86f_{\rm b}\times(\Omega_{\rm b}/\Omega_{\rm m})\frac{\rho(r)}{m_{\rm p}}U(r),\end{equation}} \deleted{where $\mu_{\rm e} = 1.12$ for fully ionized hydrogen and helium, $\mu_{\rm H} = 1.3$, and $m_{\rm p}$ is the proton mass. The function $U(r) = (1/2)\{1-{\rm tanh}[(r-r_c)/w]\}$ imposes a physical roll-off at a characteristic radius $r_c$ with a width set by $w$. 
Previous studies often implicitly assumed a physical boundary at the virial radius when integrating $n_e$ to obtain $\DM_{\rm MW,h}$, despite the fact that $\rho(r)$ extends well beyond the virial radius. Nonetheless, most of the halo gas likely lies within a few times the virial radius, so we set $r_c = 2r_{200} \approx 480$ kpc and $w = 20$ kpc. A comparison of our halo density profile with the PZ19 model, evaluated for the Milky Way, is shown in Figure~\ref{fig:halomod}.} \added{where $m_p$ is the proton mass and we have assumed a gas of fully ionized hydrogen and helium. The function $U(r) = (1/2)\{1-{\rm tanh}[(r-r_c)/w]\}$ imposes an explicit integration limit at a radius $r_c = 2r_{200}$ over a region of width $w = 20$ kpc so as to avoid sharp truncation of the model. Our implementation of the PZ19 model, evaluated for the Milky Way, is shown in Figure~\ref{fig:halomod}.} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{halomod.pdf} \caption{Electron density model and resulting DM contribution of the Milky Way halo, for an observer 8 kpc from the Galactic center. The modified NFW profile of \cite{2019MNRAS.485..648P} for $\alpha = y_0 = 2$ is shown in orange, and our \replaced{adaptation}{implementation} of the PZ19 model is shown in blue. The maximum DM contribution of the halo predicted by the model is 63 pc cm$^{-3}$, similar to the predictions of other halo density models for lines of sight to the Galactic anti-center \citep[e.g.,][]{2020ApJ...895L..49P, 2020ApJ...888..105Y,2020MNRAS.496L.106K}.} \label{fig:halomod} \end{figure} \subsection{Scattering from the Galactic Halo} \indent Generally speaking, scattering from the Galactic halo traces the same plasma that gives rise to dispersion, weighted by the fluctuation parameter $\widetilde{F}$. 
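The halo density model of the previous subsection can be sketched numerically. This is a minimal sketch, assuming the Milky Way values quoted above ($\alpha = y_0 = 2$, $K_c = 7.7$, $M = 1.5\times10^{12}\,M_\odot$, $r_{200} = 240$ kpc from $r_c = 2r_{200} \approx 480$ kpc, $f_{\rm b} = 0.75$, $\Omega_{\rm b}/\Omega_{\rm m} = 0.16$) and interpreting the normalization $\rho_0$ as the value for which the profile mass inside $r_{200}$ equals the total halo mass:

```python
# Minimal numerical sketch of the modified-NFW halo model, using the
# Milky Way parameters quoted in the text. The rho_0 normalization
# (profile mass inside r200 = total halo mass) is our interpretation.
import math

MSUN_G = 1.989e33       # solar mass in grams
KPC_CM = 3.086e21       # kpc in cm
M_P = 1.673e-24         # proton mass in grams

ALPHA, Y0, KC = 2.0, 2.0, 7.7
M_HALO = 1.5e12         # halo mass, M_sun
R200 = 240.0            # virial radius, kpc
RC, W = 2.0 * R200, 20.0  # roll-off radius and width, kpc

def rho_shape(r_kpc):
    """mNFW shape rho/rho_0 = y^(alpha-1)/(y0+y)^(2+alpha), alpha=y0=2."""
    y = KC * r_kpc / R200
    return y / (Y0 + y) ** 4

# Normalize rho_0 (M_sun / kpc^3) so the mass within r200 equals M_HALO.
n, dr = 4000, R200 / 4000
shell_sum = sum(4.0 * math.pi * ((i + 0.5) * dr) ** 2
                * rho_shape((i + 0.5) * dr) * dr for i in range(n))
rho0 = M_HALO / shell_sum

def n_e(r_kpc):
    """Electron density in cm^-3 (Eq. 2.4), with the tanh roll-off U(r)."""
    u = 0.5 * (1.0 - math.tanh((r_kpc - RC) / W))
    rho_cgs = rho0 * rho_shape(r_kpc) * MSUN_G / KPC_CM ** 3
    return 0.86 * 0.75 * 0.16 * rho_cgs / M_P * u

# Halo DM toward the anti-center: integrate n_e from r = 8 kpc outward.
m, r_max = 8000, 600.0
ds = (r_max - 8.0) / m
dm_halo = sum(n_e(8.0 + (j + 0.5) * ds) * ds * 1e3 for j in range(m))  # pc cm^-3
```

With these assumptions the integral lands close to the $\approx 63$ pc cm$^{-3}$ maximum quoted in the caption of Figure~\ref{fig:halomod}, and $n_e$ at $r \sim 50$ kpc is of order $10^{-4}$ cm$^{-3}$, consistent with the mean density implied by the X-ray models.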
To constrain $\widetilde{F}$ from measurements of $\tau$ and $\theta_{\rm d}$, we approximate the total scattering as a sum of two components: one from the disk and spiral arms of the Milky Way, which we denote with the subscript ($\rm MW,d$), and one from the Galactic halo, which we denote with the subscript ($\rm MW,h$). The total $\tau$ and $\theta_{\rm d}$ predicted by the model are therefore \begin{equation} \tau_{\rm MW}^{\rm total} = \tau_{\rm MW,d} + \tau_{\rm MW,h} \end{equation} and \begin{equation} \theta_{\rm d,MW}^{\rm total} = \sqrt{\theta_{\rm MW,d}^2 + \theta_{\rm MW,h}^2}. \end{equation} We adopt the NE2001 predictions for the $\rm MW$ components and model the halo components using Equations~\ref{eq:taudmnu} and~\ref{eq:ds}. The composite parameter $A_\tau(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h}$ is constrained by maximizing the likelihood function \begin{equation}\label{eq:Like} \mathcal{L}((\widetilde{F} \times {\rm DM}^2)_{\rm MW,h} | \tau_j) = \prod_j N(\tau_{j,\rm MW}^{\rm total} - \tau_j^{\rm obs}, \sigma_{\tau,j}^2) \end{equation} using measurements $\tau^{\rm obs}$ and $\theta_{\rm d}^{\rm obs}$. The variance of the likelihood function $\sigma_{\tau_j}^2$ is taken from the measurement uncertainties, where we implicitly assume that measurements of $\tau$ and $\theta_{\rm d}$ are sufficiently approximated by Gaussian PDFs. An estimate of $\widetilde{F}_{\rm MW,h}$ can then be obtained by assuming a given halo density profile or constructing a PDF for $\DM_{\rm MW,h}$, and adopting a value for $A_\tau$. \section{The Milky Way Halo} \indent In order to determine how the Milky Way halo (and in turn other galaxy haloes) contributes to the scattering of FRBs, we must constrain the scattering contribution of the Galactic disk. In the following sections, we first determine the amount of scattering that can occur in the thick disk of the Galaxy using the distribution of pulsar scattering measurements and DMs at high Galactic latitudes. 
We then compare the scattering measurements of FRB 121102 and FRB 180916 to the scattering expected from the Galactic disk using NE2001, and explain discrepancies between the scattering predictions of the NE2001 and YMW16 Galactic disk models. Finally, in Section~\ref{sec:halo}, we constrain the scattering contribution of the Galactic halo, followed by discussion of scattering constraints from pulsars in the Magellanic Clouds. \subsection{Scattering from the Thick Disk}\label{sec:disk} Most currently known FRBs lie at high Galactic latitudes, and their LoS through the Galaxy predominantly sample the thick disk, which has a mean density at mid-plane of 0.015 cm$^{-3}$ and a scale height $\approx 1.6$ kpc \citep{2020ApJ...897..124O}. The distribution of $\tau/{\rm DM}^2$ for Galactic pulsars with measurements of $\tau$ \citep[][and references therein]{2016arXiv160505890C} and {\rm DM}\ and other parameters from \citet{2005AJ....129.1993M}\footnote{\url{http://www.atnf.csiro.au/research/pulsar/psrcat}} yields a direct constraint on $\widetilde{F}$: $\tau/{\rm DM}^2 \approx (16 \hspace{0.05in} {\rm ns})A_\tau\widetilde{F}$, for $\nu = 1$ GHz and $G_{\rm scatt} = 1/3$ for sources embedded in the scattering medium. The distribution of $\widetilde{F}$ for all Galactic pulsars, assuming $A_\tau\approx1$, is shown in Figure~\ref{fig:taudmb}. For the pulsars above $20^\circ$ in absolute Galactic latitude ($|b| > 20^\circ$), the mean value of $\tau/{\rm DM}^2$ from a logarithmic fit is $(5.3^{+5.0}_{-3.3})\times10^{-8}$ ms pc$^{-2}$ cm$^{6}$, which yields $\widetilde{F} \approx (3\pm2)\times10^{-3}$ pc$^{-2/3}$ km$^{-1/3}$. The value of $\widetilde{F}$ based on high-latitude pulsars is consistent with the related $F = l_i^{1/3} \widetilde{F}$ factor used in the NE2001 model for scattering in the thick disk. 
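These numbers can be checked by direct arithmetic. A minimal sketch, assuming $A_\tau \approx 1$ and using the thick-disk scaling ${\rm DM} \approx 23.5\,{\rm csc}(|b|)$ pc cm$^{-3}$ quoted below:

```python
# Consistency check of the thick-disk numbers, assuming A_tau ~ 1.
# tau/DM^2 ~ (16 ns) * F_tilde at 1 GHz with G_scatt = 1/3 follows from
# the 48.03 ns coefficient of Eq. (2.1).
import math

tau_over_dm2 = 5.3e-8  # ms pc^-2 cm^6, high-latitude logarithmic fit at 1 GHz
F_tilde = tau_over_dm2 / (4.803e-5 / 3.0)  # pc^-2/3 km^-1/3
# -> about 3.3e-3, consistent with the quoted (3 +/- 2) x 10^-3

def tau_disk_ms(b_deg):
    """Thick-disk pulse broadening at 1 GHz for Galactic latitude b,
    using DM ~ 23.5 csc|b| pc cm^-3."""
    dm = 23.5 / math.sin(math.radians(abs(b_deg)))
    return tau_over_dm2 * dm ** 2
```

Evaluating `tau_disk_ms` at $|b| = 20^\circ$ and $90^\circ$ recovers the $\sim 0.25$ $\mu$s and $\sim 29$ ns limits quoted in the next paragraph.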
\indent A structural enhancement to radio scattering for LoS below $20^\circ$ in absolute Galactic latitude ($|b| < 20^\circ$) is reflected in the distribution of $\tau/{\rm DM}^2$ shown in Figure~\ref{fig:taudmb}, which shows an increase in $\widetilde{F}$ of multiple orders of magnitude at low latitudes, with the largest values of $\widetilde{F}$ dominating LoS to the Galactic center. This latitudinal and longitudinal dependence of $\widetilde{F}$ is directly responsible for the ``hockey-stick" relation between $\tau$ and DM for Galactic pulsars, in which high-DM pulsars lying close to the Galactic plane and towards the Galactic center have a much steeper dependence on ${\rm DM}$ than pulsars lying high above the Galactic plane or towards the Galactic anti-center \citep[e.g.,][]{2004ApJ...605..759B, 2015ApJ...804...23K, 2016arXiv160505890C}. The implications of the directional dependence of $\widetilde{F}$ for FRB LoS are discussed further in Section~\ref{sec:ymw16}. \indent For the many FRBs in high Galactic latitude directions, the Galactic disk has a virtually undetectable contribution to the observed pulse broadening. The DM contribution of the thick disk is about ($23.5 \times {\rm csc}(|b|)$) pc cm$^{-3}$, which varies negligibly with longitude for $|b|>20^\circ$ \citep{2020ApJ...897..124O}. The pulse broadening at 1 GHz expected from the thick disk therefore ranges from $\tau < 0.25$ $\mu$s at $|b|=20^\circ$ to $\tau < 29$ ns at $|b| = 90^\circ$. As discussed in the following section, scattering from the Galactic thin disk and spiral arms increases dramatically for FRB LoS close to the Galactic plane. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{taudm.pdf} \caption{The distribution of $\tau/{\rm DM}^2$ (which is directly proportional to $\widetilde{F}$) versus Galactic latitude for all Galactic pulsars, with $\tau$ in ms referenced to 1 GHz and DM in pc cm$^{-3}$. The average value and root-mean-square of the distribution for all pulsars with $|b| > 20^\circ$ are shown in blue. 
Pulsars closer to the Galactic center ($|b|<10^\circ$, $|l|<40^\circ$) are \replaced{highlighted in black}{shown as orange crosses}.} \label{fig:taudmb} \end{figure} \subsection{Scattering Constraints for Two FRB Case Studies}\label{sec:frbs} Unlike Galactic pulsars, for which the scintillation bandwidth and pulse broadening both result from the same electrons and conform to the uncertainty relation $2\pi\Delta\nu_{\rm d}\tau_{\rm d} = C_1$, some FRBs indicate that two \added{scattering} media are involved. \deleted{The scintillation bandwidth is caused by} \added{In these cases, the scintillation bandwidth is consistent with diffractive interstellar scintillation caused by} foreground Galactic turbulence while \added{the} pulse broadening \deleted{is extragalactic in origin, most likely from the host galaxy} \added{also has contributions from an extragalactic scattering medium \citep{ 2015Natur.528..523M, 2018ApJ...863....2G, 2019ARA&A..57..417C}}. Here we analyze the Galactic scintillations of two FRBs \added{with highly precise scattering measurements} in order to place constraints on any scattering in the Galactic halo. \subsubsection{FRB 121102}\label{sec:121102} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{FRB121102_DISSBW_binned.pdf} \caption{Scintillation bandwidth vs. radio frequency for FRB 121102. The blue line and shaded region designates the least-squares fit and errors of ${\rm log}(\Delta \nu_{\rm d})$ vs. $\rm log(\nu)$. The green and orange lines are anchored to the lowest frequency point with the indicated frequency dependences. The corresponding pulse broadening time is shown on the right-hand axis assuming $\tau = 1/(2\pi\Delta \nu_{\rm d})$. 
Data points are from \cite{2019ApJ...876L..23H}, \cite{2018Natur.553..182M}, \cite{2018ApJ...863..150S}, and \cite{2018ApJ...863....2G}.} \label{fig:dissbw} \end{figure} FRB 121102 currently has the most comprehensive set of scattering constraints on an FRB source so far, with scintillation bandwidths $\Delta \nu_{\rm d} = 58.1 \pm 2.3$ kHz at 1.65 GHz \citep{2019ApJ...876L..23H}, 5 MHz at 4.5 GHz \citep{2018Natur.553..182M}, $6.4\pm1.4$ MHz at 4.85 GHz \citep{2018ApJ...863..150S}, and $10-50$ MHz between 5 and 8 GHz \citep{2018ApJ...863....2G}. All of these scintillation bandwidth measurements are assembled in Figure~\ref{fig:dissbw}, along with a power-law fit to the data using linear least squares that gives a mean value $\Delta \nu_{\rm d} = 3.8^{+2.5}_{-1.5}$ kHz at 1 GHz. \indent FRB 121102 also has a pulse broadening time upper limit of $\tau < 9.6$ ms at 500 MHz \citep{2019ApJ...882L..18J}, and angular broadening $\theta_{\rm d} = 2\pm 1$ mas at 1.7 GHz and $\theta_{\rm d} \sim 0.4-0.5$ mas at 5 GHz, which was measured in the \cite{2017ApJ...834L...8M} high-resolution VLBI study of the FRB and its persistent radio counterpart. These angular diameters are consistent with those reported in \cite{2017Natur.541...58C} and with the NE2001 prediction. The scattering measurements are shown in Table~\ref{tab:measures} and are referenced to 1 GHz assuming a $\tau \propto \nu^{-4}$ frequency scaling. \indent The NE2001 angular broadening and scintillation bandwidth predictions for FRB 121102 are broadly consistent with the corresponding empirical constraints (see Table~\ref{tab:measures}). In NE2001, the scattering for this LoS is dominated by an outer spiral arm located 2 kpc away and by the thick disk, which extends out to 17 kpc from the Galactic center \citep{2002astro.ph..7156C}. The ${\rm C_n^2}$, electron density, and DM predicted by NE2001 along the LoS to FRB 121102 are shown in Figure~\ref{fig:ne2001}. 
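The power-law fit to the scintillation bandwidths can be reproduced approximately. This is a sketch only: it uses an unweighted log-log least-squares fit, and a single representative value of 30 MHz at 6.5 GHz is assumed to stand in for the $10-50$ MHz range between 5 and 8 GHz, so it lands near, rather than exactly on, the quoted $3.8^{+2.5}_{-1.5}$ kHz at 1 GHz:

```python
# Rough reproduction of the power-law fit of scintillation bandwidth vs.
# frequency for FRB 121102. Unweighted log-log least squares; the
# (6.5 GHz, 30 MHz) point is an assumed stand-in for the 10-50 MHz range.
import math

# (nu in GHz, scintillation bandwidth in kHz)
points = [(1.65, 58.1), (4.5, 5.0e3), (4.85, 6.4e3), (6.5, 3.0e4)]

xs = [math.log(nu) for nu, _ in points]
ys = [math.log(bw) for _, bw in points]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
bw_1ghz_khz = math.exp(ybar - slope * xbar)  # extrapolated to 1 GHz

# Uncertainty-principle conversion to a Galactic pulse-broadening time,
# 2*pi*tau*dnu = C1 with C1 = 1:
tau_mw_ms = 1.0 / (2.0 * math.pi * bw_1ghz_khz * 1e3) * 1e3
```

The fitted index comes out close to the $\nu^4$ scaling expected for $\Delta\nu_{\rm d}$ in the strong-scattering regime, and applying the same conversion to the paper's fitted 3.8 kHz gives $\tau \approx 0.042$ ms, the $\tau_{\rm MW,d} \approx 0.04$ ms quoted below.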
Modeling of the anti-center direction in NE2001 is \added{independent of our analysis and is} based on DM and scattering measurements of \added{Galactic} pulsars in the same general direction and upper bounds on the angular scattering of extragalactic sources. The \added{NE2001} model parameters were constrained through a likelihood analysis of these measurements, which revealed that the fluctuation parameter was smaller for LoS that probe the outer Galaxy compared to the inner Galaxy \citep{1998ApJ...497..238L}. This result required that the thick disk component of NE2001 have a smaller fluctuation parameter compared to the thin disk and spiral arm components that are relevant to LoS through the inner Galaxy. \indent Since the measured $\Delta \nu_{\rm d}$ and $\theta_{\rm d}$ for FRB 121102 are broadly consistent with the predicted amount of scattering from the Galactic disk, we use $\Delta \nu_{\rm d}$ and $\theta_{\rm d}$ to estimate the effective distance to the dominant scattering material. For thin-screen scattering of a source located at a distance $d_{\rm so}$ from the observer, the scattering diameter $\theta_{\rm s}$ is related to the observed angular broadening by \begin{equation} \theta_{\rm d} \sim \theta_{\rm s} ( d_{\rm sl} / d_{\rm so}) = \theta_s \bigg(1 - \frac{d_{\rm lo}}{d_{\rm so}}\bigg), \end{equation} where $d_{\rm sl}$ is the source-to-screen distance and $d_{\rm lo}$ is the screen-to-observer distance. The scattering diameter is related to the pulse broadening delay by $\tau \approx A_\tau d_{\rm so}(d_{\rm sl}/d_{\rm so})(1-d_{\rm sl}/d_{\rm so})\theta_{\rm s}^2/8 ({\rm ln} 2)c$ \citep{2019ARA&A..57..417C}. For a thin screen near the observer and an extragalactic source, $d_{\rm lo} \ll d_{\rm sl}$, giving $\theta_{\rm d} \approx \theta_{\rm s}$ and \begin{equation}\label{eq:ds} \theta_{\rm d} \approx \left(\frac{4({\rm ln}2) A_\tau C_1 c}{ \pi \Delta \nu_{\rm d} d_{\rm lo}} \right)^{1/2}. 
\end{equation} The scattering screen location can thus be directly estimated from measurements of $\theta_{\rm d}$ and $\Delta \nu_{\rm d}$. \indent Assuming that the same Galactic scattering material gives rise to both the angular broadening and the scintillation of FRB 121102, Equation~\ref{eq:ds} implies the scattering material has an effective distance $\hat{d}_{\rm lo} \approx 2.3$ kpc from the observer (assuming $A_\tau \approx 1$), which is consistent with the distance to the spiral arm, as shown in Figure~\ref{fig:ne2001}. A numerical joint-probability analysis of the uncertainties in $\Delta \nu_{\rm d}$ and $\theta_{\rm d}$, assuming both quantities follow normal distributions, allows the screen to be as close as 1.6 kpc or as far as 5.5 kpc (corresponding to the 15$\%$ and 85$\%$ confidence intervals). Figure~\ref{fig:thetad} shows a comparison of the relationship between $\theta_{\rm d}$ and $d_{\rm lo}$ for a thin screen and the measured $\theta_{\rm d}$ from \cite{2017ApJ...834L...8M}. Given that there is no known HII region along the LoS to FRB~121102, the most likely effective screen is in fact the spiral arm. \indent The scintillation bandwidth implies a pulse broadening contribution from the Milky Way disk and spiral arms $\tau_{\rm MW,d} \approx 0.04 \pm 0.02$ ms at 1 GHz. The upper limit on $\tau$ measured by \cite{2019ApJ...882L..18J} is an order of magnitude larger than the $\tau_{\rm MW,d}$ inferred from $\Delta \nu_{\rm d}$. Any additional scattering beyond the Galactic contribution is more likely from the host galaxy due to the lack of intervening galaxies along the LoS, and the small amount of scattering expected from the IGM \citep{2013ApJ...776..125M, 2020arXiv201108519Z}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{DM_vs_d_NE2001p_l_b_dm_174.95_-0.22_188.33012280240598.pdf} \caption{Galactic DM, electron density $n_e$, and ${\rm C_n^2}$ contribution predicted by NE2001 for FRB 121102. 
The maximum DM (excluding the Galactic halo) is 188 pc cm$^{-3}$. The sharp changes in $n_e$ and ${\rm C_n^2}$ between 0 and 0.3 kpc are due to structure in the local ISM. The shaded grey region indicates the distance to the scattering screen derived from a numerical joint-probability analysis of the measured scattering constraints for FRB~121102; see Figure~\ref{fig:thetad}.} \label{fig:ne2001} \end{figure} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{FRB121102_thetad_screen_constraint.pdf} \caption{Scattering diameter vs. effective distance. The blue band indicates the angular diameter $\theta_{\rm d} = 2\pm1$ mas (1$\sigma$ errors) from \cite{2017ApJ...834L...8M}. The green band is the predicted angular diameter for a thin screen that matches the least-squares fit to the scintillation bandwidth measurements and the uncertainties at $\pm1\sigma$. All values are expressed at 1.7 GHz. A numerical joint-probability estimate constraining the overlap of the green and blue regions gives a screen distance $\hat{d}_{\rm lo} \approx 2.3^{+3.2}_{-0.7}$ kpc. 
} \label{fig:thetad} \end{figure} \begin{deluxetable*}{c c c c c c }\label{tab:measures} \tablecaption{DM and Scattering Referenced to 1 GHz of FRB 121102 and FRB 180916} \tablewidth{0pt} \tablehead{\multicolumn{6}{c}{Measurements} \\ \hline \colhead{FRB} & \colhead{DM (pc cm$^{-3}$)} & \colhead{$\tau$ (ms)$^{(1, 2)}$} & $\Delta \nu_{\rm d}$ (kHz)$^{(3,4)}$ & \colhead{$\theta_{\rm d}$ (mas)$^{(5)}$} & \colhead{$\hat{d}_{\rm lo}$ (kpc)}} \startdata 121102 & $557$ & $<0.6$ & $3.8^{+2.5}_{-1.5}$ & $5.8 \pm 2.9$ & $2.3^{+3.2}_{-0.7}$ \\ 180916 & $349$ & $<0.026$ & $7.1\pm1.5$& \nodata & \nodata \\ \hline \hline \multicolumn{6}{c}{Models}\\ \hline FRB & DM (pc cm$^{-3}$) & $\tau$ (ms) & $\Delta \nu_{\rm d}$ (kHz) & $\theta_{\rm d}$ (mas) & $d_{\rm lo}$ (kpc) \\ \hline \multicolumn{6}{c}{NE2001}\\ \hline 121102 & $188$ & $0.016$ & $11$ & $6$ & $2.05$ \\ 180916 & $199$ & $0.015$ & $12$ & $5$ & $2.5$\\ \hline \multicolumn{6}{c}{YMW16}\\ \hline 121102 & $287$ & $0.84$ & $0.2$ & \nodata & \nodata \\ 180916 & $243$ & $0.42$ & $0.4$ & \nodata & \nodata \\ \enddata \tablecomments{The top of the table shows the observed DM and scattering for FRB 121102 and FRB 180916. Since the scintillation and angular broadening measurements are broadly consistent with Galactic foreground predictions by NE2001, we emphasize that they are Galactic, whereas the pulse broadening may have an extragalactic contribution from the host galaxy. The scattering measurements are referenced to 1 GHz assuming $\tau \propto \nu^{-4}$ unless otherwise noted.\\ \indent The bottom of the table shows the asymptotic DM and scattering predictions of NE2001 and YMW16. NE2001 adopts $C_1 = 1.16$ to convert between $\Delta \nu_{\rm d}$ and $\tau$, so we also use this value to calculate $\Delta \nu_{\rm d}$ from $\tau$ for YMW16. 
YMW16 predictions were calculated in the IGM mode, but we only report the Galactic component of DM predicted by the model for comparison with NE2001.\\ \indent References: (1) \cite{2019ApJ...882L..18J}; (2) \cite{2020ApJ...896L..41C}; (3) this work (see Figure~\ref{fig:dissbw}; referenced to 1 GHz using a best-fit power law); (4) \cite{2020Natur.577..190M}; (5) \cite{2017ApJ...834L...8M}.} \end{deluxetable*} \subsubsection{FRB 180916}\label{sec:180916} The scattering constraints for FRB 180916 consist of a scintillation bandwidth $\Delta \nu_{\rm d} = 59 \pm 13$ kHz at 1.7 GHz \citep{2020Natur.577..190M} and a pulse broadening upper limit $\tau < 1.7$ ms at 350 MHz \citep{2020ApJ...896L..41C}. The $\Delta \nu_{\rm d}$ and $\tau$ upper limit are entirely consistent with each other, so we again use $\Delta \nu_{\rm d}$ and the inferred $\tau_{\rm MW,d}$ for the rest of the analysis due to its higher precision. Based on $\Delta \nu_{\rm d}$, $\tau_{\rm MW,d} = 0.023\pm0.005$ ms at 1 GHz. As with FRB 121102 the NE2001 scattering predictions for this LoS are consistent with the empirical constraints to within the model's uncertainty, suggesting that the Galactic halo has a small ($\lesssim \mu$s level) contribution to the observed scattering. \subsubsection{Comparison with YMW16 Scattering Predictions}\label{sec:ymw16} The YMW16 model significantly overestimates the scattering of FRB 121102 and FRB 180916. The DM and scattering predictions for these FRBs are shown in Table~\ref{tab:measures}. Evaluating YMW16 for FRB 121102 using the \texttt{IGM} mode gives \replaced{$\log(\tau)=-3.012$}{$\log(\tau)=-3.074$} with $\tau$ in seconds, implying $\tau = 0.84$~ms at 1~GHz, corresponding to $\Delta \nu_{\rm d} \approx 0.2$~kHz, about 50 times smaller than the NE2001 value. 
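The $\tau \leftrightarrow \Delta \nu_{\rm d}$ conversion used here and in Table~\ref{tab:measures} follows the standard relation $2\pi \Delta \nu_{\rm d}\, \tau = C_1$ with $C_1 = 1.16$, the value adopted by NE2001. A minimal numerical check (the function name is ours):

```python
import math

C1 = 1.16  # value adopted by NE2001 (uniform-medium convention)

def dnu_d_hz(tau_s):
    """Scintillation bandwidth in Hz from pulse broadening time in seconds."""
    return C1 / (2.0 * math.pi * tau_s)

print(dnu_d_hz(0.84e-3))   # YMW16, FRB 121102: ~220 Hz, i.e. ~0.2 kHz
print(dnu_d_hz(0.016e-3))  # NE2001, FRB 121102: ~11.5 kHz
```

This reproduces both the $\approx 0.2$~kHz YMW16 value and the $\approx 11$~kHz NE2001 value quoted in the table.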
Compared to the measured scattering, the nominal output of the YMW16 model overestimates the scattering \added{toward FRB 121102} by a factor of 28 to 35 (depending on whether a $\nu^{-4}$ or $\nu^{-4.4}$ scaling is used). Moreover, combining the measured $\theta_{\rm d}$ with the YMW16 estimate for $\tau$ implies a scattering screen distance $\sim 500$ kpc, beyond any local Galactic structure that could reasonably account for the scattering. For FRB 180916, YMW16 also overestimates $\tau$, predicting 0.42~ms at 1~GHz, implying a scintillation bandwidth of 0.4~kHz at 1~GHz. \indent The discrepancies between the observed scattering and the YMW16 predictions are due to several important factors. Unlike NE2001, YMW16 does not explicitly model electron density fluctuations. Instead, it calculates DM for a given LoS and then uses the $\tau-{\rm DM}$ relation based on Galactic pulsars to predict $\tau$. In using the $\tau-{\rm DM}$ relation, the YMW16 model incorrectly adjusts for the scattering of extragalactic bursts. The waves from extragalactic bursts are essentially planar when they reach the Galaxy, \deleted{which means that the Galactic plasma will scatter them more than it would the spherical waves from a Galactic pulsar at a distance of a few scale lengths of the electron density} \added{which means they are scattered through wider angles than diverging spherical waves from a Galactic pulsar would be}. The differences between plane and spherical wave scattering are discussed in detail for FRBs in \cite{2016arXiv160505890C}. The YMW16 model accounts for this difference by reducing the Galactic prediction of $\tau$ by a factor of two, when \added{geometric weighting of the mean-square scattering angle implies that} the \added{Galactic} scattering \added{prediction} should really be larger by a factor of three \added{to apply to extragalactic FRBs (see Eq.~10 in \citealt{2016arXiv160505890C})}. 
This implies that values for $\tau$ in the output of YMW16 should be multiplied by a factor of six, which means that the model's overestimation of the scattering is really by a factor of 170 to 208 when one only considers the correction for planar wave scattering. \indent YMW16 may also overestimate the Galactic contribution to DMs of extragalactic sources viewed in the Galactic anti-center direction (and perhaps other low-latitude directions) because it significantly overestimates the observed DM distribution of Galactic pulsars that NE2001 is based on. In YMW16, the dominant DM contributions to extragalactic sources in this direction are from the thick disk and from the spiral arms exterior to the solar circle. Together these yield DM values of 287 pc cm$^{-3}$\ and 243 pc cm$^{-3}$\ for FRB~121102 and FRB~180916, respectively. These DM predictions are over $50\%$ and $20\%$ larger for each FRB than the NE2001 values, which may be due to overestimation of the densities or characteristic length scales of the outer spiral arm and thick disk components. \indent The primary cause for YMW16's scattering over-prediction is that the part of the pulsar-derived $\tau-{\rm DM}$ relation that applies to large values of DM should not be used for directions toward the Galactic anti-center. The $\tau-{\rm DM}$ relation has the empirical form \citep{2016arXiv160505890C} \begin{equation} \tau = (2.98\times10^{-7}\ {\rm ms})\, {\rm DM}^{1.4}(1+3.55\times10^{-5}{\rm DM}^{3.1}) \label{eq:taud_vs_DM} \end{equation} based on a fit to pulsar scattering data available through 2016. Similar fits were previously done by \cite{1997MNRAS.290..260R}, \cite{2004ApJ...605..759B}, and \cite{2015ApJ...804...23K}. It scales as ${\rm DM}^{1.4}$ for ${\rm DM} \lesssim 30$ pc cm$^{-3}$ and as ${\rm DM}^{4.5}$ for ${\rm DM} \gtrsim 100$ pc cm$^{-3}$. 
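The empirical relation in Equation~\ref{eq:taud_vs_DM} and its quoted asymptotic slopes can be verified numerically; a short sketch, using the coefficients given above:

```python
import math

def tau_dm_ms(dm):
    """Empirical tau-DM relation (Equation eq:taud_vs_DM); tau in ms at 1 GHz."""
    return 2.98e-7 * dm ** 1.4 * (1.0 + 3.55e-5 * dm ** 3.1)

# Local logarithmic slopes confirm the quoted asymptotic scalings:
slope_low = math.log(tau_dm_ms(6.0) / tau_dm_ms(3.0)) / math.log(2.0)      # ~1.4
slope_high = math.log(tau_dm_ms(400.0) / tau_dm_ms(200.0)) / math.log(2.0)  # ~4.5
```

The relation doubles with slope $\approx 1.4$ at low DM and steepens to $\approx 4.5$ above ${\rm DM} \sim 100$ pc cm$^{-3}$, as stated in the text.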
\indent The YMW16 model uses Krishnakumar's model for 327~MHz scattering times scaled to 1~GHz by a factor $(0.327)^4$, giving $\tau = (4.1\times10^{-8}\ {\rm ms})\ {\rm DM}^{2.2}(1+0.00194\times{\rm DM}^{2})$. This scaling law is in reasonable agreement with the expression in Equation~\ref{eq:taud_vs_DM} except at very low DM values. The Krishnakumar scaling law adopted the \cite{1997MNRAS.290..260R} approach of fixing the leading DM exponent to 2.2, which is based on the assumption that the relatively local ISM is uniform. This assumption is imperfect given our knowledge of the Local Bubble and other shells and voids in the local ISM. The steep $\tau\propto {\rm DM}^{4.2 \ {\rm to }\ 4.5}$ scaling for Galactic pulsars is from LoS that probe the inner Galaxy, where the larger star formation rate leads to a higher supernova rate that evidently affects the turbulence in the HII gas, and results in a larger $\widetilde{F}$, as shown in Section~\ref{sec:disk}. \indent If the YMW16 model were to use instead the shallow part of the $\tau-{\rm DM}$ relation, $\tau\propto {\rm DM}^{2.2 \ {\rm to} \ 1.4}$, which is more typical of LoS through the outer Galaxy, its scattering time estimates would be smaller by a factor of $\sim 1.94\times 10^{-3} \times (287)^{2} \sim 160$ or $\sim 3.55\times 10^{-5} \times (287)^{3.1} \sim 1500$, depending on which $\tau-{\rm DM}$ relation is used (and using FRB 121102 as an example). These overestimation factors could be considerably smaller if smaller DM values were used. While there is a considerable range of values for the overestimation factor based on the uncertainties in the empirical scaling law, it is reasonable to conclude that the high-DM part of the $\tau-{\rm DM}$ relation should not be used for the anti-center FRBs. 
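The overestimation factors quoted above follow directly from the bracketed high-DM terms of the two scaling laws, evaluated at the YMW16 DM prediction for FRB 121102; as a quick arithmetic check:

```python
# High-DM terms of the two tau-DM scaling laws, evaluated at the YMW16
# DM prediction for FRB 121102 (287 pc/cc), reproduce the quoted factors.
dm = 287.0
factor_krishnakumar = 1.94e-3 * dm ** 2.0  # DM^2 term of the Krishnakumar form, ~160
factor_eq_taud = 3.55e-5 * dm ** 3.1       # DM^3.1 term of Eq. taud_vs_DM, ~1500
```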
\subsection{Fluctuation Parameter of the Galactic Halo}\label{sec:halo} The measurements of $\tau$ and $\theta_{\rm d}$ for FRBs 121102 and 180916, combined with the scattering predictions of NE2001, yield a maximum likelihood estimate for the pulse broadening contribution of the Milky Way halo (see Equation~\ref{eq:Like}). \deleted{The likelihood function is shown in Figure~6. The likelihood function is positive for values of $(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h}$ extending to zero because the measured scattering from both FRBs is close enough to the NE2001 predictions that the halo could have a negligible scattering contribution.} The $95\%$ upper confidence interval yields $(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h} < 250/A_\tau$ pc$^{4/3}$ km$^{-1/3}$ cm$^{-1/3}$. The maximum amount of pulse broadening expected from the Galactic halo is therefore $\tau_{\rm MW,h} < 12$ $\mu$s at 1 GHz, which is comparable to the scattering expected from the Galactic disk for LoS towards the Galactic anti-center or at higher Galactic latitudes. \indent Based on \deleted{a meta-analysis of} the broad range of $\DM_{\rm MW,h}$ currently consistent with the empirical and modeled constraints (see Section~\ref{sec:halomodels}), we construct a Gaussian probability density function (PDF) for $\widehat{{\rm DM}}_{\rm MW,h}$ with a mean of 60 pc cm$^{-3}$\ and $\sigma_{\rm \widehat{DM}} = 18$ pc cm$^{-3}$ \deleted{, which is shown in Figure~6}. \added{Combining} this PDF \added{with the maximum likelihood estimate for $(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h}$} yields an upper limit $\widetilde{F}_{\rm MW,h} < 0.03/A_\tau$ pc$^{-2/3}$ km$^{-1/3}$. While $A_\tau$ is probably about 1, if $A_\tau$ is as small as $1/6$ then $\widetilde{F}_{\rm MW,h}$ could be up to 6 times larger. 
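As a sketch of how the $(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h}$ limit maps to a pulse broadening limit, we assume the scaling $\tau \approx 48.03\ {\rm ns}\, A_\tau G\, (\widetilde{F} \times {\rm DM}^2)\, \nu^{-4}$ with $\nu$ in GHz; the coefficient and the geometry factor $G$ ($\approx 1$ for a Galactic LoS) come from the scattering formalism cited in the text (Equation~\ref{eq:taudmnuz}) and are assumptions here:

```python
# Hedged sketch: convert the halo limit
#   (F~ x DM^2)_{MW,h} < 250 pc^{4/3} km^{-1/3} cm^{-1/3}
# into a pulse broadening limit, assuming
#   tau ~ 48.03 ns * A_tau * G * (F~ x DM^2) * nu^{-4},  nu in GHz, G ~ 1.

def tau_ns(FDM2, nu_ghz=1.0, A_tau=1.0, G=1.0):
    return 48.03 * A_tau * G * FDM2 * nu_ghz ** -4

tau_halo_us = tau_ns(250.0) / 1e3
print(tau_halo_us)  # ~12 microseconds at 1 GHz, as quoted in the text
```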
\indent \added{This estimate of $\widetilde{F}_{\rm MW,h}$ is based on just two LoS towards the Galactic anti-center, and it is unclear how much $\widetilde{F}_{\rm MW,h}$ will vary between different LoS through the halo. Given that sources viewed through the inner Galaxy (near $b = 0^\circ$) are more heavily scattered, it is unlikely that estimates of $\widetilde{F}_{\rm MW,h}$ will be obtainable for LoS that intersect the halo through the inner Galaxy. However, FRB 121102, FRB 180916, and most of the FRBs detected at higher latitudes do not show evidence of any intense scattering regions that might be associated with the halo \citep[e.g.,][]{2020MNRAS.497.1382Q}, suggesting that extremely scattered FRBs would be outliers and not representative of the Galactic halo's large-scale properties. All FRBs are ultimately viewed through not only the Galactic halo but also the haloes of their host galaxies and, in some cases, the haloes of intervening galaxies. Given the observed variations in scattering between different LoS through the Milky Way, it appears most likely that the most heavily scattered FRBs will be viewed through scattering regions within galaxy disks rather than haloes, and extrapolating our analysis to a larger sample of FRBs will require determining whether observed variations in $\widetilde{F}$ are due to variations between galaxy haloes, disks, or the sources' local environments.} In the following sections, we compare the Milky Way halo scattering contribution \added{inferred from FRBs 121102 and 180916} to scattering observed from the Magellanic Clouds and from galaxy haloes intervening along the LoS to FRBs. \subsection{Constraints from Pulsars in the Magellanic Clouds}\label{sec:MCs} At distances of 50 to 60 kpc and latitudes around $-30^\circ$, pulsar radio emission from the Large and Small Magellanic Clouds (LMC/SMC) mostly samples the Galactic thick disk and a much smaller path length through the Galactic halo than FRBs. 
So far, twenty-three radio pulsars have been found in the LMC and seven in the SMC \citep[e.g.,][]{1991MNRAS.249..654M,2001ApJ...553..367C,2006ApJ...649..235M, 2013MNRAS.433..138R, 2019MNRAS.487.4332T}. Very few scattering measurements exist for these LoS. PSR B0540$-$69 in the LMC has a DM of 146.5 pc cm$^{-3}$\ and was measured to have a pulse broadening time $\tau = 0.4$ ms at 1.4 GHz \citep{2003ApJ...590L..95J}. The Galactic contributions to DM and scattering predicted by NE2001 towards this pulsar are ${\rm DM}_{\rm NE2001} = 55$ pc cm$^{-3}$\ and $\tau_{\rm NE2001} = 0.3\times10^{-3}$ ms at 1 GHz. Based on this DM estimate, the pulsar DM receives a contribution of about $92$ pc cm$^{-3}$\ from the LMC and the Galactic halo. The lowest DMs of pulsars in the LMC have been used to estimate the DM contribution of the halo to be about 15 pc cm$^{-3}$\ for pulsars in the LMC \citep{2020ApJ...888..105Y}, which suggests that the LMC contributes about 77 pc cm$^{-3}$\ to the DM of B0540$-$69. \indent The scattering observed towards B0540$-$69 is far in excess of the predicted scattering from the Galactic disk. Since B0540$-$69 not only lies within the LMC but also within a supernova remnant, it is reasonable to assume that most of the scattering is contributed by material within the LMC. Using the upper limit $\widetilde{F}_{\rm MW,h} < 0.03/A_\tau$ pc$^{-2/3}$ km$^{-1/3}$\ and $\widehat{{\rm DM}}_{\rm MW,h} = 15$ pc cm$^{-3}$\ yields $\tau_{\rm MW,h} < 0.1$ $\mu$s at 1.4~GHz for this LoS, which is too small to explain the observed scattering. If we instead combine $\tau = 0.4$ ms at 1.4~GHz and the estimated $\widehat{{\rm DM}}_{\rm LMC} = 77$ pc cm$^{-3}$, \added{we find $\widetilde{F}_{\rm LMC} \approx 16/A_\tau$} \deleted{$\widetilde{F}_{\rm LMC} \approx 4.2/A_\tau$} pc$^{-2/3}$ km$^{-1/3}$. 
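Inverting the $\tau(\widetilde{F}, {\rm DM})$ scaling for B0540$-$69 reproduces the quoted $\widetilde{F}_{\rm LMC}$. The 48.03~ns coefficient and the geometry factor $G = 1/3$ (for a source embedded in the scattering medium) are taken from the scattering formalism cited in the text and are assumptions in this sketch:

```python
# Inverting tau ~ 48.03 ns * A_tau * G * F~ * DM^2 * nu^{-4} for PSR B0540-69.
# Coefficient and G = 1/3 (source embedded in the medium) are assumptions here.

def Ftilde_from_tau(tau_ns, dm, nu_ghz, A_tau=1.0, G=1.0 / 3.0):
    return tau_ns * nu_ghz ** 4 / (48.03 * A_tau * G * dm ** 2)

# tau = 0.4 ms = 4e5 ns at 1.4 GHz; DM_LMC ~ 77 pc/cc:
F_lmc = Ftilde_from_tau(tau_ns=4.0e5, dm=77.0, nu_ghz=1.4)
print(F_lmc)  # ~16 pc^{-2/3} km^{-1/3}, as quoted in the text
```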
More scattering measurements for LMC and SMC pulsars are needed to better constrain the fluctuation parameters of the LMC and SMC, which in turn will improve our understanding of interstellar plasma in these satellite galaxies. \section{Constraints on Intervening Haloes along Lines of Sight to FRBs}\label{sec:conc} As of this writing, two FRBs are known to pass through galaxy haloes other than those of their host galaxies and the Milky Way: FRB 181112, which passes within 30 kpc of the galaxy DES J214923.89$-$525810.43, otherwise known as FG$-$181112 \citep{2019Sci...366..231P}, and FRB 191108, which passes about 18 kpc from the center of M33 and 185 kpc from M31 \citep{2020MNRAS.tmp.2810C}. Both FRBs have measurements of $\tau$ that are somewhat constraining. \subsection{FRB 181112} \indent FRB 181112 was initially found to have $\tau < 40$ $\mu$s at 1.3 GHz by \cite{2019Sci...366..231P}; follow-up analysis of the ASKAP filterbank data and higher time resolution data for this burst yielded independent estimates of $\tau < 0.55$ ms \citep{2020MNRAS.497.1382Q} and $\tau \approx 21 \pm 1$ $\mu$s \citep{2020ApJ...891L..38C} at 1.3 GHz. We adopt the last value for our analysis, with the caveat that the authors report skepticism that the data are best fit by a pulse broadening tail following the usual frequency dependence expected from scattering in a cold plasma, and that the measured decorrelation bandwidth of the burst spectrum is in tension with the pulse broadening fit (for a full discussion, see Section 4 of \citealt{2020ApJ...891L..38C}). \indent We first place an upper limit on $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$ for FG$-$181112 by assuming that the intervening halo may contribute up to all of the observed scattering of FRB 181112. The halo density profile (Equation~\ref{eq:neh}) is re-scaled to the lens redshift ($z_{\rm i,h} = 0.36$) and evaluated at the impact parameter $R_\perp = 29$ kpc. 
\cite{2019Sci...366..231P} constrain the mass of the intervening halo to be $M_{\rm halo}^{\rm FG-181112} \approx 10^{12.3}M_\odot$. Again assuming a physical extent to the halo of $2r_{200}$ gives \deleted{an effective} \added{a} path length through the halo $L \approx 930$ kpc. The observed scattering $\tau \approx 21$ $\mu$s is the maximum amount of scattering that could be contributed by the halo, i.e., $\tau_{\rm i,h} < 21$ $\mu$s at 1.3 GHz, which yields $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h} < 13/A_\tau$ pc$^{4/3}$ km$^{-1/3}$ cm$^{-1/3}$\ using Equation~\ref{eq:taudmnuz}. \indent Assuming the halo density profile where $y_0 = \alpha = 2$ gives $\widehat{{\rm DM}}_{\rm i,h} \approx 135$ pc cm$^{-3}$\ in the frame of the intervening galaxy. This DM estimate for the halo is similar to the estimate of $122$ pc cm$^{-3}$\ from \cite{2019Sci...366..231P}, but as they note, the DM contribution is highly sensitive to the assumed density profile and could be significantly smaller if the physical extent and/or the baryonic fraction of the halo are smaller. This DM estimate yields $\widetilde{F}_{\rm i,h} < (7\times10^{-4})/A_\tau$ pc$^{-2/3}$ km$^{-1/3}$. If the DM contribution of the intervening halo is smaller, then $\widetilde{F}_{\rm i,h}$ could be up to an order of magnitude larger. The total observed DM of the FRB is broadly consistent with the estimated DM contributions of the Milky Way, host galaxy, and IGM alone \citep{2019Sci...366..231P}, so the uncertainty in ${\rm DM}_{\rm i,h}$ remains the greatest source of uncertainty in deconstructing $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$. Both estimates of $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$ and $\widetilde{F}_{\rm i,h}$ for FRB 181112 are within the upper limits for the Milky Way halo. \subsection{FRB 191108} \indent FRB 191108 passes through both the M31 and M33 haloes and has a source redshift upper limit $z\lesssim0.5$ based on DM. 
\cite{2020MNRAS.tmp.2810C} report an upper limit of $80$ $\mu$s on the intrinsic pulse width and scattering time at 1.37 GHz, but they demonstrate that this limit is likely biased by dispersion smearing. \cite{2020MNRAS.tmp.2810C} also report $25\%$ intensity modulations at a decorrelation bandwidth $\sim 40$ MHz. This decorrelation bandwidth may be attributable to scattering in the M33 halo and/or in the host galaxy (for a full discussion, see Section 3.4 of \citealt{2020MNRAS.tmp.2810C}). \indent Re-scaling our galactic halo density profile using halo masses $M_{\rm halo}^{\rm M33} \approx 5\times10^{11}M_\odot$ and $M_{\rm halo}^{\rm M31} \approx 1.5\times10^{12}M_\odot$ yields a total DM contribution from both haloes of about 110 pc cm$^{-3}$, nearly two times larger than the DM contribution estimated by \cite{2020MNRAS.tmp.2810C}, who use a generic model for the M33 and M31 haloes from \cite{2019MNRAS.485..648P} based on the same galaxy masses. We assume that the density profiles are independent; if there are dynamical interactions between the haloes then these may slightly modify the overall density distribution along the LoS, but it is unclear how turbulence in the plasma would be affected, if at all. Since the impact parameter of 18 kpc for M33 is significantly smaller than the 185 kpc for M31, M33 dominates the predicted DM contribution to FRB 191108 (with $\widehat{{\rm DM}}_{\rm i,h} \approx 90$ pc cm$^{-3}$), and therefore is more likely than M31 to also dominate the scattering. \indent If we were to assume that $\Delta \nu_{\rm d}\approx 40$ MHz (at 1.37 GHz, which translates to $\tau \approx 4$ ns) is attributable to scattering in the M33 halo, then we get $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h} \approx 0.23/A_\tau$ pc$^{4/3}$ km$^{-1/3}$ cm$^{-1/3}$\ for $z_{\rm host} = 0.5$. A smaller source redshift would increase $d_{\rm sl} d_{\rm lo}/d_{\rm so}$, resulting in an even smaller value of $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$. 
For a halo DM contribution of about 90 pc cm$^{-3}$\, this estimate of $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$ yields $\widetilde{F}_{\rm i,h} \approx (2.7\times10^{-5})/A_\tau$ pc$^{-2/3}$ km$^{-1/3}$, which is three orders of magnitude smaller than the upper limit we infer for the Milky Way halo. Using a smaller ${\rm DM}_{\rm i,h} \approx 50$ pc cm$^{-3}$\ increases $\widetilde{F}$ to $\widetilde{F}_{\rm i,h} \approx (9\times10^{-5})/A_\tau$ pc$^{-2/3}$ km$^{-1/3}$. Generally speaking, if $\widetilde{F}$ is about a factor of 10 larger, then the pulse broadening from the halo would be 10 times larger and the scintillation bandwidth 10 times smaller. On the other hand, if M31 were to contribute more significantly to the DM then $\widetilde{F}_{\rm i,h}$ would be smaller than our estimate. While there is a range of reasonable values for $\widetilde{F}_{\rm i,h}$, it appears that scattering in the M33 halo is negligible. \indent \cite{2020MNRAS.tmp.2810C} use a different approach to evaluate the scattering of FRB 191108. They estimate a scattering angle from the decorrelation bandwidth in order to obtain an estimate of the diffractive scale and rms electron density fluctuations in the halo. Making assumptions about the outer scale and the relationship between the mean density and rms density fluctuations, they find a mean electron density for the halo that is larger than expected, and conclude that if the scattering occurs in M33, then it is more likely from cool clumps of gas embedded in the hot, extended halo. \cite{2019Sci...366..231P} use a similar methodology to estimate a mean density for the halo of FG$-$181112. Rather than make an indirect estimate of $n_e$ in each halo, our analysis yields a direct constraint on $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$ from observable quantities. 
The corresponding estimates of $\widetilde{F}$ are sufficient to demonstrate that very little scattering occurs \added{along either of these FRB LoS through the} galaxy haloes. Further deconstructing $\epsilon^2$, $\zeta$, and $f$ from $\widetilde{F}$ will require more information about the outer and inner scales of turbulence, which may differ from halo to halo. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{Fsummary_horiz.pdf} \caption{Nominal values of $\widetilde{F}$ for different components of the Milky Way (MW), the Large Magellanic Cloud (LMC), and for the foreground galaxies of FRB 181112 and FRB 191108. The Galactic anti-center and inner galaxy values are calculated by integrating NE2001 through the entire disk in the directions $(l = 130^\circ, b = 0^\circ)$ and $(l = 30^\circ, b=0^\circ)$, respectively. The nominal $\widetilde{F}$ for each halo was calculated assuming the modeled halo DMs discussed in the text. The orange error bars indicate the maximum values of $\widetilde{F}$ for $A_\tau > 1/6$, with $A_\tau = 1$ for the black points. The green error bars indicate a range of $\widetilde{F}$ for a representative range of halo DMs (and $A_\tau = 1$). The green lower and upper bounds correspond to: $20<{\rm DM}_{\rm MW,h}<120$ pc cm$^{-3}$\ for the Galactic halo, $50<{\rm DM}_{\rm i,h}<140$ pc cm$^{-3}$\ for FG$-$181112, and $50<{\rm DM}_{\rm i,h}<120$ pc cm$^{-3}$\ for M33. For each halo, the blue bar indicates that the scattering constraints are upper limits. The blue bar for the thick disk indicates the root-mean-square error in the distribution of $\widetilde{F}$ for high Galactic latitude pulsars.} \label{fig:Fsummaryplot} \end{figure} \section{Discussion} We present a straightforward methodology for constraining the internal electron density fluctuations of galaxy haloes using FRB scattering measurements. 
The pulse broadening time $\tau \propto \widetilde{F} \times {\rm DM}^2$, where the fluctuation parameter $\widetilde{F}$ quantifies the amount of scattering per unit DM and is directly related to the density fluctuation statistics. We analyze two case studies, FRB 121102 and FRB 180916, and find their scattering measurements to be largely consistent with the predicted scattering from the Galactic disk and spiral arms, plus a small or negligible contribution from the Galactic halo. A likelihood analysis of their scintillation bandwidths and angular broadening places an upper limit on the product of the Galactic halo DM and fluctuation parameter $(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h} < 250/A_\tau$ pc$^{4/3}$ km$^{-1/3}$ cm$^{-1/3}$, where $A_\tau$ is the dimensionless constant relating the mean scattering time to the $1/e$ time of a scattered pulse. This estimate can be used to calculate the pulse broadening delay induced by electron density fluctuations in the halo, independent of any assumptions about the electron density distribution of the Galactic halo. The upper limit on $(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h}$ implies a maximum amount of pulse broadening from the Galactic halo $\tau_{\rm MW,h} < 12$ $\mu$s at 1 GHz. \indent While the DM contribution of the Milky Way halo to FRB DMs is still poorly constrained, we adopt a Gaussian PDF for the observed DM of the halo to estimate $\widetilde{F}_{\rm MW,h} < 0.03/A_\tau$ pc$^{-2/3}$ km$^{-1/3}$. We compare this to the fluctuation parameter of the Galactic thick disk using the distribution of $\tau/{\rm DM}^2$ for all Galactic pulsars at high Galactic latitudes with pulse broadening measurements. We measure the fluctuation parameter of the thick disk to be $\widetilde{F}_{\rm disk}^{\rm thick} = (3\pm2)\times10^{-3}$ pc$^{-2/3}$ km$^{-1/3}$, about an order of magnitude smaller than the halo upper limit. 
At high Galactic latitudes, the thick disk will only cause a scattering delay on the order of tens of nanoseconds at 1 GHz. Larger samples of FRBs and continued X-ray observations of the Galactic halo will refine our understanding of the DM contribution of the halo and may modify our current constraint on $\widetilde{F}_{\rm MW,h}$\added{, which is only based on two LoS through the halo}. While we assume for simplicity that the density distribution of the halo is spherically symmetric, $\widetilde{F}_{\rm MW,h}$ and $\DM_{\rm MW,h}$ will vary between different LoS through the halo, and an extension of our analysis to a larger sample of FRBs may yield a more constraining limit on the average fluctuation parameter of the halo. \indent Extrapolating the scattering formalism we use for the Galactic halo to intervening galaxies, we examine two examples of FRBs propagating through intervening haloes, FRB 181112 and FRB 191108. The observed upper limits on each halo's contribution to $\tau$ are $\tau_{\rm i,h}<21$ $\mu$s at 1.3 GHz for FRB 181112 \citep{2020ApJ...891L..38C} and $\tau_{\rm i,h} < 4$ ns at 1.37 GHz for FRB 191108 \citep{2020MNRAS.tmp.2810C}. We find $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h} < 13/A_\tau$ pc$^{4/3}$ km$^{-1/3}$ cm$^{-1/3}$\ for FRB 181112 and $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h} < 0.2/A_\tau$ pc$^{4/3}$ km$^{-1/3}$ cm$^{-1/3}$\ for FRB 191108. Both estimates fall within the upper limit for the Milky Way halo, and all of these haloes have small to negligible scattering contributions \added{for the FRBs considered in this paper}. \indent We also model the DM contribution of each intervening halo to find nominal constraints on $\widetilde{F}$. The values of $\widetilde{F}$ from our analysis of FRB 181112, FRB 191108, the LMC, the Galactic halo, the Galactic thick disk, and the values of $\widetilde{F}$ used in NE2001 for the Galactic anti-center and inner Galaxy are all assembled in Figure~\ref{fig:Fsummaryplot}. 
The uncertainties associated with the conversion factor $A_\tau$ and the halo DMs are also shown. \added{The values of $\widetilde{F}$ for M33, FG$-$181112, the Galactic halo, and the LMC are essentially point estimates because they are based on individual sources, while the estimates provided for the Galactic thick disk, anti-center, and inner Galaxy are based on the population of Galactic pulsars.} Broadly speaking, the $\widetilde{F}$ upper limit for the Galactic halo is similar to that of the disk and spiral arms in the anti-center direction, and is about an order of magnitude larger than the fluctuation parameter of the thick disk. The value of $\widetilde{F}$ for the LMC is similar to that of the inner Milky Way because it is based on the pulse broadening of B0540$-$69, which lies within a supernova remnant and hence within an enhanced scattering region. Our estimates of $\widetilde{F}_{\rm i,h}$ for both FRB 181112 and FRB 191108 indicate that very little scattering occurs in the haloes intervening their LoS. \indent The fluctuation parameter is directly related to the inner and outer scales of turbulence as $\widetilde{F} \propto (\zeta \epsilon^2/f)(l_{\rm o}^2l_{\rm i})^{-1/3}$, where $\zeta$ and $\epsilon$ respectively describe changes in the mean density between different gas cloudlets and the variance of the density fluctuations within cloudlets. While the inner and outer scales in the Galactic warm ionized medium (WIM) are constrained by pulsar measurements to be on the order of \replaced{$l_{\rm i} \sim 1000$}{$100 \lesssim l_{\rm i} \lesssim 1000$} km and $l_{\rm o} \gtrsim 10$ pc \replaced{\citep{1995ApJ...443..209A}}{\citep{1990ApJ...353L..29S,1995ApJ...443..209A,2004ApJ...605..759B,2009MNRAS.395.1391R}} , the corresponding scales in hot halo gas are probably much larger. Given the size of the halo, $l_{\rm o}$ could be on the order of tens of kpc. 
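Taking the proportionality constant in the scaling above to be unity (an assumption), the fiducial clumpy-CGM values discussed below ($\epsilon^2 = \zeta = 1$, $f \sim 10^{-4}$, WIM-like inner and outer scales) give $\widetilde{F}$ of a few hundred; a sketch:

```python
# Sketch of F~ = (zeta*eps^2/f) * (l_o^2 * l_i)^{-1/3}, taking the
# proportionality constant to be unity (an assumption). Units follow the
# text: l_o in pc, l_i in km, F~ in pc^{-2/3} km^{-1/3}.

def Ftilde(zeta=1.0, eps2=1.0, f=1.0, l_o_pc=10.0, l_i_km=100.0):
    return (zeta * eps2 / f) * l_o_pc ** (-2.0 / 3.0) * l_i_km ** (-1.0 / 3.0)

# Fiducial clumpy-CGM values: f ~ 1e-4, l_i ~ 100 km, l_o ~ 10 pc
F_cgm = Ftilde(f=1e-4, l_o_pc=10.0, l_i_km=100.0)
print(F_cgm)  # a few hundred, consistent with the ~500 quoted in the text
```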
The inner scale could also be larger if it is related to the proton gyroradius and the magnetic field strength is smaller in the halo than in the disk, which is probably the case given that the rotation measures of extragalactic sources tend to be larger closer to the Galactic plane \citep{2017ARA&A..55..111H}. Given that $f$, $l_{\rm o}$, and $l_{\rm i}$ are all probably larger in the halo than in the disk, we would expect $\widetilde{F}_{\rm MW,h}$ to be much smaller than $\widetilde{F}_{\rm disk}^{\rm thick}$. If we further expect the Milky Way halo to be similar to other galaxy haloes like FG$-$181112 and M33, then $\widetilde{F}_{\rm MW,h}$ would likely be less than $10^{-3}$ pc$^{-2/3}$ km$^{-1/3}$. However, our current constraints allow $\widetilde{F}$ to be larger in the halo than in the disk, which suggests that the upper limit for $\widetilde{F}_{\rm MW,h}$ is not constraining enough to make any further conclusions about $\zeta$, $\epsilon^2$, $f$, $l_{\rm o}$, and $l_{\rm i}$ in the halo. \indent \added{On the other hand, quasar absorption studies of the CGM of other galaxies (mostly at redshifts $z\gtrsim2$) indicate the presence of $\sim10^4$ K gas \citep{2015Sci...348..779H, 2016ApJS..226...25L, 2018MNRAS.473.5407M}, suggesting that the CGM is a two-phase medium consisting of warm gas clumps embedded in a hot ($10^6$ K) medium \citep{2018MNRAS.473.5407M}. Using a cloudlet model based on the simulations of \cite{2018MNRAS.473.5407M}, \cite{2019MNRAS.483..971V} demonstrate that a clumpy CGM could significantly scatter FRBs. Our empirical constraints on $\widetilde{F}$ are largely independent of any assumptions about the physical properties of the scattering medium. We assume a halo density model to estimate the DM contribution of a halo, although mapping of ionized and neutral high-velocity clouds in the Galactic CGM indicate that the DM is likely dominated by the hot gas \citep{2019MNRAS.485..648P}. 
As a composite parameter, $\widetilde{F}$ is insensitive to a broad range of assumptions about gas temperature or clumps, and could serve as an independent test of the two-phase model for the CGM. In a clumpy, cooler CGM, the inner and outer scales of turbulence would be similar to those in the WIM and $f \ll 1$, and $\widetilde{F}$ would be larger than it would be in a hot medium with a larger filling factor and scale size. Adopting fiducial values of $\epsilon^2 = \zeta = 1$, $f\sim10^{-4}$ (the value used by \cite{2019MNRAS.483..971V}), $l_{\rm i} \sim 100$ km, and $l_{\rm o} \sim 10$ pc gives $\widetilde{F} \sim 500$ pc$^{-2/3}$ km$^{-1/3}$. This estimate is orders of magnitude larger than our results for the Galactic halo and the foreground haloes of FRBs 181112 and 191108, suggesting that halo gas probed by these LoS is either not dominated by cooler clumps, or that $f$, $l_{\rm o}$, and $l_{\rm i}$ are significantly different in the clumpy CGM than otherwise assumed by \cite{2018MNRAS.473.5407M} and \cite{2019MNRAS.483..971V}. } \indent \deleted{A more stringent comparison of the hot halo and thick disk will require a stricter constraint on $\widetilde{F}$ for the halo.} \added{A more stringent comparison of hot gas in the halo and the WIM will require a larger sample of precise FRB scattering measurements.} Regardless, the nominal range of $\widetilde{F}$ constrained for the Galactic halo and the haloes intervening along the LoS to FRB 181112 and FRB 191108 demonstrates the range of internal properties that different galaxy haloes can have. A broader sample of FRB scattering measurements with intervening halo associations will expand this range and may potentially reveal an interesting diversity of galaxy haloes. \indent Many more FRBs with intervening galaxy haloes will likely be discovered in the near future.
In these cases, the amount of scattering to be expected from the intervening haloes will depend not only on the fluctuation parameter $\widetilde{F}$ and DM of the halo, but also on the relative distances between the source, halo, and observer, and the effective path length through the halo. Depending on the relative configuration, an intervening halo may amplify the amount of scattering an FRB experiences by factors of 100 or more relative to the amount of scattering expected from the Milky Way halo. However, plausibly attributing scattering to an intervening halo will still require careful consideration of the FRB host galaxy, which in many cases may be the dominant source of FRB scattering. \acknowledgements The authors thank the referee for their useful comments and acknowledge support from the National Aeronautics and Space Administration (NASA 80NSSC20K0784) and the National Science Foundation (NSF AAG-1815242), and are members of the NANOGrav Physics Frontiers Center, which is supported by the NSF award PHY-1430284. \section{Introduction} Fast radio bursts (FRBs) propagate from as far as $\sim$Gpc distances through their local environments, the interstellar medium (ISM) and circumgalactic medium (CGM) of their host galaxy, the intergalactic medium (IGM) and any intervening galaxies or galaxy haloes, the halo and ISM of the Milky Way, and finally through the interplanetary medium (IPM) of our Solar System before arriving at the detector. Along their journey they experience dispersion and multi-path propagation from free electrons along the line-of-sight (LoS). The dispersion measure ${\rm DM} = \int n_e dl/(1+z)$, where $n_e$ is the electron density and $z$ is the redshift. Most FRBs are extragalactic and have DMs much larger than the expected contribution from our Galaxy, with the single possible exception being the Galactic magnetar source SGR 1935$+$2154 \citep{2020arXiv200510828B, 2020arXiv200510324T}. 
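As a minimal numerical sketch of the dispersion measure defined above (illustrative only; the uniform-density halo values are of the order quoted later for the Galactic halo in Section~\ref{sec:halomodels}, not a measurement):

```python
# Order-of-magnitude sketch of DM = integral of n_e dl / (1 + z),
# for a uniform medium. Illustrative values: a halo with
# n_e ~ 1e-4 cm^-3 over a ~500 kpc path gives DM ~ 50 pc cm^-3.

def dispersion_measure(n_e_cm3, path_pc, z=0.0):
    """DM in pc cm^-3 for a uniform medium of density n_e (cm^-3)
    over a path length in pc, reduced by (1 + z) for a medium at
    redshift z."""
    return n_e_cm3 * path_pc / (1.0 + z)

dm_halo = dispersion_measure(1e-4, 500e3)  # 1e-4 cm^-3 over 500 kpc
print(f"DM ~ {dm_halo:.0f} pc cm^-3")      # ~50 pc cm^-3
```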
Many studies of FRB propagation have focused on the ``DM budget," constraining the relative contributions of intervening media to the total observed FRB DM, with particular attention paid to determining the DM contribution of the IGM, which in principle can be used to estimate the distance to an FRB without a redshift \citep[e.g.,][]{2015MNRAS.451.4277D,2019ApJ...886..135P}. For FRBs with redshifts the subsequent intergalactic ${\rm DM}(z)$ relationship can be used to measure the cosmic baryon density, as was first empirically demonstrated by \cite{2020Natur.581..391M}. Arguably the least constrained DM contribution to FRBs is that of their host galaxies, which has been estimated in only one case using Balmer line observations \citep[][]{2017ApJ...834L...7T}. \indent Understanding FRB propagation requires study of not just dispersion but also scattering. Bursts propagate along multiple ray paths due to electron density fluctuations, which leads to detectable chromatic effects like pulse broadening, scintillation, and angular broadening. These effects are respectively characterized by a temporal delay $\tau$, frequency bandwidth $\Delta \nu_{\rm d}$, and full width at half maximum (FWHM) of the scattered image $\theta_{\rm d}$. Scattering effects generally reduce FRB detectability, obscure burst substructure, or produce multiple images of the burst, and may contaminate emission signatures imprinted on the signal at the source \citep[e.g.,][]{2017ApJ...842...35C, 2019ApJ...876L..23H, 2020MNRAS.497.3335D}. On the other hand, scattering effects can also be used to resolve the emission region of the source \citep{2020ApJ...899L..21S} and to constrain properties of the source's local environment \citep[for a review of FRB scattering see][]{2019A&ARv..27....4P, 2019ARA&A..57..417C}. 
Since the relationship between $\tau$ and DM is known for Galactic pulsars \citep[e.g.,][]{1997MNRAS.290..260R, 2004ApJ...605..759B,2015ApJ...804...23K,2016arXiv160505890C}, this relationship provides a promising basis for estimating the DM contributions of FRB host galaxies based on measurements of $\tau$ (Cordes et al., in prep). In order to use scattering measurements for these applications, we need to assess how intervening media contribute to the observed scattering (i.e., a ``scattering budget"). To disentangle any scattering effects intrinsic to the host galaxy or intervening galaxies, we need to accurately constrain the scattering contribution of the Milky Way. \indent Broadly speaking, an FRB will encounter ionized gas in two main structural components of the Milky Way, the Galactic disk and the halo. The Galactic disk consists of both a thin disk, which has a scale height of about 100 pc and contains the spiral arms and most of the Galaxy's star formation \citep[e.g.,][]{2002astro.ph..7156C}, and the thick disk, which has a scale height of about 1.6 kpc and is dominated by the more diffuse, warm ($T\sim10^4$ K) ionized medium \citep[e.g.,][]{2020ApJ...897..124O}. The halo gas is \added{thought to be} dominated by the hot ($T\sim 10^6$ K) ionized medium, and most of this hot gas is contained within 300 kpc of the Galactic center \added{\citep[e.g.,][]{2017ApJ...835...52F}}. While the DMs and scattering measurements of Galactic pulsars and pulsars in the Magellanic Clouds predominantly trace plasma in the thin and thick disks, extragalactic sources like FRBs are also sensitive to gas in the halo. \indent In this paper we assess the contribution of galaxy haloes to the scattering of FRBs. We demonstrate how scattering measurements of FRBs can be interpreted in terms of the internal properties of the scattering media, and apply this formalism to galaxy haloes intervening along LoS to FRBs.
We first assess scattering from the Milky Way halo using two case studies: FRB 121102 and FRB 180916. \deleted{These FRBs are seen towards the Galactic anti-center and have highly precise localizations and host galaxy associations, in addition to precise scattering measurements.} \added{These FRBs have the most comprehensive, precise scattering measurements currently available, in addition to highly precise localizations and host galaxy associations}. Due to their location close to the Galactic plane, the emission from these sources samples both the outer spiral arm of the Galaxy and the Galactic thick disk, and the scattering observed for these FRBs is broadly consistent with the scattering expected from the spiral arm and disk. Only a minimal amount of scattering is allowed from the Galactic halo along these LoS, thus providing an upper limit on the halo's scattering contribution. We then extrapolate this analysis to two FRBs that pass through haloes other than those of their host galaxies and the Milky Way, FRB 181112 and FRB 191108. \indent In Section~\ref{sec:theory} we summarize the formalism relating electron density fluctuations and the observables $\tau$, DM, $\Delta \nu_{\rm d}$, and $\theta_{\rm d}$, and describe our model for the scattering contribution of the Galactic halo. A new measurement of the fluctuation parameter of the Galactic thick disk is made in Section~\ref{sec:disk} using Galactic pulsars. An overview of the scattering measurements for FRB 121102 and FRB 180916 is given in Section~\ref{sec:frbs}, including an updated constraint on the scintillation bandwidth for FRB 121102 and a comparison of the scattering predictions made by Galactic electron density models NE2001 \citep{2002astro.ph..7156C, 2003astro.ph..1598C} and YMW16 \citep{2017ApJ...835...29Y}. The FRB scattering constraints are used to place an upper limit on the fluctuation parameter of the Galactic halo in Section~\ref{sec:halo}. 
A brief comparison of the FRB-derived limit with scattering observed towards the Magellanic Clouds is given in Section~\ref{sec:MCs} and scattering constraints for intervening galaxy haloes are discussed in Section~\ref{sec:conc}. \section{Modeling}\label{sec:theory} \subsection{Electron Density Fluctuations and Scattering} \indent We characterize the relationship between electron density fluctuations and scattering of radio emission using an ionized cloudlet model in which clumps of gas in the medium have a volume filling factor $f$, internal density fluctuations with variance $\epsilon^2 = \langle (\delta n_e)^2 \rangle/{n_e}^2$, and cloud-to-cloud variations described by $\zeta = \langle {n_e}^2 \rangle/\langle {n_e} \rangle^2$, where $n_e$ is the local, volume-averaged mean electron density \citep[][]{1991Natur.354..121C, 2002astro.ph..7156C, 2003astro.ph..1598C}. We assume that internal fluctuations follow a power-law wavenumber spectrum of the form \citep[][]{cfrc87} $P_{\delta n_e}(q) = {\rm C_n^2} q^{-\beta} \exp(-(ql_{\rm i} / 2\pi)^2)$ that extends over a wavenumber range $2\pi / l_{\rm o} \le q \lesssim 2\pi / l_{\rm i}$ defined by the outer and inner scales, $l_{\rm o}, l_{\rm i}$, respectively. We adopt a wavenumber index $\beta = 11/3$ that corresponds to a Kolmogorov spectrum. Typically, $l_{\rm i} \ll l_{\rm o}$, but their magnitudes depend on the physical mechanisms driving and dissipating turbulence, which vary between different regions of the ISM. \indent Multipath propagation broadens pulses by a characteristic time $\tau$ that we relate to DM and other quantities. For a medium with homogeneous properties, the scattering time in Euclidean space is $\tau({\rm DM}, \nu) = K_\tau A_\tau \nu^{-4} \widetilde{F} G_{\rm scatt} {\rm DM}^2$ \citep[][]{2016arXiv160505890C}. 
The coefficient $K_\tau = \Gamma(7/6) c^3 r_{\rm e}^2 / 4$, where $c$ is the speed of light and $r_{\rm e}$ is the classical electron radius, while the factor $A_\tau \lesssim 1$ scales the mean delay to the $1/e$ delay that is typically estimated from pulse shapes. Because $A_\tau$ is medium dependent, we include it in relevant expressions symbolically rather than adopting a specific value. The scattering efficacy is determined by the fluctuation parameter $\widetilde{F} = \zeta\epsilon^2 / [f (l_{\rm o}^2l_{\rm i})^{1/3}]$ combined with a dimensionless geometric factor, $G_{\rm scatt}$, discussed below. \indent Evaluating with DM in pc cm$^{-3}$, the observation frequency $\nu$ in GHz, the outer scale in pc, and the inner scale in km, $\widetilde{F}$ has units of pc$^{-2/3}$ km$^{-1/3}$\ and \begin{eqnarray} \label{eq:taudmnu} \tau({\rm DM},\nu) \approx 48.03~{\rm ns} \,A_\tau \nu^{-4} \widetilde{F} G_{\rm scatt} {\rm DM}^2 . \end{eqnarray} For reference, the NE2001 model uses a similar parameter, $F = \widetilde{F} l_{\rm i}^{1/3}$ \citep[Eqs.~11--13 of][]{2002astro.ph..7156C}, that relates the scattering measure SM to {\rm DM}\ and varies substantially between different model components (thin and thick disks, spiral arms, clumps). \indent The geometric factor $G_{\rm scatt}$ in Eq.~\ref{eq:taudmnu} depends on the location of the FRB source relative to the dominant scattering medium and is calculated using the standard Euclidean weighting $(s/d)(1-s/d)$ in the integral of ${\rm C_n^2}(s)$ along the LoS. If both the observer and source are embedded in a medium with homogeneous scattering strength, $G_{\rm scatt} = 1/3$, while $G_{\rm scatt} = 1$ if the source-to-observer distance $d$ is much larger than the medium's thickness and either the source or the observer is embedded in the medium.
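Equation~\ref{eq:taudmnu} and the definition of $\widetilde{F}$ can be evaluated directly; the sketch below assumes $A_\tau = 1$ and uses illustrative input values (the clumpy-CGM fiducials are those quoted in the discussion of the two-phase CGM model):

```python
# Sketch of Eq. (eq:taudmnu) and the fluctuation parameter, assuming
# A_tau = 1. Units follow the text: DM in pc cm^-3, nu in GHz,
# l_o in pc, l_i in km; tau is returned in seconds.

def fluctuation_parameter(zeta, eps2, f, l_o_pc, l_i_km):
    """F-tilde = zeta * eps^2 / (f * (l_o^2 * l_i)^(1/3)),
    in pc^(-2/3) km^(-1/3)."""
    return zeta * eps2 / (f * (l_o_pc**2 * l_i_km) ** (1.0 / 3.0))

def tau_scatt(dm, nu_ghz, f_tilde, g_scatt, a_tau=1.0):
    """Pulse broadening time in seconds (48.03 ns coefficient)."""
    return 48.03e-9 * a_tau * nu_ghz**-4 * f_tilde * g_scatt * dm**2

# Clumpy-CGM fiducials (eps2 = zeta = 1, f ~ 1e-4, l_i ~ 100 km,
# l_o ~ 10 pc) give F-tilde ~ 500 pc^-2/3 km^-1/3:
print(fluctuation_parameter(1, 1, 1e-4, 10, 100))  # ~464

# Illustrative Galactic-like values: DM = 100, F-tilde = 3e-3,
# G_scatt = 1/3 for an embedded source, at 1 GHz:
print(tau_scatt(100, 1.0, 3e-3, 1 / 3))            # ~4.8e-7 s
```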
\indent For a thin scattering layer with thickness $L$ at distance $\delta d \gg L$ from the source or observer, $G_{\rm scatt} \simeq 2\delta d / L \gg 1$ because of the strong leverage effect. For thin-layer scattering of cosmological sources by, e.g., a galaxy disk or halo, $G_{\rm scatt} = 2d_{\rm sl} d_{\rm lo}/Ld_{\rm so}$ where $d_{\rm sl}, d_{\rm lo}$ and $d_{\rm so}$ are angular diameter distances for source to scattering layer, scattering layer to observer, and source to observer, respectively. The scattering time is also multiplied by a redshift factor $(1+ z_{\ell})^{-3}$ that takes into account time dilation and the higher frequency at which scattering occurs in the layer at redshift $z_{\ell}$, with ${\rm DM}_{\ell}$ representing the lens frame value. We thus have for distances in Gpc and $L$ in Mpc, \begin{eqnarray} \label{eq:taudmnuz} \tau({\rm DM},\nu, z) \approx 48.03~{\rm \mu s} \times \frac{A_\tau \widetilde{F}\, {\rm DM}_{\ell}^2 }{(1 + z_{\ell})^3 \nu^4} \left[\frac{2d_{\rm sl}d_{\rm lo}}{ Ld_{\rm so}} \right] . \end{eqnarray} If the layer's DM could be measured, it would be smaller by a factor $(1+z_{\ell})^{-1}$ in the observer's frame. The pulse broadening time is related to the scintillation bandwidth $\Delta \nu_{\rm d}$ through the uncertainty principle $2\pi \tau \Delta \nu_{\rm d} = C_1$, where $C_1 = 1$ for a homogeneous medium and $C_1=1.16$ for a Kolmogorov medium \citep{1998ApJ...507..846C}. Multipath propagation is also manifested as angular broadening, $\theta_{\rm d}$, defined as the FWHM of the scattered image of a point source. The angular and pulse broadening induced by a thin screen are related to the distance between the observer and screen, which will be discussed further in Section~\ref{sec:121102}. \indent Measurements of $\tau$, $\Delta \nu_{\rm d}$, and $\theta_{\rm d}$ can include both extragalactic and Galactic components. 
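The thin-layer scaling of Equation~\ref{eq:taudmnuz} can be sketched numerically; all input values below are hypothetical, chosen only to illustrate the geometric leverage of an intervening screen:

```python
# Sketch of Eq. (eq:taudmnuz), assuming A_tau = 1. Distances are
# angular diameter distances in Gpc, the layer thickness L is in
# Mpc, DM_l is the lens-frame DM in pc cm^-3; tau is in seconds.

def tau_layer(dm_l, nu_ghz, f_tilde, z_l, d_sl, d_lo, d_so, l_mpc,
              a_tau=1.0):
    g_scatt = 2.0 * d_sl * d_lo / (l_mpc * d_so)  # leverage factor
    return (48.03e-6 * a_tau * f_tilde * dm_l**2
            / ((1.0 + z_l) ** 3 * nu_ghz**4) * g_scatt)

# Hypothetical halo halfway along a 1 Gpc path, with a 0.2 Mpc
# effective thickness and DM_l = 50 pc cm^-3:
tau = tau_layer(dm_l=50, nu_ghz=1.0, f_tilde=1e-3, z_l=0.3,
                d_sl=0.5, d_lo=0.5, d_so=1.0, l_mpc=0.2)
print(f"tau ~ {tau * 1e3:.2f} ms at 1 GHz")  # ~0.14 ms
```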
We use the notation $\tau_{\rm MW, d}$, $\tau_{\rm MW, h}$, and $\tau_{\rm i,h}$ to refer to scattering contributed by the Galactic disk (excluding the halo), the Galactic halo, and intervening haloes, respectively, and an equivalent notation for DM. To convert between $\Delta \nu_{\rm d}$ and $\tau$ we adopt $C_1 = 1$. Wherever we use the notation $\tau$ and $\theta_{\rm d}$ we refer to the $1/e$ delay and FWHM of the autocorrelation function that are typically measured. \subsection{Electron Density Model for the Galactic Halo}\label{sec:halomodels} Models of the Milky Way halo based on X-ray emission and oxygen absorption lines depict a dark matter halo permeated by hot ($T\sim 10^6$ K) gas with a virial radius between 200 and 300 kpc \citep[e.g.,][]{2019MNRAS.485..648P, 2020ApJ...888..105Y, 2020MNRAS.496L.106K}. Based on these models, the average DM contribution of the hot gas halo to FRBs is about 50 pc cm$^{-3}$, which implies a mean electron density $n_e \sim 10^{-4}$ cm$^{-3}$. However, the DM contribution of the Milky Way halo is still not very well constrained. \cite{2020MNRAS.496L.106K} compare the range of $\DM_{\rm MW,h}$ predicted by various halo models with the XMM-Newton soft X-ray background \citep{2013ApJ...773...92H} and find that the range of $\DM_{\rm MW,h}$ consistent with the XMM-Newton background spans over an order of magnitude and could be as small as about 10 pc cm$^{-3}$. Using a sample of DMs from 83 FRBs and 371 pulsars, \cite{2020ApJ...895L..49P} place a conservative upper limit on $\widehat{\DM}_{\rm MW, h}<123$ pc cm$^{-3}$, with an average value of $\widehat{\DM}_{\rm MW, h} \approx 60$ pc cm$^{-3}$.
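The uncertainty-relation conversion adopted above ($2\pi \tau \Delta \nu_{\rm d} = C_1$ with $C_1 = 1$) can be checked numerically; as a consistency check, the fitted scintillation bandwidth for FRB 121102 quoted later ($\Delta \nu_{\rm d} \approx 3.8$ kHz at 1 GHz) reproduces the Galactic disk broadening time quoted in Section~\ref{sec:121102}:

```python
import math

# Convert a scintillation bandwidth to a pulse broadening time via
# 2*pi*tau*dnu_d = C1 (C1 = 1 adopted in the text).

def tau_from_bw(dnu_d_hz, c1=1.0):
    """Pulse broadening time in seconds from a scintillation
    bandwidth in Hz."""
    return c1 / (2.0 * math.pi * dnu_d_hz)

# Fitted bandwidth for FRB 121102 at 1 GHz, ~3.8 kHz, implies
# tau ~ 0.04 ms, as quoted for the Galactic disk contribution:
print(f"{tau_from_bw(3.8e3) * 1e3:.3f} ms")  # ~0.042 ms
```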
\indent Most models of the hot gas halo adopt a spherical density profile, but \cite{2020ApJ...888..105Y} and \cite{2020NatAs.tmp..214K} argue that a disk component with a scale height of about 2 kpc and a radial scale length of about 5 kpc should be added to the spherical halo based on the directional dependence of emission measure found in Suzaku and HaloSat X-ray observations \citep{2018ApJ...862...34N, 2020NatAs.tmp..214K}. In such a combined disk-spherical halo model, the disk would account for most of the observed X-ray emission attributed to the halo, while the diffuse, extended, spherical halo contains most of the baryonic mass. \cite{2020NatAs.tmp..214K} also suggest that significant, patchy variations may exist in the halo gas on scales $\sim400$ pc. The physical scales of the disk models fit to these recent X-ray observations are similar to the spatial scale of the warm ionized medium in the Galactic disk, and several orders of magnitude smaller than the spatial scales ($\sim 100$s of kpc) typical of spherical halo models. Whether such a disk component should really be attributed to the circumgalactic medium and not to the ISM of the Galactic disk is unclear. \indent \deleted{We adopt a fiducial model for the halo's density profile based on} \added{We use} the \citeauthor{2019MNRAS.485..648P} (PZ19) modified Navarro-Frenk-White (mNFW) profile \added{to model the halo density}. The mNFW profile adjusts the NFW profile's matter density cusp at the Galactic center with a more physical roll-off, giving a matter density of \begin{equation} \rho(y) = \frac{\rho_0}{y^{1-\alpha}(y_0 + y)^{2+\alpha}}, \end{equation} \deleted{where $y = K_c(r/r_{200})$} \added{where $y = K_c\times(r/r_{200})$}, $r$ is radial distance from the Galactic center, and $r_{200}$ is the virial radius within which the average density is 200 times the cosmological critical density. 
The characteristic matter density $\rho_0$ is found by dividing the total mass of the halo by the volume within the virial radius. The concentration parameter $K_c$ depends on the galaxy mass; e.g., $K_c = 7.7$ for a total Milky Way halo mass $M = 1.5 \times 10^{12} M_\odot$, and can range from $K_c = 13$ for $M = 10^{10} M_\odot$ to $K_c = 5$ for $M = 10^{14} M_\odot$ for redshifts $z<0.1$ \citep{1997ApJ...490..493N}. Like PZ19, we assume that $75\%$ of the baryonic matter in a galaxy is in the halo ($f_{\rm b} = 0.75$), and the fraction of the total matter density that is baryonic is $\Omega_{\rm b}/\Omega_{\rm m} $, the ratio of the baryonic matter density to the total matter density ($\Omega_{\rm b}/\Omega_{\rm m} = 0.16$ today). If $f_{\rm b}$ is smaller, then the electron density and the predicted scattering from a halo will be smaller. \indent The electron density profile of the halo is \deleted{$n_e(r) = f_{\rm b}(\Omega_{\rm b}/\Omega_{\rm m})\rho(r)\frac{\mu_{\rm e}}{m_{\rm p} \mu_{\rm H}}U(r)$} \added{\begin{equation}\label{eq:neh} n_e(r) \approx 0.86f_{\rm b}\times(\Omega_{\rm b}/\Omega_{\rm m})\frac{\rho(r)}{m_{\rm p}}U(r),\end{equation}} \deleted{where $\mu_{\rm e} = 1.12$ for fully ionized hydrogen and helium, $\mu_{\rm H} = 1.3$, and $m_{\rm p}$ is the proton mass. The function $U(r) = (1/2)\{1-{\rm tanh}[(r-r_c)/w]\}$ imposes a physical roll-off at a characteristic radius $r_c$ with a width set by $w$. Previous studies often implicitly assumed a physical boundary at the virial radius when integrating $n_e$ to obtain $\DM_{\rm MW,h}$, despite the fact that $\rho(r)$ extends well beyond the virial radius. Nonetheless, most of the halo gas likely lies within a few times the virial radius, so we set $r_c = 2r_{200} \approx 480$ kpc and $w = 20$ kpc. 
A comparison of our halo density profile with the PZ19 model, evaluated for the Milky Way, is shown in Figure~\ref{fig:halomod}.} \added{where $m_p$ is the proton mass and we have assumed a gas of fully ionized hydrogen and helium. The function $U(r) = (1/2)\{1-{\rm tanh}[(r-r_c)/w]\}$ imposes an explicit integration limit at a radius $r_c = 2r_{200}$ over a region of width $w = 20$ kpc so as to avoid sharp truncation of the model. Our implementation of the PZ19 model, evaluated for the Milky Way, is shown in Figure~\ref{fig:halomod}.} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{halomod.pdf} \caption{Electron density model and resulting DM contribution of the Milky Way halo, for an observer 8 kpc from the Galactic center. The modified NFW profile of \cite{2019MNRAS.485..648P} for $\alpha = y_0 = 2$ is shown in orange, and our \replaced{adaptation}{implementation} of the PZ19 model is shown in blue. The maximum DM contribution of the halo predicted by the model is 63 pc cm$^{-3}$, similar to the predictions of other halo density models for lines of sight to the Galactic anti-center \citep[e.g.,][]{2020ApJ...895L..49P, 2020ApJ...888..105Y,2020MNRAS.496L.106K}.} \label{fig:halomod} \end{figure} \subsection{Scattering from the Galactic Halo} \indent Generally speaking, scattering from the Galactic halo traces the same plasma that gives rise to dispersion, weighted by the fluctuation parameter $\widetilde{F}$. To constrain $\widetilde{F}$ from measurements of $\tau$ and $\theta_{\rm d}$, we approximate the total scattering as a sum of two components: one from the disk and spiral arms of the Milky Way, which we denote with the subscript ($\rm MW,d$), and one from the Galactic halo, which we denote with the subscript ($\rm MW,h$). 
The total $\tau$ and $\theta_{\rm d}$ predicted by the model are therefore \begin{equation} \tau_{\rm MW}^{\rm total} = \tau_{\rm MW,d} + \tau_{\rm MW,h} \end{equation} and \begin{equation} \theta_{\rm d,MW}^{\rm total} = \sqrt{\theta_{\rm MW,d}^2 + \theta_{\rm MW,h}^2}. \end{equation} We adopt the NE2001 predictions for the $\rm MW$ components and model the halo components using Equations~\ref{eq:taudmnu} and~\ref{eq:ds}. The composite parameter $A_\tau(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h}$ is constrained by maximizing the likelihood function \begin{equation}\label{eq:Like} \mathcal{L}((\widetilde{F} \times {\rm DM}^2)_{\rm MW,h} | \tau_j) = \prod_j N(\tau_{j,\rm MW}^{\rm total} - \tau_j^{\rm obs}, \sigma_{\tau,j}^2) \end{equation} using measurements $\tau^{\rm obs}$ and $\theta_{\rm d}^{\rm obs}$. The variance of the likelihood function $\sigma_{\tau,j}^2$ is taken from the measurement uncertainties, where we implicitly assume that measurements of $\tau$ and $\theta_{\rm d}$ are sufficiently approximated by Gaussian PDFs. An estimate of $\widetilde{F}_{\rm MW,h}$ can then be obtained by assuming a given halo density profile or constructing a PDF for $\DM_{\rm MW,h}$, and adopting a value for $A_\tau$. \section{The Milky Way Halo} \indent In order to determine how the Milky Way halo (and in turn other galaxy haloes) contributes to the scattering of FRBs, we must constrain the scattering contribution of the Galactic disk. In the following sections, we first determine the amount of scattering that can occur in the thick disk of the Galaxy using the distribution of pulsar scattering measurements and DMs at high Galactic latitudes. We then compare the scattering measurements of FRB 121102 and FRB 180916 to the scattering expected from the Galactic disk using NE2001, and explain discrepancies between the scattering predictions of the NE2001 and YMW16 Galactic disk models.
Finally, in Section~\ref{sec:halo}, we constrain the scattering contribution of the Galactic halo, followed by discussion of scattering constraints from pulsars in the Magellanic Clouds. \subsection{Scattering from the Thick Disk}\label{sec:disk} Most currently known FRBs lie at high Galactic latitudes, and their LoS through the Galaxy predominantly sample the thick disk, which has a mean density at mid-plane of 0.015 cm$^{-3}$ and a scale height $\approx 1.6$ kpc \citep{2020ApJ...897..124O}. The distribution of $\tau/{\rm DM}^2$ for Galactic pulsars with measurements of $\tau$ \citep[][and references therein]{2016arXiv160505890C} and {\rm DM}\ and other parameters from \citet{2005AJ....129.1993M}\footnote{\url{http://www.atnf.csiro.au/research/pulsar/psrcat}} yields a direct constraint on $\widetilde{F}$: $\tau/{\rm DM}^2 \approx (16 \hspace{0.05in} {\rm ns})A_\tau\widetilde{F}$, for $\nu = 1$ GHz and $G_{\rm scatt} = 1/3$ for sources embedded in the scattering medium. The distribution of $\widetilde{F}$ for all Galactic pulsars, assuming $A_\tau\approx1$, is shown in Figure~\ref{fig:taudmb}. For the pulsars with $|b| > 20^\circ$, the mean value of $\tau/{\rm DM}^2$ from a logarithmic fit is $(5.3^{+5.0}_{-3.3})\times10^{-8}$ ms pc$^{-2}$ cm$^{6}$, which yields $\widetilde{F} \approx (3\pm2)\times10^{-3}$ pc$^{-2/3}$ km$^{-1/3}$. The value of $\widetilde{F}$ based on high-latitude pulsars is consistent with the related $F = l_{\rm i}^{1/3} \widetilde{F}$ factor used in the NE2001 model for scattering in the thick disk. \indent A structural enhancement to radio scattering for LoS with $|b| < 20^\circ$ is reflected in the distribution of $\tau/{\rm DM}^2$ shown in Figure~\ref{fig:taudmb}, which shows an increase in $\widetilde{F}$ of multiple orders of magnitude at low latitudes, with the largest values of $\widetilde{F}$ dominating LoS to the Galactic center.
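As a consistency check (a sketch assuming $A_\tau = 1$), the high-latitude mean of $\tau/{\rm DM}^2$ can be inverted for $\widetilde{F}$ and combined with the thick-disk DM scaling of \cite{2020ApJ...897..124O} quoted below, ${\rm DM} \approx 23.5\,{\rm csc}(|b|)$ pc cm$^{-3}$:

```python
import math

# Sketch assuming A_tau = 1: invert tau/DM^2 ~ (16 ns) * F-tilde
# (G_scatt = 1/3, nu = 1 GHz) for the high-latitude pulsar mean,
# then predict the thick-disk broadening using DM ~ 23.5 csc|b|.

tau_over_dm2_ms = 5.3e-8                   # ms pc^-2 cm^6 (mean)
f_tilde = (tau_over_dm2_ms * 1e6) / 16.0   # convert ms -> ns
print(f"F-tilde ~ {f_tilde:.1e}")          # ~3.3e-3 pc^-2/3 km^-1/3

def tau_disk_ns(b_deg):
    """Thick-disk pulse broadening (ns) at 1 GHz for latitude b."""
    dm = 23.5 / math.sin(math.radians(b_deg))
    return 16.0 * f_tilde * dm**2

print(f"{tau_disk_ns(20):.0f} ns")  # ~250 ns (~0.25 us)
print(f"{tau_disk_ns(90):.0f} ns")  # ~29 ns
```

Both predictions match the thick-disk broadening range quoted in the text.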
This latitudinal and longitudinal dependence of $\widetilde{F}$ is directly responsible for the ``hockey-stick" relation between $\tau$ and DM for Galactic pulsars, in which high-DM pulsars lying close to the Galactic plane and towards the Galactic center have a much steeper dependence on ${\rm DM}$ than pulsars lying high above the Galactic plane or towards the Galactic anti-center \citep[e.g.,][]{2004ApJ...605..759B, 2015ApJ...804...23K, 2016arXiv160505890C}. The implications of the directional dependence of $\widetilde{F}$ for FRB LoS are discussed further in Section~\ref{sec:ymw16}. \indent For the many FRBs in high Galactic latitude directions, the Galactic disk has a virtually undetectable contribution to the observed pulse broadening. The DM contribution of the thick disk is about ($23.5 \times {\rm csc}(|b|)$) pc cm$^{-3}$, which varies negligibly with longitude for $|b|>20^\circ$ \citep{2020ApJ...897..124O}. The pulse broadening at 1 GHz expected from the thick disk therefore ranges from $\tau < 0.25$ $\mu$s at $|b|=20^\circ$ to $\tau < 29$ ns at $|b| = 90^\circ$. As discussed in the following section, scattering from the Galactic thin disk and spiral arms increases dramatically for FRB LoS close to the Galactic plane. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{taudm.pdf} \caption{The distribution of $\tau/{\rm DM}^2$ (which is directly proportional to $\widetilde{F}$) versus Galactic latitude for all Galactic pulsars, with $\tau$ in ms referenced to 1 GHz and DM in pc cm$^{-3}$. The average value and root-mean-square of the distribution for all pulsars with $|b| > 20^\circ$ are shown in blue.
Pulsars closer to the Galactic center ($|b|<10^\circ$, $|l|<40^\circ$) are \replaced{highlighted in black}{shown as orange crosses}.} \label{fig:taudmb} \end{figure} \subsection{Scattering Constraints for Two FRB Case Studies}\label{sec:frbs} Unlike Galactic pulsars, for which the scintillation bandwidth and pulse broadening both result from the same electrons and conform to the uncertainty relation $2\pi\Delta\nu_{\rm d}\tau_{\rm d} = C_1$, some FRBs indicate that two \added{scattering} media are involved. \deleted{The scintillation bandwidth is caused by} \added{In these cases, the scintillation bandwidth is consistent with diffractive interstellar scintillation caused by} foreground Galactic turbulence while \added{the} pulse broadening \deleted{is extragalactic in origin, most likely from the host galaxy} \added{also has contributions from an extragalactic scattering medium \citep{ 2015Natur.528..523M, 2018ApJ...863....2G, 2019ARA&A..57..417C}}. Here we analyze the Galactic scintillations of two FRBs \added{with highly precise scattering measurements} in order to place constraints on any scattering in the Galactic halo. \subsubsection{FRB 121102}\label{sec:121102} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{FRB121102_DISSBW_binned.pdf} \caption{Scintillation bandwidth vs. radio frequency for FRB 121102. The blue line and shaded region designates the least-squares fit and errors of ${\rm log}(\Delta \nu_{\rm d})$ vs. $\rm log(\nu)$. The green and orange lines are anchored to the lowest frequency point with the indicated frequency dependences. The corresponding pulse broadening time is shown on the right-hand axis assuming $\tau = 1/(2\pi\Delta \nu_{\rm d})$. 
Data points are from \cite{2019ApJ...876L..23H}, \cite{2018Natur.553..182M}, \cite{2018ApJ...863..150S}, and \cite{2018ApJ...863....2G}.} \label{fig:dissbw} \end{figure} FRB 121102 currently has the most comprehensive set of scattering constraints on an FRB source so far, with scintillation bandwidths $\Delta \nu_{\rm d} = 58.1 \pm 2.3$ kHz at 1.65 GHz \citep{2019ApJ...876L..23H}, 5 MHz at 4.5 GHz \citep{2018Natur.553..182M}, $6.4\pm1.4$ MHz at 4.85 GHz \citep{2018ApJ...863..150S}, and $10-50$ MHz between 5 and 8 GHz \citep{2018ApJ...863....2G}. All of these scintillation bandwidth measurements are assembled in Figure~\ref{fig:dissbw}, along with a power-law fit to the data using linear least squares that gives a mean value $\Delta \nu_{\rm d} = 3.8^{+2.5}_{-1.5}$ kHz at 1 GHz. \indent FRB 121102 also has a pulse broadening time upper limit of $\tau < 9.6$ ms at 500 MHz \citep{2019ApJ...882L..18J}, and angular broadening $\theta_{\rm d} = 2\pm 1$ mas at 1.7 GHz and $\theta_{\rm d} \sim 0.4-0.5$ mas at 5 GHz, which was measured in the \cite{2017ApJ...834L...8M} high-resolution VLBI study of the FRB and its persistent radio counterpart. These angular diameters are consistent with those reported in \cite{2017Natur.541...58C} and with the NE2001 prediction. The scattering measurements are shown in Table~\ref{tab:measures} and are referenced to 1 GHz assuming a $\tau \propto \nu^{-4}$ frequency scaling. \indent The NE2001 angular broadening and scintillation bandwidth predictions for FRB 121102 are broadly consistent with the corresponding empirical constraints (see Table~\ref{tab:measures}). In NE2001, the scattering for this LoS is dominated by an outer spiral arm located 2 kpc away and by the thick disk, which extends out to 17 kpc from the Galactic center \citep{2002astro.ph..7156C}. The ${\rm C_n^2}$, electron density, and DM predicted by NE2001 along the LoS to FRB 121102 are shown in Figure~\ref{fig:ne2001}. 
Modeling of the anti-center direction in NE2001 is \added{independent of our analysis and is} based on DM and scattering measurements of \added{Galactic} pulsars in the same general direction and upper bounds on the angular scattering of extragalactic sources. The \added{NE2001} model parameters were constrained through a likelihood analysis of these measurements, which revealed that the fluctuation parameter was smaller for LoS that probe the outer Galaxy compared to the inner Galaxy \citep{1998ApJ...497..238L}. This result required that the thick disk component of NE2001 have a smaller fluctuation parameter compared to the thin disk and spiral arm components that are relevant to LoS through the inner Galaxy. \indent Since the measured $\Delta \nu_{\rm d}$ and $\theta_{\rm d}$ for FRB 121102 are broadly consistent with the predicted amount of scattering from the Galactic disk, we use $\Delta \nu_{\rm d}$ and $\theta_{\rm d}$ to estimate the effective distance to the dominant scattering material. For thin-screen scattering of a source located at a distance $d_{\rm so}$ from the observer, the scattering diameter $\theta_{\rm s}$ is related to the observed angular broadening by \begin{equation} \theta_{\rm d} \sim \theta_{\rm s} ( d_{\rm sl} / d_{\rm so}) = \theta_{\rm s} \bigg(1 - \frac{d_{\rm lo}}{d_{\rm so}}\bigg), \end{equation} where $d_{\rm sl}$ is the source-to-screen distance and $d_{\rm lo}$ is the screen-to-observer distance. The scattering diameter is related to the pulse broadening delay by $\tau \approx A_\tau d_{\rm so}(d_{\rm sl}/d_{\rm so})(1-d_{\rm sl}/d_{\rm so})\theta_{\rm s}^2/[8 ({\rm ln}\,2)c]$ \citep{2019ARA&A..57..417C}. For a thin screen near the observer and an extragalactic source, $d_{\rm lo} \ll d_{\rm sl}$, giving $\theta_{\rm d} \approx \theta_{\rm s}$ and \begin{equation}\label{eq:ds} \theta_{\rm d} \approx \left(\frac{4({\rm ln}2) A_\tau C_1 c}{ \pi \Delta \nu_{\rm d} d_{\rm lo}} \right)^{1/2}.
\end{equation} The scattering screen location can thus be directly estimated from measurements of $\theta_{\rm d}$ and $\Delta \nu_{\rm d}$. \indent Assuming that the same Galactic scattering material gives rise to both the angular broadening and the scintillation of FRB 121102, Equation~\ref{eq:ds} implies the scattering material has an effective distance $\hat{d}_{\rm lo} \approx 2.3$ kpc from the observer (assuming $A_\tau \approx 1$), which is consistent with the distance to the spiral arm, as shown in Figure~\ref{fig:ne2001}. A numerical joint-probability analysis of the uncertainties in $\Delta \nu_{\rm d}$ and $\theta_{\rm d}$, assuming both quantities follow normal distributions, allows the screen to be as close as 1.6 kpc or as far as 5.5 kpc (corresponding to the 15$\%$ and 85$\%$ confidence intervals). Figure~\ref{fig:thetad} shows a comparison of the relationship between $\theta_{\rm d}$ and $d_{\rm lo}$ for a thin screen and the measured $\theta_{\rm d}$ from \cite{2017ApJ...834L...8M}. Given that there is no known HII region along the LoS to FRB~121102, the most likely effective screen is in fact the spiral arm. \indent The scintillation bandwidth implies a pulse broadening contribution from the Milky Way disk and spiral arms $\tau_{\rm MW,d} \approx 0.04 \pm 0.02$ ms at 1 GHz. The upper limit on $\tau$ measured by \cite{2019ApJ...882L..18J} is an order of magnitude larger than the $\tau_{\rm MW,d}$ inferred from $\Delta \nu_{\rm d}$. Any additional scattering beyond the Galactic contribution is more likely from the host galaxy due to the lack of intervening galaxies along the LoS, and the small amount of scattering expected from the IGM \citep{2013ApJ...776..125M, 2020arXiv201108519Z}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{DM_vs_d_NE2001p_l_b_dm_174.95_-0.22_188.33012280240598.pdf} \caption{Galactic DM, electron density $n_e$, and ${\rm C_n^2}$ contribution predicted by NE2001 for FRB 121102. 
The maximum DM (excluding the Galactic halo) is 188 pc cm$^{-3}$. The sharp changes in $n_e$ and ${\rm C_n^2}$ between 0 and 0.3 kpc are due to structure in the local ISM. The shaded grey region indicates the distance to the scattering screen derived from a numerical joint-probability analysis of the measured scattering constraints for FRB~121102; see Figure~\ref{fig:thetad}.} \label{fig:ne2001} \end{figure} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{FRB121102_thetad_screen_constraint.pdf} \caption{Scattering diameter vs. effective distance. The blue band indicates the angular diameter $\theta_{\rm d} = 2\pm1$ mas (1$\sigma$ errors) from \cite{2017ApJ...834L...8M}. The green band is the predicted angular diameter for a thin-screen that matches the least-square fit to scintillation bandwidth measurements and the uncertainties at $\pm1\sigma$. All values are expressed at 1.7 GHz. A numerical joint-probability estimate constraining the overlap of the green and blue regions gives a screen distance $\hat{d}_{\rm lo} \approx 2.3^{+3.2}_{-0.7}$ kpc. 
} \label{fig:thetad} \end{figure} \begin{deluxetable*}{c c c c c c }\label{tab:measures} \tablecaption{DM and Scattering Referenced to 1 GHz of FRB 121102 and FRB 180916} \tablewidth{0pt} \tablehead{\multicolumn{6}{c}{Measurements} \\ \hline \colhead{FRB} & \colhead{DM (pc cm$^{-3}$)} & \colhead{$\tau$ (ms)$^{(1, 2)}$} & $\Delta \nu_{\rm d}$ (kHz)$^{(3,4)}$ & \colhead{$\theta_{\rm d}$ (mas)$^{(5)}$} & \colhead{$\hat{d}_{\rm lo}$ (kpc)}} \startdata 121102 & $557$ & $<0.6$ & $3.8^{+2.5}_{-1.5}$ & $5.8 \pm 2.9$ & $2.3^{+3.2}_{-0.7}$ \\ 180916 & $349$ & $<0.026$ & $7.1\pm1.5$& \nodata & \nodata \\ \hline \hline \multicolumn{6}{c}{Models}\\ \hline FRB & DM (pc cm$^{-3}$) & $\tau$ (ms) & $\Delta \nu_{\rm d}$ (kHz) & $\theta_{\rm d}$ (mas) & $d_{\rm lo}$ (kpc) \\ \hline \multicolumn{6}{c}{NE2001}\\ \hline 121102 & $188$ & $0.016$ & $11$ & $6$ & $2.05$ \\ 180916 & $199$ & $0.015$ & $12$ & $5$ & $2.5$\\ \hline \multicolumn{6}{c}{YMW16}\\ \hline 121102 & $287$ & $0.84$ & $0.2$ & \nodata & \nodata \\ 180916 & $243$ & $0.42$ & $0.4$ & \nodata & \nodata \\ \enddata \tablecomments{The top of the table shows the observed DM and scattering for FRB 121102 and FRB 180916. Since the scintillation and angular broadening measurements are broadly consistent with Galactic foreground predictions by NE2001, we emphasize that they are Galactic, whereas the pulse broadening may have an extragalactic contribution from the host galaxy. The scattering measurements are referenced to 1 GHz assuming $\tau \propto \nu^{-4}$ unless otherwise noted.\\ \indent The bottom of the table shows the asymptotic DM and scattering predictions of NE2001 and YMW16. NE2001 adopts $C_1 = 1.16$ to convert between $\Delta \nu_{\rm d}$ and $\tau$, so we also use this value to calculate $\Delta \nu_{\rm d}$ from $\tau$ for YMW16. 
YMW16 predictions were calculated in the IGM mode, but we only report the Galactic component of DM predicted by the model for comparison with NE2001.\\ \indent References: (1) \cite{2019ApJ...882L..18J}; (2) \cite{2020ApJ...896L..41C}; (3) this work (see Figure~\ref{fig:dissbw}; referenced to 1 GHz using a best-fit power law); (4) \cite{2020Natur.577..190M}; (5) \cite{2017ApJ...834L...8M}.} \end{deluxetable*} \subsubsection{FRB 180916}\label{sec:180916} The scattering constraints for FRB 180916 consist of a scintillation bandwidth $\Delta \nu_{\rm d} = 59 \pm 13$ kHz at 1.7 GHz \citep{2020Natur.577..190M} and a pulse broadening upper limit $\tau < 1.7$ ms at 350 MHz \citep{2020ApJ...896L..41C}. The $\Delta \nu_{\rm d}$ measurement and the $\tau$ upper limit are entirely consistent with each other, so we again use $\Delta \nu_{\rm d}$ and the inferred $\tau_{\rm MW,d}$ for the rest of the analysis, given the higher precision of the scintillation measurement. Based on $\Delta \nu_{\rm d}$, $\tau_{\rm MW,d} = 0.023\pm0.005$ ms at 1 GHz. As with FRB 121102, the NE2001 scattering predictions for this LoS are consistent with the empirical constraints to within the model's uncertainty, suggesting that the Galactic halo has a small ($\lesssim \mu$s level) contribution to the observed scattering. \subsubsection{Comparison with YMW16 Scattering Predictions}\label{sec:ymw16} The YMW16 model significantly overestimates the scattering of FRB 121102 and FRB 180916. The DM and scattering predictions for these FRBs are shown in Table~\ref{tab:measures}. Evaluating YMW16 for FRB 121102 using the \texttt{IGM} mode gives \replaced{$\log(\tau)=-3.012$}{$\log(\tau)=-3.074$} with $\tau$ in seconds, implying $\tau = 0.84$~ms at 1~GHz, corresponding to $\Delta \nu_{\rm d} \approx 0.2$~kHz, about 50 times smaller than the NE2001 value.
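The quoted conversions between $\tau$ and $\Delta \nu_{\rm d}$ follow $\Delta \nu_{\rm d} = C_1/2\pi\tau$ with $C_1 = 1.16$ (the value adopted in Table~\ref{tab:measures}); a minimal numerical check:

```python
import math

C1 = 1.16  # dimensionless constant relating tau to scintillation bandwidth

def dnu_hz(tau_s):
    """Scintillation bandwidth (Hz) from pulse broadening time (s)."""
    return C1 / (2.0 * math.pi * tau_s)

dnu_ymw16 = dnu_hz(0.84e-3)    # YMW16:  tau = 0.84 ms at 1 GHz -> ~220 Hz
dnu_ne2001 = dnu_hz(0.016e-3)  # NE2001: tau = 0.016 ms at 1 GHz -> ~11.5 kHz
print(dnu_ymw16, dnu_ne2001, dnu_ne2001 / dnu_ymw16)
```

The ratio of the two bandwidths is roughly 50, matching the factor quoted in the text.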
Compared to the measured scattering, the nominal output of the YMW16 model overestimates the scattering \added{toward FRB 121102} by a factor of 28 to 35 (depending on whether a $\nu^{-4}$ or $\nu^{-4.4}$ scaling is used). Moreover, combining the measured $\theta_{\rm d}$ with the YMW16 estimate for $\tau$ implies a scattering screen distance $\sim 500$ kpc, beyond any local Galactic structure that could reasonably account for the scattering. For FRB 180916, YMW16 also overestimates the scattering, predicting $\tau = 0.42$~ms at 1~GHz, which implies a scintillation bandwidth of 0.4~kHz at 1~GHz. \indent The discrepancies between the observed scattering and the YMW16 predictions are due to several important factors. Unlike NE2001, YMW16 does not explicitly model electron density fluctuations. Instead, it calculates DM for a given LoS and then uses the $\tau-{\rm DM}$ relation based on Galactic pulsars to predict $\tau$. In using the $\tau-{\rm DM}$ relation, the YMW16 model incorrectly adjusts for the scattering of extragalactic bursts. The waves from extragalactic bursts are essentially planar when they reach the Galaxy, \deleted{which means that the Galactic plasma will scatter them more than it would the spherical waves from a Galactic pulsar at a distance of a few scale lengths of the electron density} \added{which means they are scattered from wider angles than diverging spherical waves from a Galactic pulsar would be}. The differences between plane and spherical wave scattering are discussed in detail for FRBs in \cite{2016arXiv160505890C}. The YMW16 model accounts for this difference by reducing the Galactic prediction of $\tau$ by a factor of two, when \added{geometric weighting of the mean-square scattering angle implies that} the \added{Galactic} scattering \added{prediction} should really be larger by a factor of three \added{to apply to extragalactic FRBs (see Eq.~10 in \citealt{2016arXiv160505890C})}.
This implies that values for $\tau$ in the output of YMW16 should be multiplied by a factor of six, which means that the model's overestimation of the scattering is really by a factor of 170 to 208 when one only considers the correction for planar wave scattering. \indent YMW16 may also overestimate the Galactic contribution to DMs of extragalactic sources viewed in the Galactic anti-center direction (and perhaps other low-latitude directions) because its predicted DMs significantly exceed the observed DM distribution of the Galactic pulsars on which NE2001 is based. In YMW16, the dominant DM contributions to extragalactic sources in this direction are from the thick disk and from the spiral arms exterior to the solar circle. Together these yield DM values of 287 pc cm$^{-3}$\ and 243 pc cm$^{-3}$\ for FRB~121102 and FRB~180916, respectively. These DM predictions are over $50\%$ and $20\%$ larger than the NE2001 values for FRB~121102 and FRB~180916, respectively, which may be due to overestimation of the densities or characteristic length scales of the outer spiral arm and thick disk components. \indent The primary cause for YMW16's scattering over-prediction is that the part of the pulsar-derived $\tau-{\rm DM}$ relation that applies to large values of DM should not be used for directions toward the Galactic anti-center. The $\tau-{\rm DM}$ relation has the empirical form \citep{2016arXiv160505890C} \begin{equation} \tau = (2.98\times10^{-7}\ {\rm ms})\, {\rm DM}^{1.4}(1+3.55\times10^{-5}{\rm DM}^{3.1}) \label{eq:taud_vs_DM} \end{equation} based on a fit to pulsar scattering data available through 2016. Similar fits were previously done by \cite{1997MNRAS.290..260R}, \cite{2004ApJ...605..759B}, and \cite{2015ApJ...804...23K}. It scales as ${\rm DM}^{1.4}$ for ${\rm DM} \lesssim 30$ pc cm$^{-3}$ and as ${\rm DM}^{4.5}$ for ${\rm DM} \gtrsim 100$ pc cm$^{-3}$.
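The limiting slopes of Equation~\ref{eq:taud_vs_DM} can be verified numerically; the sketch below evaluates the local logarithmic slope $d\log\tau/d\log{\rm DM}$ in the two regimes:

```python
import math

def tau_dm(dm):
    """Pulse broadening time (ms at 1 GHz) from the empirical tau-DM relation."""
    return 2.98e-7 * dm**1.4 * (1.0 + 3.55e-5 * dm**3.1)

def log_slope(dm, eps=1.01):
    """Local logarithmic slope d(log tau)/d(log DM), evaluated numerically."""
    return math.log(tau_dm(dm * eps) / tau_dm(dm)) / math.log(eps)

print(log_slope(10.0))   # ~1.5: shallow, near-DM^1.4 regime at low DM
print(log_slope(500.0))  # ~4.5: steep regime at high DM
```

The transition between regimes is gradual, which is why the quoted asymptotic exponents apply well below $\sim$30 pc cm$^{-3}$\ and well above $\sim$100 pc cm$^{-3}$.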
\indent The YMW16 model uses the \cite{2015ApJ...804...23K} model for 327~MHz scattering times scaled to 1~GHz by a factor $(0.327)^4$, giving $\tau = (4.1\times10^{-8}\ {\rm ms})\ {\rm DM}^{2.2}(1+0.00194\times{\rm DM}^{2})$. This scaling law is in reasonable agreement with the expression in Equation~\ref{eq:taud_vs_DM} except at very low DM values. That scaling law adopts the \cite{1997MNRAS.290..260R} approach of fixing the leading DM exponent to 2.2, which is based on the assumption that the relatively local ISM is uniform. This assumption is imperfect given our knowledge of the Local Bubble and other shells and voids in the local ISM. The steep $\tau\propto {\rm DM}^{4.2 \ {\rm to }\ 4.5}$ scaling for Galactic pulsars is from LoS that probe the inner Galaxy, where the larger star formation rate leads to a higher supernova rate that evidently affects the turbulence in the HII gas and results in a larger $\widetilde{F}$, as shown in Section~\ref{sec:disk}. \indent If the YMW16 model were to use instead the shallow part of the $\tau-{\rm DM}$ relation, $\tau\propto {\rm DM}^{2.2 \ {\rm to} \ 1.4}$, which is more typical of LoS through the outer Galaxy, its scattering time estimates would be smaller by a factor of $\sim 1.94\times 10^{-3} \times (287)^{2} \sim 160$ or $\sim 3.55\times 10^{-5} \times (287)^{3.1} \sim 1500$, depending on which $\tau-{\rm DM}$ relation is used (and using FRB 121102 as an example). These overestimation factors could be considerably smaller if smaller DM values were used. While there is a considerable range of values for the overestimation factor based on the uncertainties in the empirical scaling law, it is reasonable to conclude that the high-DM part of the $\tau-{\rm DM}$ relation should not be used for the anti-center FRBs.
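The overestimation factors just quoted are the high-DM clump terms of the two scaling laws, evaluated at the YMW16 DM prediction for FRB 121102 (a quick arithmetic check):

```python
dm = 287.0  # YMW16 DM prediction for FRB 121102 (pc cm^-3)

# High-DM clump terms of the two scaling laws, i.e. the enhancement of the
# steep branch over the shallow DM^2.2 and DM^1.4 parts of each relation:
f_krishnakumar = 1.94e-3 * dm**2    # from the scaled Krishnakumar law
f_empirical = 3.55e-5 * dm**3.1     # from the empirical tau-DM relation
print(f_krishnakumar, f_empirical)  # ~160 and ~1500
```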
\subsection{Fluctuation Parameter of the Galactic Halo}\label{sec:halo} The measurements of $\tau$ and $\theta_{\rm d}$ for FRBs 121102 and 180916, combined with the scattering predictions of NE2001, yield a maximum likelihood estimate for the pulse broadening contribution of the Milky Way halo (see Equation~\ref{eq:Like}). \deleted{The likelihood function is shown in Figure~6. The likelihood function is positive for values of $(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h}$ extending to zero because the measured scattering from both FRBs is close enough to the NE2001 predictions that the halo could have a negligible scattering contribution.} The $95\%$ upper confidence interval yields $(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h} < 250/A_\tau$ pc$^{4/3}$ km$^{-1/3}$ cm$^{-1/3}$. The maximum amount of pulse broadening expected from the Galactic halo is therefore $\tau_{\rm MW,h} < 12$ $\mu$s at 1 GHz, which is comparable to the scattering expected from the Galactic disk for LoS towards the Galactic anti-center or at higher Galactic latitudes. \indent Based on \deleted{a meta-analysis of} the broad range of $\DM_{\rm MW,h}$ currently consistent with the empirical and modeled constraints (see Section~\ref{sec:halomodels}), we construct a Gaussian probability density function (PDF) for $\widehat{{\rm DM}}_{\rm MW,h}$ with a mean of 60 pc cm$^{-3}$\ and $\sigma_{\rm \widehat{DM}} = 18$ pc cm$^{-3}$ \deleted{, which is shown in Figure~6}. \added{Combining} this PDF \added{with the maximum likelihood estimate for $(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h}$} yields an upper limit $\widetilde{F}_{\rm MW,h} < 0.03/A_\tau$ pc$^{-2/3}$ km$^{-1/3}$. While $A_\tau$ is probably about 1, if $A_\tau$ is as small as $1/6$ then $\widetilde{F}_{\rm MW,h}$ could be up to 6 times larger. 
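The conversion from $(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h}$ to a pulse broadening time uses the numerical coefficient of Equation~\ref{eq:taudmnuz} from the earlier section, $\tau \approx 48.03\ {\rm ns} \times A_\tau \widetilde{F} G\, {\rm DM}^2 \nu^{-4}$ ($\nu$ in GHz); the sketch below assumes $A_\tau = G = 1$ and $\nu = 1$ GHz:

```python
coeff_ns = 48.03  # ns; coefficient of tau ~ A_tau * F * G * DM^2 * nu^-4
fdm2_max = 250.0  # 95% upper limit on (F x DM^2)_MW,h for A_tau = 1

tau_halo_us = coeff_ns * fdm2_max * 1e-3  # ns -> microseconds at 1 GHz, G = 1
print(tau_halo_us)  # ~12 microseconds, the quoted halo upper limit
```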
\indent \added{This estimate of $\widetilde{F}_{\rm MW,h}$ is based on just two LoS towards the Galactic anti-center, and it is unclear how much $\widetilde{F}_{\rm MW,h}$ will vary between different LoS through the halo. Given that sources viewed through the inner Galaxy (near $b = 0^\circ$) are more heavily scattered, it is unlikely that estimates of $\widetilde{F}_{\rm MW,h}$ will be obtainable for LoS that intersect the halo through the inner Galaxy. However, FRB 121102, FRB 180916, and most of the FRBs detected at higher latitudes do not show evidence of any intense scattering regions that might be associated with the halo \citep[e.g.,][]{2020MNRAS.497.1382Q}, suggesting that extremely scattered FRBs would be outliers and not representative of the Galactic halo's large-scale properties. All FRBs are ultimately viewed through not only the Galactic halo but also the haloes of their host galaxies and, in some cases, the haloes of intervening galaxies. Given the observed variations in scattering between different LoS through the Milky Way, it appears most likely that the most heavily scattered FRBs will be viewed through scattering regions within galaxy disks rather than haloes, and extrapolating our analysis to a larger sample of FRBs will require determining whether observed variations in $\widetilde{F}$ are due to variations between galaxy haloes, disks, or the sources' local environments.} In the following sections, we compare the Milky Way halo scattering contribution \added{inferred from FRBs 121102 and 180916} to scattering observed from the Magellanic Clouds and galaxy haloes intervening along LoS to FRBs. \subsection{Constraints from Pulsars in the Magellanic Clouds}\label{sec:MCs} At distances of 50 to 60 kpc and latitudes around $-30^\circ$, pulsar radio emission from the Large and Small Magellanic Clouds (LMC/SMC) mostly samples the Galactic thick disk and a much smaller path length through the Galactic halo than FRBs.
So far, twenty-three radio pulsars have been found in the LMC and seven in the SMC \citep[e.g.,][]{1991MNRAS.249..654M,2001ApJ...553..367C,2006ApJ...649..235M, 2013MNRAS.433..138R, 2019MNRAS.487.4332T}. Very few scattering measurements exist for these LoS. PSR B0540$-$69 in the LMC has a DM of 146.5 pc cm$^{-3}$\ and was measured to have a pulse broadening time $\tau = 0.4$ ms at 1.4 GHz \citep{2003ApJ...590L..95J}. The Galactic contributions to DM and scattering predicted by NE2001 towards this pulsar are ${\rm DM}_{\rm NE2001} = 55$ pc cm$^{-3}$\ and $\tau_{\rm NE2001} = 0.3\times10^{-3}$ ms at 1 GHz. Based on this DM estimate, the pulsar DM receives a contribution of about $92$ pc cm$^{-3}$\ from the LMC and the Galactic halo. The lowest DMs of pulsars in the LMC have been used to estimate the DM contribution of the halo along these LoS to be about 15 pc cm$^{-3}$\ \citep{2020ApJ...888..105Y}, which suggests that the LMC contributes about 77 pc cm$^{-3}$\ to the DM of B0540$-$69. \indent The scattering observed towards B0540$-$69 is far in excess of the predicted scattering from the Galactic disk. Since B0540$-$69 not only lies within the LMC but also within a supernova remnant, it is reasonable to assume that most of the scattering is contributed by material within the LMC. Using the upper limit $\widetilde{F}_{\rm MW,h} < 0.03/A_\tau$ pc$^{-2/3}$ km$^{-1/3}$\ and $\widehat{{\rm DM}}_{\rm MW,h} = 15$ pc cm$^{-3}$\ yields $\tau_{\rm MW,h} < 0.1$ $\mu$s at 1.4~GHz for this LoS, which is too small to explain the observed scattering. If we instead combine $\tau = 0.4$ ms at 1.4~GHz and the estimated $\widehat{{\rm DM}}_{\rm LMC} = 77$ pc cm$^{-3}$, \added{we find $\widetilde{F}_{\rm LMC} \approx 16/A_\tau$} \deleted{$\widetilde{F}_{\rm LMC} \approx 4.2/A_\tau$} pc$^{-2/3}$ km$^{-1/3}$.
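Both numbers for the B0540$-$69 LoS follow from the same $\tau(\widetilde{F}\,{\rm DM}^2)$ scaling with coefficient 48.03 ns at 1 GHz (Equation~\ref{eq:taudmnuz}); the geometric factor $G = 1/3$ used below for the LMC term, appropriate when the scattering material is distributed along the path rather than concentrated in a thin screen, is an assumption of this sketch:

```python
coeff_ns = 48.03  # tau ~ coeff * A_tau * F * G * DM^2 / nu^4, nu in GHz
nu = 1.4          # observing frequency (GHz)

# Galactic halo toward B0540-69: F < 0.03, DM_MW,h ~ 15 pc cm^-3, G = 1
tau_halo_ns = coeff_ns * 0.03 * 1.0 * 15.0**2 / nu**4
print(tau_halo_ns)  # ~84 ns, i.e. < 0.1 microseconds, as stated

# LMC: invert for F with tau = 0.4 ms at 1.4 GHz, DM_LMC ~ 77 pc cm^-3,
# and an assumed geometric factor G = 1/3 for an extended medium
tau_lmc_ns = 0.4e-3 * 1e9  # 0.4 ms expressed in ns
F_lmc = tau_lmc_ns * nu**4 / (coeff_ns * (1.0 / 3.0) * 77.0**2)
print(F_lmc)  # ~16 pc^(-2/3) km^(-1/3)
```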
More scattering measurements for LMC and SMC pulsars are needed to better constrain the fluctuation parameters of the LMC and SMC, which in turn will improve our understanding of interstellar plasma in these satellite galaxies. \section{Constraints on Intervening Haloes along Lines of Sight to FRBs}\label{sec:conc} As of this writing, two FRBs are known to pass through galaxy haloes other than those of their host galaxies and the Milky Way: FRB 181112, which passes within 30 kpc of the galaxy DES J214923.89$-$525810.43, otherwise known as FG$-$181112 \citep{2019Sci...366..231P}, and FRB 191108, which passes about 18 kpc from the center of M33 and 185 kpc from M31 \citep{2020MNRAS.tmp.2810C}. Both FRBs have measurements of $\tau$ that are somewhat constraining. \subsection{FRB 181112} \indent FRB 181112 was initially found to have $\tau < 40$ $\mu$s at 1.3 GHz by \cite{2019Sci...366..231P}; follow-up analysis of the ASKAP filterbank data and higher time resolution data for this burst yielded independent estimates of $\tau < 0.55$ ms \citep{2020MNRAS.497.1382Q} and $\tau \approx 21 \pm 1$ $\mu$s \citep{2020ApJ...891L..38C} at 1.3 GHz. We adopt the last value for our analysis, with the caveat that the authors report skepticism that the data are best fit by a pulse broadening tail following the usual frequency dependence expected from scattering in a cold plasma, and that the measured decorrelation bandwidth of the burst spectrum is in tension with the pulse broadening fit (for a full discussion, see Section 4 of \citealt{2020ApJ...891L..38C}). \indent We first place an upper limit on $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$ for FG$-$181112 by assuming that the intervening halo may contribute up to all of the observed scattering of FRB 181112. The halo density profile (Equation~\ref{eq:neh}) is re-scaled to the lens redshift ($z_{\rm i,h} = 0.36$) and evaluated at the impact parameter $R_\perp = 29$ kpc.
\cite{2019Sci...366..231P} constrain the mass of the intervening halo to be $M_{\rm halo}^{\rm FG-181112} \approx 10^{12.3}M_\odot$. Again assuming a physical extent to the halo of $2r_{200}$ gives \deleted{an effective} \added{a} path length through the halo $L \approx 930$ kpc. The observed scattering $\tau \approx 21$ $\mu$s is the maximum amount of scattering that could be contributed by the halo, i.e., $\tau_{\rm i,h} < 21$ $\mu$s at 1.3 GHz, which yields $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h} < 13/A_\tau$ pc$^{4/3}$ km$^{-1/3}$ cm$^{-1/3}$\ using Equation~\ref{eq:taudmnuz}. \indent Assuming the halo density profile where $y_0 = \alpha = 2$ gives $\widehat{{\rm DM}}_{\rm i,h} \approx 135$ pc cm$^{-3}$\ in the frame of the intervening galaxy. This DM estimate for the halo is similar to the estimate of $122$ pc cm$^{-3}$\ from \cite{2019Sci...366..231P}, but as they note, the DM contribution is highly sensitive to the assumed density profile and could be significantly smaller if the physical extent and/or the baryonic fraction of the halo are smaller. This DM estimate yields $\widetilde{F}_{\rm i,h} < (7\times10^{-4})/A_\tau$ pc$^{-2/3}$ km$^{-1/3}$. If the DM contribution of the intervening halo is smaller, then $\widetilde{F}_{\rm i,h}$ could be up to an order of magnitude larger. The total observed DM of the FRB is broadly consistent with the estimated DM contributions of the Milky Way, host galaxy, and IGM alone \citep{2019Sci...366..231P}, so the uncertainty in ${\rm DM}_{\rm i,h}$ remains the greatest source of uncertainty in deconstructing $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$. Both estimates of $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$ and $\widetilde{F}_{\rm i,h}$ for FRB 181112 are within the upper limits for the Milky Way halo. \subsection{FRB 191108} \indent FRB 191108 passes through both the M31 and M33 haloes and has a source redshift upper limit $z\lesssim0.5$ based on DM. 
\cite{2020MNRAS.tmp.2810C} report an upper limit of $80$ $\mu$s on the intrinsic pulse width and scattering time at 1.37 GHz, but they demonstrate that this limit is likely biased by dispersion smearing. \cite{2020MNRAS.tmp.2810C} also report $25\%$ intensity modulations at a decorrelation bandwidth $\sim 40$ MHz. This decorrelation bandwidth may be attributable to scattering in the M33 halo and/or in the host galaxy (for a full discussion, see Section 3.4 of \citealt{2020MNRAS.tmp.2810C}). \indent Re-scaling our galactic halo density profile using halo masses $M_{\rm halo}^{\rm M33} \approx 5\times10^{11}M_\odot$ and $M_{\rm halo}^{\rm M31} \approx 1.5\times10^{12}M_\odot$ yields a total DM contribution from both haloes of about 110 pc cm$^{-3}$, nearly two times larger than the DM contribution estimated by \cite{2020MNRAS.tmp.2810C}, who use a generic model for the M33 and M31 haloes from \cite{2019MNRAS.485..648P} based on the same galaxy masses. We assume that the density profiles are independent; if there are dynamical interactions between the haloes then these may slightly modify the overall density distribution along the LoS, but it is unclear how turbulence in the plasma would be affected, if at all. Since the impact parameter of 18 kpc for M33 is significantly smaller than the 185 kpc for M31, M33 dominates the predicted DM contribution to FRB 191108 (with $\widehat{{\rm DM}}_{\rm i,h} \approx 90$ pc cm$^{-3}$), and therefore is more likely than M31 to also dominate the scattering. \indent If we were to assume that $\Delta \nu_{\rm d}\approx 40$ MHz (at 1.37 GHz, which translates to $\tau \approx 4$ ns) is attributable to scattering in the M33 halo, then we get $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h} \approx 0.23/A_\tau$ pc$^{4/3}$ km$^{-1/3}$ cm$^{-1/3}$\ for $z_{\rm host} = 0.5$. A smaller source redshift would increase $d_{\rm sl} d_{\rm lo}/d_{\rm so}$, resulting in an even smaller value of $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$. 
For a halo DM contribution of about 90 pc cm$^{-3}$\, this estimate of $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$ yields $\widetilde{F}_{\rm i,h} \approx (2.7\times10^{-5})/A_\tau$ pc$^{-2/3}$ km$^{-1/3}$, which is three orders of magnitude smaller than the upper limit we infer for the Milky Way halo. Using a smaller ${\rm DM}_{\rm i,h} \approx 50$ pc cm$^{-3}$\ increases $\widetilde{F}$ to $\widetilde{F}_{\rm i,h} \approx (9\times10^{-5})/A_\tau$ pc$^{-2/3}$ km$^{-1/3}$. Generally speaking, if $\widetilde{F}$ is about a factor of 10 larger, then the pulse broadening from the halo would be 10 times larger and the scintillation bandwidth 10 times smaller. On the other hand, if M31 were to contribute more significantly to the DM then $\widetilde{F}_{\rm i,h}$ would be smaller than our estimate. While there is a range of reasonable values for $\widetilde{F}_{\rm i,h}$, it appears that scattering in the M33 halo is negligible. \indent \cite{2020MNRAS.tmp.2810C} use a different approach to evaluate the scattering of FRB 191108. They estimate a scattering angle from the decorrelation bandwidth in order to obtain an estimate of the diffractive scale and rms electron density fluctuations in the halo. Making assumptions about the outer scale and the relationship between the mean density and rms density fluctuations, they find a mean electron density for the halo that is larger than expected, and conclude that if the scattering occurs in M33, then it is more likely from cool clumps of gas embedded in the hot, extended halo. \cite{2019Sci...366..231P} use a similar methodology to estimate a mean density for the halo of FG$-$181112. Rather than make an indirect estimate of $n_e$ in each halo, our analysis yields a direct constraint on $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$ from observable quantities. 
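The estimates of $\widetilde{F}_{\rm i,h}$ quoted here are simple divisions of the constrained product by the square of the assumed halo DM; the small differences from the quoted values reflect rounding of $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h}$:

```python
fdm2 = 0.23  # (F x DM^2)_i,h for the M33 halo, pc^(4/3) km^(-1/3) cm^(-1/3)

F_dm90 = fdm2 / 90.0**2  # modeled halo DM ~ 90 pc cm^-3 -> ~2.8e-5
F_dm50 = fdm2 / 50.0**2  # smaller halo DM ~ 50 pc cm^-3 -> ~9.2e-5
print(F_dm90, F_dm50)    # in pc^(-2/3) km^(-1/3)
```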
The corresponding estimates of $\widetilde{F}$ are sufficient to demonstrate that very little scattering occurs \added{along either of these FRB LoS through the} galaxy haloes. Further deconstructing $\epsilon^2$, $\zeta$, and $f$ from $\widetilde{F}$ will require more information about the outer and inner scales of turbulence, which may differ from halo to halo. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{Fsummary_horiz.pdf} \caption{Nominal values of $\widetilde{F}$ for different components of the Milky Way (MW), the Large Magellanic Cloud (LMC), and for the foreground galaxies of FRB 181112 and FRB 191108. The Galactic anti-center and inner galaxy values are calculated by integrating NE2001 through the entire disk in the directions $(l = 130^\circ, b = 0^\circ)$ and $(l = 30^\circ, b=0^\circ)$, respectively. The nominal $\widetilde{F}$ for each halo was calculated assuming the modeled halo DMs discussed in the text. The orange error bars indicate the maximum values of $\widetilde{F}$ for $A_\tau > 1/6$, with $A_\tau = 1$ for the black points. The green error bars indicate a range of $\widetilde{F}$ for a representative range of halo DMs (and $A_\tau = 1$). The green lower and upper bounds correspond to: $20<{\rm DM}_{\rm MW,h}<120$ pc cm$^{-3}$\ for the Galactic halo, $50<{\rm DM}_{\rm i,h}<140$ pc cm$^{-3}$\ for FG$-$181112, and $50<{\rm DM}_{\rm i,h}<120$ pc cm$^{-3}$\ for M33. For each halo, the blue bar indicates that the scattering constraints are upper limits. The blue bar for the thick disk indicates the root-mean-square error in the distribution of $\widetilde{F}$ for high Galactic latitude pulsars.} \label{fig:Fsummaryplot} \end{figure} \section{Discussion} We present a straightforward methodology for constraining the internal electron density fluctuations of galaxy haloes using FRB scattering measurements. 
The pulse broadening time $\tau \propto \widetilde{F} \times {\rm DM}^2$, where the fluctuation parameter $\widetilde{F}$ quantifies the amount of scattering per unit DM and is directly related to the density fluctuation statistics. We analyze two case studies, FRB 121102 and FRB 180916, and find their scattering measurements to be largely consistent with the predicted scattering from the Galactic disk and spiral arms, plus a small or negligible contribution from the Galactic halo. A likelihood analysis of their scintillation bandwidths and angular broadening places an upper limit on the product of the Galactic halo DM and fluctuation parameter $(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h} < 250/A_\tau$ pc$^{4/3}$ km$^{-1/3}$ cm$^{-1/3}$, where $A_\tau$ is the dimensionless constant relating the mean scattering time to the $1/e$ time of a scattered pulse. This estimate can be used to calculate the pulse broadening delay induced by electron density fluctuations in the halo, independent of any assumptions about the electron density distribution of the Galactic halo. The upper limit on $(\widetilde{F} \times {\rm DM}^2)_{\rm MW,h}$ implies a maximum amount of pulse broadening from the Galactic halo $\tau_{\rm MW,h} < 12$ $\mu$s at 1 GHz. \indent While the DM contribution of the Milky Way halo to FRB DMs is still poorly constrained, we adopt a Gaussian PDF for the observed DM of the halo to estimate $\widetilde{F}_{\rm MW,h} < 0.03/A_\tau$ pc$^{-2/3}$ km$^{-1/3}$. We compare this to the fluctuation parameter of the Galactic thick disk using the distribution of $\tau/{\rm DM}^2$ for all Galactic pulsars at high Galactic latitudes with pulse broadening measurements. We measure the fluctuation parameter of the thick disk to be $\widetilde{F}_{\rm disk}^{\rm thick} = (3\pm2)\times10^{-3}$ pc$^{-2/3}$ km$^{-1/3}$, about an order of magnitude smaller than the halo upper limit. 
At high Galactic latitudes, the thick disk will only cause a scattering delay on the order of tens of nanoseconds at 1 GHz. Larger samples of FRBs and continued X-ray observations of the Galactic halo will refine our understanding of the DM contribution of the halo and may modify our current constraint on $\widetilde{F}_{\rm MW,h}$\added{, which is only based on two LoS through the halo}. While we assume for simplicity that the density distribution of the halo is spherically symmetric, $\widetilde{F}_{\rm MW,h}$ and $\DM_{\rm MW,h}$ will vary between different LoS through the halo, and an extension of our analysis to a larger sample of FRBs may yield a more constraining limit on the average fluctuation parameter of the halo. \indent Extrapolating the scattering formalism we use for the Galactic halo to intervening galaxies, we examine two examples of FRBs propagating through intervening haloes, FRB 181112 and FRB 191108. The observed upper limits on each halo's contribution to $\tau$ are $\tau_{\rm i,h}<21$ $\mu$s at 1.3 GHz for FRB 181112 \citep{2020ApJ...891L..38C} and $\tau_{\rm i,h} < 4$ ns at 1.37 GHz for FRB 191108 \citep{2020MNRAS.tmp.2810C}. We find $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h} < 13/A_\tau$ pc$^{4/3}$ km$^{-1/3}$ cm$^{-1/3}$\ for FRB 181112 and $(\widetilde{F} \times {\rm DM}^2)_{\rm i,h} < 0.2/A_\tau$ pc$^{4/3}$ km$^{-1/3}$ cm$^{-1/3}$\ for FRB 191108. Both estimates fall within the upper limit for the Milky Way halo, and all of these haloes have small to negligible scattering contributions \added{for the FRBs considered in this paper}. \indent We also model the DM contribution of each intervening halo to find nominal constraints on $\widetilde{F}$. The values of $\widetilde{F}$ from our analysis of FRB 181112, FRB 191108, the LMC, the Galactic halo, the Galactic thick disk, and the values of $\widetilde{F}$ used in NE2001 for the Galactic anti-center and inner Galaxy are all assembled in Figure~\ref{fig:Fsummaryplot}. 
The uncertainties associated with the conversion factor $A_\tau$ and the halo DMs are also shown. \added{The values of $\widetilde{F}$ for M33, FG$-$181112, the Galactic halo, and the LMC are essentially point estimates because they are based on individual sources, while the estimates provided for the Galactic thick disk, anti-center, and inner Galaxy are based on the population of Galactic pulsars.} Broadly speaking, the $\widetilde{F}$ upper limit for the Galactic halo is similar to that of the disk and spiral arms in the anti-center direction, and is about an order of magnitude larger than the fluctuation parameter of the thick disk. The value of $\widetilde{F}$ for the LMC is similar to that of the inner Milky Way because it is based on the pulse broadening of B0540$-$69, which lies within a supernova remnant and hence within an enhanced scattering region. Our estimates of $\widetilde{F}_{\rm i,h}$ for both FRB 181112 and FRB 191108 indicate that very little scattering occurs in the haloes intervening their LoS. \indent The fluctuation parameter is directly related to the inner and outer scales of turbulence as $\widetilde{F} \propto (\zeta \epsilon^2/f)(l_{\rm o}^2l_{\rm i})^{-1/3}$, where $\zeta$ and $\epsilon$ respectively describe changes in the mean density between different gas cloudlets and the variance of the density fluctuations within cloudlets. While the inner and outer scales in the Galactic warm ionized medium (WIM) are constrained by pulsar measurements to be on the order of \replaced{$l_{\rm i} \sim 1000$}{$100 \lesssim l_{\rm i} \lesssim 1000$} km and $l_{\rm o} \gtrsim 10$ pc \replaced{\citep{1995ApJ...443..209A}}{\citep{1990ApJ...353L..29S,1995ApJ...443..209A,2004ApJ...605..759B,2009MNRAS.395.1391R}} , the corresponding scales in hot halo gas are probably much larger. Given the size of the halo, $l_{\rm o}$ could be on the order of tens of kpc. 
The inner scale could also be larger if it is related to the proton gyroradius and the magnetic field strength is smaller in the halo than in the disk, which is probably the case given that the rotation measures of extragalactic sources tend to be larger closer to the Galactic plane \citep{2017ARA&A..55..111H}. Given that $f$, $l_{\rm o}$, and $l_{\rm i}$ are all probably larger in the halo than in the disk, we would expect $\widetilde{F}_{\rm MW,h}$ to be much smaller than $\widetilde{F}_{\rm disk}^{\rm thick}$. If we further expect the Milky Way halo to be similar to other galaxy haloes like FG$-$181112 and M33, then $\widetilde{F}_{\rm MW,h}$ would likely be less than $10^{-3}$ pc$^{-2/3}$ km$^{-1/3}$. However, our current constraints allow $\widetilde{F}$ to be larger in the halo than in the disk, which suggests that the upper limit for $\widetilde{F}_{\rm MW,h}$ is not constraining enough to make any further conclusions about $\zeta$, $\epsilon^2$, $f$, $l_{\rm o}$, and $l_{\rm i}$ in the halo. \indent \added{On the other hand, quasar absorption studies of the CGM of other galaxies (mostly at redshifts $z\gtrsim2$) indicate the presence of $\sim10^4$ K gas \citep{2015Sci...348..779H, 2016ApJS..226...25L, 2018MNRAS.473.5407M}, suggesting that the CGM is a two-phase medium consisting of warm gas clumps embedded in a hot ($10^6$ K) medium \citep{2018MNRAS.473.5407M}. Using a cloudlet model based on the simulations of \cite{2018MNRAS.473.5407M}, \cite{2019MNRAS.483..971V} demonstrate that a clumpy CGM could significantly scatter FRBs. Our empirical constraints on $\widetilde{F}$ are largely independent of any assumptions about the physical properties of the scattering medium. We assume a halo density model to estimate the DM contribution of a halo, although mapping of ionized and neutral high-velocity clouds in the Galactic CGM indicates that the DM is likely dominated by the hot gas \citep{2019MNRAS.485..648P}.
As a composite parameter, $\widetilde{F}$ is insensitive to a broad range of assumptions about gas temperature or clumps, and could serve as an independent test of the two-phase model for the CGM. In a clumpy, cooler CGM, the inner and outer scales of turbulence would be similar to those in the WIM and $f \ll 1$, and $\widetilde{F}$ would be larger than it would be in a hot medium with a larger filling factor and scale size. Adopting fiducial values of $\epsilon^2 = \zeta = 1$, $f\sim10^{-4}$ (the value used by \cite{2019MNRAS.483..971V}), $l_{\rm i} \sim 100$ km, and $l_{\rm o} \sim 10$ pc gives $\widetilde{F} \sim 500$ pc$^{-2/3}$ km$^{-1/3}$. This estimate is orders of magnitude larger than our results for the Galactic halo and the foreground haloes of FRBs 181112 and 191108, suggesting that halo gas probed by these LoS is either not dominated by cooler clumps, or that $f$, $l_{\rm o}$, and $l_{\rm i}$ are significantly different in the clumpy CGM than otherwise assumed by \cite{2018MNRAS.473.5407M} and \cite{2019MNRAS.483..971V}. } \indent \deleted{A more stringent comparison of the hot halo and thick disk will require a stricter constraint on $\widetilde{F}$ for the halo.} \added{A more stringent comparison of hot gas in the halo and the WIM will require a larger sample of precise FRB scattering measurements.} Regardless, the nominal range of $\widetilde{F}$ constrained for the Galactic halo and the haloes intervening FRB 181112 and FRB 191108 demonstrates the range of internal properties that different galaxy haloes can have. A broader sample of FRB scattering measurements with intervening halo associations will expand this range and may potentially reveal an interesting diversity of galaxy haloes. \indent Many more FRBs with intervening galaxy haloes will likely be discovered in the near future.
In these cases, the amount of scattering to be expected from the intervening haloes will depend not only on the fluctuation parameter $\widetilde{F}$ and DM of the halo, but also on the relative distances between the source, halo, and observer, and the effective path length through the halo. Depending on the relative configuration, an intervening halo may amplify the amount of scattering an FRB experiences by factors of 100 or more relative to the amount of scattering expected from the Milky Way halo. However, plausibly attributing scattering to an intervening halo will still require careful consideration of the FRB host galaxy, which in many cases may be the dominant source of FRB scattering. \acknowledgements The authors thank the referee for their useful comments and acknowledge support from the National Aeronautics and Space Administration (NASA 80NSSC20K0784) and the National Science Foundation (NSF AAG-1815242), and are members of the NANOGrav Physics Frontiers Center, which is supported by the NSF award PHY-1430284.
\section{Introduction} The second order correlation function $g^{(2)}$ is one of the most important characteristic functions of a light source~\cite{Plenio:1998ul}. It is the major feature distinguishing non-classical, anti-bunching light sources from classical thermal ones. For a thermal light source, the coherence time and the linewidth can also be derived directly from this function. Acquiring an accurate $g^{(2)}$ plays an important role in various newly developed research fields, such as quantum information \cite{NeergaardNielsen:2006hl} and fluorescence correlation spectroscopy on quantum dots \cite{Michler:2000wv}, cold atomic clouds \cite{Nakayama:2010vb, Das:2010vq}, and single molecules \cite{DeMartini:1996vh,Fleury:2000uf}. In most experiments, the Hanbury-Brown-Twiss (HBT) interferometer, whose simplified scheme is shown in Fig. \ref{figure:simple}, is implemented to measure $K(\tau)$, the probability distribution of photon pairs with time interval $\tau$. The second order correlation function $g^{(2)}(\tau)$ can be approximated by $K(\tau)$ if the coherence time of the source is relatively short and the photon flux is low. For light sources with a very long coherence time, e.g., the fluorescence of ultracold atoms \cite{Du:2008do}, the direct conversion to $g^{(2)}(\tau)$ is problematic because of the wave-packet overlap of consecutive photons. Reducing the photon flux to avoid the overlap would give a more accurate result of the direct $g^{(2)}$ conversion; however, the background noise level limits the achievable minimum photon flux. This dilemma constrains the applications of the method that directly takes the photon-pair time intervals as $g^{(2)}(\tau)$. In this paper, we discuss the high order correction of the conversion from the photon-pair time intervals $K(\tau)$ to $g^{(2)}(\tau)$ using direct numerical convolution, and we experimentally examine this method with a pseudo-thermal light source.
A novel random phase modulation method is also demonstrated to measure the coherence time of a highly coherent source. It converts the source from the coherent state to a chaotic state, so that the second order correlation function can be used to characterize the (first order correlation) coherence time. Meanwhile, to overcome the photon-flux dilemma mentioned above, this method shortens the coherence time so as to allow a higher photon flux above the background noise. To measure the second order correlation function using the HBT interferometer, two related physical quantities, $K(\tau)$ and $J(\tau)$, should be discussed, as formulated in \cite{Reynaud:1983tq}. $K(\tau)$, the experimentally measured result of the HBT interferometer, is the histogram of consecutive photon pairs with a time interval $\tau$. $J(\tau)$ is the histogram of photons detected at $t=\tau$ given a photon at $t=0$. The second order correlation function $g^{(2)}(\tau)$ is proportional to $J(\tau)$, as: \begin{equation} J(\tau)=\bar{I}g^{(2)}(\tau) \end{equation} where $\bar{I}$ is the average photon detection rate per time bin of the light source, which normalizes the histogram to a probability density. The time resolution (the bin size of the histogram) must be shorter than the coherence time of the light source under study. Since $J(\tau)$ includes the counts of photon pairs that are not necessarily consecutive but merely separated by a time interval $\tau$, $J(\tau)$ is an infinite convolution power series of $K(\tau)$ \cite{Reynaud:1983tq}, where $K(\tau)$ is the histogram of the time intervals between two consecutive signals received from the "START" and the "STOP" detectors (see Fig. \ref{figure:simple}). Thus, \begin{equation}{\label{eq:series}} J(\tau) = K(\tau)+K(\tau)*K(\tau)+\cdots=\sum^{\infty}_{n=1}K_n(\tau) \end{equation} where $K_n(\tau)$ denotes the $n$-fold self-convolution of $K(\tau)$.
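As a concrete illustration of Eq.~(\ref{eq:series}), the truncated series can be evaluated by a simple recursion over numerical convolutions (a minimal sketch in Python; the function name, array layout, and truncation order are our own choices, not the code used in the experiment):

```python
import numpy as np

# Sketch of Eq. (eq:series): J(tau) = sum_n K_n(tau), where K_n is the
# n-fold self-convolution of the consecutive-pair histogram K(tau).
def j_from_k(K, m_max=6):
    K = np.asarray(K, dtype=float)
    J = K.copy()          # first-order term, K_1 = K
    Kn = K.copy()
    for _ in range(2, m_max + 1):
        # next self-convolution order, truncated to the measured range
        Kn = np.convolve(Kn, K)[: len(K)]
        J += Kn
    return J
```

Each pass of the loop adds the next order of the series; the choice of the truncation order is discussed below.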
The Laplace-transformed $\tilde{J}(s)$ can then be derived from $\tilde{K}(s)$ as: \begin{equation} \tilde{J}(s) = \frac{\tilde{K}(s)}{1-\tilde{K}(s)} \end{equation} where $\tilde{J}(s)$ and $\tilde{K}(s)$ are the Laplace transforms of $J(\tau)$ and $K(\tau)$, respectively. Using this equation to calculate $g^{(2)}(\tau)$ requires an accurate and efficient numerical Laplace transformation and its inversion, which is a very sensitive and challenging task for numerical analysis. Therefore, it is rarely implemented directly in $g^{(2)}$ measurement experiments. In some experiments, $K(\tau)\sim J(\tau)$ is taken as an approximation by ignoring the high order terms, or the Laplace transformation is replaced with a fast Fourier transformation (FFT)~\cite{Fleury:2000uf,Fleury:2001uw}. In the following sections, we examine a direct numerical convolution algorithm to derive $g^{(2)}(\tau)$ from $K(\tau)$. The direct numerical convolution, which takes the high order correction into account, can be computed using a simple recursion loop. The accuracy of this approach was tested using a pseudo-thermal light source and shows a significant improvement, particularly for light sources with a long coherence time. \begin{figure}[h] \begin{center} \includegraphics[width=0.5\linewidth]{simple.pdf} \end{center} \caption{\label{figure:simple}The simplified HBT experimental scheme. A clock (counter) is triggered by the photon received from the "START" detector and stopped by the subsequent photon received from the "STOP" detector. The time intervals are measured by the clock and recorded as a histogram. } \end{figure} \section{High-order correction of $g^{(2)}(\tau)$}\label{sec:high-order} $J(\tau)$, proportional to $g^{(2)}(\tau)$, is a self-convolution power series of $K(\tau)$, as in Eq.~(\ref{eq:series}). The order of self-convolution needed to reach a satisfactory accuracy depends on the convergence of the self-convolution power series of $K(\tau)$.
Fourier transforming Eq.~(\ref{eq:series}) converts the successive convolutions of $K(\tau)$ into powers of $\hat{K}(\omega)$ in the frequency domain: \begin{equation} \bar{I}\hat{g}^{(2)}(\omega)=\hat{J}(\omega)=\sum^{\infty}_{n=1}\hat{K}^{n}(\omega)=\frac{\hat{K}(\omega)}{1-\hat{K}(\omega)} \end{equation} This is valid if $0< \left|\hat{K}(\omega)\right|<1$. A smaller $\hat{K}(\omega)$ gives a faster convergence; the rate of convergence can be quantified through: \begin{equation}{\label{eq:k-omega}} \hat{K}(\omega)=\frac{\bar{I}\hat{g}^{(2)}(\omega)}{1+\bar{I}\hat{g}^{(2)}(\omega)} \end{equation} For a chaotic thermal light source, we have \begin{equation}\label{eq:1to2} g^{(2)}(\tau) = 1+|g^{(1)}(\tau)|^{2} \end{equation} Then, for $\omega\neq0$, the Fourier transform of $K(\tau)$ is: \begin{equation}{\label{eq:k-f-ig}} \hat{K}(\omega)=\frac{\bar{I}\|\hat{g}^{(1)}(\omega)\|^{2}}{1+\bar{I}\|\hat{g}^{(1)}(\omega)\|^{2}} \end{equation} The convergence is dominated by the product $ \bar{I}\|\hat{g}^{(1)}(\omega)\|^{2}$, which is the power spectrum of the light source. A Lorentzian chaotic light source model was used to further investigate the convergence of $\hat{K}(\omega)$. The $g^{(1)}(\tau)$ of a homogeneously broadened light source can be modeled as \cite{Loudor}: \begin{equation}{\label{eq:l-model}} g^{(1)}(\tau)=e^{-\frac{\tau}{\tau_{c}}} \end{equation} The power spectrum of such a light source is then: \begin{equation}{\label{eq:l-model-f}} \|\hat{g}^{(1)}(\omega)\|^2=\frac{ \tau_c}{1+({\omega \tau_c/2})^2} \end{equation} By Eq.~(\ref{eq:k-f-ig}), \begin{equation} \hat{K}(\omega)=\frac{\bar{I} \tau_c}{1+({\omega \tau_c}/{2})^2+{\bar{I} \tau_c}} \end{equation} This shows that $\hat{K}\sim0$ in the high-frequency region, and the high order correction is only important in the low-frequency region within the linewidth of the source, $\omega \tau_c<1$.
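The Lorentzian Fourier pair above can be checked numerically (a quick Python sketch with an arbitrary $\tau_c=1$; this is an independent consistency check, not part of the data analysis):

```python
import numpy as np

# Check: the Fourier transform of |g1(tau)|^2 = exp(-2|tau|/tau_c)
# equals tau_c / (1 + (omega*tau_c/2)^2), cf. the Lorentzian model above.
tau_c = 1.0
tau = np.linspace(-50.0, 50.0, 200001)
h = tau[1] - tau[0]
g1_sq = np.exp(-2.0 * np.abs(tau) / tau_c)
for omega in (0.0, 1.0, 4.0):
    # the integrand is even, so only the cosine part survives
    spec = (g1_sq * np.cos(omega * tau)).sum() * h
    expected = tau_c / (1.0 + (omega * tau_c / 2.0) ** 2)
    assert abs(spec - expected) < 1e-3
```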
The convergence of the high order correction in calculating $g^{(2)}$ is then dominated by the factor: \begin{equation}{\label{eq:c-factor}} \frac{\bar{I}\tau_{c}}{\frac{5}{4}+\bar{I}\tau_{c}}<1 \end{equation} The convergence of the power series $\sum\hat K^n(\omega)$ is governed by the product of the average photon detection rate and the coherence time, $\bar{I}\tau_{c}$. A smaller $\bar{I}\tau_{c}$ results in a faster convergence. The product $\bar{I}\tau_{c}$ can be used to characterize the degree of overlap of the photon wave packets. For a stronger overlap, the high order self-convolutions play a more important role. Thus, when the direct summation of finitely many orders is used to calculate $g^{(2)}$, a low photon flux $\bar{I}$ or a short coherence time $\tau_{c}$ is needed for an accurate result. The relative error of $J(\tau)$, due to including only finitely many orders in the numerical calculation, can be estimated as: \begin{equation} \Delta \hat J_m(\omega)=\frac{\sum^{\infty}_{n=m+1}\hat K^{n}(\omega)}{\sum^{\infty}_{n=1}\hat K^{n}(\omega)}=\hat K^{m}(\omega) \end{equation} where $m$ is the highest order included in the finite power series. For a weak light source, $\bar{I}\tau_c\ll1$, the error is $\sim(0.8\bar{I}\tau_c)^m$ and the convergence is geometric in $m$. On the contrary, for strong light sources with $\bar{I}\tau_c\gg1$, the error is approximated as $1-(1.25m/\bar{I}\tau_c)$, which goes down only linearly with $m$. Therefore, it is important to have a sufficiently weak photon overlap $\bar{I}\tau_c$ for a fast convergence. For a source with a long coherence time $\tau_c$, a very low photon flux rate $\bar{I}$ is required for an accurate calculation of the correlation function. However, although the photon flux of the source under measurement can be reduced simply using an attenuator, the minimum photon rate is limited by stray light and the dark counts of the electronics.
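The truncation error $\hat K^{m}(\omega)$ yields a simple rule of thumb for the number of terms required (a sketch using the in-band bound from Eq.~(\ref{eq:c-factor}); the tolerance value is an illustrative choice):

```python
import math

def orders_needed(I_tau_c, tol=1e-4):
    # in-band bound on |K_hat|: I*tau_c / (5/4 + I*tau_c)
    r = I_tau_c / (1.25 + I_tau_c)
    # smallest m with r**m < tol
    return math.ceil(math.log(tol) / math.log(r))
```

For example, $\bar{I}\tau_c=0.1$ gives $m=4$, whereas $\bar{I}\tau_c=1$ already requires $m=12$, reflecting the slower convergence at stronger photon overlap.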
The photon flux of the source must be significantly higher than the background noise to reach a reliable measurement. Therefore, a dilemma is imposed on the measurement of $g^{(2)}$ of a light source with a long coherence time. In our experiment below, the rotating-disk modulation method is demonstrated to overcome this obstacle, and the coherence time of a highly coherent source, a single frequency He-Ne laser, was measured. \section{Uncertainty of the beam splitter ratio} From the discussion above, it may seem that the more high order corrections are included, the higher the accuracy of $g^{(2)}$ will be. In this section, we argue that the uncertainty of the beam splitter ratio limits the highest order that can be included in the calculation. To use the direct numerical convolution method, the splitting-ratio correction factor of the partially reflecting beam splitter must be taken into account, because the experimentally measured time intervals are not truly those of two consecutive photons. In an HBT measurement, after the "START" detector receives a photon, the next photon may go to the "START" detector rather than the "STOP" detector, with a probability of 1/2 for a 50:50 beam splitter. In such a case, no signal is generated from this event.
Thus, the time-interval histogram of the consecutive photon pairs $K(\tau)$ is related to the experimentally measured photon-pair time-interval distribution $D(\tau)$ as: \begin{equation} D(\tau)=\sum^{\infty}_{n=1}\frac{1}{2^n}K_n(\tau) \end{equation} The $m$th-order self-convolutions $D_m(\tau)$ are expressed as: \begin{equation} \begin{aligned} D(\tau) =&\frac{1}{2}K(\tau)+&\frac{1}{4}K_2(\tau)+&\frac{1}{8}K_3(\tau)+&\frac{1}{16}K_4(\tau)\dots\\ D_2(\tau)=& &\frac{1}{4}K_2(\tau)+&\frac{2}{8}K_3(\tau)+&\frac{3}{16}K_4(\tau)\dots\\ D_3(\tau)=& & &\frac{1}{8}K_3(\tau)+&\frac{3}{16}K_4(\tau)\dots\\ D_4(\tau)=& & & &\frac{1}{16}K_4(\tau)\dots\\ &\vdots\\ \end{aligned} \end{equation} Thus, $J(\tau)$ can be calculated from the self-convolution power series of the experimentally measured time-interval distribution $D(\tau)$ with an additional factor of 2, when a 50:50 beam splitter is used: \begin{equation}{\label{eq:D2J}} 2\sum^{\infty}_{n=1}D_n(\tau)=2(\frac{1}{2})\sum^{\infty}_{n=1}K_n(\tau)=J(\tau)\\ \end{equation} In practice, however, a non-equal splitting ratio should be considered. A deviation $\epsilon$, which could be caused by an imbalanced beam-splitting ratio or a difference between the efficiencies of the detectors, modifies the 50:50 detection probability ratio to $(0.5+\epsilon):(0.5-\epsilon)$. The experimentally measured $D(\tau)$ is then written as: \begin{equation} D(\tau,\epsilon)=\sum^{\infty}_{n=1}(\frac{1}{2}+\epsilon)^{n-1}(\frac{1}{2}-\epsilon)K_n(\tau) \end{equation} The uncertainty of the splitting ratio $\epsilon$ affects the accuracy of the resulting $g^{(2)}$. The higher order self-convolution terms are more severely affected; that is, such uncertainties are amplified in the high order terms and limit the final achievable accuracy. A deviation of $\epsilon=5\%$ in the splitting ratio results in a $\sim$30\% error in the 5th-order term and $\sim$100\% in the 10th-order term.
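The amplification of the splitting-ratio error with the convolution order can be checked directly from the coefficients of $D(\tau,\epsilon)$ (a sketch; the function name is ours):

```python
def coeff_error(n, eps):
    """Relative error made when the ideal 50:50 coefficient (1/2)^n is
    assumed although the true n-th order coefficient is
    (1/2 + eps)^(n-1) * (1/2 - eps), cf. D(tau, eps) above."""
    true = (0.5 + eps) ** (n - 1) * (0.5 - eps)
    assumed = 0.5 ** n
    return abs(true / assumed - 1.0)
```

For $\epsilon=5\%$ this gives an error of about 32\% for the 5th-order coefficient and about 112\% for the 10th-order one, consistent with the figures quoted above.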
The experimental difficulty in measuring an accurate splitting ratio limits the maximum applicable order in calculating the final second order correlation function. \section{HBT experimental test} \begin{figure}[h] \begin{center} \includegraphics[width=0.8\linewidth]{set-up.pdf} \end{center} \caption{\label{figure:setup}The experimental set-up. To generate a pseudo-thermal light source, a single longitudinal mode 632~nm He-Ne laser, passing through two polarizers for controlling the incident power, is focused on a rotating wheel with a sandpaper surface. The backscattered light was collected into a fiber splitter without any collimator. One SPCM (PMT1) served as the "START" to trigger the universal counter for the time-interval measurement; the other SPCM (PMT2) served as the "STOP". The time intervals were recorded by a computer for subsequent off-line analysis. The second order correlation function of the pseudo-thermal light was then calculated from the histogram of the recorded time intervals. } \end{figure} The improvement and the accuracy of the direct numerical convolution method for analyzing $g^{(2)}(\tau)$ were tested using an HBT photon interferometer to measure a pseudo-thermal light source with a variable coherence time. Meanwhile, to measure a very long coherence time (a narrow linewidth), we demonstrate a novel method, which is based on this experimental setup and needs no km-long optical fiber for self-heterodyne detection \cite{OKOSHI:1980un,Richter:1986us}. As shown in Fig.~\ref{figure:setup}, the HBT interferometer is based on a fiber splitter (50:50, Thorlabs 780-HP) with single photon counting modules (SPCMs, Hamamatsu H7421-40) as the "START" and the "STOP" detectors, measuring the photon-pair time intervals with a universal counter (Agilent 53131A). The pseudo-thermal light is the backscattering of the single frequency 632~nm He-Ne laser from a rotating sandpaper disk. No frequency or power stabilization was applied to the He-Ne laser.
It is particularly suitable for simulating a light source with a very low intensity and a very long coherence time \cite{Martienssen:1964ig, Scarcelli:2004bc, ESTES:1971}. The coherence time of the pseudo-thermal light was controlled by the rotating speed $\omega_r$ of the sandpaper (as ${\tau_c\propto1/\omega_r}$) to allow us to compare the deduced coherence time with the theoretical prediction. The He-Ne laser power was controlled by rotating two polarizers, and the beam was then focused on the rotating sand disk using a convex lens. For good stability and precision of the rotating speed of the sand disk, the rotating wheel was adapted from a light chopper wheel, whose rotating speed was locked to an oscillator clock. Reducing the stray room light is important for measuring a long coherence time. As shown by Eq.~(\ref{eq:c-factor}), the product of the average photon detection rate and the coherence time ${\bar{I} \tau_c}$ should be smaller than 1 for an accurate measurement. That is, a longer coherence time requires a lower photon rate. On the other hand, the detected photon counts of the light source should be much higher than the background photon counts. We have reinforced the light shielding of the entire experimental setup, including the fiber jackets and the connectors. The resulting background photon counting rate is about 20\% of that of the pseudo-thermal light. \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{contribution.pdf} \end{center} \caption{\label{fig:contribution} Typical $J(\tau)$ with various orders of $D_n(\tau)$ at a 20~Hz rotating frequency. The coherence time of the pseudo-thermal light source $\tau_{c}$ is $\sim 10~\mu$sec and $\bar{I} \sim 4\times10^4$ photons/sec. The 4th- and 6th-order results are in very good agreement within the region $\tau<100~\mu$sec, which is the region important for deriving the coherence time. A bin size of 100~nsec was used.
} \end{figure} In our experiment, the detected photon rate was reduced to $\sim4\times10^4$ photons/sec. A recorded time-interval sequence was then converted to a histogram ($D(\tau)$) with a 100~nsec bin size, which is sufficiently small for the measurement of a coherence time of $\sim\mu$sec. A smaller bin size was also used for a test run and showed no improvement in the result, but only cost more calculation time. $J(\tau)$, which is proportional to the second order correlation function $g^{(2)}(\tau)$, was derived using the direct numerical self-convolution of $D(\tau)$ (Eq.~(\ref{eq:D2J})). Figure \ref{fig:contribution} shows the $J(\tau)$s with various high order corrections. We found that a sixth-order convolution is sufficient to converge and to reach an accurate $J(\tau)$. Taking the distribution of the photon-pair intervals as $g^{(2)}$, or underestimating the high order correction, results in an artificially prolonged coherence time. The figure also shows that the convergence is faster in the small-$\tau$ region, as discussed in Section~\ref{sec:high-order}. On the contrary, the tail of $J(\tau)$ at large $\tau$, which does not affect the resulting coherence time $\tau_c$, has a slower convergence. We suggest as a criterion for the highest order $n$ included in the calculation: $D_{n+1}/\sum^{n}_{1}D_n<10^{-4}$ in the region $\tau<\tau_c$, which keeps the deviation of the resulting coherence time below $1\%$. In our experiment, $n=6$ is sufficient to meet this criterion. \section{Novel method of measuring the coherence time} To test and demonstrate the accuracy of $g^{(2)}(\tau)$ ($J(\tau)$) obtained with our method, the rotating speed of the sand disk was varied to generate photons with different coherence times in our experiment. The result is shown in Fig.~\ref{fig:fitting}. The $J(\tau)$s measured using our direct convolution method were fitted with exponential decay curves with constant offsets, $A+Be^{-2\tau/\tau_{c}}$. From Eq.
(\ref{eq:1to2}), we have \cite{Loudor}: \begin{equation} g^{(2)}(\tau)=1+e^{-2\tau/\tau_c} \end{equation} The coherence times of the scattered light (pseudo-thermal, chaotic) can be derived from the fitting parameter $\tau_c$. This is a direct relation between the second order and the first order correlations. However, this equation is only valid for a chaotic light source. For a coherent source, such as a laser, the coherence time cannot be measured using the second order correlation. In our experiment, the coherent laser source was thus converted to a chaotic one using the random phase modulation (the rotating sand disk). In Fig.~\ref{fig:fitting}, the zero-time-delay second order correlations $g^{(2)}(0)$ are about 2 for all the rotating frequencies, except for 0~Hz. This indicates that the rotating sand disk has fully "thermalized" the coherent light into chaotic light through the random phase modulation. At 0~Hz, however, the light was still in a very good coherent state, $g^{(2)}(0)\sim1$. The random phase modulation not only converts the light source into a pseudo-thermal one, but also broadens the linewidth. The broadened linewidth is proportional to the rotating speed. A composite (Voigt) power spectrum with both Gaussian and Lorentzian parts could be the most adequate shape for our pseudo-thermal light source. However, due to the limited accuracy of the measurements, we are not able to determine the composition ratio, as in \cite{Yuan:2009ue}; the two lineshape models make little difference for our experiment. As shown in the inset of Fig.~\ref{fig:fitting}, the exponential decay curve (Lorentzian power spectrum) is more suitable than the Gaussian (Gaussian power spectrum). This is because the random backscattering modulation is similar to the mechanism of collision broadening, which broadens the spectrum of a light source into a Lorentzian shape. When the incident light is monochromatic, i.e.,
a laser with a negligible linewidth $\delta\omega_0\sim 0$, the coherence time of the pseudo-thermal light is proportional to the inverse of the rotating frequency \cite{ESTES:1971}: \begin{equation} \frac{1}{\tau_{c}}=k\omega_r \end{equation} where $k$ is a constant related to the experimental setup of the pseudo-thermal light source. This is a good approximation at high rotating frequencies, where the scattering broadening is much larger than the laser linewidth itself. Thus, \begin{equation} \tau_c\omega_r=\frac{1}{k}=\rm{const.} \end{equation} can be used to test the validity of the derived coherence times $\tau_c$. As illustrated in Fig.~\ref{fig:comp}, $\tau_c\omega_r$ appears as a horizontal straight line in the high rotating frequency $\omega_r$ regime. \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{fit-lorentz.pdf} \end{center} \caption{\label{fig:fitting}The second order correlation functions $g^{(2)}(\tau)$ with rotating frequencies 0~Hz, 20~Hz, 100~Hz, 300~Hz, 500~Hz and 900~Hz. The black lines are the fitting functions $A+Be^{-2\tau/\tau_{c}}$. Except for 0~Hz, the resulting coherence times ${\tau_c}$ are $\rm{28.10(80)~\mu s}$, $\rm{7.40(11)~\mu s}$, $\rm{3.00(5)~\mu s}$, $\rm{1.75(2)~\mu s}$ and $\rm{0.96(3)~\mu s}$, respectively. The inset shows a Gaussian fit (thin red line) of the $g^{(2)}(\tau)$ at a rotating frequency of 700~Hz, in comparison with the exponential decay fit (thin blue line). The fitting residuals and $\chi^2$ show that the exponential decay function is slightly better than the Gaussian.} \end{figure} However, for a finite linewidth $\delta\omega_0$ of the incident light, the resulting spectrum should be considered as the convolution of the incident light spectrum and the broadening caused by the rotating sand disk.
The total linewidth $\delta\omega_m$ of the scattered pseudo-thermal light is then corrected as: \begin{equation} \delta\omega_{m}= \delta\omega_{0}+k\omega_r\\ \end{equation} and the coherence time of the pseudo-thermal light, including the incident laser linewidth, is: \begin{equation} \frac{1}{\tau_{c}}=\frac{1}{\tau_{0}}+k\omega_r\\ \end{equation} where $\tau_0$ is the coherence time of the incident light. Consequently, $\tau_c\omega_r$ is no longer a constant, but a function of the rotating frequency $\omega_r$: \begin{equation}\label{eq:model} \tau_{c}\omega_r=\frac{1}{\frac{1}{\omega_r \tau_{0}}+k} \end{equation} This implies that the finite-linewidth correction becomes very pronounced at low rotating frequencies, where the scattering broadening $k\omega_r$ is comparable to the incident linewidth $\delta\omega_0$ ($=1/\tau_0$), while $ \tau_{c}\omega_r$ remains constant in the high rotating frequency regime, where $k\omega_r\gg\delta\omega_0$. \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{lorentz.pdf} \end{center} \caption{\label{fig:comp} $\tau_{c}\omega_{r}$ vs. $\omega_{r}$. In the low frequency regime, the uncorrected ${\tau_c}$ (red dots) strongly deviates from the high-order corrected ${\tau_c}$ (black squares) and the theoretical model (blue line). The corrected ${\tau_c}$ are in very good agreement with the theory, which gives $ \tau_{c}\omega_r=({(\omega_r \tau_{0}})^{-1}+k)^{-1}$. The coherence time ${\tau_0}$ of the incident light (He-Ne laser) was derived as 74(15)~$\mu s$ from the fitting parameter of the theoretical model. The inset shows a typical beat-note signal of two He-Ne lasers with an RBW of 3~kHz. The measured ($-3$~dB) linewidth is 6.5(1.3)~kHz. Assuming equal linewidths of the two lasers, the coherence time is $\rm{97(20)~\mu s}$.} \end{figure} Figure~\ref{fig:comp}, which shows $\tau_{c}\omega_r$ vs. the rotating frequency $\omega_r$, is used to evaluate the validity of the high-order correction.
Firstly, the high-order corrected data points (black) are in very good agreement with the theoretical model, Eq.~(\ref{eq:model}). The product $\tau_{c}\omega_r$ remains constant in the high frequency regime and decreases at low frequencies due to the finite linewidth of the light source. In contrast, $\tau_{c}\omega_r$ of the uncorrected data points (red) increases at low frequencies and deviates further from the theoretical prediction, because of the strong overlap of the photon wave packets, i.e., a large $\bar I\tau_c$. Secondly, the high-order corrected data were fitted using the mathematical model $y=1/(A+B/x)$ (blue line). The finite coherence time of the unstabilized laser, $\tau_0$, was derived from the parameter $1/B$. The derived laser coherence time ${\tau_0}$ is 74(15)~$\mu$s, with a statistical uncertainty of 20\% given by the fit to the data points. The error bars of the data points are given by Fig.~\ref{fig:fitting} and are too small to show in Fig.~\ref{fig:comp}. They are mainly caused by the low frequency noise, as discussed in Section~\ref{sec:high-order}. Generally, the uncertainty of the coherence time can be improved by more measurements at various rotating frequencies, especially in the low rotating frequency regime, where $\omega_r\tau_0\sim1/k$. $k$ is a parameter related to the experimental setup, including the sand disk roughness, the distance between the scattering spot and the centre of rotation, and so on. However, for a coherent incident light source (e.g., a laser), the lowest applicable rotating frequency must still provide sufficiently strong random modulation to thermalize the source. In our experiment, a 20~Hz rotating frequency can fully thermalize the laser source ($g^{(2)}(0)=2$). A larger photon-collecting aperture could maintain the thermalization at a lower rotating frequency \cite{Martienssen:1964ig}.
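The model fit is easy to reproduce in an equivalent linearized form: inverting Eq.~(\ref{eq:model}) gives $1/(\tau_c\omega_r)=k+(1/\tau_0)(1/\omega_r)$, which is linear in $1/\omega_r$, so $\tau_0$ and $k$ follow from an ordinary straight-line fit. The sketch below demonstrates this on synthetic, noiseless data (the numerical values are illustrative, not our measured ones):

```python
import numpy as np

def fit_tau0_k(omega_r, tau_c):
    # invert the model: y = 1/(tau_c*omega_r) = k + (1/tau_0) * (1/omega_r)
    omega_r = np.asarray(omega_r, dtype=float)
    tau_c = np.asarray(tau_c, dtype=float)
    slope, intercept = np.polyfit(1.0 / omega_r, 1.0 / (tau_c * omega_r), 1)
    return 1.0 / slope, intercept          # (tau_0, k)

# synthetic data generated from the model itself (illustrative parameters)
tau_0_true, k_true = 74e-6, 1.0e3
omega = np.array([20.0, 100.0, 300.0, 500.0, 900.0])
tau_c = 1.0 / (1.0 / tau_0_true + k_true * omega)
tau_0_fit, k_fit = fit_tau0_k(omega, tau_c)
assert abs(tau_0_fit - tau_0_true) / tau_0_true < 1e-6
assert abs(k_fit - k_true) / k_true < 1e-6
```

With real data, the scatter of the points then propagates into the uncertainties of the slope and intercept, i.e. of $\tau_0$ and $k$.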
A frequency beat-note experiment with two He-Ne lasers was also performed to measure the laser linewidth and to compare with the results of the random phase modulation. The laser used for the random phase modulation measurement was mixed with another, nearly identical He-Ne laser using a beam splitter. The beat note of these two lasers was measured using an avalanche photodiode (APD) and a radio-frequency spectrum analyzer with a 3~kHz resolution bandwidth (RBW). The beat-note signal is shown in the inset of Fig.~\ref{fig:comp}. The measured ($-3$~dB) linewidth of the beat note is 6.5(1.3)~kHz. Assuming that this width was contributed equally by the two nearly identical lasers ($\Delta f$=3.3(0.7)~kHz for one laser), the coherence time $\tau_c$ ($=(\pi \Delta f)^{-1}$) is $\rm{97(20)~\mu s}$, which is in agreement with our random phase modulation measurement. \section{Conclusions} To measure the second order correlation function $g^{(2)}(\tau)$ of a light source, we have examined the feasibility and the reliability of the direct numerical convolution method, which is efficient and straightforward in comparison with other, more delicate methods. The significance of the high order correction is related to the factor $\bar{I} \tau_c$, which quantifies the overlap between the wave packets of photons. The method has been experimentally tested using a pseudo-thermal light source with a variable coherence time. In our experiment, we found that the summation up to the 6th-order self-convolution reaches an accuracy sufficient to derive the second order correlation function $g^{(2)}$. An advantage of the direct numerical self-convolution is that it can be implemented with fast digital logic electronics, such as a field-programmable gate array (FPGA), to obtain a `real-time' $g^{(2)}$ with a higher accuracy. By applying this direct self-convolution method, a novel random phase modulation method of measuring the linewidth (coherence time) of a light source has been demonstrated.
The idea is to use a rotating sand disk to randomly modulate the phase and broaden the linewidth of the light source; the linewidth (coherence time) of the source can then be extrapolated to zero modulation. In comparison with the commonly used self-heterodyne measurement, our method, which requires neither a kilometer-long optical fiber nor high intensity, is more favourable for weak light sources, such as molecular fluorescence or bio-photon emission, whose interference fringes or beat notes are difficult to detect. \section*{Acknowledgments} This work was supported by the Ministry of Science and Technology of Taiwan under grant no. 103-2112-M-007-007-MY3. \end{document}
\section{\label{sec:int}Introduction} We consider chaotic dynamical systems $F: \boldsymbol{x}_n \mapsto \boldsymbol{x}_{n+1}$ for which almost all trajectories $\boldsymbol{x}$ starting in a region of interest $\Gamma \subset \mathbbm{R}^d$ (restraining region) leave it after a finite number of iterations $n=\tau$ (escape time). The zero-measure non-attracting set of trajectories that never escape ($\tau \rightarrow \infty$) forms a fractal set, the stable manifold of the chaotic saddle that governs the asymptotic dynamics of the system. This is the familiar picture of transient chaos, which describes a variety of physical applications~\cite{LaiTamasBook,Tel2008,Rempel2007,Taylor2007}. An important problem in numerical investigations is to approximate the invariant sets, which amounts to maximizing the escape time $\tau(\boldsymbol{x})$ (a function $\mathbbm{R}^d \mapsto \mathbbm{R}$)\footnote{The higher the escape time, the nearer the point is to the stable manifold of the invariant set, the nearer $F^{\tau/2}(\boldsymbol{x})$ is to the chaotic saddle, and the nearer $F^{\tau}(\boldsymbol{x})$ is to the unstable manifold.~\cite{LaiTamasBook}}. Two difficulties appear in (hyperbolic) chaotic systems: (i) the phase-space volume of trajectories with escape time $\tau$ decays as $P(\tau) \sim e^{-\kappa \tau}$, where $\kappa$ is the escape rate of the system; (ii) the set of points for which $\tau\rightarrow \infty$ is a complicated fractal set with $D<d$ (the stable manifold of the chaotic saddle). Different numerical methods have been designed to approximate the invariant sets~\cite{Sweet2001, Bollt2005} or to compute specific properties such as the fractal dimension~\cite{DeMoura2001}.
While the success of these methods for systems with a single positive Lyapunov exponent ($\lambda_1 >0$, $\lambda_{2,\ldots, d} <0$) is well established, for the challenging case of hyperchaotic systems the most popular~\cite{LaiTamasBook,Taylor2007} and efficient solution is the Stagger-and-Step method proposed by Sweet and Yorke in 2001~\cite{Sweet2001}. Here we revisit the core problem of all these methods, which can be formulated as follows: starting from a point $\boldsymbol{x}$ with escape time $\tau$, the task is to find a new point $\position'$ with higher escape time $\tau(\position')>\tau(\boldsymbol{x})$. A simple search strategy is to consider the pre-image of $\boldsymbol{x}$, $\position' = F^{-1}(\boldsymbol{x})$, which by construction has $\tau(\position')=\tau(\boldsymbol{x})+1$~\cite{Dellago2002,Leitao2014}. This strategy is, however, of limited use because repeating it quickly leads $\position'$ to fall outside $\Gamma$ (the search domain). The other generic search strategy, on which we focus in this paper, is to search in a vicinity $\boldsymbol{\delta}(\boldsymbol{x})$ of the original point, $\position'=\boldsymbol{x}+\boldsymbol{\delta}(\boldsymbol{x})$. In this paper we investigate different search strategies (choices of $\boldsymbol{\delta}(\boldsymbol{x})$) and compare their efficiency, measured as the number of searches needed in order to find a trajectory with a pre-assigned escape time $\tau=\tau_{max}$. We start by introducing the system in which we test our algorithms (in Sec.~\ref{sec.Henon}). The first method we test is the Stagger-and-Step method (in Sec.~\ref{sec.stagger}). We conclude that its cost grows faster than linearly with $\tau_{max}$ due to an inefficient search strategy $\boldsymbol{\delta}(\boldsymbol{x})$. To address this problem, we propose (in Sec.~\ref{sec.adaptive}) an adaptive method which considers a $\boldsymbol{\delta}(\boldsymbol{x})$ whose size decays with $\tau(\boldsymbol{x})$.
A comparison of the efficiency of the two methods (in Sec.~\ref{sec.efficiency}) confirms the benefit of our adaptive proposal but shows that the existence of more than one unstable direction in the phase space poses a major limitation to both methods. We show (in Sec.~\ref{sec.anisotropic}) that this limitation can be overcome using a specific anisotropic distribution of $\boldsymbol{\delta}(\boldsymbol{x})$. Finally, we discuss the implications of our results (in Sec.~\ref{sec.conclusions}). \section{Coupled H\'enon maps}\label{sec.Henon} We test the different methods through simulations in a family of dissipative systems with variable dimension that shows transient chaos. We consider a $d=2N$ dimensional map $F: (x_i,y_i)_n \mapsto (x_i, y_i)_{n+1}$ built by coupling $N$ two-dimensional maps \begin{equation} \left(\begin{array}{c} x_{i}\\ y_{i} \end{array}\right)_{n+1} = \left(\begin{array}{c} A_i - x_i^2 + B y_i + k(x_i - x_{i+1})\\ x_i \end{array}\right)_n, \label{eq.henon} \end{equation} with $i=1, \ldots, N$, periodic boundary conditions $(N+1 \equiv 1)$, $k=0.4$, $B=0.3$, $A_1=3$ (if $N>1$), $A_N=5$, and $A_i=A_1+(A_N-A_1)(i-1)/(N-1)$. Initial conditions are chosen in the $2N$-dimensional hypercube $\Gamma=[-4,4]^{2N}$ (restraining region), which according to our exploratory numerical simulations contains the invariant set of the system for all $N$. This system was defined in Ref.~\cite{Leitao2013}; it recovers for $N=1$ a well-studied chaotic H\'enon map, and for $N=2$ the system used in the original paper on the Stagger-and-Step method~\cite{Sweet2001}. For increasing dimensionality~$N$, our numerical simulations systematically show an exponential decay in time of the number of surviving trajectories, $N$ positive Lyapunov exponents, and fractal invariant sets.
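The coupled map~(\ref{eq.henon}) and the escape-time function $\tau(\boldsymbol{x})$ can be sketched in a few lines. The following is a minimal implementation following the parameter choices in the text (the iteration cap and sample size are arbitrary choices for illustration):

```python
import numpy as np

# Minimal sketch of the coupled Henon family, Eq. (eq.henon), and the
# escape-time function tau(x): iterate until the orbit leaves the
# restraining region Gamma = [-4, 4]^{2N}.

def make_params(N):
    A1, AN = 3.0, 5.0
    if N == 1:
        return np.array([AN])  # for N = 1 only A_N = 5 applies
    return A1 + (AN - A1) * np.arange(N) / (N - 1)

def step(x, y, A, B=0.3, k=0.4):
    # periodic coupling: site i couples to site i+1 (N+1 == 1)
    x_next = A - x**2 + B * y + k * (x - np.roll(x, -1))
    return x_next, x.copy()

def escape_time(x, y, A, tau_cap=1000):
    for n in range(tau_cap):
        if np.max(np.abs(np.concatenate([x, y]))) > 4.0:
            return n
        x, y = step(x, y, A)
    return tau_cap

N = 2
A = make_params(N)
rng = np.random.default_rng(1)
taus = [escape_time(rng.uniform(-4, 4, N), rng.uniform(-4, 4, N), A)
        for _ in range(2000)]
print("mean escape time:", np.mean(taus))
```

A histogram of `taus` would show the exponential decay $P(\tau)\sim e^{-\kappa\tau}$ discussed in the introduction.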
\section{Stagger method\label{sec.stagger}} The state of the art for computing approximations of the chaotic invariant sets in systems with more than one $\lambda_i >0$ is the Stagger-and-Step method~\cite{Sweet2001}. It combines two procedures: a random search for points with high escape time (the stagger part), followed by the construction of pseudo-orbits starting from such points (the step part). Here we are interested in testing and improving the stagger part (the step part can be incorporated in the methods discussed below). The stagger search method considers $\boldsymbol{\delta}=\delta\hat{\mathbf{u}}$, where $\hat{\mathbf{u}}$ is a unit vector chosen with uniform probability on the unit sphere (isotropic) and $\delta$ is chosen randomly from a truncated scale-free distribution given by \begin{equation} \label{eq.stagger} P(\delta)= \frac{Z}{\delta} \text{ for } \delta_{min} < \delta < \delta_{max}, \end{equation} where $Z$ is the normalization, $\delta_{max}$ is chosen to be of the order of the system size, and $\delta_{min}$ is set equal to the (potentially variable) machine precision\footnote{In Ref.~\cite{Sweet2001} the distribution is constructed by considering $\delta=10^{-s}$ with $s$ chosen randomly from a uniform distribution in $[\log_{10}(\delta_{min}), \log_{10}(\delta_{max})]$.}. Starting from a point $\boldsymbol{x}$ with $\tau(\boldsymbol{x})$, the method consists in: \begin{itemize} \item[1.] calculate a proposal position $\position'=\boldsymbol{x}+\boldsymbol{\delta}$ and the respective escape time $\tau(\position')$, where $\boldsymbol{\delta} = \delta \hat{\mathbf{u}}$ and $\delta$ is drawn randomly from Eq.~(\ref{eq.stagger}); \item[2.] accept the proposal if $\tau(\position') > \tau(\boldsymbol{x})$ (a success) or stay in the same position $\boldsymbol{x}$ if not; \item[3.] go to 1 until $\tau(\boldsymbol{x})$ reaches a maximum escape time $\tau_{\max}$.
\end{itemize} In Fig.~\ref{fig.I} we test the Stagger proposal~(\ref{eq.stagger}) by investigating how the successful proposals depend on $\tau$. We find that, on average, the magnitude of the successful proposal $\delta^*$ decays with $\tau$, consistent with \begin{equation}\label{eq.rlambda} \delta^* \sim e^{-\lambda_1 \tau}, \end{equation} where $\lambda_1$ is the largest Lyapunov exponent of the system. This is justified by the observation that the points that escape at time $\tau$ are the $\tau$-th pre-images of a suitably defined escape region, and thus the size of these pre-images decreases exponentially with the maximum Lyapunov exponent~\cite{Grunwald2008,Leitao2013}. The stagger proposal~(\ref{eq.stagger}) is independent of $\tau$ and does not exploit this regularity (right panel of Fig.~\ref{fig.I}): the overlap is small between the perturbation range $[\delta_{min}, \delta_{max}]$ and the range of successful perturbations, whose average scales with Eq.~(\ref{eq.rlambda}). As $\tau_{max}$ increases, in order to find successful trajectories one has to reduce $\delta_{\min}$ (e.g., by increasing the machine precision). This enlarges the search range $[\log \delta_{\min}, \log \delta_{\max}]$, which further decreases the overlap with the range of successful proposals. As a consequence, the efficiency of the stagger search decreases with $\tau_{\max}$. The natural question we explore in the next section is how to use the regularity in the $\delta$ vs. $\tau$ relation -- Eq.~(\ref{eq.rlambda}) -- to design a more efficient search algorithm (even if $\lambda_1$ is unknown). \begin{figure}[bt!] \centering \includegraphics[width=\columnwidth]{FIG_1.pdf} \caption{ The successful proposals $\position'=\boldsymbol{x}+\boldsymbol{\delta}$ of the Stagger method depend on the escape time $\tau=\tau(\boldsymbol{x})$.
Left: average and standard deviation of the length of successful proposed values $\delta=|\boldsymbol{\delta}|$ for a given $\tau(\boldsymbol{x})$ (i.e., for which $\position'=\boldsymbol{x}+\boldsymbol{\delta}$ leads to $\tau(\position')>\tau(\boldsymbol{x})$). The dashed line corresponds to an exponential decay with exponent $\lambda_1=1.33$, the largest Lyapunov exponent of this system as reported in Ref.~\cite{Sweet2001}. Right: the distributions $P_S(\delta)$ of successful $\delta$ for two different $\tau$'s (8 in white and 16 in gray), along with the Stagger distribution $P(\delta)$ defined in Eq.~(\ref{eq.stagger}). Results are obtained from independent simulations of system~(\ref{eq.henon}) with $N=2$, $\tau_{\max}=19, \delta_{min}=2^{-39}$, and $\delta_{\max}=1$. } \label{fig.I} \end{figure} \section{Adaptive method\label{sec.adaptive}} Motivated by the idea that there is a typical scale $\delta$ for a successful search at each value of $\tau$, we propose a search algorithm using the same procedure as the Stagger, but with a distribution $P(\delta)$ different from Eq.~(\ref{eq.stagger}). Specifically, we consider a normal distribution with mean $0$ and standard deviation $\sigma(\boldsymbol{x})$. We choose this distribution because the distance $|\boldsymbol{x} - \position'|$ then follows a half-normal distribution whose expected value is proportional to $\sigma$ (the characteristic length of proposals). Ideally, we would choose $\sigma$ to scale as $\sigma(\tau) \sim e^{-\lambda_1 \tau}$. However, since $\lambda_1$ is typically unknown, we use the adaptive procedure proposed in Ref.~\cite{Leitao2013}, where $\sigma$ is adapted at each step of the simulation by \begin{equation}\label{eq.sigma} \sigma_{t+1}(\tau) = \left\{\begin{array}{c} \sigma_t(\tau)/f \text{ for } \tau(\position') < \tau(\boldsymbol{x}) \\ \sigma_t(\tau) f \text{ for } \tau(\position') = \tau(\boldsymbol{x}), \end{array}\right.
\end{equation} where $f\gtrapprox 1$ is a free parameter (we use $f=1.1$). The idea behind this procedure is that too large a $\sigma$ will propose points $\position'$ that are mostly drawn from the escape-time distribution, which will have $\tau(\position') \approx 1$ and thus be mostly unsuccessful. Thus, we decrease $\sigma$ when a low escape time is proposed. Likewise, when $\sigma$ is too small the points $\position'$ will be indistinguishable from $\boldsymbol{x}$ after $\tau$ iterations and thus $\tau(\position')=\tau(\boldsymbol{x})$. In this case we increase $\sigma$. We expect this iterative procedure to converge to a value of $\sigma$ for which the desired $\tau(\position')$ is achieved. In order to increase the mixing in $\boldsymbol{x}$, we accept proposals $\position'$ for which $\tau(\position')=\tau(\boldsymbol{x})$ (unlike the Stagger method summarized above). Our numerical simulations are summarized in Fig.~\ref{fig.II} and indicate that the adaptive method with Eq.~(\ref{eq.sigma}) converges to proposals with the appropriate $\sigma$. In particular, both the distributions of the proposed and the successful distances $\delta$ scale with $\tau$ as $\sim e^{-\lambda_1 \tau}$, in agreement with Eq.~(\ref{eq.rlambda}). The advantage of this procedure in comparison to the Stagger is that the proposal at a given $\tau$ does not depend on $\tau_{\max}$, and therefore the efficiency does not decay with $\tau_{\max}$. In the next section we compare the efficiency of the adaptive and Stagger methods as a function of $\tau$ for a fixed $\tau_{\max}$. \begin{figure}[bt!] \centering \includegraphics[width=\columnwidth]{FIG_2_A.pdf} \includegraphics[width=\columnwidth]{FIG_2_B.pdf} \caption{The adaptive method of proposal. Upper panel: evolution of a single realization of the adaptive method as a function of the number of searches $t$ (Monte Carlo time). The line shows the evolution of the scale $\sigma$ of the proposal according to Eq.~(\ref{eq.sigma}).
Each red dot represents a step at which the escape time $\tau$ increased (a success). Lower panel: average and standard deviation of the proposals (blue) and successful proposals (black) computed over $100$ independent simulations (compare to Fig.~\ref{fig.I}). Left: average and standard deviation of the displacement proposed for each $\tau$, averaged over 100 independent simulations. Right: full distributions for $\tau=8$ and $\tau=16$. White ($\tau=8$) and gray $(\tau=16)$ represent the successful proposals; light blue ($\tau=8$) and blue ($\tau=16$) represent all proposals. Simulations were performed in system~(\ref{eq.henon}) with $N=2$ and using $f=1.1$. } \label{fig.II} \end{figure} \section{Efficiency \label{sec.efficiency}} We quantify the efficiency of a search method by the rate of success in finding trajectories. We define the $\tau$-dependent {\it success rate} $C(\tau)$ as an average $\langle \ldots \rangle$ over the different realizations $r$ of the algorithm, \begin{equation}\label{eq.c} C(\tau) \equiv \langle C_r(\tau) \rangle = \left\langle \frac{t_f(\tau)}{\tau}\right\rangle , \end{equation} where $t_f(\tau)$ is the total number of proposals needed for realization $r$ to reach $\tau$ (e.g., in the upper panel of Fig.~\ref{fig.II}, $C_r(\tau=26)=490/26$ because the algorithm needed $490$ search steps to reach $\tau=26$) \footnote{To compute $\tau$, the map has to be iterated $\tau$ times and thus the number of map iterations needed to reach $\tau_{\max}$ is of the order of $\sum_{\tau=1}^{\tau_{\max}} C(\tau)\tau$.}. For $\tau \gg 1$, the escape-time function is self-similar in $\tau$ and thus we aim for the algorithm to be independent of $\tau$. This optimal situation corresponds to having $C(\tau\rightarrow \infty) = C_r(\tau\rightarrow \infty)\equiv C$, where $C$ is independent of $\tau$ and thus can be used to characterize the efficiency of the method in the particular system.
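The adaptive update rule of Eq.~(\ref{eq.sigma}), together with the acceptance rules described above, can be sketched as follows. The escape-time function here is a toy stand-in (an expanding one-dimensional map whose "stable manifold" is the origin), not the coupled H\'enon system, and all parameter values are illustrative:

```python
import numpy as np

# Sketch of the adaptive search with the update rule of Eq. (eq.sigma):
# Gaussian proposals whose scale sigma shrinks by f when the proposal
# escapes earlier, and grows by f when its escape time is unchanged.

def adaptive_search(x0, escape_time, tau_target, f=1.1, sigma0=1.0, seed=2,
                    max_steps=100_000):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    tau, sigma = escape_time(x), sigma0
    for t in range(1, max_steps + 1):
        xp = x + rng.normal(0.0, sigma, size=x.shape)
        tau_p = escape_time(xp)
        if tau_p > tau:          # success: accept the proposal
            x, tau = xp, tau_p
        elif tau_p == tau:       # indistinguishable: accept and grow sigma
            x, sigma = xp, sigma * f
        else:                    # overshoot: shrink sigma
            sigma /= f
        if tau >= tau_target:
            return x, tau, t
    return x, tau, max_steps

def toy_tau(x, cap=100):
    # toy escape time: iterations of r -> 3r until the unit interval is left
    r, n = float(np.linalg.norm(x)), 0
    while r <= 1.0 and n < cap:
        r, n = 3.0 * r, n + 1
    return n

x_best, tau_best, steps = adaptive_search([0.5], toy_tau, tau_target=20)
print(tau_best, steps)
```

In this toy problem the successful proposal scale must shrink roughly as $3^{-\tau}$, mirroring the $e^{-\lambda_1\tau}$ scaling of Eq.~(\ref{eq.rlambda}).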
\begin{figure*} \includegraphics[width=1.4\columnwidth]{FIG_3.pdf} \caption{The efficiency of the Stagger and Adaptive methods depends on the dimension $N$. Success rate $C(\tau)$ as a function of $\tau$ for the Stagger (left) and adaptive (right) methods, for $N=1,...,4$. Each curve corresponds to an average over $100$ independent runs starting at $\tau_{\min}=1$ and ending at $\tau_{\max}=32$ or $\tau_{\max}=64$. } \label{fig.III} \end{figure*} In Fig.~\ref{fig.III} we present the numerical results for $C(\tau)$ for the Stagger and Adaptive methods. For the (easy) case of low dimensions ($N=1$ and $N=2$), we see a convergence in $\tau$ to a well-defined success rate $C$. The value of $C$ for the adaptive method is lower than for the Stagger method, indicating that it is more efficient. The $C$ value of the Stagger could be reduced by increasing $\delta_{\min}$, at the expense of reducing $\tau_{\max}$ (the maximum obtainable $\tau$). More surprisingly, the (difficult) case of high dimensions $N$ shows that neither the Stagger nor the Adaptive method has a constant cost $C(\tau)$. This suggests that, contrary to previous claims~\cite{Sweet2001, LaiTamasBook}, the Stagger method becomes inefficient in higher dimensions (it shows a qualitatively different scaling). We verified that if the proposals are drawn from a Gaussian with width scaling as in Eq.~(\ref{eq.rlambda}), $C(\tau)$ also grows with $\tau$ for $N=3$. Overall, our results hint that the Stagger method may not be as robust to increasing dimensions as suggested in Refs.~\cite{Sweet2001, LaiTamasBook}. \section{Anisotropic method\label{sec.anisotropic}} The results above indicate that a choice of the right scale of the proposal $P(\delta)$ is not sufficient for an efficient search in high dimensions.
The fact that the problems arise (or become more severe) when more than one Lyapunov exponent is positive suggests that they are related to the existence of different stretching rates along different directions in phase space. Based on this insight, we consider in this section $\boldsymbol{\delta}$ drawn from a non-isotropic distribution (i.e., $\boldsymbol{\delta} \neq \delta \hat{\mathbf{u}}$). Ref.~\cite{Bollt2005} already employed an anisotropic search in high dimensions, considering a deterministic (gradient) search in a single direction. Our approach here is to generalize our ideas above to an anisotropic search. Consider a point $\boldsymbol{x}$ with escape time $\tau(\boldsymbol{x}) = \tau$ and its $\tau$-th evolution in time, $\boldsymbol{x}_\tau = \boldsymbol{F}^\tau(\boldsymbol{x})$. An initial infinitesimal displacement $d\boldsymbol{x}$ close to $\boldsymbol{x}$ will, after a time $\tau$, be transformed into $d\boldsymbol{x}_\tau$ according to the dynamics in the tangent space, \begin{equation} d\boldsymbol{x}_\tau =J(\boldsymbol{x}) d\boldsymbol{x}, \label{eq:tangent} \end{equation} where $J(\boldsymbol{x})\equiv d\boldsymbol{F}^\tau(\boldsymbol{x})/d\boldsymbol{x}$ is the Jacobian matrix of the map iterated $\tau$ times. In an isotropic proposal, $\boldsymbol{\delta}$ is proposed isotropically around $\boldsymbol{x}$ (the previous sections showed that the optimal scale for $|\boldsymbol{\delta}|$ decays with the largest Lyapunov exponent $\lambda_1$). The problem with isotropic proposals is that even when the distribution of $d\boldsymbol{x}$ is isotropic, the distribution of $d\boldsymbol{x}_\tau$ is not: it contracts and stretches along the singular/covariant basis according, respectively, to the singular/Lyapunov spectrum of the matrix ${\bf J}$. In particular, for large $\tau$, $d\boldsymbol{x}_\tau$ is aligned with the most unstable direction (associated with $\lambda_1$).
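The alignment described by Eq.~(\ref{eq:tangent}) is easy to verify numerically. In this sketch, $J$ is a toy diagonal stand-in with two expanding rates, not the Jacobian of the coupled H\'enon map; an isotropic cloud of displacements, pushed forward by $J$, concentrates along the leading direction:

```python
import numpy as np

# Illustration of Eq. (eq:tangent): an isotropic cloud of displacements dx,
# pushed forward by a Jacobian with two different expanding rates, aligns
# with the most unstable direction.  J is a toy stand-in.

rng = np.random.default_rng(3)
J = np.diag([100.0, 5.0])                # two unstable rates, lambda1 > lambda2
dx = rng.standard_normal((1000, 2))      # isotropic cloud of displacements
dx_tau = dx @ J.T                        # evolved displacements

# fraction of evolved vectors within 10 degrees of the leading axis
angles = np.abs(np.arctan2(dx_tau[:, 1], dx_tau[:, 0]))
angles = np.minimum(angles, np.pi - angles)
aligned = np.mean(angles < np.radians(10.0))
print(f"{aligned:.0%} of evolved vectors align with the leading direction")
```

For an isotropic cloud the same fraction would be only $10/90\approx11\%$; after the push-forward the large ratio of stretching rates concentrates most vectors near the leading axis.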
This poses a major problem because this direction (a one-dimensional object) may not intersect any region of the phase space with higher escape time for a given point $\boldsymbol{x}$, and this situation leaves the algorithm stuck at that point. We expect this to become more dramatic with increasing dimension, as the one-dimensional search volume becomes increasingly small in comparison to the total phase-space dimension $d$. \begin{figure}[bt!] \centering \includegraphics[width=\columnwidth]{FIG_4.pdf} \caption{ Anisotropic proposals in $\boldsymbol{x}$ lead to isotropic points in $\boldsymbol{x}_\tau=F^\tau(\boldsymbol{x})$. The figure represents two stretching directions of a phase space (red arrow, associated with $\lambda_1$; green arrow, associated with $\lambda_2$). Proposals (gray regions on the left) and their corresponding regions after $\tau = \tau(\boldsymbol{x})$ iterations are shown. After $\tau$ iterations, $\boldsymbol{x}_\tau$ ($\bullet$) is in the exit set (blue region). The isotropic proposal fails when the target region with $\tau' > \tau$ (green rectangle) does not intersect the one-dimensional most-unstable direction. Our anisotropic proposal in $\boldsymbol{x}$ is constructed so that it is isotropic in $\boldsymbol{x}_\tau\equiv F^\tau(\boldsymbol{x})$. This maximizes the search dimension, thus improving the probability of finding higher escape times. } \label{fig.explanation} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{FIG_5.pdf} \caption{A self-similar distribution of escape times~$\tau$ is observed through anisotropic magnifications of the phase space of system~(\ref{eq.henon}). Starting from the $x_1,x_2$ cut of the phase space for $N=2$ -- in panel (a) -- we obtain a similar (rotated) distribution after performing an anisotropic magnification -- in panel (b) -- while the isotropic magnification leads to a distorted picture -- in panel (c).
A similar picture is seen also in a tiny portion of the phase space for $N=3$ after an anisotropic magnification, as shown in panel (d).} \label{fig.5} \end{figure*} Our main idea is to use an anisotropic perturbation $\boldsymbol{\delta}$ such that, by construction, its evolution after $\tau$ iterations, $\delta \boldsymbol{x}_\tau$, is isotropic on the unstable directions, see Fig.~\ref{fig.explanation}. We can use Eq.~(\ref{eq:tangent}) to construct such a perturbation because, for a small perturbation $\delta \boldsymbol{x}$, we have \begin{equation} \delta \boldsymbol{x} = J^{-1} \delta \boldsymbol{x}_\tau. \label{eq:anisotropic_bad} \end{equation} However, we should not use ${\bf J}^{-1}$ directly because it contains a mix of unstable and stable directions. Specifically, in its singular value decomposition ${\bf J}={\bf U\Sigma V}^T$, where ${\bf U}$ and ${\bf V}$ are unitary matrices and ${\bf \Sigma}$ is a diagonal matrix, ${\bf \Sigma}$ contains entries larger than 1 (corresponding to unstable directions) and smaller than 1 (corresponding to stable directions). Because ${\bf J}^{-1} = {\bf V\Sigma}^{-1} {\bf U}^T$, Eq.~(\ref{eq:anisotropic_bad}) would generate a proposal $\delta \boldsymbol{x}$ with extremely large components along the stable directions, due to the diagonal values of ${\bf \Sigma}^{-1}$ larger than 1, which would violate the approximation that $\delta \boldsymbol{x}$ is small. In order to obtain an isotropic proposal along all unstable directions of the $\tau$-th iteration of the map, we use a modified inverse of ${\bf J}$ that considers only the unstable directions. This is achieved as \begin{equation} \delta \boldsymbol{x} = {\bf V}({\bf \Sigma}^+)^{-1} {\bf U}^T \delta \boldsymbol{x}_\tau, \label{eq:anisotropic} \end{equation} where the diagonal matrix ${\bf \Sigma}^+$ is constructed by setting to zero the entries of ${\bf \Sigma}$ smaller than 1, and $({\bf \Sigma}^+)^{-1}$ is understood as the pseudo-inverse (zeroed entries remain zero).
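The construction of Eq.~(\ref{eq:anisotropic}) can be sketched in a few lines of numpy; the toy diagonal Jacobian below is only for illustration:

```python
import numpy as np

# Sketch of the anisotropic proposal, Eq. (eq:anisotropic): given the
# Jacobian J of F^tau at x, draw an isotropic delta_x_tau and pull it back
# through the unstable directions only.  Singular values < 1 (stable
# directions) are discarded, i.e., (Sigma^+)^{-1} is a pseudo-inverse.

def anisotropic_proposal(J, scale, rng=np.random.default_rng(0)):
    U, s, Vt = np.linalg.svd(J)
    s_plus_inv = np.where(s > 1.0, 1.0 / s, 0.0)  # invert unstable directions only
    dx_tau = scale * rng.standard_normal(J.shape[0])  # isotropic at time tau
    return Vt.T @ (s_plus_inv * (U.T @ dx_tau))

# Toy check with J = diag(9, 0.1): only the expanding direction contributes,
# and its component is contracted by a factor 1/9.
J = np.diag([9.0, 0.1])
delta = anisotropic_proposal(J, scale=1.0)
print(delta)
```

In the second, contracting direction the pulled-back proposal is exactly zero, which is the desired behaviour: the search effort is spent only along the unstable directions.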
The diagonal elements of ${\bf \Sigma}^+$ quantify the strength of the divergence along the directions defined by the columns of ${\bf U}$ under forward iterations. However, $\log(({\bf \Sigma}^+)_{ii})/\tau$ does not provide a good approximation of the (finite-time) Lyapunov exponents. A good approximation of the (finite-time $\tau_{cs}$) Lyapunov exponents and covariant Lyapunov vectors is obtained by storing the Jacobian matrix of a piece of the trajectory in the interval $[\tau/2 - \tau_{cs}, \tau/2 + \tau_{cs}]$~\cite{LaiTamasBook,Note1}. By finding long escape times $\tau$, our method allows us to select large $\tau_{cs}$ (i.e., segments of the orbit that stay inside the system {\it long enough}) to allow for the computation of the spectrum of Lyapunov exponents and their associated covariant vectors~\cite{Kuptsov2012}. A comparison between isotropic and anisotropic magnifications of the phase space of system~(\ref{eq.henon}) appears in Fig.~\ref{fig.5}. The efficiency of our numerical simulations using the anisotropic method is summarized in Fig.~\ref{fig.IV}. They confirm that the cost $C(\tau)$ is independent of $\tau$ for up to $N=12$, and that the method is more efficient than the stagger and the isotropic proposals. The results also indicate that the cost increases exponentially with increasing dimension, a consequence of the increasing search space. This method additionally requires computing the product of Jacobian matrices and a singular value decomposition, both of which have a cost that increases with $d$. Nevertheless, our method allows us to find trajectories with very high escape times using fewer computational resources than the previous approaches. \begin{figure} \centering \includegraphics[width=\columnwidth]{FIG_6.pdf} \caption{The cost of the anisotropic method does not increase with $\tau$. (a) The average cost, computed with the same parameters as in Fig.~\ref{fig.III}, for the anisotropic method (blue) for 3 different phase-space dimensions, together with the stagger and adaptive methods in 6D.
Inset shows how the method scales with increasing phase-space dimension.} \label{fig.IV} \end{figure} \section{Summary and Conclusions\label{sec.conclusions}} Chaotic saddles are a generic feature of open non-linear dynamical systems and are expected to appear in arbitrarily large dimensions. For instance, they are known to appear in high-dimensional systems such as delay differential equations~\cite{Taylor2007} and spatially-extended systems~\cite{Rempel2007,Tel2008}. Finding long-living trajectories that approximate such chaotic saddles is a critical requirement for studying them numerically. The computational difficulty of this task is that the escape-time function $\tau(\boldsymbol{x})$ is extremely rough and that the phase-space volume with escape time $\tau'>\tau$ decays exponentially with $\tau$. The state-of-the-art method is the stagger method, which searches for trajectories using an isotropic and power-law distributed proposal, intended to cover all the potential scales of the escape-time function. For low-dimensional systems, we showed that the stagger method can be improved by using a proposal distribution with a $\tau$-dependent length scale (governed by the largest Lyapunov exponent, Eq.~(\ref{eq.rlambda})). We also showed that an adaptive procedure can be used when the Lyapunov exponent is unknown (Eq.~(\ref{eq.sigma})). For high-dimensional systems, we found that both the stagger and the adaptive methods become increasingly inefficient for increasing $\tau$. We presented an explanation of why this is so, based on general arguments about dimensionality and the dynamics of the tangent space. This motivated the main result of our paper: the introduction of an anisotropic proposal distribution that considerably improves the efficiency of searches for high-dimensional chaotic saddles. We expect our results to bring new insights into the study of chaotic saddles in high-dimensional (spatially-extended) systems and also to be applicable in cases beyond the ones discussed above.
For instance, proposals that increase the escape time are also required in the more general problem of sampling the phase space of open chaotic systems~\cite{Leitao2013}. Our results and methods also remain valid in continuous-time systems: while the application of the adaptive method is straightforward, for the anisotropic method our singular-value-based filter (used to construct Eq.~(\ref{eq:anisotropic})) has to be applied to the Jacobian matrix of the set of ordinary differential equations (see, e.g., Ref.~\cite{Kuptsov2012}). \section*{Acknowledgements} MS was supported by CAPES (Brazil). JCL was supported by FCT (Portugal), grant no. SFRH/BD/90050/2012. EGA thanks T. T\'el for helpful discussions. \bibliographystyle{apsrev4-1}
\section{Introduction} \subsection{Motivation} In the fundamental hydrodynamical setting (cf. \cite{chandr}), the gravitational hydrostatic equilibrium of a star satisfies the following equation: \begin{equation}\label{1}p(\bar \rho)_r+ 4\pi\mathcal{G}\bar \rho r^{-2}\int_0^r s^2 \bar\rho(s) ds=0 \ \ {\rm for} \ \ r>0, \end{equation} where $\bar\rho(r)$ is the density, $p(\bar \rho)$ is the pressure and $\mathcal{G}$ is the gravitational constant. We assume that the pressure satisfies \begin{subequations}\label{pcon1} \begin{align} &p\in C^1\left([0, \infty)\right)\cap C^2\left((0, \infty)\right), \ \ p'(s)>0 \ {\rm for} \ s>0, \label{pcon1-a}\\ & \lim_{s\to 0+}s^{-4/3}p(s)=0, \ \ \lim_{s\to \infty} s^{-4/3}p(s)=\mathcal{K} \label{pcon1-b} \end{align}\end{subequations} for either $ \mathcal{K}\in (0, \infty)$ or $\mathcal{K}=\infty$. There are two typical classes of stars: {\em polytropes}, satisfying \begin{equation}\label{polytropic} p(\rho)=\kappa \rho^\gamma \ \ \textrm{with positive constants $\kappa$ and $\gamma$}, \end{equation} and {\em white dwarfs}, satisfying \eqref{pcon1} with $ \mathcal{K}\in (0, \infty)$. For a white dwarf, the gravity is balanced by electron degeneracy pressure (which does not depend on the temperature of the white dwarf), and the pressure $p$ as a function of the density $\rho$ is given through the following implicit relation (cf. \cite{chandr}): $$ p(x)=\Gamma_1 (x(2x^2-3)(x^2+1)^{1/2}+3\sinh^{-1} x) \ \ {\rm and} \ \ \rho(x)=\Gamma_2 x^3 $$ for $x\ge 0$, which satisfies the following asymptotics (cf. \cite{chandr}): \begin{equation*}p(\rho)=\left\{\begin{split} &c_1\rho^{4/3}-c_2\rho^{2/3}+\cdots, \quad \rho\rightarrow \infty,\\ &d_1\rho^{5/3}-d_2\rho^{7/3}+O( \rho^3), \quad \rho\rightarrow 0,\\ \end{split}\right. \end{equation*} where $\Gamma_1$, $\Gamma_2$, $c_1, c_2, d_1, d_2$ are positive physical constants. The existence, uniqueness and properties of gravitational hydrostatic equilibria with prescribed total mass are summarized (cf.
\cite{ab, liebyau, LSmoller, Rein}) as follows: \begin{proposition}\label{prop1} Suppose that the pressure satisfies \eqref{pcon1}. There exists a constant $M_c$, satisfying $M_c\in (0, \infty)$ if $\mathcal{K}\in (0, \infty)$ and $M_c=\infty$ if $\mathcal{K}=\infty$, such that if $M\in (0,M_c)$, then there exist a unique $\bar R>0$ and a unique solution $\bar \rho(r)$ to problem \eqref{1} satisfying $\bar\rho(r)=0$ for $r\ge \bar R$, $\bar\rho_r(0)=0$, $\bar\rho_r(r)\in (-\infty, 0)$ for $r\in(0,\bar R)$ and $4\pi \int_0^{\bar R} s^2 \bar\rho(s)ds=M$. In this case, $\bar \rho$ is a minimizer of the energy functional \begin{equation}\label{energy} E(\rho)=\int_{\mathbb{R}^3} \mathcal{A}(\rho) dx + 2^{-1}\int_{\mathbb{R}^3}\rho \Psi dx \end{equation} subject to the total mass constraint $\int_{\mathbb{R}^3} \rho dx=M$ in the class of nonnegative functions, where $$\mathcal{A} (\rho)=\rho\int_0^\rho s^{-2}p(s) ds \ \ {\rm and} \ \ \Psi(\rho)(x)=-\mathcal{G}\int_{\mathbb{R}^3} \frac{\rho (y)}{|x-y|}dy. $$ The first term on the right-hand side of \eqref{energy} represents the potential energy, and the second stands for the gravitational energy. \end{proposition} By virtue of Pauli's exclusion principle and the uncertainty principle, the stellar material of a white dwarf exerts a ground-state pressure, which is the sole local balance for the gravitational force in a non-rotating white dwarf, since there is no more nuclear fuel to burn to supply additional thermal and radiation pressure gradients. In the famous Chandrasekhar theory, gravitational collapse occurs when the total mass exceeds a critical mass, the ``Chandrasekhar limit'' (cf. \cite{chandr, liebyau, wein}), which is $M_c$ in Proposition \ref{prop1}. The problem of gravitational collapse for white dwarfs was formulated in \cite{4,8,16}, leading to the equation for the density, equation \eqref{1}, called the ``Chandrasekhar equation'' in \cite{liebyau}.
The gravitational collapse was predicted in \cite{chandr,4} by this equation at some critical mass, which was verified in \cite{liebyau} as the limit of quantum mechanics. It was also proved in \cite{liebyau} that, for a white dwarf, there exists $M_c\in(0,\infty)$ such that for each $M\in (0,M_c)$, there exists a unique radially symmetric decreasing minimizer $\rho_M$ for the energy functional \eqref{energy} with the total mass constraint $\int_{\mathbb{R}^3} \rho_M dx=M$, which satisfies the Euler-Lagrange equation for some Lagrange multiplier $\mu_M$: $$ \mathcal{A}'(\rho_M(x))=\{-\Psi(\rho_M)(x)-\mu_M\}_+, $$ where $\{f(x)\}_+=\max\{f(x), 0\}$, and $\mu_M\to \infty$ as $M\to M_c$. When the total mass $M$ is less than the ``Chandrasekhar limit'' $M_c$, the dynamical stability of the gravitational hydrostatic equilibrium (the stationary solution) becomes a very interesting and important problem for white dwarfs. In this direction, a conditional nonlinear Lyapunov type stability theory was established in \cite{LSmoller} by use of a variational approach, assuming the existence of global solutions of the Cauchy problem for the three-dimensional compressible Euler-Poisson equations. (The same type of nonlinear stability results can be found in \cite{Rein} for gaseous stars with $\mathcal{K}=\infty$ in \eqref{pcon1-b}, and for rotating stars in \cite{LSmoller, LSmoller2}.) However, a rigorous mathematical justification of the nonlinear large time asymptotic stability of the gravitational hydrostatic equilibrium (the stationary solution) for white dwarfs is missing, even on the linear level. This is also true for polytropes with $\gamma\ge 2$, though this type of stability was proved in \cite{LXZ2, LZeng} for viscous polytropic stars of \eqref{polytropic} with $4/3<\gamma<2$. The main purpose of this article is to address these issues.
Precisely, we consider the vacuum free boundary problem for compressible Navier-Stokes-Poisson equations with a very general equation of state including white dwarfs and polytropes with all $\gamma>4/3$ in three dimensions with spherical symmetry, and prove that if the initial data are small perturbations of the gravitational hydrostatic equilibrium with the same total mass, then there exists a unique strong solution to the vacuum free boundary problem which converges to the gravitational hydrostatic equilibrium as time goes to infinity, with the detailed convergence rates. The vacuum boundary considered in the present work satisfies the {\em physical vacuum} condition that $\sqrt{p'(\rho)}$ is $C^{1/2}$-H\"{o}lder continuous near vacuum states. We remark that the physical vacuum has been attracting much attention and interest in the study of the compressible fluids (cf. \cite{CHWW}-\cite{CS2}, \cite{DuanQ}-\cite{FZ2}, \cite{GL}-\cite{hlz}, \cite{jang3}-\cite{jang6}, \cite{tpliudamping}-\cite{LY}, \cite{LXZ1}-\cite{LZ}, \cite{YT}-\cite{Zeng2}). Besides white dwarfs, the results obtained in this paper also apply to viscous polytropic stars of \eqref{polytropic} with $\gamma>4/3$, as mentioned earlier. The value of $4/3$ is critical for the stability of polytropic stars. In fact, the energy (cf. \cite{chandr, wein}) for a polytrope with $\gamma>6/5$ at the gravitational hydrostatic equilibrium is given by $$E_{\gamma}=-\frac{3\gamma-4}{5\gamma-6}\frac{\mathcal{G} M^2}{\bar R}, $$ where $M$ is the total mass and $\bar R$ is the radius of the star. It should be noted that for a white dwarf, the exponent $4/3$ is the critical case for the pressure $p(\rho)$ as $\rho\to \infty$, which is in sharp contrast to the case considered in \cite{LXZ2, LZeng}. The present work does not cover the case for supermassive stars, which are polytropes with $\gamma=4/3$ supported by the pressure of radiation, also called extreme relativistic degenerate stars (cf.
\cite{Shapiro, wein}). Indeed, for the non-rotating gravitational hydrostatic equilibrium of supermassive stars, the total energy $E_{4/3}=0$; quoting (\cite{wein}, p. 327), ``the polytrope with $\gamma= 4/3$ is trembling between stability and instability'', and it is remarked there that general relativity is needed to settle this stability problem. \subsection{Formulation of the problem and main theorems} The evolution of the free boundary, which is the interface between the gas and the vacuum, for a viscous gaseous star with spherical symmetry can be modeled by the following free boundary problem: \begin{subequations}\label{sphereqn} \begin{align} &(r^2 \rho)_t+(r^2\rho u)_r=0 & \text{in}\quad(0, R(t)), \label{sphereqn-1}\\ &\rho(u_t+uu_r)+p_r+ \frac{4\pi \mathcal{G} \rho}{ r^{2}}\int_0^r s^2 \rho(s, t) ds =\nu\left[\frac{(r^2 u)_r}{r^2}\right]_r & \text{in}\quad(0, R(t)), \label{sphereqn-2}\\ &\rho >0 & \text{in} \quad[0, R(t)),\\ &\rho (R(t),t)=0 , \ \ u(0,t)=0, & \\ &\frac{4}{3}\nu_1 \left(u_r-\frac{u}{r}\right) +\nu_2\left(u_r+2\frac{u}{r}\right)=0 &{\rm for} \ r=R(t), \\ & \dot R(t)=u(R(t), t) \ \ {\rm with} \ \ R(0)=R_0, & \\ &(\rho, u)=(\rho_0, u_0) & \text{on}\quad(0, R_0). \end{align} \end{subequations} Here $(r,t)$, $ \rho$, $u $ and $p$ denote, respectively, the space and time variables, the density, the velocity and the pressure; the positive constants $\nu_1$, $\nu_2$ and $\mathcal{G}$ represent, respectively, the shear viscosity, the bulk viscosity and the gravitational constant, and $\nu=4\nu_1/3+\nu_2$; and $R(t)$ is the free boundary of the gaseous star. This formulation can be found in \cite{LXZ2}. Let $(\rho,u,R(t))=(\bar\rho(r), 0, \bar R)$ be the stationary solution to \eqref{sphereqn} with the total mass $M=4\pi\int_0^{\bar R} s^2 \bar\rho(s)ds\in (0,M_c)$ as stated in Proposition \ref{prop1}.
Then it holds that for $r\in (0, \bar R)$, \begin{subequations}\label{yo.1} \begin{align} &(p(\bar\rho))_r=-r \bar\rho \phi, \ \ \bar\rho_r\in (-\infty,0), \label{pxphi}\\ & \bar\rho_0:= \bar\rho(0) > \bar\rho(r)> \bar\rho(\bar R)= 0, \label{ibm} \end{align}\end{subequations} where $ \phi(r)= {4\pi \mathcal{G}}{r^{-3}}\int_0^r s^2 \bar\rho(s)ds \in \left[ {M\mathcal{G}}{\bar R^{-3}}, \ {4\pi\mathcal{G} \bar\rho_0}/{3}\right] $. Clearly, it holds that for $r\in (0, \bar R)$, \begin{align}\label{irhor} 2^{-1} {\bar R}^{-2} M \mathcal{G} (\bar R-r)\le i(\bar\rho(r))\le ({4\pi \mathcal{G}\bar\rho_0\bar R}/{3}) (\bar R-r), \end{align} where the enthalpy $i$ is defined by $i(s)=\int_0^s \tau^{-1}{p'(\tau)}d\tau$ for $s\ge 0$. In order to capture the behavior \eqref{irhor} of the stationary solutions near the vacuum boundary, we assume that the initial density satisfies the following condition: \begin{equation}\label{inidens} \begin{split} &\rho_0(r)>0 \ \ {\rm for} \ \ r\in [0,R_0), \ \ M=4\pi\int_0^{R_0} s^2 \rho_0(s)ds \in (0, \infty), \\ & \rho_0(R_0)=0, \ \ \left(i(\rho_0)\right)_r \in (-\infty, 0) \ \ {\rm at}\ \ r=R_0. \end{split}\end{equation} That is, $i(\rho_0(r))$ is comparable to $R_0-r$ when $r$ is close to $R_0$. Under the additional assumption: \begin{align} \frac{4}{3}\le \inf_{0<s\le \bar\rho_0}\frac{sp'(s)}{p(s)} \ \ {\rm and} \ \ \bar{\gamma}:=\lim_{s\to 0+}\frac{sp'(s)}{p(s)} \in \left(\frac{4}{3},\ \infty\right), \label{pcon2} \end{align} we prove in this article the global existence of strong solutions to the vacuum free boundary problem \eqref{sphereqn} and \eqref{inidens} with a general class of pressure laws including white dwarfs and polytropes with $\gamma>4/3$, as well as the nonlinear asymptotic stability of the gravitational hydrostatic equilibria, extending the results obtained in \cite{LXZ2,LZeng} for polytropes with $4/3<\gamma<2$. We use the Lagrangian coordinates to fix the domain, as was done in \cite{CS1, CS2}.
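As a sanity check on the enthalpy $i$ defined above: for the polytropic law \eqref{polytropic} it has the closed form $i(s)=\kappa\gamma s^{\gamma-1}/(\gamma-1)$, so that $s\,i(s)=\gamma p(s)/(\gamma-1)$, which makes the linear vanishing rate \eqref{irhor} at the vacuum boundary explicit. The following minimal numerical sketch (the constants $\kappa=1$, $\gamma=5/3$ and the sample point are arbitrary illustrative choices, not taken from the paper) compares a direct quadrature of the defining integral with this closed form:

```python
from scipy.integrate import quad


def enthalpy(dp, s):
    """Enthalpy i(s) = int_0^s p'(tau)/tau dtau; the integrand has an
    integrable singularity at tau = 0, which quad handles."""
    val, _ = quad(lambda t: dp(t) / t, 0.0, s)
    return val


# illustrative polytropic law p = kappa * s**gamma
kappa, gamma = 1.0, 5.0 / 3.0
p = lambda s: kappa * s ** gamma
dp = lambda s: kappa * gamma * s ** (gamma - 1.0)

s = 0.7
i_num = enthalpy(dp, s)
i_exact = kappa * gamma / (gamma - 1.0) * s ** (gamma - 1.0)  # closed form
```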
Let $x\in I=(0, \bar R)$ be the reference variable; we define the Lagrangian variable $r(x, t)$ by \begin{align*} &r_t(x, t)=u(r(x, t), t) \ \text{for}\ t>0,\ \ r(x, 0)=r_0(x) \ {\rm for} \ x\in I, \end{align*} where $r_0$ is a diffeomorphism from $\bar I$ to $[0, R_0]$ defined by $\int_0^{r_0(x)}s^2 \rho_0(s)ds =\int_0^x s^2 \bar\rho(s) ds$. Clearly, $r_0^2(x)\rho_0(r_0(x))r_0'(x)=x^2 \bar\rho(x)$ for $x\in \bar I$. Then, it follows from \eqref{sphereqn-1} that \begin{equation}\label{r0x} \rho(r(x, t), t)=\frac{ r_0^2(x)\rho_0(r_0(x))r_{0x}(x)}{r^2(x, t)r_x(x, t)} =\frac{x^2\bar\rho(x)}{r^2(x, t)r_x(x, t)}. \end{equation} We set $ v(x, t)=u(r(x, t), t) $ and use \eqref{r0x} to rewrite problem \eqref{sphereqn} as the following initial boundary value problem: \begin{subequations}\label{fixedp} \begin{align} &\bar\rho\frac{x^2}{r^2}v_t +\left[p\left(\frac{x^2\bar\rho}{r^2r_x}\right) \right]_x+4\pi \mathcal{G}\frac{x^2}{r^4}\bar\rho\int_0^x y^2\bar\rho dy=\nu \left(\frac{(r^2 v)_x}{r^2r_x}\right)_x & \text{in} &\ I\times(0,T], \label{fixedp-a}\\ &v(0, t)=0 \ {\rm and} \ \mathscr{B}(\bar R, t)=0 & \text{on} & \ (0, T],\label{fixedp-b}\\ &(r, v)(x, 0)=(r_0(x), u_0(r_0(x))) & \text{on}& \ I,\label{fixedp-c} \end{align}\end{subequations}\ where \begin{equation}\label{mb} \mathscr{B}:=\frac{4}{3}\nu_1\left(\frac{v_x}{r_x}-\frac{v}{r}\right) +\nu_2\left(\frac{v_x}{r_x}+2\frac{v}{r}\right) =\frac{4}{3}\nu_1\frac{r}{r_x}\left(\frac{v}{r}\right)_x +\nu_2\frac{(r^2v)_x}{r_xr^2}. \end{equation} In this setting, the moving vacuum boundary for problem \eqref{sphereqn} is given by $ R(t)=r(\bar R, t) $. The main result of this paper is stated as follows. We set \begin{equation}\label{E}\begin{split} \mathfrak{E}(t)=& \left\|(r_x-1, v_x)(\cdot, t)\right\|^2_{L^\infty(I)} +\left\|\bar\rho^{1/2}v_t(\cdot, t)\right\|_{L^2(I)}^2 \\ & +\left\|\bar\rho^{-1/2} p(\bar \rho)\left((r/x)_x,\ r_{xx}\right)(\cdot, t)\right\|_{L^2(I)}^2.
\end{split}\end{equation} \begin{theorem} \label{theo1} Assume that $p$ satisfies \eqref{pcon1} and \eqref{pcon2}, $\bar \rho$ is a stationary solution with the total mass $M=4\pi \int_0^{\bar R} s^2 \bar\rho(s)ds\in (0, M_c)$ as stated in Proposition \ref{prop1}, the initial density $\rho_0$ satisfies \eqref{inidens} and $\int_0^{R_0} s^2 \rho_0(s)ds=\int_0^{\bar R} s^2 \bar\rho(s)ds$, and the compatibility conditions $v_0(0)=0$ and $\mathscr{B}(\bar R, 0)=0$ hold. There exists a constant $\bar\delta>0$ such that if $\mathfrak{E}(0)\le \bar\delta$, then problem \eqref{fixedp} admits a solution in $I\times[0,\infty)$ with $$ \mathfrak{E}(t)\leq C \mathfrak{E}(0),\ \ t\geq 0, $$ for some positive constant $C$ independent of $t$. \end{theorem} As a corollary of Theorem \ref{theo1}, we have the nonlinear asymptotic stability of the stationary solution to problem \eqref{sphereqn} as follows: \begin{theorem}\label{theo2} Suppose that the assumptions in Theorem \ref{theo1} hold. There exists a constant $\bar\delta>0$ such that if $\mathfrak{E}(0)\le \bar\delta$, then problem \eqref{sphereqn} admits a global solution $(\rho,u, R(t))$ for $t\in [0,\infty)$ satisfying $R\in W^{1,\infty}([0,\infty))$ and the following estimates: for any $\theta\in \left(0, 1-5/(4\bar\gamma)\right]$ and $l\in (0, \bar R)$, there exist positive constants $c$, $C$, $C(\theta^{-1})$ and $C(\theta^{-1},l^{-1})$ independent of $x$ and $t$ such that for $t\in[0, \infty)$, \begin{subequations}\begin{align} &c^{-1}\left(R(t)-r(x,t)\right) \le i\left( \rho(r(x,t), t) \right) \le c \left(R(t)-r(x,t)\right), \ \ x\in I, \label{thest0}\\ & \bar\rho^{-2}(x) |\rho(r(x, t), t)- \bar\rho(x)|^2 \le C \mathfrak{E}(0),\ \ x\in I, \label{thest1}\\ & \Upsilon(1+t)^{2- {2}/{\bar\gamma}-3\theta/2} |r(x,t)-x|^2 + (1+t)^{2\zeta-1/2} \left\{ (1-\Upsilon) |r(x,t)-x|^2 \right.\notag\\ &\ \ \left. +u^2(r(x,t),t)\right\} +(1+t)^{2\zeta-1}\left\{|u_r(r(x, t), t)|^2+|r^{-1}u(r(x, t), t)|^2\right.\notag\\ &\ \ \left.
+\Upsilon \bar\rho^{(3\bar\gamma-6)/2}(x) |\rho(r(x, t), t)- \bar\rho(x)|^2\right\}\le C(\theta^{-1}) \mathfrak{E}(0) ,\ \ x\in I, \label{thest2}\\ &(1+t)^{\min\left\{\frac{4-\bar\gamma}{2\bar\gamma} -\theta\left(\frac{\bar\gamma-1}{2\bar\gamma} +\frac{4-\bar\gamma-\theta(\bar\gamma-1)} {4\bar\gamma-2}\right), \ 2\zeta-1 \right\}} |\rho(r(x, t), t)- \bar\rho(x)|^2\notag\\ &\ \ \le C(\theta^{-1}) \mathfrak{E}(0), \ \ x\in I, \ \ {\rm if} \ 2 < \bar\gamma<4 \ {\rm and}\ \theta<({4-\bar\gamma})/({\bar\gamma-1}), \label{thest3}\\ & (1+t)^{2\zeta-1}\left(|r_x(x, t)-1|^2 +|x^{-1}r(x, t)-1|^2\right)\notag\\ &\ \ \le C(\theta^{-1},l^{-1})\mathfrak{E}(0) , \ \ x\in [0, \bar R -l].\label{thest4} \end{align}\end{subequations} Here $\zeta=1- {1}/({2\bar\gamma}) - \theta/2$, and $\Upsilon=1 $ for $\bar\gamma\le 2$ and $\Upsilon=0 $ for $\bar\gamma> 2$. \end{theorem} Due to the high degeneracy of system \eqref{sphereqn} caused by the behavior \eqref{irhor} near the vacuum boundary, as pointed out in \cite{LXZ2, LZeng} for viscous polytropic stars of \eqref{polytropic} with $4/3<\gamma<2$, it is extremely challenging to establish the global-in-time regularity of higher-order derivatives of solutions uniformly up to the vacuum boundary. Compared with the polytropes considered in \cite{LXZ2, LZeng}, it is far more intricate to build up the higher-order uniform-in-time regularity up to the vacuum boundary for the general pressure law considered in this paper. \subsection{Review of related results} Besides the results already mentioned earlier, we review some related results concerning the mathematical study of white dwarfs and the stability of gaseous stars. The existence of steady solutions of rotating and non-rotating white dwarfs was proved in \cite{ab}, which was extended in \cite{lions,FT}. For a constant angular velocity, the existence and properties of rotating white dwarfs, such as bounds on the diameters and the number of connected components, can be found in \cite{CL,lyy}.
For related properties of free boundaries for steady solutions, we refer to \cite{CF}. In \cite{linss}, the linearized stability and instability of gaseous stars were investigated, including polytropes with $\gamma\in (4/3, 2)$ and $\gamma\in (6/5, 4/3)$, respectively; in this regard, a turning point principle was proved in the recent work \cite{linzeng} for stars modeled by Euler-Poisson equations. The nonlinear instability was proved for polytropes with $\gamma=6/5$ in \cite{jang1} and $6/5<\gamma<4/3$ in \cite{jang4}, respectively, in the framework of Euler-Poisson equations; and for polytropes with $6/5<\gamma<4/3$ in \cite{jang6} in the framework of Navier-Stokes-Poisson equations. An instability result for Euler-Poisson equations for polytropes with $\gamma=4/3$ was identified in \cite{DLYY}, since a small perturbation can cause part of the mass to go off to infinity. For Euler-Poisson equations modelling polytropic stars with $p(\rho)=\rho^{\gamma}$, a class of global expanding solutions was constructed in \cite{HJ2} for $\gamma=4/3$ and in \cite{HJ1} for other values of $\gamma$; the continued gravitational collapse was demonstrated in \cite{GHJ2} for $1<\gamma<4/3$ (see also the further study in \cite{HY}), and the Larson-Penston self-similar gravitational collapse was proved in \cite{GHJ2} for $\gamma=1$. The global existence of weak spherically symmetric solutions in multi-dimensions was proved in \cite{CHWY} for a certain range of $\gamma$. The rest of this paper is devoted to the proofs of Theorems \ref{theo1} and \ref{theo2} by establishing the uniform-in-time higher-order regularity up to the vacuum boundary, and the detailed decay rates of perturbations.
\section{Proof of Theorems \ref{theo1} and \ref{theo2} } The local existence of solutions to problem \eqref{fixedp} in the functional space $\mathfrak{E}(t)$ can be obtained easily by combining the approximation techniques in \cite{jang3,LXZ2} and the following a priori estimates stated in Proposition \ref{22.1.4}. Indeed, the a priori estimates obtained in Proposition \ref{22.1.4} are sufficient for the local existence theory, at least for small $\mathfrak{E}(0)$. Hence, Theorem \ref{theo1} follows from a standard continuation argument with the following estimates: \begin{proposition}\label{22.1.4} Suppose that the assumptions in Theorem \ref{theo1} hold. Let $v$ be a solution to problem \eqref{fixedp} in the time interval $[0, T]$ with $r(x, t)=r_0(x)+\int_0^t v(x,\tau) d\tau$ satisfying $\mathfrak{E}(t)<\infty$ on $[0, T]$ and the a priori assumption: \begin{subequations}\label{21apri} \begin{align} &\left\|\left(r_x-1, \ {r}/{x}-1\right) (\cdot, t) \right\|_{L^\infty}\le \varepsilon_0, \ \ t\in [0,T],\label{priori1}\\ & \left\|\left(v_x, \ {v}/{x}\right) (\cdot, t) \right\|_{L^\infty}\le 1 , \ \ t\in [0,T] \label{priori2} \end{align} \end{subequations} for some suitably small fixed number $\varepsilon_0\in (0, 1/8]$ independent of $t$. Then there exist positive constants $C$, $C(\theta^{-1})$ and $C(\theta^{-1},l^{-1})$ independent of $t$ such that for $t\in [0,T]$, \begin{subequations}\begin{align} & \mathfrak{E}(t)+ \left\| ({r}/{x}-1, v/x)(\cdot,t)\right\|_{L^\infty}^2 \le C \mathfrak{E}(0), \label{22.1.11-a}\\ &(1+t)^{2\zeta} \left\| (x^{1/2} v )(\cdot,t)\right\|^2_{L^\infty} + (1+t)^{2\zeta-1/2} \left\{ \left\| v (\cdot,t)\right\|^2_{L^\infty} \right.\notag \\ &\left.
+ (1-\Upsilon) \left\|(r-x)(\cdot,t)\right\|^2_{L^\infty} \right\}+ \Upsilon(1+t)^{2- {2}/{\bar\gamma}-3\theta/2} \left\|(r-x)(\cdot,t)\right\|^2_{L^\infty} \notag\\ & + (1+t)^\zeta \left\| (x^{3/2} v_x)(\cdot,t)\right\|^2_{L^\infty} +(1+t)^{2\zeta-1}\left\{\left\|(v_x,v/x) (\cdot, t)\right\|_{L^\infty}^2 \right.\notag\\ & \left. +\Upsilon \left\| \left(\bar\rho^{(3\bar\gamma-2)/4} \left(x^2/(r^2 r_x)-1\right)\right)(\cdot,t)\right\|_{L^\infty}^2 \right\}\le C(\theta^{-1}) \mathfrak{E}(0), \label{22.1.11-b}\\ &(1+t)^{\min\left\{\frac{4-\bar\gamma}{2\bar\gamma} -\theta\left(\frac{\bar\gamma-1}{2\bar\gamma} +\frac{4-\bar\gamma-\theta(\bar\gamma-1)} {4\bar\gamma-2}\right), \ 2\zeta-1 \right\}} \left\| \left(\bar\rho \left(x^2/(r^2 r_x)-1\right)\right)(\cdot,t)\right\|_{L^\infty}^2 \notag\\ & \le C(\theta^{-1}) \mathfrak{E}(0), \ \ {\rm if} \ 2 < \bar\gamma<4 \ {\rm and}\ \theta<({4-\bar\gamma})/({\bar\gamma-1}) ,\label{22.1.11-c}\\ & (1+t)^{2\zeta-1}\left(\left\|\left(r_{xx}, (r/x)_x, v_{xx}, (v/x)_x\right)(\cdot, t)\right\|_{L^2([0,\bar R-l ])}^2\right. \notag\\ & \left.+\left\|(r_x-1,r/x) (\cdot, t)\right\|_{L^\infty([0,\bar R-l ])}^2 \right) \le C(\theta^{-1},l^{-1})\mathfrak{E}(0) .\label{22.1.11-d} \end{align}\end{subequations} Here $L^\infty=L^\infty(I)$ in \eqref{21apri} and \eqref{22.1.11-a}-\eqref{22.1.11-c}, $\zeta=1- {1}/({2\bar\gamma}) - \theta/2$, and \begin{align} \Upsilon=1 \ \ {\rm for} \ \ \bar\gamma\le 2, \ \ \Upsilon=0 \ \ {\rm for} \ \ \bar\gamma > 2. \label{22.1.11-1} \end{align} Moreover, there exists a positive constant $c$ independent of $x$ and $t$ such that for $(x,t)\in I\times [0,T]$, \begin{align}c^{-1}(\bar R -x) \le i\left( \left({\bar\rho x^2 }/({r^2 r_x }) \right)(x,t)\right)\le c(\bar R -x). 
\label{22.1.11-e} \end{align} \end{proposition} Indeed, \eqref{22.1.11-a} follows from \eqref{22.1.5}; \eqref{22.1.11-b} from \eqref{12.21-da} and \eqref{12.28-9}; \eqref{22.1.11-c} from \eqref{20220111}; \eqref{22.1.11-d} from \eqref{22.1.6-1}; and \eqref{22.1.11-e} from \eqref{har-1} and \eqref{til-3}. To prove Theorem \ref{theo2}, we recall that \begin{align*} \rho(r(x, t), t)= \frac{\bar\rho(x) x^2}{r^2(x, t)r_x(x, t)},\ u (r(x, t), t)=v(x,t) \ {\rm and} \ R(t)=r(\bar R, t), \end{align*} and notice that $$ (1/2) x \le r(x,t) \le (3/2) x \ {\rm and} \ 1/2\le r_x(x,t) \le 3/2, $$ which is due to \eqref{22.1.11-a} and the smallness of $\mathfrak{E}(0)$. Estimate \eqref{thest0} follows from \eqref{22.1.11-e} and $$\frac{1}{2}(\bar R -x)\le R(t)-r(x,t)=\int_x^{\bar R} r_y(y,t)dy\le \frac{3}{2}(\bar R -x);$$ \eqref{thest1} from \eqref{22.1.11-a}; \eqref{thest2} from \eqref{22.1.11-b}, $u/r=(v/x)/(r/x)$ and $u_r=v_x/r_x$; \eqref{thest3} from \eqref{22.1.11-c}; and \eqref{thest4} from \eqref{22.1.11-d}. The rest of this section is devoted to proving Proposition \ref{22.1.4}, which consists of the preliminaries in Section \ref{22.1.4-1}, lower-order estimates in Section \ref{22.1.4-2}, and higher-order estimates in Section \ref{22.1.4-3}. \subsection{Preliminaries}\label{22.1.4-1} First, it should be noted that $$ 1/2 \le r_x \le 3/2 \ \ {\rm and} \ \ 1/2 \le r/x \le 3/2 \ \ {\rm for}\ \ (x, t)\in I\times [0,T], $$ which is due to \eqref{priori1}. We list some properties of the pressure function $p$ that will be useful in the energy estimates. It follows from \eqref{pcon1} and \eqref{pcon2} that \begin{align} & p(0)=0, \label{d-0} \\ & \lim_{s\to 0+}\frac{s^{a+1}}{p(s)}\frac{d}{ds} \left(\frac{p(s)}{s^a}\right) =\lim_{s\to 0+} \frac{sp'(s)}{p(s)}-a=\bar{\gamma}-a \label{d-1} \end{align} for any $a>0$.
If $a<\bar{\gamma}$, then there exists a positive constant $\delta_1$ such that $\frac{d}{ds} \left(s^{-a} {p(s)} \right)>0$ for $s\in (0, \delta_1]$ and $\lim_{s\to 0+} s^{-a} {p(s)} \le {\delta_1^{-a}}{p(\delta_1)}<\infty$. This, together with L'H\^{o}pital's rule, \eqref{d-0} and \eqref{pcon2}, means \begin{align} &\lim_{s\to 0+} \frac{p(s)}{s^a} =\frac{1}{a} \lim_{s\to 0+} \frac{sp'(s)}{p(s)} \frac{p(s)}{s^a}=\frac{\bar{\gamma}}{a} \lim_{s\to 0+} \frac{p(s)}{s^a}.\label{d-2} \end{align} So, \begin{align} \lim_{s\to 0+} {s^{-a}} {p(s)}=0 \ \ {\rm for} \ \ a <\bar{\gamma}. \label{d-3} \end{align} In particular, \begin{subequations}\label{d-00} \begin{align} &p'(0)=\lim_{s\to 0+} s^{-1}p(s)=0,\\ & \lim_{s\to 0+} s^{-1/6}p'(s)=\bar\gamma \lim_{s\to 0+} s^{-7/6}p(s)=0. \end{align}\end{subequations} Similarly, if $a>\bar{\gamma}$, there exists a positive constant $\delta_2$ such that $\lim_{s\to 0+} s^{-a} {p(s)} \ge {\delta_2^{-a}}{p(\delta_2)}>0$. Suppose that $\lim_{s\to 0+} s^{-a}{p(s)}<\infty$; then \eqref{d-2} yields that $\lim_{s\to 0+} s^{-a}{p(s)}=0$, a contradiction. Therefore, \begin{align} \lim_{s\to 0+} {s^{-a}} {p(s)}=\infty \ \ {\rm for} \ \ a >\bar{\gamma}. \label{d-5} \end{align} Due to \eqref{pcon1}, \eqref{d-3} and \eqref{d-5}, we have that $$ \bar{\gamma}=\sup\left\{a>0 \ \big| \ \sup_{0 < s \le \bar\rho_0}s^{-a}p(s) < \infty \right\}. $$ It follows from \eqref{pcon1} and \eqref{pcon2} that \begin{align*} 0< K_2:=\inf_{0 < s\le 2\bar\rho_0} \frac{sp'(s)}{p(s)} \le \sup_{0 < s\le 2\bar\rho_0} \frac{sp'(s)}{p(s)}=:K_3<\infty. \end{align*} Using \eqref{pcon1}, \eqref{d-0}, L'H\^{o}pital's rule and \eqref{pcon2}, one has that \begin{align} & \lim_{s\to 0+}\frac{sp''(s)}{p'(s)}= \lim_{s\to 0+}\frac{sp'(s)}{p(s)}-1 = \bar{\gamma} -1 \in \left[3^{-1}, \infty\right),\notag\\ &\sup_{0 < s\le 2\bar\rho_0} \frac{s|p''(s)|}{p'(s)}=:K_4 \in (0, \infty).
\notag \end{align} So, it holds that for $s\in (0, 2\bar\rho_0]$, \begin{align}\label{21.3.15} K_2 p(s) \le sp'(s) \le K_3 p(s) \ {\rm and} \ s|p''(s)| \le K_4 p'(s). \end{align} In view of \eqref{d-00}, we see that \begin{align} & i(0)=0, \label{21-11-2}\\ & i(s)=\int_0^s \tau^{-1}p'(\tau) d\tau =s^{-1}p(s)+ \int_0^s \tau^{-2} p(\tau) d\tau,\notag \end{align} which gives, with the aid of \eqref{pcon2}, that for $s\in (0, \bar\rho_0]$, \begin{align*} &i(s)\le s^{-1}p(s)+ \frac{3}{4} \int_0^s \tau^{-1} p'(\tau) d\tau=s^{-1}p(s)+ \frac{3}{4} i(s) . \end{align*} Then, we obtain that for $s\in (0, \bar\rho_0]$, \begin{align}\label{irho} p(s) \le s i(s) \le 4 p(s), \end{align} which, together with $p'>0$ and $i'>0$, implies that for $s\in (0, 2\bar\rho_0]$, \begin{align}\label{irho-1} p(s) \le s i(s) \le K_5 p(s), \end{align} where $K_5=\max\{4, \ 2\bar\rho_0 i(2\bar\rho_0)/p(\bar\rho_0) \}$. It follows from \eqref{irho} and \eqref{d-3} that for $a<\bar\gamma-1$, \begin{align*} \lim_{s\to 0+} \frac{i(s)}{s^a}= \lim_{s\to 0+} \frac{si(s)}{p(s)}\frac{p(s)}{s^{a+1}}\le 4\lim_{s\to 0+} \frac{p(s)}{s^{a+1}}=0, \end{align*} and from \eqref{irho} and \eqref{d-5} that for $a>\bar\gamma-1$, \begin{align*} \lim_{s\to 0+} \frac{s^a}{i(s)}= \lim_{s\to 0+} \frac{s^{a+1}}{p(s)} \frac{p(s)}{si(s)}\le \lim_{s\to 0+} \frac{s^{a+1}}{p(s)}=0. \end{align*} This means that for $s\in (0, \bar\rho_0]$, \begin{align}\label{April4} i(s)\le K_6 s^a \ {\rm when} \ a<\bar\gamma-1, \ \ s^a \le K_7 i(s) \ {\rm when} \ a>\bar\gamma-1, \end{align} where $K_6=\sup_{0<s\le \bar\rho_0} s^{-a}i(s)$ and $K_7=\sup_{0<s\le \bar\rho_0} s^ai^{-1}(s)$ are finite positive constants. We present some properties for the stationary solution $\bar\rho(x)$. 
It follows from \eqref{pxphi} and $i(\bar\rho(\bar R))=i(0)=0$ that for $x\in I$, \begin{align}\label{har-1} K_{8}(\bar R -x) \le i(\bar\rho(x))=\int_x^{\bar R} s\phi(s)ds \le K_{9} (\bar R -x), \end{align} where $K_{8}= {M}\mathcal{G}/({2 \bar R^2})$ and $K_{9}={4\pi \mathcal{G}\bar\rho_0 \bar R}/{3}$. Let $$\tilde \rho(x,t)=\bar\rho(x) \left\{1+\vartheta \left(\frac{x^2}{r^2 r_x} -1\right)\right\} \ {\rm with} \ \vartheta\in [0,1],$$ then it follows from \eqref{priori1} and the smallness of $\varepsilon_0$ that \begin{align}\label{til-1} (1/2)\bar\rho(x) \le \tilde \rho(x,t) \le (3/2) \bar\rho(x). \end{align} Notice that $|p(\tilde \rho )-p(\bar\rho)|=p'(\underline{\rho})|\tilde \rho-\bar\rho|$ with $\underline{\rho}=\vartheta_1 \tilde \rho + (1-\vartheta_1) \bar\rho$ for some constant $\vartheta_1\in(0,1)$, which implies, using \eqref{21.3.15}, $p'>0$ and \eqref{priori1}, that \begin{align*} & |p(\tilde \rho )-p(\bar\rho)| \le K_3 p(\underline{\rho}) |\underline{\rho}^{-1}(\tilde \rho-\bar\rho)|\\ & \le 8 K_3 \varepsilon_0 \max\{p(\tilde \rho ), \ p(\bar\rho)\} \le 2^{-1}\max\{p(\tilde \rho ), \ p(\bar\rho)\}. \end{align*} Thus, \begin{align} & 2^{-1}{p(\bar\rho)}\le {p(\tilde \rho )}\le 2{p(\bar\rho)}, \label{til-2}\\ &16^{-1}{i(\bar\rho)}\le {i(\tilde \rho )}\le 4K_5 {i(\bar\rho)},\label{til-3} \end{align} where $\eqref{til-3}$ follows from \eqref{irho}, \eqref{irho-1}, \eqref{til-1} and \eqref{til-2}. {\em The Hardy inequality}. Let $k>1$ be a given real number, $l$ a positive constant, and $f$ a function satisfying $\int_0^{l} x^k(f^2+|f'|^2)dx <\infty$; then it holds that \begin{equation}\label{hardy1} \int_0^{l} x^{k -2} f^2 dx \leq \bar C\int_0^{l} x^{k}\left( f^2+|f'|^2\right)dx \end{equation} for a certain constant $\bar C$ depending only on $k$ and $l$, whose proof can be found in \cite{KMP}. In fact, \eqref{hardy1} is a general version of the standard Hardy inequality: $\int_0^\infty |x^{-1}f|^2 dx\le C \int_0^\infty |f'|^2 dx$ for some constant $C$.
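The standard Hardy inequality just quoted can be illustrated numerically; the optimal constant on the half-line is $C=4$, and the test function $f(x)=x e^{-x}$ below is an arbitrary choice for which both sides are computable in closed form, namely $\int_0^\infty |x^{-1}f|^2 dx = 1/2$ and $\int_0^\infty |f'|^2 dx = 1/4$:

```python
import numpy as np
from scipy.integrate import quad

# standard Hardy inequality on (0, infinity):
#   int (f/x)^2 dx <= 4 int (f')^2 dx   (4 is the optimal constant)
f = lambda x: x * np.exp(-x)
df = lambda x: (1.0 - x) * np.exp(-x)  # f'(x)

lhs, _ = quad(lambda x: (f(x) / x) ** 2, 0.0, np.inf)  # = 1/2
rhs, _ = quad(lambda x: df(x) ** 2, 0.0, np.inf)       # = 1/4
```

The weighted version \eqref{hardy1} plays the same role near the vacuum boundary $x=\bar R$ through \eqref{hardy2}.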
As a consequence of \eqref{hardy1}, we have the following estimates: for $l\in(0, \bar R)$, \begin{equation}\label{hardy2} \int_{\bar R-l}^{\bar R} (\bar R -x)^{k-2} f^2 dx\le \bar C \int_{\bar R-l}^{\bar R} (\bar R -x)^k\left( f^2+|f'|^2\right)dx<\infty. \end{equation} {\em Notation}. Throughout the rest of this paper, $C$ will denote a positive constant which does not depend on the time $t$ or the data, and $C(\beta)$ a certain positive number depending only on the quantity $\beta$. They are referred to as universal constants and can change from one inequality to another. We will adopt the notation $ a \lesssim b$ to denote $a \le C b$, where $C$ is the universal constant as defined above. We will use $\int$ to denote $\int_I$. \subsection{Lower-order estimates}\label{22.1.4-2} This subsection consists of Lemmas \ref{lowt1}-\ref{lempoint}. \begin{lemma}\label{lowt1} It holds that for $t\in [0,T]$, \begin{align} &\mathcal{E}_0(t)+ (1+t)\sum_{i=1,2} \left(\mathcal{E}_i+\mathcal{D}_{i-1}\right)(t) + \int_0^t\mathcal{D}_0(s) ds\notag\\ & +\sum_{i=1,2}\int_0^t (1+s) \mathcal{D}_i(s) ds\lesssim \sum_{0\le i\le 2}\mathcal{E}_i(0) +\sum_{i=0,1}\mathcal{D}_i(0), \label{vvt} \end{align} where \begin{align*} &\mathcal{E}_0(t)=\int \left[(r-x)^2+x^2 (r_x-1)^2 \right] dx,\\ &\mathcal{D}_0(t)=\int p(\bar\rho)\left[(r-x)^2 +x^2 (r_x-1)^2\right]dx,\\ &\mathcal{E}_i(t)=\int x^2\bar\rho \left|\partial_t^{i}r\right|^2 dx, \ \ \mathcal{D}_i(t) =\int \left(\left|\partial_t^{i}r\right|^2 +\left|x\partial_t^{i}r_x\right|^2\right)dx, \ i= 1,2. \end{align*} \end{lemma} {\em Proof}. To prove the lemma, we use \eqref{pxphi} to rewrite \eqref{fixedp-a} as \begin{align}\label{remaineq} \bar\rho \frac{x^2}{r^2} v_t+\left[p\left(\frac{x^2 \bar\rho}{r^2 r_x} \right)\right]_x-\frac{x^4}{r^4} \left(p(\bar\rho)\right)_x=\mathscr{B}_x+4\nu_1 \left(\frac{v}{r}\right)_x, \end{align} where $\mathscr{B}$ is defined by \eqref{mb}. The proof consists of three steps. {\em Step 1}.
In this step, we prove that \begin{equation}\label{l2t0} (\mathcal{E}_1+\mathcal{D}_0)(t) +\int_0^t \mathcal{D}_1(s) ds \lesssim (\mathcal{E}_1+\mathcal{D}_0)(0). \end{equation} We set $ A(s)=\int_0^s \tau^{-2 }{p(\tau)}d\tau$ for $ s >0$, and integrate the product of \eqref{remaineq} and $r^2 v$ over $I$ to get \begin{equation}\label{l2} \frac{1}{2}\frac{d}{dt} \mathcal{E}_1(t)+ \frac{d}{dt}\int x^2 \eta(x,t) dx +\int D(x, t) dx =0, \end{equation} where \begin{subequations}\label{10.14.2}\begin{align} &\eta(x,t)= \bar\rho A\biggl(\frac{x^2}{r^2} \frac{\bar\rho}{r_x}\biggr) + p(\bar\rho)\biggl(\frac{x^2}{r^2}r_x-4\frac{x}{r}\biggr) - \bar\rho A(\bar\rho) +3 p(\bar\rho), \\ & D(x,t)= \mathscr{B}(r^2 v)_x-4\nu_1\left(\frac{v}{r}\right)_xr^2 v\geq 3\sigma \left(\frac{r^2}{r_x}v_x^2+ 2r_x v^2\right)\label{double} \end{align}\end{subequations} with $\sigma=\min\{2\nu_1/3,\ \nu_2\}$. Indeed, \eqref{double} follows from the estimates obtained in \cite{LXZ2}. It follows from the Taylor expansion that \begin{equation}\label{h0} \eta( x,t)= 2^{-1} G(x,t) + \widetilde{G}(x,t), \end{equation} where \begin{align} & G=\bar\rho p'(\bar\rho)(r_x-1)^2+4\left(\bar\rho p'(\bar\rho) -p(\bar\rho)\right)(r/x-1)^2\notag\\ & \ \ +4\left(\bar\rho p'(\bar\rho)-2p(\bar\rho)\right)(r/x-1)(r_x-1), \label{G} \\ & | \widetilde{G} | \lesssim \left\{ p(\bar\rho)+ \bar\rho p'(\bar\rho) + \left( {\bar\rho}/{\tilde\rho} \right)^4\left| \tilde{\rho}^2 p''(\tilde{\rho}) - 4 \tilde{\rho} p'(\tilde{\rho} ) + 6p(\tilde{\rho}) \right| \right\}\notag\\ & \ \ \times \left(\left|r_x-1\right|^3 +\left|\frac{r}{x}-1\right|^3 \right) \ {\rm with} \ \tilde \rho=\bar\rho \left\{1+\vartheta_2 \left(\frac{x^2}{r^2} \frac{1}{r_x}-1\right)\right\} \notag \end{align} for some constant $\vartheta_2\in (0,1)$. 
Simple calculations show that \begin{align}\label{jo.1} \frac{3G}{4p(\bar\rho)}=\left(r_x-\frac{r}{x}\right)^2 +\left(\frac{3\bar\rho p'(\bar\rho)}{4p(\bar\rho)} - 1\right) \left( \left(r_x-\frac{r}{x}\right)+3\left(\frac{r}{x}-1\right) \right)^2, \end{align} which, together with \eqref{21.3.15}, means \begin{align} \int x^2 G(x,t) dx \lesssim \mathcal{D}_0(t). \label{12.30-1} \end{align} In view of \eqref{ibm} and \eqref{pcon2}, we see that for $x\in [0, \bar R]$, \begin{align}\label{basic1} \frac{3 \bar\rho(x) p'(\bar\rho(x))}{4p(\bar\rho(x))} \ge 1, \ {\rm and} \ \lim_{x\to \bar R}\frac{3 \bar\rho(x) p'(\bar\rho(x))}{4p(\bar\rho(x))} =\frac{3}{4}\bar{\gamma}>1. \end{align} Thus, there exists a constant $\iota\in (0, \bar R/4]$ such that for $x\in [\bar R-2\iota, \bar R]$, \begin{align}\label{basic2} \underline{c}=\frac{1}{2} \left(\frac{3}{4}\bar{\gamma}-1\right)\le \frac{3 \bar\rho(x) p'(\bar\rho(x))}{4p(\bar\rho(x))} -1 \le 3 \underline{c}, \end{align} which means, with the help of \eqref{jo.1}, that for $x\in [\bar R-2\iota, \bar R]$, \begin{align*} \frac{3G}{4p(\bar\rho)} \ge\left(r_x-\frac{r}{x}\right)^2 +\underline{c}\left( \left(r_x-\frac{r}{x}\right)+3\left(\frac{r}{x}-1\right) \right)^2 . \end{align*} Thus, one has that for $x\in [\bar R-2\iota, \bar R]$, \begin{subequations}\label{jo.2}\begin{align} &p(\bar\rho)\left({r}/{x}-1\right)^2 \le 6^{-1}(1+\underline{c}^{-1})G ,\label{jo.2-a}\\ &p(\bar\rho)\left(r_x-1\right)^2 \le 6^{-1}\left(11+2\underline{c}^{-1}\right)G. 
\label{jo.2-b} \end{align}\end{subequations} Away from the vacuum boundary, it follows from \eqref{jo.1} and \eqref{basic1} that $ 3G/4 \ge p(\bar\rho) (r_x-r/x)^2 $, which, together with $p(\bar\rho(x))\ge p(\bar\rho(\bar R- \iota))>0$ on $[0, \bar R- \iota]$, implies \begin{align} &\frac{3}{4}\int \chi x^2 G dx \ge p(\bar\rho(\bar R- \iota)) \int \chi (x r_x-r)^2 dx \notag \\ = & p(\bar\rho(\bar R- \iota)) \left\{\int \chi \left[ (x r_x-x)^2 + 2(r-x)^2 \right]dx +\int \chi' x (r-x)^2 dx\right\} \notag\\ \ge & p(\bar\rho(\bar R- \iota)) \int^{\bar R-2\iota}_0 \left[ (x r_x-x)^2 + 2(r-x)^2 \right]dx \notag\\ &-\frac{4}{\iota}\int_{\bar R-2\iota}^{\bar R- \iota} p(\bar\rho) x (r-x)^2 dx , \label{jo.3} \end{align} where $\chi=\chi(x)$ is a smooth cut-off function satisfying $\chi=1$ on $[0, \bar R-2\iota]$, $\chi=0$ on $[ \bar R-\iota, \bar R]$, and $-4/\iota\le \chi' \le 0$. It follows from \eqref{jo.2} and \eqref{jo.3} that $ \mathcal{D}_0(t)\lesssim \int x^2 G(x,t) dx $. This, together with \eqref{12.30-1}, gives \begin{align} \mathcal{D}_0(t)\lesssim \int x^2 G(x,t) dx \lesssim \mathcal{D}_0(t).\label{h1} \end{align} We use \eqref{priori1}, \eqref{21.3.15} and \eqref{til-1} to obtain \begin{align*} | \widetilde{G} |\lesssim \varepsilon_0 p(\bar\rho) \left( 1+ \frac{p(\tilde{\rho})}{p(\bar\rho)}\right) \left(\left(r_x-1\right)^2 + \left(\frac{r}{x}-1\right)^2\right), \end{align*} which means, with the aid of \eqref{til-2}, that \begin{align}\label{h3} \int x^2 | \widetilde{G}(x,t) | dx \lesssim \varepsilon_0 \mathcal{D}_0(t) .
\end{align} We integrate the product of $(1+t)^{\ell}$ $(\ell\ge 0)$ and \eqref{l2} over $[0, t]$, and use \eqref{double}, \eqref{h0}, \eqref{h1} and \eqref{h3} to achieve \begin{align}\label{l2tl} &(1+t)^\ell(\mathcal{E}_1+\mathcal{D}_0)(t) +\int_0^t (1+s)^\ell \mathcal{D}_1(s) ds\notag\\ &\lesssim (\mathcal{E}_1+\mathcal{D}_0)(0) + \ell \int_0^t (1+s)^{\ell-1} (\mathcal{D}_0+\mathcal{D}_1)(s) ds, \end{align} because of $\mathcal{E}_1\lesssim \mathcal{D}_1$ which is due to \eqref{yo.1}. Letting $\ell=0$ in \eqref{l2tl} proves \eqref{l2t0}. {\em Step 2}. In this step, we prove that \begin{equation}\label{pr2v} \begin{aligned} &\mathcal{E}_0(t) + (1+t)(\mathcal{E}_1+ \mathcal{D}_0)(t) + \int_0^t \left[\mathcal{D}_0(s) \right.\\ &\left.+ (1+s) \mathcal{D}_1(s) \right] ds \lesssim (\mathcal{E}_0+\mathcal{E}_1+ \mathcal{D}_0)(0). \end{aligned} \end{equation} It follows from \eqref{l2tl} with $\ell=1$ and \eqref{l2t0} that \begin{align} (1+t)(\mathcal{E}_1+\mathcal{D}_0)(t) + \int_0^t(1+s) \mathcal{D}_1(s) ds\notag\\ \lesssim (\mathcal{E}_1+\mathcal{D}_0)(0) + \int_0^t \mathcal{D}_0(s) ds.\label{l2t1} \end{align} To bound the last term in \eqref{l2t1}, we integrate the product of \eqref{remaineq} and $r^3- x^3$ over $I$ to get \begin{align} &\frac{d}{dt}\int \left\{ x^2\eta_0(x,t)+x^3\bar\rho v\left({r}/{x}- {x^2}/{r^2}\right)\right\}dx \notag\\ & +\int D_0(x,t) dx =\int x^2\bar\rho v^2\biggl(1+2{x^3}/{r^3}\biggr)dx,\label{r3x3} \end{align} where \begin{subequations}\label{10.14.1}\begin{align} &\eta_0= 4\nu_1 \left[\ln\left(\frac{r}{xr_x}\right) +\frac{xr_x}{r}-1\right] +3\nu_2\left[\frac{r^2r_x}{x^2}-\ln\left(\frac{r^2 r_x}{x^2}\right)-1\right], \\ &D_0=p(\bar\rho)\biggl[\frac{x^4}{r^4}(r^3-x^3)\biggr]_x - p\biggl(\frac{x^2\bar\rho}{r^2r_x} \biggr)(r^3-x^3)_x. 
\end{align}\end{subequations} Clearly, we can use the estimates obtained in \cite{LXZ2} to show \begin{align} \eta_0 \lesssim |r/x-1|^2+|r_x-1|^2 \lesssim \sigma^{-1} \eta_0 , \label{eta0} \end{align} where $\sigma=\min\{2\nu_1/3,\ \nu_2\}$. It follows from the Taylor expansion that $ D_0=3x^2 G+ x^2 \widetilde{G}_0$, where $G$ is defined by \eqref{G}, and $$ |\widetilde{G}_0| \lesssim \left( p(\bar\rho)+\bar\rho p'(\bar\rho) + \left( {\bar\rho}/{\hat\rho} \right)^2 \hat{\rho}^2 | p''(\hat{\rho})| \right) \left(\left|r_x-1\right|^3 +\left|{r}/{x}-1\right|^3 \right) $$ with $\hat \rho=\bar\rho \left\{1+\vartheta_3 \left({x^2}/({r^2r_x}) -1\right)\right\}$ for some constant $\vartheta_3\in (0,1)$. In a similar way to deriving \eqref{h1} and \eqref{h3}, one has \begin{align} \mathcal{D}_0(t)\lesssim \int x^2 G(x,t) dx \le \int D_0(x,t) dx. \label{r3x3est} \end{align} We integrate \eqref{r3x3} over $[0,t]$, and use \eqref{l2t0}, \eqref{eta0}, \eqref{r3x3est} and the Cauchy inequality to obtain \begin{align}\label{l2r3x3} \mathcal{E}_0(t)+\int_0^t \mathcal{D}_0(s) ds \lesssim (\mathcal{E}_0+ \mathcal{E}_1+\mathcal{D}_0)(0), \end{align} where we have used the following estimates: \begin{align*} &\int x^2\bar\rho v^2\biggl(1+2{x^3}/{r^3}\biggr)dx \lesssim \mathcal{E}_1(t) \lesssim \mathcal{D}_1(t),\\ &\int \left| x^3\bar\rho v\left({r}/{x}- {x^2}/{r^2}\right) \right| dx \lesssim \int \left| x^2 \bar\rho v (r-x)\right| dx\\ &\lesssim \omega \int (r-x)^2 dx +\omega^{-1} \int x^4 \bar\rho^{2} v^2 dx \lesssim \omega \mathcal{E}_0(t) +\omega^{-1} \mathcal{E}_1(t) \end{align*} for any $\omega>0$. As a consequence of \eqref{l2t1} and \eqref{l2r3x3}, we prove \eqref{pr2v}. {\em Step 3}. In this step, we prove \begin{align}\label{l2tderi} (1+t)\left(\mathcal{E}_2+\mathcal{D}_1\right)(t) + \int_0^t(1+s)\mathcal{D}_2(s) ds \lesssim \sum_{0\le i\le 2}\mathcal{E}_i(0) +\sum_{i=0,1}\mathcal{D}_i(0). 
\end{align} Setting $\varrho=\bar\rho x^2/(r^2 r_x)$, and integrating the product of $\partial_t(r^2\eqref{remaineq})$ and $v_t$ over $I$, we have \begin{equation}\label{phivt} \frac{1}{2}\frac{d}{dt}\mathcal{E}_2(t) +\frac{d}{dt}\int \Phi(x, t) dx +\int J_1(x,t) dx=\int J_2(x,t) dx, \end{equation} where \begin{align*} &\Phi=\left(2\varrho p'\left(\varrho\right)-p \left(\varrho\right)\right)r_x v^2+2\left(\varrho p'\left(\varrho\right)-p \left(\varrho\right)\right)rvv_x \\ &\quad +\frac{r^2}{2r_x}\varrho p'\left(\varrho\right)v_x^2 -p(\bar\rho)\biggl[\biggl(4\frac{x^3}{r^3}-3\frac{x^4}{r^4}r_x\biggr)v^2+2\frac{x^4}{r^3}v v_x\biggr],\\ &J_1=\left[\mathscr{B}_t(r^2 v_t)_x- 4\nu_1r^2v_t\left(\frac{v}{r}\right)_{xt}\right] +2 \left[ \mathscr{B}(rvv_t)_x-4\nu_1\left(\frac{v}{r}\right)_xrvv_t \right],\\ &J_2=\left[\left(2\varrho p'\left(\varrho\right)-p \left(\varrho\right)\right)r_x\right]_t v^2+2\left[\left(\varrho p'\left(\varrho\right)-p \left(\varrho\right)\right)r\right]_tvv_x \\ &\quad +\left[\frac{r^2}{2r_x}\varrho p'\left(\varrho\right)\right]_tv_x^2 -p(\bar\rho) \left[\left(4\frac{x^3}{r^3}-3\frac{x^4}{r^4}r_x\right)_tv^2 +2\left(\frac{x^4}{r^3}\right)_t v v_x\right]. \end{align*} Clearly, it holds that \begin{subequations}\label{22.1.1-1}\begin{align} &J_1 \ge 2 \sigma \left[ {r^2} v_{tx}^2/r_x+2r_x v_t^2\right] -C \sigma^{-1} \left(x^2 v_x^2 + v^2\right), \label{21.10.13-1}\\ &| J_2 | \lesssim x^2 v_x^2 + v^2, \label{21.10.13-2} \end{align}\end{subequations} where $\sigma=\min\{2\nu_1/3,\ \nu_2\}$. Indeed, \eqref{21.10.13-1} follows from the estimates obtained in \cite{LXZ2}; and \eqref{21.10.13-2} from \eqref{pcon1-a}, \eqref{ibm}, \eqref{21apri} and \eqref{21.3.15}. 
Note that \begin{equation}\label{21.10.13-3} \Phi(x, t)= 2^{-1}G_1(x,t)+\widetilde \Phi(x,t), \end{equation} where \begin{align*} & G_1=\bar\rho p'(\bar\rho) x^2v_x^2+ 4\left(\bar\rho p'(\bar\rho)-p(\bar\rho)\right) v^2+4\left (\bar\rho p'(\bar\rho)-2p(\bar\rho) \right)vxv_x, \notag\\ & \widetilde \Phi=\left[\left(2\varrho p'\left(\varrho\right)-p \left(\varrho\right)\right)r_x-\left(2\bar\rho p'(\bar\rho)-p(\bar\rho)\right) \right] v^2\\ &\quad +2\left[\left(\varrho p'\left(\varrho\right)-p \left(\varrho\right)\right)r -\left(\bar\rho p'\left(\bar\rho\right)-p \left(\bar\rho\right)\right)x \right] v v_x \\ &\quad +\frac{1}{2}\left[\frac{r^2}{r_x}\varrho p'\left(\varrho\right)-x^2 \bar\rho p'\left(\bar\rho\right)\right] v_x^2 \\ & \quad -p(\bar\rho) \left[\left(4\frac{x^3}{r^3}-3\frac{x^4}{r^4}r_x-1\right) v^2 +2\left(\frac{x^4}{r^3}-x\right) v v_x\right]. \end{align*} Then, we use a similar way to deriving \eqref{h1} and \eqref{h3} to get \begin{subequations}\label{22.1.1-2}\begin{align} & \int G_1(x, t)dx \lesssim \int p(\bar\rho) (v^2 +x^2v_x^2)dx \lesssim \int G_1(x, t)dx ,\label{G-1}\\ &\int |\widetilde\Phi(x,t)| dx \lesssim \epsilon_0 \int p(\bar\rho) (v^2 +x^2v_x^2)dx. \label{tphi} \end{align}\end{subequations} We integrate \eqref{phivt} over $[0,t]$, and use \eqref{22.1.1-1}-\eqref{22.1.1-2} and \eqref{pr2v} to obtain \begin{align} &\mathcal{E}_2(t)+\int p(\bar\rho) (v^2 +x^2v_x^2)dx +\int_0^t \mathcal{D}_2(s)ds \lesssim \sum_{0\le i\le 2}\mathcal{E}_i(0) +\sum_{i=0,1}\mathcal{D}_i(0). \label{21-10-13} \end{align} Similarly, we integrate the product of $1+t$ and \eqref{phivt} over $[0,t]$ to achieve \begin{align} &(1+t)\mathcal{E}_2(t)+(1+t)\int p(\bar\rho) (v^2 +x^2v_x^2)dx +\int_0^t (1+s) \mathcal{D}_2(s)ds \notag\\ & \lesssim \sum_{0\le i\le 2}\mathcal{E}_i(0) +\sum_{i=0,1}\mathcal{D}_i(0), \label{21-10-13-1} \end{align} where \eqref{21-10-13} has been used. 
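As a numerical aside (an illustration only, not part of the proof): the two-sided bound \eqref{G-1} reflects that $G_1$ is, pointwise, a quadratic form in $(xv_x, v)$ whose matrix has determinant $4p(\bar\rho)\left(3\bar\rho p'(\bar\rho)-4p(\bar\rho)\right)$, so the form is positive definite exactly when $3\bar\rho p'(\bar\rho)/(4p(\bar\rho))>1$, the condition already encountered in \eqref{basic1}. The sketch below checks this for sample values of the ratio $b/p$, writing $b=\bar\rho p'(\bar\rho)$ and $p=p(\bar\rho)$.

```python
# G_1 = b*(x*v_x)^2 + 4*(b - p)*v^2 + 4*(b - 2*p)*(x*v_x)*v,  b = rho*p'(rho).
# As a quadratic form in (X, Y) = (x*v_x, v) its matrix is
#   M = [[ b,        2*(b-2p) ],
#        [ 2*(b-2p), 4*(b-p)  ]],
# with det M = 4*p*(3*b - 4*p), so M is positive definite exactly when
# 3*b > 4*p, i.e. 3*rho*p'(rho)/(4*p(rho)) > 1 (compare (basic1)).
def gram_det(b, p):
    return b * 4 * (b - p) - (2 * (b - 2 * p)) ** 2

p = 1.0
for ratio in [4 / 3 + 0.01, 1.5, 2.0, 5.0, 50.0]:
    b = ratio * p
    assert gram_det(b, p) > 0 and b > 0          # positive definite (Sylvester)
    # symbolic identity det M = 4*p*(3*b - 4*p):
    assert abs(gram_det(b, p) - 4 * p * (3 * b - 4 * p)) < 1e-9
assert abs(gram_det(4 * p / 3, p)) < 1e-9        # degenerate at the threshold
print("G_1 definiteness check passed")
```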
Note that for any function $f$ and positive constant $a$, \begin{align}\label{12.13.a} & (1+t)^a\int f^2(x,t) dx \le \int f^2(x,0) dx +\int_0^t (1+s)^a \int (f^2 +f_s^2)(x,s)dxds\notag\\ &\qquad +a\int_0^t (1+s)^{a-1} \int f^2(x,s)dxds, \end{align} then \eqref{l2tderi} follows from \eqref{pr2v} and \eqref{21-10-13-1}. \hfill $\Box$ \begin{lemma}\label{l2palpha} Let $\Upsilon$ be defined by \eqref{22.1.11-1}. Then it holds that for $\theta\in \left(0, 1-5/(4\bar\gamma)\right]$ and $t\in [0, T]$, \begin{subequations}\label{12-21}\begin{align} &\mathscr{E}_0(t)+(1+t)^\zeta\mathscr{D}_0(t) +(1+t)^{2\zeta}\left( \mathcal{D}_0+\mathcal{D}_1+\mathcal{E}_2\right)(t) +(1+t)^{2\zeta-1}\mathcal{E}_0(t)\notag\\ &\quad+\int_0^t\left[\mathscr{D}_0(s) +(1+s)^\zeta\mathscr{D}_1(s) +(1+s)^{2\zeta}(\mathcal{D}_1+\mathcal{D}_2) (s)\right. \notag\\ &\quad\left. + (1+s)^{2\zeta-1}\mathcal{D}_0(s)\right]ds \lesssim C(\theta^{-1}) \left(\mathscr{E}_0 +\mathcal{D}_1+\mathcal{E}_2\right)(0), \label{l2palphaest}\\ & \left( \Upsilon(1+t)^{3- {3}/{\bar\gamma}-2\theta} + (1-\Upsilon) (1+t)^{2\zeta} \right) \int (r(x, t)-x)^2 dx\notag\\ &\quad\lesssim C(\theta^{-1}) \left(\mathscr{E}_0 +\mathcal{D}_1+\mathcal{E}_2\right)(0), \label{rxt00} \end{align}\end{subequations} where $\alpha=1-\theta$, $\zeta=1- {1}/({2\bar\gamma}) - \theta/2$, \begin{align*} &\mathscr{E}_0(t)=\int i^{-\alpha}(\bar\rho)\left[(r-x)^2+x^2(r_x-1)^2 \right]dx,\\ &\mathscr{D}_0(t)=\int i^{-\alpha}(\bar\rho) p(\bar\rho)\left[(r-x)^2+x^2(r_x-1)^2 \right]dx,\\ &\mathscr{D}_1(t)=\int i^{-\alpha}(\bar\rho)(v^2+x^2v_x^2) dx. \end{align*} \end{lemma} {\em Proof}. The proof consists of four steps. {\em Step 1}. 
In this step, we prove that for any $\omega>0$, \begin{align} &\mathscr{E}_0(t)+\int_0^t \mathscr{D}_0(s) ds \lesssim \mathscr{E}_0(0) +\left(1+\omega^{-1}\right)\sum_{0\le i\le 2}\int_0^t \mathcal{D}_i(s)ds \notag \\ &+ \omega \int_0^t \left\{\mathscr{D}_0(s)+(1+s)^{\zeta} \mathscr{D}_1(s) +(1+s)^{2\zeta} \mathcal{D}_1(s)\right\} ds\notag\\ & + \int_0^t \left\{ \omega^{1-2q_1} (1+s)^{- q_1 \zeta}+\omega^{-1} \theta^{-1}(1+s)^{-2\zeta} \right\}\mathscr{E}_0(s)ds, \label{imt0} \end{align} where $q_1=2\bar\gamma/(2\bar\gamma-1- 2\bar\gamma \theta)$ satisfies $q_1 \zeta>1$. We integrate the product of \eqref{remaineq} with $\int_0^x i^{-\alpha}(\bar\rho)(r^3-y^3)_y \ dy$ over $I$ to get \begin{equation} \frac{d}{dt}\int i^{-\alpha}(\bar\rho) x^2 \eta_0 dx + \int i^{-\alpha}(\bar\rho) D_0 dx =\sum_{1\le i\le 3} L_i, \label{22.1.2-1} \end{equation} where $\eta_0$ and $D_0$ are defined by \eqref{10.14.1}, and \begin{align*} &L_1=-\int \bar\rho \frac{x^2}{r^2} v_t\int_0^x i^{-\alpha}(\bar\rho)(r^3-y^3)_y dydx,\\ &L_2=\int p(\bar\rho)\biggl(\frac{x^4}{r^4}\biggr)_x\left[i^{-\alpha}(\bar\rho)(r^3-x^3)-\int_0^x i^{-\alpha}(\bar\rho)(r^3-y^3)_ydy \right]dx,\\ &L_3=4\nu_1 \int \left(\frac{v}{r}\right)_x\left[\int_0^x i^{-\alpha}(\bar\rho)(r^3-y^3)_y dy -i^{-\alpha}(\bar\rho)(r^3-x^3)\right]dx. \end{align*} In a similar way to deriving \eqref{r3x3est}, one has $$\mathscr{D}_0(t)\lesssim \int i^{-\alpha}(\bar\rho) x^2 G(x,t) dx \le \int i^{-\alpha}(\bar\rho) D_0(x,t) dx. $$ Integrating \eqref{22.1.2-1} over $[0,t]$, and using \eqref{eta0} and the inequality above, we have \begin{equation}\label{r3x3alpha} \mathscr{E}_0(t)+\int_0^t \mathscr{D}_0(s) ds \lesssim \mathscr{E}_0(0)+\sum_{1\le i\le 3} \int_0^t | L_i| ds. 
\end{equation} For $L_1$, it follows from the Cauchy and H\"{o}lder inequalities that for any $\omega>0$, \begin{align*} |L_1|\lesssim & \omega^{-1}\int v_t^2 dx+\omega \int\bar\rho^2 \left|\int_0^x i^{-\alpha}(\bar\rho)y\left(|r-y|+y|r_y-1|\right)dy\right|^2dx\notag\\ \lesssim & \omega^{-1} \mathcal{D}_2(t)+\omega \mathscr{D}_0(t) H_1, \end{align*} where $H_1=\int \bar\rho^2 \int_0^xp^{-1}(\bar\rho) i^{-\alpha}(\bar\rho)y^2dy dx$. Due to the L'H\^{o}pital rule, \eqref{pxphi} and \eqref{irho}, we have $$\lim_{x\rightarrow \bar R}\frac{\int_0^xp^{-1}(\bar\rho) i^{-\alpha}(\bar\rho)dy}{p^{-1}(\bar\rho)i^{1-\alpha}(\bar\rho)} =\lim_{x\rightarrow \bar R}\frac{1}{\left(\bar\rho i(\bar\rho)p^{-1}(\bar\rho)+\alpha-1\right)x\phi}\le \frac{{\bar R}^2}{\alpha M}, $$ which, together with \eqref{irho}, \eqref{April4} and \eqref{har-1}, means \begin{align}\label{21.11.2-6} H_1\lesssim \int \bar\rho^2 p^{-1}(\bar\rho)i^{1-\alpha}(\bar\rho) dx \lesssim \int \bar\rho i^{-\alpha}(\bar\rho) dx \lesssim \int i^{\frac{8}{9(\bar\gamma-1)}-\alpha}(\bar\rho) dx\lesssim 1. \end{align} So, we obtain that for any $\omega>0$, \begin{align}\label{L1} |L_1|\lesssim \omega \mathscr{D}_0(t) +\omega^{-1} \mathcal{D}_2(t). \end{align} For $L_2$, we can rewrite $L_2$ as $L_2=L_{21}+L_{22}$, where \begin{align} L_{21}&=\int_0^{\bar R/2} p(\bar\rho)\left(\frac{x^4}{r^4}\right)_x\int_0^x \left(i^{-\alpha}(\bar\rho)\right)_y(r^3-y^3)dy dx,\notag\\ L_{22}&=\int_{\bar R/2}^{\bar R} p(\bar\rho)\biggl(\frac{x^4}{r^4}\biggr)_x\left[i^{-\alpha}(\bar\rho)(r^3-x^3)-\int_0^x i^{-\alpha}(\bar\rho)(r^3-y^3)_ydy \right]dx.\notag \end{align} Clearly, it holds that \begin{align}\label{L21} |L_{21}|\lesssim \mathcal{D}_0(t). \end{align} (The derivation can be found in \cite{LZeng}.)
It follows from the Cauchy and H\"{o}lder inequalities that for any $\omega>0$, \begin{align} &|L_{22}|\lesssim\int_{\bar R/2}^{\bar R} p(\bar\rho)i^{-\alpha}(\bar\rho)(x|r_x-1|+|r-x|)|r-x|dx\notag\\ & +\int_{\bar R/2}^{\bar R} p(\bar\rho)(x|r_x-1|+|r-x|) \int_0^xi^{-\alpha} (\bar\rho)y(y|r_y-1|+|r-y|)dydx\notag\\ & \lesssim 2 \omega \mathscr{D}_0(t)+\omega^{-1}H_2 +\omega^{-1} \mathcal{D}_0(t) H_3, \label{L22} \end{align} where \begin{align} &H_2=\int_{\bar R/2}^{\bar R}p(\bar\rho)i^{-\alpha}(\bar\rho)(r-x)^2 dx, \notag\\ &H_3= \int_{\bar R/2}^{\bar R}p(\bar\rho)i^{\alpha}(\bar\rho) \int_0^xp^{-1}(\bar\rho)i^{-2\alpha} (\bar\rho)y^2dydx \lesssim 1. \label{21.11.4} \end{align} Indeed, the bound for $H_3$ can be obtained in the same way as that for $H_1$ in \eqref{21.11.2-6}. In view of \eqref{April4}, we see that $\bar\rho^{9(\bar\gamma-1)/8}\lesssim i(\bar\rho)\lesssim \bar\rho^{7(\bar\gamma-1)/8}$, which, together with \eqref{irho}, \eqref{har-1} and \eqref{hardy2}, gives \begin{align} &H_2\lesssim \int_{\bar R/2}^{\bar R} i^{1-\alpha+\frac{8}{9(\bar\gamma-1)}}(\bar\rho)(r-x)^2 dx\notag \\ & \lesssim \int_{\bar R/2}^{\bar R} i^{3-\alpha+\frac{8}{9(\bar\gamma-1)}}(\bar\rho) \left[(r-x)^2+ (r_x-1)^2 \right]dx\notag\\ & \lesssim \int_{\bar R/2}^{\bar R} \bar\rho^{\frac{7}{8}(\bar\gamma-1)(2-\alpha)-\frac{2}{9}} p(\bar\rho)\left[(r-x)^2+ x^2 (r_x-1)^2 \right]dx \lesssim \mathcal{D}_0(t).\notag \end{align} This gives, with the help of \eqref{L21}-\eqref{21.11.4}, that for any $\omega>0$, \begin{equation}\label{L2} |L_2|\lesssim \omega\mathscr{D}_0(t)+\left(1+\omega^{-1}\right)\mathcal{D}_0(t).
\end{equation} For $L_3$, we can rewrite $L_3$ as $L_3=-4\nu_1(L_{31}+L_{32})$, where \begin{align} &L_{31}= \int_0^{\bar R/2 }\left(\frac{v}{r}\right)_x \int_0^x (i^{-\alpha}( \bar\rho))_y(r^3-y^3)dy dx,\notag\\ & L_{32}=\int_{\bar R/2}^{\bar R} \left(\frac{v}{r}\right)_x\left[ i^{-\alpha}(\bar \rho)(r^3-x^3)- \int_0^x i^{-\alpha}(\bar\rho)(r^3-y^3)_y dy \right]dx.\notag \end{align} It is easy to show that \begin{align} |L_{31}|\lesssim \left(\mathcal{D}_1+\mathcal{D}_0\right)(t), \ \ |L_{32}|\lesssim L_{32}^I+L_{32}^{II}, \label{L31} \end{align} where \begin{align*} & L_{32}^I=\int_{\bar R/2}^{\bar R} i^{-\alpha}(\bar\rho) (x|v_x|+|v|)\left| r-x\right|dx ,\\ & L_{32}^{II}=\int_{\bar R/2}^{\bar R}(x|v_x|+|v|)\int_0^x i^{-\alpha}(\bar\rho)y\left(y|r_y-1|+|r-y|\right)dy dx. \end{align*} It follows from the Cauchy inequality, \eqref{har-1}, \eqref{hardy2} and $\alpha<1$ that for any $\omega>0$, \begin{align}\label{L321} L_{32}^I & \lesssim \omega (1+t)^{\zeta} \mathscr{D}_1(t) +\omega^{-1} (1+t)^{-\zeta} \int_{\bar R/2}^{\bar R} i^{ -\alpha}(\bar\rho) ( r-x)^2 dx\notag\\ & \lesssim \omega (1+t)^{\zeta} \mathscr{D}_1(t) +\omega^{-1} (1+t)^{-\zeta} \widetilde{L}_{32}^I , \end{align} where $$\widetilde{L}_{32}^I =\int_{\bar R/2}^{\bar R} i^{2-\alpha}(\bar\rho) \left[(r-x)^2+x^2(r_x-1)^2 \right]dx.$$ We choose constants $q_1$, $q_2$ and $\delta_3$ satisfying $$ \frac{1}{q_1}=1-\frac{1}{2\bar\gamma}-\theta, \ \ \frac{1}{q_2}=1-\frac{1}{q_1} \ \ {\rm and} \ \ \delta_3=\frac{1}{2q_2-1},$$ then $ q_1, q_2>1$, $q_1 \zeta>1$ and $0<\delta_3<\bar\gamma-1$. 
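These constraints on $q_1$, $q_2$ and $\delta_3$ amount to elementary arithmetic in $\bar\gamma$ and $\theta$; the following quick numerical scan (an illustration only, not part of the proof) verifies them over a grid of admissible parameters $\bar\gamma>5/4$ and $\theta\in(0, 1-5/(4\bar\gamma)]$, with $\zeta=1-1/(2\bar\gamma)-\theta/2$.

```python
# Numerical sanity check (illustration only) of the exponent bookkeeping:
#   zeta   = 1 - 1/(2*gamma) - theta/2,
#   1/q1   = 1 - 1/(2*gamma) - theta,
#   1/q2   = 1 - 1/q1,
#   delta3 = 1/(2*q2 - 1).
# We verify q1, q2 > 1, q1*zeta > 1 and 0 < delta3 < gamma - 1 on a grid of
# admissible parameters gamma > 5/4, theta in (0, 1 - 5/(4*gamma)].
for gamma in [1.3, 1.5, 2.0, 3.0, 5.0, 10.0]:
    theta_max = 1 - 5 / (4 * gamma)
    for k in range(1, 21):
        theta = theta_max * k / 20           # samples of (0, theta_max]
        zeta = 1 - 1 / (2 * gamma) - theta / 2
        q1 = 1 / (1 - 1 / (2 * gamma) - theta)
        q2 = 1 / (1 - 1 / q1)
        delta3 = 1 / (2 * q2 - 1)
        assert q1 > 1 and q2 > 1
        assert q1 * zeta > 1
        assert 0 < delta3 < gamma - 1
print("Step 1 exponent relations verified")
```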
In view of \eqref{irho} and \eqref{April4}, we see that $$i^{2q_2}(\bar\rho)\le 4 p(\bar\rho) \bar\rho^{-1} i^{2q_2-1}(\bar\rho) \lesssim p(\bar\rho) \bar\rho^{\delta_3 (2q_2-1)-1} =p(\bar\rho),$$ which, together with the H\"{o}lder inequality, implies \begin{align} \widetilde{L}_{32}^I \lesssim\mathscr{E}_0^{1/q_1} \left(\int_{\bar R/2}^{\bar R} i^{2q_2-\alpha}(\bar\rho) \left[(r-x)^2+x^2(r_x-1)^2 \right]dx\right)^{1/q_2} \lesssim \mathscr{E}_0^{1/q_1} \mathscr{D}_0^{1/q_2}. \notag \end{align} Substitute this into \eqref{L321} and use the Young inequality to get \begin{align}\label{L321-21} &L_{32}^I \lesssim \omega \left\{ \mathscr{D}_0(t)+(1+t)^{\zeta} \mathscr{D}_1(t) \right\} + \omega^{1-2q_1} (1+t)^{- q_1 \zeta}\mathscr{E}_0(t) . \end{align} It follows from the Cauchy and H\"{o}lder inequalities that for any $\omega>0$, \begin{align*} L_{32}^{II} &\lesssim \int_{\bar R/2}^{\bar R}(x|v_x|+|v|)\left( \theta^{-1} \mathscr{E}_0(t) \right)^{1/2}dx \\ & \lesssim \omega (1+t)^{2\zeta} \mathcal{D}_1(t) + \omega^{-1} \theta^{-1}(1+t)^{-2\zeta} \mathscr{E}_0(t), \end{align*} which gives, with the help of \eqref{L31} and \eqref{L321-21}, that for any $\omega>0$, \begin{align}\label{L3-} |L_3|\lesssim& (\mathcal{D}_0+ \mathcal{D}_1)(t)+ \omega \left\{\mathscr{D}_0(t)+(1+t)^{\zeta} \mathscr{D}_1(t) +(1+t)^{2\zeta} \mathcal{D}_1(t)\right\}\notag\\ &+ \left\{ \omega^{1-2q_1} (1+t)^{- q_1 \zeta}+\omega^{-1} \theta^{-1}(1+t)^{-2\zeta} \right\}\mathscr{E}_0(t) . \end{align} Therefore, \eqref{imt0} follows from \eqref{r3x3alpha}, \eqref{L1}, \eqref{L2} and \eqref{L3-}. {\em Step 2}.
In this step, we prove that for any $\omega>0$, \begin{align} &(1+t)^{\zeta}\mathscr{D}_0(t) +\int_0^t(1+s)^\zeta \mathscr{D}_1(s)ds \lesssim \mathscr{D}_0(0) +\int_0^t \mathscr{D}_0(s) ds\notag\\ &+ (1+\omega^{-1}) \int_0^t (1+s) \left( \theta^{-1} \mathcal{D}_1+\mathcal{D}_2 \right)(s) ds\notag\\ & + \omega \int_0^t \left\{ (1+s)^\zeta \mathscr{D}_1(s) + (1+s)^{2\zeta-1}\mathcal{D}_0(s) \right\} ds . \label{tzeta0} \end{align} We integrate the product of \eqref{remaineq} and $\int_0^x i^{-\alpha}(\bar\rho)(r^2 v)_y dy$ over $I$ to obtain $$\frac{d}{dt}\int x^2 i^{-\alpha}(\bar\rho) \eta dx + \int i^{-\alpha}(\bar\rho) D dx = \sum_{1\le i\le 3} \mathcal{L}_i,$$ where $\eta$ and $D$ are given by \eqref{10.14.2}, and \begin{align*} &\mathcal{L}_1=-\int \bar\rho \frac{x^2}{r^2}v_t \int_0^x i^{-\alpha}(\bar\rho)(r^2 v)_ydydx,\\ &\mathcal{L}_2=\int p(\bar\rho)\left(\frac{x^4}{r^4}\right)_x\left[ i^{-\alpha}(\bar\rho)r^2 v- \int_0^x i^{-\alpha}(\bar\rho) (r^2 v)_y dy\right]dx,\\ &\mathcal{L}_3=4\nu_1\int \left(\frac{v}{r}\right)_x\left[\int_0^x i^{-\alpha}(\bar\rho)(r^2 v)_ydy-i^{-\alpha}(\bar\rho)r^2 v\right]dx. \end{align*} We integrate the product of $(1+t)^\zeta$ and the equation above over $[0,t]$, and use a similar argument to the derivation of \eqref{l2tl} to get \begin{align}\label{zetakkk} &(1+t)^{\zeta}\mathscr{D}_0(t) +\int_0^t(1+s)^\zeta \mathscr{D}_1(s)ds\notag\\ & \lesssim \mathscr{D}_0(0)+ \sum_{i=1}^3\int_0^t (1+s)^\zeta |\mathcal{L}_i| ds+\int_0^t \mathscr{D}_0(s) ds. \end{align} The estimates for $\mathcal{L}_i$ $(i=1,2,3)$ are similar to those for $L_i$ $(i=1,2,3)$ obtained in the first step. For $\mathcal{L}_1$, it follows from the Cauchy and H\"{o}lder inequalities, and \eqref{21.11.2-6} that for any $\omega>0$, \begin{align}\label{k1} |\mathcal{L}_1| \lesssim\omega \mathscr{D}_1(t)+\omega^{-1} \mathcal{D}_2(t), \end{align} due to $\int \bar\rho^2 \int_0^x i^{-\alpha}(\bar\rho) y^2 dy dx \lesssim H_1 \lesssim 1$.
For $\mathcal{L}_2$, note that \begin{align*} &\mathcal{L}_2=\int_{\bar R/2}^{\bar R} p(\bar\rho)\left(\frac{x^4}{r^4}\right)_x\left[ i^{-\alpha}(\bar\rho)r^2 v- \int_0^x i^{-\alpha}(\bar\rho) (r^2 v)_y dy\right]dx\\ &+\int_0^{\bar R/2} p(\bar\rho)\left(\frac{x^4}{r^4}\right)_x\int_0^x \left(i^{-\alpha}(\bar\rho) \right)_y r^2 v dydx=\mathcal{L}_{22}+\mathcal{L}_{21}, \end{align*} and that for any $\omega>0$, $ |\mathcal{L}_{21}| \lesssim\omega (1+t)^{\zeta-1}\mathcal{D}_0(t) + \omega^{-1} (1+t)^{1-\zeta}\mathcal{D}_1(t) $ and \begin{align*} &|\mathcal{L}_{22}|\lesssim \int_{\bar R/2}^{\bar R} p(\bar\rho)i^{-\alpha}(\bar\rho)(|r-x|+x|r_x-1|)|v| dx\notag\\ &+\int_{\bar R/2}^{\bar R} p(\bar\rho)(|r-x|+x|r_x-1|)\int_0^x i^{-\alpha}(\bar\rho)y(|v|+|yv_y|)dydx\notag\\ & \lesssim 2\omega (1+t)^{\zeta-1}\mathcal{D}_0(t)+ \omega^{-1} (1+t)^{1-\zeta} \left\{ \int_{\bar R/2}^{\bar R} p(\bar\rho) i^{-2\alpha}(\bar\rho) v^2 dx\right.\\ & \left. + \mathcal{D}_1(t) \int_{\bar R/2}^{\bar R} p(\bar\rho) \int_0^x i^{-2\alpha}(\bar\rho)y^2 dy dx \right\} \\ & \lesssim 2\omega (1+t)^{\zeta-1}\mathcal{D}_0(t) + \omega^{-1} (1+t)^{1-\zeta}\mathcal{D}_1(t). \end{align*} Then, we have that for any $\omega>0$, \begin{align}\label{k2} |\mathcal{L}_{2}|\lesssim \omega (1+t)^{\zeta-1}\mathcal{D}_0(t) + \omega^{-1} (1+t)^{1-\zeta}\mathcal{D}_1(t). 
\end{align} For $\mathcal{L}_3$, notice that \begin{align*} &-\frac{1}{4\nu_1}\mathcal{L}_3=\int_{\bar R/2}^{\bar R} \left(\frac{v}{r}\right)_x\left[i^{-\alpha}(\bar\rho)r^2 v-\int_0^x i^{-\alpha}(\bar\rho)(r^2 v)_ydy\right]dx \\ &+\int^{\bar R/2}_0 \left(\frac{v}{r}\right)_x \int_0^x \left(i^{-\alpha}(\bar\rho) \right)_y r^2 v dydx= \mathcal{L}_{32} +\mathcal{L}_{31}, \end{align*} and $ |\mathcal{L}_{31}| \lesssim \mathcal{D}_1(t)$, $|\mathcal{L}_{32}| \lesssim \mathcal{L}_{32}^{I}+ \mathcal{L}_{32}^{II}$, where \begin{align*} \mathcal{L}_{32}^{I} &\lesssim \int_{\bar R/2}^{\bar R} i^{-\alpha}(\bar\rho) (x|v_x|+|v|)\left|v\right|dx\\ & \lesssim \omega \mathscr{D}_1(t) + \omega^{-1} \int _{\bar R/2}^{\bar R} i^{-\alpha}(\bar\rho) v^2 dx \lesssim \omega \mathscr{D}_1(t) \\ &\quad + \omega^{-1} \int _{\bar R/2}^{\bar R} i^{2-\alpha}(\bar\rho) (v^2 + v_x^2) dx \lesssim \omega \mathscr{D}_1(t) + \omega^{-1} \mathcal{D}_1(t), \\ \mathcal{L}_{32}^{II} & \lesssim \int_{\bar R/2}^{\bar R} (x|v_x|+|v|)\int_0^x i^{-\alpha}(\bar\rho)y\left(y|v_y|+|v|\right) dydx\\ &\lesssim \int_{\bar R/2}^{\bar R} (x|v_x|+|v|)\left( \theta^{-1} \mathscr{D}_1(t) \right)^{1/2} dx\\ & \lesssim \omega \mathscr{D}_1(t) + \omega^{-1} \theta^{-1}\mathcal{D}_1(t) \end{align*} for any $\omega>0$. Thus, one has that for any $\omega>0$, \begin{align} |\mathcal{L}_{3}| \lesssim \omega \mathscr{D}_1(t) + (1+\omega^{-1})\theta^{-1} \mathcal{D}_1(t).\label{k3} \end{align} So, \eqref{tzeta0} follows from \eqref{zetakkk}-\eqref{k3}. {\em Step 3}. In this step, we prove that for any $\omega>0$, \begin{align}\label{tka-1} &(1+t)^{2\zeta-1}\mathcal{E}_0(t) +(1+t)^{2\zeta}\left(\mathcal{E}_1 +\mathcal{D}_0\right)(t) +\int_0^t \left\{(1+s)^{2\zeta-1}\mathcal{D}_0(s)\right. \notag\\ &\left.
+(1+s)^{2\zeta}\mathcal{D}_1(s) \right\}ds\lesssim(\mathcal{E}_0+\mathcal{E}_1+\mathcal{D}_0)(0) \notag \\&+\omega \int_0^t \mathscr{D}_0(s)ds +\omega^{1-q_3}\int_0^t (1+s)^{q_3(2\zeta-2)} \mathscr{E}_0(s)ds, \end{align} where $q_3=2\bar\gamma/(2+2\bar\gamma\theta-\theta)$ satisfies $q_3(2\zeta-2)<-1$. A suitable combination of $\int_0^t(1+s)^{2\zeta-1}\eqref{r3x3}ds$ with \eqref{l2tl} for $\ell=2\zeta$ gives that \begin{align}\label{tka-2} &(1+t)^{2\zeta-1}\mathcal{E}_0(t) +(1+t)^{2\zeta}\left(\mathcal{E}_1 +\mathcal{D}_0\right)(t) \notag\\ &+\int_0^t \left\{(1+s)^{2\zeta-1}\mathcal{D}_0(s) +(1+s)^{2\zeta}\mathcal{D}_1(s) \right\}ds\notag\\ \lesssim& (\mathcal{E}_0+\mathcal{E}_1 +\mathcal{D}_0)(0) +(1+t)^{2\zeta-1}\mathcal{E}_1(t) \notag\\ & +\int_0^t \left\{(1+s)^{2\zeta-2} \mathcal{E}_0(s) +(1+s)^{2\zeta-1}\mathcal{D}_1(s)\right\}ds\notag\\ \lesssim & (\mathcal{E}_0+\mathcal{E}_1+\mathcal{D}_0)(0)+\int_0^t(1+s)^{2\zeta-2} \mathcal{E}_0(s)ds, \end{align} where $2\zeta-1<1$ and \eqref{pr2v} have been used to derive the last inequality. We choose constants $q_3$, $q_4$ and $\delta_4$ satisfying $$ \frac{1}{q_3}=\frac{1}{\bar\gamma}+\theta -\frac{\theta}{2\bar\gamma}, \ \ \frac{1}{q_4}=1-\frac{1}{q_3} \ \ {\rm and} \ \ \delta_4=\frac{1}{\alpha q_4-1},$$ then $q_3,q_4>1$, $q_3 (2\zeta-2) <-1$ and $0<\delta_4<\bar\gamma-1$. In view of \eqref{irho} and \eqref{April4}, we see that \begin{align*} & i^{\alpha q_4/q_3}(\bar\rho)= i^{-\alpha }(\bar\rho) i^{\alpha q_4}(\bar\rho)\le 4 i^{-\alpha }(\bar\rho) p(\bar\rho) \bar\rho^{-1} i^{\alpha q_4-1}(\bar\rho) \\ &\lesssim i^{-\alpha }(\bar\rho) p(\bar\rho) \bar\rho^{\delta_4 (\alpha q_4-1)-1} =i^{-\alpha }(\bar\rho) p(\bar\rho), \end{align*} which, together with the H\"{o}lder inequality, implies \begin{align*} \mathcal{E}_0\lesssim \mathscr{E}_0^{1/q_3} \left(\int i^{\alpha q_4/q_3}(\bar\rho) (|r-x|^2+|xr_x-x|^2)dx \right)^{1/q_4} \lesssim \mathscr{E}_0^{1/q_3} \mathscr{D}_0^{1/q_4}. 
\end{align*} Substitute this into \eqref{tka-2} and use the Young inequality to prove \eqref{tka-1}. {\em Step 4}. This step is devoted to proving \eqref{12-21}. It follows from \eqref{vvt} and a suitable combination of \eqref{imt0}, \eqref{tzeta0} and \eqref{tka-1} with small $\omega$ that \begin{align*} &\mathscr{E}_0(t)+(1+t)^\zeta\mathscr{D}_0(t) +(1+t)^{2\zeta}\left(\mathcal{E}_1 +\mathcal{D}_0\right)(t) +(1+t)^{2\zeta-1}\mathcal{E}_0(t)\notag\\ &+\int_0^t\left[\mathscr{D}_0(s) +(1+s)^\zeta\mathscr{D}_1(s) +(1+s)^{2\zeta}\mathcal{D}_1 (s)+ (1+s)^{2\zeta-1}\mathcal{D}_0(s)\right]ds\notag\\ & \lesssim \left(\mathscr{E}_0+ \mathscr{D}_0\right) (0) + \theta^{-1}(\mathcal{E}_0+\mathcal{D}_1+\mathcal{E}_2)(0) \\ & +\int_0^t \left\{ (1+s)^{- q_1 \zeta}+ \theta^{-1}(1+s)^{-2\zeta}+ (1+s)^{-q_3(2-2\zeta)} \right\} \mathscr{E}_0(s)ds, \end{align*} which, together with $q_1 \zeta>1$, $2\zeta>1$, $q_3(2-2\zeta)>1$ and the Gronwall inequality, implies that \begin{align}\label{zetadec-1} &\mathscr{E}_0(t)+(1+t)^\zeta\mathscr{D}_0(t) +(1+t)^{2\zeta}\left(\mathcal{E}_1 +\mathcal{D}_0\right)(t) +(1+t)^{2\zeta-1}\mathcal{E}_0(t)\notag\\ &+\int_0^t\left[\mathscr{D}_0(s) +(1+s)^\zeta\mathscr{D}_1(s) +(1+s)^{2\zeta}\mathcal{D}_1 (s)+ (1+s)^{2\zeta-1}\mathcal{D}_0(s)\right]ds\notag\\ & \lesssim C(\theta^{-1}) \left(\mathscr{E}_0 +\mathcal{D}_1+\mathcal{E}_2\right)(0) . \end{align} Based on \eqref{zetadec-1} and \eqref{vvt}, we integrate the product of $(1+t)^{2\zeta}$ and \eqref{phivt} over $[0,t]$ to obtain \begin{align} &(1+t)^{2\zeta}\mathcal{E}_2(t)+(1+t)^{2\zeta}\int p(\bar\rho) (v^2 +x^2v_x^2)dx +\int_0^t (1+s)^{2\zeta} \mathcal{D}_2(s)ds \notag\\ & \lesssim C(\theta^{-1}) \left(\mathscr{E}_0 +\mathcal{D}_1+\mathcal{E}_2\right)(0). \label{l2decay} \end{align} Indeed, the derivation of \eqref{l2decay} is the same as that of \eqref{21-10-13-1}.
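The Gronwall step just used requires the three time weights to be integrable, that is, $q_1\zeta>1$, $2\zeta>1$ and $q_3(2-2\zeta)>1$; these are again elementary arithmetic in $\bar\gamma$ and $\theta$. A quick numerical scan (an illustration only, not part of the proof):

```python
# Integrability of the time weights in the Gronwall argument:
#   zeta = 1 - 1/(2*gamma) - theta/2,
#   q1   = 2*gamma/(2*gamma - 1 - 2*gamma*theta)    (Step 1),
#   1/q3 = 1/gamma + theta - theta/(2*gamma)        (Step 3).
# Check q1*zeta > 1, 2*zeta > 1 and q3*(2 - 2*zeta) > 1 over admissible
# parameters gamma > 5/4, theta in (0, 1 - 5/(4*gamma)].
for gamma in [1.3, 1.5, 2.0, 3.0, 5.0, 10.0]:
    theta_max = 1 - 5 / (4 * gamma)
    for k in range(1, 21):
        theta = theta_max * k / 20
        zeta = 1 - 1 / (2 * gamma) - theta / 2
        q1 = 2 * gamma / (2 * gamma - 1 - 2 * gamma * theta)
        q3 = 1 / (1 / gamma + theta - theta / (2 * gamma))
        assert q1 * zeta > 1
        assert 2 * zeta > 1
        assert q3 * (2 - 2 * zeta) > 1
print("integrability of the time weights verified")
```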
In view of \eqref{12.13.a}, we see that \begin{align*} (1+t)^{2\zeta}\mathcal{D}_1(t) \lesssim \mathcal{D}_1(0) + \int_0^t (1+s)^{2\zeta} (\mathcal{D}_1+\mathcal{D}_2)(s)ds , \end{align*} which, together with \eqref{zetadec-1} and \eqref{l2decay}, proves \eqref{l2palphaest}. It follows from \eqref{hardy1}, \eqref{hardy2} and \eqref{har-1} that \begin{align}\label{12.15-1} \int (r-x)^2 dx \lesssim \int x^2 i^2(\bar\rho)(|r-x|^2+|r_x-1|^2) dx. \end{align} When $\bar\gamma\le 2$, we set $$\frac{1}{q_5}=\frac{2-\bar\gamma}{\bar\gamma}+\theta, \ \ \frac{1}{q_6}=1-\frac{1}{q_5} \ {\rm and} \ \delta_5=\frac{1}{2q_6-1},$$ then $q_5,q_6>1$ and $0<\delta_5<\bar\gamma-1$. So, it follows from \eqref{irho} and \eqref{April4} that $$ i^{2 q_6}(\bar\rho)\le 4 p(\bar\rho) \bar\rho^{-1} i^{2 q_6-1}(\bar\rho) \lesssim p(\bar\rho) \bar\rho^{\delta_5 (2 q_6-1)-1} = p(\bar\rho), $$ which gives, with the aid of the H\"{o}lder inequality and \eqref{12.15-1}, that \begin{align} \int (r-x)^2 dx \lesssim \mathcal{E}_0^{1/q_5}\left(\int i^{2 q_6}(\bar\rho) (|r-x|^2+|xr_x-x|^2)dx \right)^{1/q_6} \lesssim \mathcal{E}_0^{1/q_5}\mathcal{D}_0^{1/q_6}.\label{12.21-2} \end{align} When $\bar\gamma > 2$, it follows from \eqref{d-2} and \eqref{d-3} that $$p''(0)=\lim_{s\to 0+} \frac{p'(s)}{s} =\bar\gamma\lim_{s\to 0+} \frac{p(s)}{s^2} =0 .$$ Then, we use the L'H\^{o}pital rule, \eqref{d-0}, \eqref{d-00} and \eqref{21-11-2} to get $$ \lim_{s\to 0+}\frac{i^2(s)}{p(s)}= 2\lim_{s\to 0+}\frac{ i(s)}{s}=2\lim_{s\to 0+}\frac{p'(s)}{s}=2p''(0)=0, $$ so that $i^2(\bar\rho)\lesssim p(\bar\rho)$ and $ \int (r-x)^2 dx \lesssim \mathcal{D}_0 $, where \eqref{12.15-1} has been used to derive the last inequality. This, together with \eqref{12.21-2} and \eqref{l2palphaest}, proves \eqref{rxt00}. \hfill$\Box$ \begin{lemma}\label{lempoint} Let $\Upsilon$ be defined by \eqref{22.1.11-1}.
Then it holds that for $\theta\in \left(0, 1-5/(4\bar\gamma)\right]$ and $(x,t)\in I\times [0, T]$, \begin{subequations}\label{22.1.5-2}\begin{align} & \left(\Upsilon (1+t)^{2- {2}/{\bar\gamma}- {3\theta}/{2}} +(1-\Upsilon)(1+t)^{2\zeta-{1}/{2}}\right)x|r(x, t)-x|^2 \notag \\ &\quad +(1+t)^{2\zeta}x v^2(x,t) \lesssim C(\theta^{-1}) \left(\mathscr{E}_0 +\mathcal{D}_1+\mathcal{E}_2\right)(0) , \label{12.21-da}\\ & x^3|r_x(x, t)-1|^2 \lesssim C(\theta^{-1}) \left(\mathscr{E}_0 +\mathcal{D}_1+\mathcal{E}_2\right)(0)+x^3|r_{0x}(x) -1|^2,\label{12.21-1} \end{align}\end{subequations} where $\alpha=1-\theta$ and $\zeta=1- {1}/({2\bar\gamma}) - \theta/2$. \end{lemma} {\em Proof}. The bound for $xv^2(x, t)$ follows from \eqref{l2palphaest} and \begin{align*} & xv^2(x, t)=\int_0^x \left(y v^2(y, t)\right)_y dy \le \int v^2(y, t)dy \notag\\ &+2\left(\int v^2(y, t) dy\right)^{1/2}\left(\int y^2 v^2_y(y,t) dy\right)^{1/2} \le 3\mathcal{D}_1(t). \end{align*} Similarly, the bound for $x|r(x, t)-x|^2 $ follows from \eqref{12-21}. This finishes the proof of \eqref{12.21-da}. To bound $x^3|r_x(x, t)-1|^2 $, we integrate \eqref{remaineq} over $[x,\bar R]$ to obtain \begin{align}\label{Zp} \nu Z_t+ p\left(\frac{x^2 \bar\rho}{r^2} \right)- p\left(\frac{x^2 \bar\rho}{r^2 r_x} \right) = \mathscr{L}_1, \end{align} where \begin{align*} &Z(x,t)=\ln r_x,\\ &\mathscr{L}_1(x,t)=\left( \frac{4}{3}\nu_1-2\nu_2\right) \left(\ln \frac{ r}{x}\right)_t-\int_x^{\bar R} \mathscr{L}_2(y,t)dy,\\ &\mathscr{L}_2(x,t)=\bar\rho \frac{x^2}{r^2} v_t+\left[p\left(\frac{x^2 \bar\rho}{r^2} \right)\right]_x-\frac{x^4}{r^4} p(\bar\rho)_x -4\nu_1 \left(\frac{v}{r}\right)_x. 
\end{align*} It follows from the Taylor expansion that there exist constants $\vartheta_4, \vartheta_5 \in (0,1) $ such that \begin{align} &Z(x,t)=\frac{1}{\vartheta_4 r_x+1-\vartheta_4} (r_x-1), \label{12.21-4}\\ &p\left(\frac{x^2 \bar\rho}{r^2} \right)- p\left(\frac{x^2 \bar\rho}{r^2 r_x} \right)=p'\left(\frac{x^2 \bar\rho}{r^2 r_x}\left( \vartheta_5 r_x +1-\vartheta_5 \right)\right)\frac{x^2 \bar\rho}{r^2 r_x}(r_x-1). \notag \end{align} Thus, \begin{align*} &h(x,t)=\frac{1}{ p(\bar\rho) Z}\left\{ p\left(\frac{x^2 \bar\rho}{r^2} \right) -p\left(\frac{x^2 \bar\rho}{r^2 r_x} \right)\right\}\\ & =\frac{\vartheta_4 r_x+1-\vartheta_4}{\vartheta_5 r_x +1-\vartheta_5 } \frac{\check{\rho} p'(\check{\rho})}{ p(\bar\rho)}, \ {\rm where} \ \check{\rho}=\frac{x^2 \bar\rho}{r^2 r_x}\left( \vartheta_5 r_x +1-\vartheta_5 \right). \end{align*} Due to \eqref{priori1} and the smallness of $\varepsilon_0$, one has $$ \frac{1}{2} \frac{\check{\rho} p'(\check{\rho})}{ p(\bar\rho)} \le h(x,t) \le 2 \frac{\check{\rho} p'(\check{\rho})}{ p(\bar\rho)} \ \ {\rm and} \ \ \frac{1}{2} \bar\rho\le \check{\rho}\le \frac{3}{2} \bar\rho,$$ which, together with \eqref{21.3.15} and \eqref{til-2}, implies that \begin{align}\label{12-17-a} 0<4^{-1} K_2 \le h(x,t) \le 4 K_3 < \infty. \end{align} So, \eqref{Zp} can be rewritten as \begin{align}\label{12.21-h} \nu Z_t+h(x,t) p(\bar\rho) Z=\mathscr{L}_1, \end{align} which means \begin{align*} &Z(x, t)=\exp\left\{-\nu^{-1}p(\bar\rho) \int_0^t h(x, \tau)d\tau \right\} Z(x,0) \notag \\ & +\nu^{-1}\int_0^t \exp\left\{-\nu^{-1}p(\bar\rho) \int_s^t h(x, \tau)d\tau \right\}\mathscr{L}_1(x,s) ds. \end{align*} This gives, with the aid of \eqref{12.21-4} and \eqref{12-17-a}, that \begin{align}\label{zsim-1} &\left|r_x(x, t)-1\right| \lesssim \left|r_{0x}(x)-1\right| + \left|\int_0^t \exp\left\{-\nu^{-1}p(\bar\rho) \int_s^t h(x, \tau)d\tau \right\}\mathscr{L}_1(x,s) ds\right|. 
\end{align} For the first term in $\mathscr{L}_1$, we use the integration by parts, \eqref{12-17-a} and the Taylor expansion to get \begin{align} &\left|\int_0^t \exp\left\{-\nu^{-1}p(\bar\rho) \int_s^t h(x, \tau)d\tau \right\} \left(\ln \frac{ r(x,s)}{x}\right)_s ds\right|\notag\\ & \lesssim \sup_{s\in[0,t]}\ln \frac{ r(x,s)}{x} \lesssim \frac{1}{x} \sup_{s\in[0,t]} \left| r(x,s)-x\right|, \label{12.21-3} \end{align} by noticing that \begin{align} & p(\bar\rho) \int_0^t \exp\left\{-\nu^{-1}p(\bar\rho) \int_s^t h(x, \tau)d\tau \right\} ds\notag\\ & \le p(\bar\rho) \int_0^t \exp\left\{- \nu^{-1}p(\bar\rho) 4^{-1} K_2 (t-s) \right\} ds \le 4\nu K_2^{-1}. \label{12.20-a} \end{align} For the second term in $\mathscr{L}_1$, we have \begin{align} &\left|\int_x^{\bar R} \mathscr{L}_2(y,t)dy\right| \lesssim \int_x^{\bar R} y^{-2}\left(y^2 \bar\rho|v_t|+ y|v_y|+|v| \right)dy\notag\\ &\quad+\left|\frac{x^4}{r^4} p(\bar\rho) -p\left(\frac{x^2 \bar\rho}{r^2} \right) \right|+\int_x^{\bar R} y^{-2}p(\bar\rho) \left(y|r_y-1|+|r-y|\right)dy\notag\\ &\lesssim x^{-\frac{3}{2}}\left(\mathcal{E}_2^{\frac{1}{2}}+ \mathcal{D}_1^{\frac{1}{2}}\right)(t) +x^{-\frac{3}{2}}p(\bar\rho) \left(x^{\frac{1}{2}}\left|r(x,t)-x\right| + \mathcal{E}_0^{\frac{1}{2}}(t)\right),\label{12.19-a} \end{align} due to $ p(\bar\rho)\ge 0$, $(p(\bar\rho))_x\le 0$, and the following estimate: \begin{align} &\left|\frac{x^4}{r^4} p(\bar\rho) -p\left(\frac{x^2 \bar\rho}{r^2} \right) \right|\le p(\bar\rho) \left|\frac{x^4}{r^4} -1 \right|+\left|p(\bar\rho) -p\left(\frac{x^2 \bar\rho}{r^2} \right) \right|\notag\\ &\quad\le p(\bar\rho) \left|\frac{x^4}{r^4} -1 \right|+ 4K_3 p(\bar\rho) \left|1 -\frac{x^2 }{r^2} \right|.\label{12.19.1} \end{align} Here the derivation of \eqref{12.19.1} is the same as that of \eqref{12-17-a}. 
In view of \eqref{12.19-a}, \eqref{12-17-a}, \eqref{12.20-a}, $1<2\zeta$ and $\mathcal{E}_2\lesssim \mathcal{D}_2$, we see that \begin{align*} & x^{{3}/{2}} \int_0^t \exp\left\{-\nu^{-1}p(\bar\rho) \int_s^t h(x, \tau)d\tau \right\}\left|\int_x^{\bar R} \mathscr{L}_2(y,s)dy\right| ds\notag\\ & \lesssim \left(\int_0^t (1+s)^{-2\zeta}ds\right)^{1/2}\left(\int_0^t (1+s)^{2\zeta}\left(\mathcal{D}_1 + \mathcal{D}_2 \right)(s)ds\right)^{1/2} \notag \\ & + p(\bar\rho) \int_0^t \exp\left\{-\nu^{-1}p(\bar\rho) \int_s^t h(x, \tau)d\tau \right\} ds \sup_{s\in [0,t]} \left( x^{1/2}|r(x,s)-x|+ \mathcal{E}_0^{{1}/{2}}(s) \right)\notag\\ & \lesssim \left(\int_0^t (1+s)^{2\zeta}\left(\mathcal{D}_1 + \mathcal{D}_2 \right)(s) ds\right)^{1/2}+ \sup_{s\in [0,t]} \left( x^{1/2}|r(x,s)-x|+ \mathcal{E}_0^{{1}/{2}}(s) \right). \end{align*} This, together with \eqref{zsim-1}, \eqref{12.21-3}, \eqref{12.21-da} and \eqref{l2palphaest}, proves \eqref{12.21-1}. \hfill $\Box$ \subsection{Higher-order estimates}\label{22.1.4-3} This subsection consists of Lemma \ref{22.1.4-4}. \begin{lemma}\label{22.1.4-4} Let $\Upsilon$ be defined by \eqref{22.1.11-1} and $L^\infty=L^\infty(I)$. Then it holds that for $l\in (0, \bar R)$, $\theta\in \left(0, 1-5/(4\bar\gamma)\right]$ and $t\in [0, T]$, \begin{subequations}\label{22.1.5-1} \begin{align} &\mathfrak{E}(t)+ \left\| (D^{0,0}+D^{1,0})(\cdot,t)\right\|_{L^\infty}\lesssim \mathfrak{E}(0), \label{22.1.5}\\ &(1+t)^{2\zeta-1/2} \left\{ (1-\Upsilon) \left\| (r-x)^2(\cdot,t)\right\|_{L^\infty} +\left\|v^2(\cdot,t)\right\|_{L^\infty} \right\} \notag\\ & \ \ +(1+t)^{\zeta} \left\| (x^3v_x^2)(\cdot,t)\right\|_{L^\infty} + \Upsilon(1+t)^{2- {2}/{\bar\gamma}-3\theta/2} \left\| (r-x)^2(\cdot,t)\right\|_{L^\infty} \notag\\ & \ \ +(1+t)^{2\zeta-1} \left\{ \left\| {D}^{1,0} (\cdot,t)\right\|_{L^\infty} +\Upsilon\left\| \left(\bar\rho^{(3\bar\gamma-2)/2} \mathcal{Q}^2\right)(\cdot,t)\right\|_{L^\infty} \right.\notag\\ & \ \ \left. 
+\int \left(\bar\rho v^2_t + {D}^{0,0} \right) (x, t) dx \right\} \lesssim C(\theta^{-1})\mathfrak{E}(0),\label{12.28-9}\\ &(1+t)^{\min\left\{\frac{4-\bar\gamma}{2\bar\gamma} -\theta\left(\frac{\bar\gamma-1}{2\bar\gamma}+ \frac{4-\bar\gamma-\theta(\bar\gamma-1)} {4\bar\gamma-2}\right), \ 2\zeta-1 \right\}} \left\| (\bar\rho^{2}\mathcal{Q}^2) (\cdot,t)\right\|_{L^\infty}\notag\\ & \ \ \lesssim C(\theta^{-1})\mathfrak{E}(0), \ \ {\rm if} \ \ 2 < \bar\gamma<4 \ \ {\rm and}\ \ \theta<({4-\bar\gamma})/({\bar\gamma-1}),\label{20220111} \\ &(1+t)^{2\zeta-1} \left\{ \int_0^{\bar R -l} \left({D}^{0,1}+{D}^{1,1}\right)(x,t) dx \right. \notag\\ & \ \ \left. +\left\| {D}^{0,0} (\cdot, t)\right\|_{L^\infty\left([0, \bar R-l]\right)} \right\} \lesssim C\left(\theta^{-1}, \ l^{-1}\right)\mathfrak{E}(0),\label{22.1.6-1} \end{align} \end{subequations} where $\zeta=1- {1}/({2\bar\gamma}) - \theta/2$, $ {D}^{i,j} = |\partial_t^i\partial_x^j (r/x-1)|^2 +|\partial_t^i\partial_x^j(r_{x}-1)|^2 $ and $\mathcal{Q}={x^2}/({r^2r_x})-1$. \end{lemma} {\em Proof}. The proof of this lemma is based on the following estimates: \begin{subequations}\label{12.26-1}\begin{align} &\int w\left(10|(r/x)_x|^2+|r_{xx}|^2\right)dx\le 2 \int w |\mathcal{Q}_x|^2 dx, \label{wrxx}\\ &\int w \left(10|(v/x)_x|^2+|v_{xx}|^2\right)dx\le 6\int w\left(|\mathcal{Q}_{xt}|^2+32 |\mathcal{Q}_x|^2 \right)dx,\label{wvxx} \end{align}\end{subequations} for any given function $w(x)$ on $[0, \bar R]$ satisfying $w(x)\geq 0$ and $w'(x)\leq 0$ on $[0, \bar R],$ and $w(\bar R)=0$, where \begin{equation}\label{12.25-1} \mathcal{Q}={x^2}/({r^2r_x})-1. \end{equation} Indeed, the proof of \eqref{12.26-1} is the same as that of Lemma 3.9 in \cite{LZeng}, so we omit the detail here. {\em Step 1}. 
In this step, we prove that \begin{subequations}\begin{align} &\int \bar\rho^{-1} p^2(\bar\rho) {D}^{0,1} dx + \int_0^t \int \bar\rho^{-1} p^3(\bar\rho) {D}^{0,1}dx ds \lesssim \mathfrak{E}(0), \label{higlobal}\\ &(1+t)^{\zeta}x^3v_x^2(x, t) \lesssim C(\theta^{-1})\mathfrak{E}(0), \label{22.1.6-2} \\ & \Upsilon(1+t)^{2\zeta-1} x^3\bar\rho^{(3\bar\gamma-2)/2}(x)\mathcal{Q}^2(x,t) \lesssim C(\theta^{-1})\mathfrak{E}(0), \label{22.1.10-2} \end{align}\end{subequations} and for $ 2 < \bar\gamma<4$ and $\theta<({4-\bar\gamma})/({\bar\gamma-1})$, \begin{align} (1+t)^{\min\left\{\frac{4-\bar\gamma}{2\bar\gamma} -\theta\left(\frac{\bar\gamma-1}{2\bar\gamma}+ \frac{4-\bar\gamma-\theta(\bar\gamma-1)} {4\bar\gamma-2}\right), \ 2\zeta-1 \right\}} x^3\bar\rho^{2}(x)\mathcal{Q}^2(x,t) \lesssim C(\theta^{-1})\mathfrak{E}(0). \label{22.1.10-4} \end{align} We use \eqref{pxphi} to rewrite \eqref{fixedp-a} as \begin{align}\label{remainq} &\nu \left((1+ \mathcal{Q})^{-1} \mathcal{Q}_x\right)_t+ \bar\rho p'\left( \bar\rho(1+\mathcal{Q})\right)\mathcal{Q}_{x}=\mathcal{H}, \end{align} where $\mathcal{Q}$ is defined by \eqref{12.25-1}, and \begin{align*} & \mathcal{H} = \left(\frac{x^4}{r^4}-1-\mathcal{Q}\right) \left(p(\bar\rho)\right)_x +\left(p'(\bar\rho)- p'\left(\bar\rho(1+\mathcal{Q})\right)\right) (1+\mathcal{Q}) \bar\rho_x-\bar\rho \frac{x^2}{r^2} v_t. \end{align*} In view of the Taylor expansion, \eqref{pxphi}, \eqref{priori1}, \eqref{21.3.15} and \eqref{til-2}, we see that \begin{subequations}\label{12.25-2} \begin{align} &4^{-1}K_2 p(\bar\rho)\le \bar\rho p'\left( \bar\rho(1+\mathcal{Q})\right)\le 4K_3p(\bar\rho), \label{12.25-2.a} \\ &|\mathcal{H}|\lesssim \bar\rho \left(x|r_x-1|+|r-x|+|v_t|\right). 
\end{align} \end{subequations} In fact, the second term in $\mathcal{H}$ can be bounded as follows: \begin{align*} &\left|\left( p'\left(\bar\rho(1+\mathcal{Q})\right)-p'(\bar\rho) \right) \bar\rho_x \right|= \frac{\bar\rho \left|p''\left(\bar\rho \left(1+\vartheta_6\mathcal{Q}\right)\right) \right|}{p'(\bar\rho)} x\bar\rho\phi\left|\mathcal{Q}\right| \\ & \lesssim \frac{ p'\left(\bar\rho \left(1+\vartheta_6\mathcal{Q}\right)\right) }{p'(\bar\rho)} {x\bar\rho } \left|\mathcal{Q}\right| \lesssim \frac{ p\left(\bar\rho \left(1+\vartheta_6\mathcal{Q}\right)\right) }{p(\bar\rho)} {x\bar\rho }\left|\mathcal{Q}\right| \lesssim x\bar\rho \left|\mathcal{Q}\right| \end{align*} for some constant $\vartheta_6\in (0,1)$. We integrate the product of \eqref{remainq} and $\bar\rho^{-1} p^2(\bar\rho) (1+ \mathcal{Q})^{-1} \mathcal{Q}_x$ over $I$, and use the Cauchy inequality and \eqref{12.25-2} to obtain \begin{align}\label{qx} &\frac{\nu}{2}\frac{d}{dt}\int \bar\rho^{-1} p^2(\bar\rho)(1+\mathcal{Q})^{-2} \mathcal{Q}_{x}^2 dx + \frac{K_2}{8} \int \bar\rho^{-1} p^3(\bar\rho) \mathcal{Q}_{x}^2 dx \notag \\ & \lesssim \int \bar\rho p(\bar\rho) \left(v_t^2 + x^2|r_x-1|^2+|r-x|^2\right), \end{align} which gives, with the aid of \eqref{vvt} and the fact that $f(x,t)-f(0,t)=\int_0^x f_y(y,t)dy$ for any function $f$, that \begin{align}\label{wl2qx} \int \bar\rho^{-1} p^2(\bar\rho) \mathcal{Q}_{x}^2 dx + \int_0^t \int \bar\rho^{-1} p^3(\bar\rho) \mathcal{Q}_{x}^2 dxds \lesssim \mathfrak{E}(0). \end{align} Due to \eqref{pxphi}, \eqref{pcon2} and \eqref{d-3}, one has that for any constant $k > 3/4$, \begin{subequations}\label{12.25-3}\begin{align} & \bar\rho^{-1} p^k(\bar\rho) =0 \ {\rm at} \ x=\bar R,\\ &\left( \bar\rho^{-1} p^k(\bar\rho) \right)_x =-\frac{x \phi}{\bar\rho p'(\bar\rho)}p^{k-1}(\bar\rho) \left(k\bar\rho p'(\bar\rho)-p(\bar\rho)\right)\le 0, \end{align}\end{subequations} which, together with \eqref{wl2qx} and \eqref{wrxx}, proves \eqref{higlobal}. 
To prove \eqref{22.1.6-2}, we notice that \begin{align} &\left|v_x(x,t)\right|=\left|r_xZ_t\right|=\nu^{-1} r_x \left|\mathscr{L}_1- h p(\bar\rho) Z\right|\lesssim \left|\mathscr{L}_1\right|+ p(\bar\rho)\left| r_x-1\right|\notag\\ & \lesssim \left|\int_x^{\bar R} \mathscr{L}_2(y,t)dy\right| +x^{-1}|v(x,t)| + p(\bar\rho)\left| r_x(x,t)-1\right|,\notag \end{align} which is due to \eqref{12.21-4}-\eqref{12.21-h}; \begin{align*} \left|\int_x^{\bar R} \mathscr{L}_2(y,t)dy\right| \lesssim x^{-\frac{3}{2}}\left(\mathcal{E}_2^{\frac{1}{2}}+ \mathcal{D}_1^{\frac{1}{2}}+\mathcal{D}_0^{\frac{1}{2}} \right)(t) +x^{-1} p(\bar\rho) \left|r(x,t)-x\right| , \end{align*} which follows from the same derivation as that of \eqref{12.19-a}; and \begin{align*} &x^3 p^2(\bar\rho(x))\left(r_x(x,t)-1\right)^2 =\int_0^x \left( y^3 p^2(\bar\rho(y))\left(r_y(y,t)-1\right)^2\right)_y dy\\ & \ \lesssim \mathcal{D}_0(t)+ \int y^3 p^2(\bar\rho(y))|r_y(y,t)-1||r_{yy}(y,t)| dy \\ & \ \lesssim \mathcal{D}_0(t)+ \left( \int \bar\rho^{-1} p^2(\bar\rho(y)) r_{yy}^2(y,t) dy \right)^{1/2} \mathcal{D}_0^{1/2}(t),\\ & x p^2(\bar\rho(x))\left(r(x,t)-x\right)^2 \lesssim \mathcal{D}_0(t), \end{align*} which is due to \eqref{pxphi}. Then, \eqref{22.1.6-2} follows from \eqref{higlobal}, \eqref{12.21-da} and \eqref{l2palphaest}. When $\bar\gamma\le 2$, we use \eqref{pxphi} and the H$\ddot{o}$lder inequality to get \begin{align} &x^3\bar\rho^{(3\bar\gamma-2)/2}\mathcal{Q}^2= \int_0^x \left(y^3 \bar\rho^{(3\bar\gamma-2)/2} \mathcal{Q}^2 \right)_y dy \notag\\ & \le 3 \int_0^x y^2 \bar\rho^{(3\bar\gamma-2)/2} \mathcal{Q}^2 dy+2\int_0^x y^3 \bar\rho^{(3\bar\gamma-2)/2} \mathcal{Q} \mathcal{Q}_y dy \notag\\ & \lesssim \mathcal{E}_0 + \left(\int \bar\rho^{-1} p^2(\bar\rho) \mathcal{Q}_{y}^2 dy \right)^{1/2} \left(\int y^2 \bar\rho^{3\bar\gamma-1} p^{-2}(\bar\rho) \mathcal{Q}^2 dy \right)^{1/2}. 
\label{22.1.10-1} \end{align} We set $$\frac{1}{q_7}=2\zeta-1, \ \ \frac{1}{q_8}=1-\frac{1}{q_7} \ \ {\rm and} \ \ \delta_6=\frac{3q_8-1}{1+(3\bar\gamma-4)q_8}, $$ then $q_7,q_8>1$ and $1/\delta_6 >{\bar\gamma-1}$, which means, with the help of \eqref{irho} and \eqref{April4}, that \begin{align*} \bar\rho^{(3\bar\gamma-1)q_8} p^{1-3q_8}(\bar\rho) \lesssim \bar\rho^{1+(3\bar\gamma-4)q_8} i^{1-3q_8}(\bar\rho) \lesssim i^{\delta_6(1+(3\bar\gamma-4)q_8)+1-3q_8} (\bar\rho)=1. \end{align*} So, it follows from the H$\ddot{o}$lder inequality that \begin{align*} &\int y^2 \bar\rho^{3\bar\gamma-1} p^{-2}(\bar\rho) \mathcal{Q}^2 dy\le \left(\int y^2 p(\bar\rho) \mathcal{Q}^2 dy \right)^{1/q_7} \notag \\ & \times\left(\int y^2 \bar\rho^{(3\bar\gamma-1)q_8}p^{1-3q_8}(\bar\rho) \mathcal{Q}^2 dy \right)^{1/q_8} \le \mathcal{D}_0^{1/q_7} \mathcal{E}_0^{1/q_8}, \end{align*} which, together with \eqref{22.1.10-1}, \eqref{wl2qx} and \eqref{l2palphaest}, proves \eqref{22.1.10-2}. In a similar way to deriving \eqref{22.1.10-1}, we have \begin{align} x^3\bar\rho^2\mathcal{Q}^2 \lesssim \mathcal{E}_0 + \left(\int \bar\rho^{-1} p^2(\bar\rho) \mathcal{Q}_{y}^2 dy \right)^{1/2} \left(\int y^2 \bar\rho^{5} p^{-2}(\bar\rho) \mathcal{Q}^2 dy \right)^{1/2}. \label{22.1.10-3} \end{align} When $ 2<\bar\gamma<4$ and $\theta<({4-\bar\gamma})/({\bar\gamma-1})$, we set $$\frac{1}{q_9}=\frac{4-\bar\gamma-\theta(\bar\gamma-1)} {2\bar\gamma-1}, \ \frac{1}{q_{10}}=1-\frac{1}{q_9} \ {\rm and} \ \delta_7=\frac{(2-\alpha) q_9+1+\alpha}{3q_9-1}, $$ then $q_9,q_{10}>1$ and $1/\delta_7 >{\bar\gamma-1}$. 
So, $$\bar\rho^{5q_9}p^{-2 q_9-1}(\bar\rho) i^{\alpha(q_9-1)} \lesssim \bar\rho^{3q_9-1}i^{(\alpha-2) q_9-1-\alpha}(\bar\rho) \lesssim i^{\delta_7(3q_9-1)+(\alpha-2) q_9-1-\alpha }(\bar\rho)=1,$$ which means, using the H$\ddot{o}$lder inequality, that \begin{align} &\int y^2 \bar\rho^{5} p^{-2}(\bar\rho) \mathcal{Q}^2 dy \le \left(\int \bar\rho^{5q_9}p^{-2 q_9-1}(\bar\rho) i^{\alpha(q_9-1)} (\bar\rho) y^2 p(\bar\rho) \mathcal{Q}^2 dy\right)^{1/q_{9}} \notag\\ &\times\left(\int y^2 i^{-\alpha}(\bar\rho) \mathcal{Q}^2 dy\right)^{1/q_{10}} \le \mathcal{D}_0^{1/q_9}\mathscr{E}_0^{1/q_{10}}.\notag \end{align} This proves \eqref{22.1.10-4}, by noting \eqref{22.1.10-3}, \eqref{wl2qx} and \eqref{l2palphaest}. {\em Step 2}. In this step, we prove that \begin{align} (1+s)^{2\zeta-1}\int \bar\rho v^2_t dx+\int_0^t (1+s)^{2\zeta-1}\int {D}^{2,0} dxds\lesssim C(\theta^{-1})\mathfrak{E}(0)\label{glovt}. \end{align} In a similar way to deriving \eqref{qx}, one has \begin{align} &\frac{\nu}{2}\frac{d}{dt}\int \psi \bar\rho^{-1} p^2(\bar\rho)(1+\mathcal{Q})^{-2} \mathcal{Q}_{x}^2 dx + \frac{K_2}{8} \int \psi \bar\rho^{-1} p^3(\bar\rho) \mathcal{Q}_{x}^2 dx \notag \\ & \lesssim \int \psi \bar\rho p(\bar\rho) \left(v_t^2 + x^2|r_x-1|^2+|r-x|^2\right), \label{psil2} \end{align} where $\psi$ is a smooth cut-off function on $[0, \bar R]$ satisfying that for any fixed constant $l\in (0, \bar R)$, \begin{equation}\label{psir}\begin{split} &\psi=1 \ \ {\rm on} \ \ [0, \bar R-l], \ \ \psi=0 \ \ {\rm on} \ \ [\bar R-l/2, \bar R],\\ &{\rm and} \ \ -8/l\leq \psi'(x)\leq 0 \ \ {\rm on} \ \ [0, \bar R].
\end{split} \end{equation} In view of \eqref{irho}, \eqref{April4} and \eqref{har-1}, we see that on $[0,\bar R-l/2]$, \begin{align}\label{12.27-1} p^{-1}(\bar\rho)\lesssim \bar\rho^{-1} i^{-1}(\bar\rho)\lesssim i^{-1-8/(7\bar\gamma-7)} \lesssim (\bar R -x)^{-1-8/(7\bar\gamma-7)} \lesssim C(l^{-1}), \end{align} which, together with \eqref{psil2}, \eqref{l2palphaest}, \eqref{wl2qx} and $2\zeta-1<1$, implies that \begin{align} &(1+t)^{2\zeta-1}\int \psi \bar\rho^{-1} p^2(\bar\rho) \mathcal{Q}_{x}^2 dx +\int_0^t (1+s)^{2\zeta-1} \int \psi \bar\rho^{-1} p^3(\bar\rho) \mathcal{Q}_{x}^2 dxds \notag \\ & \lesssim C(\theta^{-1})\mathfrak{E}(0) +\int_0^t \int_0^{\bar R -l/2} \psi \bar\rho^{-1} p^2(\bar\rho) \mathcal{Q}_{x}^2 dxds\notag\\ & \lesssim C(\theta^{-1})\mathfrak{E}(0) +C(l^{-1})\int_0^t \int \bar\rho^{-1} p^3(\bar\rho) \mathcal{Q}_{x}^2 dxds\lesssim C(\theta^{-1}, l^{-1})\mathfrak{E}(0). \label{qxdec} \end{align} It follows from \eqref{remainq}, \eqref{21apri} and \eqref{12.25-2} that \begin{align} & |\mathcal{Q}_{xt}| \lesssim |\mathcal{Q}_{x}|+\bar\rho \left(x|r_x-1|+|r-x|+|v_t|\right), \label{12.27-h} \\ &\int \psi \bar\rho^{-1} p^3(\bar\rho) \mathcal{Q}_{xt}^2 dx \lesssim \int \psi \bar\rho^{-1} p^3(\bar\rho) \mathcal{Q}_{x}^2 dx +\mathcal{D}_0+\mathcal{D}_2,\notag \end{align} which means, with the help of \eqref{l2palphaest} and \eqref{qxdec}, that $$ \int_0^t (1+s)^{2\zeta-1}\int \psi \bar\rho^{-1} p^3(\bar\rho) \mathcal{Q}_{xs}^2 dx ds \lesssim C(\theta^{-1}, l^{-1})\mathfrak{E}(0). $$ This, together with \eqref{12.26-1}, \eqref{12.25-3}, \eqref{psir} and \eqref{qxdec}, gives that \begin{align} &(1+t)^{2\zeta-1}\int \psi \bar\rho^{-1} p^2(\bar\rho) {D}^{0,1}(x,t) dx \notag\\ & +\int_0^t (1+s)^{2\zeta-1} \int \psi \bar\rho^{-1} p^3(\bar\rho) {D}^{1,1}(x,s) dxds \lesssim C(\theta^{-1}, l^{-1})\mathfrak{E}(0).
\label{wrvxx} \end{align} Take $\partial_t$ on \eqref{remaineq} and use \eqref{pxphi} to yield \begin{align*} &\bar\rho \frac{x^2}{r^2}v_{tt} -\nu\left(2\frac{v_t}{r}+\frac{v_{tx}}{r_x}\right)_x =-\bar\rho \left(\frac{x^2}{r^2}\right)_tv_{t} \\ & -\left[p\left(\frac{x^2 \bar\rho}{r^2 r_x} \right)\right]_{tx}+\left(\frac{x^4}{r^4} \right)_t \left(p(\bar\rho)\right)_x -\nu\left(2\frac{v^2}{r^2} +\frac{v_x^2}{r_x^2}\right)_x. \end{align*} Letting $\psi$ be defined by \eqref{psir} with $l=\bar R/2$, we integrate the product of the equation above and $\psi v_t$ over $[0, \bar R]$, and use the boundary condition $ v_t(0, t)=\psi(\bar R)=0$, \eqref{pxphi}, \eqref{21apri}, \eqref{12.25-2.a} and the Cauchy inequality to obtain \begin{align} &\frac{1}{2}\frac{d}{dt}\int \psi \bar\rho \frac{x^2}{r^2} v_t^2 dx +\frac{\nu}{2} \int \psi \left(r_x\frac{v_t^2}{r^2} +\frac{v_{tx}^2}{r_x}\right) dx \notag\\ &\lesssim (\mathcal{D}_1+\mathcal{D}_2)(t) +\int {D}^{1,0}(x,t)dx ,\label{intvt} \end{align} where we have used the following estimate: $$ \int \left(2\frac{v_t}{r}+\frac{v_{tx}}{r_x}\right) (\psi v_t)_x dx= \int \psi \left(r_x\frac{v_t^2}{r^2} +\frac{v_{tx}^2}{r_x}\right) dx + \int \psi'\left(\frac{v_t^2}{r} +\frac{v_tv_{tx}}{r_x}\right) dx. $$ Due to \eqref{hardy1}, one has \begin{align}\label{12.28.1} \int {D}^{1,0}dx \lesssim \sum_{j=0,1}\int_0^{\bar R/2} x^2 {D}^{1,j}dx + \int_{\bar R/2}^{\bar R} x^2 {D}^{1,0} dx \lesssim \int_0^{\bar R/2} {D}^{1,1}dx+ \mathcal{D}_1, \end{align} which means, using \eqref{l2palphaest}, and \eqref{wrvxx} with $l=\bar R/2$, that \begin{align} \int_0^t (1+s)^{2\zeta-1}\int {D}^{1,0}(x,s) dxds\lesssim C(\theta^{-1}) \mathfrak{E}(0). \label{iivx} \end{align} It follows from \eqref{intvt}, \eqref{l2palphaest}, \eqref{iivx} and $0<2\zeta-1<1$ that $$ (1+s)^{2\zeta-1}\int \psi \bar\rho v^2_t dx+\int_0^t (1+s)^{2\zeta-1}\int \psi {D}^{2,0} dxds\lesssim C(\theta^{-1})\mathfrak{E}(0), $$ which proves \eqref{glovt}, by use of \eqref{l2palphaest}.
{\em Step 3}. This step is devoted to proving \eqref{22.1.5-1}. In view of \eqref{12.27-h}, \eqref{qxdec}, \eqref{glovt} and \eqref{l2palphaest}, we see that $$ (1+t)^{2\zeta-1}\int \psi \bar\rho^{-1} p^2(\bar\rho) \mathcal{Q}_{xt}^2 dx \lesssim C(\theta^{-1}, l^{-1})\mathfrak{E}(0), $$ where $\psi$ is defined by \eqref{psir} for any fixed constant $l\in (0, \bar R)$. Thus, we use \eqref{wvxx}, \eqref{12.25-3}, \eqref{psir} and \eqref{qxdec} to get $$ (1+t)^{2\zeta-1}\int \psi \bar\rho^{-1} p^2(\bar\rho) {D}^{1,1}(x,t) dx \lesssim C(\theta^{-1}, l^{-1})\mathfrak{E}(0), $$ which, together with \eqref{wrvxx} and \eqref{12.27-1}, implies \begin{align} (1+t)^{2\zeta-1} \int_0^{\bar R -l} \left({D}^{0,1}+{D}^{1,1}\right)(x,t) dx \lesssim C\left(\theta^{-1}, \ l^{-1}\right)\mathfrak{E}(0). \label{locdec} \end{align} In a similar way to deriving \eqref{12.28.1}, we have $\int {D}^{0,0}dx \lesssim \int_0^{\bar R/2} {D}^{0,1}dx+ \mathcal{E}_0$, which means, with the aid of \eqref{12.28.1}, \eqref{l2palphaest}, and \eqref{locdec} with $l=\bar R/2$, that \begin{align}\label{22.1.6-4} (1+t)^{2\zeta-1} \int \left({D}^{0,0}+D^{1,0}\right)(x,t)dx \lesssim C(\theta^{-1})\mathfrak{E}(0). \end{align} Due to \eqref{locdec}, \eqref{22.1.6-4} and $\|\cdot\|_{L^\infty}\lesssim \|\cdot\|_{H^1} $, one has \begin{align}\label{22.1.7-1} (1+t)^{2\zeta-1} \left\|\left({D}^{0,0} + {D}^{1,0} \right)(\cdot, t)\right\|_{L^\infty\left([0, \bar R-l]\right)} \lesssim C\left(\theta^{-1}, \ l^{-1}\right)\mathfrak{E}(0), \end{align} which proves \eqref{22.1.6-1}, using \eqref{locdec}. Estimate \eqref{20220111} follows from \eqref{22.1.10-4} and \eqref{22.1.7-1} with $l=\bar R/2$.
It follows from the fact that $\|f\|_{L^\infty}^2\le 2\|f\|_{L^2}\|f_x\|_{L^2}$ for any function $f$ satisfying $f(x=0)=0$, \eqref{12-21} and \eqref{22.1.6-4} that \begin{align} & \left( \Upsilon(1+t)^{2- {2}/{\bar\gamma}-3\theta/2} + (1-\Upsilon) (1+t)^{2\zeta-1/2} \right) |r(x,t)-x|^2 \notag\\ & +(1+t)^{2\zeta-1/2}v^2(x,t)\lesssim C(\theta^{-1})\mathfrak{E}(0), \label{22-1-7-1} \end{align} which, together with \eqref{22.1.7-1} with $l=\bar R/2$ and \eqref{22.1.6-2}, means \begin{align} (1+t)^{2\zeta-1} D^{1,0} (x, t) \lesssim C(\theta^{-1})\mathfrak{E}(0). \label{22-1-7-2} \end{align} Estimate \eqref{12.28-9} follows from \eqref{22.1.6-2}, \eqref{22.1.10-2}, \eqref{glovt}, and \eqref{22.1.6-4}-\eqref{22-1-7-2} with $l=\bar R/2$. Letting $\theta=1/32$ and $l=\bar R/2$ in \eqref{12.21-1} and \eqref{22.1.7-1}-\eqref{22-1-7-2} gives $ \left( D^{0,0}+{D}^{1,0} \right)(x,t) \lesssim \mathfrak{E}(0)$, which proves \eqref{22.1.5}, by use of \eqref{glovt} with $\theta=1/32$ and \eqref{higlobal}. \hfill $\Box$ \subsection*{Acknowledgements.} This research was supported in part by the National Natural Science Foundation of China (NSFC) under Grants 12171267, 11822107 and 12101350; China Postdoctoral Science Foundation under grant 2021M691818; and a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. 11307420). \bibliographystyle{plain}
\section{Introduction} Given an $m$-dimensional, piecewise-linear, compact manifold $M$ which is homotopy equivalent to some closed manifold $N$ of dimension $n<m$, a \emph{spine} of $M$ is a piecewise-linear embedding $N \to M$ which is a homotopy equivalence. Such an embedding is not required to be locally flat. We call $M$ \emph{spineless} if it does not admit a spine. In this paper, we prove: \begin{theorem}\label{thm:spineless} There exist infinitely many smooth, compact, spineless $4$-manifolds which are homotopy equivalent to $S^2$. \end{theorem} By way of background, Browder \cite{BrowderEmbedding}, Casson, Haefliger \cite{HaefligerKnotted}, Sullivan, and Wall \cite{WallSurgery} showed that when $m-n > 2$, any homotopy equivalence from $N$ to $M$ can be perturbed into a spine. When $m-n=2$, Cappell and Shaneson \cite{CappellShanesonPL} showed that the same is true for any odd $m\ge 5$, and for any even $m \ge 6$ provided that $M$ and $N$ are simply-connected; they also produced examples of non-simply-connected, spineless manifolds for any even $m \ge 6$ \cite{CappellShanesonSpineless}. (See \cite{ShanesonSpines} for a summary of their results.) In dimension $4$, Matsumoto \cite{MatsumotoSpine} produced an example of a spineless $4$-manifold homotopy equivalent to the torus; the proof relies on higher-dimensional surgery theory. However, the question of finding spineless, simply-connected $4$-manifolds has remained open until now. (It appears in Kirby's problem list \cite[Problem 4.25]{Kirby}.) The proof of the theorem proceeds in two parts. The first is to give an obstruction to a spine in a PL $4$-manifold homotopy equivalent to $S^2$ coming from Heegaard Floer homology. This obstruction only depends on the boundary of the 4-manifold and the sign of the intersection form. The second step is to construct the manifolds homotopy equivalent to $S^2$ that fail the obstruction. 
\subsection*{Acknowledgments} We are grateful to Weimin Chen, \c{C}a\u{g}r{\i} Karakurt, and Yukio Matsumoto for bringing this problem to our attention, and to Marco Golla, Josh Greene, Jen Hom, and Danny Ruberman for helpful conversations. \section{Obstruction} \label{sec:d-invt} In order to prove Theorem~\ref{thm:spineless}, we use an obstruction coming from Heegaard Floer homology. Recall that for any rational homology sphere $Y$ and any spin$^c$ structure $\spincs$ on $Y$, Ozsv\'ath and Szab\'o \cite{OSabsgr} define the \emph{correction term} $d(Y,\spincs) \in \Q$, which is invariant under spin$^c$ rational homology cobordism. To state our obstruction, we first establish the following notational convention. \begin{convention} \label{conv:spinc} Suppose $X$ is a smooth, compact, oriented $4$-manifold with $H_*(X) \cong H_*(S^2)$, and let $n$ denote the self-intersection number of a generator of $H_2(X)$. Let $Y = \partial X$, which has $H_1(Y) \cong H^2(Y) \cong \Z/n$. Fix a generator $\alpha \in H_2(X)$. For $i \in \Z$, let $\spinct_i$ denote the unique spin$^c$ structure on $X$ with \[ \gen{c_1(\spinct_i), \alpha} + n = 2i. \] Let $\spincs_i = \spinct_i |_Y$; this depends only on the class of $i$ mod $n$. We will often treat the subscript of $\spincs_i$ as an element of $\Z/n$. Conjugation of spin$^c$ structures swaps $\spinct_i$ with $\spinct_{n-i}$ and $\spincs_i$ with $\spincs_{n-i} = \spincs_{-i}$. In particular, $\spincs_0$ is self-conjugate, as is $\spincs_{n/2}$ if $n$ is even. Choosing the opposite generator for $H_2(X)$ likewise replaces each $\spinct_i$ or $\spincs_i$ with its conjugate. Because of the conjugation symmetry of Heegaard Floer homology, all statements below are insensitive to this choice. Finally, when $n \ne 0$, we have \begin{equation} \label{eq:d-mod2} d(Y,\spincs_i) \equiv \frac{(2i-n)^2 - \abs{n} }{4n} \pmod {2\Z} \end{equation} by \cite[Theorem 1.2]{OSabsgr}. 
\end{convention} Our obstruction to the existence of a spine comes from the following theorem: \begin{theorem}\label{thm:obstruction} Let $X$ be any smooth, compact, oriented $4$-manifold with $H_*(X) \cong H_*(S^2)$, with a generator of $H_2(X)$ having self-intersection $n > 1$, and let $Y = \partial X$. If a generator of $H_2(X)$ can be represented by a piecewise-linear embedded $2$-sphere (e.g., if $X$ admits an $S^2$ spine), then for each $i \in \{0, \dots, n-1\}$, \begin{equation} \label{eq:obstruction} d(Y, \spincs_i) - d(Y, \spincs_{i+1}) = \begin{cases} \dfrac{n-2i-1}{n} \text{ or } \dfrac{-n-2i-1}{n} & \text{if } 0 \le i \le \dfrac{n-2}{2} \\ 0 & \text{if $n$ is odd and } i = \dfrac{n-1}{2} \\ \dfrac{n-2i-1}{n} \text{ or } \dfrac{3n-2i-1}{n} & \text{if } \dfrac{n}{2} \le i \le n-1. \end{cases} \end{equation} In particular, for any $i$, we have \begin{equation} \label{eq:Y-abs} \abs{d(Y, \spincs_i) - d(Y, \spincs_{i+1})} \le \frac{2n-1}{n}. \end{equation} \end{theorem} Note that \eqref{eq:Y-abs} is an immediate consequence of \eqref{eq:obstruction}. For any knot $K \subset S^3$, let $X_n(K)$ denote the trace of $n$-surgery on $K$, i.e., the manifold obtained by attaching an $n$-framed $2$-handle to the $4$-ball along a knot $K \subset S^3$. Note that $X_n(K)$ is homotopy equivalent to $S^2$ and has a spine obtained as the union of the cone over $K$ in $B^4$ with the core of the $2$-handle. \begin{lemma} \label{lemma:S3-surgery} For any knot $K \subset S^3$ and any $n>0$, the manifold $Y = S^3_n(K)$ satisfies the conclusions of Theorem \ref{thm:obstruction}. \end{lemma} \begin{proof} Associated to any knot $K \subset S^3$, Ni and Wu \cite[Section 2.2]{NiWu} defined a sequence of nonnegative integers $V_i(K)$, which are derived from the knot Floer complex of $K$. (See also \cite{RasmussenThesis}.)
By \cite[Equation 2.3]{HomWu4Genus}, these numbers have the property that \begin{equation} \label{eq:Vi-noninc} V_i(K)-1 \le V_{i+1}(K) \le V_i(K); \end{equation} that is, the sequence $(V_i(K))$ is non-increasing and only decreases in increments of $1$. Ni and Wu proved that for each $i=0, \dots, n-1$, we have \begin{equation} \label{eq:NiWu} d(S^3_n(K), \spincs_i) = \frac{(2i-n)^2-n}{4n} - 2 \max\{ V_i(K), V_{n-i}(K)\}. \end{equation} (The first term in \eqref{eq:NiWu} is the $d$ invariant of the lens space $L(n,1)$ in a particular spin$^c$ structure; see \cite[Proposition 4.8]{OSabsgr}.) For $0 \le i \le \frac{n-2}{2}$, we then compute: \begin{align*} d(S^3_n(K), \spincs_i) - d(S^3_n(K), \spincs_{i+1}) &= \frac{(2i-n)^2 - (2i+2-n)^2}{4n} - 2(V_i(K)-V_{i+1}(K)) \\ &= \frac{n-2i-1}{n} - 2(V_i(K)-V_{i+1}(K)) \\ &= \frac{n-2i-1}{n} \text{ or } \frac{-n-2i-1}{n}. \end{align*} (The last line follows from the fact that $V_i(K)-V_{i+1}(K)$ equals either $0$ or $1$.) If $\frac{n}{2} \le i \le n-1$, then \begin{align*} d(S^3_n(K), \spincs_i) - d(S^3_n(K), \spincs_{i+1}) &= d(S^3_n(K), \spincs_{n-i}) - d(S^3_n(K), \spincs_{n-i-1}), \end{align*} and we may apply the previous case using $n-i-1$ in place of $i$. Finally, in the special case where $n$ is odd and $i=\frac{n-1}{2}$, it is easy to compute that \[ d(S^3_{n}(K), \spincs_i) = d(S^3_{n}(K), \spincs_{i+1}) = \frac{1-n}{4n} - 2V_{(n-1)/2}(K), \] so the difference is $0$, as required. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:obstruction}] Suppose $S$ is a PL embedded sphere representing a generator of $H_2(X)$. We may assume that $S$ has a single singularity modeled on the cone of a knot $K \subset S^3$ and is otherwise smooth. Therefore, $S$ has a tubular neighborhood diffeomorphic to $X_n(K)$. To see this, observe that a neighborhood of the cone point is a copy of $B^4$ and the rest of the neighborhood then makes up a 2-handle attached along $K$. 
That the framing is $n$ follows from the fact that the intersection form of $X$ is $(n)$. The complement of the interior of this neighborhood is a homology cobordism between $S^3_n(K)$ and $Y$; moreover, for each $i \in \Z/n$, the spin$^c$ structures labeled $\spincs_i$ on $S^3_n(K)$ and $Y$ as in Convention~\ref{conv:spinc} are identified through this cobordism. In particular, $d(Y,\spincs_i) = d(S^3_n(K), \spincs_i)$. By Lemma \ref{lemma:S3-surgery}, we deduce that the conclusions of the theorem hold for $Y$. \end{proof} \begin{remark} \label{rmk:NiWu-fail} For surgery on a knot $K$ in an arbitrary homology sphere $Y$, the analogue of the Ni--Wu formula \eqref{eq:NiWu} need not hold. Instead, just as in our paper with Hom \cite[Lemma 2.2]{HomLevineLidmanConcordance}, one can prove an inequality \begin{equation} \label{eq:NiWu-ineq} -2N_Y \leq d(Y_n(K), \spincs_i) - d(Y) - \frac{(2i-n)^2-n}{4n} + 2 \max\{ V_i(K), V_{n-i}(K)\} \leq 0 \end{equation} where \[ N_Y = \min \{k \geq 0 \mid U^k \cdot \HFred(Y) = 0 \}. \] It is precisely the failure of \eqref{eq:NiWu} to hold in general that makes it possible to obstruct the existence of PL disks and spheres. \end{remark} \begin{remark} \label{rmk:0-framing} There is also an obstruction to the existence of a PL sphere in the case where $n=0$, although we do not know of any actual example where it is effective. If $Y$ is any $3$-manifold with vanishing triple cup product on $H^1(Y)$, and $\spincs$ is any torsion spin$^c$ structure on $Y$, then there are two relevant invariants to consider: the untwisted ``bottom'' $d$ invariant $d_b(Y,\spincs)$ defined by Ozsv\'ath and Szab\'o \cite{OSabsgr} (see also \cite{LRS}), and the totally twisted $d$ invariant $\ul d(Y,\spincs)$ defined by Behrens and Golla \cite{BehrensGollaCorrection}. These invariants are both preserved under spin$^c$ homology cobordism, and they satisfy $\ul d(Y,\spincs) \le d_b(Y,\spincs)$ \cite[Proposition 3.8]{BehrensGollaCorrection}. 
We do not know of any $3$-manifold for which this inequality is strict. For any knot $K \subset S^3$, Behrens and Golla showed that $\ul d(S^3_0(K), \spincs_0) = d_b(S^3_0(K), \spincs_0)$, where $\spincs_0$ denotes the unique torsion spin$^c$ structure \cite[Example 3.9]{BehrensGollaCorrection}. Just as in the proof of Theorem \ref{thm:obstruction}, it follows that if $X$ is a smooth $4$-manifold with the homology of $S^2$ and vanishing intersection form, and the generator of $H_2(X)$ can be represented by a PL sphere, then $\ul d(\partial X, \spincs_0) = d_b(\partial X, \spincs_0)$. \end{remark} \section{Construction} We now describe a family of $4$-manifolds homotopy equivalent to $S^2$ which fail to satisfy the conclusion of Theorem \ref{thm:obstruction}. \begin{figure} \subfigure[]{ \labellist \pinlabel $0$ [t] at 17 8 \pinlabel $m+2$ [t] at 60 8 \endlabellist \includegraphics{T24} \label{subfig:Qm-T24}} \subfigure[]{ \labellist \pinlabel $0$ [t] at 17 8 \pinlabel $m-2$ [t] at 60 8 \endlabellist \includegraphics{T2-4} } \subfigure[]{ \labellist \pinlabel $-2$ [b] at 32 95 \pinlabel $-1$ [b] at 72 95 \pinlabel $m$ [b] at 112 95 \pinlabel $-2$ [t] at 72 8 \endlabellist \includegraphics{seifert} \label{subfig:Qm-seifert}} \caption{Three surgery descriptions of $Q_m$.} \label{fig:Qm} \end{figure} For any integer $m$, let $Q_m$ denote the total space of a circle bundle over $\RP^2$ with Euler number $m$. This is a rational homology sphere with \[ H_1(Q_m) \cong \begin{cases} \Z/2 \oplus \Z/2 & m \text{ even} \\ \Z/4 & m \text{ odd}. \end{cases} \] The manifold $Q_m$ can be described by any of the surgery diagrams in Figure \ref{fig:Qm}. For any $m$, Doig \cite[Section 3]{Doig} proved that the $d$ invariants of $Q_m$ in the four spin$^c$ structures are \begin{equation}\label{eq:d(Qm)} \left\{\frac{m+2}{4}, \frac{m-2}{4}, 0, 0\right\}. \end{equation} (See also the work of Ruberman, Strle, and the first author \cite[Theorem 5.1]{LRS}.) 
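The computation of $H_1(Q_m)$ above can be checked mechanically as the cokernel of a surgery linking matrix. A minimal sketch, under the assumption (read off from the diagram in Figure \ref{subfig:Qm-T24}, not stated in the text) that the two surgery components have linking number $2$, so that the linking matrix is $\left(\begin{smallmatrix}0&2\\2&m+2\end{smallmatrix}\right)$:

```python
from math import gcd

def h1_divisors(m):
    """Elementary divisors (d1, d2) of the cokernel of the presumed
    linking matrix [[0, 2], [2, m + 2]], so that H_1(Q_m) = Z/d1 + Z/d2.
    For a 2x2 integer matrix, d1 is the gcd of the entries and
    d1 * d2 = |det|."""
    a, b, c, d = 0, 2, 2, m + 2
    det = abs(a * d - b * c)  # equals 4 for every m
    d1 = gcd(gcd(abs(a), abs(b)), gcd(abs(c), abs(d)))
    return d1, det // d1

for m in (-3, -2, 0, 1, 6, 7):
    # m even -> (2, 2), i.e. Z/2 + Z/2;  m odd -> (1, 4), i.e. Z/4
    print(m, h1_divisors(m))
```

Both outcomes have order $4=|\det|$, consistent with $Q_m$ being a rational homology sphere.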
\begin{figure} \labellist \pinlabel $p$ [b] at 41 95 \pinlabel $-2$ [b] at 81 95 \pinlabel $-1$ [b] at 121 95 \pinlabel $-4p-3$ [b] at 161 95 \pinlabel $-2$ [t] at 121 8 \pinlabel $K_p$ [r] at 9 72 \endlabellist \includegraphics{plumbing} \caption{Surgery description of the Brieskorn sphere $Y_p$. The knot $K_p$ represents a singular fiber in a Seifert fibration on $Y_p$.} \label{fig:plumbing} \end{figure} For each integer $p$, let $Y_p$ be the $3$-manifold given by the surgery diagram in Figure \ref{fig:plumbing}, which naturally bounds a plumbed $4$-manifold. It is easy to check that $Y_p$ is the Seifert fibered homology sphere \[ Y_p \cong \begin{cases} \Sigma(2,-(2p+1),-(4p+3)) & p < -1 \\ S^3 & p=-1,0 \\ -\Sigma(2,2p+1,4p+3) & p > 0. \end{cases} \] (Our convention is that for pairwise relatively prime integers $a,b,c>0$, the Brieskorn sphere $\Sigma(a,b,c)$ is oriented as the boundary of a positive-definite plumbing. Note, however, that the plumbing shown in Figure \ref{fig:plumbing} is indefinite.) Let $K_p \subset Y_p$ be the knot obtained as a meridian of the $p$-framed surgery curve, shown in Figure~\ref{fig:plumbing}. In the cases $p=-1$ or $p=0$, where $Y_p \cong S^3$, $K_p$ is the unknot or the right-handed trefoil, respectively; otherwise, $K_p$ is the singular fiber of order $2p+1$. The $0$-framing on this curve (viewed as a knot in $S^3$) corresponds to the $+4$ framing on $K_p$ (as a knot in $Y_p$). Performing surgery using this framing produces $Q_{-4p-3}$, since we can cancel the $p$-framed component with its $0$-framed meridian to produce Figure \ref{subfig:Qm-seifert} with $m=-4p-3$. We are now able to construct the spineless four-manifolds claimed in Theorem~\ref{thm:spineless}. Define $W_p$ to be the four-manifold obtained by taking $(Y_p - B^3) \times [0,1]$, which has boundary $Y_p \conn {- Y_p}$, and attaching a $+4$-framed 2-handle along the knot $K_p \times \{1\}$.
The boundary of $W_p$ is $Q_{-4p-3} \conn {- Y_p}$; denote this three-manifold by $M_p$. \begin{proposition} For each $p$, the manifold $W_p$ is homotopy equivalent to $S^2$. \end{proposition} \begin{proof} First, notice that $(Y_p - B^3) \times [0,1]$ is an integer homology ball, so after attaching the 2-handle, $W_p$ has the same homology as $S^2$. To show that $W_p$ is simply-connected (and hence homotopy equivalent to $S^2$), it is sufficient to show that the homotopy class of $K_p$ normally generates $\pi_1(Y_p)$. This is obvious in the case that $p = -1, 0$ as $Y_p = S^3$. The following lemma proves this claim in the remaining cases. \end{proof} \begin{figure} \labellist \pinlabel $\dfrac{p}{p'}$ [r] at 10 79 \pinlabel $\dfrac{q}{q'}$ [l] at 135 79 \pinlabel $\dfrac{r}{r'}$ [r] at 50 34 \pinlabel $e$ [bl] at 80 102 \small \pinlabel $x$ [b] at 32 112 \pinlabel $h$ [b] at 72 112 \pinlabel $y$ [b] at 112 112 \pinlabel $z$ [t] at 72 8 \endlabellist \includegraphics{seifert-pi1} \caption{Surgery description of $\Sigma(p,q,r)$, along with generators for $\pi_1$.} \label{fig:seifert-pi1} \end{figure} \begin{lemma}\label{lemma:brieskornweightone} For any pairwise relatively prime integers $p,q,r$, the fundamental group of the Brieskorn sphere $\Sigma(p,q,r)$ is normally generated by any of the singular fibers. \end{lemma} \begin{proof} Recall that if $\Sigma(p,q,r) = S^2(e;(p,p'), (q,q'), (r,r'))$, then \begin{equation} \label{eq:brieskorn-pi1} \pi_1(\Sigma(p,q,r)) = \langle x,y,z,h \mid h \text{ central}, \, x^{p} h^{p'} = y^{q} h^{q'} = z^{r} h^{r'} = xyzh^e = 1\rangle. \end{equation} To see this presentation, we consider the standard surgery description for $\Sigma(p,q,r)$ as in Figure \ref{fig:seifert-pi1}. The complement of the surgery link $L$ has \[ \pi_1(S^3 - L) = \langle x, y, z, h \mid h \text{ central} \rangle. \] Here, $x,y,z$ represent meridians of the three parallel curves while $h$ represents the fiber direction. 
The four additional relators in \eqref{eq:brieskorn-pi1} represent the longitudes filled by the Dehn surgeries. Without loss of generality, we consider the singular fiber of order $p$, which is the core of the Dehn surgery on the leftmost component in Figure \ref{fig:seifert-pi1}. This curve is represented in $\pi_1(\Sigma(p,q,r))$ by $x^a h^b$, where $a,b$ are any integers such that $\abs{bp - ap'} = 1$. Thus, we must show that the quotient $G = \pi_1(\Sigma(p,q,r)) / \langle\! \langle x^a h^b \rangle \!\rangle$ is trivial. Because $x$ and $h$ commute and $\abs{bp-ap'} = 1$, the subgroup of $G$ generated by $x$ and $h$ is the same as the subgroup generated by $x^a h^b$ and $x^p h^{p'}$. Therefore, $x = h = 1$ in $G$, so \[ G \cong \langle y, z \mid y^q = z^r = yz = 1\rangle. \] Since $q$ and $r$ are relatively prime, this implies that $G$ is the trivial group. Consequently, the singular fibers normally generate the fundamental group of $\Sigma(p,q,r)$. \end{proof} The following proposition now establishes Theorem~\ref{thm:spineless}; specifically, it shows that the manifolds $W_p$ are spineless for $p \not\in \{-2,-1,0\}$. (Both $W_{-1}$ and $W_0$ contain spines since they are obtained by attaching a $2$-handle to the $4$-ball; we do not know whether $W_{-2}$ has a spine.) \begin{proposition} If $M_p$ bounds a compact, smooth, oriented $4$-manifold $X$ with $H_*(X) \cong H_*(S^2)$ in which a generator of $H_2(X)$ can be represented by a PL $2$-sphere, then $p \in \{-2,-1,0\}$. \end{proposition} \begin{proof} Suppose $M_p$ bounds a compact, smooth, oriented $4$-manifold $X$ with $H_*(X) \cong H_*(S^2)$. Observe that the four $d$ invariants of $M_p$ are equal to those of $Q_{-4p-3}$ minus the even integer $d(Y_p)$. To be precise, label the four spin$^c$ structures on $M_p$ by $\spincs_0, \dots, \spincs_3$ according to Convention \ref{conv:spinc}. 
By \eqref{eq:d-mod2}, we deduce that the intersection form of $X$ must be positive-definite, and \[ d(M_p, \spincs_0) \equiv \frac34, \quad d(M_p, \spincs_1) = d(M_p, \spincs_3) \equiv 0, \quad d(M_p, \spincs_2) \equiv \frac74 \pmod{2\Z}. \] (If the intersection form were negative-definite, the $d$ invariants of $\spincs_0$ and $\spincs_2$ would be congruent to $\frac54$ and $\frac14$ respectively, which would violate \eqref{eq:d(Qm)}.) These congruences enable us to identify which of the two self-conjugate spin$^c$ structures is $\spincs_0$ and which is $\spincs_2$. Specifically, when $p$ is odd, we have \begin{align*} d(M_p, \spincs_0) &= -d(Y_p) - \frac{4p+1}{4} \\ d(M_p, \spincs_1) = d(M_p, \spincs_3) &= -d(Y_p) \\ d(M_p, \spincs_2) &= -d(Y_p) - \frac{4p+5}{4}. \end{align*} By Theorem \ref{thm:obstruction}, if there is a PL sphere representing a generator of $H_2(X)$, then: \begin{align*} -\frac{4p+1}{4} = d(M_p, \spincs_0) - d(M_p, \spincs_1) &= \frac34 \text{ or } {-\frac54} \\ \frac{4p+5}{4} = d(M_p, \spincs_1) - d(M_p, \spincs_2) &= \frac14 \text{ or } {-\frac74} \end{align*} These two equations imply that $p=-1$. Similarly, when $p$ is even, the roles of $\spincs_0$ and $\spincs_2$ are exchanged, and we deduce that $p$ equals either $-2$ or $0$. \end{proof} \begin{remark} In \cite{Doig}, Doig computed the $d$ invariants of $Q_m$ and used these to show that many of the $Q_m$ cannot be obtained by surgery on a knot in $S^3$. Our arguments further show that $Q_m$ cannot be integrally homology cobordant to surgery on a knot. While Doig's arguments use $d$ invariants, which are homology cobordism invariants, they also rely on the fact that the $Q_m$ are L-spaces, which is not a property that is preserved under homology cobordism. \end{remark} \begin{remark} For any $k>1$, one can modify the construction above to obtain spineless $4$-manifolds $X$ with $H_1(\partial X) \cong \Z/k^2$. 
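The final elimination of $p$ is elementary arithmetic, which can be checked by enumerating integer values of $p$ against the two displayed constraints. The following is a minimal sketch (not part of the proof); it assumes, as stated above, that in the even case the roles of $\spincs_0$ and $\spincs_2$ are exchanged, so the two differences become $-\frac{4p+5}{4}$ and $\frac{4p+1}{4}$.

```python
from fractions import Fraction as F

def admissible(p):
    # Differences d(s0)-d(s1) and d(s1)-d(s2); by the PL-sphere
    # obstruction they must lie in {3/4, -5/4} and {1/4, -7/4}.
    if p % 2:  # p odd
        d01, d12 = F(-(4 * p + 1), 4), F(4 * p + 5, 4)
    else:      # p even: roles of s0 and s2 exchanged
        d01, d12 = F(-(4 * p + 5), 4), F(4 * p + 1, 4)
    return d01 in (F(3, 4), F(-5, 4)) and d12 in (F(1, 4), F(-7, 4))

print([p for p in range(-10, 11) if admissible(p)])  # → [-2, -1, 0]
```

Only $p \in \{-2,-1,0\}$ survives, matching the statement of the proposition.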
Let $Q_{k,m}$ be the manifold obtained by $(0,m+k)$ surgery on the $(2,2k)$ torus link. (Using our previous notation, $Q_m = Q_{2,m}$, as seen in Figure \ref{subfig:Qm-T24}.) Then $\abs{H^2(Q_{k,m})} = k^2$, and $H^2(Q_{k,m})$ is cyclic iff $\gcd(k,m)=1$. Since $Q_{k,m}$ bounds a rational homology ball, the $d$ invariants of $k$ of the $k^2$ spin$^c$ structures on $Q_{k,m}$ vanish. On the other hand, the exact triangle relating the Heegaard Floer homologies of $S^1 \times S^2$, $Q_{k,m}$, and $Q_{k,m+1}$ shows that the $d$ invariants of the remaining spin$^c$ structures vary roughly linearly in $m$. In particular, the differences between $d$ invariants of adjacent spin$^c$ structures can be arbitrarily large. Moreover, one can realize $Q_{k,m}$ (for appropriate $m$) as surgery on a fiber in a Brieskorn sphere; the result then follows as above. We do not know of any instances where Theorem \ref{thm:obstruction} obstructs the existence of a PL sphere when $n$ is not a perfect square. \end{remark} \bibliographystyle{amsalpha}
\section{Introduction: TFE--4 and known related regularity results} \label{S1} \noindent This paper is devoted to an ``optimal regularity'' analysis for a fourth-order quasilinear degenerate evolution equation of the parabolic type, called the {\em thin film equation} (TFE--4), \begin{equation} \label{i1} u_{t} = -\nabla \cdot(|u|^{n} \nabla \Delta u) \quad \mbox{in} \quad {\bf R}^N \times {\bf R}_+\,, \quad u(x,0)=u_0(x) \quad \mbox{in} \quad {\bf R}^N, \end{equation} where $n \in (0,2)$ is a fixed exponent, with bounded, sufficiently smooth, and compactly supported initial data $u_0$, in an arbitrary dimension $N \ge 2$. Moreover, we assume $u_0=u_0(|x|)$ to be radially symmetric, so that the solutions $u=u(|x|,t)$ are radially symmetric as well. These initial conditions could be relaxed (for example, to $u_0\in L^1\cap L^\infty$, etc.). However, for simplicity, we have chosen those initial conditions, since the focusing phenomenon to be studied here exists in any reasonable functional class of initial data, including radial ones. The main result obtained here is an approach to the so-called optimal regularity of solutions of the TFE--4 in the $N$-dimensional case with $N\geq 2$, ascertaining an ``optimal'' H\"{o}lder continuity exponent in $x$ and $t$. For the one-dimensional case, similar results were obtained in \cite{BF1}, establishing H\"{o}lder continuity in $C^{0, \frac 12}$ with respect to the variable $x$ and in $C^{0, \frac 1 8}$ with respect to $t$. However, in the $N$-dimensional case, the optimal regularity problem has remained open, despite the interest of the specialised mathematical community; see \cite{BHRR,DGG, Grun95, Grun04}. In many of those works it is also assumed that the solutions are non-negative for the Cauchy problem (CP).
However, as we recently proved in the CP setting, the solutions of the TFE--4 \eqref{i1} are oscillatory and sign-changing for all sufficiently small values of $n>0$; see \cite{PV3, EGK2} for recent results and references therein. One of the difficulties concerning optimal regularity for the equation \eqref{i1} is due to the fact that this equation is {\em not fully divergent}, so the whole ``global'' regularity theory for the TFE--4 can use just a couple of well-known integral identities/inequalities. Indeed, as discussed in \cite{PV3}, an integral-identities argument, such as that performed by Bernis--Friedman in 1D, fails to show H\"{o}lder continuity in higher dimensions; see \cite{PV3} for details and the reasons for these issues. Consequently, we claim that, for such ``partially'' (and/or fully non-) divergent equations, a different approach is required. \subsection{Methodology: Graveleau-type focusing similarity solutions} Here, working on the radially symmetric problem for the TFE--4, such that \[u(x,t)=u_*(r,t),\quad \hbox{with} \quad r=|x|,\] we base our analysis on the \emph{focusing argument} performed, firstly, by Graveleau \cite{Grav72} in a formal sense, and later justified by Aronson--Graveleau \cite{AG} for a class of non-negative solutions (via the Maximum Principle, MP) of the {\em classic porous medium equation} (PME--2): \begin{equation} \label{pormed} v_{t} = \Delta (v^m) \quad \mbox{in} \quad {\bf R}^N \times (-\infty,0),\quad \hbox{with an exponent $m>1$.} \end{equation} Actually, in comparison with \eqref{i1}, $m=1+n>1$.
In particular, the authors in \cite{AG} obtained, after constructing a one-parameter family of self-similar solutions, that, $$\hbox{when}\quad N\geq 2, \quad \hbox{there exists}\quad 0<\mu=\mu(m,N)<1,$$ and a radial self-similar solution $v_*(r,t)$ to the focusing problem, with the focusing time $t=0^-$, such that\footnote{We do not give further details on how \eqref{Hol1} is obtained, since we will explain this shortly for the TFE--4.} \begin{equation} \label{Hol1} v_*(r,0^-)=C_* r^\mu,\quad \hbox{where $C_*$ is an arbitrary constant.} \end{equation} This finished a long discussion concerning optimal regularity for the PME--2 after seminal results by Caffarelli--Friedman \cite{Caff79} (see {\tt MathSciNet} for many further extensions), which had earlier proved H\"older continuity of $v(x,t)$ in a very general setting. The above results of Aronson and Graveleau showed that such a H\"older continuity is, indeed, optimal, and even the corresponding H\"older exponent $\mu=\mu(m,N)$ from \eqref{Hol1} can be evaluated, at least numerically, for any $m>1$ and $N \ge 2$. Indeed, the blow-up singularity \eqref{Hol1} shows, actually, not an {\em optimal} regularity for such PME--2 flows, but a {\em non-improvable} one, in the sense that, for the H\"older exponent of solutions of \eqref{pormed}, \begin{equation} \label{Opt1} \mbox{a non-improvable regularity} \quad \Rightarrow \quad \mbox{H\"older exponent $ \not >$ $\mu$}. \end{equation} Fortunately, as we have mentioned above, Caffarelli--Friedman's earlier results from the 1980s were already available at that time, so, together with \eqref{Opt1}, these proved that the proper H\"older regularity of any non-negative solutions of the PME--2 is optimal. These Aronson--Graveleau focusing (singularity--blow-up) ideas for the PME--2 \eqref{pormed} and other second-order quasilinear parabolic equations (e.g.
for the $p$-Laplacian one) got further development and extensions in a number of papers devoted to various important aspects of porous medium flows; see references and results in \cite{Ang01, Ar03}. Speaking more precisely about such a Graveleau-type similarity approach, from the point of view of applications, in the focusing problem one assumes that, initially, there is a non-empty compact set $K$ in the complement of ${\rm supp}\,v_0$, where $v$ vanishes. In other words, there is a hole in the support of the initial value $v(x,0)=v_0(x) \ge 0$, and, in finite time $T$, this hole disappears, making the solution $v$ become positive along the boundary of $K$ and eventually at all points of $K$, basically due to the finite propagation property that these equations (the porous medium equation and the thin film equation) possess. Thus, as the flow evolves, the liquid enters $K$ and eventually reaches all points of $K$ at the instant $T$. We are then interested in how the solution of the TFE--4 \eqref{i1} behaves near the focusing time $T$. We again suppose (as in Graveleau's argument for the PME--2) that the focusing time is $t=0^-$. However, in our case, with the TFE--4 \eqref{i1} and $n>0$, but close to zero, we need to take into consideration the existence of oscillatory solutions of changing sign. Furthermore, for the TFE--4 \eqref{i1}, there are still no general regularity results, even in the radial setting in ${\bf R}^N$. Consequently, we will follow the lines and the logic of \eqref{Opt1}, i.e. we will estimate a certain {\em non-improvable} H\"older exponent for radial (and, hence, all other) solutions. To this end, we use those previous singularity (blow-up) ideas from the PME--2 \eqref{pormed} to establish some properties of the self-similar solutions of the radially symmetric problem for the TFE--4 \eqref{i1} and, eventually, some optimal regularity information about its more general solutions.
Indeed, as we will see through the analysis performed in this paper, to obtain such an ``optimal'' (non-improvable) regularity for the TFE--4 \eqref{i1}, it seems that we must solve a nonlinear focusing problem for that TFE--4, derived from the associated self-similar equation. To be precise, we first apply the focusing ideas performed by Aronson--Graveleau \cite{AG} for the PME--2 \eqref{pormed} to the TFE--4 \eqref{i1}. To do so, we will work on the space of radially self-similar solutions with $r=|x|>0$ of the form \begin{equation} \label{sssol} \textstyle{ u_*^{\pm}(r,t)= (\pm t)^{\a} f(y), \quad y=\frac{r}{(\pm t)^\b} \,\,\, \mbox{for} \,\,\, \pm t > 0 ,\quad \mbox{where} \quad \b= \frac {1+\a n}4>0. } \end{equation} Thus, these solutions will solve an associated self-similar equation, or \emph{nonlinear eigenvalue problem}, $$ \textstyle{ - \frac{1}{y^{N-1}} \big[ y^{N-1}|f|^n \big(\frac{1}{y^{N-1}}(y^{N-1} f')' \big)' \big]' \pm \b y f' \mp \alpha f = 0, } $$ with $-$ or $+$ depending on whether we are analysing the focusing or the defocusing phenomenon; in other words, before or after the focusing time (in our situation, $t=0$). As usual, the {\em a priori} unknown $\a$ represents the nonlinear eigenvalues, which justifies calling that equation a \emph{nonlinear eigenvalue problem}. Crucial in our analysis will be to impose some radiation-type conditions, or some minimal and maximal growth at infinity, i.e. as $y\to +\infty$. This actually allows us, among other things, to determine the existence of a discrete family of nonlinear eigenvalues denoted by $\{\a_k\}$. Moreover, this behaviour at infinity leads to the solutions of the self-similar TFE--4 being bounded at infinity by functions of the form $$ \textstyle{ f(y)=C y^{\mu} (1+o(1)), \quad C \in {\bf R}, \quad \hbox{and}\quad \mu=\mu(\a,n):=\frac{4\a}{1+\a n} >0.
} $$ Thus, we ascertain a limit at the focusing point $$ \textstyle{ u_*(r,t) \to C r^{\mu} \quad \mbox{as} \quad t \to 0^-, \quad \mbox{with} \quad \mu= \frac \a \b, } $$ uniformly on compact subsets (see details in Section \ref{S3}). However, this does not yet provide information about the influence of the dimension on the behaviour of the solutions at infinity. To get to that point, we need to find those nonlinear eigenvalues\footnote{Since we obtain those nonlinear eigenvalues as a perturbation from a family of linear eigenvalues when the parameter $n$ is close to zero, we will use this notation interchangeably for the family of linear or nonlinear eigenvalues. To be precise, the nonlinear eigenvalues correspond to the family $\{\a_k(n)\}$ and the linear eigenvalues to the family $\{\a_k(0)\}$.} $\{\a_k\}$ associated with a discrete family of nonlinear eigenfunctions $\{f_k\}$. Here, we have done this analysis via a homotopy deformation as $n\to 0^+$ (Sections \ref{S.lin} and \ref{S5}) to the linear problem with $n=0$, \begin{equation} \label{linbh} u_t=- \Delta^2 u \quad \mbox{in} \quad {\bf R}^N \times {\bf R}_-, \end{equation} with those patterns occurring at $n = 0$ acting as branching points of nonlinear eigenfunctions of the Cauchy problem \eqref{i1} when $n$ is close to zero. In particular, according to our analysis, when $n=0$, it follows that $$ \textstyle{ \a_k(0)=\a_k=\frac{k}{2},\quad k=1,2,3,\cdots. } $$ As an observation, those values of the parameter $\a$ when $n=0$ might be written as a perturbation of the eigenvalues of the fourth-order operator $$ \textstyle{ {\bf B}^* f \equiv - \frac{1}{y^{N-1}} \big[ y^{N-1} \big(\frac{1}{y^{N-1}}(y^{N-1} f')' \big)' \big]' - \frac 14\, y f'=\l f, } $$ analysed in full detail in \cite{EGKP}; see further comments below.
Hence, we perform a homotopy deformation from the TFE--4 \eqref{i1} to the parabolic bi-harmonic equation \eqref{linbh}, for which we can again apply a similar logic in ascertaining Graveleau-type ``focusing solutions''. Indeed, we get a minimal and maximal growth for the radial self-similar solutions of this linear problem \eqref{linbh}. Moreover, since we know that there exists a discrete family of eigenvalues for the corresponding self-similar equation associated with \eqref{linbh}, using this branching/homotopy argument we obtain the desired family of values for the parameter $\a$, from which we will have a family of radial self-similar solutions emanating at $n=0$ from the self-similar solutions of the linear problem \eqref{linbh}. Note also that the family of eigenfunctions for the linear problem \eqref{linbh} is a complete set of generalised Hermite polynomials with finite oscillatory properties \cite{EGKP}. In this paper, we also support this analysis numerically, performing a shooting procedure from $y=0$ to $y=+\infty$ that gives us those linear eigenvalues $\a_k$ and the profiles of the corresponding eigenfunctions. Additionally, we present some numerical analysis that provided us with profiles, very difficult to ascertain, of the nonlinear eigenfunctions $\{f_k\}$ for $n>0$ sufficiently close to zero. This numerical analysis supports the conjecture that, by homotopy continuity, the properties of the profiles for $n=0$ remain valid for $n>0$ sufficiently close to zero. As seen above, we have fixed the interval $n\in (0,2)$; however, the numerics suggest that those properties are lost from $n=0.8$ or $0.9$ onwards. In fact, if $n\geq 2$, we just have nonnegative solutions, as shown in \cite{BF1}, so that this focusing argument is not possible. Thus, we restrict this analysis to $n\in (0,2)$, or, more precisely, to $n>0$ sufficiently close to zero.
Finally, we obtain such an ``optimal'' (non-improvable) regularity for the TFE--4 \eqref{i1}, proving that there is H\"{o}lder continuity with a specific exponent $$\mu_k =\frac{4\a_k}{1+n\a_k}\quad \hbox{and}\quad \nabla u(x,t) \in L^{p}_{\rm loc}({\bf R}^N) \quad \hbox{with}\quad p < p^*(n,N)=\frac{N}{1-\mu_k},$$ with such regularisation depending on the dimension $N$, as we claimed and were looking for. Consequently, if $n>0$ is small, we find that \begin{equation} \label{reguc3} u_*(r,t) \in C_r^{2-\varepsilon(n)},\quad \hbox{with} \quad \varepsilon(n)=n\mu+O(n^2). \end{equation} It turns out that, thanks to our branching analysis, there holds $\mu>0$, so that, for $n>0$, the regularity condition \eqref{reguc3} obviously becomes worse. Note that, if $\mu<0$, we would have better regularity than for $n=0$, which is impossible. In general, we obtain (and the numerics suggest this as well) that there exist several focusing singularities in the radial geometry of the type $C^{2k-\varepsilon}$ with $k=1,2,3,\cdots$. In particular, for the values of $\a_k$ obtained here, it follows that, for $k=1$, $\a_1=\frac 1 2$ is the minimal and crucial one, since it seems to be the only degenerate case, i.e. $f_1(0)=0$. All others satisfy $$f_{k}(0)>0, \quad k=2,3,\cdots,\quad \hbox{i.e. the TFE--4 for $n>0$ is not degenerate} $$ initially, but, indeed, will be eventually, at the focusing time $t=T^-(=0^-)$. Therefore, we can conclude that slight changes in the parameter $\a$ destroy the regularity, supporting the fact that these results are valid only when $n$ is sufficiently close to zero. Indeed, via branching analysis we find that $$\a_k(n)=\a_k+\mu_k n+O(n^2),$$ so that we have the regularity condition $$u_*(r,t) \in C^{2k-\mu_k n+O(n^2)}.$$ Note that, even for $\a_2=1$, the analytic positive solution becomes, at $t=T^-$, $C^{4-\varepsilon}$, i.e. not a classical solution in $C^4$. Hence, for the minimal $\a_1=\frac 1 2$ it is $C^{2-\varepsilon}$, which is much worse.
\vspace{0.2cm} \noindent{\bf Remark.} As mentioned above, we also observe that, for the PME--2 \eqref{pormed}, one needs to have such a non-empty compact hole to apply the Maximum Principle. However, we believe that, for the TFE--4 \eqref{i1} and the value $\a_1=\frac 1 2$, which provides us with the minimal regularity $C^{2-\varepsilon(n)}$, there is no such hole. We do not have a rigorous justification of this, but the numerics presented in this paper suggest it. \vspace{0.2cm} In relation to the H\"{o}lder regularity with respect to the temporal variable $t$, assuming radially self-similar solutions of the form \eqref{sssol}, we find that $$|u(0,t)-u(0,0)| = (-t)^\a f(0).$$ Hence, the H\"{o}lder exponent for the variable $t$, when it is very close to $t=0$, cannot be bigger than $$\a_1(n)=1/2 + O(n).$$ Therefore, we provide an upper estimate for the H\"{o}lder continuity with respect to the $t$ variable. Moreover, this estimate improves the H\"{o}lder exponent obtained by Bernis--Friedman \cite{BF1}, showing that theirs, which was $1/8$, was actually not optimal. \subsection{TFE--4 problem settings} This setting is well known nowadays (though many things were not fully proved in a general case), so, below, we omit many details; see the surveys \cite{Grun95, Grun04}. Note that, for some values of $n>0$ (not that large), the focusing similarity solutions to be constructed do not exhibit finite interfaces, so the resulting optimal regularity results are true for any FBP and/or Cauchy problem settings. Principal differences between the CP and the standard FBP settings for the TFE--4 \eqref{i1} are explained in \cite{EGK1, EGK2}.
We recall that the solutions are assumed to satisfy the following zero contact angle boundary conditions: \begin{equation} \label{i5} \left\{\begin{array}{ll} \textstyle{ u=0,} & \textstyle{ \hbox{zero-height,} } \\ \textstyle{ \nabla u=0,} & \textstyle{ \hbox{zero contact angle,} } \\ \textstyle{ -{\bf n} \cdot \nabla (\left|u\right|^{n} \Delta u)=0,} & \textstyle{ \hbox{conservation of mass (zero-flux)} } \end{array} \right. \end{equation} at the singularity surface (interface) $\Gamma_0[u]$, which is the lateral boundary of \begin{equation*} \textstyle{ \hbox{supp} \;u \subset {\bf R}^{N} \times {\bf R}_+,\quad N \geq 1\,,} \end{equation*} where ${\bf n}$ stands for the unit outward normal to $\Gamma_0[u]$, which is assumed to be sufficiently smooth (the treatment of such hypotheses is not a goal of this paper). For smooth interfaces, the condition on the flux can be read as \begin{equation*} \lim_{\hbox{dist}(x,\Gamma_0[u])\downarrow 0} -{\bf n} \cdot \nabla (|u|^{n} \Delta u)=0. \end{equation*} Next, we denote by \begin{equation} \label{mass} \textstyle{ M(t):=\int u(x,t) \, {\mathrm d}x } \end{equation} the mass of the solution, where the integration is performed over the support. Then, differentiating $M(t)$ with respect to $t$ and applying the divergence theorem, we have that \begin{equation*} \textstyle{ J(t):= \frac{{\mathrm d}M}{{\mathrm d}t}= - \int\limits_{\Gamma_0\cap\{t\}}{\bf n} \cdot \nabla (|u|^{n} \Delta u )\, . } \end{equation*} The mass is conserved if $ J(t) \equiv 0$, which is satisfied by the flux condition in \eqref{i5}. \section{Radial self-similar solutions: focusing and defocusing cases} \label{S2} \noindent Now, we construct the operators and specific solutions needed in order to apply the ideas of Aronson--Graveleau \cite{AG} for the PME--2 \eqref{pormed} to the TFE--4 \eqref{i1}. Thus, thanks to the scaling-invariance property of these nonlinear parabolic equations, we now construct radially self-similar solutions, i.e.
in terms of $r=|x|>0$, with a still unknown value of the parameter $\a>0$ (clearly, it must be positive, as shown below), \begin{equation} \label{upm} \textstyle{ u_*^{\pm}(r,t)= (\pm t)^{\a} f(y), \quad y=\frac{r}{(\pm t)^\b} \,\,\, \mbox{for} \,\,\, \pm t > 0 ,\quad \mbox{where} \quad \b= \frac {1+\a n}4>0. } \end{equation} Here, by time-translation, we ascribe the blow-up or \emph{focusing} time to $T = 0^-$. We then simultaneously consider two cases: \begin{enumerate} \item[(i)] {\bf Focusing} (i.e., Graveleau-type) similarity solutions, which play a key role, corresponding to $(-t)$ in \eqref{upm} and the singular blow-up limit as $t \to 0^-$, and \item[(ii)] {\bf Defocusing} similarity solutions, with $(+t)$ in \eqref{upm}, playing a secondary role as extensions of the previous ones for $t>0$, i.e., corresponding to the not-that-singular (or, at least, less singular) limit $t \to 0^+$. Actually, these defocusing solutions are well-posed solutions of the Cauchy problem for the TFE--4 for $t>0$ with initial data $u_*(r,0^-)$ obtained from the previous blow-up limit as $t \to 0^-$. \end{enumerate} \smallskip Substituting \eqref{upm} into \eqref{i1}, we arrive at the similarity profiles $f(y)$ satisfying the following {\em nonlinear eigenvalue problems}: \begin{equation} \label{eigpm} \textstyle{ {\bf B}^\pm_{n} (\a,f) \equiv - \nabla_y \cdot (|f|^n \nabla_y \Delta_y f) \pm \b y \cdot \nabla_y f \mp \a f=0 \,\,\,\mbox{for} \,\,\, y>0; \quad f'(0)=f'''(0)=0, } \end{equation} where $\nabla_y$ and $\Delta_y$ stand for the radial gradient and the radial Laplacian. In \eqref{eigpm}, we present two symmetry conditions at the origin (which should be modified if $f(y) \equiv 0$ near the origin, i.e. if it contains a ``zero hole'' nearby). As we will see from the numerical analysis performed in Section \ref{S.lin} of this paper, we can choose other conditions depending on whether $f=0$ or $f\neq 0$.
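For the reader's convenience, we sketch the standard balance of time-exponents that fixes $\b$ in \eqref{upm}; this is a routine check, using nothing beyond \eqref{i1} and \eqref{upm}. Substituting $u_*^{\pm}(r,t)=(\pm t)^{\a} f(y)$, with $y=r(\pm t)^{-\b}$, into \eqref{i1} gives
$$
 \textstyle{
 u_t=\pm(\pm t)^{\a-1}\,(\a f-\b y f'), \qquad
 -\nabla \cdot(|u|^{n} \nabla \Delta u)=-(\pm t)^{\a(n+1)-4\b}\,\nabla_y \cdot(|f|^{n} \nabla_y \Delta_y f),
 }
$$
since $|u|^n$ contributes the factor $(\pm t)^{\a n}$ and each of the four spatial derivatives contributes $(\pm t)^{-\b}$. Equating the time-exponents,
$$
 \textstyle{
 \a-1=\a(n+1)-4\b \quad \Longrightarrow \quad \b=\frac{1+\a n}{4},
 }
$$
which is precisely the value declared in \eqref{upm}; dividing out the common power of $(\pm t)$ then yields \eqref{eigpm}.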
To complete these nonlinear eigenvalue settings, one needs extra ``radiation-type'' (or growth-type) conditions at infinity, to be introduced next. Indeed, we actually find radial similarity profiles $f$ depending on the single variable $y$. In radial form, the equation \eqref{eigpm} then reads \begin{equation} \label{rad22} \textstyle{{\bf A}^\pm_{n} (\a,f) \equiv- \frac{1}{y^{N-1}} \big[ y^{N-1}|f|^n \big(\frac{1}{y^{N-1}}(y^{N-1} f')' \big)' \big]' \pm \b y f' \mp \alpha f = 0,} \end{equation} where we denote the radial nonlinear operator by \begin{equation} \label{rad23} \textstyle{ {\bf R}_{n}(\a,f)= \frac{1}{y^{N-1}} \big[ y^{N-1}|f|^n \big(\frac{1}{y^{N-1}}(y^{N-1} f')'\big)' \big]'. } \end{equation} Thus, here, $\a >0$ is a parameter, which, in fact, stands in both cases for the admitted real {\em nonlinear eigenvalues} to be determined, in the focusing Graveleau-type case, via the solvability of the corresponding {\em nonlinear eigenvalue problem}, accomplished with some special (radiation-type) conditions at infinity, to be introduced shortly. \begin{remark} {\rm In general, with respect to the similarity profiles described above, we note that there are two main types of self-similar solutions. For solutions of the first kind, the similarity variable $y$ can be determined \emph{a priori} from dimensional considerations and conservation laws, such as the conservation of mass \eqref{mass} or momentum. For solutions of the second kind, the exponent $\b$ (and, by the above relations, the exponent $\a$) in the similarity variable must be obtained along with the solution by solving a nonlinear eigenvalue problem of the form \eqref{eigpm}. The first published examples of self-similar solutions of the second kind are due to G.~Guderley in 1942 \cite{G42}, studying imploding shock waves, although the term was introduced by Ya. B.~Zel'dovich in 1956 \cite{Zel56}.
Further examples might be found in the theory of the collapse of bubbles in compressible fluids or in works on gas motion under an impulsive load; see Barenblatt \cite{B} for an extensive work on this matter. } \end{remark} \section{Minimal growth at infinity (a ``nonlinear radiation condition'') for Graveleau-type profiles} \label{S3} \noindent Here, we consider the blow-up problem $(\ref{rad22})_-$, i.e. with the lower signs. One concludes that, in order to get a possible discrete set of eigenvalues $\a>0$, some extra conditions, referred to as a nonlinear radiation condition, on the behaviour (in particular, the growth) of $f(y)$ as $y \to + \infty$ must be imposed. Obviously, such a ``radiation-type'' condition (we use a standard term from dispersion theory) just follows from analysing all possible types of such behaviour, which can be admitted by the ODE $(\ref{rad22})_-$; this is not that difficult a problem. There are two cases: \begin{enumerate} \item[(I)] {\em More difficult: there is a ``zero hole'' for $f(y)$ near the origin.} Finite interfaces for the TFE--4 are well known in the case of the standard FBP setting and also in the Cauchy one (see \cite{EGK1,EGK2} and references therein). Then we need to look for profiles $f(y)$ which vanish at finite $y=y_0>0$ and describe the asymptotics of the general solution satisfying the zero contact angle and zero-flux conditions as $y\to y_0^+$: $$ f(y)\to 0,\quad f'(y)\to 0,\quad -|f|^n f'''(y)\to 0. $$ This case poses a difficult problem, since the oscillatory behaviour of $f(y)$ close to the interface is quite tricky for the CP \cite{EGK1}, while, for the FBP, it is much better understood. \item[(II)] {\em More standard and easier: $f(0) \ne 0$.} This case also includes the border possibility $y_0=0$, i.e. when $f(0)=0$, but $f \not \equiv 0$ in any arbitrarily small neighbourhood of $y=0$. \end{enumerate} \subsection{Minimal and maximal growth at infinity} This is key for our regularity analysis.
Our radial ODE $(\ref{rad22})_-$ admits two kinds of asymptotic behaviour at infinity, i.e. when $y \to +\infty$. For the operator ${\bf A}^-_{n} (\a,f)$, we state the following result: \begin{proposition} \label{theneg} For any $\a>0$, the ODE $\eqref{rad22}_-$, with the operator ${\bf A}^-_{n} (\a,f)$, possesses: {\rm (i)} Solutions $f(y)$ with a minimal growth \begin{equation} \label{gg1} \textstyle{ f(y)=C y^{\mu} (1+o(1))\quad \hbox{as}\quad y\to +\infty,} \quad C \in {\bf R}, \end{equation} where \begin{equation} \label{param} \textstyle{ \mu=\mu(\a,n)=\frac{4\a}{1+\a n} >0. } \end{equation} {\rm (ii)} Moreover, there exist solutions of $\eqref{rad22}_-$ with a maximal growth \begin{equation} \label{gg2} \textstyle{ f(y) \sim y^{\mu_0},\quad \hbox{as}\quad y\to +\infty, \quad \mbox{where} \quad \mu_0= \frac 4 n. } \end{equation} \end{proposition} \vspace{0.2cm} \noindent{\bf Remark.} \begin{itemize} \item It is important to mention that the expression \eqref{gg2} for the solutions of maximal growth does not include the corresponding ``additional or extra'' {\em oscillatory component} $\varphi(s)$, with $s= \ln y$, and shows just the growth behaviour of its ``envelope'', with algebraic growth. Such an oscillatory maximal behaviour will be introduced and studied later. \item Note also that, as $n \to 0^+$, we have that \begin{equation} \label{gg81} \textstyle{ \frac 4n \to +\infty, } \end{equation} which corresponds to an exponential oscillatory growth for the linear problem occurring for $n=0$; see the next section. \item The difference between \eqref{gg1} and \eqref{gg2}, which justifies the terms minimal and maximal growth at infinity, is obvious: \begin{equation} \label{gg3} \textstyle{ \mbox{for any $\a>0$}, \quad \frac 4n > \frac \a \b = \frac {4 \a}{1+\a n}. } \end{equation} \end{itemize} \vspace{0.2cm} {\em Proof.} This follows from a balancing of the linear and nonlinear operators in this ODE, though a rigorous justification is rather involved and technical.
A formal derivation is surely standard and easy: \begin{enumerate} \item[(i)] In the first case, we assume linear asymptotics as $y\to +\infty$ (i.e. the simple algebraic behaviour $f \sim y^{\mu}$), \begin{equation} \label{kk3} \textstyle{ \frac{1+\a n}4 \, y f' - \a f+...=0 \quad \Longrightarrow \quad f(y) \sim y^\mu ,\quad \mbox{where} \quad \mu= \frac {4\a}{1+\a n}>0. } \end{equation} Since this behaviour is governed by the linear terms, eventually, we get an arbitrary constant $C \not = 0$ in \eqref{gg1}. A full justification of the existence of such orbits is straightforward, since the nonlinear term in the ODE is then negligible: in the equivalent integral equation, even though it enters through a singular term, it produces a negligible perturbation. \item[(ii)] On the other hand, the solutions are also bounded by a maximal growth, which in this case comes from nonlinear asymptotics that balance all three operators, \begin{equation} \label{kk2} \textstyle{ {\bf R}_{n}(f)+ \frac{1+\a n}4\, y f' - \a f=0 \quad \Longrightarrow \quad f(y) \sim y^{\frac 4n} \quad \mbox{as} \quad y \to +\infty, } \end{equation} where we again indicate only the envelope behaviour of this oscillatory bundle. A justification here, via Banach's contraction principle, is even easier, but rather technical, so we omit the details. \qed \end{enumerate} \smallskip Overall, this allows us to formulate a condition at infinity of a clear ``minimal nature": solutions $f$ are bounded at infinity by a function \begin{equation} \label{mingro} \textstyle{ f(y)=C y^{\mu} (1+o(1)),\quad \hbox{with}\quad \mu= \frac {4\a}{1+\a n}>0.} \end{equation} Obviously, we thus just need a global solution $f(y)$ of our ODE $(\ref{rad22})_-$ in ${\bf R}_+$, satisfying the minimal growth \eqref{gg1}.
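The exponents in \eqref{kk3} and \eqref{kk2} are easy to cross-check symbolically. The sketch below (Python/sympy, purely illustrative and not part of the original computations) recovers $\mu$ from the linear balance, recovers $\mu_0=\frac4n$ from the degree count $\mu(1+n)-4=\mu$ for the quasilinear fourth-order term acting on $y^\mu$, and verifies the gap \eqref{gg3} in the closed form $\frac4n-\frac{4\a}{1+\a n}=\frac{4}{n(1+\a n)}>0$:

```python
import sympy as sp

a, n, mu, y = sp.symbols('alpha n mu y', positive=True)

# Linear balance (kk3): (1 + alpha*n)/4 * y*f' - alpha*f = 0 for f = y^mu
f = y**mu
residual = sp.Rational(1, 4) * (1 + a*n) * y * sp.diff(f, y) - a * f
mu_min = sp.solve(sp.simplify(residual / f), mu)[0]
assert sp.simplify(mu_min - 4*a/(1 + a*n)) == 0

# Degree count for the quasilinear term: |f|^n times a fourth derivative of f
# scales like y^(mu*(1+n) - 4); balancing against y*f' ~ y^mu forces mu0 = 4/n
mu_max = sp.solve(sp.Eq(mu*(1 + n) - 4, mu), mu)[0]
assert mu_max == 4/n

# The gap (gg3): 4/n - 4*alpha/(1 + alpha*n) = 4/(n*(1 + alpha*n)) > 0
gap = sp.simplify(4/n - mu_min)
assert sp.simplify(gap - 4/(n*(1 + a*n))) == 0
```

The closed form of the gap makes it evident that the two growth rates never coincide for $\a,n>0$.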
Indeed, for such profiles, there exists a finite limit at the focusing (blow-up) point: \begin{equation} \label{gg5} \textstyle{ u_*(r,t) \to C r^{\mu} \quad \mbox{as} \quad t \to 0^-, \quad \mbox{with} \quad \mu= \frac \a \b, } \end{equation} uniformly on compact intervals in $r=|x| \ge 0$. One can see that, for any maximal profile as in \eqref{gg2}, the limit as in \eqref{gg5} is infinite, so that such similarity solutions do not leave a finite trace as $t \to 0^-$. However, the above proposition leaves aside the principal question of the dimensions of the corresponding minimal and maximal bundles as $y \to +\infty$, which is not a straightforward problem. Note that the latter is actually supposed to determine the strategy of a well-posed shooting of possible solutions of the above focusing problem. Indeed, the main problem is how to find those admissible values of the nonlinear eigenvalues $\{\a_k\}_{k \ge 1}$ (possibly and hopefully, a discrete set), for which nontrivial $f=f_k(y)$ exist, producing finite limits as in \eqref{gg5}. To clarify these issues, we consider the much simpler linear case $n=0$ and, subsequently, pass to the limit as $n\to 0^+$ in \eqref{rad22}. This analysis will provide us, eventually, with some qualitative information about the solutions, at least when $n$ is very close to zero. \section{The linear problem: discretisation of $\a$ for $n=0$} \label{S.lin} For $n=0$, the TFE--4 \eqref{i1} becomes the classic {\em bi-harmonic equation} \begin{equation} \label{bi.1} u_t=- \Delta^2 u \quad \mbox{in} \quad {\bf R}^N \times {\bf R}_-. \end{equation} Of course, solutions of \eqref{bi.1} are analytic in both $x$ and $t$, so any focusing for it makes no sense. However, we will show that (following a similar philosophy to the one above) Graveleau-type ``focusing solutions" for \eqref{bi.1} are rather helpful in predicting some properties of true blow-up self-similar solutions of \eqref{i1}, at least, for small $n>0$.
\subsection{Maximal and minimal bundles} Thus, we first consider the same ``focusing" solutions $(\ref{upm})_-$ for \eqref{bi.1}, which take the simpler form \begin{equation} \label{b1} \textstyle{ u_*(r,t)=(-t)^\a f(y), \quad y = \frac r{(-t)^{1/4}} \quad \big(\b= \frac 14 \big). } \end{equation} Then, the corresponding linear radial ODE \eqref{rad22} also takes a simpler form: \begin{equation} \label{b2} \textstyle{{\bf A}^-_{0,y} (\a,f) \equiv- \frac{1}{y^{N-1}} \big[ y^{N-1} \big(\frac{1}{y^{N-1}}(y^{N-1} f')' \big)' \big]' - \frac 14\, y f' + \alpha f = 0.} \end{equation} We first calculate the solutions of \eqref{b2} with a maximal behaviour. Those are exponentially growing solutions of the form \begin{equation} \label{b3} \textstyle{ f(y) \sim {\mathrm e}^{a y^{4/3}} \quad \mbox{as} \quad y \to + \infty \quad \Longrightarrow \quad a^3=- \frac 14 \,\big( \frac 34 \big)^3. } \end{equation} This characteristic equation gives two roots with ${\rm Re}(\cdot)>0$: \begin{equation} \label{b4} \textstyle{ a_{1,2}=\frac 34\,4^{-\frac 13} \big[ \frac 12 \pm {\rm i} \, \frac{\sqrt 3}2 \big] \equiv c_0 \pm {\rm i} \, c_1, } \end{equation} and one negative root (this actually goes to the bundle of minimal solutions; see below): \begin{equation} \label{b5} \textstyle{ a_3= -\frac 34 \, 4^{-\frac 13}<0. } \end{equation} Hence, the bundle of maximal solutions is oscillatory as $y \to + \infty$ (including the multiplicative algebraic factor), \begin{equation} \label{b51} \textstyle{ f(y) \sim y^{-\frac{2}{3}(N+2\alpha)} {\mathrm e}^{c_0 y^{4/3}}[C_1 \cos (c_1 y^{\frac 4 3}) + C_2 \sin(c_1 y^{\frac 43})], \quad C_{1,2} \in {\bf R}. } \end{equation} In fact, this bundle is 3D since, besides the oscillatory pair, it includes the 1D sub-bundle of exponentially decaying solutions with the exponent \eqref{b5}.
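The root structure \eqref{b4}--\eqref{b5} is also easy to confirm numerically. A small illustrative check (Python/numpy, not part of the original computations) of the cubic $a^3=-\frac14\big(\frac34\big)^3$:

```python
import numpy as np

# Characteristic cubic  a^3 + (1/4)*(3/4)^3 = 0  for f ~ exp(a * y^(4/3))
roots = np.roots([1.0, 0.0, 0.0, 0.25 * 0.75**3])

growing = [r for r in roots if r.real > 0]   # maximal (oscillatory) pair a_{1,2}
decaying = [r for r in roots if r.real < 0]  # joins the minimal bundle, a_3

assert len(growing) == 2 and len(decaying) == 1

# a_{1,2} = c0 +/- i*c1 with c0 = (3/4)*4^(-1/3)/2 and c1 = c0*sqrt(3)
c0 = 0.75 * 4.0**(-1.0/3.0) / 2.0
c1 = c0 * np.sqrt(3.0)
assert np.allclose(sorted(r.imag for r in growing), [-c1, c1])
assert np.allclose([r.real for r in growing], c0)

# a_3 = -(3/4)*4^(-1/3) is real and negative
assert np.isclose(decaying[0].real, -0.75 * 4.0**(-1.0/3.0))
assert abs(decaying[0].imag) < 1e-8
```

This confirms that the maximal pair is genuinely oscillatory while the third root produces pure exponential decay.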
However, in the shooting procedure performed at the end of this section, we intend to get rid of just the two coefficients in \eqref{b51}: $$C_1=C_2=0.$$ Indeed, numerical evidence shows that these coefficients vanish simultaneously at certain values of the parameter $\a$, which we determine specifically below. Moreover, we provide an explanation in the next subsection. On the other hand, the minimal behaviour \eqref{gg1} now reads \begin{equation} \label{b6} \textstyle{ f(y)= C y^{4 \a}(1+o(1)), \quad C \in {\bf R} \quad \big( \hbox{with}\quad \frac \a \b= 4 \a \big). } \end{equation} The whole bundle of such minimal solutions is 2D. Besides the parameter $C$ in \eqref{b6}, it includes a 1D sub-bundle of exponentially decaying solutions with the exponent \eqref{b5}, so that, overall, the minimal solutions compose a 2D family: \begin{equation} \label{b7} \textstyle{ f(y) \sim C y^{4 \a}(1+o(1))+ D {\mathrm e}^{a_3 y^{4/3}} (1+o(1)), \quad C,D \in {\bf R}. } \end{equation} The justification of both behaviours is, indeed, simpler for such a standard linear ODE problem. Finally, passing to the limit as in \eqref{gg5} then gives, for minimal solutions, a finite ``focusing trace": \begin{equation} \label{b8} u_*(r,t) \to C r^{4\a} \quad \mbox{as} \quad t \to 0^-. \end{equation} We explain why these seemingly trivial results are so important in what follows. The conclusion from \eqref{b8} is straightforward: since $u_*(r,0^-)$ must be analytic, we have the following. \begin{proposition} \label{Pr.anal} For the above linear ``focusing eigenvalue problem" $\eqref{b2}$, there exists at most a countable set of admissible eigenvalues $\{\a_k\}_{k \ge 1}$, given by \begin{equation} \label{b9} \textstyle{ \a_k= \frac k2, \quad k=1,2,3,...\,. } \end{equation} \end{proposition} {\em Proof.} The function $r^{4\a}$ is analytic at $r=0$ only for the values $\a_k$ in \eqref{b9}, i.e. $4 \a$ must be an even positive integer, say $2k$.
Note also that, in the present problem, all functions are analytic, so that, automatically, the set of roots $\{\a_k\}$ is discrete, with a possible accumulation point at infinity only. \qed \smallskip \noindent{\bf Remark.} Thus, at $n=0$, the nonlinear eigenvalue problem $\eqref{rad22}_{-}$ formally reduces to the classic {\em linear eigenvalue problem} \[ \textstyle{ {\bf A}^-_{0,y} (\a_k,f_k) = 0 \quad \mbox{in} \quad {\bf R}, \quad f_k \in L^2_\rho({\bf R}), } \] providing us with another reason to call \eqref{rad22} a {\em nonlinear eigenvalue problem}. Also, the values of the parameter $\a$ can be written as shifts of the eigenvalues of the eigenvalue problem \[ \textstyle{ {\bf B}^* f \equiv - \frac{1}{y^{N-1}} \big[ y^{N-1} \big(\frac{1}{y^{N-1}}(y^{N-1} f')' \big)' \big]' - \frac 14\, y f'=\l f, } \] analysed in \cite{EGKP}, whose discrete spectrum takes the form \[ \textstyle{ {\bf \sigma}({\bf B}^*)=\{\l_k=-\frac{k}{4}\,;\, k=0,1,\cdots\}. } \] Hence, \[ \textstyle{ \a_k=-\l_k +\frac{k}{4}=\frac{k}{2},\quad \hbox{with}\quad k=1,2,3,\cdots, } \] giving a countable family of eigenvalues for the problem \eqref{b2}. Moreover, note that ${\bf A}^-_{0,y} (\a,f)$ in \eqref{b2} is a non-symmetric linear operator, which is bounded from $H_{\rho}^4({\bf R})$ to $L_{\rho}^2({\bf R})$ with the exponential weight suggested by \eqref{b51}, i.e., $$\rho(y)= {\mathrm e}^{-c_0 |y|^{4/3}}, \quad \mbox{with $c_0>0$ as defined by \eqref{b4}}.$$ \subsection{Well-posed shooting procedure} We now discuss a practical procedure to obtain the linear eigenvalues $\a_k$.
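Before describing the shooting itself, note that \eqref{b2} admits explicit polynomial solutions that serve as sanity checks for any numerical implementation: for instance, $f(y)=\frac12 y^2$ with $\a=\frac12$, and $f(y)=1+\frac{y^4}{8N(N+2)}$ with $\a=1$, annihilate ${\bf A}^-_{0,y}$ in every dimension $N$. An illustrative symbolic verification (Python/sympy, an assumption-free consequence of \eqref{b2} itself):

```python
import sympy as sp

y, N = sp.symbols('y N', positive=True)

def radial_laplacian(f):
    # Delta_r f = y^(1-N) * (y^(N-1) * f')' in radial coordinates
    return sp.diff(y**(N - 1) * sp.diff(f, y), y) / y**(N - 1)

def A_minus(alpha, f):
    # A^-_{0,y}(alpha, f) = -Delta_r^2 f - (1/4) y f' + alpha f, cf. (b2)
    return -radial_laplacian(radial_laplacian(f)) \
           - sp.Rational(1, 4) * y * sp.diff(f, y) + alpha * f

f1 = y**2 / 2                         # candidate for alpha_1 = 1/2
f2 = 1 + y**4 / (8 * N * (N + 2))     # candidate for alpha_2 = 1

assert sp.simplify(A_minus(sp.Rational(1, 2), f1)) == 0
assert sp.simplify(A_minus(1, f2)) == 0
```

Both residuals vanish identically in $y$ and $N$, so a correct shooting code must reproduce these profiles at $\a=\frac12$ and $\a=1$.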
We perform standard shooting from $y=0$ to $y=+\infty$ for the ODE \eqref{b2} by posing four conditions at the origin: \begin{itemize} \item Either \begin{equation} \label{sh1} f(0)=1 \,\,\,\mbox{(normalisation)}, \quad f''(0)=0, \quad f'(0)=f'''(0)=0 \,\,\,\mbox{(symmetry)}; \end{equation} \item or \begin{equation} \label{sh2} f(0)=0, \,\,\, \quad f''(0)=1 \,\,\, \mbox{(normalisation)}, \quad f'(0)=f'''(0)=0 \,\,\,\mbox{(symmetry)}. \end{equation} \end{itemize} These two sets of conditions correspond to a partitioning of the eigenfunctions into two subsets with the corresponding properties at the origin, $$\hbox{either}\quad f(0)=1,\quad f''(0)=0,\quad \hbox{or}\quad f(0)=0, \quad f''(0)=1,$$ together with the symmetry conditions. Explicitly, the first four eigenfunctions are \begin{eqnarray} && \textstyle{ k=1: \quad \alpha_1=\frac{1}{2}, \quad f_1(y) = \frac{1}{2} y^2 } , \nonumber \\ && \textstyle{ k=2: \quad \alpha_2=1, \quad f_2(y) = 1 + \frac{1}{8N(N+2)} y^4 }, \nonumber \\ && \textstyle{ k=3: \quad \alpha_3=\frac{3}{2}, \quad f_3(y) = \frac{1}{2} y^2 + \frac{1}{48(N+2)(N+4)}y^6 } , \nonumber \\ && \textstyle{ k=4: \quad \alpha_4=2, \quad f_4(y) = 1 + \frac{1}{4N(N+2)} y^4 + \frac{1}{192N(N+2)(N+4)(N+6)} y^8, } \nonumber \end{eqnarray} where the above properties are immediately apparent. Numerically, we may illustrate the appearance of the eigenfunctions and their eigenvalues through consideration of an Initial Value Problem (IVP). The linear ODE (\ref{b2}) is solved numerically using the stiff MATLAB solver ode15s (with tight error tolerances AbsTol=RelTol=$10^{-13}$), subject to either (\ref{sh1}) or (\ref{sh2}) as initial conditions. The far-field behaviour (\ref{b51}) is extracted from the numerical solution. The oscillatory component in the far-field behaviour is revealed by considering the scaled solution \begin{equation} \label{scaledf} f(y) y^{\frac{2}{3}(N+2\alpha)} e^{-c_0 y^{4/3}}.
\end{equation} A least-squares fit of this function to the remaining oscillatory component in (\ref{b51}), over a suitable interval at large $y$, allows the determination of the constants $C_{1,2}$. The large-$y$ interval taken was typically $[250,300]$. The scaled function (\ref{scaledf}) is shown in Figure \ref{fig1} for each of the two types of initial conditions in the cases $N=1,2$ and $3$. Two selected values of the parameter $\alpha$ are taken in each case. The figures illustrate convergence to the oscillatory part of the far-field behaviour, with the extracted least-squares estimates of the constants $C_{1,2}$ shown inset for each of the two $\alpha$ values. Figure \ref{fig2} shows the variation of the far-field constants $C_{1,2}$ with $\alpha$ in the cases $N=1,2,3$ for the two initial conditions. The constants $C_{1,2}$ both vanish precisely at the eigenvalues, when $\alpha=\alpha_k$, the first five being shown in Figure \ref{fig2} in each $N$ case. The magnitudes of the constants $C_{1,2}$ grow rapidly as $\alpha$ increases, making the determination of further eigenvalues more difficult. This approach demonstrates the viability of numerical determination of the eigenvalues and eigenfunctions by choosing $\alpha$ to minimise the far-field behaviour (\ref{b51}). \begin{figure}[htp] \vskip -2.2cm \hspace*{-1cm} \includegraphics[scale=0.6]{Proflin.pdf} \vskip -0.5cm \caption{ \small Numerical illustration of the oscillatory component in the far-field behaviour (\ref{b51}) in the linear case. In each dimensional case $N=1,2,3$, scaled profiles (\ref{scaledf}) are shown for two selected values of the parameter $\alpha$ for each initial condition (\ref{sh1}) and (\ref{sh2}). The extracted least-squares values of the far-field constants $C_{1,2}$ are stated inset in each figure.
} \label{fig1} \end{figure} \begin{figure}[htp] \vskip -2.2cm \hspace*{-1cm} \includegraphics[scale=0.6]{Clin.pdf} \vskip -0.5cm \caption{ \small Numerical determination of the far-field constants $C_{1,2}$ for (\ref{b51}) in the stated dimensional and initial condition cases. Shown is their variation with the parameter $\alpha$. The coincident zeros correspond to the vanishing of the far-field maximal bundle, yielding the eigenvalues $\alpha=\alpha_k$ of the linear problem. } \label{fig2} \end{figure} \section{A ``homotopic" transition to small $n>0$: some key issues} \label{S5} Next, we are going to use the above linear results to predict the true nonlinear eigenvalues $\a_k(n)$ for the TFE--4, at least, for sufficiently small $n>0$. By continuity (to be discussed later on as a continuous deformation via a homotopic argument), we now know that \begin{equation} \label{b10} \textstyle{ \a_k(0)= \frac k2, \quad k=1,2,3,...\,. } \end{equation} Another important conclusion: since, for $n=0$, any $f_k(0) \not = 0$ (vanishing could happen only accidentally, with probability 0), we can also expect that \begin{equation} \label{b11} \mbox{for small $n>0$,} \quad f_k(0) \not = 0, \end{equation} meaning that, in this case, we do not need to perform a shooting from the interface point, with its quite tricky behaviour nearby. Also, we conclude from \eqref{b51} that: \begin{equation} \label{b12} \mbox{for small $n>0$, the maximal bundle is 2D and oscillatory as $y \to + \infty$}. \end{equation} These properties will aid progress on the nonlinear eigenvalue problem. \subsection{Passing to the limit $n \to 0^+$ in the nonlinear eigenvalue problem.} Subsequently, we perform a homotopy deformation from the self-similar equation $\eqref{rad22}_{-}$ (with the lower signs) of the TFE--4 \eqref{i1} to the linear radial ODE \eqref{b2} corresponding to the \emph{classical bi-harmonic parabolic equation} \eqref{bi.1}.
In particular, we construct a continuous deformation from the radial equation $\eqref{rad22}_{-}$ to the linear equation \eqref{b2}, for which we know the solutions explicitly. It is clear that the CP for the \emph{bi-harmonic equation} \eqref{bi.1} is well-posed and has a unique solution given by the convolution \begin{equation} \label{b.11} u(x,t)=b(x-\cdot,t)\, * \, u_0(\cdot), \end{equation} where $b(x,t)$ is the fundamental solution of the operator $D_t + \Delta^2$. Thanks to our analysis, it is possible to establish a connection between the radial solutions of \eqref{i1} and those of \eqref{bi.1} via the associated self-similar equations as $n \to 0^+$. To this end, we apply the Lyapunov--Schmidt method to ascertain qualitative properties of the self-similar equation $\eqref{rad22}_{-}$, following an analysis similar to the one carried out in \cite{TFE4PV}. Thus, as we already know, the operator ${\bf A}^-_{0,y} (\a,f)$ defined by \eqref{b2} produces a countable family of eigenvalues \[ \textstyle{ \a_k\equiv \a_k(0)=\frac{k}{2}, \quad \hbox{with}\quad k=1,2,\cdots \, . } \] Note also that \eqref{b2} admits a complete and closed set of eigenfunctions, which are {\em generalised Hermite polynomials} exhibiting finite oscillatory properties. This oscillatory issue seems to be crucial. In fact, it was observed in \cite{EGK1} that a similar analysis of blow-up patterns for a TFE--4 like \eqref{i1} did not detect any stable oscillatory behaviour of solutions near the interfaces of the associated radially symmetric equation. Hence, all the blow-up patterns turned out to be nonnegative, which is a specific feature of the PDE considered therein. However, this does not mean that blow-up similarity solutions of the CP do not change sign near the interfaces or inside the support. Actually, it was pointed out that the local sign-preserving property could be attributed only to the blow-up ODE and not to the whole PDE \eqref{i1}.
Hence, the possibility of having oscillatory solutions cannot be ruled out in every case. \subsection{Branching/bifurcation analysis} Now, we construct a continuous deformation such that the patterns occurring for the nonlinear eigenvalue problem $\eqref{rad22}_{-}$ are homotopically connected to those of the equation \eqref{b2}. Thus, for small $n>0$ in \eqref{rad22}, we assume the following expansions: \begin{equation} \label{br3} \a_k(n):= \a_k+ \mu_{1,k} n+ o(n),\quad |f|^n = {\mathrm e}^{n\ln |f|}:= 1 +n \ln |f|+o(n), \end{equation} where the last one is assumed to be understood in a weak sense. The second expansion cannot be interpreted pointwise for oscillatory, sign-changing solutions $f(y)$, though these functions are now assumed to have a {\em finite} number of zero surfaces (as the generalised Hermite polynomials for $n=0$ do). Indeed, as discussed in \cite{TFE4PV} for \eqref{rad22}, this is true if the zeros are transversal. Furthermore, in order to apply the Lyapunov--Schmidt branching analysis, we suppose the expansion \begin{equation} \label{br12} \textstyle{ f=\sum_{|\b|=k} c_\b f_\b +V_k, \quad \hbox{for every $k\geq 1$,} } \end{equation} under the natural ``normalising" constraint \begin{equation} \label{nor} \textstyle{ \sum\limits_{|\b|=k} c_\b=1. } \end{equation} Moreover, we write \[\{f_\b\}_{|\b|=k}=\{f_1,...,f_{M_k}\},\] for the natural basis of the $M_k$-dimensional eigenspace, with $M_k\geq 1$, such that \[ \textstyle{ f_k = \sum_{|\b|=k} c_\b f_\b \quad \hbox{and}\quad V_k=\sum_{|\b|>k} c_\b f_\b, } \] with $V_k \in Y_k$, where $Y_k$ is the complementary invariant subspace of the corresponding kernel. In particular, we consider the expansion \[ V_k:=n \Phi_{1,k} + o(n).\] Thus, applying the Fredholm alternative, we obtain the existence of a number of branches emanating from the solutions $(\a_k,f_k)$ at the value of the parameter $n=0$.
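The second expansion in \eqref{br3} is just the Taylor expansion of $|f|^n={\mathrm e}^{n\ln|f|}$ in $n$, valid pointwise away from the zeros of $f$ (hence the weak interpretation near transversal zeros). A one-line symbolic check (Python/sympy, purely illustrative):

```python
import sympy as sp

n = sp.Symbol('n', positive=True)
f = sp.Symbol('f', positive=True)  # plays the role of |f| away from its zeros

# |f|^n = exp(n*log|f|) = 1 + n*log|f| + o(n), cf. (br3)
expansion = sp.series(f**n, n, 0, 2).removeO()
assert sp.simplify(expansion - (1 + n * sp.log(f))) == 0
```

The logarithmic coefficient is what makes the expansion only weakly meaningful where $f$ changes sign.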
We can guarantee that the first profile is unique, but, for the rest, there could be more than one branch of solutions emanating at $n=0$. The number of branches depends on the dimension of the corresponding eigenspace; see \cite{TFE4PV} for further comments and a detailed analysis of a similar branching problem. \vspace{0.2cm} \noindent{\bf Remark.} Furthermore, when $n\to 0^+$, we may have several different profiles $f(y)$. For the PME--2 \eqref{pormed}, the Graveleau profiles are always unique by the Maximum Principle, but this is not so for the TFE--4 \eqref{i1}. However, for $\a=\frac{1}{2}$, the only profile (up to normalisation) is $f(y)=y^2$. \vspace{0.2cm} The previous discussion can be summarised in the following lemma. \begin{lemma} The patterns occurring for equation \eqref{b2}, i.e. those for $\eqref{rad22}_{-}$ with $n=0$, act as branching points of nonlinear eigenfunctions of the Cauchy problem for the operator $\eqref{rad22}_{-}$, at least when the parameter $n$ is sufficiently close to zero. \end{lemma} \vspace{0.2cm} \noindent{\bf Remark.} It turns out that, using classical branching theory, sign-changing ``nonlinear eigenfunctions" $f(y)$, which satisfy the {\em nonlinear eigenvalue problem} $\eqref{rad22}_{-}$ (with the extra ``radiation-minimal-like" condition at infinity), can be connected, at least for sufficiently small $n > 0$, with the eigenfunctions $f_k$ of the linear problem \eqref{b2}. \section{Non-improvable regularity for the TFE--4} The following main result is a straightforward consequence of our focusing self-similar analysis. \begin{theorem} \label{theneg2} Let, for a fixed $n>0$, the nonlinear eigenvalue problem $(\ref{rad22})_-$ have a nontrivial solution (eigenfunction) $f_k(y)$ for some eigenvalue $\a_k>0$, i.e. there exists a self-similar focusing solution of the problem $(\ref{eigpm})_-$ exhibiting the finite-time trace $\eqref{gg5}$.
Then: \begin{enumerate} \item[(i)] For the general Cauchy problem for the TFE--4 $\eqref{i1}$, even in the radial setting, the H\"older continuity exponent of solutions cannot exceed\footnote{We mean $C^{l+\varepsilon}$-regularity if $\mu_k \ge 1$, which, not that surprisingly, happens for all small $n>0$.} \begin{equation} \label{h11} \textstyle{ \mu_k(n,N) = \frac {\a_k}{ \b_k} \equiv \frac {4 \a_k}{1+ n \a_k}. } \end{equation} \item[(ii)] Let there exist $\mu_k<1$, i.e. $\a_k < \frac 1{4-n}$, for $n \in (0,2)$\footnote{Actually, for smaller $n$; note that, for larger $n$, solutions of the TFE--4 are known to be strictly positive \cite{BF1}.}. Then, for the TFE--4 \eqref{i1}, \begin{equation} \label{h12} \textstyle{ \nabla u_*(r,0^-) \in L^p_{\rm loc}({\bf R}^N) \quad \mbox{iff} \quad p \in[1,p_*), \,\,\, p_*(n,N)= \frac N{1- \mu_k}, } \end{equation} so that, for (even radial) solutions of \eqref{i1}, in general, for any $t>0$, \begin{equation} \label{h13} \nabla u(x,t) \not \in L^{p_*}_{\rm loc}({\bf R}^N). \end{equation} \end{enumerate} \end{theorem} \vspace{0.2cm} \noindent{\bf Remark.} Basically, $|\nabla u_*(r,0^-)| \in L^p_{\rm loc}({\bf R}^N)$ if and only if $$\textstyle{\int_{{\bf R}^N} |\nabla u_*(r,0^-)|^p = C \int_0^1 r^{N-1} (r^{\mu_k-1})^p \,{\mathrm d}r<\infty.}$$ Thus, since $0<\mu_k<1$, evaluating the exponent in the last integral, we find that it is finite if and only if $N-1+p(\mu_k-1)>-1$, i.e., $$\textstyle{ p <p_*\equiv p_*(n,N) =\frac{N}{1-\mu_k(n,N)}.}$$ \vspace{0.2cm} \noindent{\bf Remark.} Of course, it can happen that $\mu_k>1$, so that the focusing does not supply us with a truly H\"older continuous focusing trace, but rather with a $C^{l+\varepsilon}$-trace. Actually, by continuity in $n$, exactly this happens for small $n>0$, since $\mu_1(0,N)=2$. We expect that the minimal value of $\mu_k$ in \eqref{h11} is attained at $k=1$, but cannot prove this for all $n>0$ (for small ones, this is obvious).
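The critical exponent $p_*$ in \eqref{h12} follows from the convergence condition for $\int_0^1 r^{N-1}\,r^{p(\mu_k-1)}\,{\mathrm d}r$, namely $N-1+p(\mu_k-1)>-1$; a symbolic sketch of this one-line computation (Python/sympy, illustrative only):

```python
import sympy as sp

p, N, mu = sp.symbols('p N mu', positive=True)

# Near the origin |grad u_*| ~ r^(mu - 1), so local L^p integrability reads
#   int_0^1 r^(N-1) * (r^(mu-1))^p dr < oo,  i.e.  N - 1 + p*(mu - 1) > -1.
exponent = N - 1 + p * (mu - 1)

# For 0 < mu < 1, the borderline case gives p_* = N/(1 - mu), cf. (h12)
p_star = sp.solve(sp.Eq(exponent, -1), p)[0]
assert sp.simplify(p_star - N / (1 - mu)) == 0

# Example: N = 1 and mu_k = 1/2 give p_* = 2
assert p_star.subs({N: 1, mu: sp.Rational(1, 2)}) == 2
```

The example values are illustrative; in the theorem, $\mu$ stands for $\mu_k(n,N)$ from \eqref{h11}.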
Note again that, for all large $n \ge 2$, such a focusing is not possible in principle, since the solutions must remain strictly positive for all times, \cite{BF1}. Thus, this optimal (non-improvable) regularity result for the TFE--4 (and, it seems, for many other parabolic equations) depends on the solvability of the nonlinear eigenfunction focusing problem. Note that, in the previous section, we showed a homotopy connection between that nonlinear eigenvalue problem and the linear problem \eqref{b2} at $n=0$. For these reasons, the analysis performed in the two previous sections is crucial in ascertaining qualitative information about the solutions of the TFE--4. Furthermore, we can also ascertain the H\"{o}lder continuity with respect to the temporal variable $t$. This fact comes directly from the non-improvable H\"{o}lder exponent we have already obtained above. Indeed, for the radial self-similar focusing solutions of the form \eqref{sssol}, one easily finds that $$|u(0,t)-u(0,0)|=(-t)^\a |f(0)|.$$ Hence, the $t$-H\"{o}lder exponent close to $t=0$ cannot be bigger than $\a$. Therefore, the focusing solution given here provides us with optimal H\"{o}lder estimates in both variables $x$ and $t$. \section{Oscillatory structure of maximal solutions for $n>0$} \label{S.maxosc} Let us now consider $n \in (0,2)$. For the ODE \eqref{rad22}, we will try the same {\em ansatz} as in \cite{EGK2}. Namely, we will find solutions of the maximal type, with the envelope as in \eqref{gg2}. We introduce a corresponding {\em oscillatory} component as follows: \begin{equation} \label{zz1} f(y)= y^{\mu} \varphi(s), \quad s= \ln y, \quad \mu = \frac{4}{n} \quad \mbox{for} \quad y \gg 1.
\end{equation} Then, \[ \textstyle{ f'=y^{\mu -1} \big( \dot{\varphi} + \mu \varphi \big), \quad f''=y^{\mu -2} \big[ \ddot{\varphi} + \big( \mu - 1 \big) \dot{\varphi} + \mu \big(\mu-1 \big) \varphi \big], } \] \[ f''' = y^{\mu-3} \left[ \dddot{\varphi} + 3(\mu-1) \ddot{\varphi} + (3 \mu^2 - 6\mu +2) \dot{\varphi} + \mu(\mu^2 -3 \mu + 2)\ \varphi \right], \] where $'={\mathrm d}/{\mathrm d}y$ and $\dot{}={\mathrm d}/{\mathrm d}s$. Substituting these expressions into \eqref{rad22} yields the following fourth-order homogeneous ODE for $\varphi(s)$ \begin{align} & \ddddot{\varphi} + 2(N-4 + 2\mu) \dddot{\varphi} + (6 \mu^2 + 6(N-4)\mu + 11 + (N-1)(N-9)) \ddot{\varphi} \label{phieqn}\\ & + 2(2\mu+N-4)(\mu^2 + (N-4)\mu +2-N) \dot{\varphi} + \mu (\mu-2) (\mu^2 +2(N-3)\mu + 3+ (N-1)(N-5)) \varphi \nonumber \\ & \textstyle{ + n \left( \frac{\dot{\varphi}}{\varphi} + \mu \right) \left[ \dddot{\varphi} + (N-4+3\mu) \ddot{\varphi} + (3\mu^2 + 2(N-4)\mu +4-2N) \dot{\varphi} + \mu (\mu-2)(N-2+\mu) \varphi \right] } \nonumber \\ & \textstyle{ + \left( \frac{1}{4}(1+n\alpha)(\dot{\varphi} + \mu \varphi) -\alpha \varphi \right) |\varphi|^{-n} = 0 . } \nonumber \end{align} We mention that this equation is autonomous and thus may be reduced to third-order, although we do not utilise this reduction here. Figure \ref{fig3} illustrates the periodic nature of the solutions for $\phi$, at least for $n$ small enough. Since the oscillations occur over such a large range, we use the following transformation to allow the oscillations to be visible on the plots, \begin{equation} \mbox{t}(\phi(s)) = \left\{ \begin{array}{ll} \ln \phi(s) +1, & \mbox{if}\; \phi(s)>1, \\ \phi(s), & \mbox{if}\; -1<\phi(s)<1, \\ - \ln (-\phi(s)) -1, & \mbox{if}\; \phi(s)<-1. \end{array}\right. \label{tphi} \end{equation} Plotted are numerical solutions for $\phi$ in the two cases $\alpha=0.5,1$ with $N=1$. The subplots illustrate the change in the profile behaviour with $n$. 
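The visualisation transformation \eqref{tphi} is a symmetric log-type compression: it is continuous and strictly monotone, so no sign change or oscillation is lost. An illustrative implementation (Python/numpy):

```python
import numpy as np

def t_phi(phi):
    """Symmetric log transformation (tphi): compresses |phi| > 1
    logarithmically while leaving -1 < phi < 1 unchanged."""
    phi = np.asarray(phi, dtype=float)
    # np.maximum guards keep the log argument >= 1 on inactive branches
    out = np.where(phi > 1.0, np.log(np.maximum(phi, 1.0)) + 1.0, phi)
    out = np.where(phi < -1.0, -np.log(np.maximum(-phi, 1.0)) - 1.0, out)
    return out

# Continuity at the matching points phi = +/-1 and a few sample values
assert np.isclose(t_phi(1.0), 1.0) and np.isclose(t_phi(-1.0), -1.0)
assert np.isclose(t_phi(np.e), 2.0) and np.isclose(t_phi(-np.e), -2.0)
assert np.isclose(t_phi(0.5), 0.5)
```

Monotonicity means that the zeros of the plotted, transformed profile coincide exactly with those of the original one.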
The numerics support the proposition that the family \eqref{zz1} is 2D, composed of: \begin{enumerate} \item[(i)] a 1D stable manifold $\varphi_*(s)$,\\ and \item[(ii)] a phase shift $\varphi_*(s+s_0)$ for any $s_0 \in {\bf R}$. \end{enumerate} As we have seen earlier, these two properties are true for $n=0$, and, hence, by continuity, we conjecture that they remain true for small $n>0$. However, the periodic exponential structure governed by the linear ODE for $n=0$ is replaced by the more difficult one \eqref{zz1} for $n>0$. Nevertheless, the periodic nature of such behaviour appears universal and is expected to remain for larger $n$. The numerics do suggest, though, that it is lost (in a homoclinic-heteroclinic bifurcation) for $n$ near 0.8 or 0.9. \begin{figure}[htp] \vskip -.2cm \hspace*{-1cm} \includegraphics[scale=0.6]{OscN1ap5.pdf} \vskip -0.5cm \caption{ \small Illustrative numerical profiles of the transformed $\phi(s)$ via (\ref{tphi}). Shown is the parameter case $N=1,\alpha=0.5$ for selected $n$. } \label{fig3} \end{figure} \begin{figure}[htp] \vskip -2.2cm \hspace*{-1cm} \includegraphics[scale=0.6]{OscN1a1.pdf} \vskip -0.5cm \caption{ \small Illustrative numerical profiles of the transformed $\phi(s)$ via (\ref{tphi}). Shown is the parameter case $N=1,\alpha=1$ for selected $n$. } \label{fig4} \end{figure} To reconcile with the known behaviour in the linear case, we now study the behaviour of the periodic solutions for small $n>0$. To reveal the limiting oscillatory behaviour as $n \to 0$, we keep the leading terms of the coefficients in (\ref{phieqn}) as $\mu \to \infty$ to obtain \begin{equation} \ddddot{\varphi} + 4\mu \dddot{\varphi} + 6 \mu^2 \ddot{\varphi} + 4\mu^3 \dot{\varphi} + \mu^4 \varphi + \frac{1}{4} \left( \dot{\varphi} + \mu \varphi \right) |\varphi|^{-n} = 0 .
\label{phismalln} \end{equation} We now rescale as follows: \begin{equation} \label{AAA.1} \textstyle{ \varphi(s)= \mu^{-\frac{3}{n}} \hat{\varphi}(\hat{s}), \quad \displaystyle s = \frac{\hat{s}}{\mu} , } \end{equation} leading to \begin{equation} \textstyle{ \ddddot{\hat{\varphi}} + 4 \dddot{\hat{\varphi}} + 6 \ddot{\hat{\varphi}} + 4 \dot{\hat{\varphi}} + \hat{\varphi} + \frac{1}{4} \left( \dot{\hat{\varphi}} + \hat{\varphi} \right) |\hat{\varphi}|^{-n} = 0 , } \label{phihat} \end{equation} where here $\dot{}$ denotes ${\mathrm d}/{\mathrm d}\hat{s}$. For $n=0$, the equation (\ref{phihat}) becomes linear, \begin{equation} \textstyle{ \ddddot{\hat{\varphi}} + 4 \dddot{\hat{\varphi}} + 6 \ddot{\hat{\varphi}} + 4 \dot{\hat{\varphi}} + \hat{\varphi} + \frac{1}{4} \left( \dot{\hat{\varphi}} + \hat{\varphi} \right) = 0 , } \label{phihatn0} \end{equation} with the characteristic equation $$ \textstyle{ (m+1)\left( (m+1)^3 + \frac{1}{4} \right)=0, } $$ for exponential solutions $ \hat{\varphi}(\hat{s})= {\mathrm e}^{m \hat{s} } . $ The roots of the characteristic equation are linked to those of the controlling factor in (\ref{b3}) via $$ \textstyle{ m +1 = \frac{4}{ 3}a_1,\quad \frac{4}{3}a_2,\quad \frac{4}{3}a_3 \quad \mbox{and} \quad 0 . } $$ Consequently, we obtain the dominant asymptotic behaviour \begin{equation} \textstyle{ \hat{\varphi}(\hat{s}) \sim e^{(-1+\frac{4}{3}c_0) \hat{s} } \left[ \hat{A}_1 \cos \left(\frac{4}{3}c_1\hat{s} \right) + \hat{A}_2 \sin \left(\frac{4}{3}c_1\hat{s} \right) \right] \quad \mbox{as $\hat{s} \to \infty$}, } \label{phihatasy} \end{equation} for arbitrary constants $\hat{A}_{1,2}$. Denoting the independent variable by $\hat{y}$ rather than $y$ for convenience, we thus have the small-$n$ behaviour \begin{equation} \textstyle{ f(\hat{y}) \sim \hat{y}^{\frac{4}{n}} \left( \frac{4}{n}\right)^{-\frac{3}{n}} \hat{\varphi} \left({\frac{4}{n}\ln(\hat{y})} \right) .
} \end{equation} This may be reconciled with the expression in (\ref{b51}) through the identifications \[ \textstyle{ \hat{y} = e^{\frac{3n}{16} {y}^{4/3}}, \quad \hat{A}_{1,2} = \left( \frac{4}{n}\right)^{\frac{3}{n}} C_{1,2} . } \] \section{Nonlinear eigenfunctions by shooting} The eigenfunctions for $n>0$ may be obtained via shooting. The conditions (\ref{sh1}) or (\ref{sh2}) are again used as initial conditions and $\alpha$ is determined by capturing the growth (\ref{mingro}) for sufficiently large $y$. This growth behaviour may be imposed by $$\hbox{minimising}\quad \frac{1+\a n}{4}\, y f' -\alpha f\quad \hbox{at large}\quad y\quad \hbox{values},$$ typically chosen to be around 40. A standard regularisation of the term $|f(y)|^n$ is required, in the form $(f^2+\delta^2)^{\frac{n}{2}}$ with $\delta$ taken relatively small. Figure \ref{fig5} shows the eigenvalues of the first three branches $k=1,2,3$. These numerically extend the $n=0$ eigenvalues of Proposition \ref{Pr.anal} to $n>0$. As in the linear case $n=0$, the eigenvalues remain the same irrespective of the spatial dimension $N$, at least to the accuracy of the numerical calculations that were performed. Furthermore, it is worth remarking that, along each branch, i.e. for fixed $k$, the exponent $\mu$ of the far-field behaviour of the eigenfunctions, $$f\sim C y^{\mu},$$ remains quite close to its value in the linear case, i.e., $$\mu(n)\approx \mu(0).$$ Thus, setting $\mu(n)\approx\mu(0)$ in \eqref{param}, we obtain the approximation $$\alpha_k(n) \approx \alpha_k(0)/(1-\alpha_k(0) n),$$ which seems reasonable at least $$\hbox{for}\quad n< 1/\alpha_k(0).$$ The corresponding eigenfunctions are thus similar to those in the linear case. \begin{figure}[htp] \hspace*{-1cm} \includegraphics[scale=0.6]{EvN1N2.pdf} \vskip -0.5cm \caption{ \small Numerical determination of the eigenvalues $\alpha_k(n)$ for the first three branches $k=1,2,3$ of eigenfunctions.
} \label{fig5} \end{figure} An alternative approach to determining the eigenfunctions, in both the linear $n=0$ and nonlinear $n>0$ cases, is through minimisation of the oscillatory maximal component, so as to obtain the non-oscillatory minimal profile for $y\gg 1$. In principle, we have two parameters, $\alpha$ and, say, $\nu=f''(0)$ or $f(0)$, depending on which branch of eigenfunctions is considered. In the linear case, using \eqref{b51}, we are required to satisfy two algebraic equations with analytic functions: \begin{equation} \label{sh3} \left\{ \begin{matrix} C_1(\a,\nu)=0, \smallskip \\ C_2(\a,\nu)=0. \end{matrix} \right. \end{equation} Therefore, we arrive at a well-posed $2-2$ shooting problem, which cannot have more than a countable set of pairs of solutions (as mentioned already). This approach is practical in the linear case $n=0$, as the two-parameter form of the maximal bundle is known explicitly, as given in (\ref{b51}), and was essentially pursued in Section 4.2. However, the lack of such an explicit expression for $n>0$ limits this approach in the nonlinear case. Finally, it is worth presenting some numerical experiments showing the behaviour of the maximal profiles. Figure \ref{fig6} shows profiles in the case $N=1,\alpha=0.75$ for selected $n$. Since the oscillatory profiles occur over such large ranges, the transformed profile using (\ref{tphi}) is depicted. The figures suggest the loss of a pure oscillatory structure as $n$ increases, which is compatible with the behaviour seen for $\phi$ in Figures \ref{fig3} and \ref{fig4}, and is highly suggestive of a homoclinic-heteroclinic bifurcation. The precise value of $n$ at which this occurs is, however, difficult to determine with sufficient accuracy. \vspace{0.1cm} \begin{figure}[htp] \vskip -2.2cm \hspace*{-1cm} \includegraphics[scale=0.6]{fN1ap75.pdf} \vskip -0.5cm \caption{ \small Illustrative numerical profiles of the transformed $\phi(s)$ via (\ref{tphi}).
Shown is the parameter case $N=1,\alpha=0.75$ for selected $n$. } \label{fig6} \end{figure} \section{After-focusing self-similar extension} As usual, and as we have observed in many similar problems concerning a (unique) extension of a solution after blow-up, this is much easier. In fact, after focusing at $t=0^-$, we arrive at a well-posed CP (or TFE, which does not matter, since $u_0(r)>0$, so that no interfaces are present) for the TFE--4 \eqref{i1} with initial data \eqref{gg5} (with $\mu=\mu_k$ when possible), already satisfying the necessary minimal growth at infinity. Therefore, there exists a self-similar solution $(\ref{upm})_+$ with $f(y)$ satisfying the ODE $(\ref{rad22})_+$, {\em with already fixed ``eigenvalue" $\a=\a_k$}. The change of sign in front of the linear terms changes the dimension of the stable/unstable manifolds as $y \to +\infty$ (in particular, the unstable manifold becomes 1D, similar to $n=0$). To explain the latter, as usual consider the case $n=0$ (and hence small $n>0$). Then, with the change of sign in the two linear terms in \eqref{b2}, we arrive at a different characteristic equation for $a$: \begin{equation} \label{b3N} \textstyle{ f(y) \sim {\mathrm e}^{a y^\gamma} \quad \mbox{as} \quad y \to + \infty \quad \Longrightarrow \quad a^3= {\bf +} \frac 14 \,\big( \frac 34 \big)^3. } \end{equation} This characteristic equation gives two stable roots with ${\rm Re}(\cdot)<0$: \begin{equation} \label{b4N} \textstyle{ b_{1,2}=\frac 34\,4^{-\frac 13} \big[ -\frac 12 \pm {\rm i} \, \frac{\sqrt 3}2 \big] \equiv c_0 \pm {\rm i} \, c_1, } \end{equation} and one positive root (governing maximal solutions; see below) \[ \textstyle{ b_3= \frac 34 \, 4^{-\frac 13} > 0. } \] Hence, the bundle of maximal solutions as $y \to + \infty$ is just 1D (and non-oscillatory, being governed by the single real root $b_3$), so we arrive at an {\em underdetermined} problem for $\nu,\a$, which satisfy just a single algebraic equation (unlike two in \eqref{sh3}). 
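To make the change in the number of growing modes explicit, one may note that the cube roots in the two cases differ only by an overall sign (a sketch, consistent with \eqref{b3N}, \eqref{b4N} and the blow-up bundle (\ref{b51})):

```latex
\[
a^3 = \pm \tfrac14 \big( \tfrac34 \big)^3
\quad \Longrightarrow \quad
a = \pm \tfrac34\, 4^{-\frac13}\, \omega,
\qquad
\omega \in \big\{ 1,\; -\tfrac12 + \mathrm{i}\, \tfrac{\sqrt3}{2},\;
-\tfrac12 - \mathrm{i}\, \tfrac{\sqrt3}{2} \big\},
\]
```

where $\omega$ runs over the cube roots of unity. For the $-$ sign (the blow-up case), the oscillatory pair of roots has ${\rm Re}(\cdot)>0$ and spans the 2D maximal bundle, while the real root decays; for the $+$ sign these roles are exchanged, leaving $b_3>0$ as the only growing mode.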
Indeed, this makes the $\a$-spectrum continuous, with solvability for any $\a>0$. \smallskip This allows us to get a unique extension profile $f(y)$ for such an {\em a priori} fixed eigenvalue $\a_k$. In other words, the focusing extension problem is not an eigenvalue one, since a proper $\a_k$ has been fixed by the previous focusing blow-up evolution. It might be said, using standard terminology from linear operator theory, that for the sign $+$ the spectrum of this problem becomes continuous, unlike the discrete one for $-$. Therefore, we do not study this much easier problem any further, especially since it has nothing to do with our goal: to detect a non-improvable regularity for the TFE--4 via blow-up focusing.
\section{Introduction} The work in this note was prompted by the following natural question: \noindent \textit{``Does every closed $4$--manifold admit a branched covering by a symplectic $4$--manifold?'',} \noindent which was studied by the authors at the 2018 American Institute of Mathematics Workshop on ``Symplectic four-manifolds through branched coverings'', motivated by a conjecture of Eliashberg \cite[Conjecture 6.2]{Eliashberg}. We provide a fairly strong answer to the above question in the case of simply connected $4$--manifolds, which in particular solves Problem 4.113(C) in Kirby's list~\cite{KirbyList}: \begin{theorem} \label{thm1} Let $X$ be a closed oriented simply-connected smooth $4$--manifold. Then there exists $g\in{\mathbb{N}}$ and a degree $16$ branched covering $f\: X' \to X$ such that $X'$ is the smooth $4$--manifold $T^2 \times \Sigma_g$. In addition, if the $4$--manifold $X$ is spin, the branched covering $f$ is natural with respect to a spin structure on $T^2 \times \Sigma_g$. \end{theorem} \noindent Note that the smooth $4$--manifold $T^2 \times \Sigma_g$ admits a symplectic structure. It follows from Theorem \ref{thm1} that if instead $X$ is a closed (possibly non-orientable) connected smooth \linebreak $4$--manifold with finite $\pi_1(X)$, then there is a branched covering $T^2 \times \Sigma_g \to X$ of degree $16 \, |\pi_1(X)|$, which factors through the universal covering $\widetilde{X} \to X$. The above results do not generalize to $4$--manifolds with arbitrary fundamental groups; for instance, no $\Sigma_g$--bundle over $\Sigma_h$ with $g, h \geq 2$ and infinite monodromy group (e.g. any surface bundle with nonzero signature) can be dominated by a product $4$--manifold by \mbox{\cite[Theorem~1.4]{KotschickLoh}.} Nonetheless, there are comparable results for many other $4$--manifolds with infinite fundamental groups. 
For example, $X=\mathbin{\#}_g (S^1 \times S^3)$, with $\pi_1(X)\cong \ast_g\,{\mathbb{Z}}$, is degree $4$ branched covered by $X'=S^2 \times \Sigma_g$, by \cite[Theorem 1.2]{PiergalliniZuddas}. In addition, the branched virtual fibering theorem of Sakuma \cite[Addendum~1]{Sakuma} implies the following: \begin{proposition} \label{prop1} Let $X=S^1 \times Y$ be a smooth \mbox{$4$--manifold} which is the product of $S^1$ and a closed connected oriented $3$--manifold $Y$. Then there exists $g\in{\mathbb{N}}$ and a double branched covering $X' \to X$, where $X'$ is a symplectic $4$--manifold which is a $\Sigma_g$--bundle over $T^2$. \end{proposition} \noindent Indeed, \cite{Sakuma} shows that any closed oriented \mbox{$3$--manifold} $Y$ is double branched covered by a surface bundle over a circle (see also \cite{Montesinos} for a different proof), from which Proposition \ref{prop1} is immediately deduced; this provides yet another class of $4$--manifolds with infinite fundamental group for which a (symplectic) branched cover can be readily described. Here, we recall that the product of a fibered $3$--manifold and the circle is symplectic \cite{Thurston}. It is worth noting that with a little more information on the smooth topology of $X$, one can easily determine the topology of the branched coverings $X' \to X$ in Theorem~\ref{thm1} and Proposition~\ref{prop1}. For the former, one only needs to know the number of stabilizations by taking the connected sum with $S^2 \times S^2$ that are required before the simply-connected $4$--manifold $X$ completely decomposes into a connected sum of copies of ${\mathbb{CP}}{}^2$, $S^2 \times S^2$ and the ${\mathrm{K}3}$ surface, taken with either orientation. This of course can always be achieved by a classical result of Wall \cite{Wall}, and for vast families of simply-connected $4$--manifolds, one stabilization is known to be enough \cite{BaykurSunukjian}. 
Similarly, for Proposition~\ref{prop1}, one just needs to know a Heegaard decomposition of the $3$--manifold factor $Y$ \cite{Sakuma}, or any open book on it \cite{Montesinos}. See Remark \ref{explicit} for some explicit examples. In all the results we have discussed above, the covering symplectic \mbox{$4$--manifold} $X'$ is not of general type, in contrast with the symplectic domination results of Fine--Panov \cite{FinePanov}; see Remark~\ref{domination} below. It would be interesting to find more general families of non-symplectic \mbox{$4$--manifolds} branched covered (with universally fixed degree) by specific families of symplectic $4$--manifolds like ours, say by $\Sigma_g$--bundles over $\Sigma_h$, for arbitrary $h$. \vspace{0.2in} \noindent \textit{Acknowledgments.} This project was started at the 2018 AIM workshop on \textit{``Symplectic four-manifolds through branched coverings'',} and was resumed following the 2020 BIRS Workshop on \textit{``Interactions of gauge theory with contact and symplectic topology in dimensions 3 and 4."} The authors would like to thank the American Institute of Mathematics and the Banff International Research Station, and the other organizers of these workshops. D.\,A. was partially supported by the Simons Foundation grant 585139 and NSF grant DMS 1952755. R.\,I.\,B. was partially supported by the NSF grants DMS-200532 and DMS-1510395. R.\,C. is supported by the NSF grant DMS-1841913, the NSF CAREER grant DMS-1942363 and the Alfred P. Sloan Foundation. T.\,L. was partially supported by the NSF grant DMS-1709702 and a Sloan Fellowship. D.\,Z. was partially supported by the 2013 ERC Advanced Research Grant 340258 TADMICAMT; he is member of GNSAGA, Istituto Nazionale di Alta Matematica ``Francesco Severi'', Italy. We would like to thank the anonymous referee for helpful comments. \medskip \section{Proof of Theorem \ref{thm1}} Henceforth all the manifolds and maps we consider are assumed to be smooth. 
We denote by $\xoverline[0.7]{\mkern-1.1mu{X}\mkern1.1mu}$ the oriented $4$--manifold $X$ with the reversed orientation, and by $\mathbin{\#}_a X \mathbin{\#}_b Y$ the smooth connected sum of $a$ copies of $X$ and $b$ copies of $Y$. We denote by $\Sigma_g^b$ a closed connected oriented surface of genus $g$ with $b$ boundary components, and we drop $b$ from the notation when there is no boundary. \subsection{Preliminaries} Let us briefly recall the definition of a branched covering. \begin{definition} Let $X$ and $X'$ be compact connected smooth manifolds (possibly with boundary) of the same dimension, and let $f \: X' \to X$ be a smooth proper surjective map. We say that $f$ is a branched covering if it is finite-to-one and open, and moreover the (open) subset of $X'$ where $f$ is locally injective coincides with the subset of $X'$ where $f$ is a local diffeomorphism.\hfill$\Box$ \end{definition} The subset $B'_f \subset X'$ where $f$ fails to be locally injective is called the \textit{branch set} of $f$, and its image $B_f = f(B'_f) \subset X$ is called the \textit{branch locus} of $f$. By a result of Church \cite[Corollary 2.3]{Church}, either $B'_f = \emptyset$ or $\dim B'_f = \dim B_f = \dim X -2$, and then the restriction of $f$ over the complement of $B_f$ is an ordinary connected covering space $X' \setminus f^{-1}(B_f) \to X \setminus B_f$. Moreover, for every \textit{smooth} point of $B'_f$ at which $f|_{B'_f} \: B'_f \to X$ is a local smooth embedding, the map $f$ is \textit{topologically} locally equivalent to the map $p_d \: {\mathbb{C}} \times {\mathbb{R}}^{n-2} \to {\mathbb{C}} \times {\mathbb{R}}^{n-2}$ defined by $p_d (z, x) = (z^d, x)$, for some $d \geq 2$, where $n = \dim X' = \dim X$. However, the branched coverings $f_i$ that we consider below turn out to be \textit{smoothly} locally equivalent to $p_2$, while their composition, which will be indicated by $f$, has this property away from the singular points of $B'_f$. 
Notice that every finite composition of branched coverings is a branched covering, and the restriction to the boundary of a branched covering is a branched covering as well. Throughout, we assume that branched coverings between oriented manifolds are orientation-preserving. \subsection{The argument} Let $X$ be a closed oriented simply-connected smooth $4$--manifold. We will describe the branched covering in the statement of Theorem~\ref{thm1}, that is \linebreak $f\: T^2 \times \Sigma_g \to X$, as a composition of four simpler double branched coverings $f_1, f_2, f_3, f_4$. While all the latter will be branched over embedded orientable surfaces, the branch locus of the composition will typically be singular. For the clarity of the exposition, we will not explicitly keep track of how the topology is growing at each step, but instead, we will illustrate with some examples in Remark \ref{explicit} how one can deduce this information. \medskip \noindent \underline{\smash{\textsl{Step 1:}}} \, By Wall \cite{Wall}, the connected sum of $X$ with a certain number $m$ of copies of $S^2 \times S^2$ is diffeomorphic to a connected sum of copies of the standard $4$--manifolds ${\mathbb{CP}}{}^2$, $S^2 \times S^2$ and the ${\mathrm{K}3}$ surface, taken with either orientation. Note that when $X$ is spin, the decomposition has only spin connected summands, and also that the resulting $4$--manifold does satisfy the $11/8$--inequality when $m$ is large enough. 
Moreover, since we have ${\mathrm{K}3} \mathbin{\#} \xoverline[0.75]{\mkern1mu{\mathrm{K}3}}\mkern-1mu \cong \mathbin{\#}_{22} (S^2 \times S^2)$ and ${\mathbb{CP}}{}^2 \mathbin{\#} (S^2 \times S^2) \cong \mathbin{\#}_2 {\mathbb{CP}}{}^2 \mathbin{\#} {\xoverline[0.75]{\mathbb{CP}}}{}^2$ \cite[Page 344]{GS}, the complete decomposition as above can be written as $\mathbin{\#}_a {\mathrm{K}3} \mathbin{\#}_b (S^2 \times S^2)$ or $\mathbin{\#}_a \xoverline[0.75]{\mkern1mu{\mathrm{K}3}}\mkern-1mu \mathbin{\#}_b (S^2 \times S^2)$ when $X$ is spin (depending on the sign of the signature $\sigma(X)$), and as $\mathbin{\#}_a {\mathbb{CP}}{}^2 \mathbin{\#}_b {\xoverline[0.75]{\mathbb{CP}}}{}^2$ when $X$ is non-spin, for some non-negative integers $a$ and $b$ which are not both zero. In the spin case, we can guarantee that $b \geq 2a$ by taking sufficiently many stabilizations. The conjugation map $(z_1, z_2) \mapsto (\bar z_1, \bar z_2)$, which is an anti-holomorphic involution on ${\mathbb{CP}}^{1} \times {\mathbb{CP}}^{1} \cong S^2 \times S^2$, induces a double branched covering $S^2 \times S^2 \to S^4$, where the branch locus is the unknotted $T^2 \subset S^4$ (bounding a handlebody). Taking equivariant connected sum of $m$ copies of it, we get an involution on $\mathbin{\#}_m (S^2 \times S^2)$, which induces a double covering $\mathbin{\#}_m (S^2 \times S^2) \to S^4$ branched along an unknotted $\Sigma_m$, for every $m \geq 1$. We can now take a double covering of $X$ branched along an unknotted $\Sigma_{m}$ in $X$ (viewing $X \cong X \mathbin{\#} S^4$, take an unknotted $\Sigma_{m}$ in $S^4$), which we denote by $f_1\: X_1 \to X$, where clearly $X_1 \cong X \mathbin{\#} X\mathbin{\#}_m (S^2 \times S^2)$. We choose $m \geq 1$ such that $X\mathbin{\#}_m (S^2 \times S^2)$ completely decomposes; then so does $X_1$ (as one gets at least $m$ copies of $S^2 \times S^2$ after decomposing $X \mathbin{\#}_m (S^2 \times S^2)$). 
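As a quick sanity check of the identification $X_1 \cong X \mathbin{\#} X \mathbin{\#}_m (S^2 \times S^2)$ (a sketch, using the standard facts that $\chi(X') = 2\chi(X) - \chi(B)$ for a double covering $X' \to X$ branched along $B$, and that a connected sum of $k$ closed $4$--manifolds has Euler characteristic $\sum_i \chi(M_i) - 2(k-1)$), one can compare Euler characteristics:

```latex
\[
2\,\chi(X) - \chi(\Sigma_m) \;=\; 2\,\chi(X) + 2m - 2
\;=\; \big( 2\,\chi(X) + 4m \big) - 2(m+1)
\;=\; \chi\big( X \mathbin{\#} X \mathbin{\#}_m (S^2 \times S^2) \big),
\]
```

since $\chi(\Sigma_m) = 2-2m$ and $\chi(S^2 \times S^2) = 4$.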
Then $X_1$ is diffeomorphic to one of the standard connected sums we listed above. \smallskip \noindent \underline{\smash{\textsl{Step 2:}}} \, We would like to obtain a double branched covering of $X_1$ by some $\mathbin{\#}_g (S^2 \times S^2)$. We will describe this covering in essentially two different ways, depending on whether $X$ (and thus $X_1)$ is spin or not. The ${\mathrm{K}3}$ surface can be obtained as a holomorphic double covering of $S^2 \times S^2$ branched along a curve of bi-degree $(4,4)$ in ${\mathbb{CP}}^{1} \times {\mathbb{CP}}^{1} \cong S^2 \times S^2$ \cite[Page 262]{GS}. Reversing the orientations, we see that $\xoverline[0.75]{\mkern1mu{\mathrm{K}3}}\mkern-1mu$ is also a double branched covering of $S^2 \times S^2$ (recall that $S^2 \times S^2$ admits an orientation-reversing diffeomorphism). By taking equivariant connected sums, we can then express both $\mathbin{\#}_n {\mathrm{K}3}$ and $\mathbin{\#}_n \xoverline[0.75]{\mkern1mu{\mathrm{K}3}}\mkern-1mu$ as branched double coverings of $\mathbin{\#}_n (S^2 \times S^2)$. Taking \mbox{$n=2a$,} we then conclude that $\mathbin{\#}_a {\mathrm{K}3} \mathbin{\#}_b (S^2 \times S^2)$ admits a double branched covering by $\mathbin{\#}_{2a} {\mathrm{K}3} \mathbin{\#}_{2a} \xoverline[0.75]{\mkern1mu{\mathrm{K}3}}\mkern-1mu \mathbin{\#}_{2(b-2a)} (S^2 \times S^2)$. Since ${\mathrm{K}3} \mathbin{\#} \xoverline[0.75]{\mkern1mu{\mathrm{K}3}}\mkern-1mu \cong \mathbin{\#}_{22} (S^2 \times S^2)$, we have obtained the desired double branched cover $\mathbin{\#}_g (S^2 \times S^2)$, for $g=40a+2b$. Mirroring the same argument, we see that $\mathbin{\#}_a \xoverline[0.75]{\mkern1mu{\mathrm{K}3}}\mkern-1mu \mathbin{\#}_b (S^2 \times S^2)$ is also double branched covered by some $\mathbin{\#}_g (S^2 \times S^2)$. This concludes the construction in the spin case. The following variation can be run for both spin and non-spin manifolds. 
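The genus count $g = 40a+2b$ in the spin case above can be verified directly: pairing off the copies of ${\mathrm{K}3}$ and $\xoverline[0.75]{\mkern1mu{\mathrm{K}3}}\mkern-1mu$ upstairs and applying ${\mathrm{K}3} \mathbin{\#} \xoverline[0.75]{\mkern1mu{\mathrm{K}3}}\mkern-1mu \cong \mathbin{\#}_{22} (S^2 \times S^2)$ to each of the $2a$ pairs gives

```latex
\[
\mathbin{\#}_{2a} {\mathrm{K}3} \mathbin{\#}_{2a} \xoverline[0.75]{\mkern1mu{\mathrm{K}3}}\mkern-1mu \mathbin{\#}_{2(b-2a)} (S^2 \times S^2)
\;\cong\;
\mathbin{\#}_{44a} (S^2 \times S^2) \mathbin{\#}_{2b-4a} (S^2 \times S^2)
\;=\;
\mathbin{\#}_{40a+2b} (S^2 \times S^2).
\]
```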
Switching the two factors $(z_1, z_2) \mapsto (z_2, z_1)$, which is a holomorphic involution on ${\mathbb{CP}}^{1} \times {\mathbb{CP}}^{1} \cong S^2 \times S^2$, induces a double branched covering $S^2 \times S^2 \to {\mathbb{CP}}{}^2$, where the branch locus is the quadric (this may be interpreted as the map taking a pair of numbers to the quadratic equation having those roots). Reversing the orientations, we obtain a double branched covering over ${\xoverline[0.75]{\mathbb{CP}}}{}^2$. Taking equivariant connected sums once again, we then deduce that $\mathbin{\#}_a (S^2 \times S^2) \mathbin{\#}_b (S^2 \times S^2)$ is a double branched covering of $\mathbin{\#}_a {\mathbb{CP}}{}^2 \mathbin{\#}_b {\xoverline[0.75]{\mathbb{CP}}}{}^2$. So in the non-spin case, we arrive at the desired double covering $\mathbin{\#}_g (S^2 \times S^2)$ as well. We let $f_2\: X_2 \to X_1$ denote the double branched covering we described in either case. \smallskip \noindent \underline{\smash{\textsl{Step 3:}}} \, We next show that $S^1 \times \mathbin{\#}_g (S^1 \times S^2)$ is a double branched covering of $ \mathbin{\#}_g (S^2 \times S^2)$, which will prescribe our next covering $f_3 \: X_3 \to X_2$. A similar double branched covering over $\mathbin{\#}_g ({\mathbb{CP}}{}^2 \mathbin{\#} {\xoverline[0.75]{\mathbb{CP}}}{}^2)$ was described by Neofytidis in \cite[Theorem~1]{Neofytidis}. (Also see \cite{NeofytidisThesis} for similar constructions in other dimensions.) The hyperelliptic involution on $T^2$ induces a double branched covering $p\: T^2 \to S^2$ with four simple branch points $x_1, x_2, x_3, x_4 \in S^2$. Taking its product with the identity map on $S^2$ yields a double branched covering $p \times \textrm{id}_{S^2} \: T^2 \times S^2 \to S^2 \times S^2$ with branch locus $\{x_1, x_2, x_3, x_4\} \times S^2$. Note that if $g=1$, we can stop here and skip Step 4. Let $D \subset S^2$ be a $2$--disk containing exactly two branch points of $p$. 
So $A = p^{-1}(D) \subset T^2$ is an equivariant annulus that contains two fixed points of the hyperelliptic involution. Moreover, $D$ can be chosen such that $A$ is a union of fibers of the trivial $S^1$--bundle $T^2 = S^1 \times S^1 \to S^1$ given by the canonical projection onto the second factor. Let $S^2 \cong S^2 \times \{y\} \subset S^2 \times S^2$ be a fiber sphere, for a certain $y \in S^2$. Let $D' \subset S^2$ be a disk centered at $y$. Then, $U = D \times D' \subset S^2 \times S^2$ is a fibered bidisk, whose preimage $V = (p\times \mathop{\mathrm{id}}\nolimits_{S^2})^{-1} (U) \cong A \times D'$ is a fibered neighborhood of a fiber of the trivial $S^1$--bundle \[T^2 \times S^2 = S^1 \times (S^1 \times S^2) \to S^1 \times S^2.\] By taking two copies of the branched covering $T^2 \times S^2 \to S^2 \times S^2$, and performing equivariant fiber sum upstairs along $V$ and connected sum downstairs along $U \cong D^4$, and repeating the construction for every $g \geq 2$, we finally get a branched double covering $S^1 \times \mathbin{\#}_g (S^1 \times S^2) \to \mathbin{\#}_g (S^2 \times S^2)$. We can also describe this branched covering as follows: start with a double covering $q \: S^1 \times D^1 \to D^2$ branched over two points in $\Int D^2$ (this is the above branched covering $A \to D$), so the product $q \times \mathop{\mathrm{id}}\nolimits_{D^1} \: S^1 \times D^1 \times D^1 \to D^2 \times D^1$ yields a double covering $q' \: S^1 \times D^2 \to D^3$ branched over the union of two parallel proper trivial arcs in $D^3$ (this fills the above branched covering $p \: T^2 \to S^2$), up to the identifications $S^1 \times D^1 \times D^1 \cong S^1 \times D^2$ and $D^2 \times D^1 \cong D^3$. Then, we get a double branched covering \linebreak $q'' = q' \times \mathop{\mathrm{id}}\nolimits_{S^2} \: S^1 \times D^2 \times S^2 \to D^3 \times S^2$. Let $D \subset S^2$ be a $2$--disk. 
Up to the identification $D^2 \times D^1 \times S^2 \cong D^3 \times S^2$, we consider the bidisks $C^- = D^2 \times \{-1\} \times D$ and \mbox{$C^+ = D^2 \times \{1\} \times D \subset \partial(D^3 \times S^2)$,} each of which intersects the branch locus of $q''$ along the union of two parallel proper trivial $2$--disks. Consider $g$ copies of $q''$, say $q''_i \: (S^1 \times D^2 \times S^2)_i \to (D^3 \times S^2)_i$, and let $C_i^-, C_i^+ \subset \partial(D^3 \times S^2)_i$ be the corresponding bidisks. Thus, we obtain a double branched covering \[q''' = q''_1 \cup \dots\cup q''_g \: {\cup_i} (S^1 \times D^2 \times S^2)_i \to \cup_i (D^3 \times S^2)_i,\] where $(D^3 \times S^2)_i$ is attached to $(D^3 \times S^2)_{i+1}$ by identifying $C_i^+$ with $C_{i+1}^-$ and\linebreak $(S^1 \times D^2 \times S^2)_i$ is attached to $(S^1 \times D^2 \times S^2)_{i+1}$ by identifying $(q''_i)^{-1}(C_i^+)$ with\linebreak $(q''_{i+1})^{-1}(C_{i+1}^-)$ in the obvious way, for all $i = 1,\dots,g-1$. This in turn is a double branched covering \[q''' \: S^1 \times \mathbin{\sharp}_g(D^2 \times S^2) \to \mathbin{\sharp}_g (D^3 \times S^2),\] as it can be easily realized by looking at the attaching maps, where $\mathbin{\sharp}$ denotes the boundary connected sum. Finally, the desired branched covering $S^1 \times \mathbin{\#}_g (S^1 \times S^2) \to \mathbin{\#}_g (S^2 \times S^2)$ can be obtained by restricting $q'''$ to the boundary. \smallskip \noindent \underline{\smash{\textsl{Step 4:}}} \, Our final double branched covering $f_4\: T^2 \times \Sigma_g \to S^1 \times \mathbin{\#}_g (S^1 \times S^2)$ is a special case of Proposition~\ref{prop1} and can be obtained by taking the product of the identity map on the $S^1$ factor with a double branched covering $S^1 \times \Sigma_g \to \mathbin{\#}_g (S^1 \times S^2)$. 
The latter can be derived from the work of Sakuma \cite{Sakuma} we mentioned in the introduction, or from Montesinos' alternative construction \cite{Montesinos}, which is quicker to describe here: the involution $(z,t) \mapsto (\bar{z}, -t)$ on the annulus $A=S^1\times [-1,1] \subset {\mathbb{C}} \times {\mathbb{R}}$ induces a double covering $q\: A \to D^2$ branched at two points (this is the same as the double branched cover described in Step 3), so we get a double branched covering $q \times \textrm{id}_{S^1}\: A \times S^1 \to D^2 \times S^1$. Then, for any open book decomposition of a closed connected oriented 3-manifold $Y$ with pages $\Sigma_k^m$ and monodromy $\phi$, we can get a double covering $h\: Y' \to Y$ branched over two parallel copies of the binding, where $Y'$ is now a surface bundle whose fiber and monodromy are the doubles of $\Sigma_k^m$ and $\phi$. Indeed, by lifting the usual splitting $Y = (D^2 \times \partial \Sigma_k^m) \cup_\partial T(\phi)$ that gives the open book decomposition of $Y$, with the branch link contained in $D^2 \times \partial \Sigma_k^m$, and where $T(\phi)$ denotes the mapping torus of $\phi$, one obtains a splitting $Y' = (A \times \partial \Sigma_k^m) \cup_\partial (T(\phi)_1\cup T(\phi)_2)$, with the annulus $A$ instead of $D^2$, where $T(\phi)_1$ and $T(\phi)_2$ are two disjoint copies of $T(\phi)$ (the branched covering $h \: Y' \to Y$ is trivial over $T(\phi)$). By looking at the attaching maps, it is immediate to get the bundle structure on $Y'$ as above. In our case, since $\mathbin{\#}_g (S^1 \times S^2)$ admits a planar open book with pages $\Sigma_0^{g+1}$ and $\phi=\textrm{id}$, we obtain the desired covering. (The covering produced by the arguments of both Sakuma and Montesinos in this simple setting is equivalent to the one given in \cite[Proposition~4]{KotschickNeofytidis}.) 
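As a quick consistency check on the degree stated in Theorem \ref{thm1}: each of the coverings constructed in Steps 1--4 is a double covering, and degrees multiply under composition, so

```latex
\[
\deg(f_1) \cdot \deg(f_2) \cdot \deg(f_3) \cdot \deg(f_4) \;=\; 2^4 \;=\; 16.
\]
```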
The composition $f=f_1 \circ f_2 \circ f_3 \circ f_4 : T^2 \times \Sigma_g \to X$ gives the desired covering.\\ \noindent \underline{\smash{\textsl{The spin case:}}} \, Let us conclude by observing that our construction is natural with respect to the spin structures, when $X$ is spin, and then briefly discuss the topology of the branch locus of $f$. Recall that a spin structure on a $4$--manifold is the same as a trivialization of the tangent bundle over the $1$--skeleton that extends over the $2$--skeleton \cite{Milnor, Kirby}. We may use a handlebody decomposition in this definition. Given an unramified cover over a spin $4$--manifold, the trivialization will lift to the tangent bundle of the cover restricted to the $1$--skeleton, as will any extension to the $2$--skeleton, so there is a natural lift of a spin structure to a covering space. Now consider a $2$--fold branched covering with branch locus $B$. We may build a handle decomposition of the base in the following way. Start with a handle decomposition of $B$. This extends to a handle decomposition of a tubular neighborhood of $B$ with only zero, one and two handles. Now extend this to a handle decomposition of the rest of the base $X$. Finally turn the entire handle decomposition over. Notice that all of the $1$--handles of this new handle decomposition are in the exterior of $B$. Each of these handles lifts to the cover of the exterior and the restriction of the spin structure to the exterior lifts to the cover. We now complete the handle decomposition of the total space of the branched cover as follows. Use the identification of the inverse image $\widetilde B$ with $B$ to construct a decomposition of $\widetilde B$, which is then extended to a decomposition of the normal bundle of $\widetilde B$. Turn this upside down and add it to the decomposition of the inverse image of the exterior. This only adds 2-, 3- and 4-handles to the decomposition. 
It is not necessarily true that the trivialization of the tangent bundle over the $1$--skeleton will extend over the $2$--skeleton. It will extend precisely when the mod two reduction of the integral homology class $[B]/2$ is zero in the second homology of the base with $\mathbb{Z}_2$ coefficients \cite{Brand, Nagami}. Note that the class $[B]$ is necessarily divisible by $2$, due to the existence of the double branched cover. So, a spin structure does not have to lift to the total space of a $2$--fold branched covering, but if it does, there is a natural lift. It is now straightforward to check that each double cover $f_i$ that we employed in our construction when $X$ is spin satisfies the above criterion, so for the initial spin structure $\mathfrak{s}$ on $X$, there is a spin structure $\mathfrak{s}'$ on $X' \cong T^2 \times \Sigma_g$ constructed this way. (Note that there are $2^{2(g+1)}$ different spin structures on $X'$.) Thus the branched covering $X' \to X$ is compatible with the spin structures $\mathfrak{s}$ on $X$ and $\mathfrak{s}'$ on $X'$.\\ \noindent \underline{\smash{\textsl{The branch locus:}}} \, The branch locus $B_f \subset X$ of $f$ is given by \[B_f = B_{f_1} \cup f_1\Big(B_{f_2} \cup f_2\big(B_{f_3} \cup f_3(B_{f_4})\big)\!\Big),\] where $B_{f_i} \subset X_{i-1}$ denotes the branch locus of $f_i$, for $i=1,2,3,4$, with $X_0 = X$. Each $B_{f_i}$ is a smooth embedded closed orientable surface in $X_{i-1}$. By taking into account that each covering $f_i$ is two-to-one and its tangent map has a 2-dimensional kernel along the branch set, an easy transversality argument, based on perturbing the $f_i$'s up to isotopy, shows that the branch locus $B_f \subset X$ can be assumed to be a smooth orientable surface away from at most finitely many singular points, which are transversal or tangential double points (at the latter the local link has two trivial components with linking number $\pm 2$). 
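To illustrate the lifting criterion above in the spin case, here is a sketch of the homology count for the branch loci of the first three coverings (the check for $f_4$ is analogous):

```latex
\begin{itemize}
\item $f_1$: the branch locus is an unknotted $\Sigma_m$ contained in the $S^4$
summand of $X \cong X \mathbin{\#} S^4$, so $[B] = 0$ and $[B]/2 \equiv 0 \pmod 2$;
\item $f_2$: the branch locus is an equivariant connected sum of bi-degree $(4,4)$
curves in copies of $S^2 \times S^2$, so $[B]$ is divisible by $4$ and $[B]/2$
is even;
\item $f_3$: the branch locus consists of four parallel copies of a fiber sphere
in each summand, so $[B] = 4\,[\{pt\} \times S^2]$ there and again
$[B]/2 \equiv 0 \pmod 2$.
\end{itemize}
```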
\qed \bigskip \section{Ancillary Remarks} Let us list a few comments in relation to Theorem \ref{thm1}, its proof and related works. \begin{rk}[Variations] \label{variations} In Step 1 above we could have stabilized by taking connected sums with copies of ${\mathbb{CP}}{}^2$ and ${\xoverline[0.75]{\mathbb{CP}}}{}^2$ so that we get a double covering \mbox{$g_1\: \#_a{\mathbb{CP}}{}^2 \mathbin{\#}_b {\xoverline[0.75]{\mathbb{CP}}}{}^2 \to X$} branched over a genus $m$ \textit{non-orientable} surface, which is trivially embedded in $X$, for certain integers $a,b$ and $m$ (once again by Wall \cite{Wall}). The complex conjugation on ${\mathbb{CP}}{}^2$ induces a double covering ${\mathbb{CP}}{}^2 \to S^4$ branched over the standard smooth ${\mathbb{RP}}^{2} \subset S^4$ \cite{Ma73, Ku74}. Now, we can invoke Theorem 1.2 in \cite{PiergalliniZuddas} to conclude that there exists a $4$--fold simple branched covering \,$g_2\: \Sigma_h \times \Sigma_g \to \mathbin{\#}_a{\mathbb{CP}}{}^2 \mathbin{\#}_b {\xoverline[0.75]{\mathbb{CP}}}{}^2$ for \emph{every} given $a,b \geq 0$ and $h\geq 1$, and for some $g$ large enough. Thus, the composition $g_1 \circ g_2 \: \Sigma_g \times \Sigma_h \to X$ is a degree 8 branched covering. Again by Theorem 1.2 in \cite{PiergalliniZuddas} (see also Remark 2 therein), there exist degree 4 branched coverings $T^4 = T^2 \times T^2 \to X$, with $X = \mathbin{\#}_m {\mathbb{CP}}{}^2 \mathbin{\#}_n {\xoverline[0.75]{\mathbb{CP}}}{}^2$ and $X = \mathbin{\#}_n (S^2 \times S^2)$, for every $m,n \leq 3$. Note that the case $X = S^2 \times S^2$ is straightforward by taking the product $p \times p \: T^2 \times T^2 \to S^2 \times S^2$, and the case $X = \mathbin{\#}_2 (S^2 \times S^2)$ was previously obtained by Rickman \cite{Rickman}. Branched coverings from the $n$--dimensional torus are relevant in connection with the theory of \textit{quasiregularly elliptic} manifolds, see Bonk and Heinonen \cite{Bonk}. 
In this direction, a result by Prywes \cite[Theorem 1.1]{Prywes} implies that if there is a branched covering $T^4 \to X$, then $b_1(X) \leq 4$ and $b_2(X) \leq 6$, so in Theorem \ref{thm1} we cannot take $g \leq 1$ if $b_2(X) \geq 7$. However, unlike in our construction above, the results in \cite{PiergalliniZuddas} do not give explicit branched coverings, and there is not much control on the topology of the branch locus. \end{rk} \smallskip \begin{rk}[Branched cover geometries] \label{geometry} Theorem~\ref{thm1} and our subsequent remark in the introduction imply that any $X$ with finite $\pi_1(X)$ is branched covered by $T^2 \times \Sigma_g$, where it is easy to see from our proof that we can always assume $g\geq 2$. In terms of \mbox{$4$--dimensional} geometries \cite{Hillman}, this shows that all such $X$ can be branched covered by a $4$--manifold with $\mathbb{E}^2 \times \mathbb{H}^2$ geometry. However, if we replace the double branched covering $h\: Y' \to \mathbin{\#}_g (S^1 \times S^2)$ we used in the construction of $f_4=\textrm{id}_{S^1} \times h$ with the one built by Brooks in \cite{Brooks}, we can also get $Y'$ to be a $\Sigma_g$--bundle over $S^1$ with hyperbolic total space. Therefore, any $X$ with finite $\pi_1(X)$ can also be branched covered by a $4$--manifold with $\mathbb{E} \times \mathbb{H}^3$ geometry. Similarly, one can modify the construction in Proposition~\ref{prop1} to get a double branched cover of any product $4$--manifold $S^1 \times Y$ by a $4$--manifold with $\mathbb{E} \times \mathbb{H}^3$ geometry. \end{rk} \smallskip \begin{rk}[Topology of the branched coverings] \label{explicit} Here we will try to demonstrate by way of example how one can control the topology of the branched coverings in Theorem~\ref{thm1}. 
For some variety, we will run our construction for two infinite families of irreducible $4$--manifolds which are not completely decomposable: Dolgachev surfaces, which are non-spin complex surfaces of general type, and knot surgered ${\mathrm{K}3}$ surfaces of Fintushel--Stern, which include spin \mbox{$4$--manifolds} that do not admit any symplectic structures \cite{FintushelStern}. The members of either one of these two families of simply-connected $4$--manifolds completely decompose after a single stabilization by $S^2 \times S^2$; see e.g. \cite{Baykur}. Now, if $X$ is a Dolgachev surface, we can take the branch locus of $f_1$ as an unknotted $T^2$, and get \mbox{$X_1= \mathbin{\#}_3 {\mathbb{CP}}{}^2 \mathbin{\#}_{19} {\xoverline[0.75]{\mathbb{CP}}}{}^2$.} The branched covering $f_2$ performed along the connected sum of $22$ quadrics (each coming from distinct copies of ${\mathbb{CP}}{}^2$ and ${\xoverline[0.75]{\mathbb{CP}}}{}^2$) then gives $X_2=\mathbin{\#}_{22} (S^2 \times S^2)$, and the last two coverings yield $X'=T^2 \times \Sigma_{22}$. If we take $X$ to be a knot surgered ${\mathrm{K}3}$ surface instead, thinking ahead of the second step, we take the branch locus of $f_1$ this time as $\Sigma_4$, so $X_1= \mathbin{\#}_2 {\mathrm{K}3} \mathbin{\#}_4 (S^2 \times S^2)$. The next double cover $f_2$ is taken along a connected sum of four bi-degree $(4,4)$ curves (each coming from distinct copies of $S^2 \times S^2$, with non-complex orientation), and we get $X_2= \mathbin{\#}_4 {\mathrm{K}3} \mathbin{\#}_4 \xoverline[0.75]{\mkern1mu{\mathrm{K}3}}\mkern-1mu \cong \mathbin{\#}_{88} (S^2 \times S^2)$. The last two coverings this time yield $X'=T^2 \times \Sigma_{88}$. \end{rk} \smallskip \begin{rk} [Symplectic domination] \label{domination} A recent article of Fine--Panov provides a symplectic domination result \cite[Theorem 1]{FinePanov} which is worth mentioning here. 
Their beautiful construction is very general: for any closed oriented even dimensional smooth manifold $M$, the authors build a closed symplectic manifold $S$ of the same dimension with a positive degree map $f\: S \to M$. In dimension $4$, where we can compare their result with ours in Theorem~\ref{thm1}, their symplectic manifold $S$ is constructed as a Donaldson hypersurface in the $6$--dimensional symplectic twistor space $Z$ of a negatively pinched manifold $N$, where the latter admits a degree one map \mbox{$g\: N\to M$.} The construction of $N$, with sectional curvature arbitrarily close to $-1$, is implicit, and relies on the recent works of Ontaneda involving rather intricate new techniques in Riemannian geometry. (The condition on the sectional curvature is to guarantee that the twistor space $Z$ of $N$ is a symplectic $6$--manifold.) Secondly, the construction of a symplectic hypersurface $S$ in $Z$, which is built through asymptotically holomorphic techniques of \cite{Donaldson}, is also implicit and the smooth topology of $S$ is effectively impossible to control. Hence, one does not have any information on the smooth topology of the dominating symplectic $4$--manifold $S$, other than that it is of general type, i.e. of Kodaira dimension $2$ \cite{FinePanov}. Besides the very implicit nature of this construction, since the map $f\: S \to M$ factors through the degree one map $g$ above, the authors' domination is essentially never a branched covering. Moreover, because the symplectic twistor space $Z$ is in fact known to be non-K\"ahler \cite{Reznikov}, the dominating symplectic $4$--manifold $S$ has a priori no reason to be a K\"ahler surface.
On the other hand, the dominating symplectic $4$--manifold $X'=T^2 \times \Sigma_g$ of Theorem~\ref{thm1} is obviously a K\"{a}hler surface, and $X'$ in both Theorem~\ref{thm1} and Proposition~\ref{prop1} is of Kodaira dimension $-\infty$, $0$ or $1$, depending on whether this (possibly trivial) $\Sigma_g$--bundle over $T^2$ has fiber genus $g=0$, $1$ or $\geq 2$, respectively. Domination is certainly distinct from branched covering, as the following example shows. There is a degree one map from $\Sigma_4\times\Sigma_2$ to $\Sigma_3\times\Sigma_2$ given by the extension of the natural collapse of a copy of $\Sigma_1^1\times\Sigma_2$ to $\Sigma_0^1\times\Sigma_2$. However, there can be no branched covering from $\Sigma_4\times\Sigma_2$ to $\Sigma_3\times\Sigma_2$, since the Gromov norm of the former is $24(4-1)(2-1)=72$, the Gromov norm of the latter is $24(3-1)(2-1)=48$, and the Gromov norm is supermultiplicative with respect to degree \cite{BK}. \end{rk} \clearpage
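The norm comparison in the last example is elementary arithmetic, and can be sketched in a few lines (a hedged illustration only; the product formula $24(g-1)(h-1)$ for surfaces of genus at least $2$ is the computation cited above as \cite{BK}, and the supermultiplicativity bound is the standard one for positive degree maps).

```python
# Gromov norm of a product of surfaces of genus >= 2, per the
# formula 24(g-1)(h-1) quoted in the remark above.
def gromov_norm_product(g, h):
    if g < 2 or h < 2:
        raise ValueError("formula assumes both genera are at least 2")
    return 24 * (g - 1) * (h - 1)

norm_source = gromov_norm_product(4, 2)  # ||Sigma_4 x Sigma_2||
norm_target = gromov_norm_product(3, 2)  # ||Sigma_3 x Sigma_2||

# Supermultiplicativity: a degree-d map M -> N forces ||M|| >= d * ||N||,
# so the largest degree compatible with these two norms is:
max_degree = norm_source // norm_target
```

Since `max_degree` comes out to $1$, while a genuine branched covering between these two non-homeomorphic manifolds would need degree at least $2$, no such covering can exist even though a degree one map does.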
\section{Surface brightness and luminosity} The question of what fraction of the central attraction should be attributed to dark matter (DM) within the disk of a spiral galaxy is still unresolved. Most of the controversy surrounds the higher luminosity, high surface brightness (HSB) galaxies, which I argue here have very little DM in their inner regions. As the rotation curves of most low surface brightness (LSB) and low-luminosity galaxies, on the other hand, do not have the shapes predicted from their light distributions, a significant DM contribution is required in their inner parts. While this conclusion has been established from much careful work on individual galaxies, trends suggesting increasing DM content towards both types of galaxy can also be found in a statistical analysis of a large galaxy sample. \begin{figure} \centerline{\psfig{figure=sellwood_1.ps,width=0.9\hsize,angle=0}} \caption{Surface density profiles (left) and corresponding rotation curves (right) arising from three different thin disks of equal mass and outer scale length.} \label{fig:shapes} \end{figure} If the surface density of a disk of mass $M_{\rm disk}$ decreases exponentially in its outer parts with scale length $R_0$, the rotation speed arising from the disk alone may be written \be V_{\rm disk}(R) = \sqrt{{GM_{\rm disk} \over R_0}} \; S\left({R \over R_0}\right). \ee The (dimensionless) function $S$, which describes the shape, is plotted in Figure \ref{fig:shapes} for three different disks having the same total mass and outer exponential scale length. It can be seen that while the shape of the rotation curve arising from the disk is strongly dependent on the surface density profile of the inner disk, the height of the maximum is not: it varies by only about 10\% around the value 0.6 between the three cases. Thus the peak of the disk's contribution to the rotation speed in most galaxies is $V_{\rm disk,max} \sim 0.6\sqrt{GM_{\rm disk} / R_0}$, even when the inner surface density profile departs significantly from exponential.
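For the pure exponential case, the shape function $S$ has the familiar closed form of Freeman (1970) in terms of modified Bessel functions, and the coefficient of roughly 0.6 quoted above can be recovered numerically. The sketch below is illustrative only; it assumes numpy and scipy are available and works in units with $G=M_{\rm disk}=R_0=1$.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

# Freeman (1970) rotation curve of a thin exponential disk:
#   V^2(R) = 4*pi*G*Sigma0*R0 * y^2 * [I0(y)K0(y) - I1(y)K1(y)],  y = R/(2*R0).
# With M_disk = 2*pi*Sigma0*R0^2, the shape function of equation (1) is
#   S(x)^2 = 2 * y^2 * [I0(y)K0(y) - I1(y)K1(y)],  where x = R/R0, y = x/2.
def S(x):
    y = np.asarray(x) / 2.0
    return np.sqrt(2.0 * y**2 * (i0(y) * k0(y) - i1(y) * k1(y)))

x = np.linspace(0.05, 10.0, 2000)   # radii in units of R0
peak = S(x).max()                   # the ~0.6 coefficient in the text
x_peak = x[S(x).argmax()]           # peak radius, ~2.2 scale lengths
```

The peak value comes out near 0.62 at $R \approx 2.2\,R_0$, consistent with the figure of 0.6 used in the text.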
We can use this fact to rank galaxies according to the relative contributions of the disk and halo to the observed peak rotation speed, $V_{\rm m}$. Following Syer, Mao \& Mo (1998), we form the ratio of observable quantities \be \epsilon_{\rm I} = {V_{\rm m} \over \sqrt{GL_{\rm I}/R_0}} {1 \over \sqrt h} \qquad \sim 0.6 {V_{\rm m} \over V_{\rm disk,max}} \sqrt{{\Upsilon_{\rm I}\over h}}, \label{eq:syer} \ee where $L_{\rm I}$ is the disk luminosity (in Solar units) in the I-band. The second form shows that the values obtained depend both on the disk M/L, $\Upsilon_{\rm I}$, and on $h \; (= H_0/100$ km s\per\ Mpc\per), because both luminosity and scale length are distance dependent. More than 2000 galaxies from the sample gathered by Mathewson \& Ford (1996) are plotted in Figure \ref{fig:syer}; galaxies with $V_{\rm sys}<1000\;$km~s\per\ are omitted. Combining the information they provide with the {\it assumption\/} that the disk is exponential allows us to deduce $R_0$ and the central surface brightness, $\mu_0$, in the manner described by Syer \etal \begin{figure} \centerline{\psfig{figure=sellwood_2.ps,width=\hsize,angle=0}} \caption{Evidence for increasing DM fractions in both (a) low-luminosity and (b) LSB galaxies. Galaxies classified as barred are marked with crosses in (b).} \label{fig:syer} \end{figure} The dashed line in Figure \ref{fig:syer}(a) indicates the slope of simple proportionality between $V_{\rm m}$ and $\sqrt{GL_{\rm I}/R_0}$; it is clear that the distribution of points towards the low $V_{\rm m}$ (low luminosity) end has a shallower slope, indicating increasing DM fractions in these galaxies. The right hand panel shows a trend of increasing DM content from high to low SB galaxies.
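As a purely numerical illustration of the ratio in equation (2), the sketch below evaluates $\epsilon_{\rm I}$ for a hypothetical bright galaxy. The input values ($V_{\rm m}$, $L_{\rm I}$, $R_0$, $h$) are invented for the example rather than drawn from the sample, and $G$ is expressed in kpc (km/s)$^2$ per solar mass.

```python
import math

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def epsilon_I(V_m, L_I, R0, h):
    """Ratio of the observed peak speed V_m (km/s) to the speed scale of
    a disk with M/L = 1, as in equation (2); L_I in L_sun, R0 in kpc."""
    return V_m / math.sqrt(G * L_I / R0) / math.sqrt(h)

# Hypothetical bright HSB galaxy (illustrative numbers only):
eps = epsilon_I(V_m=200.0, L_I=2.0e10, R0=3.0, h=0.7)  # ~1.4
```

Larger values of $\epsilon_{\rm I}$ at fixed M/L indicate a larger halo contribution to the peak rotation speed, which is the sense in which Figure 2 ranks the sample.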
Both trends are strong enough to show through the scatter which is increased by deviations from exponential, bulge contributions, intrinsic M/L variations, departures from Hubble flow, imperfect inclination and extinction corrections, \etc The trends seen in Figure \ref{fig:syer} suggest increasing mass discrepancies within the optical disks towards both low-luminosity and LSB galaxies. But because M/L is so hard to pin down, such diagrams do not tell us whether the disks in galaxies having small values of $\epsilon_{\rm I}$ are maximal. [A ``maximum disk'' model is one in which $V_{\rm disk,max} / V_{\rm m} \gtsim 0.85$ (\eg\ Sackett 1997). This ratio cannot be much larger than 0.85 when a model includes an extended DM halo with a density profile that decreases monotonically with radius.] Disagreements arise over how much DM to add to the disk contribution in order to fit the observed rotation curve. As pointed out by van Albada \etal\ (1985; see also Navarro 1998), the appropriate M/L for the disk is not constrained at all by conventional goodness-of-fit estimators, such as $\chi^2$, especially since the mass profile of the DM is unknown. Here I review some arguments that bear on the masses of disks and refer the reader to others given by Bosma elsewhere in these proceedings. The DM content of LSB galaxies is discussed by de Blok (also this volume). \section{No halo fits} The constraint on the disk M/L is especially weak when low spatial resolution 21cm data are used to determine the inner rotation curve; tighter constraints are furnished by optical data. Many authors (Kalnajs 1983; Kent 1986; Buchhorn 1992; Broeils \& Courteau 1997; Palunas \& Williams 1999) have successfully fitted pure disk models to optical rotation curve data. 
In this approach, one assumes a constant M/L for the disk, and sometimes a separate value for the bulge, and determines the central attraction from an exact solution of Poisson's equation for the mass distribution thus inferred from the light. The shape of the rotation curve predicted in this manner generally bears a remarkably close resemblance to that observed, at least in the inner disk. \begin{figure}[t] \centerline{\psfig{figure=sellwood_3a.ps,width=0.4\hsize,angle=0} \hskip0.5cm \psfig{figure=sellwood_3b.ps,width=0.4\hsize,angle=0}} \caption{Halo rotation curves required for sub-maximum disks. The error bars show the measured circular speed and the curve shows the pure disk fit (the bulge contribution is omitted in e267g29). The symbols show the halo contribution required if the disk has 100\% (filled circles), 80\% (open circles), 60\% (filled squares), 40\% (open squares) and 20\% (stars) of the maximum value. The horizontal line indicates the vertical zero point for the halo velocities required for 100\% disk. Subsequent curves are shifted upwards by 30~km~s\per\ for every 20\% decrease in disk mass. Note the similarity in shapes of the curves marked by symbols and the error bars.} \label{fig:submax} \end{figure} The largest possible M/L value allowed by the observed circular speed in the inner disk leads to an impressive fit, with discrepancies noticeable near the outer edge only in some cases. The ``bumps and wiggles'' seen in rotation curves from single-slit observations have long been thought to arise from spiral arm streaming (see also Bosma, this volume). This suspicion has been confirmed in the 2-D velocity maps obtained by Palunas \& Williams (1999) which allow most non-axisymmetric flows to be removed. The principal conclusion from previous work survives, namely that the overall shape of the rotation curve is well predicted by the light distribution. 
As previously remarked by van Albada \& Sancisi (1986), Freeman (1992) and others, reducing the M/L for the disk would require DM mass distributions tailored individually to match the rotation curve shape of each galaxy. Two illustrative examples taken from the Palunas \& Williams sample are shown in Figure \ref{fig:submax}. Figure \ref{fig:mtol} shows the distribution of disk M/L values obtained by Palunas \& Williams for their no-halo fits. One can see a smaller spread of values for high luminosity galaxies ($V_{\rm m} > 200\;$km/s), with perhaps a hint of a higher M/L in earlier Hubble types. The broader spread for the lower luminosity galaxies could be due to two factors; first, it is harder to determine the inclination for these often strongly non-axisymmetric galaxies and second, there are some galaxies for which large M/L values give acceptable fits, even though one might expect substantial DM fractions in these low-luminosity systems. It is also worth noting that the M/L$_{\rm I}$ values obtained for the luminous galaxies (for $h \sim 0.6$) are in line with values predicted by Jablonka \& Arimoto (1992) and by Worthey (1994) for quite reasonable population models. \begin{figure} \centerline{\psfig{figure=sellwood_4.ps,width=0.8\hsize,angle=0}} \caption{M/L$_{\rm I}$ values for maximum disks in the Palunas \& Williams sample. The values plotted should be reduced by a factor $h$.} \label{fig:mtol} \end{figure} As already noted, these ``no halo'' fits frequently reveal a mass discrepancy near the edge of the optical rotation curve. Conventional ``maximum disk'' models require a halo with a large, low-density core (\eg\ Broeils 1992). Figure \ref{fig:submax} shows that fits with a non-hollow halo would require at most a slight reduction in the disk M/L. None of these arguments is either new or truly compelling. Others (\eg\ van der Kruit 1995; Navarro 1998; Courteau \& Rix 1999) stress that less than maximum disk models also yield acceptable fits. 
It has been argued (\eg\ Blumenthal \etal\ 1986) that halo compression as the disk forms leads to featureless rotation curves (see also \S6), and this argument is extended somehow to account for the similarity of the shape of halo contribution to that of the disk. The population synthesis argument is weak because minor changes to the low-mass end of the IMF can lead to significant changes in the predicted M/L. \section{Barred galaxy velocity fields} The above argument is not decisive because we have considerable freedom to decompose a 1-D rotation curve into contributions from the sum of two, or more, other 1-D functions. Our group at Rutgers has therefore embarked on a program to use the 2-D velocity fields of barred spiral galaxies to provide extra constraints. A massive bar in a disk galaxy distorts the usual circular flow pattern, leading to characteristic `$\cal S$'-shaped iso-velocity contours when such a galaxy is observed in a suitable projection. The strength of the non-axisymmetric flow pattern can be modeled to determine the mass required in the barred component of the potential, leading to an estimate of the disk M/L that is independent of rotation curve fitting. The first such study has been completed for the southern barred spiral galaxy NGC~4123. Weiner (1998) has collected broad-band photometry and velocity maps both for the inner galaxy, using Fabry-Perot measurements of the H$\alpha$ emission, and for the outer galaxy, using 21~cm data from the VLA. He has constructed models based on full 2-D hydrodynamical simulations of a massless gas layer in a rotating potential derived from the observed light distribution. In order to construct the model potential from his I-band image, Weiner subtracts an unresolved source at the center, rectifies the image to face on, assumes a constant thickness and computes the gravitational field, modulo a single unknown M/L. 
For a number of assumed M/L values, he constructs axisymmetric DM halos which, when combined with the disk contribution, fit the rotation curve well outside the barred region. He has run a grid of models covering the parameter space of two unknowns: the M/L and the pattern speed of the bar and compares the projected velocity distribution with the high spatial resolution Fabry-Perot velocity field in the inner parts of NGC~4123. He finds (with a very high degree of confidence) $2.6 \leq {\rm M/L_I} h^{-1} \leq 3.3$ is required to produce a flow pattern with strong enough non-circular motions to match the data. This range of M/L values requires that 72\% to 90\% of the mass within 10 kpc is in the stars -- the contribution from DM can be little more than the minimal amount required to prevent a hollow halo. Work on other galaxies is in hand to check that this is not just a freak case. It should be noted that Englmaier \& Gerhard (this volume) reach a similar conclusion from similar work on the Milky Way. \section{Dynamical friction on bars} A second argument from barred galaxies relates to the prediction (Weinberg 1985) that bars should be slowed dramatically by dynamical friction. This prediction has been confirmed (Debattista \& Sellwood 1998) for moderate halo masses in $N$-body simulations of a barred disk in a responsive halo. The strong retarding torque acting on the bar causes the pattern speed to decrease rapidly to about one fifth of its initial value. Since this change occurs while the bar length increases only marginally, the ratio, $\cal R$, of the corotation radius to the bar semi-major axis, increases from $\gtsim 1$ at a time soon after the bar formed, to some ${\cal R} \sim 2.5$ late in the simulation. On the other hand, $\cal R$ remains close to 1.3 in a ``maximum disk'' model having a realistic rotation curve for as long as the calculation was run. 
Further work (Debattista \& Sellwood, in preparation) has shown that friction is only moderately reduced in halos that rotate in the same sense as the disk, even when halo rotation is cranked up to a perhaps unrealistic extent. Friction is also largely independent of whether the velocity distribution of a non-rotating halo is isotropic, radially or azimuthally biased, but friction was reduced when an anisotropic halo was given a high degree of rotation. The position of corotation is not easily determined in real barred galaxies. Weiner (\S3) finds $1 \ltsim {\cal R} \ltsim 1.4$ for NGC~4123. A value ${\cal R} \simeq 1.2$ was deduced by Lindblad \etal\ (1996) from a very similar study of NGC~1365. More direct estimates can be made for SB0 galaxies using the technique proposed by Tremaine \& Weinberg (1984) which requires that the observed material, stars in this case, obeys an equation of continuity. Application of this method to NGC~936 by Merrifield \& Kuijken (1995) and to NGC~4596 by Gerssen \etal\ (this volume) also places corotation at a radius $\ltsim 1.5$ times the bar semi-major axis. The shapes and locations of dust lanes in many other barred galaxies also suggest a ratio of 1.2 (Athanassoula 1992) and finally some still more model-dependent studies of ringed galaxies (Buta \& Combes 1995) suggest a similar value. While this heterogeneous collection of estimates is not as solid as one would like, all evidence is consistent with ${\cal R} \ltsim 1.5$, implying that real bars have not experienced strong braking. Once again, therefore, the DM halo must have a large, low-density core if the radius of corotation is to stay as close to the bar end as observations seem to imply. \begin{figure} \psfig{figure=sellwood_5a.ps,width=\hsize,angle=0} \psfig{figure=sellwood_5b.ps,width=\hsize,angle=0} \caption{Simulation of a massive, cool and bar-stable disk. The top panels show the second half of the evolution in which strong spiral patterns are present but no bar.
The rotation curves (below left) are measured at the times illustrated and show the gradual increase in disk mass; the fixed contributions from the rigid halo (dashed) and central mass (dotted) are shown. The values of $Q$ are also plotted (below right).} \label{fig:stable} \end{figure} \section{Stability} Since local stability arguments bearing on the question of the appropriate disk mass are used by Fuchs (this volume) and his arguments are critically reviewed by Bosma (also this volume), I avoid repeating them here and confine this part of my discussion to the question of global bar stability only. It has been known ever since the pioneering $N$-body simulations by Miller \etal\ (1971) and Hohl (1971) that self-gravitating galaxy disk models are prone to a bar-forming instability. In a widely cited paper, Ostriker \& Peebles (1973) argued that the global stability of unbarred galaxies required a significant DM content which, as stressed by Kalnajs (1987), must reside in the inner galaxy. This idea is too simple, however. Toomre (1981) argued that galaxies can be stabilized by other means; his argument is restated by Binney \& Tremaine (1987, \S6.3). The fact that $m=2$ modes in an almost fully self-gravitating disk can be stabilized by a dense center has been shown in linear stability analyses by Zang (1976), Evans \& Read (1998) and Toomre (unpublished). I was able (Sellwood 1989) to confirm Toomre's linear theory predictions, but I found that the stability of the extreme model he chose was rather delicate, since bars were still triggered by quite modest finite-amplitude effects. These extreme cases also suffer from $m=1$ instabilities. Not all models stabilized in this way are delicate, however. Figure \ref{fig:stable} shows a model that is robustly stable to bar formation, even though it has a moderately cool massive disk in a diffuse halo. 
This model closely resembles those described by Sellwood \& Moore (1999), but the radial distribution of the added particles is different. By concentrating all the infalling particles into a rather narrow annulus, Sellwood \& Moore were able to show that spiral patterns could be strong enough to re-arrange the surface density and alter the rotation curve shape. Since the strong spirals that achieved this result led to a rather hot disk ($Q \sim 4$), I added particles in the model presented here over a wide radial range, which allowed the disk to remain cool as shown. Almost all the central attraction in this model comes from the disk; it supports strong two-arm spiral patterns yet does not form a bar. The key difference between models of this kind and previous bar-unstable disks is the steep inner rise of the rotation curve, caused mostly by mobile particles, but seeded by the introduction of a fixed mass having $\sim1$\% of the final disk mass. The steep central rise in the rotation curve of this model resembles that of many galaxies of both late (Rubin, Ford \& Thonnard 1980) and early (Rubin, Kenney \& Young 1997) Hubble types. The conclusion of this section is that considerations of global bar-stability do not require a high central density of DM in every galaxy. There is no universal stability criterion; as Ostriker \& Peebles argued, galaxies with gently rising curves are unstable unless the disk is significantly sub-maximum, but we now know that fully self-gravitating disks with steeply rising curves are stable. Strong evidence that the existence or absence of a bar has nothing whatsoever to do with DM content can be seen in Figures \ref{fig:syer}(b) and \ref{fig:mtol} in which the barred and unbarred galaxies are marked by different symbols. There is no apparent tendency for barred galaxies to have lower $\epsilon_{\rm I}$ or higher M/L than their unbarred counterparts.
\section{Halo compression} As the principal source of central attraction switches from disk to halo matter, we might expect the rotation curve of a maximum disk galaxy to have a feature near the disk edge. Bahcall \& Casertano (1985), among others, stressed the absence of such a feature, which became known as the ``disk-halo conspiracy.'' In fact, it is a serious overstatement to suggest that there is no such feature -- quite a number of galaxies are known in which a marked decrease in circular speed is observed near the disk edge; Bosma (this volume) gives a list of some of the well-established cases and other examples can be found (\eg\ Verheijen 1997). Nevertheless, a weaker conspiracy remains because the circular speed well beyond the optical disk is very rarely less than 90\% of that in the disk region. Amongst enthusiasts for dynamically significant DM even near the centers of galaxies, the flatness of galaxy rotation curves is regarded as a natural consequence of halo compression as the disk forms within it (Blumenthal \etal\ 1986; Flores \etal\ 1993). If disks are maximum, however, the close correspondence between the circular speeds in the disk and halo still requires a conspiracy. Navarro (1998) has taken the argument further, to suggest that halos with large cores are unphysical because the standard formula for halo compression implies a hollow, or even negative density, halo before the disk formed. The problem discovered by Navarro, however, is not an argument against large cores, but merely reveals the severe limitations of the standard halo compression formula (hereafter HCF). \begin{figure}[t] \centerline{\psfig{figure=sellwood_6.ps,width=0.9\hsize,angle=0}} \caption{Compression of a spherical halo by a maximum disk. The solid curve which peaks near $R=20R_0$ is the rotation curve of the halo before the disk was added. 
The upper solid curve shows the combined rotation curve of the disk and halo, after adding the disk, and the dot-dashed and dashed curves show the contributions of the halo and disk respectively. The dotted curve shows the prediction of the initial halo rotation curve required by HCF.} \label{fig:decomp} \end{figure} The HCF, as derived by Barnes \& White (1984), Blumenthal \etal\ (1986) and Ryden \& Gunn (1987), embodies three principal assumptions. (1) The mass distributions of both the halo and the disk(!)\ are spherical. (2) The disk forms slowly enough that the halo response is adiabatic. Assumptions (1) and (2) imply that halo particles conserve their actions, in particular, the component of their angular momentum normal to the orbit plane, $J_z$, throughout. (3) The mean radius of a halo particle's orbit can be computed from its ``home radius'' -- the radius of a circular orbit of the same $J_z$. This circular orbit assumption is sometimes stated as the requirement that shells of halo matter do not cross. These drastic approximations do lead to a single (implicit) relation between the radial mass profiles of the halo before and after the disk formed. Barnes (1987) conducted a direct test and concluded that the HCF ``overestimates the halo response by as much as a factor of two.'' Despite his warning, the na\"\i ve HCF is still widely used. Figure \ref{fig:decomp} demonstrates its failure for a maximum disk case. The plot shows the formation of a maximum disk galaxy model as an exponential disk builds up within an initially spherical, low-concentration halo. The halo is composed of $100\,000$ particles that move freely in their own gravitational well supplemented by that of the disk. The final disk has a total mass of one tenth that of the halo and, to avoid any angular momentum redistribution, I held the disk particles fixed once they were placed in position. The disk was grown gradually by adding two particles per time-step.
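Under assumptions (1)--(3), the na\"\i ve HCF reduces to the statement that each halo shell conserves $r\,M(r)$, which gives the implicit relation sketched below. The Hernquist halo and exponential disk used here are illustrative choices, not the profiles of the simulation in Figure 6, and scipy's bracketed root finder stands in for the usual iterative solution.

```python
import numpy as np
from scipy.optimize import brentq

# Naive halo compression formula (HCF): a shell initially at r_i, where the
# initial (halo + baryon) mass is M_i(r_i), ends at the radius r_f solving
#     r_f * [ (1 - f_d) * M_i(r_i) + M_d(r_f) ] = r_i * M_i(r_i),
# where f_d is the fraction of the mass that settles into the disk.
def M_halo(r, a=10.0):            # Hernquist cumulative mass, total mass 1
    return r**2 / (r + a)**2

def M_disk(r, f_d=0.05, Rd=1.0):  # exponential disk, total mass f_d
    return f_d * (1.0 - (1.0 + r / Rd) * np.exp(-r / Rd))

def contracted_radius(r_i, f_d=0.05):
    Mi = M_halo(r_i)
    g = lambda r_f: r_f * ((1.0 - f_d) * Mi + M_disk(r_f, f_d)) - r_i * Mi
    return brentq(g, 1e-9, r_i)   # root is bracketed between 0 and r_i

# The disk drags the inner halo inward: r_f < r_i at disk-dominated radii.
r_f = [contracted_radius(r_i) for r_i in (1.0, 2.0, 5.0)]
```

Running this relation forwards compresses the halo; the failure discussed in the text arises when it is inverted to infer the pre-compression halo from a maximum disk model, where the predicted initial mass can become negative.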
The resulting maximum disk model has a flat rotation curve out to $\sim15R_0$. The dotted curve shows the rotation curve of the pre-compressed halo that would be required by the HCF to yield the final maximum disk model. The fact that velocities become imaginary at radii $\ltsim 8R_0$ confirms Navarro's finding that the na\"\i ve HCF predicts nonsense. That the HCF fails badly in this case is evidenced by the reasonable initial halo actually required for the final maximum disk model. I conducted further experiments to determine which approximation above is most responsible for this gross error. The simulation shown in Figure \ref{fig:decomp} was, in fact, run backwards; I started with an equilibrium disk-halo model and evaporated the disk to find the required ``initial'' halo. As a check of the adiabatic assumption (2), I re-grew the same disk and recovered the rotation curve from which I had started to impressive precision. In order to test assumption (1), I ran a new calculation with the mass of the flat disk replaced by the equivalent, but now spherical, $M(R)$ profile. This change did alter the rotation curve of the halo after evaporation of the ``spherical disk'' but still it was far from correctly predicted by the HCF, implying that assumption (3) is the principal source of error in the simple formula. While this experiment disposes of the objection that a compressed halo having a large core is unphysical, it also highlights the disk-halo conspiracy. In order to yield a reasonably flat rotation curve after the disk has formed, a $V_{\rm max}$ for the initial halo similar to that for the disk which will form within it seems to be required. It is far from obvious that such well-tuned initial conditions would arise naturally. \section{Surface Brightness and the Tully-Fisher Relation} The rotation curve of any galaxy in which the DM contribution in the inner disk is negligible, will have the form given in equation (1). 
If disks are maximal, equation (1) predicts that the observed circular speed should vary as $R_0^{-1/2}$ at fixed disk mass, or luminosity if the M/L is approximately constant. Thus one might expect surface brightness variations to give rise to scatter about the Tully-Fisher relation (TFR). This prediction is well known to fail for LSB galaxies; one of the principal surprises from studies of LSB galaxies is that they (or at least the larger ones) lie on the same TFR as do the HSBs (Zwaan \etal\ 1995; Sprayberry \etal\ 1995). This result is one strand of evidence for large mass discrepancies in their inner parts (de Blok, this volume). If all bright HSB galaxies are maximal, on the other hand, the above predicted correlation between circular velocity in the inner disk and scale length should hold. Courteau \& Rix (1999) tested this in a sample of bright galaxies with well measured $R_0$ and circular speed at $R=2.2R_0$, but found that the residuals from a TF-like relation do not correlate with surface brightness. Thus even within a sample of HSB galaxies with tightly controlled properties, surface brightness was not a second parameter in the TFR. As was similarly inferred for the LSBs (de Blok \& McGaugh 1997), their result requires either that the disk M/L varies systematically with surface brightness, or that the halo contribution picks up by just enough to compensate for a decreasing disk contribution, or that Newtonian dynamics breaks down. The same conclusions can be drawn from the trend in Figure \ref{fig:syer}(b), albeit from data of lower quality. Courteau \& Rix favor the second alternative, which evidently requires that at least some higher SB galaxies have significant DM contributions to their inner rotation speeds. \section{Conclusions} It is now well-established that LSB and low luminosity galaxies have larger mass discrepancies in their inner parts than do the bright HSB galaxies (Figure \ref{fig:syer}).
The controversial question is how small is the DM contribution to the inner rotation curves of the larger HSB galaxies? Studies of barred galaxies, in particular, have yielded strong new evidence suggesting that most mass in their inner parts is in stars. The completely independent arguments presented in \S\S3 \& 4 both suggest low upper limits to the DM content of inner parts of barred galaxies. These limits are not so extreme as to violate the conventional maximum disk constraint that the DM halo density should not decrease towards the center. Combined with the apparent absence of any systematic offsets between barred and unbarred galaxies in Figures \ref{fig:syer}(b) and \ref{fig:mtol}, it seems reasonable to argue that all HSB galaxies are similarly deficient in DM in their inner parts. This conclusion is supported by the older evidence from rotation curve fitting (\S2), by some recent evidence for the Milky Way -- especially the result obtained by Englmaier \& Gerhard (1999, and this volume). Bosma (this volume) presents further arguments for disk masses ranging up to maximum. The case for maximum disks in bright HSB galaxies is therefore strong, though still not decisive, partly because it leaves at least two serious puzzles: First, why is the circular speed from the disk in the inner galaxy generally so similar to that from the halo further out? Second, why does DM gradually become more important as the disk surface brightness declines? Courteau \& Rix (1999) argue that even HSB galaxies have, on average, substantial DM fractions in their inner parts and that barred galaxies have generally smaller fractions. This last suggestion is, however, inconsistent with the evidence in Figures \ref{fig:syer}(b) and \ref{fig:mtol}. The DM halos of LSB and low luminosity galaxies are well known to have large cores with low central densities and the evidence presented here suggests this is also true for bright HSBs. 
DM halos of this type are quite different from those predicted in many simulations of hierarchical clustering in an expanding universe (but see Primack, this volume). \acknowledgments I thank Stacy McGaugh and Scott Tremaine for helpful conversations and both of them, as well as Ben Weiner, for comments on the manuscript. This work was supported by NSF grant AST 96/17088 and NASA LTSA grant NAG 5-6037.
\section{Introduction} We are interested in trying to understand how small gaps between primes can be. If we let $p_n$ denote the $n^{\text{th}}$ prime, it is conjectured that \begin{equation} \liminf_n p_{n+1}-p_n=2. \end{equation} This is the famous twin prime conjecture. Unfortunately we appear unable to prove any results of this strength. The best unconditional result is due to Goldston, Pintz and Y\i ld\i r\i m \cite{GPY}, which states that \begin{equation} \liminf_n \frac{p_{n+1}-p_n}{\sqrt{\log{p_n}}(\log\log{p_n})^2}<\infty. \end{equation} Therefore we do not know that $\liminf p_{n+1}-p_n$ is finite. The method of \cite{GPY} relies heavily on results about primes in arithmetic progressions. We say that the primes have `level of distribution' $\theta$ if for any constant $A$ there is a constant $C=C(A)$ such that \begin{equation} \sum_{q\le x^\theta(\log{x})^{-C}}\max_{\substack{a\\(a,q)=1}}\left|\sum_{\substack{p\equiv a \pmod{q}\\ p\le x}}1-\frac{\text{Li}(x)}{\phi(q)}\right|\ll_A\frac{x}{(\log{x})^A}. \end{equation} The Bombieri-Vinogradov theorem states that the primes have level of distribution $1/2$, and this is a major ingredient in the proof of the Goldston-Pintz-Y\i ld\i r\i m result. If we could improve the Bombieri-Vinogradov theorem to show that the primes have level of distribution $\theta$ for some constant $\theta>1/2$, then it would follow from \cite[Theorem 1]{GPYI} that there is a constant $D=D(\theta)$ such that \begin{equation}\liminf_n p_{n+1}-p_n<D,\end{equation} and so there would be infinitely many bounded gaps between primes. It is believed that such improvements to the Bombieri-Vinogradov theorem are true, and Elliott and Halberstam \cite{ElliottHalberstam} conjectured the following much stronger result. \begin{cnjctr}[Elliott-Halberstam Conjecture] For any fixed $\epsilon>0$, the primes have level of distribution $1-\epsilon$.
\end{cnjctr} Friedlander and Granville \cite{FriedlanderGranville} have shown that the primes do not have level of distribution $1$, and so the Elliott-Halberstam conjecture represents the strongest possible result of this type. Under the Elliott-Halberstam conjecture the Goldston-Pintz-Y\i ld\i r\i m method gives \cite{GPYI} that \begin{equation} \liminf_n p_{n+1}-p_n\le 16. \end{equation} If we consider intervals containing 3 or more consecutive primes, however, we are unable to prove results that are as strong, even under the full strength of the Elliott-Halberstam conjecture. In particular we are unable to prove that there are infinitely many intervals of bounded length that contain at least 3 primes. The Goldston-Pintz-Y\i ld\i r\i m methods can still be used, but even with the Elliott-Halberstam conjecture we are only able to prove that \begin{equation} \liminf_n \frac{p_{n+2}-p_n}{\log{p_n}}=0. \end{equation} This should be contrasted with the following conjecture. \begin{cnjctr}[Prime $k$-tuples conjecture] Let $\mathcal{L}=\{L_1,\dots,L_k\}$ be a set of integer linear functions whose product has no fixed prime divisor. Then there are infinitely many $n$ for which all of $L_1(n), L_2(n), \dots,L_k(n)$ are simultaneously prime. \end{cnjctr} By `no fixed prime divisor' above we mean that for every prime $p$ there is an integer $n_p$ such that $L_i(n_p)$ is coprime to $p$ for all $1\le i\le k$. We call such a set of linear functions \textit{admissible}. We note that $\{n,n+2,n+6\}$ is an admissible set of linear functions, and so the prime $k$-tuples conjecture predicts that $\liminf_n p_{n+2}-p_n\le 6$ (it is easy to verify that one cannot have $p_{n+2}-p_n<6$ for $n>2$). More generally, for any integer $k>0$ the conjecture predicts that $\liminf p_{n+k}-p_n<\infty$, and so there are infinitely many intervals of bounded size containing at least $k$ primes. At the moment the prime $k$-tuples conjecture appears beyond the techniques currently available to us.
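The admissibility condition is finite to check: only primes $p\le k$ can be a fixed divisor of the product, since a prime $p>k$ cannot cover all of its residue classes with only $k$ shifts. A minimal sketch in Python (the function name is ours, not from the paper):

```python
from sympy import primerange

def is_admissible(shifts):
    """Decide whether {n + h : h in shifts} has no fixed prime divisor.

    For a prime p, the product prod(n + h) is divisible by p for every n
    exactly when the shifts occupy all p residue classes mod p; a prime
    p > len(shifts) can never do this, so only p <= k need checking.
    """
    k = len(shifts)
    for p in primerange(2, k + 1):
        if len({(-h) % p for h in shifts}) == p:
            return False
    return True

print(is_admissible([0, 2, 6]))  # True: {n, n+2, n+6} is admissible
print(is_admissible([0, 2, 4]))  # False: 3 always divides n(n+2)(n+4)
```

The same check confirms the admissibility of the 22-tuple used in the proof of Theorem 2.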
As an approximation to the conjecture, it is common to look for \textit{almost-prime} numbers instead of primes, where almost-prime indicates that the number has only a `few' prime factors. Graham, Goldston, Pintz and Y\i ld\i r\i m \cite{GGPY} have shown that given an integer $k$, there are infinitely many intervals of bounded length (depending on $k$) containing at least $k$ integers each with exactly two prime factors. It is a classical result of Halberstam and Richert \cite{HalberstamRichert} that there are infinitely many intervals of bounded length (depending on $k$) which contain a prime and at least $k$ numbers each with at most $r$ prime factors for $r$ sufficiently large (depending on $k$). We investigate, under the assumption that the primes have level of distribution $\theta>1/2$, whether there are infinitely many intervals of bounded length (depending on $k$) containing $2$ primes and $k$ numbers each with at most $r$ prime factors. \section{Initial Hypotheses} We will work with an assumption either on the distribution of primes in arithmetic progressions of level $\theta$, or a stronger assumption on numbers with exactly $r$ prime factors each of which is of a given size. Given constants $0\le \eta_i\le\delta_i\le 1$ for $1\le i\le r$ we define \begin{equation} \beta_{r,\eta,\delta}(n)=\begin{cases} 1,\qquad&\text{$n=p_1p_2\dots p_r$ with $n^{\eta_i}\le p_i\le n^{\delta_i}$ for $1\le i\le r$,}\\ 0,&\text{otherwise.} \end{cases} \end{equation} We put \begin{align} \Delta(x;q,a)&=\sum_{\substack{x<p\le 2x\\p\equiv a\pmod{q}}}1-\frac{1}{\phi(q)}\sum_{x<p\le 2x}1,\\ \Delta_{r,\eta,\delta}(x;q,a)&=\sum_{\substack{x<n\le 2x\\n\equiv a\pmod{q}}}\beta_{r,\eta,\delta}(n)-\frac{1}{\phi(q)}\sum_{x<n\le 2x}\beta_{r,\eta,\delta}(n),\\ \Delta^*(x;q)&=\max_{y\le x}\max_{\substack{a\\(a,q)=1}}\left|\Delta(y;q,a)\right|,\\ \Delta_{r,\eta,\delta}^*(x;q)&=\max_{y\le x}\max_{\substack{a\\(a,q)=1}}\left|\Delta_{r,\eta,\delta}(y;q,a)\right|.
\end{align} We can now state the two hypotheses that we will consider, the Bombieri-Vinogradov hypothesis of level $\theta$, BV($\theta$), and the generalised Bombieri-Vinogradov hypothesis of level $\theta$ for $E_r$ numbers, GBV($\theta$,r). \begin{NamedTheorem}[Hypothesis BV($\theta$)]For every constant $A>0$ and integer $h>0$ there is a constant $C=C(A,h)$ such that if $Q\le x^\theta(\log{x})^{-C}$ then we have \[\sum_{q\le Q}\mu^2(q)h^{\omega(q)}\Delta^*(x;q)\ll_A x(\log{x})^{-A}.\] \end{NamedTheorem} \begin{NamedTheorem}[Hypothesis GBV($\theta$,r)]For every constant $A>0$ and integer $h>0$ there is a constant $C=C(A,h)$ such that if $Q\le x^\theta(\log{x})^{-C}$ then uniformly for $0\le \eta_i\le \delta_i\le 1$ ($1\le i\le r$) we have \[\sum_{q\le Q}\mu^2(q)h^{\omega(q)}\Delta^*_{r,\eta,\delta}(x;q)\ll_A x(\log{x})^{-A}.\] \end{NamedTheorem} We note that by standard arguments in sieve methods (see, for example, \cite{HalberstamRichert}[Lemma 3.5]) Hypothesis BV($\theta$) follows from the primes having level of distribution $\theta$. \section{Statement of Results} \begin{thrm}\label{thrm:MainTheorem} Let $k\ge 1$ be an integer. Let $1/2<\theta<0.99$. Assume Hypothesis BV($\theta$) holds. Let \[r=\frac{240k^2}{(2\theta-1)^3}.\] Then there are infinitely many integers $n$ such that the interval $[n,n+C(k,\theta)]$ contains two primes and $k$ integers, each with at most $r$ prime factors. \end{thrm} \begin{thrm}\label{thrm:ElliottHalberstam} Let $\theta\ge 0.99$, and assume Hypothesis GBV($\theta$,r) holds for $1\le r\le 4$. 
Then there exist infinitely many integers $n$ such that the interval $[n,n+90]$ contains two primes and one other integer with at most 4 prime factors.\end{thrm} \section{Proof of Theorem \ref{thrm:MainTheorem}} We consider two finite disjoint sets of integer linear functions $\mathcal{L}^{(1)}_1=\{L^{(1)}_1,\dots,L^{(1)}_{k_1}\}$ and $\mathcal{L}^{(1)}_2=\{L^{(1)}_{k_1+1},\dots,L^{(1)}_{k_1+k_2}\}$, whose union $\mathcal{L}^{(1)}=\mathcal{L}_1^{(1)}\cup\mathcal{L}^{(1)}_2$ is admissible. (We recall that such a set is admissible if for every prime $p$ there is an integer $n_p$ such that every function evaluated at $n_p$ is coprime to $p$.) We wish to show that there are infinitely many $n$ for which two of the functions from $\mathcal{L}_1^{(1)}$ take prime values at $n$, and all of the functions from $\mathcal{L}^{(1)}_2$ take almost-prime values at $n$. Since we are only interested in showing there are infinitely many such $n$, we adopt a normalisation of our linear functions, as done originally by Heath-Brown \cite{HeathBrown}, which simplifies our argument. By considering $L_i(n)=L_i^{(1)}(An+B)$ for suitable constants $A$ and $B$ we may assume that the functions $L_i$ satisfy the following conditions. \begin{enumerate} \item The functions $L_i(n)=a_i n+b_i$ ($1\le i \le k_1+k_2$) are distinct with $a_i>0$. \item Each of the coefficients $a_i$ is composed of the same primes, none of which divides the $b_j$. \item If $i\ne j$, then any prime factor of $a_i b_j-a_j b_i$ divides each of the $a_l$. \end{enumerate} We let $\mathcal{L}_1=\{L_1,\dots,L_{k_1}\}$ and $\mathcal{L}_2=\{L_{k_1+1},\dots,L_{k_1+k_2}\}$.
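The three normalisation conditions are mechanical to verify for a concrete tuple of pairs $(a_i,b_i)$. The following sketch (the function name and the example tuples are ours, chosen only for illustration) checks them directly:

```python
from math import gcd
from sympy import primefactors

def is_normalised(funcs):
    """Check Heath-Brown's normalisation for funcs = [(a_1, b_1), ...]:
    (1) the L_i(n) = a_i n + b_i are distinct with a_i > 0;
    (2) every a_i has the same set of prime factors, none dividing any b_j;
    (3) every prime factor of a_i*b_j - a_j*b_i (i != j) divides each a_l.
    """
    a = [ai for ai, _ in funcs]
    b = [bi for _, bi in funcs]
    if len(set(funcs)) != len(funcs) or min(a) <= 0:
        return False
    rad = set(primefactors(a[0]))
    if any(set(primefactors(ai)) != rad for ai in a):
        return False
    if any(gcd(ai, bj) != 1 for ai in a for bj in b):
        return False
    cross = [ai * bj - aj * bi
             for i, (ai, bi) in enumerate(funcs)
             for j, (aj, bj) in enumerate(funcs) if i != j]
    return all(set(primefactors(abs(d))) <= rad for d in cross)

print(is_normalised([(2, 1), (2, 3), (2, 5)]))  # True
print(is_normalised([(1, 1), (1, 3)]))          # False: 2 | a_1 b_2 - a_2 b_1 but 2 does not divide a_i
```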
We now consider the sum \begin{equation} S(N;\mathcal{L}_1,\mathcal{L}_2)=\sum_{N\le n \le 2N}\left(\sum_{L\in\mathcal{L}_1}\chi_{1}(L(n))+\sum_{L\in\mathcal{L}_2}\chi_{r}(L(n))-k_2-1\right)\left(\sum_{\substack{d|\Pi(n)\\ d\le R}}\lambda_d\right)^2, \end{equation} where \begin{align} \chi_r(n)&=\begin{cases} 1,\qquad &\text{$n$ has at most $r$ prime factors}\\ 0,&\text{otherwise,}\end{cases}\\ R&=N^{\theta/2}(\log{N})^{-C},\\ \Pi(n)&=\prod_{L\in\mathcal{L}_1\cup\mathcal{L}_2}L(n), \end{align} and the $\lambda_d$ are real numbers which we declare later. Here $C>0$ is a constant chosen sufficiently large so that we can use the estimates of hypotheses BV($\theta$) or GBV($\theta$,r). If we can show that $S>0$ then we know there must be at least one $n\in[N,2N]$ for which the terms in parentheses give a positive contribution to $S$. The second term in our expression for $S$ is a square, and so is always non-negative. We see that the first term in parentheses is positive only when there are at least two primes and $k_2$ numbers each with at most $r$ prime factors amongst the $L_i(n)$ ($1\le i\le k_1+k_2$). If we choose all our original functions to be of the form $L^{(1)}_i(n)=n+h_i$ (with $h_i\ge 0$) then all these integers lie in an interval $[m,m+H]$, where $H=\max_{i} h_i$. Thus it is sufficient to show that $S>0$ for any large $N$ to prove Theorem \ref{thrm:MainTheorem}. We can get such a bound by following a method similar to Goldston, Pintz and Y\i ld\i r\i m \cite{GPY}, which we refer to as the GPY method. To simplify notation we put \begin{equation} \Lambda^2(n)=\left(\sum_{\substack{d|\Pi(n)\\d\le R}}\lambda_d\right)^2. \end{equation} To avoid confusion we mention that $\Lambda^2(n)$ is unrelated to the von Mangoldt function. We expect to be able to show that $S>0$ for suitably large $k_1$ and $r$ (depending on $k_2$) when the primes have level of distribution $\theta>1/2$.
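The counting step behind this positivity argument (the bracket exceeds $0$ only if the indicator sums total at least $k_2+2$, and since the $\mathcal{L}_2$-sum contributes at most $k_2$, at least two $\mathcal{L}_1$-values must be prime) can be checked exhaustively over all indicator patterns; a small sketch with names of our own choosing:

```python
from itertools import product

def bracket_positive_implies_two_primes(k1, k2):
    """Exhaust all 0/1 patterns for chi_1(L(n)), L in L_1, and chi_r(L(n)),
    L in L_2, and confirm: whenever sum(chi_1) + sum(chi_r) - k_2 - 1 > 0,
    at least two chi_1 indicators equal 1 and at least k_2 + 2 of the
    k_1 + k_2 values are almost-prime (primes count as almost-primes)."""
    for c1 in product((0, 1), repeat=k1):
        for cr in product((0, 1), repeat=k2):
            if sum(c1) + sum(cr) - k2 - 1 > 0:
                if not (sum(c1) >= 2 and sum(c1) + sum(cr) >= k2 + 2):
                    return False
    return True

print(all(bracket_positive_implies_two_primes(k1, k2)
          for k1 in range(2, 6) for k2 in range(1, 5)))  # True
```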
We expect this because the original GPY method shows that for sufficiently large $k_1$ (depending on $k_2$ and $\epsilon$) we can choose the $\lambda_d$ to give \begin{equation} \sum_{N\le n\le 2N}\sum_{L\in \mathcal{L}_1}\chi_1(L(n))\Lambda^2(n)\ge (2\theta-\epsilon)\sum_{N\le n\le 2N}\Lambda^2(n). \end{equation} Moreover, since $\Lambda^2(n)$ is small when $\Pi(n)$ has many prime factors, we expect for sufficiently large $r$ (depending on $k_2$ and $\epsilon$) that \begin{equation} \sum_{N\le n\le 2N}\sum_{L\in\mathcal{L}_2}(1-\chi_r(L(n)))\Lambda^2(n)\le \epsilon\sum_{N\le n\le 2N}\Lambda^2(n). \end{equation} And so provided that $\theta>1/2+\epsilon$ we expect that \begin{equation} S\gg_\epsilon \sum_{N\le n\le 2N}\Lambda^2(n)>0. \end{equation} Although the method of Graham, Goldston, Pintz and Y\i ld\i r\i m allows one to estimate similar sums involving numbers with a fixed number of prime factors, these results rely on level-of-distribution results for such numbers, which we are not assuming in Theorem \ref{thrm:MainTheorem}. Instead we proceed by noting that any integer which is at most $2N$ and has more than $r$ prime factors must have a prime factor of size at most $(2N)^{1/(r+1)}$. Thus for $n\le 2N$ \begin{equation} \chi_r(n)\ge 1-\sum_{\substack{p|n\\p\le (2N)^{1/(r+1)}}}1. \end{equation} Substituting this into our expression for $S$ we have \begin{align} S&\ge \sum_{N\le n\le 2N}\left(\sum_{L\in \mathcal{L}_1}\chi_1(L(n))-1-\sum_{L\in \mathcal{L}_2}\sum_{\substack{p|L(n)\\p\le (2N)^{1/(r+1)}}}1\right)\Lambda^2(n)\nonumber\\ &=\sum_{L\in \mathcal{L}_1}Q_1(L)-Q_2-\sum_{L\in\mathcal{L}_2}Q_3(L),\label{eq:SSplit} \end{align} where {\allowdisplaybreaks \begin{align} Q_1(L)&=\sum_{N\le n\le 2N}\chi_1(L(n))\Lambda^2(n),\\ Q_2&=\sum_{N\le n\le 2N}\Lambda^2(n),\\ Q_3(L)&=\sum_{N\le n\le 2N}\sum_{\substack{p|L(n)\\p\le (2N)^{1/(r+1)}}}\Lambda^2(n).
\end{align}} The choice of good values for $\lambda_d$ and the corresponding evaluation of $Q_1$, $Q_2$, $Q_3$ already exists in the literature. We quote from \cite{Maynard}[Proposition 4.1] taking $W_0(t)=1$ and \cite{GGPY}[Theorem 7 and Theorem 9]. We note that \cite{GGPY}[Theorem 9] does not require that $E_2$-numbers have level of distribution $\theta$, so Hypothesis BV($\theta$) is sufficient for the statement to hold. These results give for a fixed polynomial $P$, for $k=k_1+k_2=\#(\mathcal{L}_1\cup\mathcal{L}_2)$, for $L\in\mathcal{L}$ and for sufficiently large $C$ that we can choose the $\lambda_d$ such that \begin{align} Q_1(L)&\sim\frac{\mathfrak{S}(\mathcal{L})N(\log{R})^{k+1}}{(k-2)!\log{N}}\int_0^1\tilde{P}(1-t)^2t^{k-2}d t,\\ Q_2&\sim\frac{\mathfrak{S}(\mathcal{L})N(\log{R})^{k}}{(k-1)!}\int_0^1P(1-t)^2t^{k-1}d t,\\ Q_3(L)&\sim\frac{\mathfrak{S}(\mathcal{L})N(\log{R})^{k}}{(k-1)!}I,\label{eq:Q3Def} \end{align} where \begin{align} \mathfrak{S}(\mathcal{L})&\text{ is a positive constant depending only on $\mathcal{L}$,}\\ \tilde{P}(x)&=\int_0^xP(t)d t,\\ I&=\int_0^{\delta}\frac{1}{y}\int_{1-y}^1P(1-t)^2t^{k-1}d t d y\nonumber\\ &\qquad+\int_0^{\delta}\frac{1}{y}\int_0^{1-y}\left(P(1-t)-P(1-t-y)\right)^2t^{k-1}d t d y,\\ \delta&=\frac{2}{\theta(r+1)}. \end{align} Here the asymptotic for $Q_3$ is valid only for $R^2(2N)^{1/(r+1)}\le N(\log{N})^{-C}$, and so we introduce the condition \begin{equation}\label{eq:rCondition} r+1>\frac{1}{1-\theta} \end{equation} to ensure that this is satisfied for $N$ sufficiently large. All the other asymptotics are valid without further conditions. We choose the polynomial \begin{equation} P(x)=x^l, \end{equation} where $l\ge 0$ is an integer to be declared later. To ease notation we let \begin{equation} C(\mathcal{L})=\frac{\mathfrak{S}(\mathcal{L})N(\log{R})^k(2l)!}{(k+2l)!}.\label{eq:CDef} \end{equation} We note that for $a,b\in \mathbb{N}$ \begin{equation} \int_0^1x^a(1-x)^b d x =\frac{a!b!}{(a+b+1)!}.
\end{equation} Thus we see that \begin{align} \sum_{L\in\mathcal{L}_1}Q_1(L)&\sim \theta\left(\frac{2l+1}{l+1}\right)\left(\frac{k_1}{k+2l+1}\right)C(\mathcal{L}),\label{eq:Q1}\\ Q_2&\sim C(\mathcal{L}).\label{eq:Q2} \end{align} We follow a similar approach to Graham, Goldston, Pintz and Y\i ld\i r\i m \cite{GGPY} to estimate $Q_3$. We let \begin{equation} I=\int_0^\delta \frac{F(y)}{y}d y,\label{eq:Idef} \end{equation} where \begin{align} F(y)&=\int_{1-y}^1 P(1-t)^2t^{k-1}d t+\int_0^{1-y}\left(P(1-t)-P(1-t-y)\right)^2t^{k-1}d t\nonumber\\ &=\int_0^1P(1-t)^2t^{k-1}d t+\int_0^{1-y}P(1-t-y)^2t^{k-1}d t\nonumber\\ &\qquad-2\int_0^{1-y}P(1-t)P(1-t-y)t^{k-1}d t. \end{align} We recall that $P(x)=x^l$, and note that \begin{equation} P(1-t)=(1-t)^l=(1-t-y)^l+\sum_{j=1}^l\binom{l}{j}y^j(1-t-y)^{l-j}. \end{equation} Thus \begin{align} F(y)&=\int_0^1(1-t)^{2l}t^{k-1}d t+\int_0^{1-y}(1-t-y)^{2l}t^{k-1}d t\nonumber\\ &\qquad-2\int_0^{1-y}(1-t-y)^{2l}t^{k-1}d t-2\sum_{j=1}^l\binom{l}{j}y^j\int_0^{1-y}(1-t-y)^{2l-j}t^{k-1}d t\nonumber\\ &\le \int_0^1(1-t)^{2l}t^{k-1}d t-\int_0^{1-y}(1-t-y)^{2l}t^{k-1}d t\nonumber\\ &=\frac{(k-1)!(2l)!}{(k+2l)!}\left(1-(1-y)^{k+2l}\right). \end{align} Substituting this into \eqref{eq:Idef} gives \begin{align} I=\int_0^\delta\frac{F(y)}{y}d y &\le \frac{(k-1)!(2l)!}{(k+2l)!}\int_0^\delta\frac{1-(1-y)^{k+2l}}{y}d y\\ &=\frac{(k-1)!(2l)!}{(k+2l)!}\sum_{j=0}^{k+2l-1}\int_0^\delta (1-y)^j dy\\ &\le \frac{(k-1)!(2l)!}{(k+2l)!}\delta(k+2l).\label{eq:IVal} \end{align} Therefore from \eqref{eq:Q3Def}, \eqref{eq:CDef} and \eqref{eq:IVal}, for any $L\in\mathcal{L}_2$ we have that \begin{equation}\label{eq:Q3} Q_3(L)\le C(\mathcal{L})\left(\delta(k+2l)+o(1)\right). \end{equation} Substituting \eqref{eq:Q1}, \eqref{eq:Q2} and \eqref{eq:Q3} into \eqref{eq:SSplit} we see \begin{equation} S\ge C(\mathcal{L})\left(\theta\left(\frac{2l+1}{l+1}\right)\left(\frac{k_1}{k+2l+1}\right)-1-k_2\delta(k+2l)+o(1)\right). 
\end{equation} We let \begin{equation} k+2l+1=\lceil C_1(2\theta-1)^{-2}\rceil ,\qquad l+1=\lceil C_2(2\theta-1)^{-1}\rceil\label{eq:klDef} \end{equation} for some $C_1$, $C_2$. This gives \begin{align} \frac{S}{C(\mathcal{L})}&\ge \theta\left(2-\frac{2\theta-1}{C_2}\right)\left(1-\frac{2C_2(2\theta-1)}{C_1}-\frac{(k_2+1)(2\theta-1)^2}{C_1}\right)-1\nonumber\\ &\qquad-k_2\delta C_1(2\theta-1)^{-2}+o(1)\nonumber\\ &=(2\theta-1)\left(1-\frac{\theta}{C_2}-\frac{4\theta C_2}{C_1}-\frac{2k_2(2\theta-1)\theta}{C_1}+\frac{(1+k_2)\theta(2\theta-1)^2}{C_1C_2}\right)\nonumber\\ &\qquad-k_2\delta C_1(2\theta-1)^{-2}+o(1). \end{align} We let \begin{equation} C_1=40k_2,\qquad C_2=3. \end{equation} We see from \eqref{eq:klDef} that this choice of $C_1$ and $C_2$ corresponds to positive integer values for $k_1$ and $l$ for any choice of $1/2<\theta\le 0.99$ and $k_2$, and so is a valid choice. Since $k_2$ is a positive integer and $1/2<\theta\le 1$, this gives \begin{align} \frac{S}{C(\mathcal{L})}&\ge (2\theta-1)\left(\frac{30-17\theta-5\theta^2+2\theta^3}{30}\right)-40k_2^2(2\theta-1)^{-2}\delta+o(1)\nonumber\\ &\ge (2\theta-1)\left(\frac{31-21\theta}{30}\right)-40k_2^2(2\theta-1)^{-2}\delta+o(1). \end{align} Thus $S>0$ for large $N$ if $\delta$ is chosen such that \begin{equation} \delta < \frac{(2\theta-1)^3(31-21\theta)}{1200k_2^2}. \end{equation} We recall that $\delta=2/(\theta(r+1))$, so $S$ is positive provided $r$ is chosen larger than \begin{equation} \frac{2400k_2^2}{(2\theta-1)^3\theta(31-21\theta)}-1< \frac{240k_2^2}{(2\theta-1)^3}. \end{equation} We note that if $r=240k_2^2/(2\theta-1)^3$ then for $\theta<0.99$ the condition \eqref{eq:rCondition} is satisfied. This completes the proof of Theorem \ref{thrm:MainTheorem}. We remark that by choosing $L(n)=n+h$ with $h<H$ for $L\in\mathcal{L}_2$ and $L(n)=n+h$ with $h>H$ for $L\in\mathcal{L}_1$ we can ensure that of the $k_2+2$ almost-primes we find, the largest two are primes.
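Two computational steps in this argument are easy to double-check symbolically: the passage from $30-17\theta-5\theta^2+2\theta^3$ to $31-21\theta$ rests on the factorisation $2\theta^3-5\theta^2+4\theta-1=(\theta-1)^2(2\theta-1)\ge 0$ for $\theta>1/2$, and the final choice of $r$ must satisfy \eqref{eq:rCondition}. A sketch of both checks with sympy:

```python
from sympy import symbols, expand, Rational

theta = symbols('theta')

# (30 - 17θ - 5θ² + 2θ³) - (31 - 21θ) = 2θ³ - 5θ² + 4θ - 1 = (θ-1)²(2θ-1),
# which is >= 0 for θ > 1/2, justifying the second displayed lower bound.
assert expand((theta - 1)**2 * (2*theta - 1)
              - (2*theta**3 - 5*theta**2 + 4*theta - 1)) == 0

# r = 240 k²/(2θ-1)³ satisfies r + 1 > 1/(1-θ) at sample points θ < 0.99:
for k in range(1, 10):
    for t in (Rational(51, 100), Rational(3, 4), Rational(98, 100)):
        r = 240 * k**2 / (2*t - 1)**3
        assert r + 1 > 1 / (1 - t)
print("checks pass")
```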
\section{Proof of Theorem \ref{thrm:ElliottHalberstam}} We can get better quantitative results for the number of prime factors involved in our almost-prime if we assume a fixed level of distribution result for almost-primes and for primes, and then follow the work of \cite{GGPY}. We consider the same sum $S$, but now we assume that $\mathcal{L}_2=\{L_0\}$ and we take $r=4$. Thus $k_2=1$ and $k=k_1+1$. \begin{align} S=S(N;\mathcal{L}_1,\{L_0\})&=\sum_{N\le n \le 2N}\left(\sum_{L\in\mathcal{L}_1}\chi_{1}(L(n))+\chi_{4}(L_0(n))-2\right)\left(\sum_{\substack{d|\Pi(n)\\ d\le R}}\lambda_d\right)^2\nonumber\\ &=\sum_{L\in\mathcal{L}_1}Q_1(L)+Q_1'(L_0)-Q_2, \end{align} where $Q_1(L)$, $Q_2$ are as before and \begin{align} Q_1'&=\sum_{N\le n \le 2N}\chi_{4}(L_0(n))\left(\sum_{\substack{d|\Pi(n)\\ d\le R}}\lambda_d\right)^2,\\ R&=N^{0.99/2}(\log{N})^{-C}. \end{align} As before, $C$ is a suitably large positive constant. We split the contribution to $Q_1'$ depending on whether $L_0(n)$ has exactly 1, 2, 3 or 4 prime factors. Thus \begin{equation} Q_1'=Q_{1}(L_0)+Q_{12}'+Q_{13}'+Q_{14}', \end{equation} where \begin{equation} Q_{1j}'=\sum_{N\le n\le 2N}\beta_j(L_0(n))\left(\sum_{\substack{d|\Pi(n)\\ d\le R}}\lambda_d\right)^2, \end{equation} and \begin{equation} \beta_j(n)=\begin{cases} 1,\qquad &\text{$n$ has exactly $j$ prime factors}\\ 0, &\text{otherwise.} \end{cases} \end{equation} For technical reasons we find it harder to deal with terms arising when $L_0(n)$ has a prime factor less than $N^{\epsilon}$ or no prime factor greater than $N^{1/2}$. Thus we obtain a lower bound for $Q_{1j}'$ by replacing $\beta_j(L_0(n))$ with $\beta'_j(L_0(n))$, where \begin{equation} \beta_j'(n)=\begin{cases} 1,\qquad &\text{$n=p_1p_2\dots p_j$ with $n^\epsilon<p_1<\dots<p_j$ and $n^{0.505}<p_j$}\\ 0, &\text{otherwise.} \end{cases} \end{equation} We then obtain the following asymptotic lower bounds.
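The restricted indicator $\beta_j'$ is straightforward to evaluate by factorisation; the sketch below (the function name and the sample value of $\epsilon$ are our own choices) may help fix the definition:

```python
from sympy import factorint

def beta_prime(n, j, eps=0.05):
    """Indicator beta'_j(n): n = p_1 * ... * p_j with distinct primes
    n^eps < p_1 < ... < p_j and a large prime p_j > n^0.505."""
    f = factorint(n)
    if len(f) != j or any(e != 1 for e in f.values()):
        return 0  # not a product of exactly j distinct primes
    ps = sorted(f)
    return 1 if ps[0] > n**eps and ps[-1] > n**0.505 else 0

print(beta_prime(11 * 101, 2))   # 1: 101 > 1111^0.505 ≈ 34.5 and 11 > 1111^0.05
print(beta_prime(3 * 5 * 7, 3))  # 0: largest factor 7 < 105^0.505 ≈ 10.5
print(beta_prime(4, 1))          # 0: 4 = 2^2 is not squarefree
```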
By following an equivalent argument to \cite{Thorne} and \cite{Maynard}[Proposition 4.2] but using Hypothesis GBV(0.99,$j$) to bound the error terms we have \begin{equation} Q_{1j}'\ge (1+o(1))\frac{\mathfrak{S}(\mathcal{L})N(\log{R})^{k+1}}{(k-2)!\log{N}}J_{j}, \end{equation} where {\allowdisplaybreaks \begin{align} J_{r}&=\int_{(x_1,\dots,x_{r-1})\in \mathcal{A}_r}\frac{ I_1(Bx_1,\dots,Bx_{r-1})}{\left(\prod_{i=1}^{r-1}x_i\right)\left(1-\sum_{i=1}^{r-1}x_i\right)}d x_1\dots dx_{r-1},\\ I_1&=\int_0^1\left(\sum_{J\subset\{1,\dots,r-1\}}(-1)^{|J|}\tilde{P}^+(1-t-\sum_{i\in J}x_i) \right)^2t^{k-2}d t,\\ \tilde{P}^+(x)&=\begin{cases} \int_0^x P(t)d t,\qquad &x\ge 0\\ 0,&\text{otherwise,} \end{cases}\\ B&=\frac{2}{0.99},\\ \mathcal{A}_r&=\left\{x\in[0,1]^{r-1}:\epsilon<x_1<\dots<x_{r-1},\sum_{i=1}^{r-1}x_i<B^{-1}\right\}. \end{align}} As before, by Hypothesis BV(0.99) we also have for any $L\in\mathcal{L}$ \begin{align} Q_1(L)&\sim\frac{\mathfrak{S}(\mathcal{L})N(\log{R})^{k+1}}{(k-2)!\log{N}}\int_0^1\tilde{P}(1-t)^2t^{k-2}d t,\\ Q_2&\sim\frac{\mathfrak{S}(\mathcal{L})N(\log{R})^{k}}{(k-1)!}\int_0^1P(1-t)^2t^{k-1}d t. \end{align} Thus we have that \begin{equation} S\ge \frac{\mathfrak{S}(\mathcal{L})N(\log{R})^{k}}{(k-1)!}\left(\frac{0.99(k-1)}{2}\left(k J_1+J_2+J_3+J_4\right)-2I_0+o(1)\right), \end{equation} where $J_r$ is given above and \begin{equation} I_0=\int_0^1P(1-t)^2t^{k-1}d t. \end{equation} Therefore given a polynomial $P$ we can get an asymptotic lower bound for $S$ by explicitly calculating the integrals $I_0$, $J_1$, $J_2$, $J_3$ and $J_4$. Explicitly we have for $r=1$ \begin{equation} J_1=\int_0^1\tilde{P}(1-t)^2t^{k-2}d t. \end{equation} Similarly for $r=2$ we have \begin{equation} J_2=J_{21}+J_{22}+O(\epsilon), \end{equation} where \begin{align} J_{21}&=\int_{0}^{1}\frac{B}{x(B-x)}\int_0^{1-x}\left(\tilde{P}(1-t)-\tilde{P}(1-t-x)\right)^2t^{k-2}d t d x,\\ J_{22}&=\int_{0}^{1}\frac{B}{x(B-x)}\int_{1-x}^1\tilde{P}(1-t)^2t^{k-2}d t d x.
\end{align} Similarly for $r=3$ we have \begin{equation} J_3=J_{31}+J_{32}+J_{33}+J_{34}+O(\epsilon), \end{equation} where {\allowdisplaybreaks \begin{align} J_{31}&=\int_{0}^{1/2}\int_{x}^{1-x}\frac{B}{xy(B-x-y)}\int_{1-x}^1\left(\tilde{P}(1-t)\right)^2t^{k-2}d t d y d x,\\ J_{32}&=\int_{0}^{1/2}\int_{x}^{1-x}\frac{B}{xy(B-x-y)}\int_{1-y}^{1-x}\left(\tilde{P}(1-t)-\tilde{P}(1-t-x)\right)^2t^{k-2}d t d y d x,\\ J_{33}&=\int_{0}^{1/2}\int_{x}^{1-x}\frac{B}{xy(B-x-y)}\int_{1-x-y}^{1-y}\nonumber\\ &\quad\left(\tilde{P}(1-t)-\tilde{P}(1-t-x)-\tilde{P}(1-t-y)\right)^2t^{k-2}d t d y d x,\\ J_{34}&=\int_{0}^{1/2}\int_{x}^{1-x}\frac{B}{xy(B-x-y)}\int_{0}^{1-x-y}\nonumber\\ &\quad\left(\tilde{P}(1-t)-\tilde{P}(1-t-x)-\tilde{P}(1-t-y)+\tilde{P}(1-t-x-y)\right)^2t^{k-2}d t d y d x. \end{align}} Finally for $r=4$ we have \begin{equation} J_4=J_{41}+J_{42}+J_{43}+J_{44}+J_{45}+J_{46}+J_{47}+J_{48}+J_{49}+J_{410}+J_{411}+O(\epsilon), \end{equation} where {\allowdisplaybreaks \begin{align} J_{41}&=\int_{0}^{1/3}\int_{x}^{(1-x)/2}\int_{y}^{1-x-y}\frac{B}{xyz(B-x-y-z)}\int_{1-x}^1\left(\tilde{P}(1-t)\right)^2t^{k-2}d t d z d y d x,\\ J_{42}&=\int_{0}^{1/3}\int_{x}^{(1-x)/2}\int_{y}^{1-x-y}\frac{B}{xyz(B-x-y-z)}\int_{1-y}^{1-x}\nonumber\\ &\qquad\left(\tilde{P}(1-t)-\tilde{P}(1-t-x)\right)^2t^{k-2}d t d z d y d x,\\ J_{43}&=\int_{0}^{1/4}\int_{x}^{1/2-x}\int_{x+y}^{1-x-y}\frac{B}{xyz(B-x-y-z)}\int_{1-x-y}^{1-y}\nonumber\\ &\qquad\left(\tilde{P}(1-t)-\tilde{P}(1-t-x)-\tilde{P}(1-t-y)\right)^2t^{k-2}d t d z d y d x,\\ J_{44}&=\int_{0}^{1/3}\int_{x}^{(1-x)/2}\int_{y}^{x+y}\frac{B}{xyz(B-x-y-z)}\int_{1-z}^{1-y}\nonumber\\ &\qquad\left(\tilde{P}(1-t)-\tilde{P}(1-t-x)-\tilde{P}(1-t-y)\right)^2t^{k-2}d t d z d y d x,\\ J_{45}&=\int_{0}^{1/4}\int_{x}^{1/2-x}\int_{x+y}^{1-x-y}\frac{B}{xyz(B-x-y-z)}\int_{1-z}^{1-x-y}\nonumber\\ &\quad\left(\tilde{P}(1-t)-\tilde{P}(1-t-x)-\tilde{P}(1-t-y)+\tilde{P}(1-t-x-y)\right)^2t^{k-2}d t d z d y d x,\\
J_{46}&=\int_{0}^{1/3}\int_{x}^{(1-x)/2}\int_{y}^{x+y}\frac{B}{xyz(B-x-y-z)}\int_{1-x-y}^{1-z}\nonumber\\ &\quad\left(\tilde{P}(1-t)-\tilde{P}(1-t-x)-\tilde{P}(1-t-y)-\tilde{P}(1-t-z)\right)^2t^{k-2}d t d z d y d x,\\ J_{47}&=\int_{0}^{1/4}\int_{x}^{1/2-x}\int_{x+y}^{1-x-y}\frac{B}{xyz(B-x-y-z)}\int_{1-x-z}^{1-z}\nonumber\\ &\quad\Bigl(\tilde{P}(1-t)-\tilde{P}(1-t-x)-\tilde{P}(1-t-y)\nonumber\\ &\quad\quad-\tilde{P}(1-t-z)+\tilde{P}(1-t-x-y)\Bigr)^2t^{k-2}d t d z d y d x,\\ J_{48}&=\int_{0}^{1/3}\int_{x}^{(1-x)/2}\int_{y}^{x+y}\frac{B}{xyz(B-x-y-z)}\int_{1-x-z}^{1-x-y}\nonumber\\ &\quad\Bigl(\tilde{P}(1-t)-\tilde{P}(1-t-x)-\tilde{P}(1-t-y)-\tilde{P}(1-t-z)\nonumber\\ &\quad\quad+\tilde{P}(1-t-x-y)\Bigr)^2t^{k-2}d t d z d y d x,\\ J_{49}&=\int_{0}^{1/3}\int_{x}^{(1-x)/2}\int_{y}^{1-x-y}\frac{B}{xyz(B-x-y-z)}\int_{1-y-z}^{1-x-z}\nonumber\\ &\quad\Bigl(\tilde{P}(1-t)-\tilde{P}(1-t-x)-\tilde{P}(1-t-y)-\tilde{P}(1-t-z)\nonumber\\ &\quad\quad+\tilde{P}(1-t-x-y)+\tilde{P}(1-t-x-z)\Bigr)^2t^{k-2}d t d z d y d x,\\ J_{410}&=\int_{0}^{1/3}\int_{x}^{(1-x)/2}\int_{y}^{1-x-y}\frac{B}{xyz(B-x-y-z)}\int_{1-x-y-z}^{1-y-z}\nonumber\\ &\quad\Bigl(\tilde{P}(1-t)-\tilde{P}(1-t-x)-\tilde{P}(1-t-y)-\tilde{P}(1-t-z)+\tilde{P}(1-t-x-y)\nonumber\\ &\quad\quad+\tilde{P}(1-t-x-z)+\tilde{P}(1-t-y-z)\Bigr)^2t^{k-2}d t d z d y d x,\\ J_{411}&=\int_{0}^{1/3}\int_{x}^{(1-x)/2}\int_{y}^{1-x-y}\frac{B}{xyz(B-x-y-z)}\int_0^{1-x-y-z}\nonumber\\ &\Bigl(\tilde{P}(1-t)-\tilde{P}(1-t-x)-\tilde{P}(1-t-y)-\tilde{P}(1-t-z)+\tilde{P}(1-t-x-y)\nonumber\\ &+\tilde{P}(1-t-x-z)+\tilde{P}(1-t-y-z)-\tilde{P}(1-t-x-y-z)\Bigr)^2t^{k-2}d t d z d y d x. \end{align}} We choose $k=22$ and $P(t)=1+60t-300t^2+3500t^3$ and find that \begin{align} I_0&=\frac{121351}{59202}=2.04978\dots,\\ J_1&=\frac{228380}{18027009}=0.01266\dots,\\ J_2&\ge 0.041+O(\epsilon),\\ J_3&\ge 0.048+O(\epsilon),\\ J_4&\ge 0.028+O(\epsilon).
\end{align} Thus we have that \begin{align} S&\ge \frac{\mathfrak{S}(\mathcal{L})N(\log{R})^{k}}{(k-1)!}\left(\frac{0.99(k-1)}{2}\left(k J_1+J_2+J_3+J_4\right)-2I_0+O(\epsilon)+o(1)\right)\nonumber\\ &\ge \frac{\mathfrak{S}(\mathcal{L})N(\log{R})^{k}}{(k-1)!}\left(0.013+O(\epsilon)+o(1)\right). \end{align} In particular, for $N$ sufficiently large and $\epsilon$ sufficiently small we have $S>0$, and so there are infinitely many $n$ for which an admissible 22-tuple attains at least two prime values and one value with at most 4 prime factors. The set $\{0, 6, 8, 14, 18, 20, 24, 30, 36, 38, 44, 48, 50, 56, 60, 66, 74, 78, 80, 84, 86, 90\}$ is an admissible 22-tuple, and so the interval $[n,n+90]$ infinitely often contains at least two primes and an integer with at most 4 prime factors. We remark here that if we can take the level of distribution $\theta=1-\delta$ for every $\delta>0$ then we can take $k=19$ instead of 22, which reduces the length of the interval to 80. \section{Acknowledgment} I would like to thank my supervisor, Prof. Heath-Brown, for suggesting this problem and for his careful reading of this paper. \bibliographystyle{acm}
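The exact rational values of $I_0$ and $J_1$ quoted above for $k=22$ and $P(t)=1+60t-300t^2+3500t^3$ can be reproduced with a computer algebra system; for instance, with sympy:

```python
from sympy import symbols, integrate, Rational

t, u = symbols('t u')
k = 22
P = 1 + 60*u - 300*u**2 + 3500*u**3   # the polynomial P(u) chosen above
Ptilde = integrate(P, (u, 0, t))      # \tilde{P}(t) = \int_0^t P(u) du

# I_0 = \int_0^1 P(1-t)^2 t^{k-1} dt and J_1 = \int_0^1 \tilde{P}(1-t)^2 t^{k-2} dt
I0 = integrate(P.subs(u, 1 - t)**2 * t**(k - 1), (t, 0, 1))
J1 = integrate(Ptilde.subs(t, 1 - t)**2 * t**(k - 2), (t, 0, 1))

assert I0 == Rational(121351, 59202)     # = 2.04978...
assert J1 == Rational(228380, 18027009)  # = 0.01266...
print(I0, J1)
```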
\section{Introduction} Claire Voisin in \cite{Vo} defines the subset $V_k(A)$ of a complex abelian variety $A$ consisting of the points $x\in A$ such that the zero-cycle $\{x\}-\{0_A\}$ is $k$-nilpotent with respect to the Pontryagin product in the Chow group: \[ V_k(A):=\{x\in A \mid (\{x\}-\{0_A\})^{*k}=0 \text{ in } CH_0(A)_{\mathbb Q}\}. \] Here we have denoted by $\{x\}$ the zero-cycle of degree $1$ corresponding to the point $x\in A$. These are naturally defined sets in the sense that they exist in all abelian varieties, are functorial and move in families. Moreover they are related to the gonality of the abelian variety itself (the minimal gonality of a curve contained in $A$) in a natural way. We consider the following subsets of the moduli space of abelian varieties of dimension $g$ with a polarization of type $\delta$: \[ \mathcal V_{g,k,l}=\{ A \in \mathcal A_g^\delta \mid \dim V_k(A) \ge l \}. \] Since the sets $V_k$ are naturally defined, $\mathcal V_{g,k,l}$ is a union of countably many closed subvarieties of $\mathcal A_g^\delta$. Hence it makes sense to ask about its dimension. Put $\mathcal V_{g,k}:=\mathcal V_{g,k,1}$. For an abelian subvariety $B\subset A$ the inclusion $V_k(B)\subset V_k(A)$ holds and a well-known theorem of Bloch implies that $B=V_k(B)$ if $\dim B+1\le k$, hence in this situation $B\subset V_k(A)$. These are in some sense ``degenerate'' examples. In this paper we concentrate on the case $k=2$ and we take care of the non-degenerate case, that is, we will assume that $V_2(A)$ contains some curve generating the abelian variety $A$. Our main result is: \vskip 3mm \begin{thm}\label{m.res} Let $g\ge 3$ and consider an irreducible subvariety $\mathcal Y \subset \mathcal V_{g,2}$ such that for a very general $y\in \mathcal Y$ there is a curve in $V_2(A_y)$ generating $A_y$.
Then $\dim \mathcal Y\le 2g-1.$ \end{thm} This result is sharp due to the fact that the hyperelliptic locus is contained in $\mathcal V_{g,2}$, see section 2. In fact, the motivation for this study is to understand the geometrical meaning of the positive dimensional components in $V_2$. Our result gives some evidence that there is a link between the existence of hyperelliptic curves in abelian varieties and the fact that $V_2$ is positive dimensional. We remark that the statement of Theorem (\ref{m.res}) was suggested by the analogous result in \cite{Isog_hyp} concerning hyperelliptic curves. \vskip 3mm Section $2$ is devoted to giving some preliminaries and some useful properties of the loci $V_k(A)$, focusing especially on the case $k=2$. A remarkable property is that $V_2(A)$ is the preimage of the orbit of the image of the origin with respect to rational equivalence in the Kummer variety $Kum(A)$. We also prove the following interesting facts (see Corollaries (\ref{cor1}) and (\ref{cor2})): \vskip 3mm \begin{prop} For any abelian variety $A$ the inclusion $$V_k(A)+V_l(A)\subset V_{k+l-1}(A)$$ holds for all $\, 1\le k, l \le g$. Moreover if $C$ is a hyperelliptic curve of genus $g$ and $J(C)$ is its Jacobian variety, then for all $1\le k \le g+1$, we have that $\dim V_{k}(JC)=k-1$ (the maximal possible value). \end{prop} The rest of the paper is devoted to the proof of the main theorem. The beginning follows closely the same strategy as in \cite{Isog_hyp}, since we reduce to proving the vanishing of a certain adjoint form. The novelty here is that we prove this vanishing by using the action of a family of rationally trivial zero-cycles in the spirit of Mumford and Roitman results revisited by Bloch-Srinivas, Voisin and others.
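For orientation, the first inclusion in the Proposition follows from a short formal computation with the Pontryagin product; the following is only a sketch (the complete proofs are in the Corollaries cited above):

```latex
% Sketch of V_k(A) + V_l(A) \subset V_{k+l-1}(A).
Take $x\in V_k(A)$, $y\in V_l(A)$ and set $\alpha=\{x\}-\{0_A\}$,
$\beta=\{y\}-\{0_A\}$, so that $\alpha^{*k}=\beta^{*l}=0$. Since
$\{x\}*\{y\}=\{x+y\}$ and $\{0_A\}$ is the unit for $*$, we have
\[
\{x+y\}-\{0_A\}=(\alpha+\{0_A\})*(\beta+\{0_A\})-\{0_A\}
=\alpha*\beta+\alpha+\beta .
\]
Every term in the expansion of $(\alpha*\beta+\alpha+\beta)^{*(k+l-1)}$ is a
multiple of $\alpha^{*a}*\beta^{*b}$ with $a+b\ge k+l-1$, hence $a\ge k$ or
$b\ge l$, and so each term vanishes; thus $x+y\in V_{k+l-1}(A)$.
```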
More precisely, we can assume that there is a relative map $f:{\mathcal C} \longrightarrow {\mathcal A}$ of curves in abelian varieties over a base ${\mathcal U}$ such that $f(C_y)\subset V_2(A_y)$ generates the abelian variety $A_y$ for all $y\in {\mathcal U}$. We use the properties of the sets $V_2(A_y)$ to construct a cycle on the family of abelian varieties which acts trivially on the differential forms. Then, by using deformation of differential forms as in \cite{Isog_hyp}, we compute the so-called adjoint form at a generic point of the family. This technique can be traced back to \cite{collino_pirola}, where this procedure is introduced for the first time. In section $4$ we assume, by contradiction, that the dimension of the family is $\ge 2g$ and we prove that this implies the existence of a non-trivial adjoint form. This contradicts the results of section 3. \textbf{Acknowledgments:} We warmly thank Olivier Martin for his careful reading of the paper, and the referee for valuable suggestions that have simplified and clarified the paper. \section{Preliminaries on the subsets $V_k$ of an abelian variety $A$} \subsection{On the dimension of $V_k$} Most of this subsection comes from \cite{Vo}, where the sets $V_k(A)$ appear for the first time. \begin{defn} Let $A$ be an abelian variety and denote by $CH_0(A)$ the Chow group of zero-cycles in $A$ with rational coefficients. We also denote by $*$ the Pontryagin product in the Chow group. Given a point $x \in A$, we put $\{x\}$ for the class of $x$ in $CH_0(A)$. Then we define the Voisin sets (cf. \cite{Vo}): \[ V_k=V_k(A):=\{x\in A \mid (\{x\}-\{0_A\})^{*k}=0\}. \] \end{defn} It is known that the set $V_k$ is a countable union of closed subvarieties of $A$. A typical way to obtain points in $V_k$ is given by the following Proposition: \vskip 3mm \begin{prop}(\cite[Prop.
(1.9)]{Vo}) \label{Vo_Be} Assume that $\{x_1\}+\dots +\{x_k\}=k\{0_A\}$ in $CH_0(A)$ for some points $x_i\in A$, then for all $i$ we have $x_i\in V_k$. \end{prop} Since the orbits w.r.t. rational equivalence $\vert k \{0_A\}\vert$ are hard to compute, there are only a few examples of positive dimensional components in $V_k(A)$ that we can construct from this proposition. The simplest instance comes from a $k$-gonal curve $C$ contained in $A$ such that there is a totally ramified point $p$ for the degree $k$ map $f:C\longrightarrow \mathbb P^1$. Translating we can assume that $p$ is the origin $0_A$, and then the fibers of $f$ provide a $1$-dimensional component in the $k$-fold symmetric product of $A$. Therefore, by the proposition above, we obtain that $C$ is contained in $V_k(A)$. Observe that the sets $V_k$ are invariant under isogenies, hence these positive dimensional components also appear in many other abelian varieties. In particular, for any integer $n$, we have that $n_* V_2(A)\subset V_2(A)$. \begin{rem}\label{remarks_on_V_k} We have the following properties: \begin{enumerate} \item [a)] All the abelian varieties containing hyperelliptic cur\-ves have positive dimensional com\-po\-nents in $V_2$. \item [b)] A very well known theorem of Bloch (see \cite{bloch}) implies that $$V_{g+1}(A)=A,$$ hence the natural filtration \[ V_1(A) \subset V_2(A)\subset \ldots \subset V_g(A) \subset V_{g+1}(A)=A \] has at most $g$ steps. It is natural to ask what is the behaviour of the dimension of $V_k(A)$, with $k\le g$, for very general abelian varieties, and which geometric properties of $A$ these sets encode. \item [c)] Assume that $B\subset A$ is an abelian subvariety, then $V_k(B)\subset V_k(A)$. In particular, if $k\ge \dim B+1$, then $B\subset V_k(A)$. For instance: all the elliptic curves in $A$ passing through the origin are contained in $V_2(A)$.
\item [d)] Let $C$ be a smooth quartic plane curve and let $p$ be a flex point with tangent $t$, then $t\cdot C=3p+q$ and the projection from $q$ provides a collection of zero-cycles of degree $3$ in $JC$ rationally equivalent to $3\{0_{JC}\}$, hence $C\subset V_3(JC)$. Using isogenies we get that there are in fact countably many curves in $V_3(A)$ for a very general abelian variety of dimension $3$. \end{enumerate} \end{rem} The following is proved in Theorem (0.8) of \cite{Vo} by using some ideas from \cite{Mumford} and improving the techniques of \cite{Kummer}: \vskip 3mm \begin{thm}\label{dim_V_k} Let $A$ be an abelian variety of dimension $g$. Then: \begin{enumerate} \item [a)] $\dim V_k(A) \le k-1$. \item [b)] If $A$ is very general and $g\ge 2k-1$ we have that $\dim V_k(A)=0$. \end{enumerate} \end{thm} In the specific case of $V_2(A)$ we have the following properties. \begin{prop}\label{properties_V_2} Let $A$ be an abelian variety and let $Kum(A)$ be its Kummer variety. \begin{enumerate} \item [a)] We have the equality $V_2(A)=\{x\in A \mid \{x\}+\{-x\}=2\{0_A\}\}.$ \item [b)] Let $\alpha : A\longrightarrow Kum(A)$ be the quotient map. Then $$ V_2(A)=\alpha^{-1}(\{y\in Kum(A) \mid \{y\} \sim_{rat} \{\alpha (0_A)\} \}). $$ \end{enumerate} \end{prop} \begin{proof} Part a) follows from the observation that \[ (\{x\}-\{0_A\})^{*2}=\{2x\}-2\{x\}+\{0_A\}=0 \] is equivalent, translating with $-x$, to $\{x\}+\{-x\}=2\{0_A\}$. To prove b) we first see that $x\in V_2(A)$ if and only if $\{ \alpha (x) \} \sim_{rat} \{\alpha (0_A)\}$. Indeed, assume that $\{x\}+\{-x\}=2\{0_A\}$; applying $\alpha$ we get that $2 \{\alpha(x)\}\sim_{rat} 2\{\alpha ( 0_A)\}$. Since $Alb({Kum(A)})=0$ the Chow group has no torsion (see \cite{Roitman_tors}), therefore $\{\alpha(x)\}\sim_{rat} \{\alpha (0_A)\}$. In the opposite direction, if $\{\alpha (x)\}\sim_{rat} \{\alpha (0_A)\}$ we apply $\alpha^*$ at the level of Chow groups and we obtain that $x\in V_2(A)$.
Hence $V_2(A)$ is the pre-image by $\alpha $ of the points rationally equivalent to $\alpha (0_A)$. \end{proof} \subsection{Relation with the Chow ring} In this part we collect some computations on $0$-cycles on abelian varieties which are more or less implicit in \cite{beauville_quelques}, \cite{beauville} and \cite{Vo}. Let us recall first some facts on the Chow group (with rational coefficients) of an abelian variety $A$ of dimension $g$ which are proved in \cite{beauville}. Let us define the subgroups: \[ CH^g_s(A):=\{z\in CH^g(A)_{\mathbb Q} \mid k_*(z)=k^s z, \quad \forall k\in\mathbb Z \}. \] Then: \[ CH^g(A)_{\mathbb Q}=CH^g_0(A) \oplus CH^g_1(A) \oplus \ldots \oplus CH^g_g(A). \] Moreover $CH^g_0(A)=\mathbb Q \{0_A\}$ and $I=\bigoplus _{s\ge 1} CH^g_s(A)$ is the ideal, with respect to the Pontryagin product, of the zero-cycles of degree $0$. It is known that $I^{*\, r}= \bigoplus _{s\ge r} CH^g_s(A)$ and that $I^{*\, 2}$ is the kernel of the Albanese map tensored with $\mathbb Q$: \[ I \longrightarrow A_{\mathbb Q}, \] sending a zero-cycle $\sum n_i\{a_i\}$ to the sum $\sum n_i \, a_i$ in $A$. Another useful property is that $CH^g_s(A)*CH^g_t(A)=CH^g_{s+t}(A)$. We point out that the filtration $V_1(A)\subset V_2(A) \subset \ldots \subset A$ is, in some sense, induced by the filtration $\ldots \subset I^{* \, 2}\subset I \subset CH^g(A)_{\mathbb Q}$. Indeed, given a point $x\in A$ we use the notation: \begin{equation}\label{decomposition} \{x\}=\{0_A\}+x_1+\ldots +x_g, \qquad x_i \in CH^g_i(A). \end{equation} Then we have: \begin{prop}\label{V_k_vs_Chow} For all $x\in A$, $x$ belongs to $V_k(A)$ if and only if $$x_{k}=\ldots =x_g=0.$$ In particular $x \in V_2(A)$ if and only if $\{x\}-\{0_A\}\in CH^g_1(A)$. \end{prop} \begin{proof} We apply to (\ref{decomposition}) the multiplication by $l$ in $A$: \[ \{l x\}=\{0_A\}+l x_1+\ldots + l^g x_g.
\] Using this we have: \[ \begin{aligned} (\{x\}-\{0_A\})^{* k}&= \sum_{i=0}^k (-1)^i\binom{k}{i}\{(k-i)x\}=\\ &= \sum_{i=0}^k (-1)^i\binom{k}{i}(\{0_A\}+(k-i)x_1+\ldots +(k-i)^g x_g)= \\ &= \sum_{i=0}^k (-1)^i\binom{k}{i} \{0_A\}+ \sum_{i=0}^k (-1)^i\binom{k}{i} (k-i)x_1+\ldots \\ &+ \sum_{i=0}^k (-1)^i\binom{k}{i}(k-i)^g x_g. \end{aligned} \] Now we use the following formulas (see the proof of Lemma 3.3 in \cite{Vo} or prove them by induction): \[ \begin{aligned} \sum_{i=0}^k (-1)^i\binom{k}{i}(k-i)^l=& 0 \qquad \text{ if }\, l<k \\ \sum_{i=0}^k (-1)^i\binom{k}{i}(k-i)^k=& k! \end{aligned} \] Therefore we obtain that: \[ (\{x\}-\{0_A\})^{* k}=k! x_k+\ldots \] and similarly $ (\{x\}-\{0_A\})^{* l}=l! x_l+\ldots $ for any $l\ge k$. Hence $x\in V_k(A)$ if and only if $x\in V_l(A)$ for all $l\ge k$ if and only if $x_k=\ldots =x_g=0$. \end{proof} We have several consequences of this Proposition and of its proof: \begin{cor} With the same notations we have that $x_k=\frac 1 {k!}x_1^k$. Hence $\{x\}=\exp(x_1).$ Moreover $x\in V_k$ if and only if $x_k=x_1^k=0$. \end{cor} \begin{proof} We have seen along the proof of the Proposition that \[ (\{x\}-\{0_A\})^{* k}=k! x_k+\ldots \text{ higher degree terms}. \] Computing directly we get that \[ (\{x\}-\{0_A\})^{* k}=(x_1+\ldots +x_k)^{*k}=x_1^k+\ldots \text{higher degree terms}. \] By comparing both formulas we obtain the equality. \end{proof} \begin{rem} Notice that our computations are more or less contained in \cite{beauville_quelques}. Indeed, define as in section $4$ of loc. cit. the map \[ \gamma: A \longrightarrow I, \qquad a \mapsto \{0\} -\{a\} + \frac 12 (\{0\} -\{a\})^{*2} +\frac 13 (\{0\} -\{a\})^{*3}+\ldots \] This is a morphism of groups. Then, with our notations, $\gamma (x)= -x_1$. In particular the image of $\gamma $ belongs to $CH^g_1(A)$. \end{rem} \begin{cor}\label{points_V_2} Let $a, b \in A$ be two points such that $$n \{a\} + m\{b\}=(n+m)\{0_A\}$$ for some integers $1\le n,m$. Then $a,b \in V_2(A)$.
\end{cor} \begin{proof} Decomposing as before: \[ \{a\}=\{0\}+a_1+\frac 12 a_1^2+\ldots \qquad \{b\}=\{0\}+b_1+\frac 12 b_1^2+\ldots \] the equality of the statement implies that $n a_1+m b_1=0=n a_1^2+m b_1^2$. Then $b_1=-\frac nm a_1$ and thus $n a_1^2+ \frac {n^2}{m} a_1^2=0$, so $a_1^2=b_1^2=0$ and $a,b \in V_2(A)$. \end{proof} \begin{cor} Let $\varphi:A\longrightarrow B$ be an isogeny, then $\varphi^{-1}(V_k(B))\subset V_k(A)$. In particular $\varphi (V_k(A))=V_k(B)$ and $\varphi ^{-1}(V_k(B))=V_k(A)$. \end{cor} \begin{proof} Since we work with Chow groups with rational coefficients, it is clear that for an integer $n\neq 0$ the map $n_*:CH^g_k(A)\longrightarrow CH^g_k(A)$ is bijective. Let $\psi : B\longrightarrow A$ be an isogeny such that $\psi \circ \varphi =n$; we deduce that $\varphi _*: CH^g_k(A) \longrightarrow CH^g_k(B)$ is injective. Let $x\in \varphi^{-1}(V_k(B))$ and set $\{x\}=\{0_A\}+x_1+\ldots + x_g$. By hypothesis \[ \varphi_* (\{x\}) =\{0_B\} +\varphi _*(x_1)+\ldots +\varphi _*(x_g) \in V_k(B). \] Hence $\varphi_* (x_k)=0$ and $x_k=0$. Therefore $x \in V_k(A)$ and we are done. \end{proof} \begin{rem} This corollary also follows from the definition of $V_k$ and the fact that $\varphi _*$ is an isomorphism modulo torsion on Chow groups compatible with the Pontryagin product. \end{rem} Another interesting consequence of this characterization is the following property: \begin{cor}\label{cor1} For any $0\le k,l \le g$ we have that $$V_k(A)+V_l(A)\subset V_{k+l-1}(A).$$ \end{cor} \begin{proof} Let $x\in V_k(A)$, $y\in V_l(A)$. Then $\{x\}=\{0_A\}+x_1+\ldots +x_{k-1}$ and $\{y\}=\{0_A\}+y_1+\ldots +y_{l-1}$. Since $x_i*y_j \in CH^g_{i+j}(A)$ we obtain \[ \{x+y\}= \{x\}*\{y\}=\{0_A\}+(x_1+y_1)+ (x_2+x_1*y_1+y_2)+\ldots+ x_{k-1}*y_{l-1}. \] Thus $x+y\in V_{k+l-1}(A)$. \end{proof} As an application we have: \begin{cor}\label{cor2} Let $C$ be a hyperelliptic curve of genus $g$.
Then \[ \dim V_k(JC)=k-1 \] for $1\le k\le g$, that is, the maximal possible dimension is attained. \end{cor} \begin{proof} Choosing a Weierstrass point to define the Abel-Jacobi map, we can assume that the curve $C$ is contained in $V_2(JC)$. Using the previous Corollary inductively, we have that \[ C+\stackrel{(k-1)}{\ldots} +C=W^0_{k-1}(C)\subset V_{k}(JC). \] Since $\dim W^0_{k-1}(C)=k-1$, the equality follows from the bound in Theorem \ref{dim_V_k}, a). \end{proof} For instance, for the Jacobian of a genus $3$ curve $C$ we have, in the hyperelliptic case, that $\dim V_2(JC)=1$ and $\dim V_3(JC)=2$. If instead $C$ is a generic quartic plane curve we have that $\dim V_2(JC)=0$ by Theorem (\ref{dim_V_k}) and $\dim V_3(JC)\ge 1$ by Remark (\ref{remarks_on_V_k},d). Denoting as in that remark $p$ a flex point and $q$ its residual point, we have also the following: for any $x\in C$ such that the tangent line to $C$ in $x$ goes through $q$, $x\in V_2(JC)$ (we identify $C$ with the Abel-Jacobi image in $JC$ using $p$). Indeed: there exists a $y\in C$ with $2x+y+q\sim 3p+q$, hence in $JC$ there is a relation of the form $2\{x\}+\{y\}=3\{0\}$ and then Corollary (\ref{points_V_2}) implies that $\{x\}, \{y\} \in V_2(JC)$. Assume now that $C$ is a plane quartic with a hyperflex, that is, a point $p$ such that $\mathcal O_C(1)\cong \mathcal O_C (4p)$. This condition defines a divisor in $\mathcal M_3$. Embed $C$ in its Jacobian using $p$ as base-point. Then for any bitangent $2x+2y$ we have $\{x\}, \{y\} \in V_2(JC)$. Also, with the same argument, for a standard flex $q$ we have $\{q\}\in V_2(JC)$. Everything suggests that the points in ``$C\cap V_2(JC)$'' could have some geometrical meaning. Notice that this leaves open the question whether the dimension of $V_3(JC)$ is $1$ or $2$ for a generic quartic plane curve $C$. \vskip 3mm \section{A family of zero-cycles and the action on differential forms} In this section we begin the proof of the main theorem.
We proceed by contradiction, hence we assume that there exists an irreducible component $\mathcal Y$ of $\mathcal V_{g,2}$ of dimension $\ge 2g$. By Theorem \ref{dim_V_k} we have $\dim V_2(A)\le 1$, for any abelian variety $A$. Hence $V_2(A_y)$ contains curves for all $y\in \mathcal Y$ and, by hypothesis, at least one of these curves generates the abelian variety. By a standard argument (involving the properness and countability of relative Chow varieties and the existence of universal families of abelian varieties up to base change) we can assume the existence of the following diagram: \[ \xymatrix@C=1.pc@R=1.8pc{ \mathcal {C} \ar[rd] \ar[rr]^ f && \mathcal {A} \ar[ld]^{\pi } \\ &\mathcal U, } \] where the parameter space ${\mathcal U}$ comes equipped with a generically finite map $\Phi: {\mathcal U} \longrightarrow {\mathcal Y} $ such that $\Phi(y)$ is the isomorphism class of $A_y$ for all $y\in \mathcal U$. Moreover $f_y:C_y \longrightarrow A_y$ is the normalization map of an irreducible curve $f_y(C_y)$ contained in $V_2(A_{y})$, and generating $A_y$, followed by the inclusion. We can also assume that ${\mathcal C} \longrightarrow {\mathcal U}$ has a section and then that $f$ induces a map of families of abelian varieties $F:{\mathcal {JC}} \longrightarrow \mathcal A$ over $\mathcal U$. To start with, we pull back the families of curves and abelian varieties to ${\mathcal C}$ itself: \[ \xymatrix@C=1.pc@R=1.8pc{ \mathcal A_{{\mathcal C}} \ar[d]_{\pi_{{\mathcal C} }} \ar[rr] && \mathcal {A} \ar[d]^\pi\\ {\mathcal C} \ar[rr] && {\mathcal U}. } \] Now we define a family of zero-cycles in $\mathcal A_{{\mathcal C} }$ parametrized by ${\mathcal C}$.
Let $s_+: {\mathcal C} \longrightarrow {\mathcal A}_{{\mathcal C}}$ be the section given by the maps $(id_{{\mathcal C}},f)$: \[ \xymatrix@C=1.pc@R=1.8pc{ {\mathcal C} \ar@/_/[ddr]_{id_{{\mathcal C}}} \ar@/^/[drr]^f \ar[dr]^{s_+} \\ & {\mathcal A}_{{\mathcal C}} \ar[d]^{\pi_{{\mathcal C}}} \ar[r] & {\mathcal A} \ar[d]^{\pi} \\ & {\mathcal C} \ar[r] & {\mathcal U} } \] Put $\mathcal Z^+:=s_+({\mathcal C})$. Analogously, by considering $-1_{\mathcal A}\circ f$, where $-1_{\mathcal A} $ is the relative $-1$ map on the family of abelian varieties, we define a section $s_-:{\mathcal C} \longrightarrow {\mathcal A}_{\mathcal C}$ of $\pi_{\mathcal C}$ and a cycle $\mathcal Z^-:=s_-({\mathcal C})$. Finally the zero section $0_{\mathcal A}$ induces a section $s_0$ and a cycle $\mathcal Z_0$. Set $\mathcal Z=\mathcal Z^++\mathcal Z^- - 2 \mathcal Z_0$, a cycle on $\mathcal A_{{\mathcal C}}$. In fact $\mathcal Z^++\mathcal Z^-, 2 \mathcal Z_0 \subset Sym^2 \mathcal A_{{\mathcal C}}$. Observe that $\mathcal Z_{t}$, $t\in {\mathcal C}$, is the $0$-cycle \[ \mathcal Z_t=\{f(t)\}+\{-f(t)\}-2\{0_{A_y}\} \] on $A_y$, where $\pi(t)=y$. Since $f(t)\in f(C_y)\subset V_2(A_y)$ we have that $\mathcal Z_{t}\sim _{rat}0$ in $A_y$ (see Proposition \ref{properties_V_2}). We are interested in an infinitesimal deformation of a curve $C_y$ for a general $y\in \mathcal U$. Thus, let us denote by $\Delta $ the spectrum of the ring of dual numbers, $Spec \, \mathbb C[\varepsilon ]/(\varepsilon ^2)$. We consider a tangent vector $\xi \in T_{\mathcal U}(y)$ and we take a smooth quasi-projective curve $B\subset {\mathcal U}$ passing through $y$ and with $\xi \in T_B(y)$. This induces the maps \[ \alpha_{\xi }:\Delta \longrightarrow B \longrightarrow \mathcal U.
\] We pull back to $B$ and to $\Delta $ the families of curves and abelian varieties and the cycle $\mathcal Z$, thus we have \[ \xymatrix@C=1.pc@R=1.8pc{ \mathcal A_{{\mathcal C}_{\Delta }} \ar[d]_{\pi_{\Delta}} \ar[rr] && \mathcal A_{{\mathcal C}_B} \ar[d]^\pi \ar[rr] && \mathcal A_{{\mathcal C}} \ar[d]^{\pi_{\mathcal C}}\\ \mathcal {\mathcal C}_{\Delta } \ar[d] \ar[rr] && \mathcal C_B \ar[d] \ar[rr] && \mathcal C \ar[d] \\ \Delta \ar[rr]^{\alpha_{\xi}} && B \ar[rr] && {\mathcal U}, } \] and $\mathcal Z_{\Delta }$ (resp. $\mathcal Z_{B }$) denotes the pull-back to $\mathcal A_{\mathcal C_{\Delta}}$ (resp. to $\mathcal A_{\mathcal C_{B}}$) of the cycle $\mathcal Z$. The cycle $\mathcal Z_B$ determines a cohomological class $[\mathcal Z_B]\in H^g(\mathcal A_{{\mathcal C}_B},\Omega^g_{\mathcal A_{{\mathcal C}_B}})$ and its restriction a class $[\mathcal Z_B]_t$ in $H^g(A_{t}, \Omega ^g_{\mathcal A_{{\mathcal C}_B} \vert A_{ t} } )$, where $\pi(t)=y$. The following is well known to experts and is a version of the classical results by Mumford and Roitman on zero-cycles (see \cite[Lemma 2.2]{voisin_AG} and also \cite{Bloch_Srinivas}): \begin{prop}\label{Bloch-Srinivas} If for any $t\in {\mathcal C}_B$, the restricted cycle $\mathcal Z_t$ is rationally equivalent to $0$, then there is a dense Zariski open set $\mathcal V\subset {\mathcal C}_B$ such that $[\mathcal Z_{\mathcal V}]=0$ in $H^g(\mathcal A_{\mathcal V}, \Omega ^g_{\mathcal A_{\mathcal V}})$. In particular for all $t\in \mathcal V$ we have $[\mathcal Z_{\mathcal V}]_t=[\mathcal Z_{B}]_t =0$ in $H^g( A_{t}, \Omega ^g_{\mathcal A_{\mathcal V}\vert A_t})$. \end{prop} Now we look at the action of $\mathcal Z_{\Delta }$ on the space of differential forms on the infinitesimal family of abelian varieties. This works as follows. Let $\mathcal A_{C_y}=A_y\times C_y$ be the restriction of $\mathcal A_{C_{\Delta }}$ over $C_y$.
Consider $\Omega \in H^0(\mathcal A_{C_y}, \Omega^2_{\mathcal A_{{\mathcal C}_ {\Delta}}|\mathcal A_{C_y}}) $ and define: \begin{equation}\label{def_action} \mathcal Z_{\Delta }^*(\Omega )=(s_+)^*(\Omega)+(s_-)^*(\Omega )-2\, s_0^*(\Omega), \end{equation} which belongs to $H^0( C_y, \Omega^2_{\mathcal C_{\Delta }|C_y}). $ Then we have by Proposition (\ref{Bloch-Srinivas}) that this action is trivial, which gives the vanishing $ \mathcal Z_{\Delta }^*(\Omega )=0$. Consider also the family of abelian varieties on $\Delta $: \[ \xymatrix@C=1.pc@R=1.8pc{ \mathcal {\mathcal A}_{\Delta } \ar[d] \ar[rr] && \mathcal A \ar[d] \\ \Delta \ar[rr] && {\mathcal U}. } \] Notice that there is a natural map $\mathcal A_{{\mathcal C}_{\Delta } } \xrightarrow{h} \mathcal A_{\Delta}$ and that the composition: \[ {\mathcal C}_{\Delta } \xrightarrow{s_+} \mathcal A_{{\mathcal C}_{\Delta } } \xrightarrow{ h} \mathcal A_{\Delta} \] is simply the original family of curves $f_{\Delta}: {\mathcal C}_{\Delta } \longrightarrow \mathcal A_{\Delta }$ over $\Delta $. Therefore for any form $\Omega' \in H^0(A_y,\Omega^2_{\mathcal A_{\Delta} \vert A_y}) $, denoting $\Omega = h^*(\Omega ')$, we have that \[ f_{\Delta}^*(\Omega')=s_+^*(\Omega). \] An almost identical computation can be done with $s_-$ since $-1_{\mathcal A_{{\mathcal C}_{\Delta}}}$ acts trivially on the $(2,0)$-forms. Finally, if $\Omega' \in H^0(A_y,\Omega^2_{\mathcal A_{\Delta} \vert A_y})^0 \subset H^0(A_y,\Omega^2_{\mathcal A_{\Delta} \vert A_y})$ is a form vanishing at the origin, then $s^*_0(h^*(\Omega ') )=0$. These considerations combined with (\ref{def_action}) give that for any form $\Omega '\in H^0(A_y , \Omega ^2 _{{\mathcal A}_{\Delta }|A_y})^0$: \[ \mathcal Z_{\Delta}^* h^*(\Omega')=2f_{\Delta}^*(\Omega').
\] Then, using the vanishing of $\mathcal Z_{\Delta}^*(\Omega)$, this implies: \begin{prop} \label{vanishing_Delta} With the above notations, \[ f_{\Delta}^*:H^0(A_y,\Omega^2_{\mathcal A_{\Delta} \vert A_y})^0 \longrightarrow H^0(C_y,\Omega^2_{{\mathcal C}_\Delta \vert C_y}) \cong H^0(C_y,\omega _{C_y}) \] is the zero map. \end{prop} We will see in the next section that, if the dimension of the family is $\ge 2g$, then for a generic point of $\mathcal U$ and a convenient infinitesimal deformation there is a form in $ H^0(A_y,\Omega^2_{\mathcal A_{\Delta } \vert A_y})^0$ with non-trivial image in $H^0(C_y,\Omega^2_{{\mathcal C}_\Delta \vert C_y})$. This contradicts the Proposition. \vskip 3mm \section{The geometry of the adjoint form and end of the proof} In this section we end the proof of the main Theorem. Assuming that the dimension of the family is $\ge 2g$, we will find a contradiction with Proposition (\ref{vanishing_Delta}). As at the beginning of section $3$ we can assume the existence of the following diagram: \[ \xymatrix@C=1.pc@R=1.8pc{ \mathcal {C} \ar[rd] \ar[rr]^ f && \mathcal {A} \ar[ld]^{\pi } \\ &\mathcal U, } \] where the parameter space ${\mathcal U}$ comes equipped with a generically finite map $\Phi: {\mathcal U} \longrightarrow {\mathcal Y}$. We fix a generic point $y$ in ${\mathcal U}$ and we denote by $\mathbb T$ the tangent space of ${\mathcal U}$ at $y$. Observe that $\mathbb T \hookrightarrow Sym^2 H^{1,0}(A_y)^\ast$. Moreover the surjective map $F_y$ induces an inclusion of $W_y:=H^{1,0}(A_y)$ in $H^0(C_y,\omega_{C_y})$. Let $D$ be the base locus of the linear system generated by $W_y$; therefore $W_y\subset H^{0}(C_y,\omega_{C_y}(-D))$. Lemma 3.1 in \cite{Isog_hyp} states that for a generic two-dimensional subspace $E$ of $W_y$ the base locus of the pencil attached to $E$ is still $D$.
As in the proof of Theorem 1.4 in \cite{Isog_hyp} we consider the map sending $\xi \in \mathbb T$, seen as a symmetric map $\, \cdot \xi:\, W_y=H^{1,0}(A_y)\longrightarrow H^{1,0}(A_y)^*$, to its restriction to $E$. Let $E_0$ be a complement of $E$ in $W_y$. Then we have an element in $E^*\otimes E^* +E^*\otimes E_0^*$ which, by the symmetry, belongs to $Sym^2 E^* + E^* \otimes E_0^*$. This last space has dimension $3+2(g-2)=2g-1$. Since $\dim \mathcal Y \ge 2g$ we get that the linear map \[ \mathbb T \longrightarrow Sym^2 E^* + E^* \otimes E_0^* \] sending $\xi$ to $\delta_{\xi \mid E} $ has a non-trivial kernel. Therefore we conclude the following: \begin{lem}\label{existence_xi} For any $2$-dimensional vector space $E \subset W_y$ there exists a non-zero $\xi \in \mathbb T$ killing all the forms in $E$. Hence, if $\omega_1,\omega_2$ is a basis of $E$, then $\xi \cdot \omega_1=\xi \cdot \omega _2=0$. \end{lem} We want to compute the adjoint form for a basis $\omega_1, \omega _2$ of $E$, as defined in \cite{collino_pirola}. Observe that $\xi$ can be seen as an infinitesimal deformation $\mathcal A_{\Delta }$ of $A_y$. We denote by $F_\xi $ the rank $(g+1)$ vector bundle on $A_y$ attached to $\xi$ via the isomorphism $H^ 1(A_y,T_{A_y})\cong Ext^ 1(\Omega^1_{A_y},\mathcal O_{A_y})$. With this notation the sheaf $F_\xi$ can be identified with $\Omega ^1_{{\mathcal A}_{\Delta}\vert A_y}$. By definition there is a short exact sequence of sheaves: \[ 0\longrightarrow \mathcal O_{A_y} \longrightarrow \Omega ^1_{{\mathcal A}_{\Delta}\vert A_y} \longrightarrow \Omega^1_{A_y} \longrightarrow 0. \] The connecting map $H^0(A_y,\Omega^1_{A_y})\longrightarrow H^1(A_y,\mathcal O_{A_y})$ is the cup-product with $\xi \in H^1(A_y,T_{A_y})$. Since $\xi \cdot \omega_1=\xi \cdot \omega_2=0$, the forms $\omega_1, \omega_2$ lift to sections $s_1,s_2\in H^ 0(A_y,\Omega ^1_{{\mathcal A}_{\Delta}\vert A_y})$. These sections are not unique, but they become unique by imposing that they vanish on the $0$-section of $\mathcal A_{\Delta }\longrightarrow \Delta$.
Then the adjoint form of $\omega_1, \omega_2$ with respect to $\xi$ is defined as the restriction of \[ s_1\wedge s_2 \in H^0(A_y, \Omega ^2_{{\mathcal A}_{\Delta}\vert A_y} )^0 \] to $C_y$. This is a section of $H^0(C_y,\Omega ^2 _{{\mathcal C}_{\Delta}|C_y} )\cong H^0(C_y,\omega _{C_y})$. \vskip 3mm \begin{prop} \label{vanishing_implies_end} If the adjoint form vanishes then $\xi$ belongs to the kernel of $d\Phi: \mathbb T \longrightarrow T_{{\mathcal A}_g}(A_y)=Sym^2 H^{1,0}(A_y)^*$. \end{prop} \begin{proof} According to Theorem 1.1.8 in \cite{collino_pirola}, the adjoint form vanishes if and only if the image of $\xi $ in \[ H^1(C_y, T_{C_y}(D))\cong Ext^1(\omega_{C_y}(-D), \mathcal O_{C_y}) \] is zero. This says that the corresponding extension is trivial, so the short exact sequence in the first row of the next diagram splits (i.e. $i^* \Omega ^1_{{\mathcal C}_{\Delta}\vert C_y}=\mathcal O_{C_y} \oplus \omega _{C_y}(-D)$): \[ \xymatrix@C=1.pc@R=1.8pc{ 0 \ar[r] & \mathcal O_{C_y} \ar[r] \ar@{=}[d] & i^* \Omega ^1_{{\mathcal C}_{\Delta}\vert C_y} \ar[r] \ar[d] & \omega _{C_y}(-D) \ar[r] \ar@{^{(}->}[d]^{i} & 0 \\ 0 \ar[r]& \mathcal O_{C_y} \ar[r] & \Omega ^1_{{\mathcal C}_{\Delta}\vert C_y} \ar[r] & \omega _{C_y} \ar[r] & 0 } \] which implies that the connecting homomorphism in the associated long exact sequence of cohomology $H^0(C_y,\omega _{C_y}(-D))\longrightarrow H^1(C_y,\mathcal O_{C_y})$ is trivial. Therefore $\xi \cdot H^0(C_y,\omega _{C_y}(-D))=0$ and in particular $\xi \cdot W_{y}=0$. Hence $\xi$ is in the kernel of $d\Phi_y$. \end{proof} \vskip 3mm \textbf{End of the proof of \ref{m.res}:} Since $d\Phi$ is injective at a generic point, we are reduced to proving the vanishing of the adjoint form to reach a contradiction. Our aim is to use the vanishing obtained in Proposition (\ref{vanishing_Delta}). We fix a generic point $y\in \mathcal U$ and consider $(\xi, E)$ as in Lemma (\ref{existence_xi}).
As in Section 3, set $\Delta:=Spec \, \mathbb C[\varepsilon]/(\varepsilon ^2)$ and let $\alpha_{\xi }: \Delta \longrightarrow \mathcal U $ be the map attached to $\xi $. From now on we restrict our family over $\mathcal U$ to a family over $\Delta $. Moreover we denote by ${\mathcal C}_{\Delta }$ the pull-back of $\mathcal C$ to $\Delta$, hence we have an infinitesimal deformation of $C_y$: \[ {\mathcal C}_{\Delta } \longrightarrow \Delta. \] Pulling back to $\Delta $ the family of abelian varieties and the family of curves, we get the diagram: \begin{equation}\label{definition_Gamma} \xymatrix@C=1.pc@R=1.8pc{ &\mathcal A_{\Delta } \ar[dd] \ar[rr] && \mathcal A \ar[dd]^{\pi} \\ {\mathcal C}_{\Delta} \ar[ru]^{f_{\Delta }} \ar[rd] \ar[rr] && \mathcal {C} \ar[ru]^f \ar[rd]& \\ &\Delta \ar[rr] &&\mathcal U. } \end{equation} Notice that $E$ is generated by two linearly independent forms $\omega_1, \omega_2 \in H^0(A_y,\Omega ^1_{A_y})\subset H^0(C_y,\omega_{C_y})$; we still denote by $s_1,s_2\in H^ 0(A_y,\Omega ^1_{{\mathcal A}_{\Delta}\vert A_y})$ the liftings of these forms. Then, by Proposition (\ref{vanishing_Delta}), the restriction to ${\mathcal C}_{\Delta}$ of the form $s_1 \wedge s_2$ is zero. Hence the adjoint form is zero and thus the Theorem is proved. \qed \vskip 3mm \begin{rem} Since the Voisin set $V_2$ can be seen as the preimage of the rational orbit of the image of the origin in the corresponding Kummer variety, our Theorem gives a bound on the dimension of the locus of Kummer varieties where these orbits are positive dimensional and ``non-degenerate'' (that is, the preimage in the abelian variety generates the abelian variety itself). \end{rem}
\section{Introduction} Magnetic skyrmions are swirls in a magnetic spin system, analogous to the skyrmion particle originally described in the context of pion fields~\cite{Pf_MnSi_Science_09,2010:Jonietz:Science,2010:Yu:Nature,Tokura_CuOSeO_LTEM_Science_12,Rocsh_MnSi_emergent_NatPhys,2013:Milde:Science,2013:Fert:NatureNano,Tokura_review_skyrmion_Natnano_13,Romming2013,Tokura_CuOSeO-MnSi_ratchet_Natmater_14,2015:Schwarze:NatureMater}. Due to their unique topological properties, they are proposed as a promising candidate for advanced spintronics applications \cite{Tokura_review_skyrmion_Natnano_13}. The most famous skyrmion-carrying material systems are the helimagnets with the crystalline space group $P2_13$, such as MnSi, FeGe, and Cu$_2$OSeO$_3$ \cite{Pf_MnSi_Science_09,2010:Munzer:PhysRevB, Adams_Domains_2010,2010:Yu:Nature,2011:Yu:NatureMater,Seki2012}. The magnetic phases and the formation of the skyrmion lattice phase are well described by a Ginzburg-Landau approach that takes thermal fluctuations into account \cite{Pf_MnSi_Science_09}. The size of a skyrmion is usually 20-70~nm in these materials, which to a large degree limits the techniques available to fully characterize their magnetic structure. So far, small angle neutron scattering (SANS) \cite{Pf_MnSi_Science_09} and Lorentz transmission electron microscopy (LTEM) \cite{2010:Yu:Nature} have successfully been applied to characterize the skyrmion lattice phase on a macroscopic and microscopic scale, respectively. On the other hand, spontaneous symmetry breaking and domain formation are natural consequences of magnetic ordering, suggesting that a similar domain effect may exist in the skyrmion lattice phase. However, this requires a characterization technique that probes the material on a mesoscopic scale, in between the local probing of LTEM and the macroscopic averaging of neutron diffraction.
Here, we present resonant soft x-ray scattering on single crystal Cu$_2$OSeO$_3$, which covers the length scale needed to observe skyrmion domains, revisiting earlier work by Langner \textit{et al.}~\cite{Langner2014}. Among all $P2_13$-type skyrmion-carrying systems, Cu$_2$OSeO$_3$ is a unique compound due to its complex crystalline structure compared with B20 helimagnets, as well as its dielectric and ferroelectric properties \cite{Seki2012}. It is composed of a complex arrangement of distorted CuO$_5$ square-based pyramids and trigonal bipyramids, and a lone-pair tetrahedral SeO$_3$ unit \cite{Berger_Cu2OSeO3_infrared_PRB_10}. The oxygen atoms in the unit cell are shared among these basic elements. All copper ions possess a divalent oxidation state; however, they are distinguished by their oxygen environment. Below $T_c \approx 60$~K the material displays ferrimagnetic ordering. Bos \textit{et al.}\ \cite{Bos2008} determined the magnetic properties and found an effective moment of $\sim$1.36~$\mu_\mathrm{B}$/Cu, which is lower than the value of $\sim$1.73~$\mu_\mathrm{B}$/Cu expected for Cu$^{2+}$, where only the spin moment plays a role. Such a reduced moment is commonly found in metal oxides. Field-dependent magnetization measurements at 5~K give a saturation value of 0.5~$\mu_\mathrm{B}$/Cu, i.e., half the value expected for a $S=1/2$ spin system, indicative of a collinear ferrimagnetic alignment. The anti-aligned spins are situated on the two chemically distinct copper sites in a ratio of 3:1. The Cu sites have a strong Dzyaloshinskii-Moriya interaction, which is at the origin of the ferroelectricity of this material \cite{Yang2012}. Cu$_2$OSeO$_3$ has an energy hierarchy similar to other metallic helimagnets \cite{Tokura_review_skyrmion_Natnano_13}.
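As a back-of-the-envelope check of the moment values quoted above (our own illustration, not part of the original analysis), the spin-only expectations for $S=1/2$, $g=2$ and the 3:1 anti-aligned site ratio can be computed directly:

```python
import math

# Spin-only estimates for Cu^2+ (S = 1/2, g = 2); the 3:1 anti-aligned
# site ratio is the one stated in the text above.
g, S = 2.0, 0.5

# Curie (effective) moment per Cu: g * sqrt(S(S+1)) mu_B, expected ~1.73
mu_eff = g * math.sqrt(S * (S + 1))

# Net saturation moment per Cu for 3 up : 1 down sublattices:
# (3 - 1)/(3 + 1) * g * S mu_B
mu_sat = (3 - 1) / (3 + 1) * g * S

print(f"effective moment: {mu_eff:.2f} mu_B/Cu")
print(f"saturation moment: {mu_sat:.2f} mu_B/Cu")
```

This reproduces the expected $\sim$1.73~$\mu_\mathrm{B}$/Cu effective moment and the measured 0.5~$\mu_\mathrm{B}$/Cu saturation value for the collinear ferrimagnetic arrangement.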
For the skyrmion phase, the helix propagation orientation is weakly pinned by the cubic anisotropy, and can easily be unpinned by introducing fluctuations, such as an electrical field \cite{White_Cu2OSeO3_SANS_E_field_rot_PRL_14} or a thermal gradient \cite{Tokura_CuOSeO-MnSi_ratchet_Natmater_14}. Density-functional-theory calculations show that the propagation wave vectors along all orientations are degenerate for the skyrmion phase \cite{Fudan_Cu2OSeO3_theory_RPL_12}. Therefore, it is expected that under weak perturbation conditions, multiple skyrmion domains can exist. The domains have identical absolute values of the propagation vectors, but differ in orientation. This has been observed in both MnSi and Cu$_2$OSeO$_3$ systems by real-time LTEM, and in Fe$_{1-x}$Co$_x$Si by SANS \cite{2010:Munzer:PhysRevB,Adams_Domains_2010}. One can observe rotating skyrmion domains, confirmed by two sets of six-fold-symmetric spots in the Fourier-transformed images \cite{Tokura_CuOSeO-MnSi_ratchet_Natmater_14}. In a recent study, Langner \textit{et al.}~\cite{Langner2014} reported resonant soft x-ray scattering (REXS) of Cu$_2$OSeO$_3$. In their experiment the wavelength of the polarized x-rays was tuned to the Cu $L_3$ edge, and the magnetic diffraction spots were captured on the CCD camera plane in the (001) Bragg condition. In the camera image, the shape of the magnetic satellites is field- and photon-energy-dependent, and develops a fine structure, ultimately splitting into more than one spot. The authors interpret this spot splitting as arising from the moir\'{e} pattern of two superposed skyrmion sublattices, which originate from two inequivalent Cu sites, as evidenced by a 2~eV split in their x-ray absorption spectra. We performed resonant soft x-ray diffraction experiments on a well-characterized Cu$_2$OSeO$_3$ single crystal \cite{EPFL-ARTICLE-196610} and obtained reciprocal space maps in the $hk$-plane of the skyrmion phase.
In the following we give a detailed description of these measurements and present a critical discussion of Langner \textit{et al.}'s results, together with an alternative explanation for the observed splitting of the magnetic diffraction peaks in Cu$_2$OSeO$_3$ based on the formation of a multidomain state. \section{Resonant scattering\label{sec:theo}} For resonant scattering at the $L_{2,3}$ edges of $3d$ transition metals it is sufficient to take only electric-dipole transitions into account \cite{Gerrit_REXS_Physique_08}. Further, there are two main characteristics of $3d$ materials worth noting. First, the photon energy falls into the range of 0.4-1~keV, leading to relatively long wavelengths, which limits the number of accessible materials for experiments in reflection geometry. At the Cu $L_3$ edge, the x-ray wavelength $\lambda$ is $13.3$~\AA. For B20 helimagnetic metals, such as MnSi, Fe$_x$Co$_{1-x}$Si, or FeGe, the (structural) lattice constant $d$ is around 4.5-4.7~\AA\ \cite{Tokura_review_skyrmion_Natnano_13}. Since $\lambda = 2d \sin \theta$, where $\theta$ is the Bragg angle, no structural Bragg reflection is accessible. Cu$_2$OSeO$_3$, on the other hand, has a relatively large lattice constant ($d$ = 8.925~\AA) \cite{Bos2008} so that the (forbidden) (001) peak is accessible. This provides an ideal condition for performing resonant elastic x-ray scattering (REXS) in reflection geometry. Second, the penetration depth of soft x-rays varies strongly for photon energies across the absorption edge \cite{Gerrit_REXS_Physique_08}. The scattering is more bulk-sensitive for photon energies below the $L_3$ and further above the $L_2$ edges, whereas it is more surface-sensitive at resonance. In Cu$_2$OSeO$_3$ the x-ray attenuation length at normal incidence at the $L_3$ edge maximum is 95~nm, while below the absorption edge the attenuation length is 394~nm \cite{vanderFigueroa}.
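The accessibility argument above can be restated numerically; the following sketch (function name is ours, the wavelength and lattice constants are those quoted in the text) simply tests whether the Bragg condition $\lambda = 2d\sin\theta$ admits a solution:

```python
import math

# Bragg's law lambda = 2 d sin(theta) is solvable only if lambda <= 2d.
LAMBDA_CU_L3 = 13.3  # x-ray wavelength at the Cu L3 edge, in Angstrom

def bragg_angle_deg(d, lam=LAMBDA_CU_L3):
    """Bragg angle in degrees for lattice spacing d (Angstrom), or None
    if the reflection is geometrically inaccessible."""
    s = lam / (2 * d)
    return None if s > 1 else math.degrees(math.asin(s))

# B20 helimagnets (d ~ 4.5-4.7 A): 2d < lambda, no structural reflection
print(bragg_angle_deg(4.7))    # None
# Cu2OSeO3 (d = 8.925 A): the (001) condition is met at theta ~ 48 deg
print(bragg_angle_deg(8.925))
```

With the Cu$_2$OSeO$_3$ lattice constant the (001) reflection sits at a comfortable scattering angle, while for the B20 metals no solution exists, in line with the discussion above.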
In a REXS experiment, magnetic diffraction occurs around a structural Bragg peak, as the local magnetic moments are tied to the magnetic atoms that give rise to the resonant diffraction. Therefore, for scattering from single-crystalline Cu$_2$OSeO$_3$, the positions of the magnetic satellites around (001) yield information about the orientation and periodicity of the modulation. The diffraction intensity reveals information about the detailed magnetization configuration $\mathbf{M}(\mathbf{q})$. This is the basic principle for characterizing magnetic structures and for distinguishing between different magnetic phases. Note that for space group $P2_13$ the (001) peak is crystallographically forbidden and absent in off-resonant scattering; however, the anisotropic third-rank tensor stemming from the mixed dipole-quadrupole term allows for the extinction peak to appear for non-centrosymmetric crystals at the x-ray resonance condition \cite{Templeton_PRB_94,Dmitrienko_PRL2012}. Cu$_2$OSeO$_3$ is a chiral magnet hosting ordered spin helices, which further form, in a pocket of the $T$-$H$ (temperature vs magnetic-field) phase space, `crystalline' magnetic order in the form of the skyrmion lattice. The magnetic modulation is incommensurate, which means that it is decoupled from the atomic lattice. Moreover, the modulation has a long periodicity \cite{Pf_CuOSeO_PRL_12}. Therefore, the continuum approximation can be applied to model the magnetic properties \cite{Tokura_review_skyrmion_Natnano_13}. The system's ground state ($H =0$) is the one-dimensional, helically ordered state composed of single-harmonic modes, where $\mathbf{q}_h$ is the wave vector of the helix with $\lambda_h=2\pi/{q}_h$ being the real-space helical pitch. The orientation of the modulation is pinned along a $\langle100\rangle$ direction by the cubic anisotropy \cite{White_Cu2OSeO3_E_rotation_PRL_14}.
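For concreteness, the single-harmonic helical state described above can be written explicitly; the following form (with phase and handedness conventions chosen purely for illustration) propagates along $x \parallel \mathbf{n}_1$ and rotates in the $\mathbf{n}_2$-$\mathbf{n}_3$ plane:
\[
\mathbf{M}^\mathrm{hel}(x) = M_S \left[ \mathbf{n}_2 \cos (q_h x) + \mathbf{n}_3 \sin (q_h x) \right], \qquad q_h = \frac{2\pi}{\lambda_h}.
\]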
The magnetization configuration for one helical pitch is illustrated in Fig.\ \ref{fig_1}(a), in which the modulation is along $x$. This is the elemental unit of the helical periodic structure, i.e., the motif of the magnetic crystal. It is worth mentioning that the helical pitch $\lambda_h$ is equal to the helix-to-helix distance $a_h$ for Cu$_2$OSeO$_3$, as well as for other B20 metallic helimagnets. This is well-established by both SANS \cite{Pf_CuOSeO_PRL_12,Tokura_CuOSeO_rotation_PRB_12} and LTEM \cite{Tokura_CuOSeO_LTEM_Science_12,White_Cu2OSeO3_LTEM_PNAS_15} studies. \begin{figure}[ht!] \begin{center} \includegraphics[width=8.6cm]{Fig1_MagUnits.eps} \caption{Magnetic elemental units and unit cells of the magnetically ordered phases of Cu$_2$OSeO$_3$. The magnetic elemental units are shown for (a) the helical and (b) the conical phase. The magnetic elemental unit for the skyrmion phase is indicated in (c) along with the basis vectors of the magnetic unit cell. The elemental units give rise to the form factors for resonant diffraction and the magnetic unit cells to the structure factors of resonant scattering, as well as the magnetic reciprocal space maps.} \label{fig_1} \end{center} \end{figure} Above a certain magnetic field $H_{c1}$, the conical spiral state becomes the lowest-energy solution. The single-harmonic spiral has the same pitch as the helical state. The spiral rotates in the $\mathbf{n}_2$-$\mathbf{n}_3$ plane. The magnetization component along the $\mathbf{n}_1$-direction is proportional to the magnetic field. At a certain field, $H_{c2}$, all the magnetization vectors are parallel, forming the ferrimagnetic state. The conical periodic order can also be described by a unit cell that consists of two conical spirals. The skyrmion vortex structure is a metastable solution in the phenomenological model, unless thermal fluctuations are taken into account \cite{Pf_MnSi_Science_09}.
A single skyrmion vortex, as shown in Fig.\ \ref{fig_1}(c), can be written in the form of an axially symmetric magnetization distribution \cite{VA}, \begin{equation} \begin{aligned} m_1^\mathrm{sky} (\rho,\phi) &= M_S \sin [\theta(\rho)]\cos [\kappa(\phi+\phi_0)] \,\,\,,\\ m_2^\mathrm{sky} (\rho,\phi) &= M_S \sin [\theta(\rho)]\sin[\kappa(\phi+\phi_0)] \,\,\,,\\ m_3^\mathrm{sky}(\rho,\phi) &= M_S \lambda \cos [\theta(\rho)] \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,, \label{eq_27} \end{aligned} \end{equation} \noindent using polar coordinates with $\rho=\sqrt{(x^2+y^2)}$ and $\phi=\arctan (y/x)$. $\theta(\rho)$ satisfies the Euler equation and $\kappa$ is the winding number. Thus, the form factor for an individual skyrmion can be written in the form of \begin{equation} f_\mathrm{sky}=\mathbf{V} \iint_{\mathrm{sky}}(\mathbf{\epsilon}_s^*\times\mathbf{\epsilon}_i) (m_1^\mathrm{sky}\mathbf{n}_1 + m_2^\mathrm{sky} \mathbf{n}_2 + m_3^\mathrm{sky}\mathbf{n}_3) e^{i\mathbf{q} \cdot \mathbf{r}} d\mathbf{r}. \label{eq_29} \end{equation} \noindent The integral is taken over the circular area of a skyrmion vortex. In contrast to the helical and conical states, the skyrmion state is a two-dimensional solution. The `crystal' structure is essentially a hexagonal-type two-dimensional lattice. Therefore, the two-dimensional unit cell can be chosen as shown in Fig.\ \ref{fig_1}(c). The structure factor then becomes \begin{equation} F_\mathrm{sky} = f_\mathrm{sky} (1 + e^{i\mathbf{q}\cdot\mathbf{a}_1}+e^{i\mathbf{q}\cdot\mathbf{a}_2}+e^{i\mathbf{q}\cdot\mathbf{a}_3}) \,\,\,\,, \label{eq_30} \end{equation} \noindent where $\mathbf{a}_1$, $\mathbf{a}_2$, $\mathbf{a}_3$ are the real-space basis vectors, which are rotated by $60^\circ$ with respect to each other. The core-to-core distance is $a_1=a_2=a_3$, which can be regarded as the `lattice constant' of the skyrmion crystal. \begin{figure}[t!] 
\begin{center} \includegraphics[width=8.6cm]{Fig2_MagSat.eps} \caption{(a) The helical phase is obtained by repeatedly stacking 1D helical chains to form a 2D magnetization pattern. The corresponding magnetic satellites come in pairs of $\pm {\mathbf{q}}_h$, lying on perpendicular axes. (b) Magnetic satellites in reciprocal space for the skyrmion state, where the spots are in the $hk$ plane ($k$ is perpendicular to the plane of the paper).} \label{fig_2} \end{center} \end{figure} Using the form factors of the helical ($f_\mathrm{h}$), conical ($f_\mathrm{c}$), and skyrmion ($f_\mathrm{sky}$) motifs, as well as their structure factors $F_\mathrm{h}$, $F_\mathrm{c}$, and $F_\mathrm{sky}$ of the unit cells, one can easily obtain the reciprocal space maps. In the helical state, the `lattice constant' is equal to the helical pitch. Therefore, the first-order diffraction peaks appear at $\pm\mathbf{q}_h$, and the reciprocal lattice is purely one-dimensional. In Cu$_2$OSeO$_3$, the direction of $\mathbf{q}_h$ is not entirely degenerate, but additionally governed by the sixth-order magnetic anisotropy, giving rise to the three-fold degenerate preferred orientation along the three equivalent $\langle 001 \rangle$ directions. Consequently, three spatially separated helical domains are expected. Moreover, the helical magnetic reciprocal space lattice has to be imposed on the crystalline reciprocal space lattice in order to obtain the diffraction condition. The reciprocal space of the helical crystal is plotted in Fig.\ \ref{fig_2}(a), and summarized in Table \ref{tab_1}. In the conical state, the reciprocal space is similar to that of the helical phase in that the first-order diffraction peaks (modulation vector) appear at $\pm\mathbf{q}_h$. On the other hand, the direction of $\pm\mathbf{q}_h$ is entirely governed by the magnetic field direction, and in fact parallel to it. Therefore, only a `single domain' state is observed, as summarized in Table \ref{tab_1}.
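These reciprocal space maps follow from geometric phase sums such as Eq.~(\ref{eq_30}); a minimal numerical sketch of the skyrmion structure factor, using a unit form factor and a hypothetical lattice constant, is:

```python
import numpy as np

def structure_factor(q, basis, f_sky=1.0):
    """F(q) = f_sky * (1 + sum_i exp(i q . a_i)) for the three real-space
    basis vectors a_i of the hexagonal skyrmion lattice."""
    return f_sky * (1.0 + sum(np.exp(1j * np.dot(q, a)) for a in basis))

a = 60e-9  # hypothetical skyrmion core-to-core distance (m)
basis = [a * np.array([np.cos(t), np.sin(t)])
         for t in np.deg2rad([0.0, 60.0, 120.0])]  # rotated by 60 degrees

# In the forward direction all four terms add in phase: |F| = 4 |f_sky|
print(abs(structure_factor(np.zeros(2), basis)))  # -> 4.0
```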
In the skyrmion state, the reciprocal space [cf., Fig.\ \ref{fig_2}(b)] has three reciprocal-space basis vectors $\mathbf{\tau}_1$, $\mathbf{\tau}_2$, and $\mathbf{\tau}_3$. They are separated by $60^\circ$ and are related to the three lattice constants by $a_i = 4 \pi / (\sqrt{3}\tau_i)$. Therefore, the diffraction peaks appear at $\pm \mathbf{\tau}_i$ around the (001) peak, as shown in Fig.\ \ref{fig_2}(b) and Table \ref{tab_1}. \begin{table}[htb] \caption{Magnetic modulation vectors for the magnetic phases of Cu$_2$OSeO$_3$, and the associated magnetic satellites observable in a REXS experiment.} \label{tab_1} \begin{tabular}{lcc} \toprule Phase & Modulation vectors & Magnetic reflections \\ \hline Helical & (0,0,$q_h$), (0,0,-$q_h$) & (0,0,1$\pm q_h$)\\ & ($q_h$,0,0), (-$q_h$,0,0) & ($\pm q_h$,0,1) \\ & (0,$q_h$,0), (0,-$q_h$,0) & (0,$\pm q_h$,1) \\ & & \\ Conical & (0,0,$q_h$), (0,0,-$q_h$) & (0,0,1$\pm q_h$)\\ & & \\ Skyrmion & ($\tau$,0,0), (-$\tau$,0,0) & ($\pm \tau$,0,1) \\ & $\left( -\frac{1}{2}\tau , \frac{\sqrt{3}}{2}\tau ,0 \right)$, $\left( \frac{1}{2}\tau,-\frac{\sqrt{3}}{2}\tau,0 \right)$ & $\left( \mp \frac{1}{2}\tau, \pm \frac{\sqrt{3}}{2}\tau,1 \right)$ \\ & $\left( -\frac{1}{2}\tau,-\frac{\sqrt{3}}{2}\tau,0 \right)$, $\left( \frac{1}{2}\tau , \frac{\sqrt{3}}{2}\tau , 0 \right)$ & $\left( \pm \frac{1}{2}\tau , \pm\frac{\sqrt{3}}{2}\tau , 1 \right)$ \\ & & \\ \hline \hline \end{tabular} \end{table} \section{Experimental REXS Results} We performed REXS experiments on a well-characterized Cu$_2$OSeO$_3$ single crystal \cite{EPFL-ARTICLE-196610}. Instead of taking single CCD images at the crystalline (001) Bragg condition, we carried out reciprocal space maps (RSMs) by rocking the sample $\pm 2.5^\circ$ around the (001) peak such that the entire helix propagation-related reciprocal space is covered. Figure \ref{fig_3}(a) shows the RSM of the $hk$ plane ($l=1$) at 56.6~K in an applied field of 30~mT along the (001) direction.
The incident x-rays are linearly polarized with a photon energy of 931.25~eV. Six sharp satellite peaks can be observed, corresponding to the skyrmion phase. Moreover, when scanning both temperature and field across the skyrmion phase region, no peak splitting is observed. This suggests a six-fold symmetric equilibrium ordering in the entire skyrmion phase pocket. \begin{figure}[h] \includegraphics*[width=7cm]{Fig3_REXS.eps} \caption{(Color online). (a) Experimentally obtained reciprocal space map in the $hk$ plane around the Cu$_2$OSeO$_3$ (001) Bragg peak (blanked out) in the skyrmion phase. (b) X-ray absorption spectra. Top: fluorescence yield; bottom: integrated satellite intensity. (c-e) CCD images of single, double, and triple split diffraction patterns.} \label{fig_3} \vspace*{-0.2cm} \end{figure} We performed RSMs for each energy point, and plot the spectroscopic profile using the integrated satellite intensity in Fig.\ \ref{fig_3}(b), bottom panel. Figures \ref{fig_3}(c-e) show single-shot CCD images in the (001) Bragg condition with the field orientation rotated $15^\circ$ away from the (001) direction (in the scattering plane) and from the direction of the incoming x-rays. Now, the well-defined six spots split into two, and finally three sets. This confirms the existence of a multidomain skyrmion state, where each domain has a different helix propagation orientation, which can be intentionally created by introducing a magnetic field gradient. This observation of a multidomain state is consistent with earlier LTEM work by Tokura \textit{et al.}\ \cite{Tokura_CuOSeO-MnSi_ratchet_Natmater_14}, where split peaks (in the Fourier transforms of the LTEM domain patterns) were observed as part of a dynamic domain rotation process (see Supplementary Movie S2 in Ref.~\cite{Tokura_CuOSeO-MnSi_ratchet_Natmater_14}). 
It has to be noted that x-ray-based techniques sample a much larger area than electron-microscopy-based techniques, meaning that a multidomain state observable by LTEM will be picked up by REXS as well. \section{Discussion} Resonant soft x-ray diffraction experiments on Cu$_2$OSeO$_3$ single crystals have previously been carried out by Langner \textit{et al.} \cite{Langner2014}, where the observation of two sets of six-fold symmetric spots has been reported. The authors state that the peak splitting arises from the two inequivalent Cu sites. They support this statement by an observed 2-eV difference in energy profiles for the so-called `left' and `right' spot spectra (shown in Fig.~2 of Ref.~\cite{Langner2014}). They further argue that the two-fold splitting is linked to an in-plane rotation of two skyrmion sublattices, leading to a moir\'{e} pattern (Fig.~4 in Ref.~\cite{Langner2014}). Bond valence sum calculations show that the two inequivalent Cu$^{\mathrm{I}}$ and Cu$^{\mathrm{II}}$ sites in Cu$_2$OSeO$_3$ (where the superscripts I and II refer to the different lattice sites, not to different oxidation states) have practically the same valence charge \cite{Bos2008}, and density functional theory calculations show that their unoccupied states have very similar energies \cite{Yang2012}. We note that the Cu $L_{2,3}$ transition for Cu$^{2+}$ is $3d^9 \rightarrow 2p^5 3d^{10}$. In the final state the $3d$ shell is full, which reduces the transition to a one-electron process without $2p$-$3d$ core-hole interaction. This gives a single absorption peak at 931~eV without multiplet splitting \cite{Laan1992}. There are no known Cu $d^9$ compounds with such a large splitting energy, and in fact, the energy splitting that could be expected would be well below $\sim$1~eV.
As reported by Bos \textit{et al.}\ \cite{Bos2008}, and earlier by others, Cu$^\mathrm{I}$ and Cu$^\mathrm{II}$ have practically the same valence charge, which can only result in a minute energy shift in the Cu $L_3$ absorption spectrum, well below the energy resolution limit (and certainly less than the 2~eV reported in Ref.~\cite{Langner2014}). Note that the case would, of course, be very different for systems in which multiplet splitting exists, such as Fe compounds \cite{Gerrit-Theo91}. Since CuO$_2$ exhibits a peak at 933~eV \cite{Laan1992}, one possible explanation is that the higher energy peak is due to a Cu$^{1+}$ contamination. Alternatively, another possible source of the discrepancy is the way the energy scans are carried out. In particular, spectroscopic data obtained by analyzing the local pixels at the CCD plane is rather inaccurate, as the definition of the `left' and `right' spots on the camera is arbitrary for each energy. Most importantly, for different photon energies, scattering from the same propagation wave vector will result in a shift of the spot in the camera plane, leading to a `rocking-curve-like' Gaussian peak. Therefore, from a spectroscopic viewpoint, there is no evidence that the split-satellite peak is due to inequivalent Cu sites. A core argument for the formation of double-split six-fold diffraction patterns, given in Ref.~\cite{Langner2014} (Fig.\ 4), is that the superposition of two in-plane rotated skyrmion sublattices leads to a moir\'{e} pattern. However, the real-space moir\'{e} pattern and the presented Fourier transform (diffraction pattern) are mathematically not related. Rather, the superposition leads to new components in the Fourier spectrum, in particular six-fold symmetric satellites around the main reciprocal lattice points that result from the long-wavelength beating in the moir\'{e} pattern---a phenomenon well-known from hexagonal coincidence lattices \cite{Graphene2014}.
Further, it is important to consider another requirement that has to be met in order to carry out REXS experiments in a quantitative way. In the experiment reported in Ref.~\cite{Langner2014} the skyrmion plane is always perpendicular to the vertical direction of the laboratory reference frame, given by the fixed direction of the magnetic field, and it is thus independent of the goniometer angle $\theta$. There are (at least) six wave vectors ($\tau$) coupled to the (001) Bragg peak, giving rise to the observed magnetic peaks. In Fig.~2(a) in Ref.~\cite{Langner2014}, these magnetic peaks are different in both amplitude and orientation. Therefore, for a single $\theta$, it is impossible to reach the diffraction conditions for all six magnetic peaks at the same time. The observed skyrmion pattern (CCD images in Figs.~1(b) and 2(a) in \cite{Langner2014}) was collected in the \textit{structural} (001) Bragg condition, which is not the correct diffraction condition for any of the magnetic satellites. As a result of this, the satellites still have intensity, analogously to sitting at the edge of a rocking-curve peak. A single-shot CCD image corresponds to a curved plane in reciprocal space, which is not equal to the skyrmion plane in reciprocal space. Consequently, the skyrmion diffraction spots will not end up on a circle, but on an oval, as can be seen in Fig.~2(a) in Ref.~\cite{Langner2014}. These satellite spots on the camera do not correspond to the peak position of (001)+$\tau$, but to poorly defined reciprocal space points that can deviate significantly from (001)+$\tau$. Also, the magnetic peaks of (001)+$\tau$ will not necessarily appear on the same oval for a single goniometer angle as they do not reach the diffraction conditions at this angle. Instead, a much simpler explanation of a peak splitting in this context is the occurrence of two \textit{non-superimposed} skyrmion lattice domains that are simultaneously sampled by the wide x-ray beam.
\section{Summary and Conclusions} In conclusion, we used REXS to study the chiral magnet Cu$_{2}$OSeO$_{3}$. We presented a detailed discussion of the magnetic contrast stemming from the magnetic phases. We showed experimental results of the six-fold symmetric magnetic diffraction pattern, in which the peaks were unsplit, double-split, or triple-split, depending on the magnetic history of the sample. This clearly contradicts the interpretation given in Ref.\ \cite{Langner2014}, in which the double-split peaks were associated with the two chemically distinct Cu sites. Instead, by carefully performing XAS measurements, we find no evidence of an energy splitting. Rather, a simpler explanation is the occurrence of a multidomain skyrmion state, sampled by the relatively wide x-ray beam. \section*{Acknowledgments} The REXS experiments were carried out on beamline I10 at the Diamond Light Source, UK, under proposals SI-11784 and SI-12958. S.~L.~Z.\ and T.~H.\ acknowledge financial support by the Semiconductor Research Corporation. A.~B.\ and C.~P.\ acknowledge financial support through DFG TRR80 and ERC AdG (291079, TOPFIT).
\section{Introduction} Optical manipulation of fluid interfaces having ultra-low interfacial tension was realised recently in systems of micron-sized emulsion droplets~\cite{Bain:06,Bain:11}. Potential applications of such techniques lie in pumping fluids, performing reaction chemistry at the attolitre scale and understanding the behaviour of oil droplets in surfactant-enhanced oil recovery~\cite{Karlsson:06, Hirasaki:11}. Similar techniques have been applied to deform cells, which have a non-zero bending energy of the lipid bilayer in addition to the usual interfacial tension~\cite{Bronkhorst:95, Guck2000a, Guck2001, Dharmadhikari:04}. The precision and non-destructive nature of manipulating particles by optical tweezers can be a benefit over standard mechanical techniques. In most physically realisable situations the radiation pressure exerted by the beam on the trapped particle is orders of magnitude weaker than the Young's modulus of the solid or the Laplace pressure of the confined fluid. As a result, particles do not deform in the trap: a solid particle retains its initial shape while a fluid droplet assumes a spherical geometry. Therefore, historically many applications of optical tweezing have been limited to exerting~\cite{Ashkin:86,Neuman2004,Hansen:05,McGloin2006} and measuring~\cite{Block1990,gibson:accuracy,Davenport2000} external forces on a rigid trapped object. The situation is, however, different for fluids with low interfacial tension, where the object itself can be deformed in response to the laser field. Ashkin and Dziedzic reported small but measurable deformations on planar soft surfaces using a focused and pulsed laser beam~\cite{Ashkin:73}. Further, Casner and Delville showed that large deformations could be observed if the interfacial tension was lowered~\cite{Casner:01}. It is known that the interfacial tension can be lowered substantially by the addition of surfactants. 
Ward \textit{et al.}\ showed that the interfacial tension of heptane droplets could be lowered to $\gamma \approx 10^{-5} - 10^{-6}\,N m^{-1}$ by the addition of suitably selected surfactants~\cite{Bain:06}. As a result, they were able to deform the emulsion droplets into various shapes using multiple continuous-wave lasers. Our theoretical work presented here is motivated by these experiments. The problem is similar in spirit to that of lipid vesicles deformed via micro-manipulation techniques~\cite{Evans:89} where a bending free energy of the lipid bilayer is balanced against externally applied forces to give its resulting shape. The difference between the membrane and fluid geometry arises from the nature of the constraints imposed on the shape equations. In the vesicle case, the total area is conserved while for a fluid droplet the volume of enclosed fluid is constant throughout the deformation process. Furthermore, based on dimensional arguments one can show that so long as $\frac{\kappa}{\xi^{2}} < \gamma$, where $\kappa$ is the bending modulus and $\xi$ is a length scale based on the radius of curvature of the droplet surface, in the hydrodynamic limit the interfacial tension contribution dominates over the curvature energy contribution. We propose a numerical model that predicts the three-dimensional steady-state shapes of droplets with ultra-low interfacial tension under the influence of one or more optical traps. Several authors have recently published models exploring different regimes of optical deformation of liquid droplets~\cite{Moller:09,Ellingsen:13}. The key feature of the model we propose here is that it does not assume small, linear deformation of the droplet. Our model makes no assumptions on the final shape of the droplet and there is no restriction that the optical traps have to be focused at the centre of the droplet. 
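The dimensional argument above can be illustrated with order-of-magnitude numbers; the bending modulus and length scale below are assumed, typical values for illustration only, not measurements from this work:

```python
kappa = 1e-19  # assumed bending modulus (J), typical soft-membrane scale
xi = 1e-6      # assumed curvature length scale (m), of order the droplet radius
gamma = 1e-6   # interfacial tension (N/m), lower end of the values quoted above

# kappa / xi^2 has units of N/m and competes directly with gamma
print(kappa / xi**2, gamma, kappa / xi**2 < gamma)
```

With these values $\kappa/\xi^{2} \sim 10^{-7}\,N m^{-1}$, an order of magnitude below even the lowest quoted tension, so the interfacial-tension term dominates.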
Our model allows us to investigate the equilibrium shapes of droplets as a function of different parameters, such as laser power $\left(P_{0}\right)$, numerical aperture $\left(NA\right)$, initial droplet radius $\left(R_{d}\right)$, interfacial tension $\left(\gamma\right)$ and the parameter $\left(\tilde{n}\right)$ which is related to the ratio of refractive indices as $\tilde{n} = 1 - \frac{n_{2}}{n_{1}}$, where $n_{1}$ and $n_{2}$ are the refractive indices of the droplet and external media, respectively. From these results we have defined a dimensionless deformation number $N_{d}$ which is a function of $P_{0}$, $NA$, $R_{d}$, $\gamma$ and $\tilde{n}$. Using this dimensionless number we are able to predict previously undescribed shape transformations of a droplet in a single optical trap. The mathematical model is described in the Section~\ref{maths}. This is followed by an outline of the numerical implementation of the model in Section~\ref{numerics}. We then present and discuss the results of the deformation of a droplet in single and multiple optical traps. Finally, we discuss future applications and extensions to the present model. \section{Mathematical Model for Droplet Deformation} \label{maths} In our model we describe the surface of the droplet (the interface between the liquid droplet and its host medium) in terms of a single-valued function $R(\theta,\phi)$ in spherical polar coordinates, where $R$ defines the distance between the interface and a preassigned fixed origin. In the absence of any external forces, a liquid droplet assumes a spherical shape, as a result of two antagonistic forces: the interfacial tension which tends to minimise the area, and the bulk pressure of the internal fluid which translates into a force acting along the local normal to an infinitesimal surface element $dS$. 
The energy function for an isolated droplet can be written as: \begin{equation}\label{isolated_droplet} E = \oint_{A} \gamma dS - \int_{V} P_{int} dV \end{equation} Thus, the internal pressure for an isolated droplet at equilibrium turns out to be $P_{int} = \frac{2\gamma}{R_{d}}$. In the presence of one or more optical tweezers, the total pressure acting at a position ${\bf r}{}$ on an infinitesimal surface element $dS$ at the oil-water interface can be written as: \begin{equation}\label{pressure-eq} P({\bf r}{}) = P_{lap}({\bf r}{}) + P_{opt}({\bf r}{}) + P_{int} \end{equation} where $P_{lap}$ is the Laplace pressure, $P_{opt}$ is the optical pressure due to momentum transfer from the laser field to the interface, and $P_{int}$ is the internal pressure within the droplet, which we will treat as an unknown variable dependent on the specific experimental configuration, but uniform throughout the droplet since we are only considering equilibrium structures. The Laplace pressure $P_{lap}({\bf r}{})$ acting on a surface element on the oil-water interface (interfacial tension $\gamma$), at a position ${\bf r}{}$ and with outward normal ${\bf {\hat{n}}}{}({\bf r}{})$, is calculated using the Young-Laplace equation: \begin{equation} \label{laplace-pressure-equation} P_{lap}({\bf r}{}) = \gamma \nabla \cdot {\bf {\hat{n}}}{}({\bf r}{}) \end{equation} from using Gauss' theorem on the first integral in Eq.~\ref{isolated_droplet}. Note that by convention this pressure acts inwards in the case of a convex droplet surface. The optical pressure $P_{opt}({\bf r}{})$ across the oil-water interface is calculated by considering the momentum transferred to the interface as light is reflected and refracted at the interface. An exact solution for the light field in the presence of a dielectric object could be calculated in the form of a series expansion within the framework of T-matrix theory~\cite{nonspherical-review}. 
However, due to the excessive computational demands that this would impose, we instead adopt a localized ray-optics approximation similar to that described in~\cite{Xu2009}. We generalize the treatment to an arbitrary shape, but at the same time we note that the refractive index ratio $m=\frac{n_{2}}{n_{1}}$ between the oil and water phases is close to one, such that the phase shift of rays crossing the droplet is small compared to the wavelength of the trapping laser and we are close to the Rayleigh-Gans regime~\cite{vanDeHulst}. We will therefore make the approximation that the field is unperturbed by the presence of the droplet and calculate the localized Fresnel reflection and refraction for each surface element using the unperturbed laser field, without considering higher-order reflections. If we consider a surface element on the oil-water interface, at a position ${\bf r}{}$ and with outward normal ${\bf {\hat{n}}}{}({\bf r}{})$, then for an incident beam with momentum density $p_0 n_1{\bf {\hat{s}}}$, where ${\bf {\hat{s}}}$ is the Poynting vector, we can apply standard laws of reflection and refraction~\cite{Herzberger:58} and, after some algebraic manipulation, we find the optical pressure acting on the surface (in the direction of the outward normal) to be: \begin{equation}\label{optical-pressure-eq} P_{opt}({\bf r}{}) = -p_0 n_1 \left((2F_r + F_t) \mu - F_t {\rm sgn}(\mu)\sqrt{(n_2/n_1)^2-1+\mu^2} \right) \end{equation} where $\mu = {\bf {\hat{s}}} \cdot {\bf {\hat{n}}}$, $F_r$ and $F_t$ are the Fresnel power reflection and transmission coefficients for the angle of incidence $\theta_{inc}=\arccos (|\mu|)$, and $n_1$ and $n_2$ are the refractive indices for the media in which the incoming and refracted beams are propagating. We also require a description for the momentum density $p_0 n_1{\bf {\hat{s}}}$ of the beam. 
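Eq.~\ref{optical-pressure-eq} can be sketched directly in code. The unpolarized (s/p-averaged) Fresnel coefficients below are an assumption consistent with the unpolarized beams modelled in Section~\ref{results}, and total internal reflection is not handled:

```python
import math

def fresnel_unpolarized(theta_i, n1, n2):
    """Power reflection/transmission coefficients averaged over s and p
    polarizations; assumes no total internal reflection."""
    ci = math.cos(theta_i)
    ct = math.sqrt(1.0 - (n1 / n2 * math.sin(theta_i))**2)
    rs = ((n1 * ci - n2 * ct) / (n1 * ci + n2 * ct))**2
    rp = ((n1 * ct - n2 * ci) / (n1 * ct + n2 * ci))**2
    Fr = 0.5 * (rs + rp)
    return Fr, 1.0 - Fr

def optical_pressure(mu, p0, n1, n2):
    """Eq. (optical-pressure-eq): pressure along the outward normal for an
    incident ray with mu = s_hat . n_hat and momentum density p0 * n1."""
    Fr, Ft = fresnel_unpolarized(math.acos(abs(mu)), n1, n2)
    root = math.sqrt((n2 / n1)**2 - 1.0 + mu * mu)
    return -p0 * n1 * ((2.0 * Fr + Ft) * mu - Ft * math.copysign(1.0, mu) * root)

# Normal incidence from the oil (n1 = 1.38) into the water (n2 = 1.33):
print(optical_pressure(1.0, 1.0, 1.38, 1.33))
```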
The fact that our droplets are several wavelengths in diameter, significantly larger than the trapping beam waist, implies that a series expansion around the beam focus, such as that presented in~\cite{barton:5th-order} and used in surface optical stress calculations in~\cite{Xu2009}, is not suitable here. However, observing that for the droplet radii we are interested in the interface is generally several Rayleigh lengths from the (tight) laser focus, it becomes apparent that a far-field scalar model of a tightly focused Gaussian beam is appropriate~\cite{Siegman1986}. We can now return to the pressure acting on the surface of the droplet (Eq.~\ref{pressure-eq}). In order to obtain equilibrium droplet shapes we employ a relaxation dynamics scheme at each ``spoke'' of the coordinate system. Thus the time evolution of each point on the surface of the droplet is given by: \begin{equation}\label{equation-of-motion} \frac{dR(\theta,\phi)}{dt} = \beta( P_{opt}(\theta,\phi) - P_{lap}(\theta,\phi) + P_{int}) / (\hat{{\bf n}} \cdot \hat{{\bf r}}) \end{equation} where the $(\hat{{\bf n}} \cdot \hat{{\bf r}})$ term takes account of the fact that the interface normal is not collinear to the (fixed) radial direction along which $R(\theta,\phi)$ is measured -- see Figure \ref{fig:modelfig1}, and hence $R$ will in general vary faster than the rate of normal motion of the interface. The friction coefficient $\beta$ dictates the inverse time scale for a droplet to relax to equilibrium. This, however, cannot be chosen to be arbitrarily large as it compromises the stability of the numerical scheme used to integrate Eq.~\ref{equation-of-motion}. This equation of motion represents overdamped motion. 
Thus our model does not attempt to predict the detailed \emph{transient} motion of the droplet surface over time as it approaches its final shape, since we have deliberately selected a simplified equation of motion that does not take complete account of the hydrodynamic environment of the droplet. Although a final solution to our model represents a physically correct \emph{steady-state solution} for a given set of experimental parameters, we note that the ``time'' variable in our calculations does not represent real-world time, the trajectory taken through parameter space to attain the final solution is not physically rigorous, and if experimental parameters support multiple steady-state solutions then our model may not be able to predict for certain which solution represents the kinetic product from a given set of initial conditions. \begin{figure}[htbp] \centering \subfigure[][]{\includegraphics[width=0.40\textwidth , trim = {2cm 3cm 2cm 1cm},clip = true]{Figure1a.jpg}}\centering \subfigure[][]{\includegraphics[width=0.40\textwidth , trim = {2cm 3cm 2cm 1cm},clip = true]{Figure1b.jpg}}\centering \caption{Droplet configurations corresponding to the ``spoke'' model for (a) an undeformed droplet of $R_{d} = 5.0\,\mu m$ with $\gamma = 10^{-6}\,Nm^{-1}$ and (b) the corresponding converged shape in an optical trap with $P_{0} = 0.20\,W$ and numerical aperture $NA = 1.20$. The Gaussian beam is represented by the black lines, and part of the intensity distribution is also shown. The radial direction $(\hat{{\bf r}})$ of a ``spoke'' is also shown, along with the corresponding interface normal $(\hat{{\bf n}})$.} \label{fig:modelfig1} \end{figure} The equation of motion is integrated with respect to time in order to determine the physically correct steady-state solution, subject to a constant-volume constraint.
The volume of the droplet is defined as: \begin{equation}\label{volume-integral} V = \frac{1}{3} \int_S R(\theta,\phi)^3 \sin\theta d\theta d\phi \end{equation} and hence the boundary condition can be expressed as the constraint: \begin{equation}\label{volume-integral-constraint} \frac{dV}{dt} = 0 = \int_S R(\theta,\phi)^2 \frac{dR(\theta,\phi)}{dt} \sin\theta d\theta d\phi \end{equation} Substituting Eq.~\ref{equation-of-motion} into this, we obtain an expression for the free parameter $P_{int}$ representing the internal pressure of the droplet: \begin{equation}\label{internal-pressure-equation} P_{int} = \left(\int_S \frac{R(\theta,\phi)^2 (P_{lap} - P_{opt})}{\hat{{\bf n}} \cdot \hat{{\bf r}}} \sin\theta d\theta d\phi\right) / \left( \int_S \frac{R(\theta,\phi)^2}{\hat{{\bf n}} \cdot \hat{{\bf r}}} \sin\theta d\theta d\phi \right) \end{equation} For completeness we note that, once the shape of the droplet has converged under these pressure contributions, removing the optical tweezers from our model returns us to the energy function of an isolated droplet (Eq.~\ref{isolated_droplet}). As a result the deformed droplet converges back to a perfect sphere with radius $R_{d}$. \section{Numerical Implementation} \label{numerics} In our model the continuous surface of the droplet is sampled at a certain number of discrete points. These points lie on a regular grid in spherical coordinates, with $N_\theta \times N_\phi$ nodes equally spaced in $(\theta, \phi)$ space. Node (i,j) lies at coordinates $(\theta_i=\frac{i+0.5}{N_\theta}\pi, \phi_j=\frac{j+0.5}{N_\phi}2\pi)$ for $0 \leq i < N_\theta$ and $0 \leq j < N_\phi$. For each node a radius $R_{ij}$ is defined, each representing a point $(R_{ij}, \theta_i, \phi_j)$ on the surface of the droplet (the oil-water interface), so that the radius of each node defines a ``spoke'' extending from the origin in the direction $(\theta,\phi)$ -- see Figure \ref{fig:modelfig1}.
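On such a grid the internal-pressure expression becomes a ratio of weighted sums over the nodes; a sketch, checked on an undeformed droplet, is given below (note that substituting Eq.~\ref{equation-of-motion} into the constant-volume constraint makes $P_{opt}$ enter the numerator with the opposite sign to $P_{lap}$):

```python
import numpy as np

def internal_pressure(R, P_lap, P_opt, n_dot_r, theta, dtheta, dphi):
    """P_int = <R^2 (P_lap - P_opt)/(n.r)> / <R^2/(n.r)>, evaluated on the
    (theta, phi) grid with angular measure sin(theta) dtheta dphi."""
    w = R**2 * np.sin(theta) / n_dot_r * dtheta * dphi
    return np.sum(w * (P_lap - P_opt)) / np.sum(w)

# Sanity check: sphere of radius R_d with no laser -> P_int = 2 gamma / R_d
Nt, Np = 25, 24
th = (np.arange(Nt) + 0.5) / Nt * np.pi
ph = (np.arange(Np) + 0.5) / Np * 2.0 * np.pi
theta, phi = np.meshgrid(th, ph, indexing="ij")
gamma, R_d = 1e-6, 5e-6
R = np.full_like(theta, R_d)
P_int = internal_pressure(R, 2 * gamma / R, np.zeros_like(R),
                          np.ones_like(R), theta, np.pi / Nt, 2 * np.pi / Np)
print(P_int)  # -> 0.4 Pa (= 2 gamma / R_d)
```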
We then solve the equation of motion (Eq.~\ref{equation-of-motion}) on this discretized grid, using finite-difference expressions as an estimate for local derivatives on the surface of the droplet. We can calculate the surface normal $\hat{\mathbf{n}}$ as follows~\cite{Spivak:75}: \begin{equation}\label{surface-normal-equation} \hat{\mathbf{n}}=\frac{\nabla\left(r-R\left(\theta,\phi\right)\right)}{\left|\nabla\left(r-R\left(\theta,\phi\right)\right)\right|} \end{equation} where $r$ is the radial coordinate and $R\left(\theta,\phi\right)$ is the radius of a ``spoke'' at a given $(\theta$,$\phi)$ coordinate. At first glance it might appear convenient to implement Eq.~\ref{laplace-pressure-equation} and \ref{surface-normal-equation} in a two-stage finite-difference scheme, however in practice this two-stage approach leads to a decoupling between odd and even points in the grid, which is difficult to eliminate. The solution is to derive a single expression for the Laplace pressure directly in terms of derivatives of the radius. This combined expression proves to be rather complicated in spherical coordinates: \begin{eqnarray}\label{laplace-eq-with-derivatives} P_{lap}(R, \theta, \phi)&\hspace{-2mm}=\hspace{-2mm}& \gamma \left[ \frac{2u}{R} - \frac{u}{R^2} \left( \cot\theta \frac{\partial R}{\partial \theta} + \frac{\partial^2 R}{\partial \theta^2} + \csc^2\theta \frac{\partial^2 R}{\partial \phi^2} \right) \right. \\ && \hspace{5mm} + \frac{u^3}{R^3} \left( \left( 1 + \frac{1}{R} \frac{\partial^2 R}{\partial \theta^2} \right)\left(\frac{\partial R}{\partial\theta}\right)^2 \right. \nonumber \\ && \hspace{15mm} + \left( \csc^2\theta - \frac{\cos\theta}{R \sin^3\theta} \frac{\partial R}{\partial \theta} \right) \left(\frac{\partial R}{\partial\phi}\right)^2 \nonumber \\ && \hspace{15mm} \left. \left. 
+ \frac{2}{R \sin^2\theta} \frac{\partial R}{\partial\theta}\frac{\partial R}{\partial\phi}\frac{\partial^2 R}{\partial\theta\partial\phi} + \frac{1}{R\sin^4\theta} \left(\frac{\partial R}{\partial\phi}\right)^2 \frac{\partial^2 R}{\partial\phi^2} \right) \right] \nonumber, \end{eqnarray} where: \begin{eqnarray} u &\hspace{-2mm}=\hspace{-2mm}& \left( 1 + \frac{1}{R^2}\left(\frac{\partial R}{\partial\theta}\right)^2 + \frac{1}{R^2\sin^2\theta} \left(\frac{\partial R}{\partial\phi}\right)^2 \right)^{-\frac{1}{2}}. \end{eqnarray} Now, by evaluating Eq.~\ref{optical-pressure-eq}~and~\ref{laplace-eq-with-derivatives} with the help of second-order-accurate finite-difference expressions, and numerically integrating Eq.~\ref{internal-pressure-equation}, we are able to numerically integrate the equation of motion (Eq.~\ref{equation-of-motion}) with respect to time, using the Runge-Kutta-Fehlberg method, until motion ceases and the droplet has converged to a stable shape. To improve computational efficiency we initially use a mesh size of $N_{\theta}=25$ and $N_{\phi}=24$ nodes. Once the droplet shape has converged using this coarse mesh, we refine to $N_{\theta}=51$ and $N_{\phi}=48$. Figure \ref{fig:coarsevsfine} shows a comparison between the converged shapes of a droplet with $R_{d} = 5.0\,\mu m$ in an increasing number of optical traps. It can be seen that the refinement improves the shapes of the droplets in the regions of high curvature where the optical traps are positioned, whilst only minor improvements can be observed for the rest of the droplet. A mesh size of $N_{\theta}=13$ and $N_{\phi}=12$ is also shown in green in Figure \ref{fig:coarsevsfine}. Such a coarse mesh is incapable of ensuring a smooth surface is obtained, while the initial mesh size of $N_{\theta}=25$ and $N_{\phi}=24$ nodes represents a good compromise between computational demands and accuracy, being able to give a reasonable initial estimate of the droplet shape that can later be refined. 
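As a consistency check on Eq.~\ref{laplace-eq-with-derivatives}, the expression can be transcribed directly and evaluated for an undeformed sphere, where all derivatives of $R$ vanish, $u=1$, and the formula must reduce to the familiar Laplace pressure $2\gamma/R$. An illustrative Python transcription (our naming; the derivative arrays would in practice come from the finite-difference stencils described above):

```python
import numpy as np

def laplace_pressure(gamma, R, Rt, Rp, Rtt, Rpp, Rtp, theta):
    """Laplace pressure of the surface r = R(theta, phi), written directly
    in terms of derivatives of R (Rt = dR/dtheta, Rp = dR/dphi, and
    Rtt, Rpp, Rtp the second derivatives), following the combined
    expression in the text."""
    s = np.sin(theta)
    u = (1.0 + (Rt / R)**2 + (Rp / (R * s))**2) ** -0.5
    term1 = 2.0 * u / R
    term2 = -(u / R**2) * (Rt / np.tan(theta) + Rtt + Rpp / s**2)
    term3 = (u**3 / R**3) * (
        (1.0 + Rtt / R) * Rt**2
        + (1.0 / s**2 - np.cos(theta) * Rt / (R * s**3)) * Rp**2
        + 2.0 * Rt * Rp * Rtp / (R * s**2)
        + Rp**2 * Rpp / (R * s**4)
    )
    return gamma * (term1 + term2 + term3)

# Undeformed sphere, R = 2 um, gamma = 1e-6 N/m: all derivatives vanish,
# so the result must equal 2 * gamma / R.
P = laplace_pressure(1e-6, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, np.pi / 3)
```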
Furthermore, the calculation of the Laplace pressure would be erroneous with an insufficient number of mesh points. Our choice of $N_{\theta}$ and $N_{\phi}$ is also verified by considering the well-known fact that the integral of the mean curvature vector $H\hat{{\bf n}}$ over a closed surface $S$ is equal to zero~\cite{Blackmore:85}: \begin{equation} \int \!\!\! \int_{S} H \hat{{\bf n}}\, dA= {\bf 0} \end{equation} As the number of nodes $N_{\theta} \times N_{\phi}$ is increased in our model, this calculated quantity tends to zero. \begin{figure}[htbp] \centering \subfigure[][]{\includegraphics[width=0.3\textwidth, trim=1cm 0cm 1.5cm 0.5cm,clip=true]{Figure2a.jpg}}\centering \subfigure[][]{\includegraphics[width=0.3\textwidth, trim=1cm 0cm 1.5cm 0.5cm,clip=true]{Figure2b.jpg}}\centering \subfigure[][]{\includegraphics[width=0.3\textwidth, trim=1cm 0cm 1.5cm 0.5cm,clip=true]{Figure2c.jpg}}\centering \caption{Comparison between a very coarse mesh of $N_{\theta}=13$ \& $N_{\phi}=12$ points (green), a coarse mesh of $N_{\theta}=25$ \& $N_{\phi}=24$ points (blue) and a refined mesh of $N_{\theta}=51$ \& $N_{\phi}=48$ (red), for a $R_{d} = 5.0\, \mu m$ droplet deformed in (a) two optical traps, (b) three optical traps and (c) four optical traps.} \label{fig:coarsevsfine} \end{figure} \section{Results and Discussion} \label{results} Continuous-wave, unpolarized lasers having a wavelength $\lambda=1064\,nm$ have been modelled for results reported in this paper. Measured values of the refractive index of a heptane droplet and an external water medium, $n_{1}=1.38$ and $n_{2}=1.33$ respectively, have been used in our calculations, unless otherwise stated. \subsection{Single Optical Trap} We begin our discussion with the deformation of a droplet in a single optical trap. The optical trap is positioned such that the centre of the droplet initially coincides with the focal point of the beam. 
The steady-state profiles are obtained when the ``spoke'' positions parameterising the surface do not change as a function of time. Figure \ref{fig:1laser_results1} shows the converged three-dimensional shape of a deformed droplet with initial spherical geometry $R_{d}=2.0\,\mu m$, and a surface tension $\gamma = 10^{-6}\,Nm^{-1}$, as a function of increasing laser power $P_{0}$. The radius of the beam waist for each optical trap was kept constant at $w_{0}=0.282\,\mu m$, which corresponds to a numerical aperture of $NA=1.20$. As the laser power increases the droplet elongates to assume a lozenge form, with its long axis parallel to the direction of propagation of light (taken to be along the $+z$ axis). For laser powers $P_{0} \gtrsim 0.055\,W$ the droplet has a dumbbell-like shape with an hour-glass connecting two spherical caps. For these configurations one of the principal curvatures is negative. Experimental observations of these deformations have so far been limited to their two-dimensional projections along the axis of laser propagation~\cite{Bain:06}. Note that the shapes are asymmetric with respect to $z$. If we consider the gradient force alone then we would expect a symmetric shape, reflecting the symmetry of the intensity distribution in the trapping laser field. However the scattering force (which acts in the local direction of propagation of the laser field) breaks this symmetry and pushes the interface, and hence the droplet, in the direction of propagation of the laser and leads to ``bulging'' of the droplet beyond the laser focus. 
\begin{figure}[htbp] \centering \subfigure[][$P_{0}=0.02\,W$]{\includegraphics[width=0.3\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure3a.jpg}}\centering \subfigure[][$P_{0}=0.04\,W$]{\includegraphics[width=0.3\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure3b.jpg}}\centering \subfigure[][$P_{0}=0.06\,W$]{\includegraphics[width=0.3\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure3c.jpg}}\centering \subfigure[][$P_{0}=0.08\,W$]{\includegraphics[width=0.3\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure3d.jpg}}\centering \subfigure[][$P_{0}=0.10\,W$]{\includegraphics[width=0.3\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure3e.jpg}}\centering \subfigure[][$P_{0}=0.12\,W$]{\includegraphics[width=0.3\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure3f.jpg}}\centering \caption{Deformed conformations of an initial spherical droplet with radius $R_{d}=2.0\,\mu m$ and interfacial tension $\gamma = 10^{-6}\,Nm^{-1}$ in a single optical trap as a function of increasing laser power $P_{0}$. The colour scheme represents the variation in mean curvature, in units of $\mu m^{-1}$.} \label{fig:1laser_results1} \end{figure} The colour scheme used in Figure \ref{fig:1laser_results1} indicates the variation in the calculated mean curvature over the surface of the droplet. For low laser powers the mean curvature is close to $0.5\,\mu m^{-1}$ corresponding to that of a perfect sphere with radius $R_{d} = 2.0\,\mu m$, except at the ``tips'' of the droplet. As the laser power is increased we see a larger variation in the mean curvature, with the lowest mean curvature at the waist of the droplet, and the highest mean curvature at the two spherical caps. For laser powers greater than $P_{0} = 0.12\,W$ the mean curvature at the waist becomes negative. Since current experimental observations are limited to two-dimensional $xy$ projections, the deformation of these droplets in a single optical trap has so far only been inferred from a decrease in the maximum projected radius~\cite{Bain:06}. 
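For reference, the maximum projected radius $R_{xy}$ used in the remainder of this section can be read directly off the spoke grid: a spoke $(R,\theta_i,\phi_j)$ lies a distance $R\sin\theta_i$ from the $z$-axis. A minimal sketch (the function name is ours):

```python
import numpy as np

def max_projected_radius(R, N_theta):
    """R_xy: the largest distance of any surface node from the z-axis,
    i.e. the maximum over the grid of R(theta, phi) * sin(theta).
    R has shape (N_theta, N_phi)."""
    theta = (np.arange(N_theta) + 0.5) / N_theta * np.pi
    return (R * np.sin(theta)[:, None]).max()

# An undeformed sphere of radius R_d projects to a disc of radius R_d
# (the grid contains theta = pi/2 exactly when N_theta is odd).
R_xy = max_projected_radius(np.full((25, 24), 2.0), 25)
```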
The variation of the maximum projected droplet radius $R_{xy}$ as a function of the laser power $P_{0}$ for a constant value of interfacial tension $\gamma=10^{-6}\,Nm^{-1}$ and initial droplet size $R_{d}=5.0\,\mu m$ for different values of numerical aperture $NA$ is shown in Figure \ref{fig:onelasergraphs}a. $R_{xy}$ changes non-monotonically as a function of $P_{0}$, with the minimum occurring at the transition to a dumbbell-like form, a transition that we will discuss in more detail below. The initial linear decrease of $R_{xy}$ with $P_{0}$ is due to the elongation along the $z$-axis. Beyond the transition point, the equilibrated configurations show a narrowing of the neck region connecting the two spherical caps. A similar trend is observed for different values of the initial droplet size $R_{d}$ as shown in Figure \ref{fig:onelasergraphs}b. Figures \ref{fig:onelasergraphs}c \& d show similar qualitative trends for a droplet with $R_{d} = 5.0\,\mu m$ for different values of the interfacial tension $10^{-7} \lesssim \gamma \lesssim 10^{-6}\,Nm^{-1}$ and of the droplet refractive index $n_{1}$, respectively. 
\begin{figure}[htbp] \centering \subfigure[][]{\includegraphics[width=0.45\textwidth, trim=0cm 0cm 1cm 0cm,clip=true]{Figure4a.jpg}}\centering \subfigure[][]{\includegraphics[width=0.45\textwidth, trim=0cm 0cm 1cm 0cm,clip=true]{Figure4b.jpg}}\centering \subfigure[][]{\includegraphics[width=0.45\textwidth, trim=0cm 0cm 1cm 0cm,clip=true]{Figure4c.jpg}}\centering \subfigure[][]{\includegraphics[width=0.45\textwidth, trim=0cm 0cm 1cm 0cm,clip=true]{Figure4d.jpg}} \caption{Variation of the maximum projected radius of a droplet as a function of (a) $NA$ for $R_{d}=5.0\,\mu m$ and $\gamma=10^{-6}\,Nm^{-1}$, (b) initial droplet radius $R_{d}$ with $NA=1.10$ and $\gamma = 10^{-6}\,Nm^{-1}$, (c) interfacial tension $\gamma$ for $R_{d}=5.0\,\mu m$ with $NA = 1.10$ and (d) refractive index $n_{1}$ of the droplet with $R_{d}=5.0\,\mu m$ and $\gamma=10^{-6}\,Nm^{-1}$ and an optical trap with $NA = 1.20$.} \label{fig:onelasergraphs} \end{figure} The laser power at which a droplet undergoes a transition to a dumbbell-like shape, in a single optical trap, is found to be linearly dependent on both the initial radius of the droplet $\left(R_{d}\right)$ and the interfacial tension $\left(\gamma\right)$, and inversely proportional to the refractive index ratio $\tilde{n}$. The relationship between the laser power and the numerical aperture is slightly more complicated. We find that the laser power $P_{0} \propto \exp\left(NA\right)$. These relationships allow us to define a dimensionless number which characterises when a droplet will deform into a dumbbell-like shape, for a given initial radius, surface tension, refractive index ratio and numerical aperture: \begin{equation} N_{d} = \frac{P_{0} \tilde{n}}{\gamma R_{d} c \exp\left(NA\right)} \end{equation} When $N_{d} \gtrsim 1.0$ the droplet deforms into a dumbbell-like shape, whereas for $N_{d} \lesssim 1.0$ the droplet only elongates in the direction of the propagation of light. 
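As an illustration, the sketch below evaluates $N_{d}$. Since the precise definition of $\tilde{n}$ is given elsewhere in the paper, we assume here the relative index contrast $\tilde{n}=(n_{1}-n_{2})/n_{2}$; this assumption reproduces the $N_{d}$ values quoted later in the text to within rounding:

```python
import math

def deformation_number(P0, gamma, R_d, NA, n1=1.38, n2=1.33, c=2.998e8):
    """N_d = P0 * n_tilde / (gamma * R_d * c * exp(NA)); N_d >~ 1 marks the
    transition to a dumbbell-like shape.  n_tilde = (n1 - n2)/n2 is our
    assumed reading of the refractive index ratio (all SI units)."""
    n_tilde = (n1 - n2) / n2
    return P0 * n_tilde / (gamma * R_d * c * math.exp(NA))

# P0 = 0.12 W, gamma = 1e-6 N/m, R_d = 2 um, NA = 1.20: well beyond the
# dumbbell transition (the text later quotes N_d = 2.18 for this case).
N_d = deformation_number(0.12, 1e-6, 2.0e-6, 1.20)
```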
Figure \ref{fig:onelasergraphs_nd}a shows this for a constant initial spherical radius $R_{d} = 5.0\,\mu m$ whilst varying the numerical aperture for the optical tweezer. Figure \ref{fig:onelasergraphs_nd}b shows the variation of $R_{xy}$ vs.\ $N_{d}$ for different starting radii, whilst using a numerical aperture of $NA = 1.10$. To obtain a data collapse for the graphs in Figure \ref{fig:onelasergraphs_nd}b we rescale the observed $R_{xy}$ values as: \begin{equation} R_{scale} = \frac{R_{d}-R_{xy}}{R_{d}} \end{equation} Figure \ref{fig:onelasergraphs_nd}c also shows the same trend for a variation in the refractive index of the droplet $n_{1}$. As can be seen, the minimum in Figures \ref{fig:onelasergraphs_nd}a \& c and the maximum in Figure \ref{fig:onelasergraphs_nd}b, indicating the transition to a dumbbell-like shape, occur at $N_{d} \approx 1.0$. \begin{figure}[htbp] \centering \subfigure[][]{\includegraphics[width=0.45\textwidth, trim=0cm 0cm 1cm 0cm,clip=true]{Figure5a.jpg}}\centering \subfigure[][]{\includegraphics[width=0.45\textwidth, trim=0cm 0cm 1cm 0cm,clip=true]{Figure5b.jpg}}\centering \subfigure[][]{\includegraphics[width=0.45\textwidth, trim=0cm 0cm 1cm 0cm,clip=true]{Figure5c.jpg}}\centering \caption{Relationship between the deformation of a droplet and $N_{d}$ with varying (a) $NA$ for $R_{d}=5.0\,\mu m$ and $\gamma=10^{-6}\,Nm^{-1}$, (b) $R_{d}$ with $NA=1.10$ and $\gamma = 10^{-6}\,Nm^{-1}$ and (c) $n_{1}$ for a droplet with $R_{d}=5.0\,\mu m$ and $\gamma=10^{-6}\,Nm^{-1}$, and an optical trap with $NA = 1.20$.} \label{fig:onelasergraphs_nd} \end{figure} The transition to a dumbbell-like shape can be explained in terms of a balance between the forces due to the optical tweezer and the interfacial tension of the droplet. At values of $N_{d} \lesssim 0.7$ the interfacial tension energy is stronger than that of the optical tweezer, resulting in a linear deformation of the droplet. 
Between $N_{d} \approx 0.7$ and $1.0$ the forces exerted by the laser field begin to overcome the interfacial tension of the droplet, until $N_{d} \approx 1.0$ at which the optical forces dominate. The droplet interface then conforms locally to the iso-intensity contours of the laser beam, which results in the observed dumbbell-like geometry. Beyond a certain laser power our model is not capable of accurately predicting the converged shapes of droplets. This breakdown of the model is due to a single radial ``spoke'' intersecting the interface multiple times. This critical point for our numerical model will occur as the droplet elongates further and the dumbbell-like shape becomes more defined. Our model as described here is implemented using a spherical coordinate system, where each ``spoke'' radiates out from the origin of the system. Refining the number of ``spokes'' can allow the model to approach this critical point more closely (albeit at larger computational cost), but the model as presently formulated is unable to pass this critical point. \subsection{Multiple Optical Traps} In addition to calculating the shape of a droplet in a single optical trap, our model is able to successfully calculate how a droplet will deform in multiple optical traps. To our knowledge this is the first time such a model has been presented. Each optical tweezer is moved slowly outwards by $0.25\,\mu m$ in the appropriate directions, and we ensure that the droplet shape has converged at each stage before the lasers are moved again. 
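This quasi-static protocol can be sketched as a simple driver loop. Here \texttt{relax\_to\_convergence} is a caller-supplied stand-in for the full Runge--Kutta--Fehlberg relaxation of Section~\ref{numerics}, and all names are ours:

```python
import numpy as np

def pull_traps_apart(trap_dirs, final_separation, relax_to_convergence,
                     step=0.25):
    """Quasi-static protocol described above: every trap is moved outwards
    in 0.25 um increments along its own unit direction, and the droplet is
    relaxed to a converged shape before the traps move again.
    `relax_to_convergence` receives the current list of trap positions and
    returns the converged surface."""
    r = 0.0
    shape = relax_to_convergence([d * r for d in trap_dirs])  # traps at origin
    while r < final_separation:
        r = min(r + step, final_separation)
        shape = relax_to_convergence([d * r for d in trap_dirs])
    return shape

# Toy usage with a mock relaxation that just records the trap radius:
dirs = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]  # two traps
history = []
pull_traps_apart(dirs, 2.0,
                 lambda traps: history.append(np.linalg.norm(traps[0])))
```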
\begin{figure}[htbp] \centering \subfigure{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure6a_1.jpg}}\centering \subfigure{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure6b_1.jpg}}\centering \subfigure{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure6c_1.jpg}}\centering \setcounter{subfigure}{0} \subfigure[][]{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure6a_2.jpg}}\centering \subfigure[][]{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure6b_2.jpg}}\centering \subfigure[][]{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure6c_2.jpg}}\centering \caption{Deformation of a $R_{d} = 2.0\,\mu m$ droplet using two optical traps each with $P_{0} = 0.06\,W$ and $NA=1.20$. The top images show the two-dimensional $xy$ projections and the bottom images show the corresponding three-dimensional geometries. Both lasers are positioned at the origin for (a). The positions of the lasers are then $\left(1.0,[0,\pi],0\right)\,\mu m$ \& $\left(2.0,[0,\pi],0\right)\,\mu m$ in polar coordinates for (b) and (c) respectively. 
The colour scheme represents the variation in mean curvature, in units of $\mu m^{-1}$.} \label{fig:twolasers} \end{figure} \begin{figure}[htbp] \centering \subfigure{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure7a_1.jpg}}\centering \subfigure{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure7b_1.jpg}}\centering \subfigure{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure7c_1.jpg}}\centering \setcounter{subfigure}{0} \subfigure[][]{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure7a_2.jpg}}\centering \subfigure[][]{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure7b_2.jpg}}\centering \subfigure[][]{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure7c_2.jpg}}\centering \caption{Deformation of a $R_{d} = 2.0\,\mu m$ droplet using three optical traps each with $P_{0} = 0.04\, W$ and $NA=1.20$. The top images represent the two-dimensional $xy$ projections and the bottom images show the corresponding three-dimensional geometries. All lasers are positioned at the origin for (a). The positions of the lasers are then $\left(1.0,[\frac{\pi}{3},\pi,\frac{5\pi}{3}],0\right)\,\mu m$ \& $\left(2.0,[\frac{\pi}{3},\pi,\frac{5\pi}{3}],0\right)\,\mu m$ in polar coordinates for (b) and (c) respectively. 
The colour scheme represents the variation in mean curvature, in units of $\mu m^{-1}$.} \label{fig:threelasers} \end{figure} \begin{figure}[htbp] \centering \subfigure{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure8a_1.jpg}}\centering \subfigure{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure8b_1.jpg}}\centering \subfigure{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure8c_1.jpg}}\centering \setcounter{subfigure}{0} \subfigure[][]{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure8a_2.jpg}}\centering \subfigure[][]{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure8b_2.jpg}}\centering \subfigure[][]{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure8c_2.jpg}}\centering \caption{Deformation of a $R_{d} = 2.0\,\mu m$ droplet using four optical traps each with $P_{0} = 0.03\, W$ and $NA=1.20$. The top images represent the two-dimensional $xy$ projections and the bottom images show the corresponding three-dimensional geometries. All lasers are positioned at the origin for (a). The positions of the lasers are then $\left(1.0,[\frac{\pi}{4},\frac{3\pi}{4},\frac{5\pi}{4},\frac{7\pi}{4}],0\right)\,\mu m$ \& $\left(2.0,[\frac{\pi}{4},\frac{3\pi}{4},\frac{5\pi}{4},\frac{7\pi}{4}],0\right)\,\mu m$ in polar coordinates for (b) and (c) respectively. The colour scheme represents the variation in mean curvature, in units of $\mu m^{-1}$.} \label{fig:fourlasers} \end{figure} Figures \ref{fig:twolasers}, \ref{fig:threelasers} \& \ref{fig:fourlasers} show the converged shapes of an $R_{d} = 2.0\,\mu m$ droplet in two, three and four optical traps, respectively. In each Figure, moving from left to right, there is an increase in the separation between each optical trap. Initially, all traps are positioned at the centre of the droplet. 
The total laser power acting on the droplet is kept constant at $P_{total}=0.12\,W$ and is equally shared between the total number of lasers being modelled. The interfacial tension and numerical aperture were also kept constant at $\gamma = 10^{-6}\,Nm^{-1}$ and $NA = 1.20$ respectively. The top row of each figure shows the two-dimensional $xy$ projections of the deformed droplet, as might be seen in a brightfield microscope image. The chosen colour scheme indicates the variation in the calculated mean curvature; as the separation of each laser increases there is a larger variation in the mean curvature. For the largest separation between optical traps, presented in Figures \ref{fig:twolasers}c, \ref{fig:threelasers}c \& \ref{fig:fourlasers}c the mean curvature is highest at the ``tips'' or ``corners'' of the shapes formed, where the tightly focused lasers are positioned. As mentioned above, a major benefit of these numerical results is that it is possible to view the three-dimensional geometries of the deformed droplet. These are shown in the bottom row of each of the Figures \ref{fig:twolasers} - \ref{fig:fourlasers}. To date such observations have not been achieved experimentally. When all lasers are positioned at the centre of the droplet then $P_{total}=P_{0}$ and we obtain a value of $N_{d}=2.18$ for the given $R_{d}$, $\gamma$ and $NA$. Hence this is beyond the transition to a dumbbell-like shape, as shown in Figures \ref{fig:twolasers} - \ref{fig:fourlasers}a. The hour-glass connecting the two spherical caps has a negative mean curvature. As each individual laser is then moved outwards from the centre, the surface of the droplet at the ``tips'' or ``corners'' retains this concave shape. This observation is interesting since from just viewing the two-dimensional projections one might assume the surfaces to be convex along the axis of propagation of light. 
It is worth mentioning that for suitably selected parameters we can observe a range of deformed droplets with a high proportion of negative mean curvature. For example, a droplet with initial radius $R_{d} = 5.0\,\mu m$ and interfacial tension $\gamma = 10^{-6}\,Nm^{-1}$, deformed in four optical traps each with a laser power of $P_{0}=0.1\,W$ and numerical aperture $NA=0.8$ has negative mean curvature on the top and bottom faces, in addition to that seen on the side faces in Figure \ref{fig:fourlasers}. This indicates that the deformation of droplets by optical forces is extremely sensitive to the selected parameters. As an experimental validation of our model, we compare our calculated two-dimensional shapes of a deformed emulsion droplet to those presented in the work of Ward \emph{et al.}~\cite{Bain:06}. To summarise the parameters used in their experiments: an approximate value of $R_{d}=2.5\,\mu m$ was taken for the initial spherical geometry of the droplet, and an interfacial tension of $\gamma \approx 10^{-6}\,Nm^{-1}$ was reported. For the optical traps, each laser had a numerical aperture of $NA=1.20$ and a total combined laser power of $P_{total}=24\,mW$ was equally distributed between the total number of lasers. The experimentally observed shapes and steady-state shapes predicted by our model are presented in the first and second rows of Figure \ref{fig:expvalues} respectively, for an increasing number of optical traps. As mentioned above, the highest calculated mean curvature values occur near the focal positions of lasers. It can be seen from Figure \ref{fig:expvalues} that using the experimental parameters of Ward~\emph{et al.} we obtain droplets with conical ends. 
The high electric field at the laser foci exerts a very strong force on the interface, and when combined with volume conservation and surface curvature considerations, this results in a locally very small radius of curvature at the tips of the shape, as previously modelled by Stone~\emph{et al.}~\cite{Stone:99} based on observations of ``Taylor cones'' in static electric fields~\cite{Taylor1964}. \begin{figure}[htbp] \centering \subfigure{\includegraphics[width=0.25\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure9a_1.jpg}}\centering \hspace{0.6cm} \subfigure{\includegraphics[width=0.25\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure9b_1.jpg}}\centering \hspace{0.6cm} \subfigure{\includegraphics[width=0.25\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure9c_1.jpg}}\centering \subfigure{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure9a_2.jpg}}\centering \subfigure{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure9b_2.jpg}}\centering \subfigure{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure9c_2.jpg}}\centering \setcounter{subfigure}{0} \subfigure[][]{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure9a_3.jpg}} \subfigure[][]{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure9b_3.jpg}} \subfigure[][]{\includegraphics[width=0.30\textwidth, trim=0cm 0cm 0cm 0cm,clip=true]{Figure9c_3.jpg}} \caption{Deformation of a $R_{d} = 2.5\,\mu m$ droplet with $\gamma = 10^{-6}\,Nm^{-1}$ using two, three and four optical traps in (a)-(c) respectively. The total combined optical power is $P_{total}=24\,mW$ with a numerical aperture of $NA=1.20$ for each laser. The focus of each laser is a lateral distance $3\,\mu m$ from the centre of symmetry of the experiment. 
The top row represents the two-dimensional $xy$ projections observed experimentally (Ref.~\cite{Bain:06} -- Reproduced by permission of The Royal Society of Chemistry \texttt{http://pubs.rsc.org/en/Content/ArticleLanding/2006/CC/b610060k}), the middle row shows our predicted shapes and the bottom row shows the corresponding three-dimensional geometries. The colour scheme for our calculated structures represents the variation in mean curvature, in units of $\mu m^{-1}$.} \label{fig:expvalues} \end{figure} The bottom row of Figure \ref{fig:expvalues} shows the three-dimensional geometries of these emulsion droplets. Unlike the structures presented in Figures \ref{fig:twolasers}-\ref{fig:fourlasers}, the ``tips'' or ``corners'' of each droplet remain convex, as expected from the value of $N_{d} = 0.35$ deduced from the parameters reported by Ward \emph{et al.}~\cite{Bain:06}. Hence when all the lasers are positioned at the centre of the droplet at the start of the experiment, the droplet remains approximately spherical, and does not take on the dumbbell-like structure discussed earlier. This in turn reduces the deformation of the droplet along the light propagation axis at the ``tips''/``corners'' as each laser is moved outwards. There is very strong agreement between our theoretical predictions of the two-dimensional $xy$ projections of the deformed droplets and those obtained experimentally. We hope that this work will prompt experimental studies to visualise the three-dimensional configurations as a function of the physical parameters explored in this paper. \section{Conclusion} In conclusion we have developed a theoretical framework to compute three-dimensional equilibrium shapes of liquid droplets with ultra-low interfacial tension in one or more optical traps. Taking a cue from experiments, we assume an isotropic, temperature-independent interfacial tension coefficient $\gamma$. 
The optical traps were described using a far-field scalar model of a tightly focused Gaussian beam, within the Rayleigh-Gans regime. The equilibrium droplet shape arises as a result of the interfacial tension trying to minimise the droplet surface area, the internal fluid pressure resisting such a change, and the external optical pressure causing local deformations, without any volumetric change. Using this model we numerically compute the droplet shapes as a function of both droplet and laser parameters, e.g.\ interfacial tension, initial droplet size, laser power and numerical aperture. We obtain droplet shapes similar to those obtained experimentally by Ward \emph{et al.} for similar parameter values. The close agreement between theoretical predictions of two-dimensional projections in the $xy$ plane and experiments for known geometries and parameter values gives us confidence in the correctness of our predicted three-dimensional droplet shapes. It is worth noting that these predictions of three-dimensional droplet shapes for large deformations (where the linear response breaks down) have not been reported in the literature. Experiments are currently under development to measure three-dimensional shapes for comparison with our model. Further, the theoretical work revealed a few surprises along the way. For example, in a single optical trap as the laser power is increased the trapped droplet not only elongates in the direction of light propagation, but also deforms into a dumbbell-like shape. We have predicted the onset of this transformation in terms of a dimensionless deformation number $N_{d}$, obtained using dimensional arguments balancing antagonistic surface tension and optical forces. The mathematical expression has been substantiated using a data collapse of the numerical solution of the shape equations. We have also predicted droplet shapes having negative mean curvature over entire faces for high-intensity optical traps. 
Though for some configurations (e.g.\ four laser traps) this may be intuitive, the existence of droplet shapes with negative mean curvature in a single trap is not immediately obvious. \section*{Acknowledgements} The authors wish to acknowledge funding from EPSRC via grant EP/I013377/1. We thank Oscar Ces and Andrew Ward, along with all others in the optonanofluidics group for their helpful discussions and suggestions. The authors acknowledge helpful discussions with P. D. Olmsted, and others of the external advisory board of the optonanofluidics project. BC thanks the Newton Institute for support.
\section{Introduction} \label{sec:introduction} An orthogonal range reporting query $Q$ on a set of $d$-dimensional points $S$ asks for all points $p\in S$ that belong to the query rectangle $Q=\prod_{i=1}^d[a_i,b_i]$. The orthogonal range reporting problem, that is, the problem of constructing a data structure that supports such queries, was studied extensively; see for example~\cite{agarwal1999geometric}. In this paper we consider a variant of the two-dimensional range reporting in which reported points must be sorted by one of their coordinates. Moreover, our data structures can also work in the online modus: the query answering procedure reports all points from $S\cap Q$ in increasing $x$-coordinate order until the procedure is terminated or all points in $S\cap Q$ are output.% \footnote{We can get increasing/decreasing $x$/$y$-coordinate ordering via coordinate changes.} Some simple database queries can be represented as orthogonal range reporting queries. For instance, identifying all company employees who are between 20 and 40 years old and whose salary is in the range $[r_1,r_2]$ is equivalent to answering a range reporting query $Q= [r_1,r_2]\times [20,40]$ on a set of points with coordinates $(\idrm{salary}, \idrm{age})$. Then reporting employees with the salary-age range $Q$ sorted by their salary is equivalent to a sorted range reporting query. \tolerance=1000 Furthermore, the sorted reporting problem is a generalization of the orthogonal range successor problem (also known as the range next-value problem) \cite{LenhofS94,CrochemoreIR07,KKL07wads,IliopoulosCKRW08,Yu:2011}. The answer to an orthogonal range successor query $Q=[a,+\infty]\times [c,d]$ is the point with smallest $x$-coordinate\footnote{Previous works (e.g., \cite{CrochemoreIR07,Yu:2011}) use slightly different definitions, but all of them are equivalent up to a simple change of coordinate system or reduction to rank space~\cite{GabowBT84}.} among all points that are in the rectangle $Q$. 
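For concreteness, a range successor query admits a naive linear-scan reference implementation, useful only as a correctness oracle (the data structures discussed in this paper answer the same query exponentially faster):

```python
def range_successor(points, a, c, d):
    """Answer Q = [a, +inf] x [c, d]: return the point with the smallest
    x-coordinate among all points inside the rectangle, or None if the
    rectangle is empty.  Naive O(n) scan."""
    best = None
    for (x, y) in points:
        if x >= a and c <= y <= d and (best is None or x < best[0]):
            best = (x, y)
    return best

pts = [(1, 5), (3, 2), (4, 7), (6, 3), (8, 6)]
```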
The best previously known linear-space data structure for range successor queries supports them in $O(\log n/\log \log n)$ time~\cite{Yu:2011}. The fastest previously described structure supports range successor queries in $O(\log \log n)$ time but needs $O(n\log n)$ space. \no{ A data structure that answers range successor queries in time $f(n)$ can also answer sorted reporting queries in $O((k+1) f(n))$ time; see the proof of Theorem~\ref{theor:spaceeff}. Thus the linear space structure of~\cite{Yu:2011} answers queries in $O((k+1)(\log n/\log \log n))$ time.} In this paper we show that these results can be significantly improved. In Section~\ref{sec:linspace} we describe two data structures for range successor queries. The first structure needs $O(n)$ space and answers queries in $O(\log^{\varepsilon} n)$ time; henceforth $\varepsilon$ denotes an arbitrarily small positive constant. The second structure needs $O(n\log \log n)$ space and supports queries in $O((\log\log n)^2)$ time. Both data structures can be used to answer sorted reporting queries in $O((k+1)\log^{\varepsilon}n)$ and $O((k+1)(\log\log n)^2)$ time, respectively, where $k$ is the number of reported points. In Sections~\ref{sec:3sided-rep} and~\ref{sec:2dim-range} we further improve the query time and describe a data structure that uses $O(n\log^{\varepsilon} n)$ space and supports sorted reporting queries in $O(\log \log n + k)$ time. As follows from the reduction of~\cite{MNSW98} and the lower bound of~\cite{PT06}, any data structure that uses $O(n\log^{O(1)}n)$ space needs $\Omega(\log \log n+ k)$ time to answer (unsorted) orthogonal range reporting queries. Thus we achieve optimal query time for the sorted range reporting problem. We observe that the currently best data structure for unsorted range reporting in optimal time~\cite{ChanLP11} also uses $O(n\log^{\varepsilon} n)$ space. 
In Section~\ref{sec:appl} we discuss applications of sorted reporting queries to some problems related to text indexing and some geometric problems. Our results are valid in the word RAM model. Unless specified otherwise, we measure the space usage in words of $\log n$ bits. We denote by $p.x$ and $p.y$ the coordinates of a point $p$. We assume that points lie on an $n\times n$ grid, i.e., that point coordinates are integers in $[1,n]$. We can reduce the more general case to this one by reduction to rank space~\cite{GabowBT84}. The space usage will not change and the query time will increase by an additive term $pred(n)$, where $pred(n)$ is the time needed to search in a one-dimensional set of integers~\cite{BoasKZ77,PT06}. \section{Compact Range Trees} The range tree is a handbook data structure frequently used for various orthogonal range reporting problems. Its leaves contain the $x$-coordinates of points; a set $S(v)$ associated with each node $v$ contains all points whose $x$-coordinates are stored in the subtree rooted at $v$. We will assume that points of $S(v)$ are sorted by their $y$-coordinates. $S(v)[i]$ will denote the $i$-th point in $S(v)$; $S(v)[i..j]$ will denote the sorted list of points $S(v)[i],S(v)[i+1],\ldots, S(v)[j]$. A standard range tree uses $O(n\log n)$ space, but this can be reduced by storing compact representations of sets $S(v)$. We will need to support the following two operations on compact range trees. Given a range $[c,d]$ and a node $v$, $noderange(c,d,v)$ finds the range $[c_v,d_v]$ such that $p.y\in [c,d]$ if and only if $p\in S(v)[c_v .. d_v]$ for any $p\in S(v)$.
Given an index $i$ and a node $v$, $point(v,i)$ returns the coordinates of point $S(v)[i]$. \begin{lemma}\cite{Chaz88,ChanLP11}\label{lemma:range} There exists a compact range tree that uses $O(nf(n))$ space and supports operations $point(v,i)$ and $noderange(c,d,v)$ in $O(g(n))$ and $O(g(n)+\log \log n)$ time, respectively, for (i) $f(n)=O(1)$ and $g(n)=O(\log^{\varepsilon}n)$; (ii) $f(n)=O(\log\log n)$ and $g(n)=O(\log \log n)$; (iii) $f(n)=O(\log^{\varepsilon}n)$ and $g(n)=O(1)$. \end{lemma} \begin{proof} We can support $point(v,i)$ in $O(g(n))$ time using $O(nf(n))$ space for variants $(i)$ and $(iii)$ by a result of Chazelle~\cite{Chaz88}; we can support $point(v,i)$ in $O(\log \log n)$ time and $O(n\log \log n)$ space by a result of Chan et al.~\cite{ChanLP11}. In the same paper \cite[Lemma 2.4]{ChanLP11}, the authors also showed how to support $noderange(c,d,v)$ in $O(g(n)+\log \log n)$ time and $O(n)$ additional space using a data structure that supports $point(v,i)$ in $O(g(n))$ time. \end{proof} \section{Sorted Reporting in Linear Space} \label{sec:linspace} In this section we show how a range successor query $Q=[a,+\infty]\times [c,d]$ can be answered efficiently. We combine the recursive approach of the van Emde Boas structure~\cite{BoasKZ77} with compact structures for range maxima queries. A combination of succinct range minima structures and range trees was also used in~\cite{ChanLP11}. A novel idea that distinguishes our data structure from the range reporting structure in~\cite{ChanLP11}, as well as from the previous range successor structures, is binary search on tree levels, originally designed for one-dimensional searching~\cite{BoasKZ77}. We essentially perform a one-dimensional search for the successor of $a$ and answer range maxima queries at each step. Let $T_x$ denote the compact range tree on the $x$-coordinates of points.
$T_x$ is implemented as in variant $(i)$ of Lemma~\ref{lemma:range}; hence, we can find the interval $[c_v,d_v]$ for any node $v$ in $O(\log^{\varepsilon}n)$ time. We also store a compact structure for range maximum queries $M(v)$ in every node $v$: given a range $[i,j]$, $M(v)$ returns the index $i\le t\le j$ of the point $p$ with the greatest $x$-coordinate in $S(v)[i..j]$. We also store a structure for range minimum queries $M'(v)$. $M(v)$ and $M'(v)$ use $O(n)$ bits and answer queries in $O(1)$ time \cite{Fis10}. Hence all $M(u)$ and $M'(u)$ for $u\in T_x$ use $O(n)$ space. Finally, an $O(n)$ space level ancestor structure enables us to find the depth-$d$ ancestor of any node $u\in T_x$ in $O(1)$ time~\cite{BenderF04}. Let $\pi$ denote the search path for $a$ in the tree $T_x$: $\pi$ connects the root of $T_x$ with the leaf that contains the smallest value $a_x\ge a$. Our procedure looks for the lowest node $v_f$ on $\pi$ such that $S(v_f)\cap Q\not=\emptyset$. For simplicity we assume that the length of $\pi$ is a power of $2$. We initialize $v_l$ to the leaf that contains $a_x$; we initialize $v_u$ to the root node. The node $v_f$ is found by a binary search on $\pi$. We say that a node $w$ is the middle node between $u$ and $v$ if $w$ is on the path from $u$ to $v$ and the length of the path from $u$ to $w$ equals the length of the path from $w$ to $v$. We set the node $v_m$ to be the middle node between $v_u$ and $v_l$. Then we find the index $t_m$ of the maximal element in $S(v_m)[c_{v_m}..d_{v_m}]$ and the point $p_m=S(v_m)[t_m]$. If $p_m.x\ge a$, then $v_f$ is either $v_m$ or its descendant; hence, we set $v_u=v_m$. If $p_m.x<a$, then $v_f$ is an ancestor of $v_m$; hence, we set $v_l=v_m$. The search procedure continues until $v_u$ is the parent of $v_l$. Finally, we test nodes $v_u$ and $v_l$ and identify $v_f$ (if such $v_f$ exists). \begin{fact} If the child $v'$ of $v_f$ belongs to $\pi$, then $v'$ is the left child of $v_f$.
\end{fact} \begin{proof} Suppose that $v'$ is the right child of $v_f$ and let $v''$ be the sibling of $v'$. By definition of $v_f$, $Q\cap S(v')=\emptyset$. Since $v'$ belongs to $\pi$ and $v''$ is the left child, $p.x<a$ for all points $p\in S(v'')$. Since $S(v_f)=S(v')\cup S(v'')$, $Q\cap S(v_f)=\emptyset$ and we obtain a contradiction. \end{proof} Since $v'\in \pi$ is the left child of $v_f$, $p.x\ge a$ for all $p\in S(v'')$ for the sibling $v''$ of $v'$. Moreover, $p.x< a$ for all points $p\in S(v')[c_{v'}..d_{v'}]$ by definition of $v_f$. Therefore the range successor is the point with minimal $x$-coordinate in $S(v'')[c_{v''}..d_{v''}]$. The search procedure visits $O(\log \log n)$ nodes and spends $O(\log^{\varepsilon}n)$ time in each node; thus the total query time is $O(\log^{\varepsilon}n\log\log n)$. By using a parameter $\varepsilon'<\varepsilon$ in the above construction, we obtain the following result. \begin{lemma}\label{lemma:linsucc} There exists a data structure that uses $O(n)$ space and answers orthogonal range successor queries in $O(\log^{\varepsilon}n)$ time. \end{lemma} If we use the compact tree that needs $\Theta(n\log \log n)$ space (variant $(ii)$ of Lemma~\ref{lemma:range}), then $g(n)=\log \log n$. Using the same structure as in the proof of Lemma~\ref{lemma:linsucc}, we obtain the following. \begin{lemma}\label{lemma:llsucc} There exists a data structure that uses $O(n\log\log n)$ space and answers orthogonal range successor queries in $O((\log \log n)^2)$ time. \end{lemma} \paragraph{Sorted Reporting Queries.} We can answer sorted reporting queries by answering a sequence of range successor queries. Consider a query $Q=[a,b]\times [c,d]$. Let $p_1$ be the answer to the range successor query $Q_1=[a,+\infty]\times [c,d]$. For $i\ge 2$, let $p_i$ be the answer to the query $Q_i=[p_{i-1}.x,+\infty]\times [c,d]$. The sequence of points $p_1,\ldots, p_k$ is the sequence of $k$ leftmost points in $[a,b]\times [c,d]$ sorted by their $x$-coordinates.
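The reduction from sorted reporting to a sequence of range successor queries can be sketched as follows (an illustration only: `range_successor` is an assumed callback implementing the query of the lemmas above, and we assume distinct integer $x$-coordinates so that advancing the left boundary by one avoids re-reporting the previous point):

```python
def k_leftmost_via_successor(range_successor, a, b, c, d, k):
    """Report the k leftmost points of [a, b] x [c, d] in increasing x order.

    `range_successor(a, c, d)` is an assumed callback returning the point
    with smallest x-coordinate in [a, +inf) x [c, d], or None.
    """
    out, lo = [], a
    while len(out) < k:
        p = range_successor(lo, c, d)
        if p is None or p[0] > b:      # successor fell outside [a, b]: stop
            break
        out.append(p)
        lo = p[0] + 1                  # distinct integer x-coordinates assumed
    return out
```

Each reported point costs one successor query, which is the source of the $O((k+1)\cdot\,$query time$)$ bounds.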
We observe that our procedure also works in the \emph{online modus} when $k$ is not known in advance. That is, we can output the points of $Q\cap S$ in the left-to-right order until the procedure is stopped by the user or all points in $Q\cap S$ are reported. \begin{theorem}\label{theor:spaceeff} There exist data structures that answer sorted range reporting queries in $O((k+1)\log^{\varepsilon}n)$ time using $O(n)$ space, and in $O((k+1)(\log \log n)^2)$ time using $O(n\log \log n)$ space. \end{theorem} \section{Three-Sided Reporting in Optimal Time} \label{sec:3sided-rep} In this section we present optimal time data structures for two special cases of sorted two-dimensional queries. In the first part of this section we describe a data structure that answers sorted one-sided queries: for a query $c$ we report all points $p$, $p.y\le c$, sorted in increasing order of their $x$-coordinates. Then we will show how to answer three-sided queries, i.e., to report all points $p$, $a\le p.x\le b$ and $p.y\le c$, sorted in increasing order by their $x$-coordinates. \paragraph{One-Sided Sorted Reporting.} We start by describing a data structure that answers queries in $O(\log n+k)$ time; our solution is based on a standard range tree decomposition of the query interval $[1,c]$ into $O(\log n)$ intervals. Then we show how to reduce the query time to $O(k+\log \log n)$. This improvement uses an additional data structure for the case when $k\le \log n$ points must be reported. We construct a range tree $T$ on the $y$-coordinates. For every node $v\in T$, the list $L(v)$ contains all points of $S(v)$ sorted by their $x$-coordinates. Suppose that we want to return the $k$ points $p$ with smallest $x$-coordinates such that $p.y\leq c$.
We can represent the interval $[1,c]$ as a union of $O(\log n)$ node ranges for nodes $v_i\in T$. The search procedure visits each $v_i$ and finds the leftmost point (that is, the first point) in every list $L(v_i)$. Those points are kept in a data structure $D$. Then we repeat the following step $k$ times: We find the leftmost point $p$ stored in $D$, output $p$ and remove it from $D$. If $p$ belongs to a list $L(v_i)$, we find the point $p'$ that follows $p$ in $L(v_i)$ and insert $p'$ into $D$. As $D$ contains $O(\log n)$ points, we can support updates and find the leftmost point in $D$ in $O(1)$ time \cite{FW94}. Hence, we can initialize $D$ in $O(\log n)$ time and then report $k$ points in $O(k)$ time. We can reduce the query time to $O(k+\log \log n)$ by constructing additional data structures. If $k\geq \log n$, the data structure described above already answers a query in $O(k+\log n)=O(k)$ time. The case $k<\log n$ can be handled as follows. We store for each $p\in S$ a list $V(p)$. Among all points $p'\in S$ such that $p'.y\leq p.y$, the list $V(p)$ contains the $\log n$ points with the smallest $x$-coordinates. Points in $V(p)$ are sorted in increasing order by their $x$-coordinates. To find the $k$ leftmost points in $[1,c]$ for $k <\log n$, we identify the highest point $p_c\in S$ such that $p_c.y \le c$ and report the first $k$ points in $V(p_c)$. The point $p_c$ can be found in $O(\log \log n)$ time using the van Emde Boas data structure~\cite{BoasKZ77}. If $p_c$ is known, then a query can be answered in $O(k)$ time for any value of $k$. One last improvement will be important for the data structure of Lemma~\ref{lemma:3sid}. Let $S_m$ denote the set of $\ceil{\log\log n}$ lowest points in $S$. We store the $y$-coordinates of $p\in S_m$ in the $q$-heap $F$. Using $F$, we can find the highest $p_m\in S_m$, such that $p_m.y\le c$, in $O(1)$ time \cite{FW94}.
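The merging step just described can be sketched with an ordinary binary heap standing in for the structure $D$ (a simplification: heapq costs $O(\log\log n)$ per operation on $O(\log n)$ items, whereas the text obtains $O(1)$ via \cite{FW94}; the lists play the role of the $L(v_i)$):

```python
import heapq

def k_leftmost(lists, k):
    """k smallest values from the x-sorted lists L(v_1), ..., L(v_m)."""
    # D holds one candidate per list: (value, list index, position in list).
    D = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(D)
    out = []
    while D and len(out) < k:
        p, i, j = heapq.heappop(D)     # leftmost point currently in D
        out.append(p)
        if j + 1 < len(lists[i]):      # successor of p in its list L(v_i)
            heapq.heappush(D, (lists[i][j + 1], i, j + 1))
    return out
```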
Let $n_c=|\{\,p\in S\,|\,p.y\le c\,\}|$. If $n_c\le \log\log n$, then $p_m=p_c$. As described above, we can answer a query in $O(k)$ time when $p_c$ is known. Hence, a query can be answered in $O(k)$ time if $n_c\le \log \log n$. \begin{lemma}\label{lemma:1sid} There exists an $O(n\log n)$ space data structure that supports one-sided sorted range reporting queries in $O(\log \log n + k)$ time. If the highest point $p$ with $p.y\le c$ is known, then one-sided sorted queries can be supported in $O(k)$ time. If $|\{\,p\in S\,|\,p.y\le c\,\}|\le \log \log n$, a sorted range reporting query $[1,c]$ can be answered in $O(k)$ time. \end{lemma} \paragraph{Three-Sided Sorted Queries.} We construct a range tree on $x$-coordinates of points. For any node $v$, the data structure $D(v)$ of Lemma~\ref{lemma:1sid} supports one-sided queries on $S(v)$ as described above. For each root-to-leaf path $\pi$ we store two data structures, $R_1(\pi)$ and $R_2(\pi)$. Let $\pi^+$ and $\pi^-$ be defined as follows. If $v$ belongs to a path $\pi$ and $v$ is the left child of its parent, then its sibling $v'$ belongs to $\pi^+$. If $v$ belongs to $\pi$ and $v$ is the right child of its parent, then its sibling $v'$ belongs to $\pi^-$.
The data structure $R_1(\pi)$ contains the lowest point in $S(v')$ for each $v'\in \pi^+$; if $v\in \pi$ is a leaf, $R_1(\pi)$ also contains the point stored in $v$. The data structure $R_2(\pi)$ contains the lowest point in $S(v')$ for each $v'\in \pi^-$; if $v\in \pi$ is a leaf, $R_2(\pi)$ also contains the point stored in $v$. Let $lev(v)$ denote the level of a node $v$ (the level of a node $v$ is the length of the path from the root to $v$). If a point $p\in R_i(\pi)$, $i=1,2$, comes from a node $v$, then $lev(p)=lev(v)$. For a given query $(c,l)$ the data structure $R_1(\pi)$ ($R_2(\pi)$) reports points $p$ such that $p.y\le c$ and $lev(p)\ge l$ sorted in decreasing (increasing) order by $lev(p)$. Since a data structure $R_i(\pi)$, $i=1,2$, contains $O(\log n)$ points, the point with the $k$-th largest (smallest) value of $lev(p)$ among all $p$ with $p.y\le c$ can be found in $O(1)$ time. The implementation of structures $R_i(\pi)$ is based on standard bit techniques and will be described in the full version. Consider a query $Q=[a,b]\times [1,c]$. Let $\pi_a$ and $\pi_b$ be the paths from the root to $a$ and $b$ respectively. Suppose that the lowest node $v\in \pi_a\cap\pi_b$ is situated on level $lev(v)=l$. Then all points $p$ such that $p.x\in [a,b]$ belong to some node $v$ such that $v\in \pi_a^+$ and $lev(v)> l$ or $v\in \pi^-_b$ and $lev(v)>l$. We start by finding the leftmost point $p$ in $R_1(\pi_a)$ such that $lev(p)>l$ and $p.y\leq c$. Since the $x$-coordinates of points in $R_1(\pi_a)$ decrease as $lev(p)$ increases, this is equivalent to finding the point $p_1\in R_1(\pi_a)$ such that $p_1.y \leq c$ and $lev(p_1)$ is maximal. If $lev(p_1)>l$, we visit the node $v_1\in \pi^+_a$ that contains $p_1$; using $D(v_1)$, we report the $k$ leftmost points $p'\in S(v_1)$ such that $p'.y\le c$. Then, we find the point $p_2$ with the next largest value of $lev(p)$ among all $p\in R_1(\pi_a)$ such that $p.y\le c$; we visit the node $v_2\in \pi^+_a$ that contains $p_2$ and proceed as above. The procedure continues until $k$ points are output or there are no more points $p\in R_1(\pi_a)$ with $lev(p)>l$ and $p.y\le c$. If $k'<k$ points were reported, we visit selected nodes $u\in \pi^-_b$ and report the remaining $k-k'$ points using a symmetric procedure. Let $k_i$ denote the number of reported points from the set $S(v_i)$ and let $m_i=|Q\cap S(v_i)|$. We spend $O(k_i)$ time in a visited node $v_i$ if $k_i\ge \log \log n$ or $m_i< \log \log n$. If $k_j<\log\log n$ and $m_j\ge \log \log n$, then we spend $O(\log \log n +k_j)$ time in the respective node $v_j$. Thus we spend $O(\log \log n + k_j)$ time in a node $v_j$ only if $m_j>k_j$, i.e., only if not all points from $S(v_j)\cap Q$ are reported. Since at most one such node $v_j$ is visited, the total time needed to answer all one-sided queries is $O(\sum_i k_i + \log \log n)=O(\log \log n + k)$. \begin{lemma}\label{lemma:3sid} There exists an $O(n\log^2 n)$ space data structure that answers three-sided sorted reporting queries in $O(\log \log n + k)$ time. \end{lemma} \paragraph{Online queries.} We assumed in Lemmas~\ref{lemma:1sid} and~\ref{lemma:3sid} that parameter $k$ is fixed and given with the query.
Our data structures can also support queries in the online modus using the method originally described in~\cite{BrodalFGL09}. The main idea is that we find roughly $\Theta(k_i)$ leftmost points from the query range for $k_i=2^i$ and $i=1,2,\ldots$; while $k_i$ points are reported, we simultaneously compute the following $\Theta(k_{i+1})$ points in the background. For a more extensive description, refer to \cite[Section 4.1]{NavN12}, where the same method for a slightly different problem is described. \section{Two-Dimensional Range Reporting in Optimal Time} \label{sec:2dim-range} We store points in a compact range tree $T_y$ on $y$-coordinates. We use variant $(iii)$ of Lemma~\ref{lemma:range} that uses $O(n\log^{\varepsilon}n)$ space and retrieves the coordinates of the $r$-th point from $S(v)$ in $O(1)$ time. Moreover, the sets $S(v)$, $v\in T_y$, are divided into groups $G_i(v)$. Each $G_i(v)$, except the last one, contains $\lceil\log^3 n\rceil$ points. For $i<j$, each point assigned to $G_i(v)$ has a smaller $x$-coordinate than any point in $G_j(v)$. The set $S'(v)$ contains selected elements from $S(v)$. If $v$ is the right child of its parent, then $S'(v)$ contains the $\ceil{\log\log n}$ points with smallest $y$-coordinates from each group $G_i(v)$; the structure $D'(v)$ supports three-sided sorted queries of the form $[a,b]\times [0,c]$ on points of $S'(v)$. If $v$ is the left child of its parent, then $S'(v)$ contains the $\ceil{\log\log n}$ points with largest $y$-coordinates from each group $G_i(v)$; the data structure $D'(v)$ supports three-sided sorted queries of the form $[a,b]\times [c,+\infty]$ on points of $S'(v)$. For each point $p'\in S'(v)$ we store the index $i$ of the group $G_i(v)$ that contains $p'$. We also store the point with the largest $x$-coordinate from each $G_i(v)$ in a structure $E(v)$ that supports $O(\log \log n)$ time searches \cite{BoasKZ77}.
For all points in each group $G_i(v)$ we store an array $A_i(v)$ that contains points sorted by their $y$-coordinates. Each point is specified by the rank of its $x$-coordinate in $G_i(v)$, so each entry uses $O(\log \log n)$ bits of space. To answer a query $Q=[a,b]\times [c,d]$, we find the lowest common ancestor $v_c$ of the leaves that contain $c$ and $d$. Let $v_l$ and $v_r$ be the left and the right children of $v_c$. All points in $Q\cap S$ belong to either $([a,b]\times [c,+\infty])\cap S(v_l)$ or $([a,b]\times [0,d])\cap S(v_r)$. We generate the sorted list of $k$ leftmost points in $Q\cap S$ by merging the lists of $k$ leftmost points in $([a,b]\times [c,+\infty])\cap S(v_l)$ and $([a,b]\times [0,d])\cap S(v_r)$. Thus it suffices to answer sorted three-sided queries $([a,b]\times [c,+\infty])$ and $([a,b]\times [0,d])$ in nodes $v_l$ and $v_r$ respectively. We consider a query $([a,b]\times [0,d])\cap S(v_r)$; the query $[a,b]\times [c,+\infty]$ is answered symmetrically. Assume $[a,b]$ fits into one group $G_i(v_r)$, i.e., all points $p$ such that $a\le p.x\le b$ belong to one group $G_i(v_r)$. We can find the $y$-rank $d_r$ of the highest point $p\in G_i(v_r)$ such that $p.y \leq d$ in $O(\log \log n)$ time by binary search in $A_i(v_r)$. Let $a_r$ and $b_r$ be the ranks of $a$ and $b$ in $G_i(v_r)$. We can find the positions of the $k$ leftmost points in $([a_r,b_r]\times [0,d_r])\cap G_i(v_r)$ using a data structure $H_i(v_r)$. $H_i(v_r)$ contains the $y$-ranks and $x$-ranks of points in $G_i(v_r)$ and answers sorted three-sided queries on $G_i(v_r)$. By Lemma~\ref{lemma:3sid}, $H_i(v_r)$ uses $O(|G_i(v_r)|(\log \log n)^3)$ bits and supports queries in $O(\log \log \log n + k )$ time.
Actual coordinates of points can be obtained from their ranks in $G_i(v_r)$ in $O(1)$ time per point: if the $x$-rank of a point is known, we can compute its position in $S(v_r)$; we then obtain the coordinates of the $i$-th point in $S(v_r)$ using variant $(iii)$ of Lemma~\ref{lemma:range}. Now assume $[a,b]$ spans several groups $G_i(v_r),\ldots,G_j(v_r)$ for $i<j$. That is, the $x$-coordinates of all points in groups $G_{i+1}(v_r),\ldots,G_{j-1}(v_r)$ belong to $[a,b]$; the $x$-coordinate of at least one point in $G_i(v_r)$ ($G_j(v_r)$) is smaller than $a$ (larger than $b$), but the $x$-coordinate of at least one point in $G_i(v_r)$ and $G_j(v_r)$ belongs to $[a,b]$. Indices $i$ and $j$ are found in $O(\log \log n)$ time using $E(v_r)$. We report at most $k$ leftmost points in $([a,b]\times [0,d])\cap G_i(v_r)$ just as described above. Let $k_1=|([a,b]\times [0,d])\cap G_i(v_r)|$; if $k_1\ge k$, the query is answered. Otherwise, we report $k'=k-k_1$ leftmost points in $([a,b]\times [0,d])\cap (G_{i+1}(v_r)\cup\ldots\cup G_{j-1}(v_r))$ using the following method. Let $a'$ and $b'$ be the minimal and the maximal $x$-coordinates of points in $G_{i+1}(v_r)$ and $G_{j-1}(v_r)$, respectively. The main idea is to answer the query $Q'=([a',b']\times [0,d])\cap S'(v_r)$ in the online modus using the data structure $D'(v_r)$. If some group $G_t(v_r)$, $i<t<j$, contains less than $\ceil{\log \log n}$ points $p$ with $p.y\le d$, then all such $p$ belong to $S'(v_r)$ and will be reported by $D'(v_r)$. Suppose that $D'(v_r)$ reported $\log \log n$ points that belong to the same group $G_t(v_r)$. Then we find the rank $d_t$ of $d$ among the $y$-coordinates of points in $G_t(v_r)$. Using $H_t(v_r)$, we report the positions of all points $p\in G_t(v_r)$, such that the rank of $p.y$ in $G_t(v_r)$ is at most $d_t$, in the left-to-right order; we can also identify the coordinates of every such $p$ in $O(1)$ time per point.
The query to $H_t(v_r)$ is terminated when all such points are reported or when the total number of reported points is $k$. We need $O(\log \log n +k_t)$ time to answer a query on $H_t(v_r)$, where $k_t$ denotes the number of reported points from $G_t(v_r)$. Let $m_t=|Q'\cap G_t(v_r)|$. If $G_t$ is the last examined group, then $k_t\le m_t$; otherwise $k_t=m_t$. We send a query to $G_t(v_r)$ only if $G_t(v_r)$ contains at least $\log \log n$ points from $Q'$. Hence, a query to $G_t(v_r)$ takes $O(\log \log n + k_t)=O(k_t)$ time, unless $G_t(v_r)$ is the last examined group. Thus all necessary queries to $G_t(v_r)$ for $i{<}t{<}j$ take $O(\log \log n + k)$ time. Finally, if the total number of points in $([a,b]\times [0,d])\cap (G_i(v_r)\cup\ldots\cup G_{j-1}(v_r))$ is smaller than $k$, we also report the remaining points from $([a,b]\times [0,d])\cap G_j(v_r)$. \tolerance=1500 The compact tree $T_y$ uses $O(n\log^{\varepsilon}n)$ words of space. A data structure $D'(v)$ uses $O(|S'(v)|\log^2 n\log\log n)= O(|S(v)|\log \log n/\log n)$ words of space. Since all sets $S(v)$, $v\in T_y$, contain $O(n\log n)$ points, all $D'(v)$ use $O(n\log \log n)$ words of space. A data structure for a group $G_i(v)$ uses $O(|G_i(v)|(\log \log n)^3)$ bits. Since all $G_i(v)$ for all $v\in T_y$ contain $O(n\log n)$ elements, data structures for all groups $G_i(v)$ use $O(n(\log \log n)^3)$ words of $\log n$ bits. \begin{theorem}\label{theor:2dim} There exists an $O(n\log^{\varepsilon} n)$ space data structure that answers two-dimensional sorted reporting queries in $O(\log \log n + k)$ time. \end{theorem} \section{Applications} \label{sec:appl} \tolerance=1000 In this section we will describe data structures for several indexing and computational geometry problems. A text (string) $T$ of length $n$ is pre-processed and stored in a data structure so that certain queries concerning some substrings of $T$ can be answered efficiently.
\paragraph{Preliminaries.} In a suffix tree ${\cal T}$ for a text $T$, every leaf of ${\cal T}$ is associated with a suffix of $T$. If the leaves of ${\cal T}$ are listed from left to right, then the corresponding suffixes of $T$ are lexicographically sorted. For any pattern $P$, we can find in $O(|P|)$ time the special node $v\in {\cal T}$, called the \emph{locus} of $P$. The starting position of every suffix in the subtree of $v=\idsf{locus}(P)$ is the location of an occurrence of $P$. We define the rank of a suffix $\idsf{Suf}$ as the number of $T$'s suffixes that are lexicographically smaller than or equal to $\idsf{Suf}$. The ranks of all suffixes in $v=\idsf{locus}(P)$ belong to an interval $[\idit{left}(P),\idit{right}(P)]$, where $\idit{left}(P)$ and $\idit{right}(P)$ denote the ranks of the leftmost and the rightmost suffixes in the subtree of $v$. Thus for any pattern $P$ there is a unique range $[\idit{left}(P),\idit{right}(P)]$; pattern $P$ occurs at position $i$ in $T$ if and only if the rank of suffix $T[i..n]$ belongs to $[\idit{left}(P),\idit{right}(P)]$. Refer to~\cite{Gusfield1997} for a more extensive description of suffix trees and related concepts. We will frequently use a special set of points, further called \emph{the position set for $T$}. Every point $p$ in the position set corresponds to a unique suffix $\idsf{Suf}$ of the string $T$; the $y$-coordinate of $p$ equals the rank of $\idsf{Suf}$ and the $x$-coordinate of $p$ equals the starting position of $\idsf{Suf}$ in $T$. \paragraph{Successive List Indexing.} In this problem a query consists of a pattern $P$ and an index $j$, $1\le j\le n$. We want to find the first (leftmost) occurrence of $P$ at a position $i\ge j$. A successive list indexing query $(P,j)$ is equivalent to finding the point $p$ from the position set such that $p$ belongs to the range $[j,n]\times [left(P),right(P)]$ and the $x$-coordinate of $p$ is minimal.
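A brute-force illustration of the position set and of one successive list indexing query (the suffix tree is replaced by naive rank computations, and all names are ours; positions and ranks are 1-based as in the text):

```python
def position_set(T):
    """One point (start, rank) per suffix; ranks are 1-based lexicographic."""
    suffixes = sorted(range(len(T)), key=lambda i: T[i:])
    return [(start + 1, rank + 1) for rank, start in enumerate(suffixes)]

def successive_list_indexing(T, P, j):
    """Leftmost occurrence of P at a position >= j (1-based), or None."""
    pts = position_set(T)
    # Ranks of suffixes prefixed by P form the interval [left(P), right(P)].
    ranks = [y for (x, y) in pts if T[x - 1:x - 1 + len(P)] == P]
    if not ranks:
        return None
    left, right = min(ranks), max(ranks)
    # The query is the range successor of [j, n] x [left(P), right(P)].
    hits = [x for (x, y) in pts if x >= j and left <= y <= right]
    return min(hits) if hits else None
```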
Thus a list indexing query is equivalent to a range successor query on the position set. Using Theorems~\ref{theor:spaceeff} and~\ref{theor:2dim} to answer range successor queries, we obtain the following result. \begin{corollary}\label{cor:succind} We can store a string $T$ in an $O(nf(n))$ space data structure, so that for any pattern $P$ and any index $j$, $1\le j\le n$, the leftmost occurrence of $P$ at a position $i\ge j$ can be found in $O(g(n))$ time for (i) $f(n)=O(1)$ and $g(n)=O(\log^{\varepsilon}n)$; (ii) $f(n)=O(\log\log n)$ and $g(n)=O((\log \log n)^2)$; (iii) $f(n)=O(\log^{\varepsilon}n)$ and $g(n)=O(\log \log n)$. \end{corollary} \paragraph{Range Non-Overlapping Indexing.} In the string statistics problem we want to find the maximum number of non-overlapping occurrences of a pattern $P$. In~\cite{KKL07wads} the \emph{range non-overlapping indexing problem} was introduced: instead of just computing the maximum number of occurrences, we want to find the longest sequence of non-overlapping occurrences of $P$. It was shown in~\cite{KKL07wads} that the range non-overlapping indexing problem can be solved via $k$ successive list indexing queries; here $k$ denotes the maximal number of non-overlapping occurrences. \begin{corollary}\label{cor:rangenon} The range non-overlapping indexing problem can be solved in $O(|P|+kg(n))$ time with an $O(n f(n))$ space data structure, where $g(n)$ and $f(n)$ are defined as in Corollary~\ref{cor:succind}. \end{corollary} Other, less direct applications are described next. \subsection{Pattern Matching with Variable-Length Don't Cares} \label{sec:regpat} We must determine whether a query pattern $P=P_1*P_2*\ldots *P_m$ occurs in $T$. The special symbol $*$ is the Kleene star symbol; it corresponds to an arbitrary sequence of (zero or more) characters from the original alphabet of $T$. The parameter $m$ can be specified at query time.
In~\cite{YuWK10} the authors showed how to answer such queries in $O(\sum_{i=1}^m |P_i|)$ time and $O(n)$ space in the case when the alphabet size is $\log^{O(1)}n$. In this paper we describe a data structure for an arbitrarily large alphabet. Using the approach of~\cite{YuWK10}, we can reduce such a query for $P$ to answering $m$ successive list indexing queries. First, we identify the leftmost occurrence of $P_1$ in $T$ by answering the successive list indexing query $(P_1,1)$. Let $j_1$ denote the leftmost position of $P_1$. $P_1*P_2*\ldots *P_m$ occurs in $T$ if and only if $P_2*P_3*\ldots *P_m$ occurs at a position $i\ge j_1+ |P_1|$. We find the leftmost occurrence $j_2\ge j_1+|P_1|$ of $P_2$ by answering the query $(P_2,j_1+|P_1|)$. $P_2*P_3*\ldots *P_m$ occurs in $T$ at a position $i_2\ge j_1+ |P_1|$ if and only if $P_3*\ldots *P_m$ occurs at a position $i_3\ge j_2+ |P_2|$. Proceeding in the same way, we find the leftmost possible positions of $P_3,\ldots, P_m$. Thus we answer $m$ successive list indexing queries $(P_t,i_t)$, $t=1,\ldots, m$; here $i_1=1$, $i_t=j_{t-1}+|P_{t-1}|$ for $t\ge 2$, and $j_{t-1}$ denotes the answer to the $(t-1)$-th query. \begin{corollary}\label{cor:dontcares} We can determine whether a text $T$ contains a substring $P=P_1*\ldots *P_{m-1}*P_m$ in $O(\sum_{i=1}^m|P_i|+mg(n))$ time using an $O(n f(n))$ space data structure, where $g(n)$ and $f(n)$ are defined as in Corollaries~\ref{cor:succind} and~\ref{cor:rangenon}. \end{corollary} \subsection{Ordered Substring Searching} \label{sec:sortsubstr} Suppose that a data structure contains a text $T$ and we want to report occurrences of a query pattern $P$ in the left-to-right order, i.e., in the same order as they appear in $T$; in some cases we may want to find only the $k$ leftmost occurrences.
In this section we describe two solutions for this problem. Then we show how sorted range reporting can be used to solve the position-restricted variant of this problem. We denote by $\idrm{occ}$ the number of $P$'s occurrences in $T$ that are reported when a query is answered. \paragraph{Data Structure with Optimal Query Time.} Such queries can be answered in $O(|P|+\idrm{occ})$ time and $O(n)$ space using the suffix tree and the data structure of Brodal \emph{et~al.}~\cite{BrodalFGL09}. Positions of suffixes are stored in lexicographic order in the suffix array $A$; the $k$-th entry $A[k]$ contains the starting position of the $k$-th suffix in the lexicographic order. In~\cite{BrodalFGL09} the authors described an $O(n)$ space data structure that answers online sorted range reporting queries: for any $i\le j$, we can report in $O(j-i+1)$ time all entries $A[t]$, $i\le t\le j$, sorted in increasing order by their values. Occurrences of a pattern $P$ can be reported in the left-to-right order as follows. Using a suffix tree, we find $\idit{left}(P)$ and $\idit{right}(P)$ in $O(|P|)$ time. Then we report all suffixes in the interval $[\idit{left}(P),\idit{right}(P)]$ sorted by their starting positions using the data structure of~\cite{BrodalFGL09} on $A$. \begin{corollary}\label{cor:sortlin} We can answer a sorted substring matching query in $O(|P|+\idrm{occ})$ time using an $O(n)$ space data structure. \end{corollary} \paragraph{Succinct Data Structure.} The space usage of a data structure for sorted pattern matching can be further reduced. We store a compressed suffix array for $T$ and a succinct data structure for range minimum queries. We use the implementation of the compressed suffix array described in~\cite{GrossiGV03} that needs $(1+1/\varepsilon)nH_k+o(n)$ bits for $\sigma=\log^{O(1)}n$, where $\sigma$ denotes the alphabet size and $H_k$ is the $k$-th order entropy.
Using the results of~\cite{GrossiGV03}, we can find the position of the $i$-th lexicographically smallest suffix in $O(\log^{\varepsilon} n)$ time. We can also find $\idit{left}(P)$ and $\idit{right}(P)$ for any $P$ in $O(|P|)$ time. We also store the range minimum data structure~\cite{Fis10} for the array $A$ defined above. For any $i\le j$, we can find an index $k=\idsf{rmq}(i,j)$ such that $A[k]\le A[t]$ for any $i\le t\le j$. Observe that $A$ itself is not stored; we only store the structure from~\cite{Fis10} that uses $O(n)$ bits of space. Now occurrences of $P$ are reported as follows. A queue $Q$, initially empty, stores suffix positions; with every suffix position $p$ we also store an interval $[l_p,r_p]$ and the rank $i_p$ of the suffix that starts at position $p$. Let $l=\idit{left}(P)$ and $r=\idit{right}(P)$. We find $i_f=\idsf{rmq}(l,r)$ and the position $p_f$ of the suffix with rank $i_f$. The position $p_f$ with its rank $i_f$ and the associated interval $[l,r]$ is inserted into $Q$. We repeat the following steps until $Q$ is empty. The item with the minimal value of $p_t$ is extracted from $Q$. Let $i_t$ and $[l_t,r_t]$ denote the rank and interval stored with $p_t$. We answer queries $i'=\idsf{rmq}(l_t,i_t-1)$ and $i''=\idsf{rmq}(i_t+1,r_t)$, provided the respective intervals are non-empty, and identify the positions $p'$, $p''$ of suffixes with ranks $i'$, $i''$. Finally, we insert items $(p',i',[l_t,i_t-1])$ and $(p'',i'',[i_t+1,r_t])$ into $Q$. Using the van Emde Boas data structure, we can implement each operation on $Q$ in $O(\log \log n)$ time. We can find the position of a suffix with rank $i$ in $O(\log^{\varepsilon} n)$ time. Thus the total time that we need to answer a query is $O(|P|+\idrm{occ}\log^{\varepsilon} n)$. Our data structure uses $(1+1/\varepsilon)nH_k+O(n)$ bits. We observe, however, that we need $O(\idrm{occ}\log n)$ additional bits at the time when a query is answered.
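The reporting procedure above can be sketched as follows; this is an illustrative rendering in which the suffix array $A$ is stored explicitly and the range-minimum query is computed naively, standing in for the compressed suffix array of~\cite{GrossiGV03} and the succinct structure of~\cite{Fis10} (the function names are ours, not from the cited papers):

```python
import heapq

def report_in_text_order(text, pattern):
    # naive suffix array; a real implementation uses a compressed suffix array
    A = sorted(range(len(text)), key=lambda i: text[i:])
    # ranks of suffixes prefixed by the pattern, i.e. [left(P), right(P)]
    idx = [k for k, i in enumerate(A) if text[i:i + len(pattern)] == pattern]
    if not idx:
        return []
    l, r = idx[0], idx[-1]

    def rmq(i, j):
        # naive range minimum on A[i..j]; stands in for the O(n)-bit structure
        return min(range(i, j + 1), key=A.__getitem__)

    out = []
    k = rmq(l, r)
    Q = [(A[k], k, l, r)]                 # (position, rank, associated interval)
    while Q:
        p, i, li, ri = heapq.heappop(Q)   # item with minimal position
        out.append(p)
        if li <= i - 1:                   # left subinterval, if non-empty
            k = rmq(li, i - 1)
            heapq.heappush(Q, (A[k], k, li, i - 1))
        if i + 1 <= ri:                   # right subinterval, if non-empty
            k = rmq(i + 1, ri)
            heapq.heappush(Q, (A[k], k, i + 1, ri))
    return out
```

Stopping after $k$ extractions yields the $k$ leftmost occurrences without processing the rest of the interval.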
\begin{corollary}\label{cor:sortcomp} If the alphabet size $\sigma=\log^{O(1)}n$, then we can answer an ordered substring searching query in $O(|P|+\idrm{occ}\log^{\varepsilon}n)$ time using a $(1+1/\varepsilon)nH_k+O(n)$-bit data structure. \end{corollary} \paragraph{Position-Restricted Ordered Substring Searching.} The position-restricted substring searching problem was introduced by M{\"a}kinen and Navarro in~\cite{MakinN07}. Given a range $[i,j]$ we want to report all occurrences of $P$ that start at position $t$, $i\le t\le j$. If we want to report occurrences of $P$ at positions from $[i,j]$ in the sorted order, then this is equivalent to answering a sorted range reporting query $[i,j]\times [\idit{left}(P),\idit{right}(P)]$. Hence, we can obtain the same time-space trade-offs as in Theorems~\ref{theor:spaceeff} and~\ref{theor:2dim}. \subsection{Maximal Points in a 2D Range and Rectangular Visibility} A point $p$ \emph{dominates} another point $q$ if $p.x\ge q.x$ and $p.y\ge q.y$. A point $p\in S$ is \emph{a maximal point} if $p$ is not dominated by any other point $q\in S$. In a two-dimensional maximal points range query, we must find all maximal points in $Q\cap S$ for a query rectangle $Q$. We refer to~\cite{BT11} and references therein for a description of previous results. We can answer such queries using orthogonal range successor queries. For simplicity, we assume that all points have different $x$- and $y$-coordinates. Suppose that maximal points in the range $Q=[a,b]\times [c,d]$ must be listed. For $i\ge 1$, we report a point $p_i$ such that $p_i.x\ge p.x$ for any $p\in Q_{i-1}\cap S$, where $Q_0=Q$ and $Q_i=[a,p_i.x)\times (p_i.y,d]$ for $i\ge 1$. Our reporting procedure is completed when $Q_i\cap S=\emptyset$. Clearly, finding a point $p_i$ or determining that no such $p_i$ exists is equivalent to answering a range successor query for $Q_{i-1}$.
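The iteration above can be sketched as follows for points with distinct integer coordinates; `range_successor` is a naive stand-in (with a hypothetical name) for an orthogonal range successor query, here returning the point of maximum $x$-coordinate in the query rectangle:

```python
def maximal_points(points, rect):
    # points: list of (x, y) with pairwise distinct coordinates
    a, b, c, d = rect                  # query rectangle Q = [a,b] x [c,d]

    def range_successor(ax, bx, cy, dy):
        # naive stand-in: point of maximum x in [ax,bx] x [cy,dy], or None
        inside = [p for p in points if ax <= p[0] <= bx and cy <= p[1] <= dy]
        return max(inside) if inside else None

    out = []
    bx, cy = b, c                      # current rectangle Q_{i-1}
    while True:
        p = range_successor(a, bx, cy, d)
        if p is None:                  # Q_{i-1} contains no point of S: done
            return out
        out.append(p)                  # p_i is maximal in Q cap S
        bx, cy = p[0] - 1, p[1] + 1    # shrink to Q_i, excluding p_i
```

Each reported point dominates everything to its lower left, so the points come out ordered by decreasing $x$- and increasing $y$-coordinate.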
Thus we can find $k$ maximal points in $O(kg(n))$ time using an $O(nf(n))$ space data structure, where $g(n)$ and $f(n)$ are again defined as in Corollary~\ref{cor:succind}. A point $p\in S$ is rectangularly visible from a point $q$ if $Q_{pq}\cap (S\setminus\{p\})=\emptyset$, where $Q_{pq}$ is the rectangle with points $p$ and $q$ at its opposite corners. In the rectangular visibility problem, we must determine all points $p\in S$ that are rectangularly visible from a query point $q$. The rectangular visibility problem is equivalent to finding maximal points in $Q\cap S$ for $Q=[0,q.x]\times [0,q.y]$. Hence, we can find points rectangularly visible from $q$ in $O(kg(n))$ time using an $O(nf(n))$ space data structure. \bibliographystyle{abbrv}
\section{Introduction} The zero-range process is one of the interacting particle systems introduced in the seminal paper \cite{spitzer70}. The process has unbounded local state space, i.e. there is no restriction on the number of particles per site, and the jump rate $g(n)$ at a given site depends only on the number of particles $n$ at that site. This simple zero-range interaction leads to a product structure of the stationary measures \cite{andjel82,spitzer70}, and initial interest focused on the existence of the dynamics under general conditions \cite{andjel82} and on establishing hydrodynamic limits. These questions have been successfully addressed in the case of attractive zero-range processes when $g(n)$ is a non-decreasing function, and results are summarized in \cite{kipnislandim}. For such processes with additional space dependence of the rates $g_x$, there are also a number of rigorous results regarding condensation on slow sites \cite{andjeletal00,ferrarisisko07,landim96}.\\ \noindent More recently, there has been increasing interest in zero-range processes with spatially homogeneous jump rates $g(n)$ decreasing with the number of particles. This results in an effective attraction of the particles and can lead to condensation phenomena. A generic family of models with that property has been introduced in the theoretical physics literature \cite{evans00}, with asymptotic behaviour of the jump rates \begin{eqnarray}\label{r0} g(n)\simeq 1+\frac{b}{n^\lambda}\quad\mbox{as }n\to\infty\ . \end{eqnarray} For $\lambda\in (0,1)$, $b>0$ and for $\lambda =1$, $b>2$ the following phase transition was established using heuristic arguments: if the particle density $\rho$ exceeds a critical value $\rho_c$, the system phase separates into a homogeneous background with density $\rho_c$ and a condensate, a single randomly located lattice site that contains a macroscopic fraction of all the particles.
This type of condensation appears in diverse contexts such as traffic jamming, gelation in networks, or wealth condensation in macro-economies, and zero range processes or simple variants have been used as prototype models to explain these phenomena (see \cite{evansetal05} for a review).\\ \noindent The existence of invariant measures with simple product structure makes the problem mathematically tractable. Jeon, March and Pittel showed in \cite{jeonetal00} that for some cases of zero-range processes the maximum site contains a non-zero fraction of all the particles. Condensation has been established rigorously in \cite{grosskinskyetal03} by proving the equivalence of ensembles in the thermodynamic limit, where the lattice size $L$ and the number of particles $N$ tend to infinity such that $N/L\to\rho$. This implies convergence of finite-dimensional marginals of stationary measures conditioned on a total particle number $N$, to stationary product measures with density $\rho$ in the subcritical case $\rho\leq\rho_c$, and with density $\rho_c$ in the supercritical case $\rho >\rho_c$. In the latter case the condition on the particle number is an atypical event which is most likely realized by a large deviation of the maximum component, and the problem can be described as Gibbs conditioning for measures without exponential moments. It turns out (cf. \cite{armendarizetal08}) that a strong form of the equivalence holds in the supercritical case, which determines the asymptotic distribution of the particles on all $L$ sites. A similar result has been established in \cite{ferrarietal07} on a lattice of fixed size $L$ in the limit $N\to\infty$, and the local equivalence of ensembles result was generalized to processes with several particle species in \cite{grosskinsky08}. 
More recent rigorous results address metastability for the motion of the condensate \cite{beltranetal10,beltranetal09}.\\ \noindent In this paper we study the properties of the condensation transition at the critical density $\rho_c$ for the processes introduced in \cite{evans00} with rates (\ref{r0}), to understand the onset of the condensate formation. We consider the thermodynamic limit with $N/L\to\rho_c$, where the excess mass $N-\rho_c L$ is on a scale $o(L)$. Our results are formulated in Section 2.2 and provide a rather complete picture of the transition from homogeneous subcritical to condensed supercritical behaviour. It turns out that the condensate forms suddenly on a critical scale $N-\rho_c L\sim \Delta_L$, which is identified in Theorems \ref{th1} and \ref{th3} to be \begin{equation}\label{cscale0} \Delta_L =\begin{cases} \sigma\sqrt{(b-3)L\log L}&\mbox{ for}\quad\lambda =1,\ b>3\quad\mbox{and}\\ c_\lambda(\sigma^2 L)^{\frac{1}{1+\lambda}}&\mbox{ for}\quad\lambda \in (0,1),\ b>0\ .\end{cases} \end{equation} Our results imply a weak law of large numbers for the ratio $M_L /(N-\rho_c L)$, where $M_L$ is the maximum occupation number, which is illustrated in Figure~\ref{fig:scale}. The ratio exhibits a sudden jump from $0$ to a positive value when the excess mass reaches the critical size $\Delta_L$. At this point both values can occur with positive probability depending on sub-leading orders of the excess mass, which is discussed in detail in Section 2.3. For $\lambda =1$ the full excess mass is concentrated in the maximum right above the critical scale. On the other hand, for $\lambda\in (0,1)$ the excess mass is shared between the condensate and the bulk, and the condensate fraction increases from $2\lambda /(1+\lambda)$ to $1$ only as $(N-\rho_c L)/ \Delta_L \to\infty$.
Theorem \ref{bulk} provides results for the bulk fluctuations, which imply that the mass outside the maximum is always distributed homogeneously and the system typically contains at most one condensate site. Theorems \ref{th1} and \ref{th3} also cover the fluctuations of the maximum, which change from standard extreme value statistics to Gaussian. This is complemented by Theorems \ref{th2} and \ref{th4} on downside deviations, which give a detailed description of the crossover to the expected Gumbel distributions in the subcritical regime ($\rho <\rho_c$), where the marginals have exponential tails. In \cite{evansetal08} the fluctuations of the maximum for $\lambda=1$ were shown, by means of saddle point computations, to change from Gumbel ($\rho <\rho_c$), via Fr\'echet ($\rho =\rho_c$), to Gaussian or stable law fluctuations ($\rho >\rho_c$), raising the question of how the transition between these different regimes occurs. Our results around the critical point provide a detailed, rigorous answer to that question, covering also the case $\lambda\in (0,1)$. We use previous results on local limit theorems for moderate deviations of random variables with power-law distribution \cite{doney01} for the case $\lambda =1$, and stretched exponential distribution \cite{nagaev68} for $\lambda\in (0,1)$. In the latter case we can also extend the results for $\rho >\rho_c$ (Corollary \ref{corr}) to parameter values that were not covered by previous results \cite{armendarizetal08}.\\ \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{scale2} \end{center} \caption{ \label{fig:scale} Illustration of the law of large numbers for the excess mass fraction $\frac{M_L}{N-\rho_c L}$ in the condensate on the critical scale $\Delta_L$ (\ref{cscale0}). For $\lambda =1$ the results are given in Theorem \ref{th1} ((\ref{conv0}) and (\ref{conv})) and for $\lambda\in (0,1)$ in Theorem \ref{th3} ((\ref{conv0l}) and (\ref{convl})).
The behaviour at $1$ depends on the sub-leading terms in the excess mass, as detailed in (\ref{gammal1}) and (\ref{subl}). } \end{figure} \noindent In general, the onset of phase separation and phase coexistence at the critical scale is a classical question of mathematical statistical mechanics. This has been studied for example in the Ising model and related liquid/vapour systems in \cite{biskupetal02,biskupetal03}, where a major point is the shape of critical `droplets'. Here we treat this question in the case of zero-range condensation, where the main mathematical challenges are related to subexponential scales and a lack of symmetry between the fluid and condensed phase. The condensate turns out to always concentrate on a single lattice site (even at criticality), and contains a positive fraction of the excess mass. In contrast to liquid/vapour systems, this fraction is not `universal', but depends on the system parameter $\lambda$ (see also the discussion in Section 2.3). From a mathematical point of view, the analysis includes interesting connections to extreme value statistics and large deviations for subexponential random variables, which in itself is an area of recent research interest (see \cite{denisovetal08,al10} and references therein). Our results also provide a detailed understanding of finite-size effects and metastability close to the critical point, which are important in applications such as traffic flow and granular clustering (see \cite{chlebounetal10} and references therein). \section{Definitions and results} \subsection{The zero-range process and condensation} We consider zero-range processes on a finite set $\Lambda_L$ of size $L$.
Given a jump rate function $g:{\mathbb{N}}_{0}=\{0,1,2,\ldots\}\to[0,+\infty)$ such that $g(n)=0\Leftrightarrow n=0$ and a set of transition probabilities $p(\cdot,\cdot)$ on $\Lambda_L\times\Lambda_L$, a zero-range process is defined as a Markov process on the state space $X_L={\mathbb{N}}_{0}^{\Lambda_L}$ of all particle configurations \begin{eqnarray} \eta =(\eta_x :x\in\Lambda_L )\ , \end{eqnarray} where $\eta_x \in{\mathbb{N}}_0$ is the local occupation number at site $x$. The dynamics is given by the generator \begin{eqnarray}\label{gen} {\mathcal{L}} f(\eta )=\sum_{x,y\in\Lambda_L} g(\eta_x ) p(x,y)\big( f(\eta^{x,y} )-f(\eta )\big) \end{eqnarray} using the notation\quad $\eta^{x,y}_z=\left\{\begin{array}{cl} \eta_x -1,\ &\ z=x \,\text{ and }\,\eta_x>0\\ \eta_y +1, &\ z=y\, \text{ and }\,\eta_x>0\\ \eta_z,\ &\mbox{ otherwise.}\end{array}\right.$ \\ \noindent For a technical discussion of the domain of test functions $f$ of the generator and the corresponding construction of the semigroup we refer to \cite{andjel82}. The practical meaning of (\ref{gen}) is that any given site $x$ loses one particle with rate $g(\eta_x )$ and this particle then jumps to site $y$ with probability $p(x,y)$. To avoid degeneracies, $p(x,y)$ should be irreducible transition probabilities of a random walk on $\Lambda_L$. This way, the number of particles is the only conserved quantity of the process, leading to a family of stationary measures indexed by the particle density. In the following we are interested in the situation where these measures are spatially homogeneous. This is guaranteed by the condition that the harmonic equations \begin{eqnarray} \sum_{x\in\Lambda_L} p(x,y)\lambda_x =\lambda_y\ ,\quad y\in\Lambda_L, \end{eqnarray} have the constant solution $\lambda_x \equiv 1$, and by the irreducibility of $p(x,y)$ this implies that every solution is constant.
This is for example the case if $\Lambda_L$ is a regular periodic lattice and $p(x,y)$ is translation invariant, such as $\Lambda_L ={\mathbb{Z}} /L{\mathbb{Z}}$ and $p(x,y)=\delta_{y,x+1}$ for totally asymmetric or $p(x,y)=\frac12\delta_{y,x+1} +\frac12\delta_{y,x-1}$ for symmetric nearest-neighbour hopping.\\ \noindent It is well known (see e.g. \cite{andjel82,spitzer70}) that under the above conditions the zero-range process has a family of stationary homogeneous product measures $\nu_\phi$. The occupation numbers $\eta_x$ are i.i.d. random variables with marginal distribution \begin{eqnarray}\label{smeas} \nu_\phi \big[\eta_x =n\big]=\frac{1}{z(\phi )}\, w(n)\,\phi^n \quad\mbox{where}\quad w(n)=\prod_{k=1}^n \frac{1}{g(k)}\ . \end{eqnarray} The parameter $\phi$ of the stationary measures is called the fugacity, and the measures exist for all $\phi\geq 0$ such that the normalization (partition function) is finite, i.e. \begin{eqnarray} z (\phi ):=\sum_{n=0}^\infty w(n)\,\phi^n <\infty\ . \end{eqnarray} The particle density as a function of $\phi$ can be computed as \begin{eqnarray} R (\phi ):={\mathbb{E}}^{\nu_\phi} \big[\eta_x \big]=\phi\ \partial_\phi \log z(\phi ), \end{eqnarray} and turns out to be strictly increasing and continuous with $R(0)=0$.\\ \noindent In this paper we consider the family of models introduced in \cite{evans00}, where the jump rates have asymptotic behaviour \begin{eqnarray}\label{r0new} g(n)\simeq 1+\frac{b}{n^\lambda}\quad\mbox{as }n\to\infty\ , \end{eqnarray} with $b>0$ and $\lambda\in (0,1]$. In (\ref{r0new}) and hereafter we use the notation $a_n\simeq b_n$ as $n\to\infty$, if $\lim_{n\to\infty}a_n/b_n=1.$ We will also write $a_n\sim b_n$ as $n\to\infty$ if there is a constant $C>1$ such that $C^{-1}\le a_n/b_n\le C$ for sufficiently large $n$. 
With (\ref{smeas}) this definition of jump rates leads to stationary weights with asymptotic power law decay \begin{eqnarray}\label{plaw} w(n)\simeq A_1 n^{-b}\quad\mbox{for }\lambda =1\ , \end{eqnarray} and stretched exponential decay \begin{eqnarray}\label{sexp} w(n)\simeq A_\lambda \exp\Big(-\frac{b}{1-\lambda}n^{1-\lambda}\Big)\quad\mbox{for }\lambda \in (0,1)\ , \end{eqnarray} with constant prefactors $A_\lambda$. In the second case the distributions (\ref{smeas}) are well defined for all $\phi\in [0,1]$ with finite maximal (\textit{critical}) density \begin{eqnarray} \rho_c :=R(1)<\infty \label{rhoc} \end{eqnarray} and finite corresponding variance \begin{eqnarray} \sigma^2 :={\mathbb{E}}^{\nu_1} \big[\eta_x^2 \big]-\rho_c^2 <\infty\ . \end{eqnarray} If $\lambda =1$ the corresponding variance is finite if $b>3$, which we will assume hereafter. The case $2<b\leq 3$ is not covered by our main results, and we discuss it briefly in Section 2.3. In general, (\ref{sexp}) also contains terms of lower order $n^{1-k\lambda}$, $k\geq 2$, in the exponent, which may contribute to the asymptotic behaviour for $\lambda\in (0,1/2]$ depending on the subleading terms in the jump rates (\ref{r0new}). To avoid these complications when $\lambda\leq 1/2$, we focus on processes with rates (\ref{r0new}) for which (\ref{sexp}) holds. The simplest way to meet this condition is to choose $g(n)=w(n-1)/w(n)$, $n\ge 1$, with $w(n)$ as on the right-hand side of (\ref{sexp}) with $A_\lambda=1$.\\ \noindent It has been shown in \cite{evans00,grosskinskyetal03} that when the critical density is finite the system exhibits a condensation transition that can be quantified as follows. Since the number of particles is conserved by the microscopic dynamics, for each $N\in{\mathbb{N}}$ the subspaces \begin{eqnarray} X_{L,N} =\big\{\eta\in X_L :S_L (\eta )=N\big\}\quad\mbox{where}\quad S_L (\eta )=\sum_{x\in\Lambda_L} \eta_x \end{eqnarray} are invariant.
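As a numerical illustration (not needed for the analysis), $\rho_c$ and $\sigma^2$ can be evaluated by truncating the series defining $z(1)$; the sketch below uses the rates $g(n)=1+b/n^\lambda$ exactly, for which $\lambda=0.6$, $b=2$ gives values close to $\rho_c\approx 0.842$ and $\sigma^2\approx 2.55$ quoted in the caption of Figure~\ref{fig:profile}:

```python
def critical_moments(b, lam, nmax=2000):
    # stationary weights w(n) = prod_{k<=n} 1/g(k) at fugacity phi = 1,
    # truncated at nmax; the stretched exponential tail is negligible there
    g = lambda k: 1.0 + b / k**lam
    w, weights = 1.0, [1.0]
    for k in range(1, nmax + 1):
        w /= g(k)
        weights.append(w)
    z = sum(weights)                                         # z(1)
    rho_c = sum(n * wn for n, wn in enumerate(weights)) / z  # R(1)
    m2 = sum(n * n * wn for n, wn in enumerate(weights)) / z
    return rho_c, m2 - rho_c**2                              # rho_c, sigma^2
```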
The zero range process is irreducible over each of these subspaces and the unique invariant measure supported on $X_{L,N}$ is given by \begin{eqnarray} \mu_{L,N} =\nu_\phi \big[\cdot\ |S_L =N\big]\ . \end{eqnarray} It is not hard to see that the measures $\mu_{L,N}$ are independent of $\phi$ on the right-hand side. A question of interest is the convergence of the measures $\mu_{L,N}$ in the thermodynamic limit $L,N\to\infty$, $N/L\to\rho$. This is answered by the equivalence of ensembles principle, which states that in the limit the measures $\mu_{L,N}$ locally behave like a product measure $\nu_\phi$ for a suitable $\phi$. Note that when $\rho\le\rho_c$ there exists a unique $\phi =\phi(\rho)$ such that $\rho=R\big(\phi\big)$, whereas if $\rho>\rho_c$ no such $\phi$ exists. The equivalence of ensembles precisely states that if $f$ is a cylinder function, i.e. a function that only depends on the configuration $\eta$ on a finite number of sites, then \begin{eqnarray}\label{equi} \mu_{L,N} \big[f\big]\to \nu_\phi \big[f\big]\ , \end{eqnarray} provided that (see \cite{grosskinsky08} and Appendix 2.1 in \cite{kipnislandim}) \begin{eqnarray} R(\phi )=\rho\quad\mbox{and}\quad f\in L^2 (\nu_\phi )\quad &\mbox{for} &\quad\rho <\rho_c \quad\mbox{and}\nonumber\\ \phi =\phi_c =1\quad\mbox{and}\quad f\mbox{ bounded}\quad &\mbox{for} &\quad\rho \geq\rho_c \ . \end{eqnarray} The behaviour described above is accompanied by the emergence of a condensate, a site which contains $O(L)$ particles. If $\rho<\rho_c$ one can easily check that the limiting measures $\nu_{\phi(\rho)}$ have finite exponential moments and the size of the maximum component $M_L (\eta )=\max_{x\in\Lambda_L} \eta_x$ is typically $O(\log L)$. If on the other hand $\rho >\rho_c$ it has been shown in \cite{jeonetal00} for the power law case that \begin{eqnarray}\label{lln} \frac1L M_L \stackrel{\mu_{L,N}}{\longrightarrow}\rho -\rho_c\ . 
\end{eqnarray} The notation in (\ref{lln}), which we also use in the following, denotes convergence in probability w.r.t.\ the conditional laws $\mu_{L,N}$, i.e. \begin{eqnarray} \mu_{L,N} \Big[\ \Big|\frac1L M_L -(\rho -\rho_c )\Big| >\epsilon\Big]\to 0\quad\mbox{for all }\epsilon >0\ . \end{eqnarray} Equation (\ref{equi}) has been generalized in \cite{armendarizetal08} for $\rho >\rho_c$ to test functions $f$ depending on all sites but the maximally occupied one, and equation (\ref{lln}) is proved for stretched exponentials of the form (\ref{sexp}) with $\lambda >1/2$, as well. An immediate corollary is that the size of the second largest component is typically $o(L)$, which implies that the condensate typically covers only a single randomly located site. \subsection{Main results} In the following we study the distribution of the excess mass in the system at the critical point to fully understand the emergence of the condensate when the density increases from sub- to supercritical values. We consider the thermodynamic limit $N/L\to\rho_c$ where the excess mass is on a sub-extensive scale $|N-\rho_c L|=o(L)$. Our first theorem on the power-law case (\ref{plaw}) relies on a result of Doney (Theorem 2 in \cite{doney01}) for the estimation of $\nu_{\phi_c}\big[S_L=N\big]$. Precisely, for $z:=(N-\rho_c L )/\sqrt{L}\to\infty$, we get \begin{equation} \nu_{\phi_c} \big[S_L =N\big]=\frac{1}{\sqrt{2\pi\sigma^2 L}}\, e^{-\frac{z^2}{2\sigma^2}}\big( 1+o(1)\big) +L\ \nu_{\phi_c}\Big[\eta_x=\big[ z\sqrt{L}\big]\Big]\big( 1+o(1)\big) \label{split} \end{equation} as $L\to\infty$. It turns out that when $N-\rho_cL$ is close to the typical scale $\sqrt{L}$ (case (a) in the theorem below) the first term of the sum dominates the right-hand side of (\ref{split}) and the excess mass is distributed homogeneously among the sites.
On the other hand, when $N-\rho_cL$ is large enough (case (b)) it is the second term that dominates the right hand side of (\ref{split}) and this implies the existence of a condensate that carries essentially all the excess mass. Finally, there is an intermediate scale (case (c)) where the two terms are of the same order and both scenarios can occur with positive probability. \begin{theorem} {\bf (Upside moderate deviations, power law case)} \noindent Let $\lambda =1$ and $b>3$, so that $\sigma^2<\infty$. Assume that $N\ge\rho_c L$ and define $\gamma_L \in{\mathbb{R}}$ by \begin{equation} N=\rho_c L+\sigma\sqrt{(b-3)L\log L}\,\Big(1+\frac{b}{2(b-3)}\frac{\log\log L}{\log L}+\frac{\gamma_L}{\log L}\Big). \label{gammal1} \end{equation} a) If $\gamma_L\to-\infty $ the distribution under $\mu_{L,N}$ of the maximum $M_L$ is asymptotically equivalent to its distribution under $\nu_{\phi_c}$. Precisely, for all $x>0$ we have \begin{eqnarray}\label{mda} \lim_{L\to\infty}\mu_{L,N}\left[ \frac{M_L}{L^{\frac{1}{b-1}}}\le x\right]=\lim_{L\to\infty}\nu_{\phi_c}\left[ \frac{M_L}{L^{\frac{1}{b-1}}}\le x\right]=\exp\left\{-\frac{A_1 x^{1-b}}{b-1}\right\}. \end{eqnarray} \begin{equation}\label{conv0} \text{In particular,\ if }\ N-\rho_cL\gg L^{\frac{1}{b-1}} \ \text{ then }\qquad \frac{M_L}{N-\rho_c L} \stackrel{\mu_{L,N}}{\longrightarrow}0. \end{equation} b) If $\gamma_L\to+\infty $ the normalized fluctuations of the maximum around the excess mass under $\mu_{L,N}$ converge in distribution to a normal r.v. \begin{eqnarray}\label{fluct} \frac{M_L -(N-\rho_c L )}{\sqrt{L\sigma^2}} \stackrel{\mu_{L,N}}{\Longrightarrow} {\cal N} (0,1)\ . \end{eqnarray} \begin{eqnarray}\label{conv} \text{In particular, }\qquad \frac{M_L}{N-\rho_c L} \stackrel{\mu_{L,N}}{\longrightarrow}1. 
\end{eqnarray} c) If $\gamma_L \to\gamma\in{\mathbb{R}}$ we have convergence in distribution to a Bernoulli random variable, \begin{eqnarray}\label{bern} \qquad\ \qquad\ \qquad\ \frac{M_L}{N-\rho_c L} \stackrel{\mu_{L,N}}{\Longrightarrow} Be(p_\gamma )\ , \end{eqnarray} where $p_\gamma \in (0,1)$ is such that $p_\gamma \to 0\ (1)$, as $\gamma\to -\infty\ (+\infty)$. An explicit expression for $p_\gamma$ is given in (\ref{lgdef}) and (\ref{bernoulimit}) in the proofs section. \label{th1} \end{theorem} \noindent The next result connects the fluctuations of the maximum to the extreme value statistics expected in the subcritical regime. \begin{theorem} {\bf (Downside moderate deviations, power law case)} \noindent Let $\lambda =1$ and $b>3$ and define $\omega_L \geq 0$ by \begin{eqnarray} N=\rho_c L-\omega_L \sigma^2 L^{\frac{b-2}{b-1}}\ . \end{eqnarray} \noindent a) If $\omega_L \to 0$ the distribution under $\mu_{L,N}$ of the maximum $M_L$ is asymptotically equivalent to its distribution under $\nu_{\phi_c}$. Precisely, for all $x>0$ we get \begin{eqnarray}\label{1dpl} \lim_{L\to \infty}\mu_{L,N}\left[ \frac{M_L}{L^{\frac{1}{b-1}}}\le x\right]=\exp\left\{-\frac{A_1 x^{1-b}}{b-1}\right\}. \end{eqnarray} b) If $\omega_L \to \omega >0$ then for all $x>0$ \begin{eqnarray}\label{2dpl} \lim_{L\to \infty} \mu_{L,N}\left[\frac{M_L}{L^{\frac{1}{b-1}}}\le x \right]=\exp\left\{-A_1 \int_x^{\infty}e^{-\omega t}\frac{dt}{t^b}\right\}.\end{eqnarray} c) If $\omega_L \to \infty$ then there exist sequences $B_L\to \infty$ and $s_L= \frac{\rho_cL-N}{\sigma^2 L}(1+o(1))$ with $B_Ls_L\to \infty$, such that for all $x\in\mathbb{R}$ \begin{eqnarray}\label{3dpl} \lim_{L\to \infty} \mu_{L,N}\left[\frac{M_L-B_L}{1/s_L}\le x\right]=\exp\{-e^{-x}\}.\end{eqnarray} \label{th2} \end{theorem} \noindent We return to a more detailed discussion of these results in Section 2.3 after stating the results for the stretched exponential tail ($\lambda<1$).
For this case the counterpart of estimate (\ref{split}) was obtained by A.V. Nagaev in \cite{nagaev68}, where the size of the maximum is also discussed; these results are summarized in the appendix. In fact, a careful reading reveals that equation (\ref{fluct2}) below is already contained there. \begin{theorem} {\bf (Upside moderate deviations, stretched exponential case)} \noindent Let $\lambda \in (0,1)$ and $c_\lambda=(1+\lambda)(2\lambda)^{-\frac{\lambda}{1+\lambda}}\left(\frac{b}{1-\lambda}\right)^{\frac{1}{1+\lambda}}$. Assume that $N\ge\rho_cL$ and define $t_L \geq 0$ by \begin{eqnarray}\label{gammal2} N=\rho_c L+t_L(\sigma^2 L)^{\frac{1}{1+\lambda}}. \end{eqnarray} a) If $\limsup t_L<c_\lambda$ the distribution under $\mu_{L,N}$ of the maximum $M_L$ is asymptotically equivalent to its distribution under $\nu_{\phi_c}$. Precisely, there exist sequences $y_L,b_L$ such that \begin{eqnarray} y_L\simeq\Big(\frac{1-\lambda}{b}\log L\Big)^{\frac{1}{1-\lambda}},\qquad b_L\simeq\frac{y_L^\lambda}{b} \qquad\text{as } L\to\infty, \label{extremes} \end{eqnarray} \noindent and for all $x\in{\mathbb{R}}$ we have \begin{eqnarray} \lim_{L\to\infty}\mu_{L,N}\left[ \frac{M_L-y_L}{b_L}\le x\right]=\lim_{L\to\infty}\nu_{\phi_c}\left[ \frac{M_L-y_L}{b_L}\le x\right]=e^{-e^{-x}}. \label{mdal} \end{eqnarray} \begin{equation} \text{In particular, \ if }\ N-\rho_c L\gg (\log L)^{\frac{1}{1-\lambda}} \ \text{ then}\qquad \frac{M_L}{N-\rho_c L} \stackrel{\mu_{L,N}}{\longrightarrow}0. \label{conv0l} \end{equation} b) If $t_L\to t$ with $c_\lambda<t\le +\infty$, there exist a sequence $a_L$ and a function $a(t)$, $a_L\to a(t)$, such that \begin{eqnarray}\label{fluct2} \frac{M_L-(N-\rho_cL)a_L}{\sqrt{L}}\stackrel{\mu_{L,N}}{\Longrightarrow} {\cal N}\left(0, \frac{\sigma^2}{1-\frac{\lambda\big(1-a(t)\big)}{a(t)}}\right). \end{eqnarray} \begin{eqnarray}\label{convl} \text{In particular, }\qquad \frac{M_L}{N-\rho_c L} \stackrel{\mu_{L,N}}{\longrightarrow}a(t).
\end{eqnarray} The sequence $a_L$ is implicitly defined by (\ref{alef}) in the Appendix (with $a_L =1-\alpha$ and $\gamma =b/(1-\lambda)$), and when $N-\rho_c L\gg L^{\frac{1}{2\lambda}}$ we may take $a_L=1$. The limit $a(t)$ in the preceding equation is an increasing function of $t$ with \begin{eqnarray}\label{nonuni} \lim_{t\downarrow c_\lambda}a(t)=\frac{2\lambda}{1+\lambda}\qquad\text{and}\qquad\lim_{t\uparrow\infty}a(t)=a(+\infty)=1. \end{eqnarray} c) If $t_L \to c_\lambda$, assume that $\lambda >1/2$ and suppose \begin{equation} N=\rho_c L+ c_\lambda(\sigma^2 L)^{\frac{1}{1+\lambda}}-\frac{1+\lambda}{2\lambda c_\lambda}(\sigma^2 L)^{\frac{\lambda}{1+\lambda}}\big(\tfrac{3}{2}\log L +\gamma_L \big) \label{subl} \end{equation} with $\gamma_L\to \gamma\in{\mathbb{R}}$. Then we have convergence to a Bernoulli random variable, \begin{eqnarray} \frac{M_L}{N-\rho_c L}\stackrel{\mu_{L,N}}{\Longrightarrow} \frac{2\lambda}{1+\lambda} Be (p_\gamma )\ , \label{bern2} \end{eqnarray} where $p_\gamma \in (0,1)$ is such that $p_\gamma \to 0\ (1)$ for $\gamma\to -\infty\ (+\infty)$. An explicit expression for $p_\gamma$ is given in (\ref{pstretched}). \label{th3} \end{theorem} In c) analogous statements also hold for the case $\lambda\leq 1/2$, which can be derived from the results in \cite{nagaev68} summarized in the appendix. However, the order of the sub-leading scale depends on the first few Cram\'er coefficients of the distribution, and results cannot be formulated in an explicit form as above. \begin{theorem} {\bf (Downside moderate deviations, stretched exponential case)} \noindent Let $\lambda<1$ and define $\omega_L\ge 0$ by \[N=\rho_cL-\omega_LL\ (\log L)^{-\frac{1}{1-\lambda}}.\]\\ If $\omega_L\to c\in[0,+\infty]$ there exist sequences $\gamma_L$ and $\zeta_L$, both increasing to $\infty$ with $L$, such that \begin{eqnarray} \lim_{L\to \infty}\mu_{L,N}\left[ M_L\le \gamma_L+x\,\zeta_L\right]=e^{-e^{-x}},\quad x\in {\mathbb{R}}\,. 
\label{sexponential-limit} \end{eqnarray} If $\omega_L\to 0$ the distribution under $\mu_{L,N}$ of the maximum $M_L$ is asymptotically equivalent to its distribution under $\nu_{\phi_c}$. Precisely, if $y_L$ and $b_L$ are the sequences introduced in Theorem \ref{th3}.a), we can take $\gamma_L=y_L$ and $\zeta_L=b_L$ to get \[ \lim_{L\to \infty}\mu_{L,N}\left[ M_L\le y_L+x\, b_L\right]=e^{-e^{-x}},\quad x\in {\mathbb{R}}\,. \] \label{th4} \end{theorem} \noindent Our final result focuses on the fluctuations of the bulk outside the maximum. \begin{theorem} ({\bf Fluctuations of the bulk})\\ Assume $\lambda\in(0,1)$, or $\lambda=1$ and $b>3$ so that $\sigma^2<+\infty$. \\ a) In the subcritical regime, that is if $\frac{M_L}{\Delta_L}\stackrel{\mu_{L,N}}{\longrightarrow} 0$, the distribution under $\mu_{L,N}$ of the bulk fluctuation process converges in the Skorokhod space to a standard Brownian bridge conditioned to return to the origin at time 1, i.e. \[ X_s^L=\frac{1}{\sigma\sqrt L}\sum_{x=1}^{[sL]} \left(\eta_x-\frac{N}{L}\right)\stackrel{\mu_{L,N}}{\Longrightarrow} BB_s\ . \] b) In the supercritical regime, that is if $N-\rho_c L>\Delta_L$ and $\frac{M_L}{N-\rho_cL}\stackrel{\mu_{L,N}}{\longrightarrow} \kappa$, $\kappa$ a positive constant, the distribution under $\mu_{L,N}$ of the bulk fluctuation process converges in the Skorokhod space to a standard Brownian bridge plus an independent, random drift term. Precisely, if $\tilde{\eta}_x=\eta_x\mathbbm{1}\{\eta_x\le L^{1/4}\}$ then \[ Y_s^L=\frac{1}{\sigma\sqrt L}\sum_{x=1}^{[sL]} \left(\tilde{\eta}_x-\frac{N-a_L(N-\rho_cL)}{L}\right)\stackrel{\mu_{L,N}}{\Longrightarrow} BB_s+s\,\Phi\,, \] where $\Phi\sim {\cal N}\Big(0,\,1/\big(1-\frac{\lambda(1-a(t))}{a(t)}\big)\Big)$\quad is an independent random variable.\\ When $\lambda=1$, or when $\lambda\in(0,1)$ and $N-\rho_cL\gg L^{\frac{1}{2\lambda}}$ we may take $a_L=1$.
Otherwise, $a_L$ is defined by (\ref{alef}) in the Appendix (with $a_L =1-\alpha$ and $\gamma =b/(1-\lambda )$). \label{bulk} \end{theorem} \noindent The supercritical case (assertion b) above) takes a particularly simple form when $a(t)=1$: then $\Phi\sim{\cal N}(0,1)$ is a Gaussian variable independent of the Brownian bridge component, and hence $BB_s+s\,\Phi$ is a standard Brownian motion $B_s$. This is the case for the supercritical power law, or for the stretched exponential law when $\frac{N-\rho_cL}{\Delta_L}\to +\infty$. \subsection{Discussion of the main results} As already summarized in the introduction, Theorems \ref{th1} and \ref{th3} imply a weak law of large numbers for the excess mass fraction in the condensate $M_L /(N-\rho_c L)$. The critical scale $\Delta_L$ for the excess mass, above which a positive fraction of it concentrates on the maximum and forms a condensate according to (\ref{conv0}), (\ref{conv}) and (\ref{conv0l}), (\ref{convl}), is summarized in (\ref{cscale0}). It is of order $\sqrt{L\log L}$ for the power law case given precisely in (\ref{gammal1}), and the lighter tails in the stretched exponential case lead to a higher scale of order $L^{\frac{1}{1+\lambda}}$ given precisely in (\ref{gammal2}). At the critical scale the excess mass fraction can take both values with positive probability (cf. (\ref{bern}) and (\ref{bern2})), depending on sub-leading orders as detailed in (\ref{gammal1}) and (\ref{subl}). In the power law case, the condensate always contains the full excess mass (\ref{conv}) as soon as it exists. On the other hand, for stretched exponential tails the excess mass is shared between the condensate and the bulk according to (\ref{convl}) as long as $N-\rho_c L\sim \Delta_L$, and the condensate fraction $a(t)$ gradually increases to $1$. This behaviour is illustrated in Figure \ref{fig:scale} in the introduction.
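For orientation, the two leading-order critical scales can be compared numerically. The following sketch (with purely illustrative parameter values, not tied to any particular model) evaluates $\sigma\sqrt{(b-3)L\log L}$ and $c_\lambda(\sigma^2L)^{\frac{1}{1+\lambda}}$ and shows that the stretched exponential scale dominates for large $L$ whenever $\lambda<1$:

```python
import math

def delta_power_law(L, b, sigma2):
    # Leading order of the critical scale in the power law case (lambda = 1):
    # sigma * sqrt((b - 3) L log L), cf. the decomposition (gammal1).
    return math.sqrt(sigma2 * (b - 3) * L * math.log(L))

def delta_stretched(L, lam, sigma2, c_lam):
    # Leading order in the stretched exponential case:
    # c_lambda (sigma^2 L)^{1/(1+lambda)}, cf. (gammal2).
    return c_lam * (sigma2 * L) ** (1.0 / (1.0 + lam))

# Illustrative values only: b = 4, lambda = 1/2, sigma^2 = c_lambda = 1.
for L in (10**6, 10**9, 10**12):
    print(L, delta_power_law(L, 4.0, 1.0), delta_stretched(L, 0.5, 1.0, 1.0))
```

The ratio of the two scales grows like $L^{\frac{1}{1+\lambda}-\frac{1}{2}}$ up to logarithms, consistent with the discussion above.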
The results on the bulk fluctuations in Theorem \ref{bulk} imply that below criticality the excess mass is distributed homogeneously in the system, and that the same holds above criticality in the bulk, which completes the above picture. These results are illustrated in Figure \ref{fig:profile}, where we show sample profiles for a zero-range process which exhibit exactly the predicted behaviour already at a rather moderate system size of $L=1024$.\\ \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{profile1360} \end{center} \caption{ \label{fig:profile} Results of Monte Carlo simulations of the zero-range process with rates (\ref{r0}) with $\lambda =0.6$ and $b=2$, on a one-dimensional lattice with $L=1024$ sites and periodic boundary conditions. For these parameter values $\rho_c =0.842$, $\sigma^2 =2.55$ and $c_\lambda =4.09$, and we choose $N=1360$ particles, which is very close to the leading order prediction of $1356$ for the critical scale according to (\ref{subl}) with $\gamma =0$. We plot four realizations for the accumulated profile $S_k=\sum_{x=1}^k \eta_x$ against $k$, and see that both fluid and condensed realizations occur. In the condensed case, the mass is shared between the condensate and the bulk according to the prediction (\ref{nonuni}) with $\frac{2\lambda}{1+\lambda}=3/4$, as is indicated by the dashed lines. } \end{figure} \noindent The discontinuous formation of the condensate on the critical scale implies that it forms 'spontaneously' out of particles taken from the bulk of the system: When crossing the critical scale by adding more mass to the system, the number of particles joining the maximum is indeed of higher order than the number of particles that have to be added to the system in order to form the condensate. A similar phenomenon has been reported for the Ising model and related liquid/vapour systems in \cite{biskupetal02,biskupetal03}.
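As a numerical cross-check of the caption values, the leading-order prediction for $N$ can be recomputed directly from (\ref{subl}) with $\gamma=0$ and the parameters quoted for Figure \ref{fig:profile}; a short Python sketch:

```python
import math

# Parameter values taken from the caption of Figure (fig:profile).
lam, L = 0.6, 1024
rho_c, sigma2, c_lam = 0.842, 2.55, 4.09
gamma = 0.0

# Evaluate the right hand side of (subl) at gamma = 0.
leading = c_lam * (sigma2 * L) ** (1.0 / (1.0 + lam))
correction = ((1.0 + lam) / (2.0 * lam * c_lam)) \
    * (sigma2 * L) ** (lam / (1.0 + lam)) * (1.5 * math.log(L) + gamma)
N_pred = rho_c * L + leading - correction
print(round(N_pred))  # close to the quoted prediction of 1356
```

The computed value agrees with the prediction of $1356$ stated in the caption.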
In contrast to these results, the condensed excess mass fraction at criticality is not 'universal', but depends on the system parameter $\lambda$ according to (\ref{nonuni}). This might seem surprising at first sight, but the rates of the form (\ref{r0}) introduce an effective long-range interaction when the zero-range process is mapped to an exclusion model with finite local state space (see e.g. \cite{evans00,evansetal05}).\\ \noindent In addition to a law of large numbers our results also include limit theorems for the fluctuations of the maximum, which are Gaussian above the critical scale (cf. (\ref{fluct}) and (\ref{fluct2})), and given by the extreme value statistics below criticality. As long as $\lim_{L\to\infty} (N-\rho_c L)/ \Delta_L <1$, the excess mass does not affect the behaviour of the maximum. According to statements a) of Theorems \ref{th1} and \ref{th3}, $M_L$ scales as the maximum of i.i.d.\ random variables, which is proportional to $L^{\frac{1}{b-1}}$ and $(\log L)^{\frac{1}{1-\lambda}}$, respectively, with limiting Fr\'echet distribution for power law tails (\ref{mda}) (cf. \cite{evansetal08}) and Gumbel distribution for stretched exponential tails (\ref{mdal}). Theorems \ref{th2} and \ref{th4} describe the crossover to the expected Gumbel distributions in the subcritical regime, where the marginals have exponential tails. In the power law case, the change from Fr\'echet to Gumbel occurs at the critical scale $\rho_c L-N\sim L^{(b-2)/(b-1)}$ according to (\ref{2dpl}). In \cite{evansetal08} the behaviour of the maximum was predicted for $N=\rho L$ with $\rho$ smaller, equal, or bigger than $\rho_c$ for the power law case $\lambda =1$. 
Our results provide a rigorous confirmation including the stretched exponential case $\lambda\in (0,1)$, together with a full understanding of the crossover from subcritical extremal statistics to Gaussian fluctuations in the supercritical regime.\\ \noindent We point out that at criticality the correlations introduced by conditioning on the total number of particles shift from being entirely absorbed by the bulk to being entirely absorbed by the maximum. Indeed, when $N-\rho_c L\ll \Delta_L$ we know from Theorems \ref{th1} a) and \ref{th3} a) that the maximum behaves as the maximum of i.i.d. random variables with distribution $\nu_{\phi_c}$. On the other hand, if $N-\rho_c L\gg \Delta_L$, the bulk asymptotically behaves as i.i.d. random variables with distribution $\nu_{\phi_c}$ following from Theorems 1a and 1b in \cite{armendarizetal08}, and the discussion after Theorem \ref{bulk}.\\ \noindent In the stretched exponential case, there is another interesting point regarding the centering of the bulk variables in the central limit theorem: When the excess mass exceeds $\Delta_L$ the typical excess mass in the bulk is \[ (1-a_L)(N-\rho_c L)\sim\frac{\sigma^2 bL}{(N-\rho_cL)^\lambda}\ , \] as follows from the implicit definition (\ref{alef}) of $a_L$. This is of order at least $\sqrt{L}$ unless $N-\rho_c L\gg L^{1/(2\lambda )}$, hence the special centering required in Theorem \ref{bulk} b). In this case the equivalence of ensembles cannot be extended to the strong form of \cite{armendarizetal08} (Theorem 2b). Note that for $\lambda \leq 1/2$ this affects even supercritical densities, i.e. $N/L\to\rho>\rho_c$. This is why previous results did not cover this case, which is summarized in the following simple Corollary of Theorem \ref{th3}, and completes the condensation picture for supercritical densities. 
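The order-of-magnitude argument for the bulk centering can be made concrete in a small sketch (Python, with hypothetical parameter values): it evaluates the leading-order bulk excess $\sigma^2 bL/(N-\rho_cL)^\lambda$ and compares it with $\sqrt L$ for $\lambda$ below and above $1/2$ at a supercritical density.

```python
import math

def bulk_excess(L, excess, lam, sigma2=1.0, b=1.0):
    # Leading order of (1 - a_L)(N - rho_c L) ~ sigma^2 b L / (N - rho_c L)^lam.
    return sigma2 * b * L / excess ** lam

L = 10**8
excess = 0.1 * L  # supercritical density: N - rho_c L proportional to L

# For lambda <= 1/2 the bulk excess still dominates sqrt(L) ...
print(bulk_excess(L, excess, lam=0.4), math.sqrt(L))
# ... while for lambda > 1/2 it is of smaller order at these densities.
print(bulk_excess(L, excess, lam=0.6))
```

This illustrates why for $\lambda\le 1/2$ the special centering is needed even when $N/L\to\rho>\rho_c$.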
\begin{corollary} If $\lambda\in (0,1/2]$ and $N/L\to\rho >\rho_c\ $ we have \[ \frac{1}{L}\, M_L \stackrel{\mu_{L,N}}{\longrightarrow} \rho-\rho_c\,, \] and the fluctuations around this limit are given by (\ref{fluct2}), with $(1-a_L)=O(L^{-\lambda})$. \label{corr} \end{corollary} \noindent A necessary condition for our results is the existence of finite second moments, and the case $\lambda=1$ and $2<b\le 3$ is not covered by this article. The reason we cannot provide results analogous to Theorems \ref{th1} and \ref{th2} is the lack of a precise estimate for the probability of a moderate deviation of the sum in that case, similar to the result (\ref{split}) by Doney \cite{doney01} for square integrable power-law tails. Nevertheless, when the excess mass is such that \[ \P\big[S_L=N \big]=L\,\P\Big[\eta_1=[N-\rho_c L]\Big] \big(1+o(1)\big), \] we can still apply Theorem 1 in \cite{armendarizetal08} to obtain a stable limit theorem for the fluctuations of the maximum around $N-\rho_cL$. For instance if $2<b<3$, the preceding relation is true provided $N-\rho_cL{\gg}L^{\frac{1}{b-1}}$, and under this condition we get that \[ \frac{M_L}{N-\rho_cL}\stackrel{\mu_{L,N}}{\longrightarrow}1\quad\mbox{and}\quad \frac{M_L-(N-\rho_cL)}{L^{\frac{1}{b-1}}}\stackrel{\mu_{L,N}}{\Longrightarrow}G_{b-1}\ , \] where $G_{b-1}$ is a completely asymmetric stable law with index $b-1$. \section{Proofs} Since the product measures $\nu_\phi$ and the conditional distributions $\mu_{L,N}$ are exchangeable and independent of the jump probabilities $p(x,y)$, the spatial structure of zero-range configurations is irrelevant for our results. In the following we will therefore consider $\eta_1 ,\eta_2 ,\ldots$ to be i.i.d. integer valued random variables defined in a probability space $(\Omega,{\cal F},\P)$, where ${\cal F}$ is the $\sigma$-field generated by $\{\eta_i\}_{i\in{\mathbb{N}}}$ and $\P=\nu_{\phi_c}$. We further define \begin{eqnarray} p_k:=\P \big[\eta_i =k\big]. 
\end{eqnarray} Note that $p_n$ is directly proportional to the stationary weights $w(n)$ in (\ref{smeas}). Recall the notation \begin{eqnarray}\label{notation} \rho_c ={\mathbb{E}}\big[\eta_i \big]\ ,\quad\sigma^2 ={\mathbb{E}} \big[\eta_i^2 \big]-\rho_c^2 \ ,\quad S_L =\sum_{i=1}^L \eta_i \ ,\quad M_L =\max_{i=1,..,L} \eta_i \ , \end{eqnarray} and that the conditional laws are given by $\mu_{L,N} =\P \big[\cdot\ |\ S_L =N\big]$. We will denote by $x^+=\max\{x,0\}$ the positive part of a real number $x$, and by $x^-=(-x)^+$ its negative part.\\ \subsection{Preliminaries} \noindent Our proofs mainly involve explicit estimates and standard large deviations methods. One such technique consists in introducing a change of measure that renders the rare event typical. Precisely, given $\alpha>0$ and $s\in {\mathbb{R}}$, define a new measure $\P_{\alpha}(s)$ on the $\sigma$--field ${\cal F}_L:=\sigma(\eta_1,\dots,\eta_L)$ by \begin{equation} \frac{d\P_{\alpha}(s)}{d\P}\Big|_{{\cal F}_L}= \frac{1}{Z^L_{\alpha}(s)}\mathbbm{1}_{\{M_L\le \alpha\}}e^{s S_L}\,, \label{tilted} \end{equation} where the normalization above is given by \[ Z_{\alpha}(s)=\sum_{k\le \alpha} e^{sk}p_k\,. \] Note that under $\P_{\alpha}(s)$ the random variables $\{\eta_k\}_{k\in{\mathbb{N}}}$ are i.i.d., bounded above by $\alpha$, their mean value is given by \[ \rho_{\alpha}(s)=\frac{Z'_{\alpha}(s)}{Z_{\alpha}(s)}=\frac{1}{Z_{\alpha}(s)} \sum_{k\le\alpha}ke^{sk}p_k, \] and their variance is given by $\sigma_{\alpha}^2(s)=\rho'_{\alpha}(s)$. It is not hard to verify that \[ \lim_{s\to-\infty}\rho_\alpha(s)=\inf\{k\ge 0: p_k>0\} \qquad\text{and}\qquad \lim_{s\to\infty}\rho_\alpha(s)=\sup\{k\le\alpha: p_k>0\}. \] Since $\rho_\alpha(\cdot)$ is a continuous increasing function, it follows that if $N/L$ is sufficiently close to the mean $\rho_c$ of the distribution and $\alpha$ is sufficiently large, there exists an $s_*=s_*(L,N,\alpha)$ such that \begin{equation} \rho_\alpha(s_*)=\frac{N}{L}. 
\label{sstar} \end{equation} The following lemma can be applied to compute the exact asymptotics of the conditional maximum when the average is set to be a small perturbation of the mean, using an a priori estimate as input. \begin{lemma} Take $N=N(L)$ such that $\frac{N}{L}\to\rho_c$ and suppose the following conditions are satisfied:\\ \noindent i) There exists a sequence $\alpha_{L,N}\le\infty$ such that \[\lim_{L\to\infty}\mu_{L,N} \big[M_L\le\alpha_{L,N}\big]=1\qquad\text{and}\qquad \lim_{L\to\infty}\ \left(\frac{N}{L}-\rho_c\right)^+\alpha_{L,N}=0.\] \noindent ii) For each $x\in\mathbb{R}$, there exists a sequence $\beta_L(x)\leq\alpha_{L,N}$ with $\beta_L(x)\to+\infty$, such that \[ L\sum_{\beta_L(x)< k\le \alpha_{L,N}} e^{s_*k} p_k \longrightarrow \Phi(x)\in [0,\infty )\quad\text{as}\quad L\to\infty\, ,\] where $s_*=s_*(L)$ is defined as in (\ref{sstar}) with $\alpha=\alpha_{L,N}$\,. Then \[ \mu_{L,N}\left[M_L\le \b_L(x)\right] \longrightarrow e^{-\Phi(x)}\qquad \text{as}\quad L\to \infty\, . \] \label{lemma0} \end{lemma} {\em Proof:} We begin by showing that under the conditions of the Lemma $s_*^+\alpha_{L,N}\to 0.$ For ease of notation we may write $\alpha$ and $\beta$ as shorthands for $\alpha_{L,N}$ and $\beta_L(x)$ respectively. Using the elementary inequality $(x-y)(e^x-e^y)\ge(x-y)^2$, valid for all $x,y\ge 0$ we have for any $s>0$ \begin{align} &Z_{\alpha}(s)\left(\rho_{\alpha}(s)-\frac{N}{L}\right) =\sum_{k\le\alpha}\big(k-\frac{N}{L}\big)e^{sk}p_k\nonumber\\ &\qquad\ge e^{sN/L}\sum_{k\le\alpha} \big(k-\frac{N}{L}\big)p_k+s\sum_{k\le \alpha} \big(k-\frac{N}{L}\big)^2p_k\nonumber\\ &\qquad\qquad\ge e^{sN/L}\left(\rho_c-\frac{N}{L}-\sum_{k>\alpha}\big(k-\frac{N}{L}\big)p_k\right)+s\sigma^2-s\sum_{k>\alpha}\big(k-\frac{N}{L}\big)^2p_k. 
\label{s0} \end{align} If we set \[ s_0\sigma^2=2\sum_{k>\alpha}\big(k-\frac{N}{L}\big)p_k+2\big(\frac{N}{L}-\rho_c\big)^+ \] it follows from (\ref{s0}) that $\rho_\alpha(s_0)>N/L$ for sufficiently large $L$, and since $\rho_\alpha(\cdot)$ is increasing we have $s_*<s_0$. On the other hand, in view of condition (i) in the statement of the Lemma and the finiteness of the second moment we have $s_0\alpha\to 0$. Thus, \begin{equation} s_*^+\alpha_{L,N}\to 0. \label{s+} \end{equation} If $s_*<0$ we still have \[ 0\le \sum_k\big(k-\frac{N}{L}\big)\big(e^{s_*N/L}-e^{s_*k}\big)p_k=e^{s_*N/L}\big(\rho_c-\frac{N}{L}\big)\le \big(\rho_c-\frac{N}{L}\big)\longrightarrow 0, \] and since all the terms in the preceding sum are non-negative, this implies \begin{equation} s_*^-\to 0. \label{s-} \end{equation} The limits in (\ref{s+}), (\ref{s-}), together with the dominated convergence theorem and Fatou's lemma imply that \begin{equation} Z_{\alpha}(s_*)=\sum_{k\le\alpha}e^{s_*k}p_k\longrightarrow 1\qquad\text{and}\qquad Z_{\beta}(s_*)=\sum_{k\le\beta}e^{s_*k}p_k\longrightarrow 1\qquad\text{as } L\to\infty. \label{partlimit} \end{equation} This in turn gives after another application of the dominated convergence theorem that \begin{equation} \sigma_\alpha^2(s_*)=\frac{1}{Z_{\alpha}(s_*)}\sum_{k\le\alpha}\big(k-\frac{N}{L}\big)^2e^{s_*k}p_k\longrightarrow \sigma^2\qquad \text{and}\qquad\sigma_\beta^2(s_*)\longrightarrow \sigma^2\quad\text{as }L\to\infty . \label{sigmalimit} \end{equation} \noindent We proceed now with the proof of the assertion of the lemma. Given $x\in {\mathbb{R}}$, write \begin{eqnarray*} \P\left[M_L\le \b_L(x),\,S_L=N \right]=Z^L_{\b}(s_*)\,e^{-s_* N}\,\P_{\b}(s_*)[S_L=N]\,, \end{eqnarray*} and \begin{eqnarray*} \P\left[M_L\le \a_{L,N},\,S_L=N\right]=Z^L_{\a}(s_*)\,e^{-s_* N}\,\P_{\a}(s_*)[S_L=N]\,.
\end{eqnarray*} By condition {\em (i)} in the statement of the lemma we have \begin{align} \mu_{L,N}\left[ M_L\le \b_L(x) \right]&\simeq\frac{\P\left[M_L\le \b_L(x),\, S_L=N \right]}{\P\left[M_L\le\a_{L,N},\,S_L=N \right]}\notag\\ &=\left( \frac{Z_{\b}(s_*)}{Z_{\a}(s_*)}\right)^L \frac{\P_{\b}(s_*)[S_L=N]}{\P_{\a}(s_*)[S_L=N]}\,. \label{limit} \end{align} By the local limit theorem for triangular arrays (Theorem 1.2 in \cite{DMcD}) and (\ref{sstar}) and (\ref{sigmalimit}), we have \begin{equation} \sqrt{2\pi L\sigma^2}\,\,\P_{\a}(s_*)\big[S_L=N\big]\longrightarrow 1\,. \label{llt} \end{equation} In order to compute the asymptotics of $\P_{\b}(s_*)\big[S_L=N\big]$ in (\ref{limit}) we need to obtain estimates on $\rho_{\b}(s_*)-N/L$. \begin{align*} \rho_{\b}(s_*)=\frac{\sum_{k\le \b} ke^{s_*k}p_k }{\sum_{k\le \b} e^{s_* k}p_k} &= \frac{\frac{N}{L}\sum_{k\le \a}e^{s_* k}p_k\,-\,\sum_{\b< k\le \a}ke^{s_* k}p_k }{\sum_{k\le \b}e^{s_*k}p_k}\\ &=\frac{N}{L}\,+\,\frac{\sum_{\b< k\le \a}\left(\frac{N}{L}-k\right)e^{s_* k}p_k }{\sum_{k\le \b}e^{s_*k}p_k} \end{align*} and \begin{align} L\left( \rho_{\b}(s_*)-\frac{N}{L}\right)^2 &=L\left(\frac{\sum_{\b< k\le \a}\left(\frac{N}{L}-k\right)e^{s_* k}p_k }{Z_{\b}(s_*)} \right)^2\notag\\ &\le \frac{L\,\,\sum_{\b< k\le \a} e^{s_* k} p_k}{Z^2_{\b}(s_*)}\, \sum_{\b<k\le \a}\left(\frac{N}{L}-k\right)^2e^{s_* k}p_k. \label{density} \end{align} It now follows easily from (\ref{partlimit}), (\ref{sigmalimit}) and condition (ii) of the Lemma that \[ \lim_{L\to \infty} L\left( \rho_{\b}(s_*)-\frac{N}{L}\right)^2\,=\,0\,. 
\] By another application of the local limit theorem for triangular arrays, we get that \[ \sqrt{2\pi L\sigma^2}\,\,\P_{\b}(s_*)[S_L=N] \longrightarrow 1\qquad \text{as}\quad L\to \infty\,, \] and using condition (ii), (\ref{limit}) becomes \[ \mu_{L,N}[M_L\le \beta_L(x)]\simeq \left(\frac{Z_{\b}(s_*)}{Z_{\a}(s_*)}\right)^L =\left(1-\frac{\sum_{\b< k\le \a}e^{s_* k} p_k}{\sum_{k\le \a}e^{s_* k} p_k}\right)^L \longrightarrow e^{-\Phi(x)}\, . \] \hfill\hbox{\vrule\vbox to 8pt{\hrule width 8pt\vfill\hrule}\vrule} \subsection{The power law case} \subsubsection{Proof of Theorem \ref{th1}} Here $N>\rho_cL$ and $N/L\to \rho_c$ as $L\uparrow \infty$.\\ \noindent We will use that if $\lambda =1$ then $p_n\simeq A_1 n^{-b}$, and the decomposition (\ref{gammal1}) \begin{equation*} N=\rho_c L+\sigma\sqrt{(b-3)L\log L}\,\Big(1+\frac{b}{2(b-3)}\frac{\log\log L}{\log L}+\frac{\gamma_L}{\log L}\Big) \end{equation*} that holds for $b>3$.\\ \noindent The proof of the theorem relies on (\ref{split}) and the following two lemmas. \begin{lemma} Let $b>3$ and $N>\rho_c L$ be such that the sequence $\gamma_L$ in (\ref{gammal1}) has a limit $\displaystyle{\lim_{L\to \infty} \gamma_L=\gamma \in [-\infty, \infty)\,}.$ If $\a_L=\frac{\sqrt{L}}{\log L}$ then \[ \P\big[M_L\le \alpha_L; S_L=N \big]=\frac{1}{\sqrt{2\pi \sigma^2 L}} \,\exp\left\{-\frac{(N-\rho_cL)^2}{2\sigma^2L} \right\}(1+o(1))\,. \] \label{lemma1} \end{lemma} \begin{lemma} Suppose $b>3$ and $N-\rho_c L\gg\vartheta_L\sqrt{L}\,$, for a sequence $\vartheta_L\to\infty$. Then as $L\to\infty$ \[ \P\Big[ M_L\ge N-\rho_cL-\vartheta_L\sqrt{L};\ S_L=N\Big]=A_1 L(N-\rho_cL)^{-b} \big(1+o(1)\big). \] \label{lemma2} \end{lemma} \vspace{-.4cm} {\em Proof of Lemma \ref{lemma1}:} The argument follows the standard approach used for moderate deviations of the sum of i.i.d. random variables. Consider $s_*\in {\mathbb{R}}$ such that $\rho_{\a_L}(s_*)=\frac{N}{L}$.
Notice that $\rho_{\a_L}(0)=\rho_c-Z_{\a_L}^{-1}(0)\sum_{k>\a_L}(k-\rho_c)p_k<\frac{N}{L}$ for sufficiently large $L$, and by (\ref{s+}) we must have $s_*=o(\log L/\sqrt{L}).$ In particular, we have $Ls_*^{2+\epsilon}\to 0$ for all $\epsilon>0$. Just as in the proof of Lemma \ref{lemma0}, we may write \begin{align} \P\left[ M_L\le \a_{L};\,S_L=N \right]&=Z^L_{\a_L}(s_*)\,e^{-s_* N}\,\P_{\a_L}(s_*)[S_L=N]\,\notag\\ &=Z_{\a_L}^L(0)\,\exp\left\{-L\int_0^{s_*} t\,\rho'_{\a_L}(t)\,dt \right\}\,\P_{\a_L}(s_*)[S_L=N]\,, \label{ch.m} \end{align} \noindent where we have used the identity \[ \log\Big(\frac{Z_{\a_L}(s_*)}{Z_{\a_L}(0)}\Big)=\int_0^{s_*}\rho_{\a_L}(t)\,dt=s_*\rho_{\a_L}(s_*)-\int_0^{s_*}t\rho'_{\a_L}(t)\,dt\,. \] We now determine the asymptotic order of each term in (\ref{ch.m}). Observe that \begin{equation*} Z_{\a_L}^L(0)=\nu_{\phi_c}\left[ M_L \le \a_L \right]=\left(1-\sum_{k>\a_L}p_k\right)^L\longrightarrow 1\qquad\text{as }L\to \infty\,. \end{equation*} Furthermore, if we define $h_{\a_L}(t)=\rho_{\a_L}(t)-\rho_{\a_L}(0)-t\sigma_{\a_L}^2(0)$ we have \begin{align} \int_0^{s_*} t\rho_{\a_L}'(t)\,dt &=\frac{1}{\sigma_{\a_L}^2(0)}\int_0^{s_*}(\rho_{\a_L}(t)-\rho_{\a_L}(0)-h_{\a_L}(t))\rho'_{\a_L}(t)\ dt\notag\\ &=\frac{\big(\rho_{\a_L}(s_*)-\rho_{\a_L}(0)\big)^2}{2\sigma_{\a_L}^2(0)}-\frac{1}{\sigma_{\a_L}^2(0)}\int_0^{s_*}h_{\a_L}(t)\rho_{\a_L}'(t)\ dt\,. \label{h1} \end{align} Using elementary estimates one can show that \begin{equation} \frac{L\big(\rho_{\a_L}(s_*)-\rho_{\a_L}(0)\big)^2}{2\sigma^2_{\a_L}(0)}-\frac{(N-\rho_cL)^2}{2\sigma^2 L}\longrightarrow 0\,. \label{h2} \end{equation} For the rightmost term in (\ref{h1}) notice that for all $s\in [0,s_*]$ we have \[ 0\le\sum_{k\le\a_L}k^2\big(e^{sk}-1\big)p_k\le Cs^\epsilon, \] for some $\epsilon>0$, which implies that $\big|\sigma_{\a_L}^2(s)-\sigma_{\a_L}^2(0)\big|\le Cs^\epsilon$.
Therefore, \begin{align*} &L\int_0^{s_*}h_{\a_L}(t)\rho_{\a_L}'(t)\ dt=L\int_0^{s_*}\left(\int_0^{t}\sigma_{\a_L}^2(s)-\sigma_{\a_L}^2(0)\ ds\right)\sigma_{\a_L}^2(t)dt=O\big(Ls_*^{2+\epsilon}\big)\to 0, \end{align*} where in the first equality we used that $\rho'_{\a_L}(t)=\sigma^2_{\a_L}(t)$. Together with (\ref{h1}), (\ref{h2}) this gives \begin{equation*} \exp\left\{-L\int_0^{s_*} t\rho'_{\a_L}(t)\ dt\right\}\simeq \exp\left\{-\frac{(N-\rho_cL)^2}{2\sigma^2 L}\right\}\qquad \text{as } L\to\infty\,. \label{middle} \end{equation*} The assertion now follows recalling that $\sqrt{2\pi L\sigma^2_{\a_L}(s_*)}\P_{\a_L}(s_*)\big[S_L=N\big]\longrightarrow 1$ by (\ref{llt}). \hfill\hbox{\vrule\vbox to 8pt{\hrule width 8pt\vfill\hrule}\vrule} \\ \noindent {\em Proof of Lemma \ref{lemma2}:} Consider a sequence $\vartheta_L$ as in the statement of the lemma. Then, \begin{align*} P_L& :=\P \big[ M_L \geq N-\rho_c L-\vartheta_L\sqrt{L};\, S_L =N\big] \\ & \simeq\sum_{k\ge(N-\rho_c L)-\vartheta_L\sqrt{L}} L\, p_k\,\P\big[S_{L-1} =N-k; \, M_{L-1}\leq k\big]\nonumber. \end{align*} Using the central limit theorem we can see that the contribution to the sum of the terms outside the set $ U_L=\{k\in{\mathbb{Z}}:\ |N-\rho_c L-k|\le \vartheta_L\sqrt{L}\}$ is negligible, that is \[ P_L=\sum_{k\in U_L} L\, p_k\,\P\big[S_{L-1} =N-k; \, M_{L-1}\leq k\big] +o \big(L(N-\rho_cL)^{-b}\big)\,. \] We can now use the regular variation of $p_k$ to get \[ P_L= A_1 L(N-\rho_cL)^{-b}\bigg(\sum_{k\in U_L} \P\big[S_{L-1} =N-k; \, M_{L-1}\leq k\big] +o(1)\bigg). \] The last sum converges to $1$ again by the central limit theorem and the fact that $\P\Big[M_{L-1}\le k\Big]\to 1$, uniformly for $k\in U_L$, so \[ P_L=A_1 L(N-\rho_cL)^{-b}\big(1+o(1)\big), \] as asserted. 
\hfill\hbox{\vrule\vbox to 8pt{\hrule width 8pt\vfill\hrule}\vrule} \\ \noindent We proceed now with the {\em proof of Theorem \ref{th1}}:\\ \noindent {\em a) The case} $\gamma_L\to-\infty$.\\ \noindent By Lemma \ref{lemma1} together with (\ref{split}) if $N-\rho_cL\gg\sqrt{L}$, or with the local limit theorem otherwise, condition {\em i}) of Lemma \ref{lemma0} is satisfied by $\a_L=\sqrt{L}/\log L$. Consider $s_*>0$ such that $\rho_{\a_L}(s_*)=N/L$ and let $\b_L(x)=xL^{\frac{1}{b-1}},\, x>0\,.$ Then \begin{align*} L\sum_{\b_L(x)<k\le \a_L} e^{s_*k}p_k&=L\sum_{\b_L(x)<k\le \a_L} p_k+L\sum_{\b_L(x)<k\le \a_L}\big(e^{s_*k} -1 \big)\,p_k \\ &=L\bar{F}\Big(xL^{\frac{1}{b-1}}\Big)+L\sum_{\b_L(x)<k\le \a_L}\big(e^{s_*k} -1 \big)\,p_k - L\sum_{k>\a_L}p_k.\\ \end{align*} It is easy to see that the first term above converges to $\frac{A_1}{b-1} x^{1-b}$ and that the last two terms vanish in the limit, since $s_*\a_L\to 0$ by (\ref{s+}). That is, \[ L\sum_{\b_L(x)<k\le \a_L} e^{s_*k}p_k\longrightarrow \frac{A_1}{b-1} x^{1-b}\qquad \text{as }L\to \infty\, , \] which is condition {\em ii}) in Lemma \ref{lemma0}.\\ \noindent {\em b) The case} $\gamma_L\to+\infty$. \\ \noindent This case is essentially treated in \cite{armendarizetal08}. It is shown there (cf. Theorem 1b) that when the second term on the right hand side of (\ref{split}) dominates the probability of the event $\{S_L=N\}$, the variables $\{\eta_i\}$ aside from their maximum become asymptotically independent with distribution $\nu_{\phi_c}$. This entails that for all $y\in{\mathbb{R}}$ \[ \mu_{L,N}\Big[\frac{M_L -(N-\rho_c L)}{\sigma\sqrt{L}}\leq y\, \Big]\longrightarrow\frac{1}{\sqrt{2\pi }}\int_{-\infty}^y e^{-x^2 /2}\ dx, \] which is (\ref{fluct}). \\ \\ \noindent {\em c) The case} $\gamma_L\to\gamma\in{\mathbb{R}}$.\\ \noindent Here $N-\rho_c L\simeq\sigma\sqrt{(b-3)L\log L}\,$, and the two terms in the right hand side of (\ref{split}) are of the same order.
Precisely, \begin{equation}\label{lgdef} \frac{\frac{1}{\sqrt{2\pi L\sigma^2}}\exp\big\{-\frac{(N-\rho_c L)^2}{2\sigma^2 L}\big\}}{LA_1 [N-\rho_cL]^{-b}}\longrightarrow\frac{\sigma^{b-1}(b-3)^{\frac{b}{2}}}{\sqrt{2\pi}A_1 }e^{-(b-3)\gamma}=:\ell_\gamma. \end{equation} It follows by (\ref{split}) and Lemma \ref{lemma1} that \begin{equation*} \liminf_{L\to \infty} \mu_{L,N}\Big[M_L\le \a_L\Big]\ge \frac{\ell_\gamma}{1+\ell_\gamma}. \end{equation*} On the other hand, applying Lemma \ref{lemma2} with $\vartheta_L \ll\sqrt{\log L}$ we have that \begin{equation*} \liminf_{L\to \infty}\mu_{L,N}\Big[M_L\ge N-\rho_cL-\vartheta_L\sqrt{L}\Big]= \frac{1}{1+\ell_\gamma}\,, \end{equation*} and by the central limit theorem \[ \limsup_{L\to \infty}\mu_{L,N}\Big[M_L\ge N-\rho_cL+\vartheta_L\sqrt{L}\Big]= 0\,. \] The last three relations together imply that \begin{equation} \frac{M_L}{N-\rho_c L} \stackrel{\mu_{L,N}}{\Longrightarrow} Be(p_\gamma)\quad \text{with }\ p_\gamma=\frac{1}{1+\ell_\gamma}. \label{bernoulimit} \end{equation} \noindent Note that $p_\gamma$ as given above satisfies \[ p_\gamma \to\left\{\begin{array}{cl} 1\ &\mbox{ if }\gamma\to\infty\\ 0\ &\mbox{ if }\gamma\to -\infty\end{array}\right.\ . \] This finishes the proof of Theorem \ref{th1}. \hfill\hbox{\vrule\vbox to 8pt{\hrule width 8pt\vfill\hrule}\vrule} \subsubsection{Proof of Theorem \ref{th2}} \noindent Here $N<\rho_c L$ and $N/L \to \rho_c$ as $L\uparrow \infty$.\\ \noindent We take $\a_L=\infty$, so that ({\em i}) in Lemma \ref{lemma0} is automatically satisfied. It remains to identify the sequence $\b_L\to \infty$ and the limit $\Phi$ in condition ({\em ii}) for each case ({\em a}), ({\em b}) or ({\em c}) in the theorem. Note that since \[ \frac{N}{L}=\rho(s_*)=\rho_c+\int_0^{s_*}\sigma^2(s)\ ds, \qquad\text{with}\qquad \sigma^2(s)\stackrel{s\to 0}{\longrightarrow}\sigma^2, \] we have \begin{equation} s_*=\frac{N-\rho_cL}{\sigma^2L}\big(1+o(1)\big)\qquad \text{as } L\to\infty. 
\label{sas} \end{equation} \noindent{\em a)} The case $\omega_L\to 0$.\\ \noindent Let $\displaystyle{\b_L(x)=xL^{\frac{1}{b-1}}\,.}$ Then \[ L\sum_{k>\b_L(x)}e^{s_*k} p_k=L\bar{F}\big[\b_L(x)\big]+L\sum_{k>\b_L(x)}\big(e^{s_*k}-1 \big)p_k\,, \] where \[ 0\ge L\sum_{k>\b_L(x)}\big(e^{s_*k}-1 \big)p_k\ge Ls_*\sum_{k>\b_L(x)}kp_k =O(\omega_L)\longrightarrow 0\,. \] That is, \[ L\sum_{k>\b_L(x)}e^{s_*k} p_k \longrightarrow \frac{A_1}{b-1}\, x^{1-b}\,. \] \noindent{\em b)} The case $\omega_L\to \omega >0$.\\ \noindent As in the previous case, let $\displaystyle{\b_L(x)=xL^{\frac{1}{b-1}}}\,$. By the regular variation of the probabilities $p_k$, \begin{equation*} L\sum_{k>\b_L(x)} e^{s_*k}p_k\,\simeq\, A_1 L\sum_{k=\b_L(x)+1}^{ML^{\frac{1}{b-1}}} e^{s_*k}\frac{1}{k^b}\,+\, A_1 L\sum_{k>ML^{\frac{1}{b-1}}}e^{s_*k}\frac{1}{k^b}\,. \end{equation*} We compute the limits of both terms on the right hand side above: \[ \lim_{M\to \infty}\lim_{L\to \infty}L\sum_{k>ML^{\frac{1}{b-1}}}e^{s_*k}\frac{1}{k^b}\,=\,0 \] and by (\ref{sas}) \begin{align*} \lim_{M\to \infty}\lim_{L\to \infty}L\sum_{k=\b_L(x)+1}^{ML^{\frac{1}{b-1}}} e^{s_*k}\frac{1}{k^b}\,&=\,\lim_{M\to \infty}\lim_{L\to \infty} L\sum_{k=\b_L(x)+1}^{ML^{\frac{1}{b-1}}} \exp\left\{-\omega\frac{k}{L^{\frac{1}{b-1}}}\right\} \frac{1}{k^{b}}\\ &=\lim_{M\to \infty} \int_{x}^{M}e^{-\omega t} \frac{1}{t^b}\,dt\,. \end{align*} \noindent{\em c)} The case $\omega_L\to \infty$.\\ \noindent Define now a sequence $B_L$ by the equation \begin{equation} \left(|s_*|B_L\right)^b e^{|s_*|B_L}=A_1 L|s_*|^{b-1}\,, \label{BsubL}\end{equation} and note that $\omega_L\to \infty$ implies that $|s_*|B_L\to \infty$ as well. 
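Since (\ref{BsubL}) defines $B_L$ only implicitly, it may help to note that after taking logarithms it becomes $b\log u+u=\log\big(A_1L|s_*|^{b-1}\big)$ in the variable $u=|s_*|B_L$, whose left hand side is strictly increasing, so the equation can be solved numerically by bisection. A sketch (Python, all numerical values hypothetical):

```python
import math

def solve_B_L(s_abs, A1, L, b):
    # Solve (|s| B)^b e^{|s| B} = A1 L |s|^(b-1) for B, by bisection in
    # u = |s| B after taking logarithms; b*log(u) + u is increasing in u.
    rhs = math.log(A1 * L * s_abs ** (b - 1.0))
    f = lambda u: b * math.log(u) + u - rhs
    lo, hi = 1e-12, 1.0
    while f(hi) < 0.0:  # bracket the root from above
        hi *= 2.0
    for _ in range(200):  # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / s_abs

# Hypothetical parameters: |s_*| = 0.01, A1 = 1, L = 10^6, b = 4.
B = solve_B_L(0.01, 1.0, 10**6, 4.0)
```

The returned $B$ satisfies (\ref{BsubL}) up to numerical precision for the chosen parameters.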
Let $\displaystyle{\b_L(x)=B_L+\frac{x}{|s_*|}}\,.$ Then \begin{align*} L\sum_{k>\b_L(x)} e^{s_* k}p_k&=Le^{s_* B_L}\sum_{k>\b_L(x)}e^{s_*(k-B_L)} p_k \\ &\simeq A_1 Le^{-|s_*| B_L}\sum_{k>\b_L(x)}e^{s_*(k-B_L)}\frac{1}{k^b}\\ &=|s_*|\sum_{k>\b_L(x)}e^{s_*(k-B_L)}\left(\frac{B_L}{k}\right)^b\\ &=|s_*|\sum_{k'>x/|s_*|}e^{s_*k'}\left(\frac{|s_*|B_L}{|s_*|k' +|s_*|B_L}\right)^b\longrightarrow \int_x^{\infty} e^{-t}\,dt=e^{-x} \end{align*} as $L\to\infty$, using dominated convergence with $|s_*|B_L\to\infty$. The third line above follows from the second one by (\ref{BsubL}). This concludes the proof of the theorem. \hfill\hbox{\vrule\vbox to 8pt{\hrule width 8pt\vfill\hrule}\vrule} \subsection{The stretched exponential case} Here we have $p_n\simeq A_\lambda e^{-\frac{b}{1-\lambda}n^{1-\lambda}}$. The proofs in this case use results from \cite{nagaev68}, which are summarized in the Appendix. \\ \subsubsection{Proof of Theorem \ref{th3}} We recall the notation from equation (\ref{gammal2}) \[ N=\rho_cL+t_L(\sigma^2L)^{\frac{1}{1+\lambda}}\,. \] {\em a) The case} $\displaystyle{\limsup_{L\to \infty} t_L<c_\lambda}$. \\ \noindent The second equation in (\ref{mdal}) that gives the limit theorem for $M_L$ without conditioning is a standard computation in extreme value theory. The appropriate scales \[ y_L\simeq \left( \frac{1-\lambda}{b} \log L\right)^\frac{1}{1-\lambda},\qquad b_L\simeq \frac{y_L^{\lambda}}{b}\qquad\text{as }\quad L\to \infty \] are chosen so that \begin{equation}\label{ybscale} L \sum_{k > y_{L}+x b_L}p_k \to e^{-x}, \quad x\in{\mathbb{R}}. \end{equation} Let $c>1$. According to (\ref{rem1}) \[ \P\big[M_L\le c y_L\ ; \ S_L=N \big]\simeq\P\big[ S_L=N\big]\,. \] We thus set the sequence $\a_L=c y_L$, and item {\em i}) in Lemma \ref{lemma0} is satisfied. Recall that $s_*>0$ since $N>\rho_cL$.
If we define $\b_L(x)=y_L+x b_L$, we obtain \begin{align*} \lim_{L\to \infty} L\sum_{\b_L(x)<k\le \a_L}e^{s_* k}p_k&=\lim_{L\to \infty} L\sum_{\b_L(x)<k\le \a_L} p_k \\ &=\lim_{L\to \infty} L\sum_{k>y_L+xb_L}p_k\,-\,\lim_{L\to \infty}L\sum_{k>c y_L}p_k \\ &=e^{-x}\,, \end{align*} where the first identity follows from (\ref{s+}) and the third one is (\ref{ybscale}). This provides condition {\em ii}) in Lemma \ref{lemma0}.\\ \noindent {\em b) The case} $t_L \to t>c_\lambda$.\\ \noindent When $N-\rho_cL\gg L^{\frac{1}{2\lambda}}$ this can be deduced from Theorem 1a in \cite{armendarizetal08}, since in that case the $L-1$ smallest variables become asymptotically independent. In fact, we can then take $a_L=1$. For smaller values of $N-\rho_c L$, even though it is not stated explicitly, this is essentially proved in \cite{nagaev68}. Note that by Remarks 2, 3 and 5 in the Appendix for any $\theta_L\to\infty$ we have \[ \mu_{L,N}\Big[ |M_L-(1-\alpha)(N-\rho_cL)|<\theta_L\sqrt{L}\Big]\longrightarrow 1, \] where $\alpha$ is implicitly defined by (\ref{alef}). This implies that the conditional distribution of $\displaystyle\frac{M_L-(1-\alpha)(N-\rho_cL)}{\sqrt{L}}$ is tight. A careful reading of the proof of Lemma 7, part 2 in \cite{nagaev68} reveals that in fact this distribution has the asserted limit. Note that the sequence $a_L$ in the statement of Theorem \ref{th3} is given by $1-\alpha$. The properties of its limit $a(t)$ can easily be deduced from (\ref{alef}).\\ \\ {\em c) The case} $t_L \to c_\lambda$. \\ \noindent This case can be treated analogously to the third part of Theorem \ref{th1}. By (\ref{Pc4}), the leading order of $\P\big[S_L=N \big]$ is the sum of two explicit terms, and one has to find the precise subscale around $N-\rho_cL-c_\lambda(\sigma^2L)^{\frac{1}{1+\lambda}}$ where these two terms are of the same order.
Using (\ref{simple}) for $\lambda >1/2$ and (\ref{alef}) we find that on the scale (\ref{subl}) \begin{eqnarray}\label{pstretched} \lefteqn{\frac{1}{\sigma\sqrt{2\pi L}}\, e^{-\frac{k^2}{2L\sigma^2}}\bigg/\frac{A_\lambda L}{\sqrt{1-\frac{\sigma^2\gamma\lambda(1-\lambda)L}{k^{1+\lambda}(1-\alpha)^{1+\lambda}}}}\, e^{-\frac{\alpha^2k^2}{2\sigma^2L}-\gamma(1-\alpha)^{1-\lambda}k^{1-\lambda}}}\nonumber\\ & &\quad\longrightarrow \frac{\sqrt{1+\lambda}}{2A_\lambda \sqrt{\pi\sigma^2}}\, e^{\gamma }=:\ell_\gamma\ . \end{eqnarray} From this, (\ref{bern2}) can be deduced analogously to the proof of Theorem \ref{th1} with $p_\gamma =(1+\ell_\gamma )^{-1}$. \hfill\hbox{\vrule\vbox to 8pt{\hrule width 8pt\vfill\hrule}\vrule} \subsubsection{Proof of Theorem \ref{th4} } As for the downside deviations in the power law case, here we set $\a_L=\infty$. Then {\em i}) in Lemma \ref{lemma0} is satisfied, and $s_*<0$ satisfies (\ref{sas}). \noindent Now the limit in {\em ii}) of Lemma \ref{lemma0} is given by $\Phi(x)=e^{-x}$ in all cases, so we only need to verify that the proposed choices of the sequence $\b_L(x)$ work. The proofs are all based on the following computation, with simple adjustments to match each situation.\\ \noindent Let $\gamma_L$ and $\zeta_L$ be sequences such that there exist $\ell_1$ and $\ell_2\,\in {\mathbb{R}}$ with \begin{eqnarray} \lim_{L\to \infty} \frac{\zeta_L^2}{\gamma_L^{1+\lambda}}=0,\quad \lim_{L\to \infty} \zeta_L |s_*|=\ell_1\,\quad \text{and } \quad \lim_{L\to \infty} \frac{\zeta_L}{\gamma_L^{\lambda}}=\ell_2,\quad \ell_1+b\ell_2=1.
\label{gamma-zeta} \end{eqnarray} Then \begin{eqnarray*} \lefteqn{A_\lambda L\sum_{k= \gamma_L+x\zeta_L}^{\gamma_L+y\zeta_L} e^{s_* k}e^{-\frac{b}{1-\lambda}k^{1-\lambda}}= A_\lambda L\,e^{s_*\gamma_L-\frac{b}{1-\lambda}\gamma_L^{1-\lambda} } \sum_{k= \gamma_L+x\zeta_L}^{ \gamma_L+y\zeta_L} e^{s_* (k-\gamma_L)}e^{-\frac{b}{1-\lambda}\big(k^{1-\lambda}-\gamma_L^{1-\lambda}\big)} }\notag\\ & &\qquad =A_\lambda L\,e^{s_*\gamma_L-\frac{b}{1-\lambda}\gamma_L^{1-\lambda} } \sum_{k= \gamma_L+x\zeta_L}^{\gamma_L+y\zeta_L} e^{s_* (k-\gamma_L)}e^{-\frac{b}{\gamma_L^{\lambda}}(k-\gamma_L)}\,\big(1+o(1)\big)\notag\\ & &\qquad =A_\lambda L\,e^{s_*\gamma_L-\frac{b}{1-\lambda}\gamma_L^{1-\lambda} } \sum_{k-\gamma_L=x\zeta_L}^{k-\gamma_L=y\zeta_L} e^{-(|s_*|+\frac{b}{\gamma_L}) (k-\gamma_L)}\,\big(1+o(1)\big)\notag\\ & &\qquad\simeq A_\lambda L\,\frac{e^{s_*\gamma_L-\frac{b}{1-\lambda}\gamma_L^{1-\lambda} }}{|s_*|+\frac{b}{\gamma_L^{\lambda}}} \int_{x(\ell_1+b\ell_2)}^{y(\ell_1+b\ell_2)} e^{-t}\,dt \,. \end{eqnarray*} We now let $y\to \infty$ and apply (\ref{gamma-zeta}) to get \begin{align} A_\lambda L\sum_{k> \gamma_L+x\zeta_L} e^{s_* k}e^{-\frac{b}{1-\lambda}k^{1-\lambda}} &\simeq A_\lambda L\,\,\gamma_L^{\lambda}\,\,\frac{e^{s_*\gamma_L-\frac{b}{1-\lambda}\gamma_L^{1-\lambda} }}{|s_*|\gamma_L^{\lambda}+ b}\, \,e^{-x}\,. \label{exponential-phi} \end{align} We will work on this limit case by case.\\ \noindent Recall from the proof of Theorem \ref{th3} the existence of sequences \[ y_L\simeq \left( \frac{1-\lambda}{b} \log L\right)^\frac{1}{1-\lambda},\qquad b_L\simeq \frac{y_L^{\lambda}}{b}\qquad\text{as }\quad L\to \infty \] satisfying \begin{eqnarray} L \sum_{k > y_{L}+x b_L}p_k \to e^{-x}, \,\,\, x\in{\mathbb{R}}. \label{ybscale2} \end{eqnarray} \noindent{\em a) The case $s_*y_L\to 0\Leftrightarrow \omega_L\to 0$.} \\ \noindent Here $\gamma_L:=y_L$ and $\zeta_L:=b_L$ satisfy (\ref{gamma-zeta}) with $\ell_1=0$ and $\ell_2=1/b$. 
In fact, in this case it is straightforward from (\ref{ybscale2}) and dominated convergence that \[ \lim_{L\to \infty} L\sum_{k>y_L+xb_L}e^{s_* k}p_k=e^{-x}. \] \noindent{\em b) The case $s_* y_L^{\lambda}\to 0$.}\\ \noindent Here we need to be slightly more careful with the choice of $\gamma_L$. Set $\zeta_L=b_L$ and let $\gamma_L$ be the solution to \begin{eqnarray} A_\lambda L\,\,b_L\,\,e^{s_*\gamma_L-\frac{b}{1-\lambda}\gamma_L^{1-\lambda} }=1 \label{case2} \end{eqnarray} The sequences $y_L$ and $b_L$ can be chosen so that \[ A_\lambda L\,b_Le^{-\frac{b}{1-\lambda}y_L^{1-\lambda}}=1, \] and from (\ref{case2}) \[ s_*\gamma_L=\frac{b}{1-\lambda}\big(\gamma_L^{1-\lambda}-y_L^{1-\lambda}\big)\qquad\text{or}\qquad s_*y_L^{\lambda}= \frac{b}{1-\lambda}\left(\frac{y_L^{\lambda}}{\gamma_L^{\lambda}}-\frac{y_L}{\gamma_L} \right)\,. \] By the condition $s_*y_L^{\lambda}\to 0$ this implies that $\displaystyle{\frac{y_L}{\gamma_L}\simeq 1}$, and (\ref{gamma-zeta}) holds with $\ell_1=0$ and $\ell_2=1/b$. Also (\ref{case2}) and (\ref{exponential-phi}) imply \[ \lim_{L\to \infty} L \sum_{k>\gamma_L+xb_L} e^{s_* k} p_k=e^{-x}\,. \] \noindent {\em c) The case $s_* y_L^{\lambda} \to c<0,\, c\in {\mathbb{R}}$. }\\ \noindent The scaling in the sequences $\gamma_L$ and $\zeta_L$ is preserved, but the limits $\displaystyle{\lim_{L\to \infty} \frac{\gamma_L}{y_L}}$ and $\displaystyle{\lim_{L\to \infty}\frac{\zeta_L}{b_L}}$ need to be chosen so that $\ell_1$ and $\ell_2$ in (\ref{gamma-zeta}) satisfy $\ell_1+b\ell_2=1$ and the right hand side of (\ref{exponential-phi}) equals $e^{-x}\,$, \[ A_\lambda L\,\,\gamma_L^{\lambda}\,\,\frac{e^{s_*\gamma_L-\frac{b}{1-\lambda}\gamma_L^{1-\lambda} }}{|s_*|\gamma_L^{\lambda}+ b}=1\, \] {\em d) The case $|s_*| y_L^{\lambda}\to \infty$.}\\ \noindent Let now $\displaystyle{\zeta_L=\frac{1}{|s_*|}}$ and set $\gamma_L$ as the solution to \begin{eqnarray} \frac{A_\lambda L}{|s_*|}\,\,e^{s_*\gamma_L-\frac{b}{1-\lambda}\gamma_L^{1-\lambda} }=1\,. 
\label{last} \end{eqnarray} (It is easy to see that such a solution exists.) Now taking logarithms in (\ref{last}) we obtain that, to leading order, \[ \left(\frac{1-\lambda }{b} \log L\right)^{\frac{1}{1-\lambda}} \simeq \gamma_L\left(1+\frac{1-\lambda}{b}|s_*|\gamma_L^{\lambda}\right)^{\frac{1}{1-\lambda}}, \] from which we conclude that necessarily $|s_*|\gamma_L^{\lambda}\to \infty$. Then (\ref{gamma-zeta}) holds with $\ell_1=1$ and $\ell_2=0$, and (\ref{exponential-phi}) follows from (\ref{last}). \hfill\hbox{\vrule\vbox to 8pt{\hrule width 8pt\vfill\hrule}\vrule} \subsection{Proof of Theorem \ref{bulk}} a) The assertion will follow from Theorem 24.2 in \cite{bill} provided we check the validity of the following three conditions for the exchangeable random variables $\xi_{x}^{L,N}= \frac{\eta_x-N/L}{\sigma\sqrt{L}}$.\\ 1. $\displaystyle\sum_{x=1}^L \xi_{x}^{L,N}\stackrel{\mu_{L,N}}{\longrightarrow}0.$ This is trivial since the sum of all $\xi_x^{L,N}$ is equal to zero $\mu_{L,N}$-a.s.\\ 2. $\displaystyle \big|\max_{1\le x\le L}\xi_{x}^{L,N}\big|\stackrel{\mu_{L,N}}{\longrightarrow}0.$ This follows from part (a) of Theorems \ref{th1}, \ref{th3} when $N\ge\rho_c L$, or from Theorems \ref{th2}, \ref{th4} when $N<\rho_c L$.\\ 3. $\displaystyle \sum_{x=1}^L \big(\xi_{x}^{L,N}\big)^2\stackrel{\mu_{L,N}}{\longrightarrow} 1$: We prove this in detail for the case when the occupation variables $\eta_x$ follow a power law, the stretched exponential case being completely similar. Let $\epsilon>0$ and set $\alpha_L=\frac{\sqrt{L}}{\log L}$ as in Lemma \ref{lemma1}.
Then \begin{align} R_{L,N}&:= \mu_{L,N}\left[\Big|\frac{1}{L}\sum_{x=1}^L\big(\eta_x-\frac{N}{L}\big)^2-\sigma^2 \Big|>\epsilon \right] \notag\\ & \le \mu_{L,N} \left[\Big| \frac{1}{L}\sum_{x=1}^L\big(\eta_x-\frac{N}{L}\big)^2-\sigma^2 \Big|>\epsilon,\, M_L \le \alpha_L \right] +\mu_{L,N}\big[ M_L>\alpha_L\big] \label{a1} \end{align} By Theorem \ref{th1} a) and Theorem \ref{th2} the second term on the right side above tends to $0$ as $L\to \infty$. Let us now write \begin{align} &\mu_{L,N}\left[\Big|\frac{1}{L}\sum_{x=1}^L\big(\eta_x-\frac{N}{L}\big)^2-\sigma^2 \Big|>\epsilon,\, M_L\le \alpha_L \right] \notag\\ &\qquad=\frac{\mu^L\left[\big|\frac{1}{L}\sum_{x=1}^L \big(\eta_x- \frac{N}{L}\big)^2-\sigma^2\big|> \epsilon,\,M_L\le \alpha_L,\, \sum_{x=1}^L \eta_x=N \right]}{\mu^L\left[ \sum_{x=1}^L \eta_x= N\right]}\notag\\ &\qquad\qquad=\frac{\P_{\alpha_L}(s_*)\left[\big|\frac{1}{L}\sum_{x=1}^L \big(\eta_x- \frac{N}{L}\big)^2-\sigma^2\big|> \epsilon,\, \sum_{x=1}^L \eta_x=N \right]}{ Z_{\alpha_L}^L(s_*)\,{\mathbb{E}}\left[e^{s_*S_L}\,\mathbbm{1}_{\{\sum \eta_x =N\}}\right]} \notag\\ &\qquad\qquad\qquad\le \frac{\P_{\alpha_L}(s_*)\left[\big|\frac{1}{L}\sum_{x=1}^L \big(\eta_x- \frac{N}{L}\big)^2-\sigma^2\big|> \epsilon \right]}{\P_{\alpha_L}(s_*)\left[\sum_{x=1}^L \eta_x=N\right]}\,, \label{a2} \end{align} where we recall that given parameters $\alpha>0$ and $s\in {\mathbb{R}}$, the measure $\P_{\alpha}(s)$ is defined by (\ref{tilted}). From (\ref{a1}), (\ref{a2}) and (\ref{llt}) we conclude that \[ R_{L,N} \le \sqrt{2\pi \sigma^2 L} \,\P_{\alpha_L}(s_*)\left[ \Big|\frac{1}{L}\sum_{x=1}^L \big(\eta_x- \frac{N}{L}\big)^2-\sigma^2\Big|> \epsilon \right]\,+o(1) \,. 
\] The result will thus follow if we show that \begin{eqnarray} \sqrt{L}\,\,\P_{\alpha_L}(s_*)\left[ \frac{1}{L}\sum_{x=1}^L \big(\eta_x- \frac{N}{L}\big)^2 \ge \sigma^2 + \epsilon \right]\longrightarrow 0\quad \mbox{ as }L\to \infty \label{a3} \end{eqnarray} and \begin{eqnarray} \sqrt{L}\,\,\P_{\alpha_L}(s_*)\left[ \frac{1}{L}\sum_{x=1}^L \big(\eta_x- \frac{N}{L}\big)^2\le \sigma^2 - \epsilon \right]\longrightarrow 0\quad \mbox{ as }L\to \infty\,. \label{a4} \end{eqnarray} Let us start with the former. If $\zeta>0$ then \begin{align} &\P_{\alpha_L}(s_*)\left[\sum_{x=1}^L \big(\eta_x- \frac{N}{L}\big)^2 \ge (\sigma^2 + \epsilon)L \right] \le e^{-\zeta (\sigma^2+\epsilon)L}\, {\mathbb{E}}^{s_*}\left[ \exp\left\{ \zeta \sum_{x=1}^L \big(\eta_x-\frac{N}{L}\big)^2 \right\} \right]\notag\\ &\hspace{5cm}=e^{-\zeta (\sigma^2+\epsilon)L}\left(\frac{1}{Z_{\alpha_L}(s_*)}\sum_{k=1}^{\alpha_L} e^{\zeta(k-\frac{N}{L})^2} e^{s_*k}p_k\right)^L\,, \label{a5} \end{align} where ${\mathbb{E}}^{s_*}$ denotes expectation with respect to the measure $\P_{\alpha_L}(s_*)$. \\ \\ Now set $\zeta=\frac{\log^{3/2} L}{L}$, so that $\zeta \a_L^2 \to 0$, and apply the elementary inequality $e^x\le 1+x\psi(h)$ for $x\in[0,h]$, where $\psi(h)=\frac{e^h-1}{h}$. We get \begin{align*} \frac{1}{Z_{\alpha_L}(s_*)}\sum_{k\le \alpha_L} e^{\zeta (k-\frac{N}{L})^2}e^{s_*k}p_k &\le \frac{1}{Z_{\alpha_L}(s_*)}\sum_{k\le \alpha_L} \left[1+\zeta \big(k-\frac{N}{L}\big)^2\psi(\zeta\a_L^2)\right] e^{s_*k}p_k\\ &=1+\zeta\, \sigma^2_{\alpha_L}(s_*)\psi(\zeta\a_L^2). 
\end{align*} From (\ref{sigmalimit}) we can make $\sigma^2_{\alpha_L}(s_*)\psi(\zeta\a_L^2)<\sigma^2+\epsilon/2$ for large enough $L$, so (\ref{a5}) becomes \begin{align} &\P_{\alpha_L}(s_*)\left[\sum_{x=1}^L \big(\eta_x- \frac{N}{L}\big)^2 \ge (\sigma^2 + \epsilon)L \right] \le e^{-\zeta (\sigma^2+\epsilon)L} \left(1+ \zeta\, \sigma^2_{\alpha_L}(s_*)\psi(\zeta\a_L^2) \right)^L \notag\\ &\qquad\qquad\le e^{-\zeta (\sigma^2+\epsilon)L}\,\, e^{\zeta L\sigma^2_{\alpha_L}(s_*)\psi(\zeta\a_L^2)}\le e^{-\frac{\epsilon}{2}\log^{3/2}L}, \label{final estimate} \end{align} from which (\ref{a3}) is easily obtained. The limit (\ref{a4}) can be derived by similar estimates.\\ \noindent b) In the power law case ($\lambda=1$), or when $\lambda<1$ and $N-\rho_c L\gg L^{\frac{1}{2\lambda}}$, the assertion follows immediately (with $a_L=1$) from the asymptotic independence of the bulk variables proved in Theorems 1b, 1a in \cite{armendarizetal08}, respectively. \\ \\ Let us then consider the stretched exponential case $\lambda <1$ when $N-\rho_cL=t_L (\sigma^2 L)^{\frac{1}{1+\lambda}}$, and $t_L\to t \in (c_{\lambda},+\infty]$ (we refer to the statement of Theorem \ref{th3} for notation). The case $N-\rho_c L\gg L^{\frac{1}{2\lambda}}$ discussed in the previous paragraph clearly belongs to this family as well. \\ \\ We first observe that in this situation \begin{eqnarray} N-a(t)(N-\rho_cL)<\rho_cL+c_{\lambda} (\sigma^2 L)^{\frac{1}{1+\lambda}}. \label{reduction} \end{eqnarray} Indeed, according to (\ref{alef}), $a(t)$ satisfies \[ \frac{1}{t^{1+\lambda}}=\frac{(1-a(t))\,a(t)^{\lambda}}{\gamma(1-\lambda)}\quad\mbox{with } \quad \gamma=\frac{b}{1-\lambda}\,. \] Let $x_t=t(1-a(t))$.
Then it follows from Theorem 2 in the Appendix that $x_t$ is the smallest positive root of the equation \begin{eqnarray} b=x_t(t-x_t)^{\lambda}, \label{x_t} \end{eqnarray} and it is easily checked that \[ \lim_{t\uparrow \infty} x_t=0 \qquad \mbox{and} \qquad \lim_{t\downarrow c_{\lambda}} x_t=c_{\lambda} \,\frac{1-\lambda}{1+\lambda}<c_\lambda\,. \] In order to conclude (\ref{reduction}) it will therefore be enough to show that $x_t$ is decreasing. Differentiating in (\ref{x_t}) we get, after a couple of operations, that the derivative $x'_t$ satisfies \[ x'_t \left(\frac{\lambda}{t-x_t}-\frac{1}{x_t} \right)=\frac{\lambda}{t-x_t}. \] Now \[ \frac{\lambda}{t-x_t}<\frac{1}{x_t} \iff \lambda x_t < t-x_t \iff \lambda t(1-a(t))<ta(t) \iff \frac{\lambda}{1+\lambda}<a(t)\,. \] But $a(t)$ is increasing on the half-line $(c_{\lambda}, \infty)$ with $\lim_{t\downarrow c_{\lambda}} a(t)=\frac{2\lambda}{1+\lambda}$, $\lim_{t\uparrow \infty}a(t)=1$, and hence $a(t)>\frac{\lambda}{1+\lambda}$. \\ \\ Inequality (\ref{reduction}) allows us to decompose the random walk $Y_L$ into two components: the first term will be easily shown to converge to a Brownian bridge via the same arguments applied to prove the first statement in the theorem, while the second one is a drift term determined by the Gaussian limit specified in Theorem \ref{th3} b). Precisely, write \begin{eqnarray} Y^L_s = W^L_s+\frac{[sL]}{L}\frac{M_L-a_L(N-\rho_cL)}{\sqrt{\sigma^2 L}}\,,\qquad W^L_s=\frac{1}{\sigma \sqrt{L}}\sum_{x=1}^{[sL]} \left( \tilde{\eta}_x-\frac{N-M_L}{L}\right). \label{decompos} \end{eqnarray} Next, consider the interval ${\cal A}_L=\left\{X\in {\mathbb{R}},\,\Big|\frac{X}{a_L(N-\rho_cL)}-1\Big| \le \delta_L \right\}$ associated to $\delta_L=L^{-\frac{1}{4}\frac{1-\lambda}{1+\lambda}}$, chosen so that Theorem \ref{th3} b) implies $\lim_{L\to \infty} \mu_{L,N}[\,M_L\in {\cal A}_L]=1$. 
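The monotonicity of $x_t$ established above can also be checked numerically. The following minimal sketch (illustrative only, with the arbitrary parameter choices $b=1$, $\lambda=1/2$; not part of the proof) finds the smallest positive root of (\ref{x_t}) by bisection on the interval where $x\mapsto x(t-x)^{\lambda}$ is increasing:

```python
def smallest_root(t, b=1.0, lam=0.5, iters=200):
    """Smallest positive root x_t of b = x * (t - x)**lam.

    f(x) = x * (t - x)**lam - b is increasing on (0, t/(1+lam)), so the
    smallest root can be bracketed there by bisection (it exists once t
    exceeds the critical value at which the maximum of x*(t-x)**lam
    reaches b).
    """
    lo, hi = 0.0, t / (1.0 + lam)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid * (t - mid) ** lam < b:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ts = (2.0, 3.0, 5.0, 10.0, 50.0)
roots = [smallest_root(t) for t in ts]
# each root solves the defining equation ...
for t, x in zip(ts, roots):
    assert abs(x * (t - x) ** 0.5 - 1.0) < 1e-8
# ... and x_t is decreasing in t, tending to 0:
assert all(r1 > r2 for r1, r2 in zip(roots, roots[1:]))
assert roots[-1] < 0.15
```

The computed roots decrease in $t$ and tend to $0$, in line with the limits stated above.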
Notice that by (\ref{reduction}) the occupation variables $\{\tilde{\eta}_x\}_{x=1,\cdots,L}$ are in the subcritical regime when $M_L\in {\cal A}_L$, they are clearly exchangeable, and moreover when properly centered they satisfy conditions 1), 2) and 3) of Theorem 24.2 in \cite{bill}.\\ \\ Namely, let $\tilde{\xi}_x^{L,N}=\frac{\tilde{\eta}_x-(N-M_L)/L}{\sigma\sqrt{L}}$. Then, provided $M_L \in {\cal A}_L$, we can easily show that\\ \\ 1'. $\displaystyle{\sum_{x=1}^L \tilde{\xi}_x^{L,N}\stackrel{\mu_{L,N}}{\longrightarrow}0}$. In fact, the sum equals $0$ except in the rare event that the second order statistic of the sample $\{\eta_x\}_{1\le x\le L}$ is greater than $L^{1/4}$.\\ \\ 2'. $\displaystyle{|\max_{1\le x \le L}\tilde{\xi}_x^{L,N}|\stackrel{\mu_{L,N}}{\longrightarrow}0}$. This is trivial, as $\tilde{\eta}_x\le L^{1/4}$ and $N-M_L\le L^{\frac{1}{1+\lambda}}$ when $M_L\in {\cal A}_L$.\\ \\ 3'. $\displaystyle{\sum_{x=1}^L (\tilde{\xi}^{L,N}_x)^2\stackrel{\mu_{L,N}}{\longrightarrow}1}$. This can be shown following the arguments applied to prove condition 3) in the first statement of the theorem, the difference being that it is now necessary to condition on $M_L$ before applying Chebyshev's inequality: for $\epsilon>0$, \begin{align} & \mu_{L,N}\left[\Big|\frac{1}{L}\sum_{x=1}^L\big(\tilde{\eta}_x-\frac{N-M_L}{L}\big)^2-\sigma^2 \Big|>\epsilon,\, M_L\in {\cal A}_L \right] \notag\\ & \hspace{1cm} = {\mathbb{E}}^{\mu_{L,N}} \left[\mu_{L,N}\left[\Big| \frac{1}{L}\sum_{x=1}^L\big(\tilde{\eta}_x-\frac{N-M_L}{L}\big)^2-\sigma^2 \Big|>\epsilon\Big|\,M_L\right]\, \mathbbm{1}_{\{M_L \in {\cal A}_L\}}\right]\,.
\label{condition-first} \end{align} The estimates leading to the bound (\ref{final estimate}) hold uniformly for $\tilde{N}=N-M_L$ when $M_L\in {\cal A}_L$, and hence can be applied to the conditioned expectation in the right side of (\ref{condition-first}) to conclude that \[ \mu_{L,N}\left[\Big|\frac{1}{L}\sum_{x=1}^L\big(\tilde{\eta}_x-\frac{N-M_L}{L}\big)^2-\sigma^2 \Big|>\epsilon,\, M_L\in {\cal A}_L \right] \longrightarrow 0\,, \] as required. Conditions 1'), 2') and 3') imply that $W_s^L\stackrel{\mu_{L,N}}{\Longrightarrow} BB_s$, the standard Brownian bridge on $[0,1]$ conditioned to return to the origin at time $1$. On the other hand, we know from Theorem \ref{th3} b) that $\frac{M_L-a_L(N-\rho_cL)}{\sqrt{\sigma^2 L}}\stackrel{\mu_{L,N}}{\Longrightarrow} \Phi$, a zero mean Gaussian variable with variance $1/(1-\frac{\lambda(1-a(t))}{a(t)})$. \\ \\ Our assertion will follow once we prove the convergence of the finite dimensional marginals plus tightness for the laws of the sequence $Y_s^L$. The former is easily derived by first conditioning on $M_L$; the fact that the limit of the first term in (\ref{decompos}) is independent of the value of the second implies that the finite dimensional marginal distributions converge to those of $BB_s+s\,\Phi$. Tightness is also straightforward: by the linearity of the second term in (\ref{decompos}), it suffices to show that the modulus of continuity of the first term tends to $0$, \[ \omega(\delta)=\sup_{|s-r|\le \delta} \big| W^L_s-W^L_r\big|\,\stackrel{\mu_{L,N}}{\longrightarrow} 0 \qquad \mbox{as}\qquad\delta\to 0\,, \] which is a direct consequence of the Arzel\`a-Ascoli Theorem. \hfill\hbox{\vrule\vbox to 8pt{\hrule width 8pt\vfill\hrule}\vrule} \section*{Acknowledgments} The results in this article answer a question raised by Pablo Ferrari after a seminar in the XI Brazilian School of Probability, and we would like to thank him for pointing out the problem to us.
We are also grateful to Paul Chleboun who provided us with the simulation data for Figure~\ref{fig:profile}. This research has been supported by the University of Warwick Research Development Grant RD08138, the FP7-REGPOT-2009-1 project Archimedes Center for Modeling, Analysis and Computation, the PICT-2008-0315 project ``Probability and Stochastic Processes'' and the University of Crete Basic Research grant KA-2865. I.A.\ and S.G.\ are also grateful for the hospitality of the Hausdorff Research Institute for Mathematics in Bonn.
\section{Introduction} \label{sec:Introduction} To build an internal model of the environment, the brain needs some representation of \emph{invariance transformations} which may change the way how one and the same object is perceived. Such transformations include, for example, translations, rotations, or rescaling of visual objects, as well as a change of key or octave in music. They are employed both passively, as in the quick recognition of rotated letters \cite{corballis_decisions_1978}, and actively, as in mental rotation tasks \cite{shepard_mental_1971}. Mastering such invariance transformations is instrumental in the abstraction process of untangling perceptual input into different mental categories like object class, its location and orientation in space, or its size. The central claim of this article is that the brain learns and encodes invariance transformations in graph symmetries as a substrate for computational processes. We construct a theory on the assumption that invariances initially manifest themselves as approximate symmetries of the probability distribution on the space of all possible perceptual observations. This distribution gives rise to a set of feature detectors via some unsupervised learning process, as it may be the case in primary sensory cortices. We assume that the set of detectors \enquote{inherits} a symmetry transformation of the distribution in the sense that every feature detector implies the existence of another detector for the transformed feature. In that case the invariance transformation can be expressed as a permutation of the feature detectors. Importantly, their pairwise activity correlations over many observations are then invariant under said permutation. Assuming a process of Hebbian learning to shape the recurrent synaptic connections between the feature detectors, those correlations are reflected in the synaptic weights. 
The invariance transformation should therefore give rise to a symmetry in the graph of synaptic connections between the feature detector neurons. \redact{ For example, consider translations and rotations as invariance transformations in the modality of vision. Let $T_{y,\sigma}$ be the combination of a translation by a vector $y$ and a rotation by an angle $\sigma$. We assume that the cells in primary visual cortex act as feature detectors which are typically activated by some edge-like structure with an orientation $\omega \in [0; 2\pi)$ at a position $x \in \mathbb{R}^2$ in the image. They \enquote{inherit} the translational-rotational invariance of visual space in the sense that for every detector $(x, \omega)$ we would expect the existence of another detector with the receptive field $(x+y, \omega + \sigma)$. The transformation $T_{y,\sigma}$ can then also be expressed as a permutation $\tau_{y,\sigma}$ of the set of feature detectors. If the visual world is truly invariant under $T_{y,\sigma}$, the correlation between the activations of two detectors $(x_1, \omega_1)$ and $(x_2, \omega_2)$ over many observations is the same as the correlation between the transformed pair of detectors $\tau_{y,\sigma}(x_1, \omega_1) = (x_1 + y, \omega_1+ \sigma)$ and $\tau_{y,\sigma}(x_2, \omega_2) = (x_2 + y, \omega_2+ \sigma)$. Assuming that the correlations translate directly into the recurrent connectivity structure between the detectors, which is to be expected from a process of Hebbian learning, the graph of synaptic connection strengths exhibits a symmetry under the permutation $\tau_{y,\sigma}$. The argument can be extended easily to other transformations and other modalities. } The following two sections are to formalize the ideas outlined above and to show that they are supported by empirical evidence. Alternative theories will be discussed in \Cref{sec:discussion}, as well as potential objections against the proposed concept, predictions, and conceivable extensions. 
\section{A New Way of Transformation Learning} \label{sec:it} % \subsection{Problem Statement} \label{sec:model:problem} Consider a model brain which perceives its environment over some extended time period via a set of $n$ information channels. Each of them might represent one atomic percept (like a pixel of an image) or some pre-aggregated combination thereof (like a small edge in an image). We call each such information channel a \emph{feature detector}, independent of whether it represents an atomic percept or pre-aggregated ones. An \emph{observation} is a snapshot of the feature detectors' state at some time $t$. For simplicity, we assume that at any time a feature can only be either present or not, represented by the numbers $1$ and $0$. We model the process of perception as a repeated independent random drawing from a discrete probability distribution $\Psi: \{0;1\}^n \rightarrow [0;1]$ on the feature state space, where $n$ is the total number of feature detectors under consideration. Let now $T$ be a transformation on the feature state space which acts via a permutation of the features. We call $T$ an \emph{invariance transformation} if it has no effect on $\Psi$, i.\,e.\xspace if $\Psi(Tx) = \Psi(x)$ for all $x \in \{0;1\}^n$. How can one find the invariance transformations of $\Psi$ given a finite set of observations $x$ only? In practically relevant cases, this problem is hard because $\Psi$ is a probability distribution in high-dimensional space and to approximate $\Psi$ its value needs to be estimated on a number of points which grows exponentially with the number of dimensions. In the following we shall discuss an approach to identify candidates for invariance transformations without being able to reconstruct $\Psi$ explicitly. Since it relies heavily on \emph{graph symmetries}, a brief general introduction to that concept is in order before introducing the approach. 
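As a concrete toy illustration of these definitions (a distribution of our own construction, not derived from any perceptual data), consider $n=4$ binary features with a distribution whose weight depends only on circularly adjacent pairs of co-active features. Such a $\Psi$ is invariant under the cyclic shift of the feature channels, but not under an arbitrary transposition:

```python
from itertools import product
from math import exp

n = 4

def score(x):
    # number of circularly adjacent positions that are both active
    return sum(x[i] and x[(i + 1) % n] for i in range(n))

# toy distribution Psi on {0,1}^4, normalized over all 16 patterns
Z = sum(exp(score(x)) for x in product((0, 1), repeat=n))

def psi(x):
    return exp(score(x)) / Z

def shift(x):   # cyclic permutation of the feature channels
    return x[1:] + x[:1]

def swap01(x):  # transposition of channels 0 and 1 only
    return (x[1], x[0]) + x[2:]

states = list(product((0, 1), repeat=n))
assert abs(sum(psi(x) for x in states) - 1.0) < 1e-12
# the cyclic shift is an invariance transformation: Psi(Tx) == Psi(x)
assert all(abs(psi(shift(x)) - psi(x)) < 1e-12 for x in states)
# a single transposition is not (e.g. it changes the score of (1,0,1,0))
assert any(abs(psi(swap01(x)) - psi(x)) > 1e-6 for x in states)
```

Here full enumeration of $\{0;1\}^4$ is feasible; the point made in the text is precisely that this brute-force check becomes impossible for realistic $n$.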
\subsection{Graph Symmetries} \label{sec:model:symmetry} % A \emph{graph} is a collection of uniquely identifiable (\enquote{labeled}) nodes, each pair of which can be connected by an edge, and it is called \emph{weighted} if each edge is associated with a number. Permutations can act on a graph by interchanging node labels without altering the edges. If the relabeled graph is structurally identical to the original one, the permutation is called a \emph{graph automorphism}. More formally, the permutation $\tau$ is an automorphism if and only if for every pair of nodes $u$ and $v$ either the edge between $u$ and $v$ has the same weight as the one between $\tau(u)$ and $\tau(v)$, or neither of the two edges exists. Further automorphisms can be constructed by applying $\tau$ repeatedly. This amounts to a chain of node exchanges which will ultimately, after a certain number of steps, restore the original graph. Such a sequence of permutations characterizes one \emph{graph symmetry} of $G$. \Cref{fig:symmetry} illustrates these definitions with a simple example. \begin{figure}[tb] \centering \includegraphics[width=0.7\textwidth]{symmetry} \caption{% Simple example of graph symmetries. Assume that the four edges have equal weights. Then, the cyclic permutation $\tau$, relabeling each node by the next higher number and 4 by 1, is a graph automorphism. Together with $\tau^2$, $\tau^3$, and the identity $\tau^4$, it characterizes a symmetry (\enquote{rotation}) of the graph. Similarly, the exchange of nodes 1 and 3 is an automorphism which gives rise to another symmetry (\enquote{reversal}). Further automorphisms can be constructed by combining the two symmetries. An exchange of the nodes 1 and 2, on the other hand, is not an automorphism since it breaks the link between $1$ and $4$, inter alia. 
} \label{fig:symmetry} \end{figure} \subsection{The Concurrence Graph} \label{sec:model:projections} % The crucial idea to find invariance transformations is to replace $\Psi$ by some of its projections to lower-dimensional spaces: For example, the marginal distribution of an individual component $x_k$ or of two components $x_k$ and $x_{k'}$ can be estimated accurately from a relatively small number of observations. The marginal distributions \enquote{inherit} the symmetries of the original distribution, i.\,e.\xspace if $\Psi$ is unaltered by some permutation $\tau$ of the feature detectors, then so are the marginal distributions (see the appendix for details). Yet the reverse statement is not true: Certain marginal distributions of $\Psi$ might exhibit a symmetry property which $\Psi$ itself does not have. For example, if visual perceptions are translation invariant, and if we choose the features to be individual pixels, then every $\Psi_k$ (i.\,e.\xspace the probability of a pixel $k$ being \enquote{on} or \enquote{off}) is equal. Thus every possible permutation of pixels leaves these marginal distributions of $\Psi$ unchanged, which is obviously not true for $\Psi$ itself. In the light of these considerations, it is necessary to find a reasonable trade-off when choosing the projections applied to $\Psi$: On the one hand, the dimension of the projected functions should be small enough that they can be approximated by the observations given. On the other hand it should not be so small that too many additional symmetries appear as artifacts. For the present discussion, the projection of $\Psi$ to two-dimensional spaces is the most relevant case. For every pair $k \neq m$ of features, the projected probability distribution $\Psi_{k,m}$ is determined by three numbers: The individual probabilities of the features $k$ and $m$ to be \enquote{on} and their joint probability to be \enquote{on} simultaneously. 
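The automorphism condition from the previous subsection can be checked mechanically. Here is a minimal sketch for the weighted four-node cycle of \Cref{fig:symmetry}, with nodes relabeled $0,\dots,3$ (illustrative code, not part of the formal development):

```python
# Weighted 4-cycle from the symmetry example: edges 0-1, 1-2, 2-3, 3-0
n = 4
W = [[0.0] * n for _ in range(n)]
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    W[u][v] = W[v][u] = 1.0

def is_automorphism(perm):
    # perm is an automorphism iff every edge weight is preserved
    return all(W[u][v] == W[perm[u]][perm[v]]
               for u in range(n) for v in range(n))

assert is_automorphism([1, 2, 3, 0])      # cyclic "rotation"
assert is_automorphism([2, 1, 0, 3])      # "reversal": exchange nodes 0 and 2
assert not is_automorphism([1, 0, 2, 3])  # swapping neighbors breaks edges
```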
Note the special case where all the $\Psi_k$ distributions are known and equal, i.\,e.\xspace every feature has the same probability of being \enquote{on}: the distribution $\Psi_{k,m}$ is then determined by only one number for each pair $\{k; m\}$, namely by the probability of features $k$ and $m$ being \enquote{on} simultaneously. If we now associate each feature with the node of a graph, and take this probability to be the weight of the edge between nodes $k$ and $m$, then \textbf{every invariance transformation of $\Psi$ turns into a symmetry of this graph}. Since the edges of the graph are a measure for the probability of two features being \enquote{on} jointly, we shall call it the \emph{concurrence graph}. For the present purpose, we are only interested in symmetries of the concurrence graph and not in the numeric values of its edge weights. In particular, it does not matter if the weight between $k$ and $m$ represents the probability of those features being \enquote{on} simultaneously or some strictly monotonous function of that probability. This flexibility strengthens the biological plausibility of the presented concept and we shall therefore use the term \enquote{concurrence graph} in a broader sense for any graph whose edge weights are computed by some strictly monotonous function of the actual probabilities. Simple schematics of a concurrence graph with a caricature of V1 cells are shown in \Cref{fig:graph,fig:scaling}. For readability, only a very small number of feature detectors are depicted, respectively. \Cref{fig:graph} shows a subset appropriate to exhibit translation and rotation invariance, while \Cref{fig:scaling} demonstrates scale invariance. According to the present proposal, the true graph of neural connections in V1 should be thought of as a combination of both figures. An example of how the concurrence graph conceptually supports the completion of perceptual tasks is given in \Cref{fig:intuitive}. 
The blue and the green \enquote{H} are connected by a graph symmetry, as explained in the caption. Applied to this example, the claim of the present article reads as follows: The brain considers the two letters \enquote{the same} because of their indistinguishable embedding in the graph structure. In the following, we will discuss how the concurrence graph can form organically in the synaptic connections of a neural system. \begin{figure}[tb] \centering \includegraphics[width=0.7\textwidth]{graph} \caption{% Schematic of a concurrence graph. Nodes are boxes, each of which corresponds to a feature detector with orientation $\omega$ on an image patch at the respective $(x,y)$ position. The line strength between boxes represents the weight of the respective edge, with connections between co-linear features being particularly strong. Lines between diagonal or indirect neighbors are omitted for better readability. The graph is symmetric under reflections and under translations except for border effects. Another symmetry is a \ang{90} rotation of the $x$-$y$-plane combined with a simultaneous interchange of horizontal and vertical feature detectors. } \label{fig:graph} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.7\textwidth]{scaling} \caption{% Concurrence graph as in \Cref{fig:graph}, depicted with focus on scale invariance. A subset of feature detectors with receptive fields of different sizes $\lambda$ are shown. The organic development of scale invariant detector sets is hypothetically a consequence of the corresponding invariance of natural visual stimuli, cf.\xspace \Cref{sec:empirical:natural}. Despite the suggestive hierarchical structure all depicted connections are recurrent, while feed-forward connections to the detectors are omitted. The transformation $T$ combines a translation in $x$ and a rescaling. 
It approximates a graph automorphism, characterized by its action on the nodes in blue ovals and a corresponding action on all other nodes, except for the \enquote{border effects} at the extreme values of $x$ and at the highest and lowest scaling levels. } \label{fig:scaling} \end{figure} \begin{figure}[tb] \hfill \begin{subfigure}[t]{0.28\textwidth} \centering \includegraphics[width=0.9\textwidth]{intuitive2} \caption{Visual stimuli} \label{fig:intuitive:stimuli} \end{subfigure} \hfill \begin{subfigure}[t]{0.68\textwidth} \centering \includegraphics[width=0.9\textwidth]{intuitive} \caption{Concurrence graph} \label{fig:intuitive:graph} \end{subfigure} \caption{Two visual stimuli (\subref{fig:intuitive:stimuli}) activate sets of feature detectors (\subref{fig:intuitive:graph}) marked in the corresponding color (note that this is a model of black-and-white vision and the colors are for differentiation only). The two stimuli are connected by an invariance transformation, namely a translation and a rotation by $\ang{90}$. The corresponding transformation in the concurrence graph is a permutation of all feature detectors, in particular mapping $A$ to $A'$, $B$ to $B'$, etc.\xspace It is also a graph symmetry: Note how the blue and the green nodes are identically embedded in the graph. For example, A is connected to B by a strong black line just like A' to B', and analogously for all other corresponding pairs of nodes (including those which are not activated by the stimuli, like D).} \label{fig:intuitive} \end{figure} \subsection{Formation of the Concurrence Graph in the Synaptic Structure} \label{sec:model:concurrencegraph} According to the present proposal, the concurrence graph is part of a \emph{theme of connectivity} (see \Cref{fig:theme}) which forms organically in primary sensory cortex via biologically plausible mechanisms. The first phase of the process is the formation of the feature detectors through competitive Hebbian learning \cite{rumelhart_feature_1985}. 
Each neuron receives direct feed-forward perceptual input and whenever it is activated strongly enough, it will \enquote{fire}, inhibit the other neurons, and tune its feed-forward synaptic connections closer to the current input in accordance with the Hebbian learning rule. Examples for such competitive learning algorithms are Kohonen Maps \cite{kohonen_self-organized_1982}, (Growing) Neural Gas \cite{martinetz_topology_1994} or variants of sparse coding dictionary learning \cite{Elad:2010}. While the quantitative details may differ between these implementations of competitive learning, the qualitative outcome tends to be similar: Ultimately each neuron represents a certain pattern in sensory perception. In the second phase of the learning process, which may overlap in time with the first phase, the recurrent connections between the feature detectors are established. Simply applying Hebbian learning again, the synapses between two feature detectors are strengthened whenever they are both activated simultaneously. Consequently, the concurrence graph has materialized in the structure of recurrent neural connections between the feature detectors. Its symmetries represent the invariances of the environmental stimuli that have shaped the network during the training process. It can now serve as a substrate on which different neural algorithms can be implemented, like object classification, mental imagery tasks, or the planning of bodily motions \cite{powell_can_2021}. In particular, when a new object is perceived for the first time, it is decomposed into the same set of features that form the graph's nodes, and its symmetries can directly be applied to the new object. To further illustrate this point, note that the concurrence graph in \Cref{fig:intuitive} could have been formed by visual experience without ever seeing the particular letter \enquote{H} before. 
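The second learning phase can be illustrated with a minimal simulation sketch (a toy construction of ours: four detectors on a ring, each stimulus activating one adjacent pair). Hebbian co-activation counting then produces a weight matrix that is approximately symmetric under the cyclic shift, mirroring the invariance of the stimulus distribution:

```python
import random

random.seed(0)
n = 4  # feature detectors arranged on a ring

def draw_observation():
    # stimulus distribution invariant under cyclic shift of the detectors:
    # each stimulus activates one pair of ring-adjacent detectors
    i = random.randrange(n)
    x = [0] * n
    x[i] = x[(i + 1) % n] = 1
    return x

# Hebbian second phase: strengthen the recurrent weight between two
# detectors whenever both fire in the same observation
W = [[0] * n for _ in range(n)]
for _ in range(20000):
    x = draw_observation()
    for u in range(n):
        for v in range(n):
            if u != v and x[u] and x[v]:
                W[u][v] += 1

shift = [(u + 1) % n for u in range(n)]  # candidate invariance permutation

def max_asymmetry(perm):
    # largest relative weight change under the permutation
    total = sum(sum(row) for row in W)
    return max(abs(W[u][v] - W[perm[u]][perm[v]]) / total
               for u in range(n) for v in range(n))

# the learned concurrence graph is approximately symmetric under the shift
assert max_asymmetry(shift) < 0.05
```

With finitely many observations the symmetry is only approximate, which is consistent with the text's framing of invariances as approximate symmetries of the learned graph.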
The central claim of this article, namely that invariance transformations are encoded as graph symmetries, is somewhat similar to the proposition that numbers are encoded as binary states of electrical current in a microchip: the representation only has practical value when it is paired with a computational mechanism -- logical gates and circuits in the case of the microchip. We offer some speculations about how the brain employs graph symmetries in computation in \Cref{sec:discussion:readout}. But even in complete ignorance of the computational process, a theory of such representations is of value by itself: It can be tested in experiment as shown in \Cref{sec:discussion:predictions} and, if correct, it provides a basis and direction for further investigation. \begin{figure}[tb] \centering \includegraphics[width=0.7\textwidth]{theme} \caption{% According to the postulated theme of connectivity for primary sensory cortices, feature detector units are shaped by external stimuli transmitted through feed-forward connections in a process of competitive Hebbian learning. Recurrent connections are then built up also according to a Hebbian learning rule, thus becoming a measure for the correlation between two feature detectors. } \label{fig:theme} \end{figure} \section{Empirical Support} \label{sec:empirical} In the following, we present a survey of empirical observations in support of two major assertions of this paper: Firstly, that the symmetries of concurrence graphs generated by natural sensory perceptions are indeed indicative of real-world invariance transformations. And secondly, that the synaptic connectivity structure in primary sensory cortices approximates the proposed theme of connectivity and thus the concurrence graph. 
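The notion of a graph symmetry used throughout can also be stated computationally: a permutation of the nodes is a symmetry of a weighted graph precisely when it leaves every edge weight unchanged. The following minimal sketch expresses this check; the ring-shaped toy graph and the NumPy encoding are assumptions of this illustration, not part of the empirical material:

```python
import numpy as np

def is_symmetry(W, perm, tol=1e-9):
    """Check whether the node permutation `perm` preserves all edge
    weights of the graph with weight matrix W, i.e. whether it is a
    (weighted) graph automorphism."""
    P = np.asarray(perm)
    return np.allclose(W[np.ix_(P, P)], W, atol=tol)

# Toy concurrence graph: 6 feature detectors on a ring, each connected
# to its two neighbours -- a stand-in for a translation-invariant
# stimulus ensemble with periodic boundary conditions.
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0

rotation = [(i + 1) % n for i in range(n)]   # shift every node by one
swap = [1, 0] + list(range(2, n))            # exchange nodes 0 and 1 only

assert is_symmetry(W, rotation)      # the rotation preserves all edges
assert not is_symmetry(W, swap)      # an arbitrary swap does not
```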
\subsection{Invariances in Natural Stimuli} \label{sec:empirical:natural} Empirical support for the idea that invariance transformations of the environment are encoded in the concurrence structure of features is available for both visual and auditory perception. \paragraph{Vision} A significant number of studies have analyzed the intrinsic statistics of natural images (see e.\,g.\xspace \cite{jinggang_huang_statistics_1999, ruderman_statistics_1994, geisler_visual_2008} and references therein). A standard approach is to decorrelate the data by decomposing it into features which usually represent small edges of different position and orientation in the image \cite{olshausen_emergence_1996}. Several studies have computed correlations between such features and consistently found two types of interaction: First, the strongest correlations exist between co-linear features, i.\,e.\xspace, between edges which are positioned along a straight line \cite{August-Zucker_curve_2000, kruger_collinearity_1998, geisler_edge_2001}. Second, a positive correlation is also found between features which are co-circular \cite{sigman_common_2001}, i.\,e.\xspace between edges which are positioned such that one circle can be drawn through both of them, see \Cref{fig:correlations:edges}. Altogether this shows that features in natural images have an intricate correlation structure depending on their distance and relative orientations, which is a necessary condition for meaningful symmetries to be identified. The studies cited above report feature correlations only for relative distances and mostly for relative orientations of features rather than for absolute ones. For our purpose, this is a limitation as translation and rotation invariance of natural images are then implicitly presumed rather than measured. 
Translation invariance seems to be generally accepted to hold, at least approximately and given that the selection of images is not too narrow\footnote{A set of landscape photos with a blue sky in the upper half will certainly not exhibit translation invariant statistics.}. It is even more plausible for the statistics of real visual percepts than for collections of photographs, since eye saccades constantly create sequences of translated copies on the retina. Rotational invariance of image statistics does not hold exactly since natural visual stimuli are somewhat anisotropic \cite{hansen_horizontal_2004}. There is a quantitative dominance of horizontal and vertical edges, but according to \cite{sigman_common_2001} the correlation structure between features is at least qualitatively the same for different (absolute) orientations. The anisotropy might also be attenuated when taking into account the full visual experience of an animal or human during development, as opposed to a set of photographs taken with a (usually) horizontally aligned camera. Scale invariance is another well-studied property of natural images: Several of their statistical properties are not affected by zooming into or out of the picture \cite{ruderman_origins_1997}. The set of feature detectors should therefore cover different scales as well, unless a bias for a certain size of receptive fields is inherent in the learning process. A mix of features with similarly shaped receptive fields of different spatial extension is indeed the outcome of computational models like sparse coding applied to natural images \cite{olshausen_emergence_1996, olshausen_sparse_1997}. For real neurons the situation appears to be more complicated: \enquote{the widely accepted notion that receptive fields of neurons in V1 are scaled replica of each other [\dots] is valid in general only to a first approximation} \cite{teichert_scale-invariance_2007}.
Nevertheless, given that both the image statistics and the set of feature detectors are (at least approximately) scale-invariant, it seems reasonable to assume that the concurrence graph also exhibits the respective symmetry approximately. In summary, there is strong evidence that the correlation structure between features in natural images is pronounced enough to make the search for symmetries in the concurrence graph meaningful. According to the data presented, it is to be expected that the graph is at least approximately invariant under translation, rotation, or rescaling. \paragraph{Acoustics} Strong correlations between frequency bands differing by small integer ratios should be expected in a wide variety of sounds, since emitters and resonators tend to mechanically oscillate at a mix of their fundamental frequency and some overtones simultaneously. Indeed, such correlations have been measured by Abdallah and Plumbley \cite{Abdallah:2006b} in real music data. \Cref{fig:correlations:frequ} shows a schematic of the cross-correlations between short-term Fourier transform magnitudes from several hours of music radio recording. In another study \cite{Abdallah:2006a}, the same authors first used Independent Component Analysis (ICA) to create a set of basis vectors in an attempt to optimally decorrelate a data set of short music samples. Then they estimated the remaining mutual information between the projections of the audio data onto the different basis vectors. Since most of the basis vectors are well localized in frequency space, each of them can be represented by its center frequency. A plot of the estimated mutual information of pairs of frequencies again resembles \Cref{fig:correlations:frequ}. 
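The reported frequency correlations can be mimicked in a toy simulation. Treating each \enquote{sound} as the first few overtones of a random fundamental is a deliberate oversimplification of the cited analyses, not a reconstruction of them; under that assumption, the concurrence matrix of log-spaced frequency bins accumulates its weight at harmonic intervals, and a multiplicative frequency change corresponds to a shift along the log axis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Log-spaced frequency bins, 12 per octave over 6 octaves.
BINS_PER_OCTAVE = 12
N_BINS = 6 * BINS_PER_OCTAVE
freqs = 55.0 * 2.0 ** (np.arange(N_BINS) / BINS_PER_OCTAVE)

def harmonic_sound():
    """A toy 'sound': a random fundamental plus its first few
    overtones, mapped to the nearest log-frequency bins."""
    f0 = freqs[rng.integers(0, N_BINS // 4)]
    active = np.zeros(N_BINS)
    for k in (1, 2, 3, 4):
        idx = np.argmin(np.abs(freqs - k * f0))
        active[idx] = 1.0
    return active

# Concurrence matrix of frequency bins over many sounds.
C = np.zeros((N_BINS, N_BINS))
for _ in range(5000):
    a = harmonic_sound()
    C += np.outer(a, a)
np.fill_diagonal(C, 0.0)

# The octave relation (frequency ratio 2, i.e. a shift of 12 bins on
# the log axis) carries far more weight than a non-harmonic interval
# such as the tritone (6 bins).
octave = sum(C[i, i + BINS_PER_OCTAVE] for i in range(N_BINS - BINS_PER_OCTAVE))
tritone = sum(C[i, i + 6] for i in range(N_BINS - 6))
assert octave > 10 * tritone
```

On the logarithmic frequency axis the octave relation appears as a constant shift of twelve bins, which is why a multiplicative rescaling of all frequencies maps the matrix approximately onto itself.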
\begin{figure}[tb] \hfill \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[width=0.8\textwidth]{edge_correlations} \caption{% Correlations between edges in natural images: The blue lines represent the relative positions and orientations of edges with a strong correlation to an edge at the position of the red line. Correlations are highest for the co-linear edges, followed by the co-circular ones, cf.\xspace \cite{sigman_common_2001}. } \label{fig:correlations:edges} \end{subfigure} \hfill \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[width=0.8\textwidth]{frequency_correlations} \caption{% Schematic of the cross-correlations between spectral magnitudes for several hours of music radio. The straight lines represent areas of high correlations. The graph is invariant under a multiplication of the frequency scale. For plots of the original data see \cite{Abdallah:2006a} and \cite{Abdallah:2006b}. } \label{fig:correlations:frequ} \end{subfigure} \caption{Empirical data about feature correlations in natural stimuli.} \end{figure} \Cref{fig:correlations:frequ} clearly shows the strong connection between frequencies differing by a harmonic interval. Considering either the frequency bands or the ICA basis vectors as features (in the sense defined above), the edges of the concurrence graph measure how often two given frequencies contribute significantly and simultaneously to some short audio sample. The plot is therefore qualitatively similar to the weight matrix of the concurrence graph. An approximate symmetry of the concurrence graph can be found by visual inspection of \Cref{fig:correlations:frequ}: Multiplying every frequency with some constant factor amounts to a rescaling of both axes in each plot, mapping the plot onto a scaled version of itself. 
Such a rescaling leaves the main features of the plot -- namely the straight lines radiating from the origin -- unchanged, which implies an approximate symmetry of the weight matrix and thus of the concurrence graph. The symmetry is only approximate because it is necessarily broken at very high and very low frequencies. Also, as Abdallah and Plumbley observe, the symmetry is slightly broken by twelve \enquote{ripples} per octave (not shown in \Cref{fig:correlations:frequ}) which seem to be related to the semitone quantization of western music. In summary, as far as perceptual input is concerned, our theory appears viable both for the visual and the auditory domain. In the following we will show that the concurrence graph -- including its symmetries -- may also be implemented in the neural structure of primary sensory cortices. \subsection{Cortical Theme of Connectivity} \label{sec:empirical:connectivity} The following is an overview of empirical observations which are consistent with the theme of connectivity described in \Cref{sec:model:concurrencegraph}. \paragraph{Visual Cortex} Neurons in primary visual cortex are often interpreted as feature detectors. These cells receive their feed-forward input from the lateral geniculate nucleus and they are activated by features at a particular position and orientation in the visual field. They are also interconnected through a tight network of recurrent synapses. Several studies, e.\,g.\xspace \cite{ko_emergence_2013, iacaruso_synaptic_2017, ko_functional_2011}, have shown that two such cells are preferentially connected when their receptive fields are co-oriented and co-axially aligned, thus reflecting the statistical correlation of co-linear edges in natural images.
One might expect that the (weaker) correlations between co-circular edges are also expressed in the synaptic connectivity structure, yet the only related study that we are aware of was challenged by very limited data availability and turned out rather inconclusive \cite{hunt_statistical_2011}. It has been shown, though, that the degree of co-circularity in a contour influences human contour detection performance \cite{geisler_edge_2001}. \paragraph{Auditory Cortex} Neurons in primary auditory cortex receive feed-forward input from thalamocortical connections as well as intracortical signals via recurrent connections. The feed-forward input is tonotopically organized and A1 neurons typically respond to one or several characteristic frequencies. According to the hypothetical theme of connectivity, and given the correlation statistics of natural audio stimuli, intra-cortical connections should be strongest between neurons if their characteristic frequencies differ by a harmonic interval. Indeed, some support for this hypothesis is reviewed in \cite{wang_harmonic_2013}: Tracing the diffusion of a marker substance after local injection into cat auditory cortex shows that \enquote{the intrinsic connections of A1 arising from nearby cylinders of neurons are not homogenous and clusters of cells can be identified by their unique pattern of connections within A1} \cite{wallace_intrinsic_1991}. In particular, horizontal connections displayed a periodic pattern along the tonotopic axis. In similar tracing experiments on cat A1 it was found that injections into a specific cortical location caused labeling at other A1 locations that were harmonically related to the injection site \cite{kadia1999horizontal}.
In summary, evidence from primary sensory cortical areas suggests a common cortical theme of connectivity in which neurons are tuned to specific patterns in their feed-forward input from other brain regions, while being connected intracortically according to statistical correlations between these patterns. \section{Discussion} \label{sec:discussion} \subsection{The Read-Out Mechanism} \label{sec:discussion:readout} % So far we have collected evidence that invariances are encoded in feature correlations of natural stimuli and that they are reproduced in the connectivity structure of primary sensory cortices. An open question remains about the \enquote{read-out mechanism}, i.\,e.\xspace how the brain utilizes the concurrence graph to solve computational problems. While a definite answer to this question is out of reach for now, the following arguments indicate that the existence of such a mechanism is indeed conceivable. Of course, it is highly improbable that the brain can identify arbitrary graph symmetries without additional assumptions: There is no algorithm known which solves this problem in complete generality in polynomial time, let alone in a biologically plausible way \cite{kobler_graph_1992}. Yet it is possible that some approximation scheme has evolved which is effective in uncovering those invariances that are encoded in natural stimuli. Such a heuristic might be based on the assumption that invariance transformations are continuous, i.\,e.\xspace they can be generated by a sequence of infinitesimally small steps\footnote{It is not required that such a sequence of infinitesimal transformations can be observed as a time-continuous process.}. This is certainly true for the important examples of rotation, rescaling and translation in images or multiplicative frequency change in audio signals. Expressed in terms of the concurrence graph, the continuum of transformations is discretized into different permutations of the nodes. 
In particular, infinitesimal transformations are approximated by graph automorphisms which map features to only slightly transformed features. Since the feature detectors are not perfectly precise, there will be an overlap of the receptive fields between two detectors whose features differ only by an infinitesimal transformation. Such a pair of detectors will be correlated and therefore strongly connected in the concurrence graph. In summary, infinitesimal transformations translate to permutations within neighborhoods of the concurrence graph -- a restriction which dramatically reduces the search space for potential symmetries. A specific mechanism to exploit these assumptions might rely on wave-like propagation of activity through the network. Suppose that an environmental stimulus activates a certain subset $\Sigma$ of the feature detectors. Assume further that this activation can be passed on to other subsets which are in the graph vicinity of $\Sigma$ and which are the image of $\Sigma$ under a graph automorphism. If each of these subsets can in turn activate further subsets, a wave of activity may propagate along all directions in the space of possible invariance transformations. Every point of the wave front contains a transformed representation of $\Sigma$ and as it travels through the network it can be detected by some feature detector on a higher layer. The wave also maintains the information about the original location of $\Sigma$ in the space of transformations: From the time difference it takes for the wave front to arrive at certain points in transformation space one can always restore the point of origin. See \Cref{fig:waves} for a sketch of how this mechanism would work in a very simplistic scenario. One concern about the read-out mechanism sketched above is that the activated subsets of feature detectors might overlap and interfere. To avoid this, their respective activities need to be segregated into different \enquote{channels}. 
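The claim that the point of origin can be restored from arrival-time differences can be distilled into a minimal sketch. The one-dimensional array, the unit propagation speed, and the two end detectors are illustrative assumptions; the point is only that the time difference between two wave fronts uniquely encodes the \enquote{where}:

```python
# Toy version of the wave-based read-out on a 1-D feature array:
# a stimulus at position p launches two wave fronts traveling at unit
# speed toward fixed "template detectors" at both ends of the array.

ARRAY_LENGTH = 100

def arrival_times(p):
    """Times at which the left- and right-traveling wave fronts,
    started at position p, reach the detectors at 0 and ARRAY_LENGTH."""
    t_left = p                      # distance to the left detector
    t_right = ARRAY_LENGTH - p      # distance to the right detector
    return t_left, t_right

def decode_position(t_left, t_right):
    """Recover the stimulus position from the time difference alone:
    t_left - t_right = 2 p - ARRAY_LENGTH."""
    return (ARRAY_LENGTH + (t_left - t_right)) / 2

for p in (3, 42, 97):
    tl, tr = arrival_times(p)
    assert decode_position(tl, tr) == p
```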
One option is to encode the assignment of detectors to subsets by temporal synchronization of their activity, allowing a single detector to participate in several subsets simultaneously. Another option is to rely on some degree of redundancy in the set of feature detectors, such that each feature can be represented by several detectors which may participate in different subsets independently. While this account still omits many details, it might lead towards a biologically plausible mechanism to separate a stimulus into a \enquote{what} and a \enquote{where} (either literally or in some abstract space of transformations). Wave-like propagation of cortical activity has been observed in many experiments. In a different article \cite{powell_can_2021} we proposed that such waves are used to solve planning problems and gave an overview of empirical support for this idea. It is appealing to speculate that both perception and planning problems which are subject to (invariance) transformations could be supported by essentially the same mechanism. Finally, using time differences in the sub-millisecond range to infer the location of a stimulus is also known to be in the computational repertoire of the brain, namely when localizing the source of an auditory stimulus \cite{grothe_mechanisms_2010}. It seems conceivable to employ a similar mechanism to reconstruct the origin of a stimulus in other and more abstract spaces. \begin{figure}[tb] \centering \includegraphics[width=0.7\textwidth]{waves} \caption{% A simple example of a read-out mechanism of the concurrence graph. Feature detectors with two types of orientations $\omega$ have their receptive fields along a \enquote{one-dimensional retina} ($x$-axis) and form a concurrence graph with translational symmetry. When the letter \enquote{H} (in blue) enters the field of vision, it activates three feature detectors (marked in gray).
This triggers two wave fronts of activity traveling through the graph, each of which conserves the relative position (as encoded in the graph symmetry) of the three features. The wave fronts reach the two \enquote{H-detectors} on either side and activate them via feed-forward connections. The \enquote{what} information of the stimulus is now encoded in the fact that the H-detectors are activated and the \enquote{where} information is encoded in the time difference $\Delta t$ between their firing (vertical axis, with two black arrows visualizing the spatiotemporal trajectory of the wave fronts). Note that the H-detectors may not have had a chance to emerge as feature detectors on the same layer as the edge detectors in the first place, because the \enquote{H} is less frequent as a pattern than each individual edge. But given the wave propagation dynamics, separate observations of the letter at different positions repeatedly and consistently give rise to the same shape of wave front and thus may be sufficient as a learning signal for new feed-forward feature detectors. } \label{fig:waves} \end{figure} \subsection{Generalized Transformations} \label{sec:discussion:generalization} The concept presented so far is based on global transformations represented by symmetries of the entire concurrence graph. For many perceptual tasks, the brain also requires an understanding of local transformations, such as one individual object moving in front of a static background or several objects moving independently of each other. The potential read-out mechanism outlined above can be extended to allow for such local transformations, too. In the first step, a global scene activating a feature set is segmented into subsets representing individual objects. 
Intriguingly, the concurrence graph itself is well suited as a substrate for a segmentation algorithm: If two features often appear together in general, they are not only likely to have a common cause, but they are also strongly connected in the concurrence graph. Therefore, when a set of feature detectors is activated simultaneously and their features form a tightly connected clique in the concurrence graph, then they are likely to collectively represent one and the same object. It has been proposed by Singer that this mechanism of perceptual grouping is implemented in the brain and that the different segments are encoded as independently synchronized assemblies of firing neurons \cite{singer_synchronization_1993}. The read-out mechanism proposed in the present article extends Singer's idea by postulating that, in the second step, each of the feature subsets is the source of a wave of activity, traveling through the concurrence graph, and thus encoding for the \enquote{what} and the \enquote{where} of several objects in the scene simultaneously. Effectively, each object locally \enquote{inherits} the transformation which was originally only understood as a global transformation, induced by a symmetry of the probability distribution $\Psi$, cf.\xspace \Cref{sec:model:problem}. It remains open if and how the proposed concept can be extended to certain other transformations, with the case of three-dimensional spatial transformations being of particular interest. It is conceivable that progress can be made via a hierarchical stacking of the neural network architecture underlying the present proposal. \subsection{Other Modalities} \label{sec:discussion:modalities} The preceding discussion was limited to the visual and auditory modalities that provide lucid examples of invariance transformations and for which a large body of empirical results is available.
Yet great care has been taken to avoid any assumptions specific to those two domains and therefore the concept can in principle be extended to other human or nonhuman modalities. For tactile sensations the concept of invariance transformations is meaningful, since external objects can be identified through touch regardless of their orientation in space and with some flexibility regarding the body part which makes contact. Indeed there is evidence that somatosensory cortex follows the theme of connectivity described above with tactile stimuli detectors having receptive fields similar to those in V1 and recurrent connections depending on their likelihood of simultaneous stimulation -- cf.\xspace \cite{powell_can_2021} and references therein. Yet characterizing the relevant invariances explicitly is harder than in vision or audio due to the interplay between sensory perception and bodily posture. We are not aware of any empirical results about the correlation structure of natural tactile stimuli applicable to the presented concept. In the olfactory modality \enquote{no obvious metric is available to describe either the space of odor perceptions or the space of odor chemistry} \cite{wright_odor_2005}. Consequently, the notion of invariance transformations and thus the presented concept may not apply at all. It fits the picture that olfactory processing also differs anatomically from other modalities in that the respective neural circuits are shallower than their visual and auditory counterparts \cite{laurent_odor_2001}. \subsection{Objections} \label{sec:discussion:objections} Several aspects of the presented proposal might be controversial, in particular with respect to the definition of transformations and how they translate into symmetries of the concurrence graph. The following paragraphs address some of the potential concerns.
The basic question of how to define invariance transformations invites almost philosophical discussions: Are they characterized by the possibility that some object can undergo the transformation physically, as in spatial movement? Or are they rather mappings between hypothetical objects which differ in just one attribute? And if the latter, which attributes are subject to an invariance transformation and which are not? In the present proposal, invariance transformations are simply a way for the brain to replace some na\"ive distance measure in the space of possible perceptions by a metric which is better suited to the respective domain -- suitability being defined by how well it enables the organism to categorize and model its environment, and thus ultimately by evolutionary success. The claim is in essence that symmetries of the concurrence graph allow for statistical learning of transformations whose application in cognitive processes is useful for an animal's survival. There is no need to decide whether some candidate invariance is \enquote{right} or \enquote{wrong}. Even our initial assumption of the probability distribution $\Psi$ being symmetric under invariance transformations, see \Cref{sec:model:problem}, is to be understood as a starting point for a simple and consistent mathematical backbone to the proposed concept rather than a definition of real-life invariances. In fact, as critics may point out, $\Psi$ is not invariant under some relevant transformations: Most objects in our environment, for example, have a preferred orientation and thus $\Psi$ cannot be invariant under spatial rotation. Similarly, environmental sound sources and musical instruments alike are restricted to certain frequency ranges and therefore the invariance of $\Psi$ under multiplicative frequency change is broken.
Nevertheless, for each object the symmetry may hold approximately within some range (e.\,g.\xspace, small angles or frequency multiples close to $1$) and a large number of such objects add to the overall correlation structure of natural stimuli. Given that each of their contributions is transform-invariant over a certain range, it is plausible that the deviations from global transform-invariance approximately balance out and a global symmetry in the concurrence graph emerges. And according to the empirical observations reviewed in \Cref{sec:empirical:natural}, this seems in fact to be the case. One might still object that a certain symmetry which is manifest in perceptual input could be lost in the observation process. For example, the density of cones and ganglion cells is highly inhomogeneous across the retina \cite{curcio_topography_1990}, such that even simple spatial translations cannot be expressed as permutations of retina cells. Yet for at least two reasons that observation does not invalidate the presented concept: First, the formation of feature detectors by means of unsupervised learning may restore statistical regularities which are present in the perceptual input data but got distorted during the first stages of processing. For example, the probability of finding an edge at a certain orientation and position in an image (and thus the tendency to develop a detector for that particular edge) should be independent of the resolution at which this image patch is processed locally, as long as it is high enough to clearly represent the edge. Second, if the proposed concept is indeed implemented by biological processes in the brain, then it can be expected to be robust under perturbations. In particular, the read-out mechanism outlined in \Cref{sec:discussion:readout} is based on approximate local symmetries of the graph and might be relatively unaffected by global deviations from perfect symmetry. 
Finally, critics may suspect that the projection of $\Psi$ onto lower-dimensional spaces might cause too many artificial symmetries to be useful, cf.\xspace \Cref{sec:model:projections}. Yet there is reason to believe otherwise: We have already argued in \Cref{sec:discussion:readout} that neighboring feature detectors have overlapping receptive fields and thus relatively strong mutual connections in the concurrence graph. For simplicity, assume that these neighborhood connections are the strongest ones to be observed at all, which will be true at least when the density of feature detectors and therefore the overlap between neighbors is high enough. Then neighboring features are always represented by the most strongly connected nodes in the graph and vice versa. Graph symmetries therefore map neighboring features to neighboring features and thus preserve the topology of the feature space, which dramatically reduces the possibilities for symmetries to arise randomly. Nevertheless, one cannot rule out that artifacts exist and as explained above it is not always obvious whether an invariance is \enquote{real} or an artifact. For example, one might speculate whether the relative ease and precision with which humans can match musical intervals reflects an evolutionary adaptation facilitating auditory processing or merely a byproduct of the cortical standard mechanism to process topologically arranged stimuli. \subsection{Comparison to Alternative Concepts} \label{sec:discussion:novelty} Several alternative theories for the emergence of perceptual invariance in the brain have been suggested. Some date back as far as the 1940s but not all of them have passed the test of time, see \cite{olshausen_neural_2013} for an overview.
It seems to be generally accepted that at least some degree of statistical learning must be involved in establishing invariance transformations, since a complete determination of the relevant neural circuits via evolutionary \enquote{hard-coding} is ruled out by many empirical observations on cortical plasticity \cite{barnes_sensory_2010}. The learning mechanism might depend on the particular type of transformation or perceptual modality, but in the light of the anatomical homogeneity and cross-modal plasticity of neocortex one common explanatory framework appears preferable. Some of the alternative theories are based on the assumption of time continuity and they attempt to reconstruct general transformations from pairs of perceptions separated by some infinitesimal time interval \cite{foldiak_learning_1991, cadieu_learning_2012}. These models focus on the visual system and they do not attempt to explain the emergence of other invariances which cannot usually be observed as a time-continuous process, like a change of key in music. Another class of theories postulates a dynamic remapping of an object's perceptual representation onto some invariant template. Hinton \cite{Hinton_Parallel_1981} proposed a neural network with a \enquote{mapping unit} for every possible transformation which sends the present sensory input to its respective transformed version. The system is designed to optimize the match between the transformed percepts and some previously memorized templates, converging to the right transformation and the right template simultaneously. Hinton's model does not attempt to give a biological explanation for the origin of the mapping units and their correct encoding of invariance transformations.
A different remapping approach based on graph matching has been proposed by von der Malsburg and Bienenstock \cite{von_der_malsburg_pattern_statistical_1986} and further developed by von der Malsburg and others \cite{von_der_malsburg_pattern_1988, lades_distortion_1993, Westphal_feature-driven_2008}. The concept bears some similarity to the ideas presented in this article in that it attempts to identify an object by representing it as a graph of features and matching it to the most similar graph out of a set of memorized templates. In contrast to our proposal, their graph matching focuses exclusively on the features which are actually observed in a particular image and it ignores their embedding in a wider correlation structure with currently inactive features. The invariance transformations are \enquote{hard-coded} in the graph representation and in the matching process itself, by defining which features at different positions in the image are considered \enquote{the same} and by making the matching explicitly translation invariant or insensitive to rescaling and deformation. Yet another concept has been put forward by Anselmi et\,al.\xspace \cite{anselmi_unsupervised_2016}: Given a group $G$ of transformations, a set of template images $t^k$, and all possible transformed templates $gt^k$ ($g \in G$), one can compute a transform-invariant signature for an arbitrary image $I$ with the help of scalar products $\langle I, gt^k\rangle$. For a biological implementation of this idea the authors propose that each $gt^k$ is represented by one \enquote{simple cell} with the appropriate synaptic connections to the pixels of the input image to effectively compute the scalar product. They suggest that this network structure may emerge during visual experience and based on the time continuity assumption (see above). 
Yet the question remains how a one-dimensional set of temporally consecutive observations can be sufficient to learn the large number of all possible template transformations when the group $G$ is multidimensional, and how robust the image recognition is in situations where $G$ is only partially represented in the template set. \subsection{Predictions} \label{sec:discussion:predictions} The proposed theory can be tested in experiment: The ability of an organism to apply some invariance transformation to a given perceptual task should depend on its past exposure to stimuli with the corresponding correlation structure. The following paragraphs outline experiments to assess this prediction. Assume that two groups A and B of animals are reared in darkness except for regular visual training cycles during which they are exposed to strictly controlled visual stimuli. The latter are a set of computer-generated videos which do not contain any time-continuous rotations and which are carefully crafted such that feature correlations are strongly anisotropic (see \Cref{fig:anisotropic} for an example). While group A is shown those unaltered videos, group B watches every video rotated by a different angle which is chosen at random but constant for the duration of the respective video. The feature correlations perceived by group B over many videos are thus isotropic. Lastly, the abilities of all subjects in recognizing rotated visual stimuli are tested. If rotational invariance were \enquote{hard-coded} in the brain and independent of experience, both groups should complete the final recognition task equally well. Yet if invariances were learned by observation of time-continuous transformations, neither group should be able to perform well at the task. 
Finally, the concept presented in this article makes the very specific prediction that group B should perform significantly better than group A, because only group B received perceptual input with rotationally invariant correlation structure. \begin{figure}[tb] \centering \includegraphics[width=0.7\textwidth]{anisotropic} \caption{% Example of an image with strongly anisotropic correlation structure. Long straight lines, which are the predominant source of strong correlations between collinear features, are preferentially oriented in the horizontal direction. } \label{fig:anisotropic} \end{figure} Similar experiments can be performed for the auditory modality: The present theory predicts that the ease with which an animal can relate two frequency intervals or melodies depends on its auditory experience during development. Assume that an animal is reared under conditions where all sounds are modified such that the natural correlation structure (\Cref{fig:correlations:frequ}) is replaced by a distorted one. This should have a predictable effect on how quickly the subject can learn equivalence between pairs of auditory stimuli which are connected either by the natural transformation (i.\,e.\xspace, multiplication of all frequencies by a constant) or the transformation which corresponds to the distorted correlation structure. \section{Conclusion} This article established a unified framework describing how the brain might learn a wide range of (not necessarily time-continuous) invariance transformations in multiple sensory modalities without supervision or hardwired domain-specific assumptions. The proposal explains several seemingly unrelated facts about human perception, e.\,g.\xspace the possibility to learn transformations and apply them to new objects or the invariance of musical perception under a change of key, and it makes specific predictions which can be tested in experiments. 
It is consistent with many experimental findings and it is based entirely on basic, biologically plausible mechanisms for the formation of synaptic connectivity. Depending on the read-out mechanism for the symmetries of the concurrence graph, cf.\xspace \Cref{sec:discussion:readout}, the concept may lay the basis for an understanding of abstraction in cognitive processes, i.\,e.\xspace the simultaneous classification of a stimulus and its localization in some abstract space. In order to further solidify the concept, potential read-out mechanisms need to be investigated in more detail. This includes software simulations which may also open the door for new types of brain-inspired artificial intelligence algorithms. \section*{Acknowledgments} It is my pleasure to thank Alexander V. Hopp, Robert Klassert, Raul Mure\textcommabelow{s}an, Aleksandar Vu\v{c}kovi\'c, and Mathias Winkel for helpful suggestions which have improved this paper. \section*{Appendix} \label{appendix} In this appendix we show that an invariance of the probability distribution ${\Psi: \{0;1\}^n \rightarrow [0;1]}$ gives rise to equivalences between its marginal distributions. We call $\Psi_\mu$ the marginal distribution of the features belonging to some index set $\mu = \{\mu_1,\dots,\mu_m\}$, i.\,e.\xspace \begin{equation} \label{eq:t1} \Psi_\mu(x_{\mu_1},\dots, x_{\mu_m}) = \sum_{\{x_j:\,j \not\in\mu\}} \Psi(x_1, \dots, x_n) \end{equation} is the projection of $\Psi$ to the coordinate axes determined by $\mu$. The sum runs over all the $x_j$ which are not selected by the index set $\mu$. The constant $m$ stands for the dimension of the space onto which $\Psi$ is projected, with $m=2$ being the most important case for the main text. 
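Before the formal argument, the claimed equivalence of marginals can be checked numerically. The following Python sketch is illustrative only: the choice $n=4$, the cyclic permutation $\tau$, and the orbit-averaging used to manufacture a $\tau$-invariant $\Psi$ are all our own assumptions.

```python
import itertools
import random

random.seed(0)
n = 4
states = list(itertools.product((0, 1), repeat=n))

def tau(x):
    # An assumed invariance: cyclic permutation of the coordinate axes.
    return x[1:] + x[:1]

# Random distribution on {0,1}^n, made tau-invariant by averaging each
# value over its tau-orbit, so that Psi(x) = Psi(tau(x)) holds exactly.
raw = {x: random.random() for x in states}
total = sum(raw.values())
psi = {}
for x in states:
    orbit, y = [x], tau(x)
    while y != x:
        orbit.append(y)
        y = tau(y)
    psi[x] = sum(raw[z] / total for z in orbit) / len(orbit)

def marginal(p, mu):
    """Marginal distribution of the features indexed by mu."""
    m = {}
    for x, v in p.items():
        key = tuple(x[i] for i in mu)
        m[key] = m.get(key, 0.0) + v
    return m

# The invariance translates into an equivalence of marginal distributions:
mu, tau_mu = (0, 2), (1, 3)   # tau(mu) under the cyclic shift
m1, m2 = marginal(psi, mu), marginal(psi, tau_mu)
assert all(abs(m1[k] - m2[k]) < 1e-12 for k in m1)
```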
We assume that $\Psi$ is invariant under a transformation $T$, i.\,e.\xspace $\Psi(x) = \Psi(Tx)$, and that $T$ can be expressed as a permutation $\tau$ of the coordinate axes, i.\,e.\xspace \begin{equation}\label{eq:tt} \Psi(x_1, \dots, x_n) = \Psi(x_{\tau(1)}, \dots, x_{\tau(n)}). \end{equation} Then we can show the equivalence between the transformed marginal distributions $\Psi_\mu$ and $\Psi_{\tau(\mu)}$, where $\tau(\mu)$ simply stands for $\{\tau(\mu_1),\dots,\tau(\mu_m)\}$, starting with \begin{eqnarray*} \Psi_{\tau(\mu)}(x_{\tau(\mu_1)},\dots, x_{\tau(\mu_m)}) &\stackrel{\text{(\ref{eq:t1})}}{=}& \sum_{\{x_j:\,j \not\in\tau(\mu)\}} \Psi(x_1, \dots, x_n) \\ &\stackrel{\text{(\ref{eq:tt})}}{=}& \sum_{\{x_j:\,j \not\in\tau(\mu)\}} \Psi(x_{\tau(1)}, \dots, x_{\tau(n)}). \end{eqnarray*} By a re-labeling $x_{\tau(k)} \rightarrow y_k$ of the coordinate axes this can be written as \begin{equation} \Psi_{\tau(\mu)}(y_{\mu_1},\dots, y_{\mu_m}) = \sum_{\{y_{\tau^{-1}(j)}:\, j \not\in \tau(\mu)\}} \Psi(y_1, \dots, y_n). \end{equation} Finally, replacing $\tau^{-1}(j)$ by $j'$ we see how the invariance $T$ translates into an equivalence between different marginal distributions: \begin{equation} \Psi_{\tau(\mu)}(y_{\mu_1},\dots, y_{\mu_m}) = \sum_{\{y_{j'}:\, j' \not\in \mu\}} \Psi(y_1, \dots, y_n) = \Psi_\mu(y_{\mu_1},\dots, y_{\mu_m}). \end{equation} \printbibliography \end{document}
\section{Introduction} Let $G=(V,E)$ be a bipartite planar graph that allows a perfect matching. Assume that $G$ is embedded in the plane. An \textit{elementary cycle} of $G$ is a cycle that encircles a single region $R$ different from the outer region $R^*$. Throughout this paper, we identify an elementary cycle with the region it encircles as well as with its set of vertices or edges. A \textit{tiling} of $G$ is a partition of the vertex set $V$ into disjoint blocks of the following two types: \begin{itemize} \item[(1)] an edge $\{x,y\}$ of $G$; or \item[(2)] an elementary cycle $R$ (the set of vertices of $R$). \end{itemize} The set of all tilings of $G$ forms a cubical complex $\mathcal{C}(G)$ (called the \textit{cubical matching complex}) defined by Ehrenborg in \cite{EhrCUB}. Note that $\mathcal{C}(G)$ depends not only on $G$, but also on the choice of the embedding of that graph in the plane. A face $F$ of $\mathcal{C}(G)$ has the form $F=M_F \cup C_F=(M_F,C_F),$ where $C_F$ is a collection $C_F=\{R_1,R_2,\ldots,R_t\}$ of vertex-disjoint elementary cycles of $G$, and $M_F$ is a perfect matching on $G\setminus \big(R_1\cup R_2 \cup \cdots\cup R_t\big)$. The dimension of $F$ is $|C_F|$, and the vertices of $\mathcal{C}(G)$ are all perfect matchings of $G$. All tilings of $G$ covered by $F=(M_F,C_F)$ can be obtained by deleting an elementary cycle $R$ from $C_F$, and adding every other edge of $R$ into $M_F$ (there are two possibilities to do this). Therefore, for two faces $F_1=(M_{F_1}, C_{F_1})$ and $F_2=(M_{F_2}, C_{F_2})$, we have that \begin{equation}\label{E:facerelation} \big(F_1\subset F_2\big) \Longleftrightarrow \Big(C_{F_1}\subset C_{F_2}\textrm{ and } M_{F_1}\supset M_{F_2}\Big). \end{equation} Let $G^\circ$ denote the weak dual graph of a planar graph $G$. The vertices of $G^\circ$ are all bounded regions of $G$, and two regions that share a common edge are adjacent in $G^\circ$. 
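For intuition, the vertices of $\mathcal{C}(G)$ (the perfect matchings of $G$) can be enumerated by brute force on small examples. The sketch below is our own illustration, not taken from \cite{EhrCUB}; it treats a hexagon, whose cubical matching complex is a single $1$-cube: two perfect matchings joined by the tiling whose only block is the hexagon itself.

```python
import itertools

def perfect_matchings(num_vertices, edges):
    """All perfect matchings, by checking every (n/2)-subset of edges."""
    found = []
    for subset in itertools.combinations(edges, num_vertices // 2):
        covered = {v for e in subset for v in e}
        if len(covered) == num_vertices:
            found.append(frozenset(subset))
    return found

# Hexagon C6: a single elementary cycle bounding one region.
hexagon = [(i, (i + 1) % 6) for i in range(6)]
matchings = perfect_matchings(6, hexagon)

# The two alternating edge sets are the two vertices of C(G); together
# with the tiling that uses the whole hexagon as a cycle-block they
# form one 1-dimensional cube (a segment).
assert len(matchings) == 2
```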
The \textit{independence complex} of a graph $H$ is a simplicial complex $I(H)$ whose faces are the independent subsets of vertices of $H$. Note that for any face $F= (M_F,C_F)$ of $\mathcal{C}(G)$, the set $C_F$ is an independent set of vertices of $G^\circ$, i.e., $C_F$ is a face of $I(G^\circ)$. At first sight, the complex $\mathcal{C}(G)$ appears to be closely related to the independence complex $I(G^\circ)$ of its weak dual graph. \begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale=.96]{center} \draw (0,0) -- (1,0) -- (1,1) -- (0,1) -- (0,0); \draw (2,0) -- (3,0) -- (3,1) -- (2,1) -- (2,0); \draw (1,1) -- (1.5,1.7) -- (2,1);\draw (1,0) -- (1.5,-0.7) -- (2,0); \draw [fill] (0,0) circle [radius=0.051];\draw [fill] (0,1) circle [radius=0.051]; \draw [fill] (1,0) circle [radius=0.051]; \draw [fill] (1,1) circle [radius=0.051]; \draw [fill] (2,0) circle [radius=0.051];\draw [fill] (2,1) circle [radius=0.051]; \draw [fill] (3,0) circle [radius=0.051]; \draw [fill] (3,1) circle [radius=0.051]; \draw [fill] (1.5,1.7) circle [radius=0.051];\draw [fill] (1.5,-0.7) circle [radius=0.051]; \node at (.5,.5) {A};\node at (1.5,.5) {B};\node at (2.5,.5) {C}; \node at (1.7,-1.2) {$G_1$};\node at (5.6,-1.2) {$G_2$};\node at (10,-1.2) {$G_3$}; \node at (1.7,-3.2) {$\mathcal{C}(G_1)$};\node at (6.68,-3.2) {$\mathcal{C}(G_2)$};\node at (10.7,-3.2) {$\mathcal{C}(G_3)$}; \draw (4,0) -- (5,0) -- (5,1) -- (4,1) -- (4,0); \draw (6,0) -- (7,0) -- (7,1) -- (6,1) -- (6,0); \draw (5,1) -- (6,1);\draw (5,0) -- (5.33,-0.7) -- (5.66,-0.7)--(6,0); \draw [fill] (4,0) circle [radius=0.051];\draw [fill] (4,1) circle [radius=0.051]; \draw [fill] (5,0) circle [radius=0.051]; \draw [fill] (5,1) circle [radius=0.051]; \draw [fill] (6,0) circle [radius=0.051];\draw [fill] (6,1) circle [radius=0.051]; \draw [fill] (7,0) circle [radius=0.051]; \draw [fill] (7,1) circle [radius=0.051]; \draw [fill] (5.33,-.7) circle [radius=0.051];\draw [fill] (5.66,-0.7) circle [radius=0.051]; \node at (4.5,.5) {A};\node at 
(5.5,.5) {B};\node at (6.5,.5) {C}; \draw (8,0.5) -- (8.7,1.2) -- (9.4,0.5) -- (8.7,-0.2) -- (8,0.5); \draw (8.7,1.2) -- (10.7,1.2); \draw (10.7,-0.2)--(8.7,-0.2); \draw (10,0.5) -- (10.7,1.2) -- (11.4,0.5) -- (10.7,-0.2) -- (10,0.5); \draw [fill] (11.4,0.5) circle [radius=0.051];\draw [fill] (10.7,-0.2) circle [radius=0.051]; \draw [fill] (8,0.5) circle [radius=0.051];\draw [fill] (8.7,1.2) circle [radius=0.051]; \draw [fill] (9.4,0.5) circle [radius=0.051]; \draw [fill] (8.7,-0.2) circle [radius=0.051]; \draw [fill] (10,0.5) circle [radius=0.051];\draw [fill] (10.7,1.2) circle [radius=0.051]; \node at (8.75,.5) {$A$};\node at (9.65,.25) {$B$};\node at (10.75,.5) {$C$}; \draw [ultra thick] (0.2,-3) -- (1.,-4)--(2.,-4)--(2.8,-3); \draw [ultra thick] [fill=gray!19] (4.5,-2.5) rectangle (6,-4);\draw [ultra thick](6,-4)--(7.2,-4); \node at (5.3,-3.3) {$AC$};\node at (6.6,-4.27) {$B$}; \node at (0.34,-3.77) {$A$};\node at (1.5,-4.27) {$B$};\node at (2.6,-3.77) {$C$}; \draw [ultra thick] [fill=gray!19] (8.5,-2.5) rectangle (10,-4);\node at (9.3,-3.3) {$AC$}; \end{tikzpicture} \caption{The three graphs with the same weak dual, but different cubical matching complexes.} \label{F:istiIdrugiC} \end{center} \end{figure} However, Figure \ref{F:istiIdrugiC} shows three graphs with the same weak dual but different cubical matching complexes. The facets of the complexes in Figure \ref{F:istiIdrugiC} are labeled by corresponding subsets of pairwise disjoint elementary regions. \begin{exm}Let $\mathcal{L}_n$ and $\mathcal{C}_n$ denote the independence complexes of $P_n$ and $C_n$ (the path and cycle with $n$ vertices) respectively. The homotopy types of these complexes are determined by Kozlov in \cite{Koz}: $$\mathcal{L}_n \simeq\left\{ \begin{array}{ll} \textrm{a point}, & \hbox{if $n=3k+1$;} \\ S^{\lfloor\frac{n-1}{3}\rfloor}, & \hbox{otherwise.} \\ \end{array}\right. \qquad \mathcal{C}_n \simeq\left\{ \begin{array}{ll} S^{k-1}, & \hbox{if $n=3k\pm 1$;} \\ S^{k-1}\vee S^{k-1}, & \hbox{if $n=3k$.} \\ \end{array}\right. $$ We will use these complexes later, see Corollary \ref{C:allLorC} and Remark \ref{R:OQ}. An interested reader can find more details about combinatorial and topological properties of $\mathcal{L}_n$ and $\mathcal{C}_n$ (and about independence complexes in general) in \cite{Eh-He}, \cite{Eng} and \cite{Jonss}. \end{exm} There are some cubical complexes that cannot be realized as subcomplexes of a $d$-cube $C^d=[0,1]^d$, see Chapter $4$ of \cite{tortop}. \begin{prop}\label{P:cord} Let $G$ be a bipartite planar graph that has a perfect matching. If $G$ has $d$ elementary regions, then its cubical matching complex $\mathcal{C}(G)$ can be embedded into $C^d$. \end{prop} \begin{proof} We use an idea from \cite{Propp} to describe the coordinates of vertices of $\mathcal{C}(G)$ explicitly. Let $R_1,R_2,\ldots,R_d$ be a fixed linear order of elementary regions of $G$. We choose an arbitrary perfect matching $M_0$ of $G$ (a vertex of $\mathcal{C}(G)$) to be the origin $\mathbf{0}=(0,0,\ldots,0)$ in $\mathbb{R}^d$. For another vertex $M$ of $\mathcal{C}(G)$, we consider the symmetric difference $ M\triangle M_0$. Note that $ M \triangle M_0$ is a disjoint union of cycles. For a given perfect matching $M$ of $G$, we assign the vertex $V_M=(x_1,\ldots ,x_d)$ of $C^d$, where $$x_i=\left\{ \begin{array}{ll} 1, & \hbox{if $R_i$ is contained in an odd number of cycles of $ M\triangle M_0$;} \\ 0, & \hbox{otherwise.} \\ \end{array}\right. $$ If $M'$ and $M''$ are two perfect matchings of $G$ such that $M'\triangle M''=R_j$ (meaning that these two matchings differ just on an elementary region $R_j$), then their corresponding vertices $V_{M'}$ and $V_{M''}$ of $C^d$ differ only at the $j$-th coordinate. Therefore, the face $F=(M_F,C_F)$ is embedded in $C^d$ as the convex hull of its $2^{|C(F)|}$ vertices. 
\end{proof} \section{The local structure of $\mathcal{C}(G)$} The \textit{star} of a face $F$ in a cubical complex $\mathcal{C}$ is the set of all faces of $\mathcal{C}$ that contain $F$: $$star(F)=\{F'\in \mathcal{C}: F\subset F'\}.$$ The \textit{link} of a vertex $v$ in a cubical complex $\mathcal{C}$ is the simplicial complex $link_\mathcal{C}(v)$ that can be realized in $\mathcal{C}$ as a ``small sphere'' around the vertex $v$. More formally, the vertices of $link_\mathcal{C}(v)$ are the edges of $\mathcal{C}$ containing $v$. A subset of vertices of $link_\mathcal{C}(v)$ is a face of $link_\mathcal{C}(v)$ if and only if the corresponding edges belong to the same face of $\mathcal{C}$. The \textit{link} of a face $F$ in a cubical complex $\mathcal{C}$ is defined in a similar way. The set of vertices of $link_\mathcal{C}(F)$ is $$\{F'\in \mathcal{C}: F\subset F'\textrm{ and }dim\, F'=1+dim\, F\},$$ and a subset $A$ of the set of vertices is a face of $link_\mathcal{C}(F)$ if and only if all elements of $A$ are contained in the same face of $\mathcal{C}$. Ehrenborg investigated the links of the cubical complexes associated to tilings of a region by dominoes or lozenges. Here we describe the links in the cubical matching complex $\mathcal{C}(G)$ for any bipartite planar graph $G$. For a face $F=(M_F,C_F)$ of $\mathcal{C}(G)$, let $\mathcal{R}_F$ denote the set of all elementary regions of $G$ for which every second edge is contained in $M_F$. Further, let $G_F$ denote the subgraph of the weak dual graph $G^\circ$ spanned by the regions from $\mathcal{R}_F$. From the definition of the link in a cubical complex and (\ref{E:facerelation}), we obtain the next statement. 
\begin{prop} For any face $F=(M_F,C_F)$ of $\mathcal{C}(G)$ we have that $$link_\mathcal{C}(F)\cong I(G_F).$$ \end{prop} The above proposition explains the appearance of the complexes $\mathcal{L}_n$ and $\mathcal{C}_n$ as links in cubical matching complexes, see Theorem 3.3 and Section 4 in \cite{EhrCUB}. Assume that all elementary regions of $G$ are quadrilaterals. In that case, for any face $F$ of $\mathcal{C}(G)$, the degree of a vertex in $G_F$ is at most two. Therefore, $G_F$ is a union of paths and cycles. \begin{cor}\label{C:allLorC} If all elementary regions of $G$ are quadrilaterals, then $link_\mathcal{C}(F)$ is a join of complexes $\mathcal{L}_p$ and $\mathcal{C}_{2q}$. \end{cor} \begin{thm} Let $G$ be a bipartite planar graph that has a perfect matching. For any face $F=(M_F,C_F)$ of $\mathcal{C}(G)$ the graph $G_F$ is bipartite. \end{thm} \begin{proof} Assume that $G_F$ contains an odd cycle $R_1,R_2,\ldots, R_{2m+1}$. Recall that $R_i$ is an elementary region of $G$ and that every second edge of $R_i$ is contained in $M_F$. Two neighboring regions $R_i$ and $R_{i+1}$ have to share an odd number of edges; the first and the last of their common edges belong to $M_F$. Therefore, for each region $R_i$, there is an odd number of common edges of $R_i$ and $R_{i-1}$ that belong to $M_F$. Obviously, the same holds for $R_i$ and $R_{i+1}$. So, we can conclude that there is an odd number of edges of $R_i$ that lie between $R_i \cap R_{i-1}$ and $R_i \cap R_{i+1}$ (the first and the last one of these edges are not in $M_F$). The union of all of these edges (for all regions $R_i$) is an odd cycle in $G$, which is a contradiction. \end{proof} Barmak proved in \cite{Barmak} (see also \cite{Na-Re}) that the independence complexes of bipartite graphs are suspensions, up to homotopy. This implies the next result. \begin{cor}\label{Cor:link} All links in $\mathcal{C}(G)$ are homotopy equivalent to suspensions. 
Therefore, the link of any face in $\mathcal{C}(G)$ has at most two connected components. \end{cor} For any simplicial complex $K$ there exists a bipartite graph $G$ such that the independence complex of $G$ is homotopy equivalent to the suspension over $K$, see \cite{Barmak}. Skwarski proved in \cite{Skv} (see also \cite{Barmak}) that there exists a planar graph $G$ whose independence complex is homotopy equivalent to an iterated suspension of $K$. We prove that the links of faces in cubical matching complexes are independence complexes of bipartite planar graphs. What can be said about homotopy types of these complexes? \begin{rem}\label{R:OQ} There is a natural question, posed by Ehrenborg in \cite{EhrCUB}: \textit{For what graphs $G$ would the cubical matching complex $\mathcal{C}(G)$ be pure, shellable, non-pure shellable?} The complexes $\mathcal{L}_n$ are non-pure for $n>4$, and the complexes $\mathcal{C}_n$ are non-shellable for $n>5$. Therefore, these complexes can be used to show that the cubical matching complex of a concrete graph is non-pure or non-shellable. \end{rem} \section{Collapsibility and contractibility of cubical matching complexes} The next theorem is the main result in \cite{EhrCUB}. \begin{thm}[Theorem 1.2 in \cite{EhrCUB}]\label{T:EHr} For a planar bipartite graph $G$ that has a perfect matching, the cubical matching complex $\mathcal{C}(G)$ is collapsible. 
\end{thm} The proof of the above statement is based on the next two results: \begin{itemize} \item[$(i)$](Propp, Theorem 2 in \cite{Propp}) \textit{The set of all perfect matchings of a bipartite planar graph is a distributive lattice.} \item[$(ii)$](Kalai, see in \cite{EC1}, Solution to Exercise 3.47 c) \textit{The cubical complex of a meet-distributive lattice is collapsible.} \end{itemize} \noindent Note however that Propp in his proof of $(i)$ assumed the following two additional conditions for bipartite planar graph $G$: \begin{itemize} \item[$(*)$] Graph $G$ is connected, and \item[$(**)$] Any edge of $G$ is contained in some matching of $G$ but not in others. \end{itemize} \begin{exm}\label{E:counter}The next figure shows a bipartite planar graph whose cubical matching complex is not collapsible. \begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale=.898]{center} \draw [thick] (0,0) rectangle (4,4); \draw [thick] (1,1) rectangle (3,3); \draw [fill] (0,0) circle [radius=0.111];\draw [fill] (0,4) circle [radius=0.111];\draw [fill] (4,0) circle [radius=0.111];\draw [fill] (4,4) circle [radius=0.111]; \draw [fill] (1,1) circle [radius=0.111];\draw [fill] (1,3) circle [radius=0.111];\draw [fill] (3,1) circle [radius=0.111];\draw [fill] (3,3) circle [radius=0.111]; \draw[thick] (0,0)--(1,1); \draw[thick] (3,3)--(4,4); \node at (2,-.5) {$G$}; \node at (9.7,-0.2) {$\mathcal{C}(G)$}; \draw [ultra thin] (6.6,3.5) rectangle (7.6,4.5); \draw [ultra thin] (6.9,3.8) rectangle (7.3,4.2); \draw [ultra thin] (6.6,3.5)--(6.9,3.8);\draw [ultra thin] (7.6,4.5)--(7.3,4.2); \draw [ultra thin] (11.6,3.5) rectangle (12.6,4.5); \draw [ultra thin] (11.9,3.8) rectangle (12.3,4.2); \draw [ultra thin] (11.6,3.5)--(11.9,3.8);\draw [ultra thin] (12.6,4.5)--(12.3,4.2); \draw [ultra thin] (6.6,-.5) rectangle (7.6,0.5); \draw [ultra thin] (6.9,-.2) rectangle (7.3,.2); \draw [ultra thin] (6.6,-.5)--(6.9,-0.2);\draw [ultra thin] (7.6,0.5)--(7.3,0.2); \draw [ultra thin] (11.6,-.5) 
rectangle (12.6,0.5); \draw [ultra thin] (11.9,-.2) rectangle (12.3,.2); \draw [ultra thin] (11.6,-.5)--(11.9,-0.2);\draw [ultra thin] (12.6,0.5)--(12.3,0.2); \draw [ultra thick](6.6,3.5)--(6.6,4.5);\draw [ultra thick](7.6,3.5)--(7.6,4.5); \draw [ultra thick](7.3,3.8)--(7.3,4.2);\draw [ultra thick](6.9,3.8)--(6.9,4.2); \draw [ultra thick](11.6,3.5)--(11.6,4.5);\draw [ultra thick](12.6,3.5)--(12.6,4.5); \draw [ultra thick](11.9,-.2)--(12.3,-.2);\draw [ultra thick](11.9,.2)--(12.3,.2); \draw [ultra thick](6.6,-.5)--(7.6,-.5);\draw [ultra thick](6.6,0.5)--(7.6,0.5); \draw [ultra thick](12.3,3.8)--(11.9,3.8);\draw [ultra thick](11.9,4.2)--(12.3,4.2); \draw [ultra thick](11.6,-.5)--(12.6,-.5);\draw [ultra thick](11.6,0.5)--(12.6,0.5);\draw [ultra thick](7.3,.2)--(7.3,-.2);\draw [ultra thick](6.9,.2)--(6.9,-.2); \draw [ultra thick] (8,0.5)--(11,0.5); \draw [ultra thick] (8,3.5) --(11,3.5); \end{tikzpicture} \caption{A bipartite planar graph $G$ for which $\mathcal{C}(G)$ is not collapsible.} \label{F:nijekol} \end{center} \end{figure} \end{exm} Also, the Jockusch example (page 41 in \cite{Propp}: a bipartite planar graph with $20$ edges, of which just $12$ can be used in a perfect matching) describes a graph $G$ whose cubical matching complex is a disjoint union of four segments. The edges that do not appear in any perfect matching of a graph $G$ (the forbidden edges) can be deleted. Also, if the edge $xy$ is a forced edge ($xy$ appears in all perfect matchings of $G$), then we may consider the graph $G-\{x,y\}$. 
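The deletions described above can be mechanized on small examples. The brute-force Python sketch below is our own illustration (feasible only for small graphs, and not part of \cite{Propp}); it classifies the edges of a graph with a perfect matching into forced, forbidden, and free edges.

```python
import itertools

def perfect_matchings(num_vertices, edges):
    """All perfect matchings, found by brute force over edge subsets."""
    found = []
    for subset in itertools.combinations(edges, num_vertices // 2):
        if len({v for e in subset for v in e}) == num_vertices:
            found.append(set(subset))
    return found

def classify_edges(num_vertices, edges):
    """Split edges into forced (in every perfect matching),
    forbidden (in no perfect matching), and free (in some but not all)."""
    matchings = perfect_matchings(num_vertices, edges)
    forced = [e for e in edges if all(e in m for m in matchings)]
    forbidden = [e for e in edges if not any(e in m for m in matchings)]
    free = [e for e in edges if e not in forced and e not in forbidden]
    return forced, forbidden, free

# A path on 4 vertices: the middle edge is forbidden, the outer two forced.
forced, forbidden, free = classify_edges(4, [(0, 1), (1, 2), (2, 3)])
assert forced == [(0, 1), (2, 3)] and forbidden == [(1, 2)] and free == []
```

Deleting the forbidden edges and contracting away the endpoints of the forced ones yields the reduced graph $G'$ discussed below.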
\begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale=.7]{center} \draw [fill=black] (0,.5) circle [radius=0.039]; \draw [fill=black] (-0.5,4) circle [radius=0.039]; \draw [fill=black] (1,4.8) circle [radius=0.039];\draw [fill=black] (2,0) circle [radius=0.039];\draw [fill=black] (2.6,5) circle [radius=0.039]; \draw [fill=black] (4,.5) circle [radius=0.039]; \draw [fill=black] (4,4.7) circle [radius=0.039]; \draw [fill=black] (5,3) circle [radius=0.039]; \draw[thick] (0,.5)--(-.5,4)--(1,4.8)--(2.6,5)--(4,4.7)--(5,3)--(4,.5)--(2,0)--(0,.5); \draw[dashed](2,0)--(1,4.8); \node at (1.8,2.8) {$e$}; \node at (7.8,2.8) {$e$};\node at (13.8,2.8) {$e$}; \draw [fill=black] (6,.5) circle [radius=0.039]; \draw [fill=black] (5.5,4) circle [radius=0.039]; \draw [fill=black] (7,4.8) circle [radius=0.039];\draw [fill=black] (8,0) circle [radius=0.039];\draw [fill=black] (8.6,5) circle [radius=0.039]; \draw [fill=black] (10,.5) circle [radius=0.039]; \draw [fill=black] (10,4.7) circle [radius=0.039]; \draw [fill=black] (11,3) circle [radius=0.039]; \draw[thick] (6,.5)--(5.5,4); \draw[dashed](5.5,4)--(7,4.8); \draw[thick] (7,4.8)--(8.6,5); \draw[dashed](8.6,5)--(10,4.7); \draw[thick](10,4.7)--(11,3);\draw[dashed] (11,3)--(10,.5); \draw[thick](10,.5)--(8,0);\draw[dashed] (8,0)--(6,.5); \draw[dashed](8,0)--(7,4.8); \draw [fill=black] (12,.5) circle [radius=0.039]; \draw [fill=black] (11.5,4) circle [radius=0.039]; \draw [fill=black] (13,4.8) circle [radius=0.039];\draw [fill=black] (14,0) circle [radius=0.039];\draw [fill=black] (14.6,5) circle [radius=0.039]; \draw [fill=black] (16,.5) circle [radius=0.039]; \draw [fill=black] (16,4.7) circle [radius=0.039]; \draw [fill=black] (17,3) circle [radius=0.039]; \draw[thick] (12,.5)--(11.5,4); \draw[dashed](11.5,4)--(13,4.8); \draw[dashed] (13,4.8)--(14.6,5); \draw[thick](14.6,5)--(16,4.7); \draw[dashed](16,4.7)--(17,3);\draw[thick] (17,3)--(16,.5); \draw[dashed](16,.5)--(14,0);\draw[dashed] (14,0)--(12,.5); 
\draw[thick](14,0)--(13,4.8); \end{tikzpicture} \caption{If a new region can be included in a tiling of $G-e$, then $e$ is not forbidden.} \label{F:GG'} \end{center} \end{figure} \begin{rem}\label{R:GsameG'} Let $e$ denote a forbidden edge in $G$ and let $G'=G-e$. The possible new elementary region of $G'$ that appears after we delete $e$ cannot be included in a tiling of $G'$. Otherwise, we could find a perfect matching of $G$ that contains $e$, see Figure \ref{F:GG'}. In a similar way we conclude that the new regions that appear after deleting a forced edge cannot be included in a tiling of $G'$. \end{rem} Let $G'$ denote the graph obtained from $G$ after all deletions. Unfortunately, this new graph (after deleting all forced and forbidden edges) may be non-connected. If $G'$ is connected, then the collapsibility of $\mathcal{C}(G')$ follows from Ehrenborg's proof. Also, if $G'$ is non-connected, and all of its connected components are separated (there is no component of $G'$ that is contained in an elementary region of another component), then $\mathcal{C}(G')$ is collapsible as a product of collapsible complexes. By using Remark \ref{R:GsameG'}, we can establish an obvious bijection between tilings of $G'$ and tilings of $G$ (we just add all forced edges). Therefore, Theorem \ref{T:EHr} holds if $G'$ is connected or if all of its connected components are separated. However, Theorem \ref{T:EHr} fails if $G'$ has two different connected components $G_1$ and $G_2$ such that $G_1$ is contained in an elementary region $R$ of $G_2$, see Example \ref{E:counter}. In that case we have that $$\mathcal{C}(G')=\mathcal{C}(G_1)\times\left(\mathcal{C}(G_2) \setminus\{R\}\right),$$ and $\mathcal{C}(G')$ is a union of collapsible complexes. Here $\mathcal{C}(G_2) \setminus\{R\}$ denotes the cubical complex obtained from $\mathcal{C}(G_2)$ by deleting all tilings (faces) that contain $R$ as an elementary region. 
\begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale=.7]{center} \draw [thick] (0,4.5)--(2,1.7)--(-2,1.7)--(0,4.5); \draw [ thick] (0,6)--(3.3,.8)--(-3.3,.8)--(0,6); \draw [ thick] (0,6)--(0,4.5); \draw [ thick] (3.3,.8)--(2,1.7); \draw [ thick] (-3.3,.8)--(-2,1.7); \draw [fill] (0,4.5) circle [radius=0.111]; \draw [fill] (2,1.7) circle [radius=0.111]; \draw [fill] (-2,1.7) circle [radius=0.111]; \draw [fill] (0,6) circle [radius=0.111]; \draw [fill] (3.3,.8) circle [radius=0.111]; \draw [fill] (-3.3,.8) circle [radius=0.111]; \draw [fill] (9,4.5) circle [radius=0.111]; \draw [fill] (11,1.7) circle [radius=0.111]; \draw [fill] (7,1.7) circle [radius=0.111]; \draw [fill] (9,2.93) circle [radius=0.111]; \draw [ thick] (9,4.5)--(11,1.7)--(7,1.7)--(9,4.5); \draw [thick] (9,2.93)--(9,4.5);\draw [ thick] (9,2.93)--(11,1.7);\draw [thick] (9,2.93)--(7,1.7); \draw [ thick] (9,4.5)--(11,1.7)--(11.6,2.2)--(9.6,5)--(9,4.5); \draw [fill] (11.6,2.2) circle [radius=0.111]; \draw [fill] (9.6,5) circle [radius=0.111]; \draw [ thick] (11,1.7)--(7,1.7)--(7,.9)--(11,.9)--(11,1.7); \draw [fill] (7,.9) circle [radius=0.111]; \draw [fill] (11,.9) circle [radius=0.111]; \draw [ thick] (7,1.7)--(9,4.5)--(8.45,5.1)--(6.45,2.3)--(7,1.7); \draw [fill] (8.45,5.1) circle [radius=0.111]; \draw [fill] (6.45,2.3) circle [radius=0.111]; \draw [thick] (0,-2)--(-2,-5);\draw [thick] (0,-2)--(0,-5); \draw [ thick] (0,-2)--(2,-5); \draw [ultra thin] (-1,-1)--(0,-1.8)--(-2,-1.8)--(-1,-1); \draw [ultra thin] (-1,-1.3)--(-0.5,-1.6)--(-1.5,-1.6)--(-1,-1.3); \draw [ultra thick](-1,-1)--(-1,-1.3);\draw [ultra thick](0,-1.8)--(-0.5,-1.6);\draw [ultra thick](-2,-1.8)--(-1.5,-1.6); \draw [ultra thin] (-2.5,-5.6)--(-1.5,-6.4)--(-3.5,-6.4)--(-2.5,-5.6); \draw [ultra thin] (-2.5,-5.9)--(-2,-6.2)--(-3,-6.2)--(-2.5,-5.9); \draw [ultra thick](-2.5,-5.6)--(-2.5,-5.9);\draw [ultra thick](-1.5,-6.4)--(-3.5,-6.4);\draw [ultra thick](-2,-6.2)--(-3,-6.2); \draw [ultra thin](-1.5,-6.4)--(-2,-6.2); \draw [ultra 
thin](-3.5,-6.4)--(-3,-6.2); \draw [ultra thin] (0,-5.6)--(1,-6.4)--(-1,-6.4)--(0,-5.6); \draw [ultra thin] (0,-5.9)--(.5,-6.2)--(-.5,-6.2)--(0,-5.9); \draw [ultra thick](1,-6.4)--(.5,-6.2); \draw [ultra thick](0,-5.6)--(-1,-6.4); \draw [ultra thick](-.5,-6.2)--(0,-5.9); \draw [ultra thin] (0,-5.6)--(0,-5.9);\draw [ultra thin] (-1,-6.4)--(-.5,-6.2); \draw [ultra thin] (2.5,-5.6)--(3.5,-6.4)--(1.5,-6.4)--(2.5,-5.6); \draw [ultra thin] (2.5,-5.9)--(3,-6.2)--(2,-6.2)--(2.5,-5.9); \draw [ultra thick](1.5,-6.4)--(2,-6.2); \draw [ultra thick](3.5,-6.4)--(2.5,-5.6); \draw [ultra thick](2.5,-5.9)--(3,-6.2); \draw [ultra thin] (2.5,-5.6)--(2.5,-5.9); \draw [ultra thin] (3.5,-6.4)--(3,-6.2); \draw [thick] (9.7,-2)--(9.7,-6); \draw [thick] (12.7,-2)--(12.7,-6); \draw [thick] (6.8,-2)--(6.8,-6); \draw [ultra thin] (5.5,-3)--(6.5,-4)--(4.5,-4)--(5.5,-3); \draw [ultra thin] (5.5,-3.67)--(5.5,-3);\draw [ultra thin] (5.5,-3.67)--(6.5,-4);\draw [ultra thick] (5.5,-3.67)--(4.5,-4); \draw [ultra thick] (5.5,-3)--(6.5,-4)--(6.7,-3.88)--(5.67,-2.8)--(5.5,-3); \draw [ultra thin] (5.5,-3)--(4.5,-4)--(4.33,-3.88)--(5.33,-2.8)--(5.5,-3); \draw [ultra thin] (6.5,-4)--(4.5,-4)--(4.5,-4.3)--(6.5,-4.3)--(6.5,-4); \draw [ultra thick] (4.33,-3.88)--(5.33,-2.8); \draw [ultra thick] (4.5,-4.3)--(6.5,-4.3); \draw [ultra thin] (8.5,-3)--(9.5,-4)--(7.5,-4)--(8.5,-3); \draw [ultra thin] (8.5,-3.67)--(8.5,-3);\draw [ultra thick] (8.5,-3.67)--(9.5,-4);\draw [ultra thin] (8.5,-3.67)--(7.5,-4); \draw [ultra thin] (8.5,-3)--(9.5,-4)--(9.7,-3.88)--(8.67,-2.8)--(8.5,-3); \draw [ultra thick] (8.5,-3)--(7.5,-4)--(7.33,-3.88)--(8.33,-2.8)--(8.5,-3); \draw [ultra thin] (9.5,-4)--(7.5,-4)--(7.5,-4.3)--(9.5,-4.3)--(9.5,-4); \draw [ultra thick] (9.7,-3.88)--(8.67,-2.8); \draw [ultra thick] (7.5,-4.3)--(9.5,-4.3); \draw [ultra thin] (11.5,-3)--(12.5,-4)--(10.5,-4)--(11.5,-3); \draw [ultra thick] (11.5,-3)--(11.5,-3.67); \draw [ultra thin] (11.5,-3.67)--(11.5,-3);\draw [ultra thin] (11.5,-3.67)--(12.5,-4);\draw 
[ultra thin] (11.5,-3.67)--(10.5,-4); \draw [ultra thin] (11.5,-3)--(12.5,-4)--(12.7,-3.88)--(11.67,-2.8)--(11.5,-3); \draw [ultra thick](12.7,-3.88)--(11.67,-2.8); \draw [ultra thick] (10.33,-3.88)--(11.33,-2.8); \draw [ultra thin] (11.5,-3)--(10.5,-4)--(10.33,-3.88)--(11.33,-2.8)--(11.5,-3); \draw [ultra thick] (12.5,-4)--(10.5,-4)--(10.5,-4.3)--(12.5,-4.3)--(12.5,-4); \end{tikzpicture} \caption{Non-bipartite graphs and their cubical matching complexes.} \label{F:primjerigener} \end{center} \end{figure} Now, we consider the cubical matching complex for all planar graphs that have a perfect matching (not necessarily bipartite). \begin{defn}Let $G$ be a planar graph that allows a perfect matching. A tiling of $G$ is a partition of the vertex set $V$ into disjoint blocks of the following two types: \begin{itemize} \item an edge $\{x,y\}$ of $G$; or \item the set of vertices $\{v_1,v_2,\ldots, v_{2m}\}$ of an even elementary cycle $R$. \end{itemize} Let $\mathcal{C}(G)$ denote the set of all tilings of $G$. Note that $\mathcal{C}(G)$ is also a cubical complex. \end{defn} \begin{exm} If $G$ is the graph of a triangular prism (embedded in the plane so that the outer region is a triangle), then $\mathcal{C}(G)$ is a union of three $1$-dimensional segments that share the same vertex, see the left side of Figure \ref{F:primjerigener}. Each segment of $\mathcal{C}(G)$ corresponds to a rectangular face of the prism. The link of the common vertex of these segments is a $0$-dimensional complex with three points. Such a situation is not possible for bipartite planar graphs, see Corollary \ref{Cor:link}. \end{exm} The next theorem describes the homotopy type of the cubical matching complex associated to a planar graph that allows a perfect matching. \begin{thm}\label{T:generalcase}Let $G$ be a planar graph that has a perfect matching. The cubical complex $\mathcal{C}(G)$ is contractible or a disjoint union of contractible complexes. 
\end{thm} This is a weaker version (we prove contractibility instead of collapsibility) of the corrected Theorem \ref{T:EHr}, with a different proof. \begin{proof} We use induction on the number of edges of $G$. Let $e=xy$ denote an edge that belongs to the outer region $R^*$. Let $R\neq R^*$ denote the elementary region that contains $e$. If $R$ is an odd region, then all tilings of $G$ can be divided into two disjoint classes: \begin{itemize} \item[(a)] The tilings of $G$ that do not use $e$. These tilings are just the tilings of $G\setminus e$. \item[(b)] The tilings of $G$ that contain $e$ as an edge in a partial matching; these correspond to the tilings of $G\setminus\{x,y\}$. \end{itemize} In that case we obtain that $\mathcal{C}(G)=\mathcal{C}(G\setminus\{x,y\})\sqcup \mathcal{C}(G\setminus e)$ is a disjoint union of contractible complexes by the inductive assumption. \\ \noindent If $R$ is an even elementary region, then some tilings of $G$ may contain $R$. Note that these tilings are not considered in (a) and (b). To describe the corresponding faces of $\mathcal{C}(G)$, we consider $G\setminus R$, the graph obtained from $G$ by deleting all vertices from $R$. Let $\mathcal{C}_e$ denote the subcomplex of $\mathcal{C}(G\setminus e)$ formed by all tilings that contain every second edge of $R$ (but do not contain $e$, obviously). Further, let $\mathcal{C}_{x,y}$ denote the subcomplex of $\mathcal{C}(G\setminus\{x,y\})$, defined by tilings that contain every second edge of $R$ (these tilings have to contain $e$). Note that both of the complexes $\mathcal{C}_e$ and $\mathcal{C}_{x,y}$ are isomorphic to $\mathcal{C}(G\setminus R)$. In that case we obtain \begin{equation}\label{E:RRforC} \mathcal{C}(G)=\mathcal{C}(G\setminus\{x,y\})\cup \mathcal{C}(G\setminus e)\cup Prism(\mathcal{C}(G\setminus R)).
\end{equation} Further, we have that $$\mathcal{C}(G\setminus e) \cap Prism(\mathcal{C}(G\setminus R))=\mathcal{C}_e \textrm{ and } \mathcal{C}(G\setminus \{x,y\}) \cap Prism(\mathcal{C}(G\setminus R))=\mathcal{C}_{x,y}.$$ The complexes on the right-hand side of (\ref{E:RRforC}) are disjoint unions of contractible complexes by the inductive hypothesis. Assume that $$ \mathcal{C}(G\setminus\{x,y\})=A_1 \sqcup A_2\sqcup\cdots \sqcup A_s \textrm{ and }\mathcal{C}_{x,y}=B_1\sqcup B_2 \sqcup \cdots \sqcup B_t,$$ where $A_i$ and $B_j$ denote the contractible components of the corresponding complexes. Obviously, each complex $B_j$ is contained in some $A_i$. Now, we need the following lemma. \begin{lem}Each connected component of $\mathcal{C}(G\setminus\{x,y\})$ contains at most one component of $\mathcal{C}_{x,y}$. \end{lem} \textit{Proof of Lemma: }Assume that a component of $\mathcal{C}(G\setminus\{x,y\})$ contains two components of $\mathcal{C}_{x,y}$. In that case, there are two vertices of $\mathcal{C}_{x,y}$ (perfect matchings of $G$ that contain $xy$) that are in different components of $\mathcal{C}_{x,y}$, but in the same component of $\mathcal{C}(G\setminus\{x,y\})$. Assume that $M'$ and $M''$ are two such vertices, chosen so that the distance between them in $\mathcal{C}(G\setminus\{x,y\})$ is minimal. Let \begin{equation}\label{E:shorty} M'=M_0 \frac{_{R_0}}{} M_1 \frac{\textrm{\,\,\,\, }}{}\ldots \frac{\textrm{\,\,\,\, }}{} M_i \frac{_{R_i}}{} M_{i+1} \frac{\textrm{\,\,\,\, }}{}\ldots \frac{\textrm{\,\,\,\, }}{}M_{n} \frac{_{R_{n}}}{}M_{n+1}=M'' \end{equation} denote the shortest path from $M'$ to $M''$ in $\mathcal{C}(G\setminus\{x,y\})$. The perfect matching $M_{i+1}$ is obtained from $M_i$ by removing the edges of $M_i$ contained in an elementary region $R_i$, and by inserting the complementary edges. In other words, we have that $M_{i+1}=M_i\triangle R_i$, for an elementary region $R_i$ contained in $\mathcal{R}_{F_i}\cap \mathcal{R}_{F_{i+1}}$.
Note that $R_0$ must be adjacent to (share a common edge with) $R$. Otherwise, both vertices $M_0$ and $M_1$ belong to the same component of $\mathcal{C}_{x,y}$, and we obtain a contradiction with the assumption that the path described in (\ref{E:shorty}) is minimal. In a similar way, we obtain that for any $i=1,2,\ldots,n$, the region $R_i$ must be adjacent to at least one of the regions $R, R_0, R_1,\ldots,R_{i-1}$. If not, we have that the perfect matching $\overline{M}=M_0\triangle R_i$ belongs to $\mathcal{C}_{x,y}$, and $\overline{M}$ and $M'$ are contained in the same component of $\mathcal{C}_{x,y}$. In that case we obtain a contradiction, because the path $$\overline{M}=\overline{M}_0 \frac{_{R_0}}{} \overline{M}_1 \frac{\textrm{\,\,\,\, }}{}\ldots \frac{\textrm{\,\,\,\, }}{} \overline{M}_{i-1} \frac{_{R_{i-1}}}{} \overline{M}_{i+1} \frac{_{R_{i+1}}}{}\ldots \frac{\textrm{\,\,\,\, }}{}\overline{M}_{n} \frac{_{R_{n}}}{}\overline{M}_{n+1}=M''$$ is shorter than (\ref{E:shorty}). Here we set $\overline{M}_{j+1}=\overline{M}_j\triangle R_j$. Let $e'$ denote a common edge of regions $R_0$ and $R$ that is contained in $M'$. Note that $e'$ is not contained in $M_1$. However, this edge is again contained in $M''$, and we conclude that the region $R_0$ has to reappear in (\ref{E:shorty}). Let $R_{i_0}=R_0$ denote the first appearance of $R_0$ in (\ref{E:shorty}) after the first step. There are the following three possible situations that enable the reappearance of $R_0$: \begin{enumerate} \item[$(a)$] All regions $R_k$ (for $k$ between $0$ and $i_0$) are disjoint from $R_0$.\\ In that case, we can omit the steps in (\ref{E:shorty}) labelled by $R_0$ and $R_{i_0}$, and obtain a shorter path between $M'$ and $M''$. \item[$(b)$] Any region that shares at least one edge with $R_0$ appears an odd number of times between $R_0$ and $R_{i_0}$. \\ This is impossible, because $R$ (which shares an edge with $R_0$) cannot appear in (\ref{E:shorty}).
\item[$(c)$] There is $t<i_0$ such that $R_t=\bar{R}$ shares an edge with $R_0$, but the fragment of the sequence (\ref{E:shorty}) between $R_0$ and $R_{i_0}$ does not contain all regions that share an edge with $R_0$. \\ Then the same region $\bar{R}$ has to appear again as $R_s$, for some $s$ such that $t<s<i_0$. Again, if all regions $R_j$ are disjoint from $\bar{R}$ (for $j=t+1,\ldots,s-1$), we can omit $R_t$ and $R_s$, and obtain a contradiction. If not, there exist indices $t'$ and $s'$ such that $t<t'<s'<s$ and $R_{t'}=R_{s'}$. We continue in the same way, and from the finiteness of the path, obtain a shorter path than (\ref{E:shorty}). \end{enumerate} \qed \textit{Continuation of Proof:} We build $\mathcal{C}(G)$ by starting with $\mathcal{C}(G\setminus e)$, which is a union of contractible complexes by the inductive assumption. Then we glue the components of $Prism(\mathcal{C}(G\setminus R))$ one by one. After that, we glue all components of $\mathcal{C}(G\setminus \{x,y\})$. At each step we are gluing two contractible complexes along a contractible subcomplex, or we just add a new contractible complex, disjoint from the previously added components. From the Gluing Lemma (see Lemma 10.3 in \cite{TM}) we obtain that $\mathcal{C}(G)$ is contractible, or a disjoint union of contractible complexes. \end{proof} \begin{rem}\label{R:whenCiscont} For a connected bipartite planar graph $G$ that satisfies the condition $(**)$, the cubical matching complex $\mathcal{C}(G)$ is collapsible, see Theorem \ref{T:EHr}. The planar graph on the right side of Figure \ref{F:primjerigener} satisfies the condition $(**)$, but the corresponding cubical complex is not collapsible: it is a union of three disjoint edges.
So, there is a natural question: \textit{Is there a property of $G$ that guarantees the collapsibility of its cubical complex $\mathcal{C}(G)$?} Obviously, if all complexes that appear on the right-hand side of (\ref{E:RRforC}) are nonempty and contractible, then $\mathcal{C}(G)$ is contractible. \end{rem} \section{The $f$-vector of domino tilings} The concept of tilings of a bipartite planar graph generalizes the notion of domino tilings. Let $\mathcal{R}$ be a simple connected region, composed of unit squares in the plane, that can be tiled with domino tiles $1\times 2$ and $2\times 1$. The set of all tilings of $\mathcal{R}$ by domino tiles and $2\times 2$ squares defines a cubical complex, denoted by $\mathcal{C}(\mathcal{R})$. If we consider $\mathcal{R}$ as a planar graph (all of its elementary regions are unit squares), and if $G$ denotes the weak dual graph of $\mathcal{R}$ (the unit squares of $\mathcal{R}$ are vertices of $G$), then $\mathcal{C}(\mathcal{R})$ is isomorphic to the cubical matching complex $\mathcal{C}(G)$, see Section 3 in \cite{EhrCUB} for details. Note that the number of $i$-dimensional faces of $\mathcal{C}(G)$ counts the number of tilings of $\mathcal{R}$ with exactly $i$ squares $2\times 2$. Ehrenborg used collapsibility of $\mathcal{C}(G)$ to conclude (see Corollary 3.1 in \cite{EhrCUB}) that the entries of the $f$-vector $f(\mathcal{C}(G))=(f_0,f_1,\ldots,f_d)$ satisfy \begin{equation}\label{E:linf} f_0-f_1+f_2-\cdots+(-1)^df_d=1. \end{equation} If $G$ is the weak dual graph of a region $\mathcal{R}$ that admits a domino tiling, then all complexes that appear on the right-hand side of the relation (\ref{E:RRforC}) are contractible by induction, and therefore $\mathcal{C}(G)$ is contractible, see Remark \ref{R:whenCiscont}. So, we obtain that the relation (\ref{E:linf}) holds in any case, regardless of possible problems with Theorem \ref{T:EHr}.
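As a concrete illustration of the relation (\ref{E:linf}), the tilings of the simplest regions, the $2\times n$ rectangles, can be enumerated by a left-to-right case analysis: the leftmost column is covered either by a vertical domino, by two horizontal dominoes, or by a single $2\times 2$ square. The following computational sketch (our own illustration, with a hypothetical function name) checks the alternating sum for small $n$:

```python
def f_vector(n):
    """f-vector of the tiling complex of a 2 x n rectangle: entry i counts
    the tilings that use exactly i squares of size 2 x 2.  The recursion is
    a case analysis on how the leftmost column is covered."""
    if n <= 1:
        return [1]          # empty tiling (n = 0) or one vertical domino (n = 1)
    a = f_vector(n - 1)     # leftmost column: a vertical domino
    b = f_vector(n - 2)     # leftmost two columns: dominoes or a 2 x 2 square
    f = [0] * max(len(a), len(b) + 1)
    for i, c in enumerate(a):
        f[i] += c
    for i, c in enumerate(b):
        f[i] += c           # two horizontal dominoes: no new 2 x 2 square
        f[i + 1] += c       # one 2 x 2 square: raises the square count by one
    return f

for n in range(15):
    f = f_vector(n)
    # the relation f_0 - f_1 + f_2 - ... = 1
    assert sum((-1) ** i * fi for i, fi in enumerate(f)) == 1
```

For example, the $f$-vector of the $2\times 4$ rectangle comes out as $(5,5,1)$: five domino tilings, five tilings using exactly one $2\times 2$ square, and one tiling using two, with alternating sum $5-5+1=1$.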
In this Section we will prove that (\ref{E:linf}) is the only linear relation for $f$-vectors of cubical complexes of domino tilings. \\ For all $n\in \mathbb{N}$, we let $G_n$ denote the following graph $\begin{tabular}{|c|c|c|c|c|c|} \hline $1$ & $2$ & \,\,\, &\,\,\, &\,\,\, & $n$ \\ \hline \end{tabular}.$ \noindent This graph (also known as the ladder graph) has $2n+2$ vertices, $3n+1$ edges and $n$ elementary regions (squares). For $i=1,2,\ldots,n$, let $G_{n,i}$ denote the graph obtained by adding one unit square below the $i$-th square of $G_n$. Now, we describe some recursive relations for $f$-vectors of $\mathcal{C}(G_{n})$ and $\mathcal{C}(G_{n,i})$. \begin{prop}\label{P:rrforf}The entries of $f$-vectors of $\mathcal{C}(G_{n})$ and $\mathcal{C}(G_{n,i})$ satisfy the following recurrences: \begin{equation}\label{F:RRforf-vector} f_i(\mathcal{C}(G_{n+2}))=f_i(\mathcal{C}(G_{n+1}))+ f_i(\mathcal{C}(G_{n}))+f_{i-1}(\mathcal{C}(G_{n})), \end{equation} \begin{equation}\label{F:RRforf-vectorleft} f_i(\mathcal{C}(G_{n+2,i}))=f_i(\mathcal{C}(G_{n+1,i}))+ f_i(\mathcal{C}(G_{n,i}))+f_{i-1}(\mathcal{C}(G_{n,i})), \end{equation} \begin{equation}\label{F:RRforf-vectorright} f_i(\mathcal{C}(G_{n+2,i}))=f_i(\mathcal{C}(G_{n+1,i-1}))+ f_i(\mathcal{C}(G_{n,i-2}))+f_{i-1}(\mathcal{C}(G_{n,i-2})). \end{equation} \end{prop} \begin{proof} All formulas follow from relation (\ref{E:RRforC}), see the proof of Theorem \ref{T:generalcase}. To obtain the formula (\ref{F:RRforf-vector}), we apply (\ref{E:RRforC}) on $G_{n+2}$. The rightmost vertical edge and the rightmost unit square in $G_{n+2}$ act as $e$ and $R$ in (\ref{E:RRforC}). 
\begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale=.491]{center} \draw[ultra thick] (-4,-4) grid (1,-3);\node at (1.5,-3.5) {$=$}; \node at (7.5,-3.5) {$\sqcup$};\node at (13.5,-3.5) {$\sqcup$}; \draw[ultra thick] (2,-4) grid (6,-3); \draw[ultra thin] (2,-4) grid (7,-3); \draw[ultra thick] (7,-4) -- (7,-3); \draw[ultra thick] (12,-4) -- (13,-4); \draw[ ultra thick] (12,-3) -- (13,-3); \draw[ultra thick] (14,-4) grid (17,-3); \draw[ultra thin] (14,-4) grid (19,-3); \draw[ultra thick] (18,-4) grid (19,-3); \draw[ultra thick] (8,-4) grid (11,-3);\draw[ultra thin] (8,-4) grid (13,-3); \draw[ultra thick] (-4,-1) grid (1,0);\node at (1.5,-.5) {$=$}; \node at (7.5,-.5) {$\sqcup$};\node at (13.5,-.5) {$\sqcup$}; \draw[ultra thick] (2,-1) grid (6,0); \draw[ultra thin] (2,-1) grid (7,0); \draw[ultra thick] (7,-1) -- (7,0); \draw[ultra thick] (12,-1) -- (13,-1); \draw[ ultra thick] (12,0) -- (13,0); \draw[ultra thick] (14,-1) grid (17,0); \draw[ultra thin] (14,-1) grid (19,0); \draw[ultra thick] (18,-1) grid (19,0); \draw[ultra thick] (8,-1) grid (11,0);\draw[ultra thin] (8,-1) grid (13,0); \draw[ultra thick] (-2,-7) grid (-1,-8); \draw[ultra thick] (16,-7) grid (17,-8); \draw[ultra thick] (10,-7) grid (11,-8); \draw[ultra thick] (4,-7) grid (5,-8); \draw[ultra thick] (-4,-6) grid (1,-7);\node at (1.5,-6.5) {$=$}; \node at (7.5,-6.5) {$\sqcup$};\node at (13.5,-6.5) {$\sqcup$}; \draw[ultra thick] (3,-7) grid (7,-6); \draw[ultra thin] (2,-7) grid (7,-6); \draw[ultra thick] (2,-7) -- (2,-6); \draw[ultra thick] (8,-7) -- (9,-7); \draw[ ultra thick] (8,-6) -- (9,-6); \draw[ultra thick] (16,-7) grid (19,-6); \draw[ultra thin] (14,-7) grid (19,-6); \draw[ultra thick] (14,-7) grid (15,-6); \draw[ultra thick] (10,-7) grid (13,-6);\draw[ultra thin] (8,-7) grid (13,-6); \draw[ultra thick] (-2,-4) grid (-1,-5); \draw[ultra thick] (-2,-6) grid (-1,-7); \draw[ultra thick] (16,-4) grid (17,-5); \draw[ultra thick] (10,-4) grid (11,-5); \draw[ultra thick] (4,-4) grid (5,-5); \node 
at (-6,-.5) {$(5)$};\node at (-6,-3.5) {$(6)$};\node at (-6,-6.5) {$(7)$}; \end{tikzpicture} \end{center} \caption{The ``geometric proof'' of the recursive relations for $f(\mathcal{C}(G_{n}))$ and $f(\mathcal{C}(G_{n,i}))$.} \label{Figure:rrfor} \end{figure} In the same way we can prove the remaining two relations. For each relation, we choose an adequate elementary region $R$, a corresponding edge $e$ of $R$, and use relation (\ref{E:RRforC}), see Figure \ref{Figure:rrfor}. \end{proof} \noindent The $f$-vector $(f_0,f_1,f_2,\ldots,f_{\lceil\frac{n}{2}\rceil})$ of $\mathcal{C}(G_{n})$ can be encoded by the polynomial $F_n$: $$F_n=F_{\mathcal{C}(G_{n})}(x)=f_0+f_1x+f_2x^2+\cdots +f_{\lceil\frac{n}{2}\rceil}x^{\lceil\frac{n}{2}\rceil}.$$ Similarly, we define the polynomials $F_{n,i}$ to encode the $f$-vector of $\mathcal{C}(G_{n,i})$. Directly from (\ref{F:RRforf-vector}) and (\ref{F:RRforf-vectorleft}) we obtain that $$F_{n+2}(x)=F_{n+1}(x)+(x+1)F_{n}(x),\hspace{.2cm} F_{n+2,i}(x)=F_{n+1,i}(x)+(x+1)F_{n,i}(x).$$ Now, we define new polynomials $P_n$ and $P_{n,i}$ by $$P_n=P_n(x)=F_n(x-1),\hspace{.4cm} P_{n,i}=P_{n,i}(x)=F_{n,i}(x-1).$$ This is a variant of the $h$-polynomial associated to the corresponding cubical complexes.\\ From Proposition \ref{P:rrforf} it follows that the polynomials $P_n$ and $P_{n,i}$ satisfy the following recurrences \begin{equation}\label{F:RRforh-vector nolmal} P_{n+2}(x)=P_{n+1}(x)+xP_{n}(x), \end{equation} \begin{equation}\label{FFrrn_RIGHT} P_{n+2,i}(x)=P_{n+1,i}(x)+xP_{n,i}(x), \end{equation} \begin{equation}\label{F:RRforh-vector_LEFT} P_{n+2,i}(x)=P_{n+1,i-1}(x)+xP_{n,i-2}(x). \end{equation} \begin{rem}\label{R:exact} We can use (\ref{F:RRforh-vector nolmal}) to obtain the polynomials $P_n$ explicitly: $$P_{2d-1}={d\choose d}x^d+\cdots+{d+k\choose d-k}x^{d-k}+\cdots+{2d-1\choose 1}x+{2d \choose 0}\textrm{, and} $$ $$P_{2d}={d+1\choose d}x^d+\cdots+{d+k+1\choose d-k}x^{d-k}+\cdots+{2d\choose 1}x+{2d+1 \choose 0}.
$$ \vspace{.2cm} Note that the polynomials $P_n$ are related to Fibonacci polynomials, see Section 9.4 in \cite{Benj-Quinn} for the definition and a combinatorial interpretation of the coefficients. The coefficients of these polynomials are positive integers, and the sum of the coefficients of $P_n$ is a Fibonacci number. Note that this is just the number of vertices in $\mathcal{C}(G_n)$. Assume that we have embedded $\mathcal{C}(G_n)$ into the $n$-cube as in Proposition \ref{P:cord}, so that the perfect matching $M_0=\begin{tabular}{|c|c|c|c|c|c|} \hdashline & & & & & \\ \hdashline \end{tabular}$ of $G_n$ is the vertex at the origin. Now, the coefficient of $x^k$ in $P_n$ counts the number of vertices of $\mathcal{C}(G_n)$ for which the sum of coordinates is $k$, i.e., it is the number of vertices of $\mathcal{C}(G_n)$ whose distance from $M_0$ is $k$. Also, following \cite{Benj-Quinn}, we can recognize the coefficient of $x^k$ in $P_n$ as the number of $k$-element subsets of $[n]$ that do not contain two consecutive integers. Similarly, we can interpret the coefficient of $x^k$ in $P_{n,i}$ as the number of $k$-element subsets of the multiset $M=\{1,2,\ldots,i-1,i,i, i+1,\ldots,n\}$ that do not contain two consecutive integers. Note that the multiplicity of $i$ in $M$ is two, and all other elements have multiplicity one. \end{rem} \begin{defn}\label{D:opA} Let $\mathcal{P}^d$ denote the vector space of all polynomials of degree at most $d$. We define the linear map $A_d: \mathcal{P}^d\rightarrow \mathcal{P}^{d+1}$ recursively by \begin{equation}\label{F:Arekrel} A_d(x^k)=xA_{d-1}(x^{k-1}) \textrm{ for all }k >0, \end{equation} \begin{equation}\label{F:A1} A_0(1)=1+2x\textrm{ and }A_d(1)=P_{2d+1}-A_d(P_{2d-1}-1).
\end{equation} \end{defn} \begin{lem}\label{L:0} For any non-negative integer $d$, we have that $$A_d(P_{2d-1})=P_{2d+1},\ A_d(P_{2d})=P_{2d+2} \textrm{ and } A_{d+1}(P_{2d}) =P_{2d+2}.$$ \end{lem} \begin{proof} From (\ref{F:A1}) it follows that $A_d(P_{2d-1})=P_{2d+1}$. For the proof of the second formula we use (\ref{F:RRforh-vector nolmal}), (\ref{F:Arekrel}) and induction: $$A_d(P_{2d})=A_d(P_{2d-1}+xP_{2d-2})= P_{2d+1}+xA_{d-1}(P_{2d-2})=P_{2d+1}+xP_{2d}=P_{2d+2}.$$ The last formula in this lemma follows from (\ref{F:RRforh-vector nolmal}) and the formulas proved above: $$A_{d+1}(P_{2d})=A_{d+1}(P_{2d+1}-xP_{2d-1})=$$ $$P_{2d+3}-xA_d(P_{2d-1})=P_{2d+3}-xP_{2d+1}=P_{2d+2}.$$ \end{proof} \begin{lem}\label{L:1} For all integers $i$ and $d$ such that $1\leq i\leq \lfloor\frac d 2 \rfloor$, the following holds: $$A_d(P_{2d-1,i})=P_{2d+1,i}\textrm{ and } A_d(P_{2d,i})=P_{2d+2,i}.$$ \end{lem} \begin{proof} For $i=1$ and $i=2$ we apply relation (\ref{E:RRforC}) in a similar way as in the proof of Proposition \ref{P:rrforf}. We just delete the only square in the second row of $G_{n,1}$ and $G_{n,2}$, and obtain that $$P_{2d-1,1}=P_{2d-1}+xP_{2d-3},\ P_{2d-1,2}=P_{2d-1}+xP_{2d-4}.$$ By using Lemma \ref{L:0}, we obtain that $$A_d(P_{2d-1,1})=A_d(P_{2d-1}+xP_{2d-3})= P_{2d+1}+xP_{2d-1}=P_{2d+1,1}\textrm{, and }$$ $$A_d(P_{2d-1,2})=A_d(P_{2d-1}+xP_{2d-4})= P_{2d+1}+xA_{d-1}(P_{2d-4})=$$$$ =P_{2d+1}+xP_{2d-2}=P_{2d+1,2}.$$ In a similar way, we can prove that $$ A_d(P_{2d,1})=P_{2d+2,1},\ A_d(P_{2d,2})=P_{2d+2,2}.$$ Assume that the statement of this lemma is true for $P_{2d-1,j}$ and $P_{2d,j}$ when $j\leq i$.
Now, we use (\ref{F:RRforh-vector_LEFT}) and induction to calculate $$A_d(P_{2d,i+1})=A_d(P_{2d-1,i}+xP_{2d-2,i-1})= A_{d}(P_{2d-1,i})+xA_{d-1}(P_{2d-2,i-1})=$$ $$=P_{2d+1,i}+xP_{2d,i-1}=P_{2d+2,i+1}.$$ From (\ref{FFrrn_RIGHT}) we obtain that $$A_d(P_{2d-1,i+1})=A_d(P_{2d,i+1}-xP_{2d-2,i+1})= A_{d}(P_{2d,i+1})-xA_{d-1}(P_{2d-2,i+1})=$$ $$=P_{2d+2,i+1}-xP_{2d,i+1}=P_{2d+1,i+1}.$$ \end{proof} From Definition \ref{D:opA} and Remark \ref{R:exact} we can obtain an explicit formula for the linear map $A_d$. \begin{prop}\label{P:tacnoA_k} For all $d,k \in \mathbb{N}$ such that $d\geq k\geq 1$, we have that: $$A_d(x^k)=x^k\left(1+2x-x^2+ 2x^3-5x^4+14x^5-\cdots+(-1)^{d-k}C_{d-k}x^{d-k+1}\right).$$ Here $C_m$ denotes the $m$-th Catalan number. \end{prop} \begin{proof}By (\ref{F:Arekrel}), it is enough to prove that \begin{equation}\label{E:trueA_d} A_d(1)=1+2x-x^2+2x^3-5x^4+\cdots+(-1)^dC_dx^{d+1}. \end{equation} For all integers $n$ and $k$ such that $n \geq k \geq 1$ (by induction and Pascal's identity), we obtain the following relation \begin{equation}\label{E:Catibin} {n\choose k}=\sum_{i=0}^{k}(-1)^i{n+1+i\choose k-i}C_i. \end{equation} Now, we assume that (\ref{E:trueA_d}) is true for all positive integers less than $d$, and calculate $A_d(1)$ by definition: $$A_d(1)=P_{2d+1}-A_d(P_{2d-1}-1)=$$ $$=\sum_{i=0}^{d+1}{2d+2-i\choose i}x^i- \sum_{i=1}^{d}{2d-i\choose i}x^{i}A_{d-i}(1).$$ The coefficients of $1,x$ and $x^2$ in $A_d(1)$ are respectively: $${2d+2\choose 0}=1, {2d+1\choose 1}-{2d-1\choose 1}=2, {2d\choose 2}-{2d-2\choose 2}-2{2d-1\choose 1}=-1.$$ For $k>1$ the coefficient of $x^{k+1}$ in the polynomial $A_d(1)$ is $${2d+1-k\choose k+1}- {2d-k-1\choose k+1}-2{2d-k\choose k}-\sum_{i=1}^{k-1}(-1)^{i}{2d-k+i\choose k-i}C_i.$$ From (\ref{E:Catibin}) we obtain that the coefficient of $x^{k+1}$ in $A_d(1)$ is $(-1)^{k}C_{k}$. \end{proof} \begin{cor}\label{C:Ais11} For any positive integer $d$ the linear map $A_d$ is injective.
\end{cor} \noindent Now, we consider all simple connected regions for which the degree of the associated polynomial $P_\mathcal{R}(x)=F_\mathcal{R}(x-1)$ is equal to $d$. Let $\mathcal{F}^d$ denote the affine subspace of $\mathcal{P}^d$ spanned by these polynomials. \begin{lem}\label{L:one is not} The polynomial $P_{2d+1,d}$ is not contained in $A_d(\mathcal{F}^d)$. \end{lem} \begin{proof} From (\ref{F:RRforh-vector_LEFT}) and (\ref{FFrrn_RIGHT}) we have that $$P_{2d+1,d}-P_{2d+1,d-1}=(P_{2d,d-1}+xP_{2d-1,d-2})- (P_{2d,d-1}+xP_{2d-1,d-1})=$$ $$-x(P_{2d-1,d-1}-P_{2d-1,d-2})=(-1)^{d+1}(x^{d+1}+x^d).$$ We know that $P_{2d+1,d-1}=A_d(P_{2d-1,d-1})$. If there exists a polynomial $p\in \mathcal{F}^d$ such that $A_d(p)=P_{2d+1,d}$, then we obtain $$x^{d+1}+x^d=\pm A_d(p-P_{2d-1,d-1}),$$ which is impossible by Proposition \ref{P:tacnoA_k}. \end{proof} \begin{thm} The polynomials $P_{2d-1},P_{2d},P_{2d-1,1},\ldots,P_{2d-1,d-1}$ are affinely independent in $\mathcal{F}^d$. \end{thm} \begin{proof} We use induction on the degree. Assume that the $d$ polynomials $P_{2d-3}$, $P_{2d-2}$, $P_{2d-3,1}$, $\ldots,P_{2d-3,d-2}$ are affinely independent in $\mathcal{F}^{d-1}$. From Lemmas \ref{L:0} and \ref{L:1} and Corollary \ref{C:Ais11}, we conclude that $P_{2d-1}$, $P_{2d}$, $P_{2d-1,1}$, $\ldots,P_{2d-1,d-2}$ are affinely independent. These polynomials span a $(d-1)$-dimensional affine subspace of $\mathcal{F}^d$. From Lemma \ref{L:one is not} it follows that $P_{2d-1,d-1}$ is not contained in $A_{d-1}(\mathcal{F}^{d-1})$. \end{proof} \begin{cor} The Euler--Poincar\'e relation (\ref{E:linf}) is the only linear relation for the $f$-vectors of tilings. \end{cor} This answers Ehrenborg's question about numerical relations between the numbers of different types of tilings, see \cite{EhrCUB}. \bibliographystyle{amcjoucc}
\section{Introduction} \label{sec:introduction} Frobenius splitting is an important tool in characteristic $p$ commutative algebra and algebraic geometry. Locally, Frobenius splitting (or $F$-purity) is a restriction on singularities, analogous to log canonicity for complex singularities. Strong $F$-regularity is a strengthening of Frobenius splitting, analogous to how Kawamata log terminality is a strengthening of log canonicity. There is much interest in understanding the world of Frobenius split objects that are not strongly $F$-regular. For a local ring, Aberbach and Enescu introduced the \emph{splitting prime} as a way to measure the difference between Frobenius splitting and strong $F$-regularity---the elements in the splitting prime are obstructions to strong $F$-regularity, and in particular, the splitting prime of a domain is zero precisely for strongly $F$-regular rings \cite{Aberbach+Enescu.05}. Aberbach and Enescu's splitting prime can also be described as the largest uniformly $F$-compatible ideal in the sense of Schwede \cite{Schwede.10a}. In this paper, we will be working with a generalization of the splitting prime in two different directions. Frobenius splitting and strong $F$-regularity have been generalized to further settings, including pairs $(X, \Delta)$ consisting of a $\mathbb{Q}$-divisor $\Delta$ on a smooth variety $X$ of characteristic $p$; pairs $(R,\mf a^t)$ where $R$ is a ring of characteristic $p$, $\mf a$ is an ideal, and the formal exponent $t$ is a positive real number; as well as more general settings \cite{Hara+Watanabe.02,Takagi.04,Schwede.08,Schwede.10}. The Cartier algebra (see \Cref{full-cart-alg-def}) gives a unified and more general approach, allowing us to talk about Frobenius splitting and strong $F$-regularity for an arbitrary subalgebra of the Cartier algebra \cite{Schwede.11a}.
An important problem is to understand the extent to which strong $F$-regularity fails for an arbitrary Frobenius split subalgebra of the Cartier algebra \cite{Schwede.10a,Blickle+etal.12}. Fix a Frobenius split pair~$(R,\mc D)$, where $\mc D$ is a Cartier subalgebra (see \Cref{cartier-mod-defs}). One main theme of this paper is to consider splitting primes via the perspective of the Cartier core map \[ \Ccore_{\mc D}:\Spec R \to \Spec R \] which assigns to each prime $P\in \Spec R$ the splitting prime $\Ccore_{\mc D}(P)$ of the pair~$(R_P, \mc D_P)$. Alternatively, if $R$ is not Frobenius split, one can define $\Ccore_{\mc D}$ on the Frobenius split locus, $\mc U_{\mc D}$, of $(R,\mc D)$. In this context, we show the following main result. \begin{thmA}[\Cref{c-map-summary-thm}] Let $R$ be an $F$-finite Noetherian ring of characteristic $p$, and let $\mc D$ be a Cartier subalgebra. Then the Cartier core map \[ \mathcal U_{\mc D} \rightarrow \Spec R \qquad\qquad P\mapsto C_{\mathcal D}(P) \] is a continuous, containment-preserving map on the $F$-pure locus $\mathcal U_{\mc D}$ of the pair~$(R,\mc D)$ which fixes the $\mc D$-compatible ideals. The image of $C_{\mathcal D}$ is the set of prime $\mc D$-compatible ideals and is always finite. The image is the set of minimal primes of $R$ precisely when the pair~$(R,\mc D)$ is strongly $F$-regular. \end{thmA} In another direction, for an arbitrary (not necessarily Frobenius split) pair~$(R,\mc D)$, we can associate to \emph{any} ideal $J$ of $R$ the smallest $\mc D$-compatible ideal $\Ccore_{\mc D}(J)$ contained in $J$. We call this ideal the \emph{Cartier core} of $J$, following Badilla-C\'espedes, who considered this earlier for the non-pair setting \cite{Badilla-Cespedes.21}. We will prove some basic facts about the Cartier core, many of which mirror results of Schwede in the triples setting and of Badilla-C\'espedes in the setting where $\mc D$ is the full Cartier algebra.
Returning to the non-pair setting, we also give the following explicit formula for the Cartier core of an arbitrary ideal in a quotient of a regular ring, making use of a criterion for strong $F$-regularity due to Glassbrenner \cite{Glassbrenner.96}. \begin{thmB}[\Cref{c-ideal-quo-reg-ring}] Let $S$ be a regular $F$-finite ring, let $I\subseteq J$ be ideals of $S$, and let $R=S/I$. Then \[ C_R(J/I) = \left(\bigcap_{e>0} J^{[p^e]} :_S (I^{[p^e]} :_S I)\right)/I. \] \end{thmB} This presentation of the Cartier core allows us to prove that the Cartier core map commutes with basic operations in commutative algebra such as localizing, adjoining a variable, and in the case of quotients of polynomial rings, with homogenization (see \Cref{localization}, \Cref{qr-add-var}, \Cref{qr-homogenization}). As an application of these techniques, we give an exact description of the Cartier core map for primes in the case of Stanley-Reisner rings. \begin{thmC}[\Cref{stanley-reisner-c-map}] Let $R$ be a Stanley-Reisner ring over a field that has prime characteristic and is $F$-finite. Let $Q$ be any prime ideal. Then \[ C_R(Q) = \sum_{\substack{P \in \Min(R) \\ P\subseteq Q}} P. \] In particular, the image of the Cartier core map on primes, i.e., the set of generic points of $F$-pure centers of $R$, is the set of sums of minimal primes. \end{thmC} This theorem extends existing work on computing certain specific uniformly $F$-compatible ideals and $\mc D$-compatible ideals, including the splitting prime and test ideals, for Stanley-Reisner rings \cite{Aberbach+Enescu.05,Vassilev.98,Enescu+Ilioaea.20,Badilla-Cespedes.21}. \begin{assume} All rings in this paper (other than the Cartier algebras) are commutative, Noetherian, and unital. Furthermore, all such rings are of prime characteristic $p$ and are $F$-finite. \end{assume} \begin{ackblock} I am grateful to my advisor, Karen Smith, for all of her guidance and suggestions. 
I would also like to thank Shelby Cox and Swaraj Pande for the helpful conversations. \end{ackblock} \section{Background} \label{sec:background} For a ring $R$ of prime characteristic $p$, the Frobenius endomorphism is the ring map $F:R\to R$ where $F(r) = r^p$. To distinguish the copies of $R$, we may write $F: R\to F_* R$ so that $F(r) = F_*(r^p)$. As a ring, this Frobenius pushforward $F_*R$ is the same as $R$, but its $R$-module structure induced by $F$ is $rF_* s = F_* (r^p s)$. We can iterate the Frobenius, writing $F^e: R\to F_*^e R$, where $F^e(r) = F_*^e(r^{p^e})$ and $rF_*^e s = F_*^e (r^{p^e}s)$. We will utilize this $R$-module structure on $F_*^eR$, but first we need a cohesive way to consider only certain maps in $\hom_R(F_*^e R, R)$. To do this, define a (non-commutative) multiplication on the abelian group $\bigoplus_e \hom_R(F_*^e R, R)$ as follows. Given maps $\varphi \in \hom_R(F_*^e R, R)$ and $\psi \in \hom_R(F_*^d R, R)$, we define their product as \begin{equation} \label{eq:cartier-mult} \varphi\cdot \psi = \varphi \circ F_*^e \psi. \end{equation} More concretely, for any $r\in R$ we have \[ (\varphi\cdot \psi) (F_*^{e+d} r) = \varphi \left( F_*^e(\psi(F_*^d r))\right). \] \begin{defn} \label{full-cart-alg-def} The \emph{(full) Cartier algebra} on $R$ is the graded non-commutative ring \[ \mc C_R = \bigoplus_{e\geq 0} \hom_R(F_*^e R, R), \] where multiplication is as defined in \Cref{eq:cartier-mult}. \end{defn} Note that we are writing $F_*^0R$ to mean $R$ as an $R$-module, so that $(\mc C_R)_0 = \hom_R(R,R)\cong R$. However, this copy of $R$ is rarely central in $\mc C_R$, because for $\varphi$ of degree $e$, we have $r\cdot \varphi = \varphi \cdot r^{p^e} $. In particular, $R$ is central only if $R = \bb F_p$. \begin{defn} \label{cart-subalg-def} A \emph{Cartier subalgebra} $\mc D$ is a graded subalgebra of $\mc C_R$ such that $\mc D_0 = R$.
In particular, $\mc D$ has the form $\mc D = \bigoplus_e \mc D_e$ where $\mc D_e \ \subseteq \hom_R(F_*^eR, R)$ for all $e\geq 0$. \end{defn} \begin{zb}[{\cite[Rmk.~3.10]{Schwede.11a}}] Let $(R,\mf a^t)$ be a pair where $\mf a$ is an ideal and the formal exponent $t$ is a positive real number. Then the corresponding Cartier subalgebra $\mc C^{\mf a^t}$ has \[ \mc C^{\mf a^t}_e = \hom_R(F_*^e R, R)\cdot \mf a^{\lceil t(p^e-1)\rceil}. \] \end{zb} \begin{defn} \label{cartier-mod-defs} Let $R$ be a ring of prime characteristic, and let $\mc D$ be a Cartier subalgebra on $R$. \begin{itemize} \item The pair~$(R,\mc D)$ is \emph{$F$-finite} if $R$ is $F$-finite, i.e., $F_*R$ is a finite $R$-module. Every ring $R$ in this paper will be $F$-finite. \item The pair~$(R,\mc D)$ is \emph{Frobenius split} or \emph{(sharply) $F$-pure} if there exists some $e>0$ and some $\phi \in \mc D_e$ with $\phi(F_*^e 1)=1$. \item If $c$ is an element of $R$, then the pair~$(R,\mc D)$ is \emph{eventually Frobenius split along $c$}, or \emph{$F$-pure along $c$} if there exists some $e>0$ and some $\phi \in \mc D_e$ with $\phi(F_*^e c)=1$. \item The pair~$(R,\mc D)$ is \emph{strongly $F$-regular} if it is eventually Frobenius split along every $c$ which is not in any minimal prime of $R$. \end{itemize} \end{defn} We will follow the example of Blickle, Schwede, and Tucker and omit the adjective ``sharp'' when discussing $F$-purity of pairs \cite[Def.~2.7]{Blickle+etal.12}. Observe that if $\varphi \in \mc D_e$ is a splitting of $F^e$, then there is a splitting in any multiple of the degree, given by $\varphi^n \in \mc D_{en}$. Since we will consider only pairs $(R,\mc D)$ where $R$ is Noetherian and $F$-finite, this means that for any ring $S$ such that $R\to S$ is flat, we have by \cite[Thm.~7.11]{Matsumura.89}, \[ S\otimes_R \hom_R(F_*^eR, R)\cong \hom_S(S\otimes_R F_*^e R, S). 
\] In the case that $S$ is a localization of $R$, we further know that localization commutes with the Frobenius; that is, for any multiplicative set $W$, \begin{align*} W^{-1} R \otimes_R \hom_R(F_*^eR, R) &\cong \hom_{W^{-1} R}(F_*^e (W^{-1} R), W^{-1} R). \end{align*} We will use this isomorphism freely: if $\frac{r}{w} \otimes \phi$ is a pure tensor in $W^{-1} R \otimes \hom_{R}(F_*^e R, R)$, we will identify this with the map in $\hom_{W^{-1} R}(F_*^e W^{-1} R, W^{-1} R)$ which sends $F_*^e(\frac{s}{u})$ to $\frac{r\phi(F_*^e( su^{p^e-1}))}{wu}$. This identification is easier to understand if we first rewrite $F_*^e(\frac{s}{u})$ as $\frac{F_*^e(su^{p^e-1})}{u}$. Thus we have a natural containment $W^{-1} R\otimes_R \mc D_e\subseteq (\mc C_{W^{-1} R})_e$. We can therefore construct a new Cartier subalgebra $W^{-1} \mc D$ on $W^{-1} R$ using this isomorphism, so that \begin{align*} (W^{-1} \mc D)_e = W^{-1} R\otimes \mc D_e. \end{align*} When we are localizing at a prime ideal $P$, we write this Cartier subalgebra as $\mc D_P$. Now that we have the setup to discuss localizations of Cartier subalgebras, we can state and prove the following standard result on the Frobenius split locus in the setting of Cartier algebra pairs. \begin{thm} \label{f-pure-open-local} Let $R$ be an $F$-finite ring, and $\mc D$ a Cartier subalgebra. Then the set of primes $P$ of $R$ at which $(R_P, \mc D_P)$ is $F$-pure is open. Further, the pair~$(R,\mc D)$ is $F$-pure if and only if the localized pair~$(R_P, \mc D_P)$ is $F$-pure for all primes $P$. \end{thm} \begin{proof} For any $e$, we get a module map $\Psi_e: \mc D_e \to R$ via evaluation at~$F_*^e 1$. The pair~$(R,\mc D)$ is $F$-pure exactly when this map is surjective for some $e>0$, or equivalently, when there exists an $e>0$ such that $R/\im \Psi_e=0$.
The localization $(\Psi_e)_P$ corresponds to the evaluation map $(\mc D_P)_e \to R_P$, so the pair~$(R_P, \mc D_P)$ is \emph{not} $F$-pure if and only if $R_P/\im (\Psi_e)_P \neq 0$ for all $e$. Thus the non-$F$-pure locus is precisely the closed set $\bigcap_{e>0} \bb V(\im \Psi_e)$. For the second statement, if $(R,\mc D)$ is $F$-pure, then there exists some $e>0$ and $\varphi\in \mc D_e$ with $\varphi(F_*^e 1)=1$. By definition, the localization $\varphi_P : F_*^e(R_P) \to R_P$ is in $(\mc D_P)_e$, and so $(R_P, \mc D_P)$ is also $F$-pure. Conversely, if each $(R_P, \mc D_P)$ is $F$-pure, then the complements of the sets $\bb V(\im \Psi_e)$ give an open cover of $\Spec R$. Since $\Spec R$ is quasi-compact, only finitely many are needed, say, the complements of $\bb V(\im \Psi_{e_1}), \ldots, \bb V(\im \Psi_{e_t})$. Then taking $e=e_1\cdots e_t$ to be the product of these indices, the pair~$(R_P, \mc D_P)$ has a splitting in $\mc D_e$ for every prime $P$. Thus the map $\Psi_e$ is surjective at every prime, and therefore is surjective.
\end{proof}
This proof in fact shows that for any $c$, the set of primes $P$ such that $(R_P, \mc D_P)$ is not eventually Frobenius split along $c$ is closed. Further, it also shows that $(R,\mc D)$ is eventually Frobenius split along $c$ if and only if $(R_P, \mc D_P)$ is for every prime ideal $P$. In particular, this shows that just like in the classical case, $(R,\mc D)$ is strongly $F$-regular if and only if every $(R_P, \mc D_P)$ is as well.
\section{The Cartier core map}
Fix a pair~$(R,\mc D)$ where $R$ is an $F$-finite and Frobenius split ring and where $\mc D$ is a Cartier subalgebra. In this section we will define an explicit continuous map
\[
\Ccore_{\mc D}: \Spec R \to \Spec R
\]
that has some especially nice properties. The image of our map is the set of $\mc D$-compatible primes of $\Spec R$, which in the case $\mc D = \mc C_R$ is the set of (generic points of) $F$-pure centers.
If $R$ is not Frobenius split, we can instead define $\Ccore_{\mc D}$ on the open locus of Frobenius split points. More generally, the map $\Ccore_{\mc D}$ can be viewed as an endomorphism defined on the set of \emph{all ideals} of $R$ (not necessarily proper), which is especially interesting on the class of radical ideals in a Frobenius split ring.
\begin{defn}
Let \(R\) be an $F$-finite ring of prime characteristic. Let \(J\) be an ideal of \(R\). Let $\mc D \subseteq \mc C_R$ be a Cartier subalgebra. Then the \emph{Cartier core} of \(J\) in \(R\) with respect to $\mc D$ is
\[
\Ccore_{\mc D}(J) = \left\{r\in R \: | \: \phi(F_*^e r)\in J \ \ \forall e>0,\ \forall\phi\in \mc D_e\right\}.
\]
\end{defn}
We will write $\Ccore_R(J)$ to mean the Cartier core with respect to the full Cartier algebra $\mc C_R$, and just $\Ccore(J)$ when the ring and Cartier subalgebra are clear from context. In the case that $\mc D = \mc C_R$, the Cartier core $\Ccore_R(J)$ is also denoted (e.g., in \cite{Badilla-Cespedes.21}) as $\mc P(J)$.
\begin{notation}
The \emph{\(e\)-th Cartier contraction} of $J$ with respect to $\mc D$ is
\[
A_{\mc D_e}(J) = \left\{ r \in R \: | \: \phi(F_*^e r) \in J\ \ \forall \phi \in \mc D_e\right\}.
\]
\end{notation}
We can express the Cartier core in terms of the Cartier contractions as
\[
\Ccore_{\mc D}(J) = \bigcap_{e>0} A_{\mc D_e}(J).
\]
We can also express the Frobenius pushforward of the $e$-th Cartier contraction as
\[
F_*^e(A_{\mc D_e}(J)) = \bigcap_{\varphi \in \mc D_e}\varphi^{-1}(J).
\]
When $\mc D = \mc C_R$, the \(e\)-th Cartier contraction \(A_{\mc D_e}(J)\) is sometimes denoted by $J_e$. Note that for an $F$-finite pair~$(R,\mc D)$, $A_{\mc D_e}(J)$ and \(\Ccore_{\mc D}(J)\) are ideals. Both are clearly additively closed, so it suffices to check that if $a\in A_{\mc D_e}(J)$ and $r\in R$, then $ra\in A_{\mc D_e}(J)$. For any $\phi \in \mc D_e$, we have $\phi(F_*^e(ra)) = (\phi \cdot r)(F_*^e a)$, which is in $J$ since $a\in A_{\mc D_e}(J)$.
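To get a feel for these definitions, we record the simplest case, which is standard.
\begin{zb}
Suppose $R$ is regular and $\mc D = \mc C_R$ is the full Cartier algebra. Then for any ideal $J$ one can check that $A_{\mc D_e}(J) = J^{[p^e]}$ (this also follows from \Cref{c-ideal-quo-reg-ring} below by taking $I=0$), and so
\[
\Ccore_R(J) = \bigcap_{e>0} J^{[p^e]}.
\]
For instance, for $R = \mathbb{F}_p[x]$ and $J = \langle x \rangle$ we get
\[
\Ccore_R(\langle x\rangle) = \bigcap_{e>0}\langle x^{p^e}\rangle = \langle 0 \rangle,
\]
the unique minimal prime of $R$, consistent with the fact that regular $F$-finite rings are strongly $F$-regular.
\end{zb}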
The Cartier core was defined for the case $\mc C_R = \mc D$ by Badilla-C\'espedes \cite[Def.~4.12]{Badilla-Cespedes.21} as a generalization of Aberbach and Enescu's splitting prime \cite{Aberbach+Enescu.05} and of Brenner, Jeffries, and N\'u\~{n}ez Betancourt's differential core \cite{Brenner+etal.19}. Here we generalize this definition to the context of pairs, similar to Blickle, Schwede, and Tucker's generalization of the splitting prime to the context of pairs \cite{Blickle+etal.12}. To motivate the definition of the Cartier core, note that the condition \(J\subseteq \Ccore_{\mc D}(J)\), i.e., that \(\phi(F_*^e(J))\subseteq J\) for all $e$ and for all $\phi\in \mc D_e$, is precisely the condition that \(J\) is $\mc D$-compatible. In the case where $\mc D$ is the full Cartier algebra, this is the same as saying $J$ is uniformly $F$-compatible. In fact, it is known that when $R$ is $F$-pure, $\Ccore_{R}(J)$ is the largest uniformly $F$-compatible ideal contained in \(J\) \cite[Prop.~4.11]{Badilla-Cespedes.21}. We will see in \Cref{largest-D-compat} that when the pair~$(R,\mc D)$ is Frobenius split, $\Ccore_{\mc D}(J)$ is the largest $\mc D$-compatible ideal contained in $J$. Further, as the next two results show, the Cartier core of a prime ideal $P$ carries information about the localization $(R_P, \mc D_P)$.
\begin{prop}[{\cite[Prop.~2.12]{Blickle+etal.12}}]
\label{f-split-proper}
Let $(R,\mc D)$ be an $F$-finite pair and let $P$ be a prime ideal of $R$. Then $r\notin \Ccore_{\mc D}(P)$ if and only if the pair~$(R_P, \mc D_P)$ is $F$-pure along $r/1$. In particular, $(R_P, \mc D_P)$ is $F$-pure if and only if $\Ccore_{\mc D}(P)$ is proper.
\end{prop}
\begin{proof}
Since $\mc D_P = \mc D\otimes R_P$, saying $\phi(F_*^e(r))\in P$ for some $\phi\in\mc D_e \subseteq \hom_R(F_*^e R, R)$ is equivalent to saying $\phi(F_*^e(r/1))\in PR_P$, viewing $\phi\in (\mc D_P)_e\subseteq \hom_{R_P}(F_*^e(R_P), R_P)$.
The pair~$(R_P,\mc D_P)$ is $F$-pure if and only if there is some $e>0$ and some $\phi\in (\mc D_P)_e$ such that $\phi(F_*^e 1)$ is a unit, which we have just seen is equivalent to having $1\notin \Ccore_{\mc D}(P)$.
\end{proof}
\begin{prop}[{Cf. \cite[Thm.~2.11, Prop.~2.12]{Blickle+etal.12}}]
\label{str-f-reg=in-minl-prime}
Let $(R,\mc D)$ be an $F$-finite pair and let $P$ be a prime ideal of $R$. Then the pair~$(R_P, \mc D_P)$ is strongly $F$-regular if and only if $\Ccore_{\mc D}(P)$ is contained in some minimal prime of $R$.
\end{prop}
\begin{proof}
The pair~$(R_P, \mc D_P)$ is strongly $F$-regular if and only if it is $F$-pure along $r/1$ for every $r$ not contained in any minimal prime, which by \Cref{f-split-proper} happens exactly when $\Ccore_{\mc D}(P)$ is contained in the union of the minimal primes of $R$. Since $\Ccore_{\mc D}(P)$ is an ideal, prime avoidance says this is equivalent to having $\Ccore_{\mc D}(P)$ contained in some minimal prime of $R$.
\end{proof}
Now that we have provided some motivation for the Cartier core construction, we will discuss some of its nice properties.
\begin{prop}
\label{containment}
Let $(R,\mc D)$ be an $F$-finite pair. If $J_1 \subseteq J_2$ in \(R\), then $\Ccore_{\mc D}(J_1)\subseteq \Ccore_{\mc D}(J_2)$.
\end{prop}
\begin{proof}
For every $e$, $A_{\mc D_e}( J_1)\subseteq A_{\mc D_e}(J_2)$, since if $\phi(F_*^e r)\in J_1$ for some $\phi \in \mc D_e$, we also have $\phi(F_*^e r)\in J_2$. Taking the intersection over all $e$ gives our result.
\end{proof}
\begin{prop}[{Cf. \cite[Prop~4.6]{Badilla-Cespedes.21}}]
\label{arbitrary-intersection-c-ideal}
Let $\{J_\alpha\}$ be an arbitrary collection of ideals in an $F$-finite ring $R$, and let $\mc D$ be a Cartier subalgebra. Then
\[
\Ccore_{\mc D}\left(\bigcap_\alpha J_\alpha\right) = \bigcap_\alpha \Ccore_{\mc D}(J_\alpha).
\]
\end{prop}
\begin{proof}
We see that
\begin{align*}
\Ccore_{\mc D}\left(\bigcap_\alpha J_\alpha\right) &= \left\{r\in R\: | \: \phi(F_*^e r) \in \bigcap_\alpha J_\alpha \; \forall e, \forall \phi\in \mc D_e \right\} \\
&= \bigcap_\alpha \left\{r\in R\: | \: \phi(F_*^e r) \in J_\alpha \ \forall e, \forall \phi\in \mc D_e \right\} \\
&= \bigcap_\alpha \Ccore_{\mc D}(J_\alpha). \qedhere
\end{align*}
\end{proof}
In particular, the set of Cartier cores with respect to $\mc D$ is closed under arbitrary intersection. We will see in \Cref{prop:arbitrary-sum-c-ideals} that this set is also closed under arbitrary sum for $F$-pure pairs. Our next goal is to show that the Cartier core construction commutes with localization. To do so, we need the following lemma.
\begin{lemma}
\label{localization}
Let $(R,\mc D)$ be an $F$-finite pair, let $Q$ be a $P$-primary ideal of $R$, and let $W$ be a multiplicative set avoiding $P$, so that $W\cap P=\emptyset$. Then
\[
\Ccore_{W^{-1} \mc D}(QW^{-1} R) \cap R = \Ccore_{\mc D}(Q).
\]
\end{lemma}
\begin{proof}
By our discussion in \Cref{sec:background}, $(W^{-1} \mc D)_e$ is generated by the maps $\frac{\phi}{w}: F_*^e(W^{-1} R)\to W^{-1} R$ for $\phi\in \mc D_e$ and $w\in W$, where $\frac{\phi}{w}(F_*^e(\frac{s}{u})) = \frac{\phi(F_*^e(su^{p^e-1}))}{wu}$. We will start by showing that $\frac{s}{1}\in A_{(W^{-1} \mc D)_e}(QW^{-1} R)$ if and only if $s\in A_{\mc D_e}(Q)$. By definition, $\frac{s}{1}\in A_{(W^{-1} \mc D)_e}(QW^{-1} R)$ if and only if $\varphi(F_*^e(\frac{s}{1}))\in QW^{-1} R$ for all \( \varphi \in (W^{-1} \mc D)_e. \) This is equivalent to having
\[
\frac{\phi(F_*^e(s))}{w}\in QW^{-1} R
\]
for all $\phi \in \mc D_e$ and all $w\in W$. This means that we can write $\frac{\phi(F_*^e(s))}{w} = \frac{j}{u}$ for some $j\in Q$, $u\in W$, i.e., there exists $v\in W$ such that $vu \phi(F_*^e(s)) = vwj$. The latter is in $Q$, but $vu\notin P$, so since $Q$ is $P$-primary we must then have $\phi(F_*^e s) \in Q$.
This holds for all $\phi$ exactly when $s \in A_{\mc D_e}(Q)$. Now we have shown our first claim, which implies $A_{\mc D_e}(Q) = A_{(W^{-1} \mc D)_e}(QW^{-1} R) \cap R$. Intersecting both sides over all $e>0$, we see
\[
\Ccore_{W^{-1} \mc D}(QW^{-1} R)\cap R = \Ccore_{\mc D}(Q).\qedhere
\]
\end{proof}
\begin{thm}
\label{thm:localization}
Let $(R,\mc D)$ be an $F$-finite pair, let $J$ be an ideal of $R$, and let $W$ be a multiplicative set avoiding every prime in $\Ass(J)$. Then
\[
\Ccore_{W^{-1} \mc D}(JW^{-1} R)\cap R = \Ccore_{\mc D}(J) \quad \text{ and } \quad \Ccore_{\mc D}(J)W^{-1} R = \Ccore_{W^{-1} \mc D}(JW^{-1} R).
\]
\end{thm}
\begin{proof}
Write $J= Q_1 \cap \cdots \cap Q_t$ as a minimal primary decomposition of $J$ with corresponding primes $P_i = \sqrt {Q_i}$, so that $JW^{-1} R = \bigcap_{i=1}^t Q_iW^{-1} R$. Then since intersection commutes with applying $\Ccore_{W^{-1} \mc D}$ and with contraction,
\begin{align*}
\Ccore_{W^{-1} \mc D}(JW^{-1} R)\cap R &= \bigcap_{i=1}^t (\Ccore_{W^{-1} \mc D}(Q_iW^{-1} R)\cap R).
\end{align*}
By \Cref{localization}, since $W\cap P_i=\emptyset$ we have $\Ccore_{W^{-1} \mc D}(Q_iW^{-1} R)\cap R = \Ccore_{\mc D}(Q_i)$ and so
\[
\Ccore_{W^{-1} \mc D}(JW^{-1} R)\cap R = \bigcap_{i=1}^t \Ccore_{\mc D}(Q_i) = \Ccore_{\mc D}(J).
\]
For the second equality, we note
\[
\Ccore_{W^{-1} \mc D}(JW^{-1} R) = (\Ccore_{W^{-1} \mc D}(JW^{-1} R)\cap R)W^{-1} R = \Ccore_{\mc D}(J)W^{-1} R
\]
since every ideal of $W^{-1} R$ is the extension of its contraction.
\end{proof}
Now that we have established the preliminary results for arbitrary ideals, we move to considering prime ideals. Our main results of the rest of this section can be summarized in the following theorem.
\begin{thm}
\label{c-map-summary-thm}
Let $R$ be an $F$-finite Noetherian ring, and let $\mc D$ be a Cartier subalgebra. Then the Cartier core construction with respect to $\mc D$ induces a well-defined, continuous, and containment-preserving map on the $F$-pure locus of the pair~$(R,\mc D)$ which fixes $\mc D$-compatible ideals.
The image of the map is the set of $\mc D$-compatible primes contained in the $F$-pure locus $\mc U_{\mc D}$, and is always finite. The image is the set of minimal primes of $R$ precisely when the pair~$(R,\mc D)$ is strongly $F$-regular.
\end{thm}
\begin{proof}
We have already seen in \Cref{containment} that the Cartier core is containment preserving, even without restricting to primes. \Cref{c-map-well-defined} will show that the map $\Ccore:\mc U_{\mc D} \to \mc U_{\mc D}$ is well-defined. \Cref{cts-map} will show that this map is continuous, and \Cref{fin-many-C-cores} will show that the image is finite. \Cref{cartier-core=D-compatible} will show that the image consists precisely of $\mc D$-compatible ideals lying in $\mc U_{\mc D}$, which combined with \Cref{prop:c-ideal-stable} shows that all the $\mc D$-compatible ideals in $\mc U_{\mc D}$ are fixed. The one statement that does not have a stand-alone proof elsewhere is the last one. $(R,\mc D)$ is strongly $F$-regular if and only if each $(R_P, \mc D_P)$ is strongly $F$-regular. By \Cref{str-f-reg=in-minl-prime}, this occurs exactly when each $\Ccore_{\mc D}(P)$ is contained in a minimal prime of $R$. But since $\Ccore_{\mc D}(P)$ is prime, this is equivalent to having $\Ccore_{\mc D}(P)$ be a minimal prime.
\end{proof}
It is known that the splitting prime, which in our notation is $\Ccore_{R}(\mf m)$ for $(R,\mf m)$ local, is indeed prime \cite[Thm.~3.3]{Aberbach+Enescu.05}, even in the case of an arbitrary Cartier subalgebra \cite[Prop.~2.12]{Blickle+etal.12}. After localizing, the same proof works here, which we repeat for the reader's convenience.
\begin{prop}[{\cite[Prop.~2.12]{Blickle+etal.12}}]
\label{proper-prime}
If \(P\) is prime and \(\Ccore_{\mc D}(P)\) is proper, then \(\Ccore_{\mc D}(P)\) is prime.
\end{prop}
\begin{proof}
Suppose $c_0,c_1\notin \Ccore_{\mc D}(P)$. Then we will show $c_0c_1\notin \Ccore_{\mc D}(P)$.
Our assumption means that $(R_P, \mc D_{P})$ is $F$-pure along each $c_i$, i.e., there exists an $e_i$ and $\psi_i\in (\mc D_P)_{e_i}$ such that $\psi_i(F_*^{e_i}c_i) = 1$. Then applying the map $\psi_1\circ F_*^{e_1}\psi_0 \circ F_*^{e_0+e_1}(c_1^{p^{e_0}-1})$ to $F_*^{e_0+e_1}(c_0c_1)$, we get
\begin{center}
\begin{tikzcd}[column sep=large]
F_*^{e_0+e_1} (R_P) \rar["\cdot F_*^{e_0+e_1}(c_1^{p^{e_0}-1})"] & F_*^{e_0+e_1}(R_P) \rar["F_*^{e_1}\psi_0"] & F_*^{e_1}(R_P) \rar["\psi_1"] & R_P \\
F_*^{e_0+e_1}(c_0c_1) \rar[mapsto] & F_*^{e_0+e_1}(c_0c_1^{p^{e_0}}) = F_*^{e_1}(c_1F_*^{e_0}(c_0)) \rar[mapsto] & F_*^{e_1}c_1 \rar[mapsto] & 1.
\end{tikzcd}
\end{center}
Rewriting this map as $\psi_1 \circ F_*^{e_1}\psi_0 \circ F_*^{e_0+e_1}(c_1^{p^{e_0}-1}) = \psi_1 \cdot \psi_0 \cdot c_1^{p^{e_0}-1}$, we see that it is in $(\mc D_P)_{e_0+e_1}$, and thus that $(R_P, \mc D_P)$ is also $F$-pure along $c_0c_1$, as desired.
\end{proof}
\begin{prop}
\label{prop:c-ideal-in-primary-original-ideal}
Let $R$ be an $F$-finite ring of prime characteristic, and let $\mc D$ be a Cartier subalgebra. If $Q$ is a $P$-primary ideal of $R$ and $\Ccore_{\mc D}(P)$ is proper, then $\Ccore_{\mc D}(Q)\subseteq Q$.
\end{prop}
\begin{proof}
Since $\Ccore_{\mc D}(P)$ is proper, there is some $e>0$ and $\psi \in \mc D_e$ with $\psi(F_*^e 1)\notin P$. Consider $r\notin Q$ and the map $\psi \circ (F_*^e (r^{p^e-1}))= \psi \cdot r^{p^e-1}$ in $\mc D_e$. Then, since $\psi(F_*^e 1)\notin P$ and $Q$ is $P$-primary,
\[
r\psi(F_*^e 1) = \psi(F_*^e r^{p^e}) = (\psi\cdot r^{p^e-1})(F_*^e r)\notin Q,
\]
and so $r\notin \Ccore_{\mc D}(Q)$ as desired.
\end{proof}
\begin{cor}
\label{c-map-well-defined}
Let $(R, \mc D)$ be an $F$-finite pair, with $F$-pure locus $\mc U_{\mc D}$. Then the Cartier core construction induces a well-defined map $\Ccore_{\mc D}:\mc U_{\mc D}\to \mc U_{\mc D}$.
\end{cor}
\begin{proof}
Let $P$ be a prime ideal in $\mc U_{\mc D}$.
Then $(R_P,\mc D_P)$ is Frobenius split, so \Cref{f-split-proper} gives that $\Ccore_{\mc D}(P)$ is proper, and thus prime by \Cref{proper-prime}. This gives a map $\Ccore_{\mc D}:\mc U_{\mc D} \to \Spec R$. Then \Cref{prop:c-ideal-in-primary-original-ideal} says $\Ccore_{\mc D}(P) \subseteq P$. Since the $F$-pure locus is open, and open subsets of $\Spec R$ are closed under generization, this means $\Ccore_{\mc D}(P)$ must also be in the $F$-pure locus.
\end{proof}
\begin{cor}[{Cf. \cite[Cor.~4.8]{Schwede.10a}}]
\label{c-minl-prime}
Suppose the pair~$(R, \mc D)$ is $F$-finite and $F$-pure. If $P$ is a minimal prime of $R$, then $\Ccore_{\mc D}(P) = P$.
\end{cor}
\begin{proof}
Since $(R,\mc D)$ is $F$-pure, $\Ccore_{\mc D}(P)\subseteq P$. Since $\Ccore_{\mc D}(P)$ is prime by \Cref{proper-prime} and $P$ is minimal, we must have that $\Ccore_{\mc D}(P)=P$.
\end{proof}
\begin{cor}[{Cf. \cite[Prop.~4.5]{Badilla-Cespedes.21}}]
\label{c-ideal-in-original-ideal}
If the pair~$(R,\mc D)$ is $F$-finite and $F$-pure, then for any ideal $J$ we have $\Ccore_{\mc D}(J)\subseteq J$.
\end{cor}
\begin{proof}
Write $J = Q_1\cap \cdots \cap Q_t$, where the $Q_i$ give a primary decomposition of $J$. Then by \Cref{arbitrary-intersection-c-ideal},
\[
\Ccore_{\mc D}(J) = \Ccore_{\mc D}(Q_1)\cap \cdots \cap \Ccore_{\mc D}(Q_t).
\]
Since $(R,\mc D)$ is Frobenius split, for every prime $P$ the pair~$(R_P, \mc D_P)$ is also Frobenius split, and thus has $\Ccore_{\mc D}(P)$ proper by \Cref{f-split-proper}. By \Cref{prop:c-ideal-in-primary-original-ideal}, each $\Ccore_{\mc D}(Q_i)\subseteq Q_i$. Intersecting, we get that $\Ccore_{\mc D}(J)\subseteq J$ as desired.
\end{proof}
\begin{prop}[{Cf. \cite[Lemma~3.5]{Schwede.10a}}]
\label{prop:arbitrary-sum-c-ideals}
Let $(R,\mc D)$ be an $F$-finite, $F$-pure pair, and let $\{J_\alpha\}_{\alpha \in \mc A}$ be a collection of ideals with $\Ccore_{\mc D}(J_\alpha)=J_\alpha$ for all $\alpha\in \mc A$.
Then we have
\begin{align*}
\Ccore_{\mc D}\left(\sum_\alpha J_\alpha\right) &= \sum_\alpha \Ccore_{\mc D}(J_\alpha).
\end{align*}
\end{prop}
\begin{proof}
Since $J_\beta\subseteq \sum J_\alpha$, we have $\Ccore_{\mc D}(J_\beta)\subseteq \Ccore_{\mc D}\left(\sum_\alpha J_\alpha\right)$ for all $\beta\in \mc A$ by \Cref{containment}, and so
\[
\sum_\alpha \Ccore_{\mc D}(J_\alpha)\subseteq \Ccore_{\mc D}\left(\sum_\alpha J_\alpha\right).
\]
For the reverse containment, we use our assumption that $\Ccore_{\mc D}(J_\alpha)=J_\alpha$ and \Cref{c-ideal-in-original-ideal} to see that
\[
\Ccore_{\mc D}\left(\sum J_\alpha\right) = \Ccore_{\mc D}\left(\sum \Ccore_{\mc D}(J_\alpha)\right) \subseteq \sum \Ccore_{\mc D}(J_\alpha),
\]
which is our desired opposite inclusion.
\end{proof}
\begin{prop}
\label{prop:c-ideal-stable}
If the pair~$(R,\mc D)$ is $F$-finite and $F$-pure, then for any ideal $J$ in $R$,
\[
\Ccore_{\mc D}(J) = \Ccore_{\mc D}\left(\Ccore_{\mc D}(J)\right).
\]
\end{prop}
\begin{proof}
By \Cref{c-ideal-in-original-ideal}, we know that $\Ccore_{\mc D}(J)\subseteq J$. Then $\Ccore_{\mc D}\left(\Ccore_{\mc D}(J)\right)\subseteq \Ccore_{\mc D}(J)$ by \Cref{containment}, so it suffices to show the other direction. Consider $f\notin \Ccore_{\mc D}\left(\Ccore_{\mc D}(J)\right)$. Thus there exists $e>0$ and $\phi\in \mc D_e$ with $\phi(F_*^e f)\notin \Ccore_{\mc D}(J)$. Then there must also exist $e'$ and $\phi'\in \mc D_{e'}$ with $\phi'\left(F_*^{e'}(\phi(F_*^e f))\right)\notin J$. This term can be rewritten as $(\phi'\cdot \phi)(F_*^{e'+e}(f)) = \phi'\left(F_*^{e'}(\phi(F_*^e f))\right)$, and so $f\notin \Ccore_{\mc D}(J)$.
\end{proof}
The following result is known when $\mc D = \mc C_R$ \cite{Badilla-Cespedes.21}, and for triples $(R,\Delta, \mf a^t)$ \cite{Schwede.10a}. The proof in the Cartier algebra setting proceeds the same as Badilla-C\'espedes' proof, with a little care needed for the exponents used.
\begin{prop}[{Cf.
\cite[Rmk.~4.14]{Badilla-Cespedes.21}, \cite[Cor.~3.3]{Schwede.10a}}]
\label{c-ideal-radical}
If the pair~$(R,\mc D)$ is $F$-finite and $F$-pure, then for any ideal $J$, the Cartier core $\Ccore_{\mc D}(J)$ is radical.
\end{prop}
\begin{proof}
Suppose $r\in \sqrt{\Ccore_{\mc D}(J)}$. Then there exists some $n$ so that $r^{p^n}\in \Ccore_{\mc D}(J)$. Since the pair is $F$-pure, there also exists some $\psi \in \mc D_d$ so that $\psi(F_*^d1)=1$. Take $e = nd$, so that $\phi = \psi^n \in \mc D_e$ satisfies $\phi(F_*^e 1)=1$. Since $r^{p^e}$ is a multiple of $r^{p^n}$, we have $r^{p^e}\in \Ccore_{\mc D}(J)$, which by \Cref{prop:c-ideal-stable} equals $\Ccore_{\mc D}(\Ccore_{\mc D}(J))$. Then
\[
\phi(F_*^e(r^{p^e})) = r\phi(F_*^e 1) = r \in \Ccore_{\mc D}(J).\qedhere
\]
\end{proof}
The hypothesis that $(R,\mc D)$ be $F$-pure is necessary. Consider $R=k[x]/\langle x^2\rangle$ where $k$ is an $F$-finite field, and let $\mc D =\mc C_R$. This ring $R$ is non-reduced, so cannot be $F$-pure. Using the presentation from \Cref{c-ideal-quo-reg-ring}, we compute
\begin{align*}
A_e(\olin{\langle x^2\rangle}) &= \olin{\langle x^2 \rangle^{[p^e]}:_{k[x]}\left(\langle x^2\rangle ^{[p^e]}:_{k[x]}\langle x^2 \rangle\right)} = \olin{ \langle x^{2p^e} \rangle :_{k[x]} \langle x^{2p^e-2}\rangle } = \olin{\langle x^{2}\rangle}.
\end{align*}
Intersecting over all $e$, we see that $\Ccore_R(\olin{\langle x^2\rangle}) =\olin{\langle x^2\rangle}$, a non-radical ideal.
\begin{thm}[{Cf. \cite[Prop.~4.9, Thm.~4.10]{Badilla-Cespedes.21}}]
\label{cartier-core=D-compatible}
If the pair~$(R,\mc D)$ is $F$-finite and $F$-pure, then the set of Cartier cores with respect to $\mc D$, i.e., the set \(\left\{ \Ccore_{\mc D}(J) \: | \: J\text{ an ideal of \(R\)} \right\}\), is precisely the set of $\mc D$-compatible ideals.
\end{thm}
\begin{proof}
An ideal $J$ is $\mc D$-compatible precisely when $\varphi(F_*^e(J))\subseteq J$ for all $e$ and for all $\varphi \in \mc D_e$, and thus by construction $J$ is $\mc D$-compatible if and only if $J\subseteq \Ccore_{\mc D}(J)$. By \Cref{c-ideal-in-original-ideal}, if the pair~$(R,\mc D)$ is $F$-pure then this is equivalent to having $J=\Ccore_{\mc D}(J)$. This shows that every $\mc D$-compatible ideal is a Cartier core. Conversely, the Cartier core $\Ccore_{\mc D}(J)$ is $\mc D$-compatible since by \Cref{prop:c-ideal-stable} we have $\Ccore_{\mc D}(J) = \Ccore_{\mc D}(\Ccore_{\mc D}(J))$.
\end{proof}
\begin{cor}[{Cf. \cite[Prop.~4.11]{Badilla-Cespedes.21}}]
\label{largest-D-compat}
If the pair~$(R,\mc D)$ is $F$-finite and $F$-pure and $J$ is an ideal of $R$, then $\Ccore_{\mc D}(J)$ is the largest $\mc D$-compatible ideal contained in $J$.
\end{cor}
\begin{proof}
$\Ccore_{\mc D}(J)$ is $\mc D$-compatible by the previous result. If another $\mc D$-compatible ideal~$J'$ has $\Ccore_{\mc D}(J) \subseteq J' \subseteq J$, then by \Cref{containment} we have $\Ccore_{\mc D}(\Ccore_{\mc D}(J)) \subseteq \Ccore_{\mc D}(J') \subseteq \Ccore_{\mc D}(J)$, and the outer terms agree by \Cref{prop:c-ideal-stable}, so $\Ccore_{\mc D}(J') = \Ccore_{\mc D}(J)$. Since $J'$ is $\mc D$-compatible, \Cref{cartier-core=D-compatible} gives $J'=\Ccore_{\mc D}(J')=\Ccore_{\mc D}(J)$.
\end{proof}
\begin{prop}[Cf. {\cite[Cor.~5.3]{Schwede.10a}}]
\label{fin-many-C-cores}
If $(R,\mc D)$ is an $F$-finite, $F$-pure pair, then there are only finitely many Cartier cores with respect to $\mc D$, i.e., there are only finitely many $\mc D$-compatible ideals.
\end{prop}
Schwede proved this result in the case of sharply $F$-pure triples $(R,\Delta, \mf a_\bullet)$, where $R$ is a reduced local $F$-finite ring, $\Delta$ is an effective $\mathbb{R}$-divisor on $\Spec R$, and $\mf a_\bullet$ is a graded system of ideals \cite{Schwede.10a}. Both our result and Schwede's result rely on the following useful result of Enescu and Hochster.
\begin{thm}[{\cite[Cor.~3.2]{Enescu+Hochster.08}}]
A family of radical ideals in an excellent local ring closed under sum, intersection, and primary decomposition is finite.
\end{thm}
\begin{proof}[Proof of \Cref{fin-many-C-cores}]
First we handle the case where $(R,\mf m)$ is a local ring. Since it is $F$-finite, it is excellent \cite[Thm.~2.5]{Kunz.76}. We have already seen that the set of Cartier cores is a family of radical ideals closed under sum and intersection (see \Cref{c-ideal-radical}, \Cref{prop:arbitrary-sum-c-ideals}, \Cref{arbitrary-intersection-c-ideal}). Our next step is to show that the family is also closed under taking minimal primes. If $J=\Ccore_{\mc D}(J)$ is a Cartier core, write $J=P_1\cap \cdots \cap P_t$ as the intersection of its minimal primes. Then
\[
J=\Ccore_{\mc D}(J) = \Ccore_{\mc D}(P_1)\cap \cdots \cap \Ccore_{\mc D}(P_t),
\]
and since each $\Ccore_{\mc D}(P_i)$ is either all of $R$ or a prime contained in $P_i$ by \Cref{proper-prime} and \Cref{c-ideal-in-original-ideal}, minimality gives $\Ccore_{\mc D}(P_i)=P_i$ for each $i$. Thus we can apply Corollary~3.2 of \cite{Enescu+Hochster.08} to see there are finitely many Cartier cores. To generalize to when $R$ is not local, we proceed by way of contradiction. Suppose the set of Cartier cores with respect to $\mc D$ is infinite. Since $R$ is Noetherian, we may pick a maximal proper Cartier core and call it $J_1$. By the above minimal prime argument, this maximal proper Cartier core must be prime. We will inductively continue to pick prime ideals $J_i$ such that $J_i$ is maximal among the set
\[
\left\{ J \subsetneq R \: | \: J = \Ccore_{\mc D}(J) \textrm{ and } \ \forall \ell<i,\ J\not\subseteq J_\ell\right\}.
\]
Each resulting ideal is prime, since if $J$ is not contained in any $J_\ell$ then neither is any of its minimal primes, and these minimal primes are again proper Cartier cores containing $J$, so maximality forces $J$ to be prime.
This set of Cartier cores remains non-empty since for every $J_i$, the local ring $R_{J_i}$ contains only finitely many Cartier cores with respect to $\mc D_{J_i}$, which by \Cref{localization} implies $J_i$ contains only finitely many Cartier cores with respect to $\mc D$. Throwing these Cartier cores away still leaves infinitely many remaining. We will now consider the descending chain of ideals
\[
J_1 \supseteq J_1\cap J_2 \supseteq J_1 \cap J_2 \cap J_3 \supseteq \cdots.
\]
If we ever had $J_1\cap \cdots \cap J_{i-1} = J_1 \cap \cdots \cap J_{i-1}\cap J_i$, this would mean that $J_1 \cap \cdots \cap J_{i-1}\subseteq J_i$. Since $J_i$ is prime, this would imply $J_\ell \subseteq J_i$ for some $\ell<i$. But $J_i$ is not contained in any $J_m$ with $m<\ell$, and $J_i\neq J_\ell$ since $J_i\not\subseteq J_\ell$, so the strict containment $J_\ell \subsetneq J_i$ would contradict the maximality of $J_\ell$ amongst such ideals. Thus our chain of ideals must be an infinite descending chain of Cartier cores with respect to $\mc D$, which contradicts the fact that there are only finitely many such Cartier cores contained in $J_1$. So there must have been only finitely many Cartier cores in $R$ to begin with.
\end{proof}
\begin{thm}
\label{cts-map}
Let $(R,\mc D)$ be an $F$-finite pair, and let $\mc U_{\mc D}$ denote the $F$-pure locus of $(R,\mc D)$. Then the map $\Ccore_{\mc D}:\mc U_{\mc D} \to \mc U_{\mc D}$ is continuous with respect to the Zariski topology.
\end{thm}
\begin{proof}
We will show that the inverse image of the closed set $V = \bb V(J)\cap \mc U_{\mc D}$ is also closed, where $J$ is an ideal of $R$. Let $K$ be the intersection of all Cartier cores of primes in $\mc U_{\mc D}$ which contain $J$, so that
\[
K = \bigcap_{\substack{P \in \mc U_{\mc D}\\ \Ccore_{\mc D}(P)\in \mathbb V(J)}} \Ccore_{\mc D}(P).
\]
Since the set of Cartier cores with respect to $\mc D$ is closed under arbitrary intersection by \Cref{arbitrary-intersection-c-ideal}, $K = \Ccore_{\mc D}(K)$ is also a Cartier core. We claim that $\Ccore_{\mc D}^{-1}(V) = \mathbb V(K)\cap \mc U_{\mc D}$.
Suppose $P\in \Ccore_{\mc D}^{-1}(V)$. Then since $P\in \mc U_{\mc D}$, we have $\Ccore_{\mc D}(P)\subseteq P$ by \Cref{prop:c-ideal-in-primary-original-ideal}. Since $\Ccore_{\mc D}(P)\in \bb V(J)$, we have $J\subseteq \Ccore_{\mc D}(P)$ and so by construction, $K\subseteq \Ccore_{\mc D}(P)\subseteq P$. Thus $\Ccore_{\mc D}^{-1}(V) \subseteq \bb V(K)\cap \mc U_{\mc D}$. Conversely, if $P\in \bb V(K)\cap \mc U_{\mc D}$, then $K\subseteq P$ and by \Cref{containment},
\[
J\subseteq K = \Ccore_{\mc D}(K) \subseteq \Ccore_{\mc D}(P).
\]
Thus $\bb V(K)\cap \mc U_{\mc D} \subseteq \Ccore_{\mc D}^{-1}(V)$.
\end{proof}
\section{Quotients of Regular Rings}
\label{sec:quo-regular-ring}
Now that we have seen some abstract properties of the Cartier core map, $\Ccore_{\mc D}$, we shift our focus to actually computing it. In this section we give a concrete description of the Cartier core in the case when $R$ is presented as a quotient of a regular ring and $\mc D$ is the full Cartier algebra $\mc C_R$. We will then use this concrete description to show that the Cartier core commutes with adjoining a variable and with homogenization (in the case that our regular ring is a polynomial ring). One reason to focus on this case is that regularity of $S$ forces $F_*^eS$ and $\hom_S(F_*^eS, S)$ to be well-behaved, as the following results of Kunz and of Fedder illustrate.
\begin{thm}[{\cite[Cor.~2.7]{Kunz.69}}]
If $R$ is a Noetherian ring of prime characteristic, then $R$ is regular if and only if $F_*R$ is a flat $R$-module.
\end{thm}
\begin{lemma}[{\cite[Lemma~1.6]{Fedder.83}}]
If $S$ is an $F$-finite regular local ring, then $\hom_S(F_*^eS, S)$ is a free $F_*^eS$-module of rank one.
\end{lemma}
Further, Glassbrenner, building on work of Fedder, gives us the following description of the $R$-module structure on maps in the local case.
\begin{lemma}[{Fedder's Lemma \cite[Lemma~2.1]{Glassbrenner.96}}]
\label{qr-hom-presentation}
Let $S$ be an $F$-finite regular local ring and let $R=S/I$ for some ideal $I$.
Then
\[
\hom_{R}(F_*^e R, R) \cong F_*^e\left( \frac{I^{[p^e]}:I}{I^{[p^e]}} \right)
\]
as $R$-modules.
\end{lemma}
This description of $\hom_R(F_*^eR, R)$ is the core of Fedder's criterion and of Glassbrenner's criterion.
\begin{prop}[{\cite[Prop.~1.7]{Fedder.83}}]
Let $(S,\mf m)$ be an $F$-finite regular local ring of prime characteristic $p$, let $I$ be an ideal of $S$, and set $R=S/I$. Then $R$ is $F$-pure if and only if $(I^{[p]}:I)\not\subseteq \mf m^{[p]}$.
\end{prop}
\begin{lemma}[{\cite[Lemma~2.2]{Glassbrenner.96}}]
\label{lem:e-split-locus}
Let $(S,\mf m)$ be an $F$-finite regular local ring of prime characteristic $p$. Let $I$ be an ideal of $S$. Then the map $S/I \to F_*^e(S/I)$, where $1\mapsto F_*^ec$, splits as an $(S/I)$-module map exactly when $c\notin \mf m^{[p^e]} :(I^{[p^e]}:I)$.
\end{lemma}
Since the Cartier core of $J$ is composed precisely of the elements which cannot be split in this manner, the technique of this lemma naturally leads to the following result.
\begin{thm}
\label{c-ideal-quo-reg-ring}
Let $S$ be a regular $F$-finite ring, let $I\subseteq J$ be ideals of $S$, and let $R=S/I$. Fix $e\geq 1$, $c\in S$. Then there exists some $\varphi \in \hom_R(F_*^eR, R)$ with $\varphi(F_*^e\olin c)\notin \olin{J}$ if and only if $c\notin J^{[p^e]}:(I^{[p^e]}:I)$. In particular,
\[
A_{e;R}(\olin J) = \olin{J^{[p^e]}:_S (I^{[p^e]}:_S I)} \quad\textrm{ and }\quad \Ccore_R(\olin J) = \olin{\bigcap_e J^{[p^e]}:_S (I^{[p^e]}:_S I)}.
\]
\end{thm}
\begin{proof}
The representations of $A_e$ and $\Ccore_R$ follow directly from the first statement, so it suffices to prove that $c\notin J^{[p^e]}:(I^{[p^e]}:I)$ if and only if there is some $\varphi \in \hom_R(F_*^eR, R)$ with $\varphi(F_*^e\olin c)\notin \olin J$. For our fixed $e$, let $E: \hom_R(F_*^e R, R)\to R$ be the ``evaluation at $F_*^e\olin c$'' map, so that $E(\varphi) = \varphi(F_*^e\olin c)$. Our goal is to show $\im(E)\subseteq \olin J$ if and only if $c\in J^{[p^e]}:(I^{[p^e]}:I)$.
By the discussion in \Cref{sec:background}, we can view the localization of $E$ as a map $\hom_{R_P}(F_*^e(R_P), R_P)\to R_P$ so that $(\im E)_P \cong \im (E_P)$. Since localization also commutes with Frobenius and with ideal colon, we can without loss of generality assume that $(S,\mf m)$ is local. Let $\Phi$ be a generator of $\hom_S(F_*^e S, S)$ as an $F_*^e S$-module. By \Cref{qr-hom-presentation}, the maps $\varphi\in \hom_R(F_*^eR, R)$ are exactly those maps induced by something of the form $\Phi\circ F_*^e(s)$ where $s\in I^{[p^e]}:I$. Thus
\[
\varphi(F_*^e(\olin c)) = \olin{(\Phi\circ F_*^es)(F_*^ec)} = \olin{\Phi(F_*^e(sc))}
\]
and so there exists $\varphi$ with $\varphi(F_*^e(\olin c))\notin \olin J$ if and only if there exists $s\in I^{[p^e]}:I$ with $\Phi(F_*^e(sc))\notin J$, i.e., if and only if
\[
\Phi\left(F_*^e\left( c(I^{[p^e]}:I)\right)\right) = (F_*^ec \cdot \Phi)\left(F_*^e(I^{[p^e]}:I)\right)\not\subseteq J.
\]
Using \cite[Lemma~1.6]{Fedder.83}, this occurs if and only if
\[
F_*^e(c)\notin (J F_*^eS):(F_*^e(I^{[p^e]}:I)) = F_*^e(J^{[p^e]}):F_*^e(I^{[p^e]}:I).
\]
Since $S$ is regular, the flat Frobenius commutes with colon and is injective, thus this is equivalent to
\[
c\notin J^{[p^e]}:(I^{[p^e]}:I).\qedhere
\]
\end{proof}
We will frequently move between considering $\Ccore_R(\olin J)$ in $R$ and its lift $\bigcap_{e>0}J^{[p^e]}:(I^{[p^e]}:I)$ in $S$, which we will denote as either $\wt{\Ccore}_R(J)$ or $\wt{\Ccore}_R(\olin J)$. Similarly, we will denote the lift of $A_{e;R}(\olin J)$ as $\wt A_{e;R}(J)$ or $\wt A_{e;R}(\olin J)$. We now prove results which let us connect Cartier cores of related ideals computed in different, related rings.
\begin{lemma}
\label{qr-flat-extension-containment}
Let $S_1\to S_2$ be a flat map of regular $F$-finite rings. Consider ideals $I\subseteq J_1$ in $S_1$, and an ideal $J_2$ in $S_2$ contracting to $J_1$. Let $R_1 = S_1/I$ and $R_2 = S_2/IS_2$. Then
\[
\Ccore_{R_1}(\olin{J_1})R_2 \subseteq \Ccore_{R_2}(\olin{J_2}).
\] \end{lemma} \begin{proof} Finite intersections always commute with flat base change. Thus for any sequence of ideals $\{K_e\}_{e\in \mathbb{N}}$ and for any $n$, \[ \left(\bigcap_{e=1}^\infty K_e\right) S_2 \subseteq \left(\bigcap_{e=1}^n K_e\right) S_2 = \bigcap_{e=1}^n (K_eS_2) \] and in particular we must have $\left(\bigcap_{e=1}^\infty K_e\right) S_2 \subseteq \bigcap_{e=1}^\infty (K_eS_2)$. Colon commutes with flat base change when the ideals are finitely generated \cite[Thm.~7.4]{Matsumura.89}. Thus \begin{align*} \left(\bigcap_{e\geq 1}J_1^{[p^e]}:(I^{[p^e]}:I)\right) S_2 &\subseteq \bigcap_{e\geq 1} \left( (J_1S_2)^{[p^e]}:((IS_2)^{[p^e]}:IS_2) \right) \subseteq \bigcap_{e\geq 1} \left( J_2^{[p^e]}:((IS_2)^{[p^e]}:IS_2) \right), \end{align*} which by using \Cref{c-ideal-quo-reg-ring} to pass to the quotient gives \[ C_{R_1}(\olin{J_1})R_2 \subseteq C_{R_2}(\olin{J_2}).\qedhere \] \end{proof} In the case of a general flat map, even a general faithfully flat map, containment is the best we can do. For example, consider $S_1=k[x^p]$ and $S_2 = k[x]$ where $k$ is a perfect field. The inclusion of $S_1$ into $S_2$ is faithfully flat since it corresponds to the Frobenius on the regular ring $k[x]$. Now consider $I=J_1=\langle x^p\rangle\subset S_1$ and $J_2 = \langle x \rangle \subset S_2$. Then $R_1=S_1/I\cong k$, which is Frobenius split, so $C_{R_1}(\olin J_1) = \olin J_1$. But $R_2 = S_2/IS_2 = k[x]/\langle x^p\rangle$ is not reduced, thus cannot be Frobenius split. Since $\olin J_2$ is a prime ideal, this means $C_{R_2}(\olin J_2)=R_2$. However, it turns out that in the case of adjoining a variable, we can get a stronger result. \begin{prop} \label{qr-add-var} Let $R$ be a quotient of a regular $F$-finite ring, let $J$ be an ideal of $R$, and let $J'$ be an ideal of $R[x]$ such that $JR[x] \subseteq J' \subseteq JR[x]+\langle x \rangle$. Then \[ C_{R}(J)R[x] = C_{R[x]}(J') \quad \textrm{ and } \quad C_{R[x]}(J')\cap R = C_R(J).
\] \end{prop} \begin{proof} By \Cref{containment}, \[ C_{R[x]}(JR[x]) \subseteq C_{R[x]}(J') \subseteq C_{R[x]}(JR[x]+\langle x\rangle). \] Our first step will be to show \[ C_{R[x]}(JR[x]) \supseteq C_{R[x]}(JR[x] + \langle x\rangle), \] which will then give us $C_{R[x]}(JR[x])=C_{R[x]}(J') = C_{R[x]}(JR[x] + \langle x \rangle)$. To do so, note that by assumption we can write $R=S/I$ where $S$ is a regular $F$-finite ring, and so we can also write $R[x] = S[x]/IS[x]$. We use $\wt{\ }$ to denote lifting an ideal from $R$ or $R[x]$ to $S$ or $S[x]$, as appropriate. Consider $S[x]$ to be $\mathbb{N}$-graded by $x$. Since $\wt{JR[x]+\langle x \rangle}$, the lift of $JR[x]+\langle x\rangle$ to $S[x]$, is homogeneous, as is $IS[x]$, our lift of the Cartier core \[ \wt{\Ccore}_{R[x]}(JR[x]+\langle x\rangle) = \bigcap_{e>0} \bigl(\wt{JR[x]+\langle x\rangle}\bigr)^{[q]}:(IS[x]^{[q]}:IS[x]) \] is also homogeneous, where $q = p^e$. Consider some homogeneous $g$ in this lift of the Cartier core. Ideal colon commutes with flat maps, and $S\to S[x]$ and the Frobenius are both flat. Thus for every $q=p^e$ we have \[ IS[x]^{[q]}:IS[x] = (I^{[q]}:I)S[x]. \] Since $g$ lies in $\wt{A}_{e;R[x]}(JR[x]+\langle x\rangle)$ for every $e$, we must have $g(I^{[q]}:I)\subseteq (\wt{JR[x]+\langle x\rangle})^{[q]}$. However, any element of $(\wt{JR[x]+\langle x\rangle})^{[q]}$ of degree less than $q$ must be expressible in terms of elements of $\wt{JR[x]}^{[q]}$. In particular, if $q>\deg g$ then $g(I^{[q]}:I) \subseteq \wt{JR[x]}^{[q]}$. Thus for $e\gg 0$, we have \[ g\in \wt{JR[x]}^{[q]}:(IS[x]^{[q]}:IS[x]) = \wt{A}_{e;R[x]}(JR[x]). \] By \cite[Prop.~4.15]{Badilla-Cespedes.21}, since $\Ccore_{R[x]}(JR[x]) = \bigcap_{e\gg0}A_{e;R[x]}(JR[x])$, this tells us that \[ \Ccore_{R[x]}(JR[x]+\langle x \rangle) \subseteq \Ccore_{R[x]}(JR[x]) \] as desired. Now we have shown $C_{R[x]}(JR[x])=C_{R[x]}(J')$, and it suffices to show $C_R(J)R[x]=C_{R[x]}(JR[x])$. To do so, we will show that adjoining a variable commutes with infinite intersection.
Consider an arbitrary ideal $K = \bigcap_\alpha K_\alpha$ in $S$. As a set, each $K_\alpha S[x]$ consists of the polynomials with coefficients in $K_\alpha$, and so the polynomials in $\bigcap_\alpha K_\alpha S[x]$ are those with coefficients in $K_\alpha$ for every $\alpha$, which is precisely $KS[x]$, as desired. This lets us repeat the argument in \Cref{qr-flat-extension-containment} but with equalities, and thus \[ C_R(J)R[x] = C_{R[x]}(JR[x]) = C_{R[x]}(J') \] as desired. The contraction result then follows directly from the fact that adjoining a variable is faithfully flat, so that \[ C_R(J) = C_R(J)R[x] \cap R = C_{R[x]}(J')\cap R. \qedhere \] \end{proof} If $R$ is a quotient of a polynomial ring by a homogeneous ideal, we can also look at how the Cartier core behaves under homogenization. More concretely, take $R=S/I$ for $S=k[x_1,\ldots, x_d]$ and $I$ a homogeneous ideal of $S$, so that $R$ is $\mathbb{N}$-graded. If $f\in R$, we let $f^h$ denote the minimal homogenization of $f$ in $R[t]$, so that \[ f^h = t^{\deg f} f\left(\frac{x_1}{t}, \ldots, \frac{x_d}{t}\right). \] If $J$ is an ideal of $R$, we define its homogenization to be $J^h = \langle f^h \: | \: f\in J\rangle$. For any degree-preserving lift of $f$ to $S$, there is a corresponding lift of $f^h$ to $S[t]$ so that the lift of the homogenization is the homogenization of the lift. This means we can freely consider a given homogenization to live either in $R[t]$ or in $S[t]$. Further, the ideals $\wt{(J^h)}$ and $(\wt J)^h$ are the same: $\wt{(J^h)}$ is generated by the lifts of the homogenizations of elements of $J$, and $(\wt J)^h$ is generated by homogenizations of lifts of elements of $J$. There is also a corresponding dehomogenization map $\delta:R[t]\to R$ defined by $\delta(t)=1$, which ensures that $\delta(f^h) = f$. We recall the following straightforward facts about homogenization. \begin{lemma} Let $R$ be a quotient of a polynomial ring by a homogeneous ideal.
Let $I,J$ be ideals of $R$, and $\{I_\alpha\}$ a family of ideals. Let $f$ be an element of $R$. Then the following statements all hold. \begin{itemize} \item $f\in I$ if and only if $f^h\in I^h$. \item $(I:J)^h=I^h:J^h$ and $\left( \bigcap I_\alpha\right)^h = \bigcap (I_\alpha^h)$. \item $(I^{h})^{[p^e]} = (I^{[p^e]})^h$. \end{itemize} \end{lemma} Using these facts, we will prove the following lemma. \begin{lemma} \label{qr-homogenization} Let $R=S/I$ where $S$ is a polynomial ring over an $F$-finite field and $I$ is a homogeneous ideal. Let $J$ be an ideal of $R$. Then \[ \left(C_R(J)\right)^h = C_{R[t]}(J^h) \quad\textrm{ and }\quad C_R(J) = \delta\left( C_{R[t]}(J^h)\right). \] \end{lemma} \begin{proof} If we lift to $S[t]$ using \Cref{c-ideal-quo-reg-ring} and the above discussion on lifting and homogenization (recall that $I^h = I$, since $I$ is homogeneous), then \begin{align*} \wt{\left(C_R(J)\right)^h} &= \left(\wt C_R(J)\right)^h \\ &= \left(\bigcap_{e>0} (\wt J)^{[q]}:(I^{[q]}:I)\right)^h \\ &= \bigcap_{e>0} \left((\wt{J}^h)^{[q]}:((I^h)^{[q]}:I^h) \right) \\ &= \bigcap_{e>0} \left((\wt{J^h})^{[q]}:(I^{[q]}:I) \right) \\ &= \wt{C}_{R[t]}(J^h) \end{align*} and so contracting back to $R[t]$ via \Cref{c-ideal-quo-reg-ring}, \[ (C_R(J))^h = C_{R[t]}(J^h). \] The last statement follows directly from dehomogenizing each side of the equation. \end{proof} Note that in fact the proof above shows that if $S=k[x_1,\ldots, x_n]$ and $I$ is any ideal, not necessarily homogeneous, and $J\supseteq I$, then \[ \left(\wt C_{S/I}(\olin J)\right)^h = \wt C_{S/I^h}(\olin{J^h}). \] \section{Stanley-Reisner Rings} \label{sec:stanley-reisner} A ring $R$ is a \emph{Stanley-Reisner ring} if it can be written as $R = S/I$, where $S$ is a polynomial ring and $I$ is a square-free monomial ideal. The following theorem gives a complete description of the Cartier core map for $\Spec R$ where $R$ is a Stanley-Reisner ring.
\begin{thm} \label{stanley-reisner-c-map} Let $R$ be a Stanley-Reisner ring over an $F$-finite field of prime characteristic. Let $Q$ be any prime ideal. Then \[ C(Q) = \sum_{\substack{P \in \Min(R) \\ P\subseteq Q}} P. \] In particular, the set of prime Cartier cores of $R$, i.e., the set of generic points of $F$-pure centers of $R$, is the set of sums of minimal primes. \end{thm} This theorem extends some earlier results. Aberbach and Enescu showed that the splitting prime of a Stanley-Reisner ring, which is its largest proper uniformly $F$-compatible ideal, is the sum of the minimal primes \cite[Prop~4.10]{Aberbach+Enescu.05}. For the reader's convenience, we will reprove this in our proof of \Cref{stanley-reisner-c-map}. At the other extreme, Vassilev showed that the test ideal of a Stanley-Reisner ring, which is its smallest non-zero uniformly $F$-compatible ideal, is $\sum_{i=1}^t \bigcap_{j\neq i} P_j$ where $P_1,\ldots, P_t$ are the minimal primes of $R$ \cite[Thm.~3.7]{Vassilev.98}. In a related but different direction, for a specific choice of $\phi:F_*^e R\to R$, Enescu and Ilioaea showed that the $\phi$-compatible primes of $R$ are precisely the prime monomial ideals which \emph{contain} a minimal prime of $R$. They used this to give a combinatorial description of the test ideal of the pair~$(R,\phi)$ \cite[Prop.~3.9, Prop.~3.10]{Enescu+Ilioaea.20}. Badilla-C\'espedes showed that if $P'$ is a prime monomial ideal, then $\Ccore(P')$ as well as each $A_e(P')$ is also a monomial ideal, and more explicitly that $A_e(P') = (P')^{[p^e]}+\Ccore(P')$ in this setting \cite[Lemma~4.16, Prop~4.17]{Badilla-Cespedes.21}. Meanwhile, \`Alvarez Montaner, Boix, and Zarzuela gave a concrete description of $I^{[p^e]}:_S I$ in terms of the minimal primes of $I$, which could be used to explicitly compute the Cartier contractions for any ideal $J$ \cite[Prop~3.2]{AlvarezMontaner+etal.12}.
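For monomial ideals, every operation in the lifted formula $\wt A_{e;R}(Q) = Q^{[p^e]}:(I^{[p^e]}:I)$ from \Cref{c-ideal-quo-reg-ring} reduces to arithmetic on exponent vectors: bracket powers scale exponents, colons subtract them, and intersections take componentwise maxima. The following Python sketch (an illustration with hypothetical data, not part of any proof) carries this out for $I = \langle xy, xz\rangle$ in $\mathbb{F}_2[x,y,z]$, whose minimal primes are $\langle x\rangle$ and $\langle y,z\rangle$, and the monomial prime $Q = \langle x,y\rangle$, which contains only the minimal prime $\langle x\rangle$:

```python
from itertools import product

# Monomials in F_p[x, y, z] as exponent tuples; a monomial ideal is the
# set of its minimal generators.

def divides(a, b):
    return all(ai <= bi for ai, bi in zip(a, b))

def minimalize(gens):
    gens = set(gens)
    return {g for g in gens
            if not any(h != g and divides(h, g) for h in gens)}

def bracket(I, q):
    # Frobenius power I^{[q]}: raise each generator to the q-th power.
    return {tuple(q * e for e in g) for g in I}

def colon_monomial(I, m):
    # I : <m>, generated by g / gcd(g, m) for each generator g of I.
    return minimalize(tuple(max(gi - mi, 0) for gi, mi in zip(g, m))
                      for g in I)

def intersect(I, J):
    # I ∩ J, generated by the pairwise lcm's of generators.
    return minimalize(tuple(max(gi, hi) for gi, hi in zip(g, h))
                      for g, h in product(I, J))

def colon(I, J):
    # I : J = intersection over generators m of J of (I : <m>).
    K = None
    for m in J:
        C = colon_monomial(I, m)
        K = C if K is None else intersect(K, C)
    return K

p = 2
I = {(1, 1, 0), (1, 0, 1)}   # <xy, xz>, minimal primes <x> and <y, z>
Q = {(1, 0, 0), (0, 1, 0)}   # <x, y>, containing only <x>

for e in (1, 2, 3):
    q = p ** e
    A_e = colon(bracket(Q, q), colon(bracket(I, q), I))  # lift of A_e(Q)
    print(e, sorted(A_e))
```

The computed lifts come out as $\wt A_e(Q) = \langle x, y^{2^e}\rangle = Q^{[2^e]} + \langle x\rangle$, so that $\bigcap_e \wt A_e(Q) = \langle x\rangle$, matching both \Cref{stanley-reisner-c-map} and the description $A_e(P') = (P')^{[p^e]} + \Ccore(P')$ of Badilla-C\'espedes.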
\begin{proof}[Proof of \Cref{stanley-reisner-c-map}] Our proof will proceed as follows: first, we reduce to the case where every minimal prime is contained in $Q$; then we homogenize, trap $Q^h$ between a sum of minimal primes and the homogeneous maximal ideal, and use \Cref{containment} and the convenient form of monomial primes to get the desired equality. Let $\wt{\ }$ denote the lift of any ideal to $S$, let $I' = \displaystyle\bigcap_{P\in \Min(R),\ P \subseteq Q} \wt P$ be the intersection of the minimal primes contained in $Q$, and let $R'=S/I'$. Then \[ R_{Q} \cong S_{\wt Q}/I_{\wt Q} = S_{\wt Q}/I'_{\wt Q} \cong R'_{Q} \] and so by \Cref{localization}, \[ C_R(Q)R_{Q} = C_{R_{Q}}(Q) = C_{R'_{Q}}(Q) = C_{R'}(Q)R'_{Q}. \] Stanley-Reisner rings are $F$-pure \cite[Prop.~5.8]{Hochster+Roberts.76}, and so $C_R(Q)\subseteq Q$ by \Cref{c-ideal-in-original-ideal}, and thus when we lift back to $S$ using \Cref{c-ideal-quo-reg-ring}, we see \[ \bigcap_{e>0} \wt Q^{[p^e]}: (I^{[p^e]}: I) = \bigcap_{e>0} \wt Q^{[p^e]}:(I'^{[p^e]}:I'). \] Thus we can use $I'$ as our new $I$, and so we can assume $P\subseteq Q$ for all minimal primes $P$. Relabel the variables so that $\sum_{P\in \Min(R)} P = \langle x_1,\ldots, x_c\rangle$ and define $A = k[x_1,\ldots, x_c]/I$, so that $R = A[x_{c+1}, \ldots, x_{d}]$. Now we homogenize, so $Q^h \subseteq \mf m$ where $\mf m$ is the homogeneous maximal ideal in $S[t]$. Then \Cref{containment} tells us \[ C_{R[t]}\left(\sum_{P\in \Min(R)} P^h\right) \subseteq C_{R[t]}(Q^h) \subseteq C_{R[t]}(\olin{\mf m}). \] Each minimal prime $P$ of $R$ remains a minimal prime of $R[t]$ after homogenizing, so \Cref{c-minl-prime} says $C_{R[t]}(P^h) = P^h$, and \Cref{prop:arbitrary-sum-c-ideals} then says that their sum is also preserved by the Cartier core map. Applying \Cref{qr-add-var} to $\olin{\mf m}$, and writing $P_1, \ldots, P_t$ for the minimal primes of $R$, we get \[ \langle x_1,\ldots, x_c\rangle R[t] = C_{A}(P_1+\cdots+P_t)A[x_{c+1},\ldots, x_d,t] = C_{R[t]}(\olin{\mf m}).
\] Thus \[ \langle x_1,\ldots, x_c\rangle = \Ccore_{R[t]}\left(\sum_{P\in \Min(R)} P^h \right) \subseteq C_{R[t]}(Q^h) \subseteq C_{R[t]}(\olin{\mf m}) = \langle x_1,\ldots, x_c\rangle \] and by \Cref{qr-homogenization}, \[ (C_{R}(Q))^h = C_{R[t]}(Q^h) = \langle x_1,\ldots, x_c\rangle. \] Dehomogenizing the homogenization always gives back the original ideal, and so \[ C_R(Q) = \langle x_1,\ldots, x_c\rangle = \sum_{P\in \Min(R)} P \] as desired. For the last statement of the theorem, note that since each minimal prime of $R$ corresponds to an ideal of $S$ which is generated by variables, any sum of minimal primes is also prime, and thus is fixed by the Cartier core map. \end{proof} \printbibliography \end{document}
\section{Introduction} \label{S1} Consider the following $n$-dimensional process $x_t^{\epsilon}$ defined by \begin{align} dx_t^{\epsilon} = F(x_t^{\epsilon}, y_t^{\epsilon})dt \label{Eq1} \end{align} and an $m$-dimensional diffusion process $y_t^{\epsilon}$ that obeys the following stochastic differential equation \begin{align} dy_t^{\epsilon} = f(y_t^{\epsilon})dt + \sqrt{\epsilon} \sigma(y_t^{\epsilon}) d w_t, \,\, t \in [0, T], \,\, (x_0^{\epsilon}, y_0^{\epsilon})=(x_0, y_0), \label{Eq2} \end{align} where \begin{itemize} \item $(x_t^{\epsilon}, y_t^{\epsilon})$ jointly define an $\mathbb{R}^{(n+m)}$-valued diffusion process, \item the functions $F$ and $f$ are uniformly Lipschitz, with bounded first derivatives, \item $\sigma(y)$ is a Lipschitz continuous $\mathbb{R}^{m \times m}$-valued function such that $a(y) = \sigma(y)\,\sigma^{T}(y)$ is uniformly elliptic, i.e., \begin{align*} a_{min} \vert p \vert^2 < p \cdot a(y) p < a_{max} \vert p \vert^2, \quad p, y \in \mathbb{R}^{m}, \end{align*} for some $a_{max} > a_{min} > 0$, \item $w_t$ is a standard Wiener process in $\mathbb{R}^{m}$, and \item $\epsilon$ is a small positive number that represents the level of random perturbation in the system. \end{itemize} Let $\tau_{G_0}^{\epsilon}$ be the first exit time for the component $x_t^{\epsilon}$ from a bounded open domain $G_0 \subset \mathbb{R}^{n}$, with smooth boundary $\partial G_0$, i.e., \begin{align} \tau_{G_0}^{\epsilon} = \min \bigl\{ t > 0 \, \bigl\vert \, x_t^{\epsilon} \notin G_0 \bigr\}. \label{Eq3} \end{align} Here, we remark that a small random perturbation enters only in \eqref{Eq2} and is subsequently transmitted to the other dynamical system in \eqref{Eq1}. As a result, the $\mathbb{R}^{(n+m)}$-valued diffusion process $(x_t^{\epsilon}, y_t^{\epsilon})$ is degenerate, in the sense that its associated second-order operator is degenerate elliptic.
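To make the degenerate structure of \eqref{Eq1}--\eqref{Eq2} concrete, the following Python sketch simulates the pair by the Euler--Maruyama scheme for hypothetical one-dimensional coefficients $F(x,y) = -x + y$, $f(y) = -y$, $\sigma \equiv 1$ (toy choices, not taken from the text): noise enters only through $y_t^{\epsilon}$, and for small $\epsilon$ the slow component $x_t^{\epsilon}$ stays close to the unperturbed trajectory.

```python
import numpy as np

def simulate(eps, T=5.0, dt=1e-3, x0=1.0, y0=0.5, seed=0):
    """Euler-Maruyama for dx = F(x, y) dt, dy = f(y) dt + sqrt(eps) dw."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x, y = x0, y0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))
        x += (-x + y) * dt                    # F(x, y) = -x + y (toy choice)
        y += -y * dt + np.sqrt(eps) * dw      # f(y) = -y, sigma = 1
        xs[i] = x
    return xs

x_det = simulate(eps=0.0)      # unperturbed flow
x_eps = simulate(eps=1e-4)     # small random perturbation via y only
print(np.max(np.abs(x_eps - x_det)))   # deviation of order sqrt(eps)
```

The same random seed is reused for both runs, so the printed value isolates the effect of the $\sqrt{\epsilon}\,\sigma(y_t^{\epsilon})\,dw_t$ term transmitted to $x_t^{\epsilon}$ through $F$.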
Recently, the authors in \cite{BefA15a}, based on the following assumptions: \begin{enumerate}[i)] \item the infinitesimal generator \begin{align*} \mathcal{L}_0^{\epsilon}\bigl(\cdot\bigr)(x,y) = \frac{\epsilon}{2} \operatorname{tr}\bigl \{a(y)\bigtriangledown_y^2 \bigl(\cdot\bigr) \bigr\} + \bigl \langle \bigtriangledown_x \bigl(\cdot\bigr), F(x, y) \bigr\rangle + \bigl \langle \bigtriangledown_y \bigl(\cdot\bigr), f(y) \bigr\rangle, \end{align*} is hypoelliptic (e.g., see \cite{Hor67} or \cite{Ell73}; see also Remark~\ref{R2} below), and \item $\bigl \langle F(x, y), \nu(x) \bigr\rangle > 0$ for all $x \in \partial G_{0}$ and $y \in \mathbb{R}^m$, where $\nu(x)$ is the unit outward normal vector to $\partial G_{0}$ at $x$, \end{enumerate} have provided an asymptotic estimate for the probability with which the diffusion process exits from a given bounded open domain during a certain time interval. Note that the approach for such an asymptotic estimate relies on the interpretation of the exit probability function as a value function for a certain stochastic control problem that is associated with the underlying dynamical system with small random perturbations. \begin{remark} \label{R2} Note that, in general, the hypoellipticity assumption is related to a strong accessibility property of controllable nonlinear systems that are driven by white noise (e.g., see \cite{SusJu72} concerning the controllability of nonlinear systems, which is closely related to \cite{StrVa72} and \cite{IchKu74}; see also \cite[Section~3]{Ell73}). That is, the hypoellipticity assumption implies that the diffusion process $(x_t^{\epsilon}, y_t^{\epsilon})$ has a transition probability density with a strong Feller property.
\end{remark} Here, it is worth mentioning that some interesting studies on the exit probabilities for dynamical systems with small random perturbations have been reported in the literature (see, e.g., \cite{VenFre70}, \cite{FreWe84}, \cite{Kif90} and \cite{DupKu86} in the context of large deviations; see \cite{DupKu89}, \cite{Day86}, \cite{EvaIsh85}, \cite{Zab85}, \cite{Fle78}, \cite{FleTs81} and \cite{BefA15a} in connection with stochastic optimal control problems; and see \cite{Day86} or \cite{MatSc77} for an asymptotic-expansions approach). Note that the rationale behind our framework follows, in some sense, the settings of these papers -- where we establish a connection between the asymptotic estimates for the joint type occupation times and exit probability problems for a family of diffusion-transmutation processes and the corresponding solutions for the Dirichlet problem in a given bounded open domain with smooth boundary. Specifically, we consider the following Markov process $\bigl((x_t^{\epsilon},y_t^{\epsilon}), \nu_t^{\epsilon}\bigr)$ in the phase space $\mathbb{R}^{(n+m)} \times \{1,2, \dots, K\}$ (see Fig.~\ref{Fig-DSC_With_DTP}) \begin{align} \left.\begin{array}{l} dx_t^{\epsilon} = F(x_t^{\epsilon}, y_t^{\epsilon})dt \\ dy_t^{\epsilon} = f_{\nu_t^{\epsilon}}(y_t^{\epsilon})dt + \sqrt{\epsilon} \sigma_{\nu_t^{\epsilon}}(y_t^{\epsilon}) d w_t, \quad (x_0^{\epsilon}, y_0^{\epsilon})= (x_0, y_0) \end{array} \right \} \label{Eq4} \end{align} where \begin{itemize} \item $(x_t^{\epsilon}, y_t^{\epsilon})$ is an $\mathbb{R}^{(n+m)}$-valued diffusion process, \item the functions $F$ and $f_k$, $k=1,2, \ldots, K$, are uniformly Lipschitz, with bounded first derivatives, \item $\nu_t^{\epsilon}$ is a $\{1,2, \ldots, K\}$-valued process such that \begin{align*} \mathbb{P} \Bigl\{ \nu_{t+\triangle}^{\epsilon} = m \, \bigl\vert \,(x_t^{\epsilon}, y_t^{\epsilon}) = (x, y), \nu_t^{\epsilon} = k \Bigr\} = \frac{c_{km}(x,y)}{\epsilon} \triangle + o(\triangle) \,\,\,
\text{as} \,\,\, \triangle \downarrow 0, \end{align*} for $k, m \in \{1,2, \ldots, K\}$ and $k \neq m$, \item $\sigma_k(y)$ are Lipschitz continuous $\mathbb{R}^{m \times m}$-valued functions such that $a_k(y) = \sigma_k(y)\,\sigma_k^{T}(y)$ are uniformly elliptic, that is, \begin{align*} a_{min} \vert p \vert^2 < p \cdot a_k(y) p < a_{max} \vert p \vert^2, \quad p, y \in \mathbb{R}^{m}, k \in \{1,2, \ldots, K\}, \end{align*} for some $a_{max} > a_{min} > 0$, \item $w_t$ is a standard Wiener process in $\mathbb{R}^{m}$, and \item $\epsilon$ is a small positive number that represents the level of random perturbation in the system. \end{itemize} \begin{figure}[bht] \vspace{-2mm} \begin{center} \hspace{20 mm}\includegraphics[width=70mm]{Fig-DSC_With_DTP} {\scriptsize $ \begin{array}{c} \nu_t^{\epsilon} \,\colon\,\mathbb{P} \bigl\{ \nu_{t+\triangle}^{\epsilon} = m \, \bigl\vert \,(x_t^{\epsilon}, y_t^{\epsilon}) = (x, y), \nu_t^{\epsilon} = k \bigr\} = \dfrac{c_{km}(x,y)}{\epsilon} \triangle + o(\triangle) \\ \text{as} \,\,\, \triangle \downarrow 0 \,\,\, \text{for} \,\,\, k, m \in \{1,2, \ldots, K\} \,\,\, \text{and} \,\,\, k \neq m \end{array}$} \vspace{-2 mm} \caption{A dynamical system coupled with diffusion-transmutation processes} \label{Fig-DSC_With_DTP} \vspace{-5 mm} \end{center} \end{figure} \begin{remark} Note that the coupling of such a diffusion process with the so-called transmutation process in \eqref{Eq4} allows us to model random jumps or switchings from one state or mode to another, thereby modifying the dynamics of the system.
\end{remark} Define $z_t^{\epsilon}=(x_t^{\epsilon}, y_t^{\epsilon})$; then, with a minor abuse of notation, we can rewrite equation~\eqref{Eq4} as follows \begin{align} dz_t^{\epsilon} = F_{\nu_t^{\epsilon}}(z_t^{\epsilon})dt+ \sqrt{\epsilon} \hat{\sigma}_{\nu_t^{\epsilon}}(z_t^{\epsilon}) d w_t, \quad z_0^{\epsilon}=(x_0, y_0), \label{Eq51} \end{align} where \begin{align*} \left.\begin{array}{l} F_k = \operatorname{blockdiag}\bigl\{F,\, f_k\bigr\} \\ \hat{\sigma}_k = \bigl[0_{m \times n}, \sigma_k^T\bigr]^T \end{array} \right \} \end{align*} for $k = 1, 2, \ldots, K$. Here, we also assume that the transmutation coefficients $c_{km}(z)$, for $z \in \mathbb{R}^{(n+m)}$, are positive and Lipschitz continuous. Moreover, under these conditions (e.g., see \cite{EizF90}), there exists a unique vector $\bar{\omega}(z) = \bigl(\omega_1(z), \omega_2(z), \ldots, \omega_K(z)\bigr)$ such that \begin{align*} \omega_k(z) > 0, \quad \sum\nolimits_{k = 1}^{K} \omega_k(z) = 1 \quad \text{and} \quad \bar{\omega}(z) \hat{C}(z) = 0, \end{align*} where $\hat{C}(z) = \bigl(\hat{C}_{km}(z)\bigr)$ is a $K \times K$ matrix and \begin{align*} \left \{ \begin{array}{l} \hat{C}_{km}(z) = c_{km}(z) \quad\qquad \quad ~\text{for} \quad k \neq m,\\ \hat{C}_{kk}(z) = - \sum\nolimits_{j: j \neq k} c_{kj}(z) \quad \text{for} \quad k = m. \end{array} \right. \end{align*} Denote by $\mathbb{P}_{z_0, k}^{\epsilon}$ the probability measures in the space of trajectories of the process $(z_t^{\epsilon}, \nu_t^{\epsilon})$ and by $\mathbb{E}_{z_0, k}^{\epsilon}$ the associated expectation. Define the occupation time $r_t^{\epsilon}$ for the component $\nu_t^{\epsilon}$ as \begin{align*} r_t^{\epsilon} = \biggl(\int_0^t \chi_1\bigl(\nu_s^{\epsilon}\bigr) ds, \int_0^t \chi_2\bigl(\nu_s^{\epsilon}\bigr) ds, \ldots, \int_0^t \chi_K\bigl(\nu_s^{\epsilon}\bigr) ds \biggr), \end{align*} where $\chi_k$ is the indicator function of the singleton set $\{k\}$.
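For a fixed $z$, the stationary vector $\bar\omega(z)$ introduced above is simply the normalized left null vector of $\hat C(z)$, and can be computed directly. A minimal numpy sketch with hypothetical two-state rates $c_{12} = 1$, $c_{21} = 2$ (for which $\bar\omega = (2/3, 1/3)$):

```python
import numpy as np

def stationary_vector(c):
    """The vector omega with omega @ C_hat = 0, omega > 0, sum(omega) = 1.

    c[k][m] are the rates c_km(z) at a fixed z (zero diagonal assumed)."""
    K = len(c)
    C_hat = np.array(c, dtype=float)
    np.fill_diagonal(C_hat, -C_hat.sum(axis=1))  # C_kk = -sum_{j != k} c_kj
    # Solve omega @ C_hat = 0 together with the normalization sum(omega) = 1.
    A = np.vstack([C_hat.T, np.ones(K)])
    b = np.append(np.zeros(K), 1.0)
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega

print(stationary_vector([[0.0, 1.0],
                         [2.0, 0.0]]))   # [2/3, 1/3]
```

The least-squares system is consistent here (the normalized left kernel of $\hat C(z)$ is one-dimensional under the positivity assumption on the rates), so the exact stationary vector is returned.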
Then, we specifically study the process $(z_t^{\epsilon}, \nu_t^{\epsilon})$ and the occupation time $r_t^{\epsilon}$; and we further view $z_t^{\epsilon}$ as a result of small random perturbations of the average system \begin{align*} \dot{z}(t) &= \sum\nolimits_{k=1}^K \omega_k(z(t)) F_k \bigl(z(t)\bigr) \notag \\ &\triangleq \bar{F} \bigl(z(t)\bigr), \quad \quad z(0) = z_0 \in \mathbb{R}^{(n+m)}. \end{align*} This viewpoint allows us to prove large deviation results for the joint type occupation times and positions as $\epsilon \rightarrow 0$ and to study the exit probabilities for such a family of processes. On the other hand, let $G \subset \mathbb{R}^{(n+m)}$ be a bounded open domain with smooth boundary $\partial G$. The infinitesimal generator $\mathcal{L}^{\epsilon}$ of the process $(z_s^{\epsilon}, \nu_s^{\epsilon})$, acting on functions that are smooth in $z \in \mathbb{R}^{(n+m)}$, is given by \begin{align} \mathcal{L}^{\epsilon}\upsilon_k(z)= \mathcal{L}_k^{\epsilon}\upsilon_k(z) + \frac{1}{\epsilon} \sum\nolimits_{j=1}^K c_{kj}(z) \bigl[\upsilon_j(z) - \upsilon_k(z)\bigr], \label{Eq4b} \end{align} where \begin{align} \mathcal{L}_{k}^{\epsilon} \upsilon_k(z) = \Bigl \langle \bigtriangledown_z \upsilon_k(z), F_k(z) \Bigr\rangle + \frac{\epsilon}{2} \operatorname{tr}\bigl \{\hat{a}_k(z)\bigtriangledown_z^2 \upsilon_k(z) \bigr\}, \label{Eq4c} \end{align} and $\hat{a}_k(z) \triangleq \bigl(\hat{a}_k^{ij}(z)\bigr) = \hat{\sigma}_k(z)\,\hat{\sigma}_k^T(z)$ for $k =1, 2, \ldots, K$. Moreover, throughout the paper, we assume that the infinitesimal generator in \eqref{Eq4b} is hypoelliptic for each $k =1, 2, \ldots, K$ (cf. Remark~\ref{R2}).
Note that the process $(z_s^{\epsilon}, \nu_s^{\epsilon})$ is closely connected with the following Dirichlet problem for a linear reaction-diffusion system, which also satisfies the maximum principle (e.g., see \cite[Chapter~3, Section~8]{ProW84} for the application of the maximum principle for classical Dirichlet problems), \begin{align} \left \{ \begin{array}{l} \mathcal{L}_k^{\epsilon} \upsilon_k^{\epsilon}(z) + \frac{1}{\epsilon} \sum\nolimits_{j=1}^K c_{kj}(z) \bigl[\upsilon_j^{\epsilon}(z) - \upsilon_k^{\epsilon}(z)\bigr] = 0, \quad z \in G,\\ \upsilon_k^{\epsilon}(z)\vert_{\partial G} = g_k(z), \quad k=1,2, \ldots, K, \end{array} \right. \label{Eq5} \end{align} where we study the limiting behavior for the solution of the Dirichlet problem in \eqref{Eq5} as the small random perturbation vanishes, i.e., as $\epsilon \rightarrow 0$. Here, we remark that the interplay between the small diffusion and the jumps of the $\nu$-component leads to a situation where $g_k(z)$, for $k=1, 2, \ldots, K$, will influence $\lim_{\epsilon \downarrow 0} \upsilon_k^{\epsilon}(z)$. In this paper, we also investigate the limiting behavior for the solution of the Dirichlet problem in \eqref{Eq5} in two steps: (i) the first step is related to the exit problem for the component $z_t^{\epsilon}$ from the domain $G$, where such an exit problem can be addressed by determining the action functional for the family of processes $z_t^{\epsilon}$ as $\epsilon \rightarrow 0$, and (ii) the second step is related to determining the position of the fast component $\nu_t^{\epsilon}$ at the random time $\tau_{G}^{\epsilon} = \min\bigl\{t > 0 \,\bigl\vert \, z_t^{\epsilon} \notin G \bigr\}$. The rest of the paper is organized as follows. In Section~\ref{S2}, using the basic remarks made in Section~\ref{S1}, we briefly discuss the action functional for a dynamical system with small random perturbations and coupled with the so-called transmutation process.
There, we also discuss the Dirichlet problem corresponding to a linear reaction-diffusion system w.r.t. a given bounded open domain. In Section~\ref{S3}, we provide our results on the asymptotic estimates for the joint type occupation times and exit probabilities for a family of diffusion-transmutation processes and the corresponding solutions for the Dirichlet problem in a given bounded open domain with smooth boundary. \section{Action functional for the family $(z_t^{\epsilon}, r_t^{\epsilon})$} \label{S2} In this section, we provide some preliminary results that are concerned with the action functional for the family of processes $(z_t^{\epsilon}, r_t^{\epsilon})$ as $\epsilon$ tends to zero. Before stating these results, we need some notation. Let $\lambda(z, p, \alpha)$ be the principal eigenvalue of the matrix $\bigl(H_{km}(z, p,\alpha)\bigr)$, $z, p \in \mathbb{R}^{(n+m)}$, $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_K) \in \mathbb{R}^K$, defined by \begin{align} H_{km}(z, p,\alpha) = \left \{ \begin{array}{l} \hat{C}_{km}(z), \hspace{2.07in} \text{if} \quad m \neq k,\\ \bigl[p \cdot \hat{a}_k(z) p/2 + p \cdot F_k(z) + \alpha_k \bigr] + \hat{C}_{kk}(z), \quad \text{if} \quad m = k \end{array} \right. \label{Eq6} \end{align} where $\hat{a}_k(z) = \hat{\sigma}_k(z)\,\hat{\sigma}_k^T(z)$ for $k =1, 2, \ldots, K$. Note that $\lambda(z, p, \alpha)$ is convex in $(p, \alpha)$ and its Legendre transform in $(p, \alpha)$ is given by \begin{align} \eta(z, q, \beta) =\sup_{p \in \mathbb{R}^{(n+m)},\, \alpha \in \mathbb{R}^K} \bigl[ q \cdot p + \beta \cdot \alpha - \lambda(z, p, \alpha) \bigr], \quad z, q \in \mathbb{R}^{(n+m)}, \,\,\beta \in \mathbb{R}^K.
\label{Eq7} \end{align} Let $C(\mathbb{R}^{(n+m)})$ be the space of continuous functions $[0, T] \rightarrow \mathbb{R}^{(n+m)}$ and \begin{align} C_{+}(\mathbb{R}^K) = \Bigl\{\mu=(\mu_1, \mu_2, \ldots, \mu_K) \,\bigl \vert \,\mu \in C(\mathbb{R}^K), \quad \mu_k(0)=0, \,\, \,\, 1 \le k \le K, \quad & \notag \\ \mu_k(t) \,\, \text{is non-decreasing and} \,\, \sum\nolimits_{k=1}^K \mu_k(t) = t, \,\, t \in [0, T] \Bigr\}.& \label{Eq8} \end{align} Let $T > 0$ be fixed and define \begin{align} S_{0T}(\varphi, \mu) = \left \{ \begin{array}{l} \int_0^T \eta\bigl(\varphi(s), \dot{\varphi}(s), \dot{\mu}(s)\bigr) ds, \quad \text{if} \,\, \varphi \in C(\mathbb{R}^{(n+m)}) \,\, \text{and} \,\, \mu \in C_{+}(\mathbb{R}^K) \\ \hspace{1.8 in} \text{are absolutely continuous (a.c.)}\\ +\infty \hspace{1.65 in}\text{otherwise}. \end{array} \right. \label{Eq9} \end{align} Suppose that the diffusion coefficients are Lipschitz continuous and that the transmutation coefficients are positive and Lipschitz continuous. Then, we have the following result. \begin{proposition} \label{P1} The functional $\epsilon^{-1} S_{0T}$ is the action functional for the family of processes $(z_t^{\epsilon}, r_t^{\epsilon})$ as $\epsilon \rightarrow 0$ in the uniform topology. The action $S_{0T}$ is nonnegative and equal to zero only when $\dot{\varphi}(t) = \bar{F} \bigl(\varphi(t)\bigr) = \sum\nolimits_{k=1}^K \omega_k(\varphi(t)) F_k \bigl(\varphi(t)\bigr)$ and $\dot{\mu}(t) = \bar{\omega}(\varphi(t))$, for $t \in [0, T]$. \end{proposition} Let us denote by $\Psi_t^{z_0}$ the integral curve of the vector field $\dot{z}(t)=\bar{F} \bigl(z(t)\bigr)$ starting from the point $z(0)=z_0$ (i.e., $\dot{\Psi}_t^{z_0} = \bar{F} \bigl( \Psi_t^{z_0}\bigr)$, with $\Psi_0^{z_0} = z_0$).
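Numerically, $\lambda(z,p,\alpha)$ in \eqref{Eq6} can be obtained as the eigenvalue of maximal real part of the matrix $\bigl(H_{km}(z,p,\alpha)\bigr)$; since the off-diagonal entries $\hat{C}_{km}(z) = c_{km}(z)$ are positive, this principal eigenvalue is real by Perron--Frobenius-type arguments for matrices with positive off-diagonal entries. The following numpy sketch, with hypothetical scalar data ($n+m=1$, $K=2$, made-up coefficients), also checks midpoint convexity of $\lambda$ in $p$:

```python
import numpy as np

# Hypothetical one-dimensional data (n + m = 1, K = 2) at a fixed z:
a = [1.0, 2.0]                    # diffusion coefficients a_k
F = [0.5, -1.0]                   # drifts F_k
C_hat = np.array([[-1.0, 1.0],    # C_hat(z): positive off-diagonal rates,
                  [2.0, -2.0]])   # rows summing to zero

def principal_eigenvalue(p, alpha=(0.0, 0.0)):
    """lambda(z, p, alpha): eigenvalue of maximal real part of H."""
    H = C_hat.copy()
    for k in range(2):
        H[k, k] += 0.5 * a[k] * p * p + F[k] * p + alpha[k]
    return max(np.linalg.eigvals(H).real)

# lambda(z, 0, 0) = 0, since H is then the generator C_hat itself.
print(principal_eigenvalue(0.0))

# Midpoint convexity of lambda in p (lambda is convex in (p, alpha)):
p1, p2 = -1.0, 3.0
lam_mid = principal_eigenvalue(0.5 * (p1 + p2))
lam_avg = 0.5 * (principal_eigenvalue(p1) + principal_eigenvalue(p2))
print(lam_mid <= lam_avg + 1e-12)
```

The convexity check reflects the fact that the diagonal of $H$ is a convex quadratic in $(p, \alpha)$ and the principal eigenvalue is a monotone convex function of the diagonal entries.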
Then, define \begin{align*} \rho(z, q) =\sup_{p \in \mathbb{R}^{(n+m)}} \bigl[ q \cdot p - \lambda(z, p, 0) \bigr], \quad z, q \in \mathbb{R}^{(n+m)} \end{align*} and \begin{align*} I_{0T}(\varphi) = \left \{ \begin{array}{l} \int_0^T \rho\bigl(\varphi(s), \dot{\varphi}(s)\bigr) ds, \quad \quad \text{if} \,\, \varphi \in C(\mathbb{R}^{(n+m)}) \,\, \text{is a.c.}\\ +\infty \hspace{1.2 in}\text{otherwise}. \end{array} \right. \end{align*} Taking into account the involution property of the Legendre transform, we have \begin{align*} \inf_{\beta \in \mathbb{R}^K} \eta(z, q, \beta) &= -\sup_{\beta \in \mathbb{R}^K} \bigl[ - 0 \cdot \beta - \eta(z, q, \beta) \bigr] \\ &= \sup_{p \in \mathbb{R}^{(n+m)}} \bigl[ q \cdot p - \lambda(z, p, 0) \bigr]\\ &= \rho(z, q). \end{align*} Next, we have the following result which is a direct consequence of the contraction principle (see also \cite[Chapter~5, pp.~117--124]{FreWe84}). \begin{corollary} \label{C1} The functional $\epsilon^{-1} I_{0T}$ is the action functional for the family $z_t^{\epsilon}$ as $\epsilon \rightarrow 0$ in the uniform topology. The action $I_{0T}$ is equal to zero only when $\varphi_t = \Psi_t^{z_0}$ and $\Psi_0^{z_0}=z_0$. \end{corollary} Let $\bar{n}(z)$ be the unit outward normal vector to $\partial G$ at $z \in \partial G$. Furthermore, we assume that the average system $\bar{F}(z)=\sum\nolimits_{k=1}^K \omega_k(z) F_k(z)$ satisfies the following large deviation condition. \begin{assumption} [Large deviation condition] \label{AS1} The vector field $\bar{F}(z)$ points inward from the boundary $\partial G$, i.e., $\bigl\langle \bar{F}(z), \bar{n}(z)\bigr\rangle < 0$ for any $z \in \partial G$. The vector field $\bar{F}(z)$ has a unique stationary point in $G$.
Moreover, the function \begin{align*} V(z) = \inf \Bigl\{ I_{0T}(\varphi) \, \bigl\vert \, \varphi(0) = z_0, \,\, T > 0, \,\, \varphi(T) = z \Bigr\}, \quad z \in \partial G, \end{align*} attains its unique minimum over $\partial G$ at $\bar{z}_0 \in \partial G$, i.e., $V(\bar{z}_0) < V(z)$ for any $z \in \partial G \setminus \{\bar{z}_0\}$. \end{assumption} \begin{assumption} \label{AS2} There exists $k_0 \in \{1,2, \ldots, K\}$ such that, at the point $\bar{z}_0 \in \partial G$ defined in Assumption~\ref{AS1}, the following generic inequalities hold \begin{align*} \bigl\langle F_{k_0}(\bar{z}_0), \bar{n}(\bar{z}_0)\bigr\rangle > \bigl\langle F_k (\bar{z}_0), \bar{n}(\bar{z}_0)\bigr\rangle, \quad k \in \{1,2, \ldots, K\}, \,\, k \neq k_0. \end{align*} Moreover, we also assume that the infinitesimal generator in \eqref{Eq4b} is hypoelliptic for each $k \in \{1, 2, \ldots, K\}$. \end{assumption} Let $\tau_{G}^{\epsilon}$ be the first exit time of the component $z_t^{\epsilon}$ from $G$, i.e., \begin{align} \tau_{G}^{\epsilon} = \min \Bigl\{ t > 0 \, \bigl\vert \, z_t^{\epsilon} \notin G \Bigr\}. \label{Eq10} \end{align} Then, in the next section, we study the limiting distribution of $(z_{\tau_{G}^{\epsilon}}^{\epsilon}, \nu_{\tau_{G}^{\epsilon}}^{\epsilon})$ as $\epsilon \rightarrow 0$. This distribution also defines the limit for the solution of the Dirichlet problem in \eqref{Eq5} as $\epsilon \rightarrow 0$. \section{Main results} \label{S3} In this section, we present our main results, which establish a connection between the exit probability problem for $(z_t^{\epsilon}, \nu_t^{\epsilon})$ from $G \times \{1, 2, \ldots, K\}$ and the limiting behavior of the solutions of the Dirichlet problem in \eqref{Eq5}.
Note that if Assumption~\ref{AS1} (i.e., the large deviation condition) holds true, then the exit problem for the component $z_t^{\epsilon}$ from the given bounded open domain $G$ amounts to determining the action functional for the family of processes $z_t^{\epsilon}$ as $\epsilon \rightarrow 0$ (cf. Proposition~\ref{P1}) and the exact exit position of the fast component $\nu_t^{\epsilon}$ at the random time $\tau_{G}^{\epsilon} = \min\bigl\{t > 0 \,\bigl\vert \, z_t^{\epsilon} \notin G \bigr\}$. Then, we have our first result concerning the asymptotic estimates for the joint type occupation times and the exit positions. \begin{proposition} \label{P2} Let the diffusion matrices $a_k(z)$ and the transmutation coefficients $c_{km}(z)$ be Lipschitz continuous, and let $a_k(z)$ be uniformly elliptic and $c_{km}(z) > 0$ for $z \in G \cup \partial G$, $k, m \in \{1,2, \ldots, K\}$, with $k \neq m$. If Assumption~\ref{AS1} (i.e., the large deviation condition) holds true, then we have \begin{align} \lim_{\epsilon \rightarrow 0} \mathbb{P}_{z_0, k}^{\epsilon} \bigl \{ \vert z_{\tau_{G}^{\epsilon}}^{\epsilon} - \bar{z}_0 \vert > \delta \bigr\} = 0 \label{Eq11} \end{align} for any $\delta > 0$, $1 \le k \le K$, uniformly in $z_0 \in \Omega$ for any compact $\Omega \subset G$. Furthermore, if Assumption~\ref{AS2} is satisfied, then we have the following \begin{align} \lim_{\epsilon \rightarrow 0} \mathbb{P}_{z_0, k}^{\epsilon} \bigl \{\nu_{\tau_{G}^{\epsilon}}^{\epsilon} = k_0 \, \vert \, \tau_{G}^{\epsilon} < \infty \bigr\} = 1 \label{Eq12} \end{align} for $1 \le k \le K$ and $z_0 \in \Omega \subset G$.
\end{proposition} Before attempting to prove Proposition~\ref{P2}, let us consider the following non-degenerate diffusion process $(x_t^{\epsilon,\hat{\delta}}, y_t^{\epsilon,\hat{\delta}})$ satisfying \begin{align} \left.\begin{array}{l} d x_t^{\epsilon,\hat{\delta}} = F\bigl(x_t^{\epsilon,\hat{\delta}}, y_t^{\epsilon,\hat{\delta}}\bigr) dt + \sqrt{\hat{\delta}} dv_t \\ d y_t^{\epsilon,\hat{\delta}} = f_{\nu_t^{\epsilon,\hat{\delta}}}\bigl(y_t^{\epsilon,\hat{\delta}}\bigr) dt + \sqrt{\epsilon} \sigma_{\nu_t^{\epsilon,\hat{\delta}}} \bigl(y_t^{\epsilon,\hat{\delta}}\bigr) dw_t \end{array} \right\} \label{Eq10x} \end{align} with an initial condition $\bigl(x_0^{\epsilon,\hat{\delta}}, y_0^{\epsilon,\hat{\delta}}\bigr) = \bigl(x_0, y_0\bigr)$, where $v_t$ (with $v_0=0$) is an $n$-dimensional standard Wiener process independent of $w_t$. Moreover, for the case $\epsilon^{-1}\hat{\delta} \rightarrow 0$ as $\hat{\delta} \rightarrow 0$, we can assume that $\nu_t^{\epsilon,\hat{\delta}}$ is a $\{1,2, \ldots, K\}$-valued process that satisfies the following \begin{align*} \mathbb{P} \Bigl\{ \nu_{t+\triangle}^{\epsilon,\hat{\delta}} = m \, \bigl\vert \,(x_t^{\epsilon, \hat{\delta}}, y_t^{\epsilon, \hat{\delta}}) = (x, y), \nu_t^{\epsilon,\hat{\delta}} = k \Bigr\} = \frac{c_{km}(x,y)}{\epsilon} \triangle + o(\triangle) \,\,\, \text{as} \,\,\, \triangle \downarrow 0, \end{align*} for $k, m \in \{1,2, \ldots, K\}$ and $k \neq m$.
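A minimal simulation sketch of the regularized system \eqref{Eq10x} may help fix ideas. All coefficients below ($F$, $f_k$, $\sigma_k$, $c_{km}$, and the scales of $\epsilon$, $\hat{\delta}$) are hypothetical toy choices; the mode switching is implemented with the one-step jump probability $c_{km}\triangle/\epsilon$ stated above.

```python
import numpy as np

# Euler-Maruyama sketch of (Eq. 10x) with K = 2 modes; all coefficients are
# hypothetical toy choices, purely illustrative of the structure of the model.
rng = np.random.default_rng(0)
eps, dhat, dt, T = 0.05, 1e-4, 1e-3, 1.0
c = np.array([[0.0, 1.0], [1.0, 0.0]])       # transmutation intensities c_km
F = lambda x, y: -x + y                       # slow drift F(x, y)
f = [lambda y: -2.0 * (y - 1.0),              # fast drift in mode 1
     lambda y: -2.0 * (y + 1.0)]              # fast drift in mode 2
sig = [0.5, 0.5]                              # fast diffusion coefficients

x, y, nu = 0.0, 0.0, 0
switches = 0
for _ in range(int(T / dt)):
    # mode switch with probability c[nu, m] * dt / eps
    if rng.random() < c[nu, 1 - nu] * dt / eps:
        nu = 1 - nu
        switches += 1
    x += F(x, y) * dt + np.sqrt(dhat * dt) * rng.standard_normal()
    y += f[nu](y) * dt + np.sqrt(eps * dt) * sig[nu] * rng.standard_normal()

print(switches)   # on average about (c / eps) * T = 20 switches
```

Note the time-scale separation: the switching happens on the fast $\epsilon^{-1}$ scale, while the $\sqrt{\hat{\delta}}$ noise on the slow component is only the regularization that makes the pair non-degenerate.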
Define $z_t^{\epsilon,\hat{\delta}}=(x_t^{\epsilon,\hat{\delta}}, y_t^{\epsilon,\hat{\delta}})$; then we can rewrite the system of equations in \eqref{Eq10x} as follows \begin{align} dz_t^{\epsilon,\hat{\delta}} = \tilde{F}_{\nu_t^{\epsilon,\hat{\delta}}}(z_t^{\epsilon,\hat{\delta}})dt+ \sqrt{\epsilon} \tilde{\sigma}_{\nu_t^{\epsilon,\hat{\delta}}}(z_t^{\epsilon,\hat{\delta}}) d \tilde{w}_t, \quad z_0^{\epsilon,\hat{\delta}}=(x_0, y_0), \label{Eq51} \end{align} where $\tilde{w}_t = \left[v_t^T, \, w_t^T \right]^T$ and \begin{align*} \left.\begin{array}{l} \tilde{F}_k = \left[F^T,\, f_k^T\right]^T \\ \tilde{\sigma}_k = \left[\sqrt{(\hat{\delta}/\epsilon)}\, I_{n \times n},\, \sigma_k^T\right]^T \end{array} \right \} \end{align*} for $k = 1, 2, \ldots, K$. Let $\tau_{G}^{\epsilon,\hat{\delta}}$ be the first exit time of the diffusion process $z_t^{\epsilon,\hat{\delta}}$ from the domain $G$. Moreover, define the following \begin{align*} \begin{array}{c} \theta = \tau_{G}^{\epsilon} \wedge T, \quad\quad \theta^{\hat{\delta}} = \tau_{G}^{\epsilon,\hat{\delta}} \wedge T, \\ \bigl\Vert z^{\epsilon,\hat{\delta}} - z^{\epsilon} \bigr\Vert_t = \sup\limits_{s \le r \le t} \Bigl\vert z_r^{\epsilon,\hat{\delta}} - z_r^{\epsilon} \Bigr\vert \,\,\, \text{for} \,\,\, s, t \in [0, T]. \end{array} \end{align*} Then, we need the following lemma, which is useful for the development of our main results. \begin{lemma} (cf. \cite[Lemma~2.5]{BefA15a}) \label{L1} Suppose that $\epsilon > 0$ is fixed. Then, for any initial point $z_s^{\epsilon} \in G$, with $t > s$, the following statements hold true \begin{enumerate} [(i)] \item $\bigl\Vert z^{\epsilon,\hat{\delta}} - z^{\epsilon} \bigr\Vert_t \rightarrow 0$, \item $\theta^{\hat{\delta}} \rightarrow \theta$ \, and \, $z_{\theta^{\hat{\delta}}}^{\epsilon,\hat{\delta}} \rightarrow z_{\theta}^{\epsilon}$, \end{enumerate} as $\hat{\delta} \rightarrow 0$, almost surely.
\end{lemma} Here, we remark that, for the case $\epsilon^{-1}\hat{\delta} \rightarrow 0$ as $\hat{\delta} \rightarrow 0$, the solutions of the Dirichlet problem corresponding to the non-degenerate diffusion process $(x_t^{\epsilon,\hat{\delta}}, y_t^{\epsilon,\hat{\delta}})$ converge to the solutions of \eqref{Eq5} in the limit $\hat{\delta} \rightarrow 0$ (cf. Lemma~\ref{L1}, Parts~(i) and (ii); see also Lebesgue's dominated convergence theorem \cite[Chapter~4]{Roy88}). Next, let us establish the following results (i.e., Proposition~\ref{P3} and Proposition~\ref{P4}) that are useful for proving Proposition~\ref{P2}. \begin{proposition} \label{P3} Suppose that the drift, diffusion and transmutation coefficients are independent of the position variable $z$ (i.e., $F_k(z)$, $\sigma_k(z)$ and $c_{km}(z)$, $k, m \in \{1, 2, \ldots, K\}$, are constants). Then, the statement in Proposition~\ref{P1} holds true. \end{proposition} {\em Proof}: Let us first simplify the notation: we write $\lambda(p, \alpha)$ and $\eta(q, \beta)$ for $\lambda(z, p, \alpha)$ and $\eta(z, q, \beta)$, respectively. Then, define \begin{align*} L(\beta) =\sup_{\alpha} \bigl[\beta \cdot \alpha - \lambda(0, \alpha) \bigr], \quad \beta \in \mathbb{R}^K. \end{align*} The action functional for the family of processes $r_t^{\epsilon}$, $\epsilon \rightarrow 0$ (assuming that $\epsilon^{-1}\hat{\delta} \rightarrow 0$ as $\hat{\delta} \rightarrow 0$), is known to be ${\epsilon}^{-1}R_{0t}\colon C_{+}(\mathbb{R}^K) \rightarrow [0, \infty]$ (cf. \cite[Chapter~7]{FreWe84}), where \begin{align*} R_{0t}(\mu) = \left \{ \begin{array}{l} \int_0^t L\bigl(\dot{\mu}(s)\bigr) ds, \quad\quad \text{if} \,\, \mu \in C_{+}(\mathbb{R}^K)\,\, \text{is a.c.}\\ +\infty \hspace{1.0 in}\text{otherwise}. \end{array} \right.
\end{align*} Let $y_t^{\epsilon,\hat{\delta}, k}$, for $k \in \{1,2, \ldots, K\}$, with $y_0^{\epsilon,\hat{\delta}, k}=0$, be independent diffusion processes corresponding to the generators $\bigl \langle \nabla_z(\cdot), \tilde{F}_k \bigr\rangle + \frac{\epsilon}{2} \operatorname{tr}\bigl \{\tilde{a}_k\nabla_z^2(\cdot) \bigr\}$, with $\tilde{a}_k= \tilde{\sigma}_k \tilde{\sigma}_k^T$ and $k \in \{1,2, \ldots, K\}$. Then, the action functional for the family of processes $y_t^{\epsilon,\hat{\delta}, k}$ is known (cf. \cite[Chapter~3]{FreWe84}), i.e., the action functional $\epsilon^{-1}R_{0t}^k(\varphi)\colon C(\mathbb{R}^{(n+m)}) \rightarrow [0, \infty]$, with \begin{align*} R_{0t}^{k}(\varphi(t)) = \left \{ \begin{array}{l} \frac{1}{2} \int_{0}^{t}\bigl\Vert\dot{\varphi}(s) - \tilde{F}_k \bigr\Vert_{[\tilde{a}_k]^{-1}}^2ds \quad\quad \text{if} \,\, \varphi \,\,\text{is a.c.}\\ +\infty \hspace{1.67 in}\text{otherwise}, \end{array} \right. \end{align*} where $\Vert \cdot \Vert_{[\tilde{a}_k]^{-1}}$ denotes the Riemannian norm of a tangent vector. Note that if $r_t^{\epsilon, \hat{\delta}} = (r_{1, t}^{\epsilon,\hat{\delta}}, r_{2, t}^{\epsilon,\hat{\delta}}, \ldots, r_{K, t}^{\epsilon,\hat{\delta}})$ is independent of $y_t^{\epsilon, \hat{\delta}, k}$ for $k \in \{1,2, \ldots, K\}$, then it is clear that the process $z_t^{\epsilon, \hat{\delta}}$ can be realized as \begin{align*} z_t^{\epsilon,\hat{\delta}} = \sum\nolimits_{k=1}^K y^{\epsilon, \hat{\delta}, k} \bigl(r_{k, t}^{\epsilon,\hat{\delta}}\bigr), \quad \text{with} \quad y^{\epsilon, \hat{\delta}, k} \bigl(s\bigr) = y_s^{\epsilon,\hat{\delta}, k}.
\end{align*} Moreover, the following transformation $S$ \begin{align*} \bigl(y_t^{\epsilon,\hat{\delta},1}, y_t^{\epsilon,\hat{\delta}, 2}, \ldots, y_t^{\epsilon, \hat{\delta}, K}, r_{ t}^{\epsilon,\hat{\delta}}\bigr) \xrightarrow{S} \bigl(z_{ t}^{\epsilon,\hat{\delta}}, r_{ t}^{\epsilon,\hat{\delta}} \bigr) \end{align*} is continuous in the uniform topology on any finite interval $[0, T]$ (cf. Lemma~\ref{L1}). Then, for a fixed $T>0$, it follows from \cite[Theorem~3.3.1]{FreWe84} that \begin{align*} \tilde{S}_{0T}(\varphi, \mu) = \inf\biggl\{\int_0^T & L\bigl(\dot{\mu}(s)\bigr) ds + \frac{1}{2} \sum\nolimits_{k=1}^K \int_{0}^{\mu_k(T)}\bigl\Vert\dot{\varphi}_k(s) - \tilde{F}_k \bigr\Vert_{[\tilde{a}_k]^{-1}}^2ds, \\ & \text{if} \,\, \varphi_k \,\, \text{are a.c. and} \,\,\, S\bigl(\varphi_1, \varphi_2, \ldots, \varphi_K, \mu \bigr)=\bigl(\varphi, \mu \bigr) \biggr\} \end{align*} is the action functional for the family of processes $\bigl(z_{ t}^{\epsilon,\hat{\delta}}, r_{ t}^{\epsilon,\hat{\delta}} \bigr)$ as $\epsilon \rightarrow 0$. Note that $\tilde{S}_{0T}(\varphi, \mu)$ is equal to zero only when $\dot{\mu}(t) = \bar{\omega}$ and $\dot{\varphi}(t) = \sum\nolimits_{k=1}^K \omega_k \tilde{F}_k$ for $t \in [0, T]$. It remains to show that \begin{align*} \tilde{S}_{0T}(\varphi, \mu) = S_{0T}(\varphi, \mu) \quad \text{for a.c.} \,\, \varphi \,\, \text{and} \,\, \mu. \end{align*} To prove this, we first use Jensen's inequality to get the following \begin{align} &\tilde{S}_{0T}(\varphi, \mu) \ge \inf_{\varphi_1(\mu_1(T)), \varphi_2(\mu_2(T)), \ldots, \varphi_K(\mu_K(T))} \biggl\{T L\bigl(\mu(T)/T\bigr) \notag \\ & \quad + \sum\nolimits_{k=1}^K \frac{1}{2\mu_k(T)}\Bigl\Vert\varphi_k(\mu_k(T)) - \mu_k(T)\tilde{F}_k \Bigr\Vert_{[\tilde{a}_k]^{-1}}^2 \, \biggl \vert \, \varphi(T) = \sum\nolimits_{k=1}^K \varphi_k(\mu_k(T)) \biggr\}.
\label{Eq14} \end{align} We claim that the Legendre transform of the infimum appearing on the right-hand side is $T\lambda(p, \alpha)$, namely \begin{align*} T\lambda(p, \alpha)=& \sup_{q \in \mathbb{R}^{(n+m)},\, \beta \in \mathbb{R}^K} \biggl\{ p \cdot q + \alpha \cdot \beta \\ & \quad- \inf_{q_1, q_2, \ldots, q_K\colon \sum\nolimits_{k=1}^K q_k=q} \biggl[ T L(\beta/T) + \sum\nolimits_{k=1}^K \frac{1}{2 \beta_k}\Bigl\Vert q_k - \beta_k \tilde{F}_k \Bigr\Vert_{[\tilde{a}_k]^{-1}}^2 \biggr] \biggr\}. \end{align*} This is verified as follows: the right-hand side of the last equality is equal to \begin{align*} \sup_{\beta} & \biggl\{ \sup_{q_1, q_2, \ldots, q_K} \biggl[ p \cdot (q_1 + q_2 + \ldots + q_K) - \sum\nolimits_{k=1}^K \frac{1}{2 \beta_k}\Bigl\Vert q_k - \beta_k \tilde{F}_k \Bigr\Vert_{[\tilde{a}_k]^{-1}}^2 \biggr] + \alpha \cdot \beta - T L(\beta/T) \biggr\}\\ & = \sup_{\beta} \biggl\{ \biggl(\sum\nolimits_{k=1}^K \beta_k \bigl(p \cdot \tilde{a}_k p/2 + p \cdot \tilde{F}_k\bigr) \biggr) + \alpha \cdot \beta - T L(\beta/T) \biggr\}\\ & = T\lambda(p, \alpha). \end{align*} Hence, by the involution property of the Legendre transform, we have shown that \begin{align*} \tilde{S}_{0T}(\varphi, \mu) \ge T\eta(\varphi(T)/T, \mu(T)/T). \end{align*} On the other hand, if we impose the restriction $\varphi(t_j) = \sum\nolimits_{k=1}^K \varphi_k(\mu_k(t_j))$ in \eqref{Eq14} for $1 \le j \le N$, with $t_0=0 < t_1 < \cdots < t_N =T$, and repeat the same argument on each of the intervals $[t_j, t_{j+1}]$, then we have \begin{align} \tilde{S}_{0T}(\varphi, \mu) \ge \sum\nolimits_{j=1}^N (t_j - t_{j-1}) \eta\biggl(\frac{\varphi(t_j) - \varphi(t_{j-1})}{t_j - t_{j-1}}, \frac{\mu(t_j) - \mu(t_{j-1})}{t_j - t_{j-1}} \biggr). \label{Eq15} \end{align} Since the partition $\{t_0, t_1, \ldots, t_N\}$ of $[0, T]$ is arbitrary, we have \begin{align*} \tilde{S}_{0T}(\varphi, \mu) \ge S_{0T}(\varphi, \mu). \end{align*} The equality follows from the fact that equation \eqref{Eq15} becomes an equality when $(\varphi, \mu)$ is piecewise linear w.r.t. the partition $\{t_0, t_1, \ldots, t_N\}$. This completes the proof.
\hfill $\Box$ In what follows, we assume that $\epsilon^{-1}\hat{\delta} \rightarrow 0$ as $\hat{\delta} \rightarrow 0$. Then, let us drop $\hat{\delta}$ from the notation and consider a variation $\mathbb{Q}_{z_0, k}^{\epsilon}$ of $\mathbb{P}_{z_0, k}^{\epsilon}$ that is governed by the same initial value and evolution, except that its transmutation intensity $c_{km}(t)$ depends on time rather than on the state position. Then, the corresponding Legendre transform for $\mathbb{Q}_{z_0, k}^{\epsilon}$ in $(p, \alpha)$ is given by \begin{align*} \hat{\eta}(t, z, q, \beta) =\sup_{p \in \mathbb{R}^{(n+m)},\, \alpha \in \mathbb{R}^K} \bigl[ q \cdot p + \beta \cdot \alpha - \hat{\lambda}(t, z, p, \alpha) \bigr], \,\,\, z, q \in \mathbb{R}^{(n+m)}, \,\,\, \beta \in \mathbb{R}^K, \end{align*} where the principal eigenvalue $\hat{\lambda}(t, z, p, \alpha)$ is associated with the matrix $\bigl(\hat{H}_{km}(t, z, p,\alpha)\bigr)$ with \begin{align*} \hat{H}_{km}(t, z,p,\alpha) = \left \{ \begin{array}{l} c_{km}(t), \hspace{2.52in} \text{if} \quad m \neq k,\\ \bigl[p \cdot \tilde{a}_k(z) p/2 + p \cdot \tilde{F}_k(z) + \alpha_k \bigr] - \sum\nolimits_{j: j \neq k} c_{kj}(t), \quad \text{if} \quad m = k, \end{array} \right. \end{align*} and $\tilde{a}_k(z) = \tilde{\sigma}_k(z)\,\tilde{\sigma}_k^T(z)$ for $k =1, 2, \ldots, K$. Let $T > 0$ be fixed and define \begin{align*} \hat{S}_{0T}(\varphi, \mu) = \left \{ \begin{array}{l} \int_0^T \hat{\eta}\bigl(s, \varphi(s), \dot{\varphi}(s), \dot{\mu}(s)\bigr) ds, \,\,\, \text{if} \,\, \varphi \in C(\mathbb{R}^{(n+m)}) \,\, \text{and} \,\, \mu \in C_{+}(\mathbb{R}^K)\,\, \text{are a.c.}\\ +\infty \hspace{1.65 in}\text{otherwise}. \end{array} \right. \end{align*} Assuming that $\epsilon^{-1}\hat{\delta} \rightarrow 0$ as $\hat{\delta} \rightarrow 0$, we have the following result. \begin{proposition} \label{P4} The action functional for the family of processes $(z_t^{\epsilon}, r_t^{\epsilon})$ w.r.t.
$\mathbb{Q}_{z_0, k}^{\epsilon}$ as $\epsilon \rightarrow 0$ is ${\epsilon}^{-1}\hat{S}_{0T}(\varphi, \mu)$ in the uniform topology. \end{proposition} {\em Proof}: Note that when the diffusivity and drift coefficients are constant, this proposition can be verified by the same argument as that of Proposition~\ref{P3}. The standard argument is then to freeze the diffusivity and drift coefficients for smaller and smaller durations, update them afterwards, and extend the statement to the general case for $\mathbb{Q}_{z_0,k}^{\epsilon}$ (cf. \cite[Section~6]{Var84}). Hence, we omit the details.\hfill $\Box$ {\em Proof of Proposition~\ref{P1}}: Let us define \begin{align*} I(\epsilon, \delta) = \mathbb{P}_{z_0,k}^{\epsilon} \Bigl \{\max_{0 \le s \le T} \bigl\vert z_s^{\epsilon} - \varphi(s) \bigr\vert < \delta, \max_{0 \le s \le T} \bigl\vert r_s^{\epsilon} - \mu(s) \bigr\vert < \delta\Bigr\}. \end{align*} By a standard argument from the theory of large deviations, it is easy to see that \begin{align} \text{if} \quad S_{0T}(\varphi, \mu) < \infty, \quad \text{then} \quad \lim_{\delta \rightarrow 0} \lim_{\epsilon \rightarrow 0} \epsilon \log I(\epsilon, \delta) = - S_{0T}(\varphi, \mu). \label{Eq16} \end{align} Moreover, by the exponential tightness condition (e.g., see \cite[Section~3.3]{FreWe84}), there exists a sequence $\Omega_j$ of compact subsets of the trajectories on $[0, T]$ in the uniform topology such that \begin{align*} \lim_{j \rightarrow \infty} \lim_{\epsilon \rightarrow 0} \epsilon \log \mathbb{P}_{z_0,k}^{\epsilon} \Bigl\{ \bigl( z_{\cdot}^{\epsilon}, r_{\cdot}^{\epsilon} \bigr) \notin \Omega_j \Bigr\} = -\infty. \end{align*} On the other hand, fix $T > 0$ and let $\mathbb{Q}_{z_0, k}^{\epsilon}$ be the variation of $\mathbb{P}_{z_0, k}^{\epsilon}$ associated with $c_{km}(\varphi(t))$ (noting that $\mathbb{Q}_{z_0, k}^{\epsilon}$ is absolutely continuous w.r.t. $\mathbb{P}_{z_0, k}^{\epsilon}$).
Then, define the following change of measure in the space of trajectories on $[0, T]$, i.e., the Radon-Nikodym derivative, \begin{align*} \frac{d \mathbb{P}_{z_0, k}^{\epsilon}}{d \mathbb{Q}_{z_0, k}^{\epsilon}} (z,r) &= \exp \frac{1}{\epsilon} \biggl\{\int_0^T \Bigl[c_{\nu_s}(\varphi(s)) - c_{\nu_s}(z_s^{\epsilon})\Bigr] ds \\ & \quad\quad - \sum\nolimits_{k,\, m,\, k \neq m} \epsilon\, \sum\nolimits_{\tau \in \tau_{k,m}} \log \frac{c_{km}(\varphi(\tau))}{c_{km}(z_{\tau}^{\epsilon})} \biggr \}, \end{align*} where $c_i = \sum\nolimits_{j \neq i} c_{ij}$ and $\tau_{k,m}$ is the set of times in $[0, T]$ at which $\nu_s$ changes from type (or mode) $k$ to another mode $m$.\footnote{Note that such a change of measure is also anticipated by the relative density of the exponential waiting times (e.g., see \cite{GikS72}).} Then, from the continuity of $c_{km}(z)$, it follows that there exists $h(\delta)$ such that $h(\delta) \rightarrow 0$ as $\delta \rightarrow 0$ and, moreover, we have \begin{align*} \biggl \vert \int_0^T \Bigl[c_{\nu_s}(\varphi(s)) - c_{\nu_s}(z_s^{\epsilon})\Bigr] ds \biggr \vert \le h(\delta) \end{align*} and \begin{align*} \sup_{0 \le \tau \le T} \biggl \vert \log \frac{c_{km}(\varphi(\tau))}{c_{km}(z_{\tau}^{\epsilon})} \biggr \vert \le h(\delta) \end{align*} whenever $\max_{0 \le s \le T} \bigl \vert z_s^{\epsilon} - \varphi(s) \bigr \vert \le \delta$. Furthermore, let $\bar{\mathbb{E}}_{z_0,k}^{\epsilon}$ be the expectation associated with $\mathbb{Q}_{z_0,k}^{\epsilon}$ and let $\bigl\vert \bigcup_{k,\,m, \,k \neq m} \tau_{k,m} \bigr \vert$ denote the total number of type changes.
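The jump-time part of this Radon-Nikodym derivative can be sanity-checked in the simplest setting of constant intensities, where it reduces to the likelihood ratio of two exponential jump clocks and must integrate to one under $\mathbb{Q}$. The rates below are toy values, not taken from the model.

```python
import numpy as np

# Monte Carlo check that E_Q[ dP/dQ ] = 1 for a change of jump intensity:
# under Q the (2-mode, toy) chain jumps at constant rate cq, under P at cp.
rng = np.random.default_rng(1)
T, cp, cq = 1.0, 2.0, 3.0

def weight():
    # simulate the jump times of one path under Q on [0, T]
    t, njumps = 0.0, 0
    while True:
        t += rng.exponential(1.0 / cq)   # exponential waiting times
        if t > T:
            break
        njumps += 1
    # likelihood ratio of the waiting-time densities:
    # dP/dQ = exp((cq - cp) * T) * (cp / cq)**njumps
    return np.exp((cq - cp) * T) * (cp / cq) ** njumps

vals = [weight() for _ in range(100_000)]
print(sum(vals) / len(vals))   # close to 1
```

Analytically, $E_{\mathbb{Q}}[(c_p/c_q)^N] = e^{(c_p - c_q)T}$ for $N \sim \mathrm{Poisson}(c_q T)$, which cancels the exponential prefactor exactly.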
Since $\mathbb{Q}_{z_0,k}^{\epsilon}$ is absolutely continuous w.r.t. $\mathbb{P}_{z_0,k}^{\epsilon}$, we have \begin{align} I(\epsilon, \delta) &= \bar{\mathbb{E}}_{z_0,k}^{\epsilon} \biggl \{\frac{d \mathbb{P}_{z_0, k}^{\epsilon}}{d \mathbb{Q}_{z_0, k}^{\epsilon}} (z,r) \, \biggl \vert \, \max_{0 \le s \le T} \bigl\vert z_s^{\epsilon} - \varphi(s) \bigr\vert < \delta \,\,\, \text{and} \,\,\, \max_{0 \le s \le T} \bigl\vert r_s^{\epsilon} - \mu(s) \bigr\vert < \delta\biggr\} \notag \\ &\le \exp \Bigl(\frac{h(\delta)}{\epsilon} \Bigr) \bar{\mathbb{E}}_{z_0,k}^{\epsilon} \biggl \{\exp \biggl( h(\delta) \bigl\vert \bigcup\nolimits_{k,\,m, \,k \neq m} \tau_{k,m} \bigr \vert \biggr) \, \biggl \vert \, \max_{0 \le s \le T} \bigl\vert z_s^{\epsilon} - \varphi(s) \bigr\vert < \delta \notag \\ & \hspace{2.9 in } \text{and} \,\,\, \max_{0 \le s \le T} \bigl\vert r_s^{\epsilon} - \mu(s) \bigr\vert < \delta\biggr\}. \label{Eq17} \end{align} Then, by H\"{o}lder's inequality, we have \begin{align*} I(\epsilon, \delta) &\le \exp\Bigl(\frac{h(\delta)}{\epsilon} \Bigr) \mathbb{Q}_{z_0,k}^{\epsilon} \biggl \{ \max_{0 \le s \le T} \bigl\vert z_s^{\epsilon} - \varphi(s) \bigr\vert < \delta,\, \max_{0 \le s \le T} \bigl\vert r_s^{\epsilon} - \mu(s) \bigr\vert < \delta\biggr\}^{1 - 1/q} \\ & \hspace{2.5 in} \times \bar{\mathbb{E}}_{z_0,k}^{\epsilon} \biggl \{\exp \biggl(q h(\delta) \bigl\vert \bigcup\nolimits_{k,\,m, \,k \neq m} \tau_{k,m} \bigr \vert \biggr) \biggr\}^{1/q} \end{align*} for all $q > 1$. Denote $\max_{0 \le s \le T} \Bigl\{c_{km}(\varphi(s)) \, \bigl \vert \, k, m \in\{1, 2, \ldots, K\}, \, k \neq m\Bigr\}$ by $\Xi$. Then, it is easy to check that \begin{align} \bar{\mathbb{E}}_{z_0,k}^{\epsilon} \biggl \{\exp \biggl(q h(\delta) \bigl\vert \bigcup\nolimits_{k,\,m, \,k \neq m} \tau_{k,m} \bigr \vert \biggr) \biggr\} \le \exp \biggl(T \frac{(K-1)\Xi}{\epsilon} \bigl(e^{qh(\delta)} - 1\bigr)\biggr).
\label{Eq18} \end{align} The above estimate together with \eqref{Eq17} implies that \begin{align*} \limsup_{\delta \rightarrow 0} \lim_{\epsilon \rightarrow 0} \epsilon \log I(\epsilon, \delta) \le - \bigl(1 - 1/q \bigr) \hat{S}_{0T} (\varphi, \mu) = - \bigl(1 - 1/q \bigr) S_{0T} (\varphi, \mu). \end{align*} Then, letting $q \rightarrow \infty$ gives the desired upper estimate of \eqref{Eq16}. Note that a lower bound estimate is obtained by a similar argument, with Jensen's inequality taking the place of H\"{o}lder's inequality in \eqref{Eq17}. This completes the proof.\hfill $\Box$ {\em Proof of Proposition~\ref{P2}}: Let $\Omega_{\delta}$ and $\Omega_{2\delta}$ be the $\delta$- and $2\delta$-neighborhoods of the compact set $\Omega \subset G$ with boundaries $\partial \Omega_{\delta}$ and $\partial \Omega_{2\delta}$, respectively. Then, the state-trajectories $z_t^{\epsilon}$, starting from any $z \in G$, $k \in \{1,2, \ldots, K\}$, hit $\partial \Omega_{\delta}$ before $\partial G$ with probability close to one when $\epsilon$ is small enough. This follows from Assumption~\ref{AS1}. Hence, taking into account the strong Markov property of the process $(z_t^{\epsilon}, \nu_{t}^{\epsilon}, \mathbb{P}_{z_0,k}^{\epsilon})$, it is sufficient to prove Proposition~\ref{P2} for $z \in \partial \Omega_{\delta}$, $k \in \{1,2, \ldots, K\}$. Define the Markov times $\zeta_0 < \tau_1 < \zeta_1 < \cdots < \tau_n < \zeta_n < \cdots$ as follows \begin{align*} \left.
\begin{array}{l} \zeta_0 = \min \bigl\{t > 0 \,\vert \, z_t^{\epsilon} \in\partial \Omega_{2\delta} \bigr\}\\ \tau_1 = \min \bigl\{t > \zeta_0 \,\vert \, z_t^{\epsilon} \in\partial \Omega_{\delta} \cup \partial G \bigr\}\\ \zeta_1 = \min \bigl\{t > \tau_1 \,\vert \, z_t^{\epsilon} \in\partial \Omega_{2\delta} \bigr\}\\ \hspace{0.75 in} \cdots \\ \tau_{n+1} = \min \bigl\{t > \zeta_n \,\vert \, z_t^{\epsilon} \in\partial \Omega_{\delta} \cup \partial G \bigr\}\\ \zeta_{n+1} = \min \bigl\{t > \tau_{n+1} \,\vert \, z_t^{\epsilon} \in\partial \Omega_{2\delta} \bigr\}\\ \hspace{0.75 in } \cdots \end{array} \right. \end{align*} Next, let us consider the Markov chain $(z_{\tau_n}^{\epsilon}, \nu_{\tau_n}^{\epsilon})$ in the phase space $\bigl\{ \partial\Omega_{\delta} \cup \partial G\bigr\} \times \bigl\{1,2, \ldots, K\bigr\}$. Note that the first exit of $z_t^{\epsilon}$ from $G$ occurs when the first component of the chain first belongs to $\partial G$. Then, using the large deviation estimates for the family of processes $(z_t^{\epsilon}, \nu_{t}^{\epsilon}, \mathbb{P}_{z_0,k}^{\epsilon})$ as $\epsilon \rightarrow 0$, we can show, in the standard way (e.g., see \cite[Chapter~4]{FreWe84}), that $z_t^{\epsilon}$, starting from any $z_0 \in \Omega_{\delta}$ and $k \in \{1,2, \ldots, K\}$, reaches $\partial G$ for the first time in a small neighborhood of the point $\bar{z}_0 \in \partial G$ introduced in Assumption~\ref{AS1}, with probability close to one when the parameters $\epsilon$ and $\delta$ are small enough, which implies the first statement of Proposition~\ref{P2}.
In order to prove the second statement, we use the fact that the extremal of the variational problem \begin{align*} \inf \Bigl\{ I_{0T}(\varphi) \, \bigl\vert \, \varphi(0) \in \Omega, \,\, \varphi(T) \in \partial G, \,\, T > 0 \Bigr\} \end{align*} spends a time of order $\delta$, as $\delta \rightarrow 0$, in the $\delta$-neighborhood $\partial G_{\delta} = \bigl\{z \in G \, \vert \, \operatorname{dist}(z, \partial G) < \delta \bigr\}$ of $\partial G$. Note that, with probability close to one when $\delta$ is small, the second component $\nu_{t}^{\epsilon}$ has no jumps during this time; hence, $z_t^{\epsilon}$ hits the boundary with the value of the second coordinate $\nu_{t}^{\epsilon}$ for which the transition of $z_t^{\epsilon}$ from $\partial G_{\delta} \setminus \partial G$ to $\partial G$ is easiest, i.e., when the second component equals $k_0$ as defined in Assumption~\ref{AS2}. This completes the proof.\hfill $\Box$ Note that the limiting distributions of $(z_{\tau_{G}^{\epsilon}}^{\epsilon}, \nu_{\tau_{G}^{\epsilon}}^{\epsilon})$ as $\epsilon \rightarrow 0$ also determine the limiting behavior for the solutions of the Dirichlet problem in \eqref{Eq5}, where such a connection is established using the following fact. \begin{proposition} \label{P5} Suppose that Assumptions~\ref{AS1} and \ref{AS2} hold true. If $\epsilon^{-1}\hat{\delta} \rightarrow 0$ as $\hat{\delta} \rightarrow 0$, then we have \begin{align} \lim_{\epsilon \rightarrow 0} \upsilon_k^{\epsilon}(z) = g_{k_0} (\bar{z}_0), \,\, 1 \le k \le K, \label{Eq13} \end{align} uniformly in $z \in \Omega \subset G$, where $\upsilon_k^{\epsilon}(z)$ is the solution of the Dirichlet problem in \eqref{Eq5}.
\end{proposition} {\em Proof}: The proof easily follows from Proposition~\ref{P2} and the representation \begin{align*} \upsilon_k^{\epsilon}(z) = \mathbb{E}_{z,k}^{\epsilon} \Bigl\{ g_{\nu_{\tau_G^{\epsilon}}^{\epsilon}}\bigl(z_{\tau_G^{\epsilon}}^{\epsilon}\bigr) \Bigr\} \end{align*} uniformly in $z \in \Omega \subset G$ (cf. \cite[Theorem~3]{EizF90}). Furthermore, note that \begin{align*} \mathbb{P}_{z_0,k}^{\epsilon} \Bigl\{ \tau_G^{\epsilon} < \infty \Bigr\} = 1 \,\,\, \text{for any} \,\, z_0 \in \Omega, \,\, k \in \{1,2, \ldots, K\}. \end{align*} Thus, taking into account the boundedness of the boundary function $g$ (cf. Lemma~\ref{L1}), we have \begin{align*} \lim_{\epsilon \rightarrow 0} \upsilon_k^{\epsilon}(z) &= \lim_{\epsilon \rightarrow 0}\mathbb{E}_{z,k}^{\epsilon} \Bigl\{ g_{\nu_{\tau_G^{\epsilon}}^{\epsilon}}\bigl(z_{\tau_G^{\epsilon}}^{\epsilon}\bigr) \Bigr\}\\ &= g_{k_0} (\bar{z}_0) \end{align*} for $k \in \{1, 2, \ldots, K\}$. This completes the proof. \hfill $\Box$
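As a purely numerical illustration of Propositions~\ref{P2} and \ref{P5}, the following toy one-dimensional simulation (all drifts, intensities, noise scales, and boundary data are hypothetical choices, not taken from the model) shows the exit position concentrating on the boundary point with the smaller quasi-potential, so that the empirical average of $g$ at the exit approaches the corresponding boundary value.

```python
import numpy as np

# Toy 1D sketch: fast-switching diffusion on G = (-1, 1), two modes with
# drifts -z + 0.5 and -z - 0.5 and asymmetric switching, so the averaged
# drift is roughly -z - 0.3 and exits concentrate near z = -1.
rng = np.random.default_rng(2)
eps, dt, npaths, Tmax = 0.3, 1e-3, 400, 40.0
c12, c21 = 8.0, 2.0                          # switching intensities (rate = c/eps)
drift = lambda z, nu: np.where(nu == 0, -z + 0.5, -z - 0.5)
g = lambda z: np.where(z < 0.0, 10.0, 20.0)  # toy boundary data g(-1), g(+1)

z = np.zeros(npaths)
nu = np.zeros(npaths, dtype=int)
alive = np.ones(npaths, dtype=bool)
for _ in range(int(Tmax / dt)):
    if not alive.any():
        break
    rate = np.where(nu == 0, c12, c21) / eps
    flip = alive & (rng.random(npaths) < rate * dt)
    nu[flip] = 1 - nu[flip]
    z = np.where(alive,
                 z + drift(z, nu) * dt
                   + np.sqrt(eps * dt) * rng.standard_normal(npaths),
                 z)                          # freeze exited paths
    alive &= np.abs(z) < 1.0

exited = ~alive
frac_left = np.mean(z[exited] < 0.0)
print(frac_left, np.mean(g(z[exited])))   # most mass exits at z = -1
```

The averaged drift has its stable point at about $z = -0.3$, so the left boundary carries the smaller quasi-potential; accordingly, the Monte Carlo average of $g$ at the exit sits near $g(-1)$, mimicking \eqref{Eq13}.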
\section{Introduction} The concept of a spin filter is an important element in the field of spintronics.\cite{sarma:review,wolf2001} One of the most representative mechanisms of filtering is the spin field-effect transistor proposed by Datta and Das, which is based on the spin-orbit interaction (SOI).\cite{datta1990} St\v reda and \v Seba \cite{streda2003} proposed an antisymmetric filter (ASF) which employs the Zeeman interaction with an \textit{in-plane} magnetic field (\textit{parallel} to the quantum wire) as well as the Rashba SOI.\cite{rashba1,rashba2} The interplay of the Rashba SOI and the Zeeman interaction with the magnetic field parallel to the wire gives rise to an interesting one-dimensional (1D) band structure of the quantum wire\cite{streda2003,mine1}, where the orientation of the electron spin depends on the wave number (see Fig. \ref{energy}). This dependence on wave number causes the charge and the spin degrees of freedom to mix, which is a feature distinct from the well-known spin-charge separation of the 1D Luttinger liquid (LL). \cite{bosobook} The diverse properties of quantum wires in the presence of SOI and/or a magnetic field have been studied: the collective excitations\cite{mine1,collecB}, the interplay of the Rashba SOI and the electron-electron interaction\cite{yu2004,gritsev2005}, the optical properties\cite{optical}, and the transmission/reflection coefficients in the presence of a potential step.\cite{imura} However, as far as we know, there is no report on a systematic study of the charge/spin transport of the 1D ASF in the presence of impurity scattering and electron-electron interaction. Impurities necessarily exist in real materials, and their effects are more pronounced in 1D systems such as quantum wires. Thus, it is important to study the effects of impurities in view of the possible realizations of the 1D ASF in low-dimensional nanostructures. In this paper, we investigate the influence of a single spinless impurity on the charge and \textit{spin} transport properties of the 1D ASF.
Remarkably, \textit{the spin transport is found not to be affected by the impurity}, and this is precisely due to the charge-spin mixing effect. This behavior is in sharp contrast with that of the spinful LL, where the spin transport is substantially influenced by the impurity scattering [see Eqs.(\ref{LL:weak},\ref{LL:strong})]. Contrary to the spin conductance, the charge transport is like that of a \textit{spinless} LL.\cite{kane,akira} In passing, we mention that in this paper we avoid the delicate problems arising from the contact with leads. The main results of this paper are the spin and the charge currents of the 1D ASF in the weak and strong impurity scattering regimes, which are given by Eqs.(\ref{main1},\ref{main2},\ref{main3},\ref{main4}). This paper is organized as follows: In Sec. II, we introduce the 1D ASF and review the previous results, in particular the band structure and the bosonized Hamiltonian.\cite{streda2003,mine1} In Sec. III, the impurity Hamiltonian and the coupling to external fields which produce the charge/spin transport are discussed. In Secs. IV and V, the bosonized Hamiltonians are analyzed in the framework of the Keldysh formalism, and the charge/spin conductances are calculated in the weak scattering and in the strong scattering regime, respectively. Sec. VI concludes the paper with a summary and discussions. In this paper we heavily rely on the bosonization method and the Keldysh formulation of transport; the readers are referred to Ref.[\onlinecite{delft}] for the bosonization and Refs.[\onlinecite{weiss,kamenev,rammer}] for the Keldysh method. \begin{figure}[ht] \begin{center} \includegraphics[angle=0,width=0.9\linewidth]{newdispersion.eps} \includegraphics[angle=0,width=0.9\linewidth]{uv.eps} \caption{ Upper figure: Solid lines represent the lowest energy subband structure of the quantum wire in the absence of the Dresselhaus term. (Dashed lines are for zero magnetic field). Note that the Fermi energy lies in the gap. In the figure $B=3 \text{T}$.
The $g$-factor is taken to be approximately 15 (as for InAs). The input parameters are $\eta_R = 2 \times 10^{-9} \text{eV}\cdot \text{cm}$, $m^*=0.024 m_e$. Lower figure: The spin-up (solid line) and -down (dashed line) components $(u_k^-)^2$ and $(v_k^-)^2$ for the lower $E_-(k)$ band. The input parameters are identical with the upper figure. Note that $(v_k^-)^2=1-(u_k^-)^2$. Adapted from Ref.[\onlinecite{mine1}].} \label{energy} \end{center} \end{figure} \section{One-dimensional antisymmetric spin filter} \label{asf} This section is based on Refs.[\onlinecite{streda2003,mine1}]; the basic setup, the band structure, and the Hamiltonian of the 1D ASF are reviewed. The 1D ASF (along the x-axis) can be realized by applying confinement potentials in the y and z directions, so that the electrons are forced to move along the x-axis. The confinement in the y-direction is due to the Rashba electric field. We will consider only the lowest one-dimensional subband. Also, a magnetic field is applied along the wire (parallel to the x-axis). The 1D single-particle Hamiltonian is given by \begin{equation} \label{one-particle} \mathcal{H}_1 = \frac{\hbar^2 k^2}{2 m^*} +\eta_R k \sigma_z - \epsilon_Z \sigma_x, \end{equation} where $\epsilon_Z$ is the Zeeman energy and $\eta_R$ is a parameter characterizing the strength of the Rashba SOI. In practice, $\eta_R$ is in the range of $(1-10)\times 10^{-9} \mathrm{eV}\cdot\mathrm{cm}$. Diagonalizing the Hamiltonian Eq.(\ref{one-particle}) yields the two bands depicted in Fig.\ref{energy}. When the Fermi energy is located in the gap, as shown in Fig.\ref{energy}, it suffices at low energy to take into account only the lower band.
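The two-band structure of Eq.(\ref{one-particle}) can be verified by direct numerical diagonalization. The sketch below works in hypothetical dimensionless units ($\hbar^2/2m^* = 1$, with order-one stand-ins for $\epsilon_Z$ and $\eta_R$ rather than the physical scales quoted in the text): the gap at $k=0$ equals $2\epsilon_Z$, and the lower-band spin points mostly along $-z$ ($+z$) for right (left) movers, as Fig.\ref{energy} shows.

```python
import numpy as np

# Diagonalize H(k) = k^2 + eta_R * k * sigma_z - eps_Z * sigma_x in toy
# dimensionless units (hbar^2 / 2m* = 1); values of eps_Z, eta_R are
# hypothetical stand-ins, purely illustrative.
eps_Z, eta_R = 0.8, 1.3
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def bands(k):
    H = k**2 * np.eye(2) + eta_R * k * sz - eps_Z * sx
    w, v = np.linalg.eigh(H)          # eigenvalues in ascending order
    return w, v[:, 0]                 # lower band E_-(k) and its spinor

gap = bands(0.0)[0][1] - bands(0.0)[0][0]
print(gap)                            # equals 2 * eps_Z

wR, spR = bands(+2.0)                 # right mover: spin mostly along -z
wL, spL = bands(-2.0)                 # left mover:  spin mostly along +z
print(spR[1]**2, spL[0]**2)           # both close to 1
```

The same check with the quoted InAs-like parameters only rescales the axes; the qualitative wave-number dependence of the spin orientation is unchanged.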
The energy eigenvalue and the corresponding normalized eigenspinor of the lower band are given by \begin{equation} \begin{split} E_-(k) &=\frac{\hbar^2}{2 m^*} k^2 - \sqrt{\epsilon_Z^2 + \eta_R^2 k^2},\cr \xi_{-} &=\begin{pmatrix} u_k^- \cr v_k^- \cr \end{pmatrix}, \end{split} \end{equation} where ( $A \equiv \sqrt{(\eta_R k)^2 + \epsilon_Z^2}$ ) \begin{equation} \begin{split} \label{eigenvector2} u_k^-&=\frac{\epsilon_Z}{\sqrt{(\eta_R k + A)^2 + \epsilon_Z^2}}, \\ v_k^- &=\frac{\eta_R k + A }{\sqrt{(\eta_R k +A)^2 + \epsilon_Z^2}}. \end{split} \end{equation} $u_k^-$ and $v_k^-$ represent the amplitudes for the spin to point in the +z and the -z direction, respectively. Fig.\ref{energy} clearly demonstrates that the spin of the left-moving quasiparticles is mostly polarized in the +z direction, while that of the right-moving quasiparticles is mostly polarized in the -z direction. Let $a_k$ be the quasiparticle operator of the lower band. At low energy we can neglect the quasiparticle excitations of the upper band, and the electron operator $c_\sigma$ can be approximately expressed in terms of $a_k$ only.\cite{mine1} \begin{equation} \label{simplified} c^\dag_{k \uparrow}\sim a_k^\dag u_k^- ,\quad c^\dag_{k \downarrow} \sim a_k^\dag v_k^-. \end{equation} Also, the $a$-quasiparticle excitations near the left and the right Fermi points are more important than the others at low energy, so that the $a$-quasiparticle operator can be decomposed into left-moving ($\psi_L$) and right-moving ($\psi_R$) components. Then the electron operator $c_\sigma(x)$ can be expressed in terms of $\psi_{R/L}$ as follows ($k_F$ is the Fermi momentum):\cite{mine1,note0} \begin{align} \label{operator} c_\uparrow (x) &\sim u_{k_F}^- \, e^{i k_F x} \, \psi_R(x) + u_{-k_F}^- \, e^{-i k_F x} \psi_L(x), \cr c_\downarrow (x) & \sim v_{k_F}^- \, e^{i k_F x} \, \psi_R(x) + v_{-k_F}^- \, e^{-i k_F x} \psi_L(x).
\end{align} The non-interacting Hamiltonian in terms of $\psi_{R/L}$ is given by \begin{equation} \label{non-interacting} \mathcal{H}_{\mathrm{non}} =v_F \int_{-L/2}^{L/2} dx \Big[ \psi_R^\dag (-i \partial_x) \psi_R + \psi_L^\dag (+i \partial_x) \psi_L \Big ], \end{equation} where $v_F$ is the Fermi velocity. The length of the 1D ASF is $L$. The electron-electron interaction Hamiltonian projected on the lower band is\cite{mine1} \begin{align} \label{theHamiltonian} \mathcal{H}_{\mathrm{int}} &= \frac{g_4}{2}\,\int d x \, \Big[ \rho_R(x) \rho_R(x) + \rho_L(x) \rho_L(x) \Big ] \nonumber \\ &+g_2 \int d x \rho_R(x) \rho_L(x), \end{align} where $g_4 = V_q$ and $g_2 = V_q - \lambda^2 V_{2 k_F}$. $V_q$ is a short-range interaction matrix element, so that it is almost momentum-independent. Here \begin{equation} \lambda^2 = \frac{\epsilon_Z^2}{\epsilon_Z^2 +( \eta_R k_F)^2}, \end{equation} and $\rho_{R/L}(x) = \psi^\dag_{R/L}(x) \psi_{R/L}(x)$ is the density operator of right/left moving quasiparticles. The bosonized form of the sum of the non-interacting Hamiltonian and the interaction Hamiltonian is given by\cite{mine1,delft} \begin{align} \label{LL:Hamil} \mathcal{H}_0 &= \pi v_F ( 1+ \frac{g_4}{2\pi v_F})\,\int d x \, \Big[ \rho_R(x) \rho_R(x) + \rho_L(x) \rho_L(x) \Big ] \nonumber \\ &+g_2 \int d x \rho_R(x) \rho_L(x). \end{align} It is convenient to define the LL parameter $K$ and the velocity of the collective excitation $v_0$: \begin{align} \label{parameters} K & = \sqrt{ \frac{1+ \frac{g_4}{2\pi v_F} - \frac{g_2}{2\pi v_F}}{1+ \frac{g_4}{2\pi v_F} + \frac{g_2}{2\pi v_F} }}, \cr v_0 &= v_F \sqrt{ ( 1+ \frac{g_4}{2\pi v_F})^2 - ( \frac{g_2}{2\pi v_F})^2}. \end{align} For a repulsive electron-electron interaction, $K < 1$.
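The LL parameter and the collective-mode velocity of Eq.(\ref{parameters}) are easy to evaluate numerically; the sketch below does so for illustrative couplings $g_2$, $g_4$ (measured in units of $v_F$), which are assumptions for demonstration only, and checks that a repulsive $g_2>0$ indeed gives $K<1$.

```python
import numpy as np

# Evaluate the LL parameter K and collective-mode velocity v_0 of
# Eq.(parameters).  The couplings g_2, g_4 below are illustrative
# (in units where v_F = 1); the text only fixes g_4 = V_q and
# g_2 = V_q - lambda^2 V_{2 k_F}.
def K_and_v0(g2, g4, v_F=1.0):
    y2 = g2 / (2 * np.pi * v_F)
    y4 = g4 / (2 * np.pi * v_F)
    K = np.sqrt((1 + y4 - y2) / (1 + y4 + y2))
    v0 = v_F * np.sqrt((1 + y4)**2 - y2**2)
    return K, v0

K, v0 = K_and_v0(g2=0.5, g4=0.8)  # repulsive: g2 > 0  =>  K < 1
assert K < 1.0
assert abs(K_and_v0(0.0, 0.0)[0] - 1.0) < 1e-12  # free fermions: K = 1
```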
The action corresponding to the Hamiltonian Eq.(\ref{LL:Hamil}), being expressed in terms of phase fields, is given by \begin{equation} \label{theaction} S_0 = -\int dt \, dx \Big[ \frac{1}{\pi} \partial_x \phi \partial_t \theta + \frac{v_0}{2\pi}\,\Big( K (\partial_x \phi)^2 + \frac{1}{K}\,(\partial_x \theta)^2 \Big ) \Big ], \end{equation} where the phase fields $\theta$ and $\phi$ are defined by the following relations\cite{delft} \begin{align} \rho_R + \rho_L &= \frac{1}{\pi} \partial_x \theta + \frac{\widehat{N}_R +\widehat{N}_L}{L} , \cr \rho_R -\rho_L &=\frac{1}{\pi} \partial_x \phi + \frac{\widehat{N}_R -\widehat{N}_L}{L}. \end{align} $\widehat{N}_{R/L}$ is the total number operator of right/left moving fermions. \section{Impurity Hamiltonian and coupling with external fields} The scattering by a spinless impurity (located at $x=0$) is described by the following Hamiltonian. \begin{equation} \label{impurity1} \mathcal{H}_{\mathrm{imp}} = V_0 \sum_{\sigma = \uparrow, \downarrow}\, c_\sigma^\dag(x=0) c_\sigma(x=0). \end{equation} Projected on the lower band using Eq.(\ref{operator}) the impurity Hamiltonian Eq.(\ref{impurity1}) becomes \begin{equation} \label{impurity2} \mathcal{H}_{\mathrm{imp}} = (V_0 \lambda) (2\pi a)\,\big[ \psi_R^\dag(0) \psi_L(0) + \psi_L^\dag(0) \psi_R(0)\big ], \end{equation} where $a$ is a short distance cutoff of the order of the lattice spacing and the unimportant forward scattering terms are omitted. Note that the backscattering amplitude is suppressed by a factor $\lambda$ which is just an overlap of the two spinors at $k=\pm k_F$. Thus, this suppression is a consequence of charge-spin mixing. Employing the bosonization formula we get \begin{equation} \label{impurity3} \mathcal{H}_{\mathrm{imp}} = W_0 \Big( F_R^\dag F_L e^{-i 2 \theta(0,t)} + \mathrm{H.c} \Big ), \end{equation} where $W_0 = ( \lambda V_0) (2\pi a)$ and $F_{R/L}$ are the Klein factors\cite{delft}.
The renormalization group flow of the impurity scattering strength $W_0$ with the Hamiltonian Eq.(\ref{LL:Hamil}) and Eq.(\ref{impurity3}) is well understood.\cite{kane,akira} The scaling equation is ($dl = - \frac{d \Lambda}{\Lambda}$) \begin{equation} \label{scaling} \frac{d W_0(l)}{d l} = (1 - K ) W_0(l), \end{equation} where $\Lambda$ is the flowing energy cutoff of the system. If $K < 1$ (repulsive electron-electron interaction), the impurity scattering becomes stronger at lower energy. Therefore, it is natural to divide the problem into two regimes: the weak scattering (or high temperature) regime, where the impurity scattering can be treated perturbatively, and the strong scattering (or low temperature) regime, where it is better to start from two disconnected quantum wires which are weakly linked by tunneling at finite temperature.\cite{kane,gogolin} We will compute the charge and the spin transport in both regimes. For transport to occur, external fields must be applied. For the charge transport we will apply the \textit{potential difference} \cite{akira} ($V(x) = -\frac{V}{2} \mathrm{sign}(x)$) across the impurity. Similarly, for the spin transport, the \textit{magnetic field difference} along the z-axis \cite{akira} ( $\vec{B}_{\mathrm{p}}(x) = \frac{B_0}{2} \mathrm{sign}(x) \widehat{z}$ ) is applied across the impurity. We emphasize that $\vec{B}_{\mathrm{p}}(x)$ has to be distinguished from the magnetic field applied parallel to the wire (along the x-axis), which is necessary for the construction of the ASF itself. In general, the probe magnetic field can be applied in an arbitrary direction in the y-z plane. It turns out that the contribution coming from the y-component of $\vec{B}_p$ is multiplied by an oscillating factor $e^{ \pm i 2 k_F x}$, so that it becomes negligible upon spatial integration.
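The scaling equation (\ref{scaling}) has the closed-form solution $W_0(l) = W_0(0)\,e^{(1-K)l}$, so for $K<1$ the backscattering grows under the flow. A minimal sketch, with illustrative values of $K$ and the bare $W_0$, integrates the flow numerically and checks it against this solution:

```python
import numpy as np

# Integrate the RG equation dW0/dl = (1 - K) W0 of Eq.(scaling) by Euler
# steps and compare with the closed-form W0(l) = W0(0) exp[(1-K) l].
# K and the bare W0 are illustrative numbers.
def flow(W0, K, l_max, n_steps=100000):
    dl = l_max / n_steps
    W = W0
    for _ in range(n_steps):
        W += dl * (1.0 - K) * W
    return W

K = 0.8
W_num = flow(W0=0.01, K=K, l_max=3.0)
W_exact = 0.01 * np.exp((1.0 - K) * 3.0)
assert abs(W_num - W_exact) / W_exact < 1e-3
assert W_num > 0.01   # K < 1: backscattering grows toward low energy
```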
The Hamiltonian for the interaction with the potential difference is \begin{equation} \mathcal{H}_{V} = \sum_{\sigma=\uparrow,\downarrow} \int dx\, (-e) V(x) c^\dag_\sigma(x) c_\sigma(x). \end{equation} After the projection on the lower band using Eq.(\ref{operator}), the bosonized form of $\mathcal{H}_{V}$ is given by\cite{boundaryterm} \begin{equation} \label{potential} \mathcal{H}_{V} = \frac{e V}{\pi} \theta(x=0,t). \end{equation} The Hamiltonian for the interaction with the magnetic field difference in the z-direction is ($\uparrow = +1, \downarrow = -1$) $$\mathcal{H}_B = \sum_{\sigma=\pm 1} \, \int dx\,\mu_B B_{\mathrm{p}}(x) \sigma \,c^\dag_\sigma(x) c_\sigma(x).$$ Again after the projection on the lower band using Eq.(\ref{operator}) we obtain \begin{align} \label{magnetic3} &\mathcal{H}_B =\int dx\,\mu_B B_{\mathrm{p}}(x)\,\Big[ [(u_{k_F}^-)^2 - (v_{k_F}^-)^2] \psi^\dag_R(x) \psi_R(x) \cr &+[(u_{-k_F}^-)^2 - (v_{-k_F}^-)^2] \psi^\dag_L(x) \psi_L(x) \Big ]. \end{align} A short computation shows that \begin{equation} (u_{k}^-)^2 - (v_{k}^-)^2 = - \frac{ \eta_R k}{\sqrt{\epsilon_Z^2 + (\eta_R k)^2}}. \end{equation} Thus we arrive at ($\kappa = \frac{ \eta_R k_F}{\sqrt{\epsilon_Z^2 + (\eta_R k_F)^2}}$ ) \begin{equation} \label{magnetic} \begin{split} \mathcal{H}_B &= - \kappa \mu_B\, \int dx B_p(x) \Big[ \psi_R^\dag(x) \psi_R(x) - \psi_L^\dag(x) \psi_L(x)\Big ] \cr &= -\kappa \mu_B \int dx B_p(x) \frac{1}{\pi} \partial_x \phi(x). \end{split} \end{equation} Evaluating the integral of Eq.(\ref{magnetic}) we obtain \begin{equation} \label{magnetic2} \mathcal{H}_B = \frac{ \kappa \mu_B B_0}{\pi} \phi(x=0,t). \end{equation} In the case of the 1D ASF the couplings to the electric potential and the magnetic field are \textit{not} independent of each other, because $[\theta,\phi] \neq 0$ in general. This feature is very different from the spinful LL with spin-charge separation.
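Since $\lambda^2$ and $\kappa^2$ are complementary parts of the same ratio, they satisfy $\lambda^2 + \kappa^2 = 1$: a strong Rashba term suppresses backscattering ($\lambda \to 0$) while maximizing the coupling of the probe field to $\phi$ ($\kappa \to 1$). A small sketch with illustrative parameter values:

```python
import numpy as np

# The backscattering suppression factor lambda and the spin-coupling factor
# kappa defined in the text are built from the same ratio
# eps_Z^2 / (eps_Z^2 + (eta_R k_F)^2), hence lambda^2 + kappa^2 = 1.
# Parameter values are illustrative.
def lam_kappa(eps_Z, eta_R, k_F):
    denom = eps_Z**2 + (eta_R * k_F)**2
    lam = np.sqrt(eps_Z**2 / denom)          # suppresses backscattering
    kappa = eta_R * k_F / np.sqrt(denom)     # couples B_p to phi
    return lam, kappa

lam, kappa = lam_kappa(eps_Z=1e-3, eta_R=2e-9, k_F=1e6)
assert abs(lam**2 + kappa**2 - 1.0) < 1e-12
# Strong Rashba limit: lambda -> 0 (weak backscattering), kappa -> 1.
```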
Elaborating on this point further, it is interesting to compare the Hamiltonians Eqs.(\ref{potential}, \ref{magnetic2}) with those of the spinful LL:\cite{akira,note2} \begin{equation} \mathcal{H}_{V,sLL} = \frac{e V \theta_\rho(0)}{\pi}, \quad \mathcal{H}_{B,sLL} = \frac{ \mu_B B_p \theta_\sigma(0)}{2\pi}, \end{equation} where $\theta_\rho$ and $\theta_\sigma$ are the charge/spin boson phase fields which describe the fluctuations of the charge/spin density, and they are independent of each other in the sense that $[\theta_\rho,\theta_\sigma]=0$. To find the charge and the spin current we note that the electric potential couples to the charge and the magnetic field couples to the magnetic moment. Then from Eq.(\ref{potential}) and Eq.(\ref{magnetic2}), the following expressions for the charge/spin currents can be deduced. \begin{equation} \label{chargecurrent} J_\rho = \frac{(-e)}{\pi}\, \frac{ d \langle \theta(0,t) \rangle}{d t}. \end{equation} \begin{equation} \label{spincurrent} J_\sigma = (-1) \frac{\kappa}{\pi} \frac{d \langle \phi(0,t) \rangle}{d t}. \end{equation} Here $\langle \theta(0,t) \rangle$ and $ \langle \phi(0,t) \rangle$ are the averages over the non-equilibrium ensemble. The Keldysh formalism will be employed in computing these non-equilibrium averages. \section{Weak scattering regime} \label{weak} The total Hamiltonian of the system is [see Eqs.(\ref{LL:Hamil},\ref{impurity3})] \begin{equation} \mathcal{H} = \mathcal{H}_0 + \mathcal{H}_{\mathrm{imp}} + \mathcal{H}_S, \end{equation} where $\mathcal{H}_S$ is the source Hamiltonian for the coupling to the external field. For the computation of charge transport $\mathcal{H}_S = \mathcal{H}_V$ [Eq.(\ref{potential})], and for the computation of spin transport $\mathcal{H}_S = \mathcal{H}_B$ [Eq.(\ref{magnetic2})]. In the weak scattering regime we can treat $\mathcal{H}_{\mathrm{imp}}$ perturbatively.
In the Keldysh path integral formulation the key element is the following functional integral\cite{weiss,kamenev,rammer} \begin{equation} Z = \int D[\theta_{f,b}, \phi_{f,b}] \,e^{i S_K + i S_{\mathrm{imp}} + i S_S}, \end{equation} where $(\theta,\phi)_{f,b}$ denote the phase boson fields defined on the forward time branch and the backward time branch of the closed time contour, respectively. $S_K$ is the Keldysh action corresponding to the Hamiltonian Eq.(\ref{LL:Hamil}). It is basically the difference of the action Eq.(\ref{theaction}), $S_K = S_{0,f}-S_{0,b}$, between the forward and the backward branch, which is eventually expressed in terms of $\theta_{c/q}=(\theta_f\pm \theta_b)/2$ and $\phi_{c/q}=(\phi_f \pm \phi_b)/2$. $S_{\mathrm{imp}}$ and $S_S$ are the Keldysh actions for the impurity Hamiltonian and the source Hamiltonian $\mathcal{H}_S$, respectively. \underline{The spin transport} - The spin current Eq.(\ref{spincurrent}) can be computed easily by coupling to external sources.\cite{kamenev} The Hamiltonian Eq.(\ref{magnetic2}) expressed in the form of a Keldysh source action is \begin{equation} S_B = - \frac{\kappa \mu_B}{\pi} \int_{-\infty}^\infty dt \,\Big [ B_{0q}(t) \phi_{c}(0,t) + B_{0c}(t) \phi_{q}(0,t) \Big ], \end{equation} where $B_{0 c/q}$ is the classical/quantum component of the external magnetic field.\cite{kamenev} The spin current in the 0-th order of impurity scattering is \begin{equation} J_\sigma^{(0)}(t) = i \mu_B^{-1} \frac{d}{d t}\left( \frac{ \delta Z^{(0)}[B_{0c},B_{0q}]}{\delta B_{0q}(t)} \Bigg \vert_{B_{0q} =0} \right ), \end{equation} where $Z^{(0)}[B_{0c},B_{0q}] = \langle e^{i S_B} \rangle$. Here the average means the Keldysh functional integral with respect to $S_K$. Since $S_K$ is Gaussian in $\theta_{c,q}$ and $\phi_{c,q}$ we can use the identity $\langle e^{ i X } \rangle = e^{-\langle X X \rangle/2}$.
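The Gaussian identity quoted above can be checked by a quick Monte-Carlo average over a zero-mean Gaussian variable standing in for the boson field (the width $\sigma$ is arbitrary):

```python
import numpy as np

# Monte-Carlo check of the Gaussian identity <exp(iX)> = exp(-<X^2>/2)
# for a zero-mean Gaussian X standing in for the Gaussian-distributed
# boson field; sigma is an arbitrary illustrative width.
rng = np.random.default_rng(0)
sigma = 0.7
X = rng.normal(0.0, sigma, size=400_000)
lhs = np.mean(np.exp(1j * X))       # sample average of exp(iX)
rhs = np.exp(-sigma**2 / 2.0)       # exp(-<X^2>/2)
assert abs(lhs - rhs) < 5e-3
```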
Employing the identity $\langle \phi_q \phi_q \rangle =0$, we find \begin{align} &J_\sigma^{(0)}(t) = \frac{i}{\mu_B} ( \frac{\kappa \mu_B}{\pi})^2 \frac{d }{d t} \Big[ \int_{-\infty}^\infty dt' \cr \times &( \langle \phi_q(0,t') \phi_c(0,t) \rangle B_{0c}(t') + \langle \phi_c(0,t) \phi_q(0,t') \rangle B_{0c}(t') ) \Big ] \cr &=\mu_B \frac{\kappa^2}{2\pi K} B_0, \end{align} where we have used ($\Theta(t)$ is the Heaviside step function) \begin{align} \label{phi-function} \langle \phi_c (0,t_1) \phi_q (0, t_2) \rangle &= - \frac{i \pi}{4 K} \Theta(t_1 - t_2), \cr \langle \phi_q (0,t_1) \phi_c (0, t_2) \rangle &= - \frac{i \pi}{4 K} \Theta(t_2 - t_1). \end{align} The first order correction to the spin current by impurity scattering vanishes because a single Klein factor does not conserve the fermion number. For the second order correction this argument does not work, since $F_{L/R} F^\dag_{L/R}=1$ conserves the fermion number. The second order correction is schematically given by \begin{equation} J_\sigma^{(2)} \propto \frac{d}{dt}\,\left( \frac{\delta}{\delta B_q(t)}\, \langle e^{i S_B} S_{\mathrm{imp}} S_{\mathrm{imp}} \rangle \right ). \end{equation} Since the impurity scattering is proportional to $e^{ 2 i \theta(0,t)}$, the functional differentiation generates Green functions only of the type $\langle \theta_{c/q}(x = 0,t) \phi_{c/q}(x=0,t') \rangle$, which vanish identically. Note that this vanishing of the second order correction is \textit{solely due to the specific form of the Hamiltonian Eq.(\ref{magnetic2}), whose origin can be traced back to the spin-charge mixing effect of the 1D ASF}. Summarizing the result, \begin{equation} \label{main1} J_\sigma = \mu_B \frac{\kappa^2}{2\pi K} B_0 = J_\sigma^{(0)},\;\;J_\sigma^{(1)} = J_\sigma^{(2)}=0. \end{equation} This is one of the main results of this paper.
The corrections can only stem from the failure of the linearization approximation which is necessary for the bosonization approach; therefore, such corrections are expected to be very small at low temperature. From Eq.(\ref{main1}) the spin conductance easily follows. \begin{equation} \label{main1-conductance} G_\sigma \equiv \lim_{B_0 \to 0} \frac{J_\sigma}{B_0} = \mu_B \frac{\kappa^2}{2\pi K}, \;\; \text{no corrections}. \end{equation} \underline{The charge transport} - The source Hamiltonian necessary for the computation of the charge current is given by Eq.(\ref{potential}), which does \textit{not} depend on the field $\phi$. With $\phi$ integrated out, the action Eq.(\ref{theaction}) becomes \begin{equation} \label{theta-action} S_\theta =\frac{1}{2\pi K} \int dt dx \Big( \frac{1}{v_0} (\partial_t \theta)^2 - v_0 (\partial_x \theta)^2 \Big). \end{equation} This is the action for the \textit{spinless} LL with LL parameter $K$. The charge transport based on the action Eq.(\ref{theta-action}) has been calculated by linear response theory\cite{kane} and by the influence functional method\cite{akira}. The calculation of the 0-th order charge current is entirely identical to that of the spin current except for the substitution $\phi \to \theta$ and the change of parameters. \begin{equation} \label{chargezero} J_\rho^{(0)} = \frac{e^2 K }{ 2\pi } V. \end{equation} This is just the charge conductance of a \textit{one-channel (or spinless)} quantum wire. The first order correction due to impurity scattering vanishes, again due to the Klein factor.
The second order correction is given by \begin{align} \label{secondorder} &J_\rho^{(2)} \propto i \frac{d}{d t} \Bigg[ \frac{\delta}{\delta V_q(t)} \int dt_1 dt_2 \Big \langle (e^{2i \theta_f(t_1) } - e^{2i \theta_b(t_1) }) \cr & \times(e^{-2i \theta_f(t_2) } - e^{-2i \theta_b(t_2 ) }) e^{ \frac{e}{\pi}\,i \int d t' (V_q \theta_c +V_c \theta_q)} \Big \rangle \Big \vert_{V_q =0}\Bigg ], \end{align} where the average is done with respect to the Keldysh action $S_K$. The average is a Gaussian functional integration, and the result turns out to be \begin{align} \label{chargesecond} J_\rho^{(2)} &\sim W_0^2 \int_{-\infty}^\infty dt' e^{-2 K C(t') }\, \sin ( \frac{e V K t'}{2}) \sin ( \frac{ \pi K}{2} \mathrm{sign}(t')) \cr &\sim W_0^2 \int_{0}^\infty dt'\frac{ \sin (\frac{e V K t'}{2})}{ (t^{'2}+\tau_c^2)^{K} ( \frac{ \sinh ( t' \pi /\beta)}{t' \pi /\beta} )^{2K}}, \end{align} where \begin{align} \label{C-function} C(t') &\equiv \int_0^\infty \frac{d \omega}{\omega}\,e^{-\omega \tau_c} \coth\frac{\beta \omega}{2} [1- \cos \omega t'] \cr &= \frac{1}{2} \ln \frac{ (t')^2 +\tau_c^2}{\tau_c^2} + \ln \left [ \frac{ \sinh \frac{ t' \pi }{\beta}}{ \frac{ t' \pi }{\beta}} \right ], \end{align} and $\tau_c$ is a short-time cutoff. Collecting the previous results, we get (up to the second order in $W_0$) \begin{equation} \label{main2} J_\rho = \frac{e^2 K}{2 \pi} \Big[ V - c_\rho W_0^2 \int_{0}^\infty dt'\frac{ \sin (\frac{e V K t'}{2})}{ (t^{\prime 2}+\tau_c^2)^{K} ( \frac{ \sinh ( t' \pi /\beta)}{t' \pi /\beta} )^{2K}} \Big ]. \end{equation} $c_\rho$ is a constant. Note that this expression is essentially identical to that of Fisher and Zwerger ( Eq.(3.51) of Ref.[\onlinecite{zwerger}] ). From Eq.(\ref{main2}) the charge conductance easily follows: \begin{equation} \label{main2:conductance} G_\rho = \lim_{V \to 0} \frac{J_\rho}{V} = \frac{ e^2 K }{2\pi} \Big[1 - \tilde{c}_\rho T^{2K-2} \Big ]. \end{equation} It is very interesting to highlight our results on the ASF against those of the spinful LL.
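The quoted $T^{2K-2}$ scaling of the correction can be checked numerically: the $V\to 0$ slope of the integral in Eq.(\ref{main2}) is proportional to $\int_0^\infty t\,dt/[(t^2+\tau_c^2)^K(\sinh(\pi t/\beta)/(\pi t/\beta))^{2K}]$, which should grow as $\beta^{2-2K}$. A sketch with illustrative $K$ and $\tau_c$:

```python
import numpy as np
from scipy.integrate import quad

# Temperature scaling of the backscattering correction: the V -> 0 slope of
# the integral in Eq.(main2) is
#   J(beta) = int_0^inf dt t / [(t^2+tau_c^2)^K (sinh(pi t/beta)/(pi t/beta))^{2K}],
# expected to scale as beta^{2-2K}, i.e. the correction to G goes as T^{2K-2}.
# K and tau_c are illustrative values.
K, tau_c = 0.7, 1.0

def J(beta):
    def integrand(t):
        x = np.pi * t / beta
        thermal = (np.sinh(x) / x) ** (2 * K)
        return t / ((t**2 + tau_c**2) ** K * thermal)
    # integrand is exponentially suppressed for t >> beta/pi
    val, _ = quad(integrand, 0.0, 20.0 * beta, limit=200)
    return val

# doubling beta should multiply J by ~ 2^{2-2K}
exponent = np.log2(J(400.0) / J(200.0))
assert abs(exponent - (2 - 2 * K)) < 0.1
```

The residual deviation of the fitted exponent from $2-2K$ comes from the short-time cutoff $\tau_c$, and shrinks as $\beta/\tau_c$ grows.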
From Eq.(3.15) and Eq.(3.18) of Ref.[\onlinecite{akira}] we have \begin{align} \label{LL:weak} G_{\rho,sLL} & = \frac{ e^2 K_\rho}{\pi} \Big[ 1 - c_1( \frac{\pi T}{\Lambda})^{K_\rho+K_\sigma - 2} + \cdots \Big ], \cr G_{\sigma,sLL} & = \frac{ \mu_B K_\sigma}{\pi} \Big[ 1 - c_2 ( \frac{\pi T}{\Lambda})^{K_\rho+K_\sigma - 2} + \cdots \Big ], \end{align} where only the leading terms are indicated. $K_\rho$ and $K_\sigma$ are the LL parameters for the charge and the spin degrees of freedom, respectively. $c_{1,2}$ are constants. The comparison of the charge conductance Eq.(\ref{main2:conductance}) with Eq.(\ref{LL:weak}) shows that the charge transport of the 1D ASF essentially behaves like that of a \textit{spinless LL}. However, the LL parameter $K$ depends sensitively on the Rashba SOI and the Zeeman interaction (recall that $g_2$ depends on them). The spin conductance of the 1D ASF Eq.(\ref{main1-conductance}) is \textit{qualitatively} different from that of the spinful LL Eq.(\ref{LL:weak}). The absence of corrections to the spin conductance of the 1D ASF reflects the dependence of the spin orientation on the wave number. Backscattering reverses the momentum, and this degrades the charge flow. However, from the viewpoint of spin, the momentum-reversed state has a spin orientation which is almost parallel to the one in the absence of the impurity, so the spin current does not degrade. Even the electron-electron interaction cannot modify this property significantly. \section{Strong scattering regime} \label{strong} As mentioned in Sec. III, the proper starting point in the strong scattering regime at zero temperature is two disconnected semi-infinite wires. Finite temperature and external fields make tunneling between the two wires possible, resulting in transport. The 1D interacting system with a boundary is most conveniently described by open-boundary bosonization.\cite{gogolin} Let us designate the two disconnected wires by 1 and 2.
For each semi-infinite wire, the boundary condition at the end ($x=0$) relates the left and right moving electrons, so that the left moving fields can be expressed solely in terms of right moving fields (as reflected images) \begin{equation} \psi_{a L}(x) = - \psi_{a R}(-x), \quad \rho_{a L}(x) = \rho_{a R}(-x),\;a =1,2. \end{equation} The bosonized Hamiltonian of each wire, expressed purely in terms of the right moving fields, is \begin{align} \label{open:hamil} \mathcal{H}_a &= \pi (v_F+ \frac{g_4}{2\pi} )\, \int_{-L/2}^{L/2} dx \rho_{a R}^2(x) \cr &+ \frac{g_2}{2} \, \int_{-L/2}^{L/2} dx \rho_{a R}(x) \rho_{a R}(-x),\;\; a = 1, 2. \end{align} Note that the last term of Eq.(\ref{open:hamil}) is \textit{non-local in space}. It is interesting to compare Eq.(\ref{open:hamil}) with Eq.(\ref{LL:Hamil}) and to notice how the presence of the boundary is reflected in the structure of the Hamiltonian. The density operator in terms of the \textit{chiral} boson field is given by \begin{equation} \label{density} \rho_{a R}(x) = \frac{\widehat{N}_a}{L} + \frac{1}{2\pi}\, \partial_x \phi_{a R}(x), \end{equation} where $\widehat{N}_a$ is the fermion number operator of the $a$-th wire. The tunneling between the two wires is given by \cite{kane,gogolin} \begin{equation} \mathcal{H}_T = t_u \Big [ F_{1 R} F_{2 R}^\dag e^{i \phi_{1R}(x=0) - i \phi_{2 R}(x=0)} + \mathrm{H. c} \Big ]. \end{equation} The coupling to the potential difference is described by the Hamiltonian \begin{equation} \label{potential-s} \mathcal{H}_{V,s} = \frac{e V}{2} ( \widehat{N}_1 - \widehat{N}_2). \end{equation} The comparison of the two Hamiltonians Eq.(\ref{potential}) and Eq.(\ref{potential-s}) reveals an important difference in the charge transport mechanism between the weak and the strong scattering regimes.
The Hamiltonian in the weak scattering regime Eq.(\ref{potential}) is given in terms of the \textit{phase field} $\theta$, which commutes with the Klein factors, while the Hamiltonian in the strong scattering regime Eq.(\ref{potential-s}) does \textit{not} commute with the Klein factors. Because of this property it is not feasible to apply the Keldysh formalism to the charge transport in the strong scattering regime. As for the coupling with the magnetic field difference, starting from Eq.(\ref{magnetic3}) one can derive \begin{align} \mathcal{H}_{B,s} &= - \kappa \mu_B\,\Big[ \int_{-L/2}^0 dx \frac{-B_0}{2} [ \rho_{1R}(x) - \rho_{1 L}(x) ] \cr &+\int^{L/2}_0 dx \frac{B_0}{2} [ \rho_{2 R}(x) - \rho_{2 L}(x) ] \Big ]. \end{align} Using Eq. (\ref{density}) (we set $\phi_{a R}(x=\pm L/2) =0$) we obtain \begin{equation} \mathcal{H}_{B,s} = \frac{\kappa \mu_B B_0}{2 \pi}\, \Big[ \phi_{1R}(x=0) + \phi_{2R}(x=0) \Big ]. \end{equation} The examination of the tunneling and external field Hamiltonians necessitates the introduction of the symmetric and the antisymmetric combinations of operators: \begin{equation} \phi_\pm = \frac{\phi_{1R} \pm \phi_{2R}}{\sqrt{2}}, \;\; \widehat{N}_\pm = \frac{ \widehat{N}_1 \pm \widehat{N}_2}{2}. \end{equation} Let us also define $F \equiv F_{1R} F_{2R}^\dag$, which satisfies the following relations: \begin{equation} F F^\dag = F^\dag F = 1,\;\; [\widehat{N}_-, F] = - F, \;\; [\widehat{N}_+, F] = 0.
\end{equation} In terms of these new fields, \begin{align} \label{allofthem} \mathcal{H}_T &= t_u \Big[ F e^{ i \sqrt{2} \phi_-(x=0)} + \mathrm{H.c} \Big ], \cr \mathcal{H}_{V,s} &= - e V \widehat{N}_-, \quad \mathcal{H}_{B,s} = \frac{\kappa \mu_B B_0}{\sqrt{2}\pi}\, \phi_+(x=0), \cr \mathcal{H}_1 + \mathcal{H}_2 & = \mathcal{H}_{0+} + \mathcal{H}_{0-}, \end{align} where the Hamiltonians $\mathcal{H}_{0 \pm}$ (in terms of $\phi_{\pm}$) are given by \begin{align} \label{hamil:strong} &\mathcal{H}_{0 \pm} = \frac{(v_F + g_4/2\pi)}{4\pi}\,\int_{-L/2}^{L/2} dx (\partial_x \phi_{\pm})^2 \cr &+\frac{g_2}{2(2\pi)^2}\,\int_{-L/2}^{L/2} dx dy \delta(x+y) \partial_x \phi_{\pm}(x) \partial_y \phi_{\pm}(y). \end{align} As can be seen in Eq.(\ref{allofthem}), the Zeeman coupling Hamiltonian $\mathcal{H}_{B,s}$ ( which is solely expressed in terms of $\phi_+$ ) is \textit{decoupled} from the tunneling Hamiltonian (which is solely expressed in terms of $\phi_-$), and this implies that \textit{the spin transport is not affected by the tunneling}. The Hamiltonian Eq.(\ref{hamil:strong}) can be diagonalized by the following Bogoliubov transformation.\cite{bosobook} \begin{equation} \label{transformation} \phi_a(t,x) = \varphi_a(t,x) \cosh \zeta - \varphi_a(t,-x) \sinh \zeta . \end{equation} After the diagonalization, the corresponding action for the chiral boson $\varphi_\pm$ is given by\cite{wen} \begin{align} \label{action:strong2} S_{0\pm} &= - \frac{1}{4\pi}\,\int_{-\infty}^\infty dt \int_{-L/2}^{L/2} dx \partial_t \varphi_\pm \partial_x \varphi_\pm \cr &-\frac{v_0}{4\pi}\,\int_{-\infty}^\infty dt \int_{-L/2}^{L/2} dx (\partial_x \varphi_{\pm})^2, \end{align} where $K$ and $v_0$ are the same LL parameter and the velocity of collective excitation in the weak scattering regime given in Eq.(\ref{parameters}). The Bogoliubov parameters are \begin{equation} \label{cosh} \cosh \zeta = \frac{K+K^{-1}}{2}, \quad \sinh \zeta = \frac{K-K^{-1}}{2}. 
\end{equation} From Eq.(\ref{transformation},\ref{cosh}) we find that when the field is near the boundary \begin{equation} \phi_a(x \to 0, t) = \frac{1}{K} \varphi_a(x \to 0, t). \end{equation} \underline{The spin transport}- For the spin transport we can still apply the Keldysh formalism. The spin current can be calculated by \begin{align} &J_\sigma(t) =i \mu_B^{-1} \frac{d}{dt}\, \langle e^{ i S_{B,s}} \rangle \Big \vert_{B_{0q} \to 0}, \cr & S_B =\frac{ \mu_B \kappa}{\sqrt{2}\pi} \int dt \Big( B_{0q}(t) \phi_{c+}(t) + B_{0c}(t) \phi_{q+}(t) \Big ). \end{align} The calculation is entirely identical to that of the weak scattering case: \begin{equation} \label{main3} J_\sigma(t) = \frac{ \kappa^2 \mu_B}{2\pi K} B_0,\;\;\text{no corrections}, \end{equation} where we have used $\langle \phi_{c+}(t_1) \phi_{q+}(t_2) \rangle = - i \pi K \Theta(t_1 - t_2)/2$. This result is the \textit{same as that of the weak scattering regime}. It indicates that the impurity is basically decoupled from the spin degrees of freedom. The spin conductance is \begin{equation} \label{main3-results} G_\sigma = \frac{J_\sigma}{B_0} \Big \vert_{B_0 \to 0} = \frac{ \mu_B \kappa^2}{2\pi K},\;\; \text{no corrections}. \end{equation} One may ask how a finite spin conductance is possible at zero temperature. At the $T = 0$ fixed point, basically all the right movers are reflected into the left movers. However, as mentioned previously, the spin does not see the boundary, since the orientation of the spin remains the same in either the presence or the absence of the boundary. \underline{The charge transport}- The charge current is given by \begin{equation} J_\rho(t) = e \frac{ d \langle \widehat{N}_- (t) \rangle}{d t}. \end{equation} The time-dependence of $ \widehat{N}_-$ solely comes from the tunneling Hamiltonian $\mathcal{H}_T$. An efficient way of treating the dynamics of $ \widehat{N}_- $ and the zero modes is discussed in Ref.[\onlinecite{balents}].
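The Bogoliubov parameters of Eq.(\ref{cosh}) can be checked to form a genuine hyperbolic rotation, $\cosh^2\zeta - \sinh^2\zeta = 1$, with $\cosh\zeta - \sinh\zeta = 1/K$ reproducing the boundary relation $\phi_a(x\to 0) = \varphi_a(x\to 0)/K$:

```python
# Check of Eq.(cosh): cosh(zeta) = (K + 1/K)/2, sinh(zeta) = (K - 1/K)/2
# satisfy cosh^2 - sinh^2 = 1 (a genuine Bogoliubov rotation) and
# cosh - sinh = 1/K, which gives phi_a(x -> 0) = varphi_a(x -> 0)/K.
# The sample K values are illustrative.
for K in [0.3, 0.7, 1.0, 1.5]:
    c = (K + 1.0 / K) / 2.0   # cosh(zeta)
    s = (K - 1.0 / K) / 2.0   # sinh(zeta)
    assert abs(c**2 - s**2 - 1.0) < 1e-12
    assert abs((c - s) - 1.0 / K) < 1e-12
```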
It is clear that the only non-vanishing contribution to the current comes from the second order in the tunneling Hamiltonian. In the interaction picture with respect to $\mathcal{H}_{0-} + \mathcal{H}_{V,s}$, \begin{equation} (\mathcal{H}_{T})_{I}(t) = t_u \Big[ e^{ -i e V t} F e^{i \sqrt{2} \phi_-(0,t)} +\mathrm{H.c} \Big ]. \end{equation} The time evolution of $\phi_-(0,t)$ is implicitly assumed and the subscript $I$ is omitted. Since the Klein factor and the number operator $\widehat{N}_-$ do not obey the canonical commutation relation, the direct application of the Keldysh path integral is not feasible. Instead, it is better to evaluate the expectation values directly in the Dyson expansion of time-dependent perturbation theory. A straightforward calculation, employing $F^\dag \widehat{N}_- F = ( \widehat{N}_- -1)$ and $F \widehat{N}_- F^\dag = ( \widehat{N}_- +1)$, shows that \begin{align} \langle \widehat{N}_-(t) \rangle & = 2 t_u^2 \int_{-\infty}^t d t_1 \int_{-\infty}^t d t_2\, e^{-2 C(t_1-t_2)/K} \cr &\times \sin [eV(t_1-t_2)] \sin [ \frac{\pi}{K} \mathrm{sign}(t_1-t_2) ], \end{align} where $C(t)$ is given by Eq.(\ref{C-function}). Now using the explicit result of $C(t)$ we get \begin{equation} \label{main4} J_\rho(t) \sim \, e t_u^2 \int_0^\infty d t' \,\frac{ \sin (e V t')}{ (t^{\prime 2}+\tau_c^2)^{1/K} ( \frac{ \sinh ( t' \pi /\beta)}{t' \pi /\beta} )^{2/K}}. \end{equation} From the above result the charge conductance at finite $T$ easily follows: \begin{equation} \label{main4-results} G_\rho = c_1' e^2 t_u^2 T^{2/K-2}, \end{equation} where $c_1'$ is a constant. Now let us compare our results Eqs.(\ref{main3-results},\ref{main4-results}) with those of the spinful LL.
The charge and the spin conductance of the spinful LL in the strong scattering regime are given by (see Eq.(4.21) and Eq.(4.26) of Ref.[\onlinecite{akira}]) \begin{align} \label{LL:strong} G_\rho(T) &\sim d_1 e^2 t^2 T^{\frac{1}{K_\rho}+\frac{1}{K_\sigma}-2},\cr G_\sigma(T) &\sim d_2 \mu_B t^2 T^{\frac{1}{K_\rho}+\frac{1}{K_\sigma}-2}, \end{align} where $d_{1,2}$ are constants. Again the charge transport of the 1D ASF in the strong scattering regime is consistent with that of a \textit{spinless} LL, while the spin transport is radically different. The spin current of the spinful LL is degraded by impurities while that of the ASF is not. \section{Summary and Discussions} In this paper, we have investigated the effects of a spinless impurity on the transport properties of the 1D ASF. Due to the strong spin-charge mixing effect, the spin transport is not affected by the impurity, which is radically different from the ordinary spinful LL, where the spin current is degraded by the impurity as strongly as the charge current. On the other hand, the charge transport is essentially identical with that of the spinless LL. The results of this paper can be verified by direct transport measurements, or by the recently developed momentum-selective tunneling transport measurements.\cite{governale2002,boese2001,auslaender2002} \begin{acknowledgements} This work was supported by the Grant No. R01-2005-000-10352-0 from the Basic Research Program of the Korean Science and Engineering Foundation and by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD)(KRF-2005-070-C0044). \end{acknowledgements}
\section{Introduction} \label{intro} The Camassa-Holm equation (CH) \begin{equation}\label{eq1} u_{t}-u_{xxt}+2\omega u_{x}+3uu_{x}-2u_{x}u_{xx}-uu_{xxx}=0, \end{equation} where $\omega$ is a real constant, first appeared in \cite{FF81} as an equation with a bi-Hamiltonian structure. In \cite{CH93} it was put forward as a model describing the unidirectional propagation of shallow water waves over a flat bottom, see also \cite{J02}. CH is a completely integrable equation \cite{BBS98,CM99,C,C01,L02,R02}, describing permanent and breaking waves \cite{CE98,M,C00}. Its solitary waves are stable solitons if $\omega > 0$ \cite{BBS99,CS02,J03} or peakons if $\omega = 0$ \cite{CS00}. CH also arises as the equation of the geodesic flow for the $H^{1}$ right-invariant metric on the Bott-Virasoro group (if $\omega > 0$) \cite{M98,CKKT04} and on the diffeomorphism group (if $\omega = 0$) \cite{CK02,CK03}. The bi-Hamiltonian form of (\ref{eq1}) is \cite{CH93,FF81}: \begin{equation}\label{eq2} m_{t}=-(\partial-\partial^{3})\frac{\delta H_{2}[m]}{\delta m}=-(2\omega \partial +m\partial+\partial m)\frac{\delta H_{1}[m]}{\delta m}, \end{equation} \noindent where \begin{eqnarray}\label{eq4a} m = u-u_{xx} \end{eqnarray} \noindent and the Hamiltonians are \begin{eqnarray} \label{eq2a} H_{1}[m]&=&\frac{1}{2}\int m u dx \\\label{eq2b} H_{2}[m]&=&\frac{1}{2}\int(u^{3}+uu_{x}^{2}+2\omega u^{2})dx. \end{eqnarray} \noindent The integration is from $-\infty$ to $\infty$ in the case of Schwartz class functions, and over one period in the periodic case. In general, there exists an infinite sequence of conservation laws (multi-Hamiltonian structure) $H_n[m]$, $n=0,\pm1, \pm2,\ldots$, including (\ref{eq2a}) and (\ref{eq2b}), such that \cite{L05} \begin{equation}\label{eq2aa} (\partial-\partial^{3})\frac{\delta H_{n}[m]}{\delta m}=(2\omega \partial +m\partial+\partial m)\frac{\delta H_{n-1}[m]}{\delta m}.
\end{equation} The CH equation can be written as \begin{equation}\label{eq1a} m_{t}=\{m, H_{1}\}, \end{equation} \noindent where the Poisson bracket is defined as \begin{equation}\label{PB} \{A,B\}\equiv \int \frac{\delta A}{\delta m}(-2\omega \partial -m\partial-\partial m)\frac{\delta B}{\delta m}dx, \end{equation} \noindent or in more obvious antisymmetric form \begin{equation}\label{PBa} \{A,B\}=-\int (\omega+m)\Big(\frac{\delta A}{\delta m}\partial \frac{\delta B}{\delta m}-\frac{\delta B}{\delta m}\partial \frac{\delta A}{\delta m}\Big)dx. \end{equation} CH has an infinite number of conserved quantities. Schemes for the computation of the conservation laws can be found in \cite{FS99,R02,L05,CL05,I05}. The equation (\ref{eq1}) admits a Lax pair \cite{CH93,C01} \begin{eqnarray} \label{eq3} \Psi_{xx}&=&\Big(\frac{1}{4}+\lambda (m+\omega)\Big)\Psi \\\label{eq4} \Psi_{t}&=&\Big(\frac{1}{2\lambda}-u\Big)\Psi_{x}+\frac{u_{x}}{2}\Psi+\gamma\Psi \end{eqnarray} \noindent where $\gamma$ is an arbitrary constant. We will use this freedom for a proper normalization of the eigenfunctions. We consider the case where $m$ is a Schwartz class function, $\omega>0$ and $m(x,0)+\omega > 0$ (see \cite{C,CM99} for a discussion of the periodic case). Then $m(x,t)+\omega > 0$ for all $t$ \cite{C01}. Let $k^{2}=-\frac{1}{4}-\lambda \omega$, i.e. \begin{eqnarray} \label{lambda} \lambda(k)= -\frac{1}{\omega}\Big( k^{2}+\frac{1}{4}\Big).\end{eqnarray} The spectrum of the problem (\ref{eq3}) under these conditions is described in \cite{C01}. The continuous spectrum in terms of $k$ corresponds to $k$ -- real. The discrete spectrum (in the upper half plane) consists of finitely many points $k_{n}=i\kappa _{n}$, $n=1,\ldots,N$ where $\kappa_{n}$ is real and $0<\kappa_{n}<1/2$. 
For all real $k\neq 0$ a basis in the space of solutions of (\ref{eq3}) can be introduced, fixed by its asymptotic when $x\rightarrow\infty$ \cite{C01} (see also \cite{ZMNP}): \begin{eqnarray} \label{eq5} \psi_{1}(x,k)&=&e^{-ikx}+o(1), \qquad x\rightarrow\infty; \\\label{eq6} \psi_{2}(x,k)&=& e^{ikx}+o(1), \qquad x\rightarrow \infty. \end{eqnarray} \noindent Another basis can be introduced, fixed by its asymptotic when $x\rightarrow -\infty$: \begin{eqnarray} \label{eq5a} \varphi_{1}(x,k)&=&e^{-ikx}+o(1), \qquad x\rightarrow -\infty; \\\label{eq6a} \varphi_{2}(x,k)&=& e^{ikx}+o(1), \qquad x\rightarrow -\infty. \end{eqnarray} \noindent For all real $k\neq 0$, if $\Psi(x,k)$ is a solution of (\ref{eq3}), then $\Psi(x,-k)$ is also a solution, thus \begin{eqnarray} \label{eq5aa} \varphi_{1}(x,k)=\varphi_{2}(x, -k), \qquad \psi_{1}(x,k)=\psi_{2}(x, -k). \end{eqnarray} \noindent Since $m$ in (\ref{eq3}) is real, for any $k$ we have \begin{eqnarray} \label{eq6aa} \varphi_{1}(x,k)=\bar{\varphi}_{2}(x, \bar{k}), \qquad \psi_{1}(x,k)=\bar{\psi}_{2}(x, \bar{k}). \end{eqnarray} The vectors of each basis are linear combinations of the vectors of the other basis: \begin{eqnarray} \label{eq7} \varphi_{i}(x,k)=\sum_{l=1,2}T_{il}(k)\psi_{l}(x,k) \end{eqnarray} \noindent where the matrix $T(k)$ defined above is called the scattering matrix. For real $k\neq 0$, instead of $\varphi_{1}(x,k)$, $\varphi_{2}(x,k)$, $\psi_{1}(x,k)$, $\psi_{2}(x,k)$, due to (\ref{eq6aa}) we can for simplicity write $\varphi(x,k)$, $\bar{\varphi}(x,k)$, $\psi(x,k)$, $\bar{\psi}(x,k)$, respectively. Thus $T(k)$ has the form \begin{eqnarray} \label{T} T(k) = \left( \begin{array}{cc} a(k)& b(k) \\ \bar{b}(k) & \bar{a}(k) \\ \end{array} \right) \, \end{eqnarray} \noindent and clearly \begin{eqnarray} \label{eq8} \varphi(x,k)=a(k)\psi(x,k)+b(k)\bar{\psi}(x,k).
\end{eqnarray} \noindent The Wronskian $W(f_{1},f_{2})\equiv f_{1}\partial_{x}f_{2}-f_{2}\partial_{x}f_{1}$ of any pair of solutions of (\ref{eq3}) does not depend on $x$. Therefore \begin{eqnarray} \label{eq9} W(\varphi(x,k), \bar{\varphi}(x,k))= W(\psi(x,k), \bar{\psi}(x,k))=2ik \end{eqnarray} \noindent From (\ref{eq8}) and (\ref{eq9}) it follows that \begin{eqnarray} \label{eq10} |a(k)|^{2}-|b(k)|^{2}=1, \end{eqnarray} \noindent i.e. $\det (T(k))=1$. Computing the Wronskians $W(\varphi,\bar{\psi})$ and $W(\psi,\varphi)$ and using (\ref{eq8}), (\ref{eq9}) we obtain: \begin{eqnarray} \label{eq10a}a(k)&=&(2ik)^{-1} \Big(\bar{\psi}_{x}(x,k)\varphi(x,k)-\bar{\psi}(x,k)\varphi_{x}(x,k)\Big), \\\label{eq10b} b(k)&=&(2ik)^{-1} \Big(\varphi_{x}(x,k)\psi(x,k)-\varphi(x,k)\psi_{x}(x,k)\Big).\end{eqnarray} In analogy with the spectral problem for the KdV equation, the quantities $\mathcal{T}(k)=a^{-1}(k)$ and $\mathcal{R}(k)=b(k)/a(k)$ represent the transmission and reflection coefficients respectively. Indeed, the asymptotic of the eigenfunction $\varphi(x,k)/a(k)$ when $x\rightarrow\infty$ is \begin{eqnarray} \label{eq11} \frac{\varphi(x,k)}{a(k)}=e^{-ikx}+\mathcal{R}(k)e^{ikx}+o(1),\end{eqnarray} \noindent i.e. a superposition of incident ($e^{-ikx}$) and reflected ($\mathcal{R}(k)e^{ikx}$) waves. For $x\rightarrow-\infty$ we have a transmitted wave: \begin{eqnarray} \label{eq12} \frac{\varphi(x,k)}{a(k)}=\mathcal{T}(k)e^{-ikx}+o(1)\end{eqnarray} \noindent From (\ref{eq10}) it follows that the scattering matrix is unitary, i.e. \begin{eqnarray} \label{eq13} |\mathcal{T}(k)|^{2}+|\mathcal{R}(k)|^{2}=1. \end{eqnarray} \noindent In what follows we will show that the entire information about $T(k)$ in (\ref{T}) is provided by $\mathcal{R}(k)$ for $k>0$ only. It is sufficient to know $\mathcal{R}(k)$ only on the half line $k>0$, since from (\ref{eq5aa}) and (\ref{eq8}), $\bar{a}(k)=a(-k)$, $\bar{b}(k)=b(-k)$ and thus $\mathcal{R}(-k)=\bar{\mathcal{R}}(k)$. 
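The unitarity relation (\ref{eq10}) can be verified numerically for a concrete potential: integrate (\ref{eq3}) for $\varphi$ starting from the left asymptotic (\ref{eq5a}) and read off $a(k)$, $b(k)$ from the Wronskian formulae (\ref{eq10a})--(\ref{eq10b}) evaluated at the right end of the interval. The sketch below uses an arbitrary Gaussian profile for $m$ and a plain RK4 integrator; both are assumptions of this illustration, not part of the text:

```python
import cmath, math

w = 1.0                            # omega > 0
A = 0.3                            # amplitude of the (assumed) Gaussian profile
m = lambda x: A * math.exp(-x*x)   # Schwartz class, m + w > 0 everywhere

def lam(k):                        # spectral parameter, eq. (lambda)
    return -(k*k + 0.25) / w

def scattering(k, L=12.0, n=6000):
    """Integrate phi'' = (1/4 + lambda*q) phi, q = m + w, from -L to L
    by RK4, starting from the left asymptotic phi ~ exp(-ikx)."""
    lk = lam(k)
    f = lambda x, y: (y[1], (0.25 + lk*(m(x) + w)) * y[0])
    h = 2*L/n
    x = -L
    y = (cmath.exp(1j*k*L), -1j*k*cmath.exp(1j*k*L))   # phi, phi_x at x = -L
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
        k3 = f(x + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
        k4 = f(x + h, (y[0] + h*k3[0], y[1] + h*k3[1]))
        y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
        x += h
    phi, phix = y
    psi, psix = cmath.exp(-1j*k*L), -1j*k*cmath.exp(-1j*k*L)   # psi at x = L
    psib, psibx = cmath.exp(1j*k*L), 1j*k*cmath.exp(1j*k*L)    # bar{psi} at x = L
    a = (psibx*phi - psib*phix) / (2j*k)    # eq. (eq10a)
    b = (phix*psi - phi*psix) / (2j*k)      # eq. (eq10b)
    return a, b

a, b = scattering(0.8)
assert abs(abs(a)**2 - abs(b)**2 - 1) < 1e-5
print("|a|^2 - |b|^2 =", abs(a)**2 - abs(b)**2)
```

For real $k$ the computed $|a(k)|^{2}-|b(k)|^{2}$ reproduces (\ref{eq10}) to the accuracy of the integrator, since the Wronskian (\ref{eq9}) is preserved by the flow.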
Also, from (\ref{eq13}) \begin{eqnarray} \label{eq13a} |a(k)|^{2}=(1-|\mathcal{R}(k)|^{2})^{-1}, \end{eqnarray} \noindent i.e. $|\mathcal{R}(k)|$ determines $|a(k)|$. In the next section we will show that $|a(k)|$ uniquely determines $\arg(a(k))$ as well. At the points of the discrete spectrum, $a(k)$ has simple zeroes \cite{C01}, therefore the Wronskian $W(\varphi,\bar{\psi})$ (\ref{eq10a}) is zero. Thus $\varphi$ and $\bar{\psi}$ are linearly dependent: \begin{eqnarray} \label{eq200} \varphi(x,i\kappa_n)=b_n\bar{\psi}(x,-i\kappa_n).\end{eqnarray} \noindent In other words, the discrete spectrum is simple, there is only one (real) eigenfunction $\varphi^{(n)}(x)$, corresponding to each eigenvalue $i\kappa_n$, and we can take this eigenfunction to be \begin{eqnarray} \label{eq201}\varphi^{(n)}(x)\equiv \varphi(x,i\kappa_n)\end{eqnarray} \noindent Moreover, one can argue (see \cite{ZMNP}), that (cf. (\ref{eq200}), (\ref{eq6aa}) and (\ref{eq8})) \begin{eqnarray} \label{eq202} b_n= b(i\kappa_n)\end{eqnarray} \noindent The asymptotic of $\varphi^{(n)}$, according to (\ref{eq5a}), (\ref{eq6}), (\ref{eq200}) is \begin{eqnarray} \label{eq203} \varphi^{(n)}(x)&=&e^{\kappa_n x}+o(e^{\kappa_n x}), \qquad x\rightarrow -\infty; \\\label{eq204} \varphi^{(n)}(x)&=& b_n e^{-\kappa_n x}+o(e^{-\kappa_n x}), \qquad x\rightarrow \infty. \end{eqnarray} \noindent The sign of $b_n$ obviously depends on the number of the zeroes of $\varphi^{(n)}$. Suppose that $0<\kappa_{1}<\kappa_{2}<\ldots<\kappa_{N}<1/2$. Then from the oscillation theorem for the Sturm-Liouville problem \cite{B}, $\varphi^{(n)}$ has exactly $n-1$ zeroes. Therefore \begin{eqnarray} \label{eq205} b_n= (-1)^{n-1}|b_n|.\end{eqnarray} The set \begin{eqnarray} \label{eq206} \mathcal{S}\equiv\{ \mathcal{R}(k)\quad (k>0),\quad \kappa_n,\quad |b_n|,\quad n=1,\ldots N\} \end{eqnarray} \noindent is called scattering data. 
In what follows we will compute the Poisson brackets for the scattering data and we will also express the Hamiltonians for the CH equation in terms of the scattering data. The derivation is similar to that for other integrable systems, e.g. \cite{ZMNP,ZF71,ZM74,FT87,BFT86}. The time evolution of the scattering data can be easily obtained as follows. From (\ref{eq8}) with $x\rightarrow\infty$ one has \begin{eqnarray} \label{eq14} \varphi(x,k)=a(k)e^{-ikx}+b(k)e^{ikx}+o(1). \end{eqnarray} The substitution of $\varphi(x,k)$ into (\ref{eq4}) with $x\rightarrow\infty$ gives \begin{eqnarray} \label{eq15} \varphi_{t}=\frac{1}{2\lambda}\varphi_{x}+\gamma \varphi \end{eqnarray} \noindent From (\ref{eq14}), (\ref{eq15}) with the choice $\gamma=ik/2\lambda$ we obtain \begin{eqnarray} \label{eq16} \dot{a}(k,t)&=&0, \\\label{eq17} \dot{b}(k,t)&=& \frac{i k}{\lambda }b(k,t), \end{eqnarray} \noindent where the dot stands for derivative with respect to $t$. Thus \begin{eqnarray} \label{eq18} a(k,t)=a(k,0), \qquad b(k,t)=b(k,0)e^{\frac{i k}{\lambda }t}; \end{eqnarray} \begin{eqnarray} \label{eq19} \mathcal{T}(k,t)=\mathcal{T}(k,0), \qquad \mathcal{R}(k,t)=\mathcal{R}(k,0)e^{\frac{i k}{\lambda }t}. \end{eqnarray} In other words, $a(k)$ is independent of $t$ and will serve as a generating function of the conservation laws. The time evolution of the data on the discrete spectrum is found as follows. The points $i\kappa_n$ are zeroes of $a(k)$, which does not depend on $t$, and therefore $\dot {\kappa}_n =0$. From (\ref{eq202}) and (\ref{eq17}) one can obtain \begin{eqnarray} \label{eq207} \dot{b}_n=\frac{4\omega \kappa_n}{1-4\kappa_n^2}b_n. \end{eqnarray} The conservation laws are expressed through the scattering data in Section \ref{int} and the Poisson brackets for the scattering data are computed in Section \ref{PB}.
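As a small arithmetic cross-check of (\ref{eq207}): substituting $k=i\kappa_n$ into the rate $ik/\lambda$ of (\ref{eq17}) must produce the real growth rate $4\omega\kappa_n/(1-4\kappa_n^{2})$. A sketch (the values of $\omega$ and the sample $\kappa_n$ are arbitrary):

```python
import cmath

w = 2.0   # omega, arbitrary positive value for this illustration

def lam(k):                       # lambda(k) = -(k^2 + 1/4)/omega
    return -(k*k + 0.25) / w

for kappa in [0.1, 0.25, 0.4]:    # discrete eigenvalues k_n = i*kappa_n, 0 < kappa_n < 1/2
    k = 1j * kappa
    rate = 1j * k / lam(k)        # ik/lambda, the rate in db/dt = (ik/lambda) b
    expected = 4*w*kappa / (1 - 4*kappa**2)
    assert abs(rate - expected) < 1e-12   # purely real, as in eq. (eq207)
print("discrete-spectrum rate matches 4*w*kappa/(1-4*kappa^2)")
```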
\section{Conservation laws and scattering data} \label{int} The solution of (\ref{eq3}) can be represented in the form \begin{equation}\label{eqi1} \varphi(x,k)=\exp \Big( -ikx + \int _{-\infty}^{x}\chi(y,k)dy \Big). \end{equation} \noindent For $\mathrm{Im}\phantom{*} k>0$ and $x\rightarrow \infty$, $\varphi(x,k)e^{ ikx}=a(k)$, i.e. \begin{equation}\label{eqi2} \ln a(k)= \int _{-\infty}^{\infty}\chi(x,k)dx, \qquad \mathrm{Im}\phantom{*} k>0. \end{equation} \noindent Since $a(k)$ does not depend on $t$, the expressions $\int _{-\infty}^{\infty}\chi(x,k)dx$ represent integrals of motion for all $k$. The equation for $\chi(x,k)$ follows from (\ref{eq3}) and (\ref{eqi1}) \begin{equation}\label{eqi3} \chi_x (x,k)+\chi^2-2ik\chi=-\frac{1}{\omega}\Big(k^2+\frac{1}{4}\Big)m(x) \end{equation} \noindent and admits a solution with the asymptotic expansion \begin{equation}\label{eqi4} \chi(x,k)= p_1 k+p_0+\sum_{n=1}^{\infty}\frac{p_{-n}}{k^n}. \end{equation} \noindent The substitution of (\ref{eqi4}) into (\ref{eqi3}) gives the following quadratic equation for $p_1$: \begin{equation}\label{eqi5} p_1 ^{2} -2ip_1+\frac{m}{\omega}=0, \end{equation} \noindent with solutions \begin{equation}\label{eqi6} p_1=i\Big(1\pm \sqrt{1+\frac{m}{\omega}}\Big) \end{equation} \noindent Since $\int _{-\infty}^{\infty}p_1(x)dx$ is an integral of the CH equation, presumably finite, we take the minus sign in (\ref{eqi6}). One can easily see that $p_0$ and all $p_{-2n}$ are total derivatives \cite{I05} and thus we have the expansion \begin{equation}\label{eqi7} \ln a(k)= -i\alpha k+\sum_{n=1}^{\infty}\frac{I_{-n}}{k^n}, \end{equation} \noindent where $\alpha$ is a positive constant (integral of motion): \begin{equation}\label{eqi8} \alpha= \int _{-\infty}^{\infty}\Big(\sqrt{1+\frac{m(x)}{\omega}}-1\Big)dx, \end{equation} \noindent and $I_{-n}=\int _{-\infty}^{\infty}p_{-n}dx$ are the other integrals, whose densities $p_{-n}$ can be obtained recursively from (\ref{eqi3}), (\ref{eqi4}) \cite{I05}.
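The choice of branch in (\ref{eqi6}) is easy to verify directly: $p_{1}=i\big(1-\sqrt{1+m/\omega}\big)$ satisfies the quadratic (\ref{eqi5}) identically and vanishes as $m\rightarrow 0$, which is what makes $\int_{-\infty}^{\infty}p_{1}dx$ finite for Schwartz class $m$. A sketch (the sample values of $m$ and $\omega$ are arbitrary):

```python
import cmath

w = 1.5  # omega, arbitrary positive value
for mval in [0.0, 0.3, 2.0, 7.5]:           # sample values of m(x) with m + w > 0
    p1 = 1j * (1 - cmath.sqrt(1 + mval/w))  # minus branch of eq. (eqi6)
    # quadratic (eqi5): p1^2 - 2i*p1 + m/w = 0
    assert abs(p1*p1 - 2j*p1 + mval/w) < 1e-12
assert abs(1j*(1 - cmath.sqrt(1 + 0/w))) < 1e-15   # p1 -> 0 as m -> 0
print("p1 = i(1 - sqrt(1 + m/w)) solves the leading order of the Riccati equation")
```

The plus branch also solves (\ref{eqi5}) but tends to $2i$ as $m\rightarrow 0$, so its integral over the line would diverge.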
For example \begin{equation}\label{eqi8a} p_0= \frac{q_x}{4q},\qquad q\equiv m+\omega, \end{equation} \begin{equation}\label{eqi8aa} p_{-1}= \frac{1}{8}p_1+i\frac{\sqrt{\omega}}{8}\Big[\frac{1}{\sqrt{q}}-\frac{1}{\sqrt{\omega}} +\frac{q_{x}^{2}}{4q^{5/2}}+\Big(\frac{q_x}{q^{3/2}}\Big)_x\Big], \end{equation} \noindent etc., i.e. \begin{equation}\label{eqi8aaa} I_{-1}= -\frac{1}{8}i\alpha+i\frac{\sqrt{\omega}}{8}\int _{-\infty}^{\infty}\Big(\frac{1}{\sqrt{q}}-\frac{1}{\sqrt{\omega}} +\frac{q_{x}^{2}}{4q^{5/2}}\Big)dx, \qquad \ldots \end{equation} The asymptotic of $a(k)$ for $\mathrm{Im}\phantom{*} k>0$ and $|k|\rightarrow\infty$ from (\ref{eqi7}) is $a(k)\rightarrow e^{-i\alpha k}$, or \begin{equation}\label{eqi9} e^{i\alpha k}a(k)\rightarrow 1, \qquad \mathrm{Im}\phantom{*} k>0, \qquad |k|\rightarrow\infty. \end{equation} Now let us consider the function \begin{equation}\label{eqi10} a_1(k)\equiv e^{i\alpha k}\prod _{n=1}^{N}\frac{k+i\kappa_n}{k-i\kappa_n}a(k). \end{equation} \noindent This function is analytic for $\mathrm{Im}\phantom{*} k>0$, but does not have any zeroes there. This is due to the fact \cite{C01} that $a(k)$ has at most simple zeroes at the points of the discrete spectrum $i\kappa_n$. Therefore $\ln a_1 (k)$ is analytic in the upper half plane and due to (\ref{eqi9}) $\ln a_1 (k)\rightarrow 0$ for $|k|\rightarrow\infty$. 
Moreover, on the real line $|a_1(k)|=|a(k)|$, and the Kramers-Kronig dispersion relation \cite{J99} for the function \begin{equation}\label{eqi11} \ln a_1(k)=\ln |a(k)|+i \arg a_1(k) \end{equation} \noindent gives $\arg a_1(k)$ (the symbol $\mathrm{P}$ means the principal value): \begin{equation}\label{eqi12} \arg a_1(k)=-\frac{1}{\pi}\mathrm{P}\int _{-\infty}^{\infty}\frac{\ln |a(k')|}{k'-k}dk' \end{equation} \noindent Therefore, from (\ref{eqi11}), (\ref{eqi12}), for real $k$: \begin{equation}\label{eqi13} \ln a_1(k)= \ln |a(k)|- \frac{i}{\pi}\mathrm{P}\int _{-\infty}^{\infty}\frac{\ln |a(k')|}{k'-k}dk' \end{equation} \noindent With the help of the Sokhotski-Plemelj formula we have (cf. \cite{ZMNP,J99}) \begin{equation}\label{eqi15} \ln a_1(k)= \frac{1}{\pi i}\int _{-\infty}^{\infty}\frac{\ln |a(k')|}{k'-k-i0}dk', \end{equation} \noindent or with (\ref{eqi10}) \begin{equation}\label{eqi16} \ln a(k)=-i\alpha k +\sum _{n=1}^{N}\ln\frac{k-i\kappa_n}{k+i\kappa_n}+ \frac{1}{\pi i}\int _{-\infty}^{\infty}\frac{\ln |a(k')|}{k'-k-i0}dk'. \end{equation} We will argue that (\ref{eqi15}), (\ref{eqi16}) are valid not only for real $k$, but also when $k$ is in the upper half plane. Indeed, from the Cauchy theorem (the closed contour $\Gamma$ consists of the real axis and the infinite semicircle in the upper half plane, where $\ln a_1(k)=0$) for the function $\ln a_1(k)$ and $\mathrm{Im}\phantom{*} k>0$ we have: \begin{equation}\label{eqi17} \ln a_1(k)= \frac{1}{2\pi i}\int _{-\infty}^{\infty}\frac{\ln a_1(k')}{k'-k}dk'.
\end{equation} The substitution of (\ref{eqi15}) into (\ref{eqi17}) gives \begin{equation}\label{eqi18} \ln a_1(k)= \frac{1}{2(\pi i)^2}\int _{-\infty}^{\infty}\Big(\int _{-\infty}^{\infty}\frac{dk'}{(k'-k)(k''-k'-i0)}\Big) \ln |a(k'')|dk'' \end{equation} \noindent Computing the integral in the brackets with the residue theorem, the contour $\Gamma$ being as before (note that the pole at $k'=k''-i0$ is outside the contour, since $k''$ is real), we find \begin{equation}\label{eqi20} \ln a_1(k)= \frac{1}{\pi i}\int _{-\infty}^{\infty}\frac{\ln |a(k'')|dk''}{k''-k},\qquad \mathrm{Im}\phantom{*} k>0. \end{equation} \noindent Therefore, from (\ref{eqi10}) and (\ref{eqi20}) when $k$ is in the upper half plane: \begin{equation}\label{eqi21} \ln a(k)=-i\alpha k +\sum _{n=1}^{N}\ln\frac{k-i\kappa_n}{k+i\kappa_n}+ \frac{1}{\pi i}\int _{-\infty}^{\infty}\frac{\ln |a(k')|}{k'-k}dk' \end{equation} Equation (\ref{eqi3}) can also be written in the form \begin{equation}\label{eqi22} \chi_x (x,k)+(\chi-ik)^2=\frac{1}{4}+\lambda (k) (m(x)+\omega) \end{equation} \noindent and admits a solution with the asymptotic expansion \begin{equation}\label{eqi23} \chi(x,k)= \frac{1}{2}+i k+\sum_{n=1}^{\infty}p_{n}\lambda(k)^{n}. \end{equation} \noindent Since $\lambda(i/2)=0$, we have $\chi(x,i/2)=0$ and therefore, due to (\ref{eqi2}), $\ln a(i/2)=0$. Now (\ref{eqi21}) for $k=i/2$ gives the integral $\alpha$ (\ref{eqi8}) in terms of the scattering data: \begin{equation}\label{eqi24} \alpha = \sum _{n=1}^{N}\ln\Big(\frac{1+2\kappa_n}{1-2\kappa_n}\Big)^2- \frac{8}{\pi }\int _{0}^{\infty}\frac{\ln |a(\widetilde{k})|}{4\widetilde{k}^2+1}d\widetilde{k}. \end{equation} \noindent The integral $\sqrt{\omega} \alpha$ is equal to $H_{-1}$, cf.
(\ref{eq2aa}), \cite{L05,CL05,I05}: \begin{eqnarray} \nonumber H_{-1}&\equiv& \int _{-\infty}^{\infty}\Big(\sqrt{\omega+m(x)}-\sqrt{\omega}\Big)dx \phantom{********}\\ \nonumber &=&\sqrt{\omega}\sum _{n=1}^{N}\ln\Big(\frac{1+2\kappa_n}{1-2\kappa_n}\Big)^2- \frac{8\sqrt{\omega}}{\pi }\int _{0}^{\infty}\frac{\ln |a(\widetilde{k})|}{4\widetilde{k}^2+1}d\widetilde{k}. \end{eqnarray} From (\ref{eq13a}) we know that $|\mathcal{R}(k)|$ determines $|a(k)|$. But $|a(k)|$ determines uniquely $a(k)$ for real $k$ due to (\ref{eqi16}) and (\ref{eqi24}). Therefore $|\mathcal{R}(k)|$ determines uniquely $a(k)$ for real $k$. Since $b(k)$ is simply $a(k)\mathcal{R}(k)$, as expected, the entire information about $T(k)$ (\ref{T}) is provided by $\mathcal{R}(k)$ for $k>0$. The integrals $I_{-n}$ can be expressed by the scattering data from (\ref{eqi7}) and (\ref{eqi21}) taking the expansion at $|k|\rightarrow\infty$: $I_{-2n}=0$; \begin{equation}\label{eqi25} I_{-(2n+1)}= \frac{2i(-1)^{n+1}}{2n+1}\sum _{j=1}^{N}\kappa_j^{2n+1} +\frac{2i}{\pi }\int _{0}^{\infty}\widetilde{k}^{2n}\ln |a(\widetilde{k})|d\widetilde{k}. \end{equation} For example, from $I_{-1}$, expressed from (\ref{eqi8aaa}), (\ref{eqi25}) and using (\ref{eqi24}), the conservation law $H_{-2}$ (see (\ref{eq2aa})) can be expressed by the scattering data: \begin{eqnarray}\label{eqi26} H_{-2}\equiv -\frac{1}{4}\int _{-\infty}^{\infty}\Big(\frac{1}{\sqrt{q}}-\frac{1}{\sqrt{\omega}} +\frac{q_{x}^{2}}{4q^{5/2}}\Big)dx= \phantom{*******************}\nonumber \\ \nonumber -\frac{1}{4\sqrt{\omega}}\sum_{n=1}^{N}\Big(\ln\Big(\frac{1+2\kappa_n}{1-2\kappa_n}\Big)^2-16\kappa_n\Big) -\frac{2}{\pi\sqrt{\omega} }\int _{0}^{\infty}\frac{8\widetilde{k}^2+1}{4\widetilde{k}^2+1}\ln |a(\widetilde{k})|d\widetilde{k}.
\end{eqnarray} From (\ref{eqi22}) and (\ref{eqi23}) we obtain (recall that $q=m+\omega=u-u_{xx}+\omega$), \begin{eqnarray} \label{eqi27} p_1+p'_{1}=q, \qquad p_{1}=u-u_{x}+\omega, \end{eqnarray} \noindent which leads to the integral: \begin{eqnarray} \label{eqi28} H_{0}=\int _{-\infty}^{\infty} m dx, \end{eqnarray} \noindent i.e. \begin{eqnarray} \label{eqi29}\int _{-\infty}^{\infty} p_{1} dx=H_0+ \int _{-\infty}^{\infty} \omega dx \end{eqnarray} \noindent The infinite contribution $\int _{-\infty}^{\infty} dx$ does not make sense on its own, but all such contributions cancel against the equally infinite term $\int _{-\infty}^{\infty} (1/2+i k) dx$ from (\ref{eqi23}) when it is substituted in (\ref{eqi2}). The next equation from (\ref{eqi22}) and (\ref{eqi23}) is \begin{eqnarray} \label{eqi30} p_{2}+p'_{2}+p_{1}^{2}=0, \end{eqnarray} \noindent and hence, formally (recall (\ref{eq2a}), (\ref{eqi28})) \begin{eqnarray} \label{eqi31} \int _{-\infty}^{\infty} p_{2}dx=-\int _{-\infty}^{\infty} p_{1}^{2}dx=-2H_1 - 2\omega H_{0}-\int _{-\infty}^{\infty} \omega^2 dx. \end{eqnarray} \noindent From (\ref{eqi22}) and (\ref{eqi23}) the equation for $p_{3}$ is \begin{eqnarray} \label{eqi32} p_{3}+p'_{3}+2p_{1}p_{2}=0, \end{eqnarray} \noindent and the next conserved quantity $\int _{-\infty}^{\infty}p_{3} dx = -2 \int _{-\infty}^{\infty} p_{1}p_{2}dx$ is (using (\ref{eqi27}), (\ref{eqi30}), (\ref{eq2b}) -- see the technicalities described in \cite{I05}) \begin{eqnarray} \label{eqi35} \int _{-\infty}^{\infty} p_3dx =4H_2 +4\omega H_{1} +6\omega ^{2} H_{0}+2\int _{-\infty}^{\infty} \omega^3 dx. \end{eqnarray} Now, let us expand $\ln a(k)$ about the point $k=i/2$. To this end, for simplicity, we define a new parameter, $l$, such that: \begin{eqnarray} \label{eqi36} k\equiv\frac{i}{2}(1+4l)^{1/2} , \qquad \lambda\equiv \frac{l}{\omega}. \end{eqnarray} \noindent The expansion is now about $l=0$.
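The reparametrization (\ref{eqi36}) can be checked directly: with $k(l)=\frac{i}{2}(1+4l)^{1/2}$ one has $\lambda(k(l))=l/\omega$ exactly, and $k(0)=i/2$ is precisely the point where $\ln a$ vanishes. A sketch (the value of $\omega$ and the sample values of $l$ are arbitrary):

```python
import cmath

w = 0.9  # omega, arbitrary positive value
lam = lambda k: -(k*k + 0.25) / w        # eq. (lambda)

for l in [-0.1, 0.0, 0.05, 0.2]:
    k = 0.5j * cmath.sqrt(1 + 4*l)       # eq. (eqi36)
    assert abs(lam(k) - l/w) < 1e-12     # lambda = l/omega, exactly
assert abs(lam(0.5j)) < 1e-12            # lambda(i/2) = 0, so ln a(i/2) = 0
print("k(l) = (i/2)(1+4l)^{1/2} gives lambda = l/omega")
```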
Using (\ref{eqi36}) and (\ref{eqi23}) in (\ref{eqi2}), and the expressions (\ref{eqi29}), (\ref{eqi31}), (\ref{eqi35}) we finally obtain \begin{eqnarray} \label{eqi37} \ln a(k(l))=\frac{l}{\omega}H_0-\Big(\frac{l}{\omega}\Big)^2(2H_1+2\omega H_0)+\Big(\frac{l}{\omega}\Big)^3(4H_2+4\omega H_1+6\omega^2 H_0)\nonumber \\ +o(l^3).\phantom{**} \end{eqnarray} Now the expansion of (\ref{eqi21}) in $l$ (\ref{eqi36}), taking into account (\ref{eqi24}), (\ref{eqi37}) gives the Hamiltonians in terms of the scattering data: \begin{eqnarray} \label{eqi38} H_0=2\omega \sum _{n=1}^{N}\Big(\ln \frac{1+2\kappa_n}{1-2\kappa_n}+\frac{4\kappa _n}{1-4\kappa_n ^{2}}\Big)- \frac{16\omega}{\pi }\int _{0}^{\infty}\frac{\ln |a(\widetilde{k})|}{(4\widetilde{k}^2+1)^2}d\widetilde{k}, \end{eqnarray} \begin{eqnarray} \label{eqi39} H_1=\omega^2 \sum _{n=1}^{N}\Big(\ln \frac{1-2\kappa_n}{1+2\kappa_n}+\frac{4\kappa _n(1+4\kappa_n ^{2})}{(1-4\kappa_n ^{2})^2}\Big)+ \frac{128\omega^2}{\pi }\int _{0}^{\infty} \frac{\widetilde{k}^2 \ln |a(\widetilde{k})|}{(4\widetilde{k}^2+1)^3}d\widetilde{k}, \nonumber \\ \end{eqnarray} \begin{eqnarray} H_2=\omega^3 \sum _{n=1}^{N}\Big(\ln \frac{1-2\kappa_n}{1+2\kappa_n}+\frac{4\kappa _n(3+32\kappa_n ^{2}-48\kappa _n ^4)}{3(1-4\kappa_n ^{2})^3}\Big)\phantom{**********}\nonumber \\ \label{eqi40} -\frac{8\omega^3}{\pi }\int _{0}^{\infty} \frac{(-5+28\widetilde{k}^2 +80\widetilde{k}^4+64\widetilde{k}^6)\ln |a(\widetilde{k})|}{(4\widetilde{k}^2+1)^4}d\widetilde{k}. \end{eqnarray} \noindent In the same fashion the higher conservation laws (which are nonlocal) can be expressed through the scattering data. \section{Poisson brackets of the scattering data} \label{PB} In this section our aim will be to compute the Poisson brackets between the elements of the scattering matrix (\ref{T}). 
Let us consider, for example, $\{a(k_1),b(k_2)\}$: \begin{equation}\label{20} \{a(k_{1}),b(k_{2})\}=-\int_{-\infty}^{\infty} q(x)\Big(\frac{\delta a(k_{1})}{\delta m(x)}\frac{\partial}{\partial x} \frac{\delta b(k_{2})}{\delta m(x)}-\frac{\delta b(k_{2})}{\delta m(x)}\frac{\partial}{\partial x} \frac{\delta a(k_{1})}{\delta m(x)}\Big)dx. \end{equation} \noindent For the computation of $\delta a(k )/\delta m(x)$ and $\delta b(k)/\delta m(x)$ we use (\ref{eq10a}) and (\ref{eq10b}): \begin{eqnarray} \frac{\delta a(k)}{\delta m(x)}=(2ik)^{-1} \Big(\frac{\delta \varphi(y,k)}{\delta m(x)}\frac{\partial}{\partial y}\bar{\psi}(y,k)-\frac{\delta \bar{\psi}(y,k)}{\delta m(x)}\frac{\partial}{\partial y}\varphi(y,k) \nonumber \\\label{eq21} +\varphi(y,k)\frac{\partial}{\partial y}\frac{\delta \bar{\psi}(y,k)}{\delta m(x)}-\bar{\psi}(y,k)\frac{\partial}{\partial y}\frac{\delta \varphi(y,k)}{\delta m(x)}\Big).\end{eqnarray} \noindent The function $G(x,y,k)\equiv \delta \varphi(y,k)/\delta m(x)$ satisfies the equation, obtained as a variational derivative of (\ref{eq3}): \begin{eqnarray} \label{eq22} \Big(\partial^{2}_{y}-\lambda m(y)+k^{2}\Big)G(x,y,k)=\lambda \delta(x-y) \varphi(y,k).\end{eqnarray} \noindent Since the source on the right hand side of (\ref{eq22}) is zero for $y<x$ (due to the delta-function) and since $\varphi(y,k)$ is defined by its asymptotic when $y\rightarrow-\infty$, i.e. for $y\rightarrow-\infty$, $\varphi(y,k)$ does not depend on $m(x)$, the solution of (\ref{eq22}) must satisfy \begin{eqnarray} \label{eq23} G(x,y,k)=0, \qquad y<x.\end{eqnarray} \noindent $G(x,y,k)$, considered as a solution of (\ref{eq22}) is a continuous function of $y$ for all $y$, however due to the source on the right hand side (proportional to a delta function) $\partial G(x,y,k)/\partial y$ has a finite jump at $y=x$. 
The value of $\partial G(x,y,k)/\partial y$ for $y\rightarrow x+0$ can be found by integrating both sides of (\ref{eq22}) from $x-\varepsilon$ to $x+\varepsilon$ and then taking $\varepsilon\rightarrow +0$: \begin{eqnarray} \label{eq24} \frac{\partial G(x,y,k)}{\partial y}\Big |_{y=x+0}=\lambda \varphi(x,k).\end{eqnarray} Now we can make use of the fact that the left hand side of (\ref{eq21}) does not depend on $y$. We take $y=x+\varepsilon$ with $\varepsilon>0$ and then let $\varepsilon\rightarrow0$. Since $\psi(y,k)$ is defined by its asymptotic when $y\rightarrow\infty$, i.e. for $y\rightarrow\infty$, $\psi(y,k)$ does not depend on $m(x)$, by analogous arguments we conclude that $\delta \psi(y,k)/\delta m(x)=0$ for $y>x$. Then finally from (\ref{eq21}) it follows: \begin{eqnarray}\label{eq25} \frac{\delta a(k)}{\delta m(x)}=-\frac{\lambda}{2ik} \bar{\psi}(x,k)\varphi(x,k).\end{eqnarray} Similarly, we find \begin{eqnarray}\label{eq26} \frac{\delta b(k)}{\delta m(x)}=\frac{\lambda}{2ik} \psi(x,k)\varphi(x,k).\end{eqnarray} Substituting (\ref{eq25}), (\ref{eq26}) in (\ref{20}), we have \begin{eqnarray} \{a(k_{1}),b(k_{2})\}= \frac{\lambda (k_{1})\lambda (k_{2})}{(2i)^{2}k_{1}k_{2}}\times \phantom{***********************} \nonumber \\ \int_{-\infty}^{\infty} q(x)\Big(\bar{\psi}(x,k_{1})\varphi(x,k_{1})\frac{\partial}{\partial x}(\psi(x,k_{2})\varphi(x,k_{2})) \phantom{*****} \nonumber \\\label{eq27} -\psi(x,k_{2})\varphi(x,k_{2})\frac{\partial}{\partial x}(\bar{\psi}(x,k_{1})\varphi(x,k_{1}))\Big)dx.\end{eqnarray} The expression under the integral in (\ref{eq27}) is a total derivative. Indeed, let $f_{1}$, $g_{1}$, $f_{2}$, $g_{2}$ be two pairs of solutions of (\ref{eq3}) with spectral parameters $k_{1}$ and $k_{2}$ respectively, i.e.
\begin{eqnarray} \partial^{2}_{x} f_{1,2}=\Big(\frac{1}{4}+\lambda(k_{1,2}) q(x)\Big)f_{1,2}, \nonumber \\ \label{eq28} \partial^{2}_{x} g_{1,2}=\Big(\frac{1}{4}+\lambda(k_{1,2}) q(x)\Big)g_{1,2}.\end{eqnarray} \noindent Then with (\ref{eq28}) one can check easily the following identity: \begin{eqnarray} q(x)\Big(f_{1}g_{1}\frac{\partial}{\partial x}(f_2g_2)-f_{2}g_{2}\frac{\partial}{\partial x}(f_1g_1)\Big)= \phantom{*************} \nonumber \\ \frac{1}{\lambda (k_{2})-\lambda (k_{1})}\Big((g_{1}\partial_{x}g_{2}-g_{2}\partial_{x}g_{1}) (f_{1}\partial_{x}f_{2}-f_{2}\partial_{x}f_{1})\Big)_{x}.\label{eq31} \end{eqnarray} \noindent Now we can take $f_1=\bar{\psi}(x,k_1)$, $f_2=\psi(x,k_2)$, $g_1=\varphi(x,k_1)$, $g_2=\varphi(x,k_2)$ and substitute in (\ref{eq27}). Then with (\ref{eq31}) and with the asymptotic representations for $x\rightarrow\infty$ \begin{eqnarray} \label{eq32}\psi(x,k)\rightarrow e^{-ikx}, \qquad \varphi(x,k)\rightarrow a(k)e^{-ikx}+b(k)e^{ikx},\end{eqnarray} and for $x\rightarrow-\infty$ \begin{eqnarray} \label{eq33}\varphi(x,k)\rightarrow e^{-ikx}, \qquad \psi(x,k)\rightarrow \bar{a}(k)e^{-ikx}-b(k)e^{ikx}\end{eqnarray} we obtain \begin{eqnarray} \{a(k_{1}),b(k_{2})\}= \frac{\lambda (k_{1})\lambda (k_{2})}{(2i)^{2}k_{1}k_{2}(\lambda (k_2)-\lambda (k_1))}\times \phantom{************} \nonumber \\ \Big[\lim_{x\rightarrow\infty} \Big((k_{1}^{2}-k_{2}^{2})a(k_1)a(k_2)e^{-2ik_2 x}-(k_{1}^{2}-k_{2}^{2})b(k_1)b(k_2)e^{2ik_1 x} \nonumber \\ +(k_{1}+k_{2})^{2}a(k_1)b(k_2)-(k_{1}+k_{2})^{2}b(k_1)a(k_2)e^{2i(k_1-k_2) x}\Big)\nonumber \\ -\lim_{x\rightarrow-\infty} \Big((k_{1}^{2}-k_{2}^{2})a(k_1)\bar{a}(k_2)e^{-2ik_2 x}-(k_{1}^{2}-k_{2}^{2})\bar{b}(k_1)b(k_2)e^{-2ik_1 x} \nonumber \\\label{eq34} -(k_1-k_2)^{2}a(k_1)b(k_2)+(k_1-k_2)^{2}\bar{b}(k_1)\bar{a}(k_2)e^{-2i(k_1+k_2)x}\Big)\Big].\end{eqnarray} \noindent The expression on the right hand side in (\ref{eq34}) is defined only as a distribution. 
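The identity (\ref{eq31}) uses only the equations (\ref{eq28}) to eliminate second derivatives, so it can be tested pointwise with random data: prescribe the values and first derivatives at one point, take the second derivatives from (\ref{eq28}), and compare the two sides, with the $x$-derivative on the right expanded by the product rule. A sketch (all numerical values below are arbitrary):

```python
import random

random.seed(7)
q = 1.3                      # q(x) = m(x) + omega at the chosen point
l1, l2 = 0.4, -0.7           # lambda(k_1), lambda(k_2), arbitrary and distinct

def second(val, lam_):       # f'' = (1/4 + lambda*q) f, eq. (eq28)
    return (0.25 + lam_*q) * val

# random point values and first derivatives of f_1, g_1 (parameter l1)
# and f_2, g_2 (parameter l2)
f1, f1x, g1, g1x = [random.uniform(-1, 1) for _ in range(4)]
f2, f2x, g2, g2x = [random.uniform(-1, 1) for _ in range(4)]
f1xx, g1xx = second(f1, l1), second(g1, l1)
f2xx, g2xx = second(f2, l2), second(g2, l2)

# left-hand side of (eq31)
lhs = q * (f1*g1*(f2x*g2 + f2*g2x) - f2*g2*(f1x*g1 + f1*g1x))

# right-hand side: d/dx [ (g1 g2' - g2 g1')(f1 f2' - f2 f1') ] / (l2 - l1)
Wg, Wf = g1*g2x - g2*g1x, f1*f2x - f2*f1x
Wgx = g1*g2xx - g2*g1xx          # derivative of Wg (the g1'*g2' terms cancel)
Wfx = f1*f2xx - f2*f1xx
rhs = (Wgx*Wf + Wg*Wfx) / (l2 - l1)

assert abs(lhs - rhs) < 1e-12
print("identity (31) verified pointwise")
```

The check is exact up to rounding, because $W_{g}'=(\lambda_2-\lambda_1)qg_1g_2$ and $W_{f}'=(\lambda_2-\lambda_1)qf_1f_2$, which is precisely how (\ref{eq31}) is obtained from (\ref{eq28}).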
Then applying the formula $\lim_{x\rightarrow\infty}\mathrm{P}\frac{e^{ikx}}{k}=\pi i \delta (k)$ and assuming $k_{1,2}>0$, we have \begin{eqnarray} \{\ln a(k_{1}), \ln b(k_{2})\}= \omega \lambda (k_{1})\lambda (k_{2})\Big(-\frac{k_{1}^{2}+k_{2}^{2}}{2k_1k_2(k_{1}^{2}-k_{2}^{2})}+\frac{ \pi i}{2k_1} \delta(k_1-k_2) \Big).\nonumber \\\end{eqnarray} In the same fashion the rest of the Poisson brackets between the scattering data can be computed. The result can be expressed in terms of the quantities \begin{eqnarray} \label{eq36} \rho(k)\equiv-\frac{2k}{\pi \omega \lambda(k)^2}\ln|a(k)|,\qquad \phi(k)\equiv \arg b(k), \qquad k>0:\end{eqnarray} \noindent Their Poisson brackets have the canonical form \begin{eqnarray} \label{eq37} \{\phi(k_1),\rho(k_2)\}=\delta(k_1-k_2),\quad \{\phi(k_1),\phi(k_2)\}=\{\rho(k_1),\rho(k_2)\}=0\end{eqnarray} \noindent and thus (\ref{eq36}) are the action-angle variables for the CH equation, related to the continuous spectrum. Note that from (\ref{eq37}) and (\ref{eqi39}) we have \begin{eqnarray} \label{eq38} \dot{\phi}=\{\phi,H_1\}= \frac{k}{\lambda(k)},\end{eqnarray} \noindent which agrees with (\ref{eq18}). Let us now concentrate on the discrete spectrum and denote, for simplicity, $\lambda_n\equiv\lambda(i\kappa_n)$. We will need the variational derivatives $\delta \lambda_n/\delta m(x)$ and $\delta b_n/\delta m(x)$. Due to (\ref{eq202}), for $\delta b_n/\delta m(x)$ we formally use the expression (\ref{eq26}) and then take the limit $k\rightarrow i\kappa_n$. In order to find $\delta \lambda_n/\delta m(x)$ we proceed as follows.
Differentiating the equation \begin{eqnarray} \label{eq39} \varphi^{(n)}_{xx}=\Big(\frac{1}{4}+\lambda_n q(x)\Big)\varphi^{(n)},\end{eqnarray} \noindent we obtain ($\delta q=\delta m$): \begin{eqnarray} \label{eq40} \delta \varphi^{(n)}_{xx}=(\delta\lambda_n)q \varphi^{(n)}+\lambda_n(\delta m)\varphi^{(n)}+\Big(\frac{1}{4}+\lambda_n q\Big)\delta \varphi^{(n)}.\end{eqnarray} From (\ref{eq39}) and (\ref{eq40}) it follows \begin{eqnarray} \label{eq41} (\varphi^{(n)}\delta \varphi^{(n)}_{x}-\varphi^{(n)}_{x}\delta \varphi^{(n)})_x=(\delta\lambda_n)q[\varphi^{(n)}]^2+\lambda_n(\delta m)[\varphi^{(n)}]^2.\end{eqnarray} \noindent The integration of (\ref{eq41}) gives: \begin{eqnarray} \label{eq42} (\delta\lambda_n)\int _{-\infty}^{\infty}q(x)[\varphi^{(n)}(x)]^2dx=-\lambda_n\int _{-\infty}^{\infty}(\delta m(x))[\varphi^{(n)}(x)]^2dx.\end{eqnarray} \noindent or \begin{eqnarray} \label{eq42a} \frac{\delta\ln \lambda_n}{\delta m(x)}=-\frac{[\varphi^{(n)}(x)]^2}{\int _{-\infty}^{\infty}q(y)[\varphi^{(n)}(y)]^2dy}.\end{eqnarray} \noindent From (\ref{eq3}) it is not difficult to obtain \begin{eqnarray} \label{eq43} (\varphi \varphi_{x\lambda}-\varphi_{x}\varphi_{\lambda})_x=q\varphi^2.\end{eqnarray} \noindent We will integrate (\ref{eq43}) and then take the limit $k\rightarrow i\kappa_n$, i.e. $\lambda\rightarrow\lambda_n$. We take into account that with this limit, clearly \begin{eqnarray} \label{eq44} \varphi(x,k)&\rightarrow&b_n e^{-\kappa_n x}+o(e^{-\kappa_n x}), \qquad x\rightarrow \infty; \\\label{eq45} \varphi_{\lambda}(x,k)&\rightarrow& \frac{a'(i\kappa_n)}{\lambda'(i\kappa_n)} e^{\kappa_n x}+o(e^{\kappa_n x}), \qquad x\rightarrow \infty. 
\end{eqnarray} \noindent Therefore \begin{eqnarray} \label{eq46} i\omega b_n a'(i\kappa_n)=\int _{-\infty}^{\infty}q(y)[\varphi^{(n)}(y)]^2dy \end{eqnarray} \noindent and finally \begin{eqnarray} \label{eq42b} \frac{\delta\ln \lambda_n}{\delta m(x)}=\frac{i[\varphi^{(n)}(x)]^2}{\omega b_n a'(i\kappa_n)}.\end{eqnarray} \noindent We compute the expression \begin{eqnarray} \{\ln \lambda _n,b_l\}= \frac{i\lambda _l}{2\omega b_n\kappa_l a'(i\kappa_n)}\times \phantom{************************}\nonumber \\ \int_{-\infty}^{\infty} q(x) \lim _{k_j\rightarrow i\kappa_j} \Big(\varphi^2(x,k_{n})(\psi(x,k_{l})\varphi(x,k_{l}))_x\phantom{*******}\nonumber \\\label{eq43a} -\psi(x,k_{l})\varphi(x,k_{l})(\varphi^2(x,k_{n}))_x\Big) dx.\phantom{**}\end{eqnarray} \noindent Taking $f_1=g_1=\varphi(x,k_n)$, $f_2=\psi(x,k_l)$, $g_2=\varphi(x,k_l)$ in (\ref{eq31}) and using the asymptotic representations (\ref{eq32}), (\ref{eq33}) for $x\rightarrow \pm \infty$ we obtain \begin{eqnarray} \{\ln \lambda _n,b_l\}= \frac{i\lambda _l}{2 b_n\kappa_l a'(i\kappa_n)}\times \phantom{**********************}\nonumber \\\label{eq44a} \lim_{x\rightarrow \infty}\lim _{k_j\rightarrow i\kappa_j} \frac{(k_n+k_l)\Big(a(k_n)b(k_n)b(k_l)-a(k_l)b^2(k_n)e^{2i(k_n-k_l)x}\Big)}{k_n-k_l}.\end{eqnarray} \noindent Clearly, the right hand side of (\ref{eq44a}) is zero if $\kappa_n\neq\kappa_l$, since $a(i\kappa_n)=0$. 
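Before treating the confluent case $\kappa_l=\kappa_n$, we record a numerical cross-check that anticipates the canonical structure: with $\rho\equiv\lambda(i\kappa)^{-1}$, differentiating the discrete-spectrum summand of $H_{1}$ in (\ref{eqi39}) with respect to $\rho$ reproduces minus the growth rate of $\ln b_{n}$ in (\ref{eq207}). A sketch (finite differences; $\omega$ and $\kappa$ are arbitrary):

```python
import math

w = 1.0  # omega, arbitrary positive value

def H1_term(kappa):   # discrete-spectrum summand of H_1, eq. (eqi39)
    return w**2 * (math.log((1 - 2*kappa)/(1 + 2*kappa))
                   + 4*kappa*(1 + 4*kappa**2)/(1 - 4*kappa**2)**2)

def rho(kappa):       # rho_n = 1/lambda(i*kappa) = omega/(kappa^2 - 1/4)
    return w / (kappa**2 - 0.25)

kappa, h = 0.3, 1e-6  # central finite difference in kappa
dH1_drho = (H1_term(kappa + h) - H1_term(kappa - h)) / (rho(kappa + h) - rho(kappa - h))
rate = 4*w*kappa / (1 - 4*kappa**2)          # b_n growth rate, eq. (eq207)
assert abs(dH1_drho + rate) < 1e-6           # dH1/drho = -4*w*kappa/(1-4*kappa^2)
print("dH1/drho_n = -4*w*kappa_n/(1-4*kappa_n^2)")
```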
However, if $\kappa_n=\kappa_l$, l'Hospital's rule for the limit $\kappa_l \rightarrow\kappa_n$ gives \begin{eqnarray} \{\ln \lambda _n,b_l\}= -\lambda _n b_n\delta _{nl}.\end{eqnarray} \noindent If we define the quantities \begin{eqnarray} \label{eq45a} \rho_n \equiv \lambda_n^{-1},\qquad \phi_n \equiv -\ln |b_n|, \qquad n=1,2,\ldots,N, \end{eqnarray} \noindent their Poisson brackets have the canonical form \begin{eqnarray} \label{eq47} \{\phi_n,\rho_l\}=\delta_{nl},\qquad \{\phi_n,\phi_l\}=\{\rho_n,\rho_l\}=0\end{eqnarray} \noindent and thus (\ref{eq45a}) are the action-angle variables for the CH equation, related to the discrete spectrum. They also commute with the variables on the continuous spectrum (\ref{eq36}). Note that from (\ref{eq47}) and (\ref{eqi39}) we have \begin{eqnarray} \label{eq48} \dot{\phi}_n=\{\phi_n,H_1\}= \{\phi_n,\kappa_n\}\frac{\partial H_1}{\partial \kappa_n}=-\frac{4\omega\kappa_n}{1-4\kappa_n^2},\end{eqnarray} \noindent which agrees with (\ref{eq207}). \section{Conclusions} \label{sec:1} In this paper the action-angle variables for the CH equation are computed. They are expressed in terms of the scattering data for this integrable system, when the solutions are confined to the Schwartz class. The important question about the behavior of the scattering data at $k=0$ deserves further investigation. It is possible that in the case of singular behavior the Poisson bracket has to be modified in a way similar to the KdV case, as described in \cite{FT85,APP88,BFT86}. The case $\omega=0$ (in which the spectrum is only discrete, cf. \cite{CM99}) is presented in \cite{V05}. The situation when the condition $m(x,0)+\omega>0$ does not hold requires separate analysis \cite{K05,B04} (if $m(x,0)+\omega$ changes sign there are infinitely many positive eigenvalues accumulating at infinity, cf. \cite{C01}). The action-angle variables for the periodic CH equation and $\omega=0$ are reported in \cite{P05}.
\section*{Acknowledgements} The authors acknowledge the support of the Science Foundation Ireland, Grant 04/BR6/M0042.
\section{Introduction} In Mathematics, the standard definition of a \emph{function} is given, e.g., by Bourbaki~\cite[\S3.4]{Bourbaki1968}: it consists of the domain and codomain, and requires that a function be \emph{total} and \emph{single-valued}. Mathematical practice is usually looser, and tends to define functions locally, or ``in a suitable open subset of ${\bf C}$''. Usually, therefore, users of computer algebra systems will use notations such as ``$\ln(x)$'' without bothering too much about the actual domain and codomain of the function $\ln$ they have in mind, hoping that the designers of the system have implemented what they need. An essential and important problem in any such definition is the determination of possible \emph{branch cuts} (also known as slits), see~\cite{DingleFateman94}. A simple way to define a function is by composing previously defined functions, provided their domains and codomains are compatible. This is perhaps best exemplified by \[ \sqrt z := \exp\left(\frac12\ln z\right), \] which implies that the branch cuts of $\sqrt z$ are inherited from the definition of~$\ln z$. The challenge is, of course, to decide which formula we should take. For instance, as reported in \cite[pp. 210--211]{Kahan1987b}, the classic handbook \cite{AbramowitzStegun1964} changed its interpretation of the branch cuts for $\mathop{\rm arccot}\nolimits$ from the original first printing (here denoted as ${\arccot_1}$) to that of the ninth (and subsequent) printings, and \cite{NIST2010} (here denoted as ${\arccot_9}$). Both versions of $\mathop{\rm arccot}\nolimits$ are related to $\arctan$ by the formulae: \begin{align*} {\arccot_1}(x) &= \pi/2 - \arctan(x),\\ {\arccot_9}(x) &= \arctan(1/x). \end{align*} The rationale for the ``correct'' choice is discussed further in Section~\ref{sec:arccot}. Difficulties occur also when dealing with functions without having precise definitions for them. 
One of the early successes of computer algebra was symbolic integration. This area is largely based on differential algebra~\cite{Bronstein2005a}. There, the expression $\ln(x)$ is not even a function, but some element~$\theta$ in a differential field, such that $\theta'=1/x$. But both $\ln(x)$ and $-\ln(1/x)$ have the same derivative so that their difference is a constant in the sense of differential algebra: its derivative is~$0$. However, with the definition of~$\ln$ from~\cite{AbramowitzStegun1964}, the function $\ln(x)+\ln(1/x)$ is~0 for all~$x$ in~${\bf C}$ except for real negative~$x$, where it is~$2i\pi$. Thus early versions of integrators would wrongly compute $\int_{-2}^{2}{2x\,dx/(x^2-1)}$ as~0 by first computing correctly the indefinite integral as $\ln(x^2-1)$ and then subtracting its values at both end points. The definition of functions is obviously relevant to identities between functions: the functions involved should have, as a minimum, the same domain and codomain. For instance \cite[pp. 187--188]{Kahan1987b}, the function \begin{equation}\label{eq:K1} g(z):= 2\mathop{\rm arccosh}\nolimits\left(1+\frac{2z}3\right)-\mathop{\rm arccosh}\nolimits\left(\frac{5z+12}{3(z+4)}\right) \end{equation} is not the same as the ostensibly more efficient \begin{equation}\label{eq:K2} q(z):= 2\mathop{\rm arccosh}\nolimits\left(2(z+3)\sqrt{\frac{z+3}{27(z+4)}}\right), \end{equation} unless they are defined over an identical domain that avoids the negative real axis and the area \[ \bigg\{z=x+iy : |y|\le \sqrt{\frac{(x+3)^2(-9-2x)}{2x+5}}\land -9/2\le x\le-3\bigg\} \] (and an identical codomain). Clearly, this last statement itself depends on the definitions taken for the functions $\mathop{\rm arccosh}\nolimits$ and~$\sqrt{}$. Here, the function~$\sqrt{}$ is defined over ${\bf C}\setminus{{\bf R}^-}$, while $\mathop{\rm arccosh}\nolimits$ is defined over~${\bf C}\setminus(1+{\bf R}^-)$. 
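Both pitfalls are easy to reproduce numerically. Python's \texttt{cmath} module follows the same conventions as above (the cuts of $\ln$ and $\sqrt{}$ along ${\bf R}^-$, that of $\mathop{\rm arccosh}\nolimits$ along $1+{\bf R}^-$), so a short sketch suffices; the test points are our own choice:

```python
import cmath
from math import pi

# ln z + ln(1/z) vanishes away from the branch cut of ln on the negative reals...
z = 2 + 3j
assert abs(cmath.log(z) + cmath.log(1 / z)) < 1e-12

# ...but on the cut (with 1/x computed over the reals) it is 2*pi*i
x = -2.0
assert abs(cmath.log(x) + cmath.log(1 / x) - 2j * pi) < 1e-12

def g(z):
    # the two-arccosh form from the text
    return 2 * cmath.acosh(1 + 2 * z / 3) - cmath.acosh((5 * z + 12) / (3 * (z + 4)))

def q(z):
    # the ostensibly more efficient single-arccosh form
    return 2 * cmath.acosh(2 * (z + 3) * cmath.sqrt((z + 3) / (27 * (z + 4))))

assert abs(g(1) - q(1)) < 1e-9        # agree at z = 1 (both equal ln(27/5))
assert abs(g(-3.5) - q(-3.5)) > 1     # disagree inside the excluded region
```

Note that $1/x$ is computed over the reals in the second check: in complex arithmetic with signed zeros, $1/(-2+0i)=-0.5-0i$ lands on the other side of the cut and the sum vanishes, which is precisely the signed-zero effect that Kahan exploits.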
In other words, these functions have branch cuts located on horizontal left half-lines. In general, the location of branch cuts for classical functions largely follows mathematical tradition, as recorded in tables like~\cite{AbramowitzStegun1964}. We discuss here the possibility of automating the choice of these locations for a large class of functions ``defined'' by linear differential equations. We show that this is impossible in general. Therefore, we consider a heuristic approach that gives correct results for the special functions listed in~\cite{AbramowitzStegun1964} and is applicable to newly encountered functions. As a guide, we use Kahan's discussion of this question for inverse trigonometric functions~\cite{Kahan1987b}. {\rm We are concerned here with automating the choices made, historically by table-makers, and now by the compilers of resources like \cite{NIST2010} or the authors of systems such as \cite{DDMF}, for functions defined by linear differential equations and their initial conditions. A particular context for the {\em use\/} of the functions may impose special constraints, and may even require different choices at different points of the same application \cite[section 4(ii)]{Davenport2010}, but that is a different issue.\/} \section{Analytic Continuation} In complex analysis, analytic functions are often defined first on a small domain and then extended by analytic continuation. Two large families of examples are discussed below: inverse functions and solutions of linear differential equations. The basic property underlying this approach is that two analytic functions defined over a connected domain and coinciding over an open subset of it coincide over the whole domain and thus are identical (here we assume the codomain to be~${\bf C}$). Another way of discussing branch cuts is thus in terms of connected domains where the function of interest is to be defined.
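Analytic continuation along a path can itself be simulated numerically. The following sketch (our illustration, not an algorithm from the cited systems) continues $\sqrt{}$ along the loop $1\to i\to-1\to-i\to 1$ by choosing, at each small step, whichever of the two square roots lies closest to the previously chosen value:

```python
import cmath

def continue_sqrt(path, start_value, steps_per_edge=200):
    """Continue a square root along a piecewise-linear path, always picking
    whichever of the two roots is closest to the previously chosen value."""
    val = start_value
    for a, b in zip(path, path[1:]):
        for k in range(1, steps_per_edge + 1):
            r = cmath.sqrt(a + (b - a) * k / steps_per_edge)  # principal root
            val = r if abs(r - val) <= abs(r + val) else -r
    return val

loop = [1, 1j, -1, -1j, 1]        # one circuit around the origin
v1 = continue_sqrt(loop, 1.0)     # after one loop: the other square root of 1
v2 = continue_sqrt(loop, v1)      # after a second loop: back to the start
assert abs(v1 + 1) < 1e-9
assert abs(v2 - 1) < 1e-9
```

One circuit of the origin turns $1$ into $-1$; a second circuit restores $1$, which is exactly the path-dependence that the Riemann-surface view formalises.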
\subsection{Riemann Surfaces}\label{sec:Riemann} A radical approach is to use \emph{Riemann surfaces} as domains. They are maximal connected surfaces where the function is analytic. While theoretically appealing, the use of Riemann surfaces is not trivial in a computer algebra context (see, e.g.,~\cite{Hoeven2005}). Since the domain is not a subset of~${\bf C}$, but consists of paths in the complex plane, an \emph{ad hoc} language for specifying its elements has to be designed. One possibility is to restrict the domain to paths that are piecewise-straight lines starting from the origin. This is also the approach taken in the Dynamic Dictionary of Mathematical Functions~\cite{DDMF}. Such a path is specified as a list of its ``vertices'', for example, $(0,1+i,2,1-i,0)$ denotes a diamond-shaped, clockwise path around~$1$. Thus for instance, one can define~$\sqrt{}$ so that it takes the value~$1$ at $(0,1)$, while it is equal to~$-1$ at $(0,1,i,-1,-i,1)$ and to~$1$ again at $(0,1,i,-1,-i,1,i,-1,-i,1)$. \subsection{Positioning} In many applications, however, users are interested in restricting the domain of their functions to the complex plane or a subset of it. In that case, the role of branch cuts is to define a connected domain where the function is analytic. Where to put the branch cuts is the \emph{positioning question}. Apart from the connectivity and analyticity constraints and as long as only one function is involved, the location of the branch cuts is quite arbitrary. The situation is completely different as soon as several functions are involved and identities are considered: the domains have to coincide. Thus branch cuts have to be chosen in a consistent way inside a corpus of functions of interest. \subsection{Adherence} It is customary to extend the domain of definition to include the branch cuts, so that the function can be defined on the branch cut itself.
There, the value of the function is taken as the limit of its values at points approaching the branch cut from one of its sides. In the numerical context, Kahan shows that using signed zeroes avoids having to make a choice~\cite{Kahan1987b}. In the symbolic computation context a choice has to be made and the boundary of the domain is closed on one side and open on the other one. Making the choice of which side is the closed one is the \emph{adherence question} \cite{Beaumontetal2005b}. For instance, the definition of~$\ln$ in~\cite{AbramowitzStegun1964} is taken so that $\ln(-1)$ is~$i\pi$. Again, these choices have to be made consistently if several functions are involved. \subsection{Inverse Functions} Inverse functions form a large class of functions that are commonly defined by analytic continuation. Suppose that $f:{\bf C}\rightarrow{\bf C}$ is analytic, that $f(x_0)=y_0$ and that $f'(x_0)\ne0$. Then there is a trivial function \[ \tilde{f} : \{x_0\}\to\{y_0\},\quad x_0\mapsto y_0 \] which clearly has an inverse. By the Inverse Function Theorem, this can be extended to a neighbourhood of $y_0$, and ultimately to the whole of ${\bf C}$ apart from those points where $f'(x)=0$. \begin{example}\label{ex:followsqrt2} The basic example is $\sqrt{}$ defined as the inverse of \[ f : {\bf C}\to{\bf C},\quad x\mapsto x^2. \] Since $f'(1)=2\neq0$, we can define a function~$f^{-1}$ in a neighbourhood of~1 with~$f^{-1}(1)=1$ and then extend it to larger connected domains. However, $f^{-1}$ cannot be continued arbitrarily far round the unit circle, for otherwise we would get the contradicting value $f^{-1}(1)=-1$, see Section~\ref{sec:Riemann}. \end{example} \section{Kahan's Rules} Branch cuts for $\sqrt{z}$, as well as other inverse functions like $\ln z$, $z^\omega$, $\arcsin(z)$, $\arccos(z)$, $\arctan(z)$, $\mathop{\rm arcsinh}\nolimits(z)$, $\mathop{\rm arccosh}\nolimits(z)$ and $\mathop{\rm arctanh}\nolimits(z)$ are given in~\cite{AbramowitzStegun1964}. 
In all cases, they can be deduced from that of $\ln$ once the function is expressed in terms of~$\ln$ (but even this expression is not neutral, as the second author was initially taught logarithms with a different branch cut). For these functions, Kahan claims~\cite{Kahan1987b}: \begin{quote}\em There can be no dispute about where to put the slits; their locations are deducible. However, Principal Values have too often been left ambiguous on the slits. \end{quote} In the terminology above, this means that the positioning question is soluble, and the problem is the adherence question. He states the following rules governing the location of the branch cuts: \renewcommand{\labelenumi}{R\arabic{enumi}.} \begin{enumerate} \item These functions $f$ are extensions to ${\bf C}$ of a real elementary function analytic at every interior point of its domain, which is a segment ${\mathcal S}$ of the real axis.\label{K:extend} \item Therefore, to preserve this analyticity (i.e. the convergence of the power series), the slits cannot intersect the interior of ${\mathcal S}$.\label{K:nonmeet} \item Since the power series for $f$ has real coefficients, $f(\overline z)=\overline{f(z)}$ in a complex neighbourhood of the segment's interior, so this should extend throughout the range of definition. In particular, complex conjugation should map slits to themselves.\label{K:conjugate} \item Similarly, the slits of an odd function should be invariant under reflection in the origin, i.e. $z\rightarrow-z$.\label{K:odd} \item The slits must begin and end at singularities.\label{K:sing} \end{enumerate} While these rules are satisfied by the branch cuts of the inverse functions listed above, they do not completely specify their location, unless one adds a form of Occam's razor: \begin{enumerate} \item[R6.] 
The slits might as well be straight lines.\label{K:straight} \end{enumerate} We shall interpret R4 in an extended way, by applying it as well when $f(z) + f(-z)$ is a constant, as will be motivated by the example of inverse cotangent in Section~\ref{sec:arccot}. \subsection{Worked example: $\arctan$}\label{K-arctan} Let us apply these rules to $\arctan$, considered as the inverse of~$\tan$. Writing \[ \tan(z)=\frac{e^{iz}-e^{-iz}}{i(e^{iz}+e^{-iz})} \] and solving a quadratic equation gives an expression for $\arctan$ in terms of~$\ln$: \[ \arctan(z)=-\frac{i}{2}\Big(\ln(1+ix)-\ln(1-ix)\Big). \] From this expression one deduces that the singularities are located at $\pm i$, so that it is analytic on~${\bf R}$. Moreover, as the inverse of an odd function, $\arctan$ itself is odd. Hence we need a cut which \begin{description} \item[(R\ref{K:sing})] joins $i$ and $-i$, \item[(R\ref{K:conjugate})] is invariant under complex conjugation, and \item[(R\ref{K:odd})] is invariant under $z\rightarrow-z$. \item[(R6)]We have the choice between a line from $-i$ to $i$ through 0 and two lines $-i-t i$ and $i+t i$, $t>0$, meeting at infinity, but \item[(R\ref{K:nonmeet})]the first of these two options is not admissible, \end{description} giving the classical branch cut $z=0+iy, |y|>1$. \subsection{The $\mathop{\rm arccot}\nolimits$ dilemma}\label{sec:arccot} The strange case of $\mathop{\rm arccot}\nolimits$ described in the introduction is still consistent with these rules. The key point is in~R\ref{K:extend}: in fact ${\arccot_1}$ and ${\arccot_9}$ were defined as different functions over~${\bf R}$: they agreed on ${\bf R}^+$ (in particular $\lim_{x\rightarrow+\infty}{\arccot_1}(x)=\lim_{x\rightarrow+\infty}{\arccot_9}(x)=0$), but not on~${\bf R}^-$: \begin{align*} {\arccot_1}(-1) & =3\pi/4,\\ {\arccot_9}(-1) & =-\pi/4. \end{align*} \par Therefore the limits at~$-\infty$ are different, and in fact ${\arccot_9}$ is continuous at infinity (but discontinuous at 0). 
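The difference between the two conventions is easily made concrete (a sketch using the formulae from the introduction):

```python
from math import atan, pi, isclose

def arccot1(x):   # original first printing: pi/2 - arctan(x)
    return pi / 2 - atan(x)

def arccot9(x):   # ninth printing / NIST: arctan(1/x)
    return atan(1 / x)

assert isclose(arccot1(-1), 3 * pi / 4)
assert isclose(arccot9(-1), -pi / 4)
# both tend to 0 at +infinity, but the limits at -infinity differ:
assert abs(arccot1(-1e12) - pi) < 1e-9   # arccot1 -> pi: discontinuous through infinity
assert abs(arccot9(-1e12)) < 1e-9        # arccot9 -> 0: continuous there, discontinuous at 0
```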
What should the branch cuts of these functions be? For ${\arccot_1}$, most of the reasoning of Section~\ref{K-arctan} applies. Strictly speaking, the function is not odd, but it is ``odd apart from a constant'', and hence the branch cuts should still be symmetric under $z\rightarrow-z$. Therefore it should have the same cuts as $\arctan$, i.e. $z=0+iy, |y|>1$. \par ${\arccot_9}$ is odd, so all that reasoning applies, except that R\ref{K:nonmeet} no longer rules out the cut passing through 0. Indeed, since ${\arccot_9}$ is discontinuous at $0$, we are left with $z=0+iy, |y|<1$ (the cut in \cite[9th printing]{AbramowitzStegun1964}). \section{Linear ordinary differential equations}\label{sec:lode} Many of the elementary functions, including the trigonometric and inverse trigonometric functions and their hyperbolic counterparts, belong to the very large class of solutions of linear differential equations\footnote{Nonlinear equations have the major complication that it may not be obvious where the singularities are, and indeed they may not be finite in number. Some entries of Table~\ref{tab:fns} do not satisfy a linear o.d.e., and this fact is indicated by a dash.} (see Table~\ref{tab:fns}). Here, we set out to extend the previous set of rules to fix the location of the branch cuts in a way that is consistent with that of the previous section. Also, we only consider the case where the singularities are all regular singular points (meaning that the solutions have only algebraic-logarithmic behaviour in their neighbourhood).
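For instance, the $\arctan$ entry of Table~\ref{tab:fns}, $y'=1/(1+x^2)$ with $y(0)=0$, determines the Taylor coefficients at the ordinary point $0$ by a simple recurrence, obtained by equating coefficients in $(1+x^2)y'=1$ (a sketch; the helper names are ours):

```python
from math import atan, isclose

def arctan_coeffs(n_terms):
    """Taylor coefficients at 0 of the solution of (1 + x^2) y' = 1, y(0) = 0:
    equating coefficients gives (n+1) a_{n+1} = [n = 0] - (n-1) a_{n-1}."""
    a = [0.0] * n_terms
    a[1] = 1.0                      # the n = 0 case of the recurrence
    for n in range(1, n_terms - 1):
        a[n + 1] = -(n - 1) * a[n - 1] / (n + 1)
    return a

a = arctan_coeffs(60)
assert isclose(a[3], -1 / 3) and isclose(a[5], 1 / 5)   # the familiar arctan series
# the series converges inside |x| < 1, the distance to the singularities at +/- i
approx = sum(c * 0.5**k for k, c in enumerate(a))
assert isclose(approx, atan(0.5), rel_tol=1e-12)
```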
\begin{table} \caption{Alternative definitions of functions\label{tab:fns}} \begin{tabular}{lcc} Function&Linear o.d.e.&Definition by inverse\\ $\exp$&$y'=y$&$\log^{-1}$\\ $\log$&$y'=1/x$&$\exp^{-1}$\\ $\sin$; $\cos$&$y''=-y$&$\arcsin^{-1}$; $\arccos^{-1}$\\ $\tan$; $\cot$&---&$\arctan^{-1}$; $\mathop{\rm arccot}\nolimits^{-1}$\\ $\sec$; $\csc$&---&$\mathop{\rm arcsec}\nolimits^{-1}$; $\mathop{\rm arccsc}\nolimits^{-1}$\\ $\arcsin$; $\arccos$&$y'=\frac{\pm1}{\sqrt{1-x^2}}$&$\sin^{-1}$; $\cos^{-1}$\\ $\arctan$; $\mathop{\rm arccot}\nolimits$&$y'=\frac{\pm1}{1+x^2}$&$\tan^{-1}$; $\cot^{-1}$\\ $\mathop{\rm arcsec}\nolimits$; $\mathop{\rm arccsc}\nolimits$&$y'=\frac{\pm1}{x\sqrt{x^2-1}}$&$\sec^{-1}$; $\csc^{-1}$\\ $\sinh$; $\cosh$&$y''=y$&$\mathop{\rm arcsinh}\nolimits^{-1}$; $\mathop{\rm arccosh}\nolimits^{-1}$\\ $\tanh$; $\coth$&---&$\mathop{\rm arctanh}\nolimits^{-1}$; $\mathop{\rm arccoth}\nolimits^{-1}$\\ $\mathop{\rm sech}\nolimits$; $\mathop{\rm csch}\nolimits$&---&$\mathop{\rm arcsech}\nolimits^{-1}$; $\mathop{\rm arccsch}\nolimits^{-1}$\\ $\mathop{\rm arcsinh}\nolimits$; $\mathop{\rm arccosh}\nolimits$&$y'=\frac{1}{\sqrt{x^2\pm1}}$&$\sinh^{-1}$; $\cosh^{-1}$\\ $\mathop{\rm arctanh}\nolimits$; $\mathop{\rm arccoth}\nolimits$&$y'=\frac{\pm1}{1-x^2}$&$\tanh^{-1}$; $\coth^{-1}$\\ $\mathop{\rm arcsech}\nolimits$; $\mathop{\rm arccsch}\nolimits$&$y'=\frac{\pm1}{x\sqrt{1\mp x^2}}$&$\mathop{\rm sech}\nolimits^{-1}$; $\mathop{\rm csch}\nolimits^{-1}$\\ \end{tabular} \end{table} Let us assume throughout that we are given a linear ordinary differential equation \begin{equation}\label{eq:ode} L(y)=\sum_{i=0}^nc_i {\frac{\d^i}{\d x^i}} y=d, \qquad c_i,d \in {\bf C}[x]. \end{equation} At the cost of dividing by $d$, one differentiation and some re-normalisation, we can consider the homogeneous equivalent \begin{equation}\label{eq:odeh} L(y)=\sum_{i=0}^{n+1}\hat c_i {\frac{\d^i}{\d x^i}} y=0, \qquad \hat c_i \in {\bf C}[x]. 
\end{equation} More generally, we can homogenize Equation~\eqref{eq:ode} whenever the inhomogeneous part $d$~itself satisfies a linear o.d.e.\ of the form~\eqref{eq:odeh}, as is the case with all examples in Table~\ref{tab:fns}. Outside the zeros of $c_n$ (or $\hat c_{n+1}$), knowing $y$ and sufficiently many of its derivatives at some point $x_0$ (the obvious meaning of \emph{initial conditions}) defines $y$ as an analytic function in the neighbourhood of $x_0$: \begin{equation}\label{eq:Taylor} y(x)=y(x_0)+ (x-x_0)y'(x_0)+\cdots, \end{equation} where higher derivatives of $y$ beyond the initial conditions are computed by applying \eqref{eq:ode} or~\eqref{eq:odeh} and their derivatives to the initial conditions. If it weren't for singularities, this would be an excellent definition. \begin{example}\label{ex:followsqrt} The function~$\sqrt{}$ can also be defined by \[ xy'-\frac12y=0,\quad y(1)=1. \] Obviously, the leading coefficient~$x$ has a (regular) singularity at~$0$. \end{example} \subsection{Germs of branch cuts} In the vicinity of a regular singularity, the location of a branch cut can be shown by the form adopted for the local expansion. For instance, the branch cut for $\arctan$ joins $i$ to $-i$ along the imaginary axis {via infinity} (see Section \ref{K-arctan}). The local behaviour at~$i$ is therefore well described by \begin{equation}\label{arctanseriesgood} \arctan(x)= \frac{-i}2\ln(1+ix)+i\ln\sqrt2+\frac14(x-i)+\cdots \end{equation} written in such a way that the branch cut ``heads north''. \par We can think of the precise formula used to encode the expansion at the singularity as encoding the \emph{germ} of the branch cut, i.e. its local behaviour. The correct angle can always be achieved by rotating the argument. This solves the positioning problem as far as the germ of the branch cut is concerned. We also need to consider the adherence problem. 
Eq.~\eqref{arctanseriesgood} inherits the adherence from the logarithm, and therefore, for $y>1$, means that \[ \arctan(0+iy)=\lim_{x\rightarrow0^+}\arctan(x+iy), \] which is the adherence described in \cite{Kahan1987b} as ``counter-clockwise continuity''. When we need the other adherence, we simply use the fact that $\ln(1/x)=-\ln(x)$ except on the branch cut. \par \subsection{Heuristic rules} The adaptation of Kahan's rules to an o.d.e. $L(y)=0$ together with a starting point is as follows: \renewcommand{\labelenumi}{R\arabic{enumi}$'$.} \begin{enumerate} \setcounter{enumi}{1} \item The branch cuts do not enter the circle of convergence. \item Complex conjugation is respected. \item Any symmetries inherent in the power series are respected. \item The branch cuts begin and end at singularities. \item The branch cuts are straight lines. \item The branch cuts are such that ${\bf C}$ less the branch cuts is simply connected. \end{enumerate} These subsume Kahan's rules, at the cost of explicitly requiring an initial value, which was implicit in his rules~R\ref{K:extend} and~R\ref{K:nonmeet}. He did not need an equivalent of R7$'$ as his examples only had two singularities. In general, it is required so that the Monodromy Theorem (e.g. \cite[p. 269]{Markushevich1967}) applies and guarantees uniqueness of function values. These rules \emph{do not} necessarily determine the branch cuts completely: for a ``random'' differential equation with singularities scattered in the complex plane and no special symmetries, the cuts are not fully determined. Moreover, they do not give any guarantee of consistency between different functions. For instance, both functions~$g$ and~$q$ of~\eqref{eq:K1} and~\eqref{eq:K2} satisfy \emph{the same} linear differential equations. Our rules, which lead only to straight lines, cannot be compatible with branch cuts that come from compositions of solutions of simpler differential equations with algebraic functions.
However, they serve the simple purpose of producing useful and correct branch cuts in a wide variety of cases, including all those discussed before. \subsection{Worked example: $\arctan$}We apply these rules to $\arctan$, now defined by \[ y'=\frac1{1+x^2},\quad y(0)=0. \] The singularities of this differential equation are clearly at $x=\pm i$, and the function so defined is odd. Hence we need a cut which: \begin{description} \item[(R\ref{K:sing}$'$)]joins $i$ and $-i$, \item[(R6$'$)]does it in a straight line, \item[(R\ref{K:conjugate}$'$)]is invariant under complex conjugation, \item[(R\ref{K:odd}$'$)]is invariant under $z\rightarrow-z$, \item[(R2$'$)]does not enter the unit disk. \end{description} Thus we find again the classical branch cut $z=0+iy, |y|>1$. We then deduce expansions at the singularities that match the germs of this cut as in~(\ref{arctanseriesgood}). \subsection{$5\ln x$ or $\ln x^5$?} Once $\ln$ has been defined, the functions~$F_1=5\ln x$ and $F_2=\ln x^5$ are different: $F_1(i)=\frac{5\pi i}2$ while $F_2(i)=\frac{\pi i}2$. Nevertheless, they are both solutions to $xy'-5=0$ with $y(1)=0$. Our approach would make the choice $5\ln x$ with only one branch cut, while~$F_2$ has five branch cuts, at angles of $\{1,3,5,7,9\}\pi/5$, thus making the domain not connected (and violating R7$'$). \subsection{A harder example}\label{sec:arctanz2} Let us consider the functions $f$ defined by \[ f'=\frac{2x}{1+x^4} , \] or, if one prefers homogeneous equations, \[ x(1+x^4)f''+(3x^4-1)f'=0.\] In both cases, we assume we are given real initial conditions at~0. This example is selected because it has two simple, linearly independent solutions---1 and~$\arctan(x^2)$---to compare with the result, but the method does not use this information and would apply even if no such solution could be found.
So here is what we get: \begin{description} \item[(R\ref{K:sing}$'$)] The equation has four regular singularities at \[ z=\pm\sqrt{\pm i}=\frac{\pm1\pm i}{\sqrt 2} \] (one can check that~$0$ is just an apparent singularity by exhibiting a basis of formal power series solutions and that $\infty$ is not a singularity by changing~$x$ into~$1/x$). \item[(R6$'$)] These four singularities have to be connected by straight lines. \item[(R\ref{K:nonmeet}$'$)] We cannot connect the singularities pairwise (in either way!) without going to infinity. \item[(R\ref{K:odd}$'$)] The symmetry $f(ix)=-f(x)$ can be checked directly from the equation, so that branch cuts should be mapped to branch cuts by a rotation of~$\pi/2$. \item[(R\ref{K:conjugate}$'$)] Reality implies that branch cuts are also mapped to branch cuts by horizontal symmetry. \end{description} We are thus left with only the following choice: Cuts that ``head northeast'' from $\frac{1+i}{\sqrt 2}$, ``northwest'' from $\frac{-1+i}{\sqrt 2}$ etc., all meeting at infinity. This is indeed consistent with~$\arctan(x^2)$. \par It is worth noting that this function actually also admits branch cuts that violate R\ref{K:nonmeet}$'$ and R7': for example we can connect $\frac{-1- i}{\sqrt 2}$ to $\frac{+1- i}{\sqrt 2}$, and $\frac{-1+ i}{\sqrt 2}$ to $\frac{+1+ i}{\sqrt 2}$. This is a peculiarity of our construction, and the fact that these are valid follows, not from the Monodromy Theorem, but from the fact that the residues at these branch points are equal and opposite. \section{Conclusions} When it comes to converting an analytic (be it linear ordinary differential equation, inverse function, or possibly other) definition of a function into a well-defined single-valued one, so that one can answer questions such as ``what is $\ln(-1)$?'' or ``what is $\arctan(2i)$'', branch cuts may need to be imposed on the locally analytic function. 
While the definition of the function may stipulate the endpoints of the cut, it does not, in general, specify the location of the cut between its endpoints, nor indeed even the germ of the cut at the singularities. We have given a simple set of rules that is convenient when nothing else is known about the function. This set of rules is sufficient to recover the classical branch cuts of the elementary inverse trigonometric or hyperbolic trigonometric functions. However, it is important to remember that this is only a useful heuristic, while there are cases where different cuts are dictated by the application. In a specific context, getting the right overall function defined by a formula can even require \emph{inconsistent\/} choices of the branch cuts of component functions: see, e.g., the Joukowski map studied in~\cite[pp. 294--8]{Henrici1974}) and reported on in~\cite{Davenport2010}. \par {\bf Acknowledgements.} The authors are grateful to other members of the Algorithms team, notably Alexandre Benoit, Marc Mezzarobba and Flavia Stan, for their contributions. The second author is grateful to Peter Olver for some clarifying discussions on the topic.
\section{Introduction} Quasars at $z>3$ and with GHz flux densities of the order of 1 Jy have luminosities of $\sim10^{28}$~W~Hz$^{-1}$, making them the most luminous, steady radio emitters in the Universe. In this paper, we present 5 and 8.4~GHz global VLBI polarization observations of ten quasars at $z>3$. There have been several recent studies of high-redshift jets on VLBI scales \citep[e.g.,][]{gurvits2000, frey2008, veres2010} but our observations are the first to explore this region of ``luminosity -- emitted frequency'' parameter space in detail with polarization sensitivity. In the standard $\Lambda$CDM cosmology (H$_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}=0.7$ and $\Omega_{M}=0.3$), the angular diameter distance reaches a maximum at $z\sim1.6$ \citep[e.g.,][]{hogg1999}, meaning that more distant objects actually begin to increase in apparent angular size with increasing $z$. As a consequence, the linear scale for observations of objects at $z\sim3$ is similar to objects at $z\sim0.7$. Therefore, our measurements allow us to study pc-scale structures at emitted frequencies of $\sim$20--45~GHz in comparison with properties of low-redshift core-dominated Active Galactic Nuclei (AGN) at 22 and 43~GHz \citep[e.g.,][]{osullivangabuzda2009b} at matching length scale and emitted frequency. An important question we try to address with these observations is whether or not quasar jets and their surrounding environments evolve with cosmic time. Any systematic differences in the polarization properties or the frequency dependence of the polarization between high and low redshift sources would suggest that conditions in the central engines, jets, and/or surrounding media of quasars have evolved with redshift. 
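This behaviour of the linear scale is straightforward to verify in the adopted cosmology (a sketch using Simpson integration; values agree with the scales in Table~\ref{basic_info} only to a few per cent, since the exact integration used there is not restated):

```python
from math import pi, sqrt

C_KM_S, H0, OMEGA_M, OMEGA_L = 299792.458, 70.0, 0.3, 0.7
MAS = pi / (180 * 3600 * 1000)      # radians per milliarcsecond

def d_a_mpc(z, n=2000):
    """Angular diameter distance in flat LCDM (Simpson's rule), in Mpc."""
    h = z / n
    f = lambda zz: 1.0 / sqrt(OMEGA_M * (1 + zz) ** 3 + OMEGA_L)
    s = f(0) + f(z) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
    return (C_KM_S / H0) * (h * s / 3) / (1 + z)

def pc_per_mas(z):
    return d_a_mpc(z) * 1e6 * MAS   # 1 Mpc = 1e6 pc

# D_A peaks near z ~ 1.6 ...
zs = [i / 100 for i in range(50, 301)]
z_peak = max(zs, key=d_a_mpc)
assert 1.5 < z_peak < 1.75
# ... so the projected linear scale at z ~ 3 is close to that at z ~ 0.7
assert abs(pc_per_mas(3.0) / pc_per_mas(0.7) - 1) < 0.15
```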
Previous observations of high redshift quasars \citep[e.g.,][]{frey1997, paragi1999} often show that these quasars can be strongly dominated by compact cores and that their extremely high apparent luminosities are likely due to large Doppler boosting, implying that their jets move at relativistic velocities and are pointed close to our line of sight. With this geometry, it is difficult to distinguish emerging jet components from the bright core. However, high resolution polarization observations have proven very effective in identifying barely resolved but highly polarized jet components, whose intensity is swamped by the intensity of the nearby core, but whose polarized flux is comparable to or even greater than that of the core \citep[e.g.,][]{gabuzda1999}. Thus, polarization sensitivity can be crucial to understanding the nature of compact jets in the highest redshift quasars. There is also some evidence that the VLBI jets of high redshift quasars are commonly bent or distorted; one remarkable case revealed by VSOP observations is that of 1351$-$018 \citep{gurvits2000}, where the jet appears to bend through almost $180^{\circ}$. Polarization observations can help elucidate the physical origin of these observed changes in the jet direction in the plane of the sky since the direction of the jet polarization can reflect the direction of the underlying flow. In some quasars \citep{cawthorne1993, osullivangabuzda2009}, the observed polarization vectors follow curves in the jet, indicating that the curves are actual physical bends, while in others, remarkable uniformity of polarization position angle has been observed over curved sections of jet, suggesting that the apparent curves correspond to bright regions in a much broader underlying flow. One possible origin for bends could be collisions of the jet with denser areas in the surrounding medium as suggested, for example, in the case of the jet in the radio galaxy 4C~41.17 \citep{gurvits1997}.
The polarization information allows us to test for evidence of such interaction, where the Faraday rotation measure (RM) and degree of polarization should increase substantially in those regions. In Section 2, we describe our observations and the data reduction process. Descriptions of the results for each source are given in Section 3, while their implications for the jet structure and environment are discussed in Section 4. Our conclusions are presented in Section 5. We use the $S\propto\nu^{\alpha}$ definition for the spectral index. \section{Observations and Data Reduction} Ten high-redshift ($z>3$) AGN jets (listed in \tref{basic_info}), which were all previously successfully imaged using global VLBI baselines, were targeted for global VLBI polarization observations at 4.99 and 8.415~GHz with two IFs of 8~MHz bandwidth at each frequency. These sources do not comprise a complete sample in any sense, and were chosen based on the previous VLBI observations of \cite{gurvits2000}, which indicated the presence of components with sufficiently high total intensities to suggest that it was feasible to detect their polarization. The sources were observed for 24 hours on 2001 June 5 with all 10 antennas of the NRAO Very Long Baseline Array (VLBA) plus six antennas of the European VLBI Network (EVN) capable of fast frequency switching between 5 and 8.4~GHz (see \tref{telescopes} for a list of the telescopes used). The target sources were observed at alternating several-minute scans at the two frequencies. This led to practically simultaneous observations at the two frequencies with a resolution of $\sim1.1$ mas at 5~GHz and $\sim0.7$ mas at 8.4~GHz (for the maximum baseline of 11,200 km from MK to NT). 
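The quoted resolutions are consistent with a simple $\lambda/D$ estimate for the longest baseline (a rough sketch; the actual synthesized beam depends on the full $uv$-coverage and weighting):

```python
from math import pi

C = 299_792_458.0        # speed of light, m/s
BASELINE = 11_200e3      # m, the maximum baseline (Mauna Kea - Noto)
MAS = pi / (180 * 3600 * 1000)   # radians per milliarcsecond

def resolution_mas(freq_hz):
    # diffraction-limited fringe spacing lambda / D on the longest baseline
    return (C / freq_hz) / BASELINE / MAS

assert abs(resolution_mas(4.99e9) - 1.1) < 0.1    # ~1.1 mas at 5 GHz
assert abs(resolution_mas(8.415e9) - 0.7) < 0.1   # ~0.7 mas at 8.4 GHz
```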
\begin{table*} \caption{Basic information for the high-redshift quasars and VLBI data presented in this paper.} \smallskip \scriptsize{ \begin{tabular}{ccccccccccccccccccc} \hline {(1)} & {(2)} & {(3)} & {(4)} & {(5)} & {(6)} & {(7)} & {(8)} & {(9)} & {(10)} \\% & {(11)} \\ \smallskip {Name} & {z} & {Linear Scale} & {Freq.} & {Beam} & {BPA} & {$I_{\rm total}$} & {$I_{\rm peak}$} & {$\sigma_{I,\rm{rms}}$} & $L_{\nu}$ \\ \smallskip { } & { } & {[pc mas$^{-1}$]} & {[GHz]} & {[mas$\times$mas]} & {[deg]} & {[mJy]} & {[mJy/bm]} & {[mJy/bm]} & [$10^{28}$ W Hz$^{-1}$]\\% & {[mJy]} & {[mJy]} & \noalign{\smallskip} \hline 0014$+$813 & 3.38 & 7.55 & 4.99 & $1.10\times0.95$ & $-85.9$ & 861.4 & 583.2 & 0.3 & 2.10\\ & & & 8.41 & $0.95\times0.68$ & $-85.1$ & 645.1 & 445.6 & 0.9 & 1.57\\ \noalign{\smallskip} 0636$+$680 & 3.17 & 7.71 & 4.99 & $1.25\times0.91$ & $-1.3$ & 348.8 & 270.0 & 0.3 & 0.76\\ & & & 8.41 & $0.82\times0.62$ & $5.3$ & 281.4 & 162.7 & 0.5 & 0.62\\ \noalign{\smallskip} 0642$+$449 & 3.41 & 7.53 & 4.99 & $1.70\times0.85$ & $-15.6$ & 2248.2 & 1974.9 & 0.7 & 5.55\\% & & 21.8 & \\ & & & 8.41 & $1.03\times0.70$ & $14.4$ & 2743.9 & 2460.4 & 3.9 & 6.77\\% & & 66.8 & \\ \noalign{\smallskip} 1351$-$018 & 3.71 & 7.30 & 4.99 & $3.51\times1.10$ & $-7.0$ & 788.0 & 662.2 & 0.4 & 2.23\\% & & 5.5 & \\ & & & 8.41 & $2.07\times0.86$ & $-2.8$ & 594.0 & 534.2 & 1.4 & 1.68\\ \noalign{\smallskip} 1402$+$044 & 3.21 & 7.68 & 4.99 & $3.52\times0.95$ & $-7.7$ & 831.7 & 617.4 & 0.5 & 1.86\\% & & 13.1 & \\ & & & 8.41 & $2.03\times0.75$ & $-6.8$ & 719.7 & 488.2 & 0.3 & 1.61\\% & & 6.3 & \\ \noalign{\smallskip} 1508$+$572 & 4.30 & 6.88 & 4.99 & $1.23\times0.97$ & $15.2$ & 291.1 & 248.7 & 0.2 & 1.04 \\% & & 7.4 & \\ & & & 8.41 & $0.80\times0.76$ & $8.9$ & 198.3 & 170.3 & 0.5 & 0.71 \\% & & 3.2 & \\ \noalign{\smallskip} 1557$+$032 & 3.90 & 7.16 & 4.99 & $3.81\times0.97$ & $-8.2$ & 287.0 & 249.6 & 0.3 & 0.88 \\% & & 3.1 & \\ & & & 8.41 & $2.06\times0.80$ & $-8.8$ & 255.8 & 239.1 & 1.0 & 0.78 \\% & 
& 2.5 & \\ \noalign{\smallskip} 1614$+$051 & 3.21 & 7.68 & 4.99 & $3.74\times0.98$ & $-8.1$ & 792.9 & 588.9 & 0.5 & 1.77 \\ & & & 8.41 & $1.99\times0.80$ & $-9.4$ & 465.0 & 333.6 & 1.0 & 1.04 \\ \noalign{\smallskip} 2048$+$312 & 3.18 & 7.70 & 4.99 & $2.17\times1.12$ & $-21.2$ & 547.0 & 192.7 & 1.5 & 1.20 \\% & & 6.2 & \\ & & & 8.41 & $2.06\times0.71$ & $-16.2$ & 539.1 & 253.9 & 0.6 & 1.19 \\% & & 6.5 & \\ \noalign{\smallskip} 2215$+$020 & 3.55 & 7.42 & 4.99 & $3.45\times1.03$ & $-7.6$ & 271.0 & 190.6 & 0.2 & 0.71 \\% & & 1.0 & \\ & & & 8.41 & $2.06\times0.71$ & $-6.8$ & 174.3 & 122.6 & 0.3 & 0.46 \\% & & 1.4 & \\ \hline \smallskip \end{tabular} \\ \noindent Column designation: 1~-~source name (IAU B1950.0); 2~-~redshift; 3~-~projected distance, in parsecs, corresponding to 1 mas on the sky; 4~-~observing frequency in GHz; 5~-~size of convolving beam in mas$\times$mas at the corresponding frequency for each source; 6~-~beam position angle (BPA), in degrees; 7~-~total flux density, in mJy, from \textsc{CLEAN} map; 8~-~peak brightness, in mJy beam$^{-1}$; 9~-~brightness rms noise, in mJy beam$^{-1}$, from off-source region; 10~-~monochromatic total luminosity at the corresponding frequency in units of $10^{28}$ W Hz$^{-1}$. 
\label{basic_info} } \end{table*} \begin{table} \caption{List of telescopes included in the global VLBI experiment.} \label{telescopes} \smallskip \begin{center} \begin{tabular}{lccl} \hline Telescope Location & Code & Diameter & Comment \\%& T$_{\rm sys}^3$ & K$^4$ & SEFD$^5$ \\ & & [m] & \\%& [K] & [K/Jy] & [Jy] \\ \hline \hline \multicolumn{3}{c}{EVN} \\ \hline Effelsberg, Germany & EB & 100 & \\%& 40 & 1.50 & 30 \\ Medicina, Italy & MC & 32 & \\%& 70 & 0.10 & 700 \\ Noto, Italy & NT & 32 & \\%& 95 & 0.11 & 950 \\ Toru\'n, Poland & TR & 32 & Failed \\%& 50 & 0.13 & 380 \\ Jodrell Bank, UK & JB & 25 & \\%& 30 & 0.29 & 90 \\ Westerbork, Netherlands & WB & 93$^*$ &\\% & 30 & 0.29 & 60 \\ \hline \multicolumn{3}{c}{VLBA, USA} \\ \hline Saint Croix, VI & SC & 25 & \\%& 28 & 0.082 & 310 \\ Hancock, NH & HN & 25 &\\% & 33 & 0.080 & 330 \\ North Liberty, IA & NL & 25 & \\%& 20 & 0.071 & 290 \\ Fort Davis, TX & FD & 25 & \\%& 24 & 0.090 & 270 \\ Los Alamos, NM & LA & 25 & Failed \\% & 32 & 0.121 & 280 \\ Pietown, NM & PT & 25 & \\%& 27 & 0.101 & 270 \\ Kitt Peak, AZ & KP & 25 & \\%& 31 & 0.099 & 310 \\ Owens Valley, CA & OV & 25 &\\% & 34 & 0.100 & 340 \\ Brewster, WA & BR & 25 &\\% & 24 & 0 087 & 280 \\ Mauna Kea, HI & MK & 25 & \\%& 34 & 0.097 & 340 \\\hline \hline \end{tabular}\\ \smallskip \scriptsize{$^*$ Equivalent diameter for the phased array, $\sqrt{14}\times 25$~m.} \end{center} \end{table} The data were recorded at each telescope with an aggregate bit rate of 128 Mbits/s, recorded in 8 base-band channels at 16 Msamples/sec with 2-bit sampling, and correlated at the Joint Institute for VLBI in Europe, Dwingeloo, The Netherlands. Los Alamos (LA) was unable to take data due to communication problems and Toru\'n (TR) also failed to take data, so both were excluded from the processing. The 5 and 8.4~GHz VLBI data were calibrated independently using standard techniques in the NRAO AIPS package. 
In both cases, the reference antenna for the VLBI observations was the Kitt Peak telescope. The instrumental polarizations (`D-terms') at each frequency were determined using observations of 3C\,84, using the task \textsc{LPCAL} and assuming 3C\,84 to be unpolarized. The polarization angle ($2\chi=\arctan \frac{U}{Q}$) calibration was achieved by comparing the total VLBI-scale polarizations observed for the compact sources 1823+568 (at 5~GHz) and 2048+312 (at 8.4~GHz) with their polarizations measured simultaneously with the NRAO Very Large Array (VLA) at both 5 and 8.4~GHz, and rotating the VLBI polarization angles to agree with the VLA values. We obtained 2 hours of VLA data, overlapping with the VLBI data on June 6. The VLA angles were calibrated using the known polarization angle of 3C\,286 at both 5 and 8.4~GHz (see \textsc{AIPS} cookbook\footnote{http://www.aips.nrao.edu/cook.html}, chapter 4). At 5~GHz, we found $\Delta\chi_{\rm 5\,GHz}=\chi_{\rm VLA}-\chi_{\rm VLBI}=67.5^{\circ}$ and at 8.4~GHz, $\Delta\chi_{\rm 8.4\,GHz}=\chi_{\rm VLA}-\chi_{\rm VLBI}=83.5^{\circ}$. \section{Results} \tref{basic_info} lists the 10 high-z quasars, their redshifts, the FWHM beam sizes at 5 and 8.4~GHz, the total CLEAN flux and the peak flux densities of the 5 and 8.4~GHz maps, the noise levels in the maps, and the 5 and 8.4~GHz luminosities, calculated assuming a spectral index of zero and isotropic radiation. Since the emission from these sources is likely relativistically boosted, the true luminosity is probably smaller by a factor of 10--100 \citep[e.g.,][]{cohen2007}. \fref{Figure:uvcoverage} gives an example of the sparse but relatively uniform \emph{uv} coverage obtained for this experiment. The 5-GHz \emph{uv} coverage for 2048+312 is shown; the 8.4~GHz is essentially identical but scaled accordingly. The \textsc{DIFMAP} software package \citep{shepherd1997} was used to fit circular and/or elliptical Gaussian components to model the self-calibrated source structure. 
The brightness temperatures in the source frame were calculated for each component from the integrated flux and angular size according to \begin{equation} T_b=1.22\times 10^{12}\frac{S_{\text{total}}(1+z)}{d_{\max }d_{\min }\nu ^2} \end{equation} \noindent where the total flux density is measured in Jy, the FWHM size is measured in mas, and the observing frequency is measured in GHz. The limiting angular size for a Gaussian component ($d_{\rm lim}$) was also calculated to check whether the component size reflects the true size of that particular jet emission region. From \cite{lobanov2005}, we have \begin{equation} d_{\rm lim}=\sqrt{\frac{\ln 2}{\pi} b_{\rm max} b_{\rm min} \ln \left(\frac{\rm S/N}{{\rm S/N}-1} \right)} \end{equation} \noindent where $b_{\rm max}$ and $b_{\rm min}$ are the major and minor axes of the FWHM beam and S/N is the signal-to-noise ratio of a particular component. Component sizes smaller than this value yielded by the model-fitting are therefore not reliable. \tref{modelfits} shows the results of the total intensity model-fitting for each source. For clarity of comparison, the component identified as the core has been defined to be at the origin, and the positions of the jet components determined relative to this position. The errors in the model-fitted positions were estimated as $\Delta r = \frac{\sigma_{\rm rms}\,d}{2S_{\rm peak}}\sqrt{1+S_{\rm peak}/\sigma_{\rm rms}}$ and $\Delta\theta = \arctan(\Delta r/r)$, where $\sigma_{\rm rms}$ is the residual noise of the map after the subtraction of the model, $d$ is the model-fitted component size and $S_{\rm peak}$ is the peak flux density \citep[e.g.,][]{fomalont1999, lee2008, nadia2010}. These are formal errors, and may yield position errors that are appreciably underestimated in some cases; when this occurs, we use a $1\sigma$ error estimate of $\pm0.1$ beamwidths in the structural position angle of the component in question.
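The two expressions above are straightforward to evaluate; the following minimal Python sketch (function names are ours) reproduces the brightness temperature of the 4.99-GHz core component of 0014$+$813 from the tabulated values. Note that the S/N entering $d_{\rm lim}$ is taken here as the map peak over the rms noise, which is an assumption: the per-component S/N used for \tref{modelfits} may be defined differently, so the resulting $d_{\rm lim}$ is only indicative.

```python
import math

def brightness_temp(s_total_jy, z, d_max_mas, d_min_mas, freq_ghz):
    """Source-frame brightness temperature (K) of a Gaussian component:
    T_b = 1.22e12 * S (1+z) / (d_max d_min nu^2); S in Jy, d in mas, nu in GHz."""
    return 1.22e12 * s_total_jy * (1.0 + z) / (d_max_mas * d_min_mas * freq_ghz ** 2)

def limiting_size(b_max_mas, b_min_mas, snr):
    """Minimum resolvable component size (mas), following Lobanov (2005)."""
    return math.sqrt((math.log(2.0) / math.pi) * b_max_mas * b_min_mas
                     * math.log(snr / (snr - 1.0)))

# Core (component A) of 0014+813 at 4.99 GHz: S = 617 mJy, z = 3.38,
# FWHM 0.53 x 0.32 mas, beam 1.10 x 0.95 mas, peak/rms = 583.2/0.3.
tb = brightness_temp(0.617, 3.38, 0.53, 0.32, 4.99)   # ~7.8e11 K
d_lim = limiting_size(1.10, 0.95, 583.2 / 0.3)        # ~0.011 mas
```

With these inputs, $T_b\approx7.8\times10^{11}$~K, matching the tabulated $78.2\times10^{10}$~K to within rounding of the inputs, and the fitted 0.53-mas core comfortably exceeds the resolution limit.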
Polarization model fits, listed in \tref{pol_models}, were found using the Brandeis \textsc{VISFIT} package \citep{roberts1987, gabuzda1989}, adapted to run in a Linux environment by \citet{bezrukovs2006}. The positions in \tref{pol_models} have been shifted in accordance with our cross-identification of the corresponding intensity cores, when we consider this cross-identification to be reliable. The errors quoted for the model-fitted components are formal $1\sigma$ errors, corresponding to an increase in the best-fitting $\chi^2$ by unity; again, we have adopted a $1\sigma$ error estimate of $\pm0.1$ beamwidths in the structural position angle of the component in question when the position errors are clearly underestimated. Before constructing spectral-index maps, images with matched resolution must be constructed at the two frequencies. Since the intrinsic resolutions of the 8.4 and 5-GHz images were not very different (less than a factor of two), we achieved this by making a version of the final 8.4-GHz image with the same cell size, image size and beam as the 5-GHz image. The two images must also be aligned based on optically thin regions of the structure: the mapping procedure aligns the partially optically thick cores with the map origin, whereas we expect shifts between the physical positions of the cores at the two frequencies. When possible, we aligned the two images by comparing the positions of optically thin jet components at the two frequencies derived from model-fits to the VLBI data, or using the cross-correlation technique of Croke \& Gabuzda (2008). After this alignment, we constructed spectral-index maps in \textsc{AIPS} using the task \textsc{COMB}.
In the absence of Faraday rotation, we expect the polarization angles for corresponding regions at the two frequencies to coincide; in the presence of Faraday rotation of the electric vector position angle (EVPA), which occurs when the polarized radiation passes through regions of magnetized plasma, the observed polarization angles will be subject to a rotation by RM$\lambda^2$, where RM is the Faraday rotation measure and $\lambda$ is the observing wavelength. We are not able to unambiguously identify the action of Faraday rotation based on observations at only two frequencies; however, Faraday rotation provides a simple explanation for differences in the polarization angles observed at the two frequencies. We accordingly calculated tentative Faraday rotation measures, ${\rm RM}_{\rm min} = (\chi_{\rm 5~GHz}-\chi_{\rm 8.4~GHz})/(\lambda^{2}_{\rm 5~GHz}-\lambda^{2}_{\rm 8.4~GHz})$, based on the results of the polarization model fitting, taking the minimum difference between the component EVPAs. We also constructed tentative RM maps in \textsc{AIPS} using the task \textsc{COMB}. In both cases, if Faraday rotation is operating, the derived RM values essentially represent a lower limit for the true Faraday rotation (assuming an absence of $n\pi$ rotations in the observed angles). With our two frequencies, the $n\pi$ ambiguity in the RM corresponds to $n$ times 1350~rad m$^{-2}$. \tref{pol_models} lists the values of RM$_{\rm min}$ and the corresponding intrinsic polarization angle $\chi_0$, obtained by extrapolating to zero wavelength. However, we emphasize that polarization measurements at three or more wavelengths are required to come to any firm conclusions about the correct RM and $\chi_0$ values for these sources. The main source of uncertainty for the RMs (apart from possible $n\pi$ ambiguities) comes from the EVPA calibration, which we estimate is accurate to within $\pm3^{\circ}$; this corresponds to an RM error of $\pm23$ rad m$^{-2}$ between 5 and 8.4~GHz.
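With only two wavelengths, the ${\rm RM}_{\rm min}$ computation above reduces to a wrapped EVPA difference divided by the difference in $\lambda^2$. A minimal sketch follows (function names are ours); the wrapping of the angle difference into $(-90^{\circ},+90^{\circ}]$, reflecting that EVPAs are defined modulo $180^{\circ}$, is our assumption about how the minimum difference was taken.

```python
import math

C = 299792458.0  # speed of light, m/s

def lam2(freq_hz):
    """Observing wavelength squared, in m^2."""
    return (C / freq_hz) ** 2

def rm_min(chi_low_deg, chi_high_deg, f_low=4.99e9, f_high=8.41e9):
    """Minimum rotation measure (rad/m^2) between two EVPAs; the angle
    difference is wrapped into (-90, +90] deg since EVPAs are mod 180 deg."""
    dchi = (chi_low_deg - chi_high_deg + 90.0) % 180.0 - 90.0
    return math.radians(dchi) / (lam2(f_low) - lam2(f_high))

def chi0(chi_deg, rm, freq_hz):
    """EVPA (deg) extrapolated to zero wavelength for a given RM."""
    return chi_deg - math.degrees(rm * lam2(freq_hz))

# n*pi ambiguity step between 4.99 and 8.41 GHz: ~1.34e3 rad/m^2
# (quoted as ~1350 rad/m^2 in the text for nominal 5/8.4 GHz).
step = math.pi / (lam2(4.99e9) - lam2(8.41e9))

# Core of 0014+813: chi_5 = -40.9 deg, chi_8.4 = -92.2 deg (Table 4).
rm = rm_min(-40.9, -92.2)           # ~ +383 rad/m^2 (cf. 384 tabulated)
chi_intr = chi0(-92.2, rm, 8.41e9)  # ~ -120 deg, matching Table 4
```

The small offset from the tabulated 384~rad~m$^{-2}$ comes from rounding in the exact wavelengths used.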
We have also attempted to obtain the correct sign and magnitude of the RM in the immediate vicinity of the AGNs by subtracting the integrated RMs derived from lower-frequency VLA measurements centered on our sources, obtained from the literature. Note that these represent the integrated RMs directly along the line of sight toward our sources, rather than along a nearby sight-line; the uncertainties in such measurements, when based on observations at several frequencies near 1--2~GHz, are typically no larger than about $\pm5$~rad m$^{-2}$. Because of the lower resolution of these measurements and the greater prominence of the jets at lower frequencies, the polarization detected in such observations usually originates fairly far from the VLBI core, where we expect the overall RM local to the source to be negligible. Therefore, the lower-frequency integrated RMs usually correspond to the foreground (Galactic) contribution to the overall RM detected in our VLBI data, and subtracting off this value should typically help isolate the RM occurring in the immediate vicinity of the AGN. Due to the extremely high redshifts of these sources, it can be very important to remove the foreground RM before estimating the intrinsic minimum RM in the source rest frame, RM$_{\rm min}^{\rm intr}=(1+z)^2({\rm RM}_{\rm min}-{\rm RM}_{\rm Gal})$. \tref{pol_models} also lists ${\rm RM}_{\rm Gal}$ and RM$_{\rm min}^{\rm intr}$ for sources that have measured integrated RMs in the literature. We take the polarization angles measured at the two frequencies to be essentially equal (negligible RM) within the errors if they agree to within $6^{\circ}$, and do not attempt to correct such angles for Galactic Faraday rotation, since there can be some uncertainty about interpreting integrated RMs as purely Galactic foreground RMs, and the subtraction of the integrated values from such small nominal VLBI RM values could lead to erroneous results in some cases.
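The rest-frame correction above is a one-liner; as an illustration (function name is ours), the tabulated values for the 0014$+$813 and 1351$-$018 cores follow directly from the quoted ${\rm RM}_{\rm min}$, ${\rm RM}_{\rm Gal}$ and redshifts.

```python
def rm_intrinsic(rm_min, rm_gal, z):
    """Minimum RM in the source rest frame with the Galactic foreground
    removed: RM_min^intr = (1+z)^2 (RM_min - RM_Gal), all in rad/m^2."""
    return (1.0 + z) ** 2 * (rm_min - rm_gal)

# 0014+813 core: (384 - 9) * 4.38^2   ~ +7194 rad/m^2 (Table 4)
# 1351-018 core: (-223 - (-8)) * 4.71^2 ~ -4770 rad/m^2 (Table 4)
```

The $(1+z)^2$ factor makes even modest observed RMs correspond to large rest-frame values at these redshifts, roughly a factor of 20 here.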
For each source, we present the total-intensity distributions at 5 and 8.4~GHz overlaid with EVPA sticks in Fig.~2 and also plot the positions of the model-fitted total intensity and polarization components. The derived spectral-index and RM$_{\rm min}$ distributions generally do not show unexpected features, and are not presented here; these can be obtained by contacting S.P. O'Sullivan directly. \begin{table*} \begin{center} \caption{Total intensity model fit parameters of all sources.} \smallskip \scriptsize{ \begin{tabular}{ccccccccccccccccccc} \hline {(1)} & {(2)} & {(3)} & {(4)} & {(5)} & {(6)} & {(7)} & {(8)} & {(9)} & {(10)} \\ \noalign{\smallskip} Name & Freq. & Comp. & $r$ & $\theta$ & $I_{\rm tot}$ & $d_{\rm max}$ & $d_{\rm min}$ & $d_{\rm lim}$ & $T_{\rm b}$ \\ \noalign{\smallskip} & [GHz] & & [mas] & [deg] & [mJy] & [mas] & [mas] & [mas] & [$\times10^{10}$ K] \\ \hline 0014$+$813 &4.99& A & -- & -- & $617\pm20$ & 0.53 & 0.32 & 0.03 & $78.2\pm6.4$ \\ & & B & $0.66\pm0.10$ & $-175.7\pm1.2$ & $196\pm11$ & 0.53 & 0.53 & 0.05 & $15.2\pm2.3$ \\ & & C & $4.93\pm0.10$ & $-162.8\pm1.1$ & $13\pm3$ & 0.92 & 0.92 & 0.21 & $0.32\pm0.21$ \\ & & D & $6.41\pm0.21$ & $-171.0\pm1.9$ & $16\pm4$ & 1.75 & 1.75 & 0.19 & $0.11\pm0.08$ \\ & & E & $9.51\pm0.16$ & $-170.9\pm1.0$ & $17\pm4$ & 1.49 & 1.49 & 0.18 & $0.16\pm0.11$ \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $547\pm36$ & 0.46 & 0.13 & 0.04 & $65.9\pm11.0$ \\ & & B & $0.74\pm0.07$ & $-178.3\pm1.7$ & $85\pm14$ & 0.40 & 0.40 & 0.09 & $3.9\pm1.7$ \\ \noalign{\smallskip} 0636$+$680 & 4.99 & A & -- & -- & $344\pm15$ & 0.69 & 0.35 & 0.03 & $28.9\pm3.1$ \\ & & B & $2.22\pm0.54$ & $140.3\pm9.0$ & $6\pm3$ & 2.76 & 2.76 & 0.23 & $0.02\pm0.02$ \\ \noalign{\smallskip} & 8.41 & A & - & - & $286\pm21$ & 0.78 & 0.38 & 0.04 & $7.4\pm1.4$ \\ \noalign{\smallskip} 0642$+$449 & 4.99 & A & -- & -- & $2209\pm60$ & 0.41 & 0.28 & 0.02 & $417\pm28$ \\ & & B & $3.22\pm0.45$ & $92.5\pm7.5$ & $45\pm14$ & 2.86 & 2.86 & 0.15 & $0.12\pm0.11$ \\ 
\noalign{\smallskip} & 8.41 & A & -- & -- & $2313\pm134$ & 0.16 & 0.16 & 0.03 & $731\pm103$ \\ & & A1 & $0.20\pm0.09$ & $158.1\pm0.8$ & $458\pm60$ & 0.09 & 0.09 & 0.08 & $439\pm138$ \\ \noalign{\smallskip} 1351$-$018 & 4.99 & A & -- & -- & $681\pm23$ & 0.77 & 0.23 & 0.05 & $89.5\pm7.5$ \\ & & B & $1.01\pm0.16$ & $132.7\pm1.0$ & $79\pm8$ & 0.48 & 0.48 & 0.14 & $7.9\pm2.0$ \\ & & C & $1.64\pm0.12$ & $56.6\pm0.5$ & $14\pm3$ & 0.17 & 0.17 & 0.32 & -- \\ & & D & $11.85\pm0.59$ & $-12.9\pm2.8$ & $17\pm5$ & 4.02 & 4.02 & 0.30 & $0.02\pm0.02$ \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $578\pm37$ & 0.67 & 0.06 & 0.06 & $125.9\pm20.1$ \\ & & B & $0.83\pm0.12$ & $138.5\pm9.7$ & $13\pm6$ & 0.76 & 0.76 & 0.40 & $0.18\pm0.19$ \\ & & C & $5.96\pm0.63$ & $-2.1\pm5.8$ & $3\pm3$ & 2.43 & 2.43 & 1.06 & $0.003\pm0.008$ \\ \noalign{\smallskip} 1402$+$044 & 4.99 & A & -- & -- & $330\pm17$ & 0.23 & 0.23 & 0.05 & $130.0\pm16.7$ \\ & & B & $0.76\pm0.25$ & $-24.5\pm1.3$ & $374\pm19$ & 0.46 & 0.46 & 0.04 & $36.9\pm4.6$ \\ & & C & $3.33\pm0.15$ & $-44.4\pm4.5$ & $19\pm5$ & 1.93 & 1.93 & 0.23 & $0.11\pm0.08$ \\ & & D & $9.07\pm0.14$ & $-46.6\pm0.9$ & $64\pm9$ & 2.13 & 2.13 & 0.11 & $0.29\pm0.12$ \\ & & E & $12.64\pm0.50$ & $-77.7\pm2.3$ & $44\pm10$ & 4.72 & 4.72 & 0.18 & $0.04\pm0.03$ \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $300\pm19$ & 0.14 & 0.14 & 0.07 & $111.1\pm17.5$ \\ & & B & $0.83\pm0.16$ & $-23.7\pm1.0$ & $336\pm21$ & 0.43 & 0.43 & 0.06 & $13.4\pm2.1$ \\ & & C & $3.23\pm0.29$ & $-43.3\pm5.5$ & $13\pm5$ & 1.62 & 1.62 & 0.27 & $0.03\pm0.04$ \\ & & D & $9.22\pm0.20$ & $-46.2\pm1.2$ & $55\pm11$ & 2.07 & 2.07 & 0.15 & $0.09\pm0.05$ \\ & & E & $12.83\pm0.32$ & $-77.8\pm1.5$ & $20\pm7$ & 2.09 & 2.09 & 0.18 & $0.03\pm0.03$ \\ \noalign{\smallskip} 1508$+$572 & 4.99 & A & -- & -- & $276\pm11$ & 0.45 & 0.28 & 0.03 & $56.1\pm5.3$\\ & & B & $2.00\pm0.12$ & $-178.4\pm3.3$ & $14\pm3$ & 1.37 & 1.37 & 0.13 & $0.20\pm0.11$ \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $196\pm14$ & 0.42 & 
0.14 & 0.03 & $30.0\pm5.2$ \\ & & B & $1.94\pm0.14$ & $-172.1\pm3.7$ & $6\pm3$ & 0.70 & 0.70 & 0.20 &$0.11\pm0.13$ \\ \noalign{\smallskip} 1557$+$032 & 4.99 & A & -- & -- & $262\pm12$ & 0.70 & 0.26 & 0.06 & $34.0\pm3.9$\\ & & B & $2.64\pm0.17$ & $139.4\pm2.4$ & $14\pm3$ & 1.24 & 1.24 & 0.26 & $0.22\pm0.13$\\ \noalign{\smallskip} & 8.41 & A & -- & -- & $252\pm15$ & 0.49 & 0.12 & 0.05 & $35.0\pm5.1$\\ & & B & $2.16\pm0.37$ & $148.5\pm9.2$ & $15\pm5$ & 2.46 & 2.46 & 0.18 & $0.02\pm0.02$ \\ \noalign{\smallskip} 1614$+$051 & 4.99 & A & -- & -- & $566\pm24$ & 0.37 & 0.37 & 0.05 & $63.5\pm6.7$ \\ & & B & $1.26\pm0.16$ & $-154.4\pm0.7$ & $229\pm15$ & 0.54 & 0.54 & 0.08 & $12.4\pm2.1$ \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $347\pm28$ & 0.24 & 0.24 & 0.07 & $32.4\pm6.5$ \\ & & B & $1.26\pm0.12$ & $-154.1\pm0.7$ & $117\pm17$ & 0.37 & 0.37 & 0.12 & $4.6\pm1.6$ \\ \noalign{\smallskip} 2048$+$312 & 4.99 & A & -- & -- & $393\pm38$ & 1.78 & 1.24 & 0.10 & $3.7\pm1.0$ \\ & & B & $1.55\pm0.11$ & $65.7\pm2.2$ & $126\pm21$ & 1.43 & 1.43 & 0.17 & $1.3\pm0.6$ \\ & & C & $8.86\pm0.36$ &$56.8\pm2.0$ & $20\pm10$ & 1.68 & 1.68 & 0.44 & $0.14\pm0.19$ \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $411\pm23$ & 0.80 & 0.44 & 0.04 & $8.4\pm1.2$ \\ & & B & $1.49\pm0.07$ & $80.6\pm2.6$ & $129\pm14$ & 1.29 & 1.29 & 0.07 & $0.56\pm0.18$ \\ \noalign{\smallskip} 2215$+$020 & 4.99 & A & -- & -- & $141\pm9$ & 0.26 & 0.26 & 0.08 & $45.9\pm7.3$ \\ & & B & $0.52\pm0.11$ & $108.0\pm2.2$ & $73\pm7$ & 0.39 & 0.39 & 0.12 & $10.7\pm2.4$ \\ & & C &$1.53\pm0.10$ & $79.1\pm5.4$ & $11\pm3$ & 1.26 & 1.26 & 0.30 & $0.15\pm0.10$ \\ & & D &$50.52\pm1.52$ & $79.1\pm1.6$ & $33\pm11$ & 9.17 & 9.17 & 0.17 & $0.009\pm0.009$ \\ & & E & $57.71\pm2.48$ & $74.7\pm2.3$ & $19\pm8$ & 11.26 &11.26 & 0.23 & $0.003\pm0.004$ \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $94\pm7$ & 0.20 & 0.20 & 0.06 & $19.0\pm3.5$ \\ & & B & $0.50\pm0.08$ & $115.7\pm1.3$ & $66\pm6$ & 0.22 & 0.22 & 0.07 & $10.7\pm2.4$ \\ & & C & $0.95\pm0.07$ 
& $95.5\pm3.6$ & $17\pm3$ & 0.64 & 0.64 & 0.14 & $0.32\pm0.2$ \\ \noalign{\smallskip} \hline \smallskip \end{tabular}\\ \noindent Column designation: 1~-~source name (IAU B1950.0); 2~-~observing frequency, in GHz; 3~-~component identification; 4~-~distance of component from core, in mas; 5~-~position angle of component with respect to the core, in degrees; 6~-~total flux of model component, in mJy; 7~-~FWHM major axis of Gaussian component, in mas; 8~-~FWHM minor axis of Gaussian component, in mas; 9~-~Minimum resolvable size, in mas; 10~-~measured brightness temperature in units of $10^{10}$K. \label{modelfits} } \end{center} \end{table*} \begin{table*} \begin{center} \scriptsize{ \begin{center} \caption{Polarization model fit parameters of all sources.} \end{center} \begin{tabular}{ccccccccccccccccccc} \hline {(1)} & {(2)} & {(3)} & {(4)} & {(5)} & {(6)} & {(7)} & {(8)} & {(9)} & (10) & (11) & (12) \\ \noalign{\smallskip} Name & Freq. & Comp. & $r$ & $\theta$ & $p$ & $\chi$ & $m$ & RM$_{\rm min}$ & $\chi_0$ & RM$_{\rm Gal}$ & RM$_{\rm min}^{\rm intr}$ \\ \noalign{\smallskip} & [GHz] & & [mas] & [deg] & [mJy] & [deg] & [\%] & [rad m$^{-2}$] & [deg] & [rad m$^{-2}$] & [rad m$^{-2}$] \\ \hline 0014$+$813 & 4.99 & A & -- & -- & $4.5\pm0.3$ & $-40.9$ & $1.1\pm0.3$ & 384 & $-120\pm13$ & $9^a$ & $7194\pm525$ \\ & & B & $4.72\pm0.10$ & $-163.5\pm2.1$ & $1.3\pm0.5$ & 47.7 & $20.0\pm4.5$ & -- & -- & -- & -- \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $2.3\pm0.2$ & $-92.2$ & $0.7\pm0.2$ & & & & \\ \noalign{\smallskip} 0636$+$680 & 4.99 & A & -- & -- & $4.1\pm0.3$ & 78.4 & $1.7\pm0.5$ & -- & -- & -- & -- \\ \noalign{\smallskip} & 8.41 & -- & -- & -- & -- & -- & -- & & & & \\ \noalign{\smallskip} 0642$+$449 & 4.99 & A & -- & -- & $21.2\pm1.8$ & 15.6 & $1.1\pm0.4$ & -- & -- & -- & -- \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $68.2\pm4.2$ & 15.3 & $2.6\pm0.7$ & & & & \\ \noalign{\smallskip} 1351$-$018 & 4.99 & A & -- & -- & $5.2\pm0.4$ & $-76.2$ & $0.9\pm0.3$ &$-223$ & 
$ -30\pm6$ & $-8^a$ & $-4770\pm599$ \\ & & B & $1.45\pm0.11$ & $100.3\pm2.7$ & $2.8\pm0.7$ & 52.3 & $2.7\pm0.5$ & 199 & $11\pm3$ & $-8^a$ & $4592\pm646$ \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $4.1\pm0.4$ & $-46.4$ & $0.5\pm0.2$ & & & & \\ & & B & $2.38\pm0.16$ & $155.9\pm2.4$ & $1.7\pm0.5$ & 25.7 & $1.3\pm0.4$ & & & & \\ \noalign{\smallskip} 1402$+$044 & 4.99 & A & -- & -- & $3.1\pm0.3$ & 15.7 & $1.8\pm0.5$ & $-199$ & $57\pm12$ & -- & -- \\ & & B & $0.77\pm0.11$ & $-61.5\pm1.8$ & $11.2\pm1.1$ & 4.1 & $2.0\pm0.7$ & $-57$ & $16\pm12$ & -- & -- \\ & & C & $1.50\pm0.12$ & $-60.9\pm3.3$ & $1.6\pm0.5$ & $-26.2$ & $2.9\pm0.8$ & 141 & $-55\pm15$ & -- & -- \\ & & D & $8.12\pm0.15$ & $-45.4\pm0.9$ & $1.1\pm0.4$ & 83.8 & $17.8\pm5.4$ & $-59$ & $96\pm49$ & -- & -- \\ & & E & $9.56\pm0.14$ & $-49.2\pm1.2$ & $2.0\pm0.5$ & $-1.5$ & $9.1\pm2.1$ & -- & -- & -- & -- \\ & & F & $13.41\pm0.10$ & $-81.9\pm1.7$ & $1.6\pm0.5$ & 49.8 & $21.2\pm6.2$ & -- & -- & -- & -- \\ \noalign{\smallskip} & 8.41 & A1 & $2.77\pm0.20$ & $177.5\pm3.4$ & $5.8\pm0.6$ & $-6.2$ & $3.2\pm0.7$ & & & & \\ & & A & -- & -- & $2.3\pm0.5$ & 42.3 & $1.5\pm0.3$ & & & & \\ & & B & $0.54\pm0.09$ & $-60.1\pm2.7$ & $7.4\pm0.7$ & 11.7 & $1.3\pm0.3$ & & & & \\ & & C & $1.48\pm0.08$ & $-72.3\pm2.4$ & $2.8\pm0.6$ & $-45.1$ & $1.2\pm0.3$ & & & & \\ & & D & $8.26\pm0.11$ & $-43.0\pm0.9$ & $2.8\pm0.6$ & 91.7 & $22.9\pm9.1$ & & & & \\ \noalign{\smallskip} 1508$+$572 & 4.99 & A & -- & -- & $7.5\pm0.9$ & $-89.4$ & $2.9\pm0.8$ & -- & -- & -- & -- \\ & & B & $2.36\pm0.18$ & $177.3\pm1.4$ & $0.6\pm0.3$ & 36.1 & $16.9\pm5.7$ & -- & -- & -- & -- \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $4.0\pm1.2$ & $-93.4$ & $1.3\pm0.4$ & & & & \\ \noalign{\smallskip} 1557$+$032 & 4.99 & A & -- & -- & $3.4\pm0.5$ & 54.2 & $1.7\pm0.5$ & $-49$ & $64\pm40$ & 3$^a$ & $-1249\pm713$ \\ & & B & $1.77\pm0.19$ & $19.5\pm3.6$ & $0.7\pm0.3$ & 85.2 & $1.8\pm0.5$ & 68 & $71\pm32$ & 3$^a$ & $1561\pm642$ \\ \noalign{\smallskip} & 8.41 & A & -- & -- & 
$3.5\pm0.4$ & 60.8 & $1.2\pm0.4$ & & & & \\ & & B & $1.25\pm0.09$ & $59.1\pm1.7$ & $2.1\pm0.3$ & 76.2 & $6.0\pm0.7$ & & & & \\ \noalign{\smallskip} 1614$+$051 & 4.99 & C & $1.51\pm0.10$ & $-95.7\pm1.0$ & $3.8\pm0.5$ & $-38.5$ & $1.8\pm0.5$ & 522 & $-107\pm10$ & 8$^c$ & $9110\pm488$ \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $2.9\pm0.6$ & 62.4 & $3.4\pm0.7$ & & & & \\ & & B & $0.82\pm0.08$ & $-119.1\pm2.9$ & $6.5\pm0.9$ & 78.7 & $2.1\pm0.4$ & & & & \\ & & C & $1.55\pm0.09$ & $-121.9\pm3.2$ & $2.6\pm0.5$ & $-69.8$ & $2.0\pm0.3$ & & & & \\ \noalign{\smallskip} 2048$+$312 & 4.99 & A & -- & -- & $3.8\pm0.6$ & $-39.5$ & $3.3\pm0.2$ & -- & -- & -- & -- \\ & & B & $2.31\pm0.11$ & $88.1\pm1.6$ & $1.1\pm0.5$ & $-61.2$ & $4.7\pm0.4$ & -- & -- & -- & -- \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $5.5\pm0.7$ & $-35.0$ & $2.2\pm0.3$ & & & & \\ & & B & $2.04\pm0.09$ & $86.6\pm1.9$ & $1.4\pm0.5$ & $-93.6$ & $6.2\pm0.6$ & & & & \\ \noalign{\smallskip} 2215$+$020 & 4.99 & A & -- & -- & $0.6\pm0.2$ & 57.0 & $0.9\pm0.4$ & -- & -- & -- & -- \\ \noalign{\smallskip} & 8.41 & A & -- & -- & $1.5\pm0.4$ & 53.8 & $1.3\pm0.5$ & & & & \\ \noalign{\smallskip} \hline \end{tabular}\\ Column designation: 1~-~source name (IAU B1950.0); 2~-~observing frequency, in GHz; 3~-~component identification; 4~-~distance of component from core component, in mas; 5~-~position angle of component with respect to core component, in degrees; 6~-~polarized flux of component, in mJy; 7~-~EVPA of component, in degrees (nominal error of $\pm3^{\circ}$ from calibration, which is much greater than any model-fit errors); 8~-~Degree of polarization, in per cent, taken from a 3x3 pixel area in degree of polarization image centred on the model-fitted position; 9~-~minimum rotation measure obtained from minimum separation between $\chi_{\rm 5~GHz}$ and $\chi_{\rm 8.4~GHz}$, in rad m$^{-2}$ (with nominal error of $\pm28$ rad m$^{-2}$); 10~-~intrinsic EVPA, in degrees, as corrected by RM$_{\rm min}$. 
11~-~integrated (Galactic) RM, in rad m$^{-2}$, from the literature: $^a$\citet{taylor2009}, $^b$\citet{orenwolfe1995}. 12~-~intrinsic minimum RM corrected for Galactic contribution, where RM$_{\rm min}^{\rm intr}=({\rm RM}_{\rm min}-{\rm RM}_{\rm Gal})(1+z)^2$. \label{pol_models} } \end{center} \end{table*} \begin{table} \scriptsize{ \caption{Total intensity and polarization image cutoff values.} \label{Table1} \begin{center} \begin{tabular}{ccccc} \hline \multicolumn{1}{c}{} & \multicolumn{2}{c}{5 GHz} & \multicolumn{2}{c}{8.4 GHz} \\ \cline{2-3} \cline{4-5} Source name & $I$ cutoff & $p$ cutoff & $I$ cutoff & $p$ cutoff\\ & [mJy/bm] &[mJy/bm] & [mJy/bm] &[mJy/bm] \\ \hline 0014$+$813 & 1.2 & 0.7 & 2.8 & 1.3 \\ 0636$+$680 & 0.9 & 0.7 & 1.5 & 0.2 \\ 0642$+$449 & 3.6 & 2.4 & 5.0 & 11.0 \\ 1351$-$018 & 1.3 & 1.1 & 4.0 & 1.3 \\ 1402$+$044 & 2.0 & 1.4 & 1.1 & 1.7 \\ 1508$+$572 & 1.1 & 0.7 & 1.5 & 1.8 \\ 1557$+$032 & 1.1 & 0.9 & 1.0 & 1.2 \\ 1614$+$051 & 1.7 & 1.1 & 4.2 & 1.5 \\ 2048$+$312 & 6.0 & 1.9 & 1.9 & 1.5 \\ 2215$+$020 & 0.7 & 0.7 & 1.6 & 0.9 \\ \hline \end{tabular}\\ \smallskip \end{center} } \end{table} \subsection{0014+813} This source was discovered in the radio and classified as a flat-spectrum radio quasar by \citet{kuhr1981}. \citet{kuhr1983} obtained a redshift of 3.366, and found the source to be exceptionally bright in the optical but unpolarized, while \citet{kaspi2007} found very little optical variability for the source. From VLBA observations at 8~GHz over 5 years, \citet{piner2007} did not detect any outward motion. The source was observed by the \emph{Swift} satellite in 2007 January from optical to hard X-rays. Through construction of the spectral energy distribution (SED), \cite{ghiselliniBHmass2009} show that it may harbour one of the largest black hole masses ever inferred.
By associating the strong optical--UV emission with a thermal origin from a standard optically-thick geometrically-thin accretion disk, they estimate a black hole mass of $\sim4\times10^{10}\,M_{\odot}$. Our 5~GHz image shows a core-jet structure extending southwards to $\sim11$ mas, consistent with the VSOP Space-VLBI image at 1.6~GHz of \citet{hirabayashi1998}, which has about a factor of two worse resolution than our own image in the North--South direction. At 8.4~GHz the extended jet emission is very faint. If we cross-identify an intensity component in the inner jet with the innermost jet component at 5~GHz in order to align our images, we obtain a physically reasonable spectral index map. After applying this same relative shift to the polarization model fits, both fits indicate a polarized component close to the origin, with a residual offset between frequencies of less than 0.20~mas; we have accordingly identified both components with the core polarization. A comparison of the corresponding model-fitted core EVPAs indicates a minimum RM of 384 rad m$^{-2}$. \subsection{0636+680} This source has been observed previously on mas scales in the radio at 5~GHz by \citet{gurvits1994} and more recently by \citet{fey1997} with the VLBA at 2.3 and 8.6~GHz, who found it to be unresolved. Their resolution at 8.6~GHz ($1.3\times1.2$ mas) is similar to our resolution at 5~GHz ($1.25\times0.91$ mas). It was first reported as a GPS source by \cite{odea1990}. The redshift of 3.17 is quoted from the catalogue of \citet{veron-cettyveron1989}. Due to the lack of extended jet emission at 8.4~GHz, the images could not be aligned based on their optically thin jet emission. Polarized emission was detected in the core in our 5~GHz map at the level of $m\sim$1--2\%; no polarized emission was detected at 8.4~GHz above 0.2~mJy.
\subsection{0642+449} This extremely luminous quasar ($L_{\rm 8.4\,GHz}=6.8\times10^{28}$ W Hz$^{-1}$; the highest in our sample) is regularly observed by the MOJAVE\footnote{http://www.physics.purdue.edu/astro/MOJAVE/index.html} team with the VLBA at 15~GHz, who have reported subluminal speeds of $\beta\sim0.76$ for an inner jet component \citep{lister2009}. A 5-GHz global VLBI image by \citet{gurvits1992} shows the jet extending almost 10 mas to the East, while \cite{volvach2003} find the jet extending $\sim5$ mas to the East from EVN observations at 1.6~GHz. Our 5-GHz image is consistent with the images of \cite{gurvits1992} and \cite{volvach2000}, with no extended jet emission detected at 8.4~GHz. Due to the lack of extended jet emission at 8.4~GHz, the images could not be aligned based on their optically thin jet emission. Polarized emission is detected in the core at both 5 and 8.4~GHz; the minimum difference between the model-fitted polarization angles is less than $1^{\circ}$, nominally indicating negligible Faraday rotation. \subsection{1351-018} This radio-loud quasar is the third most distant source in our sample with a redshift of 3.707 \citep{osmer1994}. It was observed by the VSOP Space-VLBI project at 1.6 and 5~GHz \citep{frey2002}. It has a complex pc-scale structure with the jet appearing to bend through $\sim130^{\circ}$. The high resolution space-VLBI images ($1.56\times0.59$ mas at 5~GHz) clearly resolve an inner jet component within 1 mas of the core with a position angle of $\sim 120^{\circ}$, which was also detected at 8.6~GHz by \citet{fey2000}. Our 5-GHz image is dominated by the core emission, but the polarization gives a clear indication of the presence of an inner jet component, which is also indicated by the intensity model fitting. Although two polarized components are visible in the 8.4-GHz map, the relationship between these and the total-intensity structure is not entirely clear.
The polarization angle of the polarized feature slightly north of the core is similar to the EVPA of the 5-GHz core, and we have on this basis identified this polarization with the 8.4-GHz core. The beam is relatively large in the North--South direction, and it is possible that this has contributed to the shift of this polarization from its true position relative to the intensity structure. If this identification is correct, the minimum RM obtained for the core is $-223$ rad m$^{-2}$. \subsection{1402+044} This flat-spectrum radio quasar has a redshift of 3.208 \citep{barthel1990}. Although the source structure is fairly complex, the intensity structures match well at the two frequencies. The components in the innermost jet lie along a position angle of $-24^{\circ}$, consistent with the higher resolution image of \cite{yang2008}, who find an inner jet direction of $-26^{\circ}$ from 5-GHz VSOP data. Polarized components A, B, C, and D agree well between the two frequencies, although it is not clear how these correspond in detail to the intensity components in the same regions. There is an additional polarized component A1 in the 8.4-GHz core, which does not have a counterpart in intensity or at 5~GHz; it is difficult to be sure whether this is a real feature or an artefact. The core polarization structure at both 5 and 8.4~GHz shows three distinct features within the core region. If we compare this with the VSOP space-VLBI image of \cite{yang2008} at 5~GHz, we see that a similar type of structure is seen in total intensity. This indicates how the polarized emission can give information about the jet structure on scales smaller than is seen in the $I$ image alone. \subsection{1508+572} This is the most distant object in our sample at a redshift of 4.30 \citep{hook1995}. Hence, the frequencies of the emitted synchrotron radiation are the highest in our sample at 26.5 and 44.6~GHz.
The images presented here are the first observations to show the direction of the inner pc-scale jet. This quasar also has an X-ray jet extending in a south-westerly direction on kpc (arcsecond) scales detected with \emph{Chandra} \citep{siemiginowska2003, yuan2003}. \cite{cheung2004} detected a radio jet in the same region using the VLA at 1.4~GHz. Although an optically thin jet component is detected at both frequencies roughly 2~mas to the south of the core, this component proved too weak to be used to align the two images. Both the core and this jet component were detected in polarization at 5~GHz, while only the core was detected at 8.4~GHz. The smallest angle between the 5 and 8.4~GHz core EVPAs is only $1^{\circ}$, which we take to indicate an absence of appreciable Faraday rotation. \subsection{1557+032} This quasar is the second most distant object in our sample with a redshift of 3.90 \citep{mcmahon1994}. There are two distinct polarized components in the core region where only one total intensity component is distinguished; the fact that these two components are visible and model-fitted at both frequencies with similar positions and polarization angles suggests that they are real. This appears to be a case in which the polarized emission provides information on scales smaller than those evident in the total intensity image. \subsection{1614+051} This quasar, at a redshift of 3.212 \citep{barthel1990}, has been observed by \citet{fey2000} with the VLBA at 2.3 and 8.6~GHz, who observed the jet to lie in position angle $-158^{\circ}$. Our 8.4~GHz image shows the source clearly resolved into a core and inner jet, extending in roughly this same position angle. Both our 5 and 8.4-GHz data are best fit with two components, whose positions agree well at the two frequencies.
One polarized component was model-fit at 5~GHz (Table 4); its position does not completely agree with the positions of either of the two $I$ components, although it clearly corresponds to jet emission. Polarization from both of the $I$ components is detected at 8.4~GHz (Table 4), as well as another region of polarized emission between them. Component C in the 8.4-GHz polarization fit is weak, but its position agrees with that of the jet $I$ component fit at this frequency, suggesting it may be real. \subsection{2048+312} \citet{veron-cettyveron1993} found a redshift of 3.185 for this quasar. The apparent shift in the position of the peak between 5 and 8.4~GHz is an artefact of the mapping process; the model fits indicate a core and inner jet whose positions agree well at the two frequencies, as well as another weaker jet component further from the core detected at 5 GHz. This is a promising candidate source for follow-up multi-frequency polarization observations because the jet is well resolved with VLBI and has a strongly polarized core and jet. \subsection{2215+020} \citet{drinkwater1997} found an emission redshift of 3.572 for this quasar. \citet{lobanov2001} present a 1.6~GHz VSOP space-VLBI image of this source showing the jet extending to almost 80 mas ($\sim600$ pc) with a particularly bright section between 45 and 60 mas. The extent of this jet is $\sim4$ times greater than in any other pc-scale jet observed for quasars with $z>3$. Our 5~GHz image has similar resolution to the \citet{lobanov2001} image but is less sensitive to the extended jet emission hence, our image has a similar but sparser intensity distribution. We also detect the particularly bright region, where the jet changes direction from East to North-East on the plane of the sky. \section{Discussion} \subsection{Brightness Temperature} The median core brightness temperature at 5~GHz is $5.6\times10^{11}$ K, while at 8.4~GHz, the median value is slightly smaller at $3.2\times10^{11}$ K. 
This can also be seen from a histogram of the core brightness temperature (\fref{fig:Tb_histogram}), where the 8.4~GHz values populate the lower bins more heavily. This may be a result of the 8.4~GHz data probing regions of the jet where the physical conditions are intrinsically different, leading to lower observed brightness temperatures, similar to the results of \cite{lee2008} at 86~GHz. However, it is difficult to separate this effect from any bias due to the resolution difference between 5 and 8.4~GHz. Furthermore, due to the relatively small difference between the median values, a larger sample size would be required to determine whether this is a real effect or just scatter in the data for the small number of sources observed here. The maximum core brightness temperature at 5~GHz is $4.2\times10^{12}$ K found in 0642+449 and the minimum value is $3.7\times10^{10}$ K found in 2048+312. At 8.4~GHz, the maximum and minimum values are $7.3\times10^{12}$ K (0642+449) and $7.4\times10^{10}$ K (0636+680). Assuming that the intrinsic brightness temperature does not exceed the equipartition upper limit of $T_{eq}\sim10^{11}$ K \citep{readhead1994}, we can consider the observed core brightness temperatures in excess of this value to be the result of Doppler boosting of the approaching jet emission. Using the equipartition jet model of \cite{blandfordkonigl1979} for the unresolved core region, we can estimate the Doppler factor ($\delta$) required to match the observed brightness temperatures ($T_{\rm b, obs}\sim3\times10^{11}\delta^{5/6}$). In most cases the Doppler factors required are modest, with values ranging from 1 to 5, except for 0642+449, which requires a Doppler factor of 23 at 5~GHz and 46 at 8.4~GHz.
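The Doppler factors quoted above follow from inverting the relation $T_{\rm b, obs}\sim3\times10^{11}\delta^{5/6}$; a minimal sketch (the assumed intrinsic value of $3\times10^{11}$~K is the one used in the text, the function name is ours):

```python
# Invert the Blandford-Konigl equipartition relation quoted above,
#   T_b,obs ~ 3e11 * delta**(5/6)  [K],
# to estimate the Doppler factor from an observed core brightness temperature.
T_EQ = 3.0e11  # assumed intrinsic (equipartition) brightness temperature, K

def doppler_factor(tb_obs):
    """Doppler factor implied by an observed core brightness temperature (K)."""
    return (tb_obs / T_EQ) ** (6.0 / 5.0)

# 0642+449, using the core T_b values measured in this paper:
delta_5ghz = doppler_factor(4.2e12)   # ~23-24
delta_8ghz = doppler_factor(7.3e12)   # ~46
```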
From the observed brightness temperatures of the individual jet components, we can investigate the assumption of adiabatic expansion following \citet{marscher1990}, \citet{lobanov2001} and \citet{pushkarev2008}, who model the individual jet components as independent relativistic shocks with adiabatic energy losses dominating the radio emission. Note that this description of the outer jet differs from the $N\propto r^{-2}$, $B\propto r^{-1}$ case describing the compact inner jet region, which is not adiabatic. With a power-law energy distribution of $N(E)\propto E^{2\alpha-1}$ and a magnetic field described by $B\propto d^{-a}$, where $d$ is the transverse size of the jet and $a=1$ or $2$ corresponds to a transverse or longitudinal magnetic field orientation, we obtain \begin{equation} T_{\rm b,\,jet}=T_{\rm b,\,core}\left(\frac{d_{\rm core}}{d_{\rm jet}}\right)^{a+1-\alpha(a+4/3)} \end{equation} \noindent which holds for a constant or weakly varying Doppler factor along the jet. Hence, we can compare the expected jet brightness temperature ($T_{\rm b,\,jet}$) with our observed values by using the observed core brightness temperature ($T_{\rm b,\,core}$) along with the measured size of the core and jet components ($d_{\rm core}$ and $d_{\rm jet}$) for a particular jet spectral index. We apply this model to the sources with extended jet structures to see if it is an accurate approximation of the jet emission and also as a diagnostic of regions where the physical properties along the jet may change. \fref{Figure:0014_Tb} shows the comparison of this model (solid line) with the observed brightness temperatures (dashed line) along the jet of 0014+813 at 5~GHz.
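The adiabatic-expansion prediction can be evaluated directly from the equation above; a short sketch, where the component sizes and the core temperature are illustrative placeholders rather than our measured values:

```python
def tb_jet(tb_core, d_core, d_jet, alpha, a=1):
    """Predicted jet T_b under adiabatic expansion (equation above).

    a=1 corresponds to a transverse magnetic field, a=2 to a longitudinal one.
    """
    exponent = a + 1.0 - alpha * (a + 4.0 / 3.0)
    return tb_core * (d_core / d_jet) ** exponent

# Illustrative numbers only: a core with T_b = 5.6e11 K and size 0.2 mas,
# a jet component of size 1.0 mas, alpha = -1.0, transverse field (a = 1).
prediction = tb_jet(5.6e11, 0.2, 1.0, alpha=-1.0, a=1)
```

Switching from $a=1$ to $a=2$ steepens the exponent, which is how the longitudinal-field alternative discussed below lowers the predicted brightness temperatures.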
From the spectral index distribution obtained between 5 and 8.4~GHz, we adopt a jet spectral index of $\alpha=-1.0$ and assume a transverse magnetic field orientation ($a=1$) from inspection of the jet EVPAs (Fig.~2). The two jet components furthest from the core agree relatively well with the model; however, the two jet components closest to the core are substantially weaker than expected. Given the high RM for this source, it is also possible that the intrinsic magnetic field orientation is longitudinal; using a value of $a=2$ yields model brightness temperatures for these components that are within our measurement errors. Unfortunately, the extended jet emission at 8.4~GHz is not detected, so we cannot constrain the jet RM, and we also cannot rule out a change in the Doppler factor along the jet due to either an intrinsic change in the jet dynamics or a change in the jet direction. Clearly there are not enough observational constraints for this source to convincingly test the applicability of this model. The observed brightness temperatures in the jet of 1402+044 at both 5 and 8.4~GHz (\fref{Figure:1402_Tb}) are in reasonable agreement with the model predictions for $\alpha=-0.5$, which is consistent with the overall jet spectral index distribution. Since the EVPA orientation implies a transverse magnetic field (\fref{Figure:I+p}), we use $B\propto d^{-1}$. The jet components at 1~mas and 9~mas from the core both have observed brightness temperatures slightly higher than would be expected for radio emission dominated by adiabatic losses. Using flatter spectral index values of $\alpha=0.0$ and $\alpha=-0.3$, respectively, for these two components brings the model brightness temperatures within the measurement errors. Hence, it is likely that these components are subject to a strong reacceleration mechanism that temporarily overcomes the energy losses due to the expansion of the jet.
Applying the analysis to the jet of 2215+020 at 5~GHz (\fref{Figure:2215_Tb}), we see that the model matches the observed values very well for the two inner jet components using our measured spectral index of $\alpha=-0.7$ and a transverse magnetic field ($a=1$). However, the brightness temperatures of the extended components at 50--60~mas from the core are higher than the adiabatic expansion model predicts. This suggests different physical conditions in this region of the jet, creating a less steep spectral index (using $\alpha=-0.5$ brings the model in line with the observed values). This far from the core, it is likely that the Doppler factor has changed with either a change in the jet viewing angle or speed, possibly from interaction with the external medium. This is consistent with the observed strong brightening of the jet along with its change in direction from East to North-East, shown in \fref{Figure:I+p}. Our results are very similar to those of \citet{lobanov2001}, who employed the same model for this source at 1.6~GHz. \subsection{Jet Structure and Environment} The added value of polarization observations in providing information on the compact inner jet structure is clear from the images of 1402+044 (\fref{Figure:I+p}), 1557+032 (\fref{Figure:I+p}) and 1614+051 (\fref{Figure:I+p}), where polarized components are resolved on scales smaller than obtained from the total intensity image alone. For 1402+044, a higher resolution VSOP image at 5~GHz \citep{yang2008} resolved three components in the compact inner jet region consistent with what we find from the polarization structure of our lower resolution images where only one total intensity component is visible.
In the case of 1557+032 and 1614+051, the identification of multiple polarized components in the core region, where again only one total intensity component is directly observed, suggests that observations at higher frequencies and/or longer baselines provided by, for example, the VSOP-2 mission \citep{tsuboi2009} are likely to resolve the total intensity structure of the core region. Hence, the polarization information is crucial in identifying the true radio core and helping to determine whether it corresponds to the $\tau=1$ frequency-dependent surface first proposed by \citet{blandfordkonigl1979} or a stationary feature such as a conical shock \citep{cawthorne2006, marschervsop2009} for a particular source. For the sources with extended jets, substantial jet bends are observed. This is not a surprise since, as long as the observations imply relativistic jet motion close to the line of sight, small changes in the intrinsic jet direction will be strongly amplified by projection effects. For example, an observed right-angle bend could correspond to an intrinsic bend of only a few degrees \citep[e.g.,][]{cohen2007}. In the case of 1351-018, the jet appears to bend through $\sim130^{\circ}$ (\fref{Figure:I+p}) from a South-Easterly inner jet direction to a North-Westerly extended jet direction \citep[see also][]{frey2002}. The polarization distribution in the core region also supports a South-Easterly inner jet direction. This may be due to a helical jet motion along the line-of-sight as proposed, for example, for the jet of 1156+295 by \citet{hong2004} or small intrinsic bends due to shocks or interactions with the external medium amplified by projection effects in a highly beamed relativistic jet \citep[e.g.,][]{homan2002}. The jet bend at $\sim400$ pc from the core of 2215+020 at 5~GHz may be due to a shock that also causes particle reacceleration, increasing the observed jet emission.
The core degree of polarization found in these sources is generally in the range 1--3\%. Given that the emitted frequencies are in the range 20--45~GHz, we can compare these values to the core degrees of polarization observed in low-redshift AGN jets at 22 and 43~GHz \citep{osullivangabuzda2009}. In general, the core degree of polarization is higher in the low-redshift objects with values typically in the range 3--7\%. This could be due to intrinsically more ordered jet magnetic field structures at lower redshifts or more likely, selection effects, since the AGN in \cite{osullivangabuzda2009} were selected on the basis of known rich polarization structure while the high-redshift quasars had unknown polarization properties on VLBI scales. Another likely possibility is that the effect of beam depolarization (different polarized regions adding incoherently within the observing beam) is significantly reducing the observed polarization at 5 and 8.4~GHz due to our much larger observing beam compared with the VLBA at 22 and 43~GHz (factor of $\sim$3 better resolution even considering the longer baselines in the global VLBI observations). Due to the extremely high redshifts, an observed RM of 50 rad m$^{-2}$ at $z\sim3.5$ (average redshift for the sources in our sample) is equivalent to a nearby source with an RM of 1000 rad m$^{-2}$. The mean core $|$RM$_{\rm min}^{\rm intr}|$ for our sample is $5580\pm3390$~rad m$^{-2}$. Naively comparing our results with the 8--15~GHz sample of 40 AGN reported in \cite{zt2003,zt2004}, which has a median intrinsic core RM of $\sim400$ rad m$^{-2}$ (and a median redshift of 0.7), suggests that the VLBI core RMs are higher in the early universe. 
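The redshift correction used here follows from the $\lambda^{2}$ dependence of Faraday rotation: an RM observed at redshift $z$ corresponds to a rest-frame value $(1+z)^{2}$ times larger. A one-line check of the figure quoted above:

```python
def rm_rest_frame(rm_obs, z):
    """Rest-frame rotation measure from an RM observed at redshift z.

    RM scales with wavelength squared, so the observed RM is diluted
    by a factor (1+z)**2 relative to the source frame.
    """
    return rm_obs * (1.0 + z) ** 2

# 50 rad/m^2 observed at z ~ 3.5 (the average redshift of our sample):
print(rm_rest_frame(50.0, 3.5))  # 1012.5, i.e. ~1000 rad/m^2 as quoted
```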
However, RMs of the order of $10^4$ rad m$^{-2}$ have been measured in low to medium redshift sources at similar emitted frequencies \citep[e.g.,][]{attridge2005, algaba2010}, while some of the largest jet RM estimates, of order $10^5$ rad m$^{-2}$, have come from observations at 43--300~GHz of sources with redshifts ranging from $\sim0.1$--2 \citep{jorstad2007}. Furthermore, 15--43 GHz RM measurements of BL Lac objects from \cite{optical2006} and \cite{osullivangabuzda2009} have a median intrinsic RM of $\sim3000$ rad m$^{-2}$ and a median redshift of 0.34. Hence, from the small sample of minimum RM values presented in this paper it is not clear whether these sources are located in intrinsically denser environments or if the large RMs are simply due to the fact that the higher emitted frequencies are probing further upstream in the jet where it is expected that the electron density is higher and/or the magnetic field is stronger. Our results from 5 to 8.4~GHz generally show flat to slightly steep core spectral indices with a median value of $\alpha=-0.3$. However, there is a large range of values, spanning $0.7$ for 0642+449 to $-1.0$ in the case of 1614+051. In the majority of cases, we find steep jet spectral indices ranging from $\alpha\sim-0.5$ to $-1.5$ with a median value of $\alpha=-1.0$. High-redshift objects are often searched for through their steep spectral index due to the relatively higher emitted frequencies for a particular observing frequency. The $z$--$\alpha$ correlation, as it is known (i.e., steeper $\alpha$ at higher $z$), has been very successful \citep[e.g.,][and references therein]{broderick2007} in finding high-redshift radio galaxies. While steeper spectral index values are expected simply from the higher rest frame emitted frequencies, \cite{klamer2006} show that this cannot completely explain the correlation.
The possible physical explanation they present is that the sources with the steepest spectral index values are located in dense environments where the radio source is pressure confined and hence loses its energy more slowly. This effect might also apply to the dense nuclear environments of our sample of quasar jets. To test this, we have plotted the core spectral index versus $|$RM$_{\rm min}^{\rm intr}|$ in \fref{Figure:RM_v_alpha}. The data suggest that this is a useful avenue of investigation with a Spearman Rank test giving a correlation coefficient of $-0.95$, but clearly more data points are needed to determine whether there truly is a correlation or not. Further observations of these sources with much better frequency coverage are required to properly constrain the spectral and RM distributions and to make detailed comparisons with low redshift sources to further investigate the quasar environment through cosmic time. \section{Conclusion} We have successfully observed and imaged 10 high-redshift quasars in full polarization at both 5 and 8.4~GHz using global VLBI. Model fitting two-dimensional Gaussian components to the total intensity data enabled estimation of the brightness temperature in the core and out along the jet. The observed high core brightness temperatures are consistent with modestly Doppler-boosted emission from a relativistic jet orientated relatively close to our line-of-sight. This can also explain the dramatic jet bends observed for some of our sources. The core degrees of polarization are somewhat lower than observed from nearby AGN jets at similar emitted frequencies. However, beam depolarization is likely to have a stronger effect on these sources compared to the higher resolution observations of nearby sources. Model-fitting the polarization data enabled estimation of the minimum RM for each component since obtaining the exact value of the RM requires observations at three or more frequencies.
For sources in which we were able to remove the integrated (foreground) RM, we calculated the minimum intrinsic RM and found a mean core $|$RM$_{\rm min}^{\rm intr}|$ of 5580~rad m$^{-2}$ for four quasars with values ranging from $-1249$ rad m$^{-2}$ to 9110 rad m$^{-2}$. We found relatively steep core and jet spectral index values with a median core spectral index of $-0.3$ and a median jet spectral index of $-1.0$. We note that the expectation of denser environments at higher redshifts leading to larger RMs can also lead to steeper spectral indices through strong pressure gradients \citep{klamer2006} and that this hypothesis is not inconsistent with our results. Several of the sources presented in this paper are interesting candidates for follow-up multi-frequency observations to obtain more accurate spectral and RM information in order to make more conclusive statements about the environment of quasar jets in the early universe and to determine whether or not it is significantly different to similar lower redshift objects. \section*{Acknowledgements} Funding for part of this research was provided by the Irish Research Council for Science, Engineering and Technology. The VLBA is a facility of the NRAO, operated by Associated Universities Inc. under cooperative agreement with the NSF. The EVN is a joint facility of European, Chinese, South African and other radio astronomy institutes funded by their national research councils. The Westerbork Synthesis Radio Telescope is operated by ASTRON (Netherlands Institute for Radio Astronomy) with support from the Netherlands Foundation for Scientific Research (NWO). This research has made use of NASA's Astrophysics Data System Service. The authors would like to thank the anonymous referee for valuable comments that substantially improved this paper. \bibliographystyle{mn2e}
\section{Introduction} \label{sec:intro} Many measurements are made indirectly: quantities of interest (QoIs) are estimated by measuring one or more other quantities and calculating the required values from a model that links the QoIs to the measured quantities. Examples include scatterometry, where surface profile is inferred from intensity measurements, determination of Young's modulus, where the measured quantities are force and displacement, and determination of thermal diffusivity from measurements of temperature and time. Thermal diffusivity is defined as \begin{equation} \alpha = \lambda / (\varrho \cp), \label{eqn:a=l/pc} \end{equation} where $\lambda$ is thermal conductivity, $\varrho$ is density, and $\cp$ is specific heat capacity. Density and specific heat capacity can be measured independently, so measurement of thermal diffusivity is often used to evaluate thermal conductivity. Thermal conductivity is a key property in understanding heat transport in solids. Accurate characterisation of thermal conductivity supports design of thermal behaviour in products ranging from protective coatings for turbine blades in high temperature gas streams to insulation for low-temperature carriers for transport of vaccines. Reliable uncertainty evaluation of thermal conductivity enables designers to have confidence that their products will meet the required specification under all circumstances. In some cases the models used in indirect measurements are simple and can be inverted to express QoIs explicitly in terms of the measured quantities, but in many cases the models are not invertible (e.g. due to observational noise), so the problem of determining the QoIs is more challenging and we have to solve an inverse problem. Inverse problems present difficulties in applying the approach to uncertainty evaluation outlined in the GUM \cite{bipm2008evaluation} and its supplements. 
The difficulties become more severe when the model used to link the measured quantities and the QoIs is computationally expensive, making sampling methods too time-consuming to use. One remedy is to replace the expensive model with a cheaper one that is sufficiently accurate for the task at hand, called a surrogate model \cite[Chapter 5]{rasmussen2015novel} (also known as an emulator or metamodel). This paper reports on the application of an approach combining a parametric surrogate model with a Bayesian approach to uncertainty evaluation for the determination of thermal conductivity from the laser flash experiment. This approach has made a computationally expensive problem tractable. In the rest of this section we describe the laser flash experiment (see Figure \ref{fig:experiment}), present the mathematical model for the temperature of the material being tested and briefly discuss previous work by other authors. We also outline our approach, linking it to existing literature and methods from other areas. In Section \ref{sec:method}, we pose the problem of determining the thermal conductivity and the laser intensity as a Bayesian inverse problem. We then describe how a Markov chain Monte Carlo (MCMC) algorithm that incorporates a surrogate forward model, here based on a stochastic Galerkin finite element method (SGFEM), can be used to efficiently probe the resulting approximate target distribution. In Section \ref{sec:results}, we apply this methodology to a real life example where the sample material is a copper cylinder. We discuss the construction of the SGFEM surrogate, timings, the estimated joint and marginal distributions for the thermal conductivity and laser intensity, and examine how these results can be interpreted with regards to the experimental data. Finally, in Section \ref{sec:conclusions} we draw conclusions and briefly describe how this work may be extended to more complex problems. 
We note that the laser flash experiment was also considered in a Bayesian setting by Allard et al. in \cite{allard2015multi}. Hence, since the set-up of the physical experiment is similar, our discussion in Sections \ref{subsec:experiment} and \ref{subsec:mathmodel} below follows \cite[Section 1]{allard2015multi}. \begin{figure} \centering \includegraphics[width=0.725\textwidth]{figure1.png} \caption{Diagram of the laser flash experiment set-up.} \label{fig:experiment} \end{figure} \subsection{The laser flash experiment} \label{subsec:experiment} The thermal conductivity of a material is not a directly measurable quantity; its value is indirectly observed through the change in temperature of the material being characterised. The laser flash experiment \cite{parker1961flash} is a commonly used method for making such observations. In the experiment, the sample is placed in a furnace in a low-pressure inert gas atmosphere, supported by pins to minimise conductive heat losses, and heated to ambient temperature $\Ta$. The lower face is then subjected to a short laser burst of duration $\tf$ seconds, heating the sample on that face. The average temperature of the opposite face is measured by an infra-red sensor through a small window in the furnace. The experimental set-up is shown in Figure \ref{fig:experiment}. \subsection{Mathematical model} \label{subsec:mathmodel} We assume that the sample is a cylinder $C$ of radius $R$ and vertical height $H$ and work in cylindrical coordinates $\rv=(r,\theta,z)$. Furthermore, we assume the material is isotropic and homogeneous, with unknown scalar thermal conductivity $\lambda$ and known scalar density $\varrho$ and specific heat capacity $\cp$. Let $\zf$ denote the depth of penetration into the sample of the laser pulse, which lasts for $\tf$ seconds.
Then the temperature of the material during the laser flash experiment is modelled by the solution $u$ to the transient heat equation \begin{equation} \varrho \cp \partial_{t} u(\rv,t) = \nabla \cdot \left( \lambda \nabla u(\rv,t) \right) + Q(\rv,t), \qquad (\rv,t) \in C\times[0,T], \label{eqn:pde} \end{equation} where the source term \begin{equation} Q(\rv,t) := I \cdot \chi(r) \cdot 1_{\{[0,z_{f}]\times[0,\tf]\}}(z,t), \label{eqn:q} \end{equation} represents the laser flash with unknown intensity $I$. Here, the function $\chi\colon[0,R]\to\RR$ is either constant, to represent an ideal laser of uniform profile, or of the form $\chi(r) = \exp(-r^{2}/2\rf^{2})$ to give a Gaussian profile in the radial direction as is the typical profile for lasers. As stated in Section \ref{subsec:experiment}, the material is heated to the ambient temperature, yielding the initial condition \begin{equation} u(\rv,0) = \Ta, \qquad \rv \in C\cup\partial C. \label{eqn:ic} \end{equation} We assume that the height $H$ is sufficiently small that heat losses across the curved surface are negligible. Furthermore, we assume that heat is lost at the top and bottom faces at a rate proportional to the difference in temperature between the sample and the furnace. This assumption is valid for convection and radiation at small temperature differences, as are generated experimentally. Hence, we have the boundary conditions \begin{align} \lambda \pdudn(\rv,t) & = 0, && \rv \in \partial C_{R} := \left \{\rv \in \partial C \ \big| \ r=R\right\}, \ t \in [0,T], \label{eqn:bcR} \\ \lambda \pdudn(\rv,t) & = \kappa \left( \Ta - u(\rv,t) \right), && \rv \in \partial C_{H} := \left \{\rv \in \partial C \ \big| \ z\in\{0,H\}\right\}, \ t \in [0,T]. \label{eqn:bcH} \end{align} We also impose the condition \begin{equation} \lambda \pdudn(\rv,t) = 0, \quad \rv \in \partial C_{0} := \left \{\rv \in C \ \big| \ r=0\right\}, \ t \in [0,T]. 
\label{eqn:bc0} \end{equation} These conditions, together with the assumptions on the source term and material properties of the sample, ensure that the problem is axisymmetric. That is, $u$ is symmetric about the $z$-axis and does not depend on $\theta$. \subsection{Previous approaches} \label{subsec:previousapproaches} Many previous approaches to determining the unknowns $\lambda$ and $I$ have used optimization (e.g., see \cite{whittle1971optimization,wright2011parameter}). In such works, values which are by some measure optimal are found by matching model output to experimental data. There are a number of drawbacks to such approaches. Firstly, the optimal values that are returned do not have associated uncertainties. The standard GUM \cite{bipm2008evaluation} approach to calculating the uncertainty is unsuitable because the problem is an inverse one, and the problem is too computationally expensive for standard Monte Carlo simulation. Secondly, well-posedness of the problem in terms of uniqueness of the solution can be an issue, particularly in cases where there are many unknowns and there is little data to infer from. A Bayesian formulation of the problem \cite{dashti2016bayesian,stuart2010inverse} allows both of these issues to be resolved. Some previous work \cite{allard2015multi} has been done to treat the laser flash experiment in a Bayesian manner. However, the posterior distribution for $\lambda$ and $I$ that is provided (up to a constant of proportionality) by the Bayesian approach must be evaluated for one to compute quantities of interest such as probabilities, expectations and variances. Here, each evaluation of the posterior requires an evaluation of the solution $u$ of \eqref{eqn:pde}--\eqref{eqn:bc0}. Since an exact solution is unavailable, $u$ must be approximated using a finite element method combined with a time-stepping scheme, a finite difference scheme or similar approach. 
We may exploit axisymmetry to reduce the spatial domain to two dimensions. However, depending on the choice of spatio-temporal discretization, and its level of fidelity, a single solve may still take tens of seconds, or even minutes, to perform on a standard desktop computer. If computational resources are limited, then it is infeasible to perform a large number of forward solves. Markov chain Monte Carlo (MCMC) methods \cite{brooks2011handbook} are popular algorithms for sampling from a distribution that is known only up to a constant of proportionality. However, it is well documented that they suffer from a slow rate of convergence \cite{caflisch1998monte}, with the sampling error behaving as $\calO(M^{-1/2})$, where $M$ is the number of samples used. The large number of samples required to estimate quantities of interest accurately, coupled with the expense of each posterior evaluation, means that MCMC methods are often infeasible. This is especially true when the forward problem consists of a time-dependent partial differential equation (PDE). We will use a pre-computed parametric surrogate for the forward solution $u$ that can be cheaply evaluated for any choice of $\lambda$ and $I$. This takes the form of a polynomial expansion and allows for fast sampling from the resulting approximate posterior distribution within an MCMC routine. As well as reducing the time required to solve the inverse problem by several orders of magnitude, we find that with the proper choice of polynomial degree, there is no significant loss of accuracy compared to the standard approach using a fixed spatio-temporal discretization.
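The strategy just described, replacing the expensive forward solve inside the Metropolis loop with a cheap surrogate evaluation, can be sketched as follows. The toy stand-in surrogate, the synthetic datum and the noise level below are all invented for illustration; they are not the SGFEM expansion or the data of this paper:

```python
import math
import random

random.seed(1)

def surrogate(lam, intensity):
    # Toy stand-in for a pre-computed parametric surrogate of the
    # forward solution u(lam, I); cheap to evaluate, unlike a PDE solve.
    return intensity * (1.0 - math.exp(-lam))

datum, sigma = 0.8, 0.05          # synthetic "measurement" and noise std

def log_post(lam, intensity):
    # Gaussian likelihood with a flat prior restricted to positive values.
    if lam <= 0.0 or intensity <= 0.0:
        return -math.inf
    r = (surrogate(lam, intensity) - datum) / sigma
    return -0.5 * r * r

state = (1.0, 1.0)
lp = log_post(*state)
chain = []
for _ in range(5000):             # random-walk Metropolis
    prop = (state[0] + 0.1 * random.gauss(0.0, 1.0),
            state[1] + 0.1 * random.gauss(0.0, 1.0))
    lp_prop = log_post(*prop)
    if math.log(random.random()) < lp_prop - lp:
        state, lp = prop, lp_prop
    chain.append(state)

post_mean_lam = sum(s[0] for s in chain) / len(chain)
```

In the method of this paper, the call to `surrogate` would be an evaluation of the SGFEM polynomial expansion at the proposed $(\lambda, I)$, so each MCMC step costs a polynomial evaluation rather than a time-dependent finite element solve.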
The idea of combining surrogate models with MCMC methods to accelerate the solution of Bayesian inverse problems is becoming popular in groundwater flow modelling \cite{domesova2017bayesian}, electrical impedance tomography \cite{hakula2014reconstruction,hyvonen2015stochastic,leinonen2014application} and other applications \cite{cui2015data,marzouk2009dimensionality,marzouk2007stochastic}. \section{Method} \label{sec:method} In this section we discuss a parametric form of the forward problem \eqref{eqn:pde}--\eqref{eqn:bc0} and describe how its solution may be approximated using a stochastic Galerkin finite element method (SGFEM). We also formally introduce the Bayesian inverse problem of determining $\lambda$ and $I$ from experimental data. Finally, we explain how to combine the SGFEM surrogate with an MCMC method to approximate posterior distributions. \subsection{Parametric forward problem and SGFEM} \label{subsec:forwardproblem} We consider the laser intensity $I$ as well as the thermal conductivity $\lambda$ to be unknown. This is because the exact strength of the laser is not known and the effects of the laser depend on the heat absorption characteristics of the sample. Note that we could treat other model inputs (such as density $\varrho$ or ambient temperature $\Ta$) as unknown and propagate all measurement uncertainties through the model but choose not to here for clarity. To construct the surrogate, we start by modelling $\lambda$ and $I$ as real-valued random variables. Specifically, we assume that $\lambda$ and $I$ may be expressed in terms of independent uniform random variables with mean zero and unit variance. That is, \begin{equation} \lambda(\xi_{1}) = \mu_{\lambda} + \nu_{\lambda} \xi_{1}, \qquad I(\xi_{2}) = \mu_{I} + \nu_{I} \xi_{2}, \label{eqn:ylambdaI} \end{equation} for some $\mu_{\lambda}, \mu_{I}, \nu_{\lambda}, \nu_{I}\in\RR^{+}$ (to be chosen) with \begin{equation} \xi_{1},\xi_{2} \sim \calU(-\sqrt{3},\sqrt{3}). 
\end{equation} For heterogeneous or layered materials, $\lambda$ may be more accurately represented as a spatial random field \cite{adler1981geometry}. If the material consists of layers, each with a distinct unknown value of $\lambda$, then a random field could be constructed using a linear combination of spatial indicator functions with uncertain coefficients (as used for example in \cite{hyvonen2015stochastic}). Alternatively, if the mean and covariance function of the random field are known, a truncated Karhunen-Lo{\`e}ve expansion \cite{loeve1978probability} could be used. Although we do not consider $\lambda$ and $I$ to be spatially varying here, the approach outlined below can be used whenever $\lambda$ and $I$ are linear functions of a finite set of independent random variables with appropriate distributions. To set up the parametric forward problem, we make a change of variable. Let $y_{1}$ and $y_{2}$ be the images of $\xi_{1}$ and $\xi_{2}$, respectively. Then, $\yv:=(y_{1}, y_{2}) \in\Gamma$ where $\Gamma:=[-\sqrt{3},\sqrt{3}]^{2}$ is the parameter domain. 
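For illustration, the following sketch draws samples of $\lambda$ and $I$ via \eqref{eqn:ylambdaI}; the numerical values of $\mu_{\lambda}$, $\nu_{\lambda}$, $\mu_{I}$ and $\nu_{I}$ used here are placeholders, not the values used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder surrogate-construction parameters (illustrative only).
mu_lam, nu_lam = 328.5, 50.0   # mean / scale for thermal conductivity
mu_I, nu_I = 1.0e12, 1.0e11    # mean / scale for laser intensity

# xi_1, xi_2 ~ U(-sqrt(3), sqrt(3)): mean zero, unit variance.
xi = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(100_000, 2))

lam = mu_lam + nu_lam * xi[:, 0]
I = mu_I + nu_I * xi[:, 1]

# Sample statistics should match E[lambda] = mu_lam and sd[lambda] = nu_lam.
print(lam.mean(), lam.std())
```

Note that the linear maps keep every realisation inside the bounded intervals $(\mu_{\lambda}\pm\nu_{\lambda}\sqrt{3})$ and $(\mu_{I}\pm\nu_{I}\sqrt{3})$, which is why positivity can be guaranteed by the choice of $\mu$ and $\nu$.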
Assuming that $\lambda$, $I$ are written as in \eqref{eqn:ylambdaI}, we may rewrite \eqref{eqn:pde}--\eqref{eqn:bc0} in the equivalent parametric form \begin{align} \begin{split} \varrho \cp \partial_{t} u(\rv,t,\yv) & = \nabla \cdot \left( \lambda(y_{1}) \nabla u(\rv,t,\yv) \right) + Q(\rv,t,y_{2}), \\ & \hspace{3.8cm} (\rv,t,\yv) \in C \times [0,T] \times\Gamma, \label{eqn:ypde} \end{split}\\ u(\rv,0,\yv) & = \Ta, \hspace{2.8cm} (\rv,\yv) \in \overbar{C}\times\Gamma, \label{eqn:yic} \\ \lambda(y_{1}) \pdudn(\rv,t,\yv) & = \kappa \left( \Ta - u(\rv,t,\yv) \right), \hspace{0.5cm} (\rv,t,\yv) \in \partial C_{H} \times [0,T] \times \Gamma, \label{eqn:ybcH} \\ \lambda(y_{1}) \pdudn(\rv,t,\yv) & = 0, \hspace{3cm} (\rv,t,\yv) \in \left( \partial C_{0} \cup \partial C_{R} \right) \times [0,T] \times \Gamma, \label{eqn:ybc0} \end{align} where \begin{equation} Q(\rv,t,y_{2}) := I(y_{2}) \cdot \chi(r) \cdot 1_{\{[0,z_{f}]\times[0,\tf]\}}(z,t). \label{eqn:yq} \end{equation} We have chosen uniform random variables to ensure that weak formulations of the parametric model are well-posed. We must also choose the values of $\mu_{\lambda}$ and $\nu_{\lambda}$ in \eqref{eqn:ylambdaI} such that realisations of $\lambda$ are positive. Note also that it is physically meaningless to model either $\lambda$ or $I$ as a random variable with non-zero probability of taking a negative value (e.g. Gaussian). The solution $u$ to \eqref{eqn:ypde}--\eqref{eqn:yq} is a function of the parameters $y_{1}$ and $y_{2}$, as well as $\rv$ and $t$. We will approximate it using a spectral stochastic finite element method \cite{babuska2004galerkin,ghanem2003stochastic} based on Galerkin approximation. Specifically, we combine finite element approximation \cite{braess2007finite} on the physical domain with global polynomial approximation on the parameter domain. For parabolic PDEs, the suitability of this approach has been analyzed in \cite{nobile2009analysis}. 
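As a check on the definition of $Q$ in \eqref{eqn:yq}, a minimal sketch of the source term might read as follows (the radial profile $\chi$ is passed as a function, $\chi(r)=1$ giving a uniform profile; $z_{f}$ and $t_{f}$ take the flash depth and duration values from Table \ref{tab:table2}):

```python
def Q(r, z, t, I, chi=lambda r: 1.0, z_f=1.273e-4, t_f=4.0e-4):
    """Source term Q = I * chi(r) * 1_{[0, z_f] x [0, t_f]}(z, t).

    The indicator switches the laser on only within the absorption
    depth z <= z_f and for the flash duration t <= t_f.
    """
    inside = (0.0 <= z <= z_f) and (0.0 <= t <= t_f)
    return I * chi(r) * (1.0 if inside else 0.0)

# During the flash and within the absorption depth, the source is active:
print(Q(r=0.0, z=1e-4, t=1e-4, I=1.0e12))  # -> 1e+12
# After the flash, it switches off:
print(Q(r=0.0, z=1e-4, t=1e-3, I=1.0e12))  # -> 0.0
```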
If we exploit axisymmetry, then the SGFEM approximation takes the form \begin{equation} \uhk(\rv,t,\yv) := \sum\limits_{i=1}^{\nh} \sum\limits_{j=1}^{\nk} u_{ij}(t) \phi_{i}(\rv)\Psi_{j}(\yv), \label{eqn:uhk} \end{equation} where $\phi_{i}$, $i=1,2,\dots,\nh$, are standard piecewise linear finite element functions associated with a mesh of triangular elements of width $h$ on the spatial domain \begin{equation} D:=\{\rv = (r,z) \ | \ r\in(0,R), \ z \in (0,H)\}. \end{equation} The functions $\Psi_{j}$, $j=1,2,\dots,\nk$, are chosen to form a basis for the set of polynomials in $y_{1}$ and $y_{2}$ on $\Gamma$ of total degree less than or equal to $k$. Hence, \begin{equation}\label{nk_def} \nk = (k+2)(k+1)/2. \end{equation} In particular, we use Legendre basis polynomials which are orthonormal with respect to the inner product \begin{equation} \langle \Psi_{j}, \Psi_{s} \rangle_{\rho} := \int_{\Gamma} \Psi_{j}(\yv) \Psi_{s}(\yv) \rho(\yv) \dyv, \end{equation} where $\rho(\yv)=(2\sqrt{3})^{-2}$ is the joint probability density function of $\xi_{1}$ and $\xi_{2}$. We may then also write (\ref{eqn:uhk}) as \begin{equation} \uhk(\rv,t,\yv) := \sum\limits_{j=1}^{\nk} u_{j}(\rv,t) \Psi_{j}(\yv), \label{eqn:uhk-B} \end{equation} which is known as a polynomial chaos expansion \cite{xiu2002modeling}. To find the functions $u_{ij}(t)$ in (\ref{eqn:uhk}) we solve the associated Galerkin equations \cite{benner2015low} which leads to a coupled system of ODEs. For this, we use an implicit Euler scheme with uniform step size $\tau:=T/n_{t}$. This yields a sequence of discrete approximations \begin{equation} \uhkt^{n}(\rv,\yv) := \sum\limits_{i=1}^{\nh} \sum\limits_{j=1}^{\nk} u_{ij}^{n} \phi_{i}(\rv)\Psi_{j}(\yv), \label{eqn:uhkt} \end{equation} at the time steps $\tau_{n}=n\tau,$ $n = 0,1, \dots,\nt$, and requires the solution of $\nt$ linear systems of dimension $n_{h}n_{k}$. 
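The dimension $\nk$ of the total-degree polynomial space is easily enumerated. The following sketch lists the admissible multi-indices and confirms the count $(k+1)(k+2)/2$ in (\ref{nk_def}); for $k=6$ this gives $\nk=28$:

```python
from itertools import product
from math import comb

def total_degree_indices(k, dim=2):
    """Multi-indices (a_1, ..., a_dim) with a_1 + ... + a_dim <= k."""
    return [a for a in product(range(k + 1), repeat=dim) if sum(a) <= k]

k = 6
indices = total_degree_indices(k)

# For dim = 2 the count is n_k = (k + 1)(k + 2)/2 = C(k + 2, 2).
print(len(indices))  # 28 for k = 6
assert len(indices) == (k + 1) * (k + 2) // 2 == comb(k + 2, 2)
```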
We will use the notation $\uhkt$ to denote a continuous function of time which interpolates $\uhkt^{n}$ at each $\tau_{n}$. Once the coefficients $u_{ij}^{n}$ in \eqref{eqn:uhkt} have been computed, $\uhkt^{n}$ may be evaluated at any $\yv \in \Gamma$ of interest by evaluating the polynomials $\Psi_{j}$. \subsection{Bayesian inverse problem formulation} \label{subsec:bip} Suppose we wish to recover the unknown thermal conductivity $\lambda$ given the (thermogram) results from a laser flash experiment with uncertain laser intensity. We also need to recover the laser intensity $I$ since it directly affects the observed thermogram. Specifically, we study a cylindrical sample of pure copper. Let $t_{1},t_{2},\dots,t_{\nd}$ denote the \textit{measurement times} and $D_{L}$ denote a disc of radius $L$ centrally positioned on the top face of the sample over which an average temperature is measured at each $t_{n}$. Our data $\dv\in\RR^{\nd}$ is the vector of observed averaged temperature values. The data used here was measured at NPL in 2004 on a Netzsch LFA427 using a sample of copper at ambient temperature $\Ta = 385$\,K. The Bayesian approach to this inverse problem is to condition our prior knowledge regarding $\lambda$ and $I$ on the measurements $\dv$. We may obtain this knowledge from historical information, expert elicitation or even previous Bayesian analysis using another data set. We assume that our measurements are subject to independent, identically distributed (iid), mean zero Gaussian noise, the variance $\sigma^{2}$ of which is also unknown. In this section, we choose a prior distribution, formulate the Bayesian inverse problem and construct the posterior distribution for the unknowns. 
\subsubsection{Prior distribution.} \label{subsubsec:prior} \begin{figure} \centering \subfloat[]{\includegraphics[width=\textwidth]{figure2a.eps}\label{fig:figure2a}} \\ \subfloat[]{\includegraphics[width=\textwidth]{figure2b.eps}\label{fig:figure2b}} \caption{ Distributions on the thermal conductivity $\lambda$: (a) two previously obtained Gaussian distributions and the logNormal prior distribution; (b) the logNormal prior distribution and the uniform distribution used to construct the surrogate.} \end{figure} First, we assign a prior distribution to $\lambda$, which describes our belief about its value before incorporating the data $\dv$. Two previous studies of samples involving copper deduced the distributions $\calN(379, 3.79^{2})$ and $\calN(278,13^{2})$. In the first case, the value of 379 W\,m$^{-1}$\,K$^{-1}$ was obtained by directly fitting a model to data measured on a uniform sample of copper, and the estimated standard deviation of 3.79 W\,m$^{-1}$\,K$^{-1}$ was based on the typical uncertainty for the measurement. In the second case, the value of 278 W\,m$^{-1}$\,K$^{-1}$ was obtained from analysis of a layered sample that was half braze and half copper, and a Monte Carlo analysis was used to propagate several other measurement uncertainties through a model to give the standard deviation of 13 W\,m$^{-1}$\,K$^{-1}$. The second estimate is considered to be less reliable because it is affected by assumptions made about the properties of the braze. As high-probability confidence intervals from the two stated Gaussian distributions do not overlap (see Figure \ref{fig:figure2a}), we have conflicting beliefs about the value of the thermal conductivity. This leads us to be less certain about the value of the thermal conductivity than either of the previous studies suggests in isolation, and makes construction of a prior both challenging and subjective. 
To combine the two results into a prior distribution, in a way which reflects this uncertainty, we could consider a Gaussian distribution with (averaged) mean $\mu_{\lambda}:=(379+278)/2 = 328.5$ W\,m$^{-1}$\,K$^{-1}$ and large standard deviation $\sigma_{\lambda}:=50$ W\,m$^{-1}$\,K$^{-1}$. However, to ensure positivity, we will use \begin{equation} \lambda \sim \mbox{logNormal}(m_{\lambda}, s_{\lambda}), \label{eqn:priorlambda} \end{equation} as our prior and perform inference on \begin{equation} \theta_{1} := \ln(\lambda) \sim \calN(m_{\lambda},s_{\lambda}^{2}), \label{eqn:theta1} \end{equation} where $m_{\lambda}, s_{\lambda}$ are chosen so that $\EE[\lambda]=\mu_{\lambda}$ and $\mbox{sd}[\lambda]=\sigma_{\lambda}$ under the prior. That is, our prior density on $\theta_{1}$ is \begin{equation} \pi_{0}^{\lambda}(\theta_{1}) = \frac{1}{\sqrt{2\pi s_{\lambda}^{2}}}\exp\left(-\frac{1}{2s_{\lambda}^{2}}(\theta_{1} - m_{\lambda})^{2}\right). \label{eqn:prior_lam} \end{equation} The effects of the choice of prior are discussed later in this section. As we have no information about the value of the laser intensity $I$ other than that it is positive, we choose the improper prior distribution ${\cal U}(0,\infty)$ with ``density'' $\pi_{0}^{I}(I) \propto 1_{[0,\infty)}(I)$. For ease of sampling, we work with $\theta_{2} := \ln(I)$ and our prior ``density'' on $\theta_{2}$ is therefore given by \begin{equation} \pi_{0}^{I}(\theta_{2}) \propto \exp(\theta_{2}). \label{eqn:prior_I} \end{equation} This is an example of an uninformative prior. Another common choice is the Jeffreys prior \cite{jeffreys1998theory}, which is based on the Fisher information matrix \cite{allard2015multi,lehmann2006theory}. We choose an inverse gamma prior distribution for $\sigma^{2}$ to ensure positivity and so that we can exploit its conjugacy with the Gaussian distribution. 
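The moment-matching step for $m_{\lambda}$ and $s_{\lambda}$ admits a closed form: $s_{\lambda}^{2} = \ln(1+\sigma_{\lambda}^{2}/\mu_{\lambda}^{2})$ and $m_{\lambda} = \ln(\mu_{\lambda}) - s_{\lambda}^{2}/2$. A short sketch verifying the round trip:

```python
from math import log, sqrt, exp

def lognormal_params(mean, sd):
    """Return (m, s) such that logNormal(m, s) has the given mean and sd."""
    s2 = log(1.0 + (sd / mean) ** 2)
    m = log(mean) - 0.5 * s2
    return m, sqrt(s2)

# Prior choices from the text: mu_lambda = 328.5, sigma_lambda = 50.
m_lam, s_lam = lognormal_params(328.5, 50.0)

# Round trip: the lognormal mean and standard deviation are recovered.
mean = exp(m_lam + 0.5 * s_lam**2)
sd = sqrt(exp(s_lam**2) - 1.0) * mean
print(m_lam, s_lam, mean, sd)
```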
That is, our prior density on $\sigma^{2}$ is given by \begin{equation} \pi_{0}^{\sigma}(\sigma^{2}) = \frac{\beta_{\sigma}^{\alpha_{\sigma}}}{\Gamma(\alpha_{\sigma})} (\sigma^{2})^{-\alpha_{\sigma}-1} \exp\left( -\frac{\beta_{\sigma}}{\sigma^{2}} \right), \label{eqn:prior_sig2} \end{equation} where, here, $\Gamma(\cdot)$ is the gamma function. The hyperparameters $\alpha_{\sigma} = 3$ and $\beta_{\sigma} = 0.0079$ are chosen so that all reasonable values of $\sigma^2$ have relatively high probability with respect to the prior distribution. Assuming (a priori) independence of $\lambda$, $I$ and $\sigma^{2}$ (and therefore of $\theta_{1}$, $\theta_{2}$ and $\sigma^{2}$), and letting $\thetav:=(\theta_{1},\theta_{2})$, we define our prior density on $(\thetav$, $\sigma^{2})$ to be the product of the three densities \eqref{eqn:prior_lam}, \eqref{eqn:prior_I} and \eqref{eqn:prior_sig2}. That is, \begin{equation} \pi_{0}(\thetav,\sigma^{2}) = \pi_{0}^{\lambda}(\theta_{1}) \pi_{0}^{I}(\theta_{2}) \pi_{0}^{\sigma}(\sigma^{2}). \label{eqn:priortheta} \end{equation} We note that choosing a prior may be viewed as a form of regularization \cite{kaipio2006statistical}, allowing us to assign greater weight to values we deem more likely and less weight to those we believe less likely. Numerical investigations showed that for our problem the data $\dv$ is sufficiently informative about the unknowns that the choice of prior is not important. That is, changing the prior distribution (including the hyperparameters) had little effect on the estimated posterior. These studies are not presented in this paper. However, experiments involving different choices of prior for similar inverse problems are given in \cite{allard2015multi,heidenreich2015statistical}. \subsubsection{Data, likelihood, posterior and target densities.} \label{subsubsec:datalikelihoodposterior} We assume that our thermogram data $\dv$ are indirect observations of $\thetav$, subject to additive noise. 
That is, \begin{equation} \dv = \calvG(\thetav) + \etav, \quad \etav \sim \calN(\mathbf{0},\sigma^{2}I_{\nd}), \label{eqn:data} \end{equation} where $\calvG\colon\RR^{2}\to\RR^{\nd}$ is the \textit{observation operator}, mapping values of $\thetav$ into data and $\etav$ is the noise vector. A diagonal covariance matrix $\Sigma:=\sigma^{2}I_{\nd}$ is chosen as we assume individual measurements are subject to iid $\calN(0,\sigma^{2})$ noise. This yields a likelihood function $L\colon\RR^{\nd}\times\RR^{2}\times\RR^{+}\to\RR$ (describing the probability of observing $\dv$ for a specific choice of $\thetav$ and $\sigma^{2}$) given by \begin{equation} L(\dv\mid\thetav,\sigma^{2}) \propto (\sigma^2)^{-\nd/2} \exp(-\Phi(\thetav,\sigma^{2};\dv)), \label{eqn:likelihood} \end{equation} where \begin{equation} \Phi(\thetav,\sigma^{2};\dv) := \frac{1}{2}|\dv-\calvG(\thetav)|_{\Sigma}^{2} = \frac{1}{2\sigma^{2}}\|\dv-\calvG(\thetav)\|_{2}^{2}, \label{eqn:loglikelihood} \end{equation} is the negative log-likelihood or so-called \textit{potential}. The observation operator $\calvG$ is given explicitly by \begin{equation} \calvG(\thetav) = \left( \bar{u}(t_{1};\thetav), \bar{u}(t_{2};\thetav), \dots, \bar{u}(t_{\nd};\thetav) \right)^{\top}, \label{eqn:G} \end{equation} where \begin{equation} \bar{u}(t;\thetav):=\frac{1}{|D_{L}|}\int_{D_{L}}u(\rv,t;\thetav)\mbox{d}S_{z}(\rv) \label{eqn:ubar} \end{equation} is the average temperature over the disc $D_{L}$ on the top face of the sample at time $t$ and $u(\rv,t;\thetav)$ denotes the solution to the deterministic forward problem (\ref{eqn:pde}--\ref{eqn:bc0}) for a \textit{fixed} choice of $\thetav$ where $(\lambda,I) = \exp(\thetav):=(\exp(\theta_{1}), \exp(\theta_{2}))$. Note that \eqref{eqn:likelihood} is simply the probability density function of a multivariate Gaussian distribution. 
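A minimal sketch of evaluating the potential \eqref{eqn:loglikelihood} and the corresponding unnormalised log-likelihood \eqref{eqn:likelihood}, with the observation operator output supplied as a vector:

```python
import numpy as np

def potential(d, G_theta, sigma2):
    """Potential Phi(theta, sigma^2; d) = ||d - G(theta)||_2^2 / (2 sigma^2)."""
    r = np.asarray(d, dtype=float) - np.asarray(G_theta, dtype=float)
    return 0.5 * np.dot(r, r) / sigma2

def log_likelihood(d, G_theta, sigma2):
    """Unnormalised log-likelihood: log of (sigma^2)^(-n_d/2) * exp(-Phi)."""
    return -0.5 * len(d) * np.log(sigma2) - potential(d, G_theta, sigma2)

# A perfect fit gives zero potential; a misfit is penalised.
print(potential([1.0, 2.0], [1.0, 2.0], 0.5))  # -> 0.0
print(potential([1.0, 2.0], [1.0, 3.0], 0.5))  # -> 1.0
```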
Now, from Bayes' Theorem \cite{lee2012bayesian}, the posterior density $\pid\colon\RR^{2}\times\RR^{+}\times\RR^{\nd}\to\RR^{+}$ (describing the probability of obtaining values for $(\thetav, \sigma^{2})$ given the data $\dv$) satisfies \begin{equation} \pid(\thetav,\sigma^{2}\mid\dv) \propto L(\dv\mid\thetav,\sigma^{2})\pi_{0}(\thetav,\sigma^{2}). \label{eqn:posterior} \end{equation} If we integrate out the dependence on $\sigma^{2}$, we are left with our target density \begin{equation} \pi(\thetav\mid\dv) \propto L^{\sigma}(\dv\mid\thetav) \pi_{0}^{\lambda}(\theta_{1}) \pi_{0}^{I}(\theta_{2}), \end{equation} for $\thetav$ given $\dv$. Exploiting the conjugacy of the inverse gamma and Gaussian distributions, we find that \begin{equation} L^{\sigma}(\dv\mid\thetav) = t_{2\alpha_{\sigma}}\left(\dv \ \bigg| \ \calvG(\thetav), \frac{\beta_{\sigma}}{\alpha_{\sigma}}I_{\nd}\right) \label{eqn:Lsigma} \end{equation} is the density function of a multivariate $t$-distribution with $2\alpha_{\sigma}$ degrees of freedom, mean vector $\calvG(\thetav)$ and shape matrix $(\beta_{\sigma}/\alpha_{\sigma})I_{\nd}$. Determining the normalization constant \begin{equation} Z:=\int_{\RR^{2}} L^{\sigma}(\dv\mid\thetav)\pi_{0}^{\lambda}(\theta_{1})\pi_{0}^{I}(\theta_{2})\dthetav, \end{equation} for $\pi$ gives the target density explicitly. Determining any quantity of interest (QoI) involving $\lambda$ and $I$ (expectations, variances, probabilities, etc.) requires the computation of some integral involving the density $\pi$. This task is non-trivial. One approach, which is particularly amenable to situations with a small number of unknowns and where the target density is a smooth function of the unknowns, is Smolyak sparse grid quadrature \cite{schillings2013sparse}. An alternative approach, which we take here, is Markov chain Monte Carlo (MCMC) sampling \cite{brooks2011handbook}. MCMC methods allow us to sample from a distribution whose density is known only up to a constant of proportionality. 
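The marginal likelihood \eqref{eqn:Lsigma} can be evaluated in log form using the standard multivariate $t$ density; the following is a sketch (the helper name \texttt{log\_t\_likelihood} is ours, and we exploit the fact that the shape matrix is a multiple of the identity):

```python
import numpy as np
from math import lgamma, log, pi

def log_t_likelihood(d, G_theta, alpha, beta):
    """Log of a multivariate t density with nu = 2*alpha degrees of
    freedom, mean G(theta) and shape matrix (beta/alpha) * I."""
    d = np.asarray(d, dtype=float)
    r = d - np.asarray(G_theta, dtype=float)
    n, nu, c = d.size, 2.0 * alpha, beta / alpha
    quad = np.dot(r, r) / c          # r^T Sigma^{-1} r with Sigma = c * I
    return (lgamma(0.5 * (nu + n)) - lgamma(0.5 * nu)
            - 0.5 * n * log(nu * pi) - 0.5 * n * log(c)
            - 0.5 * (nu + n) * log(1.0 + quad / nu))
```

The density is maximal when the residual $\dv-\calvG(\thetav)$ vanishes and decays polynomially (rather than exponentially) in the residual, reflecting the heavier tails obtained by marginalizing over $\sigma^{2}$.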
Our specific approach is described in Section \ref{subsubsec:mcmc}. \subsubsection{Approximating the target density.} \label{subsubsec:approximatingthetargetdensity} Since we are unable to solve \eqref{eqn:pde}--\eqref{eqn:bc0} exactly, we are unable to evaluate $u$ in (\ref{eqn:ubar}) for a given value of $\thetav$. When using an MCMC method, the standard approach is to approximate the forward operator on a `case-by-case' basis. That is, for each proposed $\thetav$, a new spatio-temporal approximation is computed. We will refer to this as \textit{plain MCMC}, as in \cite{hoang2013complexity}. For time-dependent and nonlinear problems, each forward solve usually requires the solution of a \textit{sequence} of linear systems. This exacerbates the problem, and the cost of repeated forward solves can rapidly exceed a user's computational budget. We propose an alternative approach where a surrogate is used to replace the repeated forward solves for different values of $\thetav$. In particular, we use a pre-computed SGFEM approximation $\uhkt$ to the parametric forward problem \eqref{eqn:ypde}--\eqref{eqn:yq}. As explained in Section \ref{subsec:forwardproblem}, once computed, this can be evaluated for any choice of the parameters $y_{1}$ and $y_{2}$ (corresponding to proposed values of $\lambda=\exp(\theta_{1})$ and $I=\exp(\theta_{2})$) without the need for any further linear system solves. We will refer to this approach as \textit{SGFEM MCMC}. In detail, we use $\uhkt$ to approximate $L^{\sigma}(\dv|\thetav)$ in (\ref{eqn:Lsigma}) for proposed values of $\thetav$. However, recall that $\uhkt$ is a function of $\yv\in\Gamma$. 
From \eqref{eqn:ylambdaI}, each value of $y_{1}$ and $y_{2}$ can be mapped to values of $\lambda, I$ in the set $\Omega_{\lambda}\times\Omega_{I} := (\mu_{\lambda}-\nu_{\lambda}\sqrt{3}, \mu_{\lambda}+\nu_{\lambda}\sqrt{3}) \times (\mu_{I}-\nu_{I}\sqrt{3}, \mu_{I}+\nu_{I}\sqrt{3})$ via the mapping \begin{equation} \zeta(\yv) := \left( \mu_{\lambda} + \nu_{\lambda} y_{1}, \mu_{I} + \nu_{I} y_{2} \right), \label{eqn:zetamap} \end{equation} which has inverse \begin{equation} \zeta^{-1}(\lambda,I) := \left( (\lambda-\mu_{\lambda})/\nu_{\lambda}, (I-\mu_{I})/\nu_{I} \right). \label{eqn:zetainvmap} \end{equation} Notice that $\yv\in\Gamma$ if, and only if, $\zeta(\yv) \in \Omega_{\lambda}\times\Omega_{I}$. For values of $\thetav$ for which $(\lambda,I) \in \Omega_{\lambda}\times\Omega_{I}$, we replace $\calvG$ in (\ref{eqn:Lsigma}) with the \textit{approximate} observation operator given by \begin{equation} \calvGhkt(\thetav) := \big(\bar{u}_{hk\tau}(t_{1}, \zeta^{-1}(e^{\thetav})), \dots, \bar{u}_{hk\tau}(t_{\nd},\zeta^{-1}(e^{\thetav}))\big)^{\top}, \label{eqn:Ghk} \end{equation} where \begin{equation} \bar{u}_{hk\tau}(t,\yv):=\frac{1}{|D_{L}|}\int_{D_{L}} \uhkt(\rv, t, \yv) \, \mbox{d}S_{z}(\rv). \label{eqn:uhktbar} \end{equation} This induces an approximate target density \cite{cotter2010approximation,dashti2016bayesian,stuart2010inverse} given by \begin{equation} \pihkt(\thetav\mid\dv) \propto L_{hk\tau}^{\sigma}(\dv\mid\thetav) \pi_{0}^{\lambda}(\theta_{1}) \pi_{0}^{I}(\theta_{2}) \label{eqn:pihktz} \end{equation} where $L_{hk\tau}^{\sigma}$ is given by \eqref{eqn:Lsigma} with $\calvG$ replaced by $\calvGhkt$. 
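A sketch of the maps $\zeta$ and $\zeta^{-1}$, together with the membership test for $\Gamma$, might read as follows (the parameter values in the usage example are placeholders):

```python
def zeta(y, mu_lam, nu_lam, mu_I, nu_I):
    """Map parameter-domain coordinates y = (y1, y2) to (lambda, I)."""
    return (mu_lam + nu_lam * y[0], mu_I + nu_I * y[1])

def zeta_inv(lam, I, mu_lam, nu_lam, mu_I, nu_I):
    """Inverse map: (lambda, I) -> (y1, y2)."""
    return ((lam - mu_lam) / nu_lam, (I - mu_I) / nu_I)

def in_gamma(y):
    """Check y in Gamma = [-sqrt(3), sqrt(3)]^2, i.e. surrogate usable."""
    s = 3 ** 0.5
    return all(-s <= yi <= s for yi in y)

# Round trip with placeholder parameters.
p = (328.5, 50.0, 1.0e12, 1.0e11)
lam, I = zeta((0.5, -0.5), *p)
print(zeta_inv(lam, I, *p))   # -> (0.5, -0.5)
print(in_gamma((2.0, 0.0)))   # -> False: fall back to the plain solver
```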
Note that we use a number of time steps $\nt$ which is a multiple of $(\nd-1)$ so that the measurement times $t_{n}$ are a subset of the time steps $\tau_{n}$ and we need not interpolate in time to compute $\bar{u}_{hk\tau}(t_{n}, \zeta^{-1}(e^{\thetav})).$ We have no guarantee that the surrogate is accurate outside of $\Omega_{\lambda}\times\Omega_{I}$. Hence, for values of $\thetav$ for which $(\lambda, I) \notin\Omega_{\lambda}\times\Omega_{I}$ (equivalently $\yv\notin\Gamma$), we would instead need to approximate \mbox{$L^{\sigma}(\dv \mid \thetav)$} using a standard deterministic spatio-temporal discretization (as in the plain MCMC approach). In that case, we denote the induced approximate target density (defined analogously to $\pihkt$ in \eqref{eqn:pihktz}) by $\piht$. To ensure that we can evaluate $\uhkt$ for as many values of $\lambda$ and $I$ proposed by the selected MCMC method as possible, we need to choose $\mu_{\lambda}$, $\nu_{\lambda}$, $\mu_{I}$ and $\nu_{I}$ in \eqref{eqn:ylambdaI} appropriately. That is, the distributions used for $\lambda$ and $I$ in the construction of the surrogate should sufficiently cover the supports of the prior distributions selected for the Bayesian inference. The uniform distribution selected for $\lambda$ is shown in Figure \ref{fig:figure2b}. Should the MCMC propose an appreciable number of samples outside of the region in which the surrogate can be used, leading to a large number of significantly more expensive likelihood approximations (as in the plain MCMC approach), then it may be necessary to compute a new surrogate approximation which is accurate on a larger region of the state space. This was not necessary in any of the examples that we present in Section \ref{sec:results}. \subsubsection{Markov chain Monte Carlo.} \label{subsubsec:mcmc} As mentioned in Section \ref{subsubsec:approximatingthetargetdensity}, we use an MCMC algorithm to sample from the approximate target distribution, with density $\pihkt$. 
For clarity of exposition, we use the simple Random Walk Metropolis Hastings (RWMH) \cite{brooks2011handbook} algorithm, although more advanced MCMC algorithms may be used, for example MALA \cite{roberts1998optimal} or HMC \cite{girolami2011riemann}. The RWMH algorithm proposes values of $\thetav$ from a proposal distribution $q$ which is symmetric around the current state (usually a Gaussian), and then accepts/rejects them in the so-called Metropolis step in such a way that the resulting samples are the states of a Markov chain with stationary density equal to the target density. This means that after a sufficient number of samples have been produced, say $n_{B}$, states can be considered draws from the target distribution. Further post-processing of the chain, such as \textit{thinning} (where only a subset of the states are retained), can be implemented in order to reduce the correlation between samples. Pseudocode for the algorithm used is given in Algorithm 1. \begin{algorithm} \caption{SGFEM (RWMH) MCMC algorithm to sample from the approximate target density $\pihkt$.} compute SGFEM solution $\uhkt$ (offline) draw initial state $\thetav^{(0)}\sim\pi^{\lambda}_{0}\times\pi^{I}_{0}$ \For{$ m = 0,1,2,\dots, M-1$} { draw proposal $\thetav^{*} \sim q=\calN(\thetav^{(m)}, \beta^{2}I_{2})$ \eIf{$\exp(\thetav^{*})\in\Omega_{\lambda}\times\Omega_{I}$} { set $\hat{\pi}_{m+1} := \pihkt\left(\thetav^{*}\mid\dv\right)$ } { set $\hat{\pi}_{m+1} := \piht\left(\thetav^{*}\mid\dv\right)$ } set $p := 1 \wedge \frac{\hat{\pi}_{m+1}}{\hat{\pi}_{m}} \cdot \frac{q\left( \thetav^{(m)}\mid\thetav^{*}\right)}{q\left(\thetav^{*}\mid\thetav^{(m)}\right)}$ draw $u\sim\calU(0,1)$ \eIf{$u<p$} { set $\thetav^{(m+1)} := \thetav^{*}$ (accept) } { set $\thetav^{(m+1)} := \thetav^{(m)}$ (reject) } } discard $\thetav_{0},\dots,\thetav_{\nB-1}$ (burn in) thin chain compute Monte Carlo approximations $\EE[\varphi(\lambda,I)] \approx 
\frac{1}{(M+1-n_{B})}\sum\limits_{i=n_{B}}^{M}\varphi\left(e^{\theta_{1}^{(i)}},e^{\theta_{2}^{(i)}}\right)$ \label{alg:rwmh} \end{algorithm} The SGFEM solve is performed \textit{once} outside the Monte Carlo loop. There are $n_{h}n_{k}$ equations to solve per time step and the cost of each of these solves is $\calO(\nh\nk)$, provided an optimal solver (see \cite{benner2015low,powell2009block}) is available. This is an offline computation. Recall that $n_{k}$ is defined in (\ref{nk_def}). Since $k$ denotes the chosen polynomial degree (typically $k=6$ is sufficiently high in this work), $n_{k}$ is orders of magnitude smaller than $n_{h}$ and $n_{t}$. The evaluation of the approximate observation operator $\calvGhkt$ is made once per iteration (online) and has only a small cost. Specifically, since the observed quantity is averaged in space, evaluating $\bar{u}_{hk\tau}$ in (\ref{eqn:uhktbar}) for all $\nd$ measurement times only requires a matrix-vector product with a pre-computed matrix of size $\nd \times n_{k}$ with entries depending on the coefficients $u_{ij}^{n}$. In each iteration, we simply need to compute a vector of length $n_{k}$ containing the Legendre polynomials evaluated at the point $\yv \in \Gamma$ corresponding to the proposed $\thetav$. The total cost of the SGFEM MCMC approach consists of the offline cost (building the surrogate) and the online cost (evaluating the surrogate). This is summarized in Table \ref{tab:table1}. In the plain MCMC approach, we can use the same finite element discretization and time-stepping method as in the SGFEM approach. At each iteration, however, we need to perform a forward solve to compute a new spatio-temporal approximation and evaluate it at the proposed value of $\thetav$. The cost of each solve behaves as $\nt\times\calO(\nh)$, assuming an optimal solver for the FEM linear systems is available. 
Note that we do not include the cost of evaluating the approximation in Table \ref{tab:table1} as the solve is the dominant cost. It is clear that, so long as the number of MCMC samples is larger than $\nk$ (the dimension of the polynomial space used to construct the surrogate), a saving will be made with the SGFEM MCMC approach. Due to the slow convergence of MCMC methods, $M$ is typically of order $10^{6}$ or higher whereas, since we have only two unknowns, $\nk$ only grows like $\calO(k^{2})$, where $k$ is the chosen polynomial degree. Furthermore, as we save computational effort on every iteration, the larger $M$ is, the greater the savings made. \begin{table} \centering \caption{\label{tab:table1} Computation costs of plain and SGFEM MCMC.} \begin{tabular}{@{}lcc} \hline & Offline & Online \\ \hline plain & -/- & $M\times \nt \times\calO(\nh)$ \\ SGFEM & $\nt \times \calO(\nh \nk)$ & $M\times\calO(\nd\nk)$ \\ \hline \end{tabular} \end{table} \subsubsection{Convergence and error.} \label{subsubsec:convergence} Both the computational cost and the accuracy of the SGFEM approach depend on our choice of discretization parameters. Since we use piecewise linear finite elements, the spatial approximation error for the forward problem can be expected to behave as $\calO(h)$, where $h$ is the mesh size. The implicit Euler method yields an error which behaves as $\calO(\tau)$, where $\tau$ is the time step size. For parabolic PDEs, under certain simplifying assumptions, the error associated with the polynomial approximation on the parameter domain decays exponentially with respect to the polynomial degree $k$ (see \cite{nobile2009analysis,xiu2009efficient}). Finally, the Monte Carlo error is $\calO(M^{-1/2})$, where $M$ is the number of samples. In Section \ref{sec:results}, we demonstrate that the approximation to the target density $\pihkt$ obtained by the SGFEM MCMC approach converges as the polynomial degree $k$ is increased. 
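The accept/reject mechanics of Algorithm 1 can be illustrated with a minimal RWMH implementation; here \texttt{log\_target} is a stand-in for $\log\pihkt$ and, purely for illustration, we sample a standard Gaussian toy target (the $\Gamma$-membership check and the fallback to $\piht$ are omitted):

```python
import numpy as np

def rwmh(log_target, theta0, n_samples, beta, rng):
    """Random Walk Metropolis-Hastings with a N(theta, beta^2 I) proposal.

    Since the Gaussian proposal is symmetric, the q-ratio in the
    acceptance probability cancels (cf. Algorithm 1).
    """
    theta = np.asarray(theta0, dtype=float)
    lp = log_target(theta)
    chain = np.empty((n_samples, theta.size))
    for m in range(n_samples):
        prop = theta + beta * rng.standard_normal(theta.size)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept
            theta, lp = prop, lp_prop
        chain[m] = theta                           # reject keeps old state
    return chain

# Toy example: standard Gaussian target in 2D (illustrative only).
rng = np.random.default_rng(1)
chain = rwmh(lambda t: -0.5 * np.dot(t, t), np.zeros(2), 20_000, 1.0, rng)
burn = chain[2_000:]           # discard burn-in
print(burn.mean(axis=0))       # close to (0, 0)
```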
In future work, using the framework in \cite{cotter2010approximation} and \cite{hoang2013complexity} we hope to show convergence in the Hellinger distance \cite{stuart2010inverse} of $\pihkt$ to the true target density $\pi$ as the forward approximation is improved by decreasing $h$ and $\tau$ and increasing $k$. In fact, under certain conditions \cite{cotter2010approximation} on the approximation of $\calvG$ by $\calvGhkt$, the rate of convergence in the forward approximation may be preserved in that of the target density. This means that we are able to sample from a distribution with density close to $\pi$ as long as the forward approximation is sufficiently accurate. \section{Results} \label{sec:results} A laser flash experiment was performed and temperature measurements were taken at $\nd=401$ equally spaced time points for the duration $T=40$ milliseconds. The chosen values of all the model inputs are given in Table \ref{tab:table2}. Prior distributions are stated for the unknowns $\lambda$ and $I$ rather than fixed values. 
\begin{table} \centering \caption{\label{tab:table2} Parameter values for a single layer copper sample laser flash experiment.} \begin{tabular}{@{}lcc} \hline Quantity\,/\,units & Symbol & Value \\ \hline Sample Radius\,/\,m & $R$ & $1.240\times10^{-2}$ \\ Sample Height\,/\,m & $H$ & $2.037\times10^{-3}$ \\ Furnace/Ambient Temperature\,/\,K & $\Ta$ & $3.850\times10^{+2}$ \\ Experiment Duration\,/\,s & $T$ & $4.000\times10^{-2}$ \\ Laser Flash Duration\,/\,s & $\tf$ & $4.000\times10^{-4}$ \\ Laser Flash Depth\,/\,m & $\zf$ & $1.273\times10^{-4}$ \\ Density\,/\,kg\,m$^{-3}$ & $\varrho$ & $8.930\times10^{+3}$ \\ Specific Heat Capacity\,/\,J\,kg$^{-1}$\,K$^{-1}$ & $\cp$ & $3.970\times10^{+2}$ \\ Heat Transfer Coeff.\,/\,W\,m$^{-2}$\,K$^{-1}$ & $\kappa$ & $1.100\times10^{+3}$ \\ \hline Thermal Conductivity\,/\,W\,m$^{-1}$\,K$^{-1}$ & $\lambda$ & $\mbox{logN}(m_{\lambda},s_{\lambda})$ \\ Laser Intensity\,/\,W\,m$^{-3}$ & $I$ & $\calU(0,\infty)$ \\ \hline \end{tabular} \end{table} First, we use the SGFEM MCMC scheme to sample from the approximate target density $\pihkt$ in the case of a uniform laser profile. That is, when $\chi(r)=1$ in the definition of the source term $Q$ in (\ref{eqn:q}) and (\ref{eqn:yq}). We comment on the speedup achieved over the plain MCMC approach and demonstrate numerical convergence with respect to the polynomial degree $k$. Second, we look at how the modelling of the laser profile affects the results of the Bayesian inference, comparing the results for the uniform profile to those obtained with two Gaussian-shaped profiles. \subsection{Sampling the approximate target distribution using SGFEM MCMC} \label{subsec:results1} To construct the surrogate $\uhkt$, we use a spatial mesh with $\nh=12,206$ vertices and characteristic element size $h=4.24\times10^{-5}$\,m. The number of implicit Euler time steps is set to $\nt=800$. 
After investigating the accuracy of the solution to the forward problem, the polynomial degree for the parametric approximation is chosen to be $k=6$. The resulting polynomial space has dimension $\nk=28$. The time taken\footnote{CPU time. All experiments run on MacBook Pro laptop with 2.5\,GHz Intel Core i7 processor and 16\,GB RAM.} to complete the offline stage was 378 seconds. The surrogate RWMH algorithm described in Algorithm 1 in Section \ref{subsubsec:mcmc} was run to generate 100 million samples from the approximate target distribution. The time taken to produce these samples was 13,223 seconds. We discarded the initial 10,000 samples as burn-in and the acceptance rate was tuned to near the optimal value of $23\%$ (see \cite{roberts2001optimal}) by choosing the proposal standard deviation $\beta=1.55$. Histograms of the 100 million samples produced by Algorithm 1 were formed to obtain the approximate target density $\pihkt(\lambda, I \mid \dv)$ and the approximate marginal densities $\pihkt^{\lambda}(\lambda \mid \dv)$ and $ \pihkt^{I}(I \mid\dv)$ for $\lambda$ and $I$, respectively. These are shown in Figure \ref{fig:figure3}. \begin{figure*} \includegraphics[width=\textwidth]{figure3.eps} \caption{Approximate joint target density $\pihkt(\lambda,I \mid \dv)$ (bottom left) and approximate marginal densities $\pihkt^{\lambda}(\lambda \mid \dv)$ (top left) and $\pihkt^{I}(I \mid \dv)$ (bottom right) produced using 100 million MCMC samples with SGFEM surrogate computed using $n_{h}=12,206$, $n_{t}=800$ and $k=6$. Uniform laser profile $\chi(r)=1$. } \label{fig:figure3} \end{figure*} Combining the offline and online steps, the total time required to approximate the target distribution was 13,601 seconds ($1.36\times10^{-4}$ seconds per sample). For a finer spatio-temporal discretization, the cost of the offline stage would increase but, crucially, the cost of the online stage would remain unchanged (see Table \ref{tab:table1}). 
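The reported timings translate into the following back-of-the-envelope comparison, assuming (as reported below) roughly 35 seconds per plain forward solve:

```python
# Back-of-the-envelope comparison of plain vs. SGFEM MCMC (figures from the text).
seconds_per_solve = 35.0          # one plain forward solve (approximate)
sgfem_total = 13_601.0            # offline + online, 100 million samples

M = 100_000_000
plain_total = M * seconds_per_solve            # seconds
print(plain_total / (3600 * 24 * 365.25))      # ~111 years
print(plain_total / sgfem_total)               # speedup ~ 5 orders of magnitude

M_small = 100_000
print(M_small * seconds_per_solve / (3600 * 24))   # ~40.5 days
```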
Like all MCMC algorithms, this approach is highly parallelizable. Once $\uhkt$ is computed, we could use it to produce multiple MCMC chains in parallel and combine the resulting samples appropriately. However, we did not do this here. No experiment was performed with the plain MCMC method. Since a single forward solve costs around 35 seconds when we use the same spatio-temporal discretization, we estimate that the (CPU) time required to compute 100 million samples (again without exploiting parallelization) would be around 111 years! This clearly shows that the surrogate approach represents a huge time saving (here, a reduction of 5 orders of magnitude) in comparison to the plain MCMC approach. Even with a more modest 100,000 samples, the plain MCMC approach would take 40.5 days, compared to 391 seconds with the SGFEM approach. With this smaller number of samples, the Monte Carlo error would far exceed the difference between the two approximate target densities $\piht$ and $\pihkt$. The distributions shown in Figure \ref{fig:figure3} now tell us about the values of the unknowns $\lambda$ and $I$ (conditional on the data) and the uncertainty in these values. We also gain information about the relationship between $\lambda$ and $I$. Note that this is a huge advantage over optimization approaches. We obtain a mean value of $355.15$\,W\,m$^{-1}$\,K$^{-1}$ for $\lambda$ and a mean value of $1.1816\times10^{12}$\,W\,m$^{-3}$ for $I$. The standard deviations are $1.10$ W~m$^{-1}$~K$^{-1}$ and $1\times10^{9}$ W~m$^{-3}$, respectively. The mean value for $\lambda$ lies within the support of the prior (see Figure \ref{fig:figure2b}), but is different to the prior mean $\mu_{\lambda}=328.5$ W~m$^{-1}$~K$^{-1}$. Moreover, we observe that the standard deviation is much reduced from the prior value $\sigma_{\lambda}=50$ W~m$^{-1}$~K$^{-1}$. From Figure \ref{fig:figure3} we see that there is negative correlation between $\lambda$ and $I$ in the target distribution. 
That is, a larger value of the thermal conductivity $\lambda$ requires a smaller value of the laser intensity $I$ (and vice versa) for the model output to be as consistent with the data as possible. The computed value of the correlation coefficient is -0.48. This highlights the importance of performing inference on both unknowns simultaneously, as fixing the value of one amounts to considering the conditional distribution for the other. The approximate densities of two such conditional distributions are displayed in Figure \ref{fig:figure4}, along with the approximate marginal target density for $\lambda$. Observe that the three densities indicate very different likely values of $\lambda$. Note also that fixing a lower (resp. higher) value of the laser intensity $I$ results in a larger (resp. smaller) likely value of $\lambda$ being indicated. The histograms used to approximate the conditional densities are far less converged than those for the marginals as there are fewer samples which may be used to construct them. Conditional distributions for larger or smaller values of $I$ than those used here did not have sufficient samples to create histograms, but would result in more severe differences in the distributions on $\lambda$. Correlation between $\lambda$ and $I$, and hence the need for joint inference, has not previously been considered for the data set considered here. \begin{figure} \centering \includegraphics[width=\textwidth]{figure4.eps} \caption{Approximate conditional densities \mbox{$\pihkt^{\lambda\mid I}(\lambda\mid\dv,I)$} for two different values of $I$ with the approximate marginal density $\pihkt^{\lambda}(\lambda \mid \dv)$ from Figure \ref{fig:figure3}. Uniform laser profile $\chi(r)=1$.} \label{fig:figure4} \end{figure} In Figure \ref{fig:figure5}, we show the results of a numerical convergence study. We plot the approximate marginal target density for $\lambda$ obtained using a varying value of $k$ in the construction of the surrogate. 
The spatio-temporal discretization is fixed as before. For $k \ge 6$, the densities produced are almost indistinguishable visually. The approximation converges as $k$ increases. Similar behaviour is witnessed when the discretization parameters $h$ and $\tau$ are decreased. \begin{figure*} \includegraphics[width=\textwidth]{figure5.eps} \caption{Approximate marginal target densities $\pihkt^{\lambda}(\lambda \mid \dv)$ for $\lambda$ produced using 100 million MCMC samples with SGFEM surrogate computed using $n_{h}=12,206$, $n_{t}=800$ and varying values of $k$. Uniform laser profile $\chi(r)=1$.} \label{fig:figure5} \end{figure*} In Figure \ref{fig:figure6}, we compare the experimental data $\dv$ with a model thermogram computed by evaluating the approximate observation operator $\calvGhkt$ in \eqref{eqn:Ghk} at the approximate target distribution mean (computed with $k=6$). We observe that, modelling the laser profile as uniform (i.e., $\chi(r) = 1$), a very good fit to the data has been achieved. This could potentially be improved further by treating other model inputs as unknown (such as the heat transfer coefficient $\kappa$ in the boundary condition) and inferring their values from the data too. \begin{figure*} \includegraphics[width=\textwidth]{figure6.eps} \caption{Thermogram of experimental data $\dv$ and model thermograms for three different laser profiles computed by evaluating the observation operator $\calvGhkt$ at the approximate target distribution mean obtained using 100 million MCMC samples and SGFEM surrogate computed using $n_{h}=12,206$, $n_{t}=800$ and $k=6$.} \label{fig:figure6} \end{figure*} \subsection{Varying the laser profile} \label{subsec:results2} In this section, we consider the effect on our inference results of using a Gaussian-shaped profile to describe the laser flash, instead of a uniform profile. 
That is, we now set \begin{equation} \chi(r) := \exp(-r^{2}/2\rf^{2}) \label{eqn:chiGaussian} \end{equation} in \eqref{eqn:yq}, rather than $\chi(r) =1$ as in Section \ref{subsec:results1}. By varying the parameter $\rf$ in \eqref{eqn:chiGaussian}, we are able to control how quickly the laser intensity in the model decays as the distance from the centre of the sample increases. We repeat the experiment in Section \ref{subsec:results1}, now using the values $\rf=R$ and $\rf=R/3$ to represent lasers whose profiles decay slowly and rapidly, respectively. The resulting estimates of the marginal target distribution means and standard deviations for $\lambda$ are displayed in Table \ref{tab:table3}. Note that such investigations would be impossible without the rapid sampling provided by the SGFEM surrogate approach. We see that the profile of the laser has a significant effect on the reported values. The mean value of $\lambda$ decreases as the laser becomes more focused at the centre of the sample. This indicates that checking the uniformity of the laser is important for accurate measurement and that uncertainties about the laser profile shape should be built into the model. Note that the computed values for $I$ are not directly comparable. \begin{table} \centering \caption{\label{tab:table3} Approximate marginal target distribution means and standard deviations for $\lambda$ for different laser profiles.} \begin{tabular}{@{}lccc} \hline & Uniform & $\rf = R$ & $\rf = R/3$ \\ \hline \ \ \, mean $\lambda$\,/\,W\,m$^{-1}$\,K$^{-1}$ & $355.15$ & $336.86$ & $281.94$ \\ st. dev. $\lambda$\,/\,W\,m$^{-1}$\,K$^{-1}$ & $1.10$ & $0.74$ & $1.61$ \\ \hline \end{tabular} \end{table} Figure \ref{fig:figure6} shows model thermograms computed by evaluating $\calvGhkt$ at the target distribution mean for all three laser profiles. 
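For reference, the profiles compared in this section can be evaluated side by side; a small sketch (using the sample radius $R$ from Table \ref{tab:table2}) is:

```python
import numpy as np

R = 1.240e-2  # sample radius in metres (Table 2)

def chi_uniform(r):
    """Uniform laser profile, chi(r) = 1."""
    return np.ones_like(r)

def chi_gaussian(r, r_f):
    """Gaussian-shaped laser profile, chi(r) = exp(-r^2 / 2 r_f^2)."""
    return np.exp(-r**2 / (2.0 * r_f**2))

r = np.linspace(0.0, R, 5)
print(chi_gaussian(r, R))      # slow decay: exp(-1/2) ~ 0.61 at r = R
print(chi_gaussian(r, R / 3))  # fast decay: exp(-9/2) ~ 0.011 at r = R
```

The fast-decaying choice $\rf=R/3$ thus concentrates essentially all of the deposited energy near the sample axis.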
It is clear that a good level of fit to the experimental data has been obtained when using the uniform laser profile or the flatter Gaussian profile ($\rf=R$). When using the rapidly decaying Gaussian profile, however, the target distribution mean gives a bad fit to the data. This indicates that modelling the laser this way is incorrect and that the experiment has been performed in such a way that the laser profile is actually close to the ideal uniform shape. The bad fit observed for the sharply decaying Gaussian profile reflects the fact that in this model the heat energy travels rapidly through the centre of the sample and then slowly spreads out in the radial direction. The simulated temperature does not reach a steady state over the time scale of the experiment. Figure \ref{fig:figure7} shows the approximate temperature at the spatial locations $(r,z)=(R,H)$ and $(r,z)=(0,H)$ obtained by evaluating $\uhkt$ at the target density mean for the duration of the experiment. At the point $(r,z)=(0,H)$, the temperature increases rapidly at early times and is then still cooling at time $T$. However, at $(r,z)=(R,H)$, the temperature increases more slowly and is still increasing at time $T$. Note that, for each laser profile considered, there is a discrepancy at the early times where the measured temperature spikes. This is due to the fact that the laser flash itself is detected by the sensor. This feature has not been incorporated into the model. 
\begin{figure} \centering \includegraphics[width=0.75\textwidth]{figure7.eps} \caption{Approximate temperature $\uhkt$ evaluated at the target density mean for $(\lambda,I)$ at $\rv=(0,H)$ (centre of the top face) and at $\rv=(R,H)$ (outer edge of top face) for the Gaussian laser profile with fast decay, i.e., $\chi(r):=\exp(-9r^{2}/2R^{2})$.} \label{fig:figure7} \end{figure} \section{Conclusions and future work} \label{sec:conclusions} In this paper we formulated the determination of the thermal diffusivity (and hence the conductivity) of a material given laser flash experiment data as a Bayesian inverse problem. We demonstrated how an SGFEM surrogate may be used to accelerate MCMC sampling from an approximate target distribution, in a real-life situation where the plain MCMC approach is infeasible. We explained how posing the problem as a Bayesian inverse problem allows for clear quantification of the uncertainty in the estimate of the thermal conductivity. We used this approach to find the joint distribution on the thermal conductivity and the laser intensity. We observed a correlation in the posterior between these unknowns, which indicates that incorrect estimates of the laser intensity without joint inference can lead to biased estimates of the thermal conductivity. A numerical convergence study was presented, showing the effect of varying the discretization parameters in the construction of the surrogate. Due to the speedup over plain MCMC provided by the SGFEM MCMC approach, we have been able to investigate the effect of the laser profile on the inference results. In particular, we have shown that the spatial uniformity of the laser profile strongly affects the value of the estimated thermal conductivity. This demonstrates the importance of both accurately modelling the laser flash and correctly setting up the physical experiment. 
Combining a parametric surrogate with MCMC sampling offers a computationally efficient way to solve inverse problems involving PDEs. However, we caution that the appropriate choice of surrogate is problem-dependent. For problems where the forward solution is a non-smooth function of the parameters chosen to represent the uncertain inputs, the standard SGFEM approach adopted here is not recommended. Moreover, for problems where the solution becomes more nonlinear over time, using a fixed polynomial basis for all time steps, as has been done here, is not a sensible approach. When a surrogate is chosen, its accuracy with respect to the solution of the forward problem should first be investigated before using it to accelerate the numerical solution of an inverse problem. In future, we shall consider how the multi-thermogram approach of \cite{allard2015multi} can similarly be incorporated into the SGFEM MCMC framework (for single and multi-layered materials) through simultaneous evaluation of surrogates for multiple forward problems. We will also incorporate more uncertain inputs into the model, for example the heat transfer coefficient $\kappa$ in the definition of the boundary condition. \section*{Acknowledgements} The authors would like to acknowledge the support of the National Measurement System of the UK Department of Business, Energy \& Industrial Strategy, which funded this work as part of NPL's Data Science program. \bibliographystyle{siam}
\section{Introduction: setting the stage} \label{sec:intro} Stars are made of plasmas, in which physical conditions range over several orders of magnitude in pressure, temperature and density. In them, many hydro-dynamical and magneto-hydro-dynamical processes occur in variable and complex ways, characterized by micro- and macro-turbulence phenomena with Reynolds numbers well beyond the limits experimentally studied in terrestrial laboratories \citep{tsuji} and actually also beyond our capability of detailed, quantitative modeling. Evolutionary computations can only ascertain that traditional one-dimensional models are largely insufficient to account for short- and long-term processes of stirring and mixing \citep{stan6} and often limit themselves to simulating schematically the layers affected by pure convection, using the mixing length theory or some other simplified approaches \citep[see][and references therein]{sal1, sal2}. They can also address in similar ways the regions of semiconvection, distinguishing between the two mixing schemes through the Schwarzschild and Ledoux criteria for stability, both based on simple polytropic approaches \citep{chan}. In modern computations, they may also include some (but not all) of the effects induced by rotation \citep[see e.g.][and references therein]{rot3, rot1, rot2, rot5, stan7}. These limits, in existing efforts, are unavoidable and make clear that the real behavior of stellar plasmas is more complex than our basic descriptions. Even for the Sun, a large family of different dynamical processes induce variations in the structure, hence in the irradiance, over time scales ranging from minutes to billions of years \citep{kopp}. It is therefore expected that also in the advanced stages of their life, during and after the ascent to the Red Giant Branch (RGB), stars sharing the same evolutionary scheme as the Sun experience mass and momentum transfer at different speeds \citep{jerry1, char}. 
These are the stellar objects of low and intermediate mass (LMS and IMS: i.e. those in the mass ranges 1 to 3 M$_{\odot}$~ and 3 to about 8 M$_{\odot}$, respectively). The exercise of imagining, for them, which circulation or diffusion mechanisms might be at play is required at least by the need of reproducing isotopic and elemental observations that cannot be explained by usual models with pure convection \citep{b+9, n+03, b+10, kl14}. Recent research has focused on many such mechanisms, from relatively fast dynamical \citep{dt, pig, bat1, bat2} and magneto-hydro-dynamical \citep{b+07, nor} processes with speed up to several m/sec, all the way down to various forms of slow (less than 1 cm/sec) diffusive mixing \citep{cz8,egg1,egg2,stan1}. It is now ascertained that some abundance observations provide constraints at least on the velocity, and possibly on the nature, of the dynamical mixing phenomena occurring \citep{her2, b+07, pal2, den1, liu}. This is so in particular for Li \citep{cl10, pal2} and for the enrichment in neutron-rich elements \citep{cris0, cris1, t+16, cris2} occurring in the final evolutionary stages, approaching the RGB asymptotically (AGB phases). Indeed, it has been known for more than 30 years now that the bulk of neutron fluxes for the required nucleosynthesis episodes is produced in $^{13}$C~ reservoirs \citep{gal0, gal1, ar9, ka11}, formed at the recurrent penetration of envelope convection into the He-rich buffer of the star, called the Third Dredge Up (TDU). This last phenomenon has now been observed to occur over the whole range of the masses here considered and down to about 1 M$_{\odot}$~ stars \citep{shetye1}. The mentioned suggestions on $s$-processing have been verified in some detail directly on the observations of carbon stars \citep{abia1, abia2, abia3}. 
Formation of the $^{13}$C~ reservoir follows the retreat of shell convective instabilities developing in the He-layers \citep{stra1, her1} and requires that an important fraction of the He-rich buffer (hereafter {\it He-intershell} zone) be swept by the penetration of protons from the envelope \citep[][hereafter paper I]{t+14}, within the time interval over which the bottom border of envelope convection reaches its maximum downward expansion (see Figure \ref{fig:tdu}, for a star of solar metallicity \footnote{We remind the reader that the heavy element content of a star relative to the Sun, i.e. its {\it metallicity}, is commonly indicated in logarithmic notation, with the parameter ${\rm [Fe/H]} = \log(X({\rm Fe})/X({\rm H}))_{\rm star} - \log(X({\rm Fe})/X({\rm H}))_{\odot}$}). \begin{figure}[t!!] \includegraphics[width=0.85\columnwidth]{figure1.pdf} \caption{The first occurrence of the envelope penetration in a Third Dredge Up episode, computed with the FRANEC code for a 2 M$_{\odot}$~ star of solar composition. The figure shows in red the innermost border of the convective envelope ($M_{\rm CE}$) and in blue the position of the H/He interface ($M_{\rm H}$), with its characteristic minimum ({\it post-flash dip}) before H-burning restarts. The parameter $t_0$ is the stellar age at the moment of the TDU episode shown. Note how, of the rather long duration of the post-flash dip ($\sim 10^4$ yr), only a very short fraction is occupied by TDU ($\sim$ 100 yr). \label{fig:tdu}} \end{figure} The limited duration in time of the TDU phenomenon ($\simeq$ 100 yr) plays in favor of rather fast mixing mechanisms (of the order of a few m/sec). This can be e.g. 
achieved in the case of the unimpeded buoyancy of magnetic flux tubes, which may occur given the specific polytropic structure of high index ($n \ge 3$) and the fast density decline ($\rho \propto r^k$, with $k \ll -1$), prevailing in the radiative layers below the convective envelope, which provide an exact analytical solution to MHD equations in the form of free, accelerating expansion \citep{nb}. We note that the alternative to looking for exact solutions in a simplified geometry would be to perform detailed 3D numerical simulations, an approach that has indeed seen important attempts \citep[see e.g.][and references therein]{stan2}, but is much more difficult to implement in general. The first neutron-capture nucleosynthesis computations presented in the MHD scenario \citep[][]{t+16} demonstrated that the peculiar profile of $^{13}$C~ in the reservoirs thus formed was suitable for reproducing detailed isotopic patterns in presolar grains that could not be fitted otherwise, in agreement with previous indications by \citet{liu}. This was shown to be possible in a framework that could also mimic the solar $s$-process distribution; it was also suggested how extrapolations of that model could satisfy other observational requirements. In this contribution we want to extend the work presented in \citet[][hereafter paper II]{t+16} by computing the formation of MHD-induced $^{13}$C~ pockets over a wide range of metallicities and for FRANEC evolutionary models of 1.5 to 3 M$_{\odot}$. In so doing, we also provide analytic fits to the extension of the $^{13}$C~ pockets, in order to make our results easily reproducible by others. We outline our approach and assumptions, as well as the mentioned analytic fits, together with general results as a function of mass and metallicity, in Section 2. 
Subsequently, in Section 3 we discuss how our results can be used, once weighted in mass and time using a common choice for the Initial Mass Function (IMF) and a Star Formation Rate (SFR) taken from the literature, in accounting for the abundances gradually built by Galactic evolution and now observed in stars of the Galactic disk, from our Solar System to the most recent stellar populations of young open clusters. This is a synthetic anticipation of a more extended work on chemo-dynamical Galactic evolution that we are pursuing. Finally, in Section 4 we compare the abundance distributions produced by our models at the surface of evolved stars with some of the observational constraints available either from actual AGB and post-AGB stars of various families, or from the isotopic analysis of trace elements in presolar grains. The series of comparisons presented in this paper has the final goal of guaranteeing that our predictions can be safely and robustly verified. As a consequence of these checks, our mixing prescriptions have now been delivered for direct inclusion into full stellar models of the FUNS series \citep{cris3}: preliminary results of such an inclusion were presented in a recently published paper \citep{diego}. \section{Modeling the TP-AGB evolution and its nucleosynthesis} \subsection{The stellar models and the proton mixing} In paper II we implemented the exact solution of MHD equations, as found by \citet{nb} in the form of free buoyancy for magnetic flux tubes, and on that basis we computed neutron-capture nucleosynthesis in a star of 1.5 M$_{\odot}$~ of a metallicity slightly lower than solar. Then we adopted the ensuing model as a proxy for stars in the mass range from 1 to 3 M$_{\odot}$, at Galactic-disk metallicities. This preliminary, simplified extrapolation was motivated by previous suggestions advanced in \citet{mai1, mai2} and in paper I. 
In these works, starting from parameterized extensions of the $^{13}$C~ pocket, it had actually been found that the nuclear yields of a star characterized by parameters (mass, initial metallicity, extension of the $^{13}$C~ reservoir) very similar to those of the model shown in paper II would provide a reasonable approximation to the average ones in the Galactic disk, thus mimicking the enrichment of the Solar System and of recent stellar populations in neutron-rich nuclei. We now want to complete the job by estimating, for a wide metallicity range ($-1.3 \leq$ [Fe/H] $\leq 0.1$) and for three reference stellar masses (1.5, 2.0 and 3.0 M$_{\odot}$, computed ad hoc with the FRANEC evolutionary code) the extensions of the $^{13}$C~ pockets formed in the hypotheses of paper II. In particular, we assumed that proton penetration from the envelope into the He-rich layers, at every TDU episode, occurs as a consequence of the activation of a stellar dynamo, with the ensuing buoyancy into the envelope of highly magnetized structures. The latter push down poorly magnetized material, forcing it into the radiative He-rich layers. As mentioned previously, the stellar models were computed with the FRANEC evolutionary code, which uses the Schwarzschild criterion for convection: for a description of the physical assumptions characterizing the code, see e.g. \citet{str}. It is today generally recognized that some form of extension of the convective border with respect to a pure Schwarzschild limit is needed \citep{frey}; see on this point the discussion by \citet{vent} and the references cited therein. However, as we want here to investigate how the $^{13}$C~ $n$-source is formed in MHD-driven mechanisms, our approach must be that of attributing any extension of such a border to magnetic effects, without the admixture of different schemes, each based on its own free parameters, which would make the disentangling of different effects ambiguous. 
Only after this work is completed are we entitled to check for possible changes induced by a different treatment of the convective extension, as is in fact done in \citet{diego}. Originally, our models adopted a Reimers criterion for mass loss, with the parameter $\eta$ set to 1.0 for 1.5 and 2.0 M$_{\odot}$~ models, and to 3 for 3.0 M$_{\odot}$~ models. In making post-process computations for nucleosynthesis, we adopted instead the more efficient mass loss rates of the FRUITY repository. The rate of mass loss through stellar winds remains in any case largely unknown; this fact introduces important uncertainties on the composition of the stellar envelopes \citep{stan5}. In this context, it will be necessary to compute not only the average abundances gradually formed in the envelopes, but also those of the He-shell material cumulatively transported by TDU episodes. This represents an $s$-process-enhanced, C-rich phase averaged over the efficiency of mixing, which has been called the {\it G component} for many years now \citep{zinner}. {In our models, the $G$-component carries abundances very similar to those of flux tubes that, due to strong magnetic tension, would survive destruction by turbulence in the convective envelope, opening later in the wind, as occurs in the Sun \citep{pinto}. There is actually some support in the current literature for the existence of such magnetized wind structures in evolved stars \citep{sk03, sab15, ros91, ros92}. Such blobs would maintain an unmixed C- and $s$-process rich composition, typical of the He-intershell zone, even when the rest of the envelope is O-rich. 
In our models, using the $G$-component to approximate the abundances of this wind phase is feasible because of the high neutron fluences generated in each pulse-interpulse cycle, which usually allow the effects of the most recent nucleosynthesis episode to dominate over the previous ones.} The relevance of this approximation will become clear in considering the isotopic admixtures of presolar SiC grains enriched in $s$-process elements (see section 4.1). \subsection{The $^{13}$C~ pocket and the ensuing nucleosynthesis} In the above approach, the extension and profile of any proton reservoir formed are computed according to the formulation of paper II (see there equations from 14 to 17), after verifying that the required conditions, stated in \citet{nb}, are satisfied. The occurrence of the proper physical conditions was ascertained as follows. \begin{figure*}[t!!] \center{ \includegraphics[width=0.6\linewidth]{figure2.pdf} \caption{Examples of how the density distributions below the formal convective envelope bottom (defined by the Schwarzschild's criterion) agree with the requirement of being exact power laws of the radius with large ($k < -3$) negative exponents. The cases shown are from the second and fifth TDU occurrence in a 2 M$_{\odot}$~ star of half-solar metallicity and from the first and fourth TDU occurrence in a 3 M$_{\odot}$~ star of one-third-solar metallicity. The fitting power laws are shown, with their regression coefficients. Also indicated are the resulting masses $M_p$ of the proton reservoirs, expressed in units of 10$^{-3}$ M$_{\odot}$. 
In models for the largest stellar masses considered (3 M$_{\odot}$) the validity of the solution is generally limited to layers considerably thinner than for lower mass models.}\label{fig:pockets}} \end{figure*} i) The MHD solution found in \citet{nb} and expressed in equation (5) of paper II ($\rho(r) \propto r^{k}$, with $k < -1$) was verified on the model structures computed with the stellar code, for every TDU occurrence (in the He-rich layers $k$ turns out to be always lower than $-3$). Since the condition on $k$ derives from an exact solution, we expect the regression coefficients to be close to 1. They are always larger than 0.98 over about half of the intershell region (in mass). In the layers of that zone where protons penetrate according to the equations of paper II they are even larger, reaching typical values as indicated in Figure \ref{fig:pockets}. ii) Over the layers thus selected, we computed the kinematic viscosity $\eta$ from the approach by \citet{scheck}, as: $$ \eta(r) \propto \frac{v_{th}(r)}{n(r)\sigma(r)} \eqno(1) $$ where $v_{th}(r)$ is the local thermal velocity and $\sigma(r)$ is the ionic cross section, $\pi l(r)^2$, $l$ being proportional to the de Broglie wavelength, $l \propto h/(m v_{th})$. From this, we estimated the dynamical viscosity: $$ \mu(r) = \eta(r) \rho(r) \eqno(2) $$ We then assumed from \citet{spiz} and \citet{scheck} the value of the magnetic Prandtl number, as: $$ P_m(r) = \frac{\eta}{\nu} \simeq 10^{-5} \frac{T(r)^4}{n(r)} \eqno(3) $$ (where $\nu$ is the magnetic viscosity) and verified that values of $P_m$ were always much larger than unity (they turned out to be always larger than 10 over the selected layers). iii) We also required that the third condition posed by \citet{nb} be verified, namely that the region of interest had a {\it low} value for the dynamical viscosity (as defined in equation 2), i.e. values of $\mu$ much smaller than in more internal regions. 
This condition was again found to hold easily, due to the steep growth of the density in the innermost layers of the He-intershell zone. A few examples of the fits obtained under the above conditions, with the resulting masses for the proton reservoirs are shown in Figure \ref{fig:pockets}. \begin{deluxetable*}{c|ccc|ccc|ccc} \tablenum{1} \tablecaption{The coefficients of the parabolic dependence of $M_{p}$ versus $m_{\rm H}$ for three different metallicities and three different stellar masses. \label{tab:tab1}} \tablewidth{0pt} \tablehead{\multicolumn9c{Coefficients of $M_{p}$ in equation (4) }} \startdata {} & \multicolumn3c{M = 1.5 M$_{\odot}$} & \multicolumn3c{M = 2.0 M$_{\odot}$} & \multicolumn3c{M = 3.0 M$_{\odot}$} \\ \hline [Fe/H] & $a$ & $b$ & $c$ & $a$ & $b$ & $c$ & $a$ & $b$ & $c$ \\ \hline -0.50 & -1.3300 & 1.7040 & -0.5414 & -0.5070 & 0.6680 & -0.2164 & 0.019 & 0.0426 & -0.0366 \\ -0.30 & -2.0050 & 2.4775 & -0.7594 & 0.9646 & -1.2658 & 0.4180 & -0.8647 & 1.0337 & -0.3042 \\ 0.00 & -6.6146 & 8.3383 & -2.6217 & -1.3566 & 1.7393 & -0.5527 & -0.1373 & 0.1396 & -0.0298 \\ \enddata \end{deluxetable*} As mentioned, the proton distribution resulting in the above layers can be computed by the equations presented in paper II. The extensions in mass $M_p$ of the reservoirs vary in roughly quadratic (parabolic) ways as a function of the core mass $m_{\rm H}$ (which specifies the moment in the AGB evolution when a given TDU episode occurs). Regression coefficients are in this case between 0.97 and 0.99. Indicating by $f$ the relative metallicity in a linear scale ($f = {\rm Fe}/{\rm Fe}_{\odot} = 10^{\rm [Fe/H]}$), we can then write: $$ M_{p} = a(f)\cdot m_{\rm H}^2 + b(f)\cdot m_{\rm H} +c(f) \eqno(4) $$ where the three coefficients $a(f)$, $b(f)$, $c(f)$ are presented in Table \ref{tab:tab1}. 
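Equation (4), together with the coefficients of Table \ref{tab:tab1}, allows the pocket mass to be evaluated directly. A minimal sketch (coefficients transcribed from the table; the core mass 0.60 M$_{\odot}$~ is an arbitrary illustrative choice, and the units of $M_p$ follow the table's convention) is:

```python
# Coefficients (a, b, c) of M_p = a*m_H^2 + b*m_H + c (equation 4),
# transcribed from Table 1 and keyed by (stellar mass / Msun, [Fe/H]).
COEFFS = {
    (1.5, -0.50): (-1.3300,  1.7040, -0.5414),
    (1.5, -0.30): (-2.0050,  2.4775, -0.7594),
    (1.5,  0.00): (-6.6146,  8.3383, -2.6217),
    (2.0, -0.50): (-0.5070,  0.6680, -0.2164),
    (2.0, -0.30): ( 0.9646, -1.2658,  0.4180),
    (2.0,  0.00): (-1.3566,  1.7393, -0.5527),
    (3.0, -0.50): ( 0.0190,  0.0426, -0.0366),
    (3.0, -0.30): (-0.8647,  1.0337, -0.3042),
    (3.0,  0.00): (-0.1373,  0.1396, -0.0298),
}

def pocket_mass(mass, feh, m_h):
    """Evaluate the parabolic fit (4) for the pocket mass M_p at core
    mass m_h (in Msun); valid only inside the fitted range."""
    a, b, c = COEFFS[(mass, feh)]
    return a * m_h**2 + b * m_h + c

# Illustrative evaluation at an assumed core mass of 0.60 Msun:
print(pocket_mass(2.0, 0.00, 0.60))
```

As the text notes, these parabolas are pure fits and must not be extrapolated beyond the range of core masses actually spanned by the models.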
We fitted the dependence of the coefficients on $f$ through further quadratic forms only for the sake of illustration, although there is no physics in this procedure: it is just an analytic formulation given for convenience. The dependence on the two parameters $m_{\rm H}$ and $f = {\rm Fe}/{\rm Fe}_{\odot}$ is then shown in Figure \ref{fig:pockmass} for the reference models, over a range of metallicities typical of the Galactic disk. \begin{figure*}[t!!] \begin{center} \includegraphics[height=\textheight]{figure3.pdf} \caption{A synthetic view of the dependence of the pocket mass $M_p$ on the core mass ($m_{\rm H}$) and the relative metallicity ($f$ = X(Fe)/X(Fe)$_{\odot}$) for the reference models, at typical Galactic disk metallicities. The plots are pure fits to model results and cannot be extrapolated beyond their limits. Note that the apparent {\it spike} at the low left end in the bottom panel is real: in the FRANEC code, at low metallicity, a 3.0 M$_{\odot}$~ model star behaves almost as an IMS, with TDU (hence also $^{13}$C~ pockets) starting at relatively high values of the core mass.\label{fig:pockmass}} \end{center} \end{figure*} Once the extension of the $^{13}$C~ reservoirs and the profile in them of the $^{13}$C~ abundance were estimated for every TDU episode of the mentioned stellar models, we used the NEWTON post-process code (see paper II) for computing the nucleosynthesis induced by the combined activation of the neutron sources $^{13}$C($\alpha$,n)$^{16}$O~ and $^{22}$Ne($\alpha$,n)$^{25}$Mg. As in previous papers of this series, the post-process code carefully imports from the stellar model the relevant physical parameters (extension of the intermediate convective layers at TPs, their temperature and density profiles in mass and time, the timing and extension of dredge-up phenomena, the mass of the envelope, gradually eroded by the growth of the core mass m$_{\rm H}$ and by mass loss, etc.). 
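Condition (i) of the previous subsection, the near-exact power-law behaviour of the density, reduces numerically to a linear regression in log--log space; a self-contained sketch on synthetic data (standing in for an actual FRANEC model structure) is:

```python
import numpy as np

def fit_power_law(r, rho):
    """Fit rho = C * r^k by linear regression of log(rho) on log(r);
    returns the exponent k and the regression coefficient r^2."""
    x, y = np.log(r), np.log(rho)
    k, logC = np.polyfit(x, y, 1)       # slope first, then intercept
    resid = y - (k * x + logC)
    r2 = 1.0 - resid.var() / y.var()
    return k, r2

# Synthetic density profile with k = -3.5 plus 1% noise, standing in
# for the He-intershell layers below the convective border.
rng = np.random.default_rng(1)
r = np.linspace(1.0, 2.0, 200)          # radius in arbitrary units
rho = r**-3.5 * (1.0 + 0.01 * rng.standard_normal(r.size))

k, r2 = fit_power_law(r, rho)
print(f"k = {k:.2f}, r^2 = {r2:.3f}")   # k near -3.5, r^2 near 1
```

On the stellar structures, regression coefficients this close to unity are what justifies applying the exact MHD solution of \citet{nb} layer by layer.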
Reference solar abundances were taken from \citet{lod1}. The cross section database is from the on-line compilation KADONIS, v0.3 \citep{dil1}, supplemented by the upgrade KADONIS v1.0 \citep{dil2}. These extensions require some careful consideration, in particular for the stellar enhancement factors (SEFs). Indeed, in KADONIS v1.0, these corrections are accompanied by the alternative suggestions proposed by \citet{rauscher}, called $X-factors$. The latter are sometimes sharply different from the traditional SEFs used by many groups, and also by us so far. The choice of which stellar corrections to apply therefore requires some iterative checks, to be performed on preliminary computations (see later and Figure \ref{fig:singlemod}). Recent results from the n$\_$TOF collaboration \citep{ntof0,ntof1} are also included. Concerning weak interactions, their rates are normally taken from \citet{taka1, taka2}. For the cases in which bound-state decays are known to occur in ionized plasmas, the corrections discussed in \citet{taka3} and in Table V of \citet{taka1} were applied, although these are probably still insufficient to mimic real stellar conditions. On this point we must remember that weak interaction rates in stars remain the largest source of uncertainty in $s$-processing. This is e.g. evident in the important case of the $s$-only nucleus $^{187}$Os (produced by decay of its parent $^{187}$Re, which in the laboratory is a very long-lived isotope, with $t_{1/2}^{lab} \simeq$ 40 Gyr). The parent is known to undergo a bound-state decay in stellar conditions: due to this fact, when it is completely ionized it decays in a few tens of years \citep{lit}. The problem is that $s$-process conditions should correspond to partial ionization, and one would need to interpolate over nine orders of magnitude.
We adopted here the Local Thermodynamic Equilibrium approach by \citet{taka1} for the dependence of the $^{187}$Re rate on temperature and density, but the treatment is largely insufficient. It is moreover unknown what kind of change is induced on the $^{187}$Re half-life by the process of astration, by which the nucleus enters successive stellar generations and is brought to high temperature: its decay rate certainly increases there, but we do not know by how much. As a consequence, in our models $^{187}$Os is produced only partly (see Figure 4b and later Figure 14). We are forced, in this respect, to accept that the quantitative reproduction of its abundance must wait for new measurements in conditions simulating the stellar ones. Similar uncertainties exist for several nuclei immediately following reaction branchings in the $s$-process path, where weak interaction rates dominate the production. A remarkable example is that of the couple $^{176}$Lu-$^{176}$Hf (see Figure \ref{fig:luhf}). The first nucleus is a long-lived isotope in laboratory conditions \citep[with a half-life of 36 Gyr, see][]{soder}. In stars it presents a short-lived isomeric state. A direct link of this state with the ground state via dipole transitions is forbidden at the temperature where $^{13}$C~ burns (0.9$\cdot 10^8$ K), so that the two states of $^{176}$Lu effectively behave as separate nuclei. However, when in the AGB environment a thermal instability develops, the locally high temperature ($T \ge 3 \cdot 10^8$ K) can excite a number of overlying mediating states and the isomeric level gets thermalized. These complications, studied in detail by \citet{klay1, klay2, k6}, add to the fact that the half-life above about 20$-$22 keV becomes very short and dependent on temperature, so that $^{176}$Lu actually behaves as a thermometer \citep{klay2}.
In our results, $^{176}$Lu and its daughter $^{176}$Hf are formally inside a $reasonable$ general error bar of $\simeq$ 15\% (see Figures \ref{fig:singlemod} and \ref{fig:bestdist}), but at the extremes of it; any further improvement must wait for better nuclear inputs. Another important case of weak interaction effects concerns $^{134}$Cs, whose $\beta^-$ decay to $^{134}$Ba is crucial in fixing the abundance ratio of the two $s$-only isotopes $^{134}$Ba and $^{136}$Ba. The value taken from \citet{taka1} and its temperature dependence are certainly very uncertain \citep[see discussion in][]{k+90}. Again, we must be content that, despite this, the model abundances of $^{134}$Ba and $^{136}$Ba lie within the general fiducial error bar; improving on this would require dedicated experimental data. In fact, some of the relevant radioactive nuclei along the $s$-path likely to be affected by these specific uncertainties are now in the program of the new experiment PANDORA \citep{pan}, which will start in 2022, measuring decay rates in ionization conditions as similar as possible to the stellar ones. As already suggested in paper II, $s$-processing in the Galaxy has the remarkable property that one can identify a specific model (characterized by a low initial mass and a metallicity typical of the Galactic disk within 2 $-$ 3 Gyr before the Sun is formed) which roughly simulates the solar distribution. Figure \ref{fig:singlemod} shows one such case, where indeed the abundances of $s$-only nuclei (indicated by heavy squared dots) have similar production factors, roughly averaging at about 1000. This model grossly represents a sort of average of what can be more properly obtained by the chemical evolution of the Galaxy, but this last is of course needed to account for the different effects of stellar temperatures in differently massive stars.
The $average$ model is however a suitable preliminary calculation to be performed, on which to test tentatively the uncertain corrections to cross sections mentioned above. The top panel of Figure \ref{fig:singlemod} (panel a) shows results computed using the corrective $X-factors$ to cross sections from \citet{rauscher}. It is evident that, although the distribution for $s$-only nuclei (red dots) is rather flat (as it should be for producing them in solar proportions), there exist cases in which adjacent $s$-only nuclei show considerably discrepant abundances (see the red marks). If one normalizes the average to 1.0, then these discrepancies become evident in particular for $^{134}$Ba (1.21) and $^{136}$Ba (0.89); then for $^{148}$Sm (1.20) and $^{150}$Sm (1.01) and for $^{176}$Lu (0.95) and $^{176}$Hf (0.73). All these are complex cases, affected also by large uncertainties on $\beta^-$-decay rates (for $^{134}$Cs, $^{149}$Sm, $^{176}$Lu$^g$ and its isomer $^{176}$Lu$^m$). For all the nuclei considered, however, a significant worsening of the distribution is induced by the application of the mentioned $X-factors$, which sometimes imply corrections opposite to the traditional SEFs. Although the uncertainties do not allow us to reject the $X-factor$ corrections in general, we made an alternative computation by replacing them with the usual SEFs, to verify which data set was more suitable for obtaining a solar-like distribution. The result of this test is presented in Figure \ref{fig:singlemod}, panel b), where it is shown that, for the three couples indicated before, some improvements are immediately obtained. The new ratios found in the $average$ model are: 1.09 ($^{134}$Ba), 0.92 ($^{136}$Ba), 1.08 ($^{148}$Sm), 0.99 ($^{150}$Sm), 1.11 ($^{176}$Lu), 0.88 ($^{176}$Hf). On this basis, we decided to adopt, in the rest of this paper, the common SEFs to cross sections, postponing a more detailed check of the corrections by \citet{rauscher} to a separate work.
\begin{figure*}[t!!] \begin{center} \includegraphics[width=0.85\textwidth]{figure4.pdf} \caption{The production factors of nuclei beyond Fe in the He-intershell layers for the model of a 1.5 M$_{\odot}$~ star of metallicity slightly lower than solar, mimicking rather well the solar distribution of $s$-only nuclei (red squared dots). The average overabundance for $s$-only nuclei is slightly higher than 1000 (dashed line: a general fiducial uncertainty of 15\% is indicated through the dotted lines). The other symbols (in blue) represent nuclei only partly contributed by $s$-processing. Panel a) shows the discrepancies found on close-by $s$-only nuclei in the preliminary computations made with the nuclear parameters mentioned in the text. The alternative choices we suggest allow us to reduce the scatter considerably, as shown in panel b). \label{fig:singlemod}} \end{center} \end{figure*} With the choices thus made, examples of abundance distributions in the envelopes at the end of the TP-AGB stage are shown in Figures \ref{fig:lastm1p5}, \ref{fig:lastm2} and \ref{fig:lastm3} for various stellar masses and metallicities. The number of TDU episodes found in the cases shown by the figures is reported in Table \ref{tab:tab2}. \begin{deluxetable*}{c|ccc|ccc|ccc} \tablenum{2} \tablecaption{General characteristics of the TDU phases where the conditions for forming $^{13}$C~ pockets are found, according to the criteria described in the text.
\label{tab:tab2}} \tablewidth{0pt} \tablehead{\multicolumn9c{Number of TDU episodes and their Maximum/minimum extension in mass}} \startdata {} & \multicolumn3c{M = 1.5 M$_{\odot}$} & \multicolumn3c{M = 2.0 M$_{\odot}$} & \multicolumn3c{M = 3.0 M$_{\odot}$} \\ \hline [Fe/H] & $N$ & $\Delta M_{min}$ & $\Delta M_{Max}$ & $N$ & $\Delta M_{min}$ & $\Delta M_{Max}$ & $N$ & $\Delta M_{min}$ & $\Delta M_{Max}$ \\ \hline -0.50 & 11 & 3.6$\cdot$10$^{-4}$ & 1.5$\cdot$10$^{-3}$ & 13 & 1.5$\cdot$10$^{-3}$ & 3.6$\cdot$10$^{-3}$ & 11 & 2.4$\cdot$10$^{-4}$ & 3.7$\cdot$10$^{-3}$ \\ -0.30 & 10 & 7.2$\cdot$10$^{-4}$ & 1.5$\cdot$10$^{-3}$ & 13 & 2.2$\cdot$10$^{-5}$ & 1.7$\cdot$10$^{-3}$ & 13 & 1.0$\cdot$10$^{-4}$ & 2.2$\cdot$10$^{-3}$ \\ 0.00 & 11 & 1.5$\cdot$10$^{-4}$ & 7.1$\cdot$10$^{-4}$ & 12 & 3.8$\cdot$10$^{-4}$ & 1.6$\cdot$10$^{-3}$ & 17 & 1.0$\cdot$10$^{-4}$ & 2.1$\cdot$10$^{-3}$ \\ \enddata \end{deluxetable*} \begin{figure}[t!!] \begin{center} \includegraphics[width=0.85\columnwidth]{figure5.pdf} \caption{Abundances in the envelope at the last TDU episode computed, for stars of initial mass M = 1.5 M$_{\odot}$, at the indicated metallicities. \label{fig:lastm1p5}} \end{center} \end{figure} \begin{figure}[t!!] \begin{center} \includegraphics[width=0.85\columnwidth]{figure6.pdf} \caption{Same as in Figure \ref{fig:lastm1p5}, for models of 2.0 M$_{\odot}$. \label{fig:lastm2}} \end{center} \end{figure} \begin{figure}[t!!] \begin{center} \includegraphics[width=0.85\columnwidth]{figure7.pdf} \caption{Same as in Figure \ref{fig:lastm1p5}, for models of 3.0 M$_{\odot}$. 
\label{fig:lastm3}} \end{center} \end{figure} The distributions presented in Figures \ref{fig:lastm1p5} to \ref{fig:lastm3} illustrate the increase in the abundances of $n$-capture elements expected for metal-poor AGB stars, a trend that had been previously inferred from parametric models \citep[see e.g.][]{gal1} and that in general accounts for the observations of $s$-elements in AGB stars at different metallicity. This appears now to remain true even in computations where the $^{13}$C~ pocket is generated by a physical model not explicitly related to metallicity. This is so because the complex dependence of the pocket masses on the stellar parameters shown in Figure \ref{fig:pockmass} is not sufficient to erase the signatures impressed in the $s$-process distribution by the fact that, for a {\it primary-like} neutron source, the neutron exposure grows for decreasing abundances of the seeds (mainly Fe) on which the neutrons are captured. Since the suggestions by \citet{lb1,lb2}, it has been common to synthetically represent the $s$-process distribution in stars with the two indices $[ls/{\rm Fe}]$ and $[hs/{\rm Fe}]$, where {\it ls} stands for {\it light s-elements} and {\it hs} stands for {\it heavy s-elements}. They are built with the logarithmic abundances at the first and second $s$-process peak, near the neutron magic numbers $N=50$ and $N=82$. In particular, we follow suggestions by \citet{lb2} in assuming [$hs$/{\rm Fe}] = 0.25$\cdot$([Ba/Fe]+[La/Fe]+[Nd/Fe]+[Sm/Fe]), and by \citet{b+01} in assuming [$ls$/{\rm Fe}]=0.5$\cdot$ ([Y/Fe]+[Zr/Fe]). Their difference, $[hs/ls] = [hs/{\rm Fe}] - [ls/{\rm Fe}]$, is an effective indicator of the neutron exposure, with low neutron fluences feeding primarily the $[ls]$ abundances and high fluences making the $[hs]$ indicator prevail.
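As a minimal sketch of these definitions (the [X/Fe] inputs below are invented, purely illustrative numbers, not model output):

```python
# Illustrative computation of the [ls/Fe], [hs/Fe] and [hs/ls] indices
# from individual [X/Fe] logarithmic abundance ratios.
# All input values are invented for illustration only.

def ls_fe(y_fe, zr_fe):
    """[ls/Fe] = 0.5*([Y/Fe] + [Zr/Fe])."""
    return 0.5 * (y_fe + zr_fe)

def hs_fe(ba_fe, la_fe, nd_fe, sm_fe):
    """[hs/Fe] = 0.25*([Ba/Fe] + [La/Fe] + [Nd/Fe] + [Sm/Fe])."""
    return 0.25 * (ba_fe + la_fe + nd_fe + sm_fe)

ls = ls_fe(0.8, 1.0)              # hypothetical first-peak excesses
hs = hs_fe(1.2, 1.1, 1.0, 0.9)    # hypothetical second-peak excesses
hs_ls = hs - ls                   # neutron-exposure indicator [hs/ls]
print(ls, hs, hs_ls)              # high exposure pushes [hs/ls] up
```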
In Figures \ref{fig:lsfe}, \ref{fig:hsfe} and \ref{fig:hsls} the behavior of neutron-capture elements as a function of metallicity in our models is illustrated through the indices thus defined. As clarified many years ago \citep{gal1}, the heaviest $s$-process isotopes (the so-called $strong$ component of the $s$-process), and in particular $^{208}$Pb, grow with metallicity with a trend that is steeper than for the $[hs]$ nuclei, the heaviest isotopes being located at another magic number of neutrons ($N = 126$). This property is preserved in the present scenario. Figure \ref{fig:pbhsagb} shows the further increase at low metallicity of lead with respect to the $[hs]$ nuclei, the latter representing the $N=82$ neutron magic number. We notice that the trend of $[{\rm Pb}/hs]$ shown in the figure is in good agreement with what is obtained by a few current models with parametric extra-mixing, like e.g. those of the STAREVOL and FRUITY collaborations. For this, see in particular \citet{desmedt4} and Figure 14 in \citet{desmedt2}. \section{Reproducing constraints from the Sun and recent stellar populations} The effects of an efficiency in $s$-processing that increases toward lower metallicities are then mediated by the rate at which stars form as a function of the metal content of the Galaxy and of its age, these two parameters being linked by an extremely non-linear and probably non-unique \citep{casali} relation. In order to mimic abundances in the solar neighborhood, we used an Age-Metallicity relation that was shown to be valid for that region \citep{mai1,mai2} to switch between the original metal content of a stellar model and the Galactic age to which its formation roughly corresponds. The results are shown in Figures \ref{fig:elage1p5} and \ref{fig:elage3} for the extreme cases of stellar masses M=1.5 M$_{\odot}$~ and M=3.0 M$_{\odot}$.
As Figure \ref{fig:elage1p5} illustrates (see its panel a), for low stellar masses the production factors of elements near the two main $s$-process abundance peaks remain very similar, confined within a small range of less than 0.3 dex, for quite a long period in the Galactic disk before the solar metallicity [Fe/H] $= 0$ is reached (a few Gyr). This is not the case for larger masses (see Figure \ref{fig:elage3}), which however have a lower weight in the IMF. As discussed in \citet{mai2}, this condition is essential to permit a growth of $s$-element abundances that continues after the epoch of solar formation, maintaining a roughly constant $[hs/ls]$ ratio, as observed in Young Open Clusters. We are therefore confident that our scenario fulfills this basic constraint of Galactic chemical evolution, previously met only by varying parametrically the amount of $^{13}$C~ burnt and its distribution. \begin{figure}[t!!] \begin{center} \includegraphics[width=0.85\columnwidth]{figure8.pdf} \caption{Production factors for elements at the abundance peak near the magic neutron number $N=50$, as summarized by the $[ls/{\rm Fe}]$ indicator, for stellar masses $M =$ 1.5, 2.0 and 3.0 M$_{\odot}$~ at various metallicities. \label{fig:lsfe}} \end{center} \end{figure} \begin{figure}[t!!] \begin{center} \includegraphics[width=0.85\columnwidth]{figure9.pdf} \caption{Production factors for elements at the abundance peak near the magic neutron number $N=82$, as summarized by the $[hs/{\rm Fe}]$ indicator, for stellar masses $M =$ 1.5, 2.0 and 3.0 M$_{\odot}$~ at various metallicities. \label{fig:hsfe}} \end{center} \end{figure} \begin{figure}[t!!] \begin{center} \includegraphics[width=0.85\columnwidth]{figure10.pdf} \caption{The logarithmic ratios of abundances at the two main $s$-process peaks for $N=50$ and $N=82$, as summarized by the $[hs/ls]$ indicator, for the same cases shown in the previous figures. \label{fig:hsls}} \end{center} \end{figure} \begin{figure}[t!!]
\begin{center} \includegraphics[width=0.85\columnwidth]{figure11.pdf} \caption{Relative production factors for Pb and for elements at the abundance peak near the magic neutron number $N=82$, as summarized by the $[{\rm Pb}/hs]$ indicator, for stellar masses $M=$ 1.5, 2.0 and 3.0 M$_{\odot}$~ at various metallicities. \label{fig:pbhsagb}} \end{center} \end{figure} We have for the moment simulated such a chemical evolution by weighting the stellar production factors of our models for Galactic disk metallicities. We considered for this the range -1.0 $\lesssim$ [Fe/H] $\lesssim$ 0.0, i.e. excluding the most metal-poor and the most metal-rich (super-solar) models. We adopted a Salpeter initial mass function (IMF) and the mentioned history of the star formation rate (SFR) from \citet{mai2}. The result, once normalized to 1 for the average production factor of $s$-only nuclei, is presented in Table \ref{tab:tab3} and in Figure \ref{fig:bestdist}. This last shows that the solar abundances of $s$-only nuclei are reproduced at a sufficient level of accuracy. There is in fact a slight asymmetry, a sort of minor deficit of $s$-only nuclei below A $\simeq$ 130 with respect to the average distribution, as sometimes found in the past. It was ascribed to an unknown {\it primary} nucleosynthesis process supplementing neutron captures \citep{claudia}. However, in our results the asymmetry remains within the relatively small limits set by abundance and nuclear uncertainties, so that no real conclusion in favor of possible different nuclear processes can be drawn from our findings, confirming previous indications by \citet{mai1, cris2b, nikos}. \begin{deluxetable*}{ccc|ccc|ccc|ccc} \tablenum{3} \tablecaption{Percentages from the $s$-Process Main Component\label{tab:tab3}} \tablewidth{0pt} \tablehead{} \startdata \hline {A} & Elem. & Perc. MC & A & Elem. & Perc. MC & A & Elem. & Perc. MC & A & Elem. & Perc.
MC \\ \hline 58 & Fe & 0.031 & 63 & Cu & 0.014 & 64 & Ni & 0.047 & 64 & Zn & 0.006 \\ 65 & Cu & 0.038 & 66 & Zn & 0.026 & 67 & Zn & 0.037 & 68 & Zn & 0.059 \\ 69 & Ga & 0.083 & 70 & Ge & 0.132 & 70 & Zn & 0.005 & 71 & Ga & 0.129 \\ 72 & Ge & 0.135 & 73 & Ge & 0.087 & 74 & Ge & 0.137 & 75 & As & 0.088 \\ 76 & Se & 0.213 & 76 & Ge & 0.001 & 77 & Se & 0.092 & 78 & Se & 0.145 \\ 79 & Br & 0.116 & 80 & Kr & 0.159 & 81 & Br & 0.142 & 82 & Kr & 0.409 \\ 82 & Se & 0.001 & 83 & Kr & 0.131 & 84 & Kr & 0.123 & 85 & Rb & 0.140 \\ 86 & Kr & 0.207 & 86 & Sr & 0.776 & 87 & Rb & 0.195 & 87 & Sr & 0.781 \\ 88 & Sr & 0.915 & 89 & Y & 0.843 & 90 & Zr & 0.664 & 91 & Zr & 0.780 \\ 92 & Zr & 0.757 & 93 & Nb & 0.660 & 94 & Zr & 0.976 & 95 & Mo & 0.495 \\ 96 & Mo & 1.033 & 96 & Zr & 0.106 & 97 & Mo & 0.564 & 98 & Mo & 0.750 \\ 99 & Ru & 0.262 & 100 & Ru & 1.021 & 101 & Ru & 0.160 & 102 & Ru & 0.430 \\ 103 & Rh & 0.130 & 104 & Pd & 1.078 & 104 & Ru & 0.012 & 105 & Pd & 0.139 \\ 106 & Pd & 0.510 & 107 & Ag & 0.015 & 108 & Pd & 0.622 & 109 & Ag & 0.241 \\ 110 & Cd & 0.975 & 110 & Pd & 0.014 & 111 & Cd & 0.231 & 112 & Cd & 0.493 \\ 113 & Cd & 0.342 & 114 & Cd & 0.614 & 115 & In & 0.352 & 116 & Sn & 0.874 \\ 116 & Cd & 0.059 & 117 & Sn & 0.494 & 118 & Sn & 0.727 & 119 & Sn & 0.395 \\ 120 & Sn & 0.811 & 121 & Sb & 0.376 & 122 & Te & 0.901 & 122 & Sn & 0.350 \\ 123 & Te & 0.953 & 123 & Sb & 0.042 & 124 & Sn & 0.002 & 124 & Te & 0.967 \\ 125 & Te & 0.224 & 126 & Te & 0.453 & 127 & I & 0.048 & 128 & Xe & 0.974 \\ 128 & Te & 0.022 & 129 & Xe & 0.038 & 130 & Xe & 1.011 & 130 & Te & 0.001 \\ 131 & Xe & 0.078 & 132 & Xe & 0.403 & 133 & Cs & 0.157 & 134 & Ba & 0.923 \\ 134 & Xe & 0.024 & 135 & Ba & 0.055 & 136 & Ba & 0.955 & 136 & Xe & 0.001 \\ 137 & Ba & 0.122 & 138 & Ba & 0.185 & 139 & La & 0.784 & 140 & Ce & 0.979 \\ 141 & Pr & 0.545 & 142 & Nd & 1.166 & 142 & Ce & 0.084 & 143 & Nd & 0.372 \\ 144 & Nd & 0.575 & 145 & Nd & 0.292 & 146 & Nd & 0.734 & 147 & Sm & 0.231 \\ 148 & Sm & 1.121 & 148 & Nd & 
0.093 & 149 & Sm & 0.141 & 150 & Sm & 1.039 \\ 150 & Nd & 0.000 & 151 & Eu & 0.090 & 152 & Gd & 0.611 & 152 & Sm & 0.243 \\ 153 & Eu & 0.053 & 154 & Gd & 1.250 & 154 & Sm & 0.005 & 155 & Gd & 0.064 \\ 156 & Gd & 0.191 & 157 & Gd & 0.127 & 158 & Gd & 0.033 & 159 & Tb & 0.078 \\ 160 & Dy & 1.140 & 160 & Gd & 0.006 & 161 & Dy & 0.059 & 162 & Dy & 0.166 \\ 163 & Dy & 0.036 & 164 & Dy & 0.169 & 165 & Ho & 0.085 & 166 & Er & 0.159 \\ 167 & Er & 0.095 & 168 & Er & 0.313 & 169 & Tm & 0.145 & 170 & Yb & 1.210 \\ 170 & Er & 0.044 & 171 & Yb & 0.156 & 172 & Yb & 0.352 & 173 & Yb & 0.271 \\ 174 & Yb & 0.581 & 175 & Lu & 0.181 & 176 & Lu & 1.219 & 176 & Hf & 0.930 \\ 176 & Yb & 0.040 & 177 & Hf & 0.167 & 178 & Hf & 0.572 & 179 & Hf & 0.422 \\ 180 & Hf & 0.983 & 181 & Ta & 0.907 & 182 & W & 0.696 & 183 & W & 0.665 \\ 184 & W & 0.744 & 185 & Re & 0.251 & 186 & Os & 1.102 & 186 & W & 0.277 \\ 187 & Re & 0.033 & 187 & Os & 0.492 & 188 & Os & 0.262 & 189 & Os & 0.042 \\ 190 & Os & 0.125 & 191 & Ir & 0.019 & 192 & Os & 0.009 & 192 & Pt & 0.999 \\ 193 & Ir & 0.013 & 194 & Pt & 0.049 & 195 & Pt & 0.020 & 196 & Pt & 0.117 \\ 197 & Au & 0.060 & 198 & Hg & 1.210 & 198 & Pt & 0.000 & 199 & Hg & 0.210 \\ 200 & Hg & 0.511 & 201 & Hg & 0.391 & 202 & Hg & 0.661 & 203 & Tl & 0.775 \\ 204 & Pb & 1.050 & 204 & Hg & 0.102 & 205 & Tl & 0.711 & 206 & Pb & 0.660 \\ 207 & Pb & 0.748 & 208 & Pb & 0.491 & 209 & Bi & 0.066 & & & \\ \hline \enddata \end{deluxetable*} \begin{figure}[t!!] \begin{center} \includegraphics[width=0.9\columnwidth]{figure12.pdf} \caption{Production factors of representative neutron-rich elements in stellar models of 1.5 M$_{\odot}$, for varying metallicity and Galactic age, using the Age-Metallicity relation mentioned in the text. The elements at the first (red) and second (blue) $s$-process peak (panel a) remain remarkably similar for a rather long time in the Galactic evolution that preceded solar formation (roughly between -4 and -1.5 Gyr). 
This is not true for the other elements (panel b, magenta lines): in particular, Pb has the peculiar trend of increasing steadily for decreasing metallicity. \label{fig:elage1p5}} \end{center} \end{figure} \begin{figure}[t!!] \begin{center} \includegraphics[width=0.9\columnwidth]{figure13.pdf} \caption{Same as in Figure \ref{fig:elage1p5}, for models of 3.0 M$_{\odot}$. Here the stationary behavior shown in panel a) of the previous figure is not present, except for a shorter time interval at earlier ages. \label{fig:elage3}} \end{center} \end{figure} \begin{figure}[t!!] \begin{center} \includegraphics[width=0.85\columnwidth]{figure14.pdf} \caption{Simulation of the chemical admixture operated by Galactic evolution, obtained by weighting our model abundances over a Salpeter IMF and a SFR taken from \citet{mai2}. Symbols and colors have the same meaning as in Figure \ref{fig:singlemod}. \label{fig:bestdist}} \end{center} \end{figure} The detailed $s$-process fractions listed in Table 3 can then be used for disentangling the $s$ and $r$ contributions to each isotope, as done, e.g., by \citet{ar9, nikos2}. Since the table contains predictions for fractional contributions to each isotope from the $s$-process Main Component, the maximum ratio one should get is obviously one, reserved for $s$-only isotopes that do not receive contributions from other components (i.e. those in the atomic mass range between about 90 and about 210). As Table 3 shows, this is not formally true, with the fractional productions slightly differing from unity. One can however notice that the global value of the variance in the distribution ($\sigma \lesssim$ 15\%) is in the range of general uncertainties known to exist on the product of the two sets of crucial parameters, solar abundances ($N_s$) and cross sections ($\sigma_{N_s}$); the limited residual problems can therefore be simply due to the effects of these uncertainties in the input data.
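The use of the Table \ref{tab:tab3} fractions for such an $s$/$r$ decomposition can be sketched as follows; the $^{139}$La fraction is taken from the table, while the solar abundance value below is an arbitrary placeholder in arbitrary units, not a recommended number:

```python
# Sketch of the s/r decomposition based on the Main-Component fractions
# of Table 3: N_r = N_solar * (1 - f_s), clipped at zero for the few
# isotopes whose fitted s-fraction slightly exceeds unity.
# The solar abundance used below is a placeholder (arbitrary units).

def r_residual(n_solar, f_s):
    """r-process residual of an isotope, given its solar abundance and
    the fractional s-process (Main Component) contribution f_s."""
    return n_solar * max(0.0, 1.0 - f_s)

f_s_la139 = 0.784     # Table 3 entry for 139La
n_solar = 100.0       # placeholder solar abundance (arbitrary units)
print(r_residual(n_solar, f_s_la139))   # ~21.6: about 22% left to the r process
```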
It is in any case worth noticing the cases of two special nuclei in Table 3. The first isotope we want to mention is $^{152}$Gd, sometimes indicated as an $s$-only isotope, whose production in our scenario amounts only to about 60\% of its solar concentration. In fact, we refrain from considering $^{152}$Gd as a real $s$-only isotope, as it is not shielded against $\beta^+$ decays and might receive a high contribution from the $p$-process. The second nucleus is $^{93}$Nb, which is produced at 66\% of its solar abundance. It is not shielded against $\beta$-decays, so that its production can be partially due also to the $p$-process, through $^{93}$Mo ($\beta^+$, $t_{1/2} \simeq$ 4000 yr), and to the $r$-process, through $^{93}$Zr ($\beta^-$, $t_{1/2} \simeq$ 1.5 Myr). In evolved stars its presence is normally seen in objects enriched by mass transfer from an AGB companion (i.e. in Ba-stars and their relatives, see section 4.2). It must however be noticed that a further source of $^{93}$Nb production can arise if a Ba-star evolves in its turn to the TP-AGB phase, thus undergoing $s$-enrichment for a second time. Such a phenomenon is in fact observed \citep{shetye2}; detailed computations in this last scenario will be necessary for determining the real $s$-process contribution to $^{93}$Nb. \begin{figure}[t!!] \includegraphics[width=1.1\columnwidth]{figure15.pdf} \caption{The neutron-capture path around $^{176}$Lu and $^{176}$Hf in the chart of the nuclei, when the isomeric state and the ground state of $^{176}$Lu are not thermalized. See text for comments. \label{fig:luhf}} \end{figure} \section{Constraints from presolar grains and evolved stars} Surface stellar abundances derived in the previous sections can then be compared with various constraints deriving either from spectroscopic observations of evolved stars or from the analysis of presolar grains of AGB origin \citep[primarily of the SiC grain family, see][]{lf95, davis}.
For both these fields a detailed, critical description of the database used would be necessary, implying rather long analyses that cannot be pursued here. Therefore we simply anticipate here some relevant examples, postponing more detailed analyses to separate contributions. \subsection{Crucial isotopic ratios in presolar {\rm SiC} grains} For presolar grains, results based on a preliminary extension of paper II, similar to the present one albeit more limited, were shown in \citet{GCA}. We refer to that paper for most of the details. Here we want to emphasize that a few isotopic ratios of trace elements measured recently in presolar SiC grains were crucial for excluding some previous parameterized scenarios for the formation of the $^{13}$C~ source. They were also used to suggest that models based on the indications contained in paper II had instead the characteristics required to account for the grain data \citep{liu, liu1}. Such measurements are therefore important tests to validate nucleosynthesis results {deriving} from any specific mixing scheme, as other general constraints on $s$-processing are much less sensitive to the details of mixing \citep{bun17}. Of particular importance in this respect turned out to be the isotopic ratio $^{92}$Zr/$^{94}$Zr versus $^{96}$Zr/$^{94}$Zr \citep{liu0}, as well as the correlation between $^{88}$Sr/$^{86}$Sr and $^{138}$Ba/$^{136}$Ba \citep{liu, liu1}. These data (expressed in the usual $\delta$ notation) are reproduced in Figures from \ref{fig:zrdelta2b} to \ref{fig:basrdelta2}, superimposed on our model sequences (see discussion below). \begin{figure*}[t!!] \includegraphics[width=0.85\textwidth]{figure16.pdf} \caption{The isotopic ratios $^{92}$Zr/$^{94}$Zr measured in presolar SiC grains, versus $^{96}$Zr/$^{94}$Zr, in $\delta$ units, taken from the measurements cited in the text.
They are compared with our model sequences for stellar atmospheres (left panel) and for the $G-component$ (right panel), for stars of M = 2 M$_{\odot}$. Lower masses would not differ much from the plot shown, except for the more limited extension of the C-rich phase. \label{fig:zrdelta2b}} \end{figure*} \begin{figure*}[t!!] \includegraphics[width=0.85\textwidth]{figure17.pdf} \caption{Same as in Figure \ref{fig:zrdelta2b}, but for model stars of M = 3 M$_{\odot}$. \label{fig:zrdelta2c}} \end{figure*} \begin{figure*}[t!!] \includegraphics[width=0.85\textwidth]{figure18.pdf} \caption{The isotopic ratios $^{88}$Sr/$^{86}$Sr observed in presolar SiC grains, versus $^{138}$Ba/$^{136}$Ba, expressed with the $\delta$ notation, taken from the measurements cited in the text, as compared with our model sequences for stellar atmospheres (left panel) and for the $G-component$ (right panel) of stars with M = 2.0 M$_{\odot}$. Again, lower mass models do not differ drastically from what is shown. The data are supplemented, with respect to \citet{GCA}, with those shown by \citet{liu2, stephan}.\label{fig:basrdelta1}} \end{figure*} \begin{figure*}[t!!] \includegraphics[width=0.85\textwidth]{figure19.pdf} \caption{Same as in Figure \ref{fig:basrdelta1}, but for model stars of M = 3 M$_{\odot}$. \label{fig:basrdelta2}} \end{figure*} The choice of what to define as a carbon-enhanced composition requires some comments. SiC grains are carbon-rich solids, so their formation is normal for an abundance ratio C/O larger than unity. However, recent work on non-equilibrium chemistry in circumstellar envelopes suggests that the simultaneous formation of both O-rich and C-rich molecules can occur for wide composition ranges of the environment, thanks to various complex phenomena, including the photo-dissociation and re-assembly of previously-formed compounds.
According to \citet{cher}, a two-component (silicate-carbon) dust may form when the C/O ratio is sufficiently high, even before it formally reaches unity, along the AGB evolution. These suggestions are consistent with the observations by \citet{olofsson}, who found carbon-rich compounds (like the HCN molecule) in O-rich environments. We therefore adopted a more relaxed constraint on the carbon enrichment of the envelope, imposing that C/O be larger than 0.8. The composition of the envelope is then largely influenced by mass loss rates, whose understanding is far from satisfactory. As mentioned before in this note, we can avoid being dominated by uncertainties in stellar wind efficiency if we use, alongside the envelope composition, also the composition of the $G-component$. {The importance of this last is enhanced by the already-mentioned fact that this extrapolated phase mimics well the abundances in flux tubes at each TDU episode, hence also that of possible flare-like phenomena disrupting magnetic structures in the winds and adding C-rich materials to them.} Figures from \ref{fig:zrdelta2b} to \ref{fig:basrdelta2} are therefore plotted in two panels, one representing the formal envelope composition when a C-rich situation (C/O $>$ 0.8) has been reached, the second showing the pure, C-rich $G$-component. From the figures it is clear that, indeed, at least this last offers excellent possibilities of reproducing the presolar grain data for critical Sr, Zr and Ba isotopic ratios, a fact that confirms the viability of the adopted choices for the partial mixing processes generating the \ct-pocket, in agreement with the mentioned findings by \citet{liu, liu1}. We notice here that accounting for measurements with $\delta$($^{92}$Zr/$^{94}$Zr) $\ge -50$ was considered as impossible for $s$-process modeling \citep{lugaro4}.
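For reference, a $\delta$ value is the per-mil deviation of a measured isotope ratio from the corresponding solar ratio; a minimal sketch (the grain and solar ratios below are invented, illustrative numbers, not measured values):

```python
# Illustrative delta notation for isotope ratios, in per mil:
#   delta = (R_sample / R_solar - 1) * 1000.
# Both ratios below are invented placeholders, not measured values.

def delta_permil(r_sample, r_solar):
    """Per-mil deviation of a sample isotope ratio from the solar one."""
    return (r_sample / r_solar - 1.0) * 1000.0

r_solar = 1.0      # placeholder solar 92Zr/94Zr ratio
r_grain = 0.95     # placeholder grain ratio, 5% below solar
print(delta_permil(r_grain, r_solar))   # about -50 per mil
```

A grain ratio 5\% below solar thus sits right at the $\delta \simeq -50$ threshold quoted above.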
In \citet{liu0, liu} it was underlined how the problem could be alleviated or bypassed through a specific parameterization of the $^{13}$C~ pocket, subsequently recognized to mimic the distribution found in our Paper II \citep{liu1}. Here Figures \ref{fig:zrdelta2b} and \ref{fig:zrdelta2c} clarify the reason for this way out. The puzzling isotopic ratios correspond roughly to our $G-component$ at the first TDU episodes for high metallicity stars. The explanation of this is straightforward: in early cycles of high metallicity models, still at low temperature and having a low neutron density in the pulses, $^{96}$Zr is essentially destroyed, while a marginal production of $^{92}$Zr occurs. Clearly, the envelope is still O-rich, but for us (as already mentioned) the $G-component$ mimics rather well the composition of those flux tubes that, surviving disruption in the turbulent convective envelope, reach the surface (as done by the solar magnetic structures forming the corona). When a few magnetized blobs reach the atmosphere without mixing, kept together by magnetic tension, solids condensed there have a finite probability of preserving a C-enriched composition, being at the same time slightly $^{92}$Zr-rich and largely $^{96}$Zr-poor. We believe therefore that presolar SiC grains, through those isotopic ratios that are {otherwise} hardly associated with C-rich envelopes, add a remarkable piece of evidence in favor of our MHD mixing scheme, {naturally accounting for the existence of C-rich blobs}, where carbon-based dust can be formed even during generally O-rich phases. In Figures \ref{fig:zrdelta2b} and \ref{fig:zrdelta2c} the few points at high, positive values of $\delta$($^{92}$Zr/$^{94}$Zr) and with $\delta$($^{96}$Zr/$^{94}$Zr) $\ge -500$, which are not covered by the region of the models, might be easily fitted, should one consider initial abundance ratios for Zr isotopes in the envelope different from solar.
This would indeed be the case for stellar models of low metallicity, as the contributions from AGB stars to the lighter isotopes of Zr are lower (see Table \ref{tab:tab3}) than for the reference $^{94}$Zr, implying that the envelope (initial) $\delta$ value for (say) $^{92}$Zr/$^{94}$Zr should be slightly higher than zero. We believe, however, that one should refrain from over-interpreting the data: there are, in fact, remaining problems in them that prevent too strong conclusions and suggest the need for new high-precision measurements. This was extensively discussed in \citet{liu1}, to which we refer for details. We also {remind the reader} that important effects on the Zr isotopes of SiC grains, related to the composition of the parent stars, also invoking contributions from super-solar-metallicity AGBs, were suggested by \citet{lugaro8}. This possibility and other relevant issues on this subject need to be discussed in more detail in a separate, forthcoming paper. \subsection{Reproducing the abundances of AGB stars and their relatives} AGB stars are in principle precious also because, thanks to the TDU episodes, they offer a unique opportunity to observe ongoing nucleosynthesis products directly in the producing stars. Several important observational studies on their O-rich and C-rich members exist and have been crucial in revealing their properties and composition \citep{smith1, smith2, smith3, smith4, busso92, b+01, abia4, abia82, abia5, abia6, a20,rau, shetye}. However, due to their low temperature and complex atmospheric dynamics, when they cross their final (TP) phases they become very difficult to observe. At long wavelengths, in their cool circumstellar envelopes, $n$-capture elements are easily condensed at relatively low temperature \citep{ritch}, so that abundances measured for the gaseous phase are difficult to correlate with the atmospheric ones.
On the other hand, these atmospheres are subject to strong radial pulsations, with periods of the order of one year (from a fraction to a few), which make them variable by several magnitudes (LPVs, or Long Period Variables), see e.g. \citet{wood2}. They are classified as being Mira or Semi-regular pulsators, with pulsations variously dominated by the fundamental or first overtone mode \citep{wood1}. Model atmospheres in those conditions are extremely complex \citep [see][and citations there]{gautschy, hofner}. Moreover, strong molecular transitions hamper the observations of crucial elements \citep{abia82}. Owing to these difficulties, various families of relatives (having warmer photospheres) have become important surrogates in providing information on AGB nucleosynthesis. This includes in particular binary systems where a surviving companion inherited measurable abundances of $n$-capture elements through mass transfer by a specific mechanism, called {\it wind accretion} \citep{jm, bj}. The elements were formed previously in an AGB star now generally evolved to the white dwarf phase \citep[see, e.g.][and references quoted therein]{jor1, jor2, escorza1, escorza2, escorza3}. These objects are often called {\it extrinsic} AGBs \citep{sl, jm}, the most famous class of them being that of classical Ba-II stars. They have been the object of many studies over the years, and of recent extensive discussions with new observational data; see e.g. \citet{jornew}. Although we cannot address such an extended topic in detail here, we need at least to verify our models in general against Ba-star constraints. Another class of AGB relatives that attracted large attention are post-AGB objects that, in their evolution toward the white dwarf stage, cross the HR diagram blueward, heating their remaining envelope and passing therefore through various spectral types corresponding to temperatures warmer than for real AGB stars \citep{vw1, vw2, vw3, vw4,desmedt5,desmedt3}.
They include extrinsic objects, as well \citep{kama1, kama2}. We need to present at least a couple of examples of how our model scenario compares with post-AGB constraints. \begin{figure}[t!!] \includegraphics[width=0.85\columnwidth]{figure20.pdf} \caption{Comparison of models with observations for the Ba star HR774. We show results from the present work (red continuous line) as well as from the parametric study by \citet{b+95}, namely their case B (blue dashed line). \label{fig:hr774}} \end{figure} \begin{figure}[t!!] \includegraphics[width=0.85\columnwidth]{figure21.pdf} \caption{Comparison of our models with observations for two mild Ba stars (see text for details) \label{fig:newba}} \end{figure} Starting with Ba-stars, extended fits to some of their abundance distributions were done by us many years ago \citep{b+95, b+01}, as a tool for understanding the extensions of the $^{13}$C~ pockets by calibrating parameterized models on observations. An example of how things have changed now is shown in Figure \ref{fig:hr774}, for HR774 (HD 16458), whose abundances are taken from the measurements by \citet{tl83} and by \citet{s84}. Our original fitting attempt was presented in Table 2 and Figure 12 of \citet{b+95} for two tentative parameterized models. Now Figure \ref{fig:hr774} shows the comparison of one such model (case B) with what can be obtained in the envelope of a low mass star, of metallicity $[\rm Fe/H] = -0.6$, having undergone efficient mass transfer from an AGB companion of about 1.5 M$_{\odot}$. The two model curves, albeit different, are fully compatible with the observed data; what 25 years ago could only be obtained by fixing the parameters ad hoc (in particular the abundance of $^{13}$C~ burnt) is now a natural outcome of our scenario with MHD mixing, applied to the mentioned star and without adjusting any further parameter. We might choose several other examples.
In most cases, the observations do not include as many $s$-elements as for HR774, but they have reached a considerable statistical extension and are made with more modern instrumentation. Two such examples are shown in Figure \ref{fig:newba}, taken from the samples by \citet{decastro} and by \citet{jornew}. We choose the star BD$-$14$^o$2678 from the first mentioned list, and HD27271 from the second one. In the latter case the observations are from \citet{karin}, who also measured the critical element Nb, which has only one stable isotope, $^{93}$Nb. As mentioned, this nucleus is produced by decay of the rather long-lived parent $^{93}$Zr, and the presence of the daughter $^{93}$Nb is a clear indication that the star is extrinsic. Both the chosen sources are classified as mild Ba-stars by \citet{jornew}. The examples shown in the figure represent rather typical cases and the quality of the fits is in general good. This is however not possible for {\it all} Ba-stars, as some of the data sets available contain individual elements with abundances incompatible with any $s$-process distribution: e.g. elements belonging to one of the major abundance peaks, which are discrepant by large amounts (even one order of magnitude) with respect to their neighbors. The origin of these discrepancies is not known: we believe they may be due to difficulties in spectroscopic observations. We must in this respect {remind the reader} that the cases shown here {derive} from a very simple approach to mass accretion (actually, a simple dilution of the AGB material). More sophisticated treatments exist and should in fact be pursued, as done in some cases in the past \citep{stan3}. An important signature of how $s$-elements are enriched over the Galactic history is provided by the abundance of Pb: its trend is shown here in Figure \ref{fig:pbhsagb}, and roughly the same behavior was previously found by all groups working in the field: see e.g. \citet{gal1, gm00, lu12}.
As mentioned, the increase in the neutron exposure for lower abundance of the seeds makes it unavoidable that, on average, the photospheric abundance expected for Pb increases towards lower metallicities. While most observations of metal-poor extrinsic AGB stars confirmed this evidence, {some of them did not} \citep{ve01, ao01, ve03, be10, bi12}. {This cast a shadow on our understanding of the $s$-process scenario, which is otherwise robust}. More doubts on Pb were accumulated in recent years from the second sample of AGB relatives we mentioned, i.e. that of post-AGB stars, which otherwise {confirmed} the known trend of $s$-enrichment with metallicity \citep{desmedt4}. In the last two decades, various observational studies and detailed analyses have been performed in this framework. We refer to known works in this field, like e.g. \citet{vw3, vw4, rey7, desmedt5, desmedt3, desmedt2, desmedt1} and to review papers like \citet{vw1, vw2} for general reference on the subject. Both for Galactic \citep{desmedt2} and for extra-galactic low-metallicity post-AGB stars \citep{desmedt4} the expected strong enhancement of Pb was found not to be compatible with the upper limits determined observationally. {Should these indications be confirmed, in our models the Pb problem would} remain, as shown for example in Figure \ref{fig:IRAS}, panel a), for observations by \citet{vw4}. It can sometimes be avoided if we refer to stellar models of 3 M$_{\odot}$~ (see panel b), as, in our scenario, the $s$-process efficiency decreases in general for increasing stellar mass. However, the star shown seems to be of lower mass and metallicity than found in this purely formal solution \citep{hr03}. We notice, in any case, how Figure \ref{fig:IRAS} shows that the remaining abundance distribution is reproduced quite well, at a level previously possible only through models with parameterized extra-mixing.
A similar situation emerges from the comparison in Figure \ref{fig:j00444}, based on the observations of the low-mass post-AGB star J004441.04$-$732136.4 in the Small Magellanic Cloud (SMC) by \citet{desmedt5, desmedt4}. This star is also classified as a Carbon Enhanced Metal Poor star, enriched in both $r$- and $s$-elements \citep[${\rm CEMP}r$-$s$, see][]{yu-ga}. We recall how these objects often pose very difficult problems to detailed modeling of their abundances \citep{stan4}. { Very recently, new HST ultraviolet observations of three metal-poor stars by \citet{roed20}, using for the first time the Pb II line at $\lambda = $ 220.35 nm, yielded much higher abundances for Pb (by 0.3 -- 0.5 dex) than previously found with Pb I lines. The authors suggest that the latter may lead to underestimating the Pb abundance. Although it is premature to derive final conclusions from such suggestions, they may open the road to solving a long-lasting discrepancy between observations and nucleosynthesis computations for AGB stars and their relatives, thus also reconciling our models with the measured data}. \begin{figure}[t!!] \includegraphics[width=0.85\columnwidth]{figure22.pdf} \caption{Comparison of our model results with the observations for the post-AGB star IRAS06530$-$0213. The curve of panel a), from a low mass star, shows the usual discrepancy on the abundance of Pb, which can in this case be avoided with a fit taken from a more massive star and a higher metallicity (panel b).\label{fig:IRAS}} \end{figure} \begin{figure}[t!!] \includegraphics[width=0.85\columnwidth]{figure23.pdf} \caption{Comparison of our model results with the observations for the post-AGB star J004441.04$-$732136.4. See text for details.
\label{fig:j00444}} \end{figure} \section{Conclusions} The results of this work can be summarized briefly by saying that we have shown how a general scenario for the activation of the $^{13}$C~ neutron source in AGB stars can be built on the simple hypothesis that the required mixing processes {derive} from the activation of a stellar dynamo, in which an exact, {\it particular} solution to MHD equations is possible on the basis of the simple, but plausible average field geometry suggested by \citet{nb}. In particular, we have shown how, based on that hypothesis, one can avoid all further free parameterizations and deduce rather general rules for deriving the extensions and shapes of the $^{13}$C~ distributions left in the He-intershell zone at each TDU episode. Such distributions provide nucleosynthesis models suitable for explaining the known observational constraints on $s$-processing. The latter include the average $s$-element distribution in our Solar System, as well as the peculiarities emerging from the isotopic ratios of trace elements measured in presolar SiC grains. We show in this respect that some such ratios, previously hardly accounted for by $s$-process models, can be naturally explained if the cool winds of evolved low mass stars contain unmixed blobs of materials, transported by flux tubes above the convective envelope, as occurs in the Sun. {This hypothesis provides an approximate interpretation for the so-called {\it G-component} of AGB $s$-processing.} Our results also imply a scheme for the enrichment of neutron-capture elements in the Galaxy that accounts for most abundance observations of evolved low- and intermediate-mass stars and has the characteristics previously indicated in the literature as required for understanding the enhanced heavy-element abundances of young open clusters.
Our rather long reanalysis of the mixing processes required for making the neutron source $^{13}$C~ available in the late evolutionary stages of red giant stars was initiated with the studies by \citet{b+07}. Considering that the observational confirmations so far accumulated give us a sufficient guarantee of robustness, the mixing scheme developed in the past years has now been released for direct inclusion in full stellar models of the FUNS series \citep{cris0, cris2}. A first attempt at implementing this integration was recently published separately by \citet{diego}.
\section{Introduction} \label{sect:intro} Inflation is considered the dominant paradigm for understanding the initial conditions for structure formation in the Universe. As a consequence of the assumed flatness of the inflaton potential, any intrinsic non-linear (hence non-Gaussian, NG) effect during standard single-field slow-roll inflation is generally small. Thus, adiabatic perturbations seeded by the quantum fluctuations of the inflaton field during standard inflation are nearly Gaussian distributed. Despite the simplicity of the inflationary paradigm, however, the mechanism by which perturbations are generated is not yet fully established and various alternatives to the standard scenario have been considered. A key point is that the primordial NG is model-dependent. While standard single-field models of slow-roll inflation lead to small departures from Gaussianity, non-standard scenarios for the generation of primordial perturbations in single-field or multi-field inflation allow for a larger level of non-Gaussianity \citep[see, e.g.][]{bartolo2002, bernardeau2002, chen2007}. Moreover, alternative scenarios for the generation of the cosmological perturbations, like the curvaton \citep{lyth2003, bartolo2004b}, the inhomogeneous reheating \citep{kofman2003,dvali2004} and the Dirac-Born-Infeld (DBI) inflation \citep{alishahiha2004} scenarios, are characterized by a potentially large NG level \citep[for a review, see][]{bartolo2004}. Because it directly probes the primordial perturbation field, the Cosmic Microwave Background (CMB) temperature anisotropy and polarization pattern has been considered the preferred way of detecting, or constraining, primordial NG signals, thereby shedding light on the physical mechanisms for perturbation generation. Alternatively, one can consider the Large-Scale Structure (LSS) of the Universe. This approach has both advantages and disadvantages.
Unlike the CMB, which refers to a 2D dataset, LSS carries information on the 3D primordial fluctuation fields. On the other hand, the late non-linear evolution introduces NG features of its own, which need to be disentangled from the primordial ones. This can be done in two different ways: observing the high-redshift universe, such as e.g. anisotropies in the 21cm background \citep[][see also \cite{cooray2006}]{pillepich2007}, or studying the statistics of rare events, such as massive clusters, which are sensitive to the tails of the distribution of primordial fluctuations. So far, the investigation of the effects of NG models in the LSS has been carried out primarily by analytic means \citep{lucchin1988,koyama1999,matarrese2000,robinson2000a,robinson2000b,verde2000, verde2001,komatsu2003,scoccimarro2004,amara2004,ribeiro2007,sefusatti2007, sefusatti2007b, sadeh2007}. N-body simulations with NG initial conditions were performed in the early nineties \citep{messina1990,moscardini1991,weinberg1992}, but the considered NG models were rather simplistic, being based on fairly arbitrary distributions for the primordial density fluctuations \citep[see also][]{mathis2004}. Only very recently, \cite{kang2007} have investigated more realistic NG models. The suite of new N-body experiments presented in this paper considers, for the first time, large cosmological volumes at very high resolution for physically motivated NG models (as described in the following section). Here we focus on massive haloes and their redshift evolution. A more general and exhaustive presentation of the simulations and the main results will be given elsewhere (Grossi et al., in preparation). This paper is organized as follows. In Section~\ref{sect:nong} we introduce the NG model here considered. The characteristics of our N-body simulations are presented in Section~\ref{sect:simul}. In Section~\ref{sect:halomass} we compute the halo mass function and compare it with analytic predictions.
We discuss the results and conclude in Section~\ref{sect:discussion}. \section{Non-Gaussian Models} \label{sect:nong} For a large class of models for the generation of the initial seeds for structure formation, including standard single-field and multi-field inflation, the curvaton and the inhomogeneous reheating scenarios, the level of primordial non-Gaussianity can be modeled through a quadratic term in Bardeen's gauge-invariant potential\footnote{On scales much smaller than the Hubble radius, Bardeen's gauge-invariant potential reduces to minus the usual peculiar gravitational potential.} $\Phi$, namely \begin{equation} \label{FNL} \Phi = \Phi_{\rm L} + f_{\rm{NL}} \left(\Phi_{\rm L}^2 - \langle\Phi_{\rm L}^2\rangle \right) \;, \end{equation} where $\Phi_{\rm L}$ is a Gaussian random field and the specific value of the dimensionless non-linearity parameter $f_{\rm{NL}}$ depends on the assumed scenario \citep[see, e.g.,][]{bartolo2004}. It is worth stressing that eq.~(\ref{FNL}), even though commonly used, is not generally valid: detailed second-order calculations of the evolution of perturbations from the inflationary period to the present time show that the quadratic, non-Gaussian contribution to the gravitational potential should be represented as a convolution with a kernel $f_{\rm NL}({\bf x},{\bf y})$ rather than a product \citep{bartolo2005}. However, for $|f_{\rm NL}| \gg 1$ as assumed in this paper, all space-dependent contributions to $f_{\rm NL}$ can be neglected and the non-linearity parameter can be effectively approximated by a constant. In this work we will take $-1000 \le f_{\rm NL} \le +1000$. We notice that, owing to the smallness of $\Phi$, the contribution of non-Gaussianity implied by these values of $f_{\rm NL}$ is always within the percent level of the total primordial gravitational potential, and does not appreciably affect the linear matter power spectrum.
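The quadratic map of eq.~(\ref{FNL}) is straightforward to apply to a sampled Gaussian potential. A minimal sketch follows (not the actual pipeline used for the simulations; grid size and field amplitude are purely illustrative):

```python
import numpy as np

def add_fnl(phi_L, f_NL):
    # Phi = Phi_L + f_NL * (Phi_L**2 - <Phi_L**2>); subtracting the
    # mean square keeps the mean of Phi equal to that of Phi_L.
    return phi_L + f_NL * (phi_L**2 - np.mean(phi_L**2))

rng = np.random.default_rng(0)
phi_L = 1e-5 * rng.standard_normal((64, 64, 64))  # toy Gaussian field
phi = add_fnl(phi_L, f_NL=1000.0)
```

Even for $f_{\rm NL}=1000$ the quadratic term is of order $f_{\rm NL}\Phi \sim 10^{-2}$ relative to the linear one, consistent with the percent-level statement above.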
Yet, this range is larger than the interval $-54 < f_{\rm NL} < 114$ currently allowed, at the 95 per cent confidence level, by CMB data \citep{spergel2007}. The rationale behind this choice is twofold. First, the large-scale structure provides observational constraints which are {\it a priori} independent of the CMB. Second, $f_{\rm NL}$ is not guaranteed to be scale independent, while the LSS and CMB probe different scales. Indeed, some inflationary scenarios do predict large and scale-dependent $f_{\rm NL}$ \citep[see, e.g.,][]{chen2005,shandera2006}. Departures from Gaussianity affect, among other things, the formation and evolution of dark matter haloes of mass $M$. One way of quantifying the effect is to look at the halo mass function, which we can express as the product of the Gaussian mass function, $n_G(M,z)$, times a NG correction factor, $F_{NG}(M,z,f_{\rm{NL}})$: \begin{equation}\label{massfunction} n(M,z,f_{\rm{NL}})=n_G(M,z)\;F_{NG}(M,z,f_{\rm{NL}})\ . \end{equation} In the extended Press-Schechter scenario, the NG factor can be written as \citep[hereafter MVJ]{matarrese2000} \begin{eqnarray}\label{eq:rMVJ} F_{NG}(M,z,f_{\rm{NL}})\simeq \frac{1}{6}\frac{\delta_c^2(z_c)}{\delta_*(z_c)}\frac{d S_{3,M}}{d\ln \sigma_M} +\frac{\delta_*(z_c)}{\delta_c(z_c)}\ , \end{eqnarray} where $\delta_*(z_c)=\delta_c(z_c)\sqrt{1-S_{3,M}\delta_c(z_c)/3}$, $\sigma_M^2$ is the mass variance at the mass scale $M$ (linearly extrapolated to $z=0$), and $\delta_c(z_c)=\Delta_c/D_{+}(z_c)$ with $D_{+}$ the growing mode of linear density fluctuations and $\Delta_c$ the linear extrapolation of the overdensity for spherical collapse.
The quantity $S_{3,M}$ represents the normalized skewness of the primordial density field on scale $M$ (linearly extrapolated to $z=0$), namely $S_{3,M}\equiv\frac{\langle\delta_M^3\rangle}{\sigma_M^4}\propto - f_{\rm{NL}}$ [see, e.g., eqs.(43-45) in MVJ, in whose notation $f_{\rm{NL}} \to -\varepsilon$ in Model B]; $\delta_M$ represents the density fluctuation smoothed on the mass scale $M$. The redshift dependence is only through $z_c$, the collapse redshift, which, in the extended Press-Schechter approach, coincides with the considered epoch, i.e. $z_c=z$. A more robust statistic, which we will use throughout this paper, is the ratio $R_{NG}(M,z,f_{\rm{NL}})$ of the NG cumulative mass function $N_{\rm NG}(>M,z,f_{\rm{NL}})$ to the corresponding Gaussian one. Since in the high-mass tail $N(>M,z) \sim n(M,z) \times M$, we can approximate $R_{NG}(M,z,f_{\rm{NL}})\simeq F_{NG}(M,z,f_{\rm{NL}})$ \citep{verde2001}. We notice that, being defined as a ratio, $R_{NG}$ is almost independent of the explicit form assumed for $n_G(M,z)$. We found, however, that the mass function in our simulation with Gaussian initial conditions is well fitted by the \cite{sheth1999} relation. An alternative approximation for $R_{NG}$ can be obtained starting from eq.(62) of MVJ: $R_{NG}\simeq P_{NG}(>\delta_c|z,M)/ P_{G}(>\delta_c|z,M)$, where \begin{eqnarray} P_{NG}(>\delta_c|z,M)\simeq & \frac{1}{2}-\frac{1}{\pi}\int_0^\infty \frac{d\lambda}{\lambda} \exp\left( -\frac{\lambda^2 \sigma_M^2}{2}\right) \nonumber \\ &\times \sin \left( \lambda \delta_c +\frac{\lambda^3 \langle \delta_M^3\rangle}{6}\right)\ , \label{eq:eq62} \end{eqnarray} which reduces to the Gaussian case $P_{G}(>\delta_c|z,M)$ when $\langle \delta_M^3\rangle=0$. As we will see, this formula provides a better fit for large positive values of $f_{\rm NL}$, while the previous one should be preferred when $f_{\rm NL}$ is large and negative.
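For reference, eq.~(\ref{eq:eq62}) is easy to evaluate numerically. A sketch (assuming $\sigma_M$ and the skewness $\langle\delta_M^3\rangle$ are given; the oscillatory integral is truncated where the Gaussian damping makes the integrand negligible):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def p_ng(delta_c, sigma, mu3):
    # Level-crossing probability of eq. (62); mu3 = <delta_M^3>.
    f = lambda lam: np.exp(-0.5 * (lam * sigma) ** 2) \
        * np.sin(lam * delta_c + lam ** 3 * mu3 / 6.0) / lam
    # beyond lam = 10/sigma the exp(-50) damping kills the integrand
    val, _ = quad(f, 0.0, 10.0 / sigma, limit=200)
    return 0.5 - val / np.pi

def r_ng(delta_c, sigma, mu3):
    # R_NG ~ P_NG / P_G; for mu3 = 0 the integral gives the Gaussian erfc.
    p_g = 0.5 * erfc(delta_c / (np.sqrt(2.0) * sigma))
    return p_ng(delta_c, sigma, mu3) / p_g
```

In the high-$\nu$ tail ($\delta_c/\sigma_M > 1$) a positive skewness raises the ratio above unity, and a negative one lowers it, as expected for rare objects.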
\section{Numerical simulations} \label{sect:simul} We have used the GADGET-2 numerical code \citep{springel2005} to perform two different sets of N-body simulations using collisionless dark matter particles only. The first set consists of 7 different simulations characterized by different values of $f_{\rm{NL}}$ but the same $\Lambda$CDM model with mass density parameter $\Omega_{m}=0.3$, baryon density $\Omega_{b}=0.04$, Hubble parameter $h= H_{0}/(100 \ {\rm km} \ {\rm s}^{-1} \ {\rm Mpc}^{-1})=0.7$, primordial power-law index $n=1$ and $\sigma_8=0.9$, in agreement with the WMAP first-year data \citep{spergel2003}. In all experiments we have loaded a computational box of $500^{3}$(Mpc/h)$^{3}$ with $800^{3}$ dark matter particles, each with a mass of $m=2.033 \times 10^{10}$ M$_{\odot}$ h$^{-1}$. The force was computed using a softening length $\epsilon_{l}=12.5$ h$^{-1}$ kpc. To set the initial conditions we have used the same initial Gaussian gravitational potential $\Phi_{\rm L}$, characterized by a power-law spectrum $P(k) \propto k^{-3}$, which was inverse-Fourier transformed to compute the NG term $f_{\rm{NL}} \left(\Phi_{\rm L}^2 - \langle\Phi_{\rm L}^2\rangle \right)$ in real space; the result was then transformed back to $k$-space, where the CDM matter transfer function was applied. This procedure has been adapted from the original one of \citet{moscardini1991} and guarantees that all the $f_{\rm{NL}}$ models have the same linear power spectrum as the Gaussian case, as we checked in the different realizations. The gravitational potential was then used to displace particles according to the Zel'dovich approximation, starting from a glass-like distribution. Besides the Gaussian case ($f_{\rm{NL}}=0$), we have explored 6 different NG scenarios characterized by $f_{\rm{NL}}=\pm 100 \;, \pm 500 \; {\rm and}\; \pm 1000$. The second set of simulations is designed to match those of \citet{kang2007}.
Therefore, we have used smaller boxes of $300^{3}$(Mpc/h)$^{3}$ with $128^{3}$ particles and a different choice of cosmological parameters ($\Omega_{m}=0.233$, $\Omega_{\Lambda}=0.762$, $H_{0}=73 \ {\rm km} \ {\rm s}^{-1} \ {\rm Mpc}^{-1}$ and $\sigma_8=0.74$), in agreement with the WMAP three-year data \citep{spergel2007}. We have used the same procedure to generate the primordial NG, in order to explore the cases of $f_{\rm{NL}}=(-58,0,134)$, investigated by \citet{kang2007}. \section{The Halo Mass Function} \label{sect:halomass} To extract the dark matter haloes in the simulations we have used two standard algorithms: the \emph{friends-of-friends} and the \emph{spherical overdensity} methods. The haloes, identified using a linking length of 0.16 times the mean interparticle distance, contain at least 32 particles. In Fig.~\ref{fig:slices} we show the spatial positions of the dark haloes (circles) in the first set of simulations, superimposed on the mass density field (colour-coded contour plots), in a slice of thickness 25.6 Mpc/h cut across the computational box. The maps are shown at four different redshifts, indicated in the plots, to follow their relative evolution (from top to bottom). The panels in the central column refer to the standard Gaussian case. The two extreme NG cases of $f_{\rm{NL}}=-1000$ and $f_{\rm{NL}}=+1000$ are illustrated in the panels in the left and right columns, respectively. The mass density fields look very similar in the Gaussian and NG cases at all redshifts. Deviations from Gaussianity become apparent only when considering, at a given epoch, the number and distribution of the most massive, virialized objects. This is particularly evident in the $z=2.13$ snapshot, in which the first-forming cluster-sized haloes can only be seen in the $f_{\rm{NL}}=+1000$ and $f_{\rm{NL}}=0$ scenarios, where cosmic structures are indeed expected to form earlier than in the $f_{\rm{NL}}=-1000$ case.
\begin{figure*} \psfig{figure=fig1.ps,width=0.9\textwidth} \caption{Mass density distribution and halo positions in a slice cut across the simulation box. The colour-coded contours indicate different density levels ranging from dark (deep blue) underdense regions to bright (yellow) high density peaks. The halo positions are indicated by open circles with size proportional to their masses. Left panels: NG model with $f_{\rm{NL}}=-1000$. Central panels: Gaussian model. Right panels: NG model with $f_{\rm{NL}}=+1000$. The mass and halo distributions are shown at various epochs, characterized by increasing redshifts (from bottom to top), as indicated in the panels.} \label{fig:slices} \end{figure*} The visual impression is corroborated by the plots of Fig.~\ref{fig:cumul}, in which we show the cumulative halo mass function of all models in the first set of simulations, considered at the same redshifts as Fig.~\ref{fig:slices}. The number density of massive objects increases with $f_{\rm{NL}}$, as expected, and the differences between models increase with redshift, confirming that the occurrence of massive objects at early epochs provides an important observational test for NG models. \begin{figure*} \psfig{figure=fig2.ps,width=0.8\textwidth} \caption{Cumulative mass function of the haloes in the N-body experiments at the same redshifts as in Fig.~\ref{fig:slices}. The thick solid curve represents the halo mass function in the Gaussian model. Thin solid line: $f_{\rm{NL}}=\pm 1000$. Dotted line: $f_{\rm{NL}}=\pm 500$. Dashed line: $f_{\rm{NL}}=\pm 100$. The mass function of NG models with positive (negative) $f_{\rm{NL}}$ lies above (below) the Gaussian one. Poisson errors are shown for clarity only for the Gaussian model.
} \label{fig:cumul} \end{figure*} In Fig.~\ref{fig:ratio} we show the logarithm of the ratio of the cumulative halo mass functions of NG and Gaussian models, $R_{NG}(M,z,f_{\rm{NL}})$, for our N-body simulations (symbols) compared to the theoretical predictions of MVJ (solid curves). Filled circles and triangles, which refer to $f_{\rm{NL}}=+100$ and $-100$, are compared to theoretical predictions of both eqs.~(\ref{eq:rMVJ}) and (\ref{eq:eq62}) (solid and dotted lines, respectively). The agreement between model and simulation is striking. Apparent deviations are well within the Poisson errors, which we do not show to avoid overcrowding. Models and simulations are still in agreement, within the errors, for $|f_{\rm{NL}}|=500$ (open circles and triangles) and $|f_{\rm{NL}}|=1000$ (not shown in the figure) and out to $z\sim 5$, i.e. where the validity of the MVJ approximate expressions starts breaking down: we only notice that the analytic model tends to slightly overpredict the difference between NG and Gaussian cases. In particular, we have found that eq.~(\ref{eq:eq62}) is a better approximation for large, positive $f_{\rm{NL}}$, while eq.~(\ref{eq:rMVJ}) is to be preferred when $f_{\rm{NL}}$ is negative. This result is at variance with that of \cite{kang2007}, in which the disagreement with the MVJ predictions is already significant at the more moderate values of $f_{\rm{NL}}=-58$ and $f_{\rm{NL}}=+134$, as clearly illustrated in the bottom panels of their Fig.~2. To investigate whether the mismatch is genuine or is to be ascribed to numerical effects we have repeated the same analysis using the second set of simulations. We still found an excellent agreement between numerical experiments and theoretical expectations, which adds to the robustness of our results.
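The statistic plotted in Fig.~\ref{fig:ratio} is simply the ratio of cumulative counts $N(>M)$ in the two halo catalogues. A schematic estimator (illustrative names; Poisson errors omitted):

```python
import numpy as np

def cumulative_ratio(masses_ng, masses_g, thresholds):
    # N(>M) in the non-Gaussian and Gaussian catalogues, then their ratio.
    n_ng = np.array([(masses_ng > m).sum() for m in thresholds], float)
    n_g = np.array([(masses_g > m).sum() for m in thresholds], float)
    return n_ng / n_g
```

Being a ratio of counts in matched volumes, this quantity is insensitive to the overall normalization of the Gaussian mass function, as noted above.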
\begin{figure*} \psfig{figure=fig3.ps,width=0.8\textwidth} \caption{Logarithm of the ratio of the halo cumulative mass functions $R_{NG}$, as a function of mass, shown in the different panels at the same redshifts as in Fig.~\ref{fig:slices}. Circles and triangles refer to positive and negative values of $f_{\rm{NL}}$; open and filled symbols refer to $f_{\rm{NL}}=\pm 500$ and $f_{\rm{NL}}=\pm 100$, respectively. Theoretical predictions obtained starting from eqs.~(\ref{eq:rMVJ}) and (\ref{eq:eq62}) are shown by dotted and solid lines, respectively. Poisson errors are shown for clarity only for the cases $f_{\rm{NL}}=\pm 500$. } \label{fig:ratio} \end{figure*} \section{Discussion and Conclusions} \label{sect:discussion} The numerical experiments performed in this work show the validity of the analytic model proposed by \cite{matarrese2000} for the halo mass function in NG scenarios. The model fits the results of our numerical experiments over a large range of $f_{\rm{NL}}$ and out to high redshifts, despite the fact that non-Gaussianity is treated as a perturbation of the underlying Gaussian model. The good match with the numerical experiments is, however, not surprising, since deviations from Gaussianity in the primordial gravitational potential are within the percent level, despite the large values used for $f_{\rm{NL}}$. This result is in disagreement with that of \cite{kang2007}, who found a number density of haloes larger (smaller) than ours for NG models with the same positive (negative) $f_{\rm{NL}}$ values. The mismatch can be ascribed neither to resolution effects nor to the background cosmology adopted in the numerical experiments and, perhaps, should be traced back to a different setup in the NG initial conditions. Our result has two important consequences.
The good news is that we can use an analytical model to provide accurate predictions for the mean halo abundance in NG scenarios spanning a range of $f_{\rm{NL}}$ well beyond the current WMAP constraints, without relying on time-consuming numerical experiments affected by shot noise and cosmic variance. The bad news is that deviations from the Gaussian model are more modest than in the \cite{kang2007} experiments, which makes it more difficult to spot non-Gaussianity from the analysis of the LSS. This, however, might not be a problem when considering the observational constraints that will be provided by next-generation surveys. Indeed, current hints of non-Gaussianity from large scale structures are rather weak, as they are provided by the excess power in the distribution of 2dF galaxies at $z \sim 0$ \citep{norberg2002,baugh2004}, for which it is difficult to disentangle primordial non-Gaussianity from galaxy bias, and by the possible presence of protoclusters at $z\sim 4$, as traced by radio galaxies surrounded by Ly-$\alpha$ emitters and Ly-break galaxies \citep{miley2004}, for which the measured velocity dispersion can be compared to theoretical predictions only by including a realistic model for the velocity bias between galaxies and dark matter. However, the observational situation is going to improve dramatically, in many respects. Next-generation cluster surveys in X-ray, microwave and millimeter bands like the upcoming eROSITA X-ray cluster survey \citep{predehl2006}, the South Pole Telescope survey \citep{ruhl2004}, the Dark Energy Survey \citep{abbott2005} and the Atacama Cosmology Telescope survey \citep{kosowsky2006} should allow us to detect clusters out to high redshift. The abundance of these objects is, however, more sensitive to the presence of a dark energy component than to that of a primordial non-Gaussianity, at least for $|f_{\rm{NL}}| \le 100$. 
Indeed, as pointed out by \cite{sefusatti2007}, dark energy constraints from these surveys will not be substantially affected by primordial non-Gaussianity, as long as deviations from the Gaussian model do not dramatically exceed current WMAP constraints. However, these surveys could provide a fundamental cross-check on any detection of non-Gaussianity from CMB, especially from Planck, since the physical scales probed by clusters differ from those of Planck by only a factor of two, allowing us to detect a possible scale dependence of non-Gaussian features. More effective constraints on both dark energy and non-Gaussianity based on statistics of rare objects are likely to be provided by the imaging of the mass distribution from gravitational lensing of the high-redshift 21 cm absorption/emission signal that, if observed with a resolution of a few arcsec, will allow us to detect haloes more massive than the Milky Way back to $z\sim10$ \citep{metcalf2006}. Tight constraints on primordial non-Gaussianity can be set by measuring the high-order statistics of the LSS. In particular, the bispectrum analysis of the galaxy distribution at moderate $z$ in the next-generation redshift surveys such as HETDEX \citep{hill2004} and WFMOS2 \citep{glazebrook2005} will efficiently disentangle non-linear biasing from primordial non-Gaussianity \citep{sefusatti2007b}. In this work we have demonstrated the validity of the \cite{matarrese2000} approach to detect non-Gaussianity from the statistics of rare objects. In a future paper we will extend our analysis by considering the Probability Distribution Function of density fluctuations, which is easily obtained from our numerical simulations and which in principle can be determined by measuring the 21 cm line, either in emission or in absorption, before or after the epoch of reionization \citep{furlanetto2006}. 
\section*{acknowledgments} Computations have been performed by using 128 processors on the IBM-SP5 at CINECA (Consorzio Interuniversitario del Nord-Est per il Calcolo Automatico), Bologna, with CPU time assigned under an INAF-CINECA grant and on the IBM-SP4 machine at the ``Rechenzentrum der Max-Planck-Gesellschaft'' at the Max-Planck Institut fuer Plasmaphysik with CPU time assigned to the MPA. We acknowledge financial contribution from contracts ASI-INAF I/023/05/0, ASI-INAF I/088/06/0 and INFN PD51. We thank Licia Verde for useful discussions, Shude Mao for his critical reading of the manuscript, and Claudio Gheller for his assistance.
\section{Introduction} Understanding how cities visually differ from each other is interesting for planners, residents, and historians. Automatic visual recognition is now making great progress, which can help identify how cities visually differ. Creating interpretable convolutional neural networks (CNNs) is a fascinating path that may lead us towards trustworthy AI~\cite{fong2017interpretable,selvaraju2017grad,simonyan2013deep,springenberg2014striving,zeiler2014visualizing,zhou2014object,zhou2016learning}. Understanding CNN filters provides us with valuable insight into the decision-making criteria for a specific task. Visual features such as objects and parts are examples of high-level semantics that are consistent with how humans understand and analyze images \cite{bau2017network,li2010object,zhang2018interpretable}. Accordingly, we investigate and evaluate the interpretability of the discriminative objects learned by city recognition CNNs. Visualization of CNN filters is a popular technique for analyzing CNNs. In this work, we build on top of the gradient-weighted class activation mapping (Grad-CAM) method \cite{selvaraju2017grad} to generate class-discriminative visualizations for our city recognition CNNs. Grad-CAM generates visualizations on the input images that highlight discriminative regions by analyzing learned convolutional features and taking the information of the fully connected layers into consideration. Grad-CAM does not need to alter the structure of the trained CNNs and is model-agnostic. 
\begin{figure} \centering \vspace{-1ex} \subfigure[cat and dog image and visualizations]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=0.8in]{cat_dog.png} \end{minipage} \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=0.8in]{cat.png} \end{minipage} \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=0.8in]{dog.png} \end{minipage}} \subfigure[Tokyo image and visualization]{ \begin{minipage}[t]{0.36\linewidth} \centering \includegraphics[width=1in]{tokyo.png} \end{minipage} \begin{minipage}[t]{0.36\linewidth} \centering \includegraphics[width=1in]{tokyo.jpg} \end{minipage}} \caption{Visualization examples of image classification (supervised) and city recognition. (a) From left to right: original image with a cat and a dog, and the visualizations with 'cat'/'dog' information (highlighting the cat/dog)~\cite{selvaraju2017grad}. (b) From left to right: original image of Tokyo; visualization with 'Tokyo' information (highlighting, e.g., building, fence and signboard).} \label{fig:1} \vspace{-3ex} \end{figure} One common assumption in interpretability analysis of discriminative networks is that the image label matches a single dominant object. However, interpreting CNNs for city recognition deviates from this assumption, as the labels of images for place recognition are places, such as names of nations or cities, which differ from the discriminative objects appearing in city images, such as a certain architecture or vegetation type. The information on these discriminative objects is not known a priori, including which objects and how many kinds of them are present in the data; moreover, the same kinds of objects can appear in images of different classes. Figure~\ref{fig:1} shows an example. Obtaining the information on discriminative objects and interpreting these visual objects in a dataset are the main focus of our study. 
This work offers a method to both qualitatively and quantitatively evaluate the interpretability of city recognition CNNs. While qualitative methods judge the interpretability of networks directly through human inspection~\cite{zhou2014object,bau2017network,selvaraju2017grad}, quantitative methods compute a mathematical expression that reflects the trustworthiness. Examples of the quantitative techniques are \cite{zhang2018interpretable,bau2017network}, which compute an Intersection over Union (IoU) score to evaluate the interpretability across networks as an objective confidence score. In \cite{selvaraju2017grad,fong2017interpretable}, the localization precision of visualizations is evaluated through the Pointing Game~\cite{zhang2018top}. To the best of our knowledge, there is no work that quantitatively measures the interpretability of CNNs in a holistic manner. Previous works consider supervised visualization, where the labels of objects that are localized in the image are consistent with the class labels~\cite{zhou2016learning,selvaraju2017grad,oquab2014learning,oquab2015object}. We raise the following research questions in this paper and try to address them via relevant experiments. \renewcommand\labelitemi{$\vcenter{\hbox{\tiny$\bullet$}}$} \setlist{nolistsep} \begin{itemize}[noitemsep] \itemsep0em \item Are the deep representations learned by city recognition CNNs interpretable? \item How can the interpretability of a weakly supervised network be measured and evaluated? \item Do different architectures or initializations of CNNs affect the interpretability? \end{itemize} \section{Methodology} We summarize our proposed interpretability investigation roughly in the following steps: \setlist{nolistsep} \begin{enumerate}[noitemsep] \item Weighted masks are generated in the ultimate layer of any given trained CNN model that classifies images from different cities, using Grad-CAM, which highlights the class-discriminative regions of the test image. 
A visual explanation is generated using a threshold and the weighted mask to cover the unimportant regions of the test image for classification. \item Visual explanations are visualized using t-SNE to detect meaningful patterns in an unsupervised manner. \item A pretrained segmentation model is used to annotate the objects in the test images pixel-wise. \item The normalized distribution of the objects annotated in the visual explanations for each class is plotted to see if there is a significant skew towards certain objects. \end{enumerate} \subsection{Generating Visual Explanations} We adopt Grad-CAM~\cite{selvaraju2017grad} as our visualization technique to generate visual explanations for each test image. Selvaraju et al.~\cite{selvaraju2017grad} proposed Grad-CAM based on the work of~\cite{zhou2016learning} to map any class-discriminative activation of the last convolutional layers onto the input images. In the localization heat-maps ($L^c_{\textrm{Grad-CAM}}$), the values of significance are calculated at the pixel level and the important regions are highlighted on the input images. The localization heat-maps can be computed by a linear combination of weighted forward activation maps, as proposed in~\cite{selvaraju2017grad}. Note that the weighted masks $mask\_norm$ are generated by normalizing the localization heat-maps so that the values of significance range within $[0,1]$ for each weighted mask. Additionally, we set a threshold to select important regions (pixels) from the weighted masks to generate visual explanations. See Figure~\ref{fig:2} for an illustration. 
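For concreteness, the normalization and thresholding steps described above can be sketched in a few lines of Python with numpy. The function names and the default threshold are our own choices, and the Grad-CAM weighting itself (the gradient-weighted combination of forward activation maps, followed by up-sampling) is assumed to have already produced the raw heat-map:

```python
import numpy as np

def normalize_mask(heatmap):
    """Turn a raw Grad-CAM localization heat-map into a weighted mask
    (mask_norm) with values of significance in [0, 1]."""
    h = np.maximum(heatmap, 0.0)  # keep only positive evidence, as in Grad-CAM
    rng = h.max() - h.min()
    return (h - h.min()) / rng if rng > 0 else np.zeros_like(h)

def visual_explanation(image, mask_norm, threshold=0.5):
    """Black out the unimportant regions of an H x W x 3 image, keeping
    only the pixels whose mask value exceeds the threshold."""
    keep = (mask_norm >= threshold)[..., None]  # broadcast over channels
    return image * keep
```

Raising the threshold yields sparser, more selective explanations; the retained pixels are exactly those counted later when object frequencies are measured.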
\begin{figure}[!htb] \centering \vspace{-1ex} \includegraphics[width=0.52\textwidth]{gradcamtaged.pdf} \caption{The pipeline of generating weighted masks and visual explanations with Grad-CAM~\cite{selvaraju2017grad} for city recognition CNNs.} \label{fig:2} \end{figure} \vspace{-3ex} \subsection{Clustering Weighted Masks} Due to the lack of object labels in the visual explanations, we adopt an unsupervised method to cluster the visual explanations directly in order to recognize potential patterns. Proper descriptors need to be extracted to cluster the visual explanations. Instead of extracting descriptors from the visual explanations, we take the weighted masks $mask\_norm$ as descriptors and cluster them. We use t-distributed stochastic neighbor embedding (t-SNE)~\cite{maaten2008visualizing} for clustering and dimensionality reduction. \subsection{Quantifying Interpretability} The aim of this study is to examine the interpretability of the deep representations learned by city recognition CNNs; it is therefore necessary to obtain the information of which objects appear as discriminative in the images. In our work, we first use a semantic segmentation model to label the objects at the pixel level. This pretrained segmentation model should be able to recognize all classes of objects appearing in the images. Hence, the class information of objects can be used to evaluate the interpretability of deep representations quantitatively. Some quantitative measurements of interpretability in previous research, such as IoU~\cite{zhang2018interpretable,bau2017network} and the Pointing Game~\cite{selvaraju2017grad,fong2017interpretable,zhang2018top}, cannot be used for city recognition CNNs, since there is an inconsistency between the class information of city images and the class information of the objects appearing in them. Alternatively, we suppose that the objects appearing in the visual explanations are class-discriminative and that their frequent occurrence reflects the interpretability of the deep representations. 
To quantify this metric, we calculate the number of pixels for different objects in the visual explanations of the test images. To rule out the biases of different classes, we normalize the number of pixels of a class-discriminative object $p$ in the visual explanations, $M_{p}$, by the number of pixels of the same object in all images of that class, $N_{p}$: \begin{equation} R_{p}^c = \frac{\sum_{i=1}^{N^c}M_{p,i}}{\sum_{i=1}^{N^c}N_{p,i}}, \end{equation} where $N^c$ is the total number of city images of class $c$, indexed by $i$. For instance, $p$ can be trees, in which case $R_{p}^c$ reflects the ratio of trees appearing as class-discriminative in the class Tokyo to all trees appearing in this class. $R_{p}^c$ is a quantifiable bounded measure of object significance varying between $[0,1]$, where $0$ means non-discriminative with respect to other classes and $1$ means highly discriminative. \vspace{-1ex} \section{Experiments} \subsection{Datasets} We use two datasets of city images, \textbf{Tokyo 24/7} and \textbf{Pittsburgh}, introduced in~\cite{arandjelovic2016netvlad}, to obtain city recognition CNNs. \begin{itemize}[noitemsep] \itemsep0em \item \textbf{Tokyo 24/7}: This dataset contains 76k database images. For the same spot, 12 images were taken from different directions. \item \textbf{Pittsburgh}: This dataset contains 250k database images. For the same spot, 24 images were taken from 12 different directions and 2 different angles. \end{itemize} To avoid unbalanced datasets, we only use 76k Pittsburgh images. All images are divided into training, validation and test sets in the proportion 6:2:2. These two datasets do not contain any object-level information. \subsection{Experimental Setup} We train four different image classification CNN models to classify city images. The network architectures include VGG11~\cite{simonyan2014very}, ResNet18~\cite{he2016deep} and two other shallow networks (shown below in Table~\ref{table:1}), Simple and Simpler. 
These four image classification networks are used for interpreting the deep representations of city recognition CNNs and investigating the influence of network architectures on the interpretability. All four models are trained with the same training setup. The loss function is the cross-entropy function, and the Adam optimizer is applied. The initial learning rate is set to 0.0001 and is multiplied by 0.1 every 10 epochs. The accuracies of the four models are $99.98\%$, $99.96\%$, $99.31\%$ and $98.18\%$. \begin{table}[htb]\small \centering \caption{Configurations of the two shallow networks. In this table, 'convN$\times$N' represents a convolutional layer with an N$\times$N filter, and each convolutional layer is followed by a ReLU activation function. The number after the hyphen represents the number of channels in the corresponding feature map, and the numbers in brackets are the size of the filter in the max pooling layer.} \label{table:1} \begin{tabular}{c|c} \hline Simple & Simpler \\ \hline \multicolumn{2}{c}{Input images:224$\times$224$\times$3(RGB)} \\ \hline conv5$\times$5-20 & conv9$\times$9-20 \\ \hline \multicolumn{2}{c}{max pooling(2$\times$2)} \\ \hline conv7$\times$7-64 & conv9$\times$9-64 \\ \hline \multicolumn{2}{c}{max pooling(2$\times$2)} \\ \hline conv5$\times$5-96 & conv9$\times$9-96 \\ \hline \multicolumn{2}{c}{max pooling(2$\times$2)} \\ \hline conv7$\times$7-128 & \\ \hline max pooling(2$\times$2) & \\ \hline \multicolumn{2}{c}{fully connected-4096} \\ \hline \multicolumn{2}{c}{fully connected-100} \\ \hline \multicolumn{2}{c}{fully connected-number of classes:2} \\ \hline \end{tabular} \end{table} \subsection{Clustering Weighted Masks} To address our first research question, on whether the learned representations in the last convolutional layer of our trained CNNs are interpretable by humans, we conduct the following experiment. 
Using t-SNE, the weighted masks ($mask\_norm$) are clustered in an unsupervised manner instead of the visual explanations, due to the lack of object-level labels and the irregular shapes of the black regions around the visual explanations. We apply PCA to extract 50 dominating features prior to the t-SNE clustering and dimensionality reduction. Figure~\ref{fig:3} shows the scatter plots of the VGG11 clustering results with the label information of the city images. \begin{figure}[!htb] \centering \vspace{-2.5ex} \includegraphics[width=0.4\textwidth]{vggscatter.pdf} \caption{Scatter plots of the clustering results of VGG11 with the city information of the images. Each point represents a weighted mask generated from a test image. Most of the weighted masks from the different datasets are separable in terms of the city label information.} \vspace{-2ex} \label{fig:3} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.43\textwidth]{vgg_cam_50_new.png} \caption{Exhibiting the t-SNE results with the visual explanations of VGG11. After replacing the clustering result with the visual explanations of the test images, similar class-discriminative objects in the visual explanations are clustered together. The information of these objects is obtained directly by humans.} \label{fig:4} \vspace{-2ex} \end{figure} From Figure~\ref{fig:3}, the clustering result, in which the weighted masks of the test images are separable, is consistent with the high accuracy of VGG11. To visually exhibit the object information in the visual explanations that is related to the interpretability of the deep representations, we next replace the points with the visual explanations and demonstrate the relation between the clustering result and the class-discriminative objects intuitively. Due to the considerable number of test images, we randomly select around 500 visual explanations generated from the VGG11 model to exhibit, as shown in Figure~\ref{fig:4}. 
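The PCA-then-t-SNE step above can be sketched as follows with scikit-learn; `embed_masks`, its argument names and its defaults are our own naming, not from the original implementation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def embed_masks(masks, n_pca=50, perplexity=30.0, seed=0):
    """Flatten the weighted masks (n_images x H x W), keep the first
    n_pca principal components, then embed them in 2-D with t-SNE."""
    X = masks.reshape(len(masks), -1)
    n_comp = min(n_pca, X.shape[0], X.shape[1])  # PCA cannot exceed data rank
    X_red = PCA(n_components=n_comp, random_state=seed).fit_transform(X)
    return TSNE(n_components=2, perplexity=perplexity,
                init="pca", random_state=seed).fit_transform(X_red)
```

The resulting 2-D coordinates can then be scatter-plotted with the city labels, or with thumbnail visual explanations in place of the points.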
Based on the data visualization results shown in Figure~\ref{fig:4}, we can see that our clustering leads to a collection of visually similar objects in a 2D map, which indicates that the VGG11 model learns semantically meaningful discriminative objects in the last convolutional layer. Although these patterns of objects reveal, to some degree, the interpretability of the deep representations learned by a city recognition CNN, it is still necessary to evaluate the interpretability in a quantitative manner. \subsection{Object-level Interpretability} We address the second research question in this section by quantifying the object-level information extracted using the visualization method. The lack of class information for the objects appearing in city images from the \textbf{Pittsburgh} and \textbf{Tokyo 24/7} datasets makes it difficult to quantify the interpretability of the deep representations learned by a city recognition CNN. Therefore, we apply a semantic segmentation model to obtain the object class information before evaluating interpretability. The semantic segmentation model used in our experiment is pre-trained on the MIT ADE20K scene parsing dataset~\cite{zhou2016semantic,zhou2017scene,xiao2018unified} and is built on ResNet50~\cite{he2016deep}. The segmentation model is able to classify 150 different categories of objects, including all classes of objects appearing in city images. \begin{figure}[!htb] \centering \vspace{-1ex} \subfigure[Histogram of $R_{p}^c$ over class-discriminative objects appearing in \textbf{Pittsburgh}]{ \centering \includegraphics[width=0.47\textwidth]{pittsburghhistogramallsorted.pdf} \vspace{-2ex}} \subfigure[Histogram of $R_{p}^c$ over class-discriminative objects appearing in \textbf{Tokyo 24/7}]{ \centering \includegraphics[width=0.47\textwidth]{tokyohistogramallsorted.pdf}} \caption{Histograms of $R_{p}^c$ for different CNN architectures, initializations and datasets. 
Different values of $R_{p}^c$ for different objects are learned from different datasets. Some objects can only be learned from a certain dataset.} \label{fig:histogram} \vspace{-3ex} \end{figure} To evaluate the interpretability of the deep representations quantitatively and to avoid missing any possible object information in the visual explanations, we calculate $R_{p}^c$ for different objects and datasets (classes), as shown in Figure~\ref{fig:histogram}. The objects are selected by the criterion that the average number of pixels exceeds a certain threshold (set to 100). Comparing the class-discriminative objects shown in Figure~\ref{fig:histogram}, dissimilar objects are learned by the city recognition CNNs for different datasets. Skyscraper and ground are the unique class-discriminative objects learned from the \textbf{Pittsburgh} dataset, while signboard and fence are the unique ones from \textbf{Tokyo 24/7}. The values of the pixel ratios $R_{p}^c$ indicate the selectivity of city recognition CNNs on a specific dataset. The larger the value of $R_{p}^c$, the stronger the class-discriminative attribute. E.g., a uniform histogram over different objects would mean that the city recognition CNN takes any object in the image as class-discriminative, which would be meaningless in this case. The different non-uniform distributions over objects from different classes reveal that city recognition CNNs learn distinct combinations of class-discriminative objects from different datasets, which makes city recognition CNNs interpretable. \subsubsection{Do Different Models Learn Similar Discriminative Objects?} Besides the histograms used in Figure~\ref{fig:histogram}, we also apply another quantitative method to investigate the influence of network architectures and initializations on the interpretability of city recognition CNNs. Figure~\ref{fig:different} shows some examples of weighted masks learned by different city recognition CNNs. 
The difference among the weighted masks learned by different city recognition models reflects the influence of the network architectures. \begin{figure}[!htb] \vspace{-1ex} \centering \includegraphics[width=0.45\textwidth]{different.pdf} \caption{Different city recognition CNNs generate different weighted masks for the same image. The first two rows present two test images from \textbf{Pittsburgh} and the last two rows show test images from \textbf{Tokyo 24/7}. From the second column to the fifth column, the weighted masks are shown for the Simple, Simpler, ResNet18 and VGG11 city recognition CNNs, respectively. Note that a shallow net triggers on the sky or on disjoint regions in the image. ResNet18 focuses on wider regions and VGG11 is more selective.} \label{fig:different} \end{figure} To quantify the divergence between different models, we calculate the average residual $AR$ of each city image between any two models to investigate the consistency quantitatively: \begin{equation} AR_{m_1,m_2} = \frac{\sum_{h,w}\left|mask\_norm^c_{m_1}(h,w)-mask\_norm^c_{m_2}(h,w)\right|}{H\times W}, \end{equation} where $H$ and $W$ are the height and width of the weighted masks, and $m_1$ and $m_2$ represent the CNN models. The value of $AR_{m_1,m_2}$ ranges from 0 to 1, where 0 means that the two models learn exactly the same weighted mask for this image and 1 means that totally different weighted masks have been learned by the two models. Due to the considerable number of test images, we calculate the average $AR_{m_1,m_2}$ over all test images. All values of the average $AR_{m_1,m_2}$ between different network architectures and initializations are listed in Table~\ref{ar}. Comparing the values in Table~\ref{ar}, we find that the $AR$s between different architectures are in general larger than those between different initializations, which means that the network architecture affects the deep representations learned by city recognition CNNs more strongly than the training initialization does. 
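The $AR$ computation amounts to the mean absolute difference between two normalized weighted masks of the same image; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def average_residual(mask_a, mask_b):
    """AR between two weighted masks (H x W arrays with values in [0, 1]).
    Returns 0 for identical masks and 1 for maximally different masks."""
    assert mask_a.shape == mask_b.shape
    return np.abs(mask_a - mask_b).mean()
```

Averaging this quantity over all test images gives the per-pair entries reported in Table~\ref{ar}.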
This is also consistent with the results from Figure~\ref{fig:histogram}. \begin{table}[htb]\footnotesize \centering \caption{The average $AR_{m_1,m_2}$ between different network architectures and initializations. The values of $AR_{m_1,m_2}$ between different network architectures are all larger than those between different initializations.} \label{ar} \begin{tabular}{c|c} \hline Models ($m_1$-$m_2$) & Average $AR_{m_1,m_2}$ \\ \hline VGG11-ResNet18 & 0.4349 \\ \hline VGG11-Simple & 0.4303 \\ \hline VGG11-Simpler & 0.4502 \\ \hline ResNet18-Simple & 0.4118 \\ \hline ResNet18-Simpler & 0.4136 \\ \hline Simple-Simpler & 0.3149 \\ \hline VGG11-VGG11\_retrained & 0.2679 \\ \hline ResNet18-ResNet18\_retrained & 0.2265 \\ \hline Simple-Simple\_retrained & 0.2411 \\ \hline Simpler-Simpler\_retrained & 0.2460 \\ \hline \end{tabular} \end{table} Besides calculating the $AR$s between different city recognition models, we can also use $R_{p}^c$ to obtain consistent results. In Figure~\ref{fig:histogram} (a) and (b), different CNN architectures learn dissimilar histograms over the class-discriminative objects, whereas different initializations lead to similar values of $R_{p}^c$ over the class-discriminative objects. In Figure~\ref{fig:histogram} (b), we can also see that the values of $R_{p}^c$ for VGG11 and ResNet18 are larger than those of the shallow networks over all class-discriminative objects, which also reflects that the convolutional features learned by deep network architectures are more semantically interpretable than those learned by shallow ones. Therefore, the influence of the network architecture on the interpretability of CNN features is stronger than that of different initializations. \vspace{-1ex} \section{Conclusion} In this work, we provided a framework to investigate the emergence of semantic objects as discriminative attributes in the ultimate layer of a network. This is consistent with the way humans understand city images. 
We applied our proposed framework to investigate the influence of network architectures and different initializations on the interpretability. We conclude that the network architecture affects the learned visual representations more strongly than different initializations do. {\small \bibliographystyle{ieee}
\section{Introduction} \label{sec:introduction} \vspace{-4pt} Complex problems from a wide range of fields can be modeled and described as complex networks~\cite{NewmanGirvan2004}. In such networks, nodes that present similar properties often tend to be linked to each other, thus forming consistent subgraphs with dense interconnections that are called \textit{communities}~\cite{NewmanGirvan2004}. The detection of communities in complex networks is an important step in the multitude of possible analyses that can be performed on such models. When a complex problem is modeled as a network, the identification of communities may allow both the comprehension of characteristics that are specific to subgroups of nodes and the understanding of how such nodes interact with each other~\cite{NewmanGirvan2004}. Although communities play an important role in network analysis, as they allow the identification of functional properties of a group of nodes and also of the ways complex behaviors emerge from simple individual functions, the formal definition of a community is still vague in the literature. Therefore, one of the challenges associated with the development of community detection algorithms is to select which metric should be used to properly evaluate whether a given set of nodes actually represents a community with characteristics that are relevant to the context of the problem. Such metrics become even more important considering that a large part of the algorithms for community detection are based on the optimization of \textit{quality functions} (or \textit{fitness} functions). Among all the quality metrics described in the literature, one of the most widely adopted is Modularity~\cite{NewmanGirvan2004}, which assumes that a community is a \textit{module of the network} and that two nodes belonging to the same community tend to have a much higher probability of being connected to each other than two nodes belonging to different communities. 
Given that each quality function may be intended to identify a set of communities according to one of the different existing definitions, the resulting partition of the original complex network invariably reflects the characteristics of such a definition. Besides, it is known that the optimization of some of these quality functions may lead to partitions of the network that do not correspond to the real partition observed in practice~\cite{lancichinettiEtAl20011}. The above scenario is even worse when \textit{overlapping communities} are considered. Unlike disjoint communities of complex networks, in which each node belongs to a single community, the partition of the network into overlapping communities allows some nodes (known as \textit{bridges}) to belong to different communities at the same time. In this context, aspects such as the \textit{clustering coefficient}\footnote{The clustering coefficient corresponds to the ratio between the number of triangles formed by a given node and all possible triangles that could be formed.} of the network and the number of edges connecting a given node to its neighbors (both in and out of its community) may have different impacts on the choice of a bridge in complex networks that model different real-world situations. Therefore, in this paper a new flexible fitness function for community detection is proposed. This new metric, named \textit{Flex}, allows the user to predefine which characteristics should be present in the communities that will be obtained by the optimization process, thus allowing the identification of distinct sets of communities for the same complex network by simply adjusting a few intuitive parameters. Flex was combined with an adapted version of the immune-inspired optimization algorithm named cob-aiNet[C]~\cite{CoelhoEtAl2011} and applied to identify both disjoint and overlapping communities in a set of eight artificial and four real-world complex networks. 
The obtained results have shown that the partitions obtained with the optimization of this new metric are more coherent with the known real partitions than those obtained with the optimization of modularity. This paper is organized as follows. The new flexible objective function for community detection will be presented in Section~\ref{sec:flex}, together with some insights about how it can be applied to identify overlapping communities and a brief description of the optimization algorithm adopted here. The experimental methodology and the obtained results will be discussed in Section~\ref{sec:experiments}. Finally, concluding remarks and indications for future work will be given in Section~\ref{sec:conclusion}. \section{A Flexible Objective Function for Community Detection} \label{sec:flex} \vspace{-4pt} In a broader definition, a community structure of a network is a partition of the nodes so that each part is densely connected. The modularity metric~\cite{NewmanGirvan2004} tries to capture this definition by analyzing the difference between the number of edges inside a community and the expected number of edges that would be observed if this community was formed in a random network. Although modularity is widely adopted by the complex network community, its null model implies that the expected number of edges between two groups decreases as the network size increases. Therefore, for larger networks, a single connection between two nodes of different communities may result in the merging of these two communities, in order to increase (maximize) modularity. This aspect is known as the \textit{resolution limit} of the metric~\cite{fortunato2007resolution}. 
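For reference, the modularity discussed above is the standard Newman-Girvan quantity $Q=\frac{1}{2m}\sum_{ij}\left[A_{ij}-\frac{k_ik_j}{2m}\right]\delta(c_i,c_j)$; a minimal numpy sketch for an undirected, unweighted network (the function name is ours):

```python
import numpy as np

def modularity(A, labels):
    """Newman-Girvan modularity of a node partition. A is the symmetric
    adjacency matrix; labels[i] is the community of node i."""
    A = np.asarray(A, dtype=float)
    labels = np.asarray(labels)
    k = A.sum(axis=1)                      # node degrees
    two_m = A.sum()                        # twice the number of edges
    same = np.equal.outer(labels, labels)  # pairs in the same community
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m
```

For two disjoint triangles split into their natural communities this gives $Q=0.5$, while grouping all nodes into a single community gives $Q=0$.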
Additionally, when a given node has few links connecting it to a small community and most of its links connecting it to a large community, the optimization of modularity will often include such node into the larger community, without considering the local contribution of this node to the smaller community. This can be a drawback if the node has a higher clustering coefficient with respect to the smaller community than to the larger one. With that in mind, a new quality function for community detection, hereafter called \textit{Flex}, is proposed. The optimization of Flex tries to balance two objectives at the same time: maximize both the number of links inside a community and the local clustering coefficient of each community. Additionally, it also penalizes the occurrence of open triangles (i.e., it minimizes the \textit{random model} effect~\cite{erdds1959random}). The first step to calculate Flex for a given partition of the network is to define the \textit{Local Contribution} of a node $i$ to a given community $c$: \begin{equation} LC(i,c) = \alpha\cdot\bigtriangleup(i,c) + (1-\alpha)\cdot N(i,c) - \beta\cdot\wedge(i,c), \label{eq:LC} \end{equation} \noindent where $\bigtriangleup(i,c)$ is the ratio between the transitivity of node $i$ (number of triangles that $i$ forms) inside community $c$ and the total transitivity of this node in the full network, $N(i,c)$ is the ratio between the number of neighbors node $i$ has inside community $c$ and its total number of neighbors, and $\wedge(i,c)$ is the ratio between the number of open triangles in community $c$ that contain node $i$ and the total number of open triangles containing $i$ in the whole network. The parameters $\alpha$ and $\beta$ are weights that balance the importance of each term. The weight $\alpha$ directly dictates whether the optimization process will tend to insert a given node into a clustered community or into a community that contains the majority of this node's neighbors.
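To make the definition concrete, the Local Contribution of Eq.~(\ref{eq:LC}) can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: the network is assumed to be given as a dictionary mapping each node to the set of its neighbors, open triangles are counted as wedges centered at $i$, and all function names are made up for the example.

```python
def triangles(adj, i, nodes):
    # triangles containing i whose other two vertices both lie in `nodes`
    nbrs = adj[i] & nodes
    return sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])


def wedges(adj, i, nodes):
    # open triangles centered at i inside `nodes`: unlinked neighbor pairs
    nbrs = adj[i] & nodes
    return sum(1 for u in nbrs for v in nbrs if u < v and v not in adj[u])


def local_contribution(adj, i, community, alpha, beta):
    """LC(i,c) of Eq. (1): alpha*tri + (1-alpha)*nbh - beta*wed."""
    everything = set(adj)
    tri_all = triangles(adj, i, everything) or 1  # guard against division by zero
    wed_all = wedges(adj, i, everything) or 1
    tri = triangles(adj, i, community) / tri_all
    nbh = len(adj[i] & community) / len(adj[i])
    wed = wedges(adj, i, community) / wed_all
    return alpha * tri + (1 - alpha) * nbh - beta * wed
```

For instance, on a triangle $\{0,1,2\}$ with a pendant node $3$ attached to node $0$, node $0$ contributes its only triangle and two thirds of its neighborhood to the community $\{0,1,2\}$, and forms no open triangle inside it.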
It is important to consider both transitivity and the neighborhood of each node in the optimization, as both concepts are not necessarily related (i.e., a given node will not necessarily have high transitivity with the majority of its neighbors). Therefore, by weighting these two criteria, the user can emphasize each of them according to what is desirable in a given practical situation. Given the Local Contribution of all nodes to each community, the \textit{Community Contribution} (CC) of a community $c$ in a given partition is defined as: \begin{equation} CC(c) = \sum_{i \in c}{LC(i,c)} - \left(\frac{|c|}{|V|}\right)^{\gamma}, \label{eq:CC} \end{equation} \noindent where $|.|$ is the number of elements in a set, $V$ is the set of nodes of a network and $\gamma$ is the penalization weight. The penalization in this equation is devised to avoid the generation of a trivial solution, in which the entire network forms a single community. Finally, the Flex value of a given partition $p$ is given by: \begin{equation} Flex(p) = \frac{1}{|V|}\sum_{c \in p}{CC(c)} \end{equation} It is also important to highlight that the penalization term of Eq.~\ref{eq:LC} ensures that the convergence to the random model is penalized, even if $\alpha$ favors only the number of neighbors (i.e., $\alpha=0$) in the definition of the communities. \vspace{-4pt} \subsection{\textbf{Applying Flex to Identify Overlapping Nodes}} \label{sec:overlap} The Flex fitness metric also provides insights about overlapping nodes. As the importance of transitivity and neighborhood is balanced by $\alpha$, this can be exploited to infer whether a given node also belongs to another community. When using Flex as a fitness function, some nodes may be more sensitive than others to the weight $\alpha$. This happens when, for example, a node has a fraction of its neighbors on a clustered community and the remaining neighbors spread across one or more communities with lower transitivity.
Therefore, a simple heuristic that allows the identification of overlapping nodes is to search for nodes that do not make a significant contribution to one of the $\alpha$-weighted factors, i.e., nodes that are sensitive to changes of $\alpha$. After finding those nodes, we can allocate them to other communities that share a certain fraction of neighbors with them. This heuristic is summarized in Alg.~\ref{alg:overlapheuristic}. \vspace{-10pt} \begin{algorithm} \KwData{thresholds $thr\bigtriangleup$ and $thrN$ for the contribution to transitivity and neighborhood, respectively, and threshold $thrSh$ of shared neighbors between communities.} \KwResult{New set of communities with overlapping nodes.} \BlankLine \For{each node $i$}{ $c$ = community that contains $i$ \\ \If{$\bigtriangleup(i,c) < thr\bigtriangleup$ or $N(i,c) < thrN$}{ \For{$\forall c_j \neq c$}{ \If{$N(i,c_j) > thrSh$}{ Add $i$ to community $c_j$ } } } } \label{alg:overlapheuristic} \caption{Heuristic to find overlapping nodes.} \end{algorithm} \vspace{-20pt} \subsection{The cob-aiNet[C] Algorithm} \label{sec:cobAiNet} As previously mentioned, an adaptation of the cob-aiNet[C] algorithm (\textit{Con\-cen\-tra\-tion-based Artificial Immune Network for Combinatorial Optimization} -- \cite{CoelhoEtAl2011}) was adopted in this paper to obtain a set of communities for complex networks that maximize the new proposed quality function (Flex). The cob-aiNet[C] algorithm, which was originally proposed to solve combinatorial optimization problems~\cite{CoelhoEtAl2011}, was previously adapted to identify both disjoint and overlapping communities in complex networks~\cite{olivetti2013identifying}. As most of the adaptations proposed in~\cite{olivetti2013identifying} were adopted here as well, only a brief explanation of the general aspects of cob-aiNet[C] will be presented here, together with details about those aspects that differ from the adaptation proposed in~\cite{olivetti2013identifying}.
For further details, the reader is referred to~\cite{CoelhoEtAl2011,olivetti2013identifying}. The cob-aiNet[C] is a bioinspired search-based optimization algorithm that contains operators inspired by the natural immune system of vertebrates. Therefore, it evolves a population of candidate solutions of the problem (cells, or possible partitions of the complex network) through a sequence of cloning, mutation and selection steps, guided by the fitness of each individual solution. Besides these evolutionary steps, all the cells in the cob-aiNet[C] population are compared to each other and, whenever a given cell is more similar to a better one than a given threshold, its concentration (a real value assigned to each cell) is reduced. This concentration can also be increased according to the fitness of the cell (higher fitness leads to higher concentration). Such concentration-based mechanism is an essential feature of the algorithm, as it controls the number of clones that will be generated for each cell at each iteration, the intensity of the mutation process that will be applied to each clone and when a given cell should be eliminated from the population (when its concentration becomes null). When compared to the adaptations made in~\cite{olivetti2013identifying}, the only differences are associated with the new hypermutation operator, which will be discussed next, and the new approach to obtain overlapping communities described in Sect.~\ref{sec:overlap}. \vspace{-10pt} \subsubsection{\textbf{The new Hypermutation Operator}} \label{sec:cobHypermutation} To properly explain the new hypermutation operator, it is important to know that each cell in the population of the algorithm is represented as an array of integers with length equal to the number $N$ of nodes of the complex network. Each position $i$ of the array corresponds to a node of the network and assumes a value $j \in \{1, 2, \ldots, N\}$ indicating that nodes $i$ and $j$ belong to the same community.
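As an aside, this locus-based representation can be decoded into an explicit set of communities by taking the connected components of the links $(i, \mathrm{cell}[i])$. The Python sketch below (our own illustration, $0$-indexed for simplicity, using a standard union-find; the paper does not spell out the decoding step) shows one way to do it.

```python
def decode(cell):
    # communities = connected components of the graph with edges (i, cell[i])
    n = len(cell)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in enumerate(cell):
        parent[find(i)] = find(j)

    communities = {}
    for i in range(n):
        communities.setdefault(find(i), set()).add(i)
    return list(communities.values())
```

For example, the cell `[1, 2, 2, 4, 3]` links nodes $0$-$1$-$2$ into one community and $3$-$4$ into another.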
This operator, which is applied to all cells in the population at a given iteration, is basically a random modification of the integer values in $n_{mut}$ positions of the array (cell), where $n_{mut}$ is inversely proportional to the fitness and concentration of each cell (further details can be found in~\cite{CoelhoEtAl2011,olivetti2013identifying}). The $n_{mut}$ positions of the cell that will suffer mutation are randomly selected, and so are the values that will be inserted into these positions. However, the probability that a given value $k$ (associated with node $k$) replaces the current value in position $i$ is directly proportional to $|N(i) \cap N(k)|$, where $N(i)$ is the set of nodes that are neighbors of $i$. \vspace{-4pt} \section{Experimental Results} \label{sec:experiments} In order to assess whether Flex is able to lead to improvements in community detection, when compared to Modularity, an extensive experimental setup, composed of $8$ artificial and $4$ real-world networks, was devised. The artificial networks, which were generated by the toolbox provided by Lancichinetti~\cite{lancichinetti2008benchmark}, comprise $4$ networks formed by high density communities (i.e., with a high number of internal edges), which facilitates the identification of the optimal partition, and $4$ networks with noisy communities (i.e., with higher probability of presenting edges connecting them to other communities). The artificial networks were generated with $50$, $100$, $200$ and $500$ nodes, with an average node degree of $10$ and a maximum degree of $15$. Such networks were generated with a maximum of $10$ communities, $3$ overlapping nodes belonging to an average of $2$ communities and an average clustering coefficient of $0.7$.
The mixing parameter, which introduces noise to the network structure, was set as $0.1$ for the first set of networks (labeled Network ${50-500}$ in the tables that follow) and $0.3$ for the second set (labeled Noise Network ${50-500}$). The real-world networks (with known partitions) that were chosen for the experiments were: Zachary's Karate Club~\cite{Zachary1977}; Dolphins Social Network~\cite{Lusseau2003}; American College Football~\cite{evans2010clique}; and a network of co-purchasing of books about US politics compiled by Krebs~\cite{krebs2004social}. A total of $20$ repetitions were performed for each network and for each fitness function adopted here (Flex and Modularity). The cob-aiNet[C] algorithm was empirically adjusted with the following parameters for all of the experiments: $\sigma_S=0.2$, maximum number of iterations equal to $1,500$, $\alpha^{Ini}=10$, $\alpha^{End}=1$, initial population with $4$ candidate solutions and maximum population size of $6$. After each run, the heuristic presented in Alg.~\ref{alg:overlapheuristic} was applied to the best solution returned by cob-aiNet[C]. The results were evaluated by the average of the Normalized Mutual Information (NMI), which indicates how close a given partition of the network is to the real partition (ground truth)~\cite{lancichinettiEtAl20011,olivetti2013identifying}. In this work, both the non-overlapping (labeled NMI in the tables that follow) and overlapping (labeled NMI OVER.) partitions were evaluated. Besides, the obtained results for overlapping partitions were also compared to those obtained with the technique proposed in~\cite{olivetti2013identifying} (labeled NMI MULTIMODAL). In the following tables (Tables~\ref{tab:networks} to~\ref{tab:realnetworks}), we also report the fitness of the returned solutions (labeled FIT), the number of overlapping communities obtained with the proposed heuristic (labeled \# Over.)
and with the technique described in~\cite{olivetti2013identifying} (labeled \# Over. Multimodal), the number of communities found (labeled \# Comm.) and the total time in seconds taken (labeled TIME)\footnote{All the experiments were performed on an Intel Core i5 with 2.7GHz, 8GB of RAM and OSX 10.9.2.}. All the parameters required by Flex were empirically defined here for groups of networks with similar structures, and are presented in Table~\ref{tab:parameters}. Notice though that, in practice, these parameters should be set depending on the goal of the network analysis (e.g. if partitions with highly clustered communities are required, $\alpha$ should be set with higher values). Also, if the partition structure of the network is known \textit{a priori}, the calibration of such parameters in order to obtain the known partition could indicate some characteristics of the network dynamics, such as, for example, the way that the connections of each node were established. \begin{table}[t] \caption{Weight parameters and heuristic thresholds for each dataset.} \vspace{-5pt} \centering{ \scriptsize{ \begin{tabular}{ccccccc} \hline Network & $\alpha$ & $\beta$ & $\gamma$ & $thr\bigtriangleup$ & $thrN$ & $thrSh$ \\ \hline Network 50, Karate & $0.8$ & $0.3$ & $2$ & $0.3$ & $0.6$ & $0.25$ \\ Network 100-500, Krebs & $0.8$ & $0.3$ & $4$ & $0.3$ & $0.7$ & $0.25$ \\ Noise Network 50-500 & $0.5$ & $1.0$ & $4$ & $0.3$ & $0.7$ & $0.45$ \\ Football & $0.8$ & $0.6$ & $4$ & $0.3$ & $0.6$ & $0.25$ \\ Dolphins & $0.4$ & $0.3$ & $4$ & $0.3$ & $0.6$ & $0.25$ \\ \hline \end{tabular} } } \label{tab:parameters} \end{table} \subsection{Artificial Networks with Overlapping Communities} \label{sec:artnetworks} As expected, for the first set of networks (results given in Tab.~\ref{tab:networks}), both Flex and Modularity obtained the same values of NMI for every experiment. 
This is due to the lower rate of noise adopted in the creation of such networks, which makes the identification of the real partitions trivial for most fitness functions. Notice though that the overlapping detection heuristic proposed here resulted in partitions with a perfect NMI score (equal to 1.0) for every network considered, except Network 100. In this particular dataset, one of the overlapping nodes did not meet the criteria established by the heuristic, so the obtained NMI was slightly lower than 1.0. \begin{table}[t] \caption{Results for the artificial networks with overlap and low level of noise.} \vspace{-18pt} \centering{ \scriptsize{ \subfloat[Network 50]{\input{network50.tex}} \subfloat[Network 100]{\input{network100.tex}} \\ \subfloat[Network 200]{\input{network200.tex}} \subfloat[Network 500]{\input{network500.tex}} } } \label{tab:networks} \vspace{-15pt} \end{table} The results for the second set of networks, which were generated with higher noise, are reported in Tab.~\ref{tab:noisenetworks}. In this scenario, the differences between Flex and Modularity become much more evident. In every situation the heuristic combined with Flex was able to find most of the overlapping nodes of each problem, as pointed out by the higher values of NMI. On the other hand, the method proposed in~\cite{olivetti2013identifying} obtained lower values of NMI by introducing many more (false) overlapping nodes into the partition. It is also noticeable that, in the presence of noise, Modularity also tends to find partitions with far fewer communities than Flex, which is due to the resolution limit discussed in Sect.~\ref{sec:flex}.
\begin{table}[t] \caption{Results for the artificial networks with overlap and high level of noise.} \vspace{-18pt} \centering{ \scriptsize{ \subfloat[Noise Network 50]{\input{network50_03.tex}} \subfloat[Noise Network 100]{\input{network100_03.tex}} \\ \subfloat[Noise Network 200]{\input{network200_03.tex}} \subfloat[Noise Network 500]{\input{network500_03.tex}} }} \label{tab:noisenetworks} \vspace{-15pt} \end{table} It is important to highlight that the results were statistically verified by means of the Kruskal paired test with significance $< 0.05$. Therefore, in Tables~\ref{tab:networks}--\ref{tab:realnetworks}, those results with statistical significance (p-value $< 0.05$) are marked in bold. \vspace{-5pt} \subsection{Real-world Social Networks} \label{sec:realworld} \vspace{-5pt} Regarding the real-world social networks, the obtained results (given in Tab.~\ref{tab:realnetworks}) show that, again, Flex leads to a significant improvement over Modularity. However, it is important to notice that the ground truths of such networks are related to the classification of the nodes of these datasets according to their respective domains, so they do not necessarily correspond to the true partitions of the networks. Therefore, it may be practically impossible to reach a perfect NMI score in some situations. It is also important to notice that those ground truths were originally devised without overlapping, so the NMI score with overlapping will always be smaller than the original NMI score. Some interesting characteristics of each of these networks can be identified through a combination of visual inspection of the obtained partitions together with an analysis of the weights ($\alpha$ and $\beta$) that led to the best values of NMI.
From the obtained results, it is possible to infer that both the Karate Club and Krebs networks are formed by highly clustered communities ($\alpha=0.8$), which makes sense: the Karate Club is a small social network prone to mutual friendships, and the Krebs network captures the interest of readers about particular subjects, as readers tend to buy only books that are related to their political views. The Football network required the same value for $\alpha$ but a much higher value for $\beta$, which means that this particular network does not allow open triangles. The reason is the organization of tournaments, which limits the occurrence of intra-cluster relationships. Finally, for the Dolphins network the required weights are more favorable to the establishment of inter-community relationships instead of clustering, which might be related to the hierarchy in their society that favors the creation of several hubs inside a community~\cite{Lusseau2003}, thus raising the number of open triangles. \begin{table}[t] \caption{Results obtained for the real world networks.} \vspace{-18pt} \centering{ \scriptsize{ \subfloat[Karate Club]{\input{karate.tex}} \subfloat[Dolphins]{\input{dolphins.tex}} \\ \subfloat[Football]{\input{footballTSEinput.tex}} \subfloat[Krebs]{\input{krebs.tex}} }} \label{tab:realnetworks} \vspace{-18pt} \end{table} In order to illustrate the overlapping communities obtained by the combination of Flex and the proposed heuristic, Fig.~\ref{fig:realnetworks} depicts the best partitions with overlapping nodes for each problem. In Fig.~\ref{fig:realnetworks}, the colors represent the communities found by the optimization of Flex, the shapes represent the communities in the ground truth and the larger nodes are the overlapping nodes. It is visually noticeable that the community formation found using the Flex function makes sense, given the weights for each network.
Also, every overlapping node is clearly positioned between two or more distinct groups. \begin{figure}[t] \subfloat[Karate Club]{\includegraphics[trim=0cm 12cm 0cm 1cm, clip=true,width=0.42\textwidth]{karateF.pdf}} \subfloat[Dolphins]{\includegraphics[trim=0cm 20cm 8cm 0cm, clip=true,width=0.5\textwidth]{dolphinsF.pdf}} \\ \subfloat[Football]{\includegraphics[trim=0cm 14cm 4cm 1cm, clip=true,width=0.5\textwidth]{football.pdf}} \subfloat[Krebs]{\includegraphics[trim=2cm 20cm 10cm 0cm, clip=true,width=0.45\textwidth]{krebsF.pdf}} \caption{Results obtained by cob-aiNet[C] with Flex for the real-world networks.} \label{fig:realnetworks} \vspace{-8pt} \end{figure} \vspace{-5pt} \section{Conclusion} \label{sec:conclusion} \vspace{-5pt} In this paper a novel fitness function for community detection in complex networks was introduced, together with a heuristic that, based on particular characteristics of this function, allows the identification of overlapping nodes. This new fitness function, called Flex, is parametrized in such a way that it can be adapted to obtain communities with different characteristics. Through an extensive experimental setup, in which Flex was combined with an immune-inspired algorithm adapted for the optimization process (a novel mutation operator was also proposed in this paper), it was possible to verify that this new fitness function and heuristic are capable of leading to partitions close to the ground truth of a set of networks with different characteristics. As for future investigations, we intend to evaluate a multi-objective version of the proposed fitness function and verify whether it leads to a better set of overlapping nodes. Besides, a thorough sensitivity analysis of Flex's parameters will also be performed. \bibliographystyle{splncs}
\section{Introduction} The title of the talk was given to me by the organizers and forced me to rethink what implications gluon saturation\cite{McLerran:1993ni}-\cite{McLerran:1993ka} and the Glasma\cite{Kovner:1995ja}-\cite{Kovner:1995ts} might have for the phase diagram of QCD in the region of high baryon number density. The problem I will address is what values of baryon number density and temperature are probed in the fragmentation region of nucleus-nucleus collisions at asymptotically high energy. I was therefore forced to go back and update the ideas found in the paper by Anishetty, Koehler and McLerran\cite{Anishetty:1980zp}, and put them in a more modern context. This work is largely based on and stimulated by the considerations of Li and Kapusta\cite{Li:2016wzh}. This region was studied empirically, based on experimental data, in the classic work of Cleymans and Becattini\cite{Becattini:2007qr}. I have attempted to simplify some of the arguments in these very nice papers, and address specifically the region of the phase diagram of QCD that one can study at asymptotic energies. The result I find by very simple kinematic arguments is amusing: At asymptotically high energies and fixed rapidity as measured from the end of the fragmentation region, one makes a system initially at a very high temperature and a very high baryon number density. This temperature and baryon density grow as the collision energy increases. This can be much higher than the expected scale of the deconfinement or chiral symmetry restoration temperatures and densities. The ratio of the baryon chemical potential to temperature asymptotes to a constant. Since baryon number is conserved and the entropy is approximately conserved during expansion, the expansion dynamics should little change this ratio. Therefore at late stages of the collision, one is probing the baryon rich region of the phase diagram of QCD.
Although the experimental challenge to probe such a region is formidable, the theoretical advantage of studying such a region is that at a fixed temperature and baryon number density, the matter produced at high energy is more slowly expanding than at lower energy. This gives longer time for long range correlations to grow. In addition, I believe we understand the dynamics of the early times better at high energy than we do at low energy. \section{Some Phenomenological Considerations} \begin{figure} \centerline{\includegraphics[width=0.5\linewidth]{cleymans.pdf}} \caption{Values of temperature and baryon number chemical potential as a function of rapidity for SPS and RHIC energies, from Ref. \cite{Becattini:2007qr} } \label{cleymans} \end{figure} In Fig. \ref{cleymans}, the values of baryon chemical potential and temperature extracted from fits to the spectra of produced particles at the top SPS and RHIC energies are shown. The ratio $\mu_B/T$ for systems where the baryon number density is small compared to the temperature is, up to a constant, the ratio of baryon number density to entropy, \begin{equation} \mu_B/T \sim N_B/S \end{equation} Since baryon number is conserved during expansion and entropy is approximately conserved if the effects of viscosity are ignored, this ratio is an approximate invariant under expansion. If the system initially has a very high temperature, then the initial value of $\mu_B$ must be correspondingly increased. The advantage of studying systems that are initially very hot and dense is that by the time they reach low values of density and temperature, they are expanding more slowly than systems that started at lower temperatures and densities. This is because characteristic volumes and times are larger at fixed energy density for systems with higher initial densities.
If there was limiting fragmentation for both baryons and pions, then we would expect that the rapidity distributions of these particles would be energy independent when measured as a function of the rapidity distance from the kinematic limit for nucleon-nucleon scattering. Such limiting fragmentation has been observed at RHIC energies. The questions we will address are how the initial energy density and baryon number density scale with energy, and how this independence of $\mu_B/T$ arises. \section{Using Saturation to Estimate the Early Baryon Density and Temperatures} In the CGC description of the initial condition for ultra-relativistic heavy ion collisions, the saturation momentum grows as a function of the rapidity distance from the fragmentation region as an exponential in rapidity (a power law in $1/x$). If we sit in the fragmentation region of one heavy ion at some fixed rapidity difference from the beam rapidity, the density of partons in the beam nucleus does not grow. Let us call this rapidity difference $\Delta y$. The saturation momentum of the first nucleus is fixed at \begin{equation} Q_1^2 \sim Q_0^2 e^{\kappa \Delta y} \end{equation} where phenomenologically $\kappa$ is a number of order $\kappa \sim 0.2-0.3$. The saturation momentum of the other nucleus grows with the beam energy as \begin{equation} Q_2^2 \sim Q_0^2 e^{\kappa(2y_{cm} - \Delta y)} \end{equation} where $y_{cm}$ is the center of mass rapidity. At the saturation momentum of the first nucleus, the second nucleus is completely saturated and appears as a black disk. The collision will strip all of the partons from nucleus one up to a typical momentum scale of $Q_2^{sat}$. Insofar as the parton distributions have little dependence upon the momentum scale at which they are measured, the multiplicity will only very weakly depend upon $Q_2^{sat}$ and therefore the beam energy. There is therefore approximate limiting fragmentation.
Let us estimate the density of produced particles in the fragmentation region of nucleus one. For central collisions of two nuclei of size $R$, the multiplicity per unit area scales as \begin{equation} {1 \over {\pi R^2}} {{dN} \over {dy}} \sim Q_1^2 \end{equation} The initial time and initial longitudinal length scale as $Q_2^{-1}$, so that the initial entropy density is \begin{equation} {S_{initial} \over V} \sim Q_1^2 Q_2 \end{equation} The initial baryon density is caused by compression of the target nucleons (nucleus 1) as the projectile (nucleus 2) passes through the target. This is easiest seen in the target rest frame. There the projectile is a thin disk striking a row of target nucleons, Fig. \ref{sequential}. \begin{figure} \centerline{\includegraphics[width=0.5\linewidth]{sequential.pdf}} \caption{The projectile nucleus passing through a target at rest } \label{sequential} \end{figure} The compression appears to be $1/(1-v)$, where $v$ is the velocity of the projectile, but the struck nucleons are moving and the true compression is computed in the rest frame, which reduces the compression by a factor of $\gamma_{comp}^{-1}$, resulting in a real compression factor of $\gamma_{comp}$. Now we need to determine the velocity associated with this compression. This is also simple to estimate, since the longitudinal energy of fragments is trapped inside the nuclear fragmentation region so long as it is formed inside the target. This means that \begin{equation} \gamma_{comp} \sim Q_2 R_{1} \end{equation} since the typical transverse mass scale for a particle is of the order of the saturation momentum of the projectile nucleus. We have therefore that \begin{equation} {{N_B} \over V} \sim Q_2 R_1 \end{equation} so that $N_B/S$ is independent of the saturation momentum of nucleus 2 and is proportional to $R_1/Q_1^2$. This ratio depends on the rapidity difference from the kinematic endpoint, $\Delta y$, but is independent of $R_1$, since the saturation momentum squared itself scales with the nuclear thickness, $Q_1^2 \propto A^{1/3} \propto R_1$.
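The cancellation in this scaling argument can be checked with a toy numerical sketch (arbitrary units; all names and parameter values are illustrative, not fitted): the ratio $N_B/S \sim Q_2 R_1/(Q_1^2 Q_2) = R_1/Q_1^2$ drops the beam-energy dependence carried by $Q_2$.

```python
import math


def nb_over_s(Q0sq, kappa, dy, ycm, R1):
    """Toy estimate of N_B/S from the parametric scalings above."""
    Q1sq = Q0sq * math.exp(kappa * dy)                       # Q_1^2 ~ Q_0^2 e^{kappa dy}
    Q2 = math.sqrt(Q0sq * math.exp(kappa * (2 * ycm - dy)))  # Q_2^2 ~ Q_0^2 e^{kappa(2 y_cm - dy)}
    entropy_density = Q1sq * Q2                              # S/V ~ Q_1^2 Q_2
    baryon_density = Q2 * R1                                 # N_B/V ~ Q_2 R_1
    return baryon_density / entropy_density                  # = R_1 / Q_1^2


# the ratio is independent of the center-of-mass rapidity y_cm:
low_energy = nb_over_s(1.0, 0.25, 2.0, ycm=5.0, R1=6.0)
high_energy = nb_over_s(1.0, 0.25, 2.0, ycm=9.0, R1=6.0)
```

Both calls return $R_1/Q_1^2 = 6\,e^{-0.5}$ regardless of $y_{cm}$, which is the statement that $\mu_B/T$ asymptotes to an energy-independent value at fixed $\Delta y$.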
These arguments are of course too crude to accurately resolve the rapidity dependence of this ratio. To do this properly requires a detailed simulation of the particle production and baryon compression factors. \section{Summary and Conclusions} The simple conclusion from this analysis is that as one makes the collision energy asymptotically higher, then at some fixed distance away from the kinematic end point associated with the energy per nucleon in a heavy ion collision, the produced matter is initially hotter, denser and produced earlier. The entropy per baryon is roughly independent of energy, so that at late times at fixed energy density the matter being studied is independent of beam energy. However, the dynamics of expansion will have changed, and the matter is expanding more slowly as the energy increases. Therefore from a theorist's perspective, studying matter at finite baryon density is perhaps simpler as the energy is increased. Unfortunately, it becomes much more difficult for the experimentalists. \section{Acknowledgements} L. McLerran was supported under Department of Energy contract No. DE-SC0012704 at Brookhaven National Laboratory, and L. McLerran is now supported under Department of Energy grant No. DE-FG02-00ER41132.
\section*{Introduction} Polyominoes are, roughly speaking, plane figures obtained by joining squares of equal size edge to edge. They originated in recreational mathematics and have been the subject of many combinatorial investigations, including tiling problems. A connection of polyominoes to commutative algebra was first established by the second author of this paper by assigning to each polyomino its ideal of inner minors, see \cite{Q}. This class of ideals widely generalizes the ideal of $2$-minors of a matrix of indeterminates, and even the ideal of $2$-minors of two-sided ladders. It also includes the meet-join ideal of plane distributive lattices. Those classical ideals have been extensively studied in the literature, see for example \cite{BV}, \cite{C} and \cite{HT}. Typically, for such ideals one determines their Gr\"obner bases and resolutions, computes their regularity, and checks whether the rings defined by them are normal, Cohen-Macaulay or Gorenstein. A first step in this direction for the inner minors of a polyomino (also called polyomino ideals) was taken by Qureshi in the aforementioned paper. Very recently, the convex polyominoes whose ideal of inner minors is linearly related or has a linear resolution have been classified in \cite{EHH}. For some special polyominoes also the regularity of the ideal of inner minors is known, see \cite{ERQ}. In this paper, for balanced polyominoes, we provide some positive answers to the questions addressed before. To define a balanced polyomino, one labels the vertices of a polyomino by integers in such a way that the row and column sums are zero along intervals that belong to the polyomino. Such a labeling is called admissible. There is a natural way to attach to each admissible labeling of a polyomino ${\mathcal P}$ a binomial.
The given polyomino is called balanced if all the binomials arising from admissible labelings belong to the ideal of inner minors $I_{\mathcal P}$ of ${\mathcal P}$. It turns out that ${\mathcal P}$ is balanced if and only if $I_{\mathcal P}$ coincides with the lattice ideal determined by ${\mathcal P}$, see Proposition~\ref{ilambda}. This is the main observation of Section~\ref{innerminors}, where we provide the basic definitions regarding polyominoes and their ideals of inner minors. An important consequence of Proposition~\ref{ilambda} is the result stated in Corollary~\ref{primep}, which asserts that for any balanced polyomino ${\mathcal P}$, the ideal $I_{\mathcal P}$ is a prime ideal and that its height coincides with the number of cells of ${\mathcal P}$. It is conjectured in \cite{Q} by the second author of this paper that $I_{\mathcal P}$ is a prime ideal for any simple polyomino ${\mathcal P}$. A polyomino is called simple if it has no `holes', see Section~\ref{innerminors} for the precise definition. We expect that simple polyominoes are balanced. This would then imply Qureshi's conjecture. In Section 2 we identify the primitive binomials of a balanced polyomino (Theorem~\ref{primitive}) and deduce from this that for a balanced polyomino ${\mathcal P}$ the ideal $I_{\mathcal P}$ has a squarefree initial ideal for any monomial order. This then implies, as shown in Corollary~\ref{balancedcm}, that the residue class ring of $I_{\mathcal P}$ is a normal Cohen-Macaulay domain. Finally, in Section~\ref{classes} we show that all row or column convex, and all tree-like, polyominoes are simple and balanced. This supports our conjecture that simple polyominoes are balanced. \section{The ideal of inner minors of a polyomino} \label{innerminors} In this section we introduce polyominoes and their ideals of inner minors. Given $a=(i,j)$ and $b=(k,l)$ in ${\NZQ N}^2$ we write $a\leq b$ if $i\leq k$ and $j\leq l$.
The set $[a,b]=\{c\in{\NZQ N}^2\:\; a\leq c\leq b\}$ is called an {\em interval}, and an interval of the form $C=[a,a+(1,1)]$ is called a {\em cell} (with left lower corner $a$). The elements of $C$ are called the {\em vertices} of $C$, and the sets $\{a,a+(1,0)\}, \{a,a+(0,1)\}, \{a+(1,0), a+(1,1)\}$ and $\{a+(0,1), a+(1,1)\}$ the {\em edges} of $C$. Let ${\mathcal P}$ be a finite collection of cells of ${\NZQ N}^2$, and let $C$ and $D$ be two cells of ${\mathcal P}$. Then $C$ and $D$ are said to be {\em connected}, if there is a sequence of cells $C= C_1, \ldots, C_m =D$ of ${\mathcal P}$ such that $C_i \cap C_{i+1}$ is an edge of $C_i$ for $i=1, \ldots, m-1$. If, in addition, $C_i \neq C_j$ for all $i \neq j$, then the sequence ${\mathcal C}= C_1, \ldots, C_m$ is called a {\em path} (connecting $C$ and $D$). The collection of cells ${\mathcal P}$ is called a {\em polyomino} if any two cells of ${\mathcal P}$ are connected, see Figure~\ref{polyomino}. The vertex set $V({\mathcal P})$ of ${\mathcal P}$ is the union of the vertex sets of its cells. \begin{figure}[htbp] \begin{center} \includegraphics[width =3cm]{polyomino} \end{center} \caption{A polyomino}\label{polyomino} \end{figure} Let ${\mathcal P}$ be a polyomino, and let $K$ be a field. We denote by $S$ the polynomial ring over $K$ in the variables $x_{ij}$ with $(i,j)\in V({\mathcal P})$. Following \cite{Q}, a $2$-minor $x_{ij}x_{kl}-x_{il}x_{kj}\in S$ is called an {\em inner minor} of ${\mathcal P}$ if all the cells $[(r,s),(r+1,s+1)]$ with $i\leq r\leq k-1$ and $j\leq s\leq l-1$ belong to ${\mathcal P}$. In that case the interval $[(i,j),(k,l)]$ is called an {\em inner interval} of ${\mathcal P}$. The ideal $I_{\mathcal P}\subset S$ generated by all inner minors of ${\mathcal P}$ is called the {\em polyomino ideal} of ${\mathcal P}$. We also set $K[{\mathcal P}]=S/I_{\mathcal P}$. Let ${\mathcal P}$ be a polyomino. An interval $[a,b]$ with $a=(i,j)$ and $b=(k,l)$ is called a {\em horizontal edge interval} of ${\mathcal P}$ if $j=l$ and the sets $\{(r,j),(r+1,j)\}$ for $r=i,\ldots,k-1$ are edges of cells of ${\mathcal P}$.
Similarly one defines vertical edge intervals of ${\mathcal P}$. According to \cite{Q}, an integer valued function $\alpha\: V({\mathcal P}) \to {\NZQ Z}$ is called {\em admissible}, if for all maximal horizontal or vertical edge intervals ${\mathcal I}$ of ${\mathcal P}$ one has \[ \sum_{a\in {\mathcal I}}\alpha(a)=0. \] \begin{figure}[htbp] \begin{center} \includegraphics[width =3.3cm]{labeling} \end{center} \caption{An admissible labeling}\label{labeling} \end{figure} In Figure~\ref{labeling} an admissible labeling of the polyomino displayed in Figure~\ref{polyomino} is shown. Given an admissible labeling $\alpha$ we define the binomial \[ f_{\alpha} = \prod_{a \in V({\mathcal P}) \atop \alpha(a) > 0} x_a^{\alpha(a)} - \prod_{a \in V({\mathcal P}) \atop \alpha(a) < 0} x_a^{-\alpha(a)}. \] Let $J_{\mathcal P}$ be the ideal generated by the binomials $f_\alpha$ where $\alpha$ is an admissible labeling of ${\mathcal P}$. It is obvious that $I_{\mathcal P}\subset J_{\mathcal P}$. We call a polyomino {\em balanced} if for any admissible labeling $\alpha$, the binomial $f_{\alpha}$ belongs to $I_{{\mathcal P}}$. This is the case if and only if $I_{\mathcal P}=J_{\mathcal P}$. Consider the free abelian group $G=\Dirsum_{(i,j)\in V({\mathcal P})}{\NZQ Z} e_{ij}$ with basis elements $e_{ij}$. To any cell $C=[(i,j),(i+1,j+1)]$ of ${\mathcal P}$ we attach the element $b_C=e_{ij}+e_{i+1,j+1}- e_{i+1,j}-e_{i,j+1}$ in $G$ and let $\Lambda\subset G$ be the lattice spanned by these elements. \begin{Lemma} \label{lattice} The elements $b_C$ form a ${\NZQ Z}$-basis of $\Lambda$ and hence $\rank_{\NZQ Z} \Lambda=|{\mathcal P}|$. Moreover, $\Lambda$ is saturated. In other words, $G/\Lambda$ is torsionfree. \end{Lemma} \begin{proof} We order the basis elements $e_{ij}$ lexicographically. Then the leading term of $b_C$ is $e_{ij}$. This shows that the elements $b_C$ are linearly independent and hence form a ${\NZQ Z}$-basis of $\Lambda$.
We may complete this basis of $\Lambda$ by the elements $e_{ij}$ for which $(i,j)$ is not a left lower corner of a cell of ${\mathcal P}$ to obtain a basis of $G$. This shows that $G/\Lambda$ is free, and hence torsionfree. \end{proof} The lattice ideal $I_\Lambda$ attached to the lattice $\Lambda$ is the ideal generated by all binomials \[ f_v= \prod_{a \in V({\mathcal P}) \atop v_a > 0} x_a^{v_a} - \prod_{a \in V({\mathcal P}) \atop v_a < 0} x_a^{-v_a} \] with $v\in \Lambda$. \begin{Proposition} \label{ilambda} Let ${\mathcal P}$ be a balanced polyomino. Then $I_{\mathcal P}= I_\Lambda$. \end{Proposition} \begin{proof} The assertion follows once we have shown that for any $v\in \Lambda$ there exists an admissible labeling $\alpha$ of ${\mathcal P}$ such that $v_a=\alpha(a)$ for all $a\in V({\mathcal P})$. Indeed, since the elements $b_C\in \Lambda$ form a ${\NZQ Z}$-basis of $\Lambda$, there exist integers $z_C\in {\NZQ Z}$ such that $v=\sum_Cz_Cb_C$. We set $\alpha=\sum_{C\in {\mathcal P}}z_C\alpha_C$ where for $C=[(i,j),(i+1,j+1)]$, \[ \alpha_C((k,l))= \left\{ \begin{array}{ll} 1, & \text{if $(k,l)=(i,j)$ or $(k,l)=(i+1,j+1)$}, \\ -1, &\text{if $(k,l)=(i+1,j)$ or $(k,l)=(i,j+1)$}, \\ 0,& \text{otherwise}. \end{array} \right. \] Then $\alpha(a)=v_a$ for all $a\in V({\mathcal P})$. Since each $\alpha_C$ is an admissible labeling of ${\mathcal P}$ and since any linear combination of admissible labelings is again an admissible labeling, the desired result follows. \end{proof} \begin{Corollary} \label{primep} If ${\mathcal P}$ is a balanced polyomino, then $I_{\mathcal P}$ is a prime ideal of height $|{\mathcal P}|$. \end{Corollary} \begin{proof} By Proposition~\ref{ilambda}, $I_{\mathcal P}=I_\Lambda$ and by Lemma~\ref{lattice}, $\Lambda$ is saturated. It follows that $I_{\mathcal P}$ is a prime ideal, see \cite[Theorem 7.4]{MS}. Next it follows from \cite[Corollary 2.2]{ES} (or \cite[Proposition 7.5]{MS}) that $\height I_{\mathcal P}=\rank_{\NZQ Z} \Lambda$.
Hence the desired conclusion follows from Lemma~\ref{lattice}. \end{proof} Let ${\mathcal P}$ be a polyomino and let $[a,b]$ be an interval with the property that ${\mathcal P}\subset [a,b]$. According to \cite{Q}, a polyomino ${\mathcal P}$ is called {\em simple}, if for any cell $C$ not belonging to ${\mathcal P}$ there exists a path $C=C_1,C_2,\ldots,C_m=D$ with $C_i\not \in {\mathcal P}$ for $i=1,\ldots,m$ and such that $D$ is not a cell of $[a, b]$. It is conjectured in \cite{Q} that $I_{\mathcal P}$ is a prime ideal if ${\mathcal P}$ is simple. There exist examples of polyominoes for which $I_{\mathcal P}$ is a prime ideal but which are not simple. Such an example is shown in Figure~\ref{prime1}. On the other hand, we conjecture that a polyomino is simple if and only if it is balanced. This conjecture implies Qureshi's conjecture on simple polyominoes. \begin{figure}[htbp] \begin{center} \includegraphics[width =2cm]{simplepolyomino} \end{center} \caption{Not simple but prime}\label{prime1} \end{figure} \section{Primitive binomials of balanced polyominoes} \label{primitivesection} The purpose of this section is to identify for any balanced polyomino ${\mathcal P}$ the primitive binomials in $I_{\mathcal P}$. This will allow us to show that the initial ideal of $I_{\mathcal P}$ is a squarefree monomial ideal for any monomial order. The primitive binomials in $I_{\mathcal P}$ are determined by cycles. A sequence of vertices ${\mathcal C}= a_1,a_2, \ldots, a_m$ in $V({\mathcal P})$ with $a_m = a_1$ and such that $a_i \neq a_j$ for all $1 \leq i < j \leq m-1$ is called a {\em cycle} in ${\mathcal P}$ if the following conditions hold: \begin{enumerate} \item[(i)] $[a_i, a_{i+1}] $ is a horizontal or vertical edge interval of ${\mathcal P}$ for all $i= 1, \ldots, m-1$; \item[(ii)] for $i=1, \ldots, m$ one has: if $[a_i, a_{i+1}]$ is a horizontal edge interval of ${\mathcal P}$, then $[a_{i+1}, a_{i+2}]$ is a vertical edge interval of ${\mathcal P}$ and vice versa.
Here, $a_{m+1} = a_2$. \end{enumerate} \begin{figure}[htbp] \begin{center} \begin{tabular}{c} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[clip, width=3cm]{cycle} \end{center} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[clip, width=3cm]{notcycle} \end{center} \end{minipage} \end{tabular} \caption{A cycle and a non-cycle in ${\mathcal P}$} \label{figlena} \end{center} \end{figure} It follows immediately from the definition of a cycle that $m-1$ is even. Given a cycle ${\mathcal C}$, we attach to ${\mathcal C}$ the binomial \[ f_{{\mathcal C}} = \prod_{i=1}^{(m-1)/2} x_{a_{2i-1}} - \prod_{i=1}^{(m-1)/2} x_{a_{2i}}. \] \begin{Theorem} \label{primitive} Let ${\mathcal P}$ be a balanced polyomino. \begin{enumerate} \item[{\em (a)}] Let ${\mathcal C}$ be a cycle in ${\mathcal P}$. Then $f_{\mathcal C}\in I_{\mathcal P}$. \item[{\em (b)}] Let $f \in I_{{\mathcal P}}$ be a primitive binomial. Then there exists a cycle ${\mathcal C}$ in ${\mathcal P}$ such that each maximal interval of ${\mathcal P}$ contains at most two vertices of ${\mathcal C}$ and $f=\pm f_{{\mathcal C}}$. \end{enumerate} \end{Theorem} \begin{proof} (a) Let ${\mathcal C}= a_1,a_2, \ldots, a_m$ be a cycle in ${\mathcal P}$. We define the labeling $\alpha$ of ${\mathcal P}$ by setting $\alpha(a)=0$ if $a\not\in {\mathcal C}$ and $\alpha(a_i)=(-1)^{i+1}$ for $i=1,\ldots,m$, and claim that $\alpha$ is an admissible labeling of ${\mathcal P}$. To see this we consider a maximal horizontal edge interval $I$ of ${\mathcal P}$. If $I\sect {\mathcal C}=\emptyset$, then $\alpha(a)=0$ for all $a\in I$. On the other hand, if $I\sect{\mathcal C}\neq \emptyset$, then there exist integers $i$ such that $a_i,a_{i+1} \in I$ (where $a_{i+1}=a_1$ if $i=m-1$), and no other vertex of $I$ belongs to ${\mathcal C}$. It follows that $\sum_{a\in I}\alpha(a)=0$. Similarly, we see that $\sum_{a\in I}\alpha(a)=0$ for any vertical edge interval.
It follows from the definition of $\alpha$ that $f_{\mathcal C}=f_{\alpha}$, and hence, since ${\mathcal P}$ is balanced, it follows that $f_{\mathcal C}\in I_{\mathcal P}$. (b) Let $f \in I_{\mathcal P}$ be a primitive binomial. Since ${\mathcal P}$ is balanced and $f$ is irreducible, \cite[Theorem 3.8(a)]{Q} implies that there exists an admissible labeling $\alpha$ of ${\mathcal P}$ such that \[ f=f_{\alpha} = \prod_{a \in V({\mathcal P}) \atop \alpha(a) > 0} x_a^{\alpha(a)} - \prod_{a \in V({\mathcal P}) \atop \alpha(a) < 0} x_a^{-\alpha(a)}. \] Choose $a_1 \in V({\mathcal P})$ such that $\alpha (a_1) >0$. Let $I_1$ be the maximal horizontal edge interval with $a_1 \in I_1$. Since $\alpha$ is admissible, there exists some $a_2 \in I_1$ with $\alpha(a_2) < 0$. Let $I_2$ be the maximal vertical edge interval containing $a_2$. Then similarly as before, there exists $a_3 \in I_2$ with $\alpha(a_3)>0$. In the next step we consider the maximal horizontal edge interval containing $a_3$ and proceed as before. Continuing in this way we obtain a sequence $a_1,a_2,a_3,\ldots$ of vertices of ${\mathcal P}$ such that $\alpha(a_1), \alpha(a_2), \alpha(a_3),\ldots$ is a sequence with alternating signs. Since $V({\mathcal P})$ is a finite set, there exists a number $m$ such that $a_i\neq a_j$ for all $1\leq i<j\leq m$ and $a_m= a_i$ for some $i<m$. It follows that $\alpha(a_m)=\alpha(a_i)$, which implies that $m-i$ is even. Then the sequence ${\mathcal C}=a_i,a_{i+1},\ldots, a_m$ is a cycle in ${\mathcal P}$, and hence by (a), $f_{\mathcal C}\in I_{\mathcal P}$. For any binomial $g=u-v$ we set $g^{(+)}=u$ and $g^{(-)}=v$. Now if $i$ is odd, then $f_{\mathcal C}^{(+)}$ divides $f^{(+)}$ and $f_{\mathcal C}^{(-)}$ divides $f^{(-)}$, while if $i$ is even, then $f_{\mathcal C}^{(+)}$ divides $f^{(-)}$ and $f_{\mathcal C}^{(-)}$ divides $f^{(+)}$. Since $f$ is primitive, this implies that $f=\pm f_{{\mathcal C}}$, as desired.
\end{proof} \begin{Corollary} Let ${\mathcal P}$ be a balanced polyomino. Then $I_{{\mathcal P}}$ admits a squarefree initial ideal for any monomial order. \end{Corollary} \begin{proof} By Corollary~\ref{primep}, $I_{\mathcal P}$ is a prime ideal, since ${\mathcal P}$ is a balanced polyomino. This implies that $I_{\mathcal P}$ is a toric ideal, see for example \cite[Theorem 5.5]{EH}. Now we use the fact (see \cite[Lemma 4.6]{St} or \cite[Corollary 10.1.5]{HHBook}) that the primitive binomials of a toric ideal form a universal Gr\"obner basis. Since, by Theorem~\ref{primitive}, the primitive binomials of $I_{\mathcal P}$ have squarefree initial terms for any monomial order, the desired conclusion follows. \end{proof} \begin{Corollary} \label{balancedcm} Let ${\mathcal P}$ be a balanced polyomino. Then $K[{\mathcal P}]$ is a normal Cohen-Macaulay domain of dimension $|V({\mathcal P})|-|{\mathcal P}|$. \end{Corollary} \begin{proof} A toric ring whose toric ideal admits a squarefree initial ideal is normal by a theorem of Sturmfels \cite[Chapter 8]{St}, and by a theorem of Hochster (\cite[Theorem 6.3.5]{BH}) a normal toric ring is Cohen--Macaulay. Since ${\mathcal P}$ is balanced, we know from Proposition~\ref{ilambda} that $I_{\mathcal P}=I_\Lambda$ where $\Lambda$ is the lattice in $\Dirsum_{(i,j)\in V({\mathcal P})}{\NZQ Z} e_{ij}$ spanned by the elements $b_C=e_{ij}+e_{i+1,j+1}- e_{i+1,j}-e_{i,j+1}$ where $C=[(i,j),(i+1,j+1)]$ is a cell of ${\mathcal P}$. By \cite[Proposition 7.5]{MS}, the height of $I_\Lambda$ is equal to the rank of $\Lambda$. Thus we see that $\height I_{\mathcal P}=|{\mathcal P}|$. It follows that the Krull dimension of $K[{\mathcal P}]$ is equal to $|V({\mathcal P})|-|{\mathcal P}|$, as desired. \end{proof} \section{Classes of balanced polyominoes} \label{classes} In this section we consider two classes of balanced polyominoes. As mentioned in Section~\ref{innerminors}, one expects that any simple polyomino is balanced.
In this generality we do not yet have a proof of this statement. Here we want to consider only two special classes of polyominoes which are simple and balanced, namely the row and column convex polyominoes, and the tree-like polyominoes. Let ${\mathcal P}$ be a polyomino. Let $C=[(i,j),(i+1,j+1)]$ be a cell of ${\mathcal P}$. We call the vertex $a=(i,j)$ the {\em left lower corner} of $C$. Let $C_1$ and $C_2$ be two cells with left lower corners $(i_1,j_1)$ and $(i_2,j_2)$, respectively. We say that $C_1$ and $C_2$ are in {\em horizontal (vertical) position} if $j_1=j_2$ ($i_1=i_2$). The polyomino ${\mathcal P}$ is called {\em row convex} if for any two cells $C_1$ and $C_2$ in horizontal position with lower left corners $(i_1,j)$ and $(i_2,j)$ and $i_1<i_2$, all cells with lower left corner $(i,j)$ with $i_1\leq i\leq i_2$ belong to ${\mathcal P}$. Similarly one defines {\em column convex} polyominoes. For example, the polyomino displayed in Figure~\ref{figcolumnconvex} is column convex but not row convex. \begin{figure}[htbp] \begin{center} \includegraphics[clip, width=2cm]{rowandcolumn} \caption{Column convex but not row convex} \label{figcolumnconvex} \end{center} \end{figure} A {\em neighbor} of a cell $C$ in ${\mathcal P}$ is a cell $D$ which shares a common edge with $C$. Obviously, any cell can have at most four neighbors. We call a cell of ${\mathcal P}$ a {\em leaf}, if it has an edge which does not have a common vertex with any other cell. Figure~\ref{leaves} illustrates this concept. \begin{figure}[htbp] \begin{center} \includegraphics[clip, width=4.3cm]{leaves} \caption{A polyomino with three leaves}\label{leaves} \end{center} \end{figure} The polyomino ${\mathcal P}$ is called {\em tree-like} if each subpolyomino of ${\mathcal P}$ has a leaf. The polyomino displayed in Figure~\ref{leaves} is not tree-like, because it contains a subpolyomino which has no leaf. On the other hand, Figure~\ref{treelike} shows a tree-like polyomino.
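The leaf condition is easy to test computationally. The following Python sketch (an illustration of the definitions only, not part of the formal development; all names are ours) represents a polyomino by the set of lower left corners of its cells and detects leaves directly from the definition: a cell is a leaf if one of its edges shares no vertex with any other cell. Repeatedly removing leaves then gives a quick necessary check along one chain of subpolyominoes; the definition of tree-like quantifies over all subpolyominoes, which this sketch does not verify.

```python
def vertices(cell):
    """The four vertices of the cell with lower left corner `cell`."""
    i, j = cell
    return {(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)}

def edges(cell):
    """The four edges of a cell, each as a frozenset of two vertices."""
    i, j = cell
    return [frozenset({(i, j), (i + 1, j)}),          # bottom
            frozenset({(i, j + 1), (i + 1, j + 1)}),  # top
            frozenset({(i, j), (i, j + 1)}),          # left
            frozenset({(i + 1, j), (i + 1, j + 1)})]  # right

def leaves(cells):
    """Cells having an edge that shares no vertex with any other cell."""
    found = []
    for c in cells:
        others = set()
        for d in cells:
            if d != c:
                others |= vertices(d)
        if any(e.isdisjoint(others) for e in edges(c)):
            found.append(c)
    return found

# Peeling an L-shaped tromino: a leaf is found at every step, which is
# consistent with (though weaker than) the polyomino being tree-like.
P = [(0, 0), (1, 0), (1, 1)]
while len(P) > 1:
    ls = leaves(P)
    assert ls, "no leaf found"
    P.remove(ls[0])
```

For the L-tromino, the two outer cells are leaves while the corner cell is not, matching the picture of free edges in this section.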
\begin{figure}[htbp] \begin{center} \includegraphics[clip, width=4.3cm]{treelike} \caption{A tree-like polyomino}\label{treelike} \end{center} \end{figure} A free vertex of ${\mathcal P}$ is a vertex which belongs to exactly one cell. Notice that any leaf has two free vertices. We call a path of cells a {\em horizontal (vertical) cell interval}, if the left lower corners of the path form a horizontal (vertical) edge interval. Let $C$ be a leaf, let ${\mathcal I}$ be the maximal cell interval to which $C$ belongs, and assume that ${\mathcal I}$ is a horizontal (vertical) cell interval. Then we call $C$ a {\em good leaf}, if for one of the free vertices of $C$ the maximal horizontal (vertical) edge interval which contains it has the same length as ${\mathcal I}$. We call a leaf {\em bad} if it is not good, see Figure~\ref{badcells}. \begin{Theorem} \label{whatweunderstand} Let ${\mathcal P}$ be a row or column convex, or a tree-like polyomino. Then ${\mathcal P}$ is balanced and simple. \begin{figure}[htbp] \begin{center} \includegraphics[width =2cm]{badcells} \caption{Bad and good leaves}\label{badcells} \end{center} \end{figure} \end{Theorem} \begin{proof} Let ${\mathcal P}$ be a tree-like polyomino. We first show that ${\mathcal P}$ is balanced. Let $\alpha$ be an admissible labeling of ${\mathcal P}$. We have to show that $f_\alpha\in I_{\mathcal P}$. To prove this we first show that ${\mathcal P}$ has a good leaf. If $|{\mathcal P}| = 1$, then the assertion is trivial. We may assume that $|{\mathcal P}| \ge 2$. Let \[ n_i = |\{C \in {\mathcal P} : \deg C=i \}|. \] Observe that $\sum_{C \in {\mathcal P}} \deg C = n_1 + 2n_2 + 3n_3 + 4n_4$, where $\deg C$ denotes the number of neighbors of $C$ in ${\mathcal P}$. Let ${\mathcal P}$ be any polyomino with cells $C_1, \ldots, C_n$. Associated to ${\mathcal P}$ is the so-called {\em connection graph} on the vertex set $[n]$ with edge set $ \{\{i,j\}: E(C_i) \cap E(C_j) \neq \emptyset \}$.
It is easy to see that the connection graph of a tree-like polyomino is a tree. Therefore, using some elementary facts from graph theory, we obtain that \[ n_1 + 2n_2 + 3n_3 + 4n_4 = 2 (|{\mathcal P}| -1) = 2(n_1 + n_2 + n_3 + n_4-1). \] This implies that $n_1 = n_3+2n_4+2$. Let $g({\mathcal P})$ be the number of good leaves in ${\mathcal P}$ and $b({\mathcal P})$ be the number of bad leaves in ${\mathcal P}$. It is obvious that $n_1= g({\mathcal P}) + b({\mathcal P})$. Then we have \begin{eqnarray} \label{shikama} g({\mathcal P}) = n_3+ 2n_4 +2 -b({\mathcal P}). \end{eqnarray} Next, we show that $b({\mathcal P}) \leq n_3$. Suppose that $C$ is a bad leaf in ${\mathcal P}$ and ${\mathcal I}$ the unique maximal cell interval to which $C$ belongs. Let $D_C$ be the end cell of the interval ${\mathcal I}$. Observe that $C\neq D_C$. We claim that $\deg D_C = 3$. Indeed, since $C$ is bad, the lengths of the maximal intervals containing the two free vertices of $C$ are bigger than the length of the interval ${\mathcal I}$. See Figure~\ref{Thm1}, where the cells belonging to ${\mathcal I}$ are marked with dots and $E$ is the cell next to $D_C$ which does not belong to ${\mathcal P}$. Since $E \notin{\mathcal P}$, the cells $D_1$ and $D_2$ belong to ${\mathcal P}$. Suppose $D_3\not\in {\mathcal P}$. Since ${\mathcal P}$ is a polyomino there exists a path ${\mathcal C}$ connecting $D_1$ and $D_C$. Since $E,D_3\not\in {\mathcal P}$ the path ${\mathcal C}$ (which is a subpolyomino of ${\mathcal P}$) does not have a leaf, contradicting the assumption that ${\mathcal P}$ is tree-like. Therefore, $D_3\in{\mathcal P}$. Similarly one shows that $D_4\in {\mathcal P}$. This shows that $\deg D_C=3$. \begin{figure}[htbp] \begin{center} \includegraphics[width =2.8cm]{Thm1} \caption{}\label{Thm1} \end{center} \end{figure} Moreover, $D_C$ cannot be the end cell of any other cell interval, because $D_3$ and $D_4$ are neighbors of $D_C$.
Thus we obtain a one-to-one correspondence between the bad leaves $C$ and the cells $D_C$ as defined before. It follows that $b({\mathcal P})\leq n_3$. Therefore, by (\ref{shikama}) we obtain \[ g({\mathcal P})=n_3+ 2n_4 +2 -b({\mathcal P})\geq 2n_4+2\geq 2. \] Thus, every tree-like polyomino has at least two good leaves. Now we show that ${\mathcal P}$ is balanced. Indeed, let $\alpha$ be an admissible labeling of ${\mathcal P}$. We want to show that $f_\alpha\in I_{\mathcal P}$. Let $C$ be a good leaf of ${\mathcal P}$ with free vertices $a_1$ and $a_2$. Then $\alpha(a_1)=-\alpha(a_2)$. If $\alpha(a_i)=0$ for $i=1,2$, then $\alpha$ restricted to ${\mathcal P}'$ is an admissible labeling of ${\mathcal P}'$, where ${\mathcal P}'$ is obtained from ${\mathcal P}$ by removing the cell $C$ from ${\mathcal P}$. Inducting on the number of cells we may then assume that $f_\alpha\in I_{{\mathcal P}'}$. Since $I_{{\mathcal P}'}\subset I_{\mathcal P}$, we are done in this case. Assume now that $\alpha(a_1)\neq 0$. We proceed by induction on $|\alpha(a_1)|$. Note that $\alpha(a_2)\neq 0$, too. Since $C$ is a good leaf, we may assume that the maximal interval $[a_2,b]$ to which $a_2$ belongs has the same length as the cell interval $[C,D_C]$ and $\alpha(a_1)>0$. Then $\alpha(a_2)<0$ and hence, since $\alpha$ is admissible, there exists a $c\in [a_2,b]$ with $\alpha(c)>0$. The cells of the interval $[c,a_1]$ all belong to $[C,D_C]$. Therefore, $g=x_cx_{a_1}-x_dx_{a_2}\in I_{\mathcal P}$. Here $d$ is the vertex as indicated in Figure~\ref{Thm2}. \begin{figure}[htbp] \begin{center} \includegraphics[width =3.3cm]{Thm2} \caption{}\label{Thm2} \end{center} \end{figure} Notice that the labeling $\beta$ of ${\mathcal P}$ defined by \[ \beta(e)= \left\{ \begin{array}{ll} \alpha(e)-1, & \text{if $e=a_1$ or $e=c$}, \\ \alpha(e)+1, &\text{if $e=a_2$ or $e=d$}, \\ \alpha(e),& \text{elsewhere}. \end{array} \right. \] is admissible and $|\beta(a_1)| < |\alpha(a_1)|$.
By the induction hypothesis we have that $f_\beta\in I_{\mathcal P}$. Then the following relation \begin{eqnarray}\label{relation} f_{\alpha} -( f_{\alpha}^{(+)}/ x_c x_{a_1}) g = (x_{a_2}x_d)f_{\beta} \end{eqnarray} gives that $f_{\alpha} \in I_{{\mathcal P}}$, as well. Next, we show that ${\mathcal P}$ is simple by applying induction on the number of cells. The assertion is trivial if ${\mathcal P}$ consists of only one cell. Suppose now $|{\mathcal P}|>1$, and let $D$ be a cell not belonging to ${\mathcal P}$ and ${\mathcal I}$ an interval such that $V({\mathcal P})\subset {\mathcal I}$. Since ${\mathcal P}$ is tree-like, ${\mathcal P}$ admits a leaf cell $C$. Let ${\mathcal P}'$ be the polyomino which is obtained from ${\mathcal P}$ by removing the cell $C$. Then ${\mathcal P}'$ is again tree-like and hence is simple by induction. Therefore, since $D\not\in{\mathcal P}'$, there exists a path ${\mathcal D}': D=D_1,D_2,\ldots, D_m$ of cells with $D_i\not\in {\mathcal P}'$ for all $i$ and such that $D_m$ is a border cell of ${\mathcal I}$. We let ${\mathcal D}={\mathcal D}'$, if $D_i\neq C$ for all $i$. Note that $C\neq D_1, D_m$. Suppose $D_i=C$ for some $i$ with $1<i<m$. Since $C$ is a leaf, it follows that the cells $C_1,C_2,C_3,C_4$ and $C_5$ as shown in Figure~\ref{dom} do not belong to ${\mathcal P}$. \begin{figure}[htbp] \begin{center} \includegraphics[width =2.3cm]{treelikeproof} \caption{}\label{dom} \end{center} \end{figure} Since $D_i=C$ and since ${\mathcal D}'$ is a path of cells it follows that $D_{i-1}\in \{C_1,C_3,C_5\}$ and $D_{i+1}\in \{C_1,C_3,C_5\}$. If $D_{i-1}=C_1$ and $D_{i+1}=C_3$, then we let \[ {\mathcal D}: D_1,\ldots, D_{i-1},C_2,D_{i+1},\ldots, D_m, \] and if $D_{i-1}=C_1$ and $D_{i+1}=C_5$, then we let \[{\mathcal D}: D_1,\ldots, D_{i-1},C_2,C_3,C_4,D_{i+1},\ldots, D_m.
\] Similarly one defines ${\mathcal D}$ in all the other cases that may occur for $D_{i-1}$ and $D_{i+1}$, so that in any case ${\mathcal D}'$ can be replaced by the path of cells ${\mathcal D}$ which does not contain any cell of ${\mathcal P}$ and connects $D$ with the border of ${\mathcal I}$. This shows that ${\mathcal P}$ is simple. \medskip Now we show that if ${\mathcal P}$ is row or column convex then ${\mathcal P}$ is balanced and simple. We may assume that ${\mathcal P}$ is column convex. We first show that ${\mathcal P}$ is balanced. For this part of the proof we follow the arguments given in \cite[Proof of Theorem 3.10]{Q}. Let ${\mathcal C}_1=[C_1, C_n]$ be the leftmost maximal vertical cell interval of ${\mathcal P}$, and let $a$ be the lower left corner of $C_1$. Let $\alpha $ be an admissible labeling for ${\mathcal P}$. If $\alpha (a) =0$, then $\alpha$ restricted to ${\mathcal P}'$ is an admissible labeling of ${\mathcal P}'$, where ${\mathcal P}'$ is obtained from ${\mathcal P}$ by removing the cell $C_1$ from ${\mathcal P}$. By applying induction on the number of cells we have that $f_\alpha\in I_{{\mathcal P}'} \subset I_{\mathcal P}$, and we are done in this case. Now we may assume that $\alpha(a) \neq 0$; without loss of generality, $\alpha (a) > 0$. By following the same arguments as in the case of tree-like polyominoes, it suffices to show that there exists an inner interval $[(i,j), (k,l)]$ with $i<k$ and $j<l$ such that $\alpha((i,j))\alpha((k,l)) >0$ or $\alpha((k,j))\alpha((i,l)) >0$. Let $[a,b]$ and $[a,c]$ be the maximal horizontal and vertical edge intervals of ${\mathcal P}$ containing $a$. Then there exist $e \in [a, b]$ and $f \in [a, c]$ such that $\alpha(e), \alpha(f) < 0$ because $\alpha$ is admissible. Let $[f, g]$ be the maximal horizontal edge interval of ${\mathcal P}$ which contains $f$. Then there exists a vertex $h \in [f, g]$ such that $\alpha(h) > 0$.
If $\size([f, g])\leq \size([a, b])$, then column convexity of ${\mathcal P}$ gives that $ [a, h]$ is an inner interval of ${\mathcal P}$ and $\alpha(a)\alpha(h)>0$. Otherwise, if $\size([f,g]) > \size([a,b])$, then again by using column convexity of ${\mathcal P}$, $x_a x_q - x_e x_f \in I_{{\mathcal P}}$ where $q$ is as shown in the following figure. By following the same arguments as in the case of tree-like polyominoes, we conclude that $f_\alpha \in I_{\mathcal P}$. Now we will show that ${\mathcal P}$ is simple. We may assume that $V({\mathcal P})\subset [(k,l),(r,s)]$ with $l>0$. Let $C\not\in {\mathcal P}$ be a cell with lower left corner $a=(i,j)$, and consider the infinite path ${\mathcal C}$ of cells where ${\mathcal C}=C_0,C_1,\ldots $ and where the lower left corner of $C_k$ is $(i,k)$ for $k=0,1,\ldots$. If ${\mathcal C}\sect{\mathcal P}=\emptyset$, then $C$ is connected to $C_0$. On the other hand, if ${\mathcal C}\sect{\mathcal P}\neq \emptyset$, then, since ${\mathcal P}$ is column convex, there exist integers $k_1,k_2$ with $l\leq k_1\leq k_2\leq s$ such that $C_k\in {\mathcal P}$ if and only if $k_1\leq k\leq k_2$. Since $C\not\in {\mathcal P}$ it follows that $j<k_1$ or $j>k_2$. In the first case, $C$ is connected to $C_0$ and in the second case $C$ is connected to $C_s$. \end{proof} \begin{Corollary} Let ${\mathcal P}$ be a row or column convex, or a tree-like polyomino. Then $K[{\mathcal P}]$ is a normal Cohen--Macaulay domain. \end{Corollary}
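The computations behind Proposition~\ref{ilambda} and Lemma~\ref{lattice} can be checked on small examples. The following Python sketch (illustrative only; the function names and the example polyomino are our own) verifies, for an L-shaped tromino, that an arbitrary integer combination of the labelings $\alpha_C$ sums to zero along every maximal horizontal and vertical edge interval, and that the vectors $b_C$ are linearly independent because their lexicographically smallest nonzero coordinates sit at the (distinct) lower left corners of the cells, exactly as in the proof of Lemma~\ref{lattice}.

```python
cells = [(0, 0), (1, 0), (1, 1)]      # lower left corners of an L-tromino

def alpha_C(c):
    """The labeling attached to a cell: +1 on the lower left and upper
    right corners, -1 on the other two corners."""
    i, j = c
    return {(i, j): 1, (i + 1, j + 1): 1, (i + 1, j): -1, (i, j + 1): -1}

V = sorted({v for c in cells for v in alpha_C(c)})

def maximal_intervals(cells):
    """Maximal horizontal and vertical edge intervals, as vertex tuples."""
    out = set()
    for step in ((1, 0), (0, 1)):     # horizontal, then vertical edges
        E = set()
        for (i, j) in cells:
            if step == (1, 0):
                E |= {((i, j), (i + 1, j)), ((i, j + 1), (i + 1, j + 1))}
            else:
                E |= {((i, j), (i, j + 1)), ((i + 1, j), (i + 1, j + 1))}
        shift = lambda p, k: (p[0] + k * step[0], p[1] + k * step[1])
        for (a, b) in E:
            while (shift(a, -1), a) in E:   # extend left/down as far as possible
                a = shift(a, -1)
            while (b, shift(b, 1)) in E:    # extend right/up as far as possible
                b = shift(b, 1)
            pts = [a]
            while pts[-1] != b:
                pts.append(shift(pts[-1], 1))
            out.add(tuple(pts))
    return out

# An arbitrary integer combination of the alpha_C is admissible:
z = {(0, 0): 2, (1, 0): -1, (1, 1): 3}
alpha = {v: 0 for v in V}
for c in cells:
    for v, val in alpha_C(c).items():
        alpha[v] += z[c] * val
assert all(sum(alpha[v] for v in I) == 0 for I in maximal_intervals(cells))

# Linear independence of the b_C: the lexicographically smallest vertex in
# the support of alpha_C is the lower left corner, and corners are distinct.
leads = [min(alpha_C(c)) for c in cells]
assert len(set(leads)) == len(cells)
```

Both assertions pass for any choice of integer coefficients $z_C$, mirroring the observation that admissible labelings are closed under integer linear combinations.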
\section{Introduction} \label{sec:int} One of the major performance bottlenecks for countless computational problems is the evaluation of linear algebra expressions, i.e., expressions involving operations with matrices and/or vectors. Libraries such as BLAS and LAPACK provide a small set of high performance kernels to compute some standard linear algebra operations~\cite{dongarra1985proposal,demmel1991lapack}. However, the mapping of linear algebra expressions onto an optimized sequence of standard operations is a task far from trivial; the expressions can be computed in many different ways---each corresponding to a specific sequence of library calls---which can significantly differ in performance from one another. Unfortunately, it has been found that the mapping done by most popular high-level languages and libraries such as Matlab, Eigen, TensorFlow, PyTorch, etc., is still suboptimal~\cite{Psarras2022:618,sankaran2022benchmarking}. Linear algebra expressions can be manipulated using mathematical properties such as associativity, distributivity, etc., to derive different mathematically equivalent variants (or algorithms). For instance, consider the following expression that evaluates the product of four matrices: \begin{equation} \label{eqn:mc} X = ABCD \end{equation} where $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times k}$, $C \in \mathbb{R}^{k \times l}$ and $D \in \mathbb{R}^{l \times q}$ are all dense matrices. An instance of Expression~\ref{eqn:mc} is identified by the tuple $(m,n,k,l,q)$. Because of the associativity of the matrix product, Expression~\ref{eqn:mc} can be computed in many different ways, each identified by a specific parenthesization. Although different parenthesizations evaluate to the same mathematical result, they require different numbers of floating point operations (FLOPs) and might exhibit different performance. For Expression~\ref{eqn:mc}, the five possible parenthesizations and their approximate associated costs are shown in Figure~\ref{fig:matchain}.
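The costs in Figure~\ref{fig:matchain} can be reproduced by exhaustive enumeration. The following Python sketch (an illustration; the function name is ours) recursively generates every full parenthesization of a matrix chain and charges $p\cdot r\cdot s$ multiply-add operations for the product of a $p\times r$ matrix with an $r\times s$ matrix, i.e., the FLOP count divided by 2, as in the figure caption.

```python
def chain_costs(dims):
    """All full parenthesizations of a matrix chain and their costs.

    dims = (d0, d1, ..., dn): matrix i has shape d_{i-1} x d_i; multiplying
    a p x r by an r x s matrix is charged p*r*s multiply-adds (FLOPs / 2).
    """
    def rec(i, j):                       # product of matrices i..j (1-based)
        if i == j:
            return [("M%d" % i, 0)]
        variants = []
        for s in range(i, j):            # split: (i..s) times (s+1..j)
            for left, cl in rec(i, s):
                for right, cr in rec(s + 1, j):
                    cost = cl + cr + dims[i - 1] * dims[s] * dims[j]
                    variants.append(("(%s%s)" % (left, right), cost))
        return variants
    return rec(1, len(dims) - 1)

# The anomalous instance (m, n, k, l, q) = (331, 279, 338, 854, 497):
for paren, cost in sorted(chain_costs((331, 279, 338, 854, 497)),
                          key=lambda t: t[1]):
    print(paren, cost)
```

For a chain of four matrices this enumerates exactly the five variants of Figure~\ref{fig:matchain} (the Catalan number $C_3=5$), sorted by cost.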
At least six algorithms can be implemented from the five variants; note that the evaluation of $(AB)(CD)$ can correspond to two different implementations, which differ in the order of instructions, i.e., $AB$ can be computed either before or after $CD$. A simple strategy for selecting the fastest algorithm is to pick a parenthesization that performs the fewest FLOPs. However, it has been observed that the algorithm with the lowest FLOP count is not always the fastest algorithm; such instances are referred to as \textit{anomalies}~\cite{Lopez2022:530}. \begin{figure} \includegraphics[width=0.5\textwidth]{fig/matchain} \caption{Variants for a matrix chain of length 4. The cost indicates the approximate FLOP count divided by 2.} \label{fig:matchain} \end{figure} Consider the following instance of Expression~\ref{eqn:mc}: $(331,279,338,854,497)$, which was observed as an anomaly in~\cite{Lopez2022:530}; there, the algorithms were implemented in C, linked against the Intel Math Kernel Library\footnote{MKL version 2019.0.5}, and measurements were conducted on a Linux-based system using 10 cores of an Intel Xeon processor. The measurement of each algorithm was repeated 10 times and the median was used to compare algorithms. Now, in a comparable compute environment, we re-implement the algorithms in Julia\footnote{Julia version 1.3.0}, where library overheads can influence the execution times more strongly than in the equivalent C implementations. We link against the same Intel MKL version and measure the algorithms. The box-plots of the measurements from two independent runs are shown in Figures~\ref{fig:run1} and \ref{fig:run2}.
The red line in the box-plot of each algorithm represents the median execution time; the grey box extends from the 25th to the 75th percentile, and its length is the Inter-Quartile Range (IQR); the dotted lines are the ``whiskers'', which extend to the smallest and largest observations that are not outliers according to the $1.5\cdot$IQR rule~\cite{hodge2004survey}. The difference in FLOP count of an algorithm ($\mathbf{alg}_i$) from the one that performs the fewest FLOPs is quantified by the Relative FLOPs score (RF$_i$): \begin{equation} \label{eq:rel-flops} \text{RF}_i = \frac{F_i - F_{\text{min}}}{F_{\text{min}}} \end{equation} where $F_i$ is the number of FLOPs performed by $\mathbf{alg}_i$ and $F_\text{min}$ is the minimum FLOP count over all algorithms. The relative FLOPs scores are shown in Table~\ref{tab:rank-med}. \begin{figure} \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\linewidth]{fig/run1} \caption{Run 1} \label{fig:run1} \end{subfigure} \par\bigskip \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\linewidth]{fig/run2} \caption{Run 2} \label{fig:run2} \end{subfigure} \caption{Two independent runs consisting of 10 measurements of each algorithm for the instance $(331,279,338,854,497)$ of Expression~\ref{eqn:mc}. The algorithms in (a) and (b) are sorted based on increasing median execution times from bottom to top.} \label{fig:rank-medians} \end{figure} \begin{table}[h!]
\begin{center} \renewcommand{\arraystretch}{1.2} \begin{adjustbox}{width=1.0\columnwidth,center} \begin{tabular}{@{}rr cccccc@{}} \toprule \textbf{Rank} && \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6}\\ \midrule Run 1 && \makecell{algorithm1 \\ (0.0)} & \makecell{algorithm2 \\ (0.04)} & \makecell{algorithm3 \\ (0.11)} & \makecell{algorithm5 \\ (0.32)} & \makecell{algorithm4 \\ (0.27)} & \makecell{algorithm0 \\ (0.0)} \\ Run 2 && \makecell{algorithm2 \\ (0.04)} & \makecell{algorithm1 \\ (0.0)} & \makecell{algorithm0 \\ (0.0)} & \makecell{algorithm4 \\ (0.27)} & \makecell{algorithm3 \\ (0.11)} & \makecell{algorithm5 \\ (0.32)} \\ \bottomrule \end{tabular} \end{adjustbox} \caption{Algorithms are ranked according to increasing median execution time. The relative FLOP score of every algorithm is indicated in parentheses.} \label{tab:rank-med} \end{center} \end{table} It is well known that execution times are influenced by many factors, and that repeated measurements, even with the same input data and compute environment, often result in different execution times~\cite{hoefler2010characterizing, peise2014study}. Therefore, comparing the performance of any two algorithms involves comparing two sets of measurements. In common practice, time measurements are summarized by statistical estimates (such as minimum or median execution time, possibly in combination with standard deviations or quantiles), which are then used to compare and rank algorithms~\cite{Lopez2022:530, hoefler2010characterizing}.
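Both ingredients of this comparison are straightforward to compute. The following Python sketch (with made-up FLOP counts and timings, purely for illustration) evaluates the relative FLOPs score of Equation~(\ref{eq:rel-flops}) and the whiskers of a box-plot under the $1.5\cdot$IQR rule; the quartile computation here is a deliberately crude convention, whereas plotting libraries typically interpolate between observations.

```python
def relative_flops(flops):
    """RF_i = (F_i - F_min) / F_min for each algorithm's FLOP count F_i."""
    fmin = min(flops)
    return [(f - fmin) / fmin for f in flops]

def whiskers(samples):
    """Box-plot whiskers under the 1.5*IQR rule: the smallest and largest
    observations inside [q1 - 1.5*IQR, q3 + 1.5*IQR]."""
    xs = sorted(samples)
    n = len(xs)
    q1, q3 = xs[n // 4], xs[(3 * n) // 4]   # crude quartiles, enough here
    iqr = q3 - q1
    inside = [x for x in xs if q1 - 1.5 * iqr <= x <= q3 + 1.5 * iqr]
    return inside[0], inside[-1]

# Hypothetical FLOP counts: RF = 0.0 marks the minimum-FLOP algorithms.
print(relative_flops([100.0, 100.0, 104.0, 111.0]))

# Hypothetical timings with one outlier: the upper whisker excludes it.
print(whiskers([1.0, 1.1, 1.2, 1.1, 1.3, 9.0]))
```

In the second call the observation 9.0 lies above $q_3 + 1.5\cdot\text{IQR}$ and is therefore treated as an outlier, just as in the box-plots of Figure~\ref{fig:rank-medians}.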
However, when the turbo boost settings or inter-kernel cache effects begin to have a significant impact on program execution, the common statistical quantities cannot reliably capture the profile of the time measurements~\cite{hoefler2015scientific}; as a consequence, when time measurements are repeated, the ranking of algorithms would most likely change, which makes the development of reliable performance models for automatic algorithm comparisons difficult. Consider again the examples in Figure~\ref{fig:rank-medians}. It can be seen that the ranking of algorithms based on medians is completely different for the two runs. Moreover, the algorithms are not ranked based on increasing FLOP counts; in the first run, algorithm0, which is one of the best algorithms in terms of FLOP count (i.e., RF$_0 = 0.0$), is ranked last, and in the second run, algorithm2, which is not among the best algorithms in terms of FLOPs (i.e., RF$_2 \neq 0.0$), is ranked first. The lack of consistency in ranking stems from not considering the possibility that two algorithms can be equivalent when comparing their performances. The box-plots in Figure~\ref{fig:rank-medians} show that the underlying distributions of measurements of the algorithms largely overlap. This indicates that the algorithms with the least FLOPs, even though not ranked first, could have simply been ranked as being equivalent to (as good as) the algorithm chosen as the fastest based on median execution times. Hence, we elaborate on the definition of anomalies. Let $S_F$ be the set of all algorithms with the least FLOP count. In order to classify an instance as an anomaly, the following conditions are checked one after the other: \begin{enumerate} \item There should exist an algorithm that is not in $S_F$, but exhibits \textit{noticeably} better performance than those in $S_F$; that is, we first check if $S_F$ is a valid representative of the fastest algorithms. 
\item If an instance is not classified as an anomaly according to (1), then it is checked whether one of the algorithms in $S_F$ exhibits a noticeable difference in performance from the rest of $S_F$; this implies that, even though $S_F$ is a valid representative of the fastest algorithms, it is not possible to randomly choose an algorithm from $S_F$ as the best algorithm. \end{enumerate} In order for one algorithm to be faster (or slower) than another, there should be a noticeable difference in the distributions of their time measurements; for example, consider another instance of Expression~\ref{eqn:mc}: $(75, 75, 8, 75, 75)$. The measurements of the variant algorithms in Julia are shown in Figure~\ref{fig:clusters}. One could visually infer that algorithms 0 and 1 are equivalent and belong to the same (and best) performance class, while algorithms 2 and 3 exhibit a noticeable difference in performance from algorithms 0 and 1, and hence belong to a different performance class. The expected ranks for the algorithms are shown in Table~\ref{tab:rank-clusters}. To this end, the comparison of any two algorithms should be able to yield one of three outcomes: faster, slower, or equivalent. In this paper, we define a three-way comparison function, and develop a methodology that uses this three-way comparison to sort a set of algorithms into performance classes by merging the ranks of algorithms whose distributions of time measurements significantly overlap with one another. \begin{figure} \includegraphics[width=0.5\textwidth]{fig/clusters} \caption{20 measurements of each algorithm for the instance $(75,75,8,75,75)$ of Expression~\ref{eqn:mc}. } \label{fig:clusters} \end{figure} \begin{table}[h!] 
\begin{center} \renewcommand{\arraystretch}{1.2} \begin{adjustbox}{width=1.0\columnwidth,center} \begin{tabular}{@{}ll cccccc@{}} \toprule \textbf{Algorithm} && \makecell{algorithm0 \\ (0.0)} & \makecell{algorithm1 \\ (0.0)} & \makecell{algorithm2 \\ (2.78)} & \makecell{algorithm3 \\ (2.78)} & \makecell{algorithm4 \\ (5.59)} & \makecell{algorithm5 \\ (5.59)} \\ \midrule Expected rank && 1 & 1& 2& 2& 3& 3\\ \bottomrule \end{tabular} \end{adjustbox} \caption{Expected ranks for the algorithms based on the measurements in Figure~\ref{fig:clusters}. The relative FLOP score of every algorithm is indicated in parentheses.} \label{tab:rank-clusters} \end{center} \end{table} In practice, linear algebra expressions can have hundreds of possible variants. A statistically sound algorithm comparison requires several repetitions of the measurements for each variant. As it would be time-consuming to measure all the variants several times, the following approach is employed: \begin{enumerate} \item After a small warm up to exclude library overheads, all the algorithms are measured exactly once. \item The difference in the execution time of $\mathbf{alg}_i$ from the algorithm with the lowest execution time is quantified by the Relative Time score (RT$_{i}$): \begin{equation} \label{eq:rel-time} \text{RT}_i = \frac{T_i - T_{\text{min}}}{T_{\text{min}}} \end{equation} where $T_i$ is the execution time of $\mathbf{alg}_i$ and $T_\text{min}$ is the minimum observed execution time. \item A set of candidates (S) is created by first shortlisting all the algorithms with minimum FLOP count. Then, all the algorithms that perform more FLOPs but whose relative execution times lie within a user-specified threshold are added to this set. \item An initial hypothesis is formed by ranking the candidates in S based on the single-run execution times. 
\item Each candidate in the set S is measured $M$ times (where $M$ is small; e.g., 2 or 3) and the ranks of the candidates are updated or merged using the three-way comparison function. \item Step 5 is repeated until the changes in ranks converge or the maximum allowed number of measurements per algorithm (specified by the user) has been reached. \end{enumerate} If FLOPs are a valid discriminant for a given instance of an expression, then all the algorithms with the least FLOPs would obtain the best rank. Otherwise, the instance would be classified as an anomaly. The identified anomalies can be used to investigate the root cause of performance differences, which would in turn help in the development of meaningful performance models that predict the best algorithm without executing the algorithms. \textit{\textbf{Organization}}: In Sec.~\ref{sec:rel}, we discuss related work. In Sec.~\ref{sec:met}, we introduce the methodology to rank algorithms using the three-way comparison. The workings of our methodology and the experiments are explained in Sec.~\ref{sec:exp}. Finally, in Sec.~\ref{sec:con}, we draw conclusions. \section{Related Work} \label{sec:rel} The problem of mapping a target linear algebra expression to a sequence of library calls is known as the Linear Algebra Mapping Problem (LAMP)~\cite{Psarras2022:618}; typical problem instances have many mathematically equivalent solutions, and high-level languages and environments such as Matlab and Julia should ideally select the fastest one. However, it has been shown that most of these languages choose algorithms that are sub-optimal in terms of performance~\cite{Psarras2022:618,sankaran2022benchmarking}. A general approach to identifying the fastest algorithm is to rank the solution algorithms according to their predicted performance. 
For linear algebra computations, a common metric used as a performance predictor is the FLOP count; however, it has been observed that the number of FLOPs is not always a direct indicator of the fastest code, especially when the computation is bandwidth-bound or executed in parallel~\cite{konstantinidis2015practical, Barthels2021:688}. For selected bandwidth-bound operations, Iakymchuk et al.~developed analytical performance models based on memory access patterns~\cite{iakymchuk2011execution, iakymchuk2012modeling}; while those models capture the program execution accurately, their construction requires not only a deep understanding of the processor, but also of the details of the implementation. There are many examples where an increase in FLOP count results in a decrease in execution time (anomalies); the works in~\cite{bischof1987wy,bischof1994parallel, buttari2008parallel} expose some specific mathematical operations for which the need for complex performance models (which mostly require measuring the execution times) is justified. However, that does not mean that FLOP counts are ineffectual for the general-purpose linear algebra computations that are targeted by high-level languages. For instance, in~\cite{sankaran2022benchmarking}, Sankaran et al.~expose optimization opportunities in Tensorflow and PyTorch to improve algorithm selection by simply calculating the FLOP count and applying linear algebra knowledge. In order to justify the need for complex performance models for algorithm selection, it is important to quantify the presence of anomalies. In~\cite{Lopez2022:530}, Lopez et al.~estimate the percentage of anomalies for instances of Expression~\ref{eqn:mc} in a certain single-node multi-threaded setting to be 0.4 percent; in other words, for that case, complex performance models are pointless unless they achieve an accuracy greater than 99.6 percent. It was indicated that the percentage of anomalies increases for more complex expressions. 
However, in that study, the algorithms were compared using the median execution time from 10 repetitions of the measurements, and because of this, their comparisons may not be consistent when the experiments are repeated. Performance metrics that summarize execution times (such as the minimum or the median) lack consistency when the measurements of the programs are repeated; this is due to system noise~\cite{hoefler2010characterizing}, cache effects~\cite{peise2014study}, the behavior of collective communications~\cite{agarwal2005impact} etc., and it is not realistic to eliminate the performance variations entirely~\cite{alcocer2015tracking}. The distribution of execution times obtained by repeated measurements of a program is known to lack textbook statistical behavior, and hence specialized methods to quantify performance have been developed~\cite{chen2014statistical, chen2016robust, hoefler2010loggopsim, bohme2016identifying}. However, approximating statistical distributions requires executing the algorithms several times. In this work, we develop a strategy to minimize the number of measurements. The performance of an algorithm can be predicted using regression- or machine-learning-based methods; this requires a careful formulation of the underlying problem. A wide body of significant work has been done in this direction for more than a decade~\cite{peise2014performance, barnes2008regression, barve2019fecbench}. Peise et al.~in~\cite{peise2014performance} create a prediction model for individual BLAS calls and estimate the execution time of an algorithm by composing the predictions of several BLAS calls. In~\cite{barve2019fecbench}, Barve et al.~predict performance to optimize resource allocation in multi-tenant systems. Barnes et al.~in~\cite{barnes2008regression} predict the scalability of parallel algorithms. In those approaches, performance is quantified in absolute terms. 
Instead, in this work, we quantify the performance of algorithms relative to one another using pairwise comparisons. In~\cite{sankaran2021performance}, Sankaran et al.~discuss an application of algorithm ranking via relative performance in an edge computing environment to reduce the energy consumption of devices. They compare algorithms by randomly bootstrapping measurements. Instead, we present an approach that compares algorithms by comparing the quantiles of the measurements. \section{Methodology} \label{sec:met} Let $\mathcal{A}$ be a set of mathematically equivalent algorithms. The algorithms $\mathbf{alg}_1, \dots, \mathbf{alg}_p \in \mathcal{A}$ are ordered according to decreasing performance based on some initial hypothesis. Let $\mathbf{h_0}: \langle \mathbf{alg}_{i(1)}, \dots, \mathbf{alg}_{i(j)}, \dots \mathbf{alg}_{i(p)} \rangle$ be an initial ordering. Here, $i(j)$ is the index of the algorithm at position $j$ in $\mathbf{h_0}$. For instance, consider the equivalent algorithms in Figure~\ref{fig:clusters}: $\{ \mathbf{alg}_0, \mathbf{alg}_1,\mathbf{alg}_2, \mathbf{alg}_3, \mathbf{alg}_4,\mathbf{alg}_5 \}$. If the initial hypothesis is formed based on the increasing \textit{minimum} execution times observed for each algorithm, then the initial ordering would be $\mathbf{h_0}: \langle \mathbf{alg}_2, \mathbf{alg}_1, \mathbf{alg}_0, \mathbf{alg}_4, \mathbf{alg}_3, \mathbf{alg}_5 \rangle $. Here, the index of the algorithm in the first position is $i(1)=2$, the index in the second position is $i(2)=1$, and so on. 
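For illustration, the construction of the initial hypothesis from single-run execution times, together with relative scores of the RF/RT form (value minus minimum, divided by minimum), can be sketched in Python as follows; the timings are hypothetical:

```python
def relative_scores(values):
    """Relative score of each entry w.r.t. the minimum:
    (v_i - v_min) / v_min, as in the RF and RT definitions."""
    v_min = min(values)
    return [(v - v_min) / v_min for v in values]

def initial_hypothesis(single_run_times):
    """Order algorithm indices by increasing single-run execution time."""
    return sorted(range(len(single_run_times)),
                  key=lambda i: single_run_times[i])

# Hypothetical single-run times (seconds) for six algorithms alg0 ... alg5.
times = [1.9, 2.1, 1.8, 2.5, 2.4, 3.0]
h0 = initial_hypothesis(times)   # ordering of indices, fastest first
rt = relative_scores(times)      # RT score of each algorithm
```

With these numbers, the initial ordering starts with index 2 (the fastest single run), mirroring the role of $i(1)$ above.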
The execution time of each algorithm in $\mathbf{h_0}$ is measured $N$ times, and based on this additional empirical evidence, the algorithms are re-ordered to produce a sequence of tuples $\mathbf{s}: \langle (\mathbf{alg}_{s(1)},\textbf{rank}_1), \dots, (\mathbf{alg}_{s(p)},\textbf{rank}_p) \rangle$, where $\textbf{rank}_j$ is the rank of the algorithm at position $j$ in $\mathbf{s}$ and $\textbf{rank}_j \in \{1, \dots, k\}$ with $k \le p$ (i.e., several algorithms can share the same rank). Here, $s(j)$ is the index of the algorithm at the $j^{th}$ position in $\mathbf{s}$. For instance, for the experiment in Figure~\ref{fig:clusters}, the sorted sequence would be $\langle (\mathbf{alg}_{1},1), (\mathbf{alg}_{0},1), (\mathbf{alg}_{2},2), (\mathbf{alg}_{3},2), (\mathbf{alg}_{4},3), (\mathbf{alg}_{5},3) \rangle$. Algorithms that evaluate to be equivalent to one another are assigned the same rank. To this end, we first define the procedure that compares two algorithms while taking their possible equivalence into account. Then, we sort the algorithms using this three-way comparison function to update and merge ranks. \textbf{Algorithm comparison (Procedure~\ref{alg:compare}):} Procedure~\ref{alg:compare} takes as input two sets of $N$ measurements $\mathbf{t}_i, \mathbf{t}_j \in \mathbb{R}^N$ from algorithms $\mathbf{alg}_i, \mathbf{alg}_j$ respectively, and a specific quantile range $(q_{\text{lower}}, q_{\text{upper}})$. The procedure compares the quantiles of the two algorithms. If the $q_{\text{upper}}$ quantile of $\mathbf{alg}_i$ is less than the $q_{\text{lower}}$ quantile of $\mathbf{alg}_j$, then $\mathbf{alg}_i$ is ``better'' than ($<$) $\mathbf{alg}_j$. Otherwise, if the $q_{\text{upper}}$ quantile of $\mathbf{alg}_j$ is less than the $q_{\text{lower}}$ quantile of $\mathbf{alg}_i$, then $\mathbf{alg}_i$ is ``worse'' than ($>$) $\mathbf{alg}_j$. Otherwise, the two algorithms are ``equivalent'' ($\sim$). 
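This three-way comparison can be sketched in Python as follows; the nearest-rank quantile used here is one of several possible interpolation choices, not necessarily the one used in our implementation:

```python
def quantile(ts, q):
    """Empirical q-th percentile (0 < q < 100) by nearest rank."""
    ts = sorted(ts)
    return ts[round(q / 100 * (len(ts) - 1))]

def compare_algs(t_i, t_j, q_lower=25, q_upper=75):
    """Three-way comparison of two sets of time measurements.
    Returns '<' if t_i is noticeably faster, '>' if noticeably slower,
    and '~' if the quantile ranges overlap (equivalent)."""
    if quantile(t_i, q_upper) < quantile(t_j, q_lower):
        return "<"
    if quantile(t_j, q_upper) < quantile(t_i, q_lower):
        return ">"
    return "~"

# Hypothetical measurements: clearly separated vs. overlapping samples.
t_fast = [1.00, 1.05, 1.10, 1.15, 1.20]
t_slow = [2.00, 2.05, 2.10, 2.15, 2.20]
t_mid  = [1.10, 1.15, 1.25, 1.30, 1.35]
```

With these made-up samples, `compare_algs(t_fast, t_slow)` yields `<`, while `compare_algs(t_fast, t_mid)` yields `~` because the inter-quantile ranges overlap.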
\begin{algorithm} \caption{CompareAlgs $(\mathbf{alg}_i, \mathbf{alg}_j, q_{\text{lower}}, q_{\text{upper}})$ } \label{alg:compare} \hspace*{\algorithmicindent} \textbf{Input: } $ \mathbf{alg}_i, \mathbf{alg}_j \in \mathcal{A}$ \\ \hspace*{\algorithmicindent} \hspace*{\algorithmicindent} $ \quad q_{\text{lower}}, q_{\text{upper}} \in (0,100) \quad q_{\text{upper}} > q_{\text{lower}} $\\ \hspace*{\algorithmicindent} \textbf{Output:} $ \{``\mathbf{alg}_i < \mathbf{alg}_j", ``\mathbf{alg}_i>\mathbf{alg}_j", ``\mathbf{alg}_i\sim \mathbf{alg}_j" \}$ \begin{algorithmic}[1] \State $\mathbf{t_i} = get\_measurements(\mathbf{alg}_i)$ \Comment{$\mathbf{t_i} \in \mathbb{R}^{N}$} \State $\mathbf{t_j} = get\_measurements(\mathbf{alg}_j)$ \Comment{$\mathbf{t_j} \in \mathbb{R}^{N}$} \\ \State $\mathbf{t_i}^{low} \leftarrow $ Value of the $q_{\text{lower}}$ quantile in $\mathbf{t_i}$ \Comment{$ \mathbf{t_i}^{low} \in \mathbb{R}$} \State $ \mathbf{t_i}^{up} \leftarrow $ Value of the $q_{\text{upper}}$ quantile in $\mathbf{t_i}$ \Comment{$ \mathbf{t_i}^{up} \in \mathbb{R}$}\\ \State $\mathbf{t_j}^{low} \leftarrow $ Value of the $q_{\text{lower}}$ quantile in $\mathbf{t_j}$ \Comment{$ \mathbf{t_j}^{low} \in \mathbb{R}$} \State $ \mathbf{t_j}^{up} \leftarrow $ Value of the $q_{\text{upper}}$ quantile in $\mathbf{t_j}$ \Comment{$ \mathbf{t_j}^{up} \in \mathbb{R}$}\\ \If{$\mathbf{t_i}^{up} < \mathbf{t_j}^{low} $} \State return ``$\mathbf{alg}_i < \mathbf{alg}_j$" \ElsIf{$\mathbf{t_j}^{up} < \mathbf{t_i}^{low} $} \State return ``$\mathbf{alg}_i>\mathbf{alg}_j$" \Else \State return ``$\mathbf{alg}_i\sim \mathbf{alg}_j $" \EndIf \end{algorithmic} \end{algorithm} \textbf{Sorting procedure (Procedure~\ref{alg:sort}):} The inputs to Procedure~\ref{alg:sort} are the initial sequence $\mathbf{h}_0$ and a quantile range $(q_{\text{lower}}, q_{\text{upper}})$. The output is a sorted sequence $\mathbf{s}$. 
To this end, the bubble-sort procedure~\cite{astrachan2003bubble} is adapted to work with the three-way comparison function. Starting from the leftmost element in the initial sequence, the procedure compares adjacent algorithms and swaps their positions if an algorithm occurring later in the sequence is better (according to Procedure~\ref{alg:compare}) than the preceding algorithm; the ranks are then updated. When two algorithms turn out to be equivalent to each other, both are assigned the same rank, and their positions are not swapped. In order to illustrate the rank-update rules in detail, we consider the illustration in Figure~\ref{fig:sort}, which shows the intermediate steps while sorting an initial sequence $ \mathbf{h_0}: \langle \mathbf{alg}_1, \mathbf{alg}_2, \mathbf{alg}_3, \mathbf{alg}_4 \rangle $. All possible update rules that one might encounter appear in one of the intermediate steps of this example. \begin{figure} \includegraphics[width=0.5\textwidth]{fig/ranking} \caption{Bubble Sort with the three-way comparison function.} \label{fig:sort} \end{figure} \begin{enumerate} \item \textit{\textbf{Both positions and ranks are swapped}:} In the first pass of bubble sort, pair-wise comparisons of adjacent algorithms are performed starting from the first element in the sequence. Currently, the sequence is $\langle \mathbf{alg}_1, \mathbf{alg}_2, \mathbf{alg}_3, \mathbf{alg}_4 \rangle$. As a first step, algorithms $\mathbf{alg}_1$ and $\mathbf{alg}_2$ are compared, and $\mathbf{alg}_2$ turns out to be faster. As the slower algorithm should be shifted towards the end of the sequence, $\mathbf{alg}_1$ and $\mathbf{alg}_2$ swap positions (line \ref{lst:swap} in Procedure~\ref{alg:sort}). Since all the algorithms still have unique ranks, $\mathbf{alg}_2$ and $\mathbf{alg}_1$ also exchange their ranks, and no special rules for updating ranks are applied. So, $\mathbf{alg}_2$ and $\mathbf{alg}_1$ receive ranks 1 and 2, respectively. 
\item \textit{\textbf{Positions are not swapped but the ranks are merged}:} Next, algorithm $\mathbf{alg}_1$ is compared with its successor $\mathbf{alg}_3$; since they are just as good as one another, no swap takes place. Now, the rank of $\mathbf{alg}_3$ should also indicate that it is as good as $\mathbf{alg}_1$; so $\mathbf{alg}_3$ is given the same rank as $\mathbf{alg}_1$, and the rank of $\mathbf{alg}_4$ is corrected by decrementing it by 1 (lines \ref{lst:ag1}-\ref{lst:ag2} in Procedure~\ref{alg:sort}). Hence $\mathbf{alg}_1$ and $\mathbf{alg}_3$ have rank 2, and $\mathbf{alg}_4$ is corrected to rank 3. \item \textit{\textbf{Both positions and ranks are swapped}:} (\textit{This is the same rule applied in Step 1}). In the last comparison of the first sweep of bubble sort, algorithm $\mathbf{alg}_4$ turns out to be faster than $\mathbf{alg}_3$, so their positions and ranks are swapped. This completes the first pass of bubble sort. At this point, the sequence is $\langle \mathbf{alg}_2, \mathbf{alg}_1, \mathbf{alg}_4, \mathbf{alg}_3 \rangle$. \item \textit{\textbf{Swapping positions with algorithms having the same rank}:} In the second pass of bubble sort, the pair-wise comparisons of adjacent algorithms, except for the ``right-most'' algorithm in the sequence, are evaluated (note that the right-most algorithm can still have its rank updated depending on the results of comparisons of algorithms occurring earlier in the sequence). The first two algorithms, $\mathbf{alg}_2$ and $\mathbf{alg}_1$, were already compared in Step 1. So now, the next comparison is $\mathbf{alg}_1$ vs.~$\mathbf{alg}_4$. Algorithm $\mathbf{alg}_4$ turns out to be faster than $\mathbf{alg}_1$, so their positions are swapped as usual. Now, the rank of $\mathbf{alg}_4$ remains the same, but the ranks of all the algorithms following $\mathbf{alg}_4$ are incremented by 1. 
Therefore, the ranks of $\mathbf{alg}_1$ and $\mathbf{alg}_3$ are incremented by 1 (lines \ref{lst:h1}-\ref{lst:h2} in Procedure~\ref{alg:sort}). This completes the second pass of bubble sort, and the two slowest algorithms have been pushed to the right. \item \textit{\textbf{Positions are not swapped but the ranks are merged:}} (\textit{This is the same rule applied in Step 2}). In the third and final pass, we again start from the first element on the left of the sequence and continue the pair-wise comparisons up to the third-to-last element. This leaves only one comparison to be done, $\mathbf{alg}_4$ vs.~$\mathbf{alg}_2$. Algorithm $\mathbf{alg}_4$ is evaluated to be as good as $\mathbf{alg}_2$, so both are given the same rank and the positions are not swapped. The ranks of algorithms occurring later than $\mathbf{alg}_4$ in the sequence are decremented by 1. Thus, the final sequence is $\langle \mathbf{alg}_2, \mathbf{alg}_4, \mathbf{alg}_1, \mathbf{alg}_3 \rangle$. Algorithms $\mathbf{alg}_2$ and $\mathbf{alg}_4$ obtain rank 1, and $\mathbf{alg}_1$ and $\mathbf{alg}_3$ obtain rank 2. 
\end{enumerate} \begin{algorithm} \caption{SortAlgs $(\mathbf{h_0}, q_{\text{lower}}, q_{\text{upper}})$ } \label{alg:sort} \hspace*{\algorithmicindent} \textbf{Input: } $ \mathbf{h_0} : \langle \mathbf{alg}_{i(1)},\dots,\mathbf{alg}_{i(p)} \rangle $ \\ \hspace*{\algorithmicindent} \hspace*{\algorithmicindent} $ \quad q_{\text{lower}}, q_{\text{upper}} \in (0,100) \quad q_{\text{upper}} > q_{\text{lower}} $\\ \hspace*{\algorithmicindent} \textbf{Output: } $ \mathbf{s}: \langle (\mathbf{alg}_{s(1)},r_1), \dots, (\mathbf{alg}_{s(p)},r_p) \rangle $ \begin{algorithmic}[1] \For{j = 1, $\dots$, p} \State Initialize $r_j \leftarrow j$ \Comment{Initialize Alg rank} \State Initialize $s(j) \leftarrow i(j)$ \Comment{Initialize Alg order} \EndFor\\ \For{k = 1, $\dots$, p} \For{j = 1, $\dots$, p-k} \State ret = CompareAlgs($\mathbf{alg}_{s(j)}, \mathbf{alg}_{s(j+1)}, q_{\text{lower}}, q_{\text{upper}}$) \If{$\mathbf{alg}_{s(j+1)}$ is faster than $\mathbf{alg}_{s(j)}$} \State Swap indices $s(j)$ and $s(j+1)$ \label{lst:swap} \If{$r_{j+1} = r_j$} \label{lst:h1} \State Increment ranks $r_{j+1}, \dots, r_p$ by 1 \label{lst:h2} \EndIf \ElsIf{$\mathbf{alg}_{s(j+1)}$ is as good as $\mathbf{alg}_{s(j)}$} \label{lst:ag1} \If{$r_{j+1} \ne r_j$} \State Decrement ranks $r_{j+1}, \dots, r_p$ by 1 \label{lst:ag2} \EndIf \ElsIf{$\mathbf{alg}_{s(j)}$ is faster than $\mathbf{alg}_{s(j+1)}$} \State Leave the ranks as they are \EndIf \EndFor \EndFor \State $\mathbf{s} \leftarrow \langle (\mathbf{alg}_{s(1)},r_1), \dots, (\mathbf{alg}_{s(p)},r_p) \rangle$ \State return $\mathbf{s}$ \end{algorithmic} \end{algorithm} \bigskip \textbf{Mean rank calculation (Procedure~\ref{alg:meanrank}):} The results of the sorting procedure depend on the chosen quantile range $(q_{\text{lower}}, q_{\text{upper}})$. For the example in Figure~\ref{fig:clusters}, the estimated ranks for different quantile ranges are shown in Table~\ref{tab:q-ranks}. 
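A simplified Python sketch of the rank computation follows. Instead of updating ranks during the bubble sort as Procedure~\ref{alg:sort} does, this sketch sorts by median and then assigns ranks in one pass over the sorted sequence, incrementing the rank only when adjacent algorithms differ noticeably; the measurement data is made up. The mean rank over several quantile ranges (as in Procedure~\ref{alg:meanrank}) is then a simple average:

```python
def quantile(ts, q):
    """Empirical q-th percentile (0 < q < 100) by nearest rank."""
    ts = sorted(ts)
    return ts[round(q / 100 * (len(ts) - 1))]

def compare(t_i, t_j, q_low, q_up):
    """Three-way comparison: -1 (noticeably faster), 1 (slower), 0 (equivalent)."""
    if quantile(t_i, q_up) < quantile(t_j, q_low):
        return -1
    if quantile(t_j, q_up) < quantile(t_i, q_low):
        return 1
    return 0

def ranks_for_range(meas, q_low, q_up):
    """Rank the algorithms for one quantile range: order by median, then
    give adjacent equivalent algorithms the same rank (a simplification:
    equivalence is only checked between neighbors in the ordering)."""
    order = sorted(meas, key=lambda a: sorted(meas[a])[len(meas[a]) // 2])
    ranks, r = {}, 1
    for prev, cur in zip(order, order[1:]):
        ranks.setdefault(prev, r)
        if compare(meas[prev], meas[cur], q_low, q_up) != 0:
            r += 1
        ranks[cur] = r
    return ranks

def mean_ranks(meas, q_ranges):
    """Average the per-range ranks of every algorithm."""
    per_range = [ranks_for_range(meas, lo, up) for lo, up in q_ranges]
    return {a: sum(r[a] for r in per_range) / len(per_range) for a in meas}

# Hypothetical measurements: 'a' and 'b' overlap, 'c' is clearly slower.
meas = {"a": [1.0, 1.1, 1.2], "b": [1.05, 1.1, 1.15], "c": [3.0, 3.1, 3.2]}
mr = mean_ranks(meas, [(25, 75), (10, 90)])
```

Here `a` and `b` share rank 1 in every quantile range, while `c` is consistently ranked 2.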
For large quantile ranges that cover the tail ends of the time distributions, such as $(q_5, q_{95})$, the algorithms result to be equivalent to one another more often than for small ranges. For instance, in our example, all the algorithms are estimated to be equivalent (i.e., rank 1) for the quantile range $(q_5, q_{95})$. As the quantile ranges become smaller, the tails of the distributions are curtailed, and less overlap is estimated among the algorithms. Thus, for $(q_{25}, q_{75})$, $\mathbf{alg_0}$ and $\mathbf{alg_1}$ obtain rank 1, $\mathbf{alg_2}$ and $\mathbf{alg_3}$ obtain rank 2, and $\mathbf{alg_4}$ and $\mathbf{alg_5}$ obtain rank 3; these are the ranks one might expect from the visual inference of the box-plots in Figure~\ref{fig:clusters}. For a smaller range, $(q_{35}, q_{65})$, $\mathbf{alg_2}$ and $\mathbf{alg_3}$ obtain different ranks, as $\mathbf{alg_2}$ is slightly shifted towards the right of $\mathbf{alg}_3$ despite significant overlap. Therefore, ranks from isolated quantile ranges do not accurately quantify the underlying performance characteristics of the algorithms. \begin{table}[h!] \begin{center} \renewcommand{\arraystretch}{1.2} \begin{tabular}{@{}r ccc ccc@{}} \toprule & \textbf{alg1} & \textbf{alg0} & \textbf{alg3} & \textbf{alg2} &\textbf{alg4} & \textbf{alg5} \\ \midrule {$(q_5, q_{95})$} & 1 & 1 & 1 & 1 & 1 & 1 \\ {$(q_{10}, q_{90})$} & 1 & 1 & 2 & 2 & 2 & 2 \\ {$(q_{15}, q_{85})$} & 1 & 1 & 2 & 2 & 2 & 2 \\ {$(q_{20}, q_{80})$} & 1 & 1 & 2 & 2 & 3 & 3 \\ {$\mathbf{(q_{25}, q_{75})}$} & \textbf{1} &\textbf{1} & \textbf{2} &\textbf{2} & \textbf{3} & \textbf{3} \\ {$(q_{30}, q_{70})$} & 1 & 1 & 2 & 2 & 3 & 3 \\ {$(q_{35}, q_{65})$} & 1 & 1 & 2 & 3 & 4 & 4 \\ {\textbf{Mean rank}} & \textbf{1.0} & \textbf{1.0} & \textbf{1.86 } & \textbf{2.0} & \textbf{2.57 } & \textbf{2.57 } \\ \bottomrule \end{tabular} \caption{Ranks calculated for the data in Figure~\ref{fig:clusters} on different quantile ranges. 
The mean ranks of the algorithms across the quantile ranges are also shown. } \label{tab:q-ranks} \end{center} \end{table} In order to estimate a reliable metric, we repeat Procedure~\ref{alg:sort} with different quantile ranges and compute the mean rank ($\mathbf{mr}$) for each algorithm. The ranks from a specific quantile range can be compared with the mean ranks to get a better understanding of the performance. We choose $(q_{25}, q_{75})$ as the default, as this range is commonly used for statistical outlier detection~\cite{hodge2004survey}. For $(q_{25}, q_{75})$, both $\mathbf{alg_2}$ and $\mathbf{alg_3}$ obtain the same rank; however, as the mean rank of $\mathbf{alg_2}$ is slightly greater than that of $\mathbf{alg_3}$, this indicates that, according to the available empirical data, $\mathbf{alg_3}$ is better than $\mathbf{alg_2}$. \begin{algorithm} \caption{ MeanRanks$(\mathbf{h_0}, \mathbf{q})$ } \label{alg:meanrank} \hspace*{\algorithmicindent} \textbf{Input: } $ \mathbf{h_0} : \langle \mathbf{alg}_{i(1)},\dots,\mathbf{alg}_{i(p)} \rangle$ \\ \hspace*{\algorithmicindent} \hspace*{\algorithmicindent} $ \mathbf{q} : [ (q1_{\text{low}}, q1_{\text{up}}), \dots ,(qH_{\text{low}}, qH_{\text{up}})] $\\ \hspace*{\algorithmicindent} \textbf{Output: } $ \mathbf{s}_{[25,75]}, [(\mathbf{alg}_1, \mathbf{mr}_1), \dots, (\mathbf{alg}_p, \mathbf{mr}_p)]$ \begin{algorithmic}[1] \State Initialize $\mathbf{mr}_i \leftarrow 0$ \Comment{$i \in \{1, \dots, p\}$} \For{$q_{\text{low}}$, $q_{\text{up}}$ in $\mathbf{q}$} \State $\mathbf{s}_{[low, up]} \leftarrow$ SortAlgs$(\mathbf{h_0}, q_{\text{low}}, q_{\text{up}})$ \EndFor \State $\mathbf{mr}_i \leftarrow$ Mean rank of $\mathbf{alg}_i$ over all the quantiles in $\mathbf{s}$. 
\State return $\mathbf{s}_{[25,75]}, [(\mathbf{alg}_1, \mathbf{mr}_1), \dots, (\mathbf{alg}_p, \mathbf{mr}_p)]$ \end{algorithmic} \end{algorithm} \textbf{Convergence (Procedure~\ref{alg:measure}): } The calculation of ranks requires measurements of the execution times of each algorithm. Starting with an empty measurement set (i.e., $N=0$), $M$ measurements (typically only a few; e.g., 2 or 3) of each algorithm are iteratively added and the mean ranks are computed using Procedure~\ref{alg:meanrank}. The iteration stops once the mean ranks converge. We estimate the convergence of the ranks as follows: Let $\mathbf{x}$ be an ordered list of mean ranks. Then the changes in mean rank between adjacent algorithms in the list ($\mathbf{dx}$) are computed as: \begin{equation*} \mathbf{dx} = \text{convolution}(\mathbf{x}, [1,-1], step=1) \end{equation*} where $[1,-1]$ is the convolution filter. In simple terms, $\mathbf{dx}$ is the list of differences in mean rank between adjacent algorithms in $\mathbf{x}$. Let $\mathbf{dx}$ and $\mathbf{dy}$ be the changes in mean ranks over the lists from iterations $j$ and $j-1$ respectively. Then, the stopping criterion for the iteration is \begin{equation*} \frac{\lVert \mathbf{dx} - \mathbf{dy} \rVert_{L2}}{p} < eps \end{equation*} where $\lVert . \rVert_{L2}$ is the L2 norm and $p$ is the number of algorithms being compared. The iterations stop when this criterion is satisfied or when the number of measurements per algorithm reaches a user-specified maximum value ($max$). For illustration, consider the example in Table~\ref{tab:q-ranks}. The list of mean ranks is $\mathbf{x} = [1,1,1.86,2.0,2.57,2.57]$, and the list of changes in mean rank is $\mathbf{dx} = [0,0.86,0.14,0.57,0]$. Now, consider the ranks at $(q_{35}, q_{65})$: the ranks of the algorithms are $\langle (\mathbf{alg}_{1},1), (\mathbf{alg}_{0},1), (\mathbf{alg}_{3},2), (\mathbf{alg}_{2},3), (\mathbf{alg}_{4},4), (\mathbf{alg}_{5},4) \rangle$. 
Let us assume that in the following iteration, $\mathbf{alg_2}$ and $\mathbf{alg_3}$ obtain the same rank; i.e., $\langle (\mathbf{alg}_{1},1), (\mathbf{alg}_{0},1), (\mathbf{alg}_{3},2), (\mathbf{alg}_{2},2), (\mathbf{alg}_{4},3), (\mathbf{alg}_{5},3) \rangle$; then the mean ranks for the next iteration would be $\mathbf{y} = [1,1,1.86,1.86,2.43,2.43]$, and the changes in mean rank would be $\mathbf{dy} = [0,0.86,0,0.57,0]$. Then $ \frac{\lVert \mathbf{dy} - \mathbf{dx} \rVert_{L2}}{5} = 0.028$. \begin{algorithm} \caption{ MeasureAndRank$(\mathbf{h_0}, M, eps, max)$ } \label{alg:measure} \hspace*{\algorithmicindent} \textbf{Input: } $ \mathbf{h_0} :\langle \mathbf{alg}_{i(1)},\dots,\mathbf{alg}_{i(p)} \rangle$ \\ \hspace*{\algorithmicindent} \hspace*{\algorithmicindent} $ M, eps, max \in \mathbb{R} $\\ \hspace*{\algorithmicindent} \textbf{Output: } $\mathbf{s}_{[25,75]}, \mathbf{mr}'$ \begin{algorithmic}[1] \State Set norm $\gg eps$ \Comment{norm $\in \mathbb{R}$} \State $\mathbf{q} \leftarrow$ Define a set of quantile ranges. \State $N = 0$ \State Initialize $\mathbf{dy}_j \leftarrow 1 $ \Comment{$j \in \{1,\dots, p-1\}$} \While{norm $\ge eps$ \textbf{and} $N < max$} \State Measure every algorithm $M$ times. 
\State $N = N+M$ \State $\mathbf{s}_{[25,75]}, \mathbf{mr}' \leftarrow $MeanRanks($\mathbf{h_0}, \mathbf{q}$) \State \Comment{On N Measurements} \State $\mathbf{x}_i \leftarrow $ Mean rank of $\mathbf{alg}_i$ from $\mathbf{mr'}$ \Comment{$ i \in \{1,\dots, p\}$} \State $\mathbf{dx} \leftarrow \text{convolution}(\mathbf{x}, [1,-1], step=1)$ \Comment{$\mathbf{x} \in \mathbb{R}^{p}$} \State norm $ \leftarrow \frac{\lVert \mathbf{dx} - \mathbf{dy} \rVert_{L2}}{p}$ \Comment{$\mathbf{dx} \in \mathbb{R}^{p-1} $} \State $\mathbf{dy} \leftarrow \mathbf{dx}$ \State $\mathbf{h_0} \leftarrow \langle \mathbf{alg}_{s(1)},\dots,\mathbf{alg}_{s(p)} \rangle $ \Comment{Ordering from $\mathbf{s}_{[25,75]}$} \EndWhile \State return $\mathbf{s}_{[25,75]}, \mathbf{mr}'$ \end{algorithmic} \end{algorithm} \section{Interpretation} \label{sec:exp} In this section, we illustrate how our methodology works. For our experiments, we use the Linnea framework~\cite{Barthels2021:688}, which accepts linear algebra expressions as input and generates a family of mathematically equivalent algorithms (in the form of sequential Julia code) consisting of (mostly, but not limited to) sequences of BLAS and LAPACK calls. The experiments are run on a Linux-based Intel Xeon machine with turbo-boost enabled and the number of threads set to 10. We consider the following instances of Expression~\ref{eqn:mc}: \begin{itemize} \item Instance A: $(1000, 1000, 500, 1000, 1000 )$ \item Instance B: $(1000, 1000, 1000, 1000, 1000 )$ \end{itemize} For each instance, we first execute all the algorithms once. Then, the initial hypothesis ($\mathbf{h_0}$) is formed by ranking the algorithms in increasing order of their single-run execution times. Procedure~\ref{alg:measure} is applied with the parameters $M=3$, $eps=0.03$ and $max=30$. We define the quantile ranges to be the same as those in Table~\ref{tab:q-ranks}. The experiments are run in two different settings. 
In the first setting, the experiments are run on a node whose unused processing power and memory can be shared among other processes. In the second setting, the experiments are run on a node where exclusive access is granted. The execution times of the algorithms on the shared node are expected to show more fluctuations than on the exclusive node. Before every iteration of the mean ranks computation in Procedure~\ref{alg:measure}, the $M$ measurements from every algorithm are shuffled to enable a fair comparison. The results of the experiments are shown in Figure~\ref{fig:exp2}. The tables show the updated sequences, estimated ranks at $(q_{25}, q_{75})$ and the mean ranks. The shades of the cells indicate the relative FLOP counts of the algorithms (see Equation~\ref{eq:rel-flops}); a darker shade indicates a higher relative FLOP count. The algorithms in white cells compute the least FLOPs. \begin{figure} \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\linewidth]{fig/exp2b} \caption{Instance A} \label{fig:exp2b} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\linewidth]{fig/exp2c} \caption{Instance B} \label{fig:exp2c} \end{subfigure} \caption{$\mathbf{h_0}$ is the initial hypothesis. $\mathbf{s}$ is the updated sequence. The ranks at $(q_{25}, q_{75})$ and the mean ranks (in brackets) are shown.} \label{fig:exp2} \end{figure} \begin{itemize} \item \textbf{Instance A} (Figure~\ref{fig:exp2b}): The minimum-FLOPs algorithms (\textbf{alg0} and \textbf{alg1}) obtain the best rank in both settings. In the shared setting, the mean rank of \textbf{alg0} is greater than that of \textbf{alg1}, which indicates that the underlying distribution of \textbf{alg0} is slightly shifted towards the right of \textbf{alg1} (see Figure~\ref{fig:front2b}); that is, higher execution times are observed in some samples of \textbf{alg0} than in \textbf{alg1}.
All the other algorithms obtain the same rank despite having different FLOP counts, which indicates significant overlap of distributions. However, the algorithms with the highest FLOP count (\textbf{alg4} and \textbf{alg5}) are slightly shifted to the right of \textbf{alg2} and \textbf{alg3}, indicating relatively worse performance; this is quantified by the higher mean rank scores of \textbf{alg4} and \textbf{alg5} than those of \textbf{alg2} and \textbf{alg3}. On the other hand, in the exclusive setting, \textbf{alg4} and \textbf{alg5} obtain a higher rank than \textbf{alg2} and \textbf{alg3}, which indicates not only the rightward shift but also a non-significant overlap of the underlying distributions (see Figure~\ref{fig:turbo2b}). The iterations in Procedure~\ref{alg:measure} stop after 21 and 24 measurements per algorithm in the shared and exclusive settings, respectively. \item \textbf{Instance B} (Figure~\ref{fig:exp2c}): All the algorithms compute comparable FLOPs and they all obtain the same rank in both settings. In the shared mode, 15 measurements per algorithm were made, while the exclusive mode took 27 measurements per algorithm for the mean ranks to converge. \end{itemize} \begin{figure} \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\linewidth]{fig/front2b} \caption{Instance A: Shared} \label{fig:front2b} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\linewidth]{fig/turbo2b} \caption{Instance A: Exclusive} \label{fig:turbo2b} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\linewidth]{fig/turbo2c} \caption{Instance B: Exclusive} \label{fig:turbo2c} \end{subfigure} \caption{Measurements for Instance A and Instance B. The algorithms are ordered by increasing rank from bottom to top.
} \label{fig:2b} \end{figure} \textit{\textbf{Effect of Turbo boost:}} It can be noticed that in the exclusive setting, more measurements were made than in the shared setting. The scatter plots of algorithms in Figures~\ref{fig:turbo2b} and \ref{fig:turbo2c} show multi-modal distributions of measurements (in particular bi-modal, with two clusters of data points at the two ends of the distribution). This is because the processor operated at multiple frequency levels due to turbo boost settings, thereby resulting in significantly different execution times for the same algorithm. As the measurements of algorithms were sufficiently shuffled, the probability that a particular algorithm executes in just one frequency mode---thereby resulting in a biased comparison---is minimized. However, for the quantile ranges we considered (from Table~\ref{tab:q-ranks}), the algorithms \textbf{alg4}, \textbf{alg3}, \textbf{alg0}, \textbf{alg1}, \textbf{alg2} obtain the same mean rank scores (see exclusive mode in Figure~\ref{fig:exp2c}) even though the relative shifts among their distributions can be visually observed in Figure~\ref{fig:turbo2c}. In order to compare algorithms based on the measurements taken during the fast frequency modes of the processor (i.e., measurements towards the left end of the distribution), we modify the quantile set in Procedure~\ref{alg:measure} and consider the following ranges: $[(q_{5},q_{50}), (q_{15},q_{45}), (q_{20},q_{40}), (q_{25},q_{35}) ]$ and recalculate the mean ranks. The results are shown in Figure~\ref{fig:anomaly2c}. Now, \textbf{alg5} obtains the best rank. The relative shifts among the algorithms based on the left part of the distributions are now quantified by the mean ranks. \textit{\textbf{Test for FLOPs as a discriminant for the best algorithm:}} Consider the algorithms for instance B again.
If the algorithms are to be executed on a compute node that operates at multiple frequency levels, then according to our methodology, at $(q_{25},q_{75})$, all the algorithms are considered equivalent, as they all obtain the best rank. The mean rank of \textbf{alg5} is better than the rest, but the methodology does not consider the difference statistically significant. Hence, one would not lose significantly in performance by randomly choosing one of the minimum-FLOPs algorithms. However, if one is interested only in the performance at the high frequency modes of the processors, then \textbf{alg5} shows significantly better performance than the other algorithms. In this case, FLOPs fail to discriminate the algorithms and the instance will be considered an anomaly. Recall the instance $(331, 279, 338, 854, 497)$ which was observed as an \textit{anomaly} in~\cite{Lopez2022:530} and discussed in Sec.~\ref{sec:int}. When the different frequency modes of the processors are not taken into account, all the algorithms are equivalent, as they all obtain rank 1. However, when focusing on the fast frequency modes, \textbf{alg2} (which is not the best algorithm based on FLOPs) shows significantly better performance than the algorithms with minimum FLOPs (see Figure~\ref{fig:anomaly-1}). Now, according to the methodology, this instance will be considered an anomaly. Expression~\ref{eqn:mc} consisted of only 6 variants and we measured all of them in order to explain the working of the methodology. However, in practice, compilers such as Linnea generate hundreds of variant algorithms for a given linear algebra expression. In that case, one could filter the initial set of algorithms and create a subset consisting of only the potential algorithms before taking further measurements.
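One way such a pre-filtering step could look is sketched below. This is a hypothetical illustration: the relative-time definition used here (single-run time divided by the fastest single-run time), the threshold value, and all names and data are assumptions, not the paper's implementation.

```python
def potential_candidates(flops, times, threshold=1.5):
    """Reduce a family of generated algorithms to the potential
    candidates before taking further measurements (sketch).

    Keeps every minimum-FLOPs algorithm, plus any algorithm whose
    single-run relative time stays below the threshold.

    flops : {name: FLOP count}
    times : {name: single-run execution time in seconds}
    """
    min_flops = min(flops.values())
    t_best = min(times.values())
    keep = set()
    for name in flops:
        rel_time = times[name] / t_best  # assumed relative-time definition
        if flops[name] == min_flops or rel_time < threshold:
            keep.add(name)
    return keep

# Hypothetical single-run data for six generated variants:
flops = {"alg0": 1.0e9, "alg1": 1.0e9, "alg2": 1.2e9,
         "alg3": 1.2e9, "alg4": 1.5e9, "alg5": 1.5e9}
times = {"alg0": 0.50, "alg1": 0.52, "alg2": 0.70,
         "alg3": 0.90, "alg4": 1.10, "alg5": 0.68}
# alg3 (relative time 1.8) and alg4 (2.2) exceed the threshold and are
# not minimum-FLOPs algorithms, so both are dropped from the subset.
subset = potential_candidates(flops, times)
```

The incremental measurement procedure would then be applied only to the surviving subset.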
In order to test if FLOPs are a valid discriminant for a given instance of an expression, the set of potential candidates could be all the algorithms with the least FLOP count and those algorithms whose relative times based on single-run execution times (calculated according to Equation~\ref{eq:rel-time}) are less than a certain threshold (say, 1.5). Then, Procedure~\ref{alg:measure} can be applied to the reduced set, consisting of only the potential algorithms. \begin{figure} \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\linewidth]{fig/mins-2c} \caption{Instance B} \label{fig:anomaly2c} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\linewidth]{fig/mins-anomaly} \caption{Anomaly} \label{fig:anomaly-1} \end{subfigure} \caption{Quantiles: $[(q_{5},q_{50}), (q_{15},q_{45}), (q_{20},q_{40}), (q_{25},q_{35}) ]$. The ranks at $(q_{15}, q_{45})$ and the mean ranks (in brackets) are shown.} \label{fig:anomaly} \end{figure} \section{Conclusion} \label{sec:con} In this work, we developed a methodology to rank a set of equivalent algorithms into performance classes. The input to the methodology is a set of algorithms ranked based on an initial hypothesis such as FLOP count or a single execution-time measurement of each algorithm. We then take further measurements of the algorithms incrementally, in small steps, and update the ranks accordingly. The measurement process stops as the rank updates converge. To this end, we developed a strategy to sort algorithms by comparing the quantiles of the execution time measurements; the ranks of the algorithms are merged if they have significant overlaps in the distribution of measurements. The rank estimates quantify the performance of the algorithms relative to one another. We showed that our methodology can be used to interpret and analyse performance even on compute nodes that operate at multiple frequency levels (e.g., machines that have turbo-boost enabled).
We used our methodology to develop a test for FLOPs as a discriminant for linear algebra algorithms. The Python implementation of the methodology is available online\footnote{https://github.com/as641651/AlgorithmRanking.}. Recall our proposition (from Sec.~\ref{sec:int}) that high-level languages such as Julia, Matlab, TensorFlow, etc., choose algorithms that are sub-optimal in performance. The argument for the sub-optimality is twofold: First, the languages do not fully apply linear algebra knowledge to explore all possible alternate algorithms (this issue was not discussed in this paper). Second, they select a sub-optimal algorithm from a given set of alternatives, because these high-level languages make algorithmic choices by minimizing FLOPs, and it has been pointed out (e.g., in~\cite{Lopez2022:530}) that the algorithm with the lowest FLOP count is not always the fastest (such instances were referred to as \textit{anomalies}). In order to tackle the second argument, performance models that facilitate better algorithm selection strategies have to be developed; that is, those performance models should be able to perform better than what FLOPs can already do. To this end, it is important to verify that, for a considered use case, there exists an abundance of anomalies that cannot be discriminated using FLOP counts. Our methodology can be used to detect the presence of anomalies. The anomalies can be used for further investigations to find the root causes of performance differences. \bibliographystyle{IEEEtran}
\section{Introduction} The fusion cross section $\sigma_{fus}$ at very low energies of reactions involving $^{12}$C and $^{16}$O is a crucial ingredient for calculating astrophysical reaction rates for different stellar burning scenarios in massive stars, in which $^{16}$O + $^{16}$O is the key reaction for the later oxygen burning phase. This cross section is usually represented by the S-factor ($S =\sigma_{fus} E e^{2\pi \eta}$, where $\eta$ is the Sommerfeld parameter), as it facilitates the extrapolation of relatively high-energy fusion data because direct experiments at very low energies are very difficult to carry out. Unfortunately, there is a huge uncertainty in the S-factor resulting from the extrapolation of different phenomenological parametrizations that explain the high-energy data, as shown by Jiang et al.\cite{Jiang} for $^{16}$O + $^{16}$O. For $^{12}$C + $^{12}$C, the presence of pronounced molecular resonance structures makes it much more uncertain\cite{Aguilera}. These extrapolated values result in reaction rates that differ by many orders of magnitude\cite{Jiang}. Therefore, a direct calculation of the S-factor at energies of astrophysical interest ($< 3$ MeV) is essential. We report on an investigation\cite{Alexis1} of the fusion reaction $^{16}$O + $^{16}$O within the adiabatic molecular picture\cite{Greiner}, which is realistic at low incident energies. This is because the radial motion of the nuclei is expected to be adiabatically slow compared to the rearrangement of the two-center mean field of nucleons. In this reaction the nuclei are spherical and coupled-channels effects are expected to be insignificant (the first collective excited state ($3^{-}$) of $^{16}$O is at 6.1 MeV), making its theoretical description relatively simple. Furthermore, abundant experimental data\cite{Exp} exist for comparison to the model calculations.
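For reference, the Sommerfeld parameter entering the S-factor definition above takes its standard form (in Gaussian units),
\begin{equation}
\eta = \frac{Z_1 Z_2 e^2}{\hbar v},
\end{equation}
where $Z_1$ and $Z_2$ are the charge numbers of the colliding nuclei and $v$ is their asymptotic relative velocity. The factor $e^{2\pi \eta}$ removes the dominant s-wave Coulomb-penetration dependence from $\sigma_{fus}$, leaving a slowly varying quantity that is better suited for extrapolation to low energies.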
A basic microscopic model to describe the studied reaction is the two-center shell model (TCSM), a great concept introduced (in practice) in heavy-ion physics by the Frankfurt school\cite{TCSM}. We have used a new TCSM\cite{Alexis2} based on realistic Woods-Saxon (WS) potentials. The parameters of the asymptotic WS potentials including the spin-orbit term reproduce the experimental single-particle energy levels around the Fermi surface of $^{16}$O\cite{Alexis2}, whereas for $^{32}$S the parameters of the global WS potential by Soloviev\cite{Soloviev} are used, its depth being adjusted to reproduce the experimental single-particle separation energies\cite{Audi}. To describe fusion, the potential parameters (including those of the Coulomb potential for protons) have to be interpolated between their values for the separated nuclei and the compound nucleus. The parameters can be correlated\cite{Alexis2} by conserving the volume enclosed by a certain equipotential surface of the two-center potential for all separations $R$ between the nuclei. \section{Calculations and discussion} The adiabatic collective potential energy surface $V(R)$ is obtained with Strutinsky's macroscopic-microscopic method, whilst the radial dependent collective mass parameter $M(R)$ is calculated with the cranking mass formula\cite{Inglis}. For simplicity, the pairing contribution to the collective potential and radial mass is neglected. The rotational moment of inertia of the dinuclear system is defined as the product of the cranking mass and the square of the internuclear distance. The macroscopic part of the potential results from the finite-range liquid drop model\cite{Moeller1981} and the nuclear shapes of the TCSM\cite{Alexis2}. The microscopic shell corrections to the potential are calculated with a novel method\cite{LanczosDiaz}.
The TCSM is used to calculate the neutron and proton energy levels\cite{Alexis1} $E_i$ as a function of the separation $R$ between the nuclei along with the radial coupling\cite{Alexis2} between these levels that appears in the numerator of the cranking mass expression, \begin{equation} M(R) = 2 \hbar^2 \sum_{i=1}^{A} \sum_{j > A} \frac{|\langle j| \partial / \partial R |i \rangle|^2} {E_j - E_i}. \label{eq1} \end{equation} \begin{figure} \begin{tabular}{cc} \includegraphics[width=6.2cm,angle=0]{Fig1a.eps} & \includegraphics[width=6.2cm,angle=0]{Fig1b.eps} \end{tabular} \caption{(a) The s-wave collective potential energy as a function of the separation between the nuclei for $^{16}$O + $^{16}$O. The arrow indicates the geometrical contact separation. (b) The radial dependent collective mass parameter (in units of nucleon mass $m_0$). See text for further details.} \label{POTINERT} \end{figure} Figure \ref{POTINERT}a shows the s-wave molecular adiabatic potential (thick solid curve) as a function of the internuclear distance, which is normalized with the experimental $Q$-value of the reaction ($Q=16.54$ MeV). The sequence of nuclear shapes related to this potential\cite{Alexis2} is also presented. For comparison, we show the Krappe-Nix-Sierk (KNS) potential\cite{BW} (thin solid curve) and the empirical Broglia-Winther (BW) potential\cite{BW} (dotted curve). Effects of the neck between the interacting nuclei, before they reach the geometrical contact separation (arrow), are not incorporated into the KNS potential. The concept of nuclear shapes is not embedded in the BW potential, which tends to be similar to the KNS potential. Comparing the KNS potential to the molecular adiabatic potential, we note that the neck formation substantially decreases the potential energy after passing the barrier radius ($R_b = 8.4$ fm). It will be shown that the inclusion of neck effects is crucial to successfully explain the available S-factor data\cite{Exp} for the studied reaction.
Figure \ref{POTINERT}b shows the radial dependent cranking mass (thick solid curve), whilst the asymptotic reduced mass is indicated by the dotted line. Just after passing the barrier radius, when the neck between the nuclei starts to develop, the cranking mass increases slightly compared to the reduced mass, and pronounced peaks appear inside the geometrical contact separation. For the studied reaction, these peaks are mainly caused by the strong change of the single-particle wave functions during the rearrangement of the shell structure of the asymptotic nuclei into the shell structure of the compound system. In general, the peaks could also be due to avoided crossings\cite{Alexis2} between the adiabatic molecular single-particle states\cite{Alexis1}, which can make the denominator of the cranking mass expression (\ref{eq1}) very small. It is important to stress that the amplitude of these peaks may be reduced by (i) the pairing correlation that spreads out the single-particle occupation numbers around the Fermi surface, and (ii) the diabatic single-particle motion\cite{Alexis2} at avoided crossings, which can change those populations. For the strongest peak in Fig. \ref{POTINERT}b, which is located very close to the internal turning point for a wide range of sub-barrier energies, only aspect (i) may be relevant, as the radial velocity of the nuclei is rather small there, suppressing the Landau-Zener transitions. For compact shapes, aspect (ii) may lead to intrinsic excitation of the composite system, but this is not important here for the calculation of the fusion cross section. Fusion is determined by the tunneling probability of the external Coulomb barrier, as explained below. \begin{figure} \begin{center} \includegraphics[width=8.0cm]{Fig2.eps} \end{center} \caption{(Color online) The S-factor as a function of the center-of-mass energy for $^{16}$O + $^{16}$O. The curves are theoretical calculations, whilst the symbols refer to experimental data.
The arrow indicates the Coulomb barrier of the molecular potential of Fig.~\ref{POTINERT}a. See text for further details.} \label{SFACT} \end{figure} Having the adiabatic potential and the adiabatic mass parameter, the radial Schr\"odinger equation is solved exactly with the modified Numerov method and the ingoing wave boundary condition imposed inside (about 2 fm) the capture barrier. The fusion cross section $\sigma_{fus}$ is calculated taking into account the identity of the interacting nuclei and the parity of the wave function for the relative motion (only even partial waves $L$ are included here), i.e., $\sigma_{fus} = \pi \hbar^2/(2 \mu E) \sum_{L} (2L+1)(1+\delta_{1,2})P_L$, where $\mu$ is the asymptotic reduced mass, $E$ is the incident energy in the total center-of-mass reference frame and $P_L$ is the partial tunneling probability. Figure \ref{SFACT} shows the S-factor as a function of the incident energy in the center-of-mass reference frame. For a better presentation, the experimental data of each set\cite{Exp} are binned into $\Delta E = 0.5$ MeV energy intervals. In this figure the following features can be observed: \begin{romanlist}[(ii)] \item the molecular adiabatic potential of Fig. \ref{POTINERT}a (thick and thin solid curves) correctly explains the measured data, in contrast to either the results obtained with the BW potential (dotted curve) or the very recent calculations within the Fermionic Molecular Dynamics (FMD) approach\cite{Neff} (dashed curve). Since the width of the barrier decreases for the molecular adiabatic potential of Fig. \ref{POTINERT}a, it produces larger fusion cross sections than those arising from the shallower KNS and BW potentials. \item the use of the cranking mass parameter of Fig. \ref{POTINERT}b notably affects the low-energy S-factor, which is revealed by the comparison between the thick and thin solid curves. It starts reducing the S-factor in the 7-8 MeV energy region and produces a local maximum around 4.5 MeV.
At the lowest incident energies (below 4 MeV) the S-factor is suppressed by a factor of five compared to that arising from a constant reduced mass. The peak in the S-factor is due to an increase of the fusion cross section, which is caused by the resonant behavior of the collective radial wave function\cite{Alexis1}. \end{romanlist} \section{Concluding remarks and outlook} The adiabatic molecular picture explains very well the available experimental data for the S-factor of $^{16}$O + $^{16}$O. The collective radial cranking mass causes a relevant peak in the S-factor excitation function, although the pairing correlation (neglected in the calculation) may somewhat reduce the magnitude of this bump. It is highly desirable to have fusion cross sections, measured around 4-5 MeV, to verify the existence of this resonant structure in the S-factor. The collective mass surface is very important for the reaction dynamics, and its effect on heavy-ion fusion should be investigated systematically. This can be significant for a better understanding of reactions forming superheavy elements. Work is in progress toward understanding the molecular resonance structures in the S-factor excitation function of the challenging system $^{12}$C + $^{12}$C. \section*{Acknowledgements} This work was supported by the Joint Institute for Nuclear Astrophysics (JINA) through grant NSF PHY 0216783, and by an ARC Discovery grant.
\section{Introduction} With the help of Galactic surveys (e.g., \citealp{1989ApJS...70..181A}, \citealp{1992ApJS...81..715S}), our understanding of the infrared (IR) energetics, morphology and evolution of supernova remnants (SNRs) has increased substantially in the past decades. However, even at \emph{IRAS} (Infrared Astronomical Satellite) resolution, confusion with other IR sources in the Galactic plane is a limiting factor that can prevent a clear picture from emerging. Such confusion, for example, can make the correct assessment of ejecta dust masses for Type II SNRs difficult, a key factor for interpreting high-redshift dust in the Universe (\citealp{2010ApJ...719.1553S}, and references therein). Infrared emission from SNRs can also provide insight into the radio-IR correlation of external galaxies \citep{1985ApJ...298L...7H}. Additionally, the study of dust re-emission is relevant for determining the cooling rate of SNRs, an important quantity that shapes their evolution (\citealp{1987ApJ...322..812D}). In this paper, we search for counterparts of SNRs in the mid-infrared (at 24 and 70 $\micron$) using the higher-resolution \emph{Spitzer} data obtained with the Multiband Imaging Photometer (MIPS), in the coordinate ranges $10^\circ< l <65^\circ$ and $285^\circ<l<350^\circ$, $| b |<1^\circ$, complementing the work of \cite{2006AJ....131.1479R}. We also briefly explore the relation of SNR IR fluxes with other wavelength ranges, such as the radio and the X-ray. The structure of this paper is as follows: In \S\ref{presurv}, we review previous Galactic infrared surveys and their detection rates. In \S\ref{emission}, we discuss SNR emission mechanisms which can contribute in the mid-infrared regime. In \S\ref{sec:data}, we describe the mid-infrared, radio and X-ray data used in this work. In \S\ref{detec}, we report the MIPS detections. A discussion of SNR identifications, derived color ratios, and radio- and X-ray-to-infrared ratios follows in \S\ref{discussion}.
Finally, in \S\ref{con}, we summarize our findings. Details on each detected SNR can be found in the appendix. \subsection{Previous Galactic Infrared Surveys \label{presurv}} Using the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire, GLIMPSE (\citealp{2003PASP..115..953B}; \citealp{2009PASP..121..213C}), \cite{2006AJ....131.1479R} searched the Galactic Plane for infrared counterparts to known SNRs. Out of 95 objects, 18 were clearly detected, a detection rate of about 20\%. Previous searches were conducted using the \emph{IRAS} all-sky survey. Within the GLIMPSE surveyed area, \cite{1989ApJS...70..181A} obtained a detection rate of 17\% and \cite{1992ApJS...81..715S} 18\% (\citealp{2006AJ....131.1479R}). In the present study, we identify 39 out of 121, i.e., a detection rate of about 32\%. \subsection{SNR infrared emission mechanisms \label{emission}} The IR part of the spectral energy distribution (SED) of SNRs can result from multiple sources: dust (either stochastically or thermally heated), atomic/molecular line emission, PAH (polycyclic aromatic hydrocarbon) emission, and even synchrotron emission. In fact, the amount of the latter emission can vary substantially and is dependent on the type of remnant. In plerion remnants such as the Crab Nebula \citep{1992Natur.358..654S}, it accounts for about 90\% of the mid-infrared flux density (at 25 $\micron$). In contrast, about 2\% of the mid-infrared flux density (at around 25 $\micron$) in Cas A is due to synchrotron emission \citep{2010ApJ...719.1553S}. Dust emission is expected to be a substantial component of the IR emission of SNRs since their spectra are well fitted by one or more thermal dust populations, as revealed by previous infrared surveys (IRAS; \citealp{1989ApJS...70..181A}, \citealp{1992ApJS...81..715S}), or by stochastic heating of the grains, as proposed by \cite{2004ApJS..154..290H} to explain the mid-infrared emission from Cas A.
The dust is heated by charged particles in hot plasma generated by shocks (e.g., \citealp{1981ApJ...248..138D}, \citealp{1987ApJ...322..812D}, \citealp{1992ARA&A..30...11D}). Direct evidence for the shock-heated plasma is provided by X-ray observations in the continuum, and X-ray lines are a good diagnostic of high-energy interactions and abundances (\citealp{2004NuPhS.132...21V}). While, for the majority of SNRs, the observed dust emission is thought to originate from the interaction of the shockwave with the surrounding interstellar medium (ISM; e.g., \citealp{2001A&A...373..281D}, \citealp{2006ApJ...642L.141B}, \citealp{2006ApJ...652L..33W}, \citealp{2007ApJ...662..998B}), evidence of SNRs with ejecta dust is scarce (e.g., \citealp{1999ApJ...521..234A}) even with the help of higher-resolution IR telescopes such as \emph{ISO} and \emph{Spitzer}. However, observations with the \emph{Spitzer Infrared Spectrograph} (IRS) \citep{2008ApJ...673..271R} of the young remnant Cas A have pointed to the presence of a large hot dust continuum peaking at 21 $\micron$, from ejecta dust formed through condensation of freshly synthesized heavy elements. This component has temperatures ranging from 60 to 120 K. Furthermore, recent far-infrared and sub-millimeter work (\citealp{2010ApJ...719.1553S}) on the same remnant also suggested the existence of a `tepid' ($\sim 35$ K) central dust component in the ejected material (see also \citealp{2010A&A...518L.138B}). Observational evidence for the importance of IR lines has been provided by multiple SNR spectroscopic studies which identified a plethora of lines from elements such as Fe and O (e.g., \citealp{1999A&A...343..943O}; \citealp{2002ApJ...564..302R}). Much of the infrared line emission seen in SNRs results from the interaction of the blastwave with the surrounding environment where they are born as the shocked gas cools down.
For example, IC 443 is encountering an atomic region at the northeast side and a molecular cloud at the southern border (e.g., \citealp{2001ApJ...547..885R}). Spectral observations by \cite{1999A&A...341L..75O} on the north of the remnant revealed that most of the emission at 12 and 25 $\micron$, previously surveyed with IRAS, was in fact due to ionized line emission (e.g., [Fe II]) with only a small contribution from dust. Because optical spectra also showed strong collisionally excited fine structure atomic lines (\citealp{1980ApJ...242.1023F}), the infrared line emission is not unexpected. Likewise, for the `optically bright regions' in N49, an old remnant in the Large Magellanic Cloud (LMC), \cite{2006AJ....132.1877W} found that IR line emission can be a substantial fraction of the total IR flux, up to 80\% at 24 $\micron$. On the south of IC 443, however, the spectrum is dominated by the H$_2$ pure rotational lines S(2) through S(7) (\citealp{1999A&A...348..945C}), once again with a faint dust continuum. Moreover, \cite{2006ApJ...649..258T} found a good agreement between the soft X-ray emitting plasma and the radio/optical structure on the northeastern part of the remnant where the plasma density is highest. Such energetic encounters change the morphology of the neighboring ISM and of the remnant itself, thereby creating prominent emission lines (e.g., molecular hydrogen) which can be detected in the infrared. OH masers (in particular, the 1720 MHz line) are also good tracers of the remnant encounter with dense molecular regions, with 10\% of Galactic SNRs showing associated masers (\citealp{2002Sci...296.2350W} and references therein). The existence of infrared emission associated with PAHs in SNRs was first observed in an LMC remnant by \cite{2006ApJ...653..267T}. In addition, evidence for such emission in some Galactic SNRs has also been shown through near-infrared color-color ratios by \cite{2006AJ....131.1479R}.
This is likely excitation of PAHs from the ISM, rather than PAHs from supernova dust. \section{Data Used}\label{sec:data} \subsection{Infrared Data} Infrared data used in this paper originate from two Spitzer surveys: GLIMPSE using the Infrared Array Camera (IRAC) and MIPSGAL (survey of the inner Galactic plane using MIPS, \citealp{2009PASP..121...76C}). Both surveys cover the Galactic coordinates $10^\circ< l <65^\circ$ and $285^\circ<l<350^\circ$, $| b |<1^\circ$. \subsubsection{MIPSGAL} The MIPS wavelength coverage is approximately 20 to 30 $\micron$ in channel 1 (24 $\micron$) and 50 to 100 $\micron$ in channel 2 (70 $\micron$), thus covering a potentially rich set of diagnostic lines and dust emission features (see Fig.~\ref{mipsfilters}). In the 24 $\micron$ bandpass, the classic [O IV] 25.9 and [Fe II] 26.0 $\micron$ tracers can be quite strong (e.g., \citealp{1999A&A...341L..75O}; \citealp{2009eimw.confE..46N}), and likewise [O~I] 63 and [O~III] 88 $\micron$ (e.g., \citealp{2001ApJ...547..885R}) within the 70 $\micron$ bandpass. Other standard shock excitation tracers, like [Ne~II] 12.81 or [Si II] 34.8 $\micron$, are unfortunately not included within the IRAC or MIPS bandpasses. Nevertheless, when dust is present, it is the dominant component of the IR radiation at 24 and 70 $\micron$ (see Fig.~\ref{mipsfilters}). Although the 24 $\micron$ data delivered by the Spitzer Science Center in their Post-BCD products are of very high quality, the MIPSGAL pipeline was designed to enhance the data a step further. The details are described in \cite{2008PASP..120.1028M}, but briefly, the pipeline: (1) carefully deals with saturated and non-saturated data, a key issue in the Galactic Plane; (2) removes a series of artifacts, the most common being column-to-column jailbar striping, plus (3) short-duration afterimage latencies, (4) long-duration responsivity variations, and (5) background mismatches.
The MIPSGAL observations were designed to be a 24 $\micron$ survey, as the high background levels of the Galactic plane at longer wavelengths saturated the 160 $\micron$ detectors and placed the 70 $\micron$ detectors at a fluence level that was not well characterized. Since MIPS takes data simultaneously at 70 and 160 $\micron$, and the 70 $\micron$ data look reasonable at first sight, a concerted effort has been invested by the MIPSGAL team to improve their quality. The specific steps of the 70 $\micron$ pipeline are described in {Paladini et al.\ (\it{in preparation})}, but the main issue is to mitigate the effect of the non-linear response of the Ge:Ga photoconducting detectors. Their standard behavior leads to striping of the images, sudden jumps in brightness and a large uncertainty in the calibration for bright sources and regions. The MIPSGAL 70 $\micron$ pipeline reduces these effects and decreases the calibration uncertainties to a level of $\sim$15$\%$. This uncertainty on the absolute calibration of the extended emission is consistent with that determined by the MIPS team in their calibration at 24 and 70 $\micron$ (\citealp{2007PASP..119..994E}; \citealp{2007PASP..119.1019G}). The adopted uncertainty in calibration for MIPS 24 $\micron$ is 10\%. For the 70 $\micron$ fluxes, the main errors come from uncertainties in the background estimation and calibration errors (we adopt a conservative value of 20\%). Moreover, we have used the 24 $\micron$ point-source-subtracted data. For details on the point source removal procedure, see {Shenoy et al.\ (\it{in preparation})}. Point source sensitivities are 2 and 75 mJy (3$\sigma$) at 24 and 70 $\micron$, respectively. The resolution of the 24 $\micron$ data is $6^{\prime\prime}$, while for 70 $\micron$ it is $18^{\prime\prime}$.
\subsubsection{GLIMPSE} \label{glimpse} At the IRAC bands (\citealp{2006AJ....131.1479R}, in particular Fig.\ 2) one expects to find a wealth of atomic lines, including Br$\alpha$ (4.05 $\micron$), Pf$\beta$ (4.65 $\micron$), [Fe~II] (5.34 $\micron$), and [Ar~II] (8.99 $\micron$), among others, plus the molecular S(13) through S(4) H${_2}$ rotational lines, and CO transitions at 4-5 $\micron$. This is very much the case in IC 443 (\citealp{2008ApJ...678..974N}; \citealp{1999A&A...348..945C}). We use primarily the 8 $\micron$ images from IRAC. Occasionally, other channels are also used in order to constrain the possible emission mechanisms. The 4.5 $\micron$ channel provides a valuable diagnostic of shocked molecular gas (e.g., \citealp{2006AJ....131.1479R}). At these wavelengths, our measured flux represents an upper limit since star contamination is present. We adopt a conservative value of 10\% for the calibration error given the vagaries in color correction plus measurement errors. The IRAC sampling is $1.2^{\prime\prime}$ but the 8 $\micron$ resolution is about $2^{\prime\prime}$. \subsection{Ancillary Data}\label{otherdata} Most of the supernova remnants in Green's catalog \citep{2009BASI...37...45G} were discovered through radio observations. By comparing the emission in at least two different radio wavelengths, one can calculate the spectral index and thus distinguish the kind of emission produced (thermal or non-thermal). In the case of SNRs, we expect to find a spectral index which is characteristic of synchrotron radiation. In contrast, \ion{H}{2} regions show a flat spectrum (when optically thin) which is indicative of free-free emission. The SNR radio emission is associated with regions where shock-induced particle acceleration takes place within the source. 
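As a minimal numerical illustration of this spectral-index test (a sketch with hypothetical flux densities, not values from the catalog), using the convention $S_{\nu}\propto\nu^{-\alpha}$ adopted throughout this paper:

```python
import math

def spectral_index(s1_jy, nu1_ghz, s2_jy, nu2_ghz):
    """Radio spectral index alpha, with the convention S_nu ~ nu**(-alpha)."""
    return -math.log(s2_jy / s1_jy) / math.log(nu2_ghz / nu1_ghz)

# Hypothetical measurements: synchrotron spectra fall with frequency
# (alpha ~ 0.3-1), while optically thin free-free emission is nearly flat.
alpha_snr = spectral_index(10.0, 0.843, 6.0, 1.4)   # steep: SNR-like
alpha_hii = spectral_index(10.0, 0.843, 9.9, 1.4)   # flat: H II region-like
```

A source whose flux density halves for each doubling of frequency has $\alpha=1$, while an optically thin \ion{H}{2} region yields $\alpha\approx0$.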
In addition to data in Green's catalog, the following radio data sets are used for comparison with the infrared images: VLA (Very Large Array) archival data at 20 and 90 cm, available from the Multi-Array Galactic Plane Imaging Survey (MAGPIS; \citealp{2006AJ....131.2525H}) website\footnote[1]{http://third.ucllnl.org/gps/index.html}, VGPS (VLA Galactic Plane Survey; \citealp{2006AJ....132.1158S})\footnote[2]{http://www.ras.ucalgary.ca/VGPS/VGPS-data.html} at 21 cm and MOST (Molonglo Observatory Synthesis Telescope; \citealp{1996A&AS..118..329W})\footnote[3]{http://www.physics.usyd.edu.au/sifa/Main/MSC} at about 36 cm. Wide band (300-10000 eV) X-ray images from the \emph{Chandra} Supernova Remnant Catalog\footnote[4] {http://hea-www.harvard.edu/ChandraSNR/index.html} archive are also used for comparison. X-ray emission locates the shock-heated plasma, including the important reverse shock in the interior region. \section{Detections of SNRs}\label{detec} To describe the amount of (or lack of) infrared emission from SNRs in our sample, we use a similar classification scheme to the one adopted by \citet{2006AJ....131.1479R}. Detection levels are characterized as follows: (1) detected, (2) possible detection, (3) unlikely detection and (4) not detected (see Table \ref{table:level}). Level 1 detections have a good positional match between the infrared contours and those at other energies (radio or X-ray). Level 2 detections display infrared emission within the remnant region but confused with cirrus or nearby \ion{H}{2} regions. Level 3 detections show the presence of some infrared emission which does not seem to be correlated with the remnant, although source confusion prevents a clear assessment of the emission origin. Level 4 is characterized by the absence of infrared emission within or nearby the contours of the radio remnant. Out of a sample of 121\ possible radio remnants, 39\ are level 1 detections, a detection rate of 32\% compared to 18\% with GLIMPSE. 
A few examples of level 1 detections are shown in Fig.~\ref{good}. The MIPSGAL sensitivity at 24 $\micron$ matches very well that of IRAC at 8 $\micron$ (\citealp{2009PASP..121...76C}; \citealp{2003PASP..115..953B}). The fact that we can identify many more SNRs than with IRAC could potentially be due to an extinction effect (lower at 24 $\micron$ than at 8 $\micron$ by 40$\%$, \citealp{2003ARA&A..41..241D}). However, it is far more plausible that the difference simply reflects the details of how SNRs emit. In the case of SNRs interacting with the ISM where shocks modify the dust grain size distribution (e.g.,~\citealp{2010arXiv1004.0677G}), 24 $\micron$ continuum emission (due to Very Small Grains, VSGs) rises while at 8 $\micron$ the emission by stochastically heated PAHs is not as strong. We calculate flux densities at 8, 24, and 70 $\micron$ for SNRs which show obvious infrared counterparts (level 1 detections). The aperture size used for the flux derivations is, in some cases, different from the one listed at radio wavelengths (see Table \ref{table:1}). This is done in order to account for the different morphologies a remnant can have in various wavelength ranges. Furthermore, there are cases where the SNR is near other unrelated extended Galactic sources whose contamination can lead to an overestimate of the infrared flux (e.g., diffuse Galactic emission and \ion{H}{2} regions). To avoid such problems, we examined the MIPS images overlaid with contours from other wavelengths to help constrain the location and shape of the SNR infrared emission; however, such discrimination is not always well achieved. Another issue that arises, especially when dealing with Galactic plane images such as these, is the presence of unrelated infrared emission along the line of sight. This confusion can hinder the detection of SNRs as demonstrated by \cite{1989ApJS...70..181A} and \cite{1992ApJS...81..715S}. 
Flux densities of the detected remnants were obtained using either a circular or elliptical aperture with a background removed. The background was determined using the median value of the sky brightness around the remnant via two methods. We used an annulus around the source for cases where the external source contamination in the neighborhood is negligible. However, that method was often not feasible due to considerable overlap of sources (extended or point-like) with the periphery of the SNR. For those cases, the sky brightness was estimated with a user-selected region characteristic of the local background emission. In order to quantify the magnitude of background variations, we measured the surface brightness on several regions around one of the faintest 24 $\micron$ SNRs, G21.8-0.6. We used aperture sizes similar to the one used for the remnant and found that the standard deviations relative to the mean background value across the field for our three wavelengths are of the order of 12\% (8 $\micron$), 10\% (24 $\micron$) and 20\% (70 $\micron$). Another consideration for photometry measurements is point source contamination, especially at 8 $\micron$ (as discussed in \S~\ref{glimpse}). Finally, since we are dealing with spatially extended emission, we applied corrections to the 8 $\micron$ estimates to account for internal scattering in the detectors as instructed by the IRAC data handbook\footnote[5]{http://ssc.spitzer.caltech.edu/irac/iracinstrumenthandbook/IRAC-Instrument-Handbook.pdf}. The `infinite' radius aperture is most appropriate for the angular extent of the remnants; we multiplied the surface brightness measurements by 0.74. \section{Discussion} \label{discussion} \subsection{Comparison of detections with previous infrared surveys}\label{othersur} Out of the previous 18 detections (level 1) using GLIMPSE \citep{2006AJ....131.1479R}, we confirm 16 at the same level with MIPSGAL (Table~1). 
The remaining two remnants are confused with the background and are reported as possible detections in our analysis. Of the other 23 (level 1) infrared counterparts found in this work, 10 are not in the initial sample from Green's catalog used by \citet{2006AJ....131.1479R} and the remaining 13 are mostly bright X-ray remnants which seem to be brighter at 24 $\micron$. Regarding the SNRs identified by \cite{1989ApJS...70..181A}, all of their level 1 detections are also seen in MIPSGAL with the exception of G12.0-0.1 and G54.1+0.3 which are both level 3 in the MIPS data. Comparing with \cite{1992ApJS...81..715S}, six SNRs have been detected in common (level 1). They also reported a level 1 detection of 8 others, which we classify as level 2 or 3. Again, we observe that these cases are mostly regions where there is a large contamination by nearby or overlapping \ion{H}{2} regions, making a clear identification challenging even at the superior Spitzer resolution. \subsection{Lack of infrared signature} Given the strong ambient emission in the inner Galactic plane, it is not surprising that many SNRs that are either too big or too old, and hence have a low infrared surface brightness, do not have an identifiable infrared signature. Also, as noted by \cite{2006AJ....132.1877W}, the environs in which SNRs are encountered need to be sufficiently dense to promote collisional heating of the dust, thus making it distinguishable. Examples of non-detections in this survey are G18.1-0.1 and G299.6-0.5. \subsection{Color-color Diagrams} Table~\ref{table:1} shows flux densities (at 8, 24 and 70 $\micron$) and color ratios for the detected remnants. Figure \ref{colorplotflux} contains a logarithmic plot of [F$_{8}$/F$_{24}$] \emph{versus} [F$_{70}$/$F_{24}$] for which SNRs are detected as a whole (see last column of Table~\ref{table:1}). Two other SNRs are plotted: for Cas A, values are retrieved from \cite{2004ApJS..154..290H} and for IC443 from Noriega-Crespo et al.\ \emph{(in preparation)}. 
The data show a linear trend with a slope of $1.1\pm0.2$: remnants with a low [F$_{8}$/F$_{24}$] ratio statistically have a low [F$_{70}$/F$_{24}$] ratio. Furthermore, there seems to exist an age effect, with older remnants (such as IC443 and W44) showing higher [F$_{70}$/F$_{24}$] and [F$_{8}$/F$_{24}$]. The opposite seems to be more characteristic of younger remnants, such as Cas A, Kes 73 and G11.2-0.3. In some remnants, infrared emission is only detected in certain parts inside the radio/X-ray contours. Figure~\ref{colorplot} shows surface brightness ratios of these localized regions. To assess the mechanism of the infrared emission, we also calculate surface brightnesses for regions of SNRs known to be interacting with neighboring molecular regions (such as the BML region in 3C391), to show molecular emission lines (such as W49B), or to have ionic lines (such as 3C396). For this exercise, we used regions of some remnants for which the main emission mechanism had previously been identified by \cite{2006AJ....131.1479R} (see their Fig. 22). These are also identified in the SNR figures in the appendix. Remnants with colors characteristic of ionic shocks seem to occupy the lower left part of the [I$_{70}$/I$_{24}$] versus [I$_{8}$/I$_{24}$] diagram, while remnants whose colors are consistent with molecular shocks and photodissociation regions are found in the upper right part. However, given the small number of examples and the fact that all three passbands (8, 24 and 70 $\micron$) contain dust emission and lines, we suggest that the [I$_{70}$/I$_{24}$] and [I$_{8}$/I$_{24}$] color ratios are not a completely secure method of distinguishing between different emission mechanisms. We plot some other color ratios for comparison. The locus for pure synchrotron emission corresponds to spectral indices of 0.3 to 1\footnote{$\alpha$ is the spectral index and the radio flux density is defined as $S{_{\nu}}\propto\nu^{-\alpha}$, where $\nu$ is the frequency.}. 
For the diffuse ISM, we used the color ratios obtained by \cite{2010ApJ...724L..44C} for two regions (with different abundances of very small grains) in the Galactic plane at a longitude of approximately $59^\circ$. We have also included color ratios for evolved \ion{H}{2} regions (found near Galactic longitudes of $35^\circ$; {Paladini et al.\ \emph{(in preparation)}}). Are the color ratios [F$_{70}$/$F_{24}$] and [F$_{8}$/F$_{24}$] good discriminators between such very different regions? Based on Figure \ref{colorplot}, the plotted colors of individual evolved \ion{H}{2} regions often seem indistinguishable from SNRs using Spitzer colors. However, the overall trend for SNRs is displaced from the \ion{H}{2} regions and covers a broader range of colors. This is similar to what \cite{1989ApJS...70..181A} found for older SNRs and compact \ion{H}{2} regions using IRAS colors. Although line contamination is significant for radiative remnants, for younger ones the majority of the IR emission should be due to dust grains. For instance, for RCW 103 (age approximately 2000 yr; \citealp{1997PASP..109..990C}) we calculate the ratio of line emission (using ISO data) versus dust continuum within MIPS filters. We find that line emission contributes about 6\% and 3\% of the observed flux at 24 and 70 $\micron$, respectively. If emission from the remnants comes primarily from shocked hot dust, then we can derive the dust temperature just by assuming a simple modified blackbody emission. If dust has a thermal spectral energy distribution \begin{equation} S{_\nu}\propto~B{_\nu}(T)~\nu^{\beta} \, , \end{equation} where $\beta$ is the dust emissivity index (which depends on the dust composition, $\beta\sim1$ to 2), then flux ratios imply certain temperatures. Figure \ref{colorplotflux} shows temperatures based on the [F$_{8}$/$F_{24}$] and [F$_{24}$/$F_{70}$] ratios. 
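The conversion from a band ratio to a color temperature can be sketched as follows (a minimal bisection inversion of the modified blackbody ratio; the 24 and 70 $\micron$ bands are treated here as monochromatic, ignoring the filter bandpasses and color corrections):

```python
import math

H, K, C = 6.626e-34, 1.381e-23, 2.998e8  # SI: Planck, Boltzmann, speed of light

def planck_nu(nu, temp):
    """Planck function B_nu(T); the overall scale cancels in flux ratios."""
    return nu**3 / (math.exp(H * nu / (K * temp)) - 1.0)

def color_temperature(ratio_24_70, beta=2.0, t_lo=5.0, t_hi=1000.0):
    """Solve F24/F70 = (nu24/nu70)**beta * B(nu24,T)/B(nu70,T) for T by bisection."""
    nu24, nu70 = C / 24e-6, C / 70e-6
    def model(temp):
        return (nu24 / nu70) ** beta * planck_nu(nu24, temp) / planck_nu(nu70, temp)
    for _ in range(100):  # model(T) rises monotonically with T
        t_mid = 0.5 * (t_lo + t_hi)
        if model(t_mid) < ratio_24_70:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)
```

For [F$_{24}$/F$_{70}$] ratios of order 0.05 to 0.4 and $\beta=2$, this inversion yields temperatures of roughly 45 to 65 K, broadly consistent with the range quoted in the text.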
The latter flux ratio yields dust temperatures for our SNR sample ranging between 45 and 70 K for $\beta=2$ and from 50 to 85 K for $\beta=1$. This is a very crude estimate of the dust temperature given that we are just considering one population of dust grains in temperature equilibrium when it is far more likely that there are several populations plus non-equilibrium emission. Applying the same simple dust model to the [F$_{8}$/$F_{24}$] ratio, we obtain temperature values higher than 145 K (for $\beta=2$) with younger SNRs having color temperatures lower than older remnants. This trend is similar to what was obtained by \cite{1989ApJS...70..181A} using color temperatures based on the IRAS 12 to 25 $\micron$ ratio. The infrared colors are not well explained by a single equilibrium dust temperature and atomic and molecular line emission as well as PAHs might significantly contribute to the infrared emission for SNRs. However, assuming that the infrared emission at 24 and 70 $\micron$ is cospatial, entirely due to dust, and well fitted by a single temperature modified blackbody, then SNR dust masses are given by the following equation: \begin{equation} M{_{\mathrm{dust}}} = \frac{d{^2}~F{_{\mathrm{\nu}}}} {{\kappa}{_{\mathrm{\nu}}}~B{_{\mathrm{\nu}}(T{_{\mathrm{dust}}})}} \, , \end{equation} where $d$ is the distance to each SNR, $F{_{\nu}}$ is the flux density and $\kappa{_{\nu}}$ is the dust mass absorption coefficient. For the dust mass absorption coefficient, we use the diffuse ISM model by \cite{2001ApJ...554..778L} which consists of a mixture of amorphous silicate and carbonaceous grains. Distances are retrieved from Green's catalog. Results are reported in Table~\ref{mass}. Note that these derived dust masses, based on the 24 and 70 $\micron$ fluxes from MIPS, are overestimated given the probable contamination of these fluxes by line emission which can be substantial in radiative remnants. 
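Equation (2) can be evaluated directly; the sketch below assumes SI units throughout and takes the dust opacity $\kappa_{\nu}$ as an input parameter, since the Li \& Draine (2001) opacities used in the text are not reproduced here:

```python
import math

H, K, C = 6.626e-34, 1.381e-23, 2.998e8   # SI constants
PC_M = 3.086e16                           # metres per parsec
M_SUN = 1.989e30                          # kg
JY = 1.0e-26                              # W m^-2 Hz^-1

def planck_bnu(nu, temp):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / (math.exp(H * nu / (K * temp)) - 1.0)

def dust_mass_msun(f_nu_jy, d_kpc, t_dust, kappa_m2kg, wavelength_m=70e-6):
    """M_dust = d^2 F_nu / (kappa_nu B_nu(T_dust)), Eq. (2), in solar masses.

    kappa_m2kg is the dust mass absorption coefficient at the chosen
    wavelength; a realistic value must come from a dust model.
    """
    nu = C / wavelength_m
    d_m = d_kpc * 1.0e3 * PC_M
    return d_m**2 * f_nu_jy * JY / (kappa_m2kg * planck_bnu(nu, t_dust)) / M_SUN
```

Note the quadratic dependence on the adopted distance, which propagates directly into the mass estimates of Table~\ref{mass}.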
Even in the case where all of the flux in these bands is due to dust, mass estimates are based on a single color temperature (24/70) which can be reasonable for younger remnants but not ideal for older remnants given that their IR SED generally requires more than one dust population (\citealp{1989ApJS...70..181A}; \citealp{1992ApJS...81..715S}). \subsection{Infrared to radio (1.4 GHz) ratio} According to \cite{1987Natur.327..211H}, the comparison between mid-infrared (60 $\micron$ from IRAS) and radio continuum (11 cm) emission is a good diagnostic for distinguishing thermal from non-thermal radio emitters (see also \citealp{1987A&AS...71...63F}; \citealp{1989MNRAS.237..381B}). \ion{H}{2} regions are shown to have infrared to radio ratios of the order of $\geq 500$ while ratios for SNRs are thought to be lower than 20. Moreover, \cite{1996A&AS..118..329W} using MOST found that SNRs have a ratio of infrared (60 $\micron$) to radio (843 MHz or 36 cm) of $\leq 50$ while \ion{H}{2} regions have again ratios of the order of 500 or more. We use the 1 GHz flux densities provided in Green's catalog \citep{2009BASI...37...45G}. Those values were converted to 1.4 GHz (21 cm) flux densities using the spectral indices quoted in the same catalog. We compare those to the 24 and 70 $\micron$ emission from MIPSGAL. Figure~\ref{radio_infrared} shows the spread in the ratio of the infrared to radio for the detected SNRs. Note that these ratios are often defined by $q{_{\mathrm{IR\lambda}}}=\log (F{_{\mathrm{IR\lambda}}}/F{_{\mathrm{21cm}}})$ where F${_\mathrm{IR\lambda}}$ and F$_\mathrm{21cm}$ are the flux densities (in Jy) at specific mid-infrared wavelengths and in the radio. There seems to be a group with ratios (or high $q{_{\mathrm{24}}}$ and $q{_{\mathrm{70}}}$) similar to those of \ion{H}{2} regions. 
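The 1 GHz to 1.4 GHz conversion and the monochromatic $q$ parameters just described can be sketched as follows (a minimal helper, again assuming $S_{\nu}\propto\nu^{-\alpha}$; the example values are hypothetical):

```python
import math

def flux_at_1p4_ghz(s_1ghz_jy, alpha):
    """Extrapolate a 1 GHz catalog flux density to 1.4 GHz, S_nu ~ nu**(-alpha)."""
    return s_1ghz_jy * 1.4 ** (-alpha)

def q_lambda(f_ir_jy, f_21cm_jy):
    """Monochromatic q_IR = log10(F_IR / F_21cm), both flux densities in Jy."""
    return math.log10(f_ir_jy / f_21cm_jy)

# Hypothetical remnant: 10 Jy at 1 GHz with alpha = 0.5
f_radio = flux_at_1p4_ghz(10.0, 0.5)
q70 = q_lambda(500.0, f_radio)
```

For reference, the thresholds quoted above correspond to $q\gtrsim\log_{10}500\approx2.7$ for \ion{H}{2}-like sources and $q\lesssim\log_{10}50\approx1.7$ for SNR-like sources.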
In particular, we suggest that the morphology of the remnant G23.6+0.3 (and possibly G14.3+0.1) resembles more closely an \ion{H}{2} region, and so its classification as a SNR should be re-considered. See also its infrared colors compared to \ion{H}{2} regions (Figs.~\ref{colorplotflux} and \ref{colorplot}). The images in Figures \ref{snrG236-03} and \ref{snrG14301} clearly show a composite infrared emission, with the 24 $\micron$ (probing hot and small particles) matching the radio, while the 8 and 70 $\micron$ emission do not spatially coincide with the 24 $\micron$. In particular, the 8 $\micron$ appears as an outer layer which marks the transition between the ionized region and colder gas. The 70 $\micron$ emission is also seen along the same location as the 8 $\micron$, which is consistent with the premise that we are now probing a colder dust population. There is a strong empirical correlation between infrared and radio emission (at 1.4 GHz) found in spiral galaxies \citep{1985ApJ...298L...7H}. The radio emission in this range is thought to be mainly due to synchrotron emission with only 10\% attributed to thermal bremsstrahlung produced by \ion{H}{2} regions (\citealp{2009ApJ...706..482M} and references therein). In our Galaxy and other local star forming galaxies, the bulk of the mid and far-infrared arises from warm dust associated with star forming regions. In older systems, things are more complicated due to cold cirrus and evolved stars (e.g., AGBs). We look for a similar correlation using the infrared and radio emission from our SNR sample. We estimate the mid-infrared emission as the sum of the integrated emission in each MIPS passband (24 and 70 $\micron$) and obtain an integrated mid-infrared flux F${_{\mathrm{MIR}}}$ in units of W/m$^2$ (see \citealp{1985ApJ...298L...7H}). The F${_{\mathrm{MIR}}}$ is an approximation (underestimate) of the mid-infrared bolometric flux (given that the bandpasses of the 24 and 70 $\micron$ filters do not overlap). 
Figure~\ref{helou_82470} shows the correlation between infrared and 1.4 GHz non-thermal radio flux. The slope of the correlation is $1.10\pm0.13$ when combining all data (plus IC443). Again, two distinct populations seem to exist, an upper main trend and a lower one. Using only the lower one (i.e., objects which have 70 $\micron$ to 1.4 GHz ratios closer to \ion{H}{2} regions and high values of $q{_{\mathrm{24}}}$ and $q{_{\mathrm{70}}}$ in Fig.~\ref{radio_infrared}), we obtain a slope of $0.93\pm0.13$ while for the upper trend we get $0.96\pm0.09$. The correlation with the slope fixed to unity leads to the dimensionless parameter $q{_{\mathrm{MIR}}}$ (\citealp{1985ApJ...298L...7H}) which represents the ratio of mid-infrared to radio and is defined as \begin{equation} \begin{small} q{_{\mathrm{MIR}}}\equiv \mathrm{\log} \Big( \frac{F{_{\mathrm{MIR}}}}{3.75\times10^{12}~\mathrm{Hz}} \Big) - ~\mathrm{\log} \Big( \frac{F{_{\mathrm{1.4~GHz}}}} {\mathrm{W~m^{-2}~Hz^{-1}}} \Big) \, . \end{small} \label{qmir} \end{equation} Our results show that remnants from the lower trend population in Figure~\ref{helou_82470} have $q{_{\mathrm{MIR}}}$ larger than 2.06 (up to 2.60). The other group has $q{_{\mathrm{MIR}}}$ ranging from 0.06 to 1.41. Specific $q$ values for each remnant are displayed in Table~\ref{table:qsnr}. For reference, a previous study by \cite{2003ApJ...586..794B} reported a median value of $q{_{\mathrm{TIR}}}$ (i.e., the ratio of the total 8-1000 $\micron$ IR luminosity to the radio power) of $2.64$ for a sample of 162 galaxies. Note that this value is expected to be higher because it is based on the total IR emission. Table~\ref{table:qlambda} reports the average and dispersion of the monochromatic $q{_{\mathrm{8}}}$, $q{_{\mathrm{24}}}$ and $q{_{\mathrm{70}}}$ parameters for the two trends. 
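Equation (\ref{qmir}) translates directly into code; this sketch follows the Helou et al. (1985) normalization, in which dividing the integrated flux (W m$^{-2}$) by $3.75\times10^{12}$ Hz forms an effective flux density comparable to the radio one:

```python
import math

def q_mir(f_mir_wm2, f_1p4ghz_wm2hz):
    """q_MIR (Eq. 3): log10 of the mid-IR to 1.4 GHz ratio, with the mid-IR
    flux normalized by 3.75e12 Hz to form a flux density."""
    return math.log10(f_mir_wm2 / 3.75e12) - math.log10(f_1p4ghz_wm2hz)
```

As a worked example, a 1 Jy radio source ($10^{-26}$ W m$^{-2}$ Hz$^{-1}$) with $F_{\mathrm{MIR}}=3.75\times10^{-12}$ W m$^{-2}$ gives $q_{\mathrm{MIR}}=2$, near the boundary of the lower-trend population.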
For comparison, a study of extragalactic VLA radio sources by \cite{2004ApJS..154..147A} obtained $q{_{\mathrm{24}}}=0.84$ and $q{_{\mathrm{70}}}=2.15$ while \cite{2007MNRAS.376.1182B} found $q{_{\mathrm{24}}}=1.39$ for somewhat fainter galaxies. Later on, \cite{2008PASJ...60S.453S} using AKARI data for some LMC SNRs compared the ratio of 24 $\micron$ to radio fluxes; their correlation implies $q{_{\mathrm{24}}}=0$. If SNRs are the sole contributors of synchrotron emission in a star-forming galaxy, then by comparison with the above $q{_{\mathrm{24}}}$'s for galaxies they concluded that about 4 to 14\% of the 24 $\micron$ emission in galaxies is due to SNRs. By doing a similar exercise using the upper trend remnants, for which $q{_{\mathrm{24}}}=0.39$ (see Table~\ref{table:qlambda}), we find that 10 to 35\% of the 24 $\micron$ galactic emission seen would be due to the remnants. Likewise, at 70 $\micron$ that contribution would be 11\%. The rest would be due to dust heated in \ion{H}{2} and photo-dissociation regions, as well as diffuse emission. \subsection{High energy emission from SNRs} In this section, we explore the relationship between the infrared and X-ray energetics of SNRs. Shocks produce hot plasma whose thermal energy is transferred to the dust grains via collisions and then re-emitted in the infrared (\citealp{1973ApJ...184L.113O}; \citealp{1981ApJ...245..880D}; \citealp{1987ApJ...322..812D}). Heating up the dust grains takes energy from the X-ray emitting gas, thus cooling it; a good tool to measure this transfer of energy is the ratio of infrared to X-ray power, usually referred to as IRX \citep{1987ApJ...320L..27D}, which compares dust cooling (gas-grain collisions) with X-ray cooling (continuum and lines). The gas cooling function increases at lower plasma temperatures given that more lines become available due to recombination. 
On the other hand, at high temperatures of around $10^7$ K, most of the energy is released through the bremsstrahlung continuum (\citealp{1976ApJ...204..290R}). Assuming that line emission in the infrared is not energetically significant compared to the X-ray lines, then most of the line cooling must happen in the X-ray domain. This implies that if the observed powers in the infrared and X-ray are comparable then dust must be the essential contributor to the cooling in the infrared. Table\ \ref{table:xray} shows the values of IRX for a sample of SNRs. The total X-ray fluxes (with energy range from 0.3 to 10 keV) were retrieved from the \emph{Chandra} Supernova Remnant Catalog\footnote[4] {http://hea-www.harvard.edu/ChandraSNR/index.html}. Infrared flux densities (in Jy) were measured for regions approximately matching those in the X-ray. We obtain the IR flux by integrating under the SED using simple linear interpolation in log space plus $\nu F_{\nu}$ constant across the 24 $\micron$ passband. Also, note that the flux densities used in this analysis are not corrected for extinction. For our sample, IRXs range from about 1.6 for RCW 103 to 240 for W44. These data are presented as SEDs in Figure~\ref{figxray}, in the logarithmic form $\lambda F{_{\lambda}}$ vs. $\lambda$ which is convenient for assessing the energetics. The radio values are obtained from Green's catalog (\citealp{2009BASI...37...45G}). We also include Cas A and IC443 for comparison. Their infrared flux densities are taken from \cite{2004ApJS..154..290H} and Noriega-Crespo et al. (\emph{in preparation}) while the X-ray fluxes are from the Chandra Supernova Remnant Catalog and \cite{1987ApJ...320L..27D}, respectively. In Figure~\ref{figxray} the remnants are sorted according to increasing age (see Table~\ref{table:xray}). 
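The log-space interpolation used to integrate the infrared SED amounts to treating $F_{\nu}$ as a piecewise power law between tabulated points; a minimal sketch follows (the $\nu F_{\nu}=\mathrm{const}$ treatment of the 24 $\micron$ bandpass is omitted for brevity):

```python
import math

def segment_integral(nu1, f1, nu2, f2):
    """Integral of F_nu over [nu1, nu2], assuming F_nu follows a power law
    (a straight line in log-log space) between the two tabulated points."""
    a = math.log(f2 / f1) / math.log(nu2 / nu1)   # local spectral slope
    if abs(a + 1.0) < 1e-10:                      # F_nu ~ 1/nu: logarithmic case
        return f1 * nu1 * math.log(nu2 / nu1)
    return f1 * nu1 * ((nu2 / nu1) ** (a + 1.0) - 1.0) / (a + 1.0)

def ir_flux_wm2(sed_points):
    """Total flux (W m^-2) for SED points [(nu_Hz, F_nu_Jy), ...] sorted by
    increasing frequency."""
    total = sum(segment_integral(n1, f1, n2, f2)
                for (n1, f1), (n2, f2) in zip(sed_points, sed_points[1:]))
    return total * 1.0e-26   # Jy -> W m^-2 Hz^-1
```

The resulting flux, divided by the X-ray flux over the 0.3-10 keV band, gives the IRX values quoted above.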
Although the ages for some remnants are uncertain, this Figure shows that hot dust grain cooling (24 $\micron$) tends to be the most important contributor in early phases of the remnant, while in the later stage the warm dust traced by the 70 $\micron$ emission is the main contributor to plasma cooling. It also appears that older remnants tend to have greater IRXs. This can potentially be due to an increase in the dust-to-gas ratio and/or a change in the size distribution favoring smaller sizes. \section{Conclusion}\label{con} We have compiled a catalog of SNRs detected within the MIPSGAL survey at 24 and 70 $\micron$, with complementary measurements at 8 $\micron$ from the GLIMPSE survey. In order to better assess the nature of the detected infrared emission, we have compared it with radio and X-ray data. Our main findings are the following: \begin{itemize} \item{The detection rate of SNRs given the MIPSGAL sensitivity is 32$\%$ (39 out of the 121 candidates from Green's SNR catalog), higher than in any previous infrared survey.} \item{We find a linear trend (slope $=1.1\pm0.2$) in the logarithmic relationship between [F$_{8}$/F$_{24}$] versus [F$_{70}$/$F_{24}$]. If there is indeed an age effect, then the youngest SNRs will have the lowest [F$_{8}$/F$_{24}$] and [F$_{70}$/F$_{24}$] ratios.} \item{The [I$_{70}$/I$_{24}$] and [I$_{8}$/I$_{24}$] color ratios provide a method of distinguishing between different emission mechanisms. This is not completely secure and the color ratios of some SNRs overlap with those of \ion{H}{2} regions.} \item{Assuming a simple modified blackbody model (at 24 and 70 $\micron$), we retrieve SNR dust temperatures which range from 45 to 70 K for a dust emissivity index of $\beta=2$.} \item{Using the previous color temperature (T${_{24/70}}$), we find rough estimates of dust masses ranging from 0.02 to 2.5 M$_{\sun}$. 
Note that the dust masses obtained here may be overestimated given the possible contribution of line emission to the MIPS fluxes.} \item{We also compare infrared fluxes with their corresponding radio fluxes at 1.4 GHz and find that most of the remnants have ratios of 70 $\micron$ to 1.4 GHz characteristic of SNRs, although six (about 18\% of the detected sample) have ratios closer to those found for \ion{H}{2} regions.} \item{The slope of the logarithmic correlation between `total' mid-infrared flux (24 and 70 $\micron$) and the 1.4 GHz non-thermal radio flux is close to unity (1.10) as found for galaxies. $q{_{\mathrm{MIR}}}$ values were calculated for each fully detected remnant and they range between approximately 0.06 and 2.60.} \item{Whether the strong 24 $\micron$ emission is the result of line emission or hot dust, it is clear that there is a good morphological association of the 24 $\micron$ and X-ray features in bright X-ray remnants. The mechanism for the 24 $\micron$ emission for these remnants is most likely grains heated by collisions in the hot plasma. The morphology of this mid-infrared emission is also generally distinct from the other infrared wavelengths which implies that the emission at 8 and 70 $\micron$ has a different origin. } \item{We present SEDs (radio, infrared, X-ray) for a sample of remnants and show that the energy released in the infrared is comparable to the cooling in the X-ray range. Moreover, IRX seems to increase with age.} \end{itemize} \begin{acknowledgements} This work was based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory (JPL), California Institute of Technology under a contract with NASA. Support for this work was provided by NASA in part through an award issued by JPL/Caltech and by the Natural Sciences and Engineering Research Council of Canada. 
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This research is supported as part of the International Galactic Plane Survey through a Collaborative Research Opportunities grant from the Natural Sciences and Engineering Research Council of Canada. The MOST is operated by The University of Sydney with support from the Australian Research Council and the Science Foundation for Physics within The University of Sydney. This paper benefited from VLA archival data from The Multi-Array Galactic Plane Imaging Survey (MAGPIS) as well as Chandra archival data obtained from the online Chandra Supernova Remnant Catalog, which is maintained by Fred Seward (SAO). The authors would like to thank Crystal Brogan for the 20 and 90 cm high resolution VLA images of the SNR G39.2-0.3 and Dae-Sik Moon and Lu\'{\i}s Be\c{c}a for useful discussions. Finally, we acknowledge the referee for valuable comments and corrections which improved this manuscript. 
\end{acknowledgements} \begin{deluxetable}{lccccccclccccccc} \tabletypesize{\scriptsize} \tablecaption{Infrared detection classification levels of SNRs from Green's catalog \label{table:level}} \tablehead{ \colhead{l+b} & \colhead{Name} & \colhead{Size\tablenotemark{a}} & \colhead{IA\tablenotemark{b}} & \colhead{IS\tablenotemark{c}} & \colhead{GR\tablenotemark{d}} & \colhead{M\tablenotemark{e}}& & \colhead{l+b} & \colhead{Name} & \colhead{Size} & \colhead{IA} & \colhead{IS} & \colhead{GR} & \colhead{M}} \startdata 10.5-0.0 & & 6 & - & - & - & 1 && 54.4-0.3 & HC40 & $40$ & 3& 4 & $1$ & 1\\ 11.0-0.0 & & $9\times11$ & - & - & - & 3 && 55.0+0.3 & & $17$& - & -& $2$ & 2 \\ 11.1-0.7 & & $11\times7$ & - & - &- & 3 && 57.2+0.8 & 4C21.53 & $12?$ & 4& 4 & $4$ & 3 \\ 11.1+0.1 & & $12\times10$ & - & -& - & 1 && 59.5+0.1 & & $15$ & - & -& $3$ & 3 \\ 11.2-0.3 & & $4$ & 1 & 2 & 1 & 1 && 296.1-0.5 & & $37\times25$ & 2 & 4 & 3 & 3 \\ 11.4-0.1 & & $8$ & 4 & 4 &3 & 3 && 296.8-0.3 & & $20\times14$ & 4 & 4 & 3 & 1 \\ 11.8-0.2 & & 4 & - & -& - & 2 && 298.5-0.3 & & $5?$ & 4 & 4 & 2 & 2\\ 12.0-0.1 & & $7$& 1 & 1 & 3 & 3 && 298.6-0.0 & & $12\times9$ & 3 & 4 & 2 & 2\\ 12.2+0.3 & & $5\times6$ & - & - & - & 3 && 299.6-0.5 & & 13 & - & - &3 & 4\\ 12.5+0.2 & & $5\times6$ & - &-& - & 1 && 302.3+0.7 & & 17 & 3 & 4 & 3 & 2\\ 12.7-0.0 & & 6 & - & - &-& 3 && 304.6+0.1 & Kes 17 & 8 & 1 & 1 & 1 & 1\\ 12.8-0.0 & & 3 & - & - &-& 3 && 308.1-0.7 & & 13 & - & - & 4 & 3\\ 13.5+0.2 & & $5$ & - & -&3 & 3 && 308.8-0.1 & & $30\times20?$ & - & 4 & 2 & 2\\ 14.1+0.1 & & $6\times5$ & - &-& - & 1 && 309.2-0.6 & &$15\times12$ & 4 & 4 & 3 & 3 \\ 14.3+0.1 & & $5\times4$ & - & -&- & 1 && 309.8+0.0 & &$25\times19$ & 4 & 4 & 3 & 3 \\ 15.4+0.1 & & $14\times15$ & - &-& - & 3 && 310.6-0.3 & Kes 20B & 8 & - & - & 2 & 3\\ 15.9+0.2 & & $6$ & 4 & 4& 3 & 1 && 310.8-0.4 & Kes 20A & 12 & - & - & 1 & 1 \\ 16.0-0.5 & & $15\times10$ &-& - & - & 3 && 311.5-0.3 & & 5 & 4 & 4 & 1 & 1\\ 16.4-0.5 & & 13 & - & - & -&1 && 312.4-0.4 & & 38 & 
2 & 4 & 3 & 3\\ 16.7+0.1 & & $4$ & - & 4 & 3 & 2 && 315.4-0.3 & & $24\times13$ & 2 & 2 & 2 & 2\\ 17.0-0.0 & & 5 & - & - & - & 2 && 315.9-0.0 & & $25\times14$ & - & - & 3 & 3\\ 17.4-0.1 & & 6 & - & - & - & 3 && 316.3-0.0 & MSH 14-57 & $29\times14$ & 3 & 4 & 3 & 3\\ 18.1-0.1 & & 8 & - & - & -& 4 && 317.3-0.2 & & 11 & - &- & 3 & 2 \\ 18.6-0.2 & & 6 & - & - &-& 1 && 318.2+0.1 & & $40\times35$ &-& -& 3 & 2\\ 18.8+0.3 & Kes 67 & $14$ & 2 & 4 & 3 & 3 && 318.9+0.4 & & $30\times14$ & - & - & 3 & 2\\ 19.1+0.2 & & 27 & - & - & -& 3 && 321.9-0.3 & & $31\times23$ & 4 & 3 & 3 & 3\\ 20.0-0.2 & & $10$ & 4 & 4 & 3 & 3 && 322.5-0.1 & & 15 &-& -& 3 & 3 \\ 20.4+0.1 & & 8 & - & - & -& 1 && 323.5+0.1 & & 13 & 2 & 1 & 2 & 2\\ 21.0-0.4 & & $9\times7$ & -& - & - & 3 && 327.2-0.1 & & 5 & - & -& - & 3\\ 21.5-0.9 & & $4$ & 4 & 4 & 3 & 1 && 327.4+0.4 & Kes 27 & 21 & 3 & 4 & 2 & 2\\ 21.5-0.1 & & 5 & - & - & -& 1 && 328.4+0.2 & MSH15-57 & 5 & 4 & 4 & 4 & 3\\ 21.8-0.6 & Kes 69 & $20$ & 3 & 4 & 1 & 1 && 329.7+0.4 & & $40\times33$ & - & - & 2 & 2\\ 22.7-0.2 & & $26$ & 3 & 4 &1 & 2 && 332.0+0.2 & & 12 & 3 & 4 & 4 & 3\\ 23.3-0.3 & W41 & $27$ & 3 & 4& 2 & 2 && 332.4-0.4 & RCW 103 & 10 & 4 & 4 & 1 & 1\\ 23.6+0.3 & & $10$& 1 & 2 & 3 & 1 && 332.4+0.1 & Kes32 & 15 & 2& 3 & 2 & 2\\ 24.7-0.6 & & $15$ & 4 & 4 & 4 & 4 && 335.2+0.1 & &21 & 4 & 4 & 2 & 2\\ 24.7+0.6 & & $21$ & 4 & 4 & 3 & 2 && 336.7+0.5 & & $14\times10$ & 4 & 4 & 4 & 1\\ 27.4+0.0 & Kes 73 & $4$ & 3 & 1 & 3 & 1 && 337.0-0.1 & CTB 33 & 1.5 & 4 & 4 & 3 & 3 \\ 27.8+0.6 & & $50\times30$ & 4 & 4 & $3$ & 3 && 337.2-0.7 & & 6 & 4 & 4 & 4 & 1\\ 28.6-0.1 & & $13\times9$ & - & -& $3$ & 1 && 337.2+0.1 & & $3\times2$ & - & - & - & 3\\ 29.6+0.1 & & $5$ & - & -& $4$ & 3 && 337.8-0.1 & Kes 41 & $9\times6$ & 4 & 4 & 2 & 2 \\ 29.7-0.3 & Kes 75 & $3$ & 4& 4 & 3 & 1 && 338.1+0.4 & & $15?$ & 3 & 4 & 4 & 3\\ 31.5-0.6 & & $18$ & -&- & $3$ & 2 && 338.3-0.0 & & 8 & 4 & 4 & 3 & 3 \\% 31.9+0.0 & 3C 391 & $6$ & 1 & 1& $1$ & 1 && 338.5+0.1 & & 9 & 3 & 4 & 3 & 3\\ 32.1-0.9 & 
& $40$ & -& - & $3$ & 3 && 340.4+0.4 & & $10\times7$ & 4& 4 & 4 & 3\\ 32.4+0.1 & & 6 & - & - & -& 2 && 340.6+0.3 & & 6 & 1 & 1 & 2 & 1 \\ 32.8-0.1 & Kes 78 & $17$ & 3 & 1 & 3 & 3 && 341.2+0.9 & & $16\times22$ & - & - & 4 & 3 \\ 33.2-0.6 & &$18$ & 2 & 1 & 3 & 2 && 341.9-0.3 & & 7 & 3 & 4 & 4 & 3\\ 33.6+0.1 & Kes 79 & $10$ & 2 & 2 & 2 & 1 && 342.0-0.2 & & $12\times9$ & 2 & 4 & 3 & 3\\ 34.7-0.4 & W44 & $31$ & 1 & 4 & $1$ & 1 && 342.1+0.9 & & $10\times9$ & - &-& 4 & 3\\ 35.6-0.4 & & $15\times11$& - & -& - & 1 && 343.1-0.7 & & $27\times21$ & - & -& 3 & 3 \\ 36.6-0.7 & & $25?$ & - & 1 & $2$ & 3 && 344.7-0.1 & & 10 & 3 & 4 & 1 & 1\\ 39.2-0.3 & 3C 396 & $7$ & 4 & 4 & $1$ & 1 && 345.7-0.2 & & 6 & - & -& 4 & 1 \\ 40.5-0.5 & & $22$ & 2 & 4 &$4$ & 2 && 346.6-0.2 & & 8 & 4 & 4 & 1 & 2 \\ 41.1-0.3 & 3C 397 & $4.5\times2.5$ & 3 & 4& $1$ & $1$ && 347.3-0.5 & & $65\times55$ & - & -& 3 & 3\\ 42.8+0.6 & & $24$ & - & 4& $4$ & 4 && 348.5-0.0 & & 10 & - & - & 1 & 1\\ 43.3-0.2 & W49B & $4\times3$ & 1 & 1& $1$ & 1 && 348.5+0.1 & CTB 37A & 15 & 2 & 4 & 1 & 1\\ 45.7-0.4 & & $22$ & - & 4 & $2$ & 2 && 348.7+0.3 & CTB 37B & $17?$ & 1 & 3 & 3 & 1\\ 46.8-0.3 & HC30 & $17\times13$ & 3 & 1 & $3$ & 3 && 349.2-0.1 & & $9\times6$ & - & - & 3 & 3\\ 49.2-0.7 & W51C & $30$ & 4 & 1 & $3$ & 3 && 349.7+0.2 & & $2.5\times2$ & 1 & 1 & 1 & 1\\ 54.1+0.3 & & $1.5$ & 1 & 1& $3$ & 3 \\ \enddata \tablenotetext{a}{The radio sizes are given in arc-minutes and were taken from Green's catalog. When two dimensions are reported they refer to the major and minor axes of the ellipse.} \tablenotetext{b}{IRAS survey: Arendt 1989, with a classification scheme similar to the one employed in this work.} \tablenotetext{c}{IRAS survey: Saken et al. 1992. Detection levels 1 to 4 are explained in the text. For the \cite{1992ApJS...81..715S} survey, the detection classification is as follows: Y (detected), N (not detected), P (possible detections), and ? (not conclusive).
We have translated these to the same classification used in this work for ready comparison (Y:1, P:2, ?:3, N:4).} \tablenotetext{d}{GLIMPSE survey: Reach et al. 2006, with a classification scheme similar to the one employed in this work.} \tablenotetext{e}{MIPSGAL survey: this paper} \end{deluxetable} \begin{deluxetable}{llrrrrrrrrl} \tabletypesize{\footnotesize} \tablewidth{0pt} \tablecaption{Characteristics of detected SNRs in the MIPSGAL survey\tablenotemark{a}\label{table:1}} \tablehead{ \colhead{l+b} & \colhead{Name\tablenotemark{b}} & \colhead{S${_{\mathrm{rad}}}(^{\prime})$} & \colhead{S${_{\mathrm{phot}}}(^{\prime})$} & \colhead{F$_{8}$(Jy)} & \colhead{F$_{24}$(Jy)} & \colhead{F$_{70}$(Jy)} & \colhead{8/24} & \colhead{8/70} & \colhead{24/70} & \colhead{Region\tablenotemark{c}}} \startdata 10.5-0.0 & 10.5 & 6 & 4.8 & $14\pm2$ & $19\pm2 $ & $227\pm47$ & 0.75 & 0.06 & 0.08 & whole \\ 11.1+0.1 & 11.1 & $12\times10$ & $5.4\times3.8$ & $12\pm2$ & $16\pm2$ & $297\pm60$ & 0.71 & 0.04 & 0.06 & central region\\ 11.2-0.3 & 11.2 & $4$ & 4.8 & $4.6\pm1.1$ & $40\pm5$ & $101\pm23$ & 0.12 & 0.05 & 0.40 & whole \\ 12.5+0.2 & 12.5 & $6\times5$ & 1.5 & $0.9\pm0.2$ & $1.5\pm0.2$ & $7.4\pm1.7$ & 0.64 & 0.13 & 0.20 & northeastern region \\ 14.1-0.1 & 14.1& $6\times5$ & $1$ & $0.9\pm0.1$ &$ 10\pm1$ &$ 19\pm4$ & 0.09 & 0.54 & 0.05 & northwestern region\\ 14.3+0.1 & 14.3 & $5\times4$ & 4 & $7.9\pm1.1$ & $30\pm4$ & $171\pm36$ & 0.26 & 0.05 & 0.18 & whole\\ 15.9+0.2 & 15.9 & $7\times5$ & 6.5 & $3.2\pm2.5$ & $16\pm2$ & $55\pm16$ & 0.20 & 0.06 & 0.30 & whole \\ 16.4-0.5 & 16.4 & 13 & $6.5\times4.2$ & $40\pm5$ & $93\pm10$ & $950\pm192$ & 0.43 & 0.04 & 0.10 & northern arc\\ 18.6-0.2 & 18.6 & 6 & 7 & $26\pm4$ & $58\pm6$ &$ 554\pm115$ & 0.44 & 0.05 & 0.10 & whole\\ 20.4+0.1 & 20.4 & 8 & 8.2 & $43\pm5$ & $79\pm9$ & $1160\pm230$ &0.55 &0.04& 0.07 & whole\\ 21.5-0.1 & 21.5-0.1 & 5 & 5.2 &$ 12\pm3 $ & $39\pm4$ & $341\pm69$ & 0.31 & 0.04 & 0.11 & whole\\ 21.5-0.9 & 21.5-0.9& 4 & 1.6 & $0.4\pm0.1$ &
$0.6\pm0.1$ & $3.5\pm0.8$ &0.63 & 0.11 & 0.18 & central region \\ 21.8-0.6 & Kes69C & 20 & $8.9\times6.8$ & $38\pm6$ & $23\pm3$ & $458\pm93$ & 1.6 & 0.08 & 0.05 & central region\\ 23.6+0.3 & 23.6 & 10 & $15.6\times6.2$ &$173\pm18$ & $355\pm36$ &$ 5440\pm1090$ & 0.49 & 0.03 & 0.07 & whole\\ 27.4+0.0 & Kes 73 & 4 & 4.8 & $3.0\pm1.2$ & $37\pm4$ & $87\pm18$ &0.08 &0.04 & 0.42 & whole\\ 28.6-0.1 & 28.6A& & 3.6 & $7.2\pm1.0$ & $12\pm2$ & $294\pm65$ & 0.62 & 0.03 & 0.04 & northwestern arc\\ & 28.6B & & 2.8 & $\mathrm{<}2.0$ &$ 3.8\pm0.5$ & $\mathrm{<}35$ & $<$0.53 & - &0.11$>$ & southern filaments\\ & 28.6C & & 4.1 & $\mathrm{<}1.9$ &$ 5.7\pm0.8$ & $\mathrm{<}23$ & $<$0.32 & - & 0.25$>$ & eastern filament\\ 29.7-0.3 & Kes75S & 3 & $1.4\times0.9$ & $0.7\pm0.2$ & $4.8\pm0.5$ & $\mathrm{<}29$ & 0.15 & 0.17$>$ & 0.03$>$ & southern shell\\ & Kes75W & & $0.3\times1.1$ & $0.6\pm0.1$ & $2.9\pm0.3$ & $5.3\pm1.1$ & 0.22 & 0.55 & 0.12 & western shell\\ & Kes75 & & & $1.3\pm0.3$ & $7.8\pm0.8$ & $\mathrm{<}34$ & 0.17 & 0.23$>$ & 0.04$>$ & whole (both shells) \\ 31.9+0.0 & 3C 391 & $7\times5$ & 6.2 & $10\pm3$& $39\pm4$ & $395\pm81$ & 0.27 & 0.03 & 0.10 & whole\\ & 3C391BML & & 0.8 & $0.6\pm0.1$ & $0.9\pm0.1$ & $13\pm3$ & 0.74 & 0.05&0.06 & BML\\ & 3C391NWBar && $0.7\times0.2$ & $0.2\pm0.1$ & $0.8\pm0.1$ & $3.9\pm1.0$& 0.25&0.05 &0.22 & northwestern bar\\ 33.6+0.1 & Kes 79 & 10 & 8.4 & $18\pm6$ & $45\pm5 $ & $577\pm116$ & 0.40 & 0.03 & 0.08 & whole\\ & Kes79sf & & $1.7\times0.4$ & $\mathrm{<}0.4$ & $1.2\pm0.2$ & $\mathrm{<}10$ & $<$0.36 & - & 0.11$>$& southern filaments\\ 34.7-0.4 & W44 & $35\times27$ & $38\times28$ &$384\pm44$ & $348\pm39$& $6330\pm1300$ & 1.10 & 0.06 & 0.06 & whole\\ 35.6-0.4 & 35.6 & $15\times11$ & 11 & $44\pm8$ &$ 36\pm5$ & $487\pm99$ & 1.21 & 0.09 & 0.07 & whole\\ 39.2-0.3 & 3C 396 & $8\times6$ & $3.5\times2.5$ & $1.7\pm0.8$ & $8.5\pm1.0$ & $34\pm11$ & 0.20 & 0.05 & 0.25 & whole \\ & 3C396SE & & 1.4 & $3.5\pm2.3$ & $5.26\pm0.6$ & $158\pm32$ & 0.66 & 0.02 & 0.03 &
southeastern plume\\ & 3C396NW & & $1.4\times2.8$ & $0.7\pm0.2$ & $4.9\pm0.6$ & $\mathrm{<}22$ & 0.14 & 0.02$>$ & 0.14$>$ & northwestern arc\\ 41.1-0.3 & 3C 397 & $4.5\times2.5$ & $5\times3.6$ & $4.3\pm0.8$& $17\pm2$ & $34\pm14$ & 0.25 & 0.13 & 0.50 & whole\\ & 3C397SE & & $0.3$ & $<0.1$ & $0.6\pm0.1$ & $0.6\pm0.1$ & $<0.18$ & $<0.17$ & $0.95$ & ionic shock region\\ 43.3-0.2 & W49B & $4\times3$ & 5 & $5.7\pm0.7$ & $77\pm8$ & $529\pm107$ & 0.07 & 0.01 & 0.15 & whole\\ & W49BMol & & $0.6\times0.3$ & $0.3\pm0.1$ & $0.6\pm0.1$ & $6.7\pm1.4$ & 0.52&0.04 &0.08 & molecular shock region\\ & W49BIon & & $0.6\times0.3$ & $0.2\pm0.1$ & $3.1\pm0.3$ & $4.3\pm0.9$ & 0.09& 0.03 & 0.28& ionic shock region\\ 54.4-0.3 & HC40NE & 40 & 2.8 & $5.9\pm0.7$ & $2.9\pm0.4$ & $38\pm8$ & 2.04 & 0.16 &0.08 & northeastern region \\ & HC40TR & 40 & $4.7\times2.1$ & $3.8\pm0.5$& $8.0\pm0.9$ & $131\pm27$ & 0.47 & 0.03 & 0.06 & north top region\\ 296.8-0.3 & 296 & $20\times14$ & $4\times1.8$& $\mathrm{<}0.6$ &$ 1.5\pm0.2$ &$\mathrm{<}11$ & $<$0.39 & - & 0.14$>$ & western arc \\ 304.6+0.1 & Kes 17 & 8 & 8 & $17\pm2$ &$ 17\pm2$ & $149\pm31$ & 1.02 & 0.12 & 0.11 & whole\\ 310.8-0.4 & Kes20Ana & 12 &1.1 &$0.3\pm0.1$ & $1.4\pm0.2$ & $20\pm4$ &0.21 & 0.02 & 0.07 &northern arc\\ & Kes20Ase & 12 & 1.2 & $0.8\pm0.1$ & $0.8\pm0.1$ & $\mathrm{<}17$ & 1.1 & 0.05$>$& 0.04$>$ &southeastern ridge\\ 311.5-0.3 & 311& 5 & 4 & $2.1\pm0.4$ & $3.5\pm0.5$ & $58\pm14$ & 0.60 & 0.04 & 0.06 & whole \\ 332.4-0.4 & RCW 103 & 10 & $10\times8.7$ & $51\pm6$ & $101\pm11$ & $648\pm134$ & 0.50 & 0.08 & 0.16 & whole\\ & RCW103M & & $1.9\times0.4$ & $0.9\pm0.3$ & $5.9\pm0.6$ & $10\pm3$ & 0.16 & 0.09 & 0.58 & molecular region\\ 336.7+0.5 & 336 & $14\times10$ & 2.3 & $0.6\pm0.2$ & $2.4\pm0.3$ & $6.3\pm2.5$ & 0.25 & 0.09 & 0.38 & bow\\ 337.2-0.7 & 337 & 6 & 5.4 & $2.4\pm0.6$ & $6.3\pm0.8$ &$ 13\pm5$ & 0.38 & 0.18 & 0.48 & whole \\ 340.6+0.3 & 340 & 6 & 5.4 & $14\pm2$ & $19\pm2$ & $193\pm40$ & 0.75 & 0.07 & 0.10 & whole\\ 344.7-0.1 & 
344 & 10 & 9 & $4.8\pm1.9$ & $24\pm3$ & $117\pm28$ & 0.20 & 0.04 & 0.21 &whole\\ 345.7-0.2 & 345 & 6 & 1.8 & $\mathrm{<}0.4$ & $1.7\pm0.2$ & $4.8\pm1.0$ & $<$0.13 & 0.05 & 0.36 & southern arc\\ 348.5-0.0 & 348 & 10 & $3.6\times1.4$ & $1.3\pm0.2$ & $2.0\pm0.3$ & $13\pm3$ & 0.66 & 0.10 & 0.15 & central blob\\ 348.5+0.1 & CTB37A & 15 & 8 & $27\pm6$ & $27\pm4$ & $336\pm86$ & 0.89 & 0.08 & 0.09 & eastern shell\\ 348.7+0.3 & CTB37B & 17 & $8.2\times4$& $18\pm3$ & $32\pm4$ & $401\pm82$ & 0.58 & 0.05 & 0.08 & interface\\ 349.7+0.2 & 349 & $2.5\times2$ & 2.1& $4.1\pm0.6$ & $32\pm4$ & $240\pm49$ & 0.13 & 0.02 & 0.13 & whole \enddata \tablenotetext{a}{\ The color ratios are obtained using the flux density value for the partial or whole region depending on the infrared morphology. The radio sizes ($S_\mathrm{rad}$) are given in arc-minutes and were taken from Green's catalog. The photometry size ($S_\mathrm{phot}$) refers to the size used for obtaining the infrared fluxes. The integrated fluxes have not been color corrected since the mechanism for the infrared emission is uncertain. Also, no extinction correction has been applied. Furthermore, the 8 $\micron$ fluxes were multiplied by the extended source aperture correction factor as described in \cite{2005PASP..117..978R}. The tabulated errors are the combination of the 1$\sigma$ errors plus the systematic calibration errors associated with each wavelength.
Upper flux limits are given for remnants where point source contamination or apparent lack of emission (3$\sigma$ detection) is present.} \tablenotetext{b}{\ Name or designation used to identify remnants in Figures \ref{colorplotflux} and \ref{colorplot}.} \tablenotetext{c}{\ Locations and areas used for photometry of subregions of remnants as identified in the figures in the appendix.} \end{deluxetable} \begin{deluxetable}{lccc} \tablewidth{0pt} \tablecaption{Color temperatures and dust masses based on the 24 and 70 $\micron$ fluxes for selected SNRs \label{mass} \tablenotemark{a}} \tablehead{ \colhead{SNRs} & \colhead{T${_{\mathrm{24/70}}}$ \tablenotemark{b} } & \colhead{Distance \tablenotemark{c}} & \colhead{M${_{\mathrm{dust}}}$ \tablenotemark{d}}} \startdata G11.2-0.3 & 63 & 4.4 & 0.034 \\ G15.9+0.2 & 60 & 8.5 & 0.081\\ Kes 73 & 63 & 8.5 & 0.11 \\ Kes 75 & 58 & $<$7.5 & $<$0.045 \\ 3C 391 & 51 & 8.5 & 1.1\\ Kes 79 & 50 & 7.8 & 1.5 \\ W44 & 48 & 2.8 & 2.5 \\ 3C 396 & 58 & 7.7$>$ & 0.046$>$\\ 3C 397 & 65 & 7.5$>$ & 0.029$>$\\ W49B & 54 & 10 & 1.6 \\ RCW 103 & 55 & 3.1 & 0.18\\ G337.2-0.7 & 65 & 2.0-9.3 & $<$0.018 \\ G349.7+0.2 & 54 & 14.8 & 1.6 \\ CasA \tablenotemark{e} & 87 & 3.4 & 0.008 \\ IC443 \tablenotemark{e} & 48 & 0.7-2 & $<$0.27 \enddata \tablenotetext{a}{\ Color temperatures and dust masses are obtained using a dust emissivity index of $\beta=2$.} \tablenotetext{b}{\ Approximate values for the dust temperature (K).} \tablenotetext{c}{\ Distances (kpc) taken from Green's catalog.} \tablenotetext{d}{\ Dust mass in solar masses (M$_{\sun}$).} \tablenotetext{e}{\ Cas A and IC443 are included for comparison. Estimates are based on the flux densities obtained by \cite{2004ApJS..154..290H} and Noriega-Crespo et al.
(\emph{in preparation}), respectively.} \end{deluxetable} \begin{deluxetable}{lrclr} \tablewidth{0pt} \tablecaption{F$_{\mathrm{MIR}}$-radio ratios for selected SNRs \label{table:qsnr}} \tablehead{ \colhead{Name} & \colhead{$q{_{\mathrm{MIR}}}$} & & \colhead{Name} & \colhead{$q{_{\mathrm{MIR}}}$}} \startdata G10.5-0.0& 2.06 && G35.6-0.4 & 1.37 \\ G11.2-0.3& 0.54 && 3C396 & 0.06 \\ G14.3+0.1& 2.16 && 3C397 & 0.10\\ G15.9+0.2& 0.86 && W49B & 0.83 \\ G18.6-0.2& 2.24 && Kes 17 & 0.69 \\ G20.4+0.1& 2.18 && G311.5-0.3 & 0.91 \\ G21.5-0.1& 2.60 && RCW103 & 1.07 \\ G23.6+0.3& 2.43 & & G337.2-0.7 & 0.83 \\ Kes 73& 1.06 && G340.6+0.3 & 1.22 \\ Kes 75& 0.32 && G344.7-0.1 & 1.41 \\ 3C391& 0.87 & & G349.7+0.2 & 0.76 \\ Kes 79 & 1.05 & & CasA \tablenotemark{a} & -1.05 \\ W44 & 1.02 && IC443 \tablenotemark{b} & 0.53 \enddata \tablenotetext{a}{Values for infrared and radio emission from Cas A were taken from \cite{2004ApJS..154..290H}.} \tablenotetext{b}{Values for infrared emission from IC 443 were taken from Noriega-Crespo et al. (\emph{in preparation}).} \end{deluxetable} \begin{deluxetable}{lrrr} \tablewidth{0pt} \tablecaption{$q{_{\mathrm{IR_{\lambda}}}}$ values for different wavelengths \tablenotemark{a}\label{table:qlambda}} \tablehead{ \colhead{$q{_{\mathrm{IR_{\lambda}}}}$} & \colhead{All sample} & \colhead{Upper trend} & \colhead{Lower trend}} \startdata $q{_{\mathrm{24}}}$ & $0.71\pm0.42$ & $0.39\pm0.11$ & $1.68\pm0.05$\\ $q{_{\mathrm{70}}}$ & $1.55\pm0.60$ & $1.17\pm0.19$ & $2.70\pm0.04$\\ \hline $q{_{\mathrm{MIR}}}$ & $1.19\pm0.53$ & $0.83\pm0.15$ & $2.28\pm0.04$ \enddata \tablenotetext{a} {\ $q{_{\mathrm{24}}}$ and $q{_{\mathrm{70}}}$ parameters are monochromatic. They are defined as $q{_\mathrm{IR_\lambda}}=\log(F{_\mathrm{IR_{\lambda}}}/F_\mathrm{21cm})$ where F${_\mathrm{IR_\lambda}}$ and F$_\mathrm{21cm}$ are the flux densities (in Jy) at specific mid-infrared wavelengths and in the radio. 
$q{_{\mathrm{MIR}}}$ is a dimensionless parameter which represents the ratio of mid-infrared to radio emission (it is defined in Eq. \ref{qmir}). Individual $q{_{\mathrm{MIR}}}$ values for each remnant are presented in Table~\ref{table:qsnr}.} \end{deluxetable} \begin{deluxetable}{lcccr} \tablewidth{0pt} \tablecaption{Infrared to X-ray ratios for selected SNRs\label{table:xray}} \tablehead{ \colhead{Name} & \colhead{Total MIR Flux} & \colhead{X-ray Flux$_{(0.3-10~\mathrm{keV})}$ \tablenotemark{a}} & \colhead{IRX} & \colhead{Age\tablenotemark{b}}\\ & \colhead{(10$^{-9}$ erg cm$^{-2}$ s$^{-1}$)} & \colhead{(10$^{-9}$ erg cm$^{-2}$ s$^{-1}$)} & & } \startdata G11.2-0.3 & 11 & 4.0 & 2.8 & 1600 (1) \\ G15.9+0.2 & 5.3 & 0.96 & 5.6 & 500-2400 (2)\\ Kes73 & 9.9 & 1.8 & 5.6 & 500-1000 (3)\\ Kes75 & 2.7 & 0.21 & 13 & $<$884 / $>$2400 (4)\\ 3C391 & 20 & 0.62 & 32 & $\sim$4000 (5)\\ Kes79 & 16 & 0.17 & 93 & $\sim$6000 (6) \\ W44\tablenotemark{c} & 2.0 & 0.093 & 240 & $\sim$20000 (7) \\ 3C396 & 3.0 & 0.19 & 16 & 7100 (8) \\ 3C397 & 8.1 & 1.3 & 6.2 & $\sim$5300 (9) \\ W49B & 30 & 6.0 & 5.1 & $\sim$1000-4000 (10) \\ RCW103 & 27 & 17 & 1.6 & $\sim$2000 (11) \\ G337.2-0.7 & 1.6 & 0.93 & 1.7 & 750-3500 (12)\\ G349.7+0.2 & 14 & 0.33 & 42 & $\sim$2800 (13) \enddata \tablenotetext{a}{\ X-ray fluxes were retrieved from the \emph{Chandra} Supernova Remnant Catalog.} \tablenotetext{b}{\ Age is given in years and the values were taken from the following literature: (1) \cite{1977QB841.C58......}; (2) \cite{2006ApJ...652L..45R}; (3) \cite{2008ApJ...677..292T}; (4) upper limit for the spin-down age of the associated pulsar \cite{2006ApJ...647.1286L} / based on ionization time-scales \cite{2007ApJ...667..219M}; (5) \cite{2004ApJ...616..885C}; (6) \cite{2004ApJ...605..742S}; (7) \cite{1991ApJ...372L..99W}; (8) \cite{1999ApJ...516..811H}; (9) \cite{2005ApJ...618..321S}; (10) e.g.
\cite{2000ApJ...532..970H}; (11) \cite{1997PASP..109..990C}; (12) \cite{2006ApJ...646..982R}; (13) \cite{2002ApJ...580..904S}} \tablenotetext{c}{\ Infrared and X-ray flux values are for a central rectangular region of W44.} \end{deluxetable} \begin{figure*} \centering \includegraphics[width=1.\textwidth]{f1.pdf} \caption{A sketch of a dusty SNR spectrum, made with a 60 K blackbody with the most important fine-structure and ionic emission lines superimposed (mimicking ISO Short and Long Wavelength Spectrometer observations of SNR RCW 103). Emission lines within the wavelength range of the MIPS filters are annotated. Overplotted (dashed lines) are the normalized response functions of the MIPS filters at 24 and 70 $\micron$. This shows that the contribution of line emission to the MIPS filters is relatively small when a dust continuum is present.} \label{mipsfilters} \end{figure*} \begin{figure*} \centering \includegraphics[width=.38\textwidth]{f2a.pdf} \includegraphics[width=.38\textwidth]{f2b.pdf}\\ \includegraphics[width=.38\textwidth]{f2c.pdf} \includegraphics[width=.38\textwidth]{f2d.pdf}\\ \includegraphics[width=.38\textwidth]{f2e.pdf} \includegraphics[width=.38\textwidth]{f2f.pdf} \caption{Examples of level 1 detections at 24 $\micron$. From left to right, top to bottom: G11.2-0.3, G27.4+0.0 (Kes 73), G31.9+0.0 (3C 391), G43.3-0.2 (W49B), G332.4-0.4 (RCW 103), and G349.7+0.2. The greyscale is linear with the ranges in MJy/sr as displayed in the intensity bars below each image. The angular extents of the images differ.} \label{good} \end{figure*} \begin{figure*} \centering \includegraphics[width=1.\textwidth]{f3.pdf} \caption{Color-color plot of the SNRs in our sample detected and measured as a whole. For comparison, we have also plotted the location of Cas A and IC443 in this diagram.
We find that this sample is well fitted by a slope of $1.1\pm0.2$ in this logarithmic representation, showing that the 8 and 70 $\micron$ emission are related by a roughly constant factor while both change dramatically with respect to 24 $\micron$. Vertical lines show the range of expected temperatures for a modified blackbody using only the fluxes measured at 24 and 70 $\micron$, while for the horizontal lines the temperatures are calculated using the fluxes at 8 and 24 $\micron$. Dotted and dashed lines correspond to a $\beta$ of 1 or 2, respectively. If the IR emission at longer wavelengths is mainly due to dust, then the range of temperatures for our sample of SNRs is roughly between 50 and 85 K for $\beta$=1 and between 45 and 70 K for $\beta$=2.} \label{colorplotflux} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\textwidth]{f4.pdf} \caption{Color-color plot based on surface brightnesses of localized regions in SNRs in our sample. Ratios for SNRs known to have molecular interactions, atomic fine-structure line emission, or PDR colors are also shown. ICE and ICW refer to IC443, eastern (strong [O I] emission at 63 $\micron$) and western (interacting with a nearby \ion{H}{2} region) regions, respectively. Colors of pure infrared synchrotron emission are plotted as a straight line. Color ratios for the diffuse ISM \citep{2010arXiv1010.2774C} and evolved \ion{H}{2} regions ({Paladini et al.\ \emph{(in preparation)}}) are also included. Remnants with upper flux limits are represented in italic. For comparison, we plot the slope (dotted-dashed line) of $1.1\pm0.2$ found for the sample of remnants detected and measured as a whole (see Fig.~\ref{colorplotflux}).
Also depicted is a dashed line showing a different trend for evolved \ion{H}{2} regions.} \label{colorplot} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\textwidth]{f5.pdf} \caption{Ratio of MIPS 70 $\micron$ and radio continuum at 1.4 GHz ($q{_{\mathrm{70}}}=\log(F{_{\mathrm{70\micron}}}/F{_{\mathrm{1.4GHz}}}$)) versus the corresponding ratio at 24 $\micron$ ($q{_{\mathrm{24}}}$). There is a considerable range in the positions of the SNRs. The ratios are correlated and are well fitted by a slope of 1.2. There seems to exist a group of catalogued sources with ratios more characteristic of \ion{H}{2} regions (upper population in the plot). The infrared counterparts in the lower part of the plot show ratios of infrared (70 $\micron$) to radio comparable to what was found for SNRs in previous studies (e.g., \cite{1989MNRAS.237..381B}), which found infrared (60 $\micron$) to 2.7 GHz ratios lower than 20. The location of IC443 is also shown. Cas A falls off the plot, below the trend, with ratios (0.1, 0.05).} \label{radio_infrared} \end{figure*} \begin{figure*} \centering \includegraphics[width=1.\textwidth]{f6.pdf} \caption{Correlation between radio and infrared emission. For the entire population, the slope of the correlation is 1.10 (dotted line). There seem to exist two trends. The upper population is more characteristic of SNRs, while the lower population looks more like \ion{H}{2} regions. The latter are the same objects with the high values of $q{_{\mathrm{24}}}$ and $q{_{\mathrm{70}}}$ in Figure~\ref{radio_infrared}. The slopes for the upper and lower populations are 0.96 (dashed line) and 0.93 (dot-dashed line), respectively, not significantly different. Again, IC443 and Cas A are also included for comparison.
Cas A stands out due to its strong radio synchrotron emission.} \label{helou_82470} \end{figure*} \begin{figure*} \centering \includegraphics[width=1.\textwidth]{f7.pdf} \caption{SEDs of Chandra SNRs, ordered by approximate remnant age (increasing left to right, top to bottom). For the radio, the synchrotron spectrum is shown by the dotted line for the appropriate spectral index, anchored in the observed range at 1 GHz (triangles). The circles indicate the power in the infrared at 8, 24 and 70 $\micron$. In the X-ray, the band (0.3-10 keV) of the Chandra images is used (squares). Horizontal lines represent the width of the bandpass. We also include Cas A and IC443 for comparison. Their infrared flux densities are taken from \cite{2004ApJS..154..290H} and Noriega-Crespo et al. (\emph{in preparation}) while the X-ray fluxes are from the Chandra Supernova Remnant Catalog and \cite{1987ApJ...320L..27D}, respectively. Again, note that for W44 the infrared and X-ray values apply only to the central region while the radio estimate is for the whole remnant. The integrals over the infrared and X-ray ranges reveal their contributions to cooling the shocked plasma.} \label{figxray} \end{figure*}
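The color temperatures listed in Table~\ref{mass} follow from inverting the 24-to-70 $\micron$ flux ratio of a modified blackbody with emissivity index $\beta=2$; the dust mass then follows from the 70 $\micron$ flux, the distance, and an assumed grain opacity. A minimal sketch of the temperature inversion is given below, using only the Python standard library; the bracketing interval and bisection count are illustrative choices, not values from this work.

```python
import math

H_PLANCK = 6.626e-34   # J s
C_LIGHT = 2.998e8      # m / s
K_BOLTZ = 1.381e-23    # J / K

def planck_nu(nu, temp):
    """Planck function B_nu(T) in SI units; only flux ratios are used below."""
    return (2.0 * H_PLANCK * nu**3 / C_LIGHT**2
            / math.expm1(H_PLANCK * nu / (K_BOLTZ * temp)))

def color_temperature(f24, f70, beta=2.0):
    """Temperature of a modified blackbody, F_nu ~ nu^beta * B_nu(T),
    whose 24/70 micron flux ratio matches f24/f70 (fluxes in Jy)."""
    nu24, nu70 = C_LIGHT / 24e-6, C_LIGHT / 70e-6
    target = f24 / f70
    lo, hi = 10.0, 500.0      # bracketing interval in K (illustrative choice)
    for _ in range(100):      # bisection: the ratio rises monotonically with T
        mid = 0.5 * (lo + hi)
        ratio = (nu24 / nu70)**beta * planck_nu(nu24, mid) / planck_nu(nu70, mid)
        if ratio < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For W49B ($F_{24}=77$ Jy, $F_{70}=529$ Jy) this inversion returns a temperature close to the 54 K listed in the table.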
\section{I. Introduction} Unconventional superconductivity in the layered perovskite Sr$_2$RuO$_4$ has long attracted interest \cite{Maeno1994,Rice1995,Mackenzie2003,Kallin2012,Mackenzie2017,Kivelson2020}. In particular, recent progress in nuclear magnetic resonance (NMR) experiments has placed strong constraints on the spin sector of the superconducting order parameter \cite{Pustogow2019,Ishida2020,Chronister2021}, offering a clue to a consistent understanding of several experimental results \cite{Kittaka2009,Yonezawa2013,Hicks2014} that were difficult to reconcile with earlier NMR results \cite{Ishida1998,Murakawa2007,Ishida2008,Ishida2015}, as well as stimulating further symmetry-based experiments to elucidate the order parameter \cite{Benhabib2021,Ghosh2021,Grinenko2021}. As a result, an exotic two-component order parameter with broken time-reversal symmetry has been proposed, of which only a few examples have been suggested \cite{Kasahara2007,Smidman2015}, and various theoretical attempts have also been made \cite{Gingras2019,Romer2019,Suh2020,Willa2021,Romer2021,Zhang2021}, opening an avenue for exploring the unconventional pairing interactions that realize such order parameters. To address this underlying issue, it is crucially important to establish the electronic phase diagram as a function of external parameters such as pressure and chemical substitution, as is widely discussed in correlated materials \cite{Das2016, Paschen2021}. Indeed, although the superconducting state in Sr$_2$RuO$_4$ is extremely sensitive to impurities \cite{Mackenzie1998}, dramatic changes in the electronic state with elemental substitutions have been reported. In the isovalent systems, for instance, a spin-glass state develops over a wide range of the Ca content in (Sr, Ca)$_2$RuO$_4$ \cite{Nakatsuji2000,Carlo2012}, which may originate from the structural degrees of freedom of the RuO$_6$ octahedra.
On the other hand, in Sr$_2$(Ru, Ti)O$_4$, an incommensurate spin-density-wave order due to Fermi-surface nesting appears with glassy behavior \cite{Minakata2001}. In contrast to the antiferromagnetic (AFM) coupling seen in the isovalent systems, La$^{3+}$ substitution for Sr$^{2+}$ results in electron doping that expands the electron-like $\gamma$ Fermi surface, leading to ferromagnetic (FM) fluctuations owing to an enhancement of the density of states (DOS) at the Fermi energy $N(\varepsilon_{\rm F})$ near the van Hove singularity (vHs) \cite{Kikugawa2004_La,Kikugawa2004,Shen2007}. These results clearly show the complicated magnetic instabilities present in the parent compound Sr$_2$RuO$_4$, which involve complex structural and electronic origins. Such instabilities are also discussed in several studies including NMR and neutron experiments \cite{Imai1998,Sidis1999,Braden2002,Steffens2019,Jenni2021}. Among the many substituted systems, Co- and Mn-substituted Sr$_2$RuO$_4$ serves as a fascinating platform to investigate how magnetic fluctuations mediate the emergence of superconductivity, because slight substitutions of Co and Mn drastically change the superconducting ground state into FM and AFM glassy states, respectively \cite{Ortmann2013}. In the Co-substituted system, an FM cluster glass characterized by an exponential relaxation of the remanent magnetization appears at low temperatures. Also, the electronic specific-heat coefficient $\gamma_{\rm e}$ increases with increasing Co content, indicating an increased $N(\varepsilon_{\rm F})$, similar to La substitution \cite{Kikugawa2004_La}. It is noteworthy that the increase of $N(\varepsilon_{\rm F})$ in a $1.5\%$ Co-substituted sample estimated from $\gamma_{\rm e}$ is comparable to that in a $5\%$ La-substituted sample, possibly implying an effective electron-doping effect of the Co substitution \cite{Ortmann2013}.
Similarly, the AFM transition temperature in Mn-substituted samples is much higher than that in Ti-substituted ones, demonstrating the enigmatic role of the Co and Mn substitutions in strengthening the magnetic couplings in Sr$_2$RuO$_4$. The aim of this study is to examine such magnetic fluctuations in Sr$_2$Ru$_{1-x}M_x$O$_4$ ($M = $ Co, Mn) by means of Seebeck coefficient measurements, a powerful probe of fluctuations since the Seebeck coefficient is a measure of the entropy per charge carrier \cite{Behniabook}. The observed temperature dependence of $S/T$ ($S$ and $T$ being the Seebeck coefficient and temperature, respectively) has a maximum near characteristic temperatures in the magnetization in both Co- and Mn-substituted samples, indicating that both FM and AFM fluctuations are responsible for the enhancement of the Seebeck coefficient. Moreover, in sharp contrast to typical metallic behavior, in which $S/T$ remains constant, $S/T$ increases on cooling in the parent compound, implying that both FM and AFM fluctuations remain in the itinerant electrons of Sr$_2$RuO$_4$. \section{II. Experimental details} Single crystals of Sr$_2$Ru$_{1-x}M_x$O$_4$ ($M = $ Co, Mn) were grown by a floating-zone method using an image furnace with a pair of halogen lamps and elliptical mirrors \cite{Mao1999, Ortmann2013}. We used high-purity SrCO$_3$ (99.99\%+), RuO$_2$ (99.9\%), MnO$_2$ (99.99\%), and CoO (99.9\%) as starting materials. A 15\% excess of RuO$_2$ was added to compensate for the evaporation of Ru during crystal growth. The Mn and Co concentrations in the obtained crystals were determined with an electron-probe microanalyzer. The measured Mn concentration was 2.9\%, in good agreement with the nominal concentration of 3\%. In contrast, the measured Co concentrations were 1.8\% and 0.8\% for the nominally 3\% and 1\% Co-substituted samples, respectively. The relation between the nominal and measured concentrations is consistent with the trend reported in Ref. \onlinecite{Ortmann2013}.
\begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{fig1.pdf} \caption{Temperature dependence of the Seebeck coefficient $S$ of Sr$_2$Ru$_{1-x}M_x$O$_4$ ($M = $ Co, Mn) single crystals.} \label{fig1} \end{center} \end{figure} The typical dimensions of the measured crystals were $\approx 3\times 0.2 \times 0.2$~mm$^3$. The Seebeck coefficient was measured by a steady-state technique using a manganin-constantan differential thermocouple in a closed-cycle refrigerator. The thermoelectric voltage of the sample was measured with a Keithley 2182A nanovoltmeter. A temperature gradient of typically 0.5~K/mm was applied along the $ab$-plane direction using a resistive heater. The thermoelectric voltage from the wire leads was subtracted. Magnetization was measured using a superconducting quantum interference device magnetometer (Quantum Design, MPMS) under field-cooled (FC) and zero-field-cooled (ZFC) processes with an external field of $\mu_0H=0.1$~T applied along the $c$ axis. \section{III. Results and discussion} Figure 1 summarizes the temperature variation of the Seebeck coefficient $S$ in Sr$_2$Ru$_{1-x}M_x$O$_4$ ($M = $ Co, Mn) single crystals. The overall behavior of $S$ in the parent compound Sr$_2$RuO$_4$ is consistent with earlier results \cite{Yoshino1996,Xu2008}, and has been discussed as an intriguing example for studying the internal degrees of freedom in correlated metals \cite{Mravlje2016}. In the substituted compounds, $S$ increases (decreases) with Co (Mn) substitution near room temperature, probably because of the electron (hole) doping effect, as suggested in Ref. \onlinecite{Ortmann2013}. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{fig2.pdf} \caption{ Comparison of the temperature dependence of (a) $dS/dT$ and (b) $S/T$ of Sr$_2$RuO$_4$ single crystals. The red circles show the present data, and the others are data extracted from Refs.
\onlinecite{Yoshino1996,Xu2008}.} \label{fig2} \end{center} \end{figure} In Refs. \onlinecite{Yoshino1996,Xu2008}, the Seebeck coefficient of the parent compound was analyzed in the differential form $dS/dT$, and an anomaly was found near $20-25$~K. Xu \textit{et al.} suggested that a band-dependent coherence develops below the anomaly temperature on the basis of Seebeck and Nernst measurements \cite{Xu2008}. Indeed, such coherence seems to be vital in this system \cite{Mravlje2011}. To examine this anomaly at $20-25$~K, we compare the present data for Sr$_2$RuO$_4$ with the results extracted from Refs. \onlinecite{Yoshino1996,Xu2008} in Fig. 2(a). Although there is a sample dependence in magnitude, all the data exhibit a kink around $20-25$~K, in good agreement with one another. Also note that the effect of the impurity phase of SrRuO$_3$ is negligible, because the Seebeck coefficient $S$ of a composite sample with conductivity $\sigma$ is given by $\sigma S = \sum_k\alpha_k\sigma_kS_k$ in a parallel-circuit model, where $\alpha_k$, $\sigma_k$, and $S_k$ are the volume fraction, conductivity, and Seebeck coefficient of material $k$ \cite{Okazaki2012}, and the volume fraction of the impurity phase in the present crystal was negligibly small. Here we discuss the Seebeck coefficient in the form $S/T$ instead of $dS/dT$, since the Seebeck coefficient in the free-electron model (an oversimplified model for multiband Sr$_2$RuO$_4$) is expressed as $S\approx -\frac{k_{\rm{B}}^2T}{e}\frac{N(\mu)}{n}$, where $k_{\rm{B}}$, $e$, $n$, $\mu$, and $N$ are the Boltzmann constant, elementary charge, carrier density, chemical potential, and DOS, respectively \cite{Behnia2004}.
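To set the scale of this expression, a quick numerical evaluation is helpful. The sketch below uses the free-electron relation $N(\mu)=3n/2\varepsilon_{\rm F}$ with purely hypothetical parameter values (they are not fitted to Sr$_2$RuO$_4$); it illustrates that $S/T$ grows as the DOS per carrier grows, i.e., as the Fermi energy shrinks:

```python
K_B = 1.381e-23    # Boltzmann constant, J / K
E_CHG = 1.602e-19  # elementary charge, C

def s_over_t_free_electron(n, fermi_energy_ev):
    """S/T ~ -(k_B^2/e) * N(mu)/n with the free-electron DOS N(mu) = 3n/(2 E_F).

    Returns S/T in V/K^2. The carrier density n cancels, so in this
    oversimplified picture S/T depends only on the Fermi energy.
    """
    e_f = fermi_energy_ev * E_CHG   # J
    dos = 1.5 * n / e_f             # states per J per m^3
    return -(K_B**2 / E_CHG) * dos / n

# Hypothetical inputs, only to set the scale: a 1 eV Fermi energy gives
# S/T of order -1e-8 V/K^2; a ten times smaller Fermi energy (heavier
# band, larger DOS per carrier) gives a ten times larger |S/T|.
```

A constant, temperature-independent $S/T$ is thus the free-electron expectation, which is why a temperature-dependent $S/T$ signals a changing $N(\mu)/n$.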
In this form, one can follow the temperature dependence of the DOS or the carrier density, as is widely done for correlated electron systems, such as the pseudogap state in transition-metal oxides \cite{Terasaki2001,Ikeda2016,Collignon2021} and heavy-fermion formation in rare-earth compounds \cite{Zlatic2003}. Note that the differential $dS/dT$ instead involves the temperature derivatives of $N(\mu)$ and $n$ in complicated forms. Figure 2(b) shows the temperature dependence of $S/T$ in Sr$_2$RuO$_4$ for the present crystals together with $S/T$ calculated from the data extracted from Refs. \onlinecite{Yoshino1996,Xu2008}. Interestingly, all the data are temperature dependent; in sharp contrast to conventional metals, in which $S/T$ remains constant \cite{Behniabook}, $S/T$ significantly increases with decreasing temperature, the origin of which will be discussed later. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{fig3.pdf} \caption{ (a) Temperature dependence of $S/T$ of Sr$_2$Ru$_{1-x}M_x$O$_4$ ($M = $ Co, Mn) single crystals. The arrows show the peak temperature $T_{\rm peak}$. (b) Temperature dependence of the irreversible magnetization $M_{\rm ir} \equiv M_{\rm FC}-M_{\rm ZFC}$. The arrows represent the characteristic temperatures $T^\dagger$, $T_{\rm s}$, and $T_{\rm w}$. (See text for details.) In the inset, $M_{\rm FC}$, $M_{\rm ZFC}$ (left axis), and $M_{\rm ir}$ (right axis) of the 0.8\% Co-substituted sample are shown together with the definitions of $T_{\rm s}$ and $T_{\rm w}$. } \label{fig3} \end{center} \end{figure} We now focus on the temperature variation of $S/T$ in the substituted systems. Figure 3(a) shows the temperature dependence of $S/T$ in Sr$_2$Ru$_{1-x}M_x$O$_4$ ($M = $ Co, Mn), in which we find a drastic change due to the substitutions. At low temperatures, $S/T$ exhibits a prominent peak structure for both substituted systems.
The peak temperature $T_{\rm peak}$, defined as the temperature at which $S/T$ exhibits a maximum, is shown by arrows in Fig. 3(a). Note that the phonon-drag effect is unlikely to enhance the Seebeck coefficient in the substituted systems, because the substitutions generally suppress the phonon mean free path \cite{Kurita2019}. To shed light on the relation to the magnetism, we plot the irreversible magnetization $M_{\rm ir}$, defined as $M_{\rm ir}\equiv M_{\rm FC} - M_{\rm ZFC}$, in Fig. 3(b), where $M_{\rm FC}$ and $M_{\rm ZFC}$ are the magnetizations measured under the FC and ZFC processes, respectively. While the onset of the irreversibility may correspond to a transition to the FM and AFM glassy states for the Co- and Mn-substituted systems, respectively \cite{Ortmann2013}, a close look at the $M_{\rm ir}$ data reveals that the onset of the irreversibility is accompanied by a double step. The inset of Fig. 3(b) shows the temperature variations of $M_{\rm FC}$, $M_{\rm ZFC}$, and $M_{\rm ir}$ for $x=0.008$ (Co) at low temperatures. The $M_{\rm ZFC}$ data decrease more steeply than those in Ref. \onlinecite{Ortmann2013}, but this may be due to the low applied field in our measurement. From the plots of $M_{\rm ir}$ in the inset, one can see that a weak irreversibility sets in at $T_{\rm w}\approx14$~K and subsequently the irreversibility is strongly enhanced below $T_{\rm s}\approx 4$~K. The weak and strong irreversibility temperatures $T_{\rm w}$ and $T_{\rm s}$ are defined as the intersection points of the two straight lines drawn as dotted lines in the inset.
Importantly, such a coexistence of weak and strong irreversibility is often observed in ferromagnetic spin-glass systems \cite{Pappa1984,DeFotis1998,Dahr2006,Lago2012}, and is theoretically interpreted in terms of the multicomponent vector spin model of Gabay and Toulouse \cite{Gabay1981}, in which the transverse spin components freeze first at $T_{\rm w}$ on cooling, followed by the freezing of the longitudinal spin components at $T_{\rm s}$, although it is still under debate whether this represents a true thermodynamic phase transition. On the other hand, the temperature dependence of $M_{\rm ir}$ in the Mn-substituted sample is different from that in the Co-substituted samples: On cooling, the irreversibility sets in at $T\approx27$~K, which is almost identical to the onset temperature $T_\mathrm{ir}$ of the static order \cite{Ortmann2013}, and then shows weak temperature dependence below $T^\dagger \approx18$~K, at which $M_{\rm ir}$ displays a kink structure as shown in Fig. 3(b). In the Mn-substituted sample, $T_\mathrm{peak}$ in $S/T$ is close to this anomaly temperature $T^\dagger$, although the origin of $T^\dagger$ is unclear at present. Note that the remanent magnetization is also distinct between the Co- and Mn-substituted systems: While the temperature dependence of the remanent magnetization is monotonic in the Co-substituted samples, it exhibits an anomalous peak structure below the onset temperature $T_\mathrm{ir}$ for the Mn-substituted systems \cite{Ortmann2013}, implying the existence of a characteristic temperature below $T_\mathrm{ir}$. The nature of the magnetic glassy state is an issue for future study using microscopic NMR and neutron measurements. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{fig4.pdf} \caption{ Phase diagram of Sr$_2$Ru$_{1-x}M_x$O$_4$ ($M = $ Co, Mn). The closed circles show the peak temperature $T_{\rm peak}$ at which $S/T$ exhibits a maximum.
The weak and strong irreversibility temperatures $T_{\rm w}$ (stars) and $T_{\rm s}$ (triangles) and the anomaly temperature $T^\dagger$ (square) are determined from the irreversible magnetization $M_{\rm ir}$. The onset temperature $T_{\rm ir}$ (open circles) and the regions of FM cluster glass (CG) and spin glass state with short-range incommensurate AFM order (SG-IC) are extracted from Ref. \onlinecite{Ortmann2013}. } \label{fig4} \end{center} \end{figure} In Fig.~\ref{fig4}, we plot the peak temperature $T_{\rm peak}$ in $S/T$ together with $T_\mathrm{s}$, $T_\mathrm{w}$, and $T^\dagger$ on the phase diagram reported in Ref.~\onlinecite{Ortmann2013}. The peak temperature $T_\mathrm{peak}$ in $S/T$ coincides well with the weak irreversibility temperature $T_\mathrm{w}$ and the anomaly temperature $T^\dagger$ for the Co- and Mn-substituted samples, respectively, indicating an intimate relationship between the Seebeck coefficient and the magnetic fluctuations. Here, since the Co and Mn substitutions induce the FM cluster glass and the spin glass states with short-range AFM order, respectively \cite{Ortmann2013}, the present results indicate that both FM and AFM fluctuations are substantial for the enhancement of the Seebeck coefficient at $T_{\rm peak}$. Indeed, this behavior bears a striking resemblance to that in itinerant magnets such as the perovskite ruthenate CaRu$_{0.8}$Sc$_{0.2}$O$_3$ \cite{Yamamoto2017} and the doped Heusler alloy Fe$_2$VAl \cite{Tsujii2019}; in these compounds, the Seebeck coefficient is enhanced near the ferromagnetic transition temperature through a sort of magnon-drag effect, and is reduced by applying a magnetic field owing to the field-induced suppression of the magnetic fluctuations. In the antiferroquadrupolar (AFQ) compound PrIr$_2$Zn$_{20}$, moreover, $S/T$ exhibits a peak structure near the AFQ transition temperature in high magnetic fields, where the quadrupolar fluctuation could be enhanced \cite{Ikeura2013}.
Now let us recall the $S/T$ behavior in Sr$_2$RuO$_4$ (Fig. 2), which increases with cooling down to the lowest temperature of the present measurement ($\sim3$~K). While Seebeck coefficients are not strictly linear in temperature even in simple metals \cite{Macdonald}, the non-linearity in Sr$_2$RuO$_4$ is unusual. As mentioned above, we have shown suggestive evidence that both FM and AFM fluctuations drive the enhanced Seebeck coefficient in the vicinity of the parent compound. It is therefore reasonable to consider that, even in the parent compound Sr$_2$RuO$_4$, $S/T$ is similarly enhanced down to zero temperature, at which both FM and AFM glassy states tend to terminate for Sr$_2$RuO$_4$ \cite{Ortmann2013}. Thus this is a kind of non-Fermi-liquid (NFL) behavior close to the quantum critical point \cite{Paul2001}, which has also been observed in various correlated materials such as heavy fermions \cite{Izawa2007,Hartmann2010,Malone2012,Pfau2012,Shimizu2015} and oxides \cite{Limelette2010}. Quite intriguingly, in contrast to the aforementioned systems, both FM and AFM fluctuations seem to be substantial for the itinerant electrons in Sr$_2$RuO$_4$, as seen in the pronounced peak of $S/T$ observed in both Co- and Mn-substituted compounds. One may, however, pose a simple question regarding the specific heat. In Sr$_2$RuO$_4$, the electronic specific heat shows conventional Fermi-liquid (FL) behavior \cite{Maeno1997}, in contrast to the NFL behavior observed in the Seebeck coefficient; since the Seebeck coefficient can also be regarded as the specific heat per carrier \cite{Behnia2004}, both quantities are expected to show similar anomalies \cite{Kim2010}. On the other hand, an AFM quantum criticality may affect the ratio of these quantities in the zero-temperature limit \cite{Miyake2005}.
It is interesting to note that this effect of the AFM quantum criticality may be enhanced by the multiband nature; in Sr$_2$RuO$_4$, the Fermi surfaces are composed of three cylindrical sheets: a hole-like $\alpha$ sheet and electron-like $\beta$ and $\gamma$ sheets, with effective masses ordered as $m_{\alpha}<m_{\beta}<m_{\gamma}$ \cite{Oguchi1995,Singh1995,Mackenzie1996,Hill2000}. Since lighter bands generally contribute more significantly to the transport \cite{Kasahara2007,Yano2008}, the lighter $\alpha$ and $\beta$ bands are essential here, and importantly, these $\alpha$ and $\beta$ sheets possess the AFM instability due to nesting \cite{Sidis1999,Braden2002,Steffens2019,Jenni2021}. This band-dependent magnetic fluctuation may give rise to the anomalous low-temperature increase in $S/T$. On the other hand, it is unclear at present how the thermoelectric transport is affected by the FM fluctuation, the importance of which is clearly demonstrated in the Co-substituted systems; the existence of FM fluctuation is indicated by NMR \cite{Imai1998}, while the neutron experiments have revealed that it is very weak \cite{Sidis1999,Braden2002,Steffens2019,Jenni2021}. It is also known that the electrical resistivity of Sr$_2$RuO$_4$ is well described within the FL scheme \cite{Maeno1997}, in which the resistivity is given as $\rho(T) = \rho_0+AT^\nu$ with the exponent $\nu=2$. On the other hand, the determination of $\nu$ is delicate \cite{Kikugawa2002}, and as seen in other correlated materials, $\nu$ apparently depends on the residual resistivity even at a high purity level \cite{Tateiwa2012}. It may be useful to examine the exponent in high-purity crystals \cite{Bobowski2019}. We also mention that the thermal conductivity in the normal state is interesting in the sense that it also mirrors the entropy flow, while the earlier studies are mainly devoted to the superconducting state \cite{Tanatar2001,Izawa2001,Suzuki2002}.
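The power-law analysis of the resistivity mentioned above, $\rho(T) = \rho_0+AT^\nu$, can be sketched on synthetic data; all parameter values below are hypothetical choices for illustration, not measurements from this work.

```python
import numpy as np

# Illustrative sketch: recover the exponent nu of the Fermi-liquid form
# rho(T) = rho0 + A*T**nu by a linear fit of log(rho - rho0) versus log(T).
# The "true" parameters and the temperature grid are hypothetical.

rho0, A, nu = 0.1, 5.0e-3, 2.0        # assumed residual resistivity, prefactor, exponent
T = np.linspace(2.0, 20.0, 50)        # temperature grid (K)
rho = rho0 + A * T**nu                # noiseless synthetic resistivity

# On log-log axes the model is linear: log(rho - rho0) = log(A) + nu*log(T).
nu_fit, logA_fit = np.polyfit(np.log(T), np.log(rho - rho0), 1)
A_fit = np.exp(logA_fit)
```

In practice the delicacy noted in the text enters through the uncertainty in $\rho_0$: mis-specifying the residual resistivity biases the fitted exponent, which is one reason the determination of $\nu$ is subtle.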
Also, the Seebeck coefficient in a system with the Fermi level at the vHs, which is not achieved with the present Co substitution, is worth exploring because the topological change in the Fermi surface at the vHs may lead to a significant enhancement of the Seebeck coefficient \cite{Varlamov1989,Ito2019}. This effect of the vHs could be investigated in electron-doped Sr$_{2-x}$La$_x$RuO$_4$. \section{IV. Conclusion} To summarize, we performed Seebeck coefficient measurements on Sr$_2$Ru$_{1-x}M_x$O$_4$ ($M = $ Co, Mn). Although $S/T$ of the parent compound Sr$_2$RuO$_4$ increases with cooling down to the lowest temperature of $\sim3$~K, similar to previous reports, for the Co- and Mn-substituted systems $S/T$ is enhanced near the temperatures $T_{\rm w}$ and $T^\dagger$ identified in the irreversible magnetization. The emergence of the peak structure in $S/T$ can be related to glassy FM and AFM fluctuations in the Co- and Mn-substituted systems, respectively, and therefore the increase of $S/T$ in Sr$_2$RuO$_4$ persisting at least down to $\sim3$~K suggests that both FM and AFM fluctuations are substantial in Sr$_2$RuO$_4$. \begin{acknowledgments} The authors acknowledge Y. Maeno and K. Ishida for fruitful discussions. This work was supported by JSPS KAKENHI Grants No. JP17H06136 and No. JP18K13504. \end{acknowledgments}
\section{Introduction} Optimal transport theory has a long and distinguished history in mathematics dating back to the seminal work of \citet{monge1781memoire} and \citet{kantorovich1942transfer}. While originally envisaged for applications in civil engineering, logistics and economics, optimal transport problems provide a natural framework for comparing probability measures and have therefore recently found numerous applications in statistics and machine learning. Indeed, the minimum cost of transforming a probability measure~$\mu$ on $\mathcal{X}$ to some other probability measure~$\nu$ on $\mathcal{Y}$ with respect to a prescribed cost function on~$\mathcal{X}\times \mathcal{Y}$ can be viewed as a measure of distance between~$\mu$ and $\nu$. If $\mathcal{X}=\mathcal{Y}$ and the cost function coincides with (the $p^{\text{th}}$ power of) a metric on~$\mathcal{X}\times \mathcal{X}$, then the resulting optimal transport distance represents (the $p^{\text{th}}$ power of) a Wasserstein metric on the space of probability measures over~$\mathcal{X}$ \citep{villani}. In the remainder of this paper we distinguish {\em discrete}, {\em semi-discrete} and {\em continuous} optimal transport problems in which either both, only one or none of the two probability measures~$\mu$ and~$\nu$ are discrete, respectively. 
In the wider context of machine learning, {\em discrete} optimal transport problems are nowadays routinely used, for example, in the analysis of mixture models \citep{kolouri2017optimal, nguyen2013convergence} as well as in image processing \citep{alvarez2017structured, ferradans2014regularized, kolouri2015transport, papadakis2017convex, tartavel2016wasserstein}, computer vision and graphics \citep{pele2008linear, pele2009fast, rubner2000earth, solomon2014earth, solomon2015convolutional}, data-driven bioengineering \citep{feydy2017optimal, kundu2018discovery, wang2010optimal}, clustering \citep{ho2017multilevel}, dimensionality reduction \citep{cazelles2018geodesic, flamary2018wasserstein, rolet2016fast, schmitzer2016sparse, seguy2015principal}, domain adaptation \citep{courty2016optimal, murez2018image}, distributionally robust optimization \citep{esfahani2018data, nguyen2020distributionally, NIPS2015_5745, shafieezadeh2019regularization}, scenario reduction \citep{heitsch2007note, rujeerapaiboon2018scenario}, scenario generation \citep{pflug2001scenario, hochreiter2007financial}, the assessment of the fairness properties of machine learning algorithms \citep{gordaliza2019obtaining, taskesen2020distributionally, taskesen2020statistical} and signal processing \citep{thorpe2017transportation}. The discrete optimal transport problem represents a tractable linear program that is susceptible to the network simplex algorithm \citep{orlin1997polynomial}. Alternatively, it can be addressed with dual ascent methods \citep{bertsimas1997introduction}, the Hungarian algorithm for assignment problems \citep{kuhn1955hungarian} or customized auction algorithms \citep{bertsekas1981new, bertsekas1992auction}. The currently best known complexity bound for computing an {\em exact} solution is attained by modern interior-point algorithms. 
Indeed, if $N$ denotes the number of atoms in~$\mu$ or in~$\nu$, whichever is larger, then the discrete optimal transport problem can be solved in time\footnote{We use the soft-O notation $\tilde{\mathcal O}(\cdot)$ to hide polylogarithmic factors.}~$\mathcal{\tilde{O}}(N^{2.5})$ with an interior point algorithm by \citet{lee2014path}. The need to evaluate optimal transport distances between increasingly fine-grained histograms has also motivated efficient approximation schemes. \citet{blanchet2018towards} and \citet{quanrud2018approximating} show that an $\epsilon$-optimal solution can be found in time~$\mathcal{O}(N^2/\epsilon)$ by reducing the discrete optimal transport problem to a matrix scaling or a positive linear programming problem, which can be solved efficiently by a Newton-type algorithm. \citet{jambulapati2019direct} describe a parallelizable primal-dual first-order method that achieves a similar convergence~rate. The tractability of the discrete optimal transport problem can be improved by adding an entropy regularizer to its objective function, which penalizes the entropy of the transportation plan for morphing~$\mu$ into~$\nu$. When the weight of the regularizer grows, this problem reduces to the classical Schr\"odinger bridge problem of finding the most likely random evolution from~$\mu$ to~$\nu$ \citep{schrodinger1931umkehrung}. Generic linear programs with entropic regularizers were first studied by \citet{fang1992unconstrained}. \citet{cominetti1994asymptotic} prove that the optimal values of these regularized problems converge exponentially fast to the optimal values of the corresponding unregularized problems as the regularization weight drops to zero. Non-asymptotic convergence rates for entropy regularized linear programs are derived by \citet{weed2018explicit}. 
\citet{sinkhorn} was the first to realize that entropic penalties are computationally attractive because they make the discrete optimal transport problem susceptible to a fast matrix scaling algorithm by \citet{sinkhorn1967diagonal}. This insight has spurred widespread interest in machine learning and led to a host of new applications of optimal transport in color transfer \citep{chizat2016scaling}, inverse problems \citep{karlsson2017generalized, adler2017learning}, texture synthesis \citep{peyre2017quantum}, the analysis of crowd evolutions \citep{peyre2015entropic} and shape interpolation \citep{solomon2015convolutional} to name a few. This surge of applications inspired in turn several new algorithms for the entropy regularized discrete optimal transport problem such as a greedy dual coordinate descent method also known as the Greenkhorn algorithm \citep{altschuler2017near, chakrabarty2018better, abid2018greedy}. \citet{dvurechensky2018computational} and \citet{lin2019efficient} prove that both the Sinkhorn and the Greenkhorn algorithms are guaranteed to find an $\epsilon$-optimal solution in time $\tilde{\mathcal{O}}({N^2}/{\epsilon^2})$. In practice, however, the Greenkhorn algorithm often outperforms the Sinkhorn algorithm~\citep{lin2019efficient}. The runtime guarantee of both algorithms can be improved to $\tilde{\mathcal{O}}(N^{7/3}/\epsilon)$ via a randomization scheme \citep{lin2019acceleration}. In addition, the regularized discrete optimal transport problem can be addressed by tailoring general-purpose optimization algorithms such as accelerated gradient descent algorithms \citep{dvurechensky2018computational}, iterative Bregman projections \citep{benamou2015iterative}, quasi-Newton methods \citep{blondel2017smooth} or stochastic average gradient descent algorithms \citep{genevay2016stochastic}. 
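To make the matrix scaling idea concrete, the following is a minimal sketch of the Sinkhorn iteration for the entropy-regularized discrete problem; the atom locations, cost matrix and regularization weight are illustrative choices of ours, not prescribed by any of the cited works.

```python
import numpy as np

# Minimal sketch of Sinkhorn matrix scaling for entropy-regularized discrete
# optimal transport: alternately rescale the rows and columns of the Gibbs
# kernel K = exp(-C/eps) until the marginals mu and nu are matched.

def sinkhorn(mu, nu, C, eps=0.5, n_iter=200):
    K = np.exp(-C / eps)                    # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)                  # scale columns toward nu
        u = mu / (K @ v)                    # scale rows toward mu
    return u[:, None] * K * v[None, :]      # transportation plan

rng = np.random.default_rng(0)
x, y = rng.random(5), rng.random(6)         # illustrative atom locations on the line
C = (x[:, None] - y[None, :]) ** 2          # quadratic transportation cost
P = sinkhorn(np.full(5, 1 / 5), np.full(6, 1 / 6), C)
```

Every entry of the returned plan is strictly positive, reflecting the complete density induced by the entropic penalty.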
While the original optimal transport problem induces sparse solutions, the entropy penalty forces the optimal transportation plan of the regularized optimal transport problem to be strictly positive and thus completely dense. In applications where the interpretability of the optimal transportation plan is important, the lack of sparsity could be undesirable; examples include color transfer \citep{pitie2007automated}, domain adaptation \citep{courty2016optimal} or ecological inference \citep{muzellec2017tsallis}. Hence, there is merit in exploring alternative regularization schemes that retain the attractive computational properties of the entropic regularizer but induce sparsity. Examples that have attracted significant interest include smooth convex regularization and Tikhonov regularization \citep{dessein2018regularized, blondel2017smooth, seguy2017large, essid2018quadratically}, Lasso regularization \citep{li2016fast}, Tsallis entropy regularization \citep{muzellec2017tsallis} or group Lasso regularization \citep{courty2016optimal}. Much like the discrete optimal transport problems, the significantly more challenging \emph{semi-discrete} optimal transport problems emerge in numerous applications including variational inference \citep{ambrogioni2018wasserstein}, blue noise sampling \citep{qin2017wasserstein}, computational geometry \citep{levy2015numerical}, image quantization \citep{de2012blue} or deep learning with generative adversarial networks \citep{arjovsky2017wasserstein, genevay2017learning, gulrajani2017improved}. Semi-discrete optimal transport problems are also used in fluid mechanics to simulate incompressible fluids \citep{de2015power}. Exact solutions of a semi-discrete optimal transport problem can be constructed by solving an incompressible Euler-type partial differential equation discovered by \citet{brenier1991polar}. 
Any optimal solution is known to partition the support of the non-discrete measure into cells corresponding to the atoms of the discrete measure \citep{aurenhammer1998minkowski}, and the resulting tessellation is usually referred to as a power diagram. \citet{ref:mirebeau2015discretization} uses this insight to solve Monge-Amp{\`e}re equations with a damped Newton algorithm, and \citet{kitagawa2016convergence} show that a closely related algorithm with a global linear convergence rate lends itself to the numerical solution of generic semi-discrete optimal transport problems. In addition, \citet{merigot2011multiscale} proposes a quasi-Newton algorithm for semi-discrete optimal transport, which improves a method due to~\citet{aurenhammer1998minkowski} by exploiting Lloyd's algorithm to iteratively simplify the discrete measure. If the transportation cost is quadratic, \citet{bonnotte2013knothe} relates the optimal transportation plan to the Knothe-Rosenblatt rearrangement for mapping~$\mu$ to~$\nu$, which is very easy to compute. As usual, regularization improves tractability. \citet{genevay2016stochastic} show that the dual of a semi-discrete optimal transport problem with an entropic regularizer is susceptible to an averaged stochastic gradient descent algorithm that enjoys a convergence rate of~$\mathcal O(1/\sqrt{T})$, $T$ being the number of iterations. {\color{black} \citet{ref:altschuler2022asymptotics} show that the optimal value of the entropically regularized problem converges to the optimal value of the unregularized problem at a quadratic rate as the regularization weight drops to zero. Improved error bounds under stronger regularity conditions are derived by~\citet{ref:delalande2021nearly}.} {\em Continuous} optimal transport problems constitute difficult variational problems involving infinitely many variables and constraints.
\citet{benamou2000computational} recast them as boundary value problems in fluid dynamics, and \citet{papadakis2014optimal} solve discretized versions of these reformulations using first-order methods. For a comprehensive survey of the interplay between partial differential equations and optimal transport we refer to \citep{evans1997partial}. As nearly all numerical methods for partial differential equations suffer from a curse of dimensionality, current research focuses on solution schemes for {\em regularized} continuous optimal transport problems. For instance, \citet{genevay2016stochastic} embed their duals into a reproducing kernel Hilbert space to obtain finite-dimensional optimization problems that can be solved with a stochastic gradient descent algorithm. \citet{seguy2017large} solve regularized continuous optimal transport problems by representing the transportation plan as a multilayer neural network. This approach results in finite-dimensional optimization problems that are non-convex and offer no approximation guarantees. However, it provides an effective means to compute approximate solutions in high dimensions. {\color{black} Indeed, the optimal value of the entropically regularized continuous optimal transport problem is known to converge to the optimal value of the unregularized problem at a linear rate as the regularization weight drops to zero \citep{ref:chizat2020faster, ref:conforti2021formula, ref:erbar2015large,ref:pal2019difference}.} Due to a lack of efficient algorithms, applications of continuous optimal transport problems are scarce in the extant literature. \citet{peyre2019computational} provide a comprehensive survey of numerous applications and solution methods for discrete, semi-discrete and continuous optimal transport problems. This paper focuses on semi-discrete optimal transport problems. 
Our main goal is to formally establish that these problems are computationally hard, to propose a unifying regularization scheme for improving their tractability and to develop efficient algorithms for solving the resulting regularized problems, {\color{black} assuming only that we have access to independent samples from the continuous probability measure~$\mu$}. Our regularization scheme is based on the observation that any {\em dual} semi-discrete optimal transport problem maximizes the expectation of a piecewise affine function with $N$ pieces, where the expectation is evaluated with respect to~$\mu$, and where $N$ denotes the number of atoms of the discrete probability measure~$\nu$. We argue that this piecewise affine function can be interpreted as the optimal value of a {\em discrete choice problem}, which can be smoothed by adding random disturbances to the underlying utility values \citep{thurstone1927law, mcfadden1973conditional}. As probabilistic discrete choice problems are routinely studied in economics and psychology, we can draw on a wealth of literature in choice theory to design various smooth (dual) optimal transport problems with favorable numerical properties. For maximal generality we will also study {\em semi-parametric} discrete choice models where the disturbance distribution is itself subject to uncertainty \citep{natarajan2009persistency, mishra2014theoretical, feng2017relation, ahipasaoglu2018convex}. Specifically, we aim to evaluate the best-case (maximum) expected utility across a Fr\'echet ambiguity set containing all disturbance distributions with prescribed marginals. Such models can be addressed with customized methods from modern distributionally robust optimization \citep{natarajan2009persistency}. For Fr\'echet ambiguity sets, we prove that smoothing the dual objective is equivalent to regularizing the primal objective of the semi-discrete optimal transport problem. 
The corresponding regularizer penalizes the discrepancy between the chosen transportation plan and the product measure~$\mu\otimes \nu$ with respect to a divergence measure constructed from the marginal disturbance distributions. Connections between primal regularization and dual smoothing were previously recognized by \citet{blondel2017smooth} and \citet{paty2020regularized} in discrete optimal transport and by \citet{genevay2016stochastic} in semi-discrete optimal transport. As they are constructed ad hoc or under a specific adversarial noise model, these existing regularization schemes lack the intuitive interpretation offered by discrete choice theory and emerge as special cases of our unifying scheme. The key contributions of this paper are summarized below. \begin{enumerate}[label=\roman*.] \item We study the computational complexity of semi-discrete optimal transport problems. Specifically, we prove that computing the optimal transport distance between two probability measures~$\mu$ and~$\nu$ on the same Euclidean space is $\#$P-hard {\color{black} even if only approximate solutions are sought and} even if~$\mu$ is the Lebesgue measure on the standard hypercube and~$\nu$ is supported on merely two points. \item We propose a unifying framework for regularizing semi-discrete optimal transport problems by leveraging ideas from distributionally robust optimization and discrete choice theory \citep{natarajan2009persistency, mishra2014theoretical, feng2017relation, ahipasaoglu2018convex}. Specifically, we perturb the transportation cost to every atom of the discrete measure~$\nu$ with a random disturbance, and we assume that the vector of all disturbances is governed by an uncertain probability distribution from within a Fr\'echet ambiguity set that prescribes the marginal disturbance distributions. 
Solving the dual optimal transport problem under the least favorable disturbance distribution in the ambiguity set amounts to smoothing the dual and regularizing the primal objective function. We show that numerous known and new regularization schemes emerge as special cases of this framework, and we derive a priori approximation bounds for the resulting regularized optimal transport problems. \item We derive new convergence guarantees for an averaged stochastic gradient descent (SGD) algorithm that has only access to a {\em biased} stochastic gradient oracle. Specifically, we prove that this algorithm enjoys a convergence rate of $\mathcal O(1/\sqrt{T})$ for Lipschitz continuous and of~$\mathcal O(1/T)$ for generalized self-concordant objective functions. We also show that this algorithm lends itself to solving the smooth dual optimal transport problems obtained from the proposed regularization scheme. When the smoothing is based on a semi-parametric discrete choice model with a Fr\'echet ambiguity set, the algorithm's convergence rate depends on the smoothness properties of the marginal noise distributions, and its per-iteration complexity depends on our ability to compute the optimal choice probabilities. We demonstrate that these choice probabilities can indeed be computed efficiently via bisection or sorting, and in special cases they are even available in closed form. As a byproduct, we show that our algorithm can improve the state-of-the-art $\mathcal O(1/\sqrt{T})$ convergence guarantee of~\citet{genevay2016stochastic} for the semi-discrete optimal transport problem with an {\em entropic} regularizer. \end{enumerate} The rest of this paper unfolds as follows. In Section~\ref{sec:complexity} we study the computational complexity of semi-discrete optimal transport problems, and in Section~\ref{sec:smooth_ot} we develop our unifying regularization scheme. 
In Section~\ref{sec:computation} we analyze the convergence rate of an averaged SGD algorithm with a biased stochastic gradient oracle that can be used for solving smooth dual optimal transport problems, and in Section~\ref{sec:numerical} we compare its empirical convergence behavior against the theoretical convergence guarantees. \paragraph{Notation.} We denote by $\|\cdot\|$ the 2-norm, by $[N] = \{1, \ldots, N \}$ the set of all integers up to $N\in\mathbb N$ and by $\Delta^d = \{\boldsymbol x \in \mathbb R_+^d : \sum_{i = 1}^d x_i =1\}$ the probability simplex in $\mathbb R^d$. For a logical statement $\mathcal E$ we define $\mathds{1}_{\mathcal E} = 1$ if $\mathcal E$ is true and $\mathds{1}_{\mathcal E} = 0$ if $\mathcal E$ is false. For any closed set $\mathcal{X}\subseteq \mathbb{R}^d$ we define $\mathcal{M}(\mathcal{X})$ as the family of all Borel measures and $\mathcal{P}(\mathcal{X})$ as its subset of all Borel probability measures on~$\mathcal{X}$. For $\mu\in\mathcal{P}(\mathcal{X})$, we denote by $\mathbb{E}_{\boldsymbol x \sim \mu}[\cdot]$ the expectation operator under $\mu$ and define $\mathcal{L}(\mathcal{X}, \mu)$ as the family of all $\mu$-integrable functions $f:\mathcal{X}\rightarrow\mathbb{R}$, that is, $f \in \mathcal{L}(\mathcal{X}, \mu)$ if and only if $\int_{\mathcal{X}} |f(\boldsymbol x)| \mu(\mathrm{d} \boldsymbol x)<\infty$. The Lipschitz modulus of a function $f: \mathbb{R}^d \to \mathbb{R}$ is defined as $\lip(f) = \sup_{\boldsymbol x, \boldsymbol{x}'}\{|f(\boldsymbol x) - f(\boldsymbol{x}')|/\|\boldsymbol x - \boldsymbol{x}'\|: \boldsymbol x \neq \boldsymbol{x}'\}$. The convex conjugate of $f: \mathbb{R}^d \to [-\infty,+\infty]$ is the function $f^*:\mathbb{R}^d\rightarrow [-\infty,+\infty]$ defined through $f^{*}(\boldsymbol y) = \sup_{\boldsymbol x \in \mathbb{R}^d}\boldsymbol y^\top \boldsymbol x - f(\boldsymbol x)$. 
\section{Hardness of Computing Optimal Transport Distances} \label{sec:complexity} If $\mathcal{X}$ and $\mathcal{Y}$ are closed subsets of finite-dimensional Euclidean spaces and $c: \mathcal{X} \times \mathcal{Y} \to [0,+\infty]$ is a lower-semicontinuous cost function, then the Monge-Kantorovich {\em optimal transport distance} between two probability measures $\mu\in\mathcal P(\mathcal{X})$ and $\nu\in\mathcal P(\mathcal{Y})$ is defined as \begin{equation} W_c(\mu, \nu) = \min\limits_{\pi \in \Pi(\mu,\nu)} ~ \mathbb{E}_{(\boldsymbol x, \boldsymbol y) \sim \pi}\left[{c(\boldsymbol x, \boldsymbol y)}\right], \label{eq:primal} \end{equation} where $\Pi(\mu,\nu)$ denotes the family of all {\em couplings} of $\mu$ and $\nu$, that is, the set of all probability measures on $\mathcal{X} \times \mathcal{Y}$ with marginals $\mu$ on $\mathcal{X}$ and $\nu$ on $\mathcal{Y}$. One can show that the minimum in~\eqref{eq:primal} is always attained \citep[Theorem~4.1]{villani}. If $\mathcal{X}=\mathcal{Y}$ is a metric space with metric $d:\mathcal{X}\times \mathcal{X}\rightarrow \mathbb{R}_+$ and the transportation cost is defined as $c(\boldsymbol x, \boldsymbol y)=d^p(\boldsymbol x,\boldsymbol y)$ for some $p \geq 1$, then $W_c(\mu, \nu)^{1/p}$ is termed the $p$-th Wasserstein distance between $\mu$ and $\nu$. The optimal transport problem~\eqref{eq:primal} constitutes an infinite-dimensional linear program over measures and admits a strong dual linear program over functions \citep[Theorem~5.9]{villani}. 
\begin{proposition}[Kantorovich duality] \label{prop:kantorovich} The optimal transport distance between $\mu \in \mathcal{P}(\mathcal{X})$ and $\nu \in \mathcal{P}(\mathcal{Y})$ admits the dual representation \begin{equation} \label{eq:dual} W_c(\mu, \nu) =\left\{ \begin{array}{c@{\quad}l@{\qquad}l} \sup & \displaystyle \mathbb{E}_{\boldsymbol y \sim \nu}\left[ {\phi(\boldsymbol y)}\right] - \mathbb{E}_{\boldsymbol x \sim \mu}\left[{\psi(\boldsymbol x)}\right] & \\ [0.5em] \mathrm{s.t.} & \psi \in \mathcal{L}(\x, \mu),~ \phi \in \mathcal{L}(\y, \nu) & \\ [0.5em] & \phi(\boldsymbol y) - \psi(\boldsymbol x) \leq c(\boldsymbol x, \boldsymbol y) \quad \forall \boldsymbol x \in \mathcal{X},~ \boldsymbol y \in \mathcal{Y}. \end{array}\right. \end{equation} \end{proposition} The linear program~\eqref{eq:dual} optimizes over the two {\em Kantorovich potentials} $\psi \in \mathcal{L}(\x, \mu)$ and $\phi \in \mathcal{L}(\y, \nu)$, but it can be reformulated as the following non-linear program over a single potential function, \begin{equation} \label{eq:ctans_dual} W_c(\mu, \nu) =\sup_{\phi \in \mathcal{L}(\mathcal{Y}, \nu)} ~ \displaystyle \mathbb{E}_{\boldsymbol y \sim \nu}\left[\phi(\boldsymbol y)\right] - \mathbb{E}_{\boldsymbol x \sim \mu}\left[ \phi_c(\boldsymbol x) \right], \end{equation} where $\phi_c:\mathcal{X}\rightarrow [-\infty,+\infty]$ is called the \textit{$c$-transform} of $\phi$ and is defined through \begin{equation} \label{eq:$c$-transform} \phi_c(\boldsymbol x) = \sup_{\boldsymbol y \in \mathcal{Y}} ~ \phi(\boldsymbol y) - c(\boldsymbol x, \boldsymbol y) \qquad \forall \boldsymbol x \in \mathcal{X}, \end{equation} see \citet[\S~5]{villani} for details. The Kantorovich duality is the key enabling mechanism to study the computational complexity of the optimal transport problem~\eqref{eq:primal}. 
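The single-potential formulation~\eqref{eq:ctans_dual} can be probed numerically. In the hedged sketch below (our own toy instance), $\mu$ is uniform on $[0,1]$, $\nu=(\delta_0+\delta_1)/2$ and $c(x,y)=|x-y|^2$, so the $c$-transform of a potential supported on the two atoms is a finite maximum; the dual objective evaluated at any $\phi$ is a lower bound on $W_c(\mu,\nu)$, and $\phi=\boldsymbol 0$ happens to be optimal for this instance, with $W_c=1/12$:

```python
import random

# Hedged sketch (ours): Monte Carlo evaluation of the dual objective
# E_nu[phi] - E_mu[phi_c] for a two-atom nu and the cost |x - y|**2.
def c_transform(phi, x, ys, p=2.0):
    # for a potential supported on finitely many atoms, the c-transform
    # reduces to a finite maximum
    return max(f - abs(x - y) ** p for f, y in zip(phi, ys))

ys, nu_w, phi = [0.0, 1.0], [0.5, 0.5], [0.0, 0.0]
random.seed(0)
xs = [random.random() for _ in range(200000)]   # samples from mu
dual_val = sum(w * f for w, f in zip(nu_w, phi)) \
    - sum(c_transform(phi, x, ys) for x in xs) / len(xs)
# dual_val approximates 1/12, the exact optimal transport distance here
```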
\begin{theorem}[Hardness of computing optimal transport distances] \label{theorem:hard} Computing $W_c(\mu, \nu)$ is \#P-hard even if $\mathcal{X}=\mathcal{Y}=\mathbb{R}^d$, $c(\boldsymbol x, \boldsymbol y) = \|\boldsymbol x-\boldsymbol y\|^{p}$ for some $p\geq 1$, $\mu$ is the Lebesgue measure on the standard hypercube~$[0,1]^d$, and $\nu$ is a discrete probability measure supported on only two points. \end{theorem} To prove Theorem~\ref{theorem:hard}, we will show that computing the optimal transport distance $W_c(\mu, \nu)$ is at least as hard as computing the volume of the knapsack polytope $P( \boldsymbol{w}, b) = \{\boldsymbol x\in [0,1]^d : \boldsymbol{w}^\top \boldsymbol x\leq b\}$ for a given $\boldsymbol{w}\in \mathbb{R}^d_+$ and $ b \in \mathbb{R}_+$, which is known to be $\#$P-hard \citep[Theorem~1]{dyer1988complexity}. Specifically, we will leverage the following variant of this hardness result, which establishes that approximating the volume of the knapsack polytope $P( \boldsymbol{w}, b)$ to a sufficiently high accuracy is already $\#$P-hard. \begin{lemma}[{\citet[Lemma~1]{Grani}}] \label{lemma:Grani} Computing the volume of the knapsack polytope $P( \boldsymbol{w}, b)$ for a given $\boldsymbol{w}\in \mathbb{R}^d_+$ and $ b \in \mathbb{R}_+$ to within an absolute accuracy of $\delta>0$ is $\#$P-hard whenever \begin{equation} \label{eq:Granis-delta} \delta <\frac{1}{ {2d!(\|\boldsymbol{w}\|_1+2)^d(d+1)^{d+1}\prod_{i = 1}^{d}w_i}}. \end{equation} \end{lemma} Fix now any knapsack polytope $P( \boldsymbol{w}, b)$ encoded by $\boldsymbol{w}\in \mathbb{R}_+^d$ {\color{black}and $ b \in \mathbb{R}_+$. Without loss of generality, we may assume that~$\bm w \neq \bm 0$ and~$b > 0$. Indeed, we are allowed to exclude~$\bm w = \bm 0 $ because the volume of~$P(\bm 0, b) $ is trivially equal to 1. 
On the other hand, $b= 0$ can be excluded by applying a suitable rotation and translation, which are volume-preserving transformations.} In the remainder, we denote by $\mu$ the Lebesgue measure on the standard hypercube $[0,1]^d$ and by ${\nu}_ t = t \delta_{\boldsymbol y_1} + (1-t) \delta_{\boldsymbol y_2}$ a family of discrete probability measures with two atoms at $\boldsymbol y_1=\boldsymbol 0$ and $\boldsymbol y_2=2b\boldsymbol{w}/ \|\boldsymbol w\|^2$, respectively, whose probabilities are parameterized by $t \in [0, 1]$. The following preparatory lemma relates the volume of $P( \boldsymbol{{w}},b)$ to the optimal transport problem~\eqref{eq:primal} and is thus instrumental for the proof of Theorem~\ref{theorem:hard}. \begin{lemma} \label{lem:vol} If $c(\boldsymbol x, \boldsymbol y)=\|\boldsymbol x- \boldsymbol y \|^p$ for some $p\ge 1$, then we have $\mathop{\rm Vol}(P( \boldsymbol{{w}},b)) = \argmin_{ t \in [0,1]} W_c(\mu, {\nu}_ t )$. \end{lemma} \begin{proof} By the definition of the optimal transport distance in~\eqref{eq:primal} and our choice of~$c(\boldsymbol x, \boldsymbol y)$, we have \begin{align*} \underset{ t \in [0,1]}{\min}W_c(\mu, {\nu}_ t )&= \underset{ t \in [0,1]}{\min} ~ \min\limits_{\pi \in \Pi(\mu,\nu_t)} ~ \mathbb{E}_{(\boldsymbol x, \boldsymbol y)\sim\pi}\left[\|\boldsymbol x- \boldsymbol y \|^p \right] \\[0.5ex] &=\min\limits_{ t \in [0,1]}~ \left\{\begin{array}{cl} \min\limits_{q_1, q_2 \in \mathcal{P}(\mathbb{R}^d)}& t \displaystyle\int_{\mathbb{R}^d} \|\boldsymbol x-\boldsymbol y_1\|^p q_1(\mathrm{d} \boldsymbol x) + (1-t) \displaystyle\int_{\mathbb{R}^d}\left \|\boldsymbol x-\boldsymbol y_2 \right\|^p q_2(\mathrm{d} \boldsymbol x)\\ [3ex] \textrm{s.t.}& t \cdot q_1 + (1-t) \cdot q_2 = \mu, \end{array}\right. 
\end{align*} where the second equality holds because any coupling $\pi$ of $\mu$ and $\nu_t$ can be constructed from the marginal probability measure $\nu_t$ of $\boldsymbol y$ and the probability measures $q_1$ and $q_2$ of $\boldsymbol x$ conditional on $\boldsymbol y =\boldsymbol y_1$ and $\boldsymbol y = \boldsymbol y_2$, respectively, that is, we may write $\pi= t\cdot q_1\otimes \delta_{\boldsymbol y_1} + (1-t)\cdot q_2\otimes \delta_{\boldsymbol y_2}$. The constraint of the inner minimization problem ensures that the marginal probability measure of $\boldsymbol x$ under $\pi$ coincides with $\mu$. By applying the variable transformations $q_1\leftarrow t \cdot q_1 $ and $q_2 \leftarrow (1-t)\cdot q_2$ to eliminate all bilinear terms, we then obtain \begin{equation*} \underset{ t \in [0,1]}{\min}W_c(\mu, {\nu}_ t )=\left\{\begin{array}{cll} \underset{\substack{ t \in [0,1] \\ q_1, q_2 \in \mathcal{M}(\mathbb{R}^d)}}{\min} &\displaystyle\int_{\mathbb{R}^d} \|\boldsymbol x -\boldsymbol y_1\|^p q_1(\mathrm{d} \boldsymbol x) + \displaystyle\int_{\mathbb{R}^d} \left\|\boldsymbol x-\boldsymbol y_2 \right\|^p q_2(\mathrm{d} \boldsymbol x)\\[3ex] \textrm{s.t.} &\displaystyle\int_{\mathbb{R}^d} q_1(\mathrm{d} \boldsymbol x) = t \\[3ex] &\displaystyle\int_{\mathbb{R}^d} q_2(\mathrm{d} \boldsymbol x) = 1- t \\[3ex] & q_1 + q_2 = \mu. \end{array}\right. \end{equation*} Observe next that the decision variable $t$ and the two normalization constraints can be eliminated without affecting the optimal value of the resulting infinite-dimensional linear program because the Borel measures $q_1$ and $q_2$ are non-negative and because the constraint $q_1+q_2=\mu$ implies that $q_1(\mathbb{R}^d)+q_2(\mathbb{R}^d)=\mu(\mathbb{R}^d)=1$. Thus, there always exists $t\in[0,1]$ such that $q_1(\mathbb{R}^d)=t$ and $q_2(\mathbb{R}^d)=1-t$. 
This reasoning implies that \begin{equation*} \underset{ t \in [0,1]}{\min}W_c(\mu, {\nu}_ t )=\left\{\begin{array}{ccll} &\min\limits_{q_1,q_2\in \mathcal{M}(\mathbb{R}^d)}\; & \displaystyle\int_{\mathbb{R}^d} \|\boldsymbol x -\boldsymbol y_1\|^p q_1(\mathrm{d} \boldsymbol x) + \displaystyle\int_{\mathbb{R}^d}\left \|\boldsymbol x-\boldsymbol y_2 \right\|^p q_2(\mathrm{d} \boldsymbol x) \\[3ex] & \textrm{s.t.} & q_1 + q_2= \mu. \end{array}\right. \end{equation*} The constraint $q_1+q_2=\mu$ also implies that $q_1$ and $q_2$ are absolutely continuous with respect to $\mu$, and~thus \begin{align} \underset{ t \in [0,1]}{\min}W_c(\mu, {\nu}_ t )& =\left\{\begin{array}{ccll} &\min\limits_{q_1,q_2\in \mathcal{M}(\mathbb{R}^d)}\; & \displaystyle\int_{\mathbb{R}^d} \|\boldsymbol x -\boldsymbol y_1\|^p \frac{\mathrm{d} q_1}{\mathrm{d} \mu}(\boldsymbol x) +\left \|\boldsymbol x-\boldsymbol y_2 \right\|^p \, \frac{\mathrm{d} q_2}{\mathrm{d} \mu}(\boldsymbol x)\, \mu(\mathrm{d} \boldsymbol x) \\[3ex] & \textrm{s.t.} & \displaystyle \frac{\mathrm{d} q_1}{\mathrm{d} \mu}(\boldsymbol x) + \frac{\mathrm{d} q_2}{\mathrm{d} \mu}(\boldsymbol x)= 1 \quad \forall \boldsymbol x\in [0,1]^d \end{array}\right. \nonumber\\ &= \int_{\mathbb{R}^d} \min\left\{\|\boldsymbol x -\boldsymbol y_1 \|^p,\left\|\boldsymbol x - \boldsymbol y_2 \right\|^p \right\}\,\mu(\mathrm{d}\boldsymbol x), \label{eq:min-Wc} \end{align} where the second equality holds because at optimality the Radon-Nikodym derivatives must satisfy \[ \frac{\mathrm{d} q_i}{\mathrm{d} \mu}(\boldsymbol x)=\left\{\begin{array}{cl} 1 & \text{if } \|\boldsymbol x-\boldsymbol y_i\|^p \le \|\boldsymbol x-\boldsymbol y_{3-i}\|^p \\ 0 & \text{otherwise} \end{array} \right. \] for $\mu$-almost every $\boldsymbol x\in \mathbb{R}^d$ and for every $i=1,2$. In the second part of the proof we will demonstrate that the minimization problem $\min_{t\in[0,1]} W_c(\mu, \nu_ t )$ is solved by $t^\star=\textrm{Vol}(P(\boldsymbol{w}, b))$. 
By Proposition~\ref{prop:kantorovich} and the definition of the $c$-transform, we first note that \begin{align} W_c(\mu, \nu_ {t^\star} ) &=\underset{\phi \in \mathcal{L}(\mathbb{R}^d, \nu_{t^\star})}{\max} ~ \mathbb{E}_{\boldsymbol y\sim \nu_{t^\star}}[\phi(\boldsymbol y)] - \mathbb{E}_{\boldsymbol x\sim\mu}[\phi_c(\boldsymbol x)] \nonumber \\ \label{eq:min_wass} &= \underset{\boldsymbol \phi \in \mathbb{R}^2}{\max} ~ t^\star \cdot \phi_1 + (1- t^\star ) \cdot\phi_2- \int_{\mathbb{R}^d}\max_{i=1,2}\left\{\phi_i- \|\boldsymbol x-\boldsymbol y_i \|^p\right\}\mu(\mathrm{d} \boldsymbol x)\\ &= \max\limits_{\boldsymbol \phi \in \mathbb{R}^2} ~ t^\star \cdot \phi_1 + (1-t^\star)\cdot \phi_2- \sum\limits_{i = 1}^2 \int_{\mathcal{X}_i(\boldsymbol \phi)}(\phi_i - \|\boldsymbol x - \boldsymbol {y_i}\|^p)\,\mu(\mathrm{d} \boldsymbol x), \nonumber \end{align} where \[ \mathcal{X}_i(\boldsymbol \phi) = \{\boldsymbol x\in\mathbb{R}^d: \phi_i - \|\boldsymbol x-\boldsymbol y_i \|^p \geq \phi_{3-i} - \left \|\boldsymbol x - \boldsymbol y_{3-i} \right\|^p\}\quad \forall i=1,2. \] The second equality in~\eqref{eq:min_wass} follows from the construction of $\nu_{t^\star}$ as a probability measure with only two atoms at the points $\boldsymbol y_i$ for $i=1,2$. Indeed, by fixing the corresponding function values $\phi_i=\phi(\boldsymbol y_i)$ for $i=1,2$, the expectation $\mathbb{E}_{\boldsymbol y \sim \nu_{t^\star}}[\phi(\boldsymbol y)]$ simplifies to $t^\star \cdot \phi_1 + (1-t^\star)\cdot \phi_2$, while the negative expectation $-\mathbb{E}_{\boldsymbol x \sim \mu}[\phi_c(\boldsymbol x)]$ is maximized by setting $\phi(\bm y)$ to a large negative constant for all $\boldsymbol y\notin\{\boldsymbol y_1,\boldsymbol y_2\}$, which implies~that \[ \phi_c(\boldsymbol x) = \sup_{\boldsymbol y \in \mathbb{R}^d} \phi(\boldsymbol y) - \|\boldsymbol x - \boldsymbol y\|^p = \max_{i=1,2}\left\{\phi_i- \|\boldsymbol x-\boldsymbol y_i \|^p\right\} \quad \forall \boldsymbol x\in [0,1]^d. 
\] Next, we will prove that any $\boldsymbol \phi^\star\in\mathbb{R}^2$ with $\phi^\star_1=\phi^\star_2$ attains the maximum of the unconstrained convex optimization problem on the last line of~\eqref{eq:min_wass}. To see this, note that \[ \nabla_{\boldsymbol \phi} \left[\sum\limits_{i = 1}^2 \int_{\mathcal{X}_i(\boldsymbol \phi)}(\phi_i - \|\boldsymbol x - \boldsymbol {y}_i\|^p)\,\mu(\mathrm{d} \boldsymbol x)\right] = \sum\limits_{i = 1}^2 \int_{\mathcal{X}_i(\boldsymbol \phi)} \nabla_{\boldsymbol \phi}(\phi_i - \|\boldsymbol x - \boldsymbol {y}_i\|^p)\,\mu(\mathrm{d} \boldsymbol x) =\begin{bmatrix} \mu(\mathcal{X}_1(\boldsymbol \phi))\\ \mu(\mathcal{X}_2(\boldsymbol \phi)) \end{bmatrix} \] by virtue of the Reynolds theorem. Thus, the first-order optimality condition\footnote{Note that the first-order condition $1-t^\star=\mu(\mathcal{X}_2(\boldsymbol \phi))$ for $\phi_2$ is redundant in view of the first-order condition $t^\star=\mu(\mathcal{X}_1(\boldsymbol \phi))$ for $\phi_1$ because $\mu$ is the Lebesgue measure on $[0,1]^d$, whereby $\mu(\mathcal{X}_1(\boldsymbol \phi)\cup\mathcal{X}_2(\boldsymbol \phi))=\mu(\mathcal{X}_1(\boldsymbol \phi))+\mu(\mathcal{X}_2(\boldsymbol \phi))=1$.} $t^\star=\mu(\mathcal{X}_1(\boldsymbol \phi))$ is necessary and sufficient for global optimality. 
Fix now any $\boldsymbol \phi^\star\in\mathbb{R}^2$ with $\phi^\star_1=\phi^\star_2$ and observe that \begin{align*} t^\star=\textrm{Vol}(P(\boldsymbol{w}, b)) =& \mu\left(\left\{\boldsymbol x\in\mathbb{R}^d: \boldsymbol w^\top\boldsymbol x\leq b \right\}\right)\\ =&\mu\left( \left\{\boldsymbol x\in\mathbb{R}^d: \|\boldsymbol x \|^2\leq \|\boldsymbol x-2b \boldsymbol w/\|\boldsymbol w\|^2\|^2 \right\}\right)\\ =& \mu\left(\left\{\boldsymbol x\in\mathbb{R}^d: \|\boldsymbol x -\boldsymbol y_1\|^p\leq \|\boldsymbol x-\boldsymbol y_2\|^p \right\}\right)=\mu(\mathcal{X}_1(\boldsymbol \phi^\star)), \end{align*} where the first and second equalities follow from the definitions of $t^\star$ and the knapsack polytope $P(\boldsymbol{w}, b)$, respectively, the third equality follows from expanding the squared norm on the right-hand side and using $b>0$, the fourth equality holds because $\boldsymbol y_1=\boldsymbol 0$ and $\boldsymbol y_2=2b\boldsymbol w/\|\boldsymbol w\|^2$ and because the map $t\mapsto t^{p/2}$ is increasing on~$\mathbb{R}_+$, and the fifth equality follows from the definition of $\mathcal{X}_1(\boldsymbol \phi^\star)$ and our assumption that $\phi^\star_1=\phi^\star_2$. This reasoning implies that $\boldsymbol \phi^\star$ indeed attains the maximum of the optimization problem on the last line of~\eqref{eq:min_wass}. Hence, we find \begin{align*} W_c(\mu, \nu_ {t^\star} ) &= t^\star \cdot \phi^\star_1 + (1-t^\star)\cdot \phi^\star_2- \sum\limits_{i = 1}^2 \int_{\mathcal{X}_i(\boldsymbol \phi^\star)}(\phi^\star_i - \|\boldsymbol x - \boldsymbol {y_i}\|^p)\,\mu(\mathrm{d} \boldsymbol x)\\ &= \sum\limits_{i = 1}^2 \int_{\mathcal{X}_i(\boldsymbol \phi^\star)} \|\boldsymbol x - \boldsymbol {y_i}\|^p \,\mu(\mathrm{d} \boldsymbol x) = \int_{\mathbb{R}^d} \min_{i=1,2}\left\{\|\boldsymbol x -\boldsymbol y_i \|^p\right\}\,\mu(\mathrm{d}\boldsymbol x) =\underset{ t \in [0,1]}{\min}W_c(\mu, {\nu}_ t ), \end{align*} where the second equality holds because $\phi^\star_1=\phi^\star_2$, the third equality exploits the definition of $\mathcal{X}_1(\boldsymbol \phi^\star)$, and the fourth equality follows from~\eqref{eq:min-Wc}. 
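The non-trivial step in the chain of equalities above, namely the identity between the half-space $\{\boldsymbol w^\top\boldsymbol x\leq b\}$ and the set of points at least as close to $\boldsymbol y_1$ as to $\boldsymbol y_2$, lends itself to a quick numerical check (a sketch of ours with arbitrary illustrative data):

```python
import random

# Hedged check (ours) of the set identity used in the proof:
# {x : w^T x <= b}  ==  {x : ||x - y1||^2 <= ||x - y2||^2}
# with y1 = 0 and y2 = 2*b*w / ||w||^2; w and b are arbitrary here.
random.seed(1)
w, b = [3.0, 1.0, 2.0], 2.5
nw2 = sum(wi * wi for wi in w)
y2 = [2.0 * b * wi / nw2 for wi in w]

def same_side(x):
    s = sum(wi * xi for wi, xi in zip(w, x))
    if abs(s - b) < 1e-9:       # skip the measure-zero boundary
        return True
    closer_to_y1 = sum(xi * xi for xi in x) \
        <= sum((xi - yi) ** 2 for xi, yi in zip(x, y2))
    return (s <= b) == closer_to_y1

agree = all(same_side([random.random() for _ in w]) for _ in range(10000))
# agree is True: the two sets coincide up to their common boundary
```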
We may therefore conclude that $t^\star=\textrm{Vol}(P(\boldsymbol{w}, b))$ indeed solves the minimization problem $\min_{t\in[0,1]} W_c(\mu, \nu_ t )$. {\color{black} Using similar techniques, one can further prove that~$\partial_t W_c(\mu, \nu_t)$ exists and is strictly increasing in~$t$, which ensures that~$W_c(\mu, \nu_t)$ is strictly convex in~$t$ and, in particular, that~$t^\star $ is the unique solution of $\min_{t\in[0,1]} W_c(\mu, \nu_ t )$. Details are omitted for brevity.} \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem:hard}.] Lemma~\ref{lem:vol} applies under the assumptions of the theorem, and therefore the volume of the knapsack polytope~$P(\boldsymbol{w}, b)$ coincides with the unique minimizer of \begin{equation} \label{eq:min-Wass} \min_{ t \in [0,1]} W_c(\mu, {\nu}_ t ). \end{equation} {\color{black} From the proof of Lemma~\ref{lem:vol} we know that the Wasserstein distance $W_c(\mu,{\nu}_ t )$ is strictly convex in~$t$, which implies that the minimization problem~\eqref{eq:min-Wass} constitutes a one-dimensional convex program with a unique minimizer. A near-optimal solution that approximates the exact minimizer to within an absolute accuracy~$\delta=(6d!(\|\boldsymbol{w}\|_1+2)^d(d+1)^{d+1}\prod_{i = 1}^{d}w_i)^{-1}$ can readily be computed with a binary search method such as Algorithm~\ref{algorithm:binary} described in Lemma~\ref{lemma:strictly_convex_min}\,(i), which evaluates~$g(t)=W_c(\mu,\nu_t)$ at exactly~$2L=2(\ceil{\log_2(1/\delta)} + 1)$ test points. Note that~$\delta$ falls within the interval $(0, 1)$ and satisfies the strict inequality~\eqref{eq:Granis-delta}. Note also that~$L$ grows only polynomially with the bit length of $\bm w$ and $b$; see Appendix~\ref{appendix:polynomial_calls} for details. 
One readily verifies that all operations in Algorithm~\ref{algorithm:binary} except for the computation of~$W_c(\mu, \nu_t)$ can be carried out in time polynomial in the bit length of~$\boldsymbol{w}$ and~$b$.} Thus, if we could compute $W_c(\mu, \nu_t)$ in time polynomial in the bit length of $\boldsymbol{w}$, $b$ and $t$, then we could efficiently compute the volume of the knapsack polytope~$P( \boldsymbol{w}, b)$ to within accuracy $\delta$, which is $\#$P-hard by Lemma~\ref{lemma:Grani}. We have thus constructed a polynomial-time Turing reduction from the $\#$P-hard problem of (approximately) computing the volume of a knapsack polytope to computing the Wasserstein distance $W_c(\mu, {\nu}_ t )$. By the definition of the class of $\#$P-hard problems (see, {\em e.g.}, \citep[Definition~1]{ref:van1990handbook}), we may thus conclude that computing $W_c(\mu, \nu_t)$ is $\#$P-hard. \end{proof} {\color{black} \begin{corollary}[Hardness of computing approximate optimal transport distances] \label{corollary:approximate-hard} Computing $W_c(\mu, \nu)$ to within an absolute accuracy of \[ \varepsilon =\frac{1}{4} \min\limits_{l\in [ 2^L]} \left\{ |W_c(\mu, \nu_{t_{l}}) - W_c(\mu, \nu_{t_{l-1}})| : W_c(\mu, \nu_{t_{l}}) \neq W_c(\mu, \nu_{t_{l-1}})\right\}, \] where $L = \ceil{\log_2(1/ \delta)} + 1$, $\delta = (6 d!(\|\boldsymbol{w}\|_1+2)^d(d+1)^{d+1}\prod_{i = 1}^{d}w_i)^{-1} $ and $t_l = l/ 2^{L}$ for all~$l =0, \ldots, 2^L$, is \#P-hard even if $\mathcal{X}=\mathcal{Y}=\mathbb{R}^d$, $c(\boldsymbol x, \boldsymbol y) = \|\boldsymbol x-\boldsymbol y\|^{p}$ for some $p\geq 1$, $\mu$ is the Lebesgue measure on the standard hypercube~$[0,1]^d$, and $\nu$ is a discrete probability measure supported on only two points. 
\end{corollary} \begin{proof} {\color{black} Assume that we have access to an inexact oracle that outputs, for any fixed~$t\in[0,1]$, an approximate optimal transport distance~$\widetilde W_c(\mu, \nu_t)$ with~$|\widetilde W_c(\mu, \nu_t) - W_c(\mu, \nu_t) |\leq \varepsilon$. By Lemma~\ref{lemma:strictly_convex_min}\,(ii), which applies thanks to the definition of~$\varepsilon$, we can then find a $2\delta$-approximation for the unique minimizer of~\eqref{eq:min-Wass} using $2L$ oracle calls. Note that $\delta'=2\delta$ falls within the interval $(0, 1)$ and satisfies the strict inequality~\eqref{eq:Granis-delta}. Recall also that $L$ grows only polynomially with the bit length of $\bm w$ and $b$; see Appendix~\ref{appendix:polynomial_calls} for details. Thus, if we could compute $\widetilde W_c(\mu, \nu_t)$ in time polynomial in the bit length of $\boldsymbol{w}$, $b$ and $t$, then we could efficiently compute the volume of the knapsack polytope~$P( \boldsymbol{w}, b)$ to within accuracy $\delta'$, which is $\#$P-hard by Lemma~\ref{lemma:Grani}. Computing $W_c(\mu, \nu)$ to within an absolute accuracy of~$\varepsilon$ is therefore also $\#$P-hard. } \end{proof} } The hardness of optimal transport established in Theorem~\ref{theorem:hard} {\color{black} and Corollary~\ref{corollary:approximate-hard}} is predicated on the hardness of numerical integration. A popular technique to reduce the complexity of numerical integration is smoothing, whereby an initial (possibly discontinuous) integrand is approximated with a differentiable one \citep{dick2013high}. Smoothness is also a desired property of objective functions when designing scalable optimization algorithms \citep{bubeck2015convex}. These observations prompt us to develop a systematic way to smooth the optimal transport problem that leads to efficient approximate numerical solution schemes. 
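Since Algorithm~\ref{algorithm:binary} itself lies outside this excerpt, the bisection idea invoked in the proofs can be sketched as follows (our own minimal variant; it evaluates the objective at two interior test points per iteration and shrinks the bracket by a factor of $3/4$, so its constants differ from those of Lemma~\ref{lemma:strictly_convex_min}):

```python
# Hedged sketch (ours) of binary search for the minimizer of a strictly
# convex function g on [lo, hi], using two function evaluations per step.
def convex_min(g, lo=0.0, hi=1.0, iters=60):
    for _ in range(iters):
        m1 = lo + (hi - lo) / 4.0    # left test point
        m2 = hi - (hi - lo) / 4.0    # right test point
        if g(m1) <= g(m2):
            hi = m2                  # by convexity the minimizer is <= m2
        else:
            lo = m1                  # by convexity the minimizer is >= m1
    return (lo + hi) / 2.0

t_hat = convex_min(lambda t: (t - 0.3) ** 2)   # close to 0.3
```

In the paper's reduction, $g(t)=W_c(\mu,\nu_t)$ would play the role of the objective being bisected.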
\section{Smooth Optimal Transport} \label{sec:smooth_ot} The semi-discrete optimal transport problem evaluates the optimal transport distance~\eqref{eq:primal} between an arbitrary probability measure~$\mu$ supported on $\mathcal{X}$ and a discrete probability measure $\nu = \sum_{i=1}^N {\nu}_i\delta_{\boldsymbol {y_i}}$ with atoms~$\boldsymbol y_1,\ldots, \boldsymbol {y}_N \in \mathcal{Y}$ and corresponding probabilities $\boldsymbol \nu =(\nu_1,\ldots, \nu_N)\in \Delta^N$ for some $N\ge 2$. In the following, we define the {\em discrete $c$-transform} $\psi_c:\mathbb{R}^N\times \mathcal{X} \rightarrow [-\infty,+\infty)$ of $\boldsymbol\phi\in\mathbb{R}^N$ through \begin{equation} \label{eq:disc-c-transform} \psi_c(\boldsymbol \phi, \boldsymbol x) = \max\limits_{i \in [N]} \phi_i - c(\boldsymbol x, \boldsymbol y_i) \quad \forall \boldsymbol x \in \mathcal{X}. \end{equation} Armed with the discrete $c$-transform, we can now reformulate the semi-discrete optimal transport problem as a finite-dimensional maximization problem over a single dual potential vector. \begin{lemma}[Discrete $c$-transform] \label{lem:disc_ctrans} The semi-discrete optimal transport problem is equivalent to \begin{equation} \label{eq:ctans_dual_semidisc} W_c(\mu, \nu) = \sup_{ \boldsymbol{\phi} \in \mathbb{R}^N} \boldsymbol {\nu}^\top \boldsymbol {\phi} - \mathbb{E}_{\boldsymbol x \sim \mu}[{\psi_c(\boldsymbol \phi, \boldsymbol x) } ]. 
\end{equation} \end{lemma} \begin{proof} As $\nu = \sum_{i=1}^N {\nu}_i\delta_{\boldsymbol {y_i}}$ is discrete, the dual optimal transport problem~\eqref{eq:ctans_dual} simplifies to \begin{align*} W_c(\mu, \nu) & =\sup_{\boldsymbol \phi\in\mathbb{R}^N} \sup_{\phi \in \mathcal{L}(\mathcal{Y}, \nu)} \left\{ \boldsymbol \nu^\top\boldsymbol \phi - \mathbb{E}_{\boldsymbol x \sim \mu}\left[ \phi_c(\boldsymbol x) \right]\;:\;\phi(\boldsymbol y_i)=\phi_i~\forall i\in[N] \right\}\\ &=\sup_{\boldsymbol \phi\in\mathbb{R}^N}~ \boldsymbol \nu^\top\boldsymbol \phi - \inf_{\phi \in \mathcal{L}(\mathcal{Y}, \nu)} \Big\{ \mathbb{E}_{\boldsymbol x \sim \mu}\left[ \phi_c(\boldsymbol x) \right]\;:\;\phi(\boldsymbol y_i)=\phi_i~\forall i\in[N] \Big\} . \end{align*} Using the definition of the standard $c$-transform, we can then recast the inner minimization problem as \begin{align*} &\inf_{\phi \in \mathcal{L}(\mathcal{Y}, \nu)} \left\{ \mathbb{E}_{\boldsymbol x \sim \mu}\left[ \sup_{\boldsymbol y \in \mathcal{Y}} \phi(\boldsymbol y) - c(\boldsymbol x, \boldsymbol y) \right]\;:\;\phi(\boldsymbol y_i)=\phi_i~\forall i\in[N] \right\}\\ &\quad = ~\mathbb{E}_{\boldsymbol x \sim \mu} \left[\max_{i \in [N]}\left\{\phi_i- c(\boldsymbol x, \boldsymbol y_i)\right\} \right] ~=~ \mathbb{E}_{\boldsymbol x \sim \mu} \left[{\psi_c(\boldsymbol \phi, \boldsymbol x) } \right], \end{align*} where the first equality follows from setting $\phi(\bm y)=\underline \phi$ for all $\boldsymbol y\notin\{\boldsymbol y_1, \ldots, \boldsymbol y_N\}$ and letting~$\underline\phi$ tend to $-\infty$, while the second equality exploits the definition of the discrete $c$-transform. Thus, \eqref{eq:ctans_dual_semidisc} follows. 
\end{proof} The discrete $c$-transform~\eqref{eq:disc-c-transform} can be viewed as the optimal value of a {\em discrete choice model}, where a utility-maximizing agent selects one of $N$ mutually exclusive alternatives with utilities $\phi_i - c(\boldsymbol x, \boldsymbol y_i)$, $i\in[N]$, respectively. Discrete choice models are routinely used for explaining the preferences of travelers selecting among different modes of transportation \citep{ben1985discrete}, but they are also used for modeling the choice of residential location \citep{mcfadden1978modeling}, the interests of end-users in engineering design \citep{wassenaar2003approach} or the propensity of consumers to adopt new technologies \citep{hackbarth2013consumer}. In practice, the preferences of decision-makers and the attributes of the different choice alternatives are invariably subject to uncertainty, and it is impossible to specify a discrete choice model that reliably predicts the behavior of multiple individuals. Psychological theory thus models the utilities as random variables \citep{thurstone1927law}, in which case the optimal choice becomes random, too. The theory as well as the econometric analysis of probabilistic discrete choice models were pioneered by \citet{mcfadden1973conditional}. The availability of a wealth of elegant theoretical results in discrete choice theory prompts us to add a random noise term to each deterministic utility value $\phi_i - c(\boldsymbol x, \boldsymbol y_i)$ in~\eqref{eq:disc-c-transform}. We will argue below that the expected value of the resulting maximal utility with respect to the noise distribution provides a smooth approximation for the $c$-transform $\psi_c(\boldsymbol \phi, \boldsymbol x)$, which in turn leads to a smooth optimal transport problem that displays favorable numerical properties. For a comprehensive survey of additive random utility models in discrete choice theory we refer to \citet{dubin1984econometric} and \citet{daganzo2014multinomial}. 
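A classical special case illustrates the smoothing effect (this example is ours; the paper's general semi-parametric treatment follows below): perturbing the utilities by i.i.d. standard Gumbel noise is known to turn the expected maximal utility into a log-sum-exp of the utilities, shifted by the Euler-Mascheroni constant, and hence into a smooth function:

```python
import math
import random

# Hedged illustration (ours): with i.i.d. standard Gumbel noise z_i, the
# expected maximum of u_i + z_i equals log(sum_i exp(u_i)) + gamma, where
# gamma is the Euler-Mascheroni constant -- a smooth (log-sum-exp) function.
random.seed(0)
u = [0.2, -0.5, 1.0]                     # illustrative utilities
gamma = 0.5772156649015329

def gumbel():
    # inverse-CDF sampling from the standard Gumbel distribution
    return -math.log(-math.log(random.random()))

n = 200000
mc = sum(max(ui + gumbel() for ui in u) for _ in range(n)) / n
lse = math.log(sum(math.exp(ui) for ui in u)) + gamma
# mc and lse agree up to Monte Carlo error
```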
Generalized semi-parametric discrete choice models where the noise distribution is itself subject to uncertainty are studied by \citet{natarajan2009persistency}. Using techniques from modern distributionally robust optimization, these models evaluate the best-case (maximum) expected utility across an ambiguity set of multivariate noise distributions. Semi-parametric discrete choice models are studied in the context of appointment scheduling \citep{mak2015appointment}, traffic management \citep{ahipacsaouglu2016flexibility} and product line pricing \citep{qi2019product}. We now define the {\em smooth} ({\em discrete}) {\em $c$-transform} as a best-case expected utility of the type studied in semi-parametric discrete choice theory, that is, \begin{equation} \label{eq:smooth_c_transform} \overline{\psi}_c(\boldsymbol \phi, \boldsymbol x) = \sup_{\theta \in \Theta}\;\mathbb{E}_{\boldsymbol z \sim\theta}\left[\max_{i \in [N]} \phi_i -c(\boldsymbol x, \boldsymbol {y_i}) +z_i \right], \end{equation} where $\boldsymbol z$ represents a random vector of perturbations that are independent of $\boldsymbol x$ and $\boldsymbol y$. Specifically, we assume that~$\boldsymbol z$ is governed by a Borel probability measure $\theta$ from within some ambiguity set $\Theta\subseteq\mathcal{P}(\mathbb{R}^N)$. Note that if~$\Theta$ is a singleton that contains only the Dirac measure at the origin of~$\mathbb{R}^N$, then the smooth $c$-transform collapses to the ordinary $c$-transform defined in~\eqref{eq:disc-c-transform}, which is piecewise affine and thus non-smooth in~$\boldsymbol \phi$. For many commonly used ambiguity sets, however, we will show below that the smooth $c$-transform is indeed differentiable in $\boldsymbol \phi$. In practice, the additive noise~$z_i$ in the transportation cost could originate, for example, from uncertainty about the position~$\boldsymbol y_i$ of the $i$-th atom of the discrete distribution~$\nu$. 
This interpretation is justified if $c(\boldsymbol x,\boldsymbol y)$ is approximately affine in~$\boldsymbol y$ around the atoms~$\boldsymbol y_i$, $i\in[N]$. The smooth $c$-transform gives rise to the following {\em smooth} ({\em semi-discrete}) {\em optimal transport problem} in dual form: \begin{equation} \label{eq:smooth_ot} \overline W_c (\mu, \nu) = \sup\limits_{\boldsymbol {\phi} \in \mathbb{R}^N} \mathbb{E}_{\boldsymbol x \sim \mu} \left[ \boldsymbol \nu^\top \boldsymbol \phi - \overline\psi_c(\boldsymbol \phi, \boldsymbol x)\right] \end{equation} Note that~\eqref{eq:smooth_ot} is indeed obtained from the original dual optimal transport problem~\eqref{eq:ctans_dual_semidisc} by replacing the original $c$-transform $\psi_c(\boldsymbol \phi, \boldsymbol x)$ with the smooth $c$-transform $\overline\psi_c(\boldsymbol \phi, \boldsymbol x)$. As smooth functions are amenable to efficient numerical integration, we expect that~\eqref{eq:smooth_ot} is easier to solve than~\eqref{eq:ctans_dual_semidisc}. A key insight of this work is that the smooth {\em dual} optimal transport problem~\eqref{eq:smooth_ot} typically has a primal representation of the~form \begin{equation} \label{eq:reg_ot_pri_abstract} \min\limits_{\pi \in \Pi(\mu,\nu)}\mathbb E_{(\boldsymbol x, \boldsymbol y) \sim \pi}\left[ c(\boldsymbol x, \boldsymbol y)\right] + R_\Theta(\pi), \end{equation} where $R_\Theta(\pi)$ can be viewed as a regularization term that penalizes the complexity of the transportation plan~$\pi$. In the remainder of this section we will prove~\eqref{eq:reg_ot_pri_abstract} and derive~$R_\Theta(\pi)$ for different ambiguity sets~$\Theta$. We will see that this regularization term is often related to an $f$-divergence, where $f:\mathbb{R}_+ \to \mathbb{R} \cup \{\infty\}$ constitutes a lower-semicontinuous convex function with $f(1) = 0$. 
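As a concrete instance (our own example, anticipating the formal definitions that follow), the choice $f(t)=t\log t$ turns the discrete $f$-divergence into the Kullback-Leibler divergence:

```python
import math

# Illustrative sketch (ours): the discrete f-divergence of tau from rho is
# D_f(tau || rho) = sum_i f(tau_i / rho_i) * rho_i; taking f(t) = t*log(t)
# (with f(0) = 0) recovers the Kullback-Leibler divergence.
def f_divergence(tau, rho, f):
    return sum(f(t / r) * r for t, r in zip(tau, rho))

f_kl = lambda t: t * math.log(t) if t > 0 else 0.0
tau, rho = [0.7, 0.3], [0.5, 0.5]
kl = f_divergence(tau, rho, f_kl)
# equals sum_i tau_i * log(tau_i / rho_i), and f(1) = 0 as required
```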
If $\tau$ and $\rho$ are two Borel probability measures on a closed subset $\mathcal Z$ of a finite-dimensional Euclidean space, and if~$\tau$ is absolutely continuous with respect to~$\rho$, then the continuous $f$-divergence from~$\tau$ to $\rho$ is defined as $D_f(\tau \parallel \rho) = \int_{\mathcal Z} f({\mathrm{d} \tau}/{\mathrm{d} \rho}(\boldsymbol z)) \rho(\mathrm{d} \boldsymbol z)$, where ${\mathrm{d} \tau}/{\mathrm{d} \rho}$ stands for the Radon-Nikodym derivative of $\tau$ with respect to $\rho$. By slight abuse of notation, if $\boldsymbol \tau$ and $\boldsymbol \rho$ are two probability vectors in~$\Delta^N$ and if $\boldsymbol\rho>\boldsymbol 0$, then the discrete $f$-divergence from $\boldsymbol \tau$ to $\boldsymbol \rho$ is defined as $D_f(\boldsymbol \tau \parallel \boldsymbol \rho) = \sum_{i =1}^N f({\tau_i}/{\rho_i}) \rho_i$. The correct interpretation of $D_f$ is usually clear from the context. The following lemma shows that the smooth optimal transport problem~\eqref{eq:reg_ot_pri_abstract} equipped with an $f$-divergence regularization term is equivalent to a finite-dimensional convex minimization problem. This result will be instrumental to prove the equivalence of~\eqref{eq:smooth_ot} and~\eqref{eq:reg_ot_pri_abstract} for different ambiguity sets~$\Theta$. \begin{lemma}[Strong duality] \label{lem:strong_dual_reg_ot} If $\boldsymbol\eta\in\Delta^N$ with $\boldsymbol\eta>\boldsymbol 0$ and $\eta = \sum_{i=1}^N \eta_i \delta_{\boldsymbol y_i}$ is a discrete probability measure on~$\mathcal{Y}$, then problem~\eqref{eq:reg_ot_pri_abstract} with regularization term $R_\Theta (\pi ) = D_{f}(\pi\|\mu \otimes \eta)$ is equivalent to \begin{align} \label{eq:dual_regularized_ot} \sup\limits_{ \boldsymbol {\phi}\in \mathbb{R}^N} ~ \mathbb{E}_{\boldsymbol x \sim \mu}\left[\min\limits_{\boldsymbol p\in \Delta^N} \sum\limits_{i=1}^N{\phi_i\nu_i}- (\phi_i - c(\boldsymbol x, \boldsymbol {y_i}))p_i + D_f(\boldsymbol p \parallel \boldsymbol \eta) \right]. 
\end{align} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:strong_dual_reg_ot}] If $\mathbb{E}_{\boldsymbol x \sim \mu}[c(\boldsymbol x,\boldsymbol y_i)]=\infty$ for some $i\in[N]$, then both~\eqref{eq:reg_ot_pri_abstract} and~\eqref{eq:dual_regularized_ot} evaluate to infinity, and the claim holds trivially. In the remainder of the proof we may thus assume without loss of generality that $\mathbb{E}_{\boldsymbol x \sim \mu}[c(\boldsymbol x,\boldsymbol y_i)]<\infty$ for all $i\in[N]$. Using \citep[Theorem~14.6]{rockafellar2009variational} to interchange the minimization over $\boldsymbol p$ with the expectation over $\boldsymbol x$, problem~\eqref{eq:dual_regularized_ot} can first be reformulated as \begin{equation*} \begin{array}{ccccll} & &\sup\limits_{ \boldsymbol{\phi}\in \mathbb{R}^N} &\min\limits_{\boldsymbol p\in\mathcal L_\infty^N(\mathcal{X},\mu)} ~ &\mathbb{E}_{\boldsymbol x \sim \mu}\left[\displaystyle\sum\limits_{i=1}^N{\phi_i\nu_i} - (\phi_i - c(\boldsymbol x, \boldsymbol {y_i}))p_i(\boldsymbol x)+ D_f(\boldsymbol p(\boldsymbol x)\|\boldsymbol \eta)\right] \\[3ex] &&&\textrm{s.t.} &\displaystyle \boldsymbol p(\boldsymbol x)\in\Delta^N \quad \mu\text{-a.s.}, \end{array} \end{equation*} where $\mathcal L_\infty^N(\mathcal{X},\mu)$ denotes the Banach space of all Borel-measurable functions from~$\mathcal{X}$ to~$\mathbb{R}^N$ that are essentially bounded with respect to~$\mu$. Interchanging the supremum over~$\boldsymbol \phi$ with the minimum over~$\boldsymbol p$ and evaluating the resulting unconstrained linear program over~$\boldsymbol \phi$ in closed form then yields the dual problem \begin{equation} \label{eq:primal_dual_relation_final} \begin{array}{ccl} & \min\limits_{\boldsymbol p\in\mathcal L_\infty^N(\mathcal{X},\mu)} &\displaystyle \mathbb{E}_{\boldsymbol x \sim \mu}\Bigg[ \sum\limits_{i=1}^Nc(\boldsymbol x, \boldsymbol {y_i})p_{i}(\boldsymbol x) +\displaystyle D_f (\boldsymbol p(\boldsymbol x) \! 
\parallel \!\boldsymbol \eta) \Bigg] \\[3ex] &\textrm{s.t.} &\displaystyle\mathbb{E}_{\boldsymbol x \sim \mu}\left[ \boldsymbol p(\boldsymbol x)\right] = \boldsymbol \nu,\quad \boldsymbol p(\boldsymbol x)\in\Delta^N \quad \mu\text{-a.s.} \end{array} \end{equation} Strong duality holds for the following reasons. As $c$ and $f$ are lower-semicontinuous and $c$ is non-negative, we may proceed as in~\citep[\S~3.2]{shapiro2017distributionally} to show that the dual objective function is weakly${}^*$ lower semicontinuous in $\boldsymbol p$. Similarly, as $\Delta^N$ is compact, one can use the Banach-Alaoglu theorem to show that the dual feasible set is weakly${}^*$ compact. Finally, as $f$ is real-valued and $\mathbb{E}_{\boldsymbol x \sim \mu}[c(\boldsymbol x,\boldsymbol y_i)]<\infty$ for all $i\in[N]$, the constant solution $\boldsymbol p(\boldsymbol x)=\boldsymbol \nu$ is dual feasible for all $\boldsymbol \nu \in\Delta^N$. Thus, the dual problem is solvable and has a finite optimal value. This argument remains valid if we add a perturbation $\boldsymbol \delta\in H=\{\boldsymbol\delta'\in\mathbb{R}^N: \sum_{i=1}^N\delta'_i=0\}$ to the right hand side vector~$\boldsymbol \nu$ as long as $\boldsymbol \delta>-\boldsymbol \nu$. The optimal value of the perturbed dual problem is thus pointwise finite as well as convex and---consequently---continuous and locally bounded in~$\boldsymbol \delta$ at the origin of~$H$. As $\boldsymbol \nu>\boldsymbol 0$, strong duality therefore follows from~\citep[Theorem~17\,(a)]{rockafellar1974conjugate}. 
Any dual feasible solution $\boldsymbol p\in \mathcal L^N_\infty(\mathcal{X},\mu)$ gives rise to a Borel probability measure $\pi \in \mathcal P(\mathcal X \times \mathcal Y)$ defined through $\pi( \boldsymbol y \in \mathcal B) = \nu(\boldsymbol y \in \mathcal B)$ for all Borel sets $\mathcal B \subseteq \mathcal Y$ and $\pi(\boldsymbol x \in \mathcal A | \boldsymbol y = \boldsymbol y_i) = \int_{ \mathcal A} p_i(\boldsymbol x) \mu(\mathrm{d} \boldsymbol x) / \nu_i$ for all Borel sets $\mathcal A \subseteq \mathcal X$ and $i \in [N]$. This follows from the law of total probability, whereby the joint distribution of~$\boldsymbol x$ and~$\boldsymbol y$ is uniquely determined if we specify the marginal distribution of~$\boldsymbol y$ and the conditional distribution of~$\boldsymbol x$ given~$\boldsymbol y=\boldsymbol y_i$ for every~$i\in[N]$. By construction, the marginal distributions of $\boldsymbol x$ and $\boldsymbol y$ under $\pi$ are determined by $\mu$ and $\nu$, respectively. Indeed, note that for any Borel set $\mathcal A \subseteq \mathcal X$ we have \begin{align*} \pi(\boldsymbol x \in \mathcal A) &= \sum\limits_{i=1}^N \pi(\boldsymbol x \in \mathcal A | \boldsymbol y = \boldsymbol y_i) \cdot \pi(\boldsymbol y = \boldsymbol y_i) = \sum\limits_{i=1}^N \pi(\boldsymbol x \in \mathcal A | \boldsymbol y = \boldsymbol y_i) \cdot \nu_i\\ &= \sum\limits_{i=1}^N \int_{\mathcal A} {p_i(\boldsymbol x)}\mu(\mathrm{d} \boldsymbol x) = \int_{\mathcal A} \mu(\mathrm{d} \boldsymbol x) = \mu(\boldsymbol x\in \mathcal A), \end{align*} where the first equality follows from the law of total probability, the second and the third equalities both exploit the construction of~$\pi$, and the fourth equality holds because $\boldsymbol p(\boldsymbol x)\in\Delta^N$ $\mu$-almost surely due to dual feasibility. This reasoning implies that $\pi$ constitutes a coupling of $\mu$ and $\nu$ (that is, $\pi \in \Pi(\mu, \nu)$) and is thus feasible in~\eqref{eq:reg_ot_pri_abstract}. 
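This correspondence between dual feasible solutions~$\boldsymbol p$ and couplings~$\pi$ is easy to check numerically when $\mu$ is itself discrete. The following Python sketch (all atoms and weights are hypothetical illustrations, not part of the proof) builds $\pi(\boldsymbol x=\boldsymbol x_k,\boldsymbol y=\boldsymbol y_i)=\mu_k\,p_i(\boldsymbol x_k)$ and verifies both marginal conditions.

```python
# Hypothetical discrete instance: mu has M = 3 atoms, nu has N = 2 atoms.
M, N = 3, 2
mu = [0.5, 0.3, 0.2]
nu = [0.6, 0.4]

# A dual feasible p: each row p[k] lies in the simplex and
# sum_k mu[k] * p[k][i] = nu[i]  (the constraint E_mu[p_i] = nu_i).
p = [[0.8, 0.2], [0.4, 0.6], [0.4, 0.6]]

# The construction from the proof: pi(x = x_k, y = y_i) = mu_k * p_i(x_k).
pi = [[mu[k] * p[k][i] for i in range(N)] for k in range(M)]

# First marginal equals mu because sum_i p_i(x_k) = 1 ...
marg_x = [sum(pi[k]) for k in range(M)]
# ... and second marginal equals nu by dual feasibility.
marg_y = [sum(pi[k][i] for k in range(M)) for i in range(N)]

assert all(abs(a - b) < 1e-12 for a, b in zip(marg_x, mu))
assert all(abs(a - b) < 1e-12 for a, b in zip(marg_y, nu))
```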
Conversely, any $\pi\in\Pi(\mu,\nu)$ gives rise to a function $\boldsymbol p\in\mathcal L_\infty^N(\mathcal{X},\mu)$ defined through \[ p_i(\boldsymbol x) =\nu_i\cdot \frac{\mathrm{d} \pi}{\mathrm{d} (\mu \otimes \nu)} (\boldsymbol x, \boldsymbol y_i)\quad \forall i\in [N]. \] By the properties of the Radon-Nikodym derivative, we have $p_i(\boldsymbol x)\ge 0$ $\mu$-almost surely for all $i\in[N]$. In addition, for any Borel set $\mathcal A\subseteq \mathcal{X}$ we have \begin{align*} \int_{\mathcal A}\sum_{i=1}^N p_i(\boldsymbol x)\,\mu(\mathrm{d}\boldsymbol x) & = \int_{\mathcal A} \sum_{i=1}^N \nu_i\cdot \frac{\mathrm{d} \pi}{\mathrm{d} (\mu \otimes \nu)} (\boldsymbol x, \boldsymbol y_i)\,\mu(\mathrm{d}\boldsymbol x)\\ & = \int_{\mathcal A\times \mathcal{Y}} \frac{\mathrm{d} \pi}{\mathrm{d} (\mu\otimes \nu)} (\boldsymbol x, \boldsymbol y)\,(\mu\otimes \nu)(\mathrm{d} \boldsymbol x,\mathrm{d}\boldsymbol y) \\&= \int_{\mathcal A\times \mathcal{Y}} \pi(\mathrm{d}\boldsymbol x, \mathrm{d} \boldsymbol y) = \int_{\mathcal A}\mu(\mathrm{d}\boldsymbol x), \end{align*} where the second equality follows from Fubini's theorem and the definition of $\nu=\sum_{i=1}^N\nu_i\delta_{\boldsymbol y_i}$, while the fourth equality exploits that the marginal distribution of $\boldsymbol x$ under $\pi$ is determined by $\mu$. As the above identity holds for all Borel sets $\mathcal A\subseteq \mathcal{X}$, we find that $\sum_{i=1}^N p_i(\boldsymbol x)=1$ $\mu$-almost surely.
Similarly, we have \begin{align*} \mathbb E_{\boldsymbol x\sim\mu}\left[ p_i(\boldsymbol x)\right] &=\int_\mathcal{X} \nu_i\cdot \frac{\mathrm{d} \pi}{\mathrm{d} (\mu \otimes \nu)} (\boldsymbol x, \boldsymbol y_i) \,\mu(\mathrm{d}\boldsymbol x) \\ &=\int_{\mathcal{X}\times\{\boldsymbol y_i\}} \frac{\mathrm{d} \pi}{\mathrm{d} (\mu \otimes \nu)} (\boldsymbol x, \boldsymbol y) \,(\mu\otimes\nu)(\mathrm{d}\boldsymbol x,\mathrm{d}\boldsymbol y) \\ &= \int_{\mathcal{X}\times\{\boldsymbol y_i\}} \pi(\mathrm{d}\boldsymbol x,\mathrm{d}\boldsymbol y)=\int_{\{\boldsymbol y_i\}}\nu(\mathrm{d} \boldsymbol y)=\nu_i \end{align*} for all $i\in[N]$. In summary, $\boldsymbol p$ is feasible in~\eqref{eq:primal_dual_relation_final}. Thus, we have shown that every probability measure $\pi$ feasible in~\eqref{eq:reg_ot_pri_abstract} induces a function $\boldsymbol p$ feasible in~\eqref{eq:primal_dual_relation_final} and vice versa. We further find that the objective value of~$\boldsymbol p$ in~\eqref{eq:primal_dual_relation_final} coincides with the objective value of the corresponding~$\pi$ in~\eqref{eq:reg_ot_pri_abstract}. 
Specifically, we have \begin{align*} & \mathbb{E}_{\boldsymbol x \sim \mu}\Bigg[ \sum\limits_{i=1}^N c(\boldsymbol x, \boldsymbol {y_i})\, p_{i}(\boldsymbol x) +\displaystyle D_f (\boldsymbol p(\boldsymbol x) \| \boldsymbol \eta) \Bigg] =\displaystyle\int_\mathcal{X} \sum\limits_{i=1}^N c(\boldsymbol x, \boldsymbol y_i) p_i(\boldsymbol x) \,\mu( \mathrm{d}\boldsymbol x) + \displaystyle\int_\mathcal{X}\sum_{i=1}^N f\left(\frac{p_i(\boldsymbol x)}{\eta_i}\right) \eta_i \, \mu (\mathrm{d} \boldsymbol x) \\ &\hspace{1cm}=\displaystyle\int_\mathcal{X} \sum\limits_{i=1}^N c(\boldsymbol x, \boldsymbol y_i) \cdot\nu_i\cdot \frac{\mathrm{d} \pi}{\mathrm{d}(\mu \otimes \nu)}(\boldsymbol x, \boldsymbol y_i)\, \mu( \mathrm{d}\boldsymbol x) + \int_\mathcal{X} \sum_{i=1}^N f\left( \frac{\nu_i}{\eta_i} \cdot \frac{\mathrm{d} \pi}{\mathrm{d}(\mu \otimes \nu)}(\boldsymbol x, \boldsymbol y_i)\right) \cdot \eta_i \,\mu ( \mathrm{d} \boldsymbol x) \\ &\hspace{1cm}=\displaystyle\int_{\mathcal{X}\times \mathcal{Y}} c(\boldsymbol x, \boldsymbol y)\frac{\mathrm{d} \pi}{\mathrm{d}(\mu \otimes \nu)}(\boldsymbol x, \boldsymbol y) \,(\mu \otimes \nu)(\mathrm{d}\boldsymbol x, \mathrm{d}\boldsymbol y) + \displaystyle\int_{\mathcal{X}\times\mathcal{Y}} f\left( \frac{\mathrm{d} \pi}{\mathrm{d}(\mu \otimes \eta)}(\boldsymbol x, \boldsymbol y)\right) (\mu\otimes \eta)(\mathrm{d} \boldsymbol x,\mathrm{d} \boldsymbol y) \\[1ex] &\hspace{1cm}=\mathbb E_{(\boldsymbol x, \boldsymbol y) \sim \pi} \left[c(\boldsymbol x, \boldsymbol y)\right] + D_f(\pi \| \mu \otimes \eta), \end{align*} where the first equality exploits the definition of the discrete $f$-divergence, the second equality expresses the function~$\boldsymbol p$ in terms of the corresponding probability measure~$\pi$, the third equality follows from Fubini's theorem and uses the definitions $\nu=\sum_{i=1}^N \nu_i\delta_{\boldsymbol y_i}$ and $\eta=\sum_{i=1}^N \eta_i\delta_{\boldsymbol y_i}$, and the fourth equality follows from the definition 
of the continuous $f$-divergence. In summary, we have thus shown that~\eqref{eq:reg_ot_pri_abstract} is equivalent to~\eqref{eq:primal_dual_relation_final}, which in turn is equivalent to~\eqref{eq:dual_regularized_ot}. This observation completes the proof. \end{proof} \begin{proposition}[Approximation bound] \label{prop:approx_bound} If $\boldsymbol\eta\in\Delta^N$ with $\boldsymbol\eta>\boldsymbol 0$ and $\eta = \sum_{i=1}^N \eta_i \delta_{\boldsymbol y_i}$ is a discrete probability measure on~$\mathcal{Y}$, then problem~\eqref{eq:reg_ot_pri_abstract} with regularization term $R_\Theta (\pi ) = D_{f}(\pi\|\mu \otimes \eta)$ satisfies \[|\overline W_c(\mu, \nu) - W_c(\mu, \nu)| \leq \max\Bigg\{\bigg|\min_{\boldsymbol p \in \Delta^N} D_f(\boldsymbol p \| \boldsymbol \eta )\bigg|, \bigg|\max_{i \in [N]}\bigg\{ f\bigg(\frac{1}{\eta_i}\bigg) \eta_i+ f(0) \sum_{k \neq i} \eta_k\bigg\}\bigg|\Bigg\}.\] \end{proposition} \begin{proof} By Lemma~\ref{lem:strong_dual_reg_ot}, problem~\eqref{eq:reg_ot_pri_abstract} is equivalent to~\eqref{eq:dual_regularized_ot}. Note that the inner optimization problem in~\eqref{eq:dual_regularized_ot} can be viewed as an $f$-divergence regularized linear program with optimal value $\boldsymbol \nu^\top \boldsymbol\phi-\ell(\boldsymbol \phi, \boldsymbol x)$, where \[ \ell(\boldsymbol \phi, \boldsymbol x) = \max\limits_{\boldsymbol p \in \Delta^N} \sum\limits_{i=1}^N (\phi_i - c(\boldsymbol x, \boldsymbol y_i)) p_i - D_f(\boldsymbol p \| \boldsymbol \eta). \] Bounding $D_f(\boldsymbol p \| \boldsymbol \eta)$ by its minimum and its maximum over $\boldsymbol p\in\Delta^N$ then yields the estimates \begin{equation} \label{eq:bound:c_trans_ineq} \psi_c(\boldsymbol \phi, \boldsymbol x) - \max_{ \boldsymbol p \in \Delta^N} D_f(\boldsymbol p \| \boldsymbol \eta) \leq \ell(\boldsymbol \phi, \boldsymbol x) \leq \psi_c(\boldsymbol \phi, \boldsymbol x) - \min_{\boldsymbol p \in \Delta^N} D_f(\boldsymbol p \| \boldsymbol \eta). 
\end{equation} Here, $\psi_c(\boldsymbol \phi, \boldsymbol x)$ stands as usual for the discrete $c$-transform defined in~\eqref{eq:disc-c-transform}, which can be represented as \begin{equation} \label{eq:bound:ot_ineq} \psi_c(\boldsymbol \phi, \boldsymbol x) = \max\limits_{\boldsymbol p \in \Delta^N}\sum\limits_{i=1}^N (\phi_i - c(\boldsymbol x, \boldsymbol y_i)) p_i. \end{equation} Multiplying~\eqref{eq:bound:c_trans_ineq} by~$-1$, adding $\boldsymbol\nu^\top\boldsymbol \phi$, averaging over~$\boldsymbol x$ using the probability measure~$\mu$ and maximizing over~$\boldsymbol \phi \in \mathbb{R}^N$ further implies via~\eqref{eq:ctans_dual_semidisc} and~\eqref{eq:dual_regularized_ot} that \begin{equation} \label{eq:smooth_ot_approximation_bounds} W_c(\mu,\nu)+ \min_{ \boldsymbol p \in \Delta^N} D_f(\boldsymbol p \| \boldsymbol \eta) \leq \overline W_c(\mu, \nu) \leq W_c(\mu,\nu) + \max_{\boldsymbol p \in \Delta^N} D_f(\boldsymbol p \| \boldsymbol \eta). \end{equation} As $D_f(\boldsymbol p \| \boldsymbol \eta)$ is convex in~$\boldsymbol p$, its maximum is attained at a vertex of~$\Delta^N$ \citep[Theorem~1]{hoffman1981method}, that~is, \[ \max_{\boldsymbol p \in \Delta^N} D_f(\boldsymbol p \| \boldsymbol \eta) = \max_{i \in [N]}\bigg\{ f\bigg(\frac{1}{\eta_i}\bigg) \eta_i + f(0) \sum_{k \neq i} \eta_k\bigg\}. \] The claim then follows by substituting the above formula into~\eqref{eq:smooth_ot_approximation_bounds} and rearranging terms. \end{proof} In the following we discuss three different classes of ambiguity sets $\Theta$ for which the dual smooth optimal transport problem~\eqref{eq:smooth_ot} is indeed equivalent to the primal regularized optimal transport problem~\eqref{eq:reg_ot_pri_abstract}.
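The vertex formula for $\max_{\boldsymbol p\in\Delta^N} D_f(\boldsymbol p\,\|\,\boldsymbol\eta)$ can be sanity-checked numerically. The Python sketch below uses the entropy generator $f(s)=s\log s$ (an illustrative choice with $f(0)=0$ by continuity) and hypothetical weights~$\boldsymbol\eta$; for this particular $f$ the vertex maximum equals $-\log\min_i\eta_i$.

```python
import math, random

random.seed(0)

def f(s):  # illustrative entropy generator f(s) = s*log(s), with f(0) = 0
    return 0.0 if s == 0 else s * math.log(s)

def D_f(p, eta):  # discrete f-divergence: sum_i f(p_i / eta_i) * eta_i
    return sum(f(pi / ei) * ei for pi, ei in zip(p, eta))

eta = [0.5, 0.3, 0.2]  # hypothetical reference weights, eta > 0
N = len(eta)

# Value of D_f at the i-th vertex of the simplex, as in the proof:
# f(1/eta_i) * eta_i + f(0) * sum_{k != i} eta_k.
vertex_max = max(f(1 / eta[i]) * eta[i] + f(0) * (1 - eta[i]) for i in range(N))

# Random points of the simplex never exceed the vertex maximum.
for _ in range(1000):
    g = [random.expovariate(1.0) for _ in range(N)]
    p = [gi / sum(g) for gi in g]
    assert D_f(p, eta) <= vertex_max + 1e-9

# For f(s) = s*log(s), the vertex maximum equals -log(min_i eta_i).
assert abs(vertex_max - (-math.log(min(eta)))) < 1e-12
```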
\subsection{Generalized Extreme Value Distributions} \label{sec:gevm} Assume first that the ambiguity set~$\Theta$ is a singleton containing a single Borel probability measure $\theta$ on $\mathbb{R}^N$ defined through \begin{equation} \label{eq:dist_gevd} \theta(\boldsymbol z \leq \boldsymbol s) = \exp \left(-G \left( \exp(-s_1),\ldots, \exp(-s_N) \right) \right)\quad \forall \boldsymbol s\in\mathbb{R}^N, \end{equation} where $G:\mathbb{R}^N \to \mathbb{R}_+$ is a smooth generating function with the following properties. First, $G$ is positively homogeneous of degree $1/\lambda$ for some $\lambda>0$, that is, for any $\alpha > 0$ and $\boldsymbol s\in \mathbb{R}^N$ we have $G(\alpha \boldsymbol s) = \alpha^{1/\lambda}G(\boldsymbol s)$. In addition, $G(\boldsymbol s)$ tends to infinity as $s_i$ tends to infinity, for any $i \in [N]$. Finally, the partial derivative of $G$ with respect to $k$ distinct arguments is non-negative if $k$ is odd and non-positive if $k$ is even. These properties ensure that the noise vector $\boldsymbol z$ follows a generalized extreme value distribution in the sense of \citep[\S~4.1]{train2009discrete}. \begin{proposition}[Entropic regularization] \label{prop:gumbel} Assume that $\Theta$ is a singleton ambiguity set that contains only a generalized extreme value distribution with $G( \boldsymbol{s}) = \exp(-e)N\sum_{i=1}^N \eta_i s_i^{1/\lambda}$ for some $\lambda > 0$ and~$\boldsymbol \eta \in \Delta^N$, $\boldsymbol\eta> \boldsymbol 0${\color{black}, where~$e$ stands for Euler's constant}.
Then, the components of $\boldsymbol z$ follow independent Gumbel distributions with means $\lambda \log(N \eta_i)$ and variances $\lambda^2 \pi^2 /6$ for all $i\in[N]$, while the smooth $c$-transform~\eqref{eq:smooth_c_transform} reduces to the $\log$-partition function \begin{equation} \label{eq:partition:function} \overline\psi_c(\boldsymbol \phi, \boldsymbol x) = \lambda \log\left(\sum_{i=1}^N \eta_i \exp\left( \frac{\phi_i -c(\boldsymbol x,\boldsymbol {y_i})}{\lambda} \right) \right). \end{equation} In addition, the smooth dual optimal transport problem~\eqref{eq:smooth_ot} is equivalent to the regularized primal optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with $R_\Theta(\pi) = D_f(\pi \| \mu \otimes \eta)$, where $f(s) =\lambda s\log(s)$ and $\eta = \sum_{i =1}^N \eta_i \delta_{\boldsymbol y_i}$. \end{proposition} Note that the log-partition function~\eqref{eq:partition:function} indeed constitutes a smooth approximation of the maximum function in the definition~\eqref{eq:disc-c-transform} of the discrete $c$-transform. As $\lambda$ decreases, this approximation becomes increasingly accurate. It is also instructive to consider the special case where $\mu=\sum_{i=1}^M\mu_i\delta_{\boldsymbol x_i}$ is a discrete probability measure with atoms $\boldsymbol x_1,\ldots,\boldsymbol x_M\in\mathcal{X}$ and corresponding vector of probabilities $\boldsymbol \mu\in \Delta^M$. In this case, any coupling $\pi\in\Pi(\mu,\nu)$ constitutes a discrete probability measure $\pi=\sum_{i=1}^M\sum_{j=1}^N \pi_{ij}\delta_{(\boldsymbol x_i,\boldsymbol y_j)}$ with matrix of probabilities $\boldsymbol \pi\in\Delta^{M\times N}$.
If $f(s)=s\log(s)$, then the continuous $f$-divergence reduces to \begin{align*} D_f(\pi \| \mu \otimes \eta)&=\sum_{i=1}^M\sum_{j=1}^N \pi_{ij}\log(\pi_{ij})-\sum_{i=1}^M\sum_{j=1}^N \pi_{ij}\log(\mu_i)-\sum_{i=1}^M\sum_{j=1}^N \pi_{ij}\log(\eta_j)\\ &=\sum_{i=1}^M\sum_{j=1}^N \pi_{ij}\log(\pi_{ij})-\sum_{i=1}^M\mu_i\log(\mu_i)-\sum_{j=1}^N \nu_j\log(\eta_j), \end{align*} where the second equality holds because $\pi$ is a coupling of $\mu$ and $\nu$. Thus, $D_f(\pi \| \mu \otimes \eta)$ coincides with the negative entropy of the probability matrix~$\boldsymbol \pi$ offset by a constant that is independent of~$\boldsymbol \pi$. For $f(s)=s\log(s)$ the choice of $\boldsymbol \eta$ therefore has no impact on the minimizer of the smooth optimal transport problem~\eqref{eq:reg_ot_pri_abstract}, and we simply recover the celebrated entropic regularization proposed by \citet{sinkhorn, genevay2016stochastic, rigollet2018entropic, peyre2019computational} and \cite{ref:clason2019entropic}. \begin{proof}[Proof of Proposition~\ref{prop:gumbel}] Substituting the explicit formula for the generating function $G$ into~\eqref{eq:dist_gevd} yields \begin{align*} \theta(\boldsymbol z \leq \boldsymbol s) = \exp\left(-\exp(-e)N\sum\limits_{i=1}^N \eta_i \exp\left(-\frac{s_i}{\lambda}\right)\right) &=\prod\limits_{i=1}^N \exp\left(-\exp(-e)N\eta_i \exp\left(-\frac{s_i}{\lambda} \right)\right)\\ &= \prod\limits_{i=1}^N \exp\left(-\exp\left(-\frac{s_i - \lambda(\log(N\eta_i)-e)}{\lambda}\right)\right), \end{align*} where $e$ stands for Euler's constant. The components of the noise vector $\boldsymbol z$ are thus independent under~$\theta$, and $z_i$ follows a Gumbel distribution with location parameter $\lambda(\log(N\eta_i)-e)$ and scale parameter $\lambda$ for every $i \in [N]$. Therefore, $z_i$ has mean $\lambda \log(N \eta_i)$ and variance $\lambda^2 \pi^2/6$.
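As a numerical aside (not part of the proof), the location-scale bookkeeping can be confirmed by inverse-transform sampling from the Gumbel distribution; below, $\gamma\approx 0.5772$ denotes the Euler--Mascheroni constant, written as~$e$ in the text, and all parameter values are purely illustrative.

```python
import math, random

random.seed(42)
gamma = 0.5772156649015329  # Euler-Mascheroni constant ("e" in the text)

lam, N, eta_i = 0.5, 4, 0.3      # illustrative parameter values
loc = lam * (math.log(N * eta_i) - gamma)  # location parameter from the proof
scale = lam

# Inverse-transform sampling: if U ~ Uniform(0,1), then
# loc - scale * log(-log(U)) follows a Gumbel(loc, scale) distribution.
samples = [loc - scale * math.log(-math.log(random.random()))
           for _ in range(200_000)]
mean_mc = sum(samples) / len(samples)

# The Gumbel mean is loc + scale * gamma, which here equals lam * log(N * eta_i).
assert abs(mean_mc - lam * math.log(N * eta_i)) < 0.01
```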
If the ambiguity set $\Theta$ contains a single probability measure~$\theta$ of the form~\eqref{eq:dist_gevd}, then Theorem~5.2 of \citet{mcfadden1981econometric} readily implies that the smooth $c$-transform \eqref{eq:smooth_c_transform} simplifies~to \begin{equation} \label{eq:gev_smooth_ctrans} \overline\psi_c(\boldsymbol \phi , \boldsymbol x) = \lambda \log G \left(\exp(\phi_1 -c(\boldsymbol x,\boldsymbol y_1)),\dots, \exp(\phi_N - c(\boldsymbol x, \boldsymbol {y}_N)) \right) + \lambda e. \end{equation} The closed-form expression for the smooth $c$-transform in~\eqref{eq:partition:function} follows immediately by substituting the explicit formula for the generating function~$G$ into~\eqref{eq:gev_smooth_ctrans}. One further verifies that~\eqref{eq:partition:function} can be reformulated~as \begin{equation} \label{eq:partition:reg_c_trans} \overline{\psi}_c(\boldsymbol \phi, \boldsymbol x) = \max\limits_{\boldsymbol p \in \Delta^N} \sum\limits_{i=1}^N (\phi_i - c(\boldsymbol x, \boldsymbol y_i)) p_i - \lambda \sum\limits_{i=1}^N p_i \log\left(\frac{p_i}{\eta_i}\right). \end{equation} Indeed, solving the underlying Karush-Kuhn-Tucker conditions analytically shows that the optimal value of the nonlinear program~\eqref{eq:partition:reg_c_trans} coincides with the smooth $c$-transform~\eqref{eq:partition:function}. In the special case where $\eta_i = 1/N$ for all $i \in [N]$, the equivalence of~\eqref{eq:partition:function} and~\eqref{eq:partition:reg_c_trans} has already been recognized by \citet{anderson1988representative}. Substituting the representation~\eqref{eq:partition:reg_c_trans} of the smooth $c$-transform into the dual smooth optimal transport problem~\eqref{eq:smooth_ot} yields~\eqref{eq:dual_regularized_ot} with $f(s)= \lambda s \log(s)$.
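The equivalence of~\eqref{eq:partition:function} and~\eqref{eq:partition:reg_c_trans} can be illustrated numerically: the Karush-Kuhn-Tucker conditions yield the weighted softmax $p_i^\star\propto\eta_i\exp((\phi_i-c(\boldsymbol x,\boldsymbol y_i))/\lambda)$, which attains the log-partition value. The Python sketch below uses hypothetical utilities $u_i=\phi_i-c(\boldsymbol x,\boldsymbol y_i)$ and illustrative parameter values.

```python
import math, random

random.seed(1)
lam = 0.7                      # illustrative regularization weight
eta = [0.25, 0.25, 0.5]        # illustrative reference weights
u = [0.3, -1.2, 0.8]           # stand-ins for phi_i - c(x, y_i)

# Log-partition value: the smooth c-transform in closed form.
lse = lam * math.log(sum(e * math.exp(ui / lam) for e, ui in zip(eta, u)))

def objective(p):
    # sum_i u_i p_i - lam * sum_i p_i log(p_i / eta_i)
    return sum(ui * pi - lam * pi * math.log(pi / ei)
               for ui, pi, ei in zip(u, p, eta))

# Candidate maximizer from the KKT conditions: a weighted softmax.
w = [e * math.exp(ui / lam) for e, ui in zip(eta, u)]
p_star = [wi / sum(w) for wi in w]

# The softmax attains the log-partition value ...
assert abs(objective(p_star) - lse) < 1e-9
# ... and no random feasible point does better.
for _ in range(1000):
    g = [random.expovariate(1.0) for _ in eta]
    p = [gi / sum(g) for gi in g]
    assert objective(p) <= lse + 1e-9
```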
By Lemma~\ref{lem:strong_dual_reg_ot}, problem~\eqref{eq:smooth_ot} is thus equivalent to the regularized primal optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with $R_\Theta(\pi) = D_f(\pi \| \mu \otimes \eta)$, where $\eta = \sum_{i =1}^N \eta_i \delta_{\boldsymbol y_i}$. \end{proof} \subsection{Chebyshev Ambiguity Sets} \label{sec:chebyshev} Assume next that $\Theta$ constitutes a Chebyshev ambiguity set comprising all Borel probability measures on~$\mathbb{R}^N$ with mean vector $\boldsymbol 0$ and positive definite covariance matrix $\lambda \boldsymbol \Sigma$ for some $\boldsymbol \Sigma\succ \boldsymbol 0$ and $\lambda> 0$. Formally, we thus set $\Theta = \{\theta \in \mathcal P(\mathbb{R}^N) : \mathbb{E}_\theta [\boldsymbol z] = \boldsymbol 0,\, \mathbb E_\theta [\boldsymbol z \boldsymbol z^\top] = \lambda \boldsymbol \Sigma\}$. In this case, \citep[Theorem~1]{ahipasaoglu2018convex} implies that the smooth $c$-transform~\eqref{eq:smooth_c_transform} can be equivalently expressed as \begin{equation} \label{eq:moment_ambig_ctrans} \overline\psi_c(\boldsymbol \phi, \boldsymbol x) = \max_{\boldsymbol p\in \Delta^N} \sum\limits_{i=1}^N(\phi_i -c(\boldsymbol x, \boldsymbol {y_i}))p_i + \lambda\,\textrm{tr}\left((\boldsymbol \Sigma^{1/2}(\textrm{diag}(\boldsymbol p)-\boldsymbol p\boldsymbol p^\top)\boldsymbol \Sigma^{1/2})^{1/2}\right), \end{equation} where $\textrm{diag}(\boldsymbol p)\in\mathbb{R}^{N\times N}$ represents the diagonal matrix with $\boldsymbol p$ on its main diagonal. Note that the maximum in~\eqref{eq:moment_ambig_ctrans} evaluates the convex conjugate of the extended real-valued regularization function \[ V(\boldsymbol p)=\left\{ \begin{array}{cl} -\lambda\,\textrm{tr}\left((\boldsymbol \Sigma^{1/2}(\textrm{diag}(\boldsymbol p)-\boldsymbol p\boldsymbol p^\top)\boldsymbol \Sigma^{1/2})^{1/2}\right) & \text{if }\boldsymbol p\in\Delta^N \\ \infty & \text{if }\boldsymbol p\notin\Delta^N \end{array}\right. 
\] at the point $(\phi_i -c(\boldsymbol x, \boldsymbol {y_i}))_{i\in [N]}$. As $\boldsymbol\Sigma\succ \boldsymbol 0$ and $\lambda>0$, \citep[Theorem~3]{ahipasaoglu2018convex} implies that~$V(\boldsymbol p)$ is strongly convex over its effective domain~$\Delta^N$. By~\cite[Proposition~12.60]{rockafellar2009variational}, the smooth discrete $c$-transform~$\overline\psi_c(\boldsymbol \phi, \boldsymbol x)$ is therefore indeed differentiable in~$\boldsymbol \phi$ for any fixed~$\boldsymbol x$. It is further known that problem~\eqref{eq:moment_ambig_ctrans} admits an exact reformulation as a tractable semidefinite program; see \citep[Proposition~1]{mishra2012choice}. If $\boldsymbol \Sigma = \boldsymbol I$, then the regularization function $V(\boldsymbol p)$ can be re-expressed in terms of a discrete $f$-divergence, which implies via Lemma~\ref{lem:strong_dual_reg_ot} that the smooth optimal transport problem is equivalent to the original optimal transport problem regularized with a continuous $f$-divergence. \begin{proposition}[Chebyshev regularization] \label{prop:chebyshev-regularization} If $\Theta$ is the Chebyshev ambiguity set of all Borel probability measures with mean $\boldsymbol 0$ and covariance matrix~$\lambda\boldsymbol I$ with $\lambda> 0$, then the smooth $c$-transform~\eqref{eq:smooth_c_transform} simplifies~to \begin{equation} \label{eq:marginal_moment_ctrans} \overline \psi_c(\boldsymbol \phi, \boldsymbol x) = \max_{ \boldsymbol p\in \Delta^N} \sum\limits_{i=1}^N(\phi_i -c(\boldsymbol x, \boldsymbol {y_i})) p_i + \lambda\sum_{i=1}^N\sqrt{p_i(1-p_i)}. 
\end{equation} In addition, the smooth dual optimal transport problem~\eqref{eq:smooth_ot} is equivalent to the regularized primal optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with $R_\Theta(\pi) = D_f(\pi \| \mu \otimes \eta)+ \lambda \sqrt{N-1}$, where $\eta = \frac{1}{N} \sum_{i =1}^N \delta_{\boldsymbol y_i}$ and \begin{equation} \label{eq:chebychev_f} f(s) = \begin{cases} -\lambda\sqrt{s(N - s)} + \lambda s \sqrt{N-1} \quad & \text{if }0 \leq s \leq N\\ +\infty & \text{if }s>N. \end{cases}\end{equation} \end{proposition} \begin{proof} The relation~\eqref{eq:marginal_moment_ctrans} follows directly from~\eqref{eq:moment_ambig_ctrans} by replacing $\boldsymbol \Sigma$ with $\boldsymbol I$. Next, one readily verifies that $-\lambda\sum_{i \in [N]} \sqrt{p_i(1-p_i)}$ can be re-expressed, up to an additive constant, as the discrete $f$-divergence $D_f(\boldsymbol p\| \boldsymbol \eta)$ from $\boldsymbol p$ to $\boldsymbol\eta=(\frac{1}{N},\ldots,\frac{1}{N})$, where $f(s) =-\lambda \sqrt{s (N - s)}+ \lambda s \sqrt{N-1}$; indeed, $D_f(\boldsymbol p\|\boldsymbol \eta)=-\lambda\sum_{i=1}^N \sqrt{p_i(1-p_i)}+\lambda\sqrt{N-1}$. This implies that~\eqref{eq:marginal_moment_ctrans} is equivalent, up to this additive constant, to \[ \overline \psi_c(\boldsymbol \phi, \boldsymbol x) = \max_{ \boldsymbol p\in \Delta^N} \sum\limits_{i=1}^N(\phi_i -c(\boldsymbol x, \boldsymbol {y_i})) p_i - D_f(\boldsymbol p\| \boldsymbol \eta). \] Substituting the above representation of the smooth $c$-transform into the dual smooth optimal transport problem~\eqref{eq:smooth_ot} yields~\eqref{eq:dual_regularized_ot} with $f(s)= -\lambda \sqrt{s (N - s)} +\lambda s \sqrt{N-1}$. By Lemma~\ref{lem:strong_dual_reg_ot}, \eqref{eq:smooth_ot} thus reduces to the regularized primal optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with $R_\Theta(\pi) = D_f(\pi \| \mu \otimes \eta)$, offset by the constant $\lambda\sqrt{N-1}$, where $\eta = \frac{1}{N} \sum_{i =1}^N \delta_{\boldsymbol y_i}$. \end{proof} Note that the function $f(s)$ defined in~\eqref{eq:chebychev_f} is indeed convex, lower-semicontinuous and satisfies $f(1)=0$. Therefore, it induces a standard $f$-divergence.
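The divergence identity underlying this equivalence can be double-checked numerically: with uniform~$\boldsymbol\eta$ and $f$ as in~\eqref{eq:chebychev_f}, one expects $D_f(\boldsymbol p\,\|\,\boldsymbol\eta)=-\lambda\sum_{i=1}^N\sqrt{p_i(1-p_i)}+\lambda\sqrt{N-1}$ on the simplex. A Python sketch with illustrative parameter values:

```python
import math, random

random.seed(7)
lam, N = 0.4, 5          # illustrative parameter values
eta = [1.0 / N] * N      # uniform reference weights

def f(s):  # generator from the proposition, finite on [0, N]
    return -lam * math.sqrt(s * (N - s)) + lam * s * math.sqrt(N - 1)

def D_f(p):
    return sum(f(pi / ei) * ei for pi, ei in zip(p, eta))

assert abs(f(1.0)) < 1e-12  # f(1) = 0, as required of an f-divergence

for _ in range(100):
    g = [random.expovariate(1.0) for _ in range(N)]
    p = [gi / sum(g) for gi in g]
    # D_f(p || eta) = -lam * sum_i sqrt(p_i (1 - p_i)) + lam * sqrt(N - 1)
    lhs = D_f(p)
    rhs = (-lam * sum(math.sqrt(pi * (1 - pi)) for pi in p)
           + lam * math.sqrt(N - 1))
    assert abs(lhs - rhs) < 1e-9
```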
Proposition~\ref{prop:chebyshev-regularization} can be generalized to arbitrary diagonal matrices $\boldsymbol \Sigma$, but the emerging $f$-divergences are rather intricate and not insightful. Hence, we omit this generalization. We were not able to generalize Proposition~\ref{prop:chebyshev-regularization} to non-diagonal matrices $\bm \Sigma$. \subsection{Marginal Ambiguity Sets} \label{sec:marginal} We now investigate the class of marginal ambiguity sets of the form \begin{equation} \label{eq:marginal_ambiguity_set} \Theta = \Big\{ \theta \in \mathcal{P}(\mathbb{R}^N) \, : \, \theta(z_i \leq s) = F_i(s)\;\forall s\in \mathbb{R}, \; \forall i \in [N] \Big\}, \end{equation} where~$F_i$ stands for the cumulative distribution function of the uncertain disturbance~$z_i$, $i\in[N]$. Marginal ambiguity sets completely specify the marginal distributions of the components of the random vector~$\bm z$ but impose no restrictions on their dependence structure ({\em i.e.}, their copula). Sometimes marginal ambiguity sets are also referred to as Fr\'echet ambiguity sets \citep{frechet1951}. We will argue below that marginal ambiguity sets explain most known as well as several new regularization methods for the optimal transport problem. In particular, they are more expressive than the generalized extreme value distributions and the Chebyshev ambiguity sets in the sense that they induce a richer family of regularization terms. Below we denote by $F_i^{-1} : [0, 1] \to \mathbb{R}$ the (left) quantile function corresponding to $F_i$, which is defined through $$ F_i^{-1}(t) = \inf \{s :F_i(s) \geq t \}\quad \forall t\in[0,1]. $$ We first prove that if $\Theta$ constitutes a marginal ambiguity set, then the smooth $c$-transform~\eqref{eq:smooth_c_transform} admits an equivalent reformulation as the optimal value of a finite convex program.
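For a discrete marginal distribution, the left quantile just defined reduces to a scan over cumulative probabilities. A minimal Python sketch (the three-point distribution is a hypothetical example):

```python
def left_quantile(atoms, probs, t):
    """Left quantile F^{-1}(t) = inf{s : F(s) >= t} of a discrete
    distribution with sorted atoms and corresponding probabilities."""
    cum = 0.0
    for s, p in zip(atoms, probs):
        cum += p
        if cum >= t - 1e-12:  # small tolerance guards against rounding
            return s
    return atoms[-1]

# Hypothetical three-point distribution on {1, 2, 3}:
# F(1) = 0.2, F(2) = 0.7, F(3) = 1.0.
atoms, probs = [1.0, 2.0, 3.0], [0.2, 0.5, 0.3]

assert left_quantile(atoms, probs, 0.10) == 1.0
assert left_quantile(atoms, probs, 0.20) == 1.0  # F(1) = 0.2 >= 0.2
assert left_quantile(atoms, probs, 0.21) == 2.0
assert left_quantile(atoms, probs, 0.70) == 2.0
assert left_quantile(atoms, probs, 0.71) == 3.0
```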
\begin{proposition}[Smooth $c$-transform for marginal ambiguity sets] \label{proposition:regularized_ctrans} If $\Theta$ is a marginal ambiguity set of the form~\eqref{eq:marginal_ambiguity_set}, and if the underlying cumulative distribution functions $F_i$, $i\in[N]$, are continuous, then the smooth $c$-transform~\eqref{eq:smooth_c_transform} can be equivalently expressed as \begin{equation} \label{eq:regularized_c_transform} \overline \psi_c(\boldsymbol \phi, \boldsymbol x) = \max_{ \boldsymbol p \in \Delta^N} \displaystyle \sum\limits_{i=1}^N ~ (\phi_i - c(\boldsymbol x, \boldsymbol {y_i}))p_i + \sum_{i=1}^N \int_{1-p_i}^1 F_i^{-1}(t) \mathrm{d} t \end{equation} for all $\bm x\in\mathcal{X}$ and $\bm \phi\in \mathbb{R}^N$. In addition, the smooth $c$-transform is convex and differentiable with respect to $\boldsymbol \phi$, and $\nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi, \boldsymbol x)$ represents the unique solution of the convex maximization problem~\eqref{eq:regularized_c_transform}. \end{proposition} Recall that the smooth $c$-transform~\eqref{eq:smooth_c_transform} can be viewed as the best-case utility of a semi-parametric discrete choice model. Thus, \eqref{eq:regularized_c_transform} follows from~\cite[Theorem~1]{natarajan2009persistency}. To keep this paper self-contained, we provide a new proof of Proposition~\ref{proposition:regularized_ctrans}, which exploits a natural connection between the smooth $c$-transform induced by a marginal ambiguity set and the conditional value-at-risk (CVaR). \begin{proof}[Proof of Proposition~\ref{proposition:regularized_ctrans}] Throughout the proof we fix $\bm x\in\mathcal{X}$ and $\bm \phi\in \mathbb{R}^N$, and we introduce the nominal utility vector~$\boldsymbol u \in \mathbb{R}^N$ with components~$u_i= \phi_i - c(\boldsymbol x, \boldsymbol y_i)$ in order to simplify notation. 
In addition, it is useful to define the binary function $\boldsymbol r: \mathbb{R}^N \to \{ 0, 1 \}^N$ with components \begin{align*} r_i(\boldsymbol z) = \begin{cases} 1 & \text{if } i = \displaystyle \min \argmax_{j \in [N]} ~ u_j + z_j, \\ 0 & \text{otherwise.} \end{cases} \end{align*} For any fixed~$\theta \in \Theta$, we then have \begin{align*} \mathbb{E}_{\boldsymbol z \sim \theta} \Big[ \max\limits_{i \in [N]} u_i + z_{i} \Big] = \mathbb{E}_{\boldsymbol z \sim \theta} \Big[ \; \sum_{i=1}^N ( u_i + z_i) r_i(\boldsymbol z) \Big] &= \sum_{i=1}^N u_i p_i + \sum_{i=1}^N \mathbb{E}_{\boldsymbol z \sim \theta} \left[ z_i q_i(z_i) \right], \end{align*} where $p_i = \mathbb{E}_{\boldsymbol z \sim \theta} [ r_i(\boldsymbol z) ]$ and $q_i(z_i) = \mathbb{E}_{\boldsymbol z \sim \theta} [ r_i(\boldsymbol z) | z_i ]$ almost surely with respect to~$\theta$. From now on we denote by $\theta_i$ the marginal probability distribution of the random variable $z_i$ under $\theta$. As $\theta$ belongs to a marginal ambiguity set of the form~\eqref{eq:marginal_ambiguity_set}, we thus have~$\theta_i (z_i \leq s) = F_i(s)$ for all $s \in \mathbb{R}$, that is, $\theta_i$ is uniquely determined by the cumulative distribution function $F_i$. 
The above reasoning then implies that \begin{align} \nonumber \overline \psi_c(\boldsymbol \phi, \boldsymbol x) = \sup_{\theta \in \Theta} ~ \mathbb{E}_{\boldsymbol z \sim \theta} \Big[ \max_{i \in [N]} u_i + z_i \Big] &= \left\{ \begin{array}{cll} \sup & \displaystyle \sum_{i=1}^N u_i p_i + \sum_{i=1}^N \mathbb{E}_{\boldsymbol z \sim \theta} \left[ z_i q_i(z_i) \right] \\[3ex] \text{s.t.} & \theta \in \Theta, ~\boldsymbol p \in \Delta^N, ~\boldsymbol q \in \mathcal L^N(\mathbb{R}) \\ [1ex] & \mathbb{E}_{\boldsymbol z \sim \theta} \left[ r_i(\boldsymbol z) \right] = p_i & \forall i \in [N] \\[2ex] & \mathbb{E}_{\boldsymbol z \sim \theta} [ r_i(\boldsymbol z) | z_i ] = q_i(z_i) \quad \theta\text{-a.s.} & \forall i \in [N] \end{array} \right.\\ &\leq \left\{ \begin{array}{cll} \sup & \displaystyle \sum_{i=1}^N u_i p_i + \sum_{i=1}^N \mathbb{E}_{z_i \sim \theta_i} \left[ z_i q_i(z_i) \right] \\[3ex] \text{s.t.} & \boldsymbol p \in \Delta^N,~ \boldsymbol q \in \mathcal L^N(\mathbb{R}) \\ [1ex] & \mathbb{E}_{z_i \sim \theta_i} \left[ q_i(z_i) \right] = p_i & \forall i \in [N] \\[2ex] & 0 \leq q_i(z_i) \leq 1 \quad \theta_i\text{-a.s.} & \forall i \in [N]. \end{array} \right. \label{eq:upper-bound} \end{align} The inequality can be justified as follows. One may first add the redundant expectation constraints~$p_i = \mathbb{E}_{z_i \sim \theta} [q_i(z_i)]$ and the redundant $\theta_i$-almost sure constraints $0\leq q_i(z_i)\leq 1$ to the maximization problem over $ \theta$, $\bm p$ and $\bm q$ without affecting the problem's optimal value. Next, one may remove the constraints that express $p_i$ and $q_i(z_i)$ in terms of $r_i(\bm z)$. The resulting relaxation provides an upper bound on the original maximization problem. 
Note that all remaining expectation operators involve integrands that depend on~$\bm z$ only through~$z_i$ for some $i\in[N]$, and therefore the expectations with respect to the joint probability measure~$\theta$ can all be simplified to expectations with respect to one of the marginal probability measures~$\theta_i$. As neither the objective nor the constraints of the resulting problem depend on~$\theta$, we may finally remove~$\theta$ from the list of decision variables without affecting the problem's optimal value. For any fixed $\boldsymbol p \in \Delta^N$, the upper bounding problem~\eqref{eq:upper-bound} gives rise to the following $N$ subproblems indexed by~$i\in[N]$. \begin{subequations} \begin{align} \label{eq:dual:CVaR} \sup_{q_i \in \mathcal L(\mathbb{R})} \bigg\{ \mathbb{E}_{z_i \sim \theta_i} \left[ z_i q_i(z_i) \right]: \mathbb{E}_{z_i \sim \theta_i} \left[ q_i(z_i) \right] = p_i, ~ 0 \leq q_i(z_i) \leq 1 ~ \theta_i\text{-a.s.} \bigg\} \end{align} If $p_i > 0 $, the optimization problem~\eqref{eq:dual:CVaR} over the functions $q_i \in \mathcal L(\mathbb{R})$ can be recast as an optimization problem over probability measures $\tilde \theta_i \in \mathcal P(\mathbb{R})$ that are absolutely continuous with respect to~$\theta_i$, \begin{align} \label{eq:dual:CVaR2} \sup_{\tilde \theta_i \in \mathcal P(\mathbb{R})} \bigg\{ p_i \; \mathbb{E}_{z_i \sim \tilde \theta_i} \left[ z_i \right]: \frac{\mathrm{d} \tilde \theta_i}{\mathrm{d} \theta_i}(z_i) \leq \frac{1}{p_i} ~ \theta_i\text{-a.s.} \bigg\}, \end{align} \end{subequations} where $\mathrm{d} \tilde \theta_i / \mathrm{d} \theta_i $ denotes as usual the Radon-Nikodym derivative of $\tilde \theta_i$ with respect to $\theta_i$.
Indeed, if $q_i$ is feasible in~\eqref{eq:dual:CVaR}, then $\tilde \theta_i$ defined through $\tilde \theta_i[B]= \frac{1}{p_i} \int_B q_i(z_i) \theta_i(\mathrm{d} z_i)$ for all Borel sets $B\subseteq \mathbb{R}$ is feasible in~\eqref{eq:dual:CVaR2} and attains the same objective function value. Conversely, if $\tilde\theta_i$ is feasible in~\eqref{eq:dual:CVaR2}, then $q_i (z_i)= p_i \, \mathrm{d} \tilde \theta_i / \mathrm{d} \theta_i (z_i)$ is feasible in~\eqref{eq:dual:CVaR} and attains the same objective function value. Thus, \eqref{eq:dual:CVaR} and~\eqref{eq:dual:CVaR2} are indeed~equivalent. By \cite[Theorem~4.47]{follmer2004stochastic}, the optimal value of~\eqref{eq:dual:CVaR2} is given by $p_i \, \theta_i \text{-CVaR}_{p_i}(z_i) = \int_{1-p_i}^1 F_i^{-1}(t) \mathrm{d} t$, where $\theta_i \text{-CVaR}_{p_i}(z_i)$ denotes the CVaR of~$z_i$ at level~$p_i$ under~$\theta_i$. If $p_i = 0$, on the other hand, then the optimal value of~\eqref{eq:dual:CVaR} and the integral $\int_{1-p_i}^1 F_i^{-1}(t) \mathrm{d} t$ both evaluate to zero. Thus, the optimal value of the subproblem~\eqref{eq:dual:CVaR} coincides with $\int_{1-p_i}^1 F_i^{-1}(t) \mathrm{d} t$ irrespective of $p_i$. Substituting this optimal value into~\eqref{eq:upper-bound} finally yields the explicit upper bound \begin{align} \label{eq:upper:bound:choice} \sup_{\theta \in \Theta} ~ \mathbb{E}_{z \sim \theta} \Big[ \max\limits_{i \in [N]} u_i + z_i \Big] &\leq \sup_{\boldsymbol p \in \Delta^N} ~ \sum_{i=1}^N u_i p_i + \sum_{i=1}^N \int_{1-p_i}^1 F_i^{-1}(t) \mathrm{d} t. \end{align} Note that the objective function of the upper bounding problem on the right hand side of~\eqref{eq:upper:bound:choice} constitutes a sum of the strictly concave and differentiable univariate functions $u_i p_i + \int_{1-p_i}^1 F_i^{-1}(t) \mathrm{d} t$.
Indeed, the derivative of the $i^{\text{th}}$ function with respect to $p_i$ is given by $u_i + F_i^{-1}(1-p_i)$, which is strictly decreasing in~$p_i$ because $F_i$ is continuous by assumption, so that $F_i^{-1}$ is strictly increasing. The upper bounding problem in~\eqref{eq:upper:bound:choice} is thus solvable as it has a compact feasible set as well as a differentiable objective function. Moreover, the solution is unique thanks to the strict concavity of the objective function. In the following we denote this unique solution by~$\boldsymbol p^\star$. It remains to be shown that there exists a distribution $\theta^\star \in \Theta$ that attains the upper bound in~\eqref{eq:upper:bound:choice}. To this end, we define the functions $ q_i^\star(z_i) = \mathds{1}_{\{ z_i > F_i^{-1}(1 - p_i^\star) \}}$ for all $i \in [N]$. By \cite[Remark~4.48]{follmer2004stochastic}, $q_i^\star(z_i)$ is optimal in~\eqref{eq:dual:CVaR} for $p_i=p_i^\star$. In other words, we have $\mathbb{E}_{z_i \sim \theta_i} [q_i^\star(z_i)] = p_i^\star$ and $\mathbb{E}_{z_i \sim \theta_i}[z_i q_i^\star(z_i)] = \int_{1 - p_i^\star}^1 F_i^{-1}(t) \mathrm{d} t$. In addition, we also define the Borel measures $\theta_i^+$ and $\theta_i^-$ through \begin{align*} \theta_i^+(B) = \theta_i(B | z_i > F_i^{-1}(1 - p_i^\star)) \quad \text{and} \quad \theta_i^-(B) = \theta_i(B | z_i \leq F_i^{-1}(1 - p_i^\star)) \end{align*} for all Borel sets $B \subseteq \mathbb{R}$, respectively. By construction, $\theta_i^+$ is supported on~$(F_i^{-1}(1 - p_i^\star), \infty)$, while $\theta_i^-$ is supported on~$(-\infty, F_i^{-1}(1 - p_i^\star)]$. The law of total probability further implies that $\theta_i = p_i^\star \theta_i^+ + (1 - p_i^\star) \theta_i^-$.
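As a quick numerical sanity check (this sketch is our illustration, not part of the formal argument), one can verify for an empirical distribution that the tail indicator $q_i^\star(z_i) = \mathds{1}_{\{z_i > F_i^{-1}(1-p_i)\}}$ is feasible in the subproblem~\eqref{eq:dual:CVaR} and attains the value $\int_{1-p_i}^1 F_i^{-1}(t)\,\mathrm{d} t$; the sample size and probability level below are arbitrary choices:

```python
import numpy as np

# Empirical check of the CVaR subproblem: maximize E[z q(z)] subject to
# E[q(z)] = p and 0 <= q <= 1. The optimizer is the indicator of the upper tail.
rng = np.random.default_rng(0)
n, p = 1_000, 0.1                         # sample size and probability level (assumed)
z = rng.normal(size=n)                    # atoms of an empirical measure theta_i
z_sorted = np.sort(z)
k = int(p * n)                            # mass p corresponds to the k largest atoms

var = z_sorted[n - k - 1]                 # empirical quantile F_i^{-1}(1 - p)
q_star = (z > var).astype(float)          # tail indicator q_i^*(z_i)

obj_star = np.mean(z * q_star)            # E[z q*(z)] under the empirical measure
tail_integral = p * np.mean(z_sorted[n - k:])   # int_{1-p}^1 F_i^{-1}(t) dt (empirical cdf)

assert abs(np.mean(q_star) - p) < 1e-12   # feasibility: E[q*] = p
assert abs(obj_star - tail_integral) < 1e-12
assert obj_star >= p * np.mean(z)         # dominates the feasible constant choice q = p
```

The last assertion compares $q_i^\star$ against the feasible constant policy $q_i \equiv p_i$; any other feasible $q_i$ is dominated in the same way because it shifts mass away from the upper tail.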
In the remainder of the proof we will demonstrate that the maximization problem on the left hand side of~\eqref{eq:upper:bound:choice} is solved by the mixture distribution \begin{align*} \theta^\star = \sum_{j=1}^N p_j^\star \cdot \left( \otimes_{k=1}^{j-1} \theta_k^- \right) \otimes \theta_j^+ \otimes \left( \otimes_{k=j+1}^{N} \theta_k^- \right). \end{align*} This will show that the inequality in~\eqref{eq:upper:bound:choice} is in fact an equality, which in turn implies that the smooth $c$-transform is given by~\eqref{eq:regularized_c_transform}. We first prove that $\theta^\star \in \Theta$. To see this, note that for all $i \in [N]$ we have \begin{align*} \textstyle \theta^\star (z_i \leq s) = p_i^\star \theta_i^+ (z_i \leq s) + ( \sum_{j \neq i} p_j^\star ) \theta_i^- (z_i \leq s) = \theta_i (z_i \leq s) = F_i(s), \end{align*} where the second equality exploits the relation $\sum_{j \neq i} p_j^\star = 1 - p_i^\star$. This observation implies that $\theta^\star \in \Theta$. Next, we prove that $\theta^\star$ attains the upper bound in~\eqref{eq:upper:bound:choice}. 
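To make the construction of $\theta^\star$ concrete, consider the symmetric toy instance (our assumed inputs, chosen for illustration only) with $u_i = 0$ and $\theta_i$ uniform on $[0,1]$ for all $i\in[N]$, so that $p_i^\star = 1/N$ by symmetry and the upper bound in~\eqref{eq:upper:bound:choice} evaluates to $\sum_{i=1}^N \int_{1-1/N}^1 t \,\mathrm{d} t = 1 - 1/(2N)$. The following Monte Carlo sketch samples from the mixture $\theta^\star$ and checks that it preserves the marginals while attaining this bound, which strictly exceeds the value $N/(N+1)$ of the independent coupling:

```python
import numpy as np

rng = np.random.default_rng(1)
N, S = 4, 200_000                 # number of alternatives and Monte Carlo sample size
tau = 1 - 1 / N                   # threshold F_i^{-1}(1 - p_i^*) with p_i^* = 1/N

# Sample from theta^*: pick a mixture component j with probability p_j^* = 1/N, draw
# coordinate j from theta_j^+ = U[tau, 1] and all other coordinates from theta_k^- = U[0, tau].
j = rng.integers(N, size=S)
z = rng.uniform(0.0, tau, size=(S, N))
z[np.arange(S), j] = rng.uniform(tau, 1.0, size=S)

val = z.max(axis=1).mean()                       # E_{theta^*}[max_i z_i]
assert abs(val - (1 - 1 / (2 * N))) < 5e-3       # attains the upper bound 1 - 1/(2N)
assert val > N / (N + 1)                         # strictly beats the independent coupling
assert abs(z[:, 0].mean() - 0.5) < 5e-3          # first marginal has mean 1/2, as under U[0,1]
```

In this symmetric instance the maximum always equals the coordinate drawn from the upper-tail measure, which is exactly the mechanism exploited in the proof.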
By the definition of the binary function $\boldsymbol r$, we~have \begin{align*} \mathbb{E}_{\boldsymbol z \sim \theta^\star} \Big[ \max\limits_{i \in [N]} u_i + z_{i} \Big] &= \sum_{i=1}^N \mathbb{E}_{\boldsymbol z \sim \theta^\star} \left[ ( u_i + z_i) r_i(\boldsymbol z) \right] \\ &= \sum_{i=1}^N \mathbb{E}_{z_i \sim \theta_i} \left[ (u_i + z_i) \mathbb{E}_{\boldsymbol z \sim \theta^\star} \left[r_i(\boldsymbol z) | z_i \right] \right] \\ &= \sum_{i=1}^N \mathbb{E}_{z_i \sim \theta_i} \Big[ ( u_i + z_i) \, \theta^\star \Big( i = \min \argmax\limits_{j \in [N]} ~ u_j + z_j \big| z_i \Big) \Big] \\ &= \sum_{i=1}^N \mathbb{E}_{z_i \sim \theta_i} \left[ ( u_i + z_i) \, \theta^\star \left( z_j < u_i + z_i - u_j~ \forall j \neq i \big| z_i \right) \right], \end{align*} where the third equality holds because $r_i(\boldsymbol z)=1$ if and only if $i = \min \argmax_{j \in [N]} u_j + z_j$, and the fourth equality follows from the assumed continuity of the marginal distribution functions $F_i$, $i\in[N]$, which implies that $\theta^\star ( z_j = u_i + z_i - u_j \big| z_i ) = 0$ $\theta_i$-almost surely for all $i,j\in[N]$ with $j \neq i$.
Hence, we find \begin{subequations} \label{eq:both:exp} \begin{align} \mathbb{E}_{\boldsymbol z \sim \theta^\star} \Big[ \max\limits_{i \in [N]} u_i + z_{i} \Big] &= \sum_{i=1}^N p_i^\star\, \mathbb{E}_{z_i \sim \theta_i^+} \left[ ( u_i + z_i) \, \theta^\star \left( z_j < u_i + z_i - u_j~ \forall j \neq i \big| z_i \right) \right] \notag \\ &\quad + \sum_{i=1}^N (1 - p_i^\star)\, \mathbb{E}_{z_i \sim \theta_i^-} \left[ ( u_i + z_i) \, \theta^\star \left( z_j < u_i + z_i - u_j~ \forall j \neq i \big| z_i \right) \right] \notag \\ &= \displaystyle \sum_{i=1}^N p_i^\star\, \mathbb{E}_{z_i \sim \theta_i^+} \Big[ (u_i + z_i) \Big( \prod_{j \neq i} \theta_j^-(z_j < z_i + u_i - u_j) \Big) \Big] \label{eq:first:exp}\\ &\quad + \displaystyle \sum_{i=1}^N \sum_{j \neq i} p_j^\star \,\mathbb{E}_{z_i \sim \theta_i^-} \Big[ (u_i + z_i) \Big( \!\prod_{k \neq i, j} \theta_k^-(z_k < z_i + u_i - u_k) \Big) \theta_j^+(z_j < z_i + u_i - u_j) \Big], \label{eq:second:exp} \end{align} \end{subequations} where the first equality exploits the relation $\theta_i = p_i^\star \theta_i^+ + (1 - p_i^\star) \theta_i^-$, while the second equality follows from the definition of $\theta^\star$. The expectations in~\eqref{eq:both:exp} can be further simplified by using the stationarity conditions of the upper bounding problem in~\eqref{eq:upper:bound:choice}, which imply that the partial derivatives of the objective function with respect to the decision variables $p_i$, $i\in[N]$, are all equal at $\bm p=\bm p^\star$. Thus, $\boldsymbol p^\star$ must satisfy \begin{align} \label{eq:KKT} u_i + F_i^{-1}(1 - p_i^\star) = u_j + F_j^{-1}(1 - p_j^\star) \quad \forall i, j \in [N].
\end{align} Consequently, for every $z_i > F_i^{-1}(1 - p_i^\star)$ and $j\neq i$ we have \begin{align*} \theta_j^-(z_j < z_i + u_i - u_j) \geq \theta_j^-(z_j \leq F_i^{-1}(1 - p_i^\star) + u_i - u_j) = \theta_j^-(z_j \leq F_j^{-1}(1 - p_j^\star)) = 1, \end{align*} where the first equality follows from~\eqref{eq:KKT}, and the second equality holds because $\theta_j^-$ is supported on $(-\infty, F_j^{-1}(1 - p_j^\star)]$. As no probability can exceed~1, the above reasoning implies that $\theta_j^-(z_j < z_i + u_i - u_j)=1$ for all $z_i > F_i^{-1}(1 - p_i^\star)$ and $j\neq i$. Noting that $q_i^\star(z_i)= \mathds{1}_{\{ z_i > F_i^{-1}(1 - p_i^\star) \}}$ represents the characteristic function of the set $(F_i^{-1}(1 - p_i^\star), \infty)$ covering the support of $\theta_i^+$, the term~\eqref{eq:first:exp} can thus be simplified to \begin{align} \sum_{i=1}^N p_i^\star \,\mathbb{E}_{z_i \sim \theta_i^+} \Big[ (u_i + z_i) \Big( \prod_{j \neq i} \theta_j^-(z_j < z_i + u_i - u_j) \Big) q_i^\star(z_i) \Big] = \sum_{i=1}^N \mathbb{E}_{z_i \sim \theta_i} \left[ (u_i + z_i) q_i^\star(z_i) \right]. \label{eq:first:term} \end{align} Similarly, for any $z_i \leq F_i^{-1}(1 - p_i^\star)$ and $j\neq i$ we have \begin{align*} \theta_j^+(z_j < z_i + u_i - u_j) \leq \theta_j^+(z_j < F_i^{-1}(1 - p_i^\star) + u_i - u_j) = \theta_j^+(z_j < F_j^{-1}(1 - p_j^\star)) = 0, \end{align*} where the two equalities follow from~\eqref{eq:KKT} and the observation that $\theta_j^+$ is supported on $(F_j^{-1}(1 - p_j^\star), \infty)$, respectively. As probabilities are non-negative, the above implies that $\theta_j^+(z_j < z_i + u_i - u_j)=0$ for all $z_i \leq F_i^{-1}(1 - p_i^\star)$ and $j\neq i$.
Hence, as $\theta_i^-$ is supported on $(-\infty, F_i^{-1}(1 - p_i^\star)]$, the term~\eqref{eq:second:exp} simplifies to \begin{align*} \sum_{i=1}^N \sum_{j \neq i} p_j^\star \mathbb{E}_{z_i \sim \theta_i^-} \Big[ (u_i + z_i) \Big( \prod_{k \neq i, j} \theta_k^-(z_k < z_i + u_i - u_k) \Big) \theta_j^+(z_j < z_i + u_i - u_j) \mathds{1}_{\{ z_i \leq F_i^{-1}(1 - p_i^\star) \}} \Big] = 0. \end{align*} By combining the simplified reformulations of~\eqref{eq:first:exp} and~\eqref{eq:second:exp}, we finally obtain \begin{align*} \mathbb{E}_{\boldsymbol z \sim \theta^\star} \Big[ \max\limits_{i \in [N]} u_i + z_{i} \Big] = \sum_{i=1}^N \mathbb{E}_{z_i \sim \theta_i} \left[ ( u_i + z_i) q_i^\star(z_i) \right] = \sum_{i=1}^N u_i p_i^\star + \sum_{i=1}^N \int_{1-p_i^\star}^1 F_i^{-1}(t) \mathrm{d} t, \end{align*} where the last equality exploits the relations $\mathbb{E}_{z_i \sim \theta_i} [q_i^\star(z_i)] = p_i^\star$ and $\mathbb{E}_{z_i \sim \theta_i}[z_i q_i^\star(z_i)] = \int_{1 - p_i^\star}^1 F_i^{-1}(t) \mathrm{d} t$ derived in the first part of the proof. We have thus shown that the smooth $c$-transform is given by~\eqref{eq:regularized_c_transform}. Finally, by the envelope theorem~\citep[Theorem~2.16]{de2000mathematical}, the gradient $\nabla_{\boldsymbol \phi}\overline \psi(\boldsymbol \phi, \boldsymbol x)$ exists and coincides with the unique maximizer $\bm p^\star$ of the upper bounding problem in~\eqref{eq:regularized_c_transform}. \end{proof} The next theorem reveals that the smooth dual optimal transport problem~\eqref{eq:smooth_ot} with a marginal ambiguity set corresponds to a regularized primal optimal transport problem of the form~\eqref{eq:reg_ot_pri_abstract}.
\begin{theorem}[Fr\'echet regularization] \label{theorem:primal_dual} Suppose that $\Theta$ is a marginal ambiguity set of the form~\eqref{eq:marginal_ambiguity_set} and that the marginal cumulative distribution functions are defined through \begin{equation} \label{eq:marginal_dists} F_i(s) = \min\{1, \max\{0, 1-\eta_i F(-s)\}\} \end{equation} for some probability vector $\boldsymbol \eta \in \Delta^N$ and strictly increasing function $F: \mathbb{R} \to \mathbb{R}$ with $\int_0^1 F^{-1} (t) \mathrm{d} t = 0$. Then, the smooth dual optimal transport problem~\eqref{eq:smooth_ot} is equivalent to the regularized primal optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with $R_\Theta(\pi) = D_f(\pi \| \mu \otimes \eta)$, where $f(s) = \int_{0 }^{s} F^{-1}(t) \mathrm{d} t$ and $\eta = \sum_{i=1}^N \eta_i \delta_{y_i}$. \end{theorem} The function~$f(s)$ introduced in Theorem~\ref{theorem:primal_dual} is smooth and convex because its derivative $ \mathrm{d} f(s) / \mathrm{d} s = F^{-1}(s)$ is strictly increasing, and $f(1) = \int_0^1 F^{-1}(t) \mathrm{d} t=0$ by assumption. Therefore, this function induces a standard $f$-divergence. From now on we will refer to $F$ as the {\em marginal generating function}. \begin{proof} [Proof of Theorem~\ref{theorem:primal_dual}] By Proposition~\ref{proposition:regularized_ctrans}, the smooth dual optimal transport problem~\eqref{eq:smooth_ot} is equivalent~to \begin{align*} \overline{W}_{c}(\mu, \nu) &= \sup\limits_{ \boldsymbol {\phi}\in \mathbb{R}^N} ~ \mathbb{E}_{\boldsymbol x \sim \mu}\left[\min\limits_{\boldsymbol p\in \Delta^N} \sum\limits_{i=1}^N{\phi_i\nu_i}- \sum\limits_{i=1}^N(\phi_i - c(\boldsymbol x, \boldsymbol {y_i}))p_i - \sum_{i=1}^N \displaystyle\int_{1-p_i}^1 F_i^{-1}(t)\mathrm{d} t \right]. \end{align*} As $F$ is strictly increasing, we have $F_i^{-1}(s) = -F^{-1}((1-s) / \eta_i)$ for all $s \in (0, 1)$.
Thus, we find \begin{align} \label{eq:integral_rep_f} f(s) = \int_{0}^{s} F^{-1}(t) \mathrm{d} t = -\frac{1}{\eta_i} \int_{1}^{1 - s \eta_i} F^{-1} \left( \frac{1 - z}{\eta_i} \right) \mathrm{d} z= -\frac{1}{ \eta_i} \int_{1 - s \eta_i}^1 F_i^{-1}(z) \mathrm{d} z, \end{align} where the second equality follows from the variable substitution $z\leftarrow 1-\eta_i t$. This integral representation of~$f(s)$ then allows us to reformulate the smooth dual optimal transport problem as \begin{align*} \overline{W}_{c}(\mu, \nu)= \sup\limits_{ \boldsymbol {\phi}\in \mathbb{R}^N} ~ \mathbb{E}_{\boldsymbol x \sim \mu}\left[\min\limits_{\boldsymbol p\in \Delta^N} \sum\limits_{i=1}^N{\phi_i\nu_i}- \sum\limits_{i=1}^N(\phi_i - c(\boldsymbol x, \boldsymbol {y_i}))p_i + \sum\limits_{i=1}^N \eta_i \,f\left( \frac{p_i}{\eta_i} \right) \right], \end{align*} which is manifestly equivalent to problem~\eqref{eq:dual_regularized_ot} thanks to the definition of the discrete $f$-divergence. Lemma~\ref{lem:strong_dual_reg_ot} finally implies that the resulting instance of~\eqref{eq:dual_regularized_ot} is equivalent to the regularized primal optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with regularization term $R_\Theta (\pi ) = D_{f}(\pi\|\mu \otimes \eta)$. Hence, the claim follows. \end{proof} Theorem~\ref{theorem:primal_dual} imposes relatively restrictive conditions on the marginals of~$\boldsymbol z$. Indeed, it requires that all marginal distribution functions $F_i$, $i\in[N]$, must be generated by a single marginal generating function~$F$ through the relation~\eqref{eq:marginal_dists}. The following examples showcase, however, that the freedom to select~$F$ offers significant flexibility in designing various (existing as well as new) regularization schemes. Details of the underlying derivations are relegated to Appendix~\ref{appendix:derivations}. 
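To illustrate the correspondence between marginal generating functions~$F$ and divergence generators~$f$ established above, the following sketch (our illustration; the closed-form inverses are obtained by elementary algebra from the stated definitions of~$F$) numerically evaluates $f(s) = \int_0^s F^{-1}(t)\,\mathrm{d} t$ for two generating functions and recovers the entropic generator $\lambda s \log s$ and the $\chi^2$ generator $\lambda(s^2 - s)$, respectively:

```python
import numpy as np
from scipy.integrate import quad

# Recover divergence generators f(s) = int_0^s F^{-1}(t) dt from two marginal
# generating functions F; the parameter value lam is an arbitrary choice.
lam = 0.7

def Finv_exp(t):
    return lam * (1 + np.log(t))      # inverse of F(s) = exp(s/lam - 1)

def Finv_uni(t):
    return lam * (2 * t - 1)          # inverse of F(s) = s/(2 lam) + 1/2

for s in [0.2, 0.5, 0.9]:
    f_exp, _ = quad(Finv_exp, 0, s)   # quad handles the integrable log singularity at 0
    f_uni, _ = quad(Finv_uni, 0, s)
    assert abs(f_exp - lam * s * np.log(s)) < 1e-6   # entropic generator lam*s*log(s)
    assert abs(f_uni - lam * (s**2 - s)) < 1e-6      # chi^2 generator lam*(s^2 - s)
```

The same recipe applies to any strictly increasing~$F$ with $\int_0^1 F^{-1}(t)\,\mathrm{d} t = 0$, which is how the remaining generator pairs discussed below can be checked.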
{\color{black} Table~\ref{tab:summary_examples} summarizes the marginal generating functions~$F$ studied in these examples and lists the corresponding divergence generators~$f$.} \begin{table}[h!] \scriptsize \centering {\color{black} \begin{tabular}{||l|c|c|c||} \hline \hline Marginal Distribution & $F(s)$ & $f(s)$ & Regularization \\ \hline\hline Exponential & $ \exp(s/\lambda - 1)$ & $\lambda s \log(s)$ & Entropic \\ \hline Uniform & $s/(2\lambda) + 1/2$ &$\lambda (s^2 - s)$ &$\chi^2$-divergence\\\hline Pareto & $(s(q-1)/(\lambda q) + 1/q)^{\frac{1}{q-1}}$ & $\lambda (s^q - s) / (q-1)$ & Tsallis divergence\\ \hline Hyperbolic cosine & ${\sinh(s/\lambda \!-\! k),~k = \sqrt{2} \!-\! 1\!-\! \textrm{arcsinh}(1)} $ & $\lambda(s\, \text{arcsinh}(s) \!-\! \sqrt{s^2 \!+\!1} \!+\! 1\! +\! ks)$ & Hyperbolic divergence\\ \hline $t$-distribution &$\frac{N}{2}\left(1 + \frac{s-\sqrt{N\!-\!1}}{\sqrt{\lambda^2 + (s \!-\! \sqrt{N\!-\!1})^2}}\right)$ & \!\!\!\!$\begin{cases}-\lambda \sqrt{s(N\!-\!s)} \! +\! \lambda s \sqrt{N\!-\!1} ~&\text{if}~0\leq \!s\! \leq N\\ +\infty &\text{if} ~s\!>\!N\end{cases}$& Chebyshev\\ \hline\hline \end{tabular} \caption{\color{black} Marginal generating functions~$F$ with parameter~$\lambda$ and corresponding divergence generators~$f$.} \label{tab:summary_examples}} \end{table} \begin{example}[Exponential distribution model] \label{ex:exp} Suppose that $\Theta$ is a marginal ambiguity set with (shifted) exponential marginals of the form~\eqref{eq:marginal_dists} induced by the generating function $F(s) = \exp(s / \lambda - 1)$ with $\lambda > 0$.
Then the smooth dual optimal transport problem~\eqref{eq:smooth_ot} is equivalent to the regularized optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with an entropic regularizer of the form $R_\Theta(\pi) = D_f(\pi \| \mu \otimes \eta)$, where $f(s) =\lambda s \log(s)$, while the smooth $c$-transform~\eqref{eq:smooth_c_transform} reduces to the log-partition function~\eqref{eq:partition:function}. This example shows that entropic regularizers are not only induced by singleton ambiguity sets containing a generalized extreme value distribution (see Section~\ref{sec:gevm}) but also by marginal ambiguity sets with exponential marginals. \end{example} \begin{example}[Uniform distribution model] \label{ex:uniform} Suppose that $\Theta$ is a marginal ambiguity set with uniform marginals of the form~\eqref{eq:marginal_dists} induced by the generating function $F(s) = s/(2\lambda) + 1/2$ with $\lambda > 0$. In this case the smooth dual optimal transport problem~\eqref{eq:smooth_ot} is equivalent to the regularized optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with a $\chi^2$-divergence regularizer of the form $R_\Theta(\pi) = D_f(\pi \| \mu \otimes \eta)$, where $f(s) = \lambda (s^2 -s)$. Such regularizers were previously investigated by \citet{blondel2017smooth} and~\citet{seguy2017large} under the additional assumption that $\eta_i$ is independent of $i\in[N]$, yet their intimate relation to noise models with uniform marginals remained undiscovered until now. 
In addition, the smooth $c$-transform~\eqref{eq:smooth_c_transform} satisfies \begin{align*} \overline\psi(\boldsymbol \phi, \boldsymbol x) = \lambda + \lambda \spmax_{i \in [N]} \;\frac{\phi_i - c(\boldsymbol x, \boldsymbol {y_i})}{\lambda}, \end{align*} where the sparse maximum operator `$\spmax$' inspired by \citet{sparsemax} is defined through \begin{align} \label{eq:spmax} \spmax_{i \in [N]} \; u_i = \max_{\boldsymbol p \in \Delta^N} \; \sum_{i=1}^N u_i p_i - {p_i^2}/{\eta_i} \qquad \forall \bm u\in\mathbb{R}^N. \end{align} The envelope theorem~\citep[Theorem~2.16]{de2000mathematical} ensures that $\spmax_{i \in[N]} u_i$ is smooth and that its gradient with respect to~$\bm u$ is given by the unique solution~$\bm p^\star$ of the maximization problem on the right hand side of~\eqref{eq:spmax}. We note that $\bm p^\star$ has many zero entries due to the sparsity-inducing nature of the problem's simplicial feasible set. In addition, we have $\lim_{\lambda\downarrow 0} \lambda \spmax_{i \in [N]} u_i/\lambda = \max_{i\in[N]}u_i$. Thus, the sparse maximum can indeed be viewed as a smooth approximation of the ordinary maximum. In marked contrast to the more widely used LogSumExp function, however, the sparse maximum has a sparse gradient. Proposition~\ref{proposition:spmax} in Appendix~\ref{appendix:spmax} shows that $\bm p^\star$ can be computed efficiently by sorting. \end{example} \begin{example}[Pareto distribution model] \label{ex:pareto} Suppose that $\Theta$ is a marginal ambiguity set with (shifted) Pareto distributed marginals of the form~\eqref{eq:marginal_dists} induced by the generating function $F(s) = (s (q-1) / (\lambda q)+1/q)^{1/(q-1)}$ with $\lambda,q>0$. Then the smooth dual optimal transport problem~\eqref{eq:smooth_ot} is equivalent to the regularized optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with a Tsallis divergence regularizer of the form $R_\Theta(\pi) = D_f(\pi \| \mu \otimes \eta)$, where $f(s) = \lambda (s^q - s)/(q-1)$. 
Such regularizers were investigated by~\citet{muzellec2017tsallis} under the additional assumption that $\eta_i$ is independent of $i\in[N]$. The Pareto distribution model encapsulates the exponential model (in the limit $q\to 1$) and the uniform distribution model (for $q=2$) as special cases. The smooth $c$-transform admits no simple closed-form representation under this model. \end{example} \begin{example}[Hyperbolic cosine distribution model] \label{ex:hyperbolic} Suppose that $\Theta$ is a marginal ambiguity set with hyperbolic cosine distributed marginals of the form~\eqref{eq:marginal_dists} induced by the generating function $F(s) = \sinh(s/\lambda - k)$ with $k = \sqrt{2} - 1 - \textrm{arcsinh}(1)$ and $\lambda > 0$. Then the marginal probability density functions are given by scaled and truncated hyperbolic cosine functions, and the smooth dual optimal transport problem~\eqref{eq:smooth_ot} is equivalent to the regularized optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with a hyperbolic divergence regularizer of the form $R_\Theta(\pi) = D_f(\pi \| \mu \otimes \eta)$, where $f(s) = \lambda(s \hspace{0.1em} \textrm{arcsinh}(s) - \sqrt{s^2 + 1} + 1 + ks)$. Hyperbolic divergences were introduced by \citet{ghai2019exponentiated} in order to unify several gradient descent algorithms. \end{example} \begin{example}[$t$-distribution model] \label{ex:t-distribution} Suppose that $\Theta$ is a marginal ambiguity set where the marginals are determined by~\eqref{eq:marginal_dists}, and assume that the generating function is given by \[ F(s) = \frac{N}{2}\left(1 + \frac{s - \sqrt{N-1}} {\sqrt{\lambda^2 + (s - \sqrt{N-1})^{2}}}\right) \] for some $\lambda > 0$. In this case one can show that all marginals constitute $t$-distributions with $2$ degrees of freedom.
In addition, one can show that the smooth dual optimal transport problem~\eqref{eq:smooth_ot} is equivalent to the Chebyshev regularized optimal transport problem described in Proposition~\ref{prop:chebyshev-regularization}. \end{example} To close this section, we remark that different regularization schemes differ as to how well they approximate the original (unregularized) optimal transport problem. Proposition~\ref{prop:approx_bound} provides simple error bounds that may help in selecting suitable regularizers. For the entropic regularization scheme associated with the exponential distribution model of Example~\ref{ex:exp}, for example, the error bound evaluates to $\max_{i\in [N]}\lambda \log(1/\eta_i)$, while for the $\chi^2$-divergence regularization scheme associated with the uniform distribution model of Example~\ref{ex:uniform}, the error bound is given by $\max_{i \in [N]}\lambda (1/\eta_i - 1)$. In both cases, the error bound is minimized by setting~$\eta_i = 1/N $ for all $i \in [N]$. Thus, the error bound grows logarithmically with $N$ for entropic regularization and linearly with $N$ for $\chi^2$-divergence regularization. Different regularization schemes also differ with regard to their computational properties, which will be discussed in Section~\ref{sec:computation}. \section{Numerical Solution of Smooth Optimal Transport Problems} \label{sec:computation} The smooth semi-discrete optimal transport problem~\eqref{eq:smooth_ot} constitutes a stochastic optimization problem and can therefore be addressed with a stochastic gradient descent (SGD) algorithm. In Section~\ref{section:AGD} we first derive new convergence guarantees for an averaged gradient descent algorithm that only has access to a biased stochastic gradient oracle. This algorithm outputs the uniform average of the iterates (instead of the last iterate) as the recommended candidate solution.
We prove that if the objective function is Lipschitz continuous, then the suboptimality of this candidate solution is of the order~$\mathcal O(1/\sqrt{T})$, where $T$ stands for the number of iterations. An improvement in the non-leading terms is possible if the objective function is additionally smooth. We further prove that a convergence rate of $\mathcal O(1/{T})$ can be obtained for generalized self-concordant objective functions. In Section~\ref{section:ASGD-OT} we then show that the algorithm of Section~\ref{section:AGD} can be used to efficiently solve the smooth semi-discrete optimal transport problem~\eqref{eq:smooth_ot} corresponding to a marginal ambiguity set of the type~\eqref{eq:marginal_ambiguity_set}. As a byproduct, we prove that the convergence rate of the averaged SGD algorithm for the semi-discrete optimal transport problem with {\em entropic} regularization is of the order~$\mathcal O(1/T)$, which improves the $\mathcal O(1/\sqrt{T})$ guarantee of~\citet{genevay2016stochastic}. \subsection{Averaged Gradient Descent Algorithm with Biased Gradient Oracles} \label{section:AGD} Consider a general convex minimization problem of the form \begin{equation} \label{eq:convex:problem} \min_{\bm \phi \in \mathbb{R}^n} ~ h(\bm \phi), \end{equation} where the objective function $h: \mathbb{R}^n \to \mathbb{R}$ is convex and differentiable. We assume that problem~\eqref{eq:convex:problem} admits a minimizer $\bm \phi^\star$. We study the convergence behavior of the inexact gradient descent algorithm \begin{equation} \label{eq:gd} \bm \phi_{t} = \bm \phi_{t-1} - \gamma \bm g_t(\bm \phi_{t-1}), \end{equation} where $\gamma > 0$ is a fixed step size, $\bm \phi_0$ is a given deterministic initial point and the function $\bm g_t: \mathbb{R}^n \to \mathbb{R}^n$ is an inexact gradient oracle that returns for every fixed $\bm \phi\in\mathbb{R}^n$ a random estimate of the gradient of~$h$ at~$\bm \phi$. 
Note that we allow the gradient oracle to depend on the iteration counter~$t$, which makes it possible to account for increasingly accurate gradient estimates. In contrast to the previous sections, we henceforth model all random objects as measurable functions on an abstract filtered probability space $(\Omega, \mathcal F, (\mathcal F_t)_{t \geq 0}, \mathbb P)$, where $\mathcal{F}_0 = \{ \emptyset,\Omega \}$ represents the trivial $\sigma$-field, while the gradient oracle $\bm g_t(\bm \phi)$ is $\mathcal F_t$-measurable for all $t\in\mathbb N$ and $\bm \phi \in\mathbb{R}^n$. In order to avoid clutter, we use $\mathbb E[\cdot]$ to denote the expectation operator with respect to~$\mathbb P$, and all inequalities and equalities involving random variables are understood to hold $\mathbb P$-almost surely. In the following we analyze the effect of averaging in inexact gradient descent algorithms. We will show that after $T$ iterations with a constant step size~$\gamma = \mathcal O(1 / \sqrt{T})$, the objective function value of the uniform average of all iterates generated by~\eqref{eq:gd} converges to the optimal value of~\eqref{eq:convex:problem} at a sublinear rate. Specifically, we will prove that the rate of convergence varies between $\mathcal{O}(1 / \sqrt{T})$ and $\mathcal{O}(1/T)$ depending on properties of the objective function. Our convergence analysis will rely on several regularity conditions. \begin{assumption}[Regularity conditions] Different combinations of the following regularity conditions will enable us to establish different convergence guarantees for the averaged inexact gradient descent algorithm.
\label{assumption:main}~ \begin{enumerate}[label=(\roman*)] \item \textbf{Biased gradient oracle:} \label{assumption:main:gradients} There exist tolerances $\varepsilon_t>0$, $t\in\mathbb N\cup\{0\}$, such that \begin{align*} \left\| \mathbb{E} \left[ \bm g_t(\bm \phi_{t-1}) \big| \mathcal F_{t-1} \right] - \nabla h(\bm \phi_{t-1}) \right\| \leq \varepsilon_{t-1}\quad \forall t\in\mathbb N. \end{align*} \item \textbf{Bounded gradients:} \label{assumption:main:bounded} There exists $R > 0$ such that $$ \| \nabla h(\bm \phi) \| \leq R\quad \text{and} \quad \| \bm g_t(\bm \phi) \| \leq R \quad \forall \bm \phi \in \mathbb{R}^n,~ \forall t \in \mathbb N. $$ \item \textbf{Generalized self-concordance:} \label{assumption:main:concordance} The function~$h$ is $M$-generalized self-concordant for some $M > 0$, that is, $h$ is three times differentiable, and for any $\bm \phi, \bm \phi' \in \mathbb{R}^n$ the function $u(s) = h(\bm \phi + s (\bm \phi' - \bm \phi))$ satisfies the inequality $$ \left| \frac{\mathrm{d}^3 u(s)}{\mathrm{d} s^3} \right| \leq M \| \bm \phi - \bm \phi' \| \, \frac{\mathrm{d}^2 u(s)}{\mathrm{d} s^2} \quad \forall s \in \mathbb{R}.$$ \item \textbf{Lipschitz continuous gradient:} \label{assumption:main:smooth} The function $h$ is $L$-smooth for some $L > 0$, that is, we have $$ \| \nabla h(\bm \phi) - \nabla h(\bm \phi') \| \leq L \| \bm \phi - \bm \phi' \| \quad \forall \bm \phi, \bm \phi' \in \mathbb{R}^n. $$ \item \textbf{Bounded second moments:} \label{assumption:main:moment} There exists $\sigma > 0$ such that \begin{align*} \mathbb{E} \left[\left\| \bm g_t(\bm \phi_{t-1}) - \nabla h(\bm \phi_{t-1}) \right\|^2 | \mathcal F_{t-1} \right] \leq \sigma^2 \quad \forall t \in \mathbb N. \end{align*} \end{enumerate} \end{assumption} The averaged gradient descent algorithm with biased gradient oracles lends itself to solving both deterministic as well as stochastic optimization problems.
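Before analyzing convergence formally, the following minimal sketch (our illustration, not the paper's experiments; the test objective, step size and bias direction are assumed choices) implements iteration~\eqref{eq:gd} with a biased and noisy oracle whose bias decays as $\varepsilon_t = \bar\varepsilon/(2\sqrt{1+t})$, in line with condition~\ref{assumption:main}\,\ref{assumption:main:gradients}, and reports the Polyak-Ruppert average of the iterates:

```python
import numpy as np

rng = np.random.default_rng(2)
phi_opt = np.array([1.0, -2.0])

def h(phi):                                   # smooth convex test objective (assumed)
    return 0.5 * np.sum((phi - phi_opt) ** 2)

T = 20_000
gamma = 1.0 / np.sqrt(T)                      # constant step size of order O(1/sqrt(T))
eps_bar = 0.5                                 # bias budget: eps_t = eps_bar / (2 sqrt(1+t))

phi = np.zeros(2)
avg = np.zeros(2)
for t in range(1, T + 1):
    bias = eps_bar / (2 * np.sqrt(1 + t)) * np.array([1.0, 0.0])
    noise = rng.uniform(-1.0, 1.0, size=2)    # bounded zero-mean noise
    g = (phi - phi_opt) + bias + noise        # biased stochastic gradient oracle g_t
    avg += phi / T                            # Polyak-Ruppert average of phi_0,...,phi_{T-1}
    phi = phi - gamma * g                     # inexact gradient step

assert h(avg) < 1e-2                          # averaged iterate is nearly optimal
assert h(avg) < h(np.zeros(2))                # and far better than the initial point
```

Despite the persistent oracle bias, the decaying tolerance schedule keeps the averaged iterate close to the minimizer, which is the qualitative behavior the guarantees below make precise.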
In deterministic optimization, the gradient oracles $\bm g_t$ are deterministic and output inexact gradients satisfying $\| \bm g_t(\bm \phi) - \nabla h(\bm \phi) \| \leq \varepsilon_t$ for all $\bm \phi\in \mathbb{R}^n$, where the tolerances $\varepsilon_t$ bound the errors associated with the numerical computation of the gradients. A vast body of literature on deterministic optimization focuses on exact gradient oracles for which these tolerances can be set to~$0$. Inexact deterministic gradient oracles with bounded error tolerances are investigated by \citet{nedic2001convergence} and \citet{d2008smooth}. In this case exact convergence to $\bm \phi^\star$ is not possible. If the error bounds decrease to~$0$, however, \citet{luo1993error, schmidt2011convergence} and \citet{friedlander2012hybrid} show that adaptive gradient descent algorithms are guaranteed to converge to $\bm \phi^\star$. In stochastic optimization, the objective function is representable as $h(\bm \phi) = \mathbb{E} [H(\bm \phi, \bm x)]$, where the marginal distribution of the random vector $\bm x$ under $\mathbb P$ is given by~$\mu$, while the integrand $H(\bm \phi,\bm x)$ is convex and differentiable in~$\bm \phi$ and $\mu$-integrable in~$\bm x$. In this setting it is convenient to use gradient oracles of the form $\bm g_t(\bm \phi) = \nabla_{\bm \phi} H(\bm \phi, \bm x_t)$ for all $t \in \mathbb N$, where the samples $\bm x_t$ are drawn independently from~$\mu$. As these oracles output unbiased estimates for $\nabla h(\bm \phi)$, all tolerances $\varepsilon_t$ in Assumption~\ref{assumption:main}\,\ref{assumption:main:gradients} may be set to~$0$. SGD algorithms with unbiased gradient oracles date back to the seminal paper by~\citet{robbins1951stochastic}. Nowadays, averaged SGD algorithms with Polyak-Ruppert averaging figure among the most popular variants of the SGD algorithm~\citep{ruppert1988efficient, polyak1992acceleration, nemirovski2009robust}.
For general convex objective functions the best possible convergence rate of any averaged SGD algorithm run over~$T$ iterations amounts to $\mathcal{O}(1 / \sqrt{T})$, but it improves to $\mathcal{O}(1 / T)$ if the objective function is strongly convex; see for example \citep{nesterov2008confidence, nemirovski2009robust, shalev2009stochastic, duchi2009efficient, xiao2010dual, moulines2011non, shalev2011pegasos, lacoste2012simpler}. While smoothness plays a critical role to achieve acceleration in deterministic optimization, it only improves the constants in the convergence rate in stochastic optimization \citep{srebro2010optimistic, dekel2012optimal, lan2012optimal, cohen2018acceleration, kavis2019unixgrad}. In fact, \citet{tsybakov2003optimal} demonstrates that smoothness does not provide any acceleration in general, that is, the best possible convergence rate of any averaged SGD algorithm still cannot be improved beyond $\mathcal{O}(1 / \sqrt{T})$. Nevertheless, a substantial acceleration is possible when focusing on special problem classes such as linear or logistic regression problems \citep{bach2013adaptivity, bach2013non, hazan2014logistic}. In these special cases, the improvement in the convergence rate is facilitated by a generalized self-concordance property of the objective function~\citep{bach2010self}. Self-concordance was originally introduced in the context of Newton-type interior point methods \citep{nesterov1994interior} and later generalized to facilitate the analysis of probabilistic models \citep{bach2010self} and second-order optimization algorithms \citep{sun2019generalized}. In the following we analyze the convergence properties of the averaged SGD algorithm when we only have access to an {\em inexact} stochastic gradient oracle, in which case the tolerances $\varepsilon_t$ cannot be set to~$0$.
To the best of our knowledge, inexact stochastic gradient oracles have only been considered by~\citet{cohen2018acceleration, hu2020analysis} and \citet{ajalloeian2020analysis}. Specifically, \citet{hu2020analysis} use sequential semidefinite programs to analyze the convergence rate of the averaged SGD algorithm when~$\mu$ has finite support. In contrast, we do not impose any restrictions on the support of~$\mu$. \citet{cohen2018acceleration} and \citet{ajalloeian2020analysis}, on the other hand, study the convergence behavior of accelerated gradient descent algorithms for smooth stochastic optimization problems under the assumption that~$\bm \phi$ ranges over a compact domain. The proposed algorithms necessitate a projection onto the compact feasible set in each iteration. Our convergence analysis, in contrast, does not rely on any compactness assumptions. We note that compactness assumptions have been critical for the convergence analysis of the averaged SGD algorithm in the context of convex stochastic optimization \citep{nemirovski2009robust, dekel2012optimal, bubeck2015convex, cohen2018acceleration}. By leveraging a trick due to \citet{bach2013adaptivity}, however, we can relax this assumption provided that the objective function is Lipschitz continuous. \begin{proposition} \label{proposition:moments} Consider the inexact gradient descent algorithm~\eqref{eq:gd} with constant step size $\gamma > 0$. If Assumptions~\ref{assumption:main}\,\ref{assumption:main:gradients}--\ref{assumption:main:bounded} hold with $\varepsilon_t \leq {\bar \varepsilon}/{(2\sqrt{1+t})}$ for some $\bar \varepsilon \geq 0$, then we have for all $ p \in \mathbb N$ that \begin{align*} \mathbb{E} \left[ \left( h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \right)^p \right]^{1/p} \leq \frac{\| \bm \phi_0 - \bm \phi^\star \|^2}{\gamma T} + 20 \gamma \left( R + \bar \varepsilon \right)^2 p. 
\end{align*} If additionally Assumption~\ref{assumption:main}\,\ref{assumption:main:concordance} holds and if $G = \max\{ M, R + \bar \varepsilon \}$, then we have for all $ p \in \mathbb N$ that \begin{align*} \mathbb{E} \left[ \left\| \nabla h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) \right\|^{2p} \right]^{1/p} &\leq \frac{G^{2}}{T} \left( 10 \sqrt{p} + \frac{4p}{\sqrt{T}} + 80 G^2 \gamma \sqrt{T} p + \frac{2 \| \bm \phi_0 - \bm \phi^\star \|^2}{\gamma \sqrt{T}} + \frac{3 \| \bm \phi_0 - \bm \phi^\star \|}{G \gamma \sqrt{T}} \right)^2. \end{align*} \end{proposition} The proof of Proposition~\ref{proposition:moments} relies on two lemmas. In order to state these lemmas concisely, we define the $L_p$-norm of a random variable $\bm z \in \mathbb{R}^n$ for any $p > 0$ through $\| \bm z \|_{L_p} = \left( \mathbb{E} \left[ \| \bm z \|^p \right] \right)^{1/p}$. For any random variables $\bm z, \bm z' \in \mathbb{R}^n$ and $p \geq 1$, Minkowski's inequality~\citep[\S~2.11]{boucheron2013concentration} then states that \begin{equation} \label{eq:minkowski} \| \bm z + \bm z' \|_{L_p} \leq \| \bm z \|_{L_p} + \| \bm z' \|_{L_p}. \end{equation} Another essential tool for proving Proposition~\ref{proposition:moments} is the Burkholder-Rosenthal-Pinelis (BRP) inequality~\citep[Theorem~4.1]{pinelis1994optimum}, which we restate below without proof to keep this paper self-contained. \begin{lemma}[BRP inequality] \label{lemma:BRP} Let $\bm z_t$ be an $\mathcal F_t$-measurable random variable for every $t\in\mathbb N$, and assume that $p \geq 2$. If $\mathbb{E}[\bm z_t | \mathcal F_{t-1}] = 0$ and $\| \bm z_t \|_{L_p}<\infty$ for all $t \in [T]$, then we have \begin{align*} \left\| \max_{t \in [T]} \left\| \sum_{k=1}^t \bm z_k \right\| \right\|_{L_p} \leq \sqrt{p} \left\| \sum_{t=1}^T \mathbb{E}[ \| \bm z_t \|^2 | \mathcal F_{t-1}] \right\|_{L_{p/2}}^{1/2} + p \left\| \max_{t \in [T]} \| \bm z_t \| \right\|_{L_p}. 
\end{align*} \end{lemma} The following lemma reviews two useful properties of generalized self-concordant functions. \begin{lemma}[Generalized self-concordance] \label{lemma:concordance} Assume that the objective function $h$ of the convex optimization problem~\eqref{eq:convex:problem} is $M$-generalized self-concordant in the sense of Assumption~\ref{assumption:main}\,\ref{assumption:main:concordance} for some $M>0$. \begin{enumerate} [label=(\roman*)] \item \label{lemma:smoothness} {\citep[Appendix~D.2]{bach2013adaptivity}} For any sequence $\bm \phi_0, \dots, \bm \phi_{T-1} \in \mathbb{R}^n$, we have \begin{align*} \left\| \nabla h \left( \frac{1}{T} \sum_{t=1}^T \bm \phi_{t-1} \right) - \frac{1}{T} \sum_{t=1}^T \nabla h(\bm \phi_{t-1}) \right\| \leq 2 M \left( \frac{1}{T} \sum_{t=1}^T h(\bm \phi_{t-1}) - h(\bm \phi^\star) \right). \end{align*} \item \label{lemma:stong:convexity} {\citep[Lemma~9]{bach2013adaptivity}} For any $\bm \phi \in \mathbb{R}^n$ with $ \| \nabla h(\bm \phi) \| \leq 3 \kappa / (4 M) $, where $\kappa$ is the smallest eigenvalue of $\nabla^2 h(\bm \phi^\star)$, and $\bm \phi^\star$ is the optimizer of~\eqref{eq:convex:problem}, we have $ h(\bm \phi) - h(\bm \phi^\star) \leq 2 {\| \nabla h(\bm \phi) \|^2}/{\kappa}.$ \end{enumerate} \end{lemma} Armed with Lemmas~\ref{lemma:BRP} and~\ref{lemma:concordance}, we are now ready to prove Proposition~\ref{proposition:moments}. \begin{proof}[Proof of Proposition~\ref{proposition:moments}] The first claim generalizes Proposition~5 of \citet{bach2013adaptivity} to inexact gradient oracles. 
By the assumed convexity and differentiability of the objective function $h$, we have \begin{align} \label{eq:lip:update} h(\bm \phi_{k-1}) &\leq h(\bm \phi^\star) + \nabla h(\bm \phi_{k-1})^\top (\bm \phi_{k-1} - \bm \phi^\star) \\ &= h(\bm \phi^\star) + \bm g_k(\bm \phi_{k-1})^\top (\bm \phi_{k-1} - \bm \phi^\star) + \left( \nabla h(\bm \phi_{k-1}) - \bm g_k(\bm \phi_{k-1}) \right)^\top (\bm \phi_{k-1} - \bm \phi^\star). \notag \end{align} In addition, elementary algebra yields the recursion \begin{equation*} \| \bm \phi_{k} - \bm \phi^\star \|^2 = \| \bm \phi_{k} - \bm \phi_{k-1} \|^2 + \| \bm \phi_{k-1} - \bm \phi^\star \|^2 + 2 (\bm \phi_{k} - \bm \phi_{k-1})^\top (\bm \phi_{k-1} - \bm \phi^\star). \end{equation*} Thanks to the update rule~\eqref{eq:gd}, this recursion can be re-expressed as \begin{equation*} \bm g_k(\bm \phi_{k-1})^\top (\bm \phi_{k-1} - \bm \phi^\star) = \frac{1}{2 \gamma} \left( \gamma^2 \| \bm g_k(\bm \phi_{k-1}) \|^2 + \| \bm \phi_{k-1} - \bm \phi^\star \|^2 - \| \bm \phi_{k} - \bm \phi^\star \|^2 \right), \end{equation*} where $\gamma > 0$ is an arbitrary step size. Combining the above identity with~\eqref{eq:lip:update} then yields \begin{align*} & ~h(\bm \phi_{k-1}) \\ \leq & ~h(\bm \phi^\star) + \frac{1}{2 \gamma} \left( \gamma^2 \| \bm g_k(\bm \phi_{k-1}) \|^2 + \| \bm \phi_{k-1} - \bm \phi^\star \|^2 - \| \bm \phi_{k} - \bm \phi^\star \|^2 \right) + \left( \nabla h(\bm \phi_{k-1}) - \bm g_k(\bm \phi_{k-1}) \right)^\top \! (\bm \phi_{k-1} - \bm \phi^\star) \\ \leq & ~h(\bm \phi^\star) + \frac{1}{2 \gamma} \left( \gamma^2 R^2 + \| \bm \phi_{k-1} - \bm \phi^\star \|^2 - \| \bm \phi_{k} - \bm \phi^\star \|^2 \right) + \left( \nabla h(\bm \phi_{k-1}) - \bm g_k(\bm \phi_{k-1}) \right)^\top (\bm \phi_{k-1} - \bm \phi^\star), \end{align*} where the last inequality follows from Assumption~\ref{assumption:main}\,\ref{assumption:main:bounded}. 
Summing this inequality over $k = 1, \dots, t$ then shows that \begin{align} \label{eq:bound:A} 2 \gamma \sum_{k=1}^t \big( h ( \bm \phi_{k-1}) - h(\bm \phi^\star) \big) + \| \bm \phi_{t} - \bm \phi^\star \|^2 \leq A_t, \end{align} where \begin{align*} A_t = t \gamma^2 R^2 + \| \bm \phi_{0} - \bm \phi^\star \|^2 + \sum_{k=1}^t B_k \quad \text{and} \quad B_t = 2 \gamma \left( \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \right)^\top (\bm \phi_{t-1} - \bm \phi^\star) \end{align*} for all $t \in\mathbb N$. Note that the left-hand side of~\eqref{eq:bound:A} is non-negative because $\bm \phi^\star$ is a global minimizer of $h$, which implies that the random variable $A_t$ is also non-negative for all $t\in\mathbb N$. For later use we further define $A_0 = \| \bm \phi_{0} - \bm \phi^\star \|^2$. The estimate~\eqref{eq:bound:A} for $t=T$ then implies via the convexity of $h$ that \begin{align} \label{eq:bound:A:convexity} h \left( \frac{1}{T} \sum_{t=1}^T \bm \phi_{t-1} \right) - h(\bm \phi^\star) \leq \frac{A_T}{2 \gamma T }, \end{align} where we dropped the non-negative term $\|\bm \phi_T-\bm \phi^\star\|^2/(2\gamma T)$ without invalidating the inequality. In the following we analyze the $L_p$-norm of $A_T$ in order to obtain the desired bounds from the proposition statement. To do so, we distinguish three different regimes for $p \in \mathbb N$, and we show that the $L_p$-norm of the non-negative random variable $A_T$ is upper bounded by an affine function of~$p$ in each of these regimes. \textbf{Case I ($p \geq T / 4$):} By using the update rule~\eqref{eq:gd} and Assumption~\ref{assumption:main}\,\ref{assumption:main:bounded}, one readily verifies that \begin{align*} \| \bm \phi_k - \bm \phi^\star \| \leq \| \bm \phi_{k-1} - \bm \phi^\star \| + \| \bm \phi_k - \bm \phi_{k-1} \| \leq \| \bm \phi_{k-1} - \bm \phi^\star \| + \gamma R. 
\end{align*} Iterating the above recursion $k$ times then yields the conservative estimate $\| \bm \phi_k - \bm \phi^\star \|\leq \| \bm \phi_{0} - \bm \phi^\star \| + k \gamma R$. By the definitions of $A_t$ and $B_t$ for $t\in\mathbb N$, we thus have \begin{align*} A_t &\textstyle = t \gamma^2 R^2 + \| \bm \phi_{0} - \bm \phi^\star \|^2 + 2 \gamma \sum_{k=1}^t \left( \nabla h(\bm \phi_{k-1}) - \bm g_k(\bm \phi_{k-1}) \right)^\top (\bm \phi_{k-1} - \bm \phi^\star) \\ &\textstyle \leq t \gamma^2 R^2 + \| \bm \phi_{0} - \bm \phi^\star \|^2 + 4 \gamma R \sum_{k=1}^t \| \bm \phi_{k-1} - \bm \phi^\star \| \\ &\textstyle \leq t \gamma^2 R^2 + \| \bm \phi_{0} - \bm \phi^\star \|^2 + 4 \gamma R \sum_{k=1}^t \left( \| \bm \phi_{0} - \bm \phi^\star \| + (k-1) \gamma R \right) \\ &\leq t \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + 4 t \gamma R \| \bm \phi_0 - \bm \phi^\star \| + 2 t^2 \gamma^2 R^2 \notag \\ &\leq t \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + 4 t^2 \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + 2 t^2 \gamma^2 R^2 \leq 7 t^2 \gamma^2 R^2 + 2 \| \bm \phi_0 - \bm \phi^\star \|^2, \end{align*} where the first two inequalities follow from Assumption~\ref{assumption:main}\,\ref{assumption:main:bounded} and the conservative estimate derived above, respectively, while the fourth inequality holds because $2 a b \leq a^2 + b^2$ for all $a,b\in\mathbb{R}$. As $A_t \geq 0$, the random variable $A_t$ is bounded and satisfies $| A_t| \leq 2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 7 t^2 \gamma^2 R^2$ for all $t\in\mathbb N$, which implies that \begin{align} \label{eq:bound:A:T/4} \| A_T \|_{L_p} \leq 2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 7 T^2 \gamma^2 R^2 &\leq 2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 28 T \gamma^2 R^2 p, \end{align} where the last inequality holds because $p \geq T/4$. Note that the resulting upper bound is affine in~$p$. 
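The algebraic chain behind Case~I admits a quick numerical cross-check: plugging the conservative estimate $\| \bm \phi_{k-1} - \bm \phi^\star \| \leq \| \bm \phi_0 - \bm \phi^\star \| + (k-1) \gamma R$ into the expression for $A_t$ never exceeds $7 t^2 \gamma^2 R^2 + 2 \| \bm \phi_0 - \bm \phi^\star \|^2$. A minimal sketch with arbitrary sample parameters:

```python
import random

def case1_bound_holds(t, gamma, R, d0):
    # Worst-case value of A_t under ||phi_{k-1} - phi*|| <= d0 + (k-1) * gamma * R
    lhs = t * gamma**2 * R**2 + d0**2 \
        + 4 * gamma * R * sum(d0 + (k - 1) * gamma * R for k in range(1, t + 1))
    return lhs <= 7 * t**2 * gamma**2 * R**2 + 2 * d0**2 + 1e-9

random.seed(1)
assert all(case1_bound_holds(random.randint(1, 200),
                             random.uniform(1e-3, 1.0),
                             random.uniform(1e-3, 10.0),
                             random.uniform(0.0, 10.0))
           for _ in range(1000))
```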
\textbf{Case II $({2 \leq p \leq T/4})$:} The subsequent analysis relies on the simple bounds \begin{align} \label{eq:bound:varepsilon} \textstyle \max_{t \in [T]} \varepsilon_{t-1} \leq \frac{\bar \varepsilon}{2} \quad \text{and} \quad \sum_{t=1}^T \varepsilon_{t-1} \leq \bar \varepsilon \sqrt{T}, \end{align} which hold because $\varepsilon_t \leq \bar \varepsilon / (2 \sqrt{1+t})$ by assumption and because $\sum_{t=1}^T 1 / \sqrt{t} \leq 2 \sqrt{T}$, which can be proved by induction. In addition, it proves useful to introduce the martingale differences $ \bar B_t = B_t - \mathbb{E}[B_t | \mathcal F_{t-1}]$ for all $t\in\mathbb N$. By the definition of $A_t$ and the subadditivity of the supremum operator, we then have \begin{align*} \max_{t \in [T+1]} A_{t-1} &= \max_{t \in [T+1]} \left\{ (t-1) \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + \sum_{k=1}^{t-1} \mathbb{E}[B_k | \mathcal F_{k-1}] + \sum_{k=1}^{t-1} \bar B_k \right\} \\ &\leq T \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + \max_{t \in [T]} \sum_{k=1}^t \mathbb{E}[B_k | \mathcal F_{k-1}] + \max_{t \in [T]} \sum_{k=1}^t \bar B_k . \end{align*} As $p \geq 2$, Minkowski's inequality~\eqref{eq:minkowski} thus implies that \begin{align} \label{eq:bound:sup:A} \left\| \max_{t \in [T+1]} A_{t-1} \right\|_{L_p} &\leq T \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + \left\| \max_{t \in [T]} \sum_{k=1}^t \mathbb{E}[B_k | \mathcal F_{k-1}] \right\|_{L_p} + \left\| \max_{t \in [T]} \sum_{k=1}^t \bar B_k \right\|_{L_p}. 
\end{align} In order to bound the penultimate term in~\eqref{eq:bound:sup:A}, we first note that \begin{align} \left| \mathbb{E}[B_k | \mathcal F_{k-1}] \right| &= 2 \gamma \left| \mathbb{E} \left[ \left( \nabla h(\bm \phi_{k-1}) - \bm g_k(\bm \phi_{k-1}) \right) | \mathcal F_{k-1} \right]^\top (\bm \phi_{k-1} - \bm \phi^\star) \right| \notag \\ &\leq 2 \gamma \| \mathbb{E} \left[ \left( \nabla h(\bm \phi_{k-1}) - \bm g_k(\bm \phi_{k-1}) \right) | \mathcal F_{k-1} \right] \| \| \bm \phi_{k-1} - \bm \phi^\star \| \notag \\ &\leq 2 \gamma \varepsilon_{k-1} \| \bm \phi_{k-1} - \bm \phi^\star \| \leq 2 \gamma \varepsilon_{k-1}\sqrt{ A_{k-1}} \label{eq:bound:B} \end{align} for all $k\in\mathbb N$, where the second inequality holds due to Assumption~\ref{assumption:main}\,\ref{assumption:main:gradients}, and the last inequality follows from \eqref{eq:bound:A}. This in turn implies that for all $t \in [T]$ we have \begin{align*} \left| \sum_{k=1}^t \mathbb{E}[B_k | \mathcal F_{k-1}] \right| \leq 2 \gamma \sum_{k=1}^t \varepsilon_{k-1} \sqrt{A_{k-1}} \leq 2 \gamma \left( \sum_{k=1}^t \varepsilon_{k-1} \right) \left( \max_{k \in [t]} \sqrt{A_{k-1}} \right) \leq 2 \gamma \bar \varepsilon \sqrt{t} \max_{k \in [t]} \sqrt{A_{k-1}}, \end{align*} where the last inequality exploits~\eqref{eq:bound:varepsilon}. Therefore, the penultimate term in~\eqref{eq:bound:sup:A} satisfies \begin{align} \label{eq:B_k:1} \left\| \max_{t \in [T]} \sum_{k=1}^t \mathbb{E}[B_k | \mathcal F_{k-1}] \right\|_{L_p} \leq 2 \gamma \bar \varepsilon \sqrt{T} \left\| \max_{t \in [T+1]} \sqrt{A_{t-1}} \right\|_{L_p} = 2 \gamma \bar \varepsilon \sqrt{T} \left\| \max_{t \in [T+1]} A_{t-1} \right\|_{L_{p/2}}^{1/2}, \end{align} where the equality follows from the definition of the $L_p$-norm. Next, we bound the last term in~\eqref{eq:bound:sup:A} by using the BRP inequality of Lemma~\ref{lemma:BRP}. 
To this end, note~that \begin{align*} |\bar B_t | & \leq | B_t | + | \mathbb{E}[B_t | \mathcal F_{t-1}] | \\ &\leq 2 \gamma \| \bm \phi_{t-1} - \bm \phi^\star \| \| \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \| + 2 \gamma \varepsilon_{t-1} \sqrt{A_{t-1}} \\ &\leq 2 \gamma \sqrt{A_{t-1}} \left( \| \nabla h(\bm \phi_{t-1}) \| + \| \bm g_t(\bm \phi_{t-1}) \| \right) + 2 \gamma \varepsilon_{t-1} \sqrt{A_{t-1}} \leq 2 \gamma (2R + \varepsilon_{t-1}) \sqrt{A_{t-1}} \end{align*} for all $t\in\mathbb N$, where the second inequality exploits the definition of~$B_t$ and~\eqref{eq:bound:B}, the third inequality follows from~\eqref{eq:bound:A}, and the last inequality holds because of Assumption~\ref{assumption:main}\,\ref{assumption:main:bounded}. Hence, we obtain \begin{align*} \textstyle \left\| \max_{t \in [T]} | \bar B_t | \right\|_{L_p} \leq 2 \gamma \left( 2 R + \max_{t \in [T]} \varepsilon_{t-1} \right) \left\| \max_{t \in [T]} \sqrt{A_{t-1}} \right\|_{L_p} \leq ( 4 \gamma R + \gamma \bar \varepsilon) \left\| \max_{t \in [T+1]} A_{t-1} \right\|_{L_{p/2}}^{1/2}, \end{align*} where the second inequality follows from~\eqref{eq:bound:varepsilon} and the definition of the $L_p$-norm. 
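As an aside, the elementary tolerance bounds~\eqref{eq:bound:varepsilon} used throughout this case can be sanity-checked numerically for the schedule $\varepsilon_t = \bar \varepsilon / (2 \sqrt{1+t})$; the values of $\bar \varepsilon$ and $T$ below are arbitrary.

```python
import math

def tolerance_bounds_hold(eps_bar, T):
    # eps_{t-1} = eps_bar / (2 * sqrt(t)) for t = 1, ..., T
    eps = [eps_bar / (2.0 * math.sqrt(t)) for t in range(1, T + 1)]
    # max_{t in [T]} eps_{t-1} <= eps_bar / 2  and  sum_{t=1}^T eps_{t-1} <= eps_bar * sqrt(T)
    return max(eps) <= eps_bar / 2.0 and sum(eps) <= eps_bar * math.sqrt(T)

assert all(tolerance_bounds_hold(0.3, T) for T in (1, 2, 10, 1000, 10**5))
```

The second bound relies on $\sum_{t=1}^T 1/\sqrt{t} \leq 2\sqrt{T}$, which the check confirms with room to spare.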
In addition, we have \begin{align*} \left\| \sum_{t=1}^T \mathbb{E}[ \bar B_t^2 | \mathcal F_{t-1}] \right\|_{L_{p/2}}^{1/2} = \left\| \sqrt{\sum_{t=1}^T \mathbb{E}[ \bar B_t^2 | \mathcal F_{t-1}]} \right\|_{L_p} &\leq 2 \gamma \left\| \sqrt{ \sum_{t=1}^T (2R + \varepsilon_{t-1})^2 A_{t-1} } \right\|_{L_p} \\ &\leq 2 \gamma \left( \sum_{t=1}^T (2R + \varepsilon_{t-1})^2 \right)^{1/2} \left\| \max_{t \in [T+1]} A_{t-1}^{1/2} \right\|_{L_p} \\ &\leq 2 \gamma \left( 2 R \sqrt{T} + \sqrt{\sum_{t=1}^T \varepsilon_{t-1}^2} \right) \left\| \max_{t \in [T+1]} A_{t-1}^{1/2} \right\|_{L_p} \\ &\leq \left( 4 \gamma R \sqrt{T} + \gamma \bar \varepsilon \sqrt{T} \right) \left\| \max_{t \in [T+1]} A_{t-1} \right\|_{L_{p/2}}^{1/2}, \end{align*} where the first inequality exploits the upper bound on $|\bar B_t|$ derived above, which implies that $\mathbb{E}[ \bar B_t ^2 | \mathcal F_{t-1}] \leq 4 \gamma^2 (2R + \varepsilon_{t-1})^2 A_{t-1}$. The last three inequalities follow from the H\"{o}lder inequality, the triangle inequality for the Euclidean norm and the two inequalities in~\eqref{eq:bound:varepsilon}, respectively. Recalling that $p \geq 2$, we may then apply the BRP inequality of Lemma~\ref{lemma:BRP} to the martingale differences $\bar B_t$, $t\in[T]$, and use the bounds derived in the last two display equations in order to conclude that \begin{align} \label{eq:BRP-application} \left\| \max_{t \in [T]} \left| \sum_{k=1}^t \bar B_k \right| \right\|_{L_p} &\leq \left( 4 \gamma R \sqrt{pT} + \gamma \bar \varepsilon \sqrt{pT} + \gamma \bar \varepsilon p + 4 \gamma R p \right) \left\| \max_{t \in [T+1]} A_{t-1} \right\|_{L_{p/2}}^{1/2}. 
\end{align} Substituting~\eqref{eq:B_k:1} and~\eqref{eq:BRP-application} into~\eqref{eq:bound:sup:A}, we thus obtain \begin{align*} \left\| \max_{t \in [T+1]} A_{t-1} \right\|_{L_p} &\leq T \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + \left( 4 \gamma R \left( \sqrt{pT} + p \right) + \gamma \bar \varepsilon \left( \sqrt{pT} + p +2 \sqrt{T} \right) \right) \left\| \max_{t \in [T+1]} A_{t-1} \right\|_{L_{p/2}}^{1/2} \\ &\leq T \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 \gamma \left( R + \bar \varepsilon \right) \sqrt{pT} \left\| \max_{t \in [T+1]} A_{t-1} \right\|_{L_{p/2}}^{1/2}, \end{align*} where the second inequality holds because $p \leq T/4$ by assumption, which implies that $\sqrt{pT} + p \leq 1.5 \sqrt{pT} $ and $ \sqrt{pT} + p + 2 \sqrt{T} \leq 6 \sqrt{pT}$. As Jensen's inequality ensures that $\| \bm z \|_{L_{p/2}} \leq \| \bm z \|_{L_p}$ for any random variable~$\bm z$ and $p > 0$, the following inequality holds for all $2 \leq p \leq T/4$. \begin{align*} \left\| \max_{t \in [T+1]} A_{t-1} \right\|_{L_p} &\leq T \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 \gamma \left( R + \bar \varepsilon \right) \sqrt{pT} \left\| \max_{t \in [T+1]} A_{t-1} \right\|_{L_p}^{1/2} \end{align*} To complete the proof of Case~II, we note that for any numbers $a, b, c \geq 0$ the inequality $c \leq a + 2b \sqrt{c} $ is equivalent to $\sqrt{c} \leq b + \sqrt{b^2+a}$ and therefore also to $c \leq (b + \sqrt{b^2+a})^2 \leq 4b^2 + 2a$. Identifying $a$ with $T \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2$, $b$ with $3\gamma \left( R + \bar \varepsilon \right) \sqrt{pT}$ and $c$ with $\| \max_{t \in [T+1]} A_{t-1}\|_{L_p}$ then allows us to translate the inequality in the last display equation to \begin{align} \label{eq:bound:Lp:A} \left\| A_{T} \right\|_{L_p} \leq \left\| \max_{t \in [T+1]} A_{t-1} \right\|_{L_p} &\leq 2 T \gamma^2 R^2 + 2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 36 \gamma^2 \left( R + \bar \varepsilon \right)^2 p T. 
\end{align} Thus, for any $2 \leq p \leq T/4$, we have again found an upper bound on $\| A_{T}\|_{L_p}$ that is affine in $p$. \textbf{Case III $({p = 1})$:} Recalling the definition of $A_T\ge 0$, we find that \begin{align*} \| A_T \|_{L_{1}} = \mathbb{E} [A_T] &= T \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + \mathbb{E} \left[ \, \sum_{t=1}^T \mathbb{E} [B_t | \mathcal F_{t-1}] \right] \\ &\leq T \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + \left\| \max_{t \in [T]} \sum_{k=1}^t \mathbb{E}[B_k | \mathcal F_{k-1}] \right\|_{L_1} \\ &\leq T \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + 2 \gamma \bar \varepsilon \sqrt{T} \left\| \max_{t \in [T+1]} A_{t-1} \right\|^{1/2}_{L_{1/2}} \\ &\leq T \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + 2 \gamma \bar \varepsilon \sqrt{T} \left\| \max_{t \in [T+1]} A_{t-1} \right\|^{1/2}_{L_{2}}, \end{align*} where the second inequality follows from the estimate~\eqref{eq:B_k:1}, which holds indeed for all~$p\in\mathbb N$, while the last inequality follows from Jensen's inequality. By the second inequality in~\eqref{eq:bound:Lp:A} for $p=2$, we thus find \begin{subequations} \label{eq:bound:E:A} \begin{align} \label{eq:bound:E:A1} \| A_T \|_{L_{1}} &\leq T \gamma^2 R^2 + \| \bm \phi_0 - \bm \phi^\star \|^2 + 2 \bar \varepsilon \gamma \sqrt{T} \cdot \sqrt{2 T \gamma^2 R^2 + 2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 72 \gamma^2 (R + \bar \varepsilon)^2 T} \\ &\leq 2 T \gamma^2 R^2 + 2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 36 \gamma^2 (R + \bar \varepsilon)^2 T + 2 \bar \varepsilon^2 \gamma^2 T , \label{eq:bound:E:A2} \end{align} \end{subequations} where the last inequality holds because $2ab \leq 2a^2 + b^2/ 2$ for all $a,b\in\mathbb{R}$. We now combine the bounds derived in Cases~I, II and~III to obtain a universal bound on $\left\| A_{T} \right\|_{L_p}$ that holds for all $p\in\mathbb N$. 
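Before doing so, note that the elementary implication invoked at the end of Case~II, namely that $c \leq a + 2 b \sqrt{c}$ forces $c \leq 4 b^2 + 2 a$ for $a, b, c \geq 0$, can be verified by maximizing $c$ explicitly; a short numerical sketch with arbitrary sample values:

```python
import random

def max_c(a, b):
    # The largest c >= 0 with c <= a + 2 * b * sqrt(c) is c = (b + sqrt(b^2 + a))^2,
    # and (x + y)^2 <= 2 * x^2 + 2 * y^2 then yields the bound 4 * b^2 + 2 * a.
    return (b + (b * b + a) ** 0.5) ** 2

random.seed(0)
for _ in range(10000):
    a, b = random.uniform(0.0, 100.0), random.uniform(0.0, 100.0)
    assert max_c(a, b) <= 4.0 * b * b + 2.0 * a + 1e-6
```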
Specifically, one readily verifies that the bound \begin{align} \label{eq:universal-bound} \left\| A_{T} \right\|_{L_p} &\leq 2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 40 \gamma^2 \left( R + \bar \varepsilon \right)^2 p T, \end{align} is more conservative than each of the bounds~\eqref{eq:bound:A:T/4}, \eqref{eq:bound:Lp:A} and \eqref{eq:bound:E:A}, and thus indeed holds for any $p \in \mathbb N$. Combining this universal bound with~\eqref{eq:bound:A:convexity} proves the first inequality from the proposition statement. In order to prove the second inequality, we need to extend Proposition~7 of \citet{bach2013adaptivity} to biased gradient oracles. To this end, we first note that \begin{align*} \left\| \nabla h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) \right\| &\leq \left\| \nabla h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - \frac{1}{T} \sum_{t=1}^T \nabla h(\bm \phi_{t-1}) \right\| + \left\| \frac{1}{T} \sum_{t=1}^T \nabla h(\bm \phi_{t-1}) \right\| \\ &\leq 2 M \left( \frac{1}{T} \sum_{t=1}^T h(\bm \phi_{t-1}) - h(\bm \phi^\star) \right) + \left\| \frac{1}{T} \sum_{t=1}^T \nabla h(\bm \phi_{t-1}) \right\| \\ &\leq \frac{M}{T \gamma} A_T + \left\| \frac{1}{T} \sum_{t=1}^T \nabla h(\bm \phi_{t-1}) \right\|, \end{align*} where the second inequality follows from Lemma~\ref{lemma:concordance}\,\ref{lemma:smoothness}, and the third inequality holds due to~\eqref{eq:bound:A}. 
By Minkowski's inequality \eqref{eq:minkowski}, we thus have for any $p \geq 1$ that \begin{align*} \left\| \nabla h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) \right\|_{L_{2p}} &\leq \frac{M}{T \gamma} \| A_T \|_{L_{2p}} + \left\| \frac{1}{T} \sum_{t=1}^T \nabla h(\bm \phi_{t-1}) \right\|_{L_{2p}} \\ &\leq \frac{2 M}{T \gamma} \| \bm \phi_0 - \bm \phi^\star \|^2 + 80 M \gamma \left( R + \bar \varepsilon \right)^2 p + \left\| \frac{1}{T} \sum_{t=1}^T \nabla h(\bm \phi_{t-1}) \right\|_{L_{2p}}, \end{align*} where the last inequality follows from the universal bound~\eqref{eq:universal-bound}. In order to estimate the last term in the above expression, we recall that the update rule~\eqref{eq:gd} is equivalent to $\bm g_t(\bm \phi_{t-1}) = \left( \bm \phi_{t-1} - \bm \phi_{t} \right) / \gamma ,$ which in turn implies that $\sum_{t=1}^T \bm g_t(\bm \phi_{t-1}) = \left( \bm \phi_0 - \bm \phi_T \right) / \gamma.$ Hence, for any $p \geq 1$, we have \begin{align*} \left\| \frac{1}{T} \sum_{t=1}^T \nabla h(\bm \phi_{t-1}) \right\|_{L_{2p}} &= \left\| \frac{1}{T} \sum_{t=1}^T \Big( \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \Big) + \frac{\bm \phi_0 - \bm \phi^\star}{T \gamma} + \frac{\bm \phi^\star - \bm \phi_T}{T \gamma} \right\|_{L_{2p}} \\ &\leq \left\| \frac{1}{T} \sum_{t=1}^T \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \right\|_{L_{2p}} + \frac{1}{T \gamma} \left\| \bm \phi_0 - \bm \phi^\star \right\| + \frac{1}{T \gamma} \left\| \bm \phi^\star - \bm \phi_T \right\|_{L_{2p}} \\ &\leq \left\| \frac{1}{T} \sum_{t=1}^T \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \right\|_{L_{2p}} + \frac{1}{T \gamma} \left\| \bm \phi_0 - \bm \phi^\star \right\| + \frac{1}{T \gamma} \left\| A_T \right\|_{L_{p}}^{1/2} \\ &\leq \left\| \frac{1}{T} \sum_{t=1}^T \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \right\|_{L_{2p}} + \frac{1 + \sqrt{2}}{T \gamma} \left\| \bm \phi_0 - \bm \phi^\star \right\| + \frac{2 \sqrt{10} \left( R + \bar \varepsilon 
\right) \sqrt{p}}{\sqrt{T}}, \end{align*} where the first inequality exploits Minkowski's inequality~\eqref{eq:minkowski}, the second inequality follows from~\eqref{eq:bound:A}, which implies that $\| \bm \phi^\star - \bm \phi_T \| \leq \sqrt{A_T}$, and the definition of the $L_p$-norm. The last inequality in the above expression is a direct consequence of the universal bound~\eqref{eq:universal-bound} and the inequality $ \sqrt{a+b} \leq \sqrt{a} + \sqrt{b}$ for all $a,b\ge 0$. Next, define for any $t\in\mathbb N$ a martingale difference of the form $$\bm C_t = \frac{1}{T} \Big( \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) - \mathbb{E}[\nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) | \mathcal F_{t-1}] \Big).$$ Note that these martingale differences are bounded because \begin{align*} \| \bm C_t \| &\leq \frac{1}{T} \Big( \| \nabla h(\bm \phi_{t-1}) \| + \| \bm g_t(\bm \phi_{t-1}) \| + \| \mathbb{E}[\nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) | \mathcal F_{t-1}] \| \Big) \leq \frac{2R + \varepsilon_{t-1}}{T} \leq \frac{2R + \bar \varepsilon}{T}, \end{align*} and thus the BRP inequality of Lemma~\ref{lemma:BRP} implies that \begin{align*} \left\| \sum_{t=1}^T \bm C_t \right\|_{L_{2p}} \leq \sqrt{2p} \, \frac{2R + \bar \varepsilon}{\sqrt{T}} + 2p \, \frac{2R + \bar \varepsilon}{T}. 
\end{align*} Recalling the definition of the martingale differences $\bm C_t$, $t\in\mathbb N$, this bound allows us to conclude that \begin{align*} \frac{1}{T} \left\| \sum_{t=1}^T \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \right\|_{L_{2p}} &\leq \left\| \sum_{t=1}^T \bm C_t \right\|_{L_{2p}} + \frac{1}{T} \left\| \sum_{t=1}^T \mathbb{E}[\nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) | \mathcal F_{t-1}] \right\|_{L_{2p}} \\ &\leq \sqrt{2p} \, \frac{2R + \bar \varepsilon}{\sqrt{T}} + 2p \, \frac{2R + \bar \varepsilon}{T} + \frac{\bar \varepsilon}{\sqrt{T}} \leq 2 \sqrt{2p} \, \frac{R + \bar \varepsilon}{\sqrt{T}} + 4p \, \frac{R + \bar \varepsilon}{T}, \end{align*} where the second inequality exploits Assumption~\ref{assumption:main}\,\ref{assumption:main:gradients} as well as the second inequality in~\eqref{eq:bound:varepsilon}. Combining all inequalities derived above and observing that $2\sqrt{2} + 2 \sqrt{10} < 10 $ finally yields \begin{align*} \left\| \nabla h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) \right\|_{L_{2p}} &\leq \frac{2 M}{T \gamma} \| \bm \phi_0 - \bm \phi^\star \|^2 + 80 M \gamma \left( R + \bar \varepsilon \right)^2 p + 2 \sqrt{2p} \, \frac{R + \bar \varepsilon}{\sqrt{T}} + 4p \, \frac{R + \bar \varepsilon}{T} \\ &\qquad + \frac{1 + \sqrt{2}}{T \gamma} \left\| \bm \phi_0 - \bm \phi^\star \right\| + \frac{2 \sqrt{10} \left( R + \bar \varepsilon \right) \sqrt{p}}{\sqrt{T}} \\ &\leq \frac{G}{\sqrt{T}} \left( 10 \sqrt{p} + \frac{4p}{\sqrt{T}} + 80 G^2 \gamma \sqrt{T} p + \frac{2}{\gamma \sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \|^2 + \frac{3}{G \gamma \sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \| \right), \end{align*} where $G = \max\{ M, R + \bar \varepsilon \}$. This proves the second inequality from the proposition statement. \end{proof} The following corollary follows immediately from the proof of Proposition~\ref{proposition:moments}. 
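Before stating it, we illustrate the first bound of Proposition~\ref{proposition:moments} on a toy one-dimensional instance with a deterministic, biased oracle; the objective $h(x) = \sqrt{1 + x^2} - 1$ and all constants below are our own illustrative choices.

```python
import math

def h(x):                    # smooth convex objective with bounded gradient, minimizer 0
    return math.sqrt(1.0 + x * x) - 1.0

def dh(x):
    return x / math.sqrt(1.0 + x * x)

T, bar_eps, R = 1000, 0.5, 2.0
gamma = 1.0 / (2.0 * (R + bar_eps) ** 2 * math.sqrt(T))   # illustrative step size
x, avg = 3.0, 0.0
for t in range(1, T + 1):
    avg += x                                              # accumulate x_{t-1}
    eps_t = bar_eps / (2.0 * math.sqrt(t))                # admissible deterministic bias
    x -= gamma * (dh(x) + eps_t)                          # inexact oracle, |g_t| <= R
avg /= T

# First bound of the proposition for p = 1 (here h(x*) = 0 with x* = 0):
bound = 3.0 ** 2 / (gamma * T) + 20.0 * gamma * (R + bar_eps) ** 2
assert h(avg) <= bound
```

The bound is conservative by design; the realized suboptimality of the averaged iterate is typically much smaller.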
\begin{corollary} \label{corollary:auxiliary} Consider the inexact gradient descent algorithm~\eqref{eq:gd} with constant step size $\gamma > 0$. If Assumptions~\ref{assumption:main}\,\ref{assumption:main:gradients}--\ref{assumption:main:bounded} hold with $\varepsilon_t \leq {\bar \varepsilon}/{(2\sqrt{1+t})}$ for some $\bar \varepsilon \geq 0$, then we have \begin{align*} \frac{1}{T} \sum_{t=1}^T \mathbb E \left[ \left( \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \right)^\top (\bm \phi_{t-1} - \bm \phi^\star) \right] \leq \frac{\bar \varepsilon}{\sqrt{T}} \sqrt{2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 74 \gamma^2 (R + \bar \varepsilon)^2 T}. \end{align*} \end{corollary} \begin{proof}[Proof of Corollary~\ref{corollary:auxiliary}] Defining $B_t$ as in the proof of Proposition~\ref{proposition:moments}, we find \begin{align*} \frac{1}{T} \sum_{t=1}^T \mathbb E \left[ \left( \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \right)^\top (\bm \phi_{t-1} - \bm \phi^\star) \right] &= \frac{1}{2 \gamma T} \mathbb{E} \left[ \sum_{t=1}^T \mathbb{E} [B_t | \mathcal F_{t-1}] \right] \\ &\leq \frac{\bar \varepsilon}{\sqrt{T}} \sqrt{2 T \gamma^2 R^2 + 2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 72 \gamma^2 (R + \bar \varepsilon)^2 T}, \end{align*} where the inequality is an immediate consequence of the reasoning in Case~III in the proof of Proposition~\ref{proposition:moments}. The claim then follows from the trivial inequality $R + \bar \varepsilon \geq R$. \end{proof} Armed with Proposition~\ref{proposition:moments} and Corollary~\ref{corollary:auxiliary}, we are now ready to prove the main convergence result. \begin{theorem} \label{theorem:convergence} Consider the inexact gradient descent algorithm~\eqref{eq:gd} with constant step size $\gamma > 0$. 
If Assumptions~\ref{assumption:main}\,\ref{assumption:main:gradients}--\ref{assumption:main:bounded} hold with $\varepsilon_t \leq {\bar \varepsilon}/{(2\sqrt{1+t})}$ for some $\bar \varepsilon \geq 0$, then the following statements hold. \begin{enumerate} [label=(\roman*)] \item \label{theorem:convergence:Lipschitz} If $\gamma = 1 / (2 (R + \bar \varepsilon)^2 \sqrt{T})$, then we have \begin{align*} \mathbb{E} \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) \right] - h(\bm \phi^\star) &\leq \frac{(R + \bar \varepsilon)^2}{\sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \|^2 + \frac{1}{4\sqrt{T}} + \frac{\bar \varepsilon}{\sqrt{T}} \sqrt{2 \| \bm \phi_0 - \bm \phi^\star \|^2 + \frac{37}{2(R + \bar \varepsilon)^2}} . \end{align*} \item \label{theorem:convergence:smooth} If $\gamma = 1 / (2 (R + \bar \varepsilon)^2 \sqrt{T} + L)$ and the Assumptions~\ref{assumption:main}\,\ref{assumption:main:smooth}--\ref{assumption:main:moment} hold in addition to the blanket assumptions mentioned above, then we have \begin{align*} \mathbb{E} \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t} \right) \right] - h(\bm \phi^\star) &\leq \frac{L}{2T}\| \bm \phi_0 - \bm \phi^\star \|^2 + \frac{(R + \bar \varepsilon)^2}{\sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \|^2 + \frac{\sigma^2}{4 (R+\bar \varepsilon)^2\sqrt{T}} \\ &\qquad+ \frac{\bar \varepsilon}{\sqrt{T}} \sqrt{2 \| \bm \phi_0 - \bm \phi^\star \|^2 + \frac{37}{2(R + \bar \varepsilon)^2}}. 
\end{align*} \item \label{theorem:convergence:concordance} If $\gamma = 1 / (2 G^2 \sqrt{T})$ with $G = \max \{M, R + \bar \varepsilon \}$, the smallest eigenvalue $\kappa$ of $\nabla^2 h(\bm \phi^\star)$ is strictly positive and Assumption~\ref{assumption:main}\,\ref{assumption:main:concordance} holds in addition to the blanket assumptions mentioned above, then we have \begin{align*} \mathbb E \left[ h\left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1}\right) \right] - h(\bm \phi^\star) &\leq \frac{G^2}{\kappa T} \left( 4 G \| \bm \phi_0 - \bm \phi^\star \| + 20 \right)^4. \end{align*} \end{enumerate} \end{theorem} The proof of Theorem~\ref{theorem:convergence} relies on the following concentration inequalities due to \citet{bach2013adaptivity}. \begin{lemma}[Concentration inequalities] \label{lemma:probability101}~ \begin{enumerate} [label=(\roman*)] \item \label{lemma:sub-exponential1} {\citep[Lemma~11]{bach2013adaptivity}:} If there exist $a,b>0$ and a random variable $\bm z \in \mathbb{R}^n$ with $ \| \bm z \|_{L_p} \leq a + b p $ for all $p \in \mathbb N$, then we have $$ \mathbb P \left[ \| \bm z \| \geq 3 b s + 2 a \right] \leq 2 \exp(-s)\quad \forall s \geq 0. $$ \item \label{lemma:sub-exponential2} {\citep[Lemma~12]{bach2013adaptivity}:} If there exist $a,b,c>0$ and a random variable $\bm z \in \mathbb{R}^n$ with $ \| \bm z \|_{L_p} \leq (a \sqrt{p} + b p + c)^2 $ for all $p \in [T]$, then we have $$ \mathbb P \left[ \| \bm z \| \geq (2 a \sqrt{s} + 2 b s + 2 c)^2 \right] \leq 4 \exp(-s)\quad \forall s \leq T. $$ \end{enumerate} \end{lemma} \begin{proof}[Proof of Theorem~\ref{theorem:convergence}] Define $A_t$ as in the proof of Proposition~\ref{proposition:moments}. 
Then, we have \begin{align} \mathbb{E} \left[ h \left(\frac{1}{T} \sum_{t=0}^{T-1} \bm \phi_{t} \right) - h(\bm \phi^\star) \right] &\leq \frac{\mathbb E[A_T]}{2 \gamma T} = \frac{\| \bm \phi_0 - \bm \phi^\star \|^2}{2 \gamma T} + \frac{\gamma R^2}{2} + \frac{1}{T} \sum_{t=1}^T \mathbb E \left[ \left( \nabla h(\bm \phi_{t}) - \bm g_t(\bm \phi_{t}) \right)^\top (\bm \phi_{t} - \bm \phi^\star) \right] \notag \\ &\leq \frac{\| \bm \phi_0 - \bm \phi^\star \|^2}{2 \gamma T} + \frac{\gamma R^2}{2} + \frac{\bar \varepsilon}{\sqrt{T}} \sqrt{2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 74 \gamma^2 (R + \bar \varepsilon)^2 T}, \label{eq:main:convergence} \end{align} where the two inequalities follow from~\eqref{eq:bound:A:convexity} and from Corollary~\ref{corollary:auxiliary}, respectively. Setting the step size to $\gamma = 1 / ( 2 (R+ \bar \varepsilon)^2 \sqrt{T} )$ then completes the proof of assertion~\ref{theorem:convergence:Lipschitz}. Assertion~\ref{theorem:convergence:smooth} generalizes \citep[Theorem~1]{dekel2012optimal}. By the $L$-smoothness of $h(\bm \phi)$, we have \begin{align} h(\bm \phi_{t}) &\leq h(\bm \phi_{t-1}) + \nabla h(\bm \phi_{t-1})^\top (\bm \phi_{t} - \bm \phi_{t-1}) + \frac{L}{2}\|\bm \phi_{t} - \bm \phi_{t-1}\|^2 \notag \\ &= h(\bm \phi_{t-1}) + \bm g_t(\bm \phi_{t-1})^\top (\bm \phi_{t} - \bm \phi_{t-1}) + \left( \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \right)^\top (\bm \phi_{t} - \bm \phi_{t-1}) + \frac{L}{2}\|\bm \phi_{t} - \bm \phi_{t-1}\|^2 \notag \\ &\leq h(\bm \phi_{t-1}) + \bm g_t(\bm \phi_{t-1})^\top (\bm \phi_{t} - \bm \phi_{t-1}) + \frac{\zeta}{2}\| \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \|^2 + \frac{L + 1/\zeta}{2}\|\bm \phi_{t} - \bm \phi_{t-1}\|^2, \label{eq:smooth:update} \end{align} where the last inequality exploits the Cauchy-Schwarz inequality together with the elementary inequality $2ab \leq \zeta a^2 + b^2 / \zeta$, which holds for all $a,b\in\mathbb{R}$ and $\zeta > 0$.
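For completeness, the elementary inequality follows from expanding a square: for all $a,b\in\mathbb{R}$ and $\zeta > 0$ we have
\begin{align*}
0 \leq \left( \sqrt{\zeta}\, a - b/\sqrt{\zeta} \right)^2 = \zeta a^2 - 2 a b + b^2/\zeta \quad \Longrightarrow \quad 2 a b \leq \zeta a^2 + b^2/\zeta,
\end{align*}
and it is applied here with $a = \| \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \|$ and $b = \| \bm \phi_{t} - \bm \phi_{t-1} \|$.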
Next, note that the iterates satisfy the recursion \begin{equation*} \| \bm \phi_{t-1} - \bm \phi^\star \|^2 = \| \bm \phi_{t-1} - \bm \phi_{t} \|^2 + \| \bm \phi_{t} - \bm \phi^\star \|^2 + 2 (\bm \phi_{t-1} - \bm \phi_{t})^\top (\bm \phi_{t} - \bm \phi^\star), \end{equation*} which can be re-expressed as \begin{equation*} \bm g_t(\bm \phi_{t-1})^\top (\bm \phi_{t} - \bm \phi^\star) = \frac{1}{2 \gamma} \left( \| \bm \phi_{t-1} - \bm \phi^\star \|^2 - \| \bm \phi_{t-1} - \bm \phi_{t} \|^2 - \| \bm \phi_{t} - \bm \phi^\star \|^2 \right) \end{equation*} by using the update rule~\eqref{eq:gd}. In the remainder of the proof we assume that $0 < \gamma < 1 / L$. Substituting the above equality into~\eqref{eq:smooth:update} and setting~$\zeta = \gamma / (1 - \gamma L)$ then~yields \begin{align*} h(\bm \phi_{t}) &\leq h(\bm \phi_{t-1}) + \bm g_t(\bm \phi_{t-1})^\top (\bm \phi^\star - \bm \phi_{t-1}) + \frac{\gamma}{2(1 - \gamma L)} \| \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \|^2 \\ & \qquad + \frac{1}{2 \gamma} \left( \| \bm \phi_{t-1} - \bm \phi^\star \|^2 - \| \bm \phi_{t} - \bm \phi^\star \|^2 \right). \end{align*} By the convexity of $h$, we have $h(\bm \phi^\star) \geq h(\bm \phi_{t-1}) + \nabla h(\bm \phi_{t-1})^\top (\bm \phi^\star - \bm \phi_{t-1})$, which finally implies that \begin{align*} h(\bm \phi_{t}) &\leq h(\bm \phi^\star) + \left( \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \right)^\top ( \bm \phi_{t-1} - \bm \phi^\star) + \frac{\gamma}{2(1 - \gamma L)} \| \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \|^2 \\ & \qquad + \frac{1}{2\gamma} \left( \| \bm \phi_{t-1} - \bm \phi^\star \|^2 - \| \bm \phi_{t} - \bm \phi^\star \|^2 \right). 
\end{align*} Averaging the above inequality over $t$ and taking expectations then yields the estimate \begin{align*} \mathbb E \left[ \frac{1}{T} \sum_{t=1}^T h(\bm \phi_{t}) \right] - h(\bm \phi^\star) &\leq \frac{\| \bm \phi_0 - \bm \phi^\star \|^2}{2\gamma T} + \frac{\gamma}{2 (1 - \gamma L)} \mathbb E \left[ \frac{1}{T} \sum_{t=1}^T \| \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \|^2 \right] \\ &\qquad + \mathbb E \left[ \frac{1}{T} \sum_{t=1}^T \left( \nabla h(\bm \phi_{t-1}) - \bm g_t(\bm \phi_{t-1}) \right)^\top (\bm \phi_{t-1} - \bm \phi^\star) \right] \\ &\leq \frac{\| \bm \phi_0 - \bm \phi^\star \|^2}{2\gamma T} + \frac{\gamma \sigma^2}{2 (1 - \gamma L)} + \frac{\bar \varepsilon}{\sqrt{T}} \sqrt{2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 74 \gamma^2 (R + \bar \varepsilon)^2 T}, \end{align*} where the second inequality exploits Assumption~\ref{assumption:main}\,\ref{assumption:main:moment} and Corollary~\ref{corollary:auxiliary}. Using Jensen's inequality to move the average over $t$ inside~$h$, assertion~\ref{theorem:convergence:smooth} then follows by setting $\gamma = 1 / (2 (R + \bar \varepsilon)^2 \sqrt{T} + L)$ and observing that $\gamma / ( 1 - \gamma L) = 1 / ( 2(R+\bar \varepsilon)^2 \sqrt{T} )$. To prove assertion~\ref{theorem:convergence:concordance}, we distinguish two different cases. \textbf{Case~I:} Assume first that $4 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 G \| \bm \phi_0 - \bm \phi^\star \| \leq {\kappa \sqrt{T}}/{(8 G^2)}$, where~$G = \max \{M, R + \bar \varepsilon \}$ and $\kappa$ denotes the smallest eigenvalue of $\nabla^2 h(\bm \phi^\star)$.
By a standard formula for the expected value of a non-negative random variable, we find \begin{align} \mathbb{E} \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \right] \nonumber &= \phantom{+} \int_{0}^\infty \mathbb P \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \geq u \right] \mathrm{d} u \\ &= \phantom{+} \int_{0}^{u_1} \mathbb P \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \geq u \right] \mathrm{d} u \nonumber\\ &\quad + \int_{u_1}^{u_2} \mathbb P \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \geq u \right] \mathrm{d} u \label{eq:three-integrals}\\ &\quad + \int_{u_2}^{\infty} \mathbb P \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \geq u \right] \mathrm{d} u, \nonumber \end{align} where $u_1 = \frac{8 G^2}{\kappa T}(4 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 G \| \bm \phi_0 - \bm \phi^\star \|)^2$ and $u_2 = \frac{8 G^2}{\kappa T}(\frac{\kappa \sqrt{T}}{4 G^2} + 4 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 G \| \bm \phi_0 - \bm \phi^\star \|)^2$. The first of the three integrals in~\eqref{eq:three-integrals} is trivially upper bounded by~$u_1$. Next, we investigate the third integral in~\eqref{eq:three-integrals}, which is easier to bound from above than the second one. By combining the first inequality in Proposition~\ref{proposition:moments} for $\gamma = 1 / (2 G^2 \sqrt{T})$ with the trivial inequality $G \geq R + \bar \varepsilon$, we find \[ \left\| h\left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \right\|_{L_p} \leq \frac{2G^2}{\sqrt{T}}\,\|\bm \phi_0-\bm \phi^\star\|^2 + \frac{10}{\sqrt{T}} \,p\quad \forall p\in\mathbb N. 
\] Lemma~\ref{lemma:probability101}\,\ref{lemma:sub-exponential1} with $a = 2 G^2 \| \bm \phi_0 -\bm \phi^\star \|^2 / \sqrt{T}$ and $b = 10 / \sqrt{T}$ thus implies that \begin{align} \label{eq:large:deviation} \mathbb P \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \geq \frac{30}{\sqrt{T}} s + \frac{4 G^2}{\sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \|^2 \right] \leq 2 \exp(-s) \quad \forall s \geq 0. \end{align} We also have \begin{align} \label{eq:upper:bound:for:u_2} u_2 - \frac{4 G^2}{\sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \|^2 \geq u_2 - \frac{\kappa}{8 G^2} \geq \frac{8 G^2}{\kappa T} \left( \frac{\kappa \sqrt{T}}{4 G^2} \right)^2 - \frac{\kappa}{8 G^2} = \frac{3 \kappa}{8 G^2} \geq 0, \end{align} where the first inequality follows from the basic assumption underlying Case~I, while the second inequality holds due to the definition of $u_2$. By~\eqref{eq:large:deviation} and~\eqref{eq:upper:bound:for:u_2}, the third integral in~\eqref{eq:three-integrals} satisfies \begin{align*} &\int_{u_2}^{\infty} \mathbb P \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \geq u \right] \mathrm{d} u \\ =\;& \int_{u_2 - \frac{4 G^2}{\sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \|^2}^{\infty} \mathbb P \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \geq u + \frac{4 G^2}{\sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \|^2 \right] \mathrm{d} u \\ \leq\; & 2 \int_{u_2 - \frac{4 G^2}{\sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \|^2}^\infty \exp \left( -\frac{\sqrt{T} u}{30} \right) \mathrm{d} u= \frac{60}{\sqrt{T}} \exp \left( -\frac{\sqrt{T}}{30} \left( u_2 - \frac{4 G^2}{\sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \|^2 \right) \right) \\ \leq\; & \frac{60}{\sqrt{T}} \exp \left( -\frac{\kappa \sqrt{T}}{80 G^2} \right) \leq \frac{2400 G^2}{\kappa T}, \end{align*} where the first inequality follows from the concentration inequality~\eqref{eq:large:deviation} and the insight 
from~\eqref{eq:upper:bound:for:u_2} that $u_2 - \frac{4 G^2}{\sqrt{T}}\| \bm \phi_0 - \bm \phi^\star \|^2 \geq 0$. The second inequality exploits again~\eqref{eq:upper:bound:for:u_2}, and the last inequality holds because $\exp(-x) \leq 1 / (2x)$ for all $ x > 0$. We have thus found a simple upper bound on the third integral in~\eqref{eq:three-integrals}. It remains to derive an upper bound on the second integral in~\eqref{eq:three-integrals}. To this end, we first observe that the second inequality in Proposition~\ref{proposition:moments} for $\gamma = 1 / (2 G^2 \sqrt{T})$ translates to \[ \left\| \left\| \nabla h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) \right\|^2 \right\|_{L_p} \leq \frac{G^{2}}{T} \left( 10 \sqrt{p} + \frac{4p}{\sqrt{T}} + 40 p + 4G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6G \| \bm \phi_0 - \bm \phi^\star \| \right)^2 \quad \forall p\in\mathbb N. \] Lemma~\ref{lemma:probability101}\,\ref{lemma:sub-exponential2} with $a = 10 G / \sqrt{T}$, $b = 4 G / T + 40 G / \sqrt{T}$ and $c = 4 G^3 \| \bm \phi_0 - \bm \phi^\star \|^2 / \sqrt{T} + 6 G^2 \| \bm \phi_0 - \bm \phi^\star \|/\sqrt{T}$ thus gives rise to the concentration inequality \begin{align*} \mathbb P \left[ \;\left\| \nabla h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \!\right) \right\|^2 \!\!\!\geq\! \frac{4G^2}{T} \left( 10 \sqrt{s} + \frac{4s}{\sqrt{T}} + 40 s + 4 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 G \| \bm \phi_0 - \bm \phi^\star \| \right)^2 \right] \leq 4 \exp(-s), \end{align*} which holds only for small deviations~$s\leq T$. However, this concentration inequality can be simplified to \begin{align*} \mathbb P \left[ \;\left\| \nabla h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) \right\| \geq \frac{2G}{\sqrt{T}} \left( 12 \sqrt{s} + 40 s + 4 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 G \| \bm \phi_0 - \bm \phi^\star \| \right) \right] \leq 4 \exp(-s), \end{align*} which remains valid for all deviations~$ s\ge 0$. 
To see this, note that if $ s \leq T/4 $, then the simplified concentration inequality holds because $ 4 s / T \leq 2 \sqrt{s / T}$. Otherwise, if $ s > T/4 $, then the simplified concentration inequality holds trivially because the probability on the left-hand side vanishes. Indeed, this is an immediate consequence of Assumption~\ref{assumption:main}\,\ref{assumption:main:bounded}, which stipulates that the norm of the gradient of $h$ is bounded by $R$, and of the elementary estimate~$24 G \sqrt{s / T} > G\geq R$, which holds for all $s > T / 4$. In the following, we restrict attention to those deviations $s\ge 0$ that are small in the sense that \begin{align} \label{eq:condition:s} \displaystyle 12 \sqrt{s} + 40 s \leq \frac{ \kappa \sqrt{T}}{4G^2}. \end{align} Assume now for the sake of argument that the event inside the probability in the simplified concentration inequality does {\em not} occur, that is, assume that \begin{align} \label{eq:inverse-event} \left\| \nabla h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) \right\| < \frac{2G}{\sqrt{T}} \left( 12 \sqrt{s} + 40 s + 4 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 G \| \bm \phi_0 - \bm \phi^\star \| \right). \end{align} By~\eqref{eq:condition:s} and the assumption of Case~I, \eqref{eq:inverse-event} implies that $\| \nabla h ( \frac{1}{T}\sum_{t=1}^T \bm \phi_{t-1} ) \| < 3 \kappa / (4G) \leq 3 \kappa / (4M)$. Hence, we may apply Lemma~\ref{lemma:concordance}\,\ref{lemma:stong:convexity} to conclude that $h ( \frac{1}{T}\sum_{t=1}^T \bm \phi_{t-1} ) - h(\bm \phi^\star) \leq \frac{2}{\kappa} \| \nabla h ( \frac{1}{T} \sum_{t=1}^T \bm \phi_{t-1} ) \|^2$. Combining this inequality with~\eqref{eq:inverse-event} then yields \begin{align} \label{eq:inverse-event-implication} h \left( \frac{1}{T}\sum_{t=1}^T \bm \phi_{t-1} \right) - h(\bm \phi^\star) < \frac{8G^2}{\kappa T} \left( 12 \sqrt{s} + 40 s + 4 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 G \| \bm \phi_0 - \bm \phi^\star \| \right)^2.
\end{align} By the simplified concentration inequality derived above, we may thus conclude that \begin{align} \nonumber 4 \exp(-s) \geq \; &\mathbb P \left[ \; \left\| \nabla h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) \right\| \geq \frac{2G}{\sqrt{T}} \left( 12 \sqrt{s} + 40 s + 4 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 G \| \bm \phi_0 - \bm \phi^\star \| \right) \right] \\ \geq \; & \mathbb P \left[ \; h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \geq \frac{8G^2}{\kappa T} \left( 12 \sqrt{s} + 40 s + 4 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 G \| \bm \phi_0 - \bm \phi^\star \| \right)^2 \right] \label{eq:small:deviation} \end{align} for any $s\ge 0$ that satisfies~\eqref{eq:condition:s}, where the second inequality holds because~\eqref{eq:inverse-event} implies~\eqref{eq:inverse-event-implication} or, equivalently, because the negation of~\eqref{eq:inverse-event-implication} implies the negation of~\eqref{eq:inverse-event}. The resulting concentration inequality~\eqref{eq:small:deviation} now enables us to construct an upper bound on the second integral in~\eqref{eq:three-integrals}. To this end, we define the function $$ \ell(s) = \frac{8 G^2}{\kappa T} \left(12 \sqrt{s} + 40 s + 4 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 G \| \bm \phi_0 - \bm \phi^\star \|\right)^2 $$ for all $s\ge 0$, and set $\bar s = ((9/400 + \kappa \sqrt{T} / (160 G^2))^{\frac{1}{2}} - 3 / 20)^{2}$. Note that $s\ge 0$ satisfies the inequality~\eqref{eq:condition:s} if and only if $s\le\bar s$ and that $\ell(0) = u_1$ as well as $\ell(\bar s) = u_2$. 
By substituting $u$ with $ \ell(s)$ and using the concentration inequality~\eqref{eq:small:deviation} to bound the integrand, we find that the second integral in~\eqref{eq:three-integrals} satisfies \begin{align*} \int_{u_1}^{u_2} \mathbb P \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \geq u \right] \mathrm{d} u &= \int_{0}^{\bar s} \mathbb P \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \geq \ell(s) \right] \frac{\mathrm{d} \ell(s)}{\mathrm{d} s} \mathrm{d} s \\ &\leq \int_{0}^{\bar s} 4 \mathrm{e}^{-s} \; \frac{\mathrm{d}}{\mathrm{d} s} \! \left( \frac{8 G^2}{\kappa T} \left(12 \sqrt{s} + 40 s + \tau \right)^2 \right) \mathrm{d} s \\ &\leq \frac{32 G^2}{\kappa T} \int_{0}^{\infty} \mathrm{e}^{-s} \left( 144 + 3200 s + 1440 s^{1/2} + 80 \tau + 12 \tau s^{-1/2} \right) \mathrm{d} s \\ &= \frac{32 G^2}{\kappa T} \big( 144 + 3200 \Gamma(2) + 1440 \Gamma(3/2) + 80 \tau + 12 \tau \Gamma(1/2) \big) \\ &\leq \frac{32 G^2}{\kappa T} ( 4621 + 102 \tau ), \end{align*} where $\tau$ is a shorthand for $4 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 G \| \bm \phi_0 - \bm \phi^\star \| $, and $\Gamma$ denotes the Gamma function with $\Gamma(2) = 1$, $\Gamma(1/2) = \sqrt{\pi}$ and $\Gamma(3/2) = \sqrt{\pi}/2$; see for example~\citep[Chapter~8]{rudin1964principles}. The last inequality is obtained by rounding all fractional numbers up to the next higher integer.
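For concreteness, the constants in the last inequality arise from the numerical evaluation
\begin{align*}
144 + 3200 \, \Gamma(2) + 1440 \, \Gamma(3/2) + 80 \tau + 12 \tau \, \Gamma(1/2) = 3344 + 720 \sqrt{\pi} + \left( 80 + 12 \sqrt{\pi} \right) \tau \leq 4621 + 102 \tau,
\end{align*}
where we use that $720 \sqrt{\pi} \leq 1277$ and $12 \sqrt{\pi} \leq 22$.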
Combining the upper bounds for the three integrals in~\eqref{eq:three-integrals} finally yields \begin{align*} \mathbb{E} \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) - h(\bm \phi^\star) \right] &\leq \frac{8 G^2}{\kappa T} \left( \tau^2 + 18484 + 408 \tau + 300 \right) \\ &= \frac{8 G^2}{\kappa T} \Big( 16 G^4 \| \bm \phi_0 - \bm \phi^\star \|^4 + 48 G^3 \| \bm \phi_0 - \bm \phi^\star \|^3 + 1668 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 \\ &\hspace{4em} + 2448 G \| \bm \phi_0 - \bm \phi^\star \| + 18784 \Big) \\ &\leq \frac{G^2}{\kappa T} (4 G \| \bm \phi_0 - \bm \phi^\star \| + 20)^4. \end{align*} This completes the proof of assertion~\ref{theorem:convergence:concordance} in Case~I. \textbf{Case II:} Assume now that $4 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 6 G \| \bm \phi_0 - \bm \phi^\star \| > {\kappa \sqrt{T}}/{(8 G^2)}$, where $G$ is defined as before. Since $h$ has bounded gradients, the inequality~\eqref{eq:main:convergence} remains valid. Setting the step size to $\gamma = 1 / (2 G^2 \sqrt{T})$ and using the trivial inequalities $G \geq R + \bar \varepsilon \geq R$, we thus obtain \begin{align*} \mathbb{E} \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) \right] - h(\bm \phi^\star) &\leq \frac{G^2}{\sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \|^2 + \frac{1}{4\sqrt{T}} + \frac{\bar \varepsilon}{\sqrt{T}} \sqrt{2 \| \bm \phi_0 - \bm \phi^\star \|^2 + \frac{37}{2G^2}} \\ &\leq \frac{G^2}{\sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \|^2 + \frac{2G}{\sqrt{T}} \| \bm \phi_0 - \bm \phi^\star \| + \frac{5}{\sqrt{T}} , \end{align*} where the second inequality holds because $G \geq \bar \varepsilon$ and $\sqrt{a + b} \leq \sqrt{a} + \sqrt{b}$ for all $a,b\ge 0$.
Multiplying the right-hand side of the last inequality by $G^2 (32 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 48 G \| \bm \phi_0 - \bm \phi^\star \|) / (\kappa \sqrt{T})$, which is strictly larger than~$1$ by the basic assumption underlying Case~II, we then find \begin{align*} & \phantom{\leq} \mathbb{E} \left[ h \left(\frac{1}{T} \sum_{t=1}^{T} \bm \phi_{t-1} \right) \right] - h(\bm \phi^\star) \\ &\leq \frac{G^2}{\kappa T} \left( G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 2 G \| \bm \phi_0 - \bm \phi^\star \| + 5 \right) \left( 32 G^2 \| \bm \phi_0 - \bm \phi^\star \|^2 + 48 G \| \bm \phi_0 - \bm \phi^\star \| \right) \\ &\leq \frac{G^2}{\kappa T} (4 G \| \bm \phi_0 - \bm \phi^\star \| + 20)^4. \end{align*} This observation completes the proof. \end{proof} \subsection{Smooth Optimal Transport Problems with Marginal Ambiguity Sets} \label{section:ASGD-OT} The smooth optimal transport problem \eqref{eq:smooth_ot} can be viewed as an instance of a stochastic optimization problem, that is, a convex maximization problem akin to~\eqref{eq:convex:problem}, where the objective function is representable as $h(\bm \phi) = \mathbb{E}_{\boldsymbol x \sim \mu} [ \boldsymbol \nu^\top \boldsymbol \phi - \overline\psi_c(\boldsymbol \phi, \boldsymbol x)]$. Throughout this section we assume that the smooth (discrete) $c$-transform~$\overline\psi_c(\boldsymbol \phi, \boldsymbol x)$ defined in~\eqref{eq:smooth_c_transform} is induced by a marginal ambiguity set of the form~\eqref{eq:marginal_ambiguity_set} with continuous marginal distribution functions. By Proposition~\ref{proposition:regularized_ctrans}, the integrand $\boldsymbol \nu^\top \boldsymbol \phi - \overline\psi_c(\boldsymbol \phi, \boldsymbol x)$ is therefore concave and differentiable in~$\bm \phi$.
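To fix ideas, the resulting stochastic gradient ascent scheme can be sketched in a few lines. This is a minimal illustration rather than the exact Algorithm~\ref{algorithm:asgd}; purely for concreteness it assumes entropic smoothing, so that the gradient of the smooth $c$-transform is a softmax of the adjusted utilities $\phi_i - c(\boldsymbol x, \boldsymbol {y_i})$ (cf.\ the exponential distribution model), a uniform measure $\mu$ on $[0,1]$, a squared-distance cost and a two-point discrete marginal:

```python
import math
import random

# Minimal averaged-SGD sketch for the semi-dual objective
# h(phi) = E_x[ nu^T phi - psi_c(phi, x) ] -- an illustration, not the
# paper's exact method. Assumptions made here for concreteness: entropic
# smoothing (so the gradient of the smooth c-transform is a softmax),
# mu = uniform on [0, 1], squared-distance cost, two-point marginal nu.
random.seed(0)
y = [0.25, 0.75]     # atoms of the discrete marginal
nu = [0.5, 0.5]      # their weights
lam = 0.1            # smoothing parameter
T, gamma = 2000, 0.5

def softmax_probs(phi, x):
    """Softmax of the adjusted utilities phi_i - c(x, y_i)."""
    u = [(phi[i] - (x - y[i]) ** 2) / lam for i in range(len(y))]
    m = max(u)                       # stabilize the exponentials
    w = [math.exp(v - m) for v in u]
    s = sum(w)
    return [v / s for v in w]

phi = [0.0, 0.0]
avg = [0.0, 0.0]
for _ in range(T):
    x = random.random()              # sample from mu
    p = softmax_probs(phi, x)        # stochastic gradient oracle
    phi = [phi[i] + gamma * (nu[i] - p[i]) for i in range(len(y))]
    avg = [avg[i] + phi[i] / T for i in range(len(y))]
print(avg)
```

Since each update moves $\bm \phi$ along $\bm \nu$ minus a probability vector, the sum of the potentials is conserved; the running average mirrors the averaged outputs of Algorithm~\ref{algorithm:asgd}.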
We also assume that $\overline\psi_c(\boldsymbol \phi, \boldsymbol x)$ is $\mu$-integrable in~$\bm x$, that we have access to an oracle that generates independent samples from~$\mu$ and that problem~\eqref{eq:smooth_ot} is solvable. The following proposition establishes several useful properties of the smooth $c$-transform. \begin{proposition}[Properties of the smooth $c$-transform] \label{proposition:structural} If $\Theta$ is a marginal ambiguity set of the form~\eqref{eq:marginal_ambiguity_set} with cumulative distribution functions $F_i$, $i\in[N]$, then $\overline\psi_c(\boldsymbol \phi, \boldsymbol x)$ has the following properties for all $\boldsymbol x \in \mathcal X$. \begin{enumerate} [label=(\roman*)] \item \textbf{Bounded gradient:} \label{proposition:gradient} If $F_i$, $i\in[N]$, are continuous, then we have $ \| \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi, \boldsymbol x) \| \leq 1 $ for all $\boldsymbol \phi\in\mathbb{R}^N$. \item \textbf{Lipschitz continuous gradient:} \label{proposition:smooth} If $F_i$, $i\in[N]$, are Lipschitz continuous with Lipschitz constant~$L>0$, then $\overline\psi_c(\boldsymbol \phi, \boldsymbol x)$ is $L$-smooth with respect to $\boldsymbol \phi$ in the sense of Assumption~\ref{assumption:main}\,\ref{assumption:main:smooth}. \item \textbf{Generalized self-concordance:} \label{proposition:concordance} If $F_i$, $i\in[N]$, are twice differentiable on the interiors of their respective supports and if there is $M > 0$ with \begin{equation} \label{eq:cumulative:concoradance} \sup_{s \in F_i^{-1}(0,1)} ~ \frac{|\mathrm{d}^2F_i(s) / \mathrm{d} s^2|}{\mathrm{d} F_i(s) / \mathrm{d} s} \leq M, \end{equation} then $\overline\psi_c(\boldsymbol \phi, \boldsymbol x)$ is $M$-generalized self-concordant with respect to $\boldsymbol \phi$ in the sense of Assumption~\ref{assumption:main}\,\ref{assumption:main:concordance}. 
\end{enumerate} \end{proposition} \begin{proof} As for~\ref{proposition:gradient}, Proposition~\ref{proposition:regularized_ctrans} implies that $\nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi, \boldsymbol x) \in \Delta^N$, and thus we have $\| \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi, \boldsymbol x) \| \leq 1$. As for~\ref{proposition:smooth}, note that the convex conjugate of the smooth $c$-transform with respect to $\boldsymbol \phi$ is given by \begin{align*} \overline\psi{}_c^*(\boldsymbol p, \boldsymbol x) &= \sup_{\boldsymbol \phi \in \mathbb{R}^N} \boldsymbol p^\top \boldsymbol \phi - \overline\psi_c(\boldsymbol \phi, \boldsymbol x) = \sup_{\boldsymbol \phi \in \mathbb{R}^N} \inf_{\boldsymbol q \in \Delta^N} ~ \sum_{i=1}^N p_i \phi_i - (\phi_i - c(\boldsymbol x, \boldsymbol {y_i})) q_i - \int_{1-q_i}^1 F_i^{-1}(t)\mathrm{d} t \\ &= \inf_{\boldsymbol q \in \Delta^N} \sup_{\boldsymbol \phi \in \mathbb{R}^N} ~ \sum_{i=1}^N p_i \phi_i - (\phi_i - c(\boldsymbol x, \boldsymbol {y_i})) q_i - \int_{1-q_i}^1 F_i^{-1}(t)\mathrm{d} t \\ &= \begin{cases} \;\displaystyle \sum\limits_{i=1}^N c(\boldsymbol x, \boldsymbol {y_i}) p_i - \int_{1-p_i}^1 F_i^{-1}(t)\mathrm{d} t & \text{if } \boldsymbol p \in \Delta^N \\ \;+\infty & \text{otherwise,} \end{cases} \end{align*} where the second equality follows again from Proposition~\ref{proposition:regularized_ctrans}, and the interchange of the infimum and the supremum is allowed by Sion's classical minimax theorem. In the following we first prove that $\overline\psi{}_c^*(\boldsymbol p, \boldsymbol x)$ is $1/L$-strongly convex in $\boldsymbol p$, that is, the function $\overline\psi{}_c^*(\boldsymbol p, \boldsymbol x) - \| \boldsymbol p\|^2/ (2L)$ is convex in $\boldsymbol p$ for any fixed $\boldsymbol x \in \mathcal X$. To this end, recall that $F_i$ is assumed to be Lipschitz continuous with Lipschitz constant $L$.
Thus, we have \begin{align*} L\ge \sup_{\substack{s_1,s_2 \in \mathbb{R}\\ s_1 \neq s_2}}\frac{\left| F_i (s_1) - F_i(s_2)\right|}{|s_1 - s_2|} = \sup_{\substack{s_1, s_2 \in \mathbb{R}\\ s_1 > s_2}}\frac{ F_i (s_1) - F_i(s_2)}{s_1 - s_2} \geq \sup_{\substack{p_i, q_i \in (0,1)\\ p_i > q_i}} \frac{p_i - q_i}{F_i^{-1}(p_i) - F_i^{-1}(q_i)}, \end{align*} where the second inequality follows from restricting $s_1$ and $s_2$ to the preimage of $(0,1)$ with respect to~$F_i$. Rearranging terms in the above inequality then yields \begin{align*} -F_i^{-1}(1 - q_i) - q_i/L &\leq -F_i^{-1}(1-p_i)-p_i/L \end{align*} for all $p_i, q_i \in (0, 1)$ with $q_i < p_i$. Consequently, the function $- F_i^{-1}(1-p_i) - {p_i}/L$ is non-decreasing and its primitive $- \int_{1-p_i}^1 F_i^{-1}(t)\mathrm{d} t - p_i^2/(2 L)$ is convex in $p_i$ on the interval $(0,1)$. This implies that $$ \overline\psi{}_c^*(\boldsymbol p, \boldsymbol x) - \frac{\| \boldsymbol p\|_2^2}{2 L} = \sum_{i=1}^N c(\boldsymbol x, \boldsymbol {y_i}) p_i - \int_{1-p_i}^1 F_i^{-1}(t)\mathrm{d} t - \frac{p_i^2}{2 L}$$ constitutes a sum of convex univariate functions for every fixed $\boldsymbol x\in \mathcal{X}$. Thus, $\overline\psi{}_c^*(\boldsymbol p, \boldsymbol x)$ is $1/L$-strongly convex in $\boldsymbol p$. By \citep[Theorem~6]{kakade2009duality}, however, any convex function whose conjugate is $1/L$-strongly convex is guaranteed to be $L$-smooth. This observation completes the proof of assertion~\ref{proposition:smooth}. 
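Before turning to the last assertion, the monotonicity of the map $p_i \mapsto -F_i^{-1}(1-p_i) - p_i/L$ established above can be checked numerically; the following sketch assumes, purely for illustration, logistic marginals $F_i(s) = 1/(1+e^{-s})$, whose density is at most $1/4$, so that $L = 1/4$ is a valid Lipschitz constant:

```python
import math

# Numerical sanity check of the monotonicity argument, assuming
# (purely for illustration) logistic marginals F(s) = 1/(1+exp(-s)),
# which are Lipschitz continuous with constant L = 1/4.
L = 0.25

def F_inv(t):
    # Quantile function of the logistic distribution
    return math.log(t / (1.0 - t))

def g(p):
    # The map p -> -F^{-1}(1 - p) - p / L from the proof
    return -F_inv(1.0 - p) - p / L

grid = [k / 1000.0 for k in range(1, 1000)]
vals = [g(p) for p in grid]
is_monotone = all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
print(is_monotone)  # True
```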
As for assertion~\ref{proposition:concordance}, choose any $\boldsymbol \phi, \boldsymbol \varphi \in \mathbb{R}^N$ and $\boldsymbol x \in \mathcal X$, and introduce the auxiliary function \begin{equation} \label{eq:u:function} u(s) = \overline \psi_c \left(\boldsymbol \phi + s (\boldsymbol \varphi - \boldsymbol \phi), \boldsymbol x \right) = \max_{ \boldsymbol p \in \Delta^N} \displaystyle \sum\limits_{i=1}^N ~ (\phi_i + s (\varphi_i - \phi_i) - c(\boldsymbol x, \boldsymbol {y_i}))p_i + \int_{1-p_i}^1 F_i^{-1}(t) \mathrm{d} t. \end{equation} For ease of exposition, in the remainder of the proof we use prime symbols to designate derivatives of univariate functions. A direct calculation then yields \begin{align*} u'(s) = \left( \boldsymbol \varphi - \boldsymbol \phi \right)^\top \nabla_{\boldsymbol \phi} \overline \psi_c \left(\boldsymbol \phi + s (\boldsymbol \varphi - \boldsymbol \phi), \boldsymbol x \right) \quad \text{and} \quad u''(s) = \left( \boldsymbol \varphi - \boldsymbol \phi \right)^\top \nabla_{\boldsymbol \phi}^2 \overline \psi_c \left(\boldsymbol \phi + s (\boldsymbol \varphi - \boldsymbol \phi), \boldsymbol x \right) \left( \boldsymbol \varphi - \boldsymbol \phi \right). \end{align*} By Proposition~\ref{proposition:regularized_ctrans}, $\boldsymbol p^\star(s)=\nabla_{\boldsymbol \phi} \overline \psi_c \left(\boldsymbol \phi + s (\boldsymbol \varphi - \boldsymbol \phi), \boldsymbol x \right)$ represents the unique solution of the maximization problem in~\eqref{eq:u:function}. In addition, by~\citep[Proposition~6]{sun2019generalized}, the Hessian of the smooth $c$-transform with respect to $\boldsymbol \phi$ can be computed from the Hessian of its convex conjugate as follows.
$$ \nabla_{\boldsymbol \phi}^2 \overline \psi_c \left(\boldsymbol \phi + s (\boldsymbol \varphi - \boldsymbol \phi), \boldsymbol x \right) = \left( \nabla^2_{\boldsymbol p} \overline \psi{}_c^*(\boldsymbol p^\star(s), \boldsymbol x) \right)^{-1} = \mathrm{diag} \left( [F_1'(F_1^{-1}(1 - p_1^\star(s))), \dots, F_N'(F_N^{-1}(1 - p_N^\star(s))) ] \right)$$ Hence, the first two derivatives of the auxiliary function $u(s)$ simplify to $$ u'(s) = \sum_{i=1}^N (\varphi_i- \phi_i) p^\star_i(s) \quad \text{and} \quad u''(s) = \sum_{i=1}^N (\varphi_i- \phi_i)^2 F_i'(F_i^{-1}(1 - p_i^\star(s))).$$ Similarly, the above formula for the Hessian of the smooth $c$-transform can be used to show that $(p_i^\star)'(s) = (\varphi_i- \phi_i) F_i'(F_i^{-1}(1 - p_i^\star(s)))$ for all $i \in [N]$. The third derivative of $u(s)$ therefore simplifies to \begin{align*} u'''(s) = - \sum_{i=1}^N (\varphi_i- \phi_i)^2 \,\frac{ F_i''(F_i^{-1}(1 - p_i^\star(s)))}{F_i'(F_i^{-1}(1 - p_i^\star(s)))}\, (p_i^\star)'(s) = - \sum_{i=1}^N (\varphi_i- \phi_i)^3 F_i''(F_i^{-1}(1 - p_i^\star(s))). \end{align*} This implies via H\"{o}lder's inequality that \begin{align*} | u'''(s) | &= \left| \sum_{i=1}^N (\varphi_i- \phi_i)^2\, F_i'(F_i^{-1}(1 - p_i^\star(s))) \, \frac{F_i''(F_i^{-1}(1 - p_i^\star(s)))}{F_i'(F_i^{-1}(1 - p_i^\star(s)))} \, (\varphi_i- \phi_i) \right| \\ &\leq \left( \sum_{i=1}^N (\varphi_i- \phi_i)^2\, F_i'(F_i^{-1}(1 - p_i^\star(s))) \right) \left( \max_{i \in [N]} \left| \frac{F_i''(F_i^{-1}(1 - p_i^\star(s)))}{F_i'(F_i^{-1}(1 - p_i^\star(s)))} \, (\varphi_i- \phi_i) \right| \right). 
\end{align*} Notice that the first term in the above expression coincides with $u''(s)$, and the second term satisfies \begin{align*} \max_{i \in [N]} \left| \frac{F_i''(F_i^{-1}(1 - p_i^\star(s)))}{F_i'(F_i^{-1}(1 - p_i^\star(s)))} \, (\varphi_i- \phi_i) \right| \leq \max_{i \in [N]} \left| \frac{F_i''(F_i^{-1}(1 - p_i^\star(s)))}{F_i'(F_i^{-1}(1 - p_i^\star(s)))} \right| \, \| \boldsymbol \varphi - \boldsymbol \phi \|_\infty \leq M \| \boldsymbol \varphi - \boldsymbol \phi \|, \end{align*} where the first inequality holds because $\max_{i \in [N]} |a_i b_i| \leq \| \boldsymbol a \|_{\infty} \| \boldsymbol b \|_\infty $ for all $\boldsymbol a, \boldsymbol b \in \mathbb R^N$, and the second inequality follows from the definition of $M$ and the fact that the 2-norm provides an upper bound on the $\infty$-norm. Combining the above results shows that $|u'''(s)|\leq M \| \boldsymbol \varphi - \boldsymbol \phi \| u''(s)$ for all $s\in\mathbb{R}$. The claim now follows because $\boldsymbol \phi, \boldsymbol \varphi \in \mathbb{R}^N$ and $\boldsymbol x \in \mathcal X$ were chosen arbitrarily.
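As a concrete illustration of condition~\eqref{eq:cumulative:concoradance} (under the assumption of logistic marginals, which is not part of the proposition): for $F(s) = 1/(1+e^{-s})$ one has $F' = F(1-F)$ and $F'' = F'(1-2F)$, so the ratio $|F''|/F'$ equals $|1-2F| \leq 1$ and the condition holds with $M = 1$. This can be verified numerically:

```python
import math

# Numerical check that the logistic CDF satisfies the generalized
# self-concordance condition with M = 1: |F''(s)| / F'(s) = |1 - 2 F(s)|.
def F(s):
    return 1.0 / (1.0 + math.exp(-s))

def ratio(s):
    first = F(s) * (1.0 - F(s))          # F'(s), the logistic density
    second = first * (1.0 - 2.0 * F(s))  # F''(s)
    return abs(second) / first

grid = [k / 10.0 for k in range(-200, 201)]
M_hat = max(ratio(s) for s in grid)
print(M_hat <= 1.0)  # True
```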
\end{proof} \begin{table}[!b] \centering \begin{minipage}{0.375\textwidth} \vspace{-1em} \begin{algorithm}[H] \caption{\label{algorithm:asgd}Averaged SGD \protect\phantom{$\nabla_{\boldsymbol \phi} \overline\psi$}} \begin{algorithmic}[1] \Require $\gamma, T, \bar \varepsilon$ \vspace{0.05em} \State Set $\boldsymbol \phi_0 \gets \boldsymbol 0$ \For{$t = 1, 2,\dots, T$} \State \hspace{-1ex}Sample $\boldsymbol {x_t}$ from $\mu$ \State \hspace{-1ex}Choose $\varepsilon_{t-1} \in (0, \bar \varepsilon / (2 \sqrt{t})]$ \State \hspace{-1ex}Set $ \boldsymbol p \gets \text{Bisection}(\boldsymbol {x_t}, \boldsymbol \phi_{t-1}, \varepsilon_{t-1})$ \State \hspace{-1ex}Set $\boldsymbol \phi_t \leftarrow \boldsymbol \phi_{t-1} + \gamma (\boldsymbol \nu - \boldsymbol p)$ \EndFor \vspace{0.05em} \Ensure $\ubar{\boldsymbol \phi}_T = \frac{1}{T}\sum_{t=1}^T \boldsymbol \phi_{t-1}$~ and \phantom{abcd} \phantom{abcd}\hspace{0.85ex} $\bar{\boldsymbol \phi}_T = \frac{1}{T}\sum_{t=1}^T \boldsymbol \phi_{t}$ \end{algorithmic} \end{algorithm} \vspace{-1em} \end{minipage} \begin{minipage}{0.6175\textwidth} \vspace{-1em} \begin{algorithm}[H] \caption{\label{algorithm:bisection}Bisection method to approximate $\nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi, \boldsymbol x)$} \begin{algorithmic}[1] \Require $\boldsymbol x, \boldsymbol \phi, \varepsilon$ \vspace{0.1em} \State Set $\overline{\tau} \gets \max_{i \in [N]} ~ \{c(\boldsymbol x,\boldsymbol {y_i}) - \phi_i - F_i^{-1}(1-1/N) \}$ \State Set $\underline{\tau} \gets \min_{i \in [N]} ~ \{c(\boldsymbol x,\boldsymbol {y_i}) - \phi_i - F_i^{-1}(1-1/N) \}$ \State Evaluate $\delta(\varepsilon)$ as defined in~\eqref{eq:delta(eps)} \For{$k = 1, 2,\dots, \lceil \log_2 ((\overline{\tau} - \underline{\tau}) / \delta(\varepsilon)) \rceil$} \State \hspace{-1ex}Set $\tau \gets {(\overline{\tau} + \underline{\tau})}/{2}$ \State \hspace{-1ex}Set $p_i \gets 1-F_i(c(\boldsymbol x,\boldsymbol {y_i}) -\phi_i -\tau)$ for $i \in [N]$ \State 
\hspace{-1ex}\algorithmicif~~$\sum_{i \in [N]} p_i > 1$~~\algorithmicthen~~$\overline{\tau} \gets \tau$~~\algorithmicelse~~$\underline{\tau} \gets \tau$ \EndFor \vspace{0.1em} \Ensure $\boldsymbol p $ with $p_i = 1-F_i(c(\boldsymbol x,\boldsymbol {y_i}) -\phi_i -\underline \tau)$, $i\in[N]$ \end{algorithmic} \end{algorithm} \vspace{-1em} \end{minipage} \end{table} In the following we use the averaged SGD algorithm of Section~\ref{section:AGD} to solve the smooth optimal transport problem~\eqref{eq:smooth_ot}. A detailed description of this algorithm in pseudocode is provided in Algorithm~\ref{algorithm:asgd}. This algorithm repeatedly calls a sub-routine for estimating the gradient of $\overline\psi_c(\boldsymbol \phi, \boldsymbol x)$ with respect to $\bm \phi$. By Proposition~\ref{proposition:regularized_ctrans}, this gradient coincides with the unique solution~$\boldsymbol p^\star$ of the convex maximization problem~\eqref{eq:regularized_c_transform}. In addition, from the proof of Proposition~\ref{proposition:regularized_ctrans} it is clear that its components are given by \begin{align*} p^\star_i = \theta^\star \left[ i = \min \argmax_{j \in [N]} \phi_j - c(\boldsymbol x, \boldsymbol y_j) + z_j \right] \quad \forall i \in [N], \end{align*} where $\theta^\star$ represents an optimizer of the semi-parametric discrete choice problem~\eqref{eq:smooth_c_transform}. Therefore, $\boldsymbol p^\star$ can be interpreted as a vector of choice probabilities under the best-case probability measure~$\theta^\star$. Sometimes these choice probabilities are available in closed form. This is the case, for instance, in the exponential distribution model of Example~\ref{ex:exp}, which is equivalent to the generalized extreme value distribution model of Section~\ref{sec:gevm}. 
Indeed, in this case $\boldsymbol p^\star$ is given by a softmax of the utility values $\phi_i - c(\boldsymbol x, \boldsymbol {y_i})$, $i\in[N]$, {\em i.e.}, \begin{equation}\label{eq:softmax} p_i^\star= \frac{\eta_i \exp\left((\phi_i - c(\boldsymbol x, \boldsymbol {y_i}))/{\lambda}\right)}{\sum_{j=1}^N \eta_j \exp\left((\phi_j - c(\boldsymbol x,\boldsymbol {y_j}))/{\lambda} \right)} \quad \forall i \in [N]. \end{equation} Note that these particular choice probabilities are routinely studied in the celebrated multinomial logit choice model~\citep[\S~5.1]{ben1985discrete}. The choice probabilities are also available in closed form in the uniform distribution model of Example~\ref{ex:uniform}. As the derivation of $\boldsymbol p^\star$ is somewhat cumbersome in this case, we relegate it to Appendix~\ref{appendix:spmax}. For general marginal ambiguity sets with continuous marginal distribution functions, we propose a bisection method to compute the gradient of the smooth $c$-transform numerically up to any prescribed accuracy; see Algorithm~\ref{algorithm:bisection}. \begin{theorem}[Biased gradient oracle] \label{theorem:bisection} If $\Theta$ is a marginal ambiguity set of the form~\eqref{eq:marginal_ambiguity_set} and the cumulative distribution function $F_i$ is continuous for every $i\in[N]$, then, for any $\boldsymbol x \in \mathcal X$, $\boldsymbol \phi \in \mathbb{R}^N$ and $\varepsilon > 0$, Algorithm~\ref{algorithm:bisection} outputs $\boldsymbol p \in \mathbb{R}^N $ with $\| \boldsymbol p \| \leq 1$ and $\| \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi, \boldsymbol x) - {\boldsymbol p} \| \leq \varepsilon$.
\end{theorem} \begin{proof} Thanks to Proposition~\ref{proposition:regularized_ctrans}, we can recast the smooth $c$-transform in dual form as \begin{equation*} \overline \psi_c(\boldsymbol \phi, \boldsymbol x) = \min_{\substack{\boldsymbol \zeta \in \mathbb{R}_+^N \\ \tau \in \mathbb{R}}}\;\sup_{\boldsymbol p \in \mathbb{R}^N} ~ \sum\limits_{i=1}^N (\phi_i - c(\boldsymbol {x},\boldsymbol {y_i}))p_i +\sum\limits_{i=1}^N \int^1_{1- p_i} F_i^{-1}(t)\mathrm{d} t + \tau \left(\sum\limits_{i=1}^N p_i - 1 \right)+ \sum\limits_{i=1}^N \zeta_i p_i. \end{equation*} Strong duality and dual solvability hold because we may construct a Slater point for the primal problem by setting $p_i=1/N$, $i\in[N]$. By the Karush-Kuhn-Tucker optimality conditions, $\boldsymbol p^\star$ and $(\tau^\star,\boldsymbol \zeta^\star)$ are therefore optimal in the primal and dual problems, respectively, if and only if we have \begin{align*} \begin{array}{lll} \sum_{i=1}^N p^\star_i =1, ~p^\star_i \geq 0 & \forall i \in [N] & \text{(primal feasibility)}\\ \zeta^\star_i\geq 0 & \forall i \in [N] & \text{(dual feasibility)}\\ \zeta_i^\star p_i^\star=0 & \forall i \in [N] & \text{(complementary slackness)} \\ \phi_i-c(\boldsymbol {x},\boldsymbol {y_i}) + F_i^{-1}(1-p^\star_i) + \tau^\star + \zeta^\star_i = 0 & \forall i \in [N] & \text{(stationarity)}. \end{array} \end{align*} If $p_i^\star > 0$, then the complementary slackness and stationarity conditions imply that $\zeta_i^\star = 0$ and that $\phi_i-c(\boldsymbol {x},\boldsymbol {y_i}) + F_i^{-1}(1-p^\star_i) + \tau^\star = 0$, respectively. Thus, we have $p_i^\star = 1 - F_i(c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i -\tau^\star)$. If $p_i^\star = 0$, on the other hand, then similar arguments show that $\zeta_i^\star \geq 0$ and $\phi_i-c(\boldsymbol {x},\boldsymbol {y_i}) + F_i^{-1}(1) + \tau^\star \leq 0$. These two inequalities are equivalent to $1 - F_i(c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i -\tau^\star) \leq 0$. 
As all values of~$F_i$ are smaller than or equal to~$1$, the last inequality must in fact hold as an equality. Combining the insights gained so far thus yields $p_i^\star = 1 - F_i(c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i -\tau^\star)$, which holds for all~$i\in[N]$ irrespective of whether $p_i^\star$ vanishes. Primal feasibility therefore ensures that $\sum_{i=1}^N 1 - F_i(c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i -\tau^\star) = 1$. Finding the unique optimizer $\boldsymbol p^\star$ of~\eqref{eq:regularized_c_transform} ({\em i.e.}, finding the gradient of $ \overline \psi_c(\boldsymbol \phi, \boldsymbol x)$) is therefore tantamount to finding a root $\tau^\star$ of the univariate equation \begin{align} \label{eq:root-finding} \sum_{i=1}^N 1 - F_i(c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i -\tau) = 1. \end{align} Note that the function on the left-hand side of~\eqref{eq:root-finding} is continuous and non-decreasing in $\tau$ because of the continuity (by assumption) and monotonicity (by definition) of the cumulative distribution functions $F_i$, $i\in[N]$. Hence, the root-finding problem can be solved efficiently via bisection. To complete the proof, we first show that the interval between the constants $\underline{\tau}$ and $\overline{\tau}$ defined in Algorithm~\ref{algorithm:bisection} is guaranteed to contain~$\tau^\star$. Specifically, we will demonstrate that evaluating the function on the left-hand side of~\eqref{eq:root-finding} at $\underline\tau$ or $\overline \tau$ yields a number that is not larger or not smaller than~1, respectively.
For $\tau=\underline\tau$ we have \begin{align*} 1 - F_i(c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i -\underline\tau) &= 1 - F_i \left( c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i - \min_{j \in [N]} \left\{ c \left( \boldsymbol {x}, \boldsymbol {y_j} \right) - \phi_j -F_j^{-1}(1-1/N) \right\} \right) \\ &\leq 1 - F_i \left( F_i^{-1}(1-1/N) \right) = 1 / N\qquad \forall i\in[N], \end{align*} where the inequality follows from the monotonicity of $F_i$. Summing the above inequality over all $i\in[N]$ then yields the desired inequality $\sum_{i =1}^N 1 - F_i(c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i -\underline\tau) \leq 1$. Similarly, for $\tau=\overline\tau$ we have \begin{align*} 1 - F_i(c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i -\overline\tau) &= 1 - F_i \left( c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i - \max_{j \in [N]} \left\{ c \left( \boldsymbol {x}, \boldsymbol {y_j} \right) - \phi_j -F_j^{-1}(1-1/N) \right\} \right) \\ &\geq 1 - F_i \left( F_i^{-1}(1-1/N) \right) = 1/N \qquad \forall i\in[N]. \end{align*} We may thus conclude that $\sum_{i =1}^N 1 - F_i(c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i -\overline\tau) \geq 1$. Therefore, $[\underline \tau, \overline \tau]$ constitutes a valid initial search interval for the bisection algorithm. Note that the function $1 - F_i(c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i -\tau)$, which defines $p_i$ in terms of $\tau$, is uniformly continuous in~$\tau$ throughout~$\mathbb R$. This follows from \citep[Problem~14.8]{billingsley1995} and our assumption that~$F_i$ is continuous. The uniform continuity ensures that the tolerance \begin{align} \label{eq:delta(eps)} \delta(\varepsilon) = \min_{i \in [N]} \left\{ \max_\delta \left\{ \delta : | F_i(t_1) - F_i(t_2) | \leq \varepsilon / \sqrt{N} ~~ \forall t_1,t_2\in\mathbb{R} \text{ with } | t_1 - t_2 | \leq \delta \right\} \right\} \end{align} is strictly positive for every~$\varepsilon>0$.
As the length of the search interval is halved in each iteration, Algorithm~\ref{algorithm:bisection} outputs a near-optimal solution $\tau$ with $| \tau - \tau^\star | \leq \delta(\varepsilon)$ after $\lceil \log_2 ((\overline{\tau} - \underline{\tau}) / \delta(\varepsilon)) \rceil$ iterations. Moreover, the construction of~$\delta(\varepsilon)$ guarantees that $|1 - F_i(c(\boldsymbol {x},\boldsymbol {y_i})-\phi_i -\tau) - p_i^\star| \leq \varepsilon / \sqrt{N}$ for all $\tau$ with~$|\tau - \tau^\star| \leq \delta(\varepsilon)$. Therefore, the output~$\boldsymbol p\in\mathbb{R}^N_+$ of Algorithm~\ref{algorithm:bisection} satisfies $|p_i - p_i^\star| \leq \varepsilon / \sqrt{N} $ for each~$i\in[N]$, which in turn implies that $ \| \boldsymbol p - \boldsymbol p^\star \| \leq \varepsilon$. By construction, finally, Algorithm~\ref{algorithm:bisection} outputs $\boldsymbol p\ge \boldsymbol 0$ with $\sum_{i \in [N]} p_i \leq 1$, which ensures that $\| \boldsymbol p \| \leq 1$. Thus, the claim follows. \end{proof} If all cumulative distribution functions $F_i$, $i\in[N]$, are Lipschitz continuous with a common Lipschitz constant~$L>0$, then the uniform continuity parameter~$\delta(\varepsilon)$ required in Algorithm~\ref{algorithm:bisection} can simply be set to~$\delta(\varepsilon) = \varepsilon / (L \sqrt{N})$. We are now ready to prove that Algorithm~\ref{algorithm:asgd} offers different convergence guarantees depending on the continuity and smoothness properties of the marginal cumulative distribution functions. \begin{corollary} \label{corollary:convergence} Use $h(\bm \phi) = \mathbb{E}_{\boldsymbol x \sim \mu} [ \boldsymbol \nu^\top \boldsymbol \phi - \overline\psi_c(\boldsymbol \phi, \boldsymbol x)]$ as a shorthand for the objective function of the smooth optimal transport problem~\eqref{eq:smooth_ot}, and let $\boldsymbol \phi^\star$ be a maximizer of~\eqref{eq:smooth_ot}.
If $\Theta$ is a marginal ambiguity set of the form~\eqref{eq:marginal_ambiguity_set} with distribution functions $F_i$, $i\in[N]$, then for any $T \in \mathbb N$ and $\bar \varepsilon\ge 0$, the outputs $\ubar{\boldsymbol \phi}_T = \frac{1}{T} \sum_{t=1}^{T} \boldsymbol \phi_{t-1}$ and $\bar{\boldsymbol \phi}_T = \frac{1}{T} \sum_{t=1}^{T} \boldsymbol \phi_{t}$ of Algorithm~\ref{algorithm:asgd} satisfy the following inequalities. \begin{enumerate} [label=(\roman*)] \item \label{corollary:convergence:Lipschitz} If $\gamma = 1 / (2 (2 + \bar \varepsilon) \sqrt{T})$ and $F_i$ is continuous for every $i\in[N]$, then we have \begin{align*} \overline W_c (\mu, \nu) - \mathbb{E} \left[ h \big(\ubar{\boldsymbol \phi}_T \big) \right] \leq \frac{(2 + \bar \varepsilon)^2}{\sqrt{T}} \| \bm \phi^\star \|^2 + \frac{1}{4\sqrt{T}} + \frac{\bar \varepsilon}{\sqrt{T}} \sqrt{2 \| \bm \phi^\star \|^2 + \frac{37}{2(2 + \bar \varepsilon)^2}}. \end{align*} \item \label{corollary:convergence:smooth} If $\gamma = 1 / (2 \sqrt{T} + L)$ and $F_i$ is Lipschitz continuous with Lipschitz constant $L>0$ for every $i\in[N]$, then we have \begin{align*} \overline W_c (\mu, \nu) - \mathbb{E} \left[ h \big(\bar{\boldsymbol \phi}_T \big) \right] &\leq \frac{L}{2T}\| \bm \phi^\star \|^2 + \frac{(2 + \bar \varepsilon)^2}{\sqrt{T}} \| \bm \phi^\star \|^2 + \frac{\bar\varepsilon^2 + 2}{4 (2+\bar \varepsilon)^2\sqrt{T}} + \frac{\bar \varepsilon}{\sqrt{T}} \sqrt{2 \| \bm \phi^\star \|^2 + \frac{37}{2(2 + \bar \varepsilon)^2}}. 
\end{align*} \item \label{corollary:convergence:concordance} If $\gamma = 1 / (2 G^2 \sqrt{T}) $ with $G = \max \{M, 2 + \bar\varepsilon\}$, $F_i$ satisfies the generalized self-concordance condition~\eqref{eq:cumulative:concoradance} with~$M> 0$ for every $i\in[N]$, and the smallest eigenvalue $\kappa$ of $-\nabla^2_{\boldsymbol \phi} h(\boldsymbol \phi^\star)$ is strictly positive, then we have \begin{align*} \overline W_c (\mu, \nu) - \mathbb{E} \left[ h \big(\ubar{\boldsymbol \phi}_T \big) \right] &\leq \frac{G^2}{T \kappa} \left( 4 G \| \bm \phi_0 - \bm \phi^\star \| + 20 \right)^4. \end{align*} \end{enumerate} \end{corollary} \begin{proof} Recall that problem \eqref{eq:smooth_ot} can be viewed as an instance of the convex minimization problem~\eqref{eq:convex:problem} provided that its objective function is negated. Throughout the proof we denote by $\boldsymbol p_t(\boldsymbol \phi_t, \boldsymbol x_t)$ the inexact estimate for $\nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi_t, \boldsymbol x_t)$ output by Algorithm~\ref{algorithm:bisection} in iteration $t$ of the averaged SGD algorithm. Note that \begin{align*} \left\| \mathbb{E} \left[ \bm \nu - \bm p_t(\bm \phi_{t-1}, \bm x_t) \big| \mathcal F_{t-1} \right] - \nabla h(\bm \phi_{t-1}) \right\| &= \left\| \mathbb{E} \left[ \bm p_t(\bm \phi_{t-1}, \bm x_t) - \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi_{t-1}, \boldsymbol x_t)\right] \right\| \\ &\leq \mathbb{E} \left[ \left\| \bm p_t(\bm \phi_{t-1}, \bm x_t) - \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi_{t-1}, \boldsymbol x_t) \right\| \right] \leq \varepsilon_{t-1} \leq \frac{\bar \varepsilon}{2 \sqrt{t}}, \end{align*} where the two inequalities follow from Jensen's inequality and the choice of $\varepsilon_{t-1}$ in Algorithm~\ref{algorithm:asgd}, respectively.
The triangle inequality and Proposition~\ref{proposition:structural}\,\ref{proposition:gradient} further imply that \begin{align*} \left\| \nabla h(\bm \phi) \right\| = \left\| \mathbb{E} \left[ \boldsymbol \nu - \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi, \boldsymbol x) \right] \right\| \leq \left\| \boldsymbol \nu \right\| + \mathbb{E} \left[ \left\| \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi, \boldsymbol x) \right\| \right] \leq 2. \end{align*} Assertion~\ref{corollary:convergence:Lipschitz} thus follows from Theorem~\ref{theorem:convergence}\,\ref{theorem:convergence:Lipschitz} with $R=2$. As for assertion~\ref{corollary:convergence:smooth}, we have \begin{align*} &\phantom{=}~\; \mathbb{E} \left[\left\| \bm \nu - \bm p_t(\bm \phi_{t-1}, \bm x_t) - \nabla h(\bm \phi_{t-1}) \right\|^2 | \mathcal F_{t-1} \right] \\ &= \mathbb{E} \left[\left\| \bm p_t(\bm \phi_{t-1}, \bm x_t) - \mathbb{E} \left[ \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi_{t-1}, \boldsymbol x) \right] \right\|^2 | \mathcal F_{t-1} \right] \\ &= \mathbb{E} \left[\left\| \bm p_t(\bm \phi_{t-1}, \bm x_t) - \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi_{t-1}, \boldsymbol x) + \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi_{t-1}, \boldsymbol x) - \mathbb{E} \left[ \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi_{t-1}, \boldsymbol x) \right] \right\|^2 | \mathcal F_{t-1} \right] \\ & \leq \mathbb{E} \left[ 2 \left\| \bm p_t(\bm \phi_{t-1}, \bm x_t) - \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi_{t-1}, \boldsymbol x) \right\|^2 + 2 \left\| \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi_{t-1}, \boldsymbol x) - \mathbb{E} \left[ \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi_{t-1}, \boldsymbol x) \right] \right\|^2 | \mathcal F_{t-1} \right] \\ & \leq 2\varepsilon_{t-1}^2 + 2 \leq \bar \varepsilon^2 + 2, \end{align*} where the second inequality holds because $\nabla_{\boldsymbol \phi}
\overline\psi_c(\boldsymbol \phi_{t-1}, \boldsymbol x) \in \Delta^N$ and because $\| \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi_{t-1}, \boldsymbol x) \|_2^2 \leq 1$, while the last inequality follows from the choice of $\varepsilon_{t-1}$ in Algorithm~\ref{algorithm:asgd}. As $\overline\psi_c(\boldsymbol \phi, \boldsymbol x)$ is $L$-smooth with respect to~$\boldsymbol \phi$ by virtue of Proposition~\ref{proposition:structural}\,\ref{proposition:smooth}, we further have \begin{align*} \| \nabla h(\bm \phi) - \nabla h(\bm \phi') \| = \left\| \mathbb{E} \left[ \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi, \boldsymbol x) - \nabla_{\boldsymbol \phi} \overline\psi_c(\boldsymbol \phi', \boldsymbol x) \right] \right\| \leq L \| \bm \phi - \bm \phi' \| \quad \forall \bm \phi, \bm \phi' \in \mathbb{R}^N. \end{align*} Assertion~\ref{corollary:convergence:smooth} thus follows from Theorem~\ref{theorem:convergence}\,\ref{theorem:convergence:smooth} with $R=2$ and $\sigma = \sqrt{\bar \varepsilon^2 + 2}$. As for assertion~\ref{corollary:convergence:concordance}, finally, we observe that $h$ is $M$-generalized self-concordant thanks to Proposition~\ref{proposition:structural}\,\ref{proposition:concordance}. Assertion~\ref{corollary:convergence:concordance} thus follows from Theorem~\ref{theorem:convergence}\,\ref{theorem:convergence:concordance} with $R=2$. \end{proof} One can show that the objective function of the smooth optimal transport problem~\eqref{eq:smooth_ot} with marginal exponential noise distributions as described in Example~\ref{ex:exp} is generalized self-concordant. Hence, the convergence rate of Algorithm~\ref{algorithm:asgd} for the exponential distribution model of Example~\ref{ex:exp} is of the order~$\mathcal O(1/T)$, which improves the state-of-the-art $\mathcal O(1/\sqrt{T})$ guarantee established by~\citet{genevay2016stochastic}.
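Before turning to experiments, we note that the gradient oracle of Algorithm~\ref{algorithm:bisection} condenses into a few lines of code. The following is a minimal Python sketch, not the authors' implementation; it assumes the common marginal CDF and its quantile function are supplied as vectorized callables, and all identifiers are ours:

```python
import numpy as np

def bisection_oracle(cost, phi, cdf, quantile, delta):
    """Sketch of Algorithm 2: approximate the gradient p* of the smooth c-transform.

    cost     : array with entries c(x, y_i), i = 1, ..., N
    phi      : dual variables phi_i
    cdf      : vectorized marginal CDF F (assumed continuous)
    quantile : vectorized quantile function F^{-1}
    delta    : bisection tolerance delta(eps)
    """
    n = len(cost)
    shift = cost - phi - quantile(np.full(n, 1.0 - 1.0 / n))
    tau_hi, tau_lo = shift.max(), shift.min()   # valid initial bracket (see proof)
    width = max((tau_hi - tau_lo) / delta, 1.0)
    for _ in range(int(np.ceil(np.log2(width)))):
        tau = 0.5 * (tau_hi + tau_lo)
        p = 1.0 - cdf(cost - phi - tau)
        if p.sum() > 1.0:
            tau_hi = tau
        else:
            tau_lo = tau
    # report p at the lower endpoint, as in the pseudocode
    return 1.0 - cdf(cost - phi - tau_lo)
```

With a continuous CDF such as the logistic distribution, the returned vector is nonnegative and sums to one up to the chosen tolerance, in line with Theorem~\ref{theorem:bisection}.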
\section{Numerical Experiments} \label{sec:numerical} All experiments are run on a 2.6 GHz 6-Core Intel Core i7, and all optimization problems are implemented in MATLAB~R2020a. The corresponding codes are available at \url{https://github.com/RAO-EPFL/Semi-Discrete-Smooth-OT.git}. We now aim to assess the empirical convergence behavior of Algorithm~\ref{algorithm:asgd} and to showcase the effects of regularization in semi-discrete optimal transport. To this end, we solve the original dual optimal transport problem~\eqref{eq:ctans_dual_semidisc} as well as its smooth variant~\eqref{eq:smooth_ot} with a Fr\'echet ambiguity set corresponding to the exponential distribution model of Example~\ref{ex:exp}, to the uniform distribution model of Example~\ref{ex:uniform} {\color{black} and to the hyperbolic cosine distribution model of Example~\ref{ex:hyperbolic}}. Recall from Theorem~\ref{theorem:primal_dual} that any Fr\'echet ambiguity set is uniquely determined by a marginal generating function~$F$ and a probability vector~$\boldsymbol \eta$. As for the exponential distribution model of Example~\ref{ex:exp}, we set~$F(s) = \exp(10 s - 1)$ and~$\eta_i = 1/N$ for all~$i\in[N]$. In this case problem~\eqref{eq:smooth_ot} is equivalent to the regularized primal optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with an entropic regularizer, and the gradient $\nabla_{\boldsymbol \phi}\overline \psi_c(\boldsymbol \phi, \boldsymbol x)$, which is known to coincide with the vector~$\boldsymbol p^\star$ of optimal choice probabilities in problem~\eqref{eq:regularized_c_transform}, admits the closed-form representation~\eqref{eq:softmax}. We can therefore solve problem~\eqref{eq:smooth_ot} with a variant of Algorithm~\ref{algorithm:asgd} that calculates $\nabla_{\boldsymbol \phi}\overline \psi_c(\boldsymbol \phi, \boldsymbol x)$ exactly instead of approximately via bisection. 
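For concreteness, the closed-form gradient~\eqref{eq:softmax} used in this exact-gradient variant can be evaluated stably by shifting the utilities before exponentiating. The following is a hedged Python sketch (the function and parameter names are ours):

```python
import numpy as np

def softmax_gradient(cost, phi, eta, lam):
    """Choice probabilities p* of the softmax formula: a numerically stable
    softmax of the utilities (phi_i - c(x, y_i)) / lam with weights eta_i."""
    u = (phi - cost) / lam + np.log(eta)
    u -= u.max()            # shift: exp never overflows, ratios are unchanged
    w = np.exp(u)
    return w / w.sum()
```

When the weights $\eta_i = 1/N$ are uniform, the $\log \eta_i$ term is a constant shift and can be dropped without changing the output.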
As for the uniform distribution model of Example~\ref{ex:uniform}, we set~$F(s) = s / 20 + 1/2$ and~$\eta_i = 1/N$ for all~$i\in[N]$. In this case problem~\eqref{eq:smooth_ot} is equivalent to the regularized primal optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with a $\chi^2$-divergence regularizer, and the vector~$\boldsymbol p^\star$ of optimal choice probabilities can be computed exactly and highly efficiently by sorting thanks to Proposition~\ref{proposition:spmax} in the appendix. We can therefore again solve problem~\eqref{eq:smooth_ot} with a variant of Algorithm~\ref{algorithm:asgd} that calculates $\nabla_{\boldsymbol \phi}\overline \psi_c(\boldsymbol \phi, \boldsymbol x)$ exactly. {\color{black} As for the hyperbolic cosine model of Example~\ref{ex:hyperbolic}, we set~$F(s) = \sinh(10s - k)$ with $k=\sqrt{2} - 1 - \textrm{arcsinh}(1)$ and~$\eta_i = 1/N$ for all~$i \in [N]$. In this case problem~\eqref{eq:smooth_ot} is equivalent to the regularized primal optimal transport problem~\eqref{eq:reg_ot_pri_abstract} with a hyperbolic divergence regularizer. However, the vector~$\bm p^\star$ is not available in closed form, and thus we use~Algorithm~\ref{algorithm:bisection} to compute~$\bm p^\star$ approximately.} Lastly, note that the original dual optimal transport problem~\eqref{eq:ctans_dual_semidisc} can be interpreted as an instance of~\eqref{eq:smooth_ot} equipped with a degenerate singleton ambiguity set that only contains the Dirac measure at the origin of~$\mathbb{R}^N$. 
In this case $\overline \psi_c(\boldsymbol \phi,\boldsymbol x) = \psi_c(\boldsymbol \phi,\boldsymbol x)$ fails to be smooth in~$\boldsymbol \phi$, but an exact subgradient $\boldsymbol p^\star\in\partial_{\boldsymbol \phi} \overline \psi_c(\boldsymbol \phi,\boldsymbol x)$ is given by \[ p_i^\star = \begin{cases} 1 \quad &\text{if}~ i = \min \argmax\limits_{i \in [N]}~\phi_i - c(\boldsymbol x, \boldsymbol y_i),\\ 0 &\text{otherwise.} \end{cases} \] We can therefore solve problem~\eqref{eq:ctans_dual_semidisc} with a variant of Algorithm~\ref{algorithm:asgd} that has access to exact subgradients (instead of gradients) of~$\overline \psi_c(\boldsymbol \phi, \boldsymbol x)$. Note that the maximizer~$\boldsymbol \phi^\star$ of~\eqref{eq:ctans_dual_semidisc} may not be unique. In our experiments, we force Algorithm~\ref{algorithm:asgd} to converge to the maximizer with minimal Euclidean norm by adding a vanishingly small Tikhonov regularization term to~$\psi_c(\boldsymbol \phi,\boldsymbol x)$. Thus, we set $\overline \psi_c(\boldsymbol \phi,\boldsymbol x) = \psi_c(\boldsymbol \phi,\boldsymbol x) + \varepsilon\|\boldsymbol \phi\|_2^2$ for some small regularization weight~$\varepsilon> 0$, in which case~$\boldsymbol p^\star+2\varepsilon \boldsymbol \phi\in\partial_{\boldsymbol \phi}\overline \psi_c(\boldsymbol \phi,\boldsymbol x)$ is an exact subgradient. In the following we set~$\mu$ to the standard Gaussian measure on~$\mathcal X= \mathbb{R}^2$ and~$\nu$ to the uniform measure on 10 independent samples drawn uniformly from~$\mathcal Y=[-1,\, 1]^2$. We further set the transportation cost to~$c(\boldsymbol x, \boldsymbol y) = \|\boldsymbol x - \boldsymbol y\|_\infty$. Under these assumptions, we use Algorithm~\ref{algorithm:asgd} to solve the original as well as the~{\color{black}three} smooth optimal transport problems approximately {\color{black} for $T=1,\ldots, 10^5$. 
For each fixed~$T$ the step size is selected in accordance with Corollary~\ref{corollary:convergence}.} We emphasize that Corollary~\ref{corollary:convergence}\,\ref{corollary:convergence:Lipschitz} remains valid if~$\overline \psi_c(\boldsymbol \phi,\boldsymbol x)$ fails to be smooth in~$\boldsymbol \phi$ and we only have access to subgradients; see \cite[Corollary~1]{nesterov2008confidence}. Denoting by~$\bar{\boldsymbol \phi}_T$ the output of Algorithm~\ref{algorithm:asgd}, we record the suboptimality \begin{equation*} \overline W_c(\mu, \nu) - \mathbb{E}_{\boldsymbol x \sim \mu} \left[ \boldsymbol \nu^\top \bar{\boldsymbol \phi}_T - \overline\psi_c(\bar{\boldsymbol \phi}_T , \boldsymbol x)\right] \end{equation*} of~$\bar{\boldsymbol \phi}_T$ in~\eqref{eq:smooth_ot} as well as the discrepancy~$\| \bar {\boldsymbol \phi}_T - \boldsymbol \phi^\star \|^2_2$ of~$\bar{\boldsymbol \phi}_T$ to the exact maximizer~$\boldsymbol \phi^\star$ of problem~\eqref{eq:smooth_ot} as a function of~$T$. In order to faithfully measure the convergence rate of $\bar{\boldsymbol \phi}_T$ and its suboptimality, we need to compute~$\boldsymbol \phi^\star$ as well as~$\overline W_c(\mu, \nu)$ to within high accuracy. This is only possible if the dimension of~$\mathcal X$ is small ({\em e.g.}, if~$\mathcal X= \mathbb{R}^2$ as in our numerical example), even though Algorithm~\ref{algorithm:asgd} can efficiently solve optimal transport problems in high dimensions. We obtain high-quality approximations for~$\overline W_c(\mu, \nu)$ and~$\boldsymbol \phi^\star$ by solving the finite-dimensional optimal transport problem between~$\nu$ and the discrete distribution that places equal weight on $10 \times T$ samples drawn independently from~$\mu$. Note that only the first~$T$ of these samples are used by Algorithm~\ref{algorithm:asgd}.
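In outline, the loop that consumes these samples (Algorithm~\ref{algorithm:asgd}) reduces to the sketch below; the sampler and gradient oracle are injected as callables, all names are ours, and the exact-gradient variants simply plug in the closed-form oracles described above:

```python
import numpy as np

def averaged_sgd(sample_x, grad_oracle, nu, gamma, T, seed=0):
    """Sketch of Algorithm 1: averaged SGD on the smooth dual objective.

    sample_x    : callable rng -> a draw x_t from mu
    grad_oracle : callable (phi, x) -> (approximate) grad_phi psi_bar_c(phi, x)
    nu          : probability vector of the discrete marginal
    """
    rng = np.random.default_rng(seed)
    phi = np.zeros(len(nu))
    running = np.zeros(len(nu))
    for _ in range(T):
        x = sample_x(rng)
        p = grad_oracle(phi, x)
        phi = phi + gamma * (nu - p)   # ascent step on the concave dual
        running += phi                  # accumulate iterates for averaging
    return running / T                  # the average of phi_1, ..., phi_T
```

Since both $\boldsymbol \nu$ and the oracle output sum to one, the coordinate sum of the iterates is conserved along the trajectory, which is a convenient sanity check on any implementation.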
The proposed high-quality approximations of the {\color{black} entropic and $\chi^2$-divergence regularized} optimal transport problems are conveniently solved via Nesterov's accelerated gradient descent method, where the suboptimality gap of the $t^{\text{th}}$ iterate is guaranteed to decay as~$\mathcal O(1/ t^2)$ under the step size rule advocated in~\citep[Theorem~1]{Nesterov1983AMF}. {\color{black} To the best of our knowledge, Nesterov's accelerated gradient descent algorithm is not guaranteed to converge with inexact gradients. For the hyperbolic divergence regularized optimal transport problem, we thus use Algorithm~\ref{algorithm:asgd} with $50 \times T$ iterations to obtain an approximation for~$\overline W_c(\mu, \nu)$ and~$\bm \phi^\star$.} In contrast, we model the high-quality approximation of the original optimal transport problem~\eqref{eq:ctans_dual_semidisc} in YALMIP~\citep{yalmip} and solve it with MOSEK. If this problem has multiple maximizers, we report the one with minimal Euclidean norm. Figure~\ref{fig:num_results} shows how the suboptimality of~$\bar{\boldsymbol \phi}_T$ and the discrepancy between $\bar {\boldsymbol \phi}_T$ and the exact maximizer decay with~$T$, both for the original as well as for the entropic, the $\chi^2$-divergence and~{\color{black} the hyperbolic divergence} regularized optimal transport problems, {\color{black} averaged across 20 independent simulation runs.} Figure~\ref{fig:suboptimality} suggests that the suboptimality decays as~$\mathcal O(1/\sqrt{T})$ for the original optimal transport problem, which is in line with the theoretical guarantees by~\citet[Corollary~1]{nesterov2008confidence}, and as $\mathcal O(1/ T)$ for the {\color{black} entropic, the $\chi^2$-divergence and the hyperbolic divergence regularized} optimal transport problems, which is consistent with the theoretical guarantees established in Corollary~\ref{corollary:convergence}.
Indeed, entropic regularization can be explained by the exponential distribution model of Example~\ref{ex:exp}, where the exponential distribution functions $F_i$ satisfy the generalized self-concordance condition~\eqref{eq:cumulative:concoradance} with~$M =1/ \lambda$. Similarly, $\chi^2$-divergence regularization can be explained by the uniform distribution model of Example~\ref{ex:uniform}, where the uniform distribution functions $F_i$ satisfy the generalized self-concordance condition with any~$M > 0$. {\color{black} Finally, hyperbolic divergence regularization can be explained by the hyperbolic cosine distribution model of Example~\ref{ex:hyperbolic}, where the hyperbolic cosine functions~$F_i$ satisfy the generalized self-concordance condition with $M = 1/\lambda$.} In all cases the smallest eigenvalue of $-\nabla_{\boldsymbol \phi}^2 \mathbb{E}_{\boldsymbol x \sim \mu} [\boldsymbol \nu^\top \boldsymbol \phi^\star - \overline{\psi}_{c}(\boldsymbol \phi^\star, \boldsymbol x)]$, which we estimate when solving the high-quality approximations of the two smooth optimal transport problems, is strictly positive. Therefore, Corollary~\ref{corollary:convergence}~\ref{corollary:convergence:concordance} is indeed applicable and guarantees that the suboptimality gap is bounded above by~$\mathcal O (1/T)$. Finally, Figure~\ref{fig:dualvars} suggests {\color{black} that~$\| \bar {\boldsymbol \phi}_T - \boldsymbol \phi^\star \|^2_2$ converges to~$0$ at rate~$\mathcal O(1/T)$ for the entropic, the $\chi^2$-divergence and the hyperbolic divergence} regularized optimal transport problems, which is consistent with~\citep[Proposition~10]{bach2013adaptivity}. 
\begin{figure}[t] \centering \begin{subfigure}[h]{0.43\columnwidth} \includegraphics[width=\textwidth]{convergence.pdf} \caption{} \label{fig:suboptimality} \end{subfigure} \hspace{2cm} \begin{subfigure}[h]{0.43\columnwidth} \includegraphics[width=\textwidth]{dualvars.pdf} \caption{} \label{fig:dualvars} \end{subfigure} \caption{Suboptimality (a) and discrepancy to~$\boldsymbol\phi^\star$ (b) of the outputs~$\bar{\boldsymbol \phi}_T$ of Algorithm~\ref{algorithm:asgd} for the original (blue), the entropic regularized (orange), the $\chi^2$-divergence regularized (red) {\color{black} and the hyperbolic divergence regularized (purple)} optimal transport~problems.} \label{fig:num_results} \end{figure} \textbf{Acknowledgements.} This research was supported by the Swiss National Science Foundation under the NCCR Automation, grant agreement~51NF40\_180545. The research of the second author is supported by an Early Postdoc.Mobility Fellowship, grant agreement P2ELP2\_195149.
\section{Introduction} Single crystal relaxor-PbTiO$_{\text{3}}$ ferroelectric materials can have exceptionally high piezoelectric properties at room temperature. Their large piezoelectric and dielectric constants, along with low dielectric losses, are desirable for a wide range of applications \cite{Park1997,Zhang2012,Zhang2015}. Much of the recent effort to understand the origins of the excellent room temperature properties of relaxor-PbTiO$_{\text{3}}$ materials has focused on understanding the piezo- or dielectric behaviour of the materials below room temperature. A relaxation step feature in piezoelectric and dielectric properties at low temperatures has been reported in single crystal relaxor-PbTiO$_{\text{3}}$ samples, \cite{Martin2012,Bukhari2014,Li2016} in addition to the characteristic relaxor-ferroelectric dielectric peaks above room temperature \cite{Wang2009,Wang2014,Li2016,Takenaka2017}. For rhombohedral Pb(Mg$_{\text{1/3}}$Nb$_{\text{2/3}}$)O$_{\text{3}}$-PbTiO$_{\text{3}}$ (PMN-PT) crystals, Martin et al and Li et al \cite{Martin2012,Porokhonskyy2010} have shown that at around 200 K the reduction in dielectric permittivity and piezoelectricity with temperature levels off, before dropping sharply between 100 K and 20 K. The drop in permittivity is associated with a peak in the dielectric loss, both of which show a large variation as a function of driving frequency. The step feature suggests a ``freezing out'' of temperature activated dynamics, which has been cited as evidence that the persistence of polar nano-regions down to lower temperatures gives relaxor-PbTiO$_{\text{3}}$ materials their high room temperature properties \cite{Li2016,Li2017}. Low temperature dielectric data from different ferroelectric and relaxor-ferroelectric materials show a wide range of anomalies and features at cryogenic temperatures.
The large dielectric relaxation feature highlighted by Li et al \cite{Li2016} is not always present in relaxor-PbTiO$_{\text{3}}$ single crystals \cite{Lente2004,Wang2009,Hentati2015}. Studies on PMN-PT suggest that the material composition and poling state \cite{Wang2009} influence the size, shape and presence of a low temperature feature. Work on PMN-PT ceramics \cite{Singh2007,Li2008} showed broad dielectric loss anomalies that appear more similar to some data on lead zirconate titanate (PZT) based ceramics than relaxor-PbTiO$_{\text{3}}$ single crystals \cite{Zhang1983,Zhang1994,Thiercelin2010}. In PZT ceramics, freezing out of the motion of domain walls is used to explain broad peaks in the dielectric loss spectra \cite{Li2014}. There are dielectric data on Fe doped PZT ceramics \cite{Arlt1987} showing a step feature with frequency dispersion very similar to that seen in PMN-PT \cite{Martin2012,Li2016}. Arlt et al found good agreement between this data and their domain wall dynamics model \cite{Arlt1987}. The low temperature, rhombohedral phase of BaTiO$_{\text{3}}$ can give dielectric data that peak to anomalously high values, then reduce close to 0 K \cite{Akishige1998,Akishige2004,Wang2007,Wang2011}. The peaks are similar to those seen in PMN-PT in their shape and frequency dispersion. The presence of this low temperature relaxation behaviour in ferroelectric BaTiO$_{\text{3}}$ single crystals has been shown to vary between crystals grown by different methods, and depending on the crystals' electric field and temperature histories \cite{Akishige1998,Wang2007}. Wang et al have shown that images of different domain states can be linked to differences in the size of the dielectric constant peak in the rhombohedral phase of BaTiO$_{\text{3}}$ single crystals \cite{Wang2011}. In order to understand the origins of low temperature anomalies and their relationship to room temperature properties, data on materials in a range of conditions are required. 
Here we investigate the relaxation step in relaxor-PbTiO$_{\text{3}}$ materials by considering polycrystalline ceramic and single crystal Pb(In$_{\text{1/2}}$Nb$_{\text{1/2}}$)O$_{\text{3}}$-Pb(Mg$_{\text{1/3}}$Nb$_{\text{2/3}}$)O$_{\text{3}}$-PbTiO$_{\text{3}}$ (PIN-PMN-PT). We investigate the effects of poling the material and find that the large relaxation step only becomes apparent when the single crystal is poled. \begin{figure}[ht] \centering \includegraphics[width=7cm]{permittivity_real_imag_vs_T_20-300K_lines_crystal} \caption{The a) real and b) imaginary parts of the dielectric permittivity in (001) and (111) cut PIN-PMN-PT single crystals are shown at temperatures from 20 K to 300 K. The solid, red lines are for the poled (001) cut and the dashed, red lines are for the depoled (001) cut. The solid, blue lines are for the poled (111) cut and the dashed, blue lines are for the depoled (111) cut.} \label{fig:permTcrystal} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=7cm]{permittivity_real_imag_vs_T_20-300K_lines_polycrystal} \caption{The a) real and b) imaginary parts of the dielectric permittivity in a PIN-PMN-PT polycrystalline ceramic are shown at temperatures from 10 K to 300 K. The solid lines are for the poled ceramic and the dashed lines are for the depoled ceramic.} \label{fig:permTpoly} \end{figure} \section{Experimental Details} We have studied relaxor-ferroelectric PIN-PMN-PT below room temperature. The data shown here are from crystal plates cut with (001) and (111) faces, and from a polycrystalline ceramic pellet. To make the samples, powdered material was prepared by mixed oxide methods. The polycrystalline ceramic pellet was formed by sintering and the crystal was grown by the Bridgman technique \cite{Stoica2016}. Silver epoxy was painted onto the crystal and pellet main faces and cured at 770 K to form electrodes.
The (001) cut crystal was 1.117 mm thick and had an electrode area of 0.2019 cm$^{2}$, the (111) cut crystal was 1.200 mm thick and had an electrode area of 0.2601 cm$^{2}$, and the ceramic pellet was 1.720 mm thick and had an electrode area of 0.7557 cm$^{2}$. The nominal composition of the material, PIN$_{\text{0.28}}$-PMN$_{\text{0.40}}$-PT$_{\text{0.32}}$, was chosen to be close to the morphotropic phase boundary (MPB) and give a rhombohedral structure at room temperature. The samples were all prepared for measurements in the poled state by first annealing to 850 K, a point well above any phase transitions or dielectric maxima, then allowing them to cool to room temperature. We poled the samples by heating them to 370 K, then applying an electric field of 1 kV/mm while the samples cooled to room temperature. The elevated temperature is high enough to lower the energy barrier for domain re-orientation, but is below any phase transitions. All samples showed piezoelectric resonance peaks at high frequency, indicating that they were properly poled. The measurements in a depoled state were taken after the samples had been annealed to a point (above 500 K) where they no longer showed a spontaneous polarisation or piezoelectric resonance peaks. The real and imaginary parts of the dielectric permittivity, $\varepsilon _{\text{r}}'$ and $\varepsilon _{\text{r}}''$, were measured with a Solartron impedance analyser and XM-Studio MTS software. The crystals were mounted in an Oxford Microstat, where the temperature was swept at a rate of 2 K/minute between 10 K and 300 K. A driving voltage with an rms value of 2 V was applied at a range of frequencies between 10 kHz and 0.05 Hz, and the response was measured. The full set of data is available from (DOI to be inserted). 
\section{Results} The real and imaginary parts of the dielectric permittivity, $\varepsilon _{\text{r}}'$ and $\varepsilon _{\text{r}}''$, measured for the (001) and (111) cut PIN-PMN-PT crystals, in a poled and depoled state, are plotted in Figure \ref{fig:permTcrystal}. We see the same features in the dielectric properties of (001) poled rhombohedral PIN-PMN-PT as have been reported for PMN-PT \cite{Li2016}, but we find differences between the two crystal cuts and there are large differences between the depoled and poled crystals. The real part of the permittivity $\varepsilon _{\text{r}}'$ of the (001) cut crystal increases when the sample is poled, whereas for the (111) cut $\varepsilon _{\text{r}}'$ decreases after poling. For both crystals, the frequency dispersion at room temperature is reduced by poling; however, the room temperature values of permittivity are very different. The poled (001) crystal has the permittivity step feature seen in PMN-PT, where the rate of decrease of $\varepsilon _{\text{r}}'$ as the sample is cooled slows at 200 K, then increases around 100 K, so that $\varepsilon _{\text{r}}'$ drops sharply. The feature is also present to some degree in the (111) crystal, although the size and sharpness of the drop is much less significant than in the (001) crystal. The imaginary part of the permittivity $\varepsilon _{\text{r}}''$ is low at room temperature in the poled crystals. There is very little change in $\varepsilon _{\text{r}}''$ from the room temperature value in the (111) crystal. In the (001) crystal the $\varepsilon _{\text{r}}'$ step feature is associated with a large peak in $\varepsilon _{\text{r}}''$. The behaviour with temperature of the two depoled PIN-PMN-PT crystals, (001) and (111) cut, is almost identical. There is a large variation in the relative permittivity $\varepsilon _{\text{r}}'$ of depoled PIN-PMN-PT with driving frequency.
The frequency dispersion is largest at room temperature, then below 150 K the frequency dispersion begins to decrease. The imaginary part of the permittivity $\varepsilon _{\text{r}}''$ follows a temperature dependence similar to that of $\varepsilon _{\text{r}}'$, dropping to approximately 20\% of its room temperature value by 20 K, with no prominent step features. The polycrystal behaves in a similar way to the depoled crystals. The real and imaginary parts of the dielectric permittivity, $\varepsilon _{\text{r}}'$ and $\varepsilon _{\text{r}}''$, were measured for the ceramic in a poled and depoled state and the results are shown in Figure \ref{fig:permTpoly}. There is substantial frequency dispersion in the dielectric data (both real and imaginary parts), which closes as the temperature approaches 0 K. The slopes of the relative permittivity $\varepsilon _{\text{r}}'$ and the imaginary part of the permittivity $\varepsilon _{\text{r}}''$ are almost constant over the measured temperature range, although the data at 1 Hz are considerably steeper than the data at 10 kHz. In the range of temperature and frequency in Figure \ref{fig:permTpoly} there is very little change in the permittivity spectra of the polycrystal between the poled and depoled states. In addition to the real and imaginary parts of the dielectric permittivity, we also show the dielectric loss tangent for all samples in Figure \ref{fig:tand}. For the single crystal samples the low temperature features in the imaginary permittivity (Figure \ref{fig:permTcrystal}b) and the dielectric loss (Figure \ref{fig:tand}a) are qualitatively similar. The peak in the (001) data represents a maximum in the energy lost when changing the polarisation direction. For the polycrystalline ceramic the dielectric loss in Figure \ref{fig:tand}b shows a more prominent low temperature effect than the permittivity in Figure \ref{fig:permTpoly}.
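The loss tangent plotted for all samples is simply the ratio of the imaginary to the real part of the permittivity, $\tan\delta = \varepsilon_{\text{r}}''/\varepsilon_{\text{r}}'$. As a minimal sketch of this relation (the numerical values below are illustrative assumptions, not the measured PIN-PMN-PT data):

```python
import numpy as np

def loss_tangent(eps_real, eps_imag):
    """Dielectric loss tangent: tan(delta) = eps''_r / eps'_r."""
    return np.asarray(eps_imag, dtype=float) / np.asarray(eps_real, dtype=float)

# Illustrative values only (not the measured data).
eps_real = np.array([4000.0, 3500.0, 1200.0])
eps_imag = np.array([40.0, 350.0, 12.0])
tan_delta = loss_tangent(eps_real, eps_imag)  # -> [0.01, 0.1, 0.01]
```

This is why a peak in $\varepsilon_{\text{r}}''$ at roughly constant $\varepsilon_{\text{r}}'$ appears as a peak in the loss tangent as well.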
The dielectric loss in the polycrystal changes less than $\varepsilon _{\text{r}}''$ close to room temperature, and we then see larger changes at lower temperatures. Loss data at frequencies below 1 kHz reduce more steeply below 150 K, whereas for higher frequency data there is a small bump that could indicate a high frequency process that freezes out below 100 K. \begin{figure} \includegraphics[width=7cm]{tand_vs_T_20-300K_lines_polycrystal_crystal} \caption{The dielectric loss in a) (001) and (111) cut PIN-PMN-PT single crystals and b) a polycrystalline ceramic are shown at temperatures from 10 K to 300 K. The solid lines are for the poled material and the dashed lines are for the depoled material. The red lines are for the (001) cut, the blue lines are for the (111) cut and the black lines are for the polycrystalline ceramic.} \label{fig:tand} \end{figure} \section{Discussion} Low temperature dielectric relaxation data have been modelled and explained by freezing-out of dynamics associated with both domains \cite{Arlt1987,Wang2011} and polar nano-regions \cite{Li2017}. The preparation conditions of materials, including the crystal growth method, crystalline or polycrystalline nature and the field and temperature histories, are all important factors in the low temperature dielectric response. Poling a crystal changes the domain configuration, aligning randomly oriented domains to give a macroscopic polarisation direction. Poling a PIN-PMN-PT crystal along [111] aligns the polarisation to an energetically favourable direction for the rhombohedral crystal structure, giving a single domain. During the permittivity measurement, we apply a small ac driving voltage along the same axis as the polarisation. The polarisation can only respond by changing in size, so the response is small, meaning we measure a lowered permittivity in the (111) poled crystal.
The largest effect from low temperature relaxation is seen in the (001) crystal sample, where there is a domain state with a high degree of order. For a [001] poled rhombohedral PIN-PMN-PT crystal we expect four domain variants with polarisation pointing along each of the $\langle$111$\rangle$ axes, to the corners of the crystal's unit cell -- all with a component along [001]. In the (001) cut crystal the polarisation is predominantly not aligned to the same axis as the ac driving voltage. The polarisation domains can respond by rotating towards and away from the [001] direction, as well as by changing the magnitude of their polarisation, giving an increased permittivity in the (001) cut. The motion of domain walls can contribute to polarisation changes, and therefore to the permittivity. Both of the depoled crystals have eight rhombohedral domain variants. The depoled polycrystalline material has domains with polarisation pointing in all directions because of the range of orientations of crystal grains. Poling the polycrystal gives domains in all the directions that have some positive component along [001]. Poling PIN-PMN-PT \cite{Wang2014} and PMN-PT \cite{Kiat2002,Cao2006a} crystals with compositions close to the MPB has been shown to change the crystallographic symmetry. X-ray diffraction shows that field cooling a crystal that is initially rhombohedral at room temperature gives a monoclinic symmetry. A difference in crystal symmetry between the poled and depoled samples would change the available domain states and polarisation orientations. However, there is also evidence that the apparent monoclinic structure in materials close to the MPB is due to the averaging of variations in local structure, such as a combination of rhombohedral and tetragonal nano-domains \cite{Shvartsman2004,Schonau2007,Kim2012}.
Poling relaxor-PbTiO$_{\text{3}}$ materials close to the MPB may enhance nanoscale structural variations that emerge from compositional variations, giving rise to phase domains with different crystal symmetries and polarisation orientations. Whether the polarisation-change mechanism involves reorientation within a domain or motion of domain walls depends on the energies needed to activate the processes. If all domains have an equal polarisation component along the direction of an applied electric field, there should be no difference in the activation energy to rotate the polarisation within domains, and there will be no energetic benefit to domain wall motion. If there are domains with differences in polarisation component along an applied electric field direction -- for example if there is a combination of rhombohedral and tetragonal phase domains -- domain walls may move to expand domains whose polarisation is better aligned to the applied field. The range of orientations in the unpoled and the polycrystalline samples makes domain wall motion likely as a mechanism for polarisation change \cite{Arlt1987}. In these samples, domain wall contributions are a good candidate for deviations from the permittivity expected from Landau theory \cite{Wang2011}. The broad frequency dispersion in the polycrystalline material is likely to be a consequence of many types of domain walls with a large range of activation energies that respond to the applied electric field at different frequencies. The relaxation step that we measure in the PIN-PMN-PT (001) single crystal is very similar to both the step seen in PMN-PT \cite{Li2016,Li2017} (which has been modelled by dynamics of polar nano-regions) and in rhombohedral BaTiO$_{\text{3}}$ \cite{Wang2007,Wang2011} (which has been modelled by dynamics of domain walls).
Comparing the permittivity versus temperature of single crystals and a polycrystal in poled and depoled states plotted in Figures \ref{fig:permTcrystal} and \ref{fig:permTpoly} shows that the low temperature step feature is only present in poled single crystals. The difference in the low temperature dielectric properties between single crystal and polycrystalline material with the same composition shows that the relaxation features reported here and by other researchers may not be entirely a consequence of the relaxor-like behaviour of polar nano-regions in a ferroelectric matrix. It is not clear that the motion of ferroelectric domain walls can be used to explain the effects in the (001) cut crystal, since there are domain wall motion mechanisms for the polarisation in the polycrystal and in the depoled single crystals, which have very different low temperature dielectric features. If the relaxation step in PIN-PMN-PT were due to the freezing of domain wall dynamics, we might expect there to be no effect at all in the single domain (111) crystal. Instead we see a small relaxation step, although it is possible that this could indicate dynamics from domains at the crystal's surfaces, rather than in the bulk, or the existence of a small number of phase domains enhanced by poling. The data we present here, along with those of other studies on single crystals and ceramics, suggest a mechanism that is dependent on preparation conditions and sample history. The key requirement is a dynamic process linked to polarisation. So far, mechanisms have been proposed that depend on the motion of domain walls \cite{Arlt1987,Wang2011} or polarisation changes of polar nano-regions \cite{Li2016,Li2017}.
Since we have shown that these mechanisms do not account for all of our data, we suggest other candidates: the fluctuations of nanoscale ferroelectric domains, or the motion of phase domain walls that lie at the interface between nano-regions with different crystal symmetry and polarisation orientation. \section{Conclusions} We have presented dielectric data below room temperature for the relaxor-ferroelectric material PIN-PMN-PT. We compare PIN-PMN-PT measured in six different conditions: poled and depoled single crystal (001) cut, poled and depoled single crystal (111) cut, and poled and depoled polycrystalline ceramic. The large dielectric relaxation feature reported in relaxor-PbTiO$_{\text{3}}$ is only present in the poled single crystals, and is much more prominent in the multi-domain (001) cut than the single-domain (111) cut. The differences between sample material under different conditions show that low temperature relaxations in relaxor-PbTiO$_{\text{3}}$ materials cannot be fully explained by a model based on the dynamics of polar nano-regions in a ferroelectric matrix \cite{Li2016,Li2017} or by a model based on the motion of domain walls \cite{Arlt1987}. The former model would suggest that the relaxation should be present in all the PIN-PMN-PT samples, and the latter would suggest that the relaxation should be present in all the unpoled samples, but not in the single domain (111) crystal. In addition to polarisation mechanisms for low temperature dynamics involving domain walls or polar nano-regions, mechanisms involving fluctuations of nanoscale ferroelectric domains and motions of phase domain walls could contribute low temperature dielectric features. \begin{acknowledgments} P.M. Shepley is grateful to cryogen technicians Luke Bone, Brian Gibbs, Dave Chapman and John Turton for a steady supply of liquid helium, and to Dr Mannan Ali and Jamie Massey for cryostat advice. The authors acknowledge funding from a CASE studentship from EPSRC and Thales UK (L.A.
Stoica), the School of Chemical and Process Engineering (P.M. Shepley) and EPSRC award EP/M002462/1 (A.J. Bell). \end{acknowledgments}
\section{Algorithms} Approximate Bayesian computation (ABC) approximates Bayesian inference on parameters $\theta$ with prior $\pi(\theta)$ given data $y_{\text{obs}}$. It must be possible to simulate data $y$ from the model of interest given $\theta$. This implicitly defines a likelihood function $L(\theta)$: the density of $y_{\text{obs}}$ conditional on $\theta$. A standard importance sampling version of ABC samples parameters $\theta_{1:N}$ from an importance density $g(\theta)$ and simulates corresponding datasets $y_{1:N}$. Weights $w_{1:N}$ are calculated by equation \eqref{eq:ABCweight} below. Then for a generic function $f(\theta)$, an estimate of its posterior expectation $\E[f(\theta) | y_{\text{obs}}]$ is $\mu_f = [\sum_{i=1}^N w_i]^{-1} \sum_{i=1}^N f(\theta_i) w_i$. An estimate of the normalising constant $\pi(y_{\text{obs}}) = \int \pi(\theta) L(\theta) d\theta$, used in Bayesian model choice, is $\hat{z} = N^{-1} \sum_{i=1}^N w_i$. Under the ideal choice of weights, $w_i = L(\theta_i) \pi(\theta_i) / g(\theta_i)$, these estimates converge (almost surely) to the correct quantities as $N \to \infty$ \cite{Liu:1996}. In applications where $L(\theta)$ cannot be evaluated ABC makes inference possible with the trade-off that it gives approximate results. That is, the estimators converge to approximations of the desired values. The ABC importance sampling weights avoid evaluating $L(\theta)$ by using: \begin{align} w_{\text{ABC}} &= L_{\text{ABC}} \pi(\theta) / g(\theta) \label{eq:ABCweight} \\ \text{where} \qquad L_{\text{ABC}} &= K[d(s(y), s(y_{\text{obs}})) / h] \label{eq:LABC} \end{align} Here: \begin{itemize} \item $L_{\text{ABC}}$ acts as an estimate (up to proportionality) of $L(\theta)$. This and $w_{\text{ABC}}$ are random variables since they depend on $y$, a random draw from the model conditional on $\theta$. \item $s(\cdot)$ maps a dataset to a lower dimensional vector of \emph{summary statistics}. 
\item $d(\cdot,\cdot)$ maps two summary statistic vectors to a non-negative value. This defines the \emph{distance} between two vectors. \item $K[\cdot]$, the \emph{ABC kernel}, maps from a non-negative value to another. A typical choice is a uniform kernel $K[x]=\mathbbm{1}(x \in [0,1])$, which makes an accept/reject decision. Another choice is a normal kernel $K[x]=e^{-x^2}$. \item $h \geq 0$ is a tuning parameter, the \emph{bandwidth}. It controls how close a match of $s(y)$ and $s(y_{\text{obs}})$ is required to produce a significant weight. \end{itemize} The interplay between these tuning choices has been the subject of considerable research but is not considered further here. For further information on this and all aspects of ABC see the review papers \cite{Beaumont:2010, Marin:2012}. Lazy ABC splits simulation of data into two stages. First the output of some \emph{initial simulation stage} $x$ is simulated conditional on $\theta$, then, sometimes, a full dataset $y$ is simulated conditional on $\theta$ and $x$. The latter is referred to as the \emph{continuation simulation stage}. The variable $x$ should encapsulate all the information which is required to resume the simulation, so it may be high dimensional. There is considerable freedom in what the initial simulation stage is. It may conclude after a prespecified set of operations, or after some random event is observed. Another tuning choice is introduced, the \emph{continuation probability} function $\alpha(\theta, x)$. This outputs a value in $[0,1]$ which is the probability of continuing to the continuation simulation stage. The desired behaviour in choosing the initial simulation stage and $\alpha$ is that simulating $x$ is computationally cheap but can be used to save time by assigning small continuation probabilities to many unpromising simulations. Given all the above notation, lazy ABC is Algorithm \ref{alg:lazyABC}.
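As a point of reference before the lazy variant, the standard ABC importance sampler described above can be sketched as follows. This is a toy, non-distributed illustration with the normal kernel $K[x]=e^{-x^2}$; the model, summary statistic, prior and importance density below are assumptions made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_importance_sample(n, prior_pdf, sample_g, g_pdf, simulate, s, s_obs, h):
    """Standard ABC importance sampling with a normal kernel K[x] = exp(-x^2)
    and Euclidean distance between summary statistic vectors."""
    thetas, weights = np.empty(n), np.empty(n)
    for i in range(n):
        theta = sample_g()                    # draw from the importance density g
        y = simulate(theta)                   # simulate a dataset given theta
        dist = np.linalg.norm(s(y) - s_obs)   # d(s(y), s(y_obs))
        l_abc = np.exp(-(dist / h) ** 2)      # L_ABC = K[d / h]
        thetas[i] = theta
        weights[i] = l_abc * prior_pdf(theta) / g_pdf(theta)
    return thetas, weights

# Toy model (an assumption for illustration): y is 10 draws from N(theta, 1),
# the summary is the sample mean, and prior = g = N(0, 2^2).
prior_pdf = g_pdf = lambda th: np.exp(-th**2 / 8.0) / np.sqrt(8.0 * np.pi)
thetas, w = abc_importance_sample(
    n=2000, prior_pdf=prior_pdf, sample_g=lambda: rng.normal(0.0, 2.0),
    g_pdf=g_pdf, simulate=lambda th: rng.normal(th, 1.0, size=10),
    s=lambda y: np.array([y.mean()]), s_obs=np.array([1.5]), h=0.5)
mu_f = np.sum(w * thetas) / np.sum(w)  # estimate of the posterior mean of theta
z_hat = np.mean(w)                     # estimate of the normalising constant
```

Here \texttt{mu\_f} and \texttt{z\_hat} are the Monte Carlo estimates $\mu_f$ (for $f(\theta)=\theta$) and $\hat{z}$, both subject to the ABC approximation controlled by $h$.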
To avoid division by zero in step 5, it will be required that $\alpha(\theta,x)>0$, although this condition can be weakened \cite{Prangle:2014}. \begin{algorithm}[htp] \rule{\textwidth}{0.2mm} \\ {\bf Algorithm:}\\ Perform the following steps for $i=1,\ldots,N$: \begin{enumerate} \item[1] Simulate $\theta_i$ from $g(\theta)$. \item[2] Simulate $x_i$ conditional on $\theta_i$ and set $a_i = \alpha(\theta_i, x_i)$. \item[3] With probability $a_i$ continue to step 4. Otherwise perform \emph{early stopping}: let $\ell_i=0$ and go to step 6. \item[4] Simulate $y_i$ conditional on $\theta_i$ and $x_i$. \item[5] Set $\ell_i = K[d(s(y_i), s(y_{\text{obs}}))/h] / a_i$. \item[6] Set $w_i = \ell_i \pi(\theta_i) / g(\theta_i)$. \end{enumerate} {\bf Output:}\\ A set of $N$ pairs of $(\theta_i, w_i)$ values.\\ \rule{\textwidth}{0.2mm} \caption{Lazy ABC \label{alg:lazyABC}} \end{algorithm} Lazy ABC has the same target as standard ABC importance sampling, in the sense that the Monte Carlo estimates $\mu_f$ and $\hat{z}$ converge to the same values as $N \to \infty$. This is proved by Theorem 1 and related discussion in \cite{Prangle:2014}. A sketch of the argument is as follows. Standard ABC is essentially an importance sampling algorithm: each iteration samples a parameter value $\theta$ from $g(\theta)$ and assigns it a random weight $w$ given by \eqref{eq:ABCweight}. The randomness is due to the random simulation of data $y$. The expectation of this weight conditional on $\theta$ is \[ \E [w_{\text{ABC}} | \theta ] = \E [ L_{\text{ABC}} | \theta ] \pi(\theta)/g(\theta) \] where the expectation is taken over values of $y$. Lazy ABC acts similarly but uses different random weights \begin{equation} \label{eq:wlazyABC} w_{\text{lazy}} = \begin{cases} L_{\text{ABC}} a^{-1} \pi(\theta)/g(\theta) & \text{with probability } a=\alpha(\theta,x) \\ 0 & \text{otherwise.} \end{cases} \end{equation} The randomness here is due to simulation of $x$ and $y$.
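Algorithm \ref{alg:lazyABC} can be sketched in a few lines. The two-stage simulator and continuation probability below are toy assumptions for illustration only; in particular, $\alpha$ is floored away from zero, as required for step 5.

```python
import numpy as np

rng = np.random.default_rng(1)

def lazy_abc(n, prior_pdf, sample_g, g_pdf, sim_initial, sim_continue,
             alpha, s, s_obs, h):
    """Lazy ABC (Algorithm 1) with a normal kernel K[x] = exp(-x^2)."""
    out = []
    for _ in range(n):
        theta = sample_g()                        # step 1
        x = sim_initial(theta)                    # step 2: initial simulation stage
        a = alpha(theta, x)
        if rng.random() < a:                      # step 3: continue w.p. a
            y = sim_continue(theta, x)            # step 4: continuation stage
            dist = np.linalg.norm(s(y) - s_obs)
            ell = np.exp(-(dist / h) ** 2) / a    # step 5: K[d / h] / a
        else:
            ell = 0.0                             # early stopping
        out.append((theta, ell * prior_pdf(theta) / g_pdf(theta)))  # step 6
    return out

# Toy model: x is the first 5 of 10 draws from N(theta, 1), and the decision
# statistic is the partial sample mean. All choices here are illustrative.
prior_pdf = g_pdf = lambda th: np.exp(-th**2 / 8.0) / np.sqrt(8.0 * np.pi)
pairs = lazy_abc(
    n=4000, prior_pdf=prior_pdf, sample_g=lambda: rng.normal(0.0, 2.0),
    g_pdf=g_pdf, sim_initial=lambda th: rng.normal(th, 1.0, size=5),
    sim_continue=lambda th, x: np.concatenate([x, rng.normal(th, 1.0, size=5)]),
    alpha=lambda th, x: max(0.05, float(np.exp(-abs(x.mean() - 1.5)))),
    s=lambda y: np.array([y.mean()]), s_obs=np.array([1.5]), h=0.5)
w = np.array([p[1] for p in pairs])
theta = np.array([p[0] for p in pairs])
post_mean = np.sum(w * theta) / np.sum(w)  # same target as standard ABC
```

Most unpromising simulations (partial mean far from the observed summary) are stopped after the cheap initial stage, yet the estimates remain consistent because the surviving likelihood estimates are divided by $a$.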
Taking expectations gives: \begin{align*} \E[w_{\text{lazy}} | \theta, x] &= \E[L_{\text{ABC}} | \theta, x] \pi(\theta)/g(\theta) \\ \Rightarrow \qquad \E[w_{\text{lazy}} | \theta] &= \E[L_{\text{ABC}} | \theta] \pi(\theta)/g(\theta) \end{align*} From the theory of importance sampling algorithms with random weights (see \cite{Prangle:2014}) this ensures that both algorithms target the same distribution. This argument shows lazy ABC targets the same $\mu_f$ and $\hat{z}$ quantities as standard ABC, for any choice of initial simulation stage and $\alpha$. However, for poor choices of these tuning decisions it may converge very slowly. The next section considers effective tuning. \section{Lazy ABC tuning} \label{sec:tuning} The quality of lazy ABC tuning can be judged by an appropriate measure of \emph{efficiency}. Here this is defined as effective sample size (ESS) divided by computing time. The ESS for a sample with weights $w_1, \ldots, w_N$ is \[ N_{\text{eff}} = N \bigg[ N^{-1}\sum_{i=1}^N w_i \bigg]^2 / \bigg[ N^{-1} \sum_{i=1}^N w_i^2 \bigg]. \] It can be shown \cite{Liu:1996} that for large $N$ the variance of $\mu_f$ typically equals that of an estimator based on $N_{\text{eff}}$ independent samples. Computing time is taken to be the sum of CPU time for each core used (as the lazy ABC iterations can easily be performed in parallel). Theorem 2 of \cite{Prangle:2014} gives the following results on the choice of $\alpha$ which maximises the efficiency of lazy ABC in the asymptotic case of large $N$. For now let $\phi$ represent $(\theta, x)$. Then the optimal choice of $\alpha$ is of the following form: \begin{align} \alpha(\phi) &= \min\left\{1, \lambda \left[\frac{ \gamma(\phi) }{\bar{T}_2(\phi)}\right]^{1/2} \right\} \label{eq:tuning} \\ \text{where} \qquad \gamma(\phi) &= \E \big[ L_{\text{ABC}}^2 \pi(\theta)^2 g(\theta)^{-2} | \phi \big].
\label{eq:gamma} \end{align} Here $\gamma(\phi)$ is the expectation given $\phi$ of $w_{\text{ABC}}^2$, the squared weight which would be achieved under standard ABC importance sampling; $\bar{T}_2(\phi)$ is the expected time for steps 4-6 of Algorithm \ref{alg:lazyABC} given $\phi$; $\lambda \geq 0$ is a tuning parameter that controls the relative importance of maximising ESS (maximised by $\lambda=\infty$) and minimising computation time (minimised by $\lambda=0$). A natural approach to tuning $\alpha$ in practice is as follows. The remainder of the section discusses these steps in more detail. \begin{enumerate} \item Using Algorithm \ref{alg:lazyABC} with $\alpha \equiv 1$ simulate training data $(\theta_i, x_i, y_i, t^{(1)}_i, t^{(2)}_i)_{1 \leq i \leq M}$. Here $t^{(1)}_i$ is the time to perform steps 1-3 of Algorithm \ref{alg:lazyABC} and $t^{(2)}_i$ is the time for steps 4-6. \item Estimate $\gamma(\phi)$ and $\bar{T}_2(\phi)$ from training data. \item Choose $\lambda$ to maximise an efficiency estimate based on the training data. \item Decide amongst various choices of initial simulation stage (and $\phi$, see below) by maximising estimated efficiency. By collecting appropriate data for these choices in step 1 it is not necessary to repeat it. \end{enumerate} Step 2 is a regression problem, but is not feasible for $\phi=(\theta,x)$ as this will typically be very high dimensional. Instead $\alpha$ can be based on low dimensional features of $(\theta,x)$, referred to as \emph{decision statistics}. That is, only $\alpha$ functions of the form $\alpha(\phi(\theta,x))$ are considered, where $\phi(\theta,x)$ outputs a vector of decision statistics. The optimal such $\alpha$ is again given by \eqref{eq:tuning} and \eqref{eq:gamma}. The choice of which decision statistics to use can be included in step 4 above. Estimating $\gamma(\phi)$ by regression is also challenging if there are regions of $\phi$ space for which most of the responses are zero. 
This is typically the case for uniform $K$. In \cite{Prangle:2014} various tuning methods were proposed for uniform $K$, but these are complicated and rely on strong assumptions. A simpler alternative, used here, is a normal $K$, as it has full support. Local regression techniques \cite{Hastie:2009} are suggested for step 2. This is because the behaviour of the responses typically varies considerably for different $\phi$ values, motivating fitting separate regressions. Firstly, the typical magnitude of $L_{\text{ABC}}$ varies over widely different scales. Secondly, for both regressions the distribution of the residuals may also vary with $\phi$. To ensure positive predictions, the use of degree zero regression is suggested, i.e.~a Nadaraya-Watson kernel estimator. The efficiency estimate required in steps 3 and 4 can be formed from the training data and a proposed choice of $\alpha$. Let $(\alpha_i)_{1 \leq i \leq M}$ be the $\alpha$ values for the training data and $(l_i)_{1 \leq i \leq M}$ be the values of $L_{\text{ABC}}$. The realised efficiency of the training data is not used since it is based on a small sample size. Instead the asymptotic efficiency is estimated. Under weak assumptions (see \cite{Prangle:2014}) this is $\E(w)^2 \E(w^2)^{-1} \E(T)^{-1}$, where the random variable $T$ is the CPU time for a single iteration of lazy ABC. Note that $\E(w)$ is constant (the ABC approximation for the normalising constant $\pi(y_{\text{obs}})$) under any tuning choices, so it is omitted. This leaves an estimate up to proportionality of $[\E(w^2) \E(T)]^{-1}$, which can be used to calculate efficiency relative to standard ABC (found by setting $\alpha \equiv 1$). An estimate of $\E(T)$ is $\hat{T} = M^{-1} [\sum_{i=1}^{M} t^{(1)}_i + \sum_{i=1}^{M} \alpha_i t^{(2)}_i]$.
Using \eqref{eq:wlazyABC}, an estimate of $\E(w^2)$ is $\widehat{w^2} = M^{-1} \sum_{i=1}^{M} l_i^2 \alpha_i^{-1} \pi(\theta_i)^2 g(\theta_i)^{-2}$. \section{Example} As an example the spatial extremes application of \cite{Erhardt:2012} is used. This application and the implementation of lazy ABC are described in full in \cite{Prangle:2014}. A short sketch is that the model of interest has two parameters $(c,\nu)$. Given these, data $y_{t,d}$ can be generated for years $1 \leq t \leq 100$ and locations $1 \leq d \leq 20$. These represent annual extreme measurements, e.g.~of rainfall or temperature. An ABC approach has been proposed including choices of $s(\cdot)$ and $d(\cdot,\cdot)$. Also, given data for a subset of locations, an estimate of the ABC distance can be formed. Simulation of data is hard to interrupt and later resume. However the most expensive part of the process is calculating the summary statistics, which involves calculating certain coefficients for every triple of locations. Therefore the initial simulation stage of lazy ABC is to simulate all the data and calculate an estimated distance based on a subset of locations $L$, which is used as the decision statistic $\phi$. The continuation stage is to calculate the coefficients for the remaining triples and return the realised distance. Tuning of lazy ABC was performed as described in Section \ref{sec:tuning}, using backwards selection in step 4 to find an appropriate subset of locations to use as $L$. To fit the regressions estimating $\gamma(\phi)$ and $\bar{T}_2(\phi)$ a Nadaraya-Watson kernel estimator was used with a Gaussian kernel and bandwidth 0.5, chosen manually. Repeating the example of \cite{Prangle:2014}, 6 simulated data sets were analysed using standard and lazy ABC. Each analysis used $10^6$ simulations in total. In lazy ABC $M=10^4$ of these were used for training. The results are shown in Table \ref{tab:SErep}.
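The degree-zero local regression used in tuning step 2 is a Nadaraya-Watson estimator: the prediction at a query point is a kernel-weighted average of the training responses. A minimal sketch with a Gaussian kernel and fixed bandwidth 0.5, as in the example above (the test curve below is an arbitrary assumption, not the spatial extremes data):

```python
import numpy as np

def nadaraya_watson(phi_train, resp_train, phi_query, bandwidth=0.5):
    """Degree-zero local regression with a Gaussian kernel. The prediction is
    a weighted average of the responses, so it stays within their range (in
    particular positive whenever all responses are positive, as needed when
    estimating gamma(phi) and T2bar(phi))."""
    phi_train = np.asarray(phi_train, dtype=float)
    resp_train = np.asarray(resp_train, dtype=float)
    preds = []
    for q in np.atleast_1d(phi_query):
        k = np.exp(-0.5 * ((phi_train - q) / bandwidth) ** 2)  # kernel weights
        preds.append(np.sum(k * resp_train) / np.sum(k))
    return np.array(preds)

# Toy check on a known curve (an illustrative assumption only).
rng = np.random.default_rng(2)
phi = rng.uniform(0.0, 3.0, size=500)
resp = np.exp(-phi) + 0.05 * rng.normal(size=500)
est = nadaraya_watson(phi, resp, [1.0])  # should be near exp(-1)
```

In the tuning procedure the training $\phi_i$ values are the decision statistics and the responses are $w_{\text{ABC},i}^2$ (for $\gamma$) or the recorded continuation times $t^{(2)}_i$ (for $\bar{T}_2$).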
The efficiency improvements of lazy ABC relative to standard ABC are of similar magnitudes to those in \cite{Prangle:2014} but are less close to the values estimated in step 3 of tuning. \begin{table}[pht] \centering \fbox{ \footnotesize \begin{tabular}{cc|c|cc|cc} \multicolumn{2}{c|}{Parameters} & Standard & \multicolumn{2}{|c|}{Lazy} & \multicolumn{2}{|c}{Relative efficiency} \\ $c$ & $\nu$ & Time ($10^3$s) & Time ($10^3$s) & ESS & Estimated & Actual \\ \hline 0.5 & 1 & 26.7 & 8.0 & 131.6 & 3.9 & 2.2 \\ 1 & 1 & 25.6 & 7.1 & 174.2 & 4.5 & 3.1 \\ 1 & 3 & 25.5 & 8.3 & 185.3 & 3.8 & 2.8 \\ 3 & 1 & 25.6 & 7.6 & 267.2 & 4.2 & 4.5 \\ 3 & 3 & 25.2 & 8.2 & 193.5 & 3.9 & 3.0 \\ 5 & 3 & 25.7 & 8.4 & 162.4 & 3.7 & 2.5 \end{tabular}} \caption{\label{tab:SErep}Simulation study on spatial extremes. Each row represents the analysis of a simulated dataset under the given values of parameters $c$ and $\nu$. In each analysis a choice of the bandwidth $h$ was made under standard ABC so that the ESS was 200, and the same value was used for lazy ABC. The lazy ABC output sample includes the training data, as described in \cite{Prangle:2014}. Also its computation time includes the time for the tuning calculations (roughly 70 seconds). Iterations were run in parallel and computation times are summed over all cores used.} \end{table} \section{Conclusion} The paper has reviewed lazy ABC \cite{Prangle:2014}, a method to speed up ABC without introducing further approximations to the target distribution. Unlike \cite{Prangle:2014}, non-uniform ABC kernels have been considered. This allows a simpler approach to tuning, which provides a comparable three-fold efficiency increase in a spatial extremes example. Several extensions to lazy ABC are described in \cite{Prangle:2014}: multiple stopping decisions, choosing $h$ after running the algorithm, and a similar scheme for likelihood-based inference.
Other potential extensions include using the lazy ABC approach in ABC versions of MCMC or SMC algorithms, or focusing on model choice. \newpage \bibliographystyle{plainnat}
\section{Introduction} Ever since the celebrated paper of Gallager, Humblet, and Spira \cite{GHS-83}, the task of constructing a minimum-weight spanning tree (MST) continues to be a rich source of difficulties and ideas that drive network algorithmics (see, e.g., \cite{Elkin-MST,GarayKP-98,LotkerPP,PelegR-00}). The \emph{Steiner Forest} (SF) problem is a strict generalization of MST: We are given a network with edge weights and some disjoint node subsets called \emph{input components}; the task is to find a minimum-weight edge set which makes each component connected. MST is a special case of SF, and so are the Steiner Tree and shortest $s$-$t$ path problems. The general SF problem is well motivated by many practical situations involving the design of networks, be it physical (it was famously posed as a problem of railroad design), or virtual (e.g., VPNs or streaming multicast). The problem has attracted much attention in the classical algorithms community, as detailed on the dedicated website \cite{Steiner-site}. The first network algorithm for SF in the \textsc{congest}\xspace model (where a link can deliver $\mathcal{O}(\log n)$ bits in a time unit---details in \sectionref{sec:model}) was presented by Khan \textit{et al.}\ \cite{KKMPT-12}. It provides $\mathcal{O}(\log n)$-approximate solutions in time $\tilde{\mathcal{O}}(sk)$, where $n$ is the number of nodes, $k$ is the number of components, and $s$ the \emph{shortest path diameter} of the network, which is (roughly---see \sectionref{sec:model}) the maximal number of edges in a weighted shortest path. Subsequently, in \cite{LenzenP13}, it was shown that for any given $0<\varepsilon\le1/2$, an $\mathcal{O}(\varepsilon^{-1})$-approximate solution to SF can be found in time $\tilde{\mathcal{O}}((\sqrt n+t)^{1+\varepsilon}+D)$, where $D$ is the diameter of the unweighted version of the network, and $t$ is the number of \emph{terminals}, i.e., the total number of nodes in all input components. 
The algorithms in \cite{KKMPT-12,LenzenP13} are both randomized. \paragraph{Our Results.} In this paper we improve the results for SF in the \textsc{congest}\xspace model in two ways. First, we show that for any given constant $\varepsilon>0$, a $(2+\varepsilon)$-approximate solution to SF can be computed by a deterministic network algorithm in time $\tilde{\mathcal{O}}(sk+\sqrt{\min\Set{st,n}})$. Second, we show that an $\mathcal{O}(\log n)$-approximation can be attained by a randomized algorithm in time $\tilde{\mathcal{O}}(k+\min\Set{s,\sqrt n}+D)\subseteq\tilde{\mathcal{O}}(s+k)$. On the other hand, we show that any algorithm in the \textsc{congest}\xspace model that computes a solution to SF with non-trivial approximation ratio has running time in $\tilde{\Omega}(k+\min\Set{s,\sqrt n}+D)$. If the input is not given by indicating to each terminal its input component, but rather by \emph{connection requests} between terminals, i.e., informing each terminal which terminals it must be connected to, an $\tilde{\Omega}(t+\min\Set{s,\sqrt n}+D)$ lower bound holds. (It is easy to transform connection requests into equivalent input components in $\mathcal{O}(t+D)$ rounds.) \paragraph{Related work.} The Steiner Tree problem (the special case of SF where there is one input component) has a remarkable history, starting with Fermat, who posed the geometric problem for three points in a plane circa 1643, including Gauss (1836), and culminating with a popularization in 1941 by Courant and Robbins in their book ``What is Mathematics?'' \cite{CourantR-41}. An interesting account of these early developments is given in \cite{SteinerHistory}. The contribution of Computer Science to the history of the problem apparently started with the inclusion of Steiner Tree as one of the original 21 problems proved NP-complete by Karp \cite{Karp-72}.
There are quite a few variants of the SF problem that are algorithmically interesting, such as Directed Steiner Tree, Prize-Collecting Steiner Tree, Group Steiner Tree, and more. The site \cite{Steiner-site} gives continuously updated state-of-the-art results for many variants. Let us mention results for just the most common variants: For the Steiner Tree problem, the best (polynomial-time) approximation ratio known is $\ln 4+\varepsilon\approx1.386+\varepsilon$ for any constant $\varepsilon>0$ \cite{ByrkaGRS-10}. For Steiner Forest, the best approximation ratio known is $2-1/(t-k)$ \cite{AgrawalKR-95}. It is also known that the approximation ratio of the Steiner Tree (or Forest) problem is at least $96/95$, unless P=NP~\cite{ChlebikC-08}. Regarding distributed algorithms, there are a few relevant results. First, the special case of minimum-weight spanning tree (MST) is known to have time complexity of $\tilde\Theta(D+\sqrt n)$ in the \textsc{congest}\xspace model \cite{DHKNPPW-11,Elkin-MST,GarayKP-98,KuttenP-98,PelegR-00}. In \cite{CF05}, a 2-approximation for the special case of Steiner Tree is presented, with time complexity $\tilde{\mathcal{O}}(n)$. The first distributed solution to the Steiner Forest problem was presented by Khan \textit{et al.}~\cite{KKMPT-12}, where a randomized algorithm is used to embed the instance in a virtual tree with $\mathcal{O}(\log n)$ distortion; the optimal solution is then found on the tree (which is just the minimal subforest connecting each input component), and the selected tree edges are finally mapped back to corresponding paths in the original graph. The result is an $\mathcal{O}(\log n)$-approximation in time $\tilde{\mathcal{O}}(sk)$. Intuitively, $s$ is the time required by the Bellman-Ford algorithm to compute distributed single-source shortest paths, and the virtual tree of \cite{KKMPT-12} is computed in $\tilde{\mathcal{O}}(s)$ rounds. A second distributed algorithm for Steiner Forest is presented in \cite{LenzenP13}.
Here, a sparse spanner for the metric induced on the set of terminals and a random sample of $\tilde{\Theta}(\sqrt{n})$ nodes is computed, on which the instance is then solved centrally. To get an $\mathcal{O}(\varepsilon^{-1})$-approximation, the algorithm runs for $\tilde{\mathcal{O}}(D+(\sqrt n+t)^{1+\varepsilon})$ rounds. For approximation ratio $\mathcal{O}(\log n)$, the running time is $\tilde{\mathcal{O}}(D+\sqrt n+t)$. \paragraph{Main Techniques.} Our lower bounds are derived by the standard technique of reduction from results on $2$-party communication complexity. Our deterministic algorithm is an adaptation of the ``moat growing'' algorithm of Agrawal, Klein, and Ravi \cite{AgrawalKR-95} to the \textsc{congest}\xspace model. It involves determining the times in which ``significant events'' occur (e.g., all terminals in an input component becoming connected by the currently selected edges) and extensive usage of pipelining. The algorithm generalizes the MST algorithm from~\cite{KuttenP-98}: for the special case of a Steiner Tree (i.e., $k=1$), one can interpret the output as the edge set induced by an MST of the complete graph on the terminals with edge weights given by the terminal-terminal distances, yielding a factor-$2$ approximation; specializing further to the MST problem, the result is an exact MST and the running time becomes $\tilde{\mathcal{O}}(\sqrt{n}+D)$. Our randomized algorithm is based on the embedding of the graph into a tree metric from \cite{KKMPT-12}, but we improve the complexity of finding a Steiner Forest. A key insight is that while the least-weight paths in the original graph corresponding to virtual tree edges might intersect, no node participates in more than $\mathcal{O}(\log n)$ distinct paths.
Since the union of all least-weight paths ending at a specific node induces a tree, letting each node serve routing requests corresponding to different destinations in a round-robin fashion achieves a pipelining effect reducing the complexity to $\tilde{\mathcal{O}}(s+k)$. If $s>\sqrt{n}$, the virtual tree and the corresponding solution are constructed only partially, in time $\tilde{\mathcal{O}}(\sqrt{n}+k+D)$, and the partial result is used to create another instance with $\mathcal{O}(\sqrt n)$ terminals that captures the remaining connectivity demands; we solve it using the algorithm from~\cite{LenzenP13}, obtaining an $\mathcal{O}(\log n)$-approximation. \paragraph{Organization.} In \sectionref{sec:model} we define the model, problem and basic concepts. \sectionref{sec-lb} contains our lower bounds. In \sectionref{sec-alg1} and \sectionref{sec-alg2} we present our deterministic and randomized algorithms, respectively. We only give a high-level overview in this extended abstract. Proofs are deferred to the appendix. \section{Model and Notation} \label{sec:model} \paragraph{System Model.} We consider the $\textsc{congest}\xspace(\log n)$ or simply the \textsc{congest}\xspace model as specified in~\cite{Peleg:book}, briefly described as follows. The distributed system is represented by a weighted graph $G=(V,E,W)$ of $n:=|V|$ nodes. The weights $W:E\to \mathbb{N}$ are polynomially bounded in $n$ (and therefore polynomial sums of weights can be encoded with $\mathcal{O}(\log n)$ bits). Each node initially knows its unique identifier of $\mathcal{O}(\log n)$ bits, the identifiers of its neighbors, the weight of its incident edges, and the local problem-specific input specified below. 
Algorithms proceed in synchronous rounds, where in each round, (i) nodes perform arbitrary, finite local computations,\footnote{All our algorithms require polynomial computations only.} (ii) may send, to each neighbor, a possibly distinct message of $\mathcal{O}(\log n)$ bits, and (iii) receive the messages sent by their neighbors. For randomized algorithms, each node has access to an unlimited supply of unbiased, independent random bits. Time complexity is measured by the number of rounds until all nodes (explicitly) terminate. \paragraph{Notation.} We use the following conventions and graph-theoretic notions. \begin{compactitem} \item The \emph{length} or number of \emph{hops} of a path $p=(v_0,\ldots,v_{\ell(p)})$ in $G$ is $\ell(p)$. \item The weight of such a path is $W(p):=\sum_{i=1}^{\ell(p)}W(v_i,v_{i-1})$. For notational convenience, we assume w.l.o.g.\ that different paths have different weight (ties broken lexicographically). \item By $\cP(v,w)$ we denote the set of all paths between $v,w\in V$ in $G$, i.e., $v_0=v$ and $v_{\ell(p)}=w$. \item The (unweighted) \emph{diameter} of $G$ is \hfill\\$D:=\max_{v,w\in V}\{\min_{p\in \cP(v,w)}\{\ell(p)\}\}$. \item The (weighted) \emph{distance} of $v$ and $w$ in $G$ is $\Wd(v,w):=\min_{p\in \cP(v,w)}\{W(p)\}$. \item The \emph{weighted diameter} of $G$ is $\WD:=\max_{v,w\in V}\{\Wd(v,w)\}$. \item Its \emph{shortest-path-diameter} is $s:=\max_{v,w\in V}\{\min \{\ell(p)\,|\,p\in \cP(v,w)\wedge W(p)=\Wd(v,w)\}\}$. \item For $v\in V$ and $r\in \mathbb{R}^+_0$, we use $B_G(v,r)$ to denote the ball of radius $r$ around $v$ in $G$, which includes all nodes and edges at weighted distance at most $r$ from $v$. The ball may contain edge fractions: for an edge $\{w,u\}$ for which $w$ is in $B_G(v,r)$, the $(r-\Wd(v,w))/W(w,u)$ fraction of the edge closer to $w$ is considered to be within $B_G(v,r)$, and the remainder is considered outside $B_G(v,r)$. \end{compactitem} We use ``soft'' asymptotic notation.
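To make the graph-theoretic quantities above concrete, the following small Python sketch (our own illustration; the function and variable names are invented) computes $D$, the weighted diameter, and the shortest-path-diameter $s$ of a toy graph, using a Dijkstra variant whose keys are (weight, hops) pairs:

```python
import heapq

def dijkstra(adj, src):
    # Per node: (weighted distance, hops of the shortest weighted path).
    # Lexicographic keys break weight ties in favor of fewer hops.
    dist = {src: (0, 0)}
    pq = [(0, 0, src)]
    while pq:
        d, hops, v = heapq.heappop(pq)
        if (d, hops) > dist[v]:
            continue  # stale queue entry
        for w, wt in adj[v]:
            cand = (d + wt, hops + 1)
            if cand < dist.get(w, (float("inf"), float("inf"))):
                dist[w] = cand
                heapq.heappush(pq, (*cand, w))
    return dist

def graph_metrics(adj):
    D = WD = s = 0
    for v in adj:
        for d, h in dijkstra(adj, v).values():
            WD, s = max(WD, d), max(s, h)
        depth, frontier = {v: 0}, [v]  # BFS for the unweighted diameter
        while frontier:
            nxt = []
            for u in frontier:
                for w, _ in adj[u]:
                    if w not in depth:
                        depth[w] = depth[u] + 1
                        nxt.append(w)
            frontier = nxt
        D = max(D, max(depth.values()))
    return D, WD, s

# Triangle whose direct a--c edge (weight 5) is heavier than the
# two-edge detour a--b--c (weight 2).
adj = {"a": [("b", 1), ("c", 5)],
       "b": [("a", 1), ("c", 1)],
       "c": [("b", 1), ("a", 5)]}
print(graph_metrics(adj))  # (1, 2, 2)
```

On this example the heavy direct edge keeps the hop diameter at $D=1$, while the shortest weighted $a$--$c$ path uses two edges, so $s=\WD=2$.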
Formally, given functions $f$ and $g$, define (i) $f\in \tilde{\mathcal{O}}(g)$ iff there is some $h\in \polylog n$ so that $f\in \mathcal{O}(gh)$, (ii) $f\in \tilde{\Omega}(g)$ iff $g\in\tilde{\mathcal{O}}(f)$, and (iii) $f\in \tilde{\Theta}(g)$ iff $f\in \tilde{\mathcal{O}}(g)\cap \tilde{\Omega}(g)$. By ``w.h.p.,'' we abbreviate ``with probability $1-n^{-\Omega(1)}$'' for a sufficiently large constant in the $\Omega(1)$ term. \paragraph{The Distributed Steiner Forest Problem.} In the Steiner Forest problem, the output is a set of edges. We require that the output edge set $F$ is represented distributively, i.e., each node can locally answer which of its adjacent edges are in the output. The input may be represented by two alternative methods, both of which are justified and common in the literature. We give both definitions. \begin{definition} [Distributed Steiner Forest with Connection Requests (\textsc{dsf-cr}\xspace)]\ \begin{compactitem} \item[\textbf{Input:}] At each node $v$, a set of \emph{connection requests} $R_v\subseteq V$. \item[\textbf{Output:}] An edge set $F\subseteq E$ such that for each connection request $w\in R_v$, $v$ and $w$ are connected by $F$. \item[\textbf{Goal:}] Minimize $W(F)=\sum_{e\in F}W(e)$. \end{compactitem} \end{definition} \noindent The set of \emph{terminal} nodes is defined to be $T=\Set{w\mid w\in R_v\mbox{ for some }v\in V}\cup \Set{v\mid R_v\ne\emptyset}$, i.e., the set of nodes $v$ for which there is some connection request $\{v,w\}$. \begin{definition}[Distributed Steiner Forest with Input Components (\textsc{dsf-ic}\xspace)]\ \begin{compactitem} \item[\textbf{Input:}] At each node $v$, $\lambda(v)\in \Lambda \cup \{\bot\}$, where $\Lambda$ is the set of \emph{component identifiers}. The set of \emph{terminals} is $T:=\{v\in V\mid\lambda(v)\neq \bot\}$. An \emph{input component} $C_{\lambda}$ for $\lambda\ne\bot$ is the set of terminals with label $\lambda$.
\item[\textbf{Output:}] An edge set $F\subseteq E$ such that all terminals in each input component are connected by $F$. \item[\textbf{Goal:}] Minimize $W(F)=\sum_{e\in F}W(e)$. \end{compactitem} \end{definition} An instance of \textsc{dsf-ic}\xspace is \emph{minimal} if $|C_{\lambda}|\neq 1$ for all $\lambda\in \Lambda$. We assume that the labels $\lambda\in \Lambda$ are encoded using $\mathcal{O}(\log n)$ bits. We define $t:=|T|$ and $k:=|\Lambda|\leq t$, i.e., the number of terminals and input components, respectively. We say that any two instances of the above problems on the same weighted graph, regardless of the way the input is given, are \emph{equivalent} if the set of feasible outputs for the two instances is identical. \begin{lemma}\label{lemma:transform_to_input} Any instance of \textsc{dsf-cr}\xspace can be transformed into an equivalent instance of \textsc{dsf-ic}\xspace in $\mathcal{O}(D+t)$ rounds. \end{lemma} \begin{lemma}\label{lemma:transform_to_minimal} Any instance of \textsc{dsf-ic}\xspace can be transformed into an equivalent minimal instance of \textsc{dsf-ic}\xspace in $\mathcal{O}(D+k)$ rounds. \end{lemma} \section{Lower Bounds} \label{sec-lb} In this section we state our lower bounds (for proofs and more discussion, see
\subsection*{Abstract} Our previous experience building systems for middlebox chain composition and scaling in software-defined networks has revealed that existing mechanisms of flow annotation commonly do not survive middlebox traversals, or suffer from extreme identifier domain limitations resulting in excessive flow table size. In this paper, we analyze the structural artifacts resulting in these challenges, and offer a framework for describing the behavior of middleboxes based on actions taken on traversing packets. We then present a novel mechanism for flow annotation that features an identifier domain significantly larger than that of existing techniques, that is transparent to hosts traversed, and that conserves flow-table resources by requiring only a small number of match rules and actions in most switches. We evaluate said technique, showing that it requires less per-switch state than conventional techniques. We then describe extensions allowing implementation of this architecture within a broader class of systems. Finally, we close with architectural suggestions for enabling straightforward integration of middleboxes within software-defined networks. \section{Analysis} We consider an abstract network fabric and network controller to demonstrate the state-space advantage resulting from active switching compared to traditional match-and-forward logic. To the network fabric are attached six servers and an external connection. Of those six servers, two are connection-terminating endpoints (e.g., web servers), and the remaining four consist of two pairs of equivalent middleboxes. The controller's configured policy is such that each new flow received from the external connection is randomly assigned to one of each type of middlebox, and one endpoint server.
Under the traditional paradigm, each new flow would result in a number of new rules being installed in all switches along the flow's path: a match rule forwarding ingress traffic to the correct first middlebox's edge switch, no fewer than four rules in each middlebox-attached edge switch (received from upstream, middlebox-bound; received from middlebox, downstream-bound; etc.), and two rules in the endpoint device-attached switch. As each rule would need to uniquely identify a flow, multiple flows cannot be handled by the same rule; every new flow results in at least eleven new flow-rules being installed in the network, and control messages being sent to at least four switches. \begin{table}[h!] \begin{center} \center{\textbf{Match-and-Forward}} \\ \begin{tabular}{|c || c | c | c | c|} \hline concurrent flows & 1 & 10 & 100 & n\\ \hline ingress rules & 2 & 11 & 101 & $ n + 1 $ \\ middlebox rules & 8 & 80 & 800 & $ 8n $ \\ endpoint rules & 2 & 20 & 200 & $ 2n $ \\ \hline total & 12 & 111 & 1101 & $ 11n + 1 $ \\ \hline \end{tabular} \end{center} \begin{center} \center{\textbf{Active Switching}} \\ \begin{tabular}{|c || c | c | c | c|} \hline concurrent flows & 1 & 10 & 100 & n\\ \hline ingress rules & 2 & 11 & 101 & $n + 1$ \\ middlebox rules & 1 & 1 & 1 & $1$ \\ endpoint rules & 1 & 10 & 100 & $n$ \\ \hline total & 4 & 22 & 202 & $ 2n + 2 $ \\ \hline \end{tabular} \end{center} \caption{Comparison of flow-table space requirements using traditional and active switching logic.} \label{tab:space_comparison} \end{table} Using active switching, we can support an arbitrary number of up to 5-hop paths through a fabric with up to 254 ports. Each edge switch adjacent to an endpoint, and the ingress switch, must maintain state of size $O(n)$, where $n$ is the number of flows that originate from the adjacent device. All other edge-switches, such as those adjacent to middleboxes, need only maintain constant-size ($O(1)$) state. \begin{figure}[h!] 
\begin{center} \input{comparison} \end{center} \caption{Flow-table entries required as a function of concurrent demand.} \label{fig:space_comparison} \end{figure} To the best of our knowledge, no existing techniques can make such claims. \section{Conclusion} This paper presents "active switching", a novel technique for the construction of software-defined networks. Active switching is a general descriptor for any technique where flow-state is embedded into traffic, rather than maintained in flow-table entries; and where switches' flow-tables act as transition functions modifying flow state.\footnote{There are intriguing theoretical implications for software-defined networking, especially vis-a-vis register extensions. We think it intuitive, for example, that such a network of sufficient size could be programmed so as to recognize the class of regular languages. A full exploration of these implications is left to future work.} We have described the use of active switching to solve a number of issues, broadly characterized as "traffic-steering," across a variety of network topologies, although we suspect that the technique has far broader applications that are left for future work. We have shown that this technique can effect behavior from a software-defined network that would be challenging or impossible to implement based solely on "match-and-forward" logic, and that this technique can result in dramatic efficiency gains. \section{Construction} \label{sec:const_basic} Active switching's design is inspired by the following observations: \begin{compactitem} \item OpenFlow-enabled switches need not be compliant with IEEE 802.1d, and, by default, are not. In other words, "this ain't DIX Ethernet." \item As OpenFlow-enabled switches are able to forward traffic based on L3 header match, L2 addressing need not uniquely identify a network host. 
\item In fact, L2 addressing conveys no additional information to the network that could not be gleaned by examining L3 headers and network topology. \end{compactitem} Given those observations, we have chosen to re-purpose the source and destination MAC address fields in the Ethernet frame header to encode the policy-defined hop-by-hop path a packet should follow between the ingress gateway and the destination endpoint host. \subsection{Required flow-rule actions} We require three register extensions to the OpenFlow 1.0 standard: \begin{compactitem} \item bit-oriented partial load from field offset to register \item bit-oriented partial save to field offset from register \item output to port given in register \end{compactitem} Given that fabric-edge switches in cloud architectures tend to be implemented in software, and as these actions are already supported by Open vSwitch \cite{ovs}, we do not believe that these requirements impose a significant limitation to this approach. \footnote{There may also be designs employing features of the OpenFlow 1.3 standard that can achieve reasonable facsimiles of this functionality, possibly at the cost of increased flow-table size.} \subsection{Baseline Functionality} We initially consider only the composition of middleboxes that do not mangle L2 traffic headers, and that do not originate connections. We additionally restrict the port identifiers of the fabric to 8 bits, and exclude paths with a hop-count greater than $5$ from consideration. In section \ref{sec:ext}, we will loosen these restrictions in order to better support a greater variety of middleboxes under less-abstract network topologies. \subsubsection{Ingress Switch Behavior} \label{sec:ingress_behavior} The default action for packets received by an ingress switch that fail to match any current flow rules is to forward the packet to the controller. 
Upon receipt of a packet from an ingress switch, the controller considers its configuration, policy, and knowledge of the network topology to construct a port-by-port path through the network. It then writes a flow-rule into the ingress switch that matches the flow under consideration and causes the destination MAC address to be rewritten. We encode the hop-by-hop path through the network as follows: \newcommand{\minisection}[1]{\smallskip \noindent{\bf #1.}} \newcounter{linenum} \newenvironment{proto}{\setcounter{linenum}{0}\begin{tabbing}\makebox[-0.02in]{}\=\+\kill}{\end{tabbing}} \newenvironment{algorithm}[2]{\setcounter{linenum}{0}\begin{tabbing}\textsc{#1}\((#2)\)\\ \makebox[0.2in]{}\=\+\kill}{\end{tabbing}} \newcommand{\all}[1]{\addtocounter{linenum}{1}\'#1\\} \begin{figure}[h] \centerline{ \setlength{\tabcolsep}{0pt} \framebox[1\columnwidth][l]{ \vspace{1em} \begin{minipage}[t]{1\columnwidth} {\small \begin{proto} xxx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\= \kill \all{{\bf dst\_construct($\mathit{Path}\ P$)}} \all{1\> $\mathit{dst\_mac} \leftarrow \mathit{00:00:00:00:00:FF}$} \all{2\> for $\mathit{h} \in $reverse$(P)$:} \all{3\>\> $\mathit{dst\_mac} \leftarrow ($octet\_left\_shift$( \mathit{dst\_mac}) \mathrel{|} \mathit{h})$} \end{proto} } \end{minipage} \vspace{-3em} } } \caption{Destination MAC address construction} \vspace{-1em} \label{algo:dst_constr} \end{figure} Subsequent to destination MAC address re-writing, the packet is handled as though it were received by an edge switch. \subsubsection{Edge Switch Behavior} \label{sec:edge_behavior} Given the register extensions previously discussed, the operation of edge switches is quite simple: upon receipt of a packet, the switch rewrites the destination MAC address by shifting it one octet right. The packet is then output through the port identified by the byte that was shifted off the destination MAC address. 
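The address construction and the per-hop shift just described can be modeled in a few lines of plain Python; this is an illustration of the encoding only, not OpenFlow switch code, and the helper names are ours:

```python
TERMINATOR = 0xFF          # low octet 0xFF marks "deliver to the attached host"
MASK48 = (1 << 48) - 1     # a MAC address is 48 bits wide

def encode_path(ports):
    """Pack up to five 8-bit egress ports into a 48-bit destination 'MAC'."""
    # 0xFF (terminator) and 0xFE (reserved for the address-swap extension)
    # are excluded as port identifiers, hence 254 usable ports.
    assert len(ports) <= 5 and all(0 < p < 0xFE for p in ports)
    mac = TERMINATOR
    for p in reversed(ports):          # mirrors dst_construct
        mac = ((mac << 8) | p) & MASK48
    return mac

def next_hop(mac):
    """Edge-switch step: pop the egress port, shift the rest one octet right."""
    return mac & 0xFF, mac >> 8

def fmt(mac):
    return ":".join(f"{(mac >> (8 * i)) & 0xFF:02X}" for i in range(5, -1, -1))

# The paper's example path: ports 2 (to RE1), 5 (to the IPS), 4 (to host H).
mac = encode_path([2, 5, 4])
print(fmt(mac))                        # 00:00:FF:04:05:02
port, mac = next_hop(mac)
print(port, fmt(mac))                  # 2 00:00:00:FF:04:05
```

Repeated calls to `next_hop` pop ports 2, 5, and 4 in order, leaving only the terminator, which matches the fabric traversal described in the example below.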
\begin{figure}[h] \centerline{ \setlength{\tabcolsep}{0pt} \framebox[1\columnwidth][l]{ \vspace{1em} \begin{minipage}[t]{1\columnwidth} {\small \begin{proto} xxx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\= \kill \all{{\bf octet\_rshift\_field$(\mathit{Field}\ F)$}} \all{1\> let $\mathit{R} := $allocate\_and\_zero\_register$()$} \all{2\> $\mathit{R} \leftarrow $load\_field$(F, 8, $len$(F) - 8)$} \all{3\> $F \leftarrow \mathit{R}$} \all{} \all{{\bf handle\_packet$(\mathit{Packet}\ P)$}} \all{4\> let $\mathit{F} := $P$.$dst\_mac} \all{5\> let $\mathit{R} := $allocate\_and\_zero\_register$()$} \all{6\> $\mathit{R} \leftarrow $load\_field$(F, 0, 8)$} \all{7\> octet\_rshift\_field$(F)$} \all{8\> output\_to\_port$(P, R)$} \end{proto} } \end{minipage} \vspace{-3em} } } \caption{Basic edge-switch logic} \vspace{-1em} \label{algo:handle_packet} \end{figure} \subsubsection{Upstream path} While constructing the downstream path through the network, the controller can also construct a reverse path from the endpoint back to the ingress switch by simply reversing the hop order and pushing a flow-rule into the switch adjacent to the endpoint. Alternatively, the controller could become involved in processing the first packet of a flow in either direction, in which case similar logic would be employed, with the conceptual ingress point for the unidirectional flow being the switch adjacent to the end host. In either case, the only additional logic required is a rule at the gateway to the external network rewriting all destination MAC addresses to that of the upstream external router.\footnote{Assuming, of course, that the link between the gateway and the upstream router is, in fact, traditional Ethernet.} \subsubsection{Example} Consider the topology and policy described in section \ref{sec:motivation}. 
Upon receipt of the first packet of a new flow from the external network that is addressed for receipt by host H, the ingress switch discovers that the packet received fails to match any existing flow-rules, and therefore sends it to the controller. Using its knowledge of the network's topology and the policy configuration, the controller selects a redundancy eliminator for handling the flow. Suppose it selected RE1. The controller will then construct the destination MAC address 00:00:FF:04:05:02, and program the ingress switch to rewrite packets for this flow accordingly. From this point forward, the packet is processed by edge switch logic only. The initial traversal of the fabric results in the destination MAC address being re-written to 00:00:00:FF:04:05, and that altered packet being output from port 2 (to which RE1 is connected). As the middlebox does not mangle L2 headers, that address will remain intact when the packet is re-received by the fabric. It will then be output from port 5, to the IPS, and, finally, from port 4, to the endpoint for which the packet was originally destined.\footnote{In order for H to \textit{receive} this packet, the destination MAC address on the packet must be identical to the hardware address of H's receiving interface. This is trivial, as modern interfaces near-universally support administrator-configured Ethernet addresses. We assume herein that all interfaces connected to an actively switched network will be configured with the Ethernet address "00:00:00:00:00:FF", and all ARP queries will receive a response indicating that address.\footnotemark} \footnotetext{\label{foot:arp_abuse} One alternative we have explored is to have the controller respond to ARP requests directly with a path-encoded MAC address, rather than rewriting flows in the edge-switch. This mechanism can be useful, in that it reduces controller overhead substantially. 
It comes, however, at the cost of flexibility: individual flows cannot be independently steered; all traffic between two endpoints must follow the same path, as ARP requests are only issued on a per-host basis. What's more, the controller may not be able to invalidate a network participant's cached ARP-table entry: although it seems reasonable that an unsolicited ARP reply should suffice, we've observed that default security policy on many devices causes such packets to be ignored.} Subsequent packets of the flow received at the ingress switch will be annotated and steered identically, without controller involvement. \section{Extensions} We present a number of example extensions to the active switching architecture that might be employed to support its use in service of policies requiring longer path-lengths, and upon network topologies other than an abstract fabric. \subsection{Path length} There are a number of mechanisms that can be employed to effect traffic-steering over paths longer than five hops. \subsubsection{Address swapping} In order to support paths of up to 10 hops, simple changes are required to the algorithms presented in sections \ref{sec:ingress_behavior} and \ref{sec:edge_behavior}. To encode hops six through ten, we can also utilize the source address field in the Ethernet header. We omit a detailed description of the construction algorithm, as it is obvious. We present the modified edge-switch behavior in figure \ref{algo:handle_packet_x}. 
\begin{figure}[h] \centerline{ \setlength{\tabcolsep}{0pt} \framebox[1\columnwidth][l]{ \vspace{1em} \begin{minipage}[t]{1\columnwidth} {\small \begin{proto} xxx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\= \kill \all{{\bf handle\_packet$(\mathit{Packet}\ P)$}} \all{1\> let $\mathit{F} := $P$.$dst\_mac} \all{2\> let $\mathit{R} := $allocate\_and\_zero\_register$()$} \all{3\> $\mathit{R} \leftarrow $load\_field$(F, 0, 8)$} \all{4\> if $R == 0xFE$:} \all{5\>\> let $S := $P$.$src\_mac} \all{6\>\> $R \leftarrow $load\_field$(S, 0, 48)$} \all{7\>\> $F \leftarrow \mathit{R}$} \all{8\>\> $S \leftarrow 0$} \all{9\>\> handle\_packet$(P)$} \all{10\> else:} \all{11\>\> octet\_rshift\_field$(F)$} \all{12\>\> output\_to\_port$(P, R)$} \end{proto} } \end{minipage} \vspace{-3em} } } \caption{Edge-switch logic supporting 10-hop paths} \vspace{-1em} \label{algo:handle_packet_x} \end{figure} \subsubsection{Flow re-annotation} To handle circumstances where even a 10 hop path is insufficient to effect steering policy, a number of techniques may be employed to re-annotate flows within the network. \footnote{These examples are \textbf{not suggestions}; they are provided only to illustrate the potential of active switching. The author respectfully submits that traffic steering over paths longer than 10 hops is a solution in search of a problem.} Three options are presented herein; the first fails to meet the goal of supporting arbitrarily long paths, while the latter two represent the extrema of a spectrum of possibilities trading switch state and performance. \paragraph{Alternative storage.} Conceivably, a number of other writable header fields within the packet could be appropriated to encode path extensions. This method would require a deeper understanding of the behavior of the middleboxes along the path, however, and would still impose a constant upper-bound on the length of a path that a packet might be steered through; we thus reject it as an insufficient solution. 
\paragraph{Table lookup.} When the number of possible path continuations from an edge switch is small, the final octet of a MAC address can be used as an index into a lookup table of possibilities. This requires that the controller's behavior upon receipt of a packet be modified such that, if a packet will require in-network re-annotation, an identifier $i$ is allocated, and a flow rule is installed at the switch adjacent to the 10th hop re-annotating packets matching 00:00:00:00:00:$i$. When the number of path continuations is large, match rules against L3 headers may be employed to re-annotate packets.\footnote{We suggest it plausible that a combination approach, where $i$ indicates that a packet should be re-submitted through a particular flow table containing L3-match rules, might yield a performance enhancement.} \paragraph{Controller involved.} Trivially, every edge switch could submit all packets with empty source and destination addresses to the controller for reconsideration. This might be necessary if, for example, the path continuation of a flow beyond a certain point cannot be ascertained by the controller prior to the packet's processing by some prior middlebox. \subsection{Larger Fabrics} The only challenge with supporting fabrics exposing more than 255 ports is that we cannot conveniently encode port identifiers in a single octet. It is, however, straightforward to modify the techniques described previously to consider, e.g., pairs of bytes to identify a port, at the cost of path-length, when the number of ports on the fabric is known or bounded.\footnote{Conceivably, an encoding could be developed that did not assume fixed-width port identifiers as well, although the edge-switch logic to support such a scheme appears daunting.} The details are omitted for brevity. 
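As a hedged sketch of that fixed-width generalization (our own construction, not taken from the text): with $b$ octets per port, a 48-bit field offers $\lfloor 48/(8b)\rfloor$ slots, one of which is spent on an all-ones terminator:

```python
def encode_path_wide(ports, octets_per_port=2):
    """Generalize the one-octet scheme: each slot holds `octets_per_port`
    octets; an all-ones slot terminates the path. Illustrative only."""
    bits = 8 * octets_per_port
    term = (1 << bits) - 1                  # e.g. 0xFFFF for two-octet ports
    slots = 48 // bits                      # slots available in a 48-bit field
    assert len(ports) < slots and all(0 < p < term for p in ports)
    mac = term
    for p in reversed(ports):
        mac = (mac << bits) | p
    return mac

def next_hop_wide(mac, octets_per_port=2):
    """Pop one fixed-width port slot and shift the remainder right."""
    bits = 8 * octets_per_port
    return mac & ((1 << bits) - 1), mac >> bits

# Two 16-bit ports fit alongside the terminator: path (0x0102, 0x0203).
mac = encode_path_wide([0x0102, 0x0203])
port, mac = next_hop_wide(mac)
print(hex(port), hex(mac))  # 0x102 0xffff0203
```

With two-octet ports only two hops fit in the destination field (four if the source-address swap of the previous subsection is also used), which illustrates the stated trade-off between port-identifier width and path length.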
\subsection{Alternate topologies} The abstraction of a network fabric is convenient for these purposes, because it allows us to cleanly differentiate between inter-hop and intra-hop routing. However, active switching can be used to effect intra-hop traffic-steering as well. We assume a network topology consisting of an ingress switch, and no more than fifteen interior 32-port switches, connected so as to form a maximally-interconnected graph. To each interior switch are connected no more than fifteen middlebox or endpoint devices. \footnote{It's interesting to note that a fully-connected mesh of switches where there is only \textbf{one} middlebox or endpoint device connected to each interior switch is indistinguishable from a fabric for the purposes of this construction.} The ingress switch connects to an external network, as before. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{mesh} \caption{Example mesh network topology} \label{fig:mesh_topo} \end{figure} An example of such a topology is shown in figure \ref{fig:mesh_topo}. On each switch, only the port with identifier 1 is shown; the identifiers of subsequent ports are sequential, moving in the clockwise direction. We present two techniques for traffic steering through such a topology. \subsubsection{Hop-by-hop} \label{sec:hop_by_hop} Under the given topology, all traffic at any point in the network is no more than two switch traversals away from its next hop. What's more, all traffic received by any switch from an endpoint device or middlebox must traverse at most one switch, at which point one of the attached devices must be the next hop. We can exploit these known constraints to encode a five-hop path into a MAC address by using nibbles to identify the egress port from whence a packet should be transmitted. As there are no more than sixteen possible next-hop switches, and each switch connects to no more than 15 endpoint devices, this representation is non-ambiguous. 
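The nibble variant is the same shift-and-pop scheme with 4-bit slots; the following Python fragment (illustrative only, names ours) reproduces the annotation constructed in the example that follows:

```python
def encode_nibble_path(ports):
    """Pack up to ten 4-bit egress-port slots, popped least-significant
    nibble first; the 0xFF prefix plays the role of the terminator."""
    assert len(ports) <= 10 and all(0 < p < 16 for p in ports)
    mac = 0xFF
    for p in reversed(ports):
        mac = ((mac << 4) | p) & ((1 << 48) - 1)
    return mac

def next_hop_nibble(mac):
    """Switch step: pop the low nibble as the egress port."""
    return mac & 0xF, mac >> 4

def fmt(mac):
    return ":".join(f"{(mac >> (8 * i)) & 0xFF:02X}" for i in range(5, -1, -1))

# Ports 2 (switch A), 1 and 5 (switch B), 2 (switch E):
print(fmt(encode_nibble_path([2, 1, 5, 2])))  # 00:00:00:FF:25:12
```

Since each logical hop may take up to two switch traversals in this topology, the ten slots accommodate the five-hop paths claimed above.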
\paragraph{Example.} \label{p:hop_by_hop_ex} %
Suppose policy specified that traffic received at the ingress switch and addressed for receipt by host 8 should first be steered through host 1. The first packet of such a flow would be sent to the controller, which would determine that the sequence of ports through which the flow should be emitted is as follows: from switch A's port 2, then from switch B's port 1 (at which point the flow will traverse host 1), then from switch B's port 5, and finally from switch E's port 2 (to host 8). The controller, using this information, constructs the flow annotation ``00:00:00:FF:25:12'', and installs a flow-rule tagging this flow with that destination address. At each switch in the network, the least significant nibble is shifted off the address, and the packet is emitted out the port so identified. \subsubsection{Destination encoding} When dealing with fabrics, we claimed to be encoding port identifiers into the MAC address of packets. In order to extend that port-by-port traffic-steering concept to non-fabric topologies, in the previous section we encoded the egress port from each switch into the MAC address. However, implicit in the fabric abstraction was a one-to-one correspondence between port identifiers and the devices situated adjacent to those ports. It is as reasonable to claim that the fabric techniques encoded not port identifiers, but device identifiers, into the packet itself. We can extend this concept to non-fabric networks when the set of endpoint devices and middleboxes in the network is fixed, and the topology is known. The trade-off is that this requires maintaining $O(n)$-size state in each switch, where $n$ is the number of devices in the network. We begin by assigning a unique identifier to the ingress switch, and to each middlebox and endpoint device.
As in section \ref{sec:const_basic}, the controller will construct MAC addresses encoding the identifiers of each middlebox through which traffic will be steered prior to arrival at the flow endpoint. In order to preserve identifiers over, potentially, multiple inter-switch links, the flow-table actions must be altered. In Figure \ref{algo:mesh_prog}, we present an algorithm for the controller to program the switches in the network, assuming the existence of functionality to interrogate the network topology, and global arrays of network device identifiers and switches. The function \texttt{install\_rewrite\_rule} installs a flow-table rule that operates as described in section \ref{sec:edge_behavior}. The function \texttt{install\_forward\_rule} installs a rule matching the final octet of the destination MAC address against the given identifier, and forwards the packet out the given port unmodified. Effectively, we build a forwarding table for all identifiers in the network, for each switch in the network. \begin{figure}[h] \centerline{ \setlength{\tabcolsep}{0pt} \framebox[1\columnwidth][l]{ \vspace{1em} \begin{minipage}[t]{1\columnwidth} {\small \begin{proto} xxx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\=xx\= \kill \all{{\bf program\_switch$(\mathit{Switch}\ S)$}} \all{1\> for $\mathit{id} \in \mathit{ALL\_IDS}$:} \all{2\>\> if is\_adjacent$(S, \mathit{id})$:} \all{3\>\>\> install\_rewrite\_rule$(S, \mathit{id}, $get\_port\_for\_id$( S, \mathit{id}))$} \all{4\>\> else:} \all{5\>\>\> install\_forward\_rule$(S, \mathit{id}, $get\_next\_hop$( S, \mathit{id}))$} \all{} \all{{\bf program\_network$()$}} \all{6\> for $\mathit{s} \in \mathit{ALL\_SWITCHES}$:} \all{7\>\> program\_switch$(\mathit{s})$} \end{proto} } \end{minipage} \vspace{-3em} } } \caption{Controller algorithm for mesh-topology programming} \vspace{-1em} \label{algo:mesh_prog} \end{figure} \paragraph{Example.} Consider again the policy described in the example from section \ref{p:hop_by_hop_ex}.
Suppose further that the next-hop tables shown in Table \ref{tab:next_hop_table} have already been installed by the controller in each switch, such that the destination MAC address is shifted only when a switch emits a packet toward an adjacent destination device (on the interior switches of this topology, via ports 1 or 2). The controller simply constructs the MAC address 00:00:00:FF:08:01. Switch A outputs the packet unmodified on port 2, leading to switch B, which shifts the MAC address right by an octet, and outputs the packet on port 1. Upon re-receipt by switch B, it is emitted on port 5 towards switch E, unaltered. Switch E shifts the MAC address right by an octet, and emits the packet on port 2. \begin{table} \begin{center} \textbf{Destination ID} \begin{tabular}{l c || c | c | c | c | c | c | c | c | c} \cline{2-11} & & gw & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \cline{2-11} \cline{2-11} & A & 1 & 2 & 2 & 3 & 3 & 4 & 4 & 5 & 5 \\ \cline{2-11} & B & 6 & 1 & 2 & 3 & 3 & 4 & 4 & 5 & 5 \\ \cline{2-11} & C & 5 & 6 & 6 & 1 & 2 & 3 & 3 & 4 & 4 \\ \cline{2-11} & D & 4 & 5 & 5 & 6 & 6 & 1 & 2 & 3 & 3 \\ \cline{2-11} \begin{rotate}{90}\textbf{Source~Switch}\end{rotate} & %
E & 3 & 4 & 4 & 5 & 5 & 6 & 6 & 1 & 2 \\ \cline{2-11} \end{tabular} \end{center} \caption{Next-hop table showing the per-switch egress port for each destination ID.} \label{tab:next_hop_table} \end{table} \section{Implementation} \label{sec:poc} We have implemented the functionality described in section \ref{sec:hop_by_hop} as a proof-of-concept, subject to the alteration described in footnote \ref{foot:arp_abuse}. The implementation is written in Python, and contains approximately 500 lines of code. It is constructed as a module for the POX~\cite{pox} controller, and has been successfully deployed on a Mininet~\cite{mininet} testbed utilizing Open vSwitch. We have additionally implemented the encapsulation layer described in section \ref{sec:local_dscp}, so as to support middleboxes that act as routers.
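As a sanity check on the destination-encoding scheme, the example walk above can be reproduced with a short Python model (a toy sketch; the tables are transcribed by hand from the relevant rows of Table \ref{tab:next_hop_table} and the topology's adjacencies, and all names are ours, not the proof-of-concept's):

```python
# Per-switch next-hop port for the destination identifiers used in the
# example, and the device identifiers adjacent to each switch.
NEXT_HOP = {
    "A": {1: 2, 8: 5},
    "B": {1: 1, 8: 5},
    "E": {8: 2},
}
ADJACENT = {"A": set(), "B": {1, 2}, "E": {7, 8}}

def switch_step(switch, mac):
    """One switch traversal under destination encoding: forward on the
    table port for the final-octet identifier, shifting the address
    right by an octet only when a rewrite rule matches (i.e. the
    destination identifier is adjacent to this switch)."""
    dest_id = int(mac.split(":")[-1], 16)
    port = NEXT_HOP[switch][dest_id]
    if dest_id in ADJACENT[switch]:
        mac = "00:" + mac[:-3]  # shift right one octet
    return port, mac
```

Starting from \texttt{00:00:00:ff:08:01} at switch A, the model yields the port sequence $2, 1, 5, 2$ and the two octet shifts described above.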
\section{Implications} We arrived at a number of unexpected implications relevant to various subsets of the networking community in the course of this research. We offer them herein. \paragraph{For middlebox designers.} Perhaps the most frustrating aspect of this work has been attempting to integrate L2-mangling middleboxes (or, as we have come to describe them informally, ``routers-with-side-effect'') into this architecture, and we still are not entirely satisfied with the resolution presented in section \ref{sec:routers_with_side_effect}. Our frustration is exacerbated by the fact that this L2-mangling does not meaningfully contribute to the overall functionality implemented within a middlebox, but is simply an artifact of legacy network construction and operation. We urge middlebox designers to design middleboxes as bridges, not routers. \paragraph{For SDN architects and switch designers.} We believe that the degree to which the OpenFlow specification incorporates semantic understanding of upper-layer protocols (such as IP and TCP) is excessive, and results in diminished flexibility and increased maintenance cost. Although we utilize the MAC address header fields, the use to which we put them is decidedly not for storing addresses; the semantic meaning attached to those fields by OpenFlow is effectively a legacy meaning rendered obsolete by SDN. However, the restrictions of active switching (i.e. path length, port identifier size) are all a direct result of that legacy meaning---specifically, that the addresses used in 802.1d MAC bridging are six bytes in length. We suggest that primitives matching and acting on bit-strings at given offsets into the packet would result in a more flexible protocol, and that ease-of-use concerns could be mitigated by incorporating the upper-layer protocol semantic awareness into a controller or switch programming library.
We also note that register and rewrite actions vastly increase the flexibility of software-defined networking, and, thus, the availability and feasibility of solutions to challenges within the networking space. We encourage their broader availability. \paragraph{For developers of SDN applications.} We note that there is a striking similarity between the logic commonly programmed into SDN switches, and the behavior of legacy (``flood-and-learn'') switches. In many applications that we have observed, it appears that the primary contribution of SDN is to eliminate the need to flood and to allow forwarding decisions to be made on upper-layer headers. However, as the specificity of match rules increases, a greater number of such rules is required to describe policy for the same volume of traffic, and the size of said match rules is significantly larger than that of the rules learned by legacy switches. As a result, we believe that this common approach to the construction of software-defined networks does not realize the promised efficiency gains of SDN, and may, in fact, be less efficient than legacy networking.\footnote{At least in terms of space.} We encourage the developers of SDN applications to explore techniques that do not result in flow-table explosion on the order of the number of flows. \section{Introduction} As the enabling technology of many recent networking innovations, SDN has been widely described and viewed as a ``Swiss Army knife'' capable of solving all manner of challenges within the networking domain. While this reputation is largely well-deserved, less discussed are the problem spaces in which SDN is counterproductive or of non-obvious utility. In previous work, we explored the challenges and opportunities of SDN as an infrastructure for the implementation of complex traffic intermediation and modification functionality utilizing middleboxes~\cite{stratos}.
Our experience constructing said controller led us to identify a seemingly significant, yet little-discussed, impediment to the use of SDN for this use case, as well as for broader scenarios involving middlebox chaining~\cite{servicechaining} and/or network functions virtualization~\cite{nfv}. Consider an administrative policy requiring all traffic to be shunted through a middlebox prior to its arrival at the destination host, and suppose there are multiple identical instances of that middlebox the controller might choose from. In order to effect a unique per-flow path to ensure effective load balancing, the controller must install rules in the switches adjacent to the middleboxes and endpoints that uniquely identify each possible traversing flow\footnote{It is sometimes suggested that load-balancing within an SDN can be achieved using prefix-match rules to decide among the candidate middleboxes. At best, such a solution can only provide probabilistic guarantees as to equitable load-distribution, and the strength of those guarantees is proportional to the number of prefix-match rules (i.e. the inverse of the cardinality of the prefix). What's more, this technique is predicated on a somewhat-predictable distribution of source addresses within the address space. Large traffic spikes from a particular geographical region can easily break this system, saturating one device while underutilizing the remainder.}, leading to a state-size requirement in {\it each switch} that is proportional to the number of concurrent flows in the network. This problem is usually described as {\bf state-space explosion}, and it presents a significant limitation to the scalability of SDN architectures. However, consider the possibility that the middleboxes interposed on traffic not only inspect, but also {\it mangle} (modify) said traffic.
Such a middlebox could, conceivably, alter the traffic such that, on egress, it no longer appears to belong to the same flow to which it belonged on ingress (for example, by modifying header addressing fields). When the behavior of the middlebox is known and deterministic, this challenge can be overcome by informing the controller of the modifications the middlebox will make to traffic; however, this requires middleboxes and controllers to share state, and to duplicate some work. In the case of a middlebox that might arbitrarily and unpredictably alter traffic, though, the controller {\it cannot} craft a match-rule that will reliably identify the flow post middlebox-traversal. We call this problem the {\bf post-traversal flow reassociation problem}, and, in general, we call the challenges involved in the implementation of such a system the {\bf traffic-steering problem}. In \cite{stratos}, we partially obviated these problems by accepting certain design and functional limitations on the behavior the controller might effect, and by placing certain restrictions on the behavior of middleboxes that can be utilized within the system. While functional, the mechanisms we employed suffer from significant limitations in scalability (e.g., imposing a fairly small limit on the number of potential next hops in a chain) and functionality (e.g., re-purposing DSCP bits in the IP header precludes certain forms of QoS). Similar limitations can also be found in other approaches to this challenge (e.g., \cite{flowtags}). This paper seeks an architecturally clean, yet immediately deployable, approach to this problem of growing importance. We present a solution that we believe to be more flexible and more efficient than comparable systems previously described in the literature. In brief, we utilize fields within each packet that have been rendered useless by SDN to {\em cache} the result of the first flow-table lookup for a packet.
Put differently, we {\it annotate} each packet at the ingress point to the network with the controller's routing decisions, thereby eliminating the need for each packet to be matched against a flow-unique rule at each switch in the network. We dub this technique {\bf active switching}. During the design and implementation of our prototype active switching controller (SOFT), we came to believe that this solution has broader utility beyond the challenges involved in middlebox chain-composition and traffic steering. While the primary focus of this paper is that set of problems, later sections describe how active switching may be useful in different scenarios, such as the construction of network fabrics, and routing within cyclic topologies. We begin this paper by discussing the architectural artifacts that inform the implementation and behavior of commonly available middleboxes, and offer a taxonomy for describing and classifying them by their operation as it is visible to the network. We then discuss previously identified solutions to the traffic-steering problem, and their limitations. We proceed to describe the architecture of a novel potential solution to this problem, and evaluate a proof-of-concept implementation. Finally, we close with a general sketch of extensions allowing active switching to address a broader class of problems, and take-away implications for various members of the networking community. \section{Supporting arbitrary middleboxes} \label{sec:ext} In previous sections, we limited those middleboxes supported within an active switching architecture to only those that do not modify the network address fields of traversing packets, and that maintain a one-to-one correspondence between packets received and packets emitted. Those limitations, while helpful in describing the design of this system, dramatically restrict the variety of middleboxes that can be supported.
Herein we present extensions to the logic described by previous sections that can be employed to the end of supporting arbitrary middleboxes, {\it including} those that mangle MAC addresses and originate flows. \subsection{Flow-originating middleboxes} Middleboxes that originate new flows can be trivially supported within this architecture via logic similar to that employed for reverse-path construction. Conceptually, each edge-switch adjacent to a middlebox from which a flow might originate is treated as an ingress switch: the controller produces a path annotation for the initial packet of the flow, and subsequent packets are annotated directly in the edge switch. Each edge switch so used must then maintain state of size $O(n)$, where $n$ is again the number of flows originating at the device adjacent to the switch. \subsection{L2-mangling middleboxes} In order to support middleboxes that are not L2-transparent, it is useful to take a step back and consider the purpose of L2-header mangling. There are two cases to consider: \subsubsection{Middleboxes providing network-layer service} \label{sec:routers_with_side_effect} By ``network-layer service,'' we are referring to functionality that acts exclusively on packets' layer 2 headers; examples include ARP-spoofing detection and ether-NAT. Active switching cannot support the integration of such middleboxes within the architecture. In many cases, the functionality provided by such middleboxes is not applicable to networks other than traditional Ethernet. As software-defined networking provides the requisite primitives to effect the functionality of the remainder directly within the network, we do not believe that the lack of support for this class of middlebox represents a significant limitation of the architecture.
\subsubsection{Middleboxes providing application-layer service} \label{sec:middleboxbox} We describe a middlebox that coerces L2-transparency from middleboxes implemented as routers.\footnote{Note that, while the description herein refers to the encapsulation layer as a ``middlebox,'' it need not be a discrete network participant. In the proof-of-concept implementation described in section \ref{sec:poc}, this middlebox is implemented as a dedicated OpenFlow switch; it would furthermore be feasible to fully integrate this functionality directly into the network edge switches.} \begin{figure}[b] \centering \includegraphics[width=0.4\textwidth]{middleboxbox} \caption{Encapsulating middlebox connectivity} \label{fig:middleboxbox} \end{figure} This middlebox has four logical interfaces: \begin{compactitem} \item upstream outer \item upstream inner \item downstream outer \item downstream inner \end{compactitem} The outer interfaces connect to the edge of the switching fabric in the directions indicated. The inner interfaces connect to the encapsulated middlebox's interfaces in the directions indicated. These connections are illustrated in Figure \ref{fig:middleboxbox}. There are several plausible mechanisms by which this encapsulation layer can address the post-traversal flow re-annotation challenge resulting from the encapsulated middlebox's L2-header mangling. We present two alternatives: the first requires some amount of semantic understanding of the behavior of the middlebox; the second does not, but constrains the number of possible path continuations of flows traversing the middlebox. \paragraph{Associative array.} %
Upon receipt of a packet on the upstream outer interface, the encapsulating middlebox extracts sufficient information from the packet to identify it when emitted from the encapsulated middlebox in the downstream direction, and uses that information as a key into an associative array storing the packet's L2 addressing.
The encapsulating middlebox then mangles the L2 headers as/if required by the encapsulated middlebox, and emits the packet on the upstream inner interface. Upon receipt of a packet on the downstream inner interface, the middlebox extracts from the packet the information required to dereference the associative array, and rewrites the packet with the L2 headers thereby produced before emitting it on the downstream outer interface. \paragraph{Local DSCP tagging.} \label{sec:local_dscp} %
This mechanism operates similarly to the previous, with one major exception: rather than extracting information from the packet to key the array, we instead assign an identifying tag to each path continuation observed on packets prior to traversing the interior middlebox, and tag the packet with said identifier using the DSCP bits. Using the DSCP bits in this fashion substantially mitigates the drawbacks described in section \ref{sec:global_dscp}, as tags need not be globally unique, nor have consistent meaning at different points in the network. In fact, this technique can be employed even when the network as a whole utilizes the DSCP bits for QoS, although it further constrains the number of possible path continuations beyond the encapsulation layer. \section{Motivation} \label{sec:motivation} Previous work (e.g. \cite{flowtags}, \cite{stratos}) has discussed the general set of problems related to traffic steering in detail. As such, we do not attempt to fully describe that problem space herein, and refer the interested reader to related works for an in-depth treatment. Herein, we describe the challenges only to the degree sufficient to motivate the remainder of this paper. \subsection{Definition} For the purposes of this system, we define a middlebox as a network device that interposes on network traffic prior to its receipt by the intended destination.
This definition is somewhat surprising in its generality, in that switches and routers can be validly and reasonably considered to be middleboxes under it. However, we have found that more-limited definitions of middlebox are insufficient to describe the full spectrum of extant devices that are traditionally considered to be one. For example, in terms of network-visible behavior, a simple switch and a transparent intrusion detection system operate identically, as both receive traffic and re-emit it without modification. Similarly, a traditional IP router and a MAC-layer network address translation device both emit received traffic with modifications to the network addressing header fields. As such, we believe that any reasonable definition of middlebox that encompasses devices such as IDSs and MAC NATs must also encompass devices such as traditional switches and routers. \subsection{Taxonomy} For reasons historical, there are two predominant architectural foundations upon which the higher-level functionality of middleboxes is implemented. In general, these types of middleboxes can be distinguished by the type of traffic the middlebox is able to ``see'': \begin{compactitem} \item A \textbf{bridging middlebox} receives all packets available on the medium. \item A \textbf{routing middlebox} only receives packets that are constructed so as to appear that the middlebox is a valid ``next-hop'' on a route from the source address to the destination address, given the middlebox's IP configuration. \end{compactitem} While this distinction implies the potential side-effects of a given middlebox, it is important to note that it does not meaningfully describe or constrain the behavior a given middlebox may exhibit with respect to traffic ``seen''. For the purpose of discussing the behavior and higher-level functionality implemented by a middlebox, this distinction is not useful, as these types of middlebox are roughly equipotent.
To the end of describing the behavior of middleboxes from the perspective of the network itself, we can also classify middleboxes based on their possible behavior \textit{upon those packets received}: \begin{compactitem} \item A \textbf{transparent middlebox} emits all packets received from upstream without alteration in the downstream direction. \item A \textbf{translucent middlebox} need not emit all packets, but all packets that are emitted in the downstream direction are identical to some packet received from upstream. \item A \textbf{mangling middlebox} may emit packets downstream that are not identical to some packet received from upstream; however, each packet emitted downstream ``corresponds''\footnote{This relationship is left somewhat vague intentionally, as a middlebox which appears to originate and terminate flows when considered as a black box may, in fact, be recognizable as a mangling middlebox given a sufficiently nuanced understanding of its behavior.} to some packet received from upstream. \item A \textbf{connection-originating middlebox} may act as an originator of new flows. \item A \textbf{connection-terminating middlebox} may terminate flows received.\footnote{This could also be viewed as an extreme case of middlebox translucency.} \end{compactitem} We will use this latter terminology to describe middleboxes through which traffic might be steered throughout the rest of the paper.\footnote{Only a subset of this taxonomy is employed in the remainder, as only that subset is relevant to the design of active switching. We believe, though, that this taxonomy, as a whole, could have broader utility for related middlebox research, and offer it to that end.} We employ a convenient abstraction in this work, that all middleboxes have exactly three interfaces: \begin{compactitem} \item An \textbf{ingress interface}, upon which traffic is only received. \item An \textbf{egress interface}, upon which traffic is only transmitted.
\item A \textbf{control interface}, responsible for management traffic. \end{compactitem} We do not believe that this abstraction in any way lessens the generality of the discussion to follow. Consider another occasionally-useful abstraction of interfaces, which we employ in later sections: \begin{compactitem} \item An \textbf{upstream interface}, upon which traffic originating from the external network is received, and through which traffic originating from endpoint hosts within the network is transmitted. \item A \textbf{downstream interface}, which functions as the inverse of the upstream interface. \item A \textbf{control interface}, as in the previous abstraction. \end{compactitem} These abstractions are equivalent, as the following construction demonstrates: on a middlebox featuring physical upstream/downstream interfaces, the abstract ingress interface traffic can be identified as that which is received by either physical interface. Conversely, the abstract egress interface traffic is necessarily all traffic transmitted by either physical interface. The control interface behaves identically under either abstraction. \subsection{Topology} \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{fabric} \caption{Abstract network topology} \label{fig:fabric} \end{figure} \begin{figure*}[ht!] \centering \includegraphics[width=0.8\textwidth]{chain} \caption{Example policy chain} \label{fig:policy_chain} \end{figure*} We assume the existence of a network fabric as described by M.~Casado~\textit{et al.} in \cite{fabric}, with OpenFlow-enabled edge switches. Positioned about the fabric is a collection of servers, middleboxes, and gateway switches connecting to an external network.
This abstract topology is illustrated by Figure \ref{fig:fabric}, where the numerals situated about the fabric identify the egress port from the network for traffic bound for a particular device.\footnote{In later sections, we show that the fabric abstraction is equivalent to a connected graph of OpenFlow-enabled switches with a common controller. As such, this abstraction does not limit the generality or applicability of the discussion to follow.} The figure omits, and we do not define identifiers for, the fabric's ingress ports. Multiple tenants may co-exist within this network; as such, the various endpoints may not share a common owner, and their connectivity may be logically isolated from that of other tenants within the same network by means of a network virtualization facility such as 802.1q~\cite{802.1q}, STT~\cite{stt}, VXLAN~\cite{vxlan} or NVGRE~\cite{nvgre}. \subsection{Policy} We further assume a policy language for describing rules interposing chains of middleboxes between the ingress switch and destination for traffic matching given patterns, and an OpenFlow controller, connected to the fabric's edge switches. We will consider, as an example, a policy that specifies traffic destined for host \textit{H} must first traverse one of two redundancy eliminators \textit{RE1} or \textit{RE2}, followed by a single intrusion prevention device \textit{IPS}. This policy is illustrated in Figure \ref{fig:policy_chain}. We further specify two general requirements on the behavior of the system: \begin{compactitem} \item A flow initially submitted to one of multiple potential equivalent middleboxes must maintain \textbf{affinity} with that instance over the life of the flow. \item A flow that passes through a given sequence of middleboxes on the downstream path to the destination must maintain \textbf{symmetricality} on the return path, meaning that the very same middleboxes must be traversed in reverse order. 
\end{compactitem} We now consider the question of how such a controller might program the edge switches of the fabric to effect this policy. \section{Definitions and Assumptions} \label{sec:motivation} \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{fabric} \caption{Abstract network topology} \label{fig:fabric} \end{figure} \begin{figure*}[ht!] \centering \includegraphics[width=0.8\textwidth]{chain} \caption{Example policy chain} \label{fig:policy_chain} \end{figure*} For the purposes of this system, we define a middlebox as a network device that performs some application-level functionality. We employ a convenient abstraction in this work, that all middleboxes have exactly three interfaces: \begin{compactitem} \item An \textbf{ingress interface}, upon which traffic is only received. \item A \textbf{egress interface}, upon which traffic is only transmitted. \item A \textbf{control interface}, responsible for management traffic. \end{compactitem} We do not believe that this abstraction in any way lessens the generality of the discussion to follow. Consider another occasionally-useful abstraction of interfaces, which we employ in later sections: \begin{compactitem} \item An \textbf{upstream interface}, upon which traffic originating from the external network is received, and through which traffic originating from endpoint hosts within the network is transmitted. \item A \textbf{downstream interface}, which functions as the inverse of the upstream interface. \item A \textbf{control interface}, as in the previous abstraction. \end{compactitem} These abstractions are equivalent, as the following construction demonstrates: on a middlebox featuring physical upstream/downstream interfaces, the abstract ingress interface traffic can be identified as that which is received by either physical interface. Conversely, the abstract egress interface traffic is necessarily all traffic transmitted by either physical interface. 
The control interface behaves identically under either abstraction. We assume the existence of a network fabric as described by M.~Casado,~\textit{et al.}in \cite{fabric}, with OpenFlow-enabled edge switches. Positioned about the fabric is a collection of servers, middleboxes, and gateway switches connecting to an external network. This abstract topology is illustrated by Figure \ref{fig:fabric}, where the numerals situated about the fabric identify the egress port from the network for traffic bound for a particular device.\footnote{In later sections, we show that the fabric abstraction is equivalent to a connected graph of OpenFlow-enabled switches with a common controller. As such, this abstraction does not limit the generality or applicability of the discussion to follow.} The figure omits, and we do not define identifiers for, the fabric's ingress ports. Multiple tenants may co-exist within this network; as such, the various endpoints may not share a common owner, and their connectivity may be logically isolated from that of other tenants within the same network by means of a network virtualization facility such as 802.1q~\cite{802.1q}, STT~\cite{stt}, VXLAN~\cite{vxlan} or NVGRE~\cite{nvgre}. We further assume a policy language for describing rules interposing chains of middleboxes between the ingress switch and destination for traffic matching given patterns, and an OpenFlow controller, connected to the fabric's edge switches. We will consider, as an example, a policy that specifies traffic destined for host \textit{H} must first traverse one of two redundancy eliminators \textit{RE1} or \textit{RE2}, followed by a single intrusion prevention device \textit{IPS}. This policy is illustrated in Figure \ref{fig:policy_chain}. 
We further specify two general requirements on the behavior of the system: \begin{compactitem} \item A flow initially submitted to one of multiple potential equivalent middleboxes must maintain \textbf{affinity} with that instance over the life of the flow. \item A flow that passes through a given sequence of middleboxes on the downstream path to the destination must maintain \textbf{symmetricality} on the return path, meaning that the very same middleboxes must be traversed in reverse order. \end{compactitem} \subsubsection*{Source} The active switching proof-of-concept implementation (called SOFT) can be found at \url{http://pages.cs.wisc.edu/~sstjohn/soft.tgz}. It is released under the terms of the GPLv3. \section{Steering} Previous work (e.g. \cite{flowtags}, \cite{stratos}) has discussed the general set of problems related to traffic steering in detail. As such, we do not attempt to fully describe that problem space herein, and refer the interested reader to related works for an in-depth treatment. Herein, we describe the challenges only to the degree sufficient to motivate the remainder of this paper. In order for a controller to implement the desired functionality as described in the previous section, it must be able to select, on a per-flow basis, a sequence of intermediary devices through which traffic is shunted prior to arrival at its destination. It must also be able to induce behavior satisfying its policy decisions from each device along that sequence (i.e. each network device must be able to identify the correct output port for each flow.)
Under constructions where flows are identified by a distinct per-switch match-rule, this rapidly leads to state-space explosion within the network, as the number of match rules required in aggregate is proportional to the product of the number of concurrent flows and the number of switches traversed by those flows. Additionally, a controller implementing that desired functionality must also be able to support middleboxes within the network that arbitrarily modify traffic, without relying on an understanding of the behavior of that middlebox. However, as such a middlebox might alter the fields of the packet upon which match-rules rely, it does not appear that a match-rule per-switch per-flow construction could possibly satisfy this requirement. A number of techniques have been proposed to effect the goal of flexible traffic steering through an SDN that do not suffer from the above-described challenges, which we refer to generally as the {\bf traffic steering problem}. Some of these techniques, while superficially plausible, fail to fully obviate the traffic steering problem, or remain subtly impeded by it. Others impose severe limitations that constrain the utility of the approach. We consider these techniques and discuss their shortcomings herein. \subsection{Policy Matching in Edge Switches} An initially appealing, yet naive approach is to install flow rules steering matched traffic in each edge switch of the fabric. While this technique is easy to implement, and requires only those primitives commonly available within software-defined networks (e.g. those defined by the OpenFlow 1.0 standard), it presents a number of serious problems that effectively preclude its utilization in all but the most trivial of scenarios. One such issue with this approach restricts the variety of middlebox employed within the network, or requires undue state-sharing between all middleboxes and the controller.
We refer to this issue as the \textit{post-traversal flow re-association} problem: during the traversal of certain types of middlebox (specifically those that mangle L2 and/or L3 headers), it is possible for the packet to be modified such that it no longer maintains the expected values upon which match rules were constructed. For example, an L4-NAT middlebox might conceivably alter all header fields upon which a traditional OpenFlow 10-tuple might match.\footnote{This could be obviated by informing the controller of the behavior of the middlebox in some cases, but we reject this as a suitable general proposal for two reasons: it requires re-implementing substantial middlebox functionality in the controller, and it fails entirely when the behavior of a middlebox is non-deterministic.} Supposing that the post-traversal flow re-association problem were sufficiently addressed in a given context so as to not represent a severe limitation, another issue exists that would restrict the \textit{scalability} of this system. In general, re-associating packets with their correct annotation at every edge switch requires each switch to maintain at least as many flow-table entries as there are flows that might traverse the switch at any given time.\footnote{In fact, the most straightforward implementation requires even more state than that: each switch would need an ingress rule and an egress rule for traffic entering and departing the middlebox, respectively.} While there are specific situations where a sufficiently informed controller could craft flow-table entries that correctly match on more than one flow, such a technique is not generally applicable, and still suffers from the same flaw: the amount of state in each switch remains proportional to the number of flows that might be expected to traverse it, and, therefore, the number of flows supported across any given switch along the edge of the fabric is limited by the size of the flow table of that switch.
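The scaling argument above can be made concrete with a back-of-the-envelope model (all numbers illustrative; the two-rules-per-flow-per-switch figure follows the ingress/egress accounting in the footnote):

```python
# Rough model of per-switch, per-flow rule state; numbers are illustrative.

RULES_PER_FLOW_PER_SWITCH = 2  # an ingress rule plus an egress rule

def aggregate_rules(concurrent_flows, switches_on_path,
                    rules_per_flow_per_switch=RULES_PER_FLOW_PER_SWITCH):
    # Total rules installed across the fabric: proportional to the product of
    # the number of concurrent flows and the switches those flows traverse.
    return concurrent_flows * switches_on_path * rules_per_flow_per_switch

def max_flows_per_switch(flow_table_size,
                         rules_per_flow_per_switch=RULES_PER_FLOW_PER_SWITCH):
    # Flows a single edge switch can carry before its flow table is exhausted.
    return flow_table_size // rules_per_flow_per_switch
```

For example, ten thousand flows crossing three edge switches each require sixty thousand rules in aggregate, while a switch with a 4096-entry flow table caps out at roughly two thousand concurrent flows.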
\subsection{Packet Tagging in Ingress Switches} We now consider the viability of techniques that require state proportional to the number of current flows at ingress switches only.\footnote{We believe this to be a more tractable scalability limitation, as multiple ingress switches could be employed simultaneously in parallel, or switches with larger flow tables could be used, without requiring such capability at \textit{every} edge-switch in the network.} \subsubsection*{Tunneling} ``Tunneling'' protocols such as VXLAN~\cite{vxlan}, STT~\cite{stt}, or NVGRE~\cite{nvgre} are often proposed (in a hand-waving fashion) as potential tools to steer traffic among middleboxes. Upon more serious consideration, however, a number of flaws in these approaches become apparent. When considering the case of a tenant with a single, ``linear'' policy chain, these technologies may appear to offer a plausible solution to the traffic steering issue. However, when considering the case of a policy-chain with multiple potential next-hops at any point along the path, it is obvious that this approach suffers from the same post-traversal flow re-association problem discussed previously.\footnote{Consider also the case of a tenant with multiple linear policy chains that share a subset of middleboxes, or that describe policy requiring traversal in a different order.} Fundamentally, these techniques operate by assigning a unique identifier to each tunnel or virtualized network existing upon the underlying network, that is carried by all packets belonging to these tunnels. As the identifiers serve to allow the underlying network to differentiate between individual links or tenants, it does not seem plausible that the very same identifier could also be reasonably extended to differentiate between paths \textit{within} the overlay.
We believe that tunneling protocols are not appropriate tools to this end, as their utilization for this purpose conflates the goals of tenant isolation and traffic steering. Their design was motivated by the need to provide the appearance of an isolated broadcast domain to tenants. Given that the challenges around traffic-steering are not at all mitigated by the existence of an isolated broadcast domain, it does not seem plausible that tools designed to effect an isolated broadcast domain would be sufficient to meet this need. \subsubsection*{DiffServ Code Point Repurposing} \label{sec:global_dscp} Prior work (e.g. \cite{flowtags}, \cite{stratos}) has proposed repurposing the DSCP bits in the IP header to the end of annotating packets for correct steering. While this approach can work in practice, it, too, suffers from many of the limitations previously described: for example, the number of flows such a technique is capable of supporting concurrently is constrained by the number of annotations that might be encoded with the 6 DSCP bits ($2^6=64$). More significantly, the correctness of this approach is predicated on a pair of brittle and potentially unsafe assumptions: that the DSCP bits are otherwise unused within the network, and that middleboxes emit them unmolested. The former assumption precludes the use of QoS within the network, and the latter may simply not be true. \subsection{Other techniques} Related work has suggested the use of shims between the L2 and L3 headers, or requiring modifications to middleboxes that are to be employed within the SDN. We reject the former approach for suffering from the same limitations as tunneling-based solutions, and the latter for not being generally applicable. The elimination of middleboxes entirely has also been suggested by, e.g., \cite{end_to_middle}.
To an extent, we \textit{agree} that this is a valid approach in some cases: software-defined networking has made available, \textit{within the network}, sufficient functionality to effect many network-layer tasks traditionally performed by middleboxes (e.g., NAT). We believe that the responsibility for such functionality should be devolved to the network itself where possible, and that such devolution is entirely appropriate. However, there exist many tasks for which middleboxes are commonly used that do not lend themselves to being integrated into the network proper, as they require actions to be taken based on an understanding of upper-layer protocols. It does not seem plausible to assume that all possible middlebox functionality could ever be devolved to the network entirely, and, as such, we dismiss this approach as fantasy.\footnote{We have, however, limited the types of middleboxes under consideration in this paper to those which cannot be adequately handled by the network itself, as a result of this argument.} \subsection{Active Switching} The remainder of this paper discusses an approach to traffic steering that we believe to be novel as a whole, although inspired by various aspects of the techniques previously discussed. We call this method \textit{active switching}, as it requires edge switches to act on packets in ways other than just forwarding based on match-rule logic. We believe that it substantially mitigates the flaws of the previously discussed possibilities. \section{Taxonomy} For reasons historical, there are two predominant architectural foundations upon which the higher-level functionality of middleboxes is implemented. In general, these types of middleboxes can be distinguished by the type of traffic the middlebox is able to ``see'': \begin{compactitem} \item A \textbf{bridging middlebox} receives all packets available on the medium.
\item A \textbf{routing middlebox} only receives packets that are constructed so as to appear that the middlebox is a valid ``next-hop'' on a route from the source address to the destination address, given the middlebox's IP configuration. \end{compactitem} While this distinction implies the potential side-effects of a given middlebox, it's important to note that it does not meaningfully describe or constrain the behavior a given middlebox may exhibit with respect to traffic ``seen''. For the purpose of discussing the behavior and higher-level functionality implemented by a middlebox, this distinction is not useful, as these types of middlebox are roughly equipotent. To the end of describing the behavior of middleboxes from the perspective of the network itself, we can also classify middleboxes based on their possible behavior \textit{upon those packets received}: \begin{compactitem} \item A \textbf{transparent middlebox} emits all packets received from upstream without alteration in the downstream direction. \item A \textbf{translucent middlebox} need not emit all packets, but all packets that are emitted in the downstream direction are identical to some packet received from upstream. \item A \textbf{mangling middlebox} may emit packets downstream that are not identical to some packet received from upstream; however, each packet emitted downstream ``corresponds''\footnote{This relationship is left vague somewhat intentionally, as a middlebox which appears to originate and terminate flows when considered as a black-box may, in fact, be recognizable as a mangling middlebox given a sufficiently nuanced understanding of its behavior.} to some packet received from upstream. \item A \textbf{connection-originating middlebox} may act as an originator of new flows.
\item A \textbf{connection-terminating middlebox} may terminate flows received.\footnote{This could also be viewed as an extreme case of middlebox translucency.} \end{compactitem} We will be using this latter terminology to describe middleboxes through which traffic might be steered throughout the rest of the paper.\footnote{Only a subset of this taxonomy is employed in the remainder, as only that subset is relevant to the design of active switching. We believe, though, that this taxonomy, as a whole, could have broader utility for related middlebox research, and offer it to that end.}
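One way to encode the behavioral taxonomy is sketched below (an illustrative encoding of ours, not an artifact of the paper); the predicate captures which behaviors are guaranteed not to trigger the post-traversal flow re-association problem discussed earlier.

```python
# Sketch of the behavioral middlebox taxonomy as a Python enum; the encoding
# and the predicate below are illustrative, not part of the paper's system.
from enum import Enum, auto

class Behavior(Enum):
    TRANSPARENT = auto()             # emits everything received, unaltered
    TRANSLUCENT = auto()             # may drop packets, never alters them
    MANGLING = auto()                # may alter; emissions correspond to inputs
    CONNECTION_ORIGINATING = auto()  # may originate new flows
    CONNECTION_TERMINATING = auto()  # may terminate received flows

def preserves_match_fields(behavior):
    # Only transparent and translucent middleboxes are guaranteed to leave
    # intact the header fields upon which downstream match rules rely.
    return behavior in (Behavior.TRANSPARENT, Behavior.TRANSLUCENT)
```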
\section{Introduction} Let $M$ and $G$ be finite dimensional smooth manifolds. Let $Y_k$, $k=1,\dots, m$, be $C^6$ vector fields on $M$, $\alpha_k$ real valued $C^r$ functions on $G$, $\epsilon$ a positive number, and $(z_t^\epsilon)$ diffusions on a filtered probability space $(\Omega, {\mathcal F}, {\mathcal F}_t, \mathbb P)$ with values in $G$ and infinitesimal generator ${\mathcal L}_0^\epsilon={1\over \epsilon} {\mathcal L}_0$, which will be made precise later. The aim of this paper is to study limit theorems associated with the system of ordinary differential equations on $M$, \begin{equation} \label{1} \dot y_t^\epsilon(\omega)=\sum_{k=1}^m Y_k\left(y_t^\epsilon(\omega)\right)\alpha_k(z_t^\epsilon(\omega)) \end{equation} under the assumption that $\alpha_k$ `averages' to zero. The `average' is with respect to the unique invariant probability measure of ${\mathcal L}_0$, in case ${\mathcal L}_0$ satisfies strong H\"ormander's condition; more generally, the `average' is the projection to a suitable function space. We prove that $y_{t\over \epsilon}^\epsilon$ converges as $\epsilon\to 0$ to a Markov process whose Markov generator has an explicit expression. This study is motivated by problems arising from stochastic homogenization. It turns out that in the study of randomly perturbed systems with a conserved quantity, which does not necessarily take values in a linear space, the reduced equations for the slow variables can sometimes be transformed into (\ref{1}). Below, in Section \ref{section-Examples}, we illustrate this with four examples, including one on the orthonormal frame bundle over a Riemannian manifold. Of these examples, the first is from \cite{Li-geodesic}, where we did not know how to obtain a rate of convergence, and the last three are from \cite{Li-homogeneous}, where a family of interpolation equations on homogeneous manifolds are introduced. An additional example can be found in \cite{LI-OM-1}.
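A minimal concrete instance of (\ref{1}) may help fix ideas (this toy example is ours and is not one of the four examples below): take $G=S^1$, ${\mathcal L}_0={1\over 2}{d^2\over d\theta^2}$, whose unique invariant probability measure is the normalized Lebesgue measure, a single vector field $Y$ on $M$, and the coefficient $\alpha(\theta)=\cos\theta$, which averages to zero:

```latex
\dot y_t^\epsilon \;=\; Y\!\left(y_t^\epsilon\right)\cos\!\left(z_t^\epsilon\right),
\qquad z_t^\epsilon \;=\; B_{t/\epsilon} \ (\mathrm{mod}\ 2\pi),
\qquad \int_{S^1}\cos\theta\,\frac{d\theta}{2\pi}\;=\;0,
```

where $(B_t)$ is a standard one dimensional Brownian motion, so that $(z_t^\epsilon)$ has generator ${1\over \epsilon}{\mathcal L}_0$.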
\subsection{Outline of the Paper} In all the examples, which we describe in \S\ref{section-Examples} below, the scalar functions average to $0$ with respect to a suitable probability measure on $G$. If a Hamiltonian system approximates a physical system with error $\epsilon$ on a compact time interval, then over a time interval of size ${1\over \epsilon}$ the physical orbits deviate visibly from those of the Hamiltonian system, unless the error is reduced by oscillations. It is therefore natural, and a classical problem, to study ODEs whose right hand sides are random and whose time averages are zero. The objectives of the present article are: (1) to prove that, as $\epsilon$ tends to zero, the law of $(y_{s\over \epsilon}^\epsilon, s \le t)$ converges weakly to a probability measure $\bar \mu$ on the path space over $M$, and to describe the properties of the limiting Markov semigroups; (2) to estimate the rate of convergence, especially in the Wasserstein distance. For simplicity we assume that all the equations are complete. In sections \ref{section-formula}, \ref{section-uniform-estimates}, \ref{section-weak} and \ref{section-rate} we assume that ${\mathcal L}_0$ is a regularity improving Fredholm operator on a compact manifold $G$, see Definition \ref{def-Fredholm}. In Theorem \ref{thm-weak} we assume, in addition, that ${\mathcal L}_0$ has Fredholm index $0$; the strong H\"ormander condition can be used to replace the condition `regularity improving Fredholm operator of index $0$'. For simplicity, throughout the introduction, $\alpha_k$ are bounded and belong to $N^\perp$, where $N$ is the kernel of ${\mathcal L}_0^*$, the adjoint of the unbounded operator ${\mathcal L}_0$ in $L^2(G)$ with respect to the volume measure. In case ${\mathcal L}_0$ is not elliptic we assume, in addition, that $r\ge 3$ or $r\ge \max{\{3, {n\over 2} +1\}}$, depending on the result.
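The centering condition on the $\alpha_k$ can be written out explicitly, restating the definitions just given:

```latex
\alpha_k \in N^{\perp}
\quad\Longleftrightarrow\quad
\int_G \alpha_k\, \pi \; d\mathrm{vol} \;=\; 0
\quad \text{for every } \pi \in N=\ker\big({\mathcal L}_0^{*}\big).
```

When ${\mathcal L}_0$ satisfies strong H\"ormander's condition, $N$ is spanned by the density of the unique invariant probability measure $\mu$, and the condition reduces to $\int_G \alpha_k \, d\mu = 0$.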
The growth conditions on $Y_k$ are given in terms of a control function $V$ and a controlled function space $B_{V,r}$, where $r$ indicates the order of the derivatives to be controlled, see (\ref{BVr}). For simplicity we assume both $M$ and $G$ are compact. In Section \ref{section-estimates} we present two elementary lemmas, Lemma \ref{lemma3} and Lemma \ref{lemma3-2}, assuming ${\mathcal L}_0$ mixes exponentially in a weighted total variation norm with weight $W: G\to {\mathbb R}$. In Section \ref{section-formula}, for ${\mathcal L}_0$ a regularity improving Fredholm operator and $f$ a $C^2$ function, we deduce a formula for $f(y_{t\over \epsilon}^\epsilon)$ in which the transmission of the randomness from the fast motion $(z_t^\epsilon)$ to the slow motion $(y_t^\epsilon)$ is manifested in a martingale. This provides a platform for the uniform estimates over large time intervals, the weak convergences, and the study of the rate of convergence in later sections. In Section \ref{section-uniform-estimates}, we obtain estimates, uniform in $\epsilon$, for functionals of $y_{t}^\epsilon$ over $[0,{1\over \epsilon}]$. Let ${\mathcal L}_0$ be a regularity improving Fredholm operator, $y_0^\epsilon=y_0$, and $V$ a $C^2$ function such that $\sum_{j=1}^m|L_{Y_j}V|\le c+KV$ and $\sum_{i,j=1}^m|L_{Y_i}L_{Y_j}V| \le c+KV$ for some numbers $c$ and $K$. Then, by Theorem \ref{uniform-estimates}, for every number $p\ge 1$ there exists a positive number $\epsilon_0$ such that $\sup_{0<\epsilon\le \epsilon_0}{\mathbf E} \sup_{0\le u\le t} V^p(y_{u\over \epsilon}^\epsilon)$ is finite and belongs to $B_{V,0}$ as a function of $y_0$. This leads to convergence in the Wasserstein distance and will be used later to prove a key lemma on averaging functions along the paths of $(y_t^\epsilon, z_t^\epsilon)$.
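As a quick numerical illustration of the averaging mechanism (our own sketch, not taken from the paper): for the toy fast process $z_s^\epsilon=B_{s/\epsilon}$ on the circle with $\alpha=\cos$, the time average of $\alpha$ along the fast process shrinks as $\epsilon\to0$, which is the centering used throughout.

```python
# Numerical sketch (not from the paper): the time average of alpha(z_s) over
# [0, T], for the fast process z_s = B_{s/eps} on the circle with alpha = cos,
# tends to 0 as eps -> 0.
import numpy as np

def time_average_alpha(eps, T=1.0, n_steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Brownian motion run at speed 1/eps: increments have variance dt/eps.
    z = np.cumsum(rng.normal(0.0, np.sqrt(dt / eps), size=n_steps))
    return float(np.mean(np.cos(z)))  # Riemann sum for (1/T) \int alpha(z_s) ds
```

With a fixed seed, `time_average_alpha(1e-3)` is close to zero, while larger values of `eps` leave a visible transient coming from the deterministic start $z_0=0$.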
In Section \ref{section-weak}, ${\mathcal L}_0$ is an operator on a compact manifold $G$ satisfying H\"ormander's condition and with Fredholm index $0$; $M$ has positive injectivity radius and satisfies other geometric restrictions. In particular we do not make any assumption on the ergodicity of ${\mathcal L}_0$. Let $\overline{\alpha_i\beta_j}$ denote $\sum_{l} u_l\<\alpha_i\beta_j, \pi_l\>$, where $\beta_j=-{\mathcal L}_0^{-1}\alpha_j$, $\{u_l\}$ is a basis of the kernel of ${\mathcal L}_0$, and $\{\pi_l\}$ the dual basis in the kernel of ${\mathcal L}_0^*$. Theorem \ref{thm-weak} states that, given bounds on $Y_k$ and its derivatives and for $\alpha_k\in C^r$ where $r\ge \max{\{3, {n\over 2}+1\}}$, $(y_{s\over \epsilon}^\epsilon, s \le t)$ converges weakly, as $\epsilon \to 0$, to the Markov process with Markov generator $\bar {\mathcal L}=\sum_{i,j=1}^m \overline{\alpha_i\beta_j} L_{Y_i}L_{Y_j}$. This follows from a tightness result, Proposition \ref{tightness}, where no assumption on the Fredholm index of ${\mathcal L}_0$ is made, and a law of large numbers for sub-elliptic operators on compact manifolds, Lemma \ref{lln}. Convergences of $\{(y_{t\over \epsilon}^\epsilon, 0\le t\le T)\}$ in the Wasserstein $p$-distance are also obtained. In Section \ref{section-sde} we study the solution flows of SDEs and their associated Kolmogorov equations, to be applied to the limiting operator $\bar {\mathcal L}$ in Section \ref{section-rate}; otherwise this section is independent of the rest of the paper. Let $Y_k, Y_0$ be $C^6$ and $C^5$ vector fields respectively. If $M$ is compact, or more generally if the $Y_k$ are $BC^5$ vector fields, the conclusions in this section hold trivially. Denote by $B_{V,4}$ the set of functions whose derivatives up to order $4$ are controlled by a function $V$, cf.\ (\ref{BVr}). Let $\Phi_t(y)$ be the solution flow to $$dy_t=\sum_{k=1}^m Y_k(y_t)\circ dB_t^k+Y_0(y_t)dt.$$ Let $P_t f(y)={\mathbf E} f(\Phi_t(y))$ and $Z={1\over 2}\sum_{k=1}^m \nabla_{Y_k} Y_k+Y_0$.
Let $V\in C^2(M, {\mathbb R}_+)$ be such that $\sup_{s\le t}{\mathbf E} V^q(\Phi_s(y))\in B_{V,0}$ for every $q\ge 1$. This assumption on $V$ is implied by the following conditions: $|L_{Y_i}L_{Y_j} V| \le c+KV$, $|L_{Y_j}V|\le c+KV$, where $c, K$ are constants. Let $\tilde V=1+\ln (1+|V|)$. We assume, in addition, that for some number $c$ the following hold: \begin{equation} \label{control-conditions} {\begin{split} &\sum_{k=1}^m\sum_{\alpha=0}^5 | \nabla ^{(\alpha)} Y_k| \in B_{V,0}, \; &\sum_{\alpha=0}^4 |\nabla ^{(\alpha)} Y_0| \in B_{V,0},\; \\ &\sum_{k=1}^m| \nabla Y_k|^2 \le c\tilde V,\; &\sup_{|u|=1}\<\nabla_u Z, u\>\le c\tilde V. \end{split}} \end{equation} Then there is a global smooth solution flow $\Phi_t(y)$, Theorem \ref{limit-thm}. Furthermore, for $f\in B_{V,4}$, ${\mathcal L} f\in B_{V,2}$, ${\mathcal L}^2 f\in B_{V,0}$, and $P_tf \in B_{V,4}$. For $M={\mathbb R}^n$, an example of the control pair is: $V(x)=C(1+|x|^2)$ and $\tilde V(x)=\ln(1+|x|^2)$. Our conditions are weaker than those commonly used in the probability literature for $d(P_tf)$, in two ways: firstly, we allow unbounded first order derivatives; secondly, we allow one-sided conditions on the drift and its first order derivatives. In this regard, we extend a theorem of W. Kohler and G. C. Papanicolaou \cite{Kohler-Papanicolaou74}, where they used estimates from O. A. Oleinik and E. V. Radkevi{\v{c}} \cite{Oleinik-Radkevic-book}. The estimates on the derivative flows obtained in this section are often assumptions in applications of Malliavin calculus to the study of stochastic differential equations. Results in this section might be of independent interest. Let $P_t$ be the Markov semigroup generated by $\bar {\mathcal L}$.
In Section \ref{section-rate}, we prove the following estimate: $|{\mathbf E} f(\Phi_t^\epsilon(y_0))- P_tf(y_0)|\le C(t)\gamma(y_0)\epsilon \sqrt{|\log \epsilon|}$, where $C(t)$ is a constant, $\gamma$ is a function in $B_{V,0}$, and $\Phi_t^\epsilon(y_0)$ is the solution to (\ref{1}) with initial value $y_0$. The conditions on the vector fields $Y_k$ are similar to (\ref{control-conditions}); we also assume the conditions of Theorem \ref{uniform-estimates} and that ${\mathcal L}_0$ satisfies strong H\"ormander's condition. We incorporate traditional techniques on time averaging with techniques from homogenization. The homogenization techniques were developed from \cite{Li-averaging}, which was inspired by the study in M. Hairer and G. Pavliotis \cite{Hairer-Pavliotis}. For the rate of convergence we were particularly influenced by the following papers: W. Kohler and G. C. Papanicolaou \cite{Kohler-Papanicolaou74, Kohler-Papanicolaou75} and G. C. Papanicolaou and S.R.S. Varadhan \cite{Papanicolaou-Varadhan73}. Denote by $\hat P_{y^\epsilon_{t\over \epsilon}}$ the probability distribution of the random variable $y^\epsilon_{t\over \epsilon}$ and by $\bar \mu_t$ the probability measure determined by $P_t$. Then, under suitable conditions, $W_1(\hat P_{y^\epsilon_{t\over \epsilon}}, \bar \mu_t) \le C\epsilon^r$, where $r$ is any positive number less than or equal to ${1\over 4}$ and $W_1$ denotes the Wasserstein 1-distance, see \S \ref{Wasserstein}. \subsection{Main Theorems} \label{discussion} We aim to impose as little as possible on the vector fields $\{Y_k\}$; hence a few different sets of assumptions are used. For the examples we have in mind, $G$ is a compact Lie group acting on a manifold $M$, and so for simplicity $G$ is assumed to be compact throughout the article, with a few exceptions. In a future study, it would be nice to provide some interesting examples in which $G$ is not compact.
If $M$ is also compact, only the following two conditions are needed: (a) ${\mathcal L}_0$ satisfies strong H\"ormander's condition; (b) $\{\alpha_k\} \subset C^r\cap N^\perp$, where $N$ is the kernel of ${\mathcal L}_0^*$ and $r$ is a sufficiently large number. If ${\mathcal L}_0$ is elliptic, `$C^r$' can be replaced by `bounded measurable'. For the convergence, condition (a) can be replaced by `${\mathcal L}_0$ satisfies H\"ormander's condition and has Fredholm index $0$'. If ${\mathcal L}_0$ has a unique invariant probability measure, no condition is needed on the Fredholm index of ${\mathcal L}_0$. {\bf Theorem \ref{thm-weak} and Corollary \ref{convergence-wasserstein-p}. } Under the conditions of Proposition \ref{tightness} and Assumption \ref{assumption-Hormander}, $(y_{t\over\epsilon}^\epsilon)$ converges weakly to the Markov process determined by $$\bar {\mathcal L} =-\sum_{i,j=1}^m \overline{ \alpha_i {\mathcal L}_0^{-1} \alpha_j } L_{Y_i}L_{Y_j}, \quad \overline{ \alpha_i {\mathcal L}_0^{-1} \alpha_j }=\sum_{b=1}^{n_0} u_b \< \alpha_i {\mathcal L}_0^{-1} \alpha_j ,\pi_b\>,$$ where $n_0$ is the dimension of the kernel of ${\mathcal L}_0$ which, by the assumption that ${\mathcal L}_0$ has Fredholm index $0$, equals the dimension of the kernel of ${\mathcal L}_0^*$. The set of functions $\{u_b\}$ is a basis of $\mathop{\rm ker} ({\mathcal L}_0)$ and $\{\pi_b\}\subset \mathop{\rm ker} ({\mathcal L}_0^*)$ its dual basis. In case ${\mathcal L}_0$ satisfies strong H\"ormander's condition, there is a unique invariant measure and $\overline{ \alpha_i {\mathcal L}_0^{-1} \alpha_j }$ is simply the average of $\alpha_i {\mathcal L}_0^{-1} \alpha_j$ with respect to the unique invariant measure. Let $p\ge 1$ be a number and $V$ a Lyapunov type function such that $\rho^p\in B_{V, 0}$, a function space controlled by $V$.
If furthermore Assumption \ref{assumption2-Y} holds, $(y_{\cdot\over \epsilon}^\epsilon)$ converges, on $[0,t]$, in the Wasserstein $p$-distance. \\[1em] {\bf Theorem \ref{rate}.} Denote by $\Phi_t^\epsilon(\cdot)$ the solution flow to (\ref{1}) and by $P_t$ the semigroup for $\bar {\mathcal L}$. If Assumption \ref{assumption-on-rate-result} holds, then for $f \in B_{V,4}$, $$\left|{\mathbf E} f\left(\Phi^\epsilon_{T\over \epsilon}(y_0)\right) -P_Tf(y_0)\right|\le \epsilon |\log \epsilon|^{1\over 2}C(T)\gamma_1(y_0),$$ where $\gamma_1\in B_{V,0}$ and $C(T)$ is a constant increasing in $T$. Similarly, if $f\in BC^4$, \begin{equation} \label{rate-summary} \left|{\mathbf E} f\left(\Phi^\epsilon_{T\over \epsilon}(y_0)\right) -P_Tf(y_0)\right| \le \epsilon |\log \epsilon|^{1\over 2}\,C(T)\gamma_2(y_0) \left(1+ |f|_{4, \infty}\right), \end{equation} where $\gamma_2$ is a function in $B_{V,0}$ independent of $f$ and $C(T)$ is increasing in $T$. \\[1em] A complete connected Riemannian manifold is said to have bounded geometry if it has strictly positive injectivity radius, and if the Riemannian curvature tensor and its covariant derivatives are bounded. {\bf Proposition \ref{proposition-rate}.} Suppose that $M$ has bounded geometry, $\rho_o^2 \in B_{V,0}$, and Assumption \ref{assumption-on-rate-result} holds. Let $\bar \mu$ be the limit measure and $\bar \mu_t=(ev_t)_*\bar \mu$. Then for every $r<{1\over 4}$ there exist $C(T)\in B_{V,0}$ and $\epsilon_0>0$ such that for all $\epsilon\le\epsilon_0$ and $t\le T$, $$d_W({\mathrm {Law}} ({y^\epsilon_{t\over \epsilon}}), \bar \mu_t)\le C(T)\epsilon^{r}.$$ Besides the fact that we work on manifolds, with their inherited non-linearity and the problem of the cut locus, the following aspects of the paper are perhaps new. (a) We do not assume that there exists a unique invariant probability measure for the noise, and the effective processes are obtained by a suitable projection, accommodating one type of degeneracy.
Furthermore, the noise takes values in another manifold, accommodating `removable' degeneracy. For example, the stochastic process in question lives in a Lie group, while the noise is entirely in the directions of a subgroup. (b) We use Lyapunov functions to control the growth of the vector fields and their derivatives, leading to estimates uniform in $\epsilon$ and to the conclusion on the convergence in the Wasserstein topologies. A key step for the convergence is a law of large numbers, with rates, for sub-elliptic operators (i.e. operators satisfying H\"ormander's sub-elliptic estimates). (c) Instead of working with iterated time averages we use a solution to Poisson equations to reveal the effective operator. Functionals of the processes $y_{t\over \epsilon}^\epsilon$ split naturally into the sum of a fast martingale, a finite variation term involving a second order differential operator in H\"ormander form, and a term of order $\epsilon$. From this we obtain the effective diffusion, in explicit H\"ormander form. It is perhaps also new to have an estimate for the rate of the convergence in the Wasserstein distance. Finally, we improve known theorems on the existence of global smooth solutions for SDEs in \cite{Li-flow}, cf.\ Theorem \ref{limit-thm} below, where a criterion is given in terms of a pair of Lyapunov functions. New estimates on the moments of higher order covariant derivatives of the derivative flows are also given. \subsection{Classical Theorems} We review, briefly, basic ideas from the existing literature on random ordinary differential equations with fast oscillating vector fields. Let $F(x,t,\omega, \epsilon):=F^{(0)}(x,t,\omega)+\epsilon F^{(1)}(x,t,\omega)$, where $F^{(i)}(x,t,\cdot)$ are measurable functions, for which a Birkhoff ergodic theorem holds whose limit is denoted by $\bar F$.
The solutions to the equations $\dot y_t^\epsilon = F(y_t^\epsilon,{t\over \epsilon},\omega, \epsilon)$ over a time interval $[0, t]$ can be approximated by the solution to the averaged equation driven by $\bar F$. If $\bar F^{(0)}=0$, we should observe the solutions in the next time scale and study $\dot x_t^\epsilon ={1\over \epsilon} F(x_t^\epsilon,{t\over \epsilon^2},\omega, \epsilon)$. See R. L. Stratonovich \cite{Stratonovich-rhs, Stratonovich61}. Suppose for some functions $\bar a_{j,k}$ and $\bar b_j$ the following estimates hold uniformly: \begin{equation}\label{condition-rate} {\begin{split} &\left| {1\over \epsilon^3} \int_s^{s+\epsilon} \int_{s}^{r_1} {\mathbf E} F_j^{(0)}\left(x, {r_2\over \epsilon^2}\right) F^{(0)}_k\left(x, {r_1\over \epsilon^2}\right)\, dr_2\; dr_1-\bar a_{j,k}(s,x)\right| \le o(\epsilon),\\ &\left| {1\over \epsilon^3} \int_s^{s+\epsilon} \int_{s}^{r_1} \sum_{k=1}^d {\mathbf E} {\partial F_j^{(0)}\over \partial x_k}\left( x, {r_2\over \epsilon^2}\right) F_k^{(0)}\left( x, {r_1\over \epsilon^2}\right)\, dr_2\; dr_1 +{1\over \epsilon} \int_s^{s+\epsilon} {\mathbf E} F_j^{(1)} \left( x, {r\over \epsilon^2}\right) \,dr-\bar b_j(x, s)\right|\\ &\quad \le o(\epsilon). \end{split}} \end{equation} Then, under a `strong mixing' condition with a suitable mixing rate, the solutions of the equations $\dot x_t^\epsilon ={1\over \epsilon} F(x_t^\epsilon,{t\over \epsilon^2},\omega, \epsilon)$ converge weakly on any compact interval to a Markov process. This is a theorem of R. L. Stratonovich \cite{Stratonovich61} and R. Z. Khasminskii \cite{Khasminskii-rhs}, further refined and explored in Khasminskii \cite{Khasminskii66} and A. N. Borodin \cite{Borodin77}. These theorems lay the foundation for investigations beyond ordinary differential equations with a fast oscillating right hand side. In our case, noise comes into the system via an ${\mathcal L}_0$-diffusion satisfying H\"ormander's conditions, and hence we can bypass these assumptions and also obtain convergences in the Wasserstein distances.
For manifold valued stochastic processes, some difficulties are caused by the inherent non-linearity. For example, integrating a vector field along a path makes sense only after it is parallel translated back. Parallel transport of a vector field along a path, from time $t$ to time $0$, involves the whole path up to time $t$ and introduces extra difficulties; this is still an unexplored territory warranting further investigation. For the proof of tightness, the non-linearity causes particular difficulty if the Riemannian distance function is not smooth. The advantage of working in a manifold setting is that for some specific physical models the noise can be untwisted and becomes easy to deal with. Our estimates for the rate of convergence, sections \ref{section-rate} and \ref{Wasserstein}, can be considered as an extension of those in W. Kohler and G. C. Papanicolaou \cite{Kohler-Papanicolaou74, Kohler-Papanicolaou75}, which were in turn developed from the following sequence of remarkable papers: R. Cogburn and R. Hersh \cite{Cogburn-Hersh}, J. B. Keller and G. C. Papanicolaou \cite{Papanicolaou-Keller}, R. Hersh and M. Pinsky \cite{Hersh-Pinsky72}, R. Hersh and G. C. Papanicolaou \cite{Hersh-Papanicolaou72}, and G. C. Papanicolaou and S. R. S. Varadhan \cite{Papanicolaou-Varadhan73}. See also T. Kurtz \cite{Kurtz70} and G. C. Papanicolaou, D. Stroock and S. R. S. Varadhan \cite{Papanicolaou-Stroock-Varadhan77}. The condition $\bar F=0$ need not hold for this type of scaling and convergence. If $F(x, t, \omega, \epsilon)=F^{(0)} (x, \zeta_t(\omega))$, where $\zeta_t$ is a stationary process with values in ${\mathbb R}^m$, and $\bar F^{(0)}=X_H$, the Hamiltonian vector field associated to a function $H\in BC^3( {\mathbb R}^2; {\mathbb R})$ whose level sets are closed connected curves without intersections, then $H(y_{t\over\epsilon}^\epsilon)$ converges to a Markov process, under suitable mixing and technical assumptions. See A. N. Borodin and M.
Freidlin \cite{Borodin-Freidlin}, also M. Freidlin and M. Weber \cite{Freidlin-Weber}, where a first integral replaces the Hamiltonian, and X.-M. Li \cite{Li-geodesic}, where the value of a map from one manifold to another is preserved by the unperturbed system. In M. Freidlin and A. D. Wentzell \cite{Freidlin-Wentzell}, the following type of central limit theorem is proved: ${1\over \sqrt \epsilon} \left(H(x_s^\epsilon)-H(\bar x_s)\right)$ converges to a Markov diffusion. This formulation is not suitable when the conserved quantity takes values in a non-linear space. For the interested reader, we also refer to the following articles on limit theorems, averaging and homogenization for stochastic equations on manifolds: N. Enriquez, J. Franchi, Y. LeJan \cite{Enriquez-Franchi-LeJan}, I. Gargate, P. Ruffino \cite{Gargate-Ruffino}, N. Ikeda, Y. Ochi \cite{Ikeda-Ochi}, Y. Kifer \cite{Kifer92}, M. Liao and L. Wang \cite{Liao-Wang-05}, S. Manabe, Y. Ochi \cite{Manabe-Ochi}, Y. Ogura \cite{Ogura01}, M. Pinsky \cite{Pinsy-parallel}, and R. Sowers \cite{Sowers01}. \subsection{Further Questions.} (1) I am grateful to the associate editor for pointing out the paper by C. Liverani and S. Olla \cite{Liverani-Olla}, where a randomly perturbed Hamiltonian system is studied in the context of weakly interacting particle systems. Their system is somewhat related to the completely integrable equation studied in \cite{Li-averaging}, leading to a new problem which we now state. Denote by $X_f$ the Hamiltonian vector field on a symplectic manifold corresponding to a function $f$. If the symplectic manifold is ${\mathbb R}^{2n}$ with the canonical symplectic form, $X_f$ is the skew gradient of $f$. Suppose that $\{H_1, \dots, H_n\}$ is a completely integrable system, i.e. the $H_i$ Poisson commute at every point and their Hamiltonian vector fields are linearly independent at almost all points.
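For a concrete instance of these definitions (a toy example of ours, not tied to \cite{Liverani-Olla}): on ${\mathbb R}^4$ with the canonical symplectic form, the decoupled oscillator energies $H_i=(p_i^2+q_i^2)/2$, $i=1,2$, form a completely integrable system. One can check numerically that they Poisson commute and that their skew gradients are linearly independent at a generic point:

```python
import numpy as np

# Coordinates x = (q1, q2, p1, p2) on R^4; H_i = (p_i^2 + q_i^2)/2 (toy choice).
def grad_H(i, x):
    g = np.zeros(4)
    g[i - 1] = x[i - 1]      # dH_i/dq_i = q_i
    g[i + 1] = x[i + 1]      # dH_i/dp_i = p_i
    return g

# Canonical symplectic matrix J; the skew gradient is X_f = J grad f.
J = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.eye(2), np.zeros((2, 2))]])

def poisson_bracket(i, j, x):
    return grad_H(i, x) @ J @ grad_H(j, x)

rng = np.random.default_rng(1)
x = rng.standard_normal(4)
X1, X2 = J @ grad_H(1, x), J @ grad_H(2, x)   # Hamiltonian vector fields X_{H_i}
print(abs(poisson_bracket(1, 2, x)))           # 0: H_1 and H_2 Poisson commute
print(np.linalg.matrix_rank(np.column_stack([X1, X2])))  # 2: independent fields
```

Here $X_{H_1}=(p_1,0,-q_1,0)$ and $X_{H_2}=(0,p_2,0,-q_2)$, which are independent whenever both oscillators have nonzero energy.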
Following \cite{Li-averaging} we consider a completely integrable SDE perturbed by a transversal Hamiltonian vector field: $$dy_t^\epsilon = \sum_{i=1}^n X_{H_i}(y_t^\epsilon)\circ dW_t^i+X_{H_0}(y_t^\epsilon)dt+\epsilon X_K(y_t^\epsilon)dt.$$ Suppose that $X_{H_0}$ commutes with $X_{H_k}$ for $k=1,\dots, n$; then each $H_i$ is a first integral of the unperturbed system. Then, \cite[Th 4.1]{Li-averaging}, within the action angle coordinates of a regular value of the energy function $H=(H_1, \dots, H_n)$, the energies $\{ H_1(y^\epsilon_{t\over \epsilon^2}), \dots, H_n(y^\epsilon_{t\over \epsilon^2})\}$ converge weakly to a Markov process. When restricted to the level sets of the energies, the fast motions are elliptic. It would be desirable to remove the `complete integrability' in favour of H\"ormander type conditions. There is a non-standard symplectic form on $({\mathbb R}^4)^N$ with respect to which the vector fields in \cite{Liverani-Olla} are Hamiltonian vector fields, and when restricted to level sets of the energies the unperturbed system in \cite{Liverani-Olla} satisfies H\"ormander's condition, see \cite[section 5]{Liverani-Olla}; it therefore provides a motivating example for further studies. Finally, note that the driving vector fields in (\ref{1}) are of a special form; the results here apply neither to the systems in \cite{Li-averaging} nor to that in \cite{Liverani-Olla}, and hence it would be interesting to formulate and develop limit theorems for more general random ODEs which include these two cases. (2) It should be interesting to develop a theory for the ODEs below \begin{equation} \label{1} \dot y_t^\epsilon(\omega)=\sum_{k=1}^m Y_k\left(y_t^\epsilon(\omega)\right)\alpha_k(z_t^\epsilon(\omega), y_t^\epsilon) \end{equation} where $\alpha_k$ depends also on the $y^\epsilon$ process.
(3) It would be nice to extend the theory to allow the noise to live in a non-compact manifold, in which case ${\mathcal L}_0$ should be an Ornstein-Uhlenbeck type operator whose drift term would provide for a deformed volume measure. \medskip {\it Notation.} Throughout this paper ${\bf \mathcal B}_b(M;N)$, $C_K^r(M;N)$, and $BC^r(M;N)$ denote the sets of functions from $M$ to $N$ that are respectively bounded measurable, $C^r$ with compact support, and bounded $C^r$ with bounded first $r$ derivatives. If $N={\mathbb R}$ the letter $N$ will be suppressed. Also ${\mathbb L}(V_1;V_2)$ denotes the space of bounded linear maps, and $C^r(\Gamma TM)$ denotes the $C^r$ vector fields on a manifold $M$. \section{Examples} \label{section-Examples} Let $\{W_t^k\}$ be independent real valued Brownian motions on a given filtered probability space, and let $\circ$ denote Stratonovich integration. In the following $H_0$ and $ A_k$ are smooth vector fields, and $\{A_1, \dots, A_{n(n-1)/2}\}$ is an orthonormal basis, at each point, of the vertical tangent spaces. To be brief, we do not specify the properties of the vector fields, but instead refer the interested reader to \cite{Li-geodesic} for details. For any $\epsilon>0$, the stochastic differential equations $$du_t^\epsilon=H_0(u_t^\epsilon)dt+{1\over \sqrt \epsilon}\sum_{k=1}^{n(n-1)\over 2} A_k(u_t^\epsilon)\circ dW_t^k$$ are degenerate, and they interpolate between the geodesic equation ($\epsilon=\infty$) and Brownian motion on the fibres ($\epsilon=0$). The fast random motion is transmitted to the horizontal directions by the action of the Lie brackets $[H_0, A_k]$. If $ H_0=0$, the system has a conserved quantity: the projection from the orthonormal frame bundle to the base manifold. This allows us to separate the slow variable $(y_t^\epsilon)$ from the fast variable $(z_t^\epsilon)$. The reduced equation for $(y_t^\epsilon)$, once suitable `coordinate maps' are chosen, can be written in the form of (\ref{1}).
In \cite{Li-geodesic} we proved that $(y^\epsilon_{t\over \epsilon})$ converges weakly to a rescaled horizontal Brownian motion. Recently, J. Angst, I. Bailleul and C. Tardif gave this a beautiful treatment using rough path analysis \cite{Angst-Bailleul-Tardif}. By the theorems in this article, the above model can be generalised to include random perturbation by hypoelliptic diffusions, i.e. the $A_k$ together with their Lie brackets generate all vertical directions. In \cite{Li-geodesic} we did not know how to obtain a rate for the convergence; Theorem \ref{rate} of this article applies and indeed yields an upper bound for the rate of convergence. As a second example, we consider, on the special orthogonal group $SO(n)$, the following equations: \begin{equation} \label{interpolation} dg_t^\epsilon={1\over \sqrt \epsilon}\sum_{k=1}^{(n-1)(n-2)\over 2} g_t^\epsilon E_{k}\circ dW_t^k + g_t^\epsilon Y_0dt, \end{equation} where $\{E_k\}$ is an orthonormal basis of ${\mathfrak {so}}(n-1)$, as a subspace of ${\mathfrak {so}}(n)$, and $Y_0$ is a skew symmetric matrix orthogonal to ${\mathfrak {so}}(n-1)$. The above equation is closely related to the following set of equations: $$dg_t=\gamma\sum_{k=1}^{(n-1)(n-2)\over 2} g_t E_{k}\circ dW_t^k +\delta g_t Y_0dt,$$ where $\gamma, \delta$ are two positive numbers. If $\delta=0$ and $\gamma=1$, the solutions are Brownian motions on $SO(n-1)$. If $\delta={1\over |Y_0|}$ and $\gamma=0$, the solutions are unit speed geodesics on $SO(n)$. These equations interpolate between a Brownian motion on the sub-group $SO(n-1)$ and a one parameter subgroup of $SO(n)$. See \cite{Li-homogeneous}. Take $\delta=1$ and let $\gamma={1\over \sqrt \epsilon}\to \infty$: what could be the `effective limit', if it exists? The slow components of the solutions, which we denote by $(u_t^\epsilon)$, satisfy equations of the form (\ref{1}). They are `horizontal lifts' of the projections of the solutions to $S^n$.
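The transmission of noise out of the sub-group directions in (\ref{interpolation}) can be checked by hand for $n=3$ (a minimal sketch with our own normalisation $A_{i,j}=(E_{i,j}-E_{j,i})/\sqrt 2$): taking the noise direction $E_1=A_{1,2}$, which spans ${\mathfrak{so}}(2)$, and the drift $Y_0=A_{1,3}$ orthogonal to it, one bracket already produces the missing direction $A_{2,3}$, so all of ${\mathfrak{so}}(3)$ is reached:

```python
import numpy as np

def A(i, j, n=3):
    # Orthonormal basis elements (E_{ij} - E_{ji})/sqrt(2) of so(n)
    M = np.zeros((n, n))
    M[i - 1, j - 1], M[j - 1, i - 1] = 1 / np.sqrt(2), -1 / np.sqrt(2)
    return M

bracket = lambda X, Y: X @ Y - Y @ X

E1 = A(1, 2)            # noise direction: basis of so(2) inside so(3)
Y0 = A(1, 3)            # drift direction, orthogonal to so(2)
B = bracket(E1, Y0)     # proportional to A(2,3): a new direction

span = np.column_stack([M.ravel() for M in (E1, Y0, B)])
print(np.linalg.matrix_rank(span))  # 3 = dim so(3): brackets fill all directions
```

Indeed $[A_{1,2}, A_{1,3}]=-A_{2,3}/\sqrt 2$, so the single bracket is nonzero and independent of $E_1$ and $Y_0$.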
If ${\mathfrak m}$ is the orthogonal complement of ${\mathfrak {so}}(n-1)$ in ${\mathfrak {so}}(n)$, then ${\mathfrak m}$ is ${\mathop {\rm Ad}}_H$-invariant and ${\mathop {\rm Ad}}_H$-irreducible; noise is transmitted from ${\mathfrak h}$ to every direction in ${\mathfrak m}$, and this in a uniform way. It is therefore plausible that $u_{t\over \epsilon}^\epsilon$ can be approximated by a diffusion $\bar u_t$ of constant rank. The projection of $\bar u_t$ to $S^n$ is a scaled Brownian motion with scale $\lambda$. The scale $\lambda$ is a function of the dimension $n$, but is independent of $Y_0$; it is associated to an eigenvalue of the Laplacian on $SO(n-1)$, indicating the speed of propagation. As a third example we consider the Hopf fibration $\pi: S^3\to S^2$. Let $\{X_1, X_2, X_3\}$ be the Pauli matrices; they form an orthonormal basis of ${\mathfrak {su}}(2)$ with respect to the canonical bi-invariant Riemannian metric: $$X_1=\left(\begin{matrix} i &0\\0&-i \end{matrix}\right), \quad X_2=\left(\begin{matrix} 0 &1\\-1&0 \end{matrix}\right), \quad X_3=\left(\begin{matrix} 0 &i\\i&0 \end{matrix}\right). $$ Denote by $X^*$ the left invariant vector field generated by $X\in {\mathfrak {su}}(2)$. By declaring $\{{1\over \sqrt \epsilon}X_1^*, X_2^*, X_3^*\}$ an orthonormal frame, we obtain a family of left invariant Riemannian metrics $m^\epsilon$ on $S^3$. The Berger spheres, $(S^3, m^\epsilon)$, converge in measured Gromov-Hausdorff topology to the lower dimensional sphere $S^2({1\over 2})$. For further discussions see K. Fukaya \cite{Fukaya} and J. Cheeger and M. Gromov \cite{Cheeger-Gromov}. Let $W_t$ be a one dimensional Brownian motion and take $Y$ from ${\mathfrak m}:=\<X_2, X_3\>$. The infinitesimal generator of the equation $ dg_t^\epsilon={1\over \sqrt \epsilon} X_1^*(g_t^\epsilon)\circ dW_t+Y^*(g_t^\epsilon) \;dt$ satisfies the weak H\"ormander condition.
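A direct computation confirms this; for instance with $Y=X_2$ (any nonzero $Y\in{\mathfrak m}$ works equally well), $[X_1,X_2]=2X_3$, so the diffusion direction $X_1$, the drift $Y$ and one bracket already span ${\mathfrak{su}}(2)$. A numerical check:

```python
import numpy as np

# The basis of su(2) given in the text
X1 = np.array([[1j, 0], [0, -1j]])
X2 = np.array([[0, 1], [-1, 0]], dtype=complex)
X3 = np.array([[0, 1j], [1j, 0]])

bracket = lambda A, B: A @ B - B @ A

Y = X2                      # drift direction chosen from m = <X2, X3>
B1 = bracket(X1, Y)         # equals 2*X3

# Real span of {X1, Y, [X1, Y]} inside su(2), which has real dimension 3
def real_vec(M):
    return np.concatenate([M.ravel().real, M.ravel().imag])

span = np.column_stack([real_vec(M) for M in (X1, Y, B1)])
print(np.linalg.matrix_rank(span))  # 3: diffusion + drift + one bracket span su(2)
```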
The `slow motions', suitably scaled, converge to a `horizontal' Brownian motion whose generator is ${1\over 2}c\mathop{\rm trace}_{{\mathfrak m}}\nabla d$, where the trace is taken in ${\mathfrak m}$. A slightly different, ad hoc, example on the Hopf fibration is discussed in \cite{LI-OM-1}. Analogous equations can be considered on $SU(n)$, where the diffusion coefficients come from a maximal torus. Finally we give an example where the noise $(z_t^\epsilon)$ in the reduced equation is not elliptic. Let $M=SO(4)$, let $E_{i,j}$ be the elementary $4\times 4$ matrices, and let $A_{i,j}={1\over \sqrt 2} (E_{i,j}-E_{j,i})$. For $k=1,2$ and $3$, we consider the equations $$dg_t^\epsilon={1\over \sqrt\epsilon} A_{1,2}^*(g_t^\epsilon)\circ db_t^1+ {1\over \sqrt\epsilon} A_{1,3}^*(g_t^\epsilon)\circ db_t^2+A_{k,4}^*(g_t^\epsilon)dt.$$ The slow components of the solutions of these equations again satisfy an equation of the form (\ref{1}). \section{Preliminary Estimates} \label{section-estimates} Let ${\mathcal L}_0$ be a diffusion operator on a manifold $G$, and let $Q_t$ denote both its transition semigroup and its transition probabilities. Let $\|\cdot \|_{TV}$ denote the total variation norm of a measure, normalized so that the total variation norm between two probability measures is at most $2$. By the duality formulation for the total variation norm, $$\|\mu\|_{TV}=\sup_{|f| \le 1, f\in {\bf \mathcal B}_b(G;{\mathbb R})} \left|\int_G fd\mu \right|.$$ For $W\in {\bf \mathcal B}( G; [1,\infty))$ denote by $\|f\|_W$ the weighted supremum norm and by $\|\mu\|_{TV,W}$ the weighted total variation norm: $$\|f\|_W=\sup_{x\in G}{|f(x)|\over W(x)}, \quad \|\mu\|_{TV, W}= \sup_{ \{ \|f\|_W\le 1\}} \left| \int_G f d\mu\right| .
$$ \begin{assumption} \label{assumption1} There is an invariant probability measure $\pi$ for ${\mathcal L}_0$, a real valued function $W\in L^1(G, \pi)$ with $W\ge 1$, and numbers $\delta>0$ and $a>0$ such that $$\sup_{x\in G}{\|Q_t(x,\cdot)-\pi\|_{TV, W}\over W(x)} \le ae^{-\delta t}.$$ \end{assumption} If $G$ is compact we take $W\equiv 1$. In the following lemma we collect some elementary estimates, which will be used to prove Lemmas \ref{lemma3} and \ref{lemma3-2}; for completeness their proofs are given in the appendix. Write $\bar W=\int_G W d\pi$. \begin{lemma} \label{lemma2} Assume Assumption \ref{assumption1}. Let $f,g:G\to {\mathbb R}$ be bounded measurable functions and let $c_\infty=|f|_\infty\|g\|_W$. Then the following statements hold for all $s, t\ge 0$. \begin{enumerate} \item [(1)] Let $(z_t)$ be an ${\mathcal L}_0$ diffusion. If $ \int_G gd\pi=0$, then $${\begin{split} &\left|{1\over t-s} \int_s^{t} \int_s^{s_1} \left( {\mathbf E}\left\{ f(z_{s_2}) g(z_{s_1}) \Big| {\mathcal F}_s\right\} -\int_G fQ_{s_1-s_2}g d\pi\right) ds_2ds_1\right|\\ &\le {a^2c_\infty\over (t-s)\delta^2}W(z_s).\end{split}}$$ \item [(2)] Let $(z_t)$ be an ${\mathcal L}_0$ diffusion. If $\int_G gd \pi=0$, then $${ \begin{split} &\left|{1\over t-s} \int_s^{t} \int_s^{s_1} {\mathbf E}\left\{ f(z_{s_2}) g(z_{s_1}) \Big| {\mathcal F}_s\right\} \; d s_2 \; d s_1 - \int_G\int_0^\infty f Q_rg \; d r \; d\pi \right|\\ &\le {c_\infty \over (t-s) \delta^2} (a^2 W(z_s)+a \bar W )+{c_\infty a\over \delta} \bar W. \end{split} } $$ \item [(3)] Suppose that either $\int_G f\; d\pi =0$ or $\int_G g\; d\pi =0$. Let $$C_1={a\over \delta^2} (aW+\bar W)|f|_\infty\|g\|_W, \quad C_2={2a\over \delta}|f|_\infty\|g\|_W\bar W +{a \over \delta}|\bar g| \; \|f\|_W W.$$ Let $(z_t^\epsilon)$ be an ${\mathcal L}_0^\epsilon$ diffusion.
Then for every $\epsilon>0$, $$\left| \int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} {\mathbf E}\left\{ f(z^\epsilon_{s_2}) g(z^\epsilon_{s_1}) \Big| {\mathcal F}_{s\over \epsilon}\right\}\; d s_2\; d s_1 \right| \le C_1(z_{s\over \epsilon}^\epsilon) \epsilon^2+C_2(z_{s\over \epsilon}^\epsilon) (t-s).$$ \end{enumerate} \end{lemma} To put Assumption \ref{assumption1} into context, we consider H\"ormander type operators. Let $L_{X}$ denote Lie differentiation in the direction of a vector field $X$ and $[X, Y]$ the Lie bracket of two vector fields $X$ and $Y$. Let $\{X_i, i=0, 1,\dots, m'\}$ be a family of smooth vector fields on a compact smooth manifold $G$ and ${\mathcal L}_0={1\over 2}\sum_{i=1}^{m'}L_{X_i}L_{X_i}+L_{X_0}$. If $\{X_i, i=1, \dots, m'\}$ and their Lie brackets generate the tangent space $T_xG$ at each point $x$, we say that the operator ${\mathcal L}_0$ satisfies the strong H\"ormander condition. \begin{lemma} \label{lemma1} Suppose that ${\mathcal L}_0$ satisfies the strong H\"ormander condition on a compact manifold $G$ and let $Q_t(x,\cdot)$ be its family of transition probabilities. Then Assumption \ref{assumption1} holds with $W$ identically $1$. Furthermore the invariant probability measure $\pi$ has a strictly positive smooth density w.r.t. the Lebesgue measure and $$\|Q_t(x,\cdot)-\pi(\cdot)\|_{TV} \le Ce^{-\delta t}$$ for all $x$ in $G$ and for all $t>0$. \end{lemma} \begin{proof} By H\"ormander's theorem there are smooth functions $q_t(x,y)$ such that $Q_t(x,dy)=q_t(x,y)dy$. Furthermore $q_t(x,y)$ is strictly positive, see J.-M. Bony \cite{Bony69} and A. Sanchez-Calle \cite{Sanchez-Calle78}. Fix $t>0$ and let $a=\inf_{x,y\in G}q_t(x,y)>0$. Thus D\"oeblin's condition holds: if ${\mathop {\rm vol}}(A)$ denotes the volume of a Borel set $A$, then $Q_t(x, A)\ge a \,{\mathop {\rm vol}}(A)$. \end{proof} We say that $W$ is a $C^3$ Lyapunov function for the ergodicity problem if there are constants $c\not =0$ and $C>0$ s.t.
${\mathcal L}_0 W\le C-c^2 W$. If such a function exists, the ${\mathcal L}_0^\epsilon$ diffusions are conservative. Suppose that the Lyapunov function $W$ satisfies in addition the following condition: there exist a number $\alpha\in (0,1)$ and $t_0>0$ s.t. for every $R>0$, $$\sup_{\{(x,y): W(x)+W(y)\le R\}} \|Q_{t_0}(x, \cdot)-Q_{t_0}(y, \cdot)\|_{TV} \le 2(1-\alpha).$$ Then there exists a unique invariant measure $\pi$ such that Assumption \ref{assumption1} holds, see e.g. M. Hairer and J. Mattingly \cite{Hairer-Mattingly08}. We mention the following standard estimate. \begin{lemma} \label{lyapunov-ergodicity} Let $W$ be a $C^3$ Lyapunov function for the ergodicity problem of ${\mathcal L}_0$, and suppose that $ {\mathbf E} W(z_0^\epsilon)$ is uniformly bounded in $\epsilon$ for $\epsilon$ sufficiently small. Then there exist numbers $\epsilon_0>0$ and $K$ s.t. for all $t>0$, $$\sup_{s\le t}\sup_{\epsilon \le \epsilon_0}{\mathbf E} W(z_{s\over \epsilon}^\epsilon) \le K.$$ \end{lemma} \begin{proof} By localizing $(z_t^\epsilon)$ if necessary, we see that $W(z_t^\epsilon)-W(z_0^\epsilon)-{1\over \epsilon} \int_0^t {\mathcal L}_0 W (z^\epsilon_r) dr$ is a martingale. Let $c\not =0$ and $C>0$ be constants s.t. ${\mathcal L}_0 W\le C-c^2 W$. By Gronwall's inequality, $ {\mathbf E} W(z_{s\over \epsilon}^\epsilon) \le {\mathbf E} W(z_0^\epsilon)\, e^{-{c^2 s\over \epsilon^2}}+{C\over c^2}$, and the claim follows. \end{proof} As an application we see that, under the assumption of Lemma \ref{lyapunov-ergodicity}, the functions $C_i$ in part (3) of Lemma \ref{lemma2} satisfy $\sup_{\epsilon \le \epsilon_0}{\mathbf E} C_i(z_{s\over \epsilon}^\epsilon) <\infty$. \begin{definition}\label{definition-complete} We say that a stochastic differential equation (SDE) on $M$ is {\it complete} or {\it conservative} if for each initial point $y\in M$ any solution with initial value $y$ exists for all $t\ge 0$.
Let $\Phi_t(x)$ be its solution starting from $x$. The SDE is {\it strongly complete} if it has a unique strong solution and $(t,x)\mapsto \Phi_t(x, \omega)$ is continuous for almost every $\omega$. \end{definition} From now on, by a solution we always mean a globally defined solution. For $\epsilon\in (0,1)$ we define ${\mathcal L}_0^\epsilon={1\over \epsilon } {\mathcal L}_0$, and let $Q_t^\epsilon$ denote the corresponding transition semigroups and transition probabilities. For each $\epsilon>0$, let $(z_t^\epsilon)$ be an ${\mathcal L}_0^\epsilon$ diffusion. Let $\alpha_k\in {\bf \mathcal B}_b(G;{\mathbb R})$ and let $(y_t^\epsilon)$ be solutions to the equations \begin{equation} \label{sde-3} \dot y_t^\epsilon=\sum_{k=1}^m Y_k(y_t^\epsilon)\alpha_k(z_t^\epsilon). \end{equation} Let $\Phi_{s,t}^\epsilon$ be the solution flow to (\ref{sde-3}) with $\Phi_{s,s}^\epsilon(y)=y$. We denote by $\bar g$ the average of an integrable function $g: G\to {\mathbb R}$ with respect to $\pi$. Let \begin{equation} \label{cpsi-0} c_0(a, \delta)={a^2+a\over \delta^2}+{3 a\over\delta}, \quad c_{W}=c_0(a,\delta)(W+\bar W). \end{equation} \begin{lemma} \label{lemma3} Suppose that Assumption \ref{assumption1} holds. Let $f,g \in {\bf \mathcal B}_b(G; {\mathbb R})$ with $\bar g=0$, and suppose that the $\alpha_k$ are bounded.
Then for any $F\in C^1(M; {\mathbb R})$, $0\le s \le t$ and $0<\epsilon<1$, $${\begin{split}&\left|\epsilon \int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} {\mathbf E}\left\{ F(y^\epsilon_{s_2}) g(z^\epsilon_{s_2})f(z^\epsilon_{s_1}) \big| {\mathcal F}_{s\over \epsilon}\right\} \; d s_2 \; d s_1\right|\\ &\le 2 \gamma_\epsilon |g|_\infty |f|_\infty ( \epsilon^2+ (t-s)^2 ). \end{split}}$$ Here $$\gamma_\epsilon = \left( |F(y_{s\over\epsilon}^\epsilon)| \;c_W ( z_{s\over \epsilon}^\epsilon) + \sum_{l=1}^m |\alpha_l|_\infty {\epsilon\over t-s} \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{ \left|(L_{Y_l} F)(y^\epsilon_r)\right|c_W (z^\epsilon_r) \; \big|\;{\mathcal F}_{s\over \epsilon} \right\} dr \right).$$ \end{lemma} \begin{proof} We first expand $F(y_{s_2}^\epsilon)$ at ${s \over \epsilon}$: $${ \begin{split} &\epsilon \int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} {\mathbf E}\left\{ F(y^\epsilon_{s_2}) g(z^\epsilon_{s_2})f(z^\epsilon_{s_1}) \big| {\mathcal F}_{s\over \epsilon}\right\} \; d s_2 \; d s_1 =\epsilon F(y^\epsilon_{s\over \epsilon}) \int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} {\mathbf E}\left\{ g(z^\epsilon_{s_2})f(z^\epsilon_{s_1}) \big| {\mathcal F}_{s\over \epsilon}\right\} \; d s_2 \; d s_1\\ &+\sum_{l=1}^m\epsilon \int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over\epsilon}^{s_1}\int_{s\over \epsilon}^{s_2} {\mathbf E}\left\{ (dF)(Y_l(y^\epsilon_{s_3})) \alpha_l(z_{s_3}^\epsilon) g(z^\epsilon_{s_2})f(z^\epsilon_{s_1}) \big| {\mathcal F}_{s\over \epsilon}\right\} \; d s_3 \; d s_2 \; d s_1. \end{split} } $$ By part (3) of Lemma \ref{lemma2}, $$ \left|\epsilon F(y^\epsilon_{s\over \epsilon}) \int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} {\mathbf E}\left\{ g(z^\epsilon_{s_2})f(z^\epsilon_{s_1}) \big| {\mathcal F}_{s\over \epsilon}\right\} \; d s_2 \; d s_1\right|\le |F(y_{s\over \epsilon}^\epsilon)||f|_\infty|g|_\infty c_W (z^\epsilon_{s\over \epsilon})
\left(\epsilon^3+(t-s)\epsilon\right).$$ It remains to estimate the summands in the second term, whose absolute values are bounded by the following: $${ \begin{split} &A_l:= \left|\epsilon\int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over\epsilon}^{s_1}\int_{s\over \epsilon}^{s_2} {\mathbf E} \left\{ (dF)(Y_l(y^\epsilon_{s_3})) \alpha_l(z_{s_3}^\epsilon) g(z^\epsilon_{s_2})f(z^\epsilon_{s_1}) \big| {\mathcal F}_{s\over \epsilon}\right\} \; d s_3 \; d s_2 \; d s_1\right| \\ &=\left| \epsilon \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{(dF)(Y_l(y^\epsilon_{s_3})) \alpha_l(z_{s_3}^\epsilon) \int_{s_3}^{t\over\epsilon}\int_{s_2}^{t\over \epsilon} {\mathbf E}\left\{ g(z^\epsilon_{s_2})f(z^\epsilon_{s_1}) \big| {\mathcal F}_{s_3}\right\} \; d s_1 \; d s_2 \Big|{\mathcal F}_{s\over \epsilon} \right\} ds_3 \right|. \end{split} } $$ For $s_3\in [{s\over \epsilon}, {t\over \epsilon}]$, we apply part (3) of Lemma \ref{lemma2} to bound the inner iterated integral: $${ \begin{split} &\left| \int_{s_3}^{t\over\epsilon}\int_{s_2}^{t\over \epsilon} {\mathbf E}\left\{ g(z^\epsilon_{s_2})f(z^\epsilon_{s_1}) \big| {\mathcal F}_{s_3}\right\} \; d s_1 \; d s_2\right| =\left | \int_{s_3}^{t\over\epsilon}\int_{s_3}^{s_1} {\mathbf E}\left\{ g(z^\epsilon_{s_2})f(z^\epsilon_{s_1}) \big| {\mathcal F}_{s_3}\right\} \; d s_2 \; d s_1\right|\\ & \le \left( \epsilon^2+ t-\epsilon s_3 \right) c_W (z^\epsilon_{s_3}) |f|_\infty |g|_\infty.
\end{split} } $$ Substituting this back into the previous line, and using the notation $L_{Y_l}F=dF(Y_l)$, we obtain $${ \begin{split} &A_l\le \epsilon \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{ \left| (dF)(Y_l(y^\epsilon_{s_3}))\, \alpha_l(z_{s_3}^\epsilon)\right| c_W (z^\epsilon_{s_3}) \,\Big|\,{\mathcal F}_{s\over \epsilon} \right\} \left( \epsilon^2+ (t-\epsilon s_3) \right) |f|_\infty |g|_\infty \; d s_3\\ &\le |f|_\infty |g|_\infty |\alpha_l|_\infty (t-s) (\epsilon^2+(t-s)) {\epsilon\over t-s} \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{ \left|(L_{Y_l} F)(y^\epsilon_{s_3})\right|c_W (z^\epsilon_{s_3})\Big|{\mathcal F}_{s\over \epsilon} \right\} ds_3. \end{split} } $$ Putting everything together we see that, for $\gamma_\epsilon$ given in the Lemma and $\epsilon<1$, $${ \begin{split} &\left| \epsilon \int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} {\mathbf E}\left\{ F(y^\epsilon_{s_2}) g(z^\epsilon_{s_2})f(z^\epsilon_{s_1}) \big| {\mathcal F}_{s\over \epsilon}\right\} \; d s_2 \; d s_1\right| \le 2\gamma_\epsilon |g|_\infty |f|_\infty \left( \epsilon^2+ (t-s)^2\right). \end{split} } $$ The proof is complete. \end{proof} In Section \ref{uniform-estimates} we will estimate $\gamma_\epsilon$ and give uniform, in $\epsilon$, moment estimates of functionals of $(y_t^\epsilon)$ on $[0, {T \over \epsilon}]$. \begin{lemma} \label{lemma3-2} Assume that $(z_t^\epsilon)$ satisfies Assumption \ref{assumption1} and that the $\alpha_j$ are bounded.
If $F\in C^2(M;{\mathbb R})$ and $f\in {\bf \mathcal B}_b(G;{\mathbb R})$, then for all $s\le t$, $${\begin{split} &\left| {\epsilon \over t-s} \int_{s\over \epsilon }^{t\over \epsilon} {\mathbf E} \left\{ F (y_r^\epsilon) f(z_{r}^\epsilon) \big| {\mathcal F}_{s\over \epsilon}\right\} dr -\bar f\; F(y_{s\over \epsilon}^\epsilon) \right| \\ & \le {2a\over \delta} |f|_\infty \left( W(z_{s\over \epsilon}^\epsilon)|F|(y_{s\over \epsilon}^\epsilon) +\sum_{j=1}^{m} \gamma^j_\epsilon |\alpha_j|_\infty \right)\left ({\epsilon^2\over t-s} +(t-s)\right) \end{split}} $$ where $$\gamma^j_{\epsilon}=c_W (z_{s\over \epsilon}^\epsilon) \;|L_{Y_j}F (y^\epsilon_{s\over\epsilon})|+ \sum_{l=1}^m |\alpha_l|_\infty {\epsilon\over t-s} \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{ \left|L_{Y_l} L_{Y_j}F(y^\epsilon_r)\right|c_W (z^\epsilon_r) \; \big|\;{\mathcal F}_{s\over \epsilon} \right\} dr. $$ \end{lemma} \begin{proof} We note that $${\begin{split} {\epsilon \over t-s}\int_{s\over \epsilon}^{t\over \epsilon }F (y_r^\epsilon) f(z_{r}^\epsilon)dr =&F(y_{s\over \epsilon}^\epsilon) {\epsilon \over t-s}\int_{s\over \epsilon }^{t\over \epsilon} f(z_{r}^\epsilon)dr\\ &+ \sum_{j=1}^m {\epsilon \over t-s}\int_{s\over \epsilon }^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} dF (Y_j(y_{s_2}^\epsilon)) \alpha_j (z_{s_2}^\epsilon) f(z_{s_1}^\epsilon)ds_2ds_1.
\end{split}}$$ Letting $\psi(r)=ae^{-\delta r}$, it is clear that $${\begin{split} &\left| {\mathbf E} \left\{ \left( F(y_{s\over \epsilon}^\epsilon) {\epsilon \over t-s} \int_{s\over \epsilon }^{t\over \epsilon} f(z_{r}^\epsilon)dr -\bar f\;F(y_{s\over \epsilon}^\epsilon) \right)\big |\;{\mathcal F}_{s\over \epsilon}\right\}\right|\\ &\le \|f\|_W W(z^\epsilon_{s\over \epsilon}) \left|F(y_{s\over \epsilon}^\epsilon)\right| {\epsilon^2\over t-s} \int_{s\over \epsilon^2}^{t\over \epsilon^2} \psi\left ( r-{s\over \epsilon^2}\right) dr \le {a\over \delta} \|f\|_W W(z^\epsilon_{s\over \epsilon}) \left|F(y_{s\over \epsilon}^\epsilon)\right| {\epsilon^2 \over t-s}. \end{split}}$$ To the second term we apply Lemma \ref{lemma3} and obtain the bound $${\begin{split}& \left| {\mathbf E} \left\{ \sum_{j=1}^m {\epsilon\over t-s}\int_{s\over \epsilon }^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} dF (Y_j(y_{s_2}^\epsilon)) \alpha_j (z_{s_2}^\epsilon) f(z_{s_1}^\epsilon)ds_2ds_1\big |\;{\mathcal F}_{s\over \epsilon}\right\}\right|\\ &\le 2 \sum_{j=1}^m \gamma_\epsilon^j |\alpha_j|_\infty |f|_\infty \left ({\epsilon^2\over t-s} +(t-s)\right) \end{split}}$$ where $$\gamma_\epsilon^j = |L_{Y_j}F(y_{s\over\epsilon}^\epsilon)| \;c_W ( z_{s\over \epsilon}^\epsilon) + \sum_{l=1}^m |\alpha_l|_\infty {\epsilon\over t-s} \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{ \left|(L_{Y_l} L_{Y_j}F)(y^\epsilon_r)\right|c_W (z^\epsilon_r) \; \big|\;{\mathcal F}_{s\over \epsilon} \right\} dr. $$ Adding the error estimates together we conclude the proof. \end{proof} It is worth noticing that if $\phi:{\mathbb R}\to {\mathbb R}$ is a concave function, then $\phi(W)$ is again a Lyapunov function. Thus, by using $\log W$ if necessary, we may assume uniform bounds on ${\mathbf E} W^p(z_{s\over \epsilon}^\epsilon) $, and further estimates on the conditional expectation in the error term follow from the Cauchy-Schwarz inequality. If $G$ is compact then $c_W $ is bounded.
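A closed-form sanity check of Lemma \ref{lyapunov-ergodicity} on a non-compact state space (a toy choice of ours, not the setting of the lemma, where all quantities are explicit): take $G={\mathbb R}$, ${\mathcal L}_0={1\over 2}{d^2\over dz^2}-z{d\over dz}$ and $W(z)=1+z^2$, so that ${\mathcal L}_0 W=1-2z^2\le 3-2W$, i.e. $C=3$ and $c^2=2$. The ${1\over\epsilon}{\mathcal L}_0$ diffusion is Gaussian with explicit moments, and ${\mathbf E} W(z^\epsilon_{s/\epsilon})$ is indeed bounded uniformly in $\epsilon$ and $s$ by ${\mathbf E} W(z_0)+C/c^2$:

```python
import numpy as np

# With L0 = (1/2) d^2/dz^2 - z d/dz, the L0/eps diffusion solves
# dz = -(z/eps) dt + (1/sqrt(eps)) dB, hence
# z_t^eps ~ N(z0 e^{-t/eps}, (1 - e^{-2t/eps})/2).
def E_W(z0, t, eps):
    mean = z0 * np.exp(-t / eps)
    var = 0.5 * (1 - np.exp(-2 * t / eps))
    return 1 + mean**2 + var          # E W(z_t^eps) for W(z) = 1 + z^2

z0 = 2.0
bound = (1 + z0**2) + 3.0 / 2.0       # E W(z_0) + C/c^2 with C = 3, c^2 = 2
vals = [E_W(z0, s / eps, eps)         # evaluate at the slow times s/eps
        for eps in (1.0, 0.1, 0.01)
        for s in np.linspace(0.01, 5.0, 50)]
print(max(vals) <= bound)  # True
```

The exponent $e^{-2s/\epsilon^2}$ appearing through $t=s/\epsilon$ matches the Gronwall bound in the proof of the lemma.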
In Corollary \ref{corollary-to-lemma3-2}, we will give uniform estimates on the moments of $\gamma^j_\epsilon$. \section{A Reduction} \label{section-formula} Let $G$ be a smooth manifold of dimension $n$ with volume measure $dx$. Let $H^s\equiv H^s(G)$ denote the Sobolev space of real valued functions over $G$ and $\|\cdot\|_s$ the Sobolev norm. The norm $(\|u\|_s)^2 :=(2\pi)^{-n}\int |\hat u(\xi)|^2 (1+|\xi|^2)^s d\xi$ extends from domains in ${\mathbb R}^n$ to compact manifolds, e.g. by taking the supremum of $\|u\|_s$ over charts. If $s\in {\mathbb N}$, $H^s$ is the completion of $C^\infty(G)$ with respect to the norm $\|u\|_{s} =\sum_{j=0}^s \left(\int |\nabla^j u|^2\, dx \right)^{1\over 2}$, where $\nabla$ is usually taken to be the Levi-Civita connection; when the manifold is compact this is independent of the Riemannian metric. Moreover $u\in H^s$ if and only if for any function $\phi\in C_K^\infty$, $\phi u$ in any chart belongs to $H^s$. Let $\{X_i, i=0, 1,\dots, m'\}$ be a family of smooth vector fields on $G$ and let us consider the H\"ormander form operator ${\mathcal L}_0={1\over 2}\sum_{i=1}^{m'}L_{X_i}L_{X_i}+L_{X_0}$. Let $$\Lambda:= \{X_{i_1}, [X_{i_1}, X_{i_2}], [X_{i_1}, [X_{i_2},X_{i_3}]], i_j =0,1, \dots, m'\}.$$ If the vector fields in $\Lambda$ generate $T_xG$ at each $x\in G$, we say that H\"ormander's condition is satisfied. By the proof of a theorem of H\"ormander \cite[Theorem 1.1]{Hormander-hypo-acta}, if ${\mathcal L}_0$ satisfies H\"ormander's condition then $u$ is a $C^\infty$ function in every open set where ${\mathcal L}_0 u$ is a $C^\infty $ function. Moreover there is a number $\delta>0$ giving a $\delta$ improvement in Sobolev regularity: if $u$ is a distribution such that ${\mathcal L}_0 u\in H^s_{\mathop{\rm loc}}$, then $u\in H^{s+\delta}_{\mathop{\rm loc}}$. Suppose that $G$ is compact.
Then $\|u\|_\delta \le C(\|u\|_{L^2}+\|{\mathcal L}_0 u\|_{L^2})$, the resolvents $({\mathcal L}_0+\lambda I)^{-1}$, as operators from $L^2(G;dx)$ to $L^2(G;dx)$, are compact, and ${\mathcal L}_0$ is Fredholm on $L^2(dx)$, by which we mean that ${\mathcal L}_0$ is a bounded linear operator from $\mathop{\rm Dom}({\mathcal L}_0)$ to $L^2(dx)$ with the Fredholm property: its range is closed and of finite co-dimension, and its kernel $\mathop{\rm ker}({\mathcal L}_0)$ is finite dimensional. The domain of ${\mathcal L}_0$ is endowed with the norm $|u|_{\mathop{\rm Dom}({\mathcal L}_0)}=|u|_{L_2}+|{\mathcal L}_0 u|_{L_2}$. Let ${\mathcal L}_0^*$ denote the adjoint of ${\mathcal L}_0$. Then the kernel $N$ of ${\mathcal L}_0^*$ is finite dimensional and its elements are measures on $G$ with smooth densities in $L^2(dx)$. Denote by $N^\perp$ the annihilator of $N$: $g\in L^2(dx)$ belongs to $ N^\perp$ if and only if $\<g, \pi\>=0$ for all $\pi\in \mathop{\rm ker} ({\mathcal L}_0^*)$. Since ${\mathcal L}_0$ has closed range, $(\mathop{\rm ker}({\mathcal L}_0^*))^\perp$ can be identified with the range of ${\mathcal L}_0$, and the set of $g$ for which the Poisson equation ${\mathcal L}_0 u=g$ is solvable is exactly $N^\perp$. We denote by ${\mathcal L}_0^{-1}g$ a solution. Furthermore ${\mathcal L}_0^{-1}g$ is $C^r$ whenever $g$ is $C^r$. Denote by $\mathop {{\rm index}}({\mathcal L}_0)={\mathop{\rm dim}} \mathop{\rm ker} {\mathcal L}_0-{\mathop{\rm dim}} \,{\mathop {\rm Coker}}\, {\mathcal L}_0$ the index of the Fredholm operator ${\mathcal L}_0$, where ${\mathop {\rm Coker}}({\mathcal L}_0)=L^2(dx)/{\mathop{\rm range}} ({\mathcal L}_0)$. If ${\mathcal L}_0$ is self-adjoint, $\mathop {{\rm index}} ({\mathcal L}_0)=0$. \begin{definition} \label{def-Fredholm} We say that ${\mathcal L}_0$ is a regularity improving Fredholm operator if it is a Fredholm operator and ${\mathcal L}_0^{-1}\alpha$ is $C^r$ whenever $\alpha \in C^r \cap N^\perp$.
\end{definition} Let $\{W_t^k, k=1,\dots, m'\}$ be a family of independent real-valued Brownian motions. We may and will often represent ${\mathcal L}_0^\epsilon$-diffusions $(z_t^\epsilon)$ as solutions to the following stochastic differential equations, in Stratonovich form, $$dz_t^\epsilon ={1\over \sqrt \epsilon}\sum_{k=1}^{m'} X_k(z_t^\epsilon) \circ dW_t^k +{1\over \epsilon} X_0(z_t^\epsilon)dt.$$ \begin{lemma} \label{lemma5} Let ${\mathcal L}_0$ be a regularity improving Fredholm operator on a compact manifold $G$, $\alpha_k\in C^3\cap N^\perp$, and $\beta_j={\mathcal L}_0^{-1}\alpha_j$. Let $(y_r^\epsilon)$ be global solutions of (\ref{sde-3}) on $M$. Then for all $0\le s<t$, $\epsilon>0$ and $f\in C^2(M; {\mathbb R})$, \begin{equation}\label{Ito-tight} {\begin{split} f(y^\epsilon_{t\over \epsilon}) =&f(y^\epsilon_{s\over \epsilon}) + \epsilon \sum_{j=1}^m \left( df(Y_j(y^\epsilon_{t\over \epsilon} ) )\beta_j(z^\epsilon_{t\over \epsilon}) -df(Y_j(y^\epsilon_{s\over \epsilon} ))\beta_j( z^\epsilon_{s\over \epsilon})\right)\\ &-\epsilon \sum_{i,j=1}^m\int_{s\over \epsilon}^{t\over \epsilon} L_{Y_i}L_{Y_j} f(y^\epsilon_r) \,\alpha_i(z^\epsilon_r)\;\beta_j(z^\epsilon_r) \;dr \\ &- \sqrt \epsilon \sum_{j=1}^m\sum_{k=1}^{m'} \int_{s\over \epsilon}^{t\over \epsilon} df( Y_j(y^\epsilon_r)) \; d\beta_j( X_k(z^\epsilon_r)) \;dW_r^k. \end{split}}\end{equation} Suppose that, furthermore, for each $\epsilon>0$, $j=1,\dots, m$ and $k=1,\dots, m'$, $\int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} |df( Y_j(y^\epsilon_r))|^2 \,|d\beta_j( X_k(z^\epsilon_r))|^2 \;dr$ is finite.
Then, \begin{equation}\label{Ito-tight-2} {\begin{split} &{\mathbf E}\left\{ f(y^\epsilon_{t\over \epsilon})\big | {\mathcal F}_{s\over \epsilon}\right\} -f(y^\epsilon_{s\over \epsilon}) = \epsilon \sum_{j=1}^m \left( {\mathbf E}\left\{ df(Y_j(y^\epsilon_{t\over \epsilon} ) )\beta_j(z^\epsilon_{t\over \epsilon})\big | {\mathcal F}_{s\over \epsilon}\right\} -df(Y_j(y^\epsilon_{s\over \epsilon} ))\beta_j( z^\epsilon_{s\over \epsilon})\right)\\ &-\epsilon \sum_{i,j=1}^m\int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E}\left\{ L_{Y_i}L_{Y_j} f(y^\epsilon_r) \,\alpha_i(z^\epsilon_r)\;\beta_j(z^\epsilon_r) \big | {\mathcal F}_{s\over \epsilon}\right\} \;dr. \end{split} } \end{equation} \end{lemma} \begin{proof} Firstly, for any $C^2$ function $f:M\to {\mathbb R}$, $$f(y_{t\over \epsilon}^\epsilon)-f(y_{s\over \epsilon}^\epsilon)= \sum_{j=1}^{m}\int_{s\over \epsilon}^{t\over \epsilon} df (Y_j(y_{s_1}^\epsilon )) \alpha_j (z_{s_1}^\epsilon) ds_1.$$ Since the $\alpha_j$'s are $C^3$, so are the $\beta_j$'s, by the regularity improving property of ${\mathcal L}_0$. We apply It\^o's formula to the functions $(L_{Y_j}f)\beta_j: M\times G\to {\mathbb R}$. To avoid extra regularity conditions, we apply It\^o's formula to the function $df(Y_j)$, which is $C^1$, and the $C^3$ functions $\beta_j$ separately and follow it with the product rule. This gives: \begin{equation*} \label{product} {\begin{split} df(Y_j(y^\epsilon_{t\over \epsilon} ) )\beta_j(z^\epsilon_{t\over \epsilon}) &= df(Y_j(y^\epsilon_{s\over \epsilon} ))\beta_j( z^\epsilon_{s\over \epsilon}) +\sum_{i=1}^m \int_{s\over \epsilon}^{t\over \epsilon} L_{Y_i}L_{Y_j} f(y^\epsilon_r) \,\alpha_i(z^\epsilon_r)\;\beta_j( z^\epsilon_r)\; \;dr \\ &+ {1\over \sqrt \epsilon} \sum_{k=1}^{m'} \int_{s\over \epsilon}^{t\over \epsilon} L_{Y_j} f( y^\epsilon_r) \, d\beta_j\left( X_k (z^\epsilon_r)\right)dW_r^k +{1\over \epsilon} \int_{s\over \epsilon}^{t\over \epsilon} L_{Y_j} f(y^\epsilon_r) \,{\mathcal L}_0 \beta_j(z_r^\epsilon)dr .
\end{split} } \end{equation*} Substituting this into the earlier equation, we obtain (\ref{Ito-tight}). Part (\ref{Ito-tight-2}) is obvious, as we note that $${\begin{split} &{\mathbf E} \left(\sum_{k=1}^{m'} \int_{s\over \epsilon}^{t\over \epsilon} df( Y_j(y^\epsilon_r)) (d\beta_j) \left( X_k(z^\epsilon_r)\right) \;dW_r^k\right)^2 \le \sum_{k=1}^{m'}{\mathbf E}\int_{s\over \epsilon}^{t\over \epsilon} |df( Y_j(y^\epsilon_r)) |^2\, |d\beta_j(X_k(z_r^\epsilon))|^2 \; d r<\infty \end{split}}$$ and the stochastic integrals are $L^2$-martingales, so (\ref{Ito-tight-2}) follows. \end{proof} When $G$ is compact, $d\beta_j(X_k)$ is bounded and the condition becomes: ${\mathbf E}\int_{s\over \epsilon}^{t\over \epsilon} |df( Y_j(y^\epsilon_r)) |^2 \; d r$ is finite, which we discuss below. Otherwise, assumptions on ${\mathbf E} |d\beta_j(X_k(z_r^\epsilon))|^{2+} $ are needed. \section{Uniform Estimates} \label{section-uniform-estimates} If $V: M\to {\mathbb R}_+$ is a locally bounded function such that $\lim_{y\to \infty} V(y)=\infty$, we say that $V$ is a pre-Lyapunov function. Let $\alpha_k\in {\bf \mathcal B}_b(G;{\mathbb R})$. Let $\{Y_k\}$ be $C^1$ smooth vector fields on $M$. If either (a) each $Y_k$ grows at most linearly, or (b) there exist a pre-Lyapunov function $V\in C^1(M;{\mathbb R}_+)$ and positive constants $c$ and $K$ such that $\sum_{k=1}^m |L_{Y_k} V| \le c+KV$, then the equations (\ref{sde-3}) are complete. In case (a) let $o\in M$ and $a$ be a constant such that $|Y_k(x)|\le a(1+\rho(o,x))$ where $\rho$ denotes the Riemannian distance function on $M$. For $x$ fixed, denote $\rho_x=\rho(x, \cdot)$.
By the definition of the Riemannian distance function, $$ \rho(y_t^\epsilon, y_0^\epsilon)\le \int_0^t |\dot y_s^\epsilon|ds =\sum_{k=1}^m \int_0^t |Y_k(y_s^\epsilon) \alpha_k(z_s^\epsilon) |ds \le\sum_{k=1}^m |\alpha_k|_\infty \int_0^t |Y_k(y_s^\epsilon)| ds.$$ This together with the inequality $\rho(y_t^\epsilon, o)\le \rho(y_t^\epsilon, y_0^\epsilon)+\rho(o, y_0^\epsilon)$ implies that for all $p\ge 1$, there exist constants $C_1, C_2$ depending on $p$ such that $$ \sup_{s\le t}\rho^p(y_s^\epsilon, o)\le \left(C_1 \rho^p(o, y_0^\epsilon)+C_2 t\right) e^{C_2t^p}$$ where $C_2=a^pC_1^2( \sum_{k=1}^m |\alpha_k|_\infty)^p$. When restricted to $\{t<\tau^\epsilon\}$, where $\tau^\epsilon$ is the first time $y_t^\epsilon$ reaches the cut locus, the bound is simply $Ce^{Ct}$. In case (b), for any $q\ge 1$, $$ \sup_{s\le t} \left(V(y_s^\epsilon)\right)^q\le \left( V^q(y_0^\epsilon)+ctq\sum_{k=1}^m|\alpha_k|_\infty \right) \exp{\left( q\sum_{k=1}^m|\alpha_k|_\infty(K+c)t\right)},$$ which follows easily from the bound $$|dV^q(\alpha_k Y_k)|=|qV^{q-1}dV(\alpha_kY_k)|\le q|\alpha_k|_\infty(c+(c+K) V^q).$$ For the convenience of comparing the above estimates, which are standard and expected, with the uniform estimates of $(y_t^\epsilon)$ in Theorem \ref{uniform-estimates} below in the time scale ${1\over \epsilon}$, we record them in the following lemma. \begin{lemma} \label{standard-estimates} Let $\alpha_k \in {\bf \mathcal B}_b(G;{\mathbb R})$. Let $\epsilon\in (0,1)$, $0\le s\le t$, $\omega\in \Omega$. \begin{enumerate} \item If the $\{Y_k\}$ grow at most linearly then (\ref{sde-3}) is complete and there exist constants $C, C(t)$ s.t. $$ \sup_{0\le s\le t}\rho^p(y_s^\epsilon(\omega), o)\le \left(C \rho^p(o, y_0^\epsilon(\omega))+C(t)\right) e^{C(t)}.$$ \item If there exist a pre-Lyapunov function $V\in C^1(M;{\mathbb R}_+)$ and positive constants $c$ and $K$ such that $\sum_{j=1}^m |L_{Y_j} V| \le c+KV$, then (\ref{sde-3}) is complete.
\item If (\ref{sde-3}) is complete and there exists $V\in C^1(M;{\mathbb R}_+)$ such that $\sum_{j=1}^m |L_{Y_j} V| \le c+KV$, then there exists a constant $C$ s.t. $$ \sup_{0\le s\le t} \left(V(y_s^\epsilon(\omega))\right)^q\le \left( \left( V(y_0^\epsilon(\omega))\right)^q+Ct \right) e^{Ct}.$$ \end{enumerate} \end{lemma} If $V\in {\bf \mathcal B}(M;{\mathbb R})$ is a positive function, denote by $B_{V,r}$ the following class of functions: \begin{equation} \label{BVr} B_{V,r}=\left\{ f\in C^r(M;{\mathbb R}): \sum_{j=0}^r |d^{j}f|\le c+c V^q \hbox{ for some numbers $c$, $q$ }\right\}. \end{equation} In particular, $B_{V,0}$ is the class of continuous functions bounded by a function of the form $ c+c V^q$. In ${\mathbb R}^n$, the constant functions and the function $V(x)=1+|x|^2$ are potential `control' functions. \begin{assumption} \label{assumption2-Y} Assume that (i) (\ref{sde-3}) are complete for every $\epsilon\in (0,1)$, (ii) $\sup_\epsilon{\mathbf E}\left( V(y_0^\epsilon)\right)^q$ is finite for every $q\ge 1$; and (iii) there exist a function $V\in C^2(M;{\mathbb R}_+)$ and positive constants $c$ and $K$ such that $$\sum_{j=1}^m |L_{Y_j} V| \le c+KV, \quad \sum_{i,j=1}^m |L_{Y_i}L_{Y_j} V| \le c+KV.$$ \end{assumption} Below we assume that ${\mathcal L}_0$ satisfies H\"ormander's condition. We do not make any assumption on the mixing rate. Let $\beta_j={\mathcal L}_0^{-1}\alpha_j$, $a_1= \sum_{j=1}^m |\beta_j|_\infty$, $a_2=\sum_{i,j=1}^m |\alpha_i|_\infty|\beta_j|_\infty$, $a_3=\sum_{j=1}^m |d\beta_j|_\infty$, and $a_4=\sum_{k=1}^{m'} |X_k|^2_\infty$. We recall that if $\alpha_k$ and ${\mathcal L}_0$ satisfy Assumption \ref{assumption-Hormander} then ${\mathcal L}_0$ is a regularity improving Fredholm operator. \begin{theorem} \label{uniform-estimates} Let ${\mathcal L}_0$ be a regularity improving Fredholm operator on a compact manifold $G$, and $\alpha_k\in C^3(G;{\mathbb R})\cap N^\perp$. Assume that the $Y_k$ satisfy Assumption \ref{assumption2-Y}.
Then for all $p\ge 1$, there exists a constant $C=C(c, K, a_i,p)$ s.t. for any $0\le s\le t$ and all $\epsilon\le\epsilon_0$, \begin{equation} \label{uniform-estimate-inequality} {\mathbf E} \left\{ \sup_{s\le u\le t} \left( V(y^\epsilon_{u\over \epsilon}) \right)^{2p} \; \big| \; {\mathcal F}_{s\over \epsilon} \right\} \le \left( 4 V^{2p}(y^\epsilon_{s\over \epsilon}) +C(t-s)^2+C \right) e^{ C(t-s+1)t}. \end{equation} Here $\epsilon_0\le 1$ depends on $c,K, p, a_1$ and $V, Y_i, Y_j$. \end{theorem} \begin{proof} Let $0\le s\le t$. We apply (\ref{Ito-tight}) to $f=V^p$: \begin{equation*} {\begin{split} V^p(y^\epsilon_{t\over \epsilon}) =& V^p(y^\epsilon_{s\over \epsilon}) + \epsilon \sum_{j=1}^m dV^p\left(Y_j(y^\epsilon_{t\over \epsilon} )\right )\beta_j(z^\epsilon_{t\over \epsilon}) -\epsilon \sum_{j=1}^m dV^p\left (Y_j(y^\epsilon_{s\over \epsilon} )\right)\beta_j( z^\epsilon_{s\over \epsilon})\\ &-\epsilon \sum_{i,j=1}^m \int_{s\over \epsilon}^{t\over \epsilon} L_{Y_i}L_{Y_j} V^p\left(y^\epsilon_r\right) \alpha_i(z^\epsilon_r)\;\beta_j(z^\epsilon_r) \; d r\\ &-\sqrt\epsilon \sum_{k=1}^{m'} \int_{s\over \epsilon}^{t\over \epsilon} \sum_{j=1}^m dV^p (Y_j(y_r^\epsilon))(d\beta_j) (X_k(z_r^\epsilon)) \;dW_r^k. \end{split}}\end{equation*} In the following estimates, we may first assume that $\sum_{j=1}^m |L_{Y_j}V|$ and $\sum_{i,j=1}^m |L_{Y_j}L_{Y_i}V|$ are bounded. We may then replace $t$ by $t\wedge \tau_n$ where $\tau_n$ is the first time that either quantity is greater than or equal to $n$. We take this point of view for proofs of inequalities and may not repeat it each time.
We take the supremum over $[s,t]$ followed by conditional expectation of both sides of the inequality: \begin{equation*} {\begin{split} & {\mathbf E} \left\{ \sup_{s\le u\le t} V^p(y^\epsilon_{u\over \epsilon}) \; \big| \; {\mathcal F}_{s\over \epsilon} \right\} \le V^p(y^\epsilon_{s\over \epsilon}) + \epsilon {\mathbf E} \left\{ \sup_{s\le u\le t} \sum_{j=1}^m dV^p\left(Y_j(y^\epsilon_{u\over \epsilon} )\right )\beta_j(z^\epsilon_{u\over \epsilon}) \; \big| \; {\mathcal F}_{s\over \epsilon} \right\}\\ & \qquad - \sum_{j=1}^m dV^p\left (Y_j(y^\epsilon_{s\over \epsilon} )\right)\beta_j( z^\epsilon_{s\over \epsilon})\\ &\qquad +\epsilon {\mathbf E}\left\{ \sup_{s\le u\le t} \left| \int_{s\over \epsilon}^{u\over \epsilon} \sum_{i,j=1}^mL_{Y_i}L_{Y_j} V^p\left(y^\epsilon_r\right) \alpha_i(z^\epsilon_r)\;\beta_j(z^\epsilon_r) \;dr \right|\; \big| \; {\mathcal F}_{s\over \epsilon} \right\}\\ &\qquad +\sqrt\epsilon{\mathbf E} \left\{ \sup_{s\le u\le t} \left| \sum_{k=1}^{m'} \int_{s\over \epsilon}^{u\over \epsilon} \sum_{j=1}^m dV^p (Y_j(y_r^\epsilon))(d\beta_j) (X_k(z_r^\epsilon)) dW_r^k\right| \; \big| \; {\mathcal F}_{s\over \epsilon} \right\}. 
\end{split}}\end{equation*} By the conditional Jensen inequality and the conditional Doob inequality, there exists a universal constant $\tilde C$ depending only on $p$ s.t. \begin{equation*} {\begin{split} & {\mathbf E}\left\{ \sup_{s\le u\le t} V^{2p}(y^\epsilon_{u\over \epsilon}) \; \big| \; {\mathcal F}_{s\over \epsilon} \right\}\\ &\le 4 V^{2p}(y^\epsilon_{s\over \epsilon}) + 4\epsilon^2 {\mathbf E}\left\{ \left( \sum_{j=1}^m |\beta_j|_\infty \sup_{s\le u \le t} \left| dV^p(Y_j(y^\epsilon_{u\over \epsilon} ))\right| \right)^2 \; \big| \; {\mathcal F}_{s\over \epsilon} \right\}\\ &+ 4\epsilon^2 \left( \sum_{j=1}^m |\beta_j|_\infty \left| dV^p(Y_j(y^\epsilon_{s\over \epsilon} ))\right| \right)^2 \\ & +8\epsilon(t-s) {\mathbf E}\left\{\left( \int_{s\over \epsilon}^{t\over \epsilon} \sum_{i,j=1}^m |\alpha_i|_\infty|\beta_j|_\infty \left| L_{Y_i}L_{Y_j} V^p\left(y^\epsilon_r\right)\right|\;dr\right)^2 \; \big| \; {\mathcal F}_{s\over \epsilon} \right\}\\ &+\tilde C \sum_{k=1}^{m'}{\mathbf E} \left\{ \epsilon \int_{s\over \epsilon}^{t\over \epsilon} \left| \sum_{j=1}^m dV^p( Y_j(y^\epsilon_r)) (d\beta_j) \left( X_k(z^\epsilon_r)\right) \right|^2 \; d r \; \big| \; {\mathcal F}_{s\over \epsilon} \right\}. \end{split}} \end{equation*} Since $\sum_j|L_{Y_j}V|\le c+KV$ and $\sum_{i,j=1}^m |L_{Y_i}L_{Y_j}V|\le c+KV$, there are constants $c_1$ and $K_1$ such that $\max_{j=1, \dots, m}|L_{Y_j}V^p |\le c_1+K_1V^p$ and $\max_{i,j=1, \dots, m} |L_{Y_i}L_{Y_j}V^p|\le c_1+K_1V^p$.
This leads to the following estimate: \begin{equation*} {\begin{split} &{\mathbf E}\left\{ \sup_{s\le u\le t} V^{2p}(y^\epsilon_{u\over \epsilon}) \; \big| \; {\mathcal F}_{s\over \epsilon} \right\}\\ \le& 4 V^{2p}(y^\epsilon_{s\over \epsilon}) + 8\epsilon^2(a_1)^2 \left( 2(c_1)^2+ (K_1)^2 {\mathbf E}\left \{\sup_{s\le u \le t} V^{2p}(y^\epsilon_{u\over \epsilon} ) \; \big| \; {\mathcal F}_{s\over \epsilon}\right\} +(K_1)^2 V^{2p}(y^\epsilon_{s\over \epsilon} )\right) \\ &+16 (a_2)^2(t-s) \epsilon \int_{s\over \epsilon}^{t\over \epsilon} \left( (c_1)^2+ (K_1)^2{\mathbf E}\left\{ V^{2p}(y^\epsilon_r ) \; \big| \; {\mathcal F}_{s\over \epsilon}\right\}\right)dr \\ &+ \tilde C (a_3a_4)^2 \epsilon \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{ \left (c_1+K_1 V^p((y^\epsilon_r))\right)^2 \; \big| \; {\mathcal F}_{s\over \epsilon} \right\}\; d r. \end{split}}\end{equation*} Let $\epsilon_0=\min\{{1\over 8a_1K_1}, 1\}$. For $\epsilon\le\epsilon_0$, \begin{equation*} {\begin{split} &{1\over 2} {\mathbf E}\left\{ \sup_{s\le u\le t} V^{2p}(y^\epsilon_{u\over \epsilon}) \; \big| \; {\mathcal F}_{s\over \epsilon} \right\}\\ \le & 4 V^{2p}(y^\epsilon_{s\over \epsilon}) +16\epsilon^2 ( a_1c_1)^2 +16(t-s)^2 (a_2c_1)^2+ 4\tilde C (a_3a_4c_1)^2(t-s)\\ &+ \left( 16 (a_2K_1)^2(t-s) +4\tilde C (a_3a_4K_1)^2 \right) \epsilon \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{ V^{2p}(y^\epsilon_r ) \; \big| \; {\mathcal F}_{s\over \epsilon} \right\} \;dr. \end{split}}\end{equation*} It follows that there exists a constant $ C$ such that for $\epsilon\le \epsilon_0$, \begin{equation*} {\mathbf E}\left\{ \sup_{s\le u\le t} V^{2p}(y^\epsilon_{u\over \epsilon}) \; \big| \; {\mathcal F}_{s\over \epsilon} \right\} \le \left( 4 V^{2p}(y^\epsilon_{s\over \epsilon}) +C(t-s)^2+C \right) e^{ C(t-s+1)t}. 
\end{equation*} \end{proof} {\it Remark.} If $M={\mathbb R}^n$ and the $Y_i$ are vector fields with bounded first order derivatives, then $\rho_0^2$ is a pre-Lyapunov function satisfying the conditions of Theorem \ref{uniform-estimates}, hence Theorem \ref{uniform-estimates} holds. We return to Lemma \ref{lemma3-2} in Section \ref{section-estimates} to obtain a key estimate for Section \ref{section-rate}. Let us recall that $B_{V,r}$ is defined in (\ref{BVr}). \begin{corollary} \label{corollary-to-lemma3-2} Assume that (\ref{sde-3}) is complete for every $\epsilon \in (0,1)$ and that the conditions of Assumption \ref{assumption1} hold. Let $V\in {\bf \mathcal B}(M;{\mathbb R}_+)$ be a locally bounded function and $\epsilon_0$ a positive number s.t. for all $q\ge 1$ and $T>0$, there exist a locally bounded function $C_q: {\mathbb R}_+\to {\mathbb R}_+$ and a real valued polynomial $\lambda_q$ such that for $0\le s\le t\le T$ and for all $\epsilon\le\epsilon_0$ \begin{equation} \label{moment-assumption-20} \sup_{s\le u \le t} {\mathbf E}\left\{ V^q(y_{u\over \epsilon}^\epsilon) \; \big| {\mathcal F}_{s\over \epsilon} \right\} \le C_q(t)+C_q(t) \lambda_q( V(y_{s\over \epsilon}^\epsilon)), \quad \sup_{0<\epsilon \le \epsilon_0}{\mathbf E} (V^q(y_0^\epsilon))<\infty. \end{equation} Let $h\in {\bf \mathcal B}_b( G; {\mathbb R})$. If $f\in B_{V,0}$ is a function s.t. $L_{Y_j}f\in B_{V,0}$ and $ L_{Y_l}L_{Y_j}f\in B_{V,0}$ for all $j, l=1, \dots, m$, then for all $0\le s\le t$, $$\left| {\epsilon \over t-s} \int_{s\over \epsilon }^{t\over \epsilon} {\mathbf E} \left\{ f (y_r^\epsilon) h(z_{r}^\epsilon) \big| {\mathcal F}_{s\over \epsilon}\right\} dr -\bar h\; f(y_{s\over \epsilon}^\epsilon) \right| \le \tilde c |h|_\infty \gamma_\epsilon(y_{s\over \epsilon}^\epsilon) \left ({\epsilon^2\over t-s}+(t-s)\right).
$$ Here $\tilde c$ is a constant, see (\ref{cpsi}) below, and $$\gamma_\epsilon=|f| + \sum_{j=1}^m |L_{Y_j}f|+\sum_{j,l=1}^m {\epsilon\over t-s} \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{ \left|L_{Y_l} L_{Y_j}f(y^\epsilon_r)\right | \; \big|\;{\mathcal F}_{s\over \epsilon} \right\} dr.$$ For all $s<t$ and $p\ge 1$, $$\sup_{s\le u\le t }\sup_{\epsilon \le \epsilon_0} {\mathbf E}\left( \gamma_\epsilon(y_{u\over \epsilon}^\epsilon)\right)^p<\infty.$$ More explicitly, if $\sum_{j=1}^m\sum_{l=1}^m|L_{Y_l} L_{Y_j}f|\le K+KV^q$ where $K, q$ are constants, then there exists a constant $C(t)$ depending only on the differential equation (\ref{sde-3}) s.t. $$\gamma_\epsilon\le |f|+ \sum_{j=1}^m |L_{Y_j}f|+K+C(t)V^q.$$ \end{corollary} \begin{proof} By Lemma \ref{lemma3-2}, $${\begin{split} &\left| {\epsilon \over t-s} \int_{s\over \epsilon }^{t\over \epsilon} {\mathbf E} \left\{ f (y_r^\epsilon) h(z_{r}^\epsilon) \big| {\mathcal F}_{s\over \epsilon}\right\} dr -\bar h\; f(y_{s\over \epsilon}^\epsilon) \right| \\ & \le {2a\over \delta} |h|_\infty \left( W(z_{s\over \epsilon}^\epsilon)|f(y_{s\over \epsilon}^\epsilon)| +\sum_{j=1}^{\mathfrak m} \gamma^j_\epsilon |\alpha_j|_\infty \right)\left ({\epsilon^2\over t-s} +(t-s)\right),\\ \hbox{ where } \gamma^j_{\epsilon}(y)&=c_W (z_{s\over \epsilon}^\epsilon) \;|L_{Y_j}f (y^\epsilon_{s\over\epsilon})|+ \sum_{l=1}^m |\alpha_l|_\infty {\epsilon\over t-s} \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{ \left|L_{Y_l} L_{Y_j}f(y^\epsilon_r)\right|c_W (z^\epsilon_r) \; \big|\;{\mathcal F}_{s\over \epsilon} \right\} dr. \end{split}} $$ Since $W$ is bounded so is $c_W $, which is bounded by $2c(a,\delta) |W|_\infty$. 
Furthermore $$ {\mathbf E} \left\{ \left|L_{Y_l} L_{Y_j}f(y^\epsilon_r)\right|c_W (z^\epsilon_r) \; \big|\;{\mathcal F}_{s\over \epsilon} \right\} \le 2c(a,\delta) |W|_\infty{\mathbf E} \left\{ \left|L_{Y_l} L_{Y_j}f(y^\epsilon_r)\right| \; \big|\;{\mathcal F}_{s\over \epsilon} \right\}.$$ We gather all the constants together: \begin{equation} \tilde c= {2a\over \delta}|W|_\infty+2c(a,\delta)|W|_\infty\sum_{j,l=1}^m |\alpha_j|_\infty+2\left(\sum_{j=1}^m |\alpha_j|_\infty\right)^2. \label{cpsi} \end{equation} It is clear that $$\left| {\epsilon \over t-s} \int_{s\over \epsilon }^{t\over \epsilon} {\mathbf E} \left\{ f (y_r^\epsilon) h(z_{r}^\epsilon) \big| {\mathcal F}_{s\over \epsilon}\right\} dr -\bar h\; f(y_{s\over \epsilon}^\epsilon) \right| \le \tilde c\, \gamma_\epsilon |h|_\infty\left ({\epsilon^2\over t-s} +(t-s)\right).$$ Since $f$, $L_{Y_j}f$ and $L_{Y_l}L_{Y_j}f\in B_{V,0}$, by (\ref{moment-assumption-20}), the following quantities are finite for all $p\ge 1$: $$\sup_{\epsilon \le \epsilon_0} \sup_{s\le u \le t} {\mathbf E} \left| (L_{Y_l}L_{Y_j}f)(y_{u\over \epsilon}^\epsilon)\right| ^p, \quad \sup_{\epsilon \le \epsilon_0} \sup_{s\le u \le t} {\mathbf E} \left| L_{Y_j}f(y_{u\over \epsilon}^\epsilon)\right| ^p, \quad \sup_{\epsilon \le \epsilon_0} \sup_{s\le u \le t} {\mathbf E} \left| f(y_{u\over \epsilon}^\epsilon)\right|^p. $$ Furthermore since $\sum_{j=1}^m\sum_{l=1}^m|L_{Y_l} L_{Y_j}f|\le K+KV^q$, \begin{equation*} {\begin{split} \sum_{j=1}^m\sum_{l=1}^m {\epsilon\over t-s} \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{ \left|L_{Y_l} L_{Y_j}f(y^\epsilon_r)\right| \; \big|\;{\mathcal F}_{s\over \epsilon} \right\} dr \le K+C(t) V^q(y_{s\over \epsilon}^\epsilon). \end{split}} \end{equation*} Consequently, $\gamma_\epsilon \le |f|+ \sum_{j=1}^m |L_{Y_j}f|+K+C(t)V^q$, completing the proof.
\end{proof} \section{Convergence under H\"ormander's Conditions} \label{section-weak} Below $\mathrm {inj}(M)$ denotes the injectivity radius of $M$ and $\rho_y=\rho(y, \cdot)$ is the Riemannian distance function on $M$ from a point $y$. Let $o$ denote a point in $M$. The following proposition applies to an operator ${\mathcal L}_0$, on a compact manifold, satisfying H\"ormander's condition. \begin{proposition} \label{tightness} Let $M$ be a manifold with positive injectivity radius and $\epsilon_0>0$. Suppose that conditions (1)-(5) below hold, or that conditions (1)-(3), (4') and (5) hold. \begin{enumerate} \item [(1)] ${\mathcal L}_0$ is a regularity improving Fredholm operator on $L^2(G)$ for a compact manifold $G$; \item [(2)] $\{\alpha_k\} \subset C^3\cap N^\perp$; \item [(3)] for $\epsilon \in (0, \epsilon_0)$, (\ref{sde-3}) is complete and $\sup_{\epsilon \le \epsilon_0}{\mathbf E} \rho(y_0^\epsilon, o)<\infty$; \item [(4)] there exists a locally bounded function $V$ s.t. for all $\epsilon\le\epsilon_0$, for any $0\le s\le u\le t$, and for all $p\ge 1$, $${\mathbf E} V^p(y^\epsilon_0) \le c_0, \quad \sup_{s\le u\le t}{\mathbf E} \left\{ \left( V(y^\epsilon_{u\over \epsilon}) \right)^p \; \big| \; {\mathcal F}_{s\over \epsilon} \right\} \le K +K V^{p'}(y^\epsilon_{s\over \epsilon})$$ where $c_0=c_0(p)$, $K=K(p,t)$, and $ p'=p'(p,t)$ is a natural number; $K, p'$ are locally bounded in $t$.
\item[(4')] There exist a function $V\in C^2(M;{\mathbb R}_+)$ and positive constants $c$ and $K$ such that $$\sum_{j=1}^m |L_{Y_j} V| \le c+KV, \quad \sum_{i,j=1}^m |L_{Y_i}L_{Y_j} V| \le c+KV.$$ \item [(5)] For $V$ in part (4) or in part (4'), suppose that for some number $\delta>0$, $$|Y_j|\in B_{V,0}, \quad \sup_{\rho(y,\cdot) \le \delta}| L_{Y_i}L_{Y_j}\rho_y(\cdot) |\in B_{V,0}.$$ \end{enumerate} Then there exists a distance function $\tilde \rho$ on $M$ that is compatible with the topology of $M$ and there exists a number $\alpha>0$ such that $$\sup_{\epsilon\le \epsilon_0} {\mathbf E} \sup_{s\not =t} \left({\tilde \rho\left( y_{s\over \epsilon}^\epsilon, y_{t\over \epsilon}^\epsilon\right) \over |t-s|^\alpha }\right)<\infty,$$ and for any $T>0$, $\{ (y_{t\over\epsilon}^\epsilon, t\le T), 0<\epsilon\le 1\}$ is tight. \end{proposition} \begin{proof} By Theorem \ref{uniform-estimates}, conditions (1)-(3) and (4') imply condition (4). (a) Let ${\delta}<\min (1, {1\over 2} \mathrm {inj}(M))$. Let $f: {\mathbb R}_+\to {\mathbb R}_+$ be a smooth concave function such that $f(r)=r$ when $r\le{\delta\over 2}$ and $f(r)=1$ when $r\ge \delta$. Then $\tilde \rho(x,y)=f\circ \rho$ is a distance function with $\tilde \rho \le 1$. Its open sets generate the same topology on $M$ as that generated by $\rho$. Let $\beta_j$ be a solution to ${\mathcal L}_0\beta_j=\alpha_j$. For any $y_0\in M$, $|L_{Y_j}\tilde \rho^2(y_0,\cdot)|\le 2|Y_j(\cdot)|$. Since $|Y_j|\in B_{V,0}$, $\int_0^{t\over \epsilon} {\mathbf E} |L_{Y_j} \tilde \rho^2(y_0, y_r^\epsilon)|^2 dr<\infty$.
We may apply (\ref{Ito-tight-2}) of Lemma \ref{lemma5}: $${ \begin{split} &{\mathbf E}\left\{ \tilde \rho^2\left(y^\epsilon_{s\over \epsilon}, y_{t\over \epsilon}^\epsilon\right) \;\big |\; {\mathcal F}_{s\over \epsilon}\right\} \\ =& \epsilon \sum_{j=1}^m \left( {\mathbf E}\left\{ \left(L_{Y_j}\tilde \rho^2(y^\epsilon_{s\over \epsilon}, y_{t\over \epsilon}^\epsilon)\right) \;\beta_j(z^\epsilon_{t\over \epsilon})\; \big |\; {\mathcal F}_{s\over \epsilon}\right\} - \left(L_{Y_j}\tilde \rho^2(y^\epsilon_{s\over \epsilon}, \cdot)\right) (y_{s\over \epsilon}^\epsilon) \;\beta_j(z^\epsilon_{s\over \epsilon})\right)\\ &-\epsilon \sum_{i,j=1}^m\int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E}\left\{ \left(L_{Y_i}L_{Y_j} \tilde \rho^2( y^\epsilon_{s\over \epsilon}, y^\epsilon_r)\right)\; \alpha_i(z^\epsilon_r)\;\beta_j(z^\epsilon_r)\; \big |\; {\mathcal F}_{s\over \epsilon}\right\} \;dr. \end{split} } $$ In the above equation, differentiation of $(\tilde \rho)^2 $ is w.r.t. the second variable. By construction $ \tilde \rho $ is bounded by $1$ and $|\nabla \tilde \rho |\le |\nabla \rho|\le 1$. Furthermore, since the $\alpha_j$ are $C^3$ functions on a compact manifold, the $\beta_j$ are bounded. For any $y_0\in M$, $L_{Y_j} \tilde \rho ( y_0, \cdot)=f'(\rho_{y_0} )L_{Y_j} \rho_{y_0} $. Thus $$ \left|{\mathbf E}\left\{ \left(L_{Y_j}\tilde \rho^2( y_{s\over \epsilon}^\epsilon, y^\epsilon_{t\over \epsilon})\right) \;\beta_j(z^\epsilon_{t\over \epsilon}) \;\big |\; {\mathcal F}_{s\over \epsilon}\right\} \right| \le |\beta_j|_\infty {\mathbf E} \left\{\tilde \rho(y_{s\over \epsilon}^\epsilon,{y^\epsilon_{t\over \epsilon}}) |Y_j({y^\epsilon_{t\over \epsilon}})| \;\big |\; {\mathcal F}_{s\over \epsilon}\right\} .$$ Recall $\tilde \rho\le 1$ and there are numbers $K_1$ and $p_1$ s.t.
$|Y_j|\le K_1+K_1V^{p_1}$, so $$ {\mathbf E} \left\{ |Y_j({y^\epsilon_{t\over \epsilon}})| \;\big |\; {\mathcal F}_{s\over \epsilon}\right\} \le K_1+K_1{\mathbf E}\left\{ V^{p_1}({y^\epsilon_{t\over \epsilon}}) \;\big |\; {\mathcal F}_{s\over \epsilon}\right\} \le K_1+K_1K(p_1,t) V^{p'(p_1,t)}({y^\epsilon_{s\over \epsilon}}).$$ Let $g_1 =K_1+K_1K(p_1,t) V^{p'(p_1,t)}$; it is clear that $g_1\in B_{V,0}$. We remark that $$L_{Y_i}L_{Y_j}(\tilde \rho^2) =(f^2)''(\rho) (L_{Y_i}\rho) (L_{Y_j}\rho) + (f^2)'(\rho) L_{Y_i}L_{Y_j}\rho.$$ By the assumption, there exists a function $g_2\in B_{V,0}$ s.t. $${\mathbf E}\left\{ \tilde \rho^2 \left(y^\epsilon_{s\over \epsilon}, y_{t\over \epsilon}^\epsilon\right) \big | {\mathcal F}_{s\over \epsilon}\right\} \le g_2(y^\epsilon_{s\over \epsilon})\,\epsilon+g_2(y^\epsilon_{s\over \epsilon})(t-s) .$$ For $\epsilon\ge \sqrt {t-s}$, it is better to estimate directly from (\ref{sde-3}): \begin{equation*} {\begin{split} {\mathbf E}\left\{ \tilde \rho^2 \left(y^\epsilon_{s\over \epsilon}, y_{t\over \epsilon}^\epsilon\right) \;\big |\; {\mathcal F}_{s\over \epsilon}\right\} &=\sum_{k=1}^m\int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E}\left\{ 2\tilde \rho \left(y^\epsilon_{s\over \epsilon}, y_{r}^\epsilon\right) L_{Y_k}\tilde \rho \left(y^\epsilon_{s\over \epsilon}, y_{r}^\epsilon\right) \alpha_k(z_r^\epsilon) \;\big |\; {\mathcal F}_{s\over \epsilon}\right\} \;dr \\ &\le 2\sum_{k=1}^m |\alpha_k|_\infty \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{ |Y_k(y_r^\epsilon)| \;\big |\; {\mathcal F}_{s\over \epsilon}\right\} \;dr \le g_3(y^\epsilon_{s\over \epsilon}) \left({t-s \over \epsilon}\right) \end{split}} \end{equation*} where $g_3\in B_{V,0}$.
We interpolate these estimates and conclude that for some function $g_4\in B_{V,0}$ the following holds: ${\mathbf E}\left\{ \tilde \rho^2\left(y_{t\over \epsilon}^{\epsilon}, y_{s\over \epsilon}^\epsilon\right)\;\big |\; {\mathcal F}_{s\over \epsilon}\right\} \le (t-s)g_4(y^\epsilon_{s\over \epsilon})$. There are a function $g_5\in B_{V,0}$ and a constant $c$ s.t. $${\mathbf E} \tilde \rho^2\left(y_{t\over \epsilon}^{\epsilon}, y_{s\over \epsilon}^\epsilon\right) \le {\mathbf E} g_5(y_0^\epsilon) (t-s)\le c(t-s).$$ In the last step we use condition (4) on the initial value. By Kolmogorov's criterion, there exists $\alpha>0$ such that $$\sup_\epsilon {\mathbf E} \sup_{s\not =t} \left({\tilde \rho^2 ( y_{s\over \epsilon}^\epsilon, y_{t\over \epsilon}^\epsilon) \over |t-s|^\alpha }\right)<\infty,$$ and the processes $(y_{s\over \epsilon}^\epsilon)$ are equi-uniformly H\"older continuous on any compact time interval. Consequently the family of stochastic processes $\{ y_{t\over\epsilon}^\epsilon, 0<\epsilon \le 1 \}$ is tight. \end{proof} If ${\mathcal L}_0$ is the Laplace-Beltrami operator on a compact Riemannian manifold and $\pi$ its invariant probability measure, then for any Lipschitz continuous function $f:G\to {\mathbb R}$, \begin{equation} \label{lln1} \sqrt{{\mathbf E} \left({1\over t} \int_0^t f(z_s) ds-\int f \; d\pi \right)^2} \le C(\|f\|_{Osc}){1\over \sqrt t}, \end{equation} where $\|f\|_{Osc}$ denotes the oscillation of $f$. If ${\mathcal L}_0$ is not elliptic, we suppose that it satisfies H\"ormander's condition and has index $0$. The dimension of the kernel of ${\mathcal L}_0^*$ then equals the dimension of the kernel of ${\mathcal L}_0$. Let $\{u_i, i=1, \dots, n_0\}$ be a basis of $\mathop{\rm ker}( {\mathcal L}_0)$ and $\{\pi_i,\ i=1, \dots, n_0\}$ the dual basis for the null space of ${\mathcal L}_0^*$. For $f\in L^2(G;{\mathbb R})$ we define $\bar f= \sum_{i=1}^{n_0} u_i\<f, \pi_i\>$ where the bracket denotes the dual pairing between $L^2$ and $(L^2)^*$.
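The $t^{-1/2}$ rate in (\ref{lln1}) can be checked numerically in the simplest setting. The following Python sketch is not part of the argument; the choice of manifold (the circle), the test function $\cos$, and all step-size, horizon and replica parameters are illustrative assumptions. It simulates Brownian motion on $S^1$ with an Euler scheme and compares the root mean square error of the time average of $\cos$ against its mean $0$ under the uniform (invariant) measure, for a short and a long time horizon.

```python
import math
import random

def rms_time_average_error(t, dt=0.01, reps=200, seed=0):
    """RMS of (1/t) * integral_0^t cos(theta_s) ds for Brownian motion theta
    on the circle, whose invariant probability measure is uniform, so the
    ergodic mean of cos is 0.  All parameters are illustrative choices."""
    rng = random.Random(seed)
    n = int(round(t / dt))
    sq_errs = []
    for _ in range(reps):
        theta, acc = 0.0, 0.0
        for _ in range(n):
            acc += math.cos(theta) * dt                   # left-point Riemann sum
            theta += math.sqrt(dt) * rng.gauss(0.0, 1.0)  # Euler step of d(theta)=dW
        sq_errs.append((acc / t) ** 2)
    return math.sqrt(sum(sq_errs) / reps)

# The bound predicts the RMS error decays like 1/sqrt(t):
e_short, e_long = rms_time_average_error(1.0), rms_time_average_error(25.0)
print(e_short, e_long)
```

With these (arbitrary) parameters the long-horizon error comes out markedly smaller than the short-horizon one, of the order suggested by the $t^{-1/2}$ bound; no claim about the sharp constant $C(\|f\|_{Osc})$ is made.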
\begin{lemma} \label{lln} Suppose that $(z_t)$ is a Markov process on a compact manifold $G$ with generator $ {\mathcal L}_0$ satisfying H\"ormander's condition and having Fredholm index $0$. Then for any function $f\in C^r(G; {\mathbb R})$, where $r\ge \max{\{3, {n\over 2}+1\}}$, there is a constant $C$ depending on $\|f\|_{{n\over 2}+1}$, s.t. \begin{equation} \label{llln-1} \sqrt{{\mathbf E} \left({1\over t-s} \int_s^t f(z_r) dr- \bar f \right)^2} \le C(\|f-\bar f\|_{{n\over 2}+1}) {1\over \sqrt{t-s}}. \end{equation} \end{lemma} \begin{proof} Since $\<\bar f, \pi_j\>=\<f, \pi_j\>$, $f-\bar f\in N^\perp$. By working with $f-\bar f$ we may assume that $f\in N^\perp$ and let $g$ be a solution to ${\mathcal L}_0 g=f$. By H\"ormander's theorem, \cite{Hormander-hypo-acta}, there is a positive number $\delta$, such that for all $u\in C^\infty(G)$, $$\|u\|_{s+\delta} \le C( \|{\mathcal L}_0 u\|_s+\|u\|_{L_2} ).$$ The number $\delta=2^{1-k}$ where $k\in {\mathbb N}$ is related to the number of brackets needed to generate the tangent spaces. Furthermore every $u$ such that $\|{\mathcal L}_0 u\|_s<\infty $ must be in $H^s$. If $s>{n\over 2}+1$, $H^s$ is embedded in $C^1$ and for some constants $c_i$, \begin{equation*} {\begin{split} |g|_{C^1(G)}\le c_1 \,\|g\|_{ {n\over 2}+1+\delta}\le c_2 \; ( \|f\|_{{n\over 2}+1}+|g|_{L_2}) \le c_3\, \|f\|_{{n\over 2}+1}. \end{split}} \end{equation*} Recall that ${\mathcal L}_0={1\over 2}\sum_{i=1}^{m'} L_{X_i}L_{X_i}+L_{X_0}$. Let $\{W_t^j, j=1, \dots, m'\}$ be independent one dimensional Brownian motions. Let $(z_t)$ be solutions of $dz_t=\sum_{j=1}^{m'} X_j(z_t)\circ dW_t^j+X_0(z_t)\,dt$.
Since $f$ is $C^2$, $${1\over t-s} \int_s^t f(z_r)dr ={1\over t-s} \left(g(z_t)-g(z_s)\right) -{1\over t-s}\left(\sum_{j=1}^{m'} \int_s^t (dg(X_j))(z_r) dW_r^j\right).$$ We apply the Sobolev estimates to $g$ and use Doob's $L^2$ inequality to see that for $t-s\ge1$ there is a constant $C$ such that, $${ \begin{split} {\mathbf E} \left({1\over t-s} \int_s^t f(z_r)dr\right)^2 &\le {4 \over (t-s)^2} |g|_\infty^2 +{8\over (t-s)^2} \sum_{j=1}^{m'} \int_s^t \left( {\mathbf E}|dg(z_r)|^2|X_j(z_r)|^2 \right) dr\\ & \le {4 \over (t-s)^2} (|g|_\infty)^2 +{8m'\over t-s} |dg|^2_{\infty} \sum_{j=1}^{m'} |X_j|_\infty^2 \le C(\|f\|_{{n\over 2}+1})^2{1\over t-s}. \end{split} } $$ \end{proof} We remark that a self-adjoint operator satisfying H\"ormander's condition has index zero. \begin{lemma} \label{weak-convergence} Suppose that ${\mathcal L}_0$ satisfies H\"ormander's condition. In addition it has Fredholm index $0$ or it has a unique invariant probability measure. Let $r\ge \max{\{3, {n\over 2}+1\}}$. Let $h : M \times G\to {\mathbb R}$ be such that $h(y, \cdot)\in C^r$ for each $y$ and that $|h|_\infty+ \sup_{z} |h(\cdot, z)|_{\mathrm {Lip}} +\sup_{y} |h(y, \cdot)|_{C^r}<\infty$. Let $s\le t$ be a pair of positive numbers, and $F\in BC( C([0,s]; M)\to {\mathbb R})$. For any equi-uniformly continuous subsequence, $\tilde y^n_t:=(y^{\epsilon_n}_{t\over {\epsilon_n}})$, of $(y^\epsilon_{t\over \epsilon})$ that converges weakly to a continuous process $\bar y_\cdot$ as $n\to \infty$, the following convergence holds weakly: $$F(y^{\epsilon_n}_{\cdot\over \epsilon_n}) \int_s^t h(y^{\epsilon_n}_{u\over \epsilon_n}, z^{\epsilon_n}_{u\over \epsilon_n}) du \to F( \bar y_\cdot) \int_s^t \overline{ h(\bar y_u, \cdot) }du$$ where $\overline{ h(y, \cdot)}=\sum_{i=1}^{n_0} u_i \< h(y, \cdot ),\pi_i\>$. \end{lemma} \begin{proof} For simplicity we omit the subscript $n$.
The required convergence follows from Lemma 4.3 in \cite{Li-geodesic}, where it was assumed that (\ref{lln1}) holds and that ${\mathcal L}_0$ has a unique invariant measure $\mu$. It is easy to check that the proof there remains valid; we take care to replace $\int_G h(y,z) d\mu(z)$ in Lemma 4.3 there by $\sum_{i=1}^{n_0} u_i \< h(y, \cdot ),\pi_i\>$. We remark that by the regularity improving property each $u_i$ is smooth and therefore bounded. In the first part of the proof, we divide $[s,t]$ into sub-intervals $[t_{k-1}, t_k]$ of size $\epsilon$, freeze the slow variable $(y^\epsilon_{u\over \epsilon})$ on each sub-interval, and approximate $h(y_{u\over\epsilon}^\epsilon, z_{u\over\epsilon}^\epsilon)$ by $h(y_{t_{k-1}\over\epsilon}^\epsilon, z_{u\over\epsilon}^\epsilon)$ on $[t_{k-1}, t_{k}]$. This approximation is clear: the computation is exactly as in Lemma 4.3 of \cite{Li-geodesic} and we use the uniform continuity of $(y_t^\epsilon)$ and the fact that $|h|_\infty$ and $\sup_z |h( \cdot, z)|_{\mathrm {Lip}}$ are finite. The convergence of $$\int_{t_{k-1}}^{t_{k}} h(y_{t_{k-1}\over\epsilon}^\epsilon, z_{u\over\epsilon}^\epsilon)du \to \Delta t_k \sum_{i=1}^{n_0} u_i \< h(y_{t_{k-1}\over\epsilon}^\epsilon, \cdot), \pi_i\> $$ follows from the law of large numbers in Lemma \ref{lln}. The convergence of $$ \sum_k\Delta t_k \sum_{i=1}^{n_0} u_i \< h(y_{t_{k-1}\over\epsilon}^\epsilon, \cdot), \pi_i\> \to \sum_{i=1}^{n_0} u_i \int_s^t \< h(y_{u\over\epsilon}^\epsilon, \cdot), \pi_i\>du$$ is also clear, and follows from the Lipschitz continuity of $h$ in the first variable and the equicontinuity of the paths $y^\epsilon$. Finally, denoting by $y^\epsilon_{[0,s]}$ the restriction of the path $y^\epsilon_\cdot$ to the interval $[0,s]$, the weak convergence of $ \sum_{i=1}^{n_0} u_i F(y^\epsilon_{[0,s]}) \int_s^t \< h(y_{u\over\epsilon}^\epsilon, \cdot), \pi_i\>du$ to the required limit is immediate, as explained in Lemma 4.3 of \cite{Li-geodesic}.
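The freezing-and-averaging scheme used above can be mimicked by a deterministic toy computation, in which the fast variable is an explicit oscillation rather than a diffusion; the following sketch (purely illustrative, playing no role in the proof) checks that integrating a slowly varying factor against a fast oscillation reproduces the time average, up to an error of order $\epsilon$:

```python
import math

# Toy version of the averaging step: fast variable z_u = sin(u / eps),
# g(z) = z^2 with time average 1/2, slow factor y(u) = 1 + u.  For small
# eps, the integral int_0^1 y(u) g(z_u) du is close to
# (1/2) int_0^1 y(u) du = 0.75.
eps = 1e-3
n = 200_000                     # about 200 quadrature points per oscillation
h = 1.0 / n

def y(u):                       # slowly varying factor
    return 1.0 + u

lhs = h * sum(y((k + 0.5) * h) * math.sin((k + 0.5) * h / eps) ** 2
              for k in range(n))
rhs = 0.5 * 1.5                 # (1/2) * int_0^1 (1 + u) du
assert abs(lhs - rhs) < 0.01    # averaging error is of order eps
```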
\end{proof} \begin{assumption} \label{assumption-Hormander} The generator ${\mathcal L}_0$ satisfies H\"ormander's condition and has Fredholm index $0$ (or has a unique invariant probability measure). For $k=1, \dots, m$, $\alpha_k\in C^r( G; {\mathbb R})\cap N^\perp$ for some $r\ge \max\{3,{n\over 2}+1\}$. \end{assumption} If ${\mathcal L}_0$ is elliptic, it is sufficient to assume $\alpha_k\in {\bf \mathcal B}_b(G;{\mathbb R})$ instead of $\alpha_k\in C^r$. \begin{theorem} \label{thm-weak} If ${\mathcal L}_0$, $\alpha_k$, $(y_0^\epsilon)$ and $|Y_j|$ satisfy the conditions of Proposition \ref{tightness} and Assumption \ref{assumption-Hormander}, then $(y_{t\over\epsilon}^\epsilon)$ converges weakly to the Markov process determined by the Markov generator $$\bar {\mathcal L} =-\sum_{i,j=1}^m \overline{ \alpha_i \beta_j } L_{Y_i}L_{Y_j}, \quad \overline{ \alpha_i \beta_j }=\sum_{b=1}^{n_0} u_b \< \alpha_i \beta_j ,\pi_b\>.$$ \end{theorem} \begin{proof} By Proposition \ref{tightness}, $\{(y_{t\over\epsilon}^\epsilon, t\ge 0)\}$ is tight. We prove that any convergent sub-sequence converges to the same limit. Let $(\epsilon_n)$ be a monotone sequence converging to zero such that the probability distributions of $(y_{t\over \epsilon_n}^{\epsilon_n})$ converge weakly, on $[0,T]$, to a measure $\bar \mu$. For notational simplicity we may assume that $\{(y_{t\over\epsilon}^\epsilon, t\ge 0)\}$ converges to $\bar\mu$. Let $s<t$, let $\{{\bf \mathcal B}_s\}$ be the canonical filtration, $(Y_s)$ the canonical process, and $Y_{[0,s]}$ its restriction to $[0,s]$. By the Stroock-Varadhan martingale method, it is sufficient to prove that $f(Y_t)-f(Y_s)-\int_s^t \bar {\mathcal L} f(Y_r)\; d r$ is a local martingale for any $f\in C_K^\infty(M)$.
By (\ref{Ito-tight}), the following is a local martingale, \begin{equation*} {\begin{split} &f(y^\epsilon_{t\over \epsilon}) -f(y^\epsilon_{s\over \epsilon}) - \epsilon \sum_{j=1}^m \left( df(Y_j(y^\epsilon_{t\over \epsilon} ) )\beta_j(z^\epsilon_{t\over \epsilon}) -df(Y_j(y^\epsilon_{s\over \epsilon} ))\beta_j( z^\epsilon_{s\over \epsilon})\right)\\ &+\epsilon \sum_{i,j=1}^m\int_{s\over \epsilon}^{t\over \epsilon} L_{Y_i}L_{Y_j} f(y^\epsilon_r) \alpha_i(z^\epsilon_r)\;\beta_j(z^\epsilon_r) \;dr. \end{split}}\end{equation*} Since the second term converges to zero as $\epsilon $ tends to zero, it is sufficient to prove $$\lim_{\epsilon \to 0} {\mathbf E}\left\{\epsilon \sum_{i,j=1}^m\int_{s\over \epsilon}^{t\over \epsilon} L_{Y_i}L_{Y_j} f(y^\epsilon_r) \alpha_i(z^\epsilon_r)\;\beta_j(z^\epsilon_r) \;dr+\int_s^t \bar {\mathcal L} f(y_{r\over \epsilon}^\epsilon)\; d r \; \big| {\mathcal F}_{s\over \epsilon}\right\}=0.$$ (The sign of the second integral reflects the minus sign in the definition of $\bar {\mathcal L}$.) This follows from Lemma \ref{weak-convergence}, completing the proof.\end{proof} \begin{corollary} \label{convergence-wasserstein-p} Let $p\ge 1$ be a number and suppose that $\rho^p\in B_{V, 0}$. Then, under the conditions of Theorem \ref{thm-weak} and Assumption \ref{assumption2-Y}, $(y_{\cdot\over \epsilon}^\epsilon)$ converges in the Wasserstein $p$-distance on $C([0,t];M)$. \end{corollary} \begin{proof} By Theorem \ref{uniform-estimates}, $\sup_{\epsilon\le \epsilon_0} {\mathbf E}\sup_{s\le t} \rho^p(o, y_{s\over \epsilon}^\epsilon)<\infty$. Let $W_p$ denote the Wasserstein $p$-distance: $$W_p(\mu_1, \mu_2)=\left( \inf_\mu \int \sup_{s\le t} \rho^p(\sigma_1(s), \sigma_2(s))\, d\mu(\sigma_1, \sigma_2)\right)^{1\over p}.$$ Here the infimum is taken over all probability measures $\mu$ on $C([0,t];M)\times C([0,t];M)$ with marginals $\mu_1$ and $\mu_2$.
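As an aside, in one dimension the distance $W_p$ between two empirical measures with the same number of equal-weight atoms is attained by the monotone coupling of order statistics; the following sketch (for intuition only, unrelated to the path-space measures in the proof) computes it:

```python
# W_p between two empirical measures on R with n equal-weight atoms each:
# in one dimension the optimal coupling pairs the order statistics.
def wasserstein_p(a, b, p=1):
    assert len(a) == len(b)
    a, b = sorted(a), sorted(b)
    n = len(a)
    return (sum(abs(x - y) ** p for x, y in zip(a, b)) / n) ** (1.0 / p)

# Translating a sample by a constant c moves it exactly distance c,
# for every p >= 1.
sample = [0.0, 1.0, 2.0]
shifted = [x + 0.5 for x in sample]
assert wasserstein_p(sample, sample, p=1) == 0.0
assert abs(wasserstein_p(sample, shifted, p=2) - 0.5) < 1e-12
```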
Note that $C([0,t];M)$ is a Polish space; a family of probability measures $\mu_n$ converges to $\mu$ in $W_p$ if and only if the following holds: (1) it converges weakly and (2) the functions $\sigma\mapsto \sup_{s\le t} \rho^p(o, \sigma(s))$ are uniformly integrable with respect to $(\mu_n)$. The conclusion follows. \end{proof} \section{A study of the semigroups} \label{section-sde} The primary aim of this section is to study the properties of $P_t f$ for $f\in B_{V,r}$, where $P_t$ is the semigroup for a generic stochastic differential equation. These results will be applied to the limit equation, to provide the necessary a priori estimates. Theorem \ref{limit-thm} should be of independent interest; it also leads to Lemma \ref{semigroup-estimate-2}, which will be used in Section \ref{section-rate}. Throughout this section $M$ is a complete smooth Riemannian manifold. Let $Y_0$ be a $C^5$ vector field, $\{Y_k, k=1,\dots, m\}$ be $C^6$ smooth vector fields on $M$, and $\{B_t^k\}$ independent real valued Brownian motions. Let $(\Phi_t(y), t<\zeta(y)) $ be the maximal solution to the following equation \begin{equation} \label{limit.sde} dy_t=\sum_{k=1}^m Y_k(y_t)\circ dB_t^k+Y_0(y_t)dt \end{equation} with initial value $y$. Its Markov generator is ${\mathcal L} f={1\over 2} \sum_{k=1}^m L_{Y_k}L_{Y_k}f+L_{Y_0}f$. Let $Z={1\over 2}\sum_{k=1}^m \nabla _{Y_k}Y_k+Y_0$ be the drift vector field, so \begin{equation}\label{generator-0} {\mathcal L} f={1\over 2}\sum_{k=1}^m \nabla df(Y_k,Y_k)+df(Z). \end{equation} If there exist a $C^3$ pre-Lyapunov function $V$ and constants $c$ and $K$ such that ${\mathcal L} V\le c+KV$, then (\ref{limit.sde}) is complete. However we do not limit ourselves to Lyapunov tests for the completeness of the SDE. Let us denote $|f|_r=\sum_{k=1}^{r}|\nabla^{(k-1)} df|$ and $|f|_{r,\infty}=\sum_{k=1}^{r}|\nabla^{(k-1)} df|_\infty$. The following observation is useful. \begin{lemma} \label{Lf-L2f} Let $V\in {\bf \mathcal B}(M;{\mathbb R})$ be locally bounded.
\begin{itemize} \item Suppose that $\sum_{j=1}^m |Y_j| \in B_{V,0}$ and $ |Z|\in B_{V,0}$. Then if $f\in B_{V, 2}$, ${\mathcal L} f\in B_{V,0}$. If $f \in BC^2$, $|{\mathcal L} f|\le |f|_{2, \infty} F_1$ where $F_1\in B_{V,0}$ does not depend on $f$. \item Suppose that $$\sum_{j=1}^m( |Y_j| +|\nabla Y_j|+|\nabla^{(2)} Y_j|) \in B_{V,0}, \quad |Z|+|\nabla Z|+|\nabla^{(2)} Z|\in B_{V,0}.$$ If $f\in B_{V,4}$, ${\mathcal L}^2 f\in B_{V,0}$. If $f \in BC^4$, $|{\mathcal L}^2 f|\le |f|_{4,\infty} F_2$ where $F_2$ is a function in $B_{V,0}$ not depending on $f$. \end{itemize} \end{lemma} \begin{proof} That ${\mathcal L} f$ belongs to $B_{V,0}$ follows from (\ref{generator-0}). If $f\in BC^2$, $|{\mathcal L} f|\le |f|_{2,\infty} (\sum_{k=1}^m|Y_k|^2+|Z|)$. For the second part we observe that ${\mathcal L}^2 f$ involves at most four derivatives of $f$ and two derivatives of the $Y_j$, $j=1, \dots, m$, and of $Z$. \end{proof} Let $d\Phi_t(v)$ denote the derivative flow in the direction of $v\in T_yM$. It is the derivative, in probability, of the map $y\mapsto \Phi_t(y, \omega )$. Moreover, it solves the following stochastic covariant differential equation along the solutions $y_t:=\Phi_t(y_0)$, $$Dv_t=\sum_{k=1}^m \nabla_{v_t} Y_k\circ dB_t^k+\nabla_{v_t} Y_0dt.$$ Here $D V_t:=\parals_t(y_\cdot)\, d (\parals_t^{-1}(y_\cdot) V_t)$, where $\parals_t(y_\cdot) :T_{y_0}M\to T_{y_t}M$ is the stochastic parallel transport map along the path $y_\cdot$. Denote by $|d\Phi_t|_{y_0}$ the norm of $d\Phi_t(y_0): T_{y_0}M\to T_{y_t}M$. For $p>0$, $y\in M$ and $v\in T_yM$, we define $H_p(y)\in {\mathbb L}(T_yM\times T_yM; {\mathbb R})$ by \begin{equation*} H_p(y)(v,v)= \sum_{k=1}^{m} |\nabla Y_k(v)|^2 +(p-2)\sum_{k=1}^{m} {\<\nabla Y_k(v), v\>^2\over |v|^2} + 2\<\nabla Z(v), v\>. \end{equation*} Let $\underline h_p(y)=\sup_{ |v|=1} H_p(y)(v,v)$. Its upper bound will be used to control $|d\Phi_t|_y$. \begin{assumption} \label{Y-condition} The equation (\ref{limit.sde}) is complete.
Conditions (i) and (ii), or (i') and (ii), below hold. \begin{itemize} \item[(i)] There exists a locally bounded function $V\in {\bf \mathcal B}(M;{\mathbb R}_+)$, s.t. for all $q\ge 1$ and $t\le T$, there exist a number $C_q(t)$ and a polynomial $\lambda_q$ such that \begin{equation} \label{moment-assumption-2} \sup_{s\le t}{\mathbf E}(|V(\Phi_s(y))|^q)\le C_q(t)+C_q(t) \lambda_q( V(y)). \end{equation} \item[(i')] There exist $V\in C^3(M; {\mathbb R}_+)$ and constants $c$ and $K$ such that $${\mathcal L} V\le c+KV, \quad |L_{Y_j}V|\le c+KV, \quad j= 1, \dots, m.$$ \item [(ii)] Let $\tilde V=1+ \ln(1+|V|)$. For some constant $c$, \begin{equation}\label{5.2} \sum_{k=1}^{m} |\nabla Y_k|^2\le c\tilde V, \quad \sup_{|v|=1}{\<\nabla Z(v),v\>} \le c\tilde V. \end{equation} \end{itemize} \end{assumption} {\it Remark.} Suppose that (\ref{limit.sde}) is complete. Since ${\mathcal L} V^q =qV^{q-1} {\mathcal L} V+{1\over 2}q(q-1)V^{q-2} \sum_{j=1}^m|L_{Y_j}V|^2$, (i') implies (i). In fact, $ {\mathbf E} \sup_{s\le t} \left(V(y_s)\right)^q\le \left( {\mathbf E} V(y_0)^q+cq^2t \right) e^{(c+K)q^2t}$. Recall that (\ref{limit.sde}) is strongly complete if $(t,y)\mapsto \Phi_t(y)$ is continuous almost surely on $[0, t]\times M$ for any $t>0$. \begin{theorem} \label{limit-thm} Under Assumption \ref{Y-condition}, the following statements hold. \begin{enumerate} \item The SDE (\ref{limit.sde}) is strongly complete and for every $t\le T$, $ \Phi_t(\cdot)$ is $C^4$. Furthermore for all $p\ge 1$, there exists a positive number $C(t,p)$ such that \begin{equation} \label{derivative} {\mathbf E}\left(\sup_{s\le t}|d\Phi_s(y)|^p\right)\le C(t,p)+C(t,p)V^{C(t,p)}(y) . \end{equation} \item Let $f \in B_{V,1}$. Define $\delta P_t (df)={\mathbf E} df(d\Phi_t(\cdot))$. Then $d(P_tf)=\delta P_t(df)$ and $|d(P_tf)|\in B_{V,0}$.
Furthermore, for a constant $C(t,p)$ independent of $f$, $$ |d(P_tf)|\le \sqrt{ {\mathbf E} \left(|df|_{\Phi_t(y)}\right)^2} \sqrt{ C(t,p)(1+ V ^{C(t,p)}(y))}.$$ \item Suppose furthermore that $$\sum_{j=1}^m\sum_{\alpha=0}^3|\nabla^{(\alpha)}Y_j|\in B_{V,0}, \qquad \sum_{\alpha=0}^2|\nabla^{(\alpha)} Y_0| \in B_{V, 0}.$$ Then, (a) ${\mathbf E}\sup_{s\le t}|\nabla d\Phi_s|^2(y)\in B_{V,0}$; (b) If $f\in B_{V, 2}$, then $P_tf\in B_{V,2}$, and $$(\nabla dP_tf)(u_1,u_2)={\mathbf E} \nabla df (d\Phi_t(u_1), d\Phi_t(u_2))+{\mathbf E} df (\nabla _{u_1} d\Phi_t(u_2)).$$ Furthermore, (c) ${d P_t f\over dt }=P_t {\mathcal L} f$, and ${\mathcal L} (P_t f)=P_t ({\mathcal L} f)$. \item Let $r\ge 2$. Suppose furthermore that $$ {\begin{split} \sum_{\alpha=0}^{r}|\nabla ^{(\alpha)}Y_0| \in B_{V, 0}, \quad \sum_{\alpha=0}^{r+1} \sum_{k=1}^m|\nabla^{(\alpha)} Y_k|\in B_{ V,0}. \end{split}}$$ Then ${\mathbf E}\sup_{s\le t}(|\nabla^{(r-1)} d\Phi_s|_y)^2$ belongs to $B_{V,0}$. If $f\in B_{V, r}$, then $P_tf\in B_{V,r}$. \end{enumerate} \end{theorem} \begin{proof} The statement on strong completeness follows from the following theorem, see Thm. 5.1 in \cite{Li-flow}. Suppose that (\ref{limit.sde}) is complete. If $\tilde V$ is a function and $c_0$ a number such that for all $t>0$, $K$ compact, and all constants $\lambda$, \begin{equation} \label{assumption-strong-compl} \sup_{y\in K}{\mathbf E} \exp{\left( \lambda \int_0^t \tilde V(\Phi_s(y))ds\right)}<\infty, \quad \sum_{k=1}^{m} |\nabla Y_k|^2\le c_0\tilde V, \quad \underline h_p\le 6pc_0 \tilde V, \end{equation} then (\ref{limit.sde}) is strongly complete. Furthermore for every $p\ge 1$ there exists a constant $c(p)$ such that \begin{equation} {\mathbf E}\left(\sup_{s\le t}|d\Phi_s(y)|^p\right)\le c(p){\mathbf E}\left(\exp{\left(6p^2 \int_0^{t}\tilde V(\Phi_s(y)) ds\right)}\right). \end{equation} Since the $Y_j$ are $C^6$, for every $t$, $ \Phi_t(\cdot)$ is $C^4$.
It is easy to verify that condition (\ref{assumption-strong-compl}) is satisfied. In fact, by the assumption, $\underline h_p\le 6p c \tilde V$. Take $\tilde V=1+ \ln(1+|V|)$; then for $p\ge 1$, $${\begin{split} {\mathbf E}\left(\exp{\left(6p^2 \int_0^{t}\tilde V(\Phi_s(y)) ds\right)}\right)\le C(t,p) +C(t,p) \left( V ^{C(t,p)}(y)\right)<\infty. \end{split}} $$ This proves part (1). For part (2) let $f\in C^1$. Then $y\mapsto f(\Phi_t(y,\omega))$ is differentiable for almost every $\omega$. Let $\sigma: [0,t_0]\to M$ be a geodesic segment with $\sigma(0)=y$. Then $${ f(\Phi_t(\sigma_s, \omega))-f(\Phi_t(y, \omega))\over s} ={1\over s}\int_0^s {d\over dr} f\left(\Phi_t(\sigma_r, \omega)\right)dr.$$ Since ${\mathbf E}|d\Phi_t(y)|^2$ is locally bounded in $y$, $r\mapsto {\mathbf E}|d\Phi_t(\sigma_r, \omega)|$ is continuous and the expectation of the right hand side converges to ${\mathbf E} df(d\Phi_t(\dot \sigma(0)))$. The left hand side clearly converges almost surely. Since ${\mathbf E} |df (d\Phi_t(y))|^2$ is locally bounded, the convergence is in $L^1$. This proves that $d(P_tf)=\delta P_t(df)$. Furthermore, suppose that $|df|\le K+K V^q$. Then \begin{equation*} {\begin{split} |d(P_tf)|_y&\le \sqrt{ {\mathbf E} \left(|df|_{\Phi_t(y)}\right)^2} \sqrt{{\mathbf E} |d\Phi_t|_y^2} \\ &\le \sqrt{ 2K^2+2K^2 {\mathbf E} V^{2q}(\Phi_t(y))} \sqrt{ c(p) C(t,p) +c(p) C(t,p) \left( V ^{C(t,p)}(y)\right)}. \end{split}} \end{equation*} The latter, as a function of $y$, belongs to $B_{V,0}$. We proceed to part (3a). Let $v,w \in T_yM$ and $U_t:=\nabla d\Phi_t(w, v)$. Then $U_t$ satisfies the following equation: $${\begin{split} DU_t=&\sum_{k=1}^m\nabla^{(2)} Y_k (d\Phi_t(v), d\Phi_t(w))\circ dB_t^k+\sum_{k=1}^m\nabla Y_k(U_t)\circ dB_t^k\\ &+\nabla^{(2)} Y_0 (d\Phi_t(v), d\Phi_t(w))dt+\nabla Y_0(U_t)dt.
\end{split}}$$ It follows that, $${\begin{split} {d} |U_t|^2=& 2\sum_{k=1}^m \left< \nabla^{(2)} Y_k (d\Phi_t(v), d\Phi_t(w))\circ dB_t^k+\nabla^{(2)} Y_0 (d\Phi_t(v), d\Phi_t(w))dt, U_t\right\>\\ & +2\left\<\sum_{k=1}^m\nabla Y_k(U_t)\circ dB_t^k+\nabla Y_0(U_t)dt, U_t\right\>. \end{split}}$$ To the first term on the right hand side we apply the Cauchy--Schwarz inequality to separate the two factors in each inner product; this gives $C|U_t|^2$ plus terms that do not involve $U_t$. The Stratonovich corrections produce the extra derivative $\nabla^{(3)} Y_k$, which does not involve $U_t$. The second term on the right hand side contains a sum of the form $\sum_{k=1}^m \< \nabla Y_k(U_t), U_t\>dB_t^k$, for which only a bound on $|\nabla Y_k|$ is required, and the drift term $$\left \< \sum_{k=1}^m \nabla^{(2)} Y_k(Y_k, U_t)+\nabla Y_0(U_t), U_t\right\> =\< \nabla Z(U_t), U_t\> -\left \< \sum_{k=1}^m \nabla Y_k(\nabla _{U_t} Y_k), U_t\right\>.$$ The second term is bounded by $$\left| \sum_{k=1}^m\left \< \nabla Y_k(\nabla _{U_t} Y_k), U_t\right\>\right| \le \sum_{k=1}^m |\nabla Y_k|^2 |U_t|^2.$$ By the assumption, there exist $c>0$ and $q\ge 1$ such that, for every $k=1, \dots, m$, $$|\nabla Y_k|^2 \le c\tilde V , \quad |\nabla^{(2)} Y_k|\le c+cV^q, \quad |\nabla^{(3)} Y_k|\le c+cV^q, \quad \<\nabla _u Z, u\>\le c\tilde V|u|^2.$$ There is a stochastic process $I_s$, which does not involve $U_t$, and constants $C,q$ such that $${\mathbf E}|U_t|^2 \le {\mathbf E}|U_0|^2+\int_0^t {\mathbf E} I_r dr+\int_0^t C{\mathbf E}\tilde V^q(y_r)|U_r|^2dr.$$ By induction $I_r$ has moments of all orders, which are bounded on compact intervals. By Gronwall's inequality, for $t\le T$, $${\mathbf E}|U_t|^2\le \left( {\mathbf E}|U_0|^2+\int_0^T {\mathbf E} I_r dr\right)\exp{\left(C \int_0^t \tilde V^q(y_r)dr\right)}.$$ To obtain the supremum inside the expectation, we simply use Doob's $L^p$ inequality before taking expectations.
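The Gronwall step just used has an elementary discrete counterpart that is easy to verify numerically; the sketch below (illustrative only, not part of the proof) checks that a sequence with $a_n\le A+\sum_{k<n}c_k a_k$, $c_k\ge 0$, satisfies $a_n\le A\prod_{k<n}(1+c_k)\le A\exp\left(\sum_k c_k\right)$:

```python
import math

# Discrete Gronwall inequality: a_n <= A + sum_{k<n} c_k a_k  implies
# a_n <= A * prod_{k<n} (1 + c_k) <= A * exp(sum_k c_k).
def gronwall_bound(A, c):
    prod = 1.0
    for ck in c:
        prod *= 1.0 + ck
    return A * prod

A = 2.0
c = [0.3, 0.1, 0.25, 0.05]

# The sequence saturating the recursion: a_n = A + sum_{k<n} c_k a_k.
a = []
for n in range(len(c) + 1):
    a.append(A + sum(c[k] * a[k] for k in range(n)))

for n in range(len(a)):
    assert a[n] <= gronwall_bound(A, c[:n]) + 1e-12   # sharp product bound
assert gronwall_bound(A, c) <= A * math.exp(sum(c))   # exponential bound
```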
With the argument in the proof of part (1) we conclude that ${\mathbf E}\sup_{s\le t} |\nabla d\Phi_s|^2(y)$ is finite and belongs to $B_{V,0}$. {\it Part (3b).} Let $f\in B_{V, 2}$. By part (2), $d(P_tf)={\mathbf E} df(d\Phi_t(y))$. Let $u_1, u_2\in T_yM$. By an argument analogous to that of part (2), we may differentiate the right hand side under the expectation to obtain that $$(\nabla dP_tf)(u_1,u_2)={\mathbf E} \nabla df (d\Phi_t(u_1), d\Phi_t(u_2))+{\mathbf E} df (\nabla _{u_1} d\Phi_t(u_2)).$$ Hence $P_tf\in B_{V,2}$. This procedure can be iterated. {\it Part (3c).} By It\^o's formula, $$f(y_t)=f(y_s)+\sum_{k=1}^m\int_s^t df(Y_k(y_r))dB_r^k+\int_s^t {\mathcal L} f(y_r)dr.$$ Since $df(Y_k)\in B_{V,0}$, the expectations of the stochastic integrals with respect to the Brownian motions vanish. Since $ {\mathcal L} f\in B_{V,0}$ by Lemma \ref{Lf-L2f}, ${\mathcal L} f(y_r)$ is bounded in $L^2$. It follows that the function $r\mapsto {\mathbf E}{\mathcal L} f (y_r)$ is continuous, $$\lim_{t\to s} {{\mathbf E} f(y_t)-{\mathbf E} f(y_s)\over t-s}={\mathbf E} {\mathcal L} f(y_s),$$ and we obtain Kolmogorov's backward equation, ${\partial \over \partial s}P_sf =P_s ({\mathcal L} f)$. Since $P_sf \in B_{V,2}$, we apply the above argument to $P_sf$, take $t$ to zero in ${P_t(P_sf)-P_sf\over t}$, and obtain that ${\partial \over \partial s} P_sf = {\mathcal L} (P_sf)$. This leads to the required statement ${\mathcal L} P_sf =P_s {\mathcal L} f$. {\it Part (4).} For higher order derivatives of $\Phi_t$ we simply iterate the above procedure and note that the linear terms in the equation for $d|\nabla^{(k-1)}d\Phi_t(u_1, \dots, u_k)|^2$ are always of the same form. \end{proof} \begin{remark} With the assumption of part (3), we can show that for all integers $p$, ${\mathbf E}\sup_{s\le t}|\nabla d\Phi_s|_{y}^p\in B_{V,0}$.
\end{remark} If we assume the additional conditions that $$ |\nabla Y_0| \le c\tilde V, \quad \sum_{k=1}^m |\nabla^{(2)} Y_k||Y_k|\le c\tilde V,$$ the conclusion of the remark follows more easily. With the assumptions of part (4) we need to work a bit more, which we illustrate below. Let $U_t=\nabla d\Phi_t(w,v)$. Instead of writing down all the terms in the equation for $|U_t|^p$, we classify them into two classes: those involving $U_t$ and those not. For the first class we must assume that they are bounded by $c\tilde V$ for some $c$. For the second class we may use induction, and hence it is sufficient to assume that they belong to $B_{V,0}$. The terms involving $U_t$ are: $$\nabla Y_k(U_t), \quad \sum_{k=1}^m\nabla^{(2)} Y_k(Y_k, U_t)+\nabla Y_0(U_t).$$ The essential identity to use is: $$\sum_{k=1}^m\nabla^{(2)} Y_k(Y_k, U_t)+\nabla Y_0(U_t)=\nabla Z(U_t)-\sum_{k=1}^m \nabla Y_k(\nabla Y_k (U_t)).$$ We do not need to assume that $|\nabla^{(2)} Y_k||Y_k|\le c\tilde V$; it is sufficient to assume the corresponding bounds on $|\nabla Y_k|^2$ and on $\nabla Z$, for all $k=1,\dots, m$. With a bit of care, we check that only one sided derivatives of $Z$ are involved. For example we can reduce to the $p=2$ case: $$d|U_t|^p={p\over 2} |U_t|^{p-2} \circ d|U_t|^2={p\over 2} |U_t|^{p-2}\, d|U_t|^2 + {p(p-2) \over 8} |U_t|^{p-4}\, d\< |U_t|^2\>.$$ By the first term ${p\over 2} |U_t|^{p-2}\, d|U_t|^2 $ we mean that in place of $d|U_t|^2$ we plug in all the terms on the right hand side of the equation for $d|U_t|^2$, after formally converting the integrals to It\^o form. By $ d\< |U_t|^2\>$ we mean the bracket of the martingale term on the right hand side of $d|U_t|^2$. It is now easy to check that in all the terms involving $U_t$, higher order derivatives of $Y_k$ do not appear, except in the form of $|U_t|^{p-2}\<\nabla_{U_t} Z, U_t\>$. \begin{remark} Assume the SDE is complete.
Suppose that for some positive number $C$, $$\sum_{j=1}^m \sum_{\alpha=0}^{5} |\nabla ^{(\alpha)}Y_j|\le C, \quad \sum_{\alpha=0}^{4} |\nabla^{(\alpha)} Y_0|\le C.$$ Then for all $p\ge 1$, there exists a constant $C(t,p)$ such that $$ {\mathbf E}\left(\sup_{s\le t}|d\Phi_s(x)|^p\right)\le C(t,p).$$ Furthermore the statements in Theorem \ref{limit-thm} hold for $r\le 4$.\end{remark} Recall that $|f|_r=\sum_{k=1}^{r} |\nabla^{(k-1)} d f|$ and $|f|_{r,\infty}=\sum_{k=1}^{r} |\nabla^{(k-1)}df|_\infty$. \begin{lemma} \label{semigroup-estimate-2} Suppose that Assumption \ref{Y-condition} holds and that $$ \sum_{\alpha=0}^{4}|\nabla ^{(\alpha)}Y_0| \in B_{V, 0}, \quad \sum_{\alpha=0}^{5} \sum_{k=1}^m|\nabla^{(\alpha)} Y_k|\in B_{ V,0}.$$ Then there exist constants $q_1, q_2\ge 1$, constants $c_1$ and $c_2$ depending on $t$ and $f$ and locally bounded in $t$, functions $\gamma_i \in B_{V,0}$, and polynomials $\lambda_{q_i}$, such that for $s\le t$, \begin{equation*} {\begin{split} |P_tf(y_0)-P_sf (y_0)| & \le (t-s) c_1\left( 1+\lambda_{q_1}(V(y_0))\right), \quad f\in B_{V,2}\\ \left|P_tf(y_0)-P_sf(y_0)-(t-s) P_s( {\mathcal L} f) (y_0)\right| &\le (t-s)^2 c_2 \left( 1+\lambda_{q_2}(V(y_0))\right), \quad f\in B_{V,4}\\ |P_tf(y_0)-P_sf (y_0)| &\le (t-s) \left(1+|f|_{2, \infty}\right)\gamma_1(y_0), \quad \forall f \in BC^2 \\ \left|P_tf(y_0)-P_sf(y_0)-(t-s) P_s( {\mathcal L} f) (y_0)\right| &\le (t-s)^2\left(1+|f|_{4, \infty}\right)\gamma_2(y_0), \quad \forall f \in BC^4. \end{split}} \end{equation*} \end{lemma} \begin{proof} Denote $y_t=\Phi_t(y_0)$, the solution to (\ref{limit.sde}). Then for $f\in C^2$, $$P_tf(y_0)=P_sf (y_0)+\int_s^t P_r ({\mathcal L} f)(y_0 )dr+\sum_{k=1}^{m} {\mathbf E}\left( \int_s^t df(Y_k(y_r)) dB_r^k\right).$$ Since $|L_{Y_k} f| \le |df|\, |Y_k|$ and $|df|$, $|Y_k|$ belong to $ B_{V,0}$, by Assumption \ref{Y-condition}(i), $ \int_0^t {\mathbf E} |L_{Y_k}f|_{y_r}^2 dr $ is finite and the last term vanishes. Hence $ |P_tf(y_0)-P_sf (y_0)|\le \int_s^t P_{s_2} ( |{\mathcal L} f|)(y_0)ds_2$.
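The two expansions in the lemma can be checked in a completely explicit example (not part of the proof): for Brownian motion on ${\mathbb R}$, ${\mathcal L}={1\over 2}{d^2\over dx^2}$ and $f(x)=x^4$, one has $P_tf(x)=x^4+6x^2t+3t^2$ and ${\mathcal L}f(x)=6x^2$, so that $P_tf-P_sf-(t-s)P_s{\mathcal L}f=3(t-s)^2$, exactly of the order $(t-s)^2$ claimed:

```python
# Explicit check of the first- and second-order semigroup expansions for
# Brownian motion on R with f(x) = x^4 (illustrative only).
def P(t, x):
    # heat semigroup: E f(x + B_t) with f(x) = x^4
    return x**4 + 6 * x**2 * t + 3 * t**2

def P_Lf(t, x):
    # P_t applied to Lf(x) = 6 x^2:  E 6 (x + B_t)^2 = 6 x^2 + 6 t
    return 6 * x**2 + 6 * t

for x in (0.0, 1.0, -2.5):
    for s, t in ((0.1, 0.3), (1.0, 1.5)):
        first = P(t, x) - P(s, x)                    # of order (t - s)
        second = first - (t - s) * P_Lf(s, x)        # of order (t - s)^2
        assert abs(second - 3 * (t - s) ** 2) < 1e-9
```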
By Lemma \ref{Lf-L2f}, ${\mathcal L} f\in B_{V,0}$ if $f\in B_{V,2}$. Let $K, q_1$ be s.t. $|{\mathcal L} f|\le K+KV^{q_1}$. Then $$ \int_s^r |P_{s_2} ( {\mathcal L} f)(y_0)|ds_2 \le \int_s^r \left(K+K{\mathbf E} V^{q_1}(\Phi_{s_2}(y_0)) \right)ds_2.$$ By the assumption, we see easily that $\sum_{\alpha=0}^3 |\nabla^{(\alpha)} Z|\in B_{V,0}$. By Assumption \ref{Y-condition}, $\sup_{s\le t}{\mathbf E}(|V(\Phi_s(y_0))|^{q_1})\le C_{q_1}(t)+C_{q_1}(t) \lambda_{q_1}( V(y_0))$ and the first conclusion holds. We repeat this procedure for $f\in C^4$ to obtain: $${\begin{split} &P_tf(y_0)-P_sf (y_0)\\ &=\int_s^t \left( P_s ({\mathcal L} f)(y_0 ) +\int_s^{s_1} P_{s_2} ( {\mathcal L}^2 f)(y_0)ds_2 +\sum_{k=1}^{m} {\mathbf E} \int_s^{s_1} \left( L_{Y_k} ({\mathcal L} f) \right)(y_{s_2}) dB_{s_2}^k \right)ds_1.\end{split}}$$ The last term also vanishes, as every term in $L_{Y_k} {\mathcal L} f $ belongs to $B_{V,0}$. Indeed, by (\ref{generator-0}), $${\begin{split} L_{Y_k} {\mathcal L} f ={1\over 2}\sum_i \nabla^{(2)} df(Y_k, Y_i, Y_i)+ \sum_i \nabla df\left( \nabla_{Y_k}Y_i, Y_i\right)+ \nabla df \left(Y_k, Z\right)+ df(\nabla_{Y_k} Z). \end{split}} $$ This gives, for all $f \in B_{V,4}$, \begin{equation}\label{type1} \left|P_tf(y_0)-P_sf(y_0)-(t-s) P_s( {\mathcal L} f) (y_0)\right| \le \left| \int_s^t \int_s^{s_1} P_{s_2} ({\mathcal L}^2 f)(y_0)ds_2 ds_1 \right|. \end{equation} Let $q_2, K$ be numbers such that $|{\mathcal L}^2 f| \le K+K V^{q_2}$. Then, $$ \sup_{s\le t} P_{s} (|{\mathcal L}^2 f|)(y_0)\le K+K\sup_{s\le t}{\mathbf E}\left( V(y_s)\right)^{q_2} \le K+KC_{q_2}(t)+KC_{q_2}(t) \lambda_{q_2}(V(y_0)). $$ Consequently, there exists a constant $c_2(t, K, q_2)$ s.t. $$ \left|P_tf(y_0)-P_sf(y_0)-(t-s) P_s( {\mathcal L} f) (y_0)\right| \le (t-s)^2 c_2(t, K, q_2) (1+ \lambda_{q_2}(V(y_0))), $$ completing the proof for $f\in B_{V,2}$ and $f\in B_{V,4}$. Next suppose that $f\in BC^2$.
By Lemma \ref{Lf-L2f}, $| {\mathcal L} f|\le |f|_{2, \infty} F_1$, and $|{\mathcal L}^2 f| \le |f|_{4, \infty} F_2$ if $f\in BC^4$. Here $F_1, F_2\in B_{V,0}$ and do not depend on $f$. We iterate the argument above to complete the proof for $f\in BC^2$ and $f\in BC^4$. \end{proof} \section{Rate of Convergence} \label{section-rate} If ${\mathcal L}_0$ has a unique invariant probability measure $\pi$ and $f\in L^1(G, d\pi)$, denote $\bar f=\int_G f\,d\pi$. Let $\bar {\mathcal L}=-\sum_{i,j=1}^m \overline{\alpha_i \beta_j} L_{Y_i}L_{Y_j} $. Let $\{\sigma_k^i, i,k=1,\dots, m\}$ be the entries in a square root of the matrix $(\overline{-\alpha_i \beta_j})$. They are constants and satisfy $\sum_{k=1}^m \sigma_k^i\sigma_k^j=\overline{-\alpha_i \beta_j}$. Let us consider the SDE: \begin{equation}\label{limit.sde-2} dy_t=\sum_{k=1}^{m} \left( \sum_{i=1}^{m} \sigma_k^i Y_i(y_t) \right)\circ dB_t^k, \end{equation} where $\{B_t^k\}$ are independent one dimensional Brownian motions. Let $$\tilde Y_k= \sum_{i=1}^{m} \sigma_k^i Y_i, \quad \tilde Z=\sum_{i,j=1}^m\overline{ -\alpha_i\beta_j}\nabla_{Y_i}Y_j.$$ The results from Section \ref{section-sde} apply. Recall that ${\mathcal L}_0={1\over 2}\sum_{i=1}^{m'} L_{X_i}L_{X_i}+L_{X_0}$ and $(z_t^\epsilon)$ are ${\mathcal L}^\epsilon={1\over \epsilon} {\mathcal L}_0$ diffusions. Let $\Phi^\epsilon_t(y)$ be the solution to the SDE (\ref{1}): $\dot y_t^\epsilon =\sum_{k=1}^m \alpha_k(z_t^\epsilon)Y_k(y_t^\epsilon)$ with initial value $y$. \begin{assumption} \label{assumption-on-rate-result} $G$ is compact, $Y_0\in C^5(\Gamma TM)$, and $Y_k\in C^6(\Gamma TM)$ for $k=1, \dots, m$. Conditions (1)-(5) below hold, or Conditions (1), (2') and (3)-(5) hold. \begin{enumerate} \item [(1)] The SDEs (\ref{limit.sde-2}) and (\ref{sde-3}) are complete. \item [(2)] $V\in {\bf \mathcal B}(M;{\mathbb R}_+)$ is a locally bounded function and $\epsilon_0$ a positive number s.t.
for all $q\ge 1$ and $T>0$, there exist a locally bounded function $C_q: {\mathbb R}_+\to {\mathbb R}_+$ and a real valued polynomial $\lambda_q$ such that for $0\le s\le t\le T$ and for all $\epsilon\le\epsilon_0$, \begin{equation} \label{moment-assumption-2} \sup_{s\le u \le t} {\mathbf E}\left\{ V^q(\Phi_{u\over \epsilon}^\epsilon(y)) \; \big| {\mathcal F}_{s\over \epsilon} \right\} \le C_q(t)+C_q(t) \lambda_q\left( V(\Phi_{s\over \epsilon}^\epsilon(y))\right). \end{equation} \item [(2')] There exists a function $V\in C^3(M; {\mathbb R}_+) $ s.t. for all $i,j\in \{1, \dots, m\}$, $|L_{Y_i}L_{Y_j} V|\le c+KV$ and $|L_{Y_j} V| \le c+KV$. \item[(3)] For $V$ defined above, let $\tilde V=1+ \ln(1+|V|)$. Suppose that $$ {\begin{split} &\sum_{\alpha=0}^{4}|\nabla ^{(\alpha)}Y_0| \in B_{V, 0}, \quad \sum_{\alpha=0}^{5} \sum_{k=1}^m|\nabla^{(\alpha)} Y_k|\in B_{ V,0}, \\ & \sum_{j=1}^m|\nabla Y_j| ^2\le c\tilde V, \quad \sup_{|u|=1} \<\nabla \tilde Z(u), u\>\le c\tilde V. \end{split}}$$ \item [(4)] ${\mathcal L}_0$ satisfies H\"ormander's condition and has a unique invariant measure $\pi$ satisfying Assumption \ref{assumption1}. \item [(5)] $\alpha_k\in C^3(G; {\mathbb R})\cap N^\perp$. \end{enumerate} \end{assumption} We emphasize the following: \begin{remark} \label{rate-remark-1} \begin{itemize} \item [(a)] If $V$ in (2') is a pre-Lyapunov function, then (\ref{sde-3}) is complete. Furthermore $|\bar {\mathcal L} V|\le c+KV$, so (\ref{limit.sde-2}) is complete. \item [(b)] Under conditions (1), (2') and (4)-(5), condition (2) holds; see Theorem \ref{uniform-estimates}. Also Corollary \ref{corollary-to-lemma3-2} holds. Conditions (1)-(5) imply the conclusions of Theorem \ref{limit-thm}. \item[(c)] If ${\mathcal L}_0$ satisfies the strong H\"ormander condition, then condition (4) is satisfied. \end{itemize} \end{remark} Let $P_t^\epsilon$ be the probability semigroup associated with $(y_{t}^\epsilon)$ and $P_t$ the Markov semigroup for $\bar {\mathcal L}$.
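The coefficients $\sigma^i_k$ introduced above are any square root of the symmetric positive semi-definite matrix $(\overline{-\alpha_i\beta_j})$; numerically one may take the symmetric square root obtained from a spectral decomposition. A minimal sketch, with a hypothetical $2\times 2$ matrix standing in for the averaged one:

```python
import numpy as np

# Hypothetical stand-in for the averaged matrix A_{ij} = -bar(alpha_i beta_j);
# assumed symmetric positive semi-definite.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Symmetric square root via the spectral decomposition A = Q diag(w) Q^T.
w, Q = np.linalg.eigh(A)
sigma = Q @ np.diag(np.sqrt(w)) @ Q.T   # sigma[i, k] plays the role of sigma^i_k

# Defining property: sum_k sigma^i_k sigma^j_k = A_{ij}.
assert np.allclose(sigma @ sigma.T, A)
```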
Recall that $|f|_{r,\infty}=\sum_{j=1}^{r} |\nabla^{(j-1)}df|_\infty$. We recall also that an operator ${\mathcal L}_0$ on a compact manifold $G$ satisfying the strong H\"ormander condition has an exponential mixing rate, so that ${\mathcal L}_0$ satisfies Assumption \ref{assumption1}. \begin{theorem} \label{rate} Assume that $Y_k, \alpha_k$ and $ {\mathcal L}_0$ satisfy Assumption \ref{assumption-on-rate-result}. For every $f \in B_{V,4}$, $$\left|{\mathbf E} f\left(\Phi^\epsilon_{T\over \epsilon}(y_0)\right) -P_Tf(y_0)\right|\le \epsilon |\log \epsilon|^{1\over 2}C(T)\gamma_1(y_0),$$ where $\gamma_1\in B_{V,0}$ and $C(T)$ is a constant increasing in $T$. Similarly, if $f\in BC^4$, $$\left|{\mathbf E} f\left(\Phi^\epsilon_{T\over \epsilon}(y_0)\right) -P_Tf(y_0)\right| \le \epsilon |\log \epsilon|^{1\over 2}\,C(T)\gamma_2(y_0) \left(1+ |f|_{4, \infty}\right), $$ where $\gamma_2$ is a function in $B_{V,0}$ that does not depend on $f$ and $C(T)$ is a constant increasing in $T$. \end{theorem} \begin{proof} {\it Step 1.} To obtain optimal estimates we work on intervals of order $\epsilon$, c.f. Lemma \ref{lemma3}. Let $t_0=0<t_1<\dots<t_N=T$ be a partition of $[0,T]$ with $\Delta t_k=t_k-t_{k-1}=\epsilon$ for $2\le k\le N$ and $t_1\le \epsilon$. Write $y_t^\epsilon=\Phi_t^\epsilon(y_0)$. Then, \begin{equation*} {\begin{split} & f\left(y^\epsilon_{T\over \epsilon}\right)-P_T f(y_0) = \sum_{k=1}^N\left( P_{T-t_k}f(y_{t_k\over \epsilon}^\epsilon) - P_{T-t_{k-1}}f(y_{t_{k-1}\over \epsilon}^\epsilon) \right)\\ &= \sum_{k=1}^N \left( P_{T-t_k}f(y_{t_k \over \epsilon}^\epsilon) - P_{T-t_{k}} f(y_{t_{k-1} \over \epsilon}^\epsilon) - \Delta t_k \left( P_{T-t_{k-1}} \bar {\mathcal L} f \right) (y_{t_{k-1}\over \epsilon}^\epsilon) \right)\\ &+ \sum_{k=1}^N \left( P_{T-t_k}f (y_{t_{k-1}\over \epsilon}^\epsilon) - P_{T-t_{k-1}}f(y_{t_{k-1}\over \epsilon}^\epsilon ) + \Delta t_k \left(P_{T-t_{k-1}} \bar {\mathcal L} f \right)(y_{t_{k-1}\over \epsilon}^\epsilon)\right). \end{split}} \end{equation*} Define \begin{equation*} {\begin{split} I_k^\epsilon&=P_{T-t_k}f(y_{t_k \over \epsilon}^\epsilon) - P_{T-t_{k}} f(y_{t_{k-1} \over \epsilon}^\epsilon) - \Delta t_k \left( P_{T-t_{k-1}} \bar {\mathcal L} f \right)(y_{t_{k-1}\over \epsilon}^\epsilon),\\ J_k^\epsilon&= P_{T-t_k}f - P_{T-t_{k-1}}f +\Delta t_k P_{T-t_{k-1}} \bar {\mathcal L} f. \end{split}} \end{equation*} Since $f\in B_{V,4}$, Lemma \ref{semigroup-estimate-2} applies and we obtain the desired estimate on the second term: \begin{equation*} \left| J_k^\epsilon(y_{t_{k-1}\over \epsilon}^\epsilon) \right| \le(\Delta t_k)^2\, \tilde c_2(T,f)\left( 1+ \lambda_{q_2} \left(V(y_{t_{k-1}\over \epsilon}^\epsilon)\right)\right), \end{equation*} where $\tilde c_2(T,f)$ is a constant and $\lambda_{q_2}$ a polynomial. Let $K, q$ be constants such that $\lambda_{q_2}(V)\le K+KV^q$. We apply (\ref{moment-assumption-2}) from Assumption \ref{assumption-on-rate-result} to see that for some constant $C_q(T)$, $$ {\mathbf E}\left(\lambda_{q_2} \left(V(y_{t_{k-1}\over \epsilon}^\epsilon)\right)\right)\le K+KC_{q}(T)+KC_{q}(T) \lambda_{q} (V(y_0)).$$ Since $\Delta t_k\le \epsilon$ and $N\sim {1\over \epsilon}$, \begin{equation}\label{rate-1} \sum_{k=1}^N {\mathbf E} \left| J_k^\epsilon (y_{t_{k-1}\over \epsilon}^\epsilon) \right| \le \epsilon \tilde c_2(T,f)(K+1)\left( 1+C_{q}(T)+C_{q}(T) \lambda_{q} (V(y_0))\right). \end{equation} If $f$ belongs to $ BC^4$, we apply Lemma \ref{semigroup-estimate-2} to see that there exists a function $F\in B_{V,0}$, independent of $f$, s.t.
$$\left| J_k^\epsilon(y_{t_{k-1}\over \epsilon}^\epsilon) \right| \le (\Delta t_k)^2\left(1+|f|_{4,\infty}\right) F(y_{t_{k-1}\over \epsilon}^\epsilon).$$ Hence \begin{equation} \label{rate1-2} \sum_{k=1}^N {\mathbf E} \left| J_k^\epsilon(y_{t_{k-1}\over \epsilon}^\epsilon) \right| \le \epsilon\left(1+|f|_{4,\infty}\right) {\mathbf E}\left(F(y_{t_{k-1}\over \epsilon}^\epsilon)\right). \end{equation} The rest of the proof proceeds just as in the case $f\in B_{V,4}$. {\it Step 2.} Let $0\le s <t$. By part (3) of Theorem \ref{limit-thm}, $\bar {\mathcal L} P_tf=P_t \bar{\mathcal L} f$ for any $t>0$; in particular $P_{T-t_{k}} \bar {\mathcal L} f =\bar {\mathcal L} P_{T-t_k}f$. We will approximate $P_{T-t_{k-1}} \bar {\mathcal L} f $ by $P_{T-t_{k}} \bar {\mathcal L} f$ and estimate the error $$\sum_{k=1}^N \Delta t_k \left( P_{T-t_{k}} \bar {\mathcal L} f -P_{T-t_{k-1}} \bar {\mathcal L} f \right)(y_{t_{k-1}\over \epsilon}^\epsilon).$$ By Lemma \ref{Lf-L2f}, $\bar {\mathcal L} f\in B_{V,2}$, so we may apply Lemma \ref{semigroup-estimate-2} to $\bar {\mathcal L} f$: \begin{equation*} |P_{T-t_{k}}\bar {\mathcal L} f(y_0)-P_{T-t_{k-1}} \bar {\mathcal L} f (y_0)| \le \Delta t_k \tilde c_1(T)\left( 1+\lambda_{q_1}(V(y_0))\right). \end{equation*} Recall that $\lambda_{q_1} (V)\in B_{V,0}$. Summing over $k$ and taking expectations in the above inequality, we obtain \begin{equation} \label{rate-2} \sum_{k=1}^N \Delta t_k\, {\mathbf E}\left| P_{T-t_{k}} \bar {\mathcal L} f (y_{t_{k-1}\over \epsilon}^\epsilon)-P_{T-t_{k-1}} \bar {\mathcal L} f(y_{t_{k-1}\over \epsilon}^\epsilon)\right| \le \epsilon c_1(T)\left( 1+\lambda_{q_1}(V(y_0))\right). \end{equation} If $f\in BC^2$, ${\mathcal L} f\in BC^2$. By Lemma \ref{semigroup-estimate-2}, there exist a constant $C(T)$ and a function $\gamma_1 \in B_{V,0}$, independent of $f$, s.t.
$$|P_tf(y_0)-P_sf (y_0)| \le (t-s) \left(1+|f|_{2,\infty}\right)\gamma_1(y_0).$$ Thus for $f\in BC^2$, \begin{equation} \label{rate-2-2} \sum_{k=1}^N \Delta t_k\, {\mathbf E}\left| P_{T-t_{k}} \bar {\mathcal L} f (y_{t_{k-1}\over \epsilon}^\epsilon)-P_{T-t_{k-1}} \bar {\mathcal L} f(y_{t_{k-1}\over \epsilon}^\epsilon)\right| \le 2\epsilon |f|_{2,\infty}( 1+\gamma_1(y_0)). \end{equation} Finally, instead of estimating $I_k^\epsilon$, we estimate \begin{equation*} \label{estimate-eq-10} D_k^\epsilon:=P_{T-t_k}f(y_{t_k \over \epsilon}^\epsilon) - P_{T-t_{k}} f(y_{t_{k-1}\over \epsilon}^\epsilon) + \Delta t_k \left(P_{T-t_{k}} \bar {\mathcal L} f\right) (y_{t_{k-1}\over \epsilon}^\epsilon). \end{equation*} {\it Step 3.} If $f \in B_{V,4}$, then by Theorem \ref{limit-thm}, $P_tf\in B_{V,4}$ for any $t$. Since $\alpha_k\in N^\perp\cap C^3$, we may apply Lemma \ref{lemma5} to $P_{T-t_k}f$ and obtain the following formula for $D_k^\epsilon$: \begin{equation*} {\begin{split} & D_k^\epsilon=P_{T-t_k}f(y^\epsilon_{{t_k}\over \epsilon}) - P_{T-t_k}f(y^\epsilon_{t_{k-1}\over \epsilon}) + \Delta t_k P_{T-t_{k}}\bar {\mathcal L} f(y_{t_{k-1}\over \epsilon}^\epsilon)\\ &=\epsilon \sum_{j=1}^m \left( dP_{T-t_k}f(Y_j(y^\epsilon_{{t_k}\over \epsilon} ) )\beta_j(z^\epsilon_{{t_k}\over \epsilon}) - dP_{T-t_k}f(Y_j(y^\epsilon_{t_{k-1}\over \epsilon} ))\beta_j( z^\epsilon_{t_{k-1}\over \epsilon})\right)\\ &+ \Delta t_k P_{T-t_{k}}\bar {\mathcal L} f(y_{t_{k-1}\over \epsilon}^\epsilon) -\epsilon \sum_{i,j=1}^m\int_{t_{k-1}\over \epsilon}^{{t_k}\over \epsilon} \left( L_{Y_i}L_{Y_j} P_{T-t_k}f(y^\epsilon_r)\right) \alpha_i(z^\epsilon_r)\;\beta_j(z^\epsilon_r)\; dr\\ &- \sqrt \epsilon \sum_{j=1}^m\sum_{k'=1}^{m'} \int_{t_{k-1}\over \epsilon}^{{t_k}\over \epsilon} d P_{T-t_k}f( Y_j(y^\epsilon_r)) \; d\beta_j( X_{k'}(z^\epsilon_r)) \;dW_r^{k'}. \end{split}}\end{equation*} Since $Y_0, Y_k\in B_{V,0}$, we have $L_{Y_i} L_{Y_j} P_{T-t_k}f\in B_{V,0}$, by the same argument as in the proof of Lemma \ref{Lf-L2f}.
In particular, for each $0<\epsilon\le \epsilon_0$, $$\int_0^{t\over \epsilon}{\mathbf E} \left(\left| L_{Y_i} L_{Y_j} P_{T-t_k}f(y_r^\epsilon)\right|^2\right) dr <\infty,$$ so the expectation of the martingale term in the above formula vanishes. For $j=1,\dots, m$ and $k=1, \dots, N$, let \begin{equation*} {\begin{split} A_{jk}^\epsilon &= dP_{T-t_k}f \left( Y_j(y^\epsilon_{t_k\over \epsilon} )\right) \beta_j(z^\epsilon_{ t_k \over \epsilon}) - dP_{T-t_k}f\left(Y_j(y^\epsilon_{t_{k-1}\over \epsilon} )\right) \beta_j( z^\epsilon_{t_{k-1}\over \epsilon}),\\ B_{k}^\epsilon&=\Delta t_k (P_{T-t_{k}}\bar {\mathcal L} f)(y_{t_{k-1}\over \epsilon}^\epsilon)-\epsilon \sum_{i,j=1}^m\int_{t_{k-1}\over \epsilon}^{{t_k}\over \epsilon} \left( L_{Y_i}L_{Y_j} P_{T-t_k}f\right)(y^\epsilon_r) \alpha_i(z^\epsilon_r)\;\beta_j(z^\epsilon_r) \; dr. \end{split}} \end{equation*} {\it Step 4. } We recall that $\bar {\mathcal L} P_{T-t_k}f=\sum_{i,j=1}^m \overline{\alpha_i\beta_j}\, L_{Y_i}L_{Y_j}P_{T-t_k}f$. By Theorem \ref{limit-thm}, $L_{Y_i} L_{Y_j} P_{T-t_k}f$ is $C^2$. Furthermore, by Assumption \ref{assumption1}, the diffusion $(z_t^\epsilon)$ has an exponential mixing rate.
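Since $P_{T-t_k}\bar {\mathcal L} f=\bar {\mathcal L} P_{T-t_k}f=\sum_{i,j=1}^m \overline{\alpha_i\beta_j}\, L_{Y_i}L_{Y_j}P_{T-t_k}f$ and $\epsilon\int_{t_{k-1}/\epsilon}^{t_k/\epsilon}dr=\Delta t_k$, the term $B_k^\epsilon$ is exactly a comparison between the spatial average $\overline{\alpha_i\beta_j}$ and the time average of $\alpha_i\beta_j$ along the fast variable:

```latex
B_k^\epsilon=\Delta t_k\sum_{i,j=1}^m\left( \overline{\alpha_i\beta_j}\; L_{Y_i}L_{Y_j}P_{T-t_k}f\left(y^\epsilon_{t_{k-1}\over\epsilon}\right) -{\epsilon\over\Delta t_k}\int_{t_{k-1}\over\epsilon}^{t_k\over\epsilon} \left(L_{Y_i}L_{Y_j}P_{T-t_k}f\right)(y^\epsilon_r)\, (\alpha_i\beta_j)(z^\epsilon_r)\,dr\right).
```

This is the quantity controlled by the law of large numbers type estimates below.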
We apply Corollary \ref{corollary-to-lemma3-2} to each function of the form $L_{Y_i} L_{Y_j} P_{T-t_k}f$, taking $h=\alpha_i\beta_j$. There exist a constant $\tilde c$ and functions $\gamma_{i,j,k,\epsilon}\in B_{V,0}$ such that $${\begin{split} & |B_k^\epsilon| \le {\Delta t_k}\sum_{i,j=1}^m \left|\overline{\alpha_i\beta_j}\; L_{Y_i} L_{Y_j} P_{T-t_k}f \left(y_{t_{k-1}\over \epsilon}^\epsilon\right)- {\epsilon \over \Delta t_k} \int_{t_{k-1}\over \epsilon }^{t_k\over \epsilon} {\mathbf E} \left\{ L_{Y_i} L_{Y_j} P_{T-t_k}f (y_r^\epsilon) (\alpha_i\beta_j)(z_{r}^\epsilon) \big| {\mathcal F}_{t_{k-1}\over \epsilon}\right\} dr \right| \\ & \le \sum_{i,j=1}^m\tilde c |\alpha_i\beta_j|_\infty\gamma_{i,j, k,\epsilon}(y_{t_{k-1}\over \epsilon}^\epsilon) \left ({\epsilon^2}+ (\Delta t_k)^2\right), \end{split}} $$ where, denoting $G^k_{i,j}:=L_{Y_i} L_{Y_j} P_{T-t_k}f$, $$\gamma_{i,j,k, \epsilon}=|G^k_{i,j}| + \sum_{l'=1}^m |L_{Y_{l'}}G^k_{i,j}|+\sum_{l,l'=1}^m {\epsilon\over \Delta t_k} \int_{t_{k-1}\over \epsilon}^{t_k\over \epsilon} {\mathbf E} \left\{ \left|L_{Y_l} L_{Y_{l'}}G^k_{i,j}(y^\epsilon_r)\right | \; \big|\;{\mathcal F}_{t_{k-1}\over \epsilon} \right\} dr.$$ By Theorem \ref{limit-thm}, $G^k_{i,j}=L_{Y_i} L_{Y_j} P_{T-t_k}f$ belongs to $B_{V,2}$. Furthermore, $G^k_{i,j}$ and its first two derivatives are bounded by a function in $B_{V,0}$ which depends on $f$ only through $\sum_{l=0}^4 P_{T-t_k} ( |\nabla^{(l)}d f|^p)$, for some $p$. Thus there are numbers $c ,q$ such that $\max_{i,j}|\gamma_{i,j,k, \epsilon}|\le c+cV^q$ for all $k$.
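For completeness, here is the elementary summation step behind (\ref{rate-3}): since $\Delta t_k\le\epsilon$ and $N\le {T\over\epsilon}+1$,

```latex
\sum_{k=1}^N \left(\epsilon^2+(\Delta t_k)^2\right) \le 2N\epsilon^2 \le 2(T+\epsilon)\,\epsilon,
```

so the factor $\epsilon^2+(\Delta t_k)^2$ in the bound for $|B_k^\epsilon|$ contributes a total of order $\epsilon$.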
Since $\Delta t_k\le \epsilon\le 1$ and $N\sim O({1\over \epsilon})$, summing over $k$ gives \begin{equation} \label{rate-3} \sum_{k=1}^N {\mathbf E} |B_k^\epsilon| \le 2\epsilon \cdot c\cdot \tilde c \sum_{i,j=1}^m |\alpha_i\beta_j|_\infty C_q(T) \sup_k {\mathbf E} \left (1+V^q (y_{t_{k-1}\over \epsilon}^\epsilon) \right) \le \epsilon C(T)\tilde \gamma(y_0), \end{equation} for some constant $C(T)$ and some function $\tilde \gamma$ in $B_{V,0}$. If $f\in BC^4$, it is easy to see that there is a function $g\in B_{V,0}$, not depending on $f$, s.t. $\max_{i,j,k}{\mathbf E} \gamma_{i,j, k,\epsilon}(y_{t_{k-1}\over \epsilon}^\epsilon) \le C(T)g(y_0)|f|_{4,\infty}$. {\it Step 5.} Finally, by Lemma \ref{gap} below, for $f\in B_{V,3}$ there exist a constant $C$ and a function $\gamma\in B_{V,0}$, depending on $T$ and $f$, s.t. for $\epsilon\le s <t \le T$, \begin{equation} \label{gap-0} \left| \sum_{j=1}^m {\mathbf E} df(Y_j(y^\epsilon_{t\over \epsilon} ) )\beta_j(z^\epsilon_{t\over \epsilon}) -{\mathbf E} df(Y_j(y^\epsilon_{s\over \epsilon} )) \beta_j( z^\epsilon_{s\over \epsilon}) \right| \le C\gamma(y_0)\epsilon \sqrt{|\log \epsilon|}+C\gamma(y_0) (t-s). \end{equation} For the partition $t_0<t_1<\dots<t_N$ we assumed that $t_1-t_0\le \epsilon$ and $\Delta t_k=\epsilon$ for $k\ge 2$. Let $k\ge 2$.
Since $dP_{T-t_k}f(Y_j)\in B_{V,3}$, estimate (\ref{gap-0}) also holds with $f$ replaced by $dP_{T-t_k}f(Y_j)$, and we have \begin{equation}\label{final-estimate} \left| \sum_{j=1}^m \epsilon {\mathbf E} A_{jk}^\epsilon \right|\le C\tilde \gamma(y_0)\epsilon^2 \sqrt{|\log \epsilon|}, \quad k\ge 2. \end{equation} Since the $\beta_j$ are bounded and, by Theorem \ref{limit-thm}, $dP_{T-t_k}f$ is bounded by a function in $B_{V,0}$ that does not depend on $k$, for $\epsilon\le \epsilon_0$ each term ${\mathbf E}|A_{jk}^\epsilon|$ is bounded by a function in $B_{V,0}$, and $ \sup_{0<\epsilon \le \epsilon_0}|{\mathbf E} A_{jk}^\epsilon|$ is of order $\epsilon \tilde \gamma(y_0)$ for some function $\tilde \gamma\in B_{V,0}$. We may therefore ignore a finite number of terms in the summation; in particular, we need not worry about the term with $k=1$. Since the sum over $k$ involves $O({1\over \epsilon})$ terms, the following bound follows from (\ref{final-estimate}): \begin{equation} \label{rate-4} \sum_{k=1}^N \left|\sum_{j=1}^m \epsilon {\mathbf E} A_{jk}^\epsilon\right|\le C\tilde \gamma(y_0)\epsilon \sqrt{|\log \epsilon|}. \end{equation} Here $\tilde \gamma\in B_{V,0}$ and may depend on $f$. The case $f\in BC^4$ can be treated similarly; the estimate is then of the form $\tilde \gamma=(1+|f|_{4,\infty})\gamma_0$, where $ \gamma_0\in B_{V,0}$ does not depend on $f$. Putting together (\ref{rate-1}), (\ref{rate-2}), (\ref{rate-3}) and (\ref{rate-4}), we see that if $f \in B_{V,4}$, $$\left|{\mathbf E} f\left(\Phi^\epsilon_{t\over \epsilon}(y_0)\right) -P_tf(y_0)\right|\le C(T) \gamma(y_0)\epsilon \sqrt{|\log \epsilon|},$$ where $\gamma\in B_{V,0}$. If $f\in BC^4$, collecting the estimates together, we see that there is a constant $C(T)$ s.t.
$$\left|{\mathbf E} f\left(\Phi^\epsilon_{t\over \epsilon}(y_0)\right) -P_tf(y_0)\right| \le\epsilon \sqrt{|\log \epsilon|}\, C(T)\left(1+ \sum_{k=1}^4 |\nabla^{(k-1)}df|_\infty\right) \tilde\gamma(y_0),$$ where $\tilde \gamma$ is a function in $B_{V,0}$ that does not depend on $f$. The proof is complete. \end{proof} \begin{lemma} \label{lemma4.2} Assume that the equations (\ref{sde-3}) are complete for all $\epsilon \in (0, \epsilon_0)$, for some $\epsilon_0>0$, and that the following hold. \begin{enumerate} \item[(1)] ${\mathcal L}_0$ is a regularity improving Fredholm operator on a compact manifold $G$, and $\alpha_k\in C^3\cap N^\perp$. \item [(2)] There exist $V\in C^2(M; {\mathbb R}_+)$ and constants $c, K$, s.t. $$\sum_{j=1}^m |L_{Y_j} V| \le c+KV, \quad \sum_{i,j=1}^m |L_{Y_i}L_{Y_j}V| \le c+KV.$$ \item [(2')] There exists a locally bounded $V:M\to {\mathbb R}_+$ such that for all $q\ge 2$ and $t>0$ there are constants $C(t)$ and $q'$ with the property that \begin{equation} \sup_{s\le u\le t}{\mathbf E} \left\{ \left(V(y_{u\over \epsilon}^\epsilon)\right)^q \; \big|\; {\mathcal F}_{s\over \epsilon}\right \}\le C(t) V^{q'}(y^\epsilon_{s\over \epsilon})+C(t). \end{equation} \item [(3)] For $V$ in part (2) or in part (2'), $ \sup_{\epsilon} {\mathbf E} V^q(y_0^\epsilon)<\infty$ for all $q\ge 2$. \end{enumerate} Then for $f\in C^2$ with the property that $L_{Y_j} f, L_{Y_i}L_{Y_j}f\in B_{V,0}$ for all $i,j$, there exists a number $\epsilon_0>0$ s.t. for every $0<\epsilon\le\epsilon_0$, $$\left|{\mathbf E}\left \{ f(y_{t\over \epsilon}^\epsilon) \; \big|\; {\mathcal F}_{s\over \epsilon}\right \}- f(y_{s\over \epsilon}^\epsilon)\right| \le \gamma_1(y_{s\over \epsilon}^\epsilon)\max_j |\beta_j|_\infty \; \epsilon+(t-s) \gamma_2(y_{s\over \epsilon}^\epsilon)\max_i|\alpha_i|_\infty\max_j |\beta_j|_\infty.$$ Here $\gamma_1, \gamma_2\in B_{V,0}$ and they depend on $f$ only through $|L_{Y_j}f|$ and $|L_{Y_j}L_{Y_i}f|$.
In particular there exists $\gamma \in B_{V,0}$ s.t. for all $0<\epsilon\le\epsilon_0$, $$\left| {\mathbf E} f(y_{t\over \epsilon}^\epsilon)-{\mathbf E} f(y_{s\over \epsilon}^\epsilon)\right| \le \sup_{0<\epsilon\le\epsilon_0} {\mathbf E}\gamma(y_0^\epsilon)\,(t-s+\epsilon).$$ Furthermore, $\sup_{0<\epsilon\le\epsilon_0}{\mathbf E}\left|f(y_{t\over \epsilon}^\epsilon)-f(y_{s\over \epsilon}^\epsilon)\right|\le (\epsilon +\sqrt{t-s})\,{\mathbf E} \gamma(y_0^\epsilon)$. \end{lemma} \begin{proof} Since the hypotheses of Theorem \ref{uniform-estimates} hold, if $V$ is as in part (2), it satisfies (2'). Since $L_{Y_j}f\in B_{V,0}$, $\sup_{s\le t}{\mathbf E}|L_{Y_j}f(y_{s\over \epsilon}^\epsilon)| ^2$ is finite. We apply Lemma \ref{Ito-tight}: \begin{equation*} {\begin{split} {\mathbf E}\left\{ f(y^\epsilon_{t\over \epsilon}) \; \big|\; {\mathcal F}_{s\over \epsilon}\right \} =&\ f(y^\epsilon_{s\over \epsilon}) + \epsilon \sum_{j=1}^m {\mathbf E}\left\{ \left( df(Y_j(y^\epsilon_{t\over \epsilon} ) )\beta_j(z^\epsilon_{t\over \epsilon}) -df(Y_j(y^\epsilon_{s\over \epsilon} ))\beta_j( z^\epsilon_{s\over \epsilon})\right) \; \big|\; {\mathcal F}_{s\over \epsilon}\right \}\\ &-\epsilon \sum_{i,j=1}^m {\mathbf E}\left\{ \int_{s\over \epsilon}^{t\over \epsilon} L_{Y_i}L_{Y_j} f(y^\epsilon_r) \alpha_i(z^\epsilon_r)\;\beta_j(z^\epsilon_r) \;dr \; \big|\; {\mathcal F}_{s\over \epsilon}\right \}. \end{split}}\end{equation*} Let $$\gamma_1(y_{s\over \epsilon}^\epsilon)=2 \sup_{s\le r\le t} \sum_{j=1}^m{\mathbf E}\left\{ |L_{Y_j}f(y_{r\over \epsilon}^\epsilon)| \; \big|\; {\mathcal F}_{s\over \epsilon}\right \}, \quad \gamma_2(y_{s\over \epsilon}^\epsilon) = \sup_{s\le r\le t} \sum_{i,j=1}^m{\mathbf E}\left\{ |L_{Y_i}L_{Y_j} f(y^\epsilon_{r\over \epsilon})| \; \big|\; {\mathcal F}_{s\over \epsilon}\right \}.$$ Since $L_{Y_j}f$ and $ L_{Y_i}L_{Y_j} f\in B_{V,0}$, we have $\gamma_1, \gamma_2\in B_{V,0}$.
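From the identity above, the asserted conditional bound follows by taking absolute values; a sketch of the bookkeeping, with the notation just introduced:

```latex
\left|{\mathbf E}\left\{ f(y^\epsilon_{t\over \epsilon})\,\big|\,{\mathcal F}_{s\over \epsilon}\right\} - f(y^\epsilon_{s\over \epsilon})\right| \le \epsilon\,\gamma_1(y_{s\over \epsilon}^\epsilon)\max_j|\beta_j|_\infty +\epsilon\,{t-s\over \epsilon}\,\gamma_2(y_{s\over \epsilon}^\epsilon) \max_i|\alpha_i|_\infty\max_j|\beta_j|_\infty .
```

The first term bounds the two boundary terms of the It\^o formula, using $|df(Y_j)|=|L_{Y_j}f|$ and the factor $2$ built into $\gamma_1$; the second bounds the time integral over an interval of length $(t-s)/\epsilon$.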
The required conditional inequality follows, and with it the estimate for $\left| {\mathbf E} f(y_{t\over \epsilon}^\epsilon)-{\mathbf E} f(y_{s\over \epsilon}^\epsilon)\right|$. To estimate $ {\mathbf E}\left| f(y_{t\over \epsilon}^\epsilon)-f(y_{s\over \epsilon}^\epsilon)\right|$ we must also involve the diffusion term in (\ref{Ito-tight}), which is how the $\sqrt{t-s}$ appears. \end{proof} \begin{lemma} \label{gap} Assume the conditions of Lemma \ref{lemma4.2} and Assumption \ref{assumption1}. Let $y_0^\epsilon=y_0$. If $f\in C^3$ is s.t. $|L_{Y_j}f|$, $|L_{Y_i}L_{Y_j}f|$, $|L_{Y_l}L_{Y_i}L_{Y_j}f|$ belong to $B_{V,0}$ for all $i,j,l$, then there exists $\epsilon_0$ s.t. for all $0<\epsilon \le \epsilon_0$ and all $\epsilon \le s<t\le T$, where $T>0$, $$\left| \sum_{l=1}^m {\mathbf E} df(Y_l(y^\epsilon_{t\over \epsilon} ) )\beta_l(z^\epsilon_{t\over \epsilon}) -{\mathbf E} df(Y_l(y^\epsilon_{s\over \epsilon} )) \beta_l( z^\epsilon_{s\over \epsilon}) \right| \le C(T) \gamma(y_0)\epsilon \sqrt{|\log \epsilon|}+C(T) \gamma(y_0) (t-s),$$ where $\gamma\in B_{V,0}$ and $C(T)$ is a constant. If the assumptions of Theorem \ref{rate} hold, the above estimate holds for any $f\in B_{V,3}$; if $f\in BC^3$, we may take $\gamma=(|f|_{3,\infty}+1)\tilde \gamma $ where $\tilde \gamma\in B_{V,0}$. \end{lemma} \begin{proof} Let $t\le T$. Since $\beta_l(z^\epsilon_{t\over \epsilon})$ is the highly oscillatory term, we expect that averaging out the oscillation in $\beta_l$ gains a factor of $\epsilon$ in the estimates.
We first split the difference: \begin{equation} \label{7.4-1}{ \begin{split} & \left( df(Y_l(y^\epsilon_{t\over \epsilon} ) )\beta_l(z^\epsilon_{t\over \epsilon})\right) -\left( df(Y_l(y^\epsilon_{s\over \epsilon} ))\beta_l( z^\epsilon_{s\over \epsilon}) \right)\\ &= df(Y_l(y^\epsilon_{s\over \epsilon} ) ) \left( \beta_l(z^\epsilon_{t\over \epsilon})- \beta_l(z^\epsilon_{s\over \epsilon})\right) + \left( df(Y_l(y^\epsilon_{t\over \epsilon} ) )- df(Y_l(y^\epsilon_{s\over \epsilon} ) )\right) \beta_l(z^\epsilon_{t\over \epsilon})=I_l+II_l. \end{split} } \end{equation} By Assumption \ref{assumption1}, ${\mathcal L}_0$ has mixing rate $\psi(r)=ae^{-{\delta r}}$. Let $s'<s\le t$. Then $${ \begin{split} &\left|{\mathbf E} df(Y_l(y^\epsilon_{s'\over \epsilon} ) ) \left( \beta_l(z^\epsilon_{t\over \epsilon})- \beta_l(z^\epsilon_{s\over \epsilon})\right)\right| \le {\mathbf E} \left(\left| df\left(Y_l(y^\epsilon_{s'\over \epsilon} )\right ) \right| \cdot \left| {1\over \epsilon} \int_{s\over \epsilon}^{t\over \epsilon} {\mathbf E} \left\{\alpha_l (z_r^\epsilon) \big | {\mathcal F}_{s'\over \epsilon} \right\}\; d r \right|\right)\\ &\le {\mathbf E} \left| df\left(Y_l(y^\epsilon_{s'\over \epsilon} )\right)\right | {1\over \epsilon} \int_{0}^{t-s\over \epsilon} \psi\left({r+{s-s'\over \epsilon} \over \epsilon}\right) dr\\ &\le {a^2\over \delta} e^{-{\delta(s-s')\over \epsilon^2}} \; {\mathbf E} \left| df\left(Y_l(y^\epsilon_{s'\over \epsilon} )\right)\right|. \end{split} } $$ If $s-s'=\delta_0 \epsilon^2|\log\epsilon|$, then $\exp\left(-{\delta(s-s')\over \epsilon^2}\right)=\epsilon^{\delta\delta_0}$. We apply Theorem \ref{uniform-estimates} to the function $L_{Y_l}f\in B_{V,0}$: for some $\epsilon_0>0$, $${a^2\over \delta}\sup_{0<\epsilon\le \epsilon_0}\sup_{0\le s'\le t} {\mathbf E}\left| df(Y_l(y_{s'\over \epsilon}^\epsilon)) \right| \le \tilde \gamma_l(y_0),$$ where $\tilde \gamma_l$ is a function in $B_{V,0}$, depending on $T$.
Thus for $s'<s<t$, \begin{equation} \label{7.4-3} \left|{\mathbf E}\left( df(Y_l(y^\epsilon_{s'\over \epsilon} ) ) \left( \beta_l(z^\epsilon_{t\over \epsilon})- \beta_l(z^\epsilon_{s\over \epsilon})\right) \right)\right| \le \tilde \gamma_l(y_0){a^2\over \delta} \exp{\left(-{\delta(s-s')\over \epsilon^2}\right)}. \end{equation} Let us split the first term on the right hand side of (\ref{7.4-1}). Taking $s'=s-{1\over \delta}\epsilon^2|\log\epsilon|$, $${ \begin{split} &I_l= {\mathbf E} df(Y_l(y^\epsilon_{s\over \epsilon} ) ) \left( \beta_l(z^\epsilon_{t\over \epsilon})- \beta_l(z^\epsilon_{s\over \epsilon})\right)\\ &= {\mathbf E} df(Y_l(y^\epsilon_{s'\over \epsilon} ) ) \left( \beta_l(z^\epsilon_{t\over \epsilon})- \beta_l(z^\epsilon_{s\over \epsilon})\right) +{\mathbf E} \left( \left( df(Y_l(y^\epsilon_{s\over \epsilon} ) ) -df(Y_l(y^\epsilon_{s'\over \epsilon} )) \right) \left( \beta_l(z^\epsilon_{t\over \epsilon})- \beta_l(z^\epsilon_{s\over \epsilon})\right) \right). \end{split} } $$ The first term on the right hand side is estimated by (\ref{7.4-3}). For the second term we take the supremum norm of $\beta_l$ and use Lemma \ref{lemma4.2}: for some $\tilde C(T)$ and $ \gamma\in B_{V,0}$, \begin{equation} \label{horder-rate} {\mathbf E} \left| df(Y_l(y^\epsilon_{s\over \epsilon} ) ) -df(Y_l(y^\epsilon_{s'\over \epsilon} )) \right| \le \tilde C(T)\gamma(y_0) \left(\epsilon+ {1\over \sqrt \delta}\epsilon |\log\epsilon|^{1\over 2}\right). \end{equation} Then, for some number $C(T)$, \begin{equation} \sum_l |I_l|\le {1\over \sqrt \delta} \epsilon\sqrt {|\log \epsilon|}\,C(T)\gamma(y_0), \end{equation} where $\gamma\in B_{V,0}$. Let us treat the second term on the right hand side of (\ref{7.4-1}). Let $t'=t-{1\over \delta}\epsilon^2|\log \epsilon|$.
Then $${\begin{split} II_l&= {\mathbf E} \left( df(Y_l(y^\epsilon_{t\over \epsilon} ) )- df(Y_l(y^\epsilon_{s\over \epsilon} ) )\right) \beta_l(z^\epsilon_{t\over \epsilon}) \\ &={\mathbf E}\left( df(Y_l(y^\epsilon_{t\over \epsilon} ) )- df(Y_l(y^\epsilon_{t'\over \epsilon} ) )\right) \beta_l(z^\epsilon_{t\over \epsilon}) + {\mathbf E} \left( df(Y_l(y^\epsilon_{t'\over \epsilon} ) )- df(Y_l(y^\epsilon_{s\over \epsilon} ) )\right) \beta_l(z^\epsilon_{t\over \epsilon}).\end{split}}$$ To the first term we apply (\ref{horder-rate}) and obtain a rate of ${1\over \sqrt \delta}\epsilon\sqrt {|\log \epsilon|}$. We may assume that $\beta_l$ averages to zero; subtracting the term $\bar \beta_l$ does not change $I_l$. Alternatively, Lemma \ref{lemma4.2} provides an estimate of order $\epsilon$ for $\left| {\mathbf E} \left( df(Y_l(y^\epsilon_{t\over \epsilon} ) )- df(Y_l(y^\epsilon_{s\over \epsilon} ) )\right) \right|$. Finally, since $\int \beta_l\, d\pi=0$, \begin{equation*} {\begin{split} & \left| {\mathbf E} \left( df(Y_l(y^\epsilon_{t'\over \epsilon} ) )- df(Y_l(y^\epsilon_{s\over \epsilon} ) )\right) \beta_l(z^\epsilon_{t\over \epsilon}) \right| = \left| {\mathbf E} \left( df(Y_l(y^\epsilon_{t'\over \epsilon} ) )- df(Y_l(y^\epsilon_{s\over \epsilon} ) )\right) {\mathbf E}\left\{ \beta_l(z^\epsilon_{t\over \epsilon}) \; \big | {\mathcal F}_{t'\over \epsilon}\right\} \right| \\ &\le {\mathbf E} \left| df(Y_l(y^\epsilon_{t'\over \epsilon} ) )- df(Y_l(y^\epsilon_{s\over \epsilon} ) ) \right| |\beta_l|_\infty ae^{ -\delta{t-t'\over \epsilon^2}}\le \gamma_{l} (y_0) |\beta_l|_\infty a\epsilon. \end{split}} \end{equation*} In the last step we used condition (2'); here $\gamma_l$ is a function in $B_{V,0}$. We have proved the first assertion. If the assumptions of Theorem \ref{rate} hold, then for any $f\in B_{V,3}$ the following functions belong to $B_{V,0}$: $|L_{Y_j}f|$, $|L_{Y_i}L_{Y_j}f|$, and $|L_{Y_l}L_{Y_i}L_{Y_j}f|$.
If $f\in BC^3$, the above mentioned functions are clearly controlled by $|f|_{3,\infty}$ multiplied by a function in $B_{V,0}$; this completes the proof. \end{proof} \section{Rate of Convergence in Wasserstein Distance} \label{Wasserstein} Let ${\bf \mathcal B}(M)$ denote the collection of Borel sets in a $C^k$ smooth Riemannian manifold $M$ with Riemannian distance function $\rho$, and let ${\mathbb P}(M)$ be the space of probability measures on $M$. Let $\epsilon \in (0, \epsilon_0)$ where $\epsilon_0$ is a positive number. If $P_\epsilon\to P$ weakly, we may measure the rate of convergence of $P_\epsilon$ to $P$ in either the total variation distance or the Wasserstein distance, convergence in either of which implies weak convergence. The Wasserstein 1-distance is $$d_W(P,Q)=\inf_{ (\pi_1)_*\mu=P,\, (\pi_2)_*\mu=Q} \int_{M\times M} \rho(x,y)\,d\mu(x,y).$$ Here $\pi_i:M\times M\to M$, $i=1,2$, are the projections to the first and second factors, and the infimum is taken over probability measures $\mu$ on $M\times M$ that couple $P$ and $Q$. If the diameter $\mathrm {diam}(M)$ of $M$ is finite, then the Wasserstein distance is controlled by the total variation distance: $d_W(P,Q)\le \mathrm {diam}(M)\|P-Q\|_{TV}$; see C. Villani \cite{Villani-Optimal-transport}. Let us assume that the manifold has bounded geometry, i.e. it has positive injectivity radius $\mathrm {inj}(M)$, and the curvature tensor and its covariant derivatives are bounded. The exponential map restricted to a ball of radius $r<\mathrm {inj}(M)$ at a point $x$ defines a chart, through a fixed orthonormal frame at $x$. Coordinates consisting of such exponential charts are said to be canonical. In canonical coordinates, all transition functions have bounded derivatives of all orders.
That $f$ is bounded in $C^k$ can be formulated as follows: in any canonical coordinates, $|\partial^\lambda f| $ is bounded for every multi-index $\lambda$ of order at most $k$. The following types of manifolds have bounded geometry: Lie groups, homogeneous spaces with invariant metrics, and Riemannian covering spaces of compact manifolds. In the lemma below we deduce, from the convergence rate of $P_\epsilon$ to $P$ in the $(C^k)^*$ norm, a rate in the Wasserstein distance. Let $\rho$ be the Riemannian distance, with reference to which we speak of the Lipschitz continuity of a real valued function on $M$ and of the Wasserstein distance on ${\mathbb P}(M)$. If $\xi$ is a random variable we denote by $\hat P_\xi$ its probability distribution, and we denote by $|f|_{\mathrm {Lip}}$ the Lipschitz constant of a function $f$. Let $p\in M$ and let $|f|_{C^k}=|f|_\infty+ \sum_{j=0}^{k-1} |\nabla^jdf|_\infty$. \begin{lemma}\label{lemma-Wasserstein} Let $\xi_1$ and $\xi_2$ be random variables on a $C^k$ manifold $M$ with bounded geometry, where $k\ge 1$. Suppose that for a reference point $p\in M$, $c_0:=\sum_{i=1}^2{\mathbf E} \rho^2(\xi_i, p) $ is finite. Suppose that there exist numbers $c\ge 0$, $\alpha\in (0,1)$, $\epsilon\in (0, 1]$ s.t. for every $g\in BC^k$, $$|{\mathbf E} g(\xi_1)-{\mathbf E} g(\xi_2)|\le c\epsilon^\alpha (1+ |g|_{C^k}).$$ Then there is a constant $C$, depending only on the geometry of the manifold, s.t. $$d_W(\hat P_{\xi_1}, \hat P_{\xi_2})\le C(c_0+c)\epsilon^{\alpha\over k}.$$ \end{lemma} \begin{proof} If $k=1$, the statement is clear. Let us take $k\ge 2$ and let $f:M\to {\mathbb R}$ be a Lipschitz continuous function with Lipschitz constant $1$. Since we are concerned only with the difference $\left|{\mathbf E} f(\xi_1)-{\mathbf E} f(\xi_2)\right|$, we may shift $f$ so that its value at the reference point $p$ is zero. By Lipschitz continuity, $|f(x) | \le |f|_{\mathrm {Lip}}\;\rho(x,p)$.
We may also assume that $f$ is bounded; if not, we define the truncations $f_n=(f\wedge n )\vee (-n)$, which are Lipschitz continuous with Lipschitz constants bounded by $|f|_{\mathrm {Lip}}$. Let $i=1,2$. The correction term $(f-f_n)(\xi_i)$ is easily controlled by the second moment of $\rho(p, \xi_i)$: $${\mathbf E} |(f-f_n)(\xi_i)| \le {\mathbf E} |f(\xi_i)|{\mathbf 1}_{\{|f(\xi_i)|>n\}} \le {1\over n} {\mathbf E} f^2(\xi_i) \le {1\over n} {\mathbf E}\rho^2(p, \xi_i). $$ Let $\eta: {\mathbb R}^d\to {\mathbb R}$ be a smooth function supported in the unit ball with $|\eta|_{L^1}=1$, and set $\eta_\delta(x)=\delta^{-d}\eta({x\over \delta})$, where $\delta$ is a positive number and $d$ is the dimension of the manifold. If $M={\mathbb R}^d$, \begin{equation*} {\begin{split} & \left|{\mathbf E} f(\xi_1)-{\mathbf E} f(\xi_2)\right|\\ &\le \left|{\mathbf E} (f*\eta_\delta)(\xi_1)-{\mathbf E} (f*\eta_\delta)(\xi_2)\right| +\sum_{i=1}^2 \left|{\mathbf E} (f*\eta_\delta)(\xi_i)-{\mathbf E} f(\xi_i) \right|\\ &\le c\epsilon^\alpha (1+ |f*\eta_\delta|_{C^k})+2\delta |f|_{\mathrm {Lip}}. \end{split}} \end{equation*} In the last step we used the hypothesis on $|{\mathbf E} (f*\eta_\delta)(\xi_1)-{\mathbf E}(f*\eta_\delta)(\xi_2)|$ for the $BC^k$ function $f*\eta_\delta$. By distributing the derivatives to $\eta_\delta$ we see that the norms of the first $k$ derivatives of $f*\eta_\delta$ are controlled by $|f|_{\mathrm {Lip}}$. If $f$ is bounded, $$c\epsilon^\alpha(1+ |f*\eta_\delta|_{C^k})\le c \epsilon^\alpha (1+ |f|_\infty+c_1 \delta^{-k+1}|f|_{\mathrm {Lip}}),$$ where $c_1$ is a combinatorial constant. To summarize, for every Lipschitz continuous $f$ with $|f|_{\mathrm {Lip}}=1$, $${\begin{split} \left|{\mathbf E} f(\xi_1)-{\mathbf E} f(\xi_2)\right| &\le 2\delta|f|_{\mathrm {Lip}}+ c\epsilon^\alpha(1+ |f_n*\eta_\delta|_{C^k})+{c_0\over n}\\ &\le 2 \delta +c\epsilon^\alpha+ c\epsilon^\alpha n+ c_1c\epsilon^\alpha \delta^{-k+1} +{c_0\over n} .
\end{split}}$$ Let $\delta=\epsilon^{\alpha\over k}$. Since $k\ge 2$, we may choose $n$ with the property $\epsilon^{-{\alpha\over k}}\le n\le 2\epsilon^{-\alpha+{\alpha\over k}}$; then for $f$ with $|f|_{\mathrm {Lip}}=1$, $$ \left|{\mathbf E} f(\xi_1)-{\mathbf E} f(\xi_2)\right| \le (2+2c+c_1c+2c_0)\epsilon^{\alpha\over k}.$$ Let $\delta$ be a positive number with $4\delta<\mathrm {inj}(M)$, and let $B_x(r)$ denote the geodesic ball centred at $x$ with radius $r$, whose Riemannian volume is denoted by $V(x,r)$. There is a countable sequence $\{x_i\}$ in $M$ with the following properties: (1) $\{B_{x_i}(\delta)\}$ covers $M$; (2) there is a natural number $N$ such that any point $y$ belongs to at most $N$ balls from $\{B_{x_i}(3 \delta)\} $, i.e. the cover $\{B_{x_i}(3 \delta)\} $ has finite multiplicity; moreover, $N$ is independent of $\delta$. See M. A. Shubin \cite{Shubin92}. To see the independence of $N$ of $\delta$, let us choose a sequence $\{x_i, i\ge 1\}$ in $M$ with the property that $\{B_{x_i}( \delta)\}$ covers $M$ and the balls $\{B_{x_i}({\delta \over 2})\}$ are pairwise disjoint. Since the curvature tensor and its derivatives are bounded, there is a positive number $C$ such that $${1\over C}\le {V(x,r)\over V(y,r)}\le C, \quad x,y \in M,\ r\in (0, 4\delta).$$ Let $y\in M$ be a fixed point that belongs to $N$ balls of the form $B_{x_i}({\delta\over 2})$. Since each such $B_{x_i}({\delta\over 2})\subset B_y(4\delta)$, the volumes satisfy $\sum V(x_i, {\delta \over 2}) \le V(y, 4\delta)$, and hence ${N\over C} V(y, {\delta\over 2})\le V(y, 4\delta)$. The ratio $\sup_y{V(y, 4\delta )\over V(y, {\delta\over 2})}$ depends only on the dimension of the manifold.
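The exponent ${\alpha\over k}$ in the Euclidean estimate above comes from balancing the smoothing error against the blow-up of the $C^k$ norm; a sketch of the optimization, with $c, c_1$ as above:

```latex
g(\delta):=2\delta+c_1c\,\epsilon^\alpha\delta^{1-k} \quad\text{is minimized at}\quad \delta_*=\left({(k-1)c_1c\over 2}\right)^{1\over k}\epsilon^{\alpha\over k},
```

and with the simpler choice $\delta=\epsilon^{\alpha\over k}$ both terms are of order $\epsilon^{\alpha\over k}$. Similarly, $n$ is chosen in $[\epsilon^{-{\alpha\over k}},\, 2\epsilon^{-\alpha+{\alpha\over k}}]$ so that both ${c_0\over n}$ and $c\epsilon^\alpha n$ are $O(\epsilon^{\alpha\over k})$; such an $n$ exists precisely because $k\ge 2$.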
Let us take a $C^k$ smooth partition of unity $\{\alpha_i, i \in \Lambda\}$ subordinate to $\{B_{x_i}(2\delta)\} $: $1=\sum_{i\in \Lambda}\alpha_i$, $\alpha_i\ge 0$, $\alpha_i$ is supported in $B_{x_i}(2\delta)$, and for any point $x$ only a finite number of the summands in $\sum_{i\in \Lambda} \alpha_i(x)$ are non-zero. The partition of unity can be taken to satisfy, in addition, $\sup_i|\partial ^\lambda \alpha_i|\le C_\lambda$. Let $(B_{x_i}(\mathrm {inj}(M)), \phi_i)$ be the geodesic charts, let $f_i=f\alpha_i$, and let $\tilde g=g\circ \phi_i$ denote the representation of a function $g$ in a chart. \begin{equation*} {\begin{split} & \left|{\mathbf E} f(\xi_1)-{\mathbf E} f(\xi_2)\right| = \left|\sum_{i\in \Lambda} {\mathbf E} \tilde f_i \left( \phi^{-1}_i(\xi_1)\right)-\sum_{i\in \Lambda}{\mathbf E} \tilde f_i \left( \phi^{-1}_i(\xi_2)\right)\right|\\ &\le \left|\sum_{i\in \Lambda} {\mathbf E} \tilde f_i*\eta_{\delta} \left( \phi^{-1}_i(\xi_1)\right)-\sum_{i\in \Lambda}{\mathbf E} \tilde f_i*\eta_{\delta}\left( \phi^{-1}_i(\xi_2)\right)\right|\\ &+ \sum_{j=1}^2\left|\sum_{i\in \Lambda} {\mathbf E}\tilde f_i*\eta_{\delta} \left( \phi^{-1}_i(\xi_j)\right)-\sum_{i\in \Lambda}{\mathbf E} \tilde f_i \left( \phi^{-1}_i(\xi_j)\right)\right|. \end{split}} \end{equation*} It is crucial to note that there are at most $N$ non-zero terms in each summation. By the assumption, for each $i$, $$\left| {\mathbf E} \tilde f_i*\eta_{\delta} \left( \phi^{-1}_i(\xi_1)\right)-{\mathbf E} \tilde f_i*\eta_{\delta}\left( \phi^{-1}_i(\xi_2)\right)\right| \le c\epsilon^\alpha\left(1+ | \tilde f_i*\eta_{\delta}\circ \phi_i^{-1}|_{C^k}\right).$$ By construction, $\sup_i|\alpha_i|_{C^k}$ is bounded.
There is a constant $c'$, depending only on the partition of unity, such that $$| \tilde f_i*\eta_{\delta}\circ \phi_i^{-1}|_{C^k} \le c'| \tilde f_i*\eta_{\delta}|_{C^k} \le c' |\tilde f_i|_\infty+c'c_1\delta^{1-k}|\tilde f_i|_{\mathrm {Lip}}.$$ Similarly, for the second summation we work with the representatives of the $f_i$: $$ \left| \tilde f_i*\eta_{\delta} \left( \phi^{-1}_i(y)\right)-\tilde f_i\left( \phi^{-1}_i(y)\right)\right| \le \delta |\tilde f_i|_{\mathrm {Lip}}\le c'\delta.$$ Since we work in geodesic charts, the Lipschitz constants of the $\tilde f_i$ are comparable to $|f|_{\mathrm {Lip}}$. Let $|f|_{\mathrm {Lip}}=1$. If $f$ is bounded, \begin{equation*} \left|{\mathbf E} f(\xi_1)-{\mathbf E} f(\xi_2)\right|\le N c\epsilon^\alpha(1+c' | f|_\infty+c'\delta^{1-k})+2 c'\delta N. \end{equation*} Let $\delta=\epsilon^{\alpha\over k}$; then $$ \left|{\mathbf E} f(\xi_1)-{\mathbf E} f(\xi_2)\right|\le Nc\epsilon^\alpha (c' | f|_\infty+1)+ Ncc' \epsilon^{\alpha\over k}+2c'N\epsilon^{\alpha\over k}.$$ On a compact manifold, $|f|_\infty$ can be controlled by $|f|_{\mathrm {Lip}}$; otherwise we use the truncation $f_n$ in place of $f$ together with the estimate ${\mathbf E} |(f-f_n)(\xi_i)| \le {c_0\over n} $. Choosing $n$ sufficiently large, as before, we see that $ \left|{\mathbf E} f(\xi_1)-{\mathbf E} f(\xi_2)\right|\le C\epsilon^{\alpha\over k}$. Finally we apply the Kantorovich-Rubinstein duality theorem, $$d_W(\hat P_{\xi_1},\hat P_{\xi_2})=\sup_{f:\, |f|_{\mathrm {Lip}}\le 1 }\left\{ |{\mathbf E} f(\xi_1) -{\mathbf E} f(\xi_2)| \right\}\le C\epsilon^{\alpha\over k},$$ to obtain the required estimate on the Wasserstein 1-distance, concluding the proof. \end{proof} Let $\mathop {\rm ev}_t: C([0,T]; M)\to M$ denote the evaluation map at time $t$: $\mathop {\rm ev}_t(\sigma)=\sigma(t)$. Let $\hat P_{\xi}$ denote the probability distribution of a random variable $\xi$, and let $o\in M$.
\begin{proposition} \label{proposition-rate} Assume the conditions and notation of Theorem \ref{rate}. Suppose that $M$ has bounded geometry and $\rho_o^2 \in B_{V,0}$. Let $\bar \mu$ be the limit measure and $\bar \mu_t=(\mathop {\rm ev}_t)_*\bar \mu$. Then for every $r<{1\over 4}$ there exist $C(T)\in B_{V,0}$ and $\epsilon_0>0$ such that for all $\epsilon\le\epsilon_0$ and $t\le T$, $$d_W(\hat P_{y^\epsilon_{t\over \epsilon}}, \bar \mu_t)\le C(T)\epsilon^{r}.$$ \end{proposition} \begin{proof} By Theorem \ref{rate}, for $f\in BC^4$, $$\left|{\mathbf E} f(\Phi^\epsilon_{t\over \epsilon}(y_0)) -P_tf(y_0)\right|\le C(T)(y_0)\epsilon \sqrt{|\log \epsilon|},$$ where $C(T) (y_0)\le\tilde C(T)(y_0) (1+|f|_{C^4})$ for some function $\tilde C(T)\in B_{V,0}$. Since, by Theorem \ref{uniform-estimates}, there exists $\epsilon_0>0$ such that $\sup_{\epsilon\le \epsilon_0} {\mathbf E}\rho^2_o (\Phi_t^\epsilon(y_0))$ is finite, we take $\alpha$ in Lemma \ref{lemma-Wasserstein} to be any number less than $1$ to conclude the proposition. \end{proof} \section{Appendix} We begin with the proof of Lemma \ref{lemma2}, and follow it with a discussion of conditional inequalities that do not require assumptions on the $\sigma$-algebra concerned. \subsection*{Proof of Lemma \ref{lemma2}} Step 1. Denote $\psi(t)=ae^{-\delta t}$. 
Firstly, if $f \in \mathcal{B}_b(G;{\mathbb R})$ and $z\in G$, $$|Q_tf(z)-\pi f| \le \|f\|_W \cdot \psi(t)\cdot W(z) .$$ Next, by the Markov property of $(z_t)$ and the assumption that $\int g \,d\pi=0$: \begin{equation*} {\begin{split} &\left| {\mathbf E} \{f(z_{s_2}) g(z_{s_1})|{\mathcal F}_{s}\} -\int_G fQ_{s_1-s_2}g d\pi \right|\\ &=\left| {\mathbf E}\left\{ \left(fQ_{s_1-s_2} g\right)(z_{s_2})\Big| {\mathcal F}_s\right\} -\int_G fQ_{s_1-s_2}g d\pi \right|\\ &\le \psi(s_2-s) \;\|fQ_{s_1-s_2}g\|_W \;W(z_s) \le \psi(s_2-s) \sup_{z\in G}\left( { |f(z)| |Q_{s_1-s_2}g(z) |\over W(z)}\right) W(z_s) \\ &\le \psi(s_2-s)\psi(s_1-s_2) |f|_\infty\, \|g\|_W W(z_s) \le a\psi(s_1-s) |f|_\infty \|g\|_W W(z_s) . \end{split}} \end{equation*} From this we see that $${\begin{split} &\left|{1\over t-s} \int_s^{t} \int_s^{s_1} \left( {\mathbf E}\left\{ f(z_{s_2}) g(z_{s_1}) \Big| {\mathcal F}_s\right\} -\int_G fQ_{s_1-s_2}g d\pi\right) ds_2ds_1\right|\\ &\le a |f|_\infty\, \|g\|_W W(z_s) {1\over t-s} \int_s^{t} \int_s^{s_1} \psi\left(s_1-s\right)\; d s_2 \; d s_1\\ & \le {a^2\over \delta^2 (t-s)}|f|_\infty\, \|g\|_W W(z_s)\int_0^{(t-s)\delta} re^{-r} \; d r \le {a^2\over \delta^2 (t-s)}|f|_\infty\, \|g\|_W W(z_s).\end{split}}$$ This concludes (1). Step 2. For (2), we compute the following: $${ \begin{split} & {1\over t-s} \int_s^{t} \int_s^{s_1}\int_G fQ_{s_1-s_2}g \; d\pi \; d s_2 \; d s_1 =\int_G {1\over t-s} \int_0^{t-s}(t-s-r)\, fQ_rg \; d r \, d \pi\\ &=\int_G \int_0^\infty \left(f Q_r g\right) \; d r\; d\pi-\int_G \int_{t-s}^\infty f Q_r g \; d r \; d \pi -{1\over t-s}\int_G \int_0^{t-s} r f Q_r g \; d r d \pi. \end{split} } $$ We estimate the last two terms. 
Firstly, $${ \begin{split} &\left|\int_G \int_{t-s}^\infty f (z)Q_r g(z) \; d r \; d \pi(z)\right| \le |f|_\infty \int_G \int_{t-s}^\infty |Q_rg(z)| \; d r \; d\pi(z)\\ & \le |f|_\infty \|g\|_W \int_G W(z) \pi(dz) \int_{t-s}^\infty \psi(r)dr \le {1\over \delta} |f|_\infty \|g\|_W \bar W\int_{(t-s)\delta}^\infty ae^{-r}dr\\ &\le {a\over \delta} |f|_\infty \|g\|_W \bar W. \end{split} } $$ It remains to estimate the following: $${ \begin{split} \left|{1\over t-s} \int_G \int_0^{t-s} r f Q_r g \; d r d \pi \right| &\le {1\over t-s} |f|_\infty \|g\|_W \bar W \int_0^{t-s} r\psi(r)\; d r\\ &\le {a\over (t-s)\delta^2} |f|_\infty\|g\|_W \bar W. \end{split} }$$ Gathering the estimates together we obtain the bound: \begin{equation*} { \begin{split} &\left| {1\over t-s} \int_s^{t} \int_s^{s_1}\int_G fQ_{s_1-s_2}g \; d\pi \; d s_2 \; d s_1 -\int_G \int_0^\infty \left(f Q_r g\right) \; d r\; d\pi\right| \\ &\le{a\over \delta}|f|_\infty\|g\|_W \bar W+ {a\over (t-s)\delta^2} |f|_\infty\|g\|_W \;\bar W. \end{split} } \end{equation*} By adding this estimate to that in part (1), we obtain: \begin{equation} \label{estimate-1-1} { \begin{split} &\left|{1\over t-s} \int_s^{t} \int_s^{s_1} {\mathbf E}\left\{ f(z_{s_2}) g(z_{s_1}) \Big| {\mathcal F}_s\right\}\; d s_2 \; d s_1 -\int_G \int_0^\infty \left(f Q_r g\right) \; d r\; d\pi\right| \\ &\le{a\over \delta}|f|_\infty\|g\|_W \bar W+ {a\over (t-s)\delta^2} |f|_\infty\|g\|_W \;\bar W +{a^2\over \delta^2 (t-s)}|f|_\infty\|g\|_W W(z_s). \end{split} } \end{equation} We conclude part (2). Step 3. 
We first assume that $\bar g =0$. Then, $${ \begin{split} &\left|{\epsilon\over t-s} \int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} {\mathbf E}\left\{ f(z^\epsilon_{s_2}) g(z^\epsilon_{s_1}) \Big| {\mathcal F}_{s\over \epsilon}\right\} \; d s_2 \; d s_1\right|\\ & \le\left|{\epsilon\over t-s} \int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} {\mathbf E}\left\{ f(z^\epsilon_{s_2}) g(z^\epsilon_{s_1}) \Big| {\mathcal F}_{s\over \epsilon}\right\}\; d s_2 \; d s_1 -\int_G \int_0^\infty f Q_r^\epsilon g \; d r \; d \pi \right| \\ & +\left|\int_G \int_0^\infty f Q_r^\epsilon g \; d r \; d \pi\right|. \end{split} } $$ We note that for every $x\in G$, $\|Q_r^\epsilon(x, \cdot)-\pi\|_{TV,W}\le \psi({ r\over \epsilon}) W(x)$. In line (\ref{estimate-1-1}) we replace $s$, $t$, $\delta $ by ${s\over \epsilon}$, ${t\over \epsilon}$, and $ {\delta \over \epsilon}$ respectively to see that the first term on the right hand side is bounded by $${a\epsilon^3\over \delta^2 (t-s)}( a W(z^\epsilon_{s\over \epsilon})+ \bar W) |f|_\infty\|g\|_W + {a\epsilon\over \delta} |f|_\infty \|g\|_W \bar W. $$ Next we observe that $${\begin{split} \int_0^\infty f(z) Q_s^\epsilon g(z) \; d s&=\int_0^\infty f (z)Q_{s\over \epsilon}g(z) \; d s =\epsilon \int_0^\infty f (z)Q_s g(z) \; d s,\\ \left|\int_G \int_0^\infty f(z) Q_s^\epsilon g(z) \; d s \; d \pi(z)\right| &\le \epsilon\,|f|_\infty \|g\|_W \bar W \int_0^\infty \psi(s) \; d s= {a\epsilon\over \delta}|f|_\infty \|g\|_W \bar W . \end{split}}$$ This gives the estimate for the case of $\bar g=0$: $$\left|{\epsilon\over t-s} \int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} {\mathbf E}\left\{ f(z^\epsilon_{s_2}) g(z^\epsilon_{s_1}) \Big| {\mathcal F}_{s\over \epsilon}\right\} \; d s_2 \; d s_1\right| \le C_1(z_{s\over \epsilon}^\epsilon) {\epsilon^3 \over t-s}+C_2'(z_{s\over \epsilon}^\epsilon)\epsilon. 
$$ where $$C_1={a\over \delta^2} (aW(\cdot)+\bar W)|f|_\infty\|g\|_W, \quad C_2'={2a\over \delta}|f|_\infty\|g\|_W\bar W.$$ If $\bar g=\int g \; d\pi\not =0$, we split $g=(g-\bar g)+\bar g$ and estimate the remaining term, using the fact that $\pi f=0$: $${ \begin{split} &\left|{\epsilon\over t-s} \int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} {\mathbf E}\left\{ f(z^\epsilon_{s_2}) \bar g\big| {\mathcal F}_{s\over \epsilon}\right\} \; d s_2 \; d s_1\right| \le |\bar g| \left|{\epsilon\over t-s} \int_0^{t-s\over \epsilon} \int_0^{s_1} \left| Q^\epsilon _{s_2} f(z^\epsilon_{s\over \epsilon}) \right|\; d s_2 \; d s_1\right|\\ & \le |\bar g|\, \|f\|_W W(z_{s\over \epsilon}^\epsilon) \sup_{s_1>0} \left\{ \left|\int_0^{s_1} \psi({s_2\over \epsilon}) ds_2\right|\right\} \le |\bar g| \; \|f\|_W W(z_{s\over \epsilon}^\epsilon)\epsilon \int_0^\infty \psi(r)dr\\ & ={a\epsilon\over \delta} |\bar g| \; \|f\|_W W(z_{s\over \epsilon}^\epsilon). \end{split} } $$ Finally we obtain the required estimate in part (3): $${ \begin{split} &\left|{\epsilon\over t-s} \int_{s\over \epsilon}^{t\over \epsilon} \int_{s\over \epsilon}^{s_1} {\mathbf E}\left\{ f(z^\epsilon_{s_2}) g(z^\epsilon_{s_1}) \Big| {\mathcal F}_{s\over \epsilon}\right\} \; d s_2 \; d s_1\right| \\ & \le C_1(z_{s\over \epsilon}^\epsilon) \left({\epsilon^3\over t-s}\right) +C_2'(z_{s\over \epsilon}^\epsilon)\epsilon + \epsilon{a \over \delta}|\bar g| \; \|f\|_W W(z_{s\over \epsilon}^\epsilon), \end{split} } $$ thus concluding part (3). The following conditional inequalities are elementary. We include a partial conditional Burkholder--Davis--Gundy inequality for completeness. We do not assume the existence of regular conditional probabilities. \begin{lemma} Let $(M_t)$ be a continuous $L^2$ martingale vanishing at $0$. Let $(H_t)$ be an adapted stochastic process with left continuous sample paths and right limits. Suppose that for stopping times $s<t$, ${\mathbf E} \int_s^t(H_r)^2 \,d\<M\>_r<\infty$. 
Then $${\mathbf E}\left \{ \left(\int_s^t H_r dM_r\right)^2\Big |{\mathcal F}_s\right\} ={\mathbf E}\left\{ \int_s^t (H_r)^2 d\<M\>_r\Big |{\mathcal F}_s\right\}.$$ \end{lemma} \begin{lemma} Let $p>1$ and let $(M_t)$ be a right continuous $({\mathcal F}_t)$ martingale, or a right continuous positive sub-martingale, indexed by an interval $I$ of ${\mathbb R}_+$. Then, \begin{equation*} {\mathbf E} \left\{ \sup_{t\in I} |M_t|^p \Big| \: {\mathcal F}_s\right\} \le \left({p\over p-1}\right)^p \sup_{t\in I}{\mathbf E} \left\{ |M_t|^p\Big| \: {\mathcal F}_s\right\}. \end{equation*} If $(M_u, s\le u \le t)$ is a right continuous $({\mathcal F}_t)$ martingale and $p\ge 2$, there exists a constant $c_p>0$ such that \begin{equation*} {\begin{split} &{\mathbf E}\left \{\sup_{s\le u \le t} |M_u|^p\Big |{\mathcal F}_s\right\} \le c_p {\mathbf E}\left\{ \<M\>_t^{p\over 2} \Big |{\mathcal F}_s\right\}. \end{split}} \end{equation*} \end{lemma} The proofs are the same as in the case where ${\mathcal F}_s$ is the trivial $\sigma$-algebra, cf. D. Revuz and M. Yor \cite{Revuz-Yor}.
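For illustration only (ours, not part of the text): the unconditional case of the first inequality above, Doob's $L^p$ maximal inequality with $p=2$ and constant $\left({p\over p-1}\right)^p = 4$, can be checked by Monte Carlo on a symmetric random walk, a discrete $L^2$ martingale vanishing at $0$; sample sizes are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 4000, 200

# symmetric +/-1 random walk: a discrete L^2 martingale vanishing at 0
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
M = np.cumsum(steps, axis=1)

lhs = np.mean(np.max(np.abs(M), axis=1) ** 2)   # estimate of E sup_t |M_t|^2
rhs = 4.0 * np.mean(M[:, -1] ** 2)              # (p/(p-1))^p E |M_T|^2 for p = 2

assert lhs <= rhs
```

The margin is comfortable: for this walk the running maximum squared averages well below four times the terminal second moment.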
\section{Introduction} Vega ($\alpha$ Lyr, A0V) is one of the brightest and most familiar stars in the night sky, termed by astronomers ``arguably the next most important star in the sky after the Sun'' (Gulliver et al. \cite{gulliver94}). Vega demonstrates a low projected rotational velocity ($v\sin i < 22$ km/s), but it is a rapidly rotating star seen nearly pole-on. The rapid rotation causes the equator to bulge outward because of centrifugal effects and, as a result, there is a latitude variation of temperature that reaches a maximum at the poles. No consensus has yet been reached as to how fast Vega is rotating. Different authors have proposed different rotation periods: 0.525 d (Aufdenberg et al. \cite{aufdenberg06}, interferometry), 0.622 d (Hill et al. \cite{hill10}, high resolution spectral line profiles), 0.732 d (Petit et al. \cite{petit10}, magnetic maps), and 0.733 d (Takeda et al. \cite{takeda08}, modelling of individual spectral lines and the spectral energy distribution). Butkovskaya et al. \cite{butk11}, using their own spectropolarimetric data, confirmed the 21-year variability of Vega discovered by Vasil'ev et al. \cite{vasil89} from spectrophotometry. They found that the equivalent widths of the four spectral lines Mg I 5167.321, Mg I 5172.684, Mg I 5183.604, and Fe II 5169.033 vary with the 21-year period. Recent spectropolarimetric observations of Vega have revealed the presence of a weak magnetic field of 0.6 $\pm$ 0.2 G (Lignieres et al. \cite{lignieres09}, Petit et al. \cite{petit10}). Alina et al. \cite{alina11} noted the stability of the weak magnetic field on a timescale of 3 years. A potentially significant fraction of A stars might display a similar type of magnetism, but the geometry of these fields is poorly constrained. To constrain it, it is crucial to accumulate more information about surface magnetic structures. Therefore, long-term spectropolarimetric observations of Vega are needed. 
We present here the results of a spectropolarimetric study of Vega carried out on 37 nights in 1997--2012. \section{Observations}\label{obs} A high-accuracy spectropolarimetric study of Vega was performed in the spectral region around 5170 \AA\ on 37 nights from 1997 to 2012. The star was observed using the coud\'e spectrograph at the 2.6-m Shajn telescope mounted at the Crimean Astrophysical Observatory (CrAO, Ukraine). Over 2700 circularly polarized high-resolution spectra with a total signal-to-noise ratio per pixel of about 37500 were recorded. The resolving power of the spectra is approximately 25000. The effective magnetic field was calculated using the technique described in detail by Plachinda \cite{plach04} and Butkovskaya \& Plachinda \cite{butk07}. We calculated the Zeeman splitting for each of the lines marked in Figure~\ref{spectra}. For each line, 1353 individual values of the longitudinal magnetic field were obtained. We then calculated 1353 mean longitudinal magnetic field values averaged over all four lines. To ensure optimal data quality we applied a 3$\sigma$ rejection criterion, justified by the Law of Large Numbers (LLN): magnetic field values that deviated from the mean by more than 3$\sigma$ were eliminated from our dataset. The final time series consists of 1312 mean longitudinal magnetic field measurements. \begin{figure}[!t] \begin{center} \hbox{ \includegraphics[width=10cm]{butkovskaya/butkovskaya_fig1.eps} } \vspace{-5mm} \caption[]{The normalized observed spectrum of Vega; $g$ is the effective Land\'e factor of the spectral lines. } \label{spectra} \end{center} \end{figure} \section{Results}\label{results} The search for periodicity in the longitudinal magnetic field data was performed using the Period04 code. The range of the analyzed frequencies encompasses the rotation periods already proposed in the literature (Aufdenberg et al. \cite{aufdenberg06}, Takeda et al. \cite{takeda08}, Hill et al. \cite{hill10}, Petit et al. \cite{petit10}). 
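Such a frequency search can be illustrated with a generic Lomb--Scargle periodogram on synthetic, unevenly sampled data; the following Python sketch is our own illustration (the actual analysis used the Period04 code, whose interface is not assumed here; the sampling and noise level below are invented):

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic unevenly sampled "longitudinal field" time series with a known
# input frequency near the one discussed in the text (1.6062959 1/d).
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 300.0, 1312))           # observation epochs [d]
f_true = 1.6062959                                    # input frequency [1/d]
b = 0.5 * np.sin(2 * np.pi * f_true * t) + 0.1 * rng.normal(size=t.size)

freqs = np.linspace(1.2, 2.0, 8000)                   # search band [1/d]
power = lombscargle(t, b - b.mean(), 2 * np.pi * freqs)  # angular frequencies

f_peak = freqs[np.argmax(power)]                      # recovers f_true
```

The peak of the periodogram falls on the injected frequency to within the grid spacing, which is the same logic by which the 1.6062959 $d^{-1}$ peak is identified in the real data.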
The power spectrum of the longitudinal magnetic field (Figure~\ref{power}, black line) reveals its most prominent peak at the frequency 1.6062959 $d^{-1}$ (signal-to-noise ratio 3.07), which corresponds to the period 0.62255 d. This period is very close to the rotation period 0.622 d estimated by Hill et al. \cite{hill10}. The power spectrum of the `null' field (Figure~\ref{power}, gray line) reveals no significant peaks in this frequency range. \begin{figure}[!t] \begin{center} \hbox{ \includegraphics[width=10cm]{butkovskaya/butkovskaya_fig2.eps} } \vspace{-5mm} \caption[]{The power spectra of the longitudinal magnetic field (\textit{black}) and of the `null' field, which characterizes the spurious magnetic signal (\textit{gray}). The maximal peak corresponds to the frequency 1.6062959 $d^{-1}$. The arrow marks the frequency 1.3663 $d^{-1}$, which corresponds to the rotation period proposed by Petit et al. \cite{petit10}. } \label{power} \end{center} \end{figure} We adopted the Julian date $JD$ = 2450658.427 as the phase origin and phased the longitudinal magnetic field of Vega with the 0.62255-day period. The left panel of Figure~\ref{field} presents the longitudinal magnetic field of Vega averaged within 10 phase bins. The statistical reliability of the magnetic field curve versus the phased `null' field, estimated by a Fisher test, is 99.8\%. \begin{figure}[!t] \begin{center} \hbox{ \includegraphics[width=10cm]{butkovskaya/butkovskaya_fig3.eps} } \vspace{-25mm} \caption[]{\textbf{Left}: the longitudinal magnetic field (\textit{black circles}) and `null' field (\textit{crosses}) of Vega phased with the rotation period 0.62255 d and averaged within 10 bins. \textbf{Right}: the longitudinal magnetic field (\textit{black circles}) and `null' field (\textit{crosses}) phased with the rotation period 0.7319 d (Petit et al. \cite{petit10}) and averaged within 10 bins. The least-squares sinusoidal fit is shown by the solid line. 
} \label{field} \end{center} \end{figure} In an attempt to reconstruct the topology of the surface field of Vega, Petit et al. \cite{petit10} calculated a set of magnetic maps, assuming for each map a different value of the rotation period. They adopted a value of 0.7319 d for the rotation period and chose the Julian date $JD$ = 2454101.5 as the phase origin. We tested the agreement of our long-term magnetic field measurements with the period proposed by Petit et al. \cite{petit10}. The frequency 1.3663 $d^{-1}$ (signal-to-noise ratio 1.28) is marked in Figure~\ref{power} by the arrow. We phased the longitudinal magnetic field of Vega with the 0.7319-day period. The right panel of Figure~\ref{field} presents the longitudinal magnetic field of Vega averaged within 10 phase bins. The statistical reliability of the magnetic field curve versus the phased `null' field, estimated by a Fisher test, is 91.5\%. Our long-term longitudinal magnetic field measurements thus do not confirm the rotation period proposed by Petit et al. \cite{petit10}. On the other hand, the different periods could be caused by the different sets of spectral lines used by us and by Petit et al. \cite{petit10}: the lines may be formed under different physical conditions at different latitudes. \bigskip {\it Acknowledgements.} The author thanks S. Plachinda for useful discussions, and D. Baklanova and S. Plachinda for their help with the observations.
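The phase folding and 10-bin averaging used for Figure~\ref{field} can be sketched generically as follows (an illustrative Python aside with made-up epochs; only the period 0.62255 d and phase origin $JD$ = 2450658.427 are taken from the text):

```python
import numpy as np

def fold(jd, period, jd0):
    """Phase-fold Julian dates on a trial period with phase origin jd0."""
    return ((jd - jd0) / period) % 1.0

jd = 2450658.427 + np.linspace(0.0, 100.0, 500)   # invented observation epochs
phase = fold(jd, 0.62255, jd0=2450658.427)

# average into 10 phase bins, as done for the curves in Fig. 3
bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(phase, bins) - 1
binned = np.array([phase[idx == k].mean() for k in range(10)])
```

The same `fold` call with period 0.7319 d and $JD$ = 2454101.5 reproduces the phasing used for the right panel.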
\section{Introduction} The discovery of superconductivity and correlated insulators in twisted bilayer graphene (TBG) \cite{PabloSC,PabloMott} close to the theoretically predicted magic angle $\theta_{\rm TBG} \approx 1.1^\circ$ \cite{Bistritzer2011, dosSantos2012, Mele2010, dosSantos2007} has stimulated a huge amount of experimental \cite{Dean-Young, Efetov, YoungScreening, EfetovScreening, CascadeShahal, CascadeYazdani, RozenEntropic2021, CaltechSTM, RutgersSTM, ColombiaSTM, PrincetonSTM} and theoretical work \cite{BalentsReview,AndreiMacdonaldReview,LecNotes,PoPRX, StiefelWhitney, Song, KangVafekPRL, KangVafekPRX, VafekRG,XieMacdonald, ShangNematic, Tarnopolsky,KIVCpaper,Khalaf2020, KhalafSoftMode,Parker,KwanIKS,wagner2021globalHF,TBGIII, TBGIV, TBGV, TBGVI,Potasz,Parker,VafekDMRG,QMC,PanQMC2021} on the system. In addition to the correlated insulators and superconductivity observed in the early works, more recent discoveries include the quantum anomalous Hall states with \cite{Zhang19,Bultinck2019,SharpeQAH, YoungQAH} and without \cite{EfetovQAH} an aligned hBN substrate, charge density waves \cite{PierceCDW}, as well as the prediction \cite{LedwithFCI,AbouelkomsanFCI,RepellinFCI,LauchliFCI} and subsequent observation \cite{XieFCI} of fractional Chern insulators (FCIs). Besides TBG, other graphene multilayers have been explored in search of similarly interesting phases. These include mono-bi graphene \cite{YankowitzMonoBi1, YankowitzMonoBi2, YoungMonoBi,Morell2013ElectronicPO}, twisted double bilayer graphene \cite{TDBG_Pablo,TDBGexp2019,IOP_TDBG,YankowitzTDBG,KimTDBG,he2021chiralitydependent,Lee2019,Zhang2018}, and ABC graphene aligned with hBN \cite{ABCMoire}. While all these systems have displayed interaction-induced phases such as correlated insulators, TBG has remained the only moir\'e system where robust superconductivity has been observed and reproduced by several groups. 
Note that these other systems, like hBN-aligned twisted bilayer graphene, do not possess $C_2 \mathcal{T}$ symmetry which relates the two degenerate Chern bands in a single valley of TBG. This suggests that $C_2 \mathcal{T}$ symmetry plays a significant role in stabilizing superconductivity, consistent with the predictions of a topological mechanism for superconductivity based on skyrmions \cite{Khalaf2020,chatterjee2020skyrmion,ChristosWZW} (alternative explanations for the importance of $C_2 \mathcal{T}$ symmetry were proposed in Refs.~\cite{phonondichotomy,MacdonaldFluctuations}). This raises the question of designing other graphene Moir\'e structures that retain $C_2 \mathcal{T}$ symmetry. Alternating twist multilayer graphene (ATMG) heterostructures proposed in Ref.~\cite{Khalaf2019}, where the twist angle alternates between $\pm \theta$ from interface to interface, fit the bill, since they have essentially the same symmetry as magic angle TBG. This makes them promising platforms to realize all the phases seen in TBG including superconductivity, if the symmetry ingredients have been correctly identified. Twisted trilayer graphene (TTG) is therefore a natural next step after twisted bilayer both experimentally \cite{Hao2021TTG,Park2021TTG,seaofmagic,caltechTrilayerSTM,TrilayerScreening} and theoretically \cite{ChristosTTG,LakeSenthil,MacdonaldInPlane,MacdonaldTri,TSTGI,TSTGII,guerci2021higherorder,Chou2021,assi2021floquet,li2021induced}. It is a structure that is symmetric under mirror symmetry $M_z$ and it decomposes into twisted bilayer graphene and ordinary graphene where the subsystems are in opposite mirror sectors \cite{Carr2020}. The TBG subsystem has tunneling matrix elements that are scaled by $\sqrt{2}$ which leads to a magic angle $\sqrt{2} \theta_{\rm TBG}\approx 1.56^o$. 
TTG was expected to host similar correlated physics to TBG due to this symmetry protected TBG subsystem, and the higher magic angle implies elevated energy scales and likely more structural uniformity. In addition, TTG has tunability advantages over TBG because external fields couple in different ways; due to the mirror symmetry $M_z$ out-of-plane electric and in-plane magnetic fields act purely as tunneling terms between the TBG and graphene subsystems. These predictions and advantages were borne out in recent experiments where TTG tuned close to the predicted new magic angle of $1.56^\circ$ was found to exhibit strong coupling superconductivity that is even stronger than TBG superconductivity and tunable via displacement field \cite{Hao2021TTG,Park2021TTG}. Furthermore, the critical in-plane magnetic field was found to exceed the Pauli limiting field \cite{PabloTrilayerInPlane}. Remarkably, the decomposition of TTG into TBG and graphene subsystems is actually a general property of ATMG for an arbitrary number of layers $n$ \cite{Khalaf2019}. For example, twisted quadrilayer graphene (TQG) has two TBG subsystems with magic angles $\varphi \theta_{\rm TBG}\approx 1.78^\circ $ and $\varphi^{-1} \theta_{\rm TBG} \approx 0.68^\circ$ where $\varphi =\frac{1+\sqrt{5}}{2}$ is the golden ratio \cite{LecNotes,Khalaf2019}. Twisted pentalayer graphene (TPG) has two twisted bilayer graphene subsystems with magic angles $\sqrt{3} \theta_{\rm TBG}$ and $\theta_{\rm TBG}$. In this paper we perform a comprehensive comparison between $n$-layer ATMG and $n=2$ TBG. The low energy physics of ATMG is largely controlled by its \emph{magic sector}, the TBG subsystem that has flat bands. Nonetheless, there is much to be learned by studying ATMG and contrasting it with TBG. Indeed, we will quantitatively catalogue many differences between the systems. 
One important difference is the way external fields couple to ATMG versus TBG; we describe this along with the non-interacting band structure and symmetries in section \ref{sec:BandStructure}. While for twisted bilayer graphene, displacement fields (electric field applied perpendicular to the layers) and magnetic fields, applied parallel to the layers, couple directly to the magic sector, for {\em odd} $n$ the external fields exclusively act as tunnelings between subsystems due to mirror symmetry $M_z$. In particular, the in-plane magnetic field does not, on its own, have a pair-breaking effect, as noticed in $n=3$ TTG \cite{MacdonaldInPlane,LakeSenthil}, because it is symmetric under the time reversal symmetry combined with in-plane reflection $M_z \mathcal{T}$. Indeed, experimentally the in-plane critical field of TTG is found to vastly exceed that of TBG and the Pauli limit \cite{PabloTrilayerInPlane}. Amazingly, we find that the same is very nearly true at the first magic angle for all $n>2$, {\em despite} the absence for even $n$ of a microscopic mirror symmetry that would force this term to vanish; instead, the intra-magic-subsystem coupling of the external fields at the largest magic angle is suppressed by a factor $\pi^2/(2n^3)$ for large $n$. In section \ref{sec:relax} we present a detailed computation of the lattice relaxation effects for $n=2,\,3,\,4,\,5$ ATMG. It is known that in twisted bilayer graphene the ratio between the same-sublattice and opposite-sublattice hopping, $\kappa = w_0/w_1$, plays a crucial role in the interacting physics. In particular, small $\kappa$ is found to give rise to favorable quantum band geometry for realizing fractional Chern insulators \cite{LedwithFCI,RepellinFCI,XieFCI}. While nominally this parameter is $1$ for a fully rigid lattice, both in-plane and out-of-plane lattice relaxation result in a lower effective $\kappa$. For $n \geq 4$ ATMG, we find that both $w_0$ and $w_1$ depend on the layers connected by tunneling. 
While the decoupling mapping may be extended to include non-constant $w_1$, leading to slightly different magic angles, the different ratios $w_0/w_1$ lead to small tunnelings between subsystems. In addition, we compute the effective $\kappa$ for the various TBG subsystems. We generally find smaller values of $\kappa$ compared to what has been reported in earlier works \cite{Carr2018relax, Koshino2018}; for example, we find $\kappa \approx 0.51$ for $n = 2$ TBG. While these $\kappa$ values are sensitive to details of the relaxation model we employ, the angle and layer dependence of $\kappa$ that we find should be universal, making our predictions regarding the relative strength of relaxation in different multilayer systems compared to TBG robust. We find that more layers tend to increase the lattice relaxation and reduce $\kappa$ compared to its value in TBG at the same angle. However, the increase in the value of the first magic angle with the number of layers ends up slightly outweighing this effect, leading to a slight overall increase in the value of $\kappa$ at the first magic angle with $n$. This does not apply to higher magic angles, where we can exploit the increased relaxation without necessarily going to larger angles. A notable case that we highlight is the {\em second} magic angle for $n=5$, which has the same magic angle $\theta_{\rm TBG}$ as the {\em first} magic angle of $n=2$ TBG, but experiences enhanced relaxation due to the increased number of layers. This leads to a low effective $\kappa \approx 0.45$. We further note that the TBG sub-block where this magic angle is realized is protected by mirror symmetry from mixing with other sub-blocks. We therefore highlight the second magic angle of $n=5$ as a promising platform for hosting FCIs and other correlated phases. We then turn to the low energy interacting physics of ATMG in section \ref{sec:screen}. 
When one of the TBG sectors is at its magic angle, the low energy band structure of ATMG consists of the flat magic angle TBG (MATBG) bands together with Dirac cones; the latter have a much smaller density of states, and the associated electrons move much faster than the MATBG electrons. Based on this perspective, we integrate out the non-magic ATMG electrons and argue that to a good approximation they only statically screen the Coulomb interaction. We then perturbatively include the effect of external fields in ATMG at integer filling of the magic sector in section \ref{sec:externalfields}. We build on the analytic strong coupling approach to integer fillings of MATBG \cite{LecNotes,KIVCpaper,Khalaf2020,VafekRG,KhalafSoftMode,TBGIII,TBGIV}, which has been numerically supported first by Hartree-Fock \cite{XieMacdonald, ShangNematic, KIVCpaper, GuineaHF} and then by less biased methods like DMRG \cite{Tomo, Parker, VafekDMRG}, exact diagonalization \cite{TBGVI, Potasz}, and QMC \cite{QMC,PanQMC2021}. While much of this paper is devoted to $n>2$ due to the extensive theoretical literature for $n=2$ TBG, we first focus on TBG because the strong orbital effect of the in-plane magnetic field on TBG insulators has not been previously understood. In particular, we find that an in-plane magnetic field can drive a phase transition from the Kramers' intervalley coherent state to either a valley Hall state or a ``magnetic semimetal.'' The latter state is similar to the nematic semimetal \cite{ShangNematic} except that it spontaneously breaks $C_2$ and $\mathcal{T}$ while preserving $C_2 \mathcal{T}$. Either of these transitions would impede the skyrmion mechanism of superconductivity \cite{Khalaf2020}. As discussed in section \ref{sec:BandStructure}, the external fields have little impact in the magic subsystem for $n>2$ and instead mostly act as tunneling terms between the different subsystems. 
We find that this tunneling indirectly affects the dispersion of the magic sector, depending on the angle. At small displacement fields, the dispersion increases (decreases) depending on whether the twist angle is smaller (larger) than the magic angle. Any theory of superconductivity that relies on dispersion - such as skyrmion superconductivity \cite{Khalaf2020} - will therefore see $T_c$ enhancement with displacement field for small angles and a reduction in $T_c$ for large angles at small displacement fields. This behavior is consistent with the observations in Refs. \cite{Hao2021TTG,Park2021TTG}, where the angles are slightly smaller than the magic angle and the superconductivity is enhanced by the displacement field. We also find that the displacement field tunneling can transfer symmetry breaking orders from the magic sector to non-magic subsystems. For $n=3$, this implies that the valley Hall state becomes fully gapped in the presence of a displacement field whereas the Kramers' intervalley coherent (KIVC) zero-field ground state \cite{KIVCpaper} remains gapless, since the intervalley mass cannot gap out the Dirac cone. This favors the valley Hall (VH) state over the KIVC state, which altogether implies a transition from a gapless state at small displacement field to a fully gapped state at large field, as previously noticed in Refs.~\onlinecite{ChristosTTG,TSTGII}, and may correspond to resistance peaks observed in experiments \cite{Park2021TTG,Hao2021TTG}. Note that at similar fields superconductivity disappears, which is compatible with the skyrmion mechanism of superconductivity where pairing is disfavored by the easy-axis anisotropy that picks out the VH state \cite{Khalaf2020,chatterjee2020skyrmion}. At the largest fields the system becomes fully symmetric and metallic. In contrast, for $n = 4$, the non-magic TBG subsystem can gap out from IVC order and thus the system can become gapped at charge neutrality as soon as $V \neq 0$. 
Finally, to complement our strong coupling theory, we develop a phenomenological weak coupling theory of superconductivity based on a general mean-field pairing whose origin we leave unspecified. We find quite generally that for $n=2$ MATBG the orbital coupling of the in-plane magnetic field is pair breaking, with a critical field that is of the same order as the Pauli limiting field. However, for $n>2$ ATMG the critical field is expected to be much larger, because the intra-subsystem magnetic field either vanishes (odd $n$) or couples very weakly to the magic sector ($n > 2$ at the first magic angle). \section{Non-Interacting Band Structure and Symmetries} \label{sec:BandStructure} We begin with the Bistritzer-Macdonald (BM) model \cite{Bistritzer2011} generalized to alternating twist multilayer graphene (ATMG) with $n$ layers \cite{Khalaf2019}. The Hamiltonian for a single spin and a single graphene valley is given by \begin{equation} H({\vec r}) = \left(\begin{array}{cccc} -i v {\boldsymbol \sigma}_{+} \cdot {\boldsymbol \nabla} & T({\vec r}) & 0 & \cdots \\ T^\dagger({\vec r}) & -i v {\boldsymbol \sigma}_{-} \cdot {\boldsymbol \nabla} & T^\dagger({\vec r}) & \cdots \\ 0 & T({\vec r}) & -i v {\boldsymbol \sigma}_{+} \cdot \nabla & \cdots \\ \cdots & \cdots & \cdots & \cdots \end{array} \right). \label{HATMG} \end{equation} Here ${\boldsymbol \sigma}_{\pm} = e^{\mp\frac{i}{4} \theta \sigma_z} (\sigma_x, \sigma_y ) e^{\pm\frac{i}{4} \theta \sigma_z}$ and \begin{equation} \begin{aligned} T({\vec r}) & = \begin{pmatrix} w_0 U_0({\vec r}) & w_1 U_1({\vec r}) \\ w_1 U^*_1(-{\vec r}) & w_0 U_0({\vec r}) \end{pmatrix}, \\ U_0({\vec r}) & = e^{-i {\vec q}_1 \cdot {\vec r} } + e^{-i {\vec q}_2 \cdot {\vec r} } + e^{-i {\vec q}_3 \cdot {\vec r} }, \\ U_1({\vec r}) & = e^{-i {\vec q}_1 \cdot {\vec r} } + e^{i \phi}e^{-i {\vec q}_2 \cdot {\vec r} } + e^{-i \phi}e^{-i {\vec q}_3 \cdot {\vec r} }, \label{tunnel} \end{aligned} \end{equation} with $\phi = 2\pi/3$. 
The vectors ${\vec q}_i$ are ${\vec q}_1 = k_\theta(0,-1)$ and ${\vec q}_{2,3} =k_\theta(\pm \sqrt{3}/2,1/2)$. The wavevector $k_\theta = 2k_D\sin \frac{\theta}{2}$ is the moir\'{e} version of the Dirac wavevector $k_D= 4\pi/3a_0$, where $a_0$ is the graphene lattice constant. Note that we have assumed that all layers are aligned; for bilayers this does not matter, since a shift may be removed by a gauge transformation. For $n >2$, the shifts are important and may alter the band structure \cite{Khalaf2019, MacdonaldTri}. For $n = 3$, it was argued that the zero shift configuration is energetically favorable \cite{Carr2020}, and it is reasonable to assume that this also applies for larger $n$. Thus, in the following, we will assume the layers are aligned. ATMG may be decoupled into $\lfloor \frac{n}{2} \rfloor$ copies of TBG, where each copy has a tunneling scaled by a different number $\lambda_k^{(n)}$: \begin{equation} H^{(n)}_k = \begin{pmatrix} -i v {\boldsymbol \sigma}_+ \cdot {\boldsymbol \nabla} & \lambda_k^{(n)} T({\vec r}) \\ \lambda_k^{(n)} T^\dag({\vec r}) & -iv {\boldsymbol \sigma}_- \cdot {\boldsymbol \nabla} \end{pmatrix}. \label{tbgsector} \end{equation} If $n$ is odd there is another decoupled graphene layer $H_G = -iv {\boldsymbol \sigma}_{\theta/2} \cdot {\boldsymbol \nabla}$. The mapping is reviewed in detail in the supplemental material, section \ref{SuppSec:mappingreview}. We will order the TBG subsystems such that $\lambda_k^{(n)}$ is decreasing with $k = 1,\ldots,\lfloor \frac{n}{2} \rfloor$. We refer to the angle $\lambda^{(n)}_k \theta_{\rm TBG}$ as the \emph{$k$'th magic angle} of $n$ layer ATMG; this is really the ``first" magic angle for the $k$'th TBG subsystem, but in this paper we do not consider the higher magic angles of a given TBG subsystem. \begin{figure*} \centering \includegraphics[width=\textwidth]{Multilayer3.pdf} \caption{Illustration of the decomposition of ATMG into TBG and graphene subsystems for $n = 3,\,4,\,5$.
Even $n$ ATMG has $n/2$ TBG sectors whereas odd $n$ ATMG has $(n-1)/2$ TBG sectors and one graphene sector. The terms generated by external fields $(V, \,{\bf B}_\parallel)$ can act both within subsystems and also as couplings between subsystems. For odd $n$, mirror symmetry $M_z$ implies that the intra-subsystem action of the external fields vanishes.} \label{fig:decomposition} \end{figure*} ATMG has an approximate particle-hole symmetry inherited from TBG and graphene. In particular, TBG has a well-known approximate particle-hole symmetry $\mathcal{P} = i \sigma_x \mu_y K$ \cite{BernevigTBGTopology, HejaziTopological}, where $\mu_{x,y,z}$ are the Pauli matrices in TBG layer space. In even-layer ATMG, particle-hole symmetry may be realized by applying it to each TBG subsystem \eqref{tbgsector}. For an odd number of layers, we additionally define $\mathcal{P} = \sigma_x \mathcal{K}$ in the graphene sector. It is an exact symmetry of \eqref{HATMG} if we drop the rotations on ${\boldsymbol \sigma}$, which is valid up to corrections proportional to $\theta^2 \ll 1$. There are other sources of particle-hole symmetry breaking, including momentum-dependent moir\'{e} hoppings \cite{CarrPH} and next-nearest-layer ATMG tunneling \cite{Carr2020}, but both of these terms are quite small. Note, however, that when we move to ${\vec k}$-space $\mathcal{P}$ does not have a simple action. This is because in each TBG sector the Bloch states have the form \begin{equation} \psi_{{\vec k}}({\vec r}) = u_{{\vec k}}({\vec r})\begin{pmatrix} e^{i({\vec k} - {\vec K})\cdot{\vec r}} \\ e^{i({\vec k} - {\vec K}')\cdot{\vec r}} \end{pmatrix}, \label{tbgBloch} \end{equation} where ${\vec K}$ and ${\vec K}'$ are the wave-vectors at the moir\'{e} $K_M$ and $K_M'$ points respectively. We choose the $\Gamma$ point to correspond to ${\vec k} = 0$, such that ${\vec K} = -{\vec q}_1$ and ${\vec K}' = {\vec q}_1.$ Then we obtain that $\mathcal{P}$ takes ${\vec k} \mapsto -{\vec k}$.
However the Bloch states for the graphene sector are proportional to $e^{i({\vec k} - {\vec K})\cdot {\vec r}}$, and so $\mathcal{P}$ inverts ${\vec k}$ about the $K_M$ point, not the $\Gamma$ point. An important symmetry for an odd number of layers is a mirror reflection about the middle layer: $M_{zll'} = \delta_{l,n+1-l'}$. The perturbations associated with an out-of-plane electric field and in-plane magnetic field anticommute with this operation, whereas the TBG and graphene subsystems each have a specific eigenvalue under $M_z$. These external fields therefore act as tunneling terms between subsystems that have opposite $M_z$ eigenvalues. In contrast, for an even number of layers these external fields generically modify the intra-subsystem Hamiltonians \begin{equation} H_k \to H_k - r^{(n)}_k \begin{pmatrix} \Gamma^{(n)}_+ & 0 \\ 0 & -\Gamma^{(n)}_- \end{pmatrix}, \end{equation} where \begin{equation} \Gamma_{\pm}^{(n)} = \frac{V}{n-1} + g_{\rm orb} \mu_B \hat{\boldsymbol z} \times {\bf B} \cdot {\boldsymbol \sigma}_{\pm}. \end{equation} Here, $V$ is the voltage induced between the top and bottom layer after accounting for screening by high energy electrons and ${\bf B}$ is the parallel magnetic field with $g_{\rm orb} = \frac{evd_\perp}{\mu_B} \approx 6.14$, where $d_\perp \approx 3.5$ \AA\ is the interlayer distance. The number $r^{(n)}_k$ is the external field coupling strength for the $k$'th TBG subsystem and is $1/2$ for $n=2$ TBG. Henceforth we will suppress the superscript $(n)$ in $\lambda_k$, $\Gamma_\pm$ and $r_k$, since it will be clear from the multilayer case being considered. For concreteness we illustrate the above points for $n=3, 4$ and $5$.
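Before turning to the examples, note that the rescalings $\lambda^{(n)}_k$ admit a simple closed form: as in the mapping of Ref.~\cite{Khalaf2019}, they are the positive eigenvalues of the $n$-site open hopping chain in layer space, $\lambda^{(n)}_k = 2\cos\frac{k\pi}{n+1}$. A quick numerical check (a Python sketch; numpy assumed) reproduces the values $\sqrt{2}$, $\varphi^{\pm 1}$, $\sqrt{3}$ and $1$ that appear in the examples below:

```python
import numpy as np

def lambdas(n):
    """Positive eigenvalues of the n-site open chain with unit hopping.

    These are the tunneling rescalings lambda_k for the TBG subsystems
    of n-layer ATMG; the zero eigenvalue (odd n) is the decoupled Dirac cone.
    """
    chain = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    ev = np.linalg.eigvalsh(chain)
    return np.sort(ev[ev > 1e-9])[::-1]  # descending: lambda_1 > lambda_2 > ...

# Closed form: lambda_k = 2 cos(k pi / (n + 1))
for n in (2, 3, 4, 5):
    lam = lambdas(n)
    closed = 2 * np.cos(np.pi * np.arange(1, len(lam) + 1) / (n + 1))
    assert np.allclose(lam, closed)

print(lambdas(3))  # [sqrt(2)]
print(lambdas(4))  # [golden ratio, 1/golden ratio]
print(lambdas(5))  # [sqrt(3), 1]
```

The zero eigenvalue for odd $n$ corresponds to the decoupled graphene Dirac cone.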
For trilayer graphene the Hamiltonian is \begin{equation} H = \begin{pmatrix} -i v {\boldsymbol \sigma}_{+} \cdot {\boldsymbol \nabla} & T({\vec r} ) & 0 \\ T^\dag ({\vec r} ) & -iv {\boldsymbol \sigma}_{-} \cdot {\boldsymbol \nabla} & T^\dag({\vec r} ) \\ 0 & T({\vec r} ) & -i v {\boldsymbol \sigma}_{+} \cdot {\boldsymbol \nabla} \end{pmatrix}, \label{ham} \end{equation} and $M_z$ acts by exchanging the first and third layers. An out-of-plane electric field and parallel magnetic field modify the Hamiltonian via \begin{equation} H \mapsto H + \begin{pmatrix} \Gamma_{+} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -\Gamma_{+} \end{pmatrix}. \label{efieldbfield} \end{equation} The Hamiltonian may be decoupled into a TBG sector, even under $M_z$, and a graphene sector, odd under $M_z$. The external fields act as tunneling terms between the subsystems: \begin{equation} H_{\rm dec} = \begin{pmatrix} -i v {\boldsymbol \sigma}_{+} \cdot {\boldsymbol \nabla} & \sqrt{2} T({\vec r}) & \Gamma_+ \\ \sqrt{2} T^\dag({\vec r}) & -iv {\boldsymbol \sigma}_{-} \cdot {\boldsymbol \nabla} & 0 \\ \Gamma_+ & 0 & -iv {\boldsymbol \sigma}_{+} \cdot {\boldsymbol \nabla} \end{pmatrix}. \end{equation} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{atmgbands.png} \end{center} \caption{ ATMG band structures with and without displacement field: Panels \textbf{a}-\textbf{d} have no displacement field and correspond to $n = 2,3,4,5$ respectively. Panels \textbf{e}-\textbf{h} have $V=50$ meV and also correspond to $n = 2,3,4,5$ respectively. We used the parameters $\kappa = 0.58$ and $\theta = \lambda_1 \cdot 1.05^\circ$ for $\lambda_1 = 1, \sqrt{2}, \varphi, \sqrt{3}$ for $n=2,3,4,5$ respectively. Note, $n=2$ which corresponds to ordinary MATBG shows little change with displacement field (\textbf{a} vs.\ \textbf{e}). In contrast, a gap opens at $n=3, 4, 5$ on applying moderate $V$ (\textbf{b, c, d} vs.\ \textbf{f, g, h}).
} \label{fig:bands} \end{figure*} For four layers we have \begin{widetext} \begin{equation} H^{\rm Full}_{\rm dec} = \left(\begin{array}{cccc} -i v_F {\boldsymbol \sigma}_+ \cdot \nabla - r_1 \Gamma_{+} & \varphi T({\vec r}) & \frac{2}{\sqrt{5}} \Gamma_{+} & 0 \\ \varphi T^\dagger({\vec r}) & -i v_F {\boldsymbol \sigma}_- \cdot \nabla + r_1 \Gamma_- & 0 & \frac{2}{\sqrt{5}} \Gamma_- \\ \frac{2}{\sqrt{5}} \Gamma_+ & 0 & -i v_F {\boldsymbol \sigma}_+ \cdot \nabla - r_2 \Gamma_+ & \frac{1}{\varphi} T({\vec r}) \\ 0 & \frac{2}{\sqrt{5}} \Gamma_- & \frac{1}{\varphi} T^\dagger({\vec r}) & -i v_F {\boldsymbol \sigma}_- \cdot \nabla + r_2 \Gamma_- \end{array} \right) \end{equation} \end{widetext} where $\varphi$ is the golden ratio $\frac{1 + \sqrt{5}}{2}$ and $r_{1,2} = \frac{2\sqrt{5} \mp 5}{10}$. The tetralayer system is the first system in which there is more than one TBG subsystem and therefore more than one magic angle (again, here we do not consider the higher magic angles of a given TBG subsystem). The first subsystem has a magic angle at $\varphi \theta_{\rm TBG}$ and the second at $\varphi^{-1} \theta_{\rm TBG}$. Like trilayer graphene, the external fields lead to a coupling between the two TBGs. However, they also introduce an effective displacement field and in-plane magnetic field in each of the two TBGs that are rescaled by $\frac{1}{3} r_{1,2}$ and $r_{1,2}$, respectively, relative to the corresponding fields applied for the four-layer system. Note that $r_1 \approx -0.05$ is ten times smaller in magnitude than in ordinary MATBG. As shown in the supplemental material, this smallness arises due to the very rapid decay of $r_1$ with the number of layers for even $n$, with the large-$n$ behavior given by $r_1 \approx - \frac{\pi^2}{2 n^3}$.
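The smallness of $r_1$ can be checked by direct evaluation of the expressions quoted above (a sketch; the comparison with the $n^{-3}$ asymptotic is only order-of-magnitude at $n=4$):

```python
import math

# n = 4 coupling factors r_{1,2} = (2*sqrt(5) -+ 5)/10 quoted in the text
r1 = (2 * math.sqrt(5) - 5) / 10
r2 = (2 * math.sqrt(5) + 5) / 10
print(r1, r2)                    # ~ -0.0528 and ~ 0.947

# |r1| vs the MATBG value 1/2 and vs the large-n estimate pi^2 / (2 n^3)
print(0.5 / abs(r1))             # ~ 9.5, i.e. roughly ten times smaller
print(math.pi**2 / (2 * 4**3))   # ~ 0.077, same order as |r1| at n = 4
```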
The final case we would like to explicitly consider is that of five layers, $n = 5$, whose Hamiltonian is mapped to \begin{widetext} \begin{equation} H^{\rm Full}_{\rm dec} = \left(\begin{array}{ccccc} -i v_F {\boldsymbol \sigma}_+ \cdot \nabla & \sqrt{3} T({\vec r}) & \frac{2}{\sqrt{3}} \Gamma_+ & 0 & 0\\ \sqrt{3} T^\dagger({\vec r}) & -i v_F {\boldsymbol \sigma}_- \cdot \nabla & 0 & \Gamma_- & 0 \\ \frac{2}{\sqrt{3}} \Gamma_+ & 0 & -i v_F {\boldsymbol \sigma}_+ \cdot \nabla & T({\vec r}) & \frac{4}{\sqrt{6}} \Gamma_+ \\ 0 & \Gamma_- & T^\dagger({\vec r}) & -i v_F {\boldsymbol \sigma}_- \cdot \nabla & 0 \\ 0 & 0 & \frac{4}{\sqrt{6}} \Gamma_+ & 0 & -i v_F {\boldsymbol \sigma}_+ \cdot \nabla \end{array} \right) \end{equation} \end{widetext} We see that the first magic angle is scaled by a factor of $\sqrt{3}$ compared to TBG, whereas the second magic angle is the {\em same} as the TBG magic angle. Note that here, similar to the trilayer case, the displacement and in-plane fields only introduce a coupling between the different TBG and graphene sectors without introducing an effective displacement field or in-plane orbital magnetic field in each sector, which is a consequence of mirror symmetry. \section{Lattice Relaxation Results} \label{sec:relax} \begin{figure*} \centering \includegraphics[width=0.75\textwidth]{relax.png} \caption{Lattice Relaxation: \textbf{a}, Effective $\kappa$ values for $n=2,3$ and both subsystems for $n=4,5$. Note that the second magic angle for $n=4$ is at $\theta < 1^\circ$ and is beyond the scope of the relaxation calculation. The second magic angle for $n=5$ is within reach and has the minimal value for $\kappa$. \textbf{b-d}, Band structures for the first $n=4$ magic angle and the first and second $n=5$ magic angles respectively. Mixing between the sectors comes from different values of $\kappa$ for the inner and outer interfaces.
Note that the second $n=5$ subsystem does not mix with the other sectors because it is the only subsystem with $M_z = -1$.} \label{fig:relax} \end{figure*} In TBG, atomic relaxation at the magic angle has an important and well-known effect on the interlayer tunneling \cite{Nam2017,Carr2019}. We include this effect for the $n$-layer alternating structures with a continuous plate elasticity model \cite{Carr2018relax,Nam2017}, with strain moduli and stacking fault energy parametrized from first-principles calculations of bilayer graphene (see SM). After updating the atomic positions and the inter-layer electronic hopping terms in a twisted supercell \cite{Fang2016,Carr2018pressure}, the effective $w_0$ and $w_1$ can be obtained from inner products of the tight-binding Hamiltonian with appropriate orbital-projected wavefunctions taken at the $K$ valleys of the monolayers \cite{Fang2019}. We note that the parameters $w_i$ are sensitive to the parametrization of the atomic potentials and electronic tight-binding models. See for example Fig. 14 of Ref. \onlinecite{Guinea2019}, where the $\kappa$ in TBG varies between $0.9$ and $0.4$ when considering different inter-atomic potentials and tight-binding parametrizations (because of e.g. electronic screening or corrugation reduction from hBN substrates). The parametrization of the atomic and electronic functions used here is directly derived from first-principles calculations~\cite{Fang2016,Carr2018pressure}, and the bands derived from these functionals show remarkably good agreement with full-scale density functional theory (DFT) calculations (Fig. 5 of Ref.~\onlinecite{Carr2020review}), but even DFT potentials in bilayer systems are known to be highly sensitive to the chosen density functional~\cite{Zhou2015}.
Therefore, the exact values of $\kappa$ presented here should be taken with a grain of salt, but the \textit{relative} changes in $\kappa$ should be universal, and one can take the $\kappa$ value of TBG from our model as a scalable reference point. In all the multilayer graphene geometries modeled here, the atoms tend to relax in-plane to maximize the amount of AB stacking, and relax out-of-plane to increase the interlayer distance at AA stacking. This causes a slight increase in the AB tunneling term $w_1$ accompanied by a significant reduction in the AA tunneling term $w_0$ with decreasing angle (see supplemental material for details). This effect becomes stronger as we increase the number of layers at a fixed angle. In addition, there is a distinction between tunneling at inner interfaces and outer interfaces once $n \geq 4$; the former experiences stronger relaxation effects. One of the implications of our relaxation results is that the mapping to decoupled TBGs + Dirac is altered due to the different values of $w_0$ and $w_1$ for the inner and outer interfaces. If we start by ignoring $w_0$ (taking the chiral limit) or, more generally, assuming the chiral ratio $\kappa$ is constant between the layers, the decoupling mapping can still be performed even though the value of $w_1$ differs between the inner and outer interfaces. This was already discussed in Ref.~\cite{Khalaf2019} and is expanded upon in the supplemental material. The main consequence of the different value of $w_1$ between inner and outer interfaces for $n = 4,5$ is a slight decrease in the value of the first magic angle and a slight increase in the value of the second magic angle, with the modified magic angle values given in Table \ref{tab:relax}. For $n = 4$, the second magic angle is given by $\theta \approx 1.05^\circ/\varphi \approx 0.64^\circ$ which lies outside the range of validity of our relaxation calculations.
In addition, for angles smaller than $1^\circ$, the effect of higher harmonics in the BM tunneling potential cannot be neglected \cite{MarginalTBG}. In contrast, the second magic angle for $n = 5$ is the same as the TBG first magic angle, $\theta \approx 1.05^\circ$, in the ideal limit and is slightly increased to $\theta \approx 1.14^\circ$ with relaxation. For this value, the effect of higher harmonics can be neglected and our relaxation calculation can be trusted. Introducing the different values of $\kappa = w_0/w_1$ between the inner and outer interfaces leads to a small coupling between the different sectors. When the number of layers is odd, such coupling only takes place within the same mirror sector since mirror symmetry is still preserved. The effect of such a term can be seen in Fig.~\ref{fig:relax} for the first magic angle for $n = 4$ and for the two magic angles for $n = 5$. We notice that the effect of the coupling is qualitatively similar to applying a small displacement field, which couples the different TBG sectors for $n = 4$ and couples the TBG sector in the $M_z = +1$ sector with the single Dirac cone for $n = 5$. The value of the coupling between the sectors is a few meV (see supplemental material for details). Using the mapping to TBGs + Dirac and the relaxation data, we can calculate the effective value of $\kappa$ for each of the two TBG sectors for $n = 4,5$ as shown in Fig.~\ref{fig:relax}a. We note the following. First, the decrease in $\kappa$ due to increased relaxation with increasing number of layers for $n=2,3,4,5$ is roughly compensated by the increase in the value of the first magic angle, leading to an overall slight increase in the value of $\kappa$ for the first TBG block at the first magic angle (see Table \ref{tab:relax}). Second, no such compensation occurs at the \emph{second} magic angle for $n = 5$, where the chiral ratio is consequently significantly reduced.
One remarkable consequence of the previous discussion is that the flat band at the second magic angle for $n = 5$ represents the closest realization of the chiral model at the first magic angle since (i) it lives in the $M_z = -1$ mirror sector and is thus completely decoupled from the rest of the system and (ii) it has a relatively small value of $\kappa \approx 0.45$. This makes this system a very promising candidate in the search for a fractional Chern insulator at zero magnetic field \cite{LedwithFCI, AbouelkomsanFCI, RepellinFCI, LauchliFCI}. \begin{table}[t] \centering \begin{tabular}{cccc} \hline \hline Number of layers $n$ & Magic angle & $\theta$ & $\kappa$ \\ \hline 2 & 1st & 1.09${}^\circ$ & 0.51 \\ 3 & 1st & 1.49${}^\circ$ & 0.585 \\ 4 & 1st & 1.68${}^\circ$ & 0.614 \\ 5 & 1st & 1.79${}^\circ$ & 0.627 \\ 5 & 2nd & 1.14${}^\circ$ & 0.450 \\ \hline \hline \end{tabular} \caption{Magic angles and effective $\kappa = w_0/w_1$ values for TBG subsystems of relaxed ATMG. Here we define the magic angle as the angle for which the bands are exactly flat in the chiral limit \cite{Tarnopolsky}.} \label{tab:relax} \end{table} \section{Interacting Multilayers without external fields: magic TBG and Dirac cones decouple modulo screening} \label{sec:screen} \subsection{Graphene electrons screen the Coulomb interaction for TBG electrons} In this section we will argue that ATMG behaves much like MATBG together with extra decoupled layers of the non-magic subsystems even at the interacting level. We will first integrate out the non-magic electrons and obtain an effective model for just the MATBG electrons. The usual strong coupling approach can then be applied to the magic sector.
The interacting Hamiltonian is \begin{equation} \begin{aligned} \mathcal{H} & = \sum_{{\vec k} \in \rm BZ} c^\dag_{\vec k} (h_{\vec k}-\mu) c_{\vec k} + \sum_{{\vec k} \in \rm BZ} f^\dag_{\vec k} (h_{f {\vec k}} - \mu) f_{\vec k} \\ & + \frac{1}{2A} \sum_{\vec q} V_{\vec q} :\rho_{{\vec q}} \rho_{-{\vec q}}: \end{aligned} \label{intham} \end{equation} Here $c_{{\vec k}}$ and $f_{\vec k}$ are vectors of annihilation operators for the MATBG bands and the non-magic bands, respectively. The total density has the form \begin{equation} \begin{aligned} \rho_{\vec q} & = \rho_{c {\vec q}} + \rho_{f {\vec q}}, \\ \rho_{c {\vec q}} & = \sum_{{\vec k} \in \rm BZ} c^\dag_{\vec k} \Lambda_{c{\vec q}}({\vec k})c_{{\vec k} + {\vec q}}, \\ \rho_{f {\vec q}} & = \sum_{{\vec k} \in \rm BZ} f^\dag_{\vec k} \Lambda_{f{\vec q}}({\vec k}) f_{{\vec k} + {\vec q}}, \\ \Lambda_{t{\vec q}}({\vec k})_{\alpha \beta} & = \braket{u_{t\alpha {\vec k}}}{u_{t\beta {\vec k}+{\vec q}}}, \end{aligned} \label{densitydecomp} \end{equation} where $t = c$ for the magic sector electrons and $t=f$ otherwise. We have written the full form factor of multilayer graphene in the block diagonal form $\tilde{\Lambda}_{{\vec q}}({\vec k}) = {\rm diag} (\Lambda_{c{\vec q}}({\vec k}), \Lambda_{f {\vec q}}({\vec k}) )$. The indices $\alpha,\beta$ vary over the various bands, spins and valleys. The form factor $\Lambda_f$ is additionally block diagonal in the non-magic subsystems. We note that the Hamiltonian \eqref{intham} has a separate internal symmetry for every TBG or graphene subsystem. For example, each TBG subsystem has a separate ${\rm U}(4)$ symmetry in the chiral limit. Most importantly, \eqref{intham} commutes with charge rotations of opposite sign in the magic and non-magic sectors. The origin of this symmetry is that the Coulomb interaction to a good approximation only sees the total density summed over all layers, which separates into MATBG and non-magic densities by symmetry.
This layer-summed density-density form of the interaction also implies that for an odd number of layers it does not matter that the extra graphene Dirac cone is technically at the moir\'{e} K point: only differences in BZ momenta matter for the density. As a result we may use $\mathcal{P}$ symmetry without needing to worry about the subtleties discussed in the previous section. There are small corrections to the above picture stemming from the fact that the interaction depends on the out-of-plane distance between the layers; such corrections are suppressed by the squared ratio of the interlayer separation to the moir\'{e} scale, $d_\perp^2/a_M^2 \sim \theta^2 \ll 1$. We now proceed to integrate out the non-magic electrons. Because the non-magic electrons only couple to the TBG density through the Coulomb interaction, they only renormalize the Coulomb interaction and the Hartree potential (for $\mu \neq 0$); see Fig.~\ref{fig:diagrams}. Additionally, we take the static screening limit, which is a very good approximation given that the non-magic electrons are much faster than the flat-band electrons. We therefore obtain the effective Hamiltonian for MATBG electrons \begin{equation} \begin{aligned} \mathcal{H}_{\rm eff} & = \sum_{{\vec k} \in \rm BZ} c^\dag_{\vec k} (\tilde{h}({\vec k}) - \mu) c_{\vec k} \\ & + \frac{1}{2A} \sum_{\vec q} V_{\rm eff}({\vec q}) : \rho_{{\vec q}} \rho_{-{\vec q}} : , \end{aligned} \label{manybandtbg} \end{equation} where $\tilde{h}$ includes the Hartree term from Fig.~\ref{fig:diagrams}. To obtain a calculable form of $V_{\rm eff}({\vec q})$ we use the RPA to compute the effective interaction and approximate the non-magic subsystems with the appropriate number of Dirac cones. We take $V({\vec q}) = \frac{e^2}{2\epsilon \epsilon_0 q} \tanh(qd_s)$, where $d_s$ is the screening length due to metallic gates.
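This gate-screened form interpolates between an unscreened $1/q$ Coulomb tail at short wavelengths and a finite constant at wavelengths beyond the gate distance; a minimal numerical sketch (in units with $e^2/2\epsilon\epsilon_0 = 1$) verifies both limits:

```python
import numpy as np

def V(q, d_s=1.0):
    """Gate-screened Coulomb interaction tanh(q d_s)/q, units e^2/(2 eps eps_0) = 1."""
    q = np.asarray(q, dtype=float)
    # tanh(q d_s)/q -> d_s as q -> 0; guard the division at q = 0
    return np.where(q > 0, np.tanh(q * d_s) / np.where(q > 0, q, 1.0), d_s)

# q d_s >> 1: unscreened Coulomb tail 1/q
assert abs(float(V(50.0)) - 1 / 50.0) < 1e-12
# q d_s << 1: screened to the finite constant d_s
assert abs(float(V(1e-6)) - 1.0) < 1e-6
```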
The effective interaction is then given by \begin{equation} \begin{aligned} V_{\rm eff} ({\vec q}) & = \frac{e^2}{2 \epsilon_{\rm eff}({\vec q}) \epsilon_0 q} \tanh(qd_s), \\ \epsilon_{\rm eff}({\vec q}) & = \epsilon + g \tanh(qd_s)\frac{e^2}{8 \epsilon_0 v} f(q/2k_F), \end{aligned} \label{RPA_interaction} \end{equation} where $f(x) = 1$ for $x \gtrsim 1$ and $f(x) = 4/(\pi x)$ for $x \lesssim 1$ \cite{Hwang2007}. The number of fermion flavors is nominally $g = 4(n-2)$; however, in practice we take $g=4n$ to also include screening from remote MATBG bands. For $qd_s \gg 1$ the effective dielectric constant is $\epsilon_{\rm eff} = \epsilon + \frac{g\pi \alpha}{8}$, where $\alpha \approx 2.2$ is the fine structure constant in graphene. For nonzero $\mu$ the Dirac electrons will fill a Fermi sea with momentum $k_F = \mu/v$. The main effect of this is to shorten the effective screening length: by calculating $V_{\rm eff}({\vec q} = 0)$ we get $d_{\rm eff} = d_s/(1+g\alpha k_F d_s/\epsilon)$. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{diagramfig.png} \end{center} \caption{Diagrams for integrating out non-magic fermions. The double line denotes the effective Coulomb interaction $V_{\rm eff}({\vec q})$, which is obtained by summing the geometric series of RPA diagrams with non-magic fermion bubbles. The solid line with an arrow is the effective TBG propagator, which differs from the bare propagator by a Hartree-like correction. } \label{fig:diagrams} \end{figure} \subsection{Insulating ground states for magic sector} We may now proceed as in Ref. \cite{KIVCpaper} to determine the MATBG ground states at integer fillings: we integrate out the remote MATBG bands to obtain an effective model just for the eight flat bands and use approximate symmetries to determine the quantum Hall ferromagnetic ground states.
The flat band projected Hamiltonian is \begin{equation} \begin{aligned} \mathcal{H}_{\rm eff} & = \sum_{{\vec k} \in \rm BZ} c^\dag_{\vec k} (h_c({\vec k}) - \mu - h_\mu({\vec k})) c_{\vec k} \\ & + \frac{1}{2A} \sum_{\vec q} V_{\rm eff}({\vec q}) \delta \rho_{c{\vec q}} \delta \rho_{c,-{\vec q}}, \end{aligned} \label{flatbandprojHam} \end{equation} where the density operators \begin{equation} \begin{aligned} \delta \rho_{c{\vec q}} & = \rho_{c{\vec q}} - \ov \rho_{\vec q} \\ \ov \rho_{\vec q} & = \frac{1}{2}\sum_{{\vec k},{\vec G}} \delta_{{\vec G},{\vec q}}{\rm tr} \Lambda_{{\vec G}}({\vec k}) \end{aligned} \end{equation} give the density measured with respect to charge neutrality, $\delta n_f$ is the average density of the non-magic electrons measured from charge neutrality, and $h_\mu({\vec k}) \propto \mu$ is a Hartree term generated from integrating out the non-magic electrons. It is expected to be small because the non-magic electrons do not fill much past charge neutrality. The Hartree term from filled magic sector electrons is contained in the interaction term, which now only contains magic sector densities measured from charge neutrality. This form \cite{KIVCpaper} of the flat band projected Hamiltonian is convenient because the density operators are exactly odd under particle-hole symmetry and the dispersion term is particle-hole symmetric if $\mu = 0$. We now discuss the Chern basis for these flat bands and the approximate ${\rm U}(4) \times {\rm U}(4)$ symmetry. The flat bands of twisted bilayer graphene may be viewed in a sublattice basis where the dispersion is off diagonal but the bands have a Chern number $C = \sigma_z \tau_z$ \cite{KIVCpaper,Khalaf2020}. Here we use the Chern and valley basis \begin{equation} \begin{aligned} \gamma_{x,y,z} & = (\sigma_x,\sigma_y\tau_z,\sigma_z\tau_z) \\ \eta_{x,y,z} & = (\sigma_x\tau_x, \sigma_x\tau_y,\tau_z).
\label{chernbasis} \end{aligned} \end{equation} In this basis, the dispersion $h_c$ is mostly proportional to $\gamma_{x,y}\eta_0$ whereas the Hartree dispersion is mostly diagonal in the flat band flavors \cite{KIVCpaper,Khalaf2020, KhalafSoftMode}. Including spin, we have four bands with $C = 1$ and four bands with $C=-1$. The ${\rm U}(4) \times {\rm U}(4)$ symmetry in the chiral dispersionless limit corresponds to independent rotations in each Chern sector. The ground state may be understood by starting in the chiral dispersionless limit and adding perturbations. The ground states are described by \cite{KIVCpaper,Khalaf2020,KhalafSoftMode} \begin{equation} Q_{\alpha \beta} = 2 \langle c^\dag_\beta c_\alpha \rangle - \delta_{\alpha \beta}, \end{equation} where $Q$ is an $8 \times 8$ matrix that is ${\vec k}$-independent with ${\rm tr} Q = 2\nu$ and $Q^2 = \mathbbm{1}$. The $\pm 1$ eigenvalues of $Q$ label whether a particular band is filled or empty. We also must have $[Q,\gamma_z] = 0$, such that there is a definite Chern number ${\rm tr} Q \gamma_z = 2C$. All such $Q$ are ground states in the chiral dispersionless limit at charge neutrality because such states are annihilated by $\delta \rho_{\vec q}$ for all ${\vec q}$. Away from charge neutrality we generate Hartree terms from the extra filled bands, since $\delta \rho_{\vec G}$ no longer annihilates the ground state. The Hartree terms do not split the ferromagnetic states but can instead favor metals or charge density waves \cite{PierceCDW}. We now discuss symmetry breaking terms. The dispersion $h_c$ may be regarded as a tunneling between Chern sectors and breaks ${\rm U}(4) \times {\rm U}(4)$ to its diagonal subgroup by favoring $Q \propto \gamma_z$. Moving away from the chiral limit results in incomplete sublattice polarization and less symmetric form factors; this perturbation favors states with $[Q, \gamma_x \eta_z] = 0$.
We may summarize these perturbations with the energy function \begin{equation} E(Q) = \frac{J}{4} {\rm tr} (Q \gamma_x)^2 - \frac{\lambda}{4} {\rm tr} (Q \gamma_x \eta_z)^2 \label{Qenergy_mt} \end{equation} for some parameters $J, \lambda > 0$ that may be computed in perturbation theory and in Hartree Fock \cite{KIVCpaper,Khalaf2020, KhalafSoftMode}. The ground state that minimizes both perturbations is the Kramers' intervalley coherent state (KIVC) with $Q \propto \eta_{x,y} \gamma_z$. A valley Hall (VH) state with $Q \propto \eta_z \gamma_z = \sigma_z$ minimizes the $J$ term and is penalized by the $\lambda$ term; it is degenerate with the KIVC state in the chiral limit and is favored if the hBN substrate is aligned. Other states include the valley polarized (VP) state, with $Q = \eta_z$, which is degenerate with the KIVC state if there is no dispersion, and the TIVC state, $Q = \eta_{x,y}$, which is disfavored by both $J$ and $\lambda$. \begin{table}[t] \centering \begin{tabular}{ccc} \hline \hline State & $Q_c$ & Broken Symmetry \\ \hline KIVC & $\eta_{x,y}\gamma_z$ & $U(1)_V, C_2, \mathcal{T}$ \\ TIVC & $\eta_{x,y}$ & $U(1)_V$ \\ VH & $\eta_z \gamma_z$ & $C_2, C_2 \mathcal{T}$ \\ VP & $\eta_z$ & $C_2, \mathcal{T}$ \\ NSM & $\gamma_{x,y}$ & $C_3$ \\ MSM & $\gamma_{x,y} \eta_z$ & $C_3, C_2, \mathcal{T}$ \\ BM-SM & $\gamma_{x,y}$ & None \\ \hline \hline \end{tabular} \caption{State names and order parameters. We drop the spin structure of the order parameters for brevity. The KIVC insulator breaks $\mathcal{T} = \eta_x K$ but preserves the Kramers' time reversal $\mathcal{T}' = \mathcal{T} \eta_z = \eta_y K$. The BM-SM and NSM are distinguished by $C_3$; the BM-SM $Q_c$ has vortices at $K_M$ and $K'_M$ whereas the NSM $Q_c$ has two vortices at $\Gamma$. } \label{tab:statenames} \end{table} In practice there are also low-energy nematic semimetal (NSM) states with $Q \propto \gamma_{x,y}$ \cite{ShangNematic}.
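The energetics of the insulating candidates can be checked directly from \eqref{Qenergy_mt} using the basis \eqref{chernbasis} (a numerical sketch; spin is omitted, and $J=1$, $\lambda=1/2$ are arbitrary illustrative values):

```python
import numpy as np

# Pauli matrices; sigma acts on sublattice, tau on valley (spin omitted)
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
S = lambda m: np.kron(m, s0)   # sublattice operator sigma
T = lambda m: np.kron(s0, m)   # valley operator tau

# Chern and valley basis of Eq. (chernbasis)
gx, gy, gz = S(sx), S(sy) @ T(sz), S(sz) @ T(sz)
ey, ez = S(sx) @ T(sy), T(sz)

def E(Q, J=1.0, lam=0.5):
    """Anisotropy energy of Eq. (Qenergy_mt): (J/4) tr(Q gx)^2 - (lam/4) tr(Q gx ez)^2."""
    t1 = np.trace((Q @ gx) @ (Q @ gx)).real
    t2 = np.trace((Q @ gx @ ez) @ (Q @ gx @ ez)).real
    return J / 4 * t1 - lam / 4 * t2

states = {"KIVC": ey @ gz, "VH": ez @ gz, "VP": ez, "TIVC": ey}
for name, Q in states.items():
    assert np.allclose(Q @ Q, np.eye(4))   # Q^2 = 1
    print(name, E(Q))
# analytic: KIVC = -J - lam, VH = -J + lam, VP = J - lam, TIVC = J + lam
```

In agreement with the discussion above, the KIVC gains energy from both terms while the TIVC is penalized by both.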
These semimetals can in principle have any valley structure, but we will highlight an ordinary time reversal symmetric NSM state with $Q \propto \gamma_{x,y} \eta_0$, favored by strain, and a magnetic semimetal (MSM) that breaks $C_2$ and $\mathcal{T}$ with $Q \propto \gamma_{x,y} \eta_z$; we will show that the latter is favored by an in-plane magnetic field. Topological considerations obstruct this $Q$ from being ${\vec k}$-independent, and so it is nominally not in the low energy manifold of quantum Hall ferromagnetic ground states \cite{ShangNematic, KhalafSoftMode}. However, if the Berry curvature is highly concentrated, as it is for large $\kappa$ at the $\Gamma$ point, the state can be nearly ${\vec k}$-independent with the exception of two vortices localized in a small vicinity of the $\Gamma$ point \cite{ShangNematic, KhalafSoftMode}. A useful limit to consider is one in which the Berry curvature is solenoidal and may be gauged away with a singular gauge transformation \cite{KhalafSoftMode}. With this transformation, the bands lose their topology, the Chern sector becomes another flavor, and the nematic semimetal is realized as a low-energy quantum Hall ferromagnet. The model also has an approximate ${\rm U}(8)$ symmetry with associated nematic soft modes \cite{KhalafSoftMode}. We review this perspective in the supplemental material in section \ref{Suppsubsec:intraTBGnis2}. If one works far enough away from the magic angle the strong coupling picture breaks down and a completely symmetric ``Bistritzer-Macdonald'' semimetal (BM-SM) becomes the ground state. \subsection{Non-magic electrons remain semimetallic and decoupled} We now ask what happens to the non-magic electrons. We will argue that they exist essentially independently of the MATBG electrons provided the MATBG electrons form a quantum Hall ferromagnetic ground state.
Such a result is in stark contrast to doped electrons in twisted bilayer graphene on top of the KIVC ground state, which interact with order parameter fluctuations and may condense in pairs to form a superconductor \cite{kozii2020}. The essential reason that there is little interaction is that the two subsystems are coupled only through the Coulomb interaction. A charge $\delta \rho_{c{\vec q}}$ from order parameter fluctuations must be present to interact with non-magic electrons. In a quantum Hall ferromagnetic ground state this charge is the skyrmion density, \begin{equation} \delta \rho_{\rm top} = \frac{i}{16\pi}\varepsilon_{ij} {\rm tr} Q \partial_i Q \partial_j Q, \label{topcharge} \end{equation} which contains two derivatives and two powers of a soft mode $\phi = Q-Q_0$. This interaction is expected to be very weak at low energy. Furthermore, truly gapless fluctuations within the KIVC manifold of states carry zero topological charge, as may be seen by noting that the above trace vanishes when $[Q,\eta_z] = 0$. At integer fillings we expect a Chern diagonal $Q$ matrix in the magic sector together with metallic or semimetallic non-magic electrons. For small doping away from integers, the non-magic subsystems will gain charge, and only after the chemical potential reaches some critical level will charge begin to populate the magic sector. However, as in TBG, small amounts of strain in ATMG may favor a metallic state of the magic sector instead at some integer fillings. \section{Effect of external fields on magical insulators and Dirac cones} \label{sec:externalfields} We now move on to adding external fields to ATMG and for now focus on their influence at integer fillings. We begin with $n = 2,3$ and then discuss $n=4$, which combines the methods used for $n=2,3$. We then make some comments on patterns for larger $n$. We will use two $Q$ matrices, $Q_c$ and $Q_f$, to label the filled bands of the magic and non-magic electrons respectively in the zero-field states that we perturb.
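The skyrmion density \eqref{topcharge} can be illustrated for a minimal two-component texture $Q({\vec r}) = \hat{n}({\vec r})\cdot{\boldsymbol \sigma}$, for which the trace reduces to the familiar Pontryagin density (a numerical sketch; the full flavor structure is suppressed and a unit-winding profile is assumed):

```python
import numpy as np

# Unit-winding skyrmion texture n(r) from inverse stereographic projection:
# n(0) = -z_hat, n(infinity) = +z_hat
L, N = 20.0, 600
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x, indexing="ij")
r2 = X**2 + Y**2
nx, ny, nz = 2 * X / (1 + r2), 2 * Y / (1 + r2), (r2 - 1) / (1 + r2)

# For Q = n.sigma, (i/16 pi) eps_ij tr(Q dQ dQ) = -(1/4 pi) n . (dx n x dy n)
dx = x[1] - x[0]
dnx, dny, dnz = (np.gradient(c, dx) for c in (nx, ny, nz))  # each = (d/dx, d/dy)
cx = dny[0] * dnz[1] - dnz[0] * dny[1]
cy = dnz[0] * dnx[1] - dnx[0] * dnz[1]
cz = dnx[0] * dny[1] - dny[0] * dnx[1]
rho_top = -(nx * cx + ny * cy + nz * cz) / (4 * np.pi)

Q_top = rho_top.sum() * dx * dx
print(Q_top)  # magnitude ~ 1: one unit of topological charge (sign is convention dependent)
```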
\subsection{$n=2$ MATBG} \label{subsec:extfields_tbg} For $n=2$ MATBG we have a single $Q$-matrix $Q=Q_c$ that describes the magic electron occupation. The external fields introduce terms in $E(Q)$ that favor some states over others. Here we describe these terms to second order in $V,{\bf B}$; a full calculation is given in the supplemental material in section \ref{Suppsubsec:intraTBGnis2}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{TBGcritB.png} \caption{\textbf{a}, Hartree Fock phase diagram of MATBG at charge neutrality in an in-plane magnetic field $B$. We used the parameters $\theta = 1.05^\circ$, $d_s = 20$ nm, the hBN screening $\epsilon = 6.6$, and the screened interaction \eqref{RPA_interaction}. The blue and red lines are analytic predictions for the critical fields for small $\kappa$ and large $\kappa$ respectively. The lower red line is for the first order KIVC\textrightarrow VH transition whereas the upper red line is for the second order VH\textrightarrow MSM transition. \textbf{b-d}, Fixed $\kappa$ phase diagram cuts. Here we plot the Hartree Fock band gap $\Delta$ and the order parameters $O_{\eta_z}$ and $O_{C_2\mathcal{T}}$, see Eq. \eqref{OrderParams}, which are nonzero in the KIVC and VH states respectively; the MSM does not break either symmetry. Note, in the KIVC phase the $C_2 \mathcal{T}$ order parameter is not shown since it is sensitive to the spontaneous choice of IVC phase. } \label{fig:tbg_phase} \end{figure} The electric field has little effect in TBG. The electric field projected to the flat bands induces a dispersion $h_V({\vec k})$ that is odd under ${\vec k} \mapsto -{\vec k}$ and is proportional to $\eta_z$. This dispersion has no effect at first order but at second order it leads to a term $\frac{\lambda_V}{4} {\rm tr} (Q \eta_z)^2$.
In practice this term is very small, $\lambda_V \lesssim 10^{-5}\,(\rm meV)^{-1}\frac{V^2}{4}$, because the flat bands have very little layer polarization and the dispersion $h_V({\vec k})$ is proportional to the layer polarization. The in-plane magnetic field has a much stronger effect. Projecting the magnetic field to the flat bands induces a dispersion $h_B({\vec k})$ that is proportional to $\eta_z$ and off diagonal in Chern sector. As a result, within perturbation theory, we obtain the term \begin{equation} \frac{\lambda_B}{4} {\rm tr} (Q \eta_z \gamma_x)^2. \end{equation} The sign of this term is opposite to that of the zero field $\lambda$ term that arises due to imperfect sublattice polarization. Thus, as ${\bf B}$ increases we obtain a transition to a valley Hall state once $\lambda_B>\lambda$. The coefficient $\lambda_B$ has larger magnitude than naive dimensional analysis suggests because $h_B$ has angular momentum under $C_3$ and can therefore couple to nematic soft modes. In particular, we obtain $\lambda_B \approx c_B \frac{g_{\rm orb}^2 \mu_B^2 B^2}{4}$ where $c_B$ increases with $\kappa$. In the range $\kappa = 0-0.8$ we have $c_B = 0.1-0.3\,(\rm meV)^{-1}$ which corresponds to an effective gap size much less than the interaction scale of $\approx 15$ meV. This perturbation theory may be used to find the critical field for the KIVC\textrightarrow VH transition at small $\kappa$; here the zero field gap is small and the Berry curvature is spread out so the nematic modes have a larger gap. At larger $\kappa$ however perturbation theory breaks down for fields well below the critical field. Instead, at larger $\kappa$ we may view the in-plane field as a Zeeman field for the MSM state with $Q \propto \gamma_{x,y} \eta_z$. This state anticommutes with $C_2$ and $\mathcal{T}$ but preserves $C_2 \mathcal{T}$.
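To make the competition between the zero-field $\lambda$ term and the field-induced $\lambda_B$ term concrete, one can evaluate ${\rm tr}(Q\eta_z\gamma_x)^2$ on representative $Q$ matrices. The minimal sketch below uses the $4\times 4$ valley-Chern reductions $Q_{\rm KIVC}=\gamma_z\eta_y$ and $Q_{\rm VH}=\gamma_z\eta_z$ as illustrative assumptions (spin and ${\vec k}$ structure suppressed, and no sign convention for $\lambda,\lambda_B$ is fixed here):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

# flavor matrices in gamma (Chern) tensor eta (valley) ordering
Q_KIVC = np.kron(sz, sy)   # anticommutes with eta_z (intervalley coherent)
Q_VH   = np.kron(sz, sz)   # commutes with eta_z (valley diagonal)
A      = np.kron(sx, sz)   # eta_z gamma_x from the anisotropy term

def aniso(Q):
    # tr (Q eta_z gamma_x)^2
    M = Q @ A
    return np.trace(M @ M).real

# opposite signs for the two states, so the relative sign of
# lambda vs lambda_B decides which state the anisotropy selects
print(aniso(Q_KIVC), aniso(Q_VH))   # 4.0 -4.0
```

The two states pick up anisotropy energies of opposite sign, which is why a field-induced sign reversal of the coefficient drives the KIVC\textrightarrow VH transition.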
At zero magnetic field, the MSM state is separated from the KIVC ground state by a gap of around $0.5$ meV per electron which shrinks as $\kappa$ increases. The energy of this state decreases linearly when the magnetic field is added. Because $\{Q_{\rm VH}, Q_{\rm MSM}\} = 0$, the VH state can cant into the MSM state to reduce its energy as well: \begin{equation} Q_{\rm VH}(B) = \cos \theta_B Q_{\rm VH} + \sin \theta_B Q_{\rm MSM}. \end{equation} The above perturbative superexchange corresponds to this canting at first order in $\theta_B$, while also allowing $\theta_B$ to be ${\vec k}$-dependent. Here instead we take $\theta_B$ to be ${\vec k}$-independent. We do so because at large $\kappa$ we may regard $Q_{\rm MSM}$ to be a generalized quantum Hall ferromagnet, in the sense that it is ${\vec k}$-independent after a singular gauge transformation at the $\Gamma$ point where the Berry curvature is concentrated \cite{KhalafSoftMode}. For a review of this perspective and a detailed calculation of the critical fields see the supplemental material section \ref{Suppsubsec:intraTBGnis2}. This calculation becomes unreliable at small $\kappa$ where the Berry curvature is more spread out; there we rely on the perturbative calculation described above. At the largest values $\kappa \gtrsim 0.7$ we obtain a direct first order transition KIVC\textrightarrow MSM. For $\kappa \lesssim 0.7$ we first have a first order transition KIVC\textrightarrow VH, then a second order transition VH\textrightarrow MSM. We have performed numerical Hartree Fock simulations for spinless TBG at charge neutrality for comparison with our analytic predictions. We identified the phase of the ground state using the order parameters \begin{equation} \langle O_S\rangle = \frac{A_M}{4A}\sum_{\vec k} {\rm tr} \left(\frac{1}{2}(Q_{\vec k}-S^\dag Q_{\vec k} S)\right)^2 \label{OrderParams} \end{equation} for $S = \eta_z, C_2 \mathcal{T}$.
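As a sanity check of the structure of \eqref{OrderParams} and of the canting ansatz, consider a single-${\vec k}$, four-flavor toy version; the representative $Q$ matrices below are illustrative assumptions, not the full spin-valley-Chern projectors:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def O_S(Q, S):
    # single-k version of Eq. (OrderParams): (1/4) tr((Q - S^dag Q S)/2)^2
    M = 0.5 * (Q - S.conj().T @ Q @ S)
    return (np.trace(M @ M) / 4).real

eta_z = np.kron(s0, sz)       # valley operator, gamma (x) eta ordering
Q_ivc = np.kron(s0, sx)       # {Q, eta_z} = 0: intervalley coherent
Q_vd  = np.kron(s0, sz)       # [Q, eta_z] = 0: valley diagonal

print(O_S(Q_ivc, eta_z), O_S(Q_vd, eta_z))   # 1.0 0.0

# canting: since {Q_VH, Q_MSM} = 0, the canted state remains a valid
# correlation matrix (Q^2 = 1) for any angle theta_B
Q_VH, Q_MSM = np.kron(sz, sz), np.kron(sx, sz)
th = 0.3
Q_cant = np.cos(th) * Q_VH + np.sin(th) * Q_MSM
assert np.allclose(Q_cant @ Q_cant, np.eye(4))
```

The order parameter saturates at $1$ for a maximally anticommuting $Q$ and vanishes for a commuting one, and any pair of anticommuting insulators can be canted into each other while remaining a physical state.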
The order parameters are normalized such that they have a maximal value of $1$ when $\{Q,S\} = 0$. The KIVC state has $\langle O_{\eta_z}\rangle > 0$ whereas the VH state has $\langle O_{\eta_z}\rangle = 0$ but $\langle O_{C_2\mathcal{T}} \rangle > 0$. The MSM state on the other hand has $O_{\eta_z} = O_{C_2 \mathcal{T}} = 0$. The Hartree Fock phase diagram is shown in Fig. \ref{fig:tbg_phase}a. The small and large $\kappa$ analytic predictions for the phase boundaries are also plotted, corresponding to perturbative superexchange and non-perturbative canting respectively. We also plot the Hartree Fock band gap and the order parameters we used for phase identification along fixed $\kappa$ cuts in Fig. \ref{fig:tbg_phase}b-d. In particular, the canting of the VH order into the MSM is visible in Fig.~\ref{fig:tbg_phase}c. The transition to the MSM is accompanied by a gap closing that may correspond to the gap closing of the $\nu = 2$ correlated insulator in Ref. \cite{Dean-Young}. Note also that a transition to the VH or MSM state would destroy skyrmion superconductivity; the former corresponds to an easy-axis anisotropy that strongly disfavors skyrmions and the latter is outside the skyrmion manifold of insulators. \subsection{$n=3$ sandwiches: Mirror Superexchange and Graphene mass generation} \label{subsec:extfields_ttg} \begin{figure*}[h!tb] \centering \includegraphics[width=\textwidth]{trilayerHF.png} \caption{ Analytic perturbation theory and Hartree Fock numerics for twisted trilayer graphene at charge neutrality. \textbf{a}, Chern superexchange parameter $J$ at zero displacement field and finite field contribution $\Delta J(\theta) = J(V = 50 \text{meV}) - J(0)$ as a function of angle. When $\theta$ passes through the magic angle the sign of the dispersion $h_c$ flips leading to a sign change in $\Delta J(\theta)$ as a function of angle.
\textbf{b, c}, Energies of the valley Hall (VH), semimetal (SM), and valley polarized (VP) states per unit cell relative to the Kramers' intervalley coherent (KIVC) ground state as a function of displacement field for $\theta = 1.47^\circ, 1.61^\circ$. For both angles there is a transition from $\rm{KIVC} \to \rm{VH}$ at $V \approx 100$ meV. At the smaller angle the gap to the VP state increases monotonically because the displacement field generated dispersion has the same sign as $h_c$ and $E_{\text{VP}} - E_{\text{KIVC}} = 2J$. At the larger angle the VP gap slightly decreases for small $V$ before increasing at larger $V$. For the smaller angle there is also a transition from the nematic SM to the symmetric Bistritzer-Macdonald SM (BM-SM) which decreases in energy and eventually becomes the ground state. For the larger angle there is no transition to the BM-SM for the displacement fields shown. \textbf{d}, Schematic displacement field dependence of the ground state. $V_1 \approx V_1(\kappa)$ is the transition to the VH state. It increases with increasing $\kappa$ because the zero field gap to the VH state does. The field $V_2 \approx V_2(\theta)$ is the transition to the BM-SM. It increases with increasing $\theta$. \textbf{e}, Mass and valley doping of the graphene Dirac cones in the VH and VP states respectively compared with the perturbative superexchange value \eqref{maintext_superex}, \eqref{mse}. \textbf{f, g, h}, Hartree Fock band structure for the KIVC, VH, and VP states at $V = 40$ meV. The bands are plotted along a line of constant $k_x$ that passes through all high symmetry points. } \label{fig:trilayerHF} \end{figure*} In this section we analytically discuss effects of $M_z$ breaking external field perturbations in trilayer graphene \eqref{efieldbfield} and then compare to Hartree Fock numerics. 
While nonzero $V,{\bf B}$ strongly reconstructs the noninteracting band structure near $K_M$ and $K_M'$ in the respective graphene valleys, the effect on an interacting TBG ground state at integer filling is expected to be smaller since the TBG sector will have a gap to single particle excitations at least at ${\vec k} = {\vec K}$. Due to the presence of this gap\footnote{Note that a gap in the graphene sector is \emph{not} required: the external field terms always change the charge at ${\vec K}$ in the TBG system and the TBG system is assumed to have a charge gap there}, we may perform a well-defined perturbation theory that gives a ``mirror-superexchange'' energy at second order in $V,{\bf B}$ and, depending on the ground state in the TBG sector, modifies the graphene dispersion. We start with an unperturbed low energy Hamiltonian that consists of the flat band projected TBG sector \eqref{flatbandprojHam} together with the low energy part of the graphene Dirac cone. The external fields contribute the following terms to the effective low energy Hamiltonian \begin{equation} \begin{aligned} \mathcal{H}_\xi & = \sum_{{\bf p}} c^\dag_{\vec k} \xi_{\bf p} f_{\bf p} + f^\dag_{\bf p} \xi^\dag_{\bf p} c_{\vec k} , \\ (\xi_{{\bf p} \eta})_{\gamma \gamma'} & = \left(\frac{V}{2}\delta_{\gamma \gamma'} + \eta g_{\rm orb} \mu_B {\bf B} \times {\boldsymbol \gamma}_{\gamma \gamma'}\right) t_{{\bf p} \eta \gamma}, \\ t_{{\bf p} \eta \gamma} & = \int_{\rm Cell} d^2 {\vec r}\, u^*_{c{\vec k} \eta \gamma +}({\vec r}) e^{i {\vec G} \cdot {\vec r}}. \label{eBfieldproj} \end{aligned} \end{equation} where ${\bf p} = {\vec k} + {\vec G}$ with ${\vec k}$ in the first BZ and $u_{c {\vec k} \eta \gamma \mu}({\vec r})$ is the MATBG moir\'{e} periodic wavefunction in graphene valley $\eta$ and Chern sector $\gamma$ evaluated in TBG ``layer'' $\mu$ and position ${\vec r}$. We have chosen to use the continuous translation symmetry of the graphene subsystem to label states with ${\bf p}$.
These terms are the low energy projection of \eqref{efieldbfield}; they take this form because \eqref{efieldbfield} anticommutes with $M_z$. We now discuss the effect of \eqref{eBfieldproj} in perturbation theory. The tunneling between mirror sectors has zero effect at first order since the expectation value of the term vanishes by $M_z$ symmetry. However, within second order perturbation theory \eqref{eBfieldproj} is expected to have an effect through a ``superexchange'': electrons gain energy by virtually tunneling between mirror sectors. We compute this energy in the supplemental material in section \ref{Suppsubsec:trilayersuperexchange}. It is simplest to understand the result at charge neutrality; in the supplement we show that, qualitatively, the results are general. To a good approximation we obtain \begin{equation} \begin{aligned} E(Q_c, Q_f) & = \frac{1}{2}\sum_{{\bf p} \eta} \frac{1}{v\abs{{\bf p}_\eta} + U_{\vec k}} {\rm tr}(Q_c \xi_{{\bf p} \eta} Q_{f \eta {\bf p}} \xi^\dag_{{\bf p} \eta}). \end{aligned} \label{maintext_superex} \end{equation} The matrix $Q_{f\alpha \beta} = 2 \langle f^\dag_\beta f_\alpha \rangle - 1$ describes the occupation of the graphene Dirac cones and $Q_{f \eta}$ is its projection into valley $\eta$. As one would expect, the energy scales with the square of the external fields and is suppressed by a combination of the Dirac dispersion and the TBG-sector exchange energy $U_{{\vec k} \approx {\vec K}_{\pm}} \approx 18$ meV. The trace measures to what extent the superexchange is Pauli blocked in valley $\eta$ at momentum ${\vec k}$. We find that the energy vanishes for the ``unperturbed'' states $Q_c \sim \gamma_{0,z}$ and $Q_f \sim \gamma_{x,y}$; the $V^2$ and $B^2$ terms vanish by the trace over Chern sector and the $VB$ terms vanish by $C_3$. The energy also vanishes if the TBG sector is in an NSM state; in this case the $V^2$ and $B^2$ terms vanish by $C_3$ and the $VB$ terms vanish by the trace over Chern sector.
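The Chern-trace argument can be made explicit in a stripped-down sketch, keeping only the $2\times 2$ Chern-sector blocks at a single momentum with arbitrary field strengths (an illustrative reduction; the form factors $t$ and the momentum sum are dropped, so only the $V^2$ and $B^2$ cancellations are visible here):

```python
import numpy as np

# 2x2 Chern-sector (gamma) blocks; valley and momentum structure suppressed
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
s0 = np.eye(2, dtype=complex)

V, bx, by = 1.0, 0.7, 0.3            # arbitrary field strengths
xi_V = (V / 2) * s0                  # displacement-field piece of xi
xi_B = bx * sy - by * sx             # (B x gamma) piece of xi

Qc, Qf = sz, sx                      # "unperturbed": Q_c ~ gamma_z, Q_f ~ gamma_x

def E(xi):
    # single-momentum analogue of tr(Q_c xi Q_f xi^dag)
    return np.trace(Qc @ xi @ Qf @ xi.conj().T)

# the V^2 and B^2 contributions vanish by the trace over the Chern sector;
# the V*B cross terms are killed only by the C_3 momentum sum (not shown)
print(abs(E(xi_V)), abs(E(xi_B)))    # both 0
```

Both single-field contributions vanish identically, matching the Chern-trace argument in the text.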
To take advantage of this term, $Q_c$ must develop a $C_3$-symmetric $\gamma_{x,y}$ component or $Q_f$ must develop a Chern diagonal or nematic component depending on $Q_c$. Both options occur simultaneously, and they may be treated separately at the next order of perturbation theory, which we now describe. We will focus on the displacement field; the magnetic field may be understood analogously but has very little effect at this order in perturbation theory due to its small scale. Let us first fix $Q_{f \eta}({\bf p})$ to be unperturbed. Then we may understand \eqref{maintext_superex} as a dispersion term $\frac{1}{2} {\rm tr} Q_c h_V$ with \begin{equation} (h_{V}({\vec k}))_{\gamma \eta \gamma' \eta'} = \frac{V^2}{4}\sum_{{\bf p} \eta} \frac{\delta_{[{\bf p}],{\vec k}} \delta_{\eta \eta'}t_{\gamma \eta {\bf p}} t_{\gamma' \eta {\bf p}}^*}{v \abs{{\bf p}_\eta} + U_{\vec k} } \widehat{{\bf p}_\eta} \cdot {\boldsymbol \gamma}_{\gamma \gamma'}. \end{equation} The valley-even part $h_{V+} \propto \eta_0$, where $h_{V\pm} = \frac{1}{2}\left( h_V \pm \eta_x h_V \eta_x \right)$, combines with $h_c$ to generate the inter-Chern superexchange quantified by the parameter $J = J(V) \propto (h_c + h_{V +} )^2$. More precisely we obtain \begin{equation} \begin{aligned} J(V) & = 2\frac{A_M}{A} \sum_{{\vec k} {\vec k}'} (z_{c_{\vec k}} + z_{V{\vec k}})^* (R^{-1})_{{\vec k} {\vec k}'} (z_{c{\vec k}'}+ z_{V{\vec k}'}) \\ & = J(0) + 4\frac{A_M}{A}\sum_{{\vec k} {\vec k}'} \Re z_{c{\vec k}}^* (R^{-1})_{{\vec k} {\vec k}'} z_{V{\vec k}'} + O(V^4) \\ R_{{\vec k} {\vec k}'} & = \frac{1}{A} \sum_{\vec q} V({\vec q}) \left( \abs{\Lambda_{c {\vec q}}({\vec k})}^2 \delta_{{\vec k} {\vec k}'} - \delta_{[{\vec k}+{\vec q}],{\vec k}'} \Lambda^\dag_{\vec q}({\vec k})^2 \right), \end{aligned} \end{equation} where we write $h_{c,V} = h_{c,V x} \gamma_x + h_{c,V y} \gamma_y$ and $z_{c,V} = h_{c,V x} + i h_{c,V y}$.
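The expansion of $J(V)$ above is the quadratic-form identity $(z_c+z_V)^\dagger R^{-1}(z_c+z_V)=z_c^\dagger R^{-1}z_c+2\Re\,z_c^\dagger R^{-1}z_V+z_V^\dagger R^{-1}z_V$ applied with $z_V=O(V^2)$, so the last term is the dropped $O(V^4)$ piece. A quick numeric check, with random stand-ins for the kernel and dispersions (an assumption; the real $R_{{\vec k}{\vec k}'}$ is built from interaction form factors):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# random Hermitian positive-definite stand-in for the kernel R_{kk'}
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
R = A @ A.conj().T + n * np.eye(n)
Rinv = np.linalg.inv(R)

z_c = rng.normal(size=n) + 1j * rng.normal(size=n)   # zero-field dispersion
z_V = rng.normal(size=n) + 1j * rng.normal(size=n)   # O(V^2) correction

def quad(z):
    # real since R^{-1} is Hermitian positive definite
    return (z.conj() @ Rinv @ z).real

lhs = quad(z_c + z_V)
rhs = quad(z_c) + 2 * np.real(z_c.conj() @ Rinv @ z_V) + quad(z_V)
assert np.allclose(lhs, rhs)
```

The cross term is the order-$V^2$ shift of $J$ plotted in Fig. \ref{fig:trilayerHF}a, and its sign tracks the relative sign of $z_c$ and $z_V$.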
The zero field $J(0)$ as well as the order $V^2$ contribution to $J$ are plotted in Fig. \ref{fig:trilayerHF}a as a function of twist angle (for simplicity we neglect the $O(V^4)$ term in this discussion and in Fig. \ref{fig:trilayerHF}a). As $\theta$ passes through the magic angle, the dispersion $h_c$ essentially flips sign. Accordingly, we see that $J(0)$ has a minimum at the magic angle where $h_c$ has the smallest magnitude. The order $V^2$ contribution to $J$ is negative for large angles. This is because for large angles the dispersion $h_c$ has the same sign as the Dirac dispersion and $h_V$ has the opposite sign as the Dirac dispersion because it is proportional to $Q_f$. When $\theta$ drops below the magic angle, $h_c$ flips sign and thus $J(V)$ increases with $V$ for these angles. The specific angle for which the $V^2$ contribution to $J$ switches sign is parameter-dependent and also dependent on the Hartree Fock subtraction scheme but is approximately $1.57-1.58^\circ$ with our conventions. To see an appreciable decrease in $J$, one needs larger angles $\theta \gtrsim 1.63^\circ$ because otherwise the positive $O(V^4)$ term will quickly take over. The valley odd $h_{V-}$ generates the term $\frac{\lambda_V}{4} {\rm tr}(Q_c \eta_z \gamma_x)^2$ with $\lambda_V \sim V^4$. This is somewhat similar to the magnetic field dispersion in TBG and acts against the zero field $\lambda$, though because this term is $C_3$-symmetric it does not couple to nematic soft modes or act as a Zeeman field that favors the nematic semimetal. It favors the VH state over the KIVC state. Next we fix $Q_c$ to be unperturbed and Chern-diagonal. We then consider \eqref{maintext_superex} to be an effective dispersion for the graphene electrons. This only works if $Q_c$ is valley diagonal; the graphene electrons are low energy at opposite ${\vec K}_\eta$-points in each valley.
For valley-diagonal states at the ${\vec K}_\eta$ points the graphene electrons obtain \begin{equation} m_{\rm SE} = \frac{\abs{t_{K ++}}^2}{U_K} \left( \frac{V^2}{4}Q_c + g_{\rm orb}^2 \mu_B^2 B^2 \gamma_x Q_c \gamma_x \right). \label{mse} \end{equation} as an additional dispersion term. For $Q_c \propto \gamma_z$ this acts as a mass term gapping the Dirac cones. If the TBG bands are in a Chern insulator state then the Dirac electrons will develop a Chern-number-generating mass whereas if the TBG bands are in a VH state the Dirac electrons will pick up a VH-generating mass. For the VP state, $Q_c = \gamma_0 \eta_z$, we instead obtain a valley doping. In our Hartree Fock simulations we see these terms in the Hartree Fock band structures and their magnitude is consistent with our analytic perturbation theory, see Fig.~\ref{fig:trilayerHF}d-g. The mass terms, together with an energy optimizing $Q_f$, lower the energy of states with $Q_c \propto \eta_{0,z}\gamma_z$ by an amount $\propto V^4$; see the supplemental information section \ref{Suppsubsec:trilayersuperexchange} for a detailed calculation. This combines with the valley odd dispersion to favor VH over KIVC by an amount $\propto V^4$. We have performed Hartree Fock simulations on charge neutral spinless trilayers to test our analytic predictions. We use the Hamiltonian \eqref{intham} but with the interaction \eqref{RPA_interaction} to account for the screening effects that Hartree Fock does not include. We use $\epsilon = 6.6$ for hBN screening, $\kappa = 0.58$, and $d_s = 20$ nm. The Hartree Fock state energies are plotted in Fig. \ref{fig:trilayerHF}b-c and show good qualitative agreement with the analytic results described above.
In particular, we see that for small angles, panel b, $E_{\text{VP}} - E_{\rm KIVC} = 2J$ increases with displacement field and for sufficiently large displacement field the flat band dispersion $h_c + h_V$ becomes large enough to favor a $C_3$ symmetric ``BM'' semimetal over the nematic semimetal; the ``BM'' semimetal is essentially the non-interacting ground state which is favored by dispersion rather than interactions. The BM semimetal decreases in energy with displacement field relative to the IVC state and shows no signs of $C_3$ breaking, whereas the energy of the nematic semimetal is nearly independent of displacement field and its band structure strongly breaks $C_3$. For larger angles, panel c, $J$ first decreases then increases after the order $V^4$ terms take over. There is no transition from the nematic semimetal to the BM semimetal. For both large and small angles, the VH state eventually becomes the ground state at large displacement fields due to $O(V^4)$ terms such as the one generated by $h_{V-}$. This also lowers the VH energy relative to KIVC as seen in the Hartree Fock state energies Fig.~\ref{fig:trilayerHF}b,c. The valley polarized doping also lowers the energy of the valley polarized state but only by an amount $\propto V^6$. We note that the parameter $J$ is responsible for both the pairing and effective mass of skyrmions; the latter calculation leads to $T_c \propto J$ for the skyrmion superconductor \cite{Khalaf2020}. We therefore conclude that for smaller angles superconductivity should be enhanced at small displacement fields before eventually vanishing due to a transition out of the strong coupling regime or to a valley Hall parent state.
This matches the behavior in Refs.~\onlinecite{Park2021TTG,Hao2021TTG} where superconductivity is first enhanced and then destroyed by the displacement field; note also that displacement-field-induced resistance peaks are consistent with the appearance of a gapped VH ground state as discussed in Ref.~\onlinecite{ChristosTTG}. \subsection{$n=4$ and beyond} \label{subsec:extfields_tqgandbeyond} For $n=4$ ATMG the external fields contribute both as intra-TBG external fields as well as tunneling terms between the two TBG subsystems. Each effect may be handled as in $n=2,3$ respectively. We label the occupation of the magic TBG subsystem with $Q_c$ and use $Q_f$ for the non-magic subsystem. We first discuss the intra-TBG effect: it leads to the same term $\frac{\lambda_B}{4} {\rm tr} (Q_c \eta_z \gamma_x)^2$ but with a different coefficient $\lambda_B \approx c_B r_1^2 g_{\text{orb}}^2 \mu_B^2 B^2/4$. The coefficient $r_1 \approx 0.05$ strongly suppresses this effect and multiplies the critical field for valley Hall order by around ten. The inter-TBG superexchange may be computed in a very similar way to the trilayer graphene superexchange; the main difference is that there are Dirac points at \emph{both} ${\vec K}_\zeta$ points in each valley in non-magic TBG and so the final result is more symmetric. For four layers we have \begin{equation} \begin{aligned} E(Q_c, Q_f) & = \frac{1}{2}\sum_{{\vec k} \eta} \frac{1}{\abs{h_f({\vec k})} + U_{\vec k}} {\rm tr}(Q_c \xi_{{\vec k} \eta} Q_{f \eta {\vec k}} \xi^\dag_{{\vec k} \eta}), \\ (\xi_{{\vec k} \eta})_{\gamma \gamma'} & = \frac{2}{\sqrt{5}} \left(\frac{V}{3}\delta_{\gamma \gamma'} + \eta g_{\rm orb} \mu_B {\bf B} \times {\boldsymbol \gamma}_{\gamma \gamma'}\right)t_{{\vec k} \eta \gamma}, \\ t_{{\vec k} \eta \gamma} & = \bra{u_{c{\vec k} \eta \gamma}}\ket{u_{f {\vec k} \eta \gamma}} . \end{aligned} \label{maintext_superex_4lay} \end{equation} Note that here the non-magic subsystem develops a gap when $Q_c$ describes an IVC state as well.
If the TBG subsystem develops a gap at charge neutrality, as some two layer devices do \cite{Efetov, EfetovScreening}, then we expect that the displacement field will quickly induce a gap in the non-magic sector as well and the entire four layer device would be gapped at charge neutrality for $V \neq 0$. The displacement field does not favor valley diagonal states for four layers because all correlated states can transfer their order to the non-magic TBG. Indeed, the result \eqref{maintext_superex_4lay} is invariant under rotations of the valley pseudospin $\eta$. Hence it cannot favor valley-diagonal states over IVC states. Note, however, that the displacement field couples to the non-magic subsystem directly with an unsuppressed coefficient. Its projection is proportional to $\eta_z$ and is odd under ${\vec k} \to -{\vec k}$. In practice the mass generation wins out at large enough displacement fields, but the IVC masses are slightly favored because they anticommute with this dispersion and can therefore always open up a gap. We do not expect any transition to the VH state for four layer structures. We may interpret \eqref{maintext_superex_4lay} as an effective dispersion for the magic TBG subsystem. We find that the correction to $J$ is around fifty percent larger for four layers. There is no $O(V^4)$ valley-odd superexchange for four layers. For larger $n$ the intra-MATBG superexchange may largely be ignored because the coefficient $r_1 \approx -\frac{\pi^2}{2n^3}$ decays very quickly. The inter-subsystem superexchange remains however, and the mass generation is in general more complex for higher $n$ because the non-magic subsystems begin to tunnel into each other as well. This is discussed explicitly for $n=5$ in the supplemental material section \ref{Suppsubsec:higher_n_superexchange}. The dispersion generation for the magic sector is qualitatively similar.
\section{Weak coupling theory of pair-breaking} \label{sec:weakcoupling} In this section, we switch perspective and consider a weak coupling approach to superconductivity. While the strong coupling approach discussed so far is potentially relevant in the limit of small doping (relative to $\nu = \pm 2$), for sufficiently large doping we adopt a phenomenological weak coupling perspective to understand superconductivity as a Fermi surface instability. It turns out this approach makes the effect of in-plane magnetic field very transparent. \subsection{$n=2$} \label{subsec:tbgweakcoupling} We will start by discussing the case of TBG ($n = 2$) before considering the case of general $n$. We start by assuming that the $\nu = \pm 2$ state is spin-polarized. Flavor polarization is suggested by Hall measurements \cite{PabloMott, PomeranchukYoung} as well as the observation of cascade transitions \cite{CascadeShahal, CascadeYazdani}. In fact, the ground state in the strong coupling limit is a spin-polarized version of the KIVC as shown in Ref.~\cite{KIVCpaper}. Without including the intervalley Hund's coupling, the spin polarization can be chosen independently in the two valleys. However, once intervalley Hund's coupling $J_H$ is included, it selects a spin ferromagnet where the spin direction in the two valleys is aligned if $J_H < 0$ and a spin-valley locked state where the spin direction in the two valleys is anti-aligned if $J_H > 0$ \cite{KIVCpaper}. Once this state is doped, the doped electrons only fill one spin species in each of the valleys and pairing takes place between these doped species. This implies that, regardless of the pairing mechanism, we expect spin-triplet pairing for $J_H < 0$ and a mixture of singlet and triplet pairing for $J_H > 0$ \cite{Symmetry4e, LakeSenthil}. Only the latter exhibits a finite temperature BKT transition making it the more likely scenario experimentally \cite{Symmetry4e, LakeSenthil}.
This scenario is also more compatible with the observed decrease of the $\nu = 2$ insulating gap with in-plane magnetic field \cite{Dean-Young}. In either case, we can think of pairing that takes place essentially between spinless electrons. Since the KIVC state only preserves a Kramers time-reversal symmetry $\mathcal{T}'$ \cite{KIVCpaper} in addition to $C_2$ and $C_3$, the effective KIVC Hamiltonian close to $\nu = \pm 2$ (see supplemental material for details) is invariant under the spinful representations of wallpaper group $p6$ with valley playing the role of spin: \begin{gather} \mathcal{T}' = \tau_y \mathcal{K}, \qquad C'_2 = i \sigma_x \tau_x, \quad C'_3 = - e^{\frac{2\pi i}{3}\sigma_z \tau_z} \\ {\mathcal{T}'}^2 = {C'_2}^2 = {C'_3}^3 = -1, \quad [\mathcal{T}',C'_2] = [\mathcal{T}',C'_3] = [C'_2,C'_3] = 0 \end{gather} Here, $\mathcal{T}'$ is the Kramers time-reversal symmetry which combines spinless time-reversal $\mathcal{T} = \tau_x \mathcal{K}$ with a $\pi$ valley ${\rm U}(1)$ rotation which remains a symmetry of the KIVC state \cite{KIVCpaper}. This symmetry ensures the degeneracy of the KIVC bands at the $\Gamma$ point but the bands are in general non-degenerate away from this point. Thus, upon doping, we obtain two $\mathcal{T}'$-symmetric Fermi surfaces (FSs) (see Ref.~\cite{kozii2020}). For the intermediate doping regime of interest, the two FSs are sufficiently far apart in momentum which justifies the assumption that pairing takes place independently in each FS. Thus, we can focus on a single FS.
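These group relations can be verified directly on $4\times 4$ matrices, representing $\mathcal{K}$ by complex conjugation: an antiunitary $U\mathcal{K}$ squares to $UU^*$ and commutes with a unitary $W$ iff $UW^* = WU$. A bookkeeping sketch (not tied to any particular band basis):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

U_T = np.kron(s0, sy)        # unitary part of T' = tau_y K (sigma (x) tau)
C2 = 1j * np.kron(sx, sx)    # C2' = i sigma_x tau_x
szt = np.diag(np.kron(sz, sz)).real
C3 = -np.diag(np.exp(2j * np.pi / 3 * szt))   # C3' = -exp(2 pi i sigma_z tau_z / 3)

I4 = np.eye(4)
assert np.allclose(U_T @ U_T.conj(), -I4)                 # T'^2 = -1
assert np.allclose(C2 @ C2, -I4)                          # C2'^2 = -1
assert np.allclose(np.linalg.matrix_power(C3, 3), -I4)    # C3'^3 = -1
assert np.allclose(U_T @ C2.conj(), C2 @ U_T)             # [T', C2'] = 0
assert np.allclose(U_T @ C3.conj(), C3 @ U_T)             # [T', C3'] = 0
assert np.allclose(C2 @ C3, C3 @ C2)                      # [C2', C3'] = 0
```

All relations hold, confirming the spinful $p6$ algebra quoted above.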
The pairing amplitude between $\mathcal{T}'$ related electrons is defined as \begin{equation} \Delta({\vec k}) = \langle c_{\vec k} (\mathcal{T}' c_{\vec k} \mathcal{T}'^{-1}) \rangle \end{equation} An important subtlety to note here is that, when expressed in the basis of the KIVC band, the operator $\mathcal{T}'$ has to be momentum-dependent to satisfy the relation $\mathcal{T}'^2 = -1$ (this is similar to what happens when considering the implementation of spinful time-reversal at the surface of a topological insulator). The same consideration also applies for $C'_2$ such that \begin{equation} \mathcal{T}' c_{\vec k} {\mathcal{T}'}^{-1} = e^{i \chi_{\vec k}} c_{-{\vec k}}, \quad C'_2 c_{\vec k} {C'_2}^{-1} = e^{i \phi_{\vec k}} c_{-{\vec k}}, \end{equation} where $\chi_{\vec k} - \chi_{-{\vec k}} = \phi_{\vec k} + \phi_{-{\vec k}} = \pi$. This yields the pairing function \begin{equation} \Delta({\vec k}) = e^{i \chi_{\vec k}} \langle c_{\vec k} c_{-{\vec k}} \rangle. \end{equation} which satisfies $\Delta(-{\vec k}) = \Delta({\vec k})$. This pairing function has even parity under $C'_2$ since \begin{align} C'_2 \Delta({\vec k}) {C'_2}^{-1} &= e^{i\chi_{\vec k}} \langle (C'_2 c_{\vec k} {C'_2}^{-1}) (C'_2 c_{-{\vec k}} {C'_2}^{-1}) \rangle \nonumber \\ &= e^{i \chi_{\vec k}} e^{i (\phi_{\vec k} + \phi_{-{\vec k}})} \langle c_{-{\vec k}} c_{\vec k} \rangle = \Delta({\vec k}) \end{align} where the minus sign from exchanging $c_{\vec k}$ and $c_{-{\vec k}}$ cancels against the minus sign from $e^{i (\phi_{\vec k} + \phi_{-{\vec k}})}$. Note that we could have defined $\Delta({\vec k})$ without the extra phase factor in which case we would get $\Delta(-{\vec k}) = -\Delta({\vec k})$ but the action of $C'_2$ will be unchanged. 
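This phase bookkeeping can be checked with a toy pair amplitude: choose $\chi_{\vec k}$ to be the polar angle of ${\vec k}$, which obeys $\chi_{\vec k}-\chi_{-{\vec k}}=\pi \pmod{2\pi}$, and take any $\langle c_{\vec k} c_{-{\vec k}}\rangle$ odd under ${\vec k}\to-{\vec k}$, as fermionic antisymmetry requires (the particular amplitude below is an arbitrary illustration):

```python
import numpy as np

def chi(k):
    # polar angle: chi(k) - chi(-k) = pi (mod 2 pi)
    return np.arctan2(k[1], k[0])

def pair_amp(k):
    # toy <c_k c_{-k}>: any function odd under k -> -k
    return k[0] + 2j * k[1] + 0.3 * k[0] ** 3

def Delta(k):
    return np.exp(1j * chi(k)) * pair_amp(k)

rng = np.random.default_rng(1)
for _ in range(5):
    k = rng.normal(size=2)
    # even parity: Delta(-k) = Delta(k)
    assert np.allclose(Delta(k), Delta(-k))
```

The phase factor converts the fermionic antisymmetry of $\langle c_{\vec k} c_{-{\vec k}}\rangle$ into an even-parity gap function, consistent with the $C'_2$ argument above.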
This means that $\Delta({\vec k})$ can be decomposed into even angular momentum channels and since we only have $C_6$ symmetry (rather than continuous rotation symmetry), we are left with the channels corresponding to angular momenta $l = 0$ ($s$-wave) and $l = \pm 2$ ($d$-wave). In weak coupling, $T_c$ is determined from the strength of the pairing interaction in the different channels $V_l$, $l = 0, \pm 2$ with time-reversal symmetry enforcing $V_2 = V_{-2}$. Without specifying the origin of pairing, we cannot determine the values of $V_{0,2}$ and which one is larger. Assuming at least one of $V_0$ and $V_2$ is attractive and denoting the larger of $V_0$ and $V_2$ by $V_{\rm max}$, the BCS formula gives $T_c = \Lambda e^{-1/N(0) V_{\rm max}}$ where $\Lambda$ is some cutoff and $N(0)$ is the density of states at the FS. We would now like to understand the effect of in-plane orbital field on such a superconductor. In the following, we will ignore the Zeeman field which for the effectively spinless superconductor considered here does not act by pair breaking \footnote{More precisely, it has no effect for the ferromagnetic case and it leads to canting of the spins in the spin-valley locked case without affecting the pairing.}. The effect of the in-plane field is obtained by projecting the term $\frac{g_{\text{orb}}}{2} \mu_B \mu_z \hat z \times {\bf B} \cdot {\boldsymbol \sigma}$ onto the KIVC bands: \begin{equation} H_{\bf B} = \mu_B {\vec g}_{n,{\vec k}} \cdot {\bf B}, \qquad {\vec g}_{n,{\vec k}} = \frac{g_{\text{orb}}}{2} \langle u_{n,{\vec k}}| \mu_z (\sigma_y, -\sigma_x \tau_z) | u_{n, {\vec k}} \rangle \label{gnk} \end{equation} where $n$ labels the two KIVC bands. $\mathcal{T}'$ implies that ${\vec g}_{n,-{\vec k}} = -{\vec g}_{n,{\vec k}}$ which means that this term acts as pair breaking leading to the loss of superconductivity at $\mu_B B_c \sim T_c$ which is of the same order as the Pauli field.
Remarkably, the critical in-plane orbital field turns out to be independent of the strength of the interaction as long as $V_0 \neq V_2$ and is given by the simple expression (see supplemental material for details) \begin{equation} \frac{B_c}{B_p} = 3.5 \frac{1}{\sqrt{\gamma_0}}, \qquad \gamma_0 = \int_{\rm FS} \frac{d\phi_{\vec k}}{2\pi} (g_{x,{\vec k}}^2 + g_{y,{\vec k}}^2) \end{equation} The value of $B_c/B_p$ is shown in Fig.~\ref{fig:Bc} for the two FSs. We see that it only depends on the doping and the FS where pairing takes place. In addition, it is always a number of order 1 and is extremely close to 1 when pairing takes place in the second (smaller) FS. Thus, as far as the response to in-plane field is concerned, this superconductor will behave very similarly to a spin-singlet Pauli limited superconductor. \begin{figure} \centering \includegraphics[width = 0.38 \textwidth]{BcIVC.pdf} \caption{(a) The top two bands for the IVC bands relevant for electron doping the KIVC state close to half-filling $\nu = 2 + \delta$ at $\kappa = 0.7$. (b) The ratio of the critical field due to orbital in plane field for intraband superconductivity in the upper/lower KIVC band, $B_c$, and the Pauli field, $B_P$, as a function of doping away from half-filling $\delta$.} \label{fig:Bc} \end{figure} \subsection{$n=3$} \label{subsec:ttgweakcoupling} The situation is very different for the trilayer case. Here, we note that the magnetic field is odd under both mirror symmetry $M_z$ and Kramers time-reversal $\mathcal{T}'$ \cite{MacdonaldInPlane} which means that the combination of the two, $M_z \mathcal{T}'$, remains unbroken in the presence of in-plane magnetic field. This symmetry ensures that in the $M_z = +1$ sector, where the KIVC order resides, Kramers time-reversal symmetry remains unbroken and pairing between $\mathcal{T}'$-related electrons is unaffected.
Our argument here is similar to the one of Ref.~\cite{MacdonaldInPlane}, with the main difference being that we consider pairing on top of KIVC order rather than intervalley pairing with unbroken valley ${\rm U}(1)$ symmetry. The absence of a pair-breaking effect of the in-plane field relies on $M_z$ symmetry. This symmetry is broken in the presence of a vertical displacement field, which couples the two mirror sectors. However, we expect this pair-breaking effect to remain small as long as the displacement field is small. The reason is that any pair-breaking term should arise from a process where an electron tunnels from the KIVC band to the Dirac band and back to the KIVC band. For a given point on the KIVC Fermi surface, such a term will be suppressed by $\frac{V}{\hbar v_F k_{\rm min}}$ where $k_{\rm min}$ is the minimum distance to the Dirac Fermi surface. The two Fermi surfaces may overlap at a few points, but for most of the KIVC Fermi surface, the distance $k_{\rm min}$ is of the order of the size of the moir\'e BZ, which yields a denominator $\hbar v_F k_{\rm min} \sim 100$ meV. Thus, for displacement fields smaller than this value, we do not expect a significant pair-breaking effect due to the in-plane field. We note that our analysis here is also compatible with that of Ref.~\cite{MacdonaldInPlane}, which only observed pair breaking due to in-plane field for relatively large values of the vertical displacement field. In conclusion, if we assume the pairing has the same structure in TBG ($n=2$) and trilayer ($n=3$), we find that superconductivity in the latter will survive to much higher in-plane magnetic fields than the former. The reason is that, while both are expected to survive the Zeeman part of the field, TBG will experience a relatively strong orbital component which destroys superconductivity at values close to the Pauli limit.
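A back-of-envelope version of this suppression estimate, using the $\hbar v_F k_{\rm min} \sim 100$ meV denominator quoted above and a few assumed displacement-field scales $V$ (illustrative values only):

```python
# Suppression factor V / (hbar v_F k_min) for pair breaking induced by a
# vertical displacement field V, using the ~100 meV scale quoted in the text.
hbar_vf_kmin_mev = 100.0

for v_mev in (5.0, 10.0, 30.0):  # assumed displacement-field scales
    suppression = v_mev / hbar_vf_kmin_mev
    print(f"V = {v_mev:5.1f} meV  ->  suppression ~ {suppression:.2f}")
```

Even at $V = 30$ meV the factor stays well below one, consistent with the claim that pair breaking is weak for small displacement fields.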
This result is fully consistent with the experimentally observed behavior where superconductivity is destroyed by an in-plane field of the order of the Pauli field in TBG \cite{PabloTBGNematicity} whereas it persists to much higher fields in TTG \cite{PabloTrilayerInPlane}. \subsection{General $n$} \label{subsec:any_n_weakcoupling} For even $n$, we expect an orbital effect of the magnetic field for each TBG subsystem similar to the case of $n = 2$. However, note that for the first TBG block (the one with the largest magic angle), the in-plane field enters the Hamiltonian multiplied by a very small prefactor. For example, as discussed in Sec.~\ref{sec:BandStructure}, the effect of in-plane magnetic field on the first TBG block in tetralayer graphene ($n = 4$) is scaled by a factor $|r_1| \approx 0.05$, which will give rise to a pair-breaking field that is ten times larger than that for TBG. Thus, although symmetry-wise the case of even $n$ is similar to TBG rather than trilayer, practically, we expect superconductivity in the first TBG block (at the first magic angle) to be very robust to in-plane field, similar to the trilayer case. For odd $n$, we can apply the same reasoning as in the trilayer case to deduce that the in-plane field will not have a pair-breaking effect at zero displacement field and only a weak pair-breaking effect at finite but small field. \section{Conclusions} In this manuscript we have developed a quantitative and controlled analysis of the ATMG systems that enables us to compare and contrast these multilayer systems with $n=2$ MATBG. While the low energy behavior of ATMG is dominated by the magic sector, the different couplings of external fields and different levels of lattice relaxation give ATMG much more tunability than twisted bilayer graphene. Such increased tunability will be vital for stress-testing theories of correlated physics. Let us conclude by highlighting the predictions made by our theory.
First, we predict that strain-free four layer systems can be gapped at charge neutrality with small displacement fields. Second, ATMG samples with $n>2$ and larger (smaller) than-magic angles can see superconductivity weakened (strengthened) by small displacement fields. To see an appreciable suppression of $J$ and $T_c$, one may need $\theta \gtrsim 1.63^\circ$ for TTG. Third, in a parallel magnetic field superconductivity can exceed the Pauli limit for $n>2$, when tuned to their first magic angle. Fourth, we predict a novel semimetallic state for $n=2$ MATBG, the magnetic semimetal (MSM - related to but distinct from the nematic semimetal), which is stabilized by an in-plane magnetic field. Finally, our lattice relaxation calculation additionally points to $n=5$ ATMG tuned to their second magic angle, $\theta = 1.14^\circ$ (which happens to be close to the {\em first} magic angle of MATBG), as a promising host for zero-field FCIs. \section{Acknowledgements} We thank Tomohiro Soejima, Nick Bultinck, Michael Zaletel, and Daniel Parker for insightful discussions, collaborations on related projects, and development of the Hartree Fock codebase. We additionally thank Tomohiro Soejima for comprehensive and patient help in performing Hartree Fock simulations. Patrick Ledwith was partly supported by the Department of Defense (DoD) through the National Defense Science and Engineering Graduate Fellowship (NDSEG) Program. Eslam Khalaf was supported by the German National Academy of Sciences Leopoldina through grant LPDS 2018-02 Leopoldina fellowship. Ziyan Zhu is supported by the STC Center for Integrated Quantum Materials, NSF Grant No. DMR-1231319, ARO MURI Grant No. W911NF14-0247, and NSF DMREF Grant No. 1922165. Stephen Carr was supported by the National Science Foundation under grant No. OIA-1921199. Ashvin Vishwanath was supported by a Simons Investigator award and by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440, AV).
\section{Introduction} \label{sec:01} Throughout this paper $p$ denotes a prime number. Let ${D_{p,\infty}}$ denote the unique (up to isomorphism) definite quaternion $\mathbb Q$-algebra ramified exactly at $p$ and $\infty$. The classical result of Deuring establishes a bijection between the set $\Lambda_p$ of isomorphism classes of supersingular elliptic curves over $\overline{\bbF}_p$ and the set $\Cl({D_{p,\infty}})$ of ideal classes of a maximal order in ${D_{p,\infty}}$. The class number $h$ of ${D_{p,\infty}}$ is well known due to Eichler \cite{eichler-CNF-1938} (Deuring and Igusa gave different proofs of this result), and is given by \begin{equation} \label{eq:1.1} h=\frac{p-1}{12}+\frac{1}{3}\left (1-\left(\frac{-3}{p}\right )\right )+\frac{1}{4}\left (1-\left(\frac{-4}{p}\right )\right ), \end{equation} where $\left( \frac{\cdot}{p}\right ) $ denotes the Legendre symbol. Under the correspondence $\Lambda_p\simeq \Cl({D_{p,\infty}})$, the type number $t$ of ${D_{p,\infty}}$ is equal to the number of non-isomorphic endomorphism rings of members $E$ in $\Lambda_p$. An explicit type formula is also well known due to Deuring~\cite{Deuring1950}, which is given by \begin{equation} \label{eq:1.2} t=\frac{p-1}{24}+\frac{1}{6}\left (1-\left(\frac{-3}{p}\right )\right )+ \begin{cases} h(-p)/4 & \text{if $p\equiv 1\pmod 4$},\\ 1/4+h(-p)/2 & \text{if $p\equiv 7\pmod 8$},\\ 1/4+h(-p) & \text{if $p\equiv 3\pmod 8$},\\ \end{cases} \end{equation} for $p>3$, and $t=1$ for $p=2,3$. Here for any square-free integer $d\in \bbZ$, we write $h(d)$ for the class number of $\mathbb Q(\sqrt{d})$. Though these classical results were well established by 1950, they have been revisited and generalized many times, even until now, with proofs drawing on various ingredients such as mass formulas, Tamagawa numbers, theta series, cusp forms, algebraic modular forms, Atkin-Lehner involutions and traces of Hecke operators.
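Formulas (\ref{eq:1.1}) and (\ref{eq:1.2}) are directly computable. The sketch below (our own code; the helper names are not from the literature) evaluates the Legendre symbols by Euler's criterion and the imaginary quadratic class numbers $h(d)$ by counting reduced primitive binary quadratic forms, a standard method. For instance, $p=11$ gives $h=2$ and $t=2$.

```python
from fractions import Fraction
from math import gcd

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def form_class_number(D):
    """Class number for discriminant D < 0, by counting reduced primitive
    binary quadratic forms (a, b, c) with b^2 - 4ac = D."""
    h, a = 0, 1
    while 3 * a * a <= -D:
        for b in range(-a + 1, a + 1):
            if (b - D) % 2 or (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c >= a and not (a == c and b < 0) and gcd(gcd(a, abs(b)), c) == 1:
                h += 1
        a += 1
    return h

def h_quad(d):
    """h(d): class number of Q(sqrt(d)) for a square-free integer d < 0."""
    return form_class_number(d if d % 4 == 1 else 4 * d)

def eichler_h(p):
    """Class number of D_{p,infty}, formula (1.1), for a prime p > 3."""
    return Fraction(p - 1, 12) + Fraction(1 - legendre(-3, p), 3) \
        + Fraction(1 - legendre(-4, p), 4)

def deuring_t(p):
    """Type number of D_{p,infty}, formula (1.2), for a prime p > 3."""
    t = Fraction(p - 1, 24) + Fraction(1 - legendre(-3, p), 6)
    hp = h_quad(-p)
    if p % 4 == 1:
        return t + Fraction(hp, 4)
    if p % 8 == 7:
        return t + Fraction(1, 4) + Fraction(hp, 2)
    return t + Fraction(1, 4) + hp  # p = 3 (mod 8)

print(eichler_h(11), deuring_t(11))   # 2 2
print(eichler_h(23), deuring_t(23))   # 3 3
```

Exact rational arithmetic via \texttt{Fraction} confirms that the formulas always return integers, as they must.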
Different angles and approaches such as the Eichler-Shimizu-Jacquet-Langlands correspondence and trace formulas also play important roles in the development. This paper is one instance of them, where we would like to generalize the explicit formulas (\ref{eq:1.1}) and (\ref{eq:1.2}) from ${D_{p,\infty}}$ to ${D_{p,\infty}}\otimes \mathbb Q(\sqrt{p})$, which is the unique totally definite quaternion $\mathbb Q(\sqrt{p})$-algebra (up to isomorphism) unramified at all finite places. Recall that a quaternion algebra $D$ over a totally real number field $F$ is said to be \emph{totally definite} if $D\otimes_{F, \sigma}\mathbb R$ is isomorphic to the Hamilton quaternion algebra $\bbH$ for every embedding $\sigma: F\hookrightarrow \mathbb R$. Our interest in totally definite quaternion algebras stems from the study of abelian varieties over finite fields. Note that ${D_{p,\infty}}$ and ${D_{p,\infty}}\otimes \mathbb Q(\sqrt{p})$ are the only two algebras that can occur as the endomorphism algebra of an abelian variety over a finite field \cite{tate:eav}, but do not satisfy the Eichler condition \cite[Definition~34.3]{reiner:mo}. The cases where the endomorphism algebras satisfy the Eichler condition are easier to treat. Indeed, by a result of Jacobinski \cite[Theorem 2.2]{Jacobinski1968}, the class number of an order in such an algebra is equal to a ray class number of its center. Observe that the number $h$ in \eqref{eq:1.1} (resp.~$t$ in \eqref{eq:1.2}) is also equal to the number of ${\bbF}_q$-isomorphism classes (resp.~non-isomorphic endomorphism rings) of supersingular elliptic curves in the isogeny class corresponding to the Weil $q$-number $\sqrt{q}$ (or $-\sqrt{q}$), for any field ${\bbF}_q$ containing $\mathbb F_{p^2}$. Thus, the present work may also be viewed as a generalization of the explicit formulas (\ref{eq:1.1}) and (\ref{eq:1.2}) in the arithmetic direction.
A generalization of the geometric direction in the sense that the set $\Lambda_p$ is replaced by the superspecial locus of the Siegel moduli spaces has been investigated by Hashimoto, Ibukiyama, Katsura and Oort \cite{Hashimoto-Ibukiyama-1, Hashimoto-Ibukiyama-2, Hashimoto-Ibukiyama-3, Ibukiyama-Katsura-Oort-1986, katsura-oort:surface}. Now let $q$ be an odd power of $p$, and let $\Sp(\sqrt{q})$ be the set of isomorphism classes of superspecial abelian surfaces over ${\bbF}_q$ in the isogeny class corresponding to the Weil number $\pm \sqrt{q}$. As a generalization of (\ref{eq:1.1}), T.-C. Yang and the present authors \cite[Theorem 1.2]{xue-yang-yu:ECNF} (also see \cite[Theorem 1.3]{xue-yang-yu:sp_as}) show the following explicit formula for $|\Sp(\sqrt{q})|$. \begin{thm}\label{1.1} Let $F=\mathbb Q(\sqrt{p})$, and $O_F$ be its ring of integers. (1) The cardinality of $\Sp(\sqrt{q})$ depends only on $p$, and is denoted by $H(p)$. (2) We have $H(p)=1,2,3$ for $p=2,3,5$, respectively. (3) For $p>5$ and $p\equiv 3 \pmod 4$, one has \begin{equation*} H(p)=h(F)\left [ \frac{\zeta_F(-1)}{2} + \left(13-5 \left(\frac{2}{p}\right) \right)\frac{h(-p)}{8}+\frac{h(-2p)}{4}+\frac{h(-3p)}{6} \right ], \end{equation*} where $h(F)$ is the class number of $F$ and $\zeta_F(s)$ is the Dedekind zeta function of $F$. (4) For $p>5$ and $p\equiv 1 \pmod 4$, one has \begin{equation*} H(p)= \begin{cases} h(F) \left [8 \zeta_F(-1)+ h(-p)/2+\frac{2}{3} h(-3p) \right ] & \text{for $p\equiv 1 \pmod 8$;} \\ h(F) \left [\left(\frac{45+\varpi}{2\varpi}\right) \zeta_F(-1)+\left (\frac{9+\varpi}{8 \varpi}\right ) h(-p) +\frac{2}{3} h(-3p) \right ] & \text{for $p\equiv 5 \pmod 8$;} \\ \end{cases} \end{equation*} where $\varpi:=[O_F^\times: A^\times]\in\{1,3\}$ and $A:=\mathbb Z[\sqrt{p}]$ is the suborder of index $2$ in $O_F$. \end{thm} Let $\calT(\Sp(\sqrt{q}))$ denote the set of isomorphism classes of endomorphism rings of abelian surfaces in $\Sp(\sqrt{q})$.
The cardinality of $\calT(\Sp(\sqrt{q}))$ again depends only on the prime $p$ (\cite[Theorem 1.3]{xue-yang-yu:sp_as}, see also Sect.~3), and is denoted by $T(p)$. In this paper we give an explicit formula for $T(p)$, which generalizes (\ref{eq:1.2}). \begin{thm}\label{1.2} Let $F=\mathbb Q(\sqrt{p})$ and $T(p):=|\calT(\Sp(\sqrt{q}))|$. (1) We have $T(p)=1,2,3$ for $p=2,3,5$, respectively. (2) For $p\equiv 3 \pmod 4$ and $p\ge 7$, we have \begin{equation} \label{eq:1.5} T(p)= \frac{\zeta_F(-1)}{2} + \left(13-5\left(\frac{2}{p}\right) \right)\frac{h(-p)}{8}+\frac{h(-2p)}{4}+\frac{h(-3p)}{6}. \end{equation} (3) For $p\equiv 1 \pmod 4$ and $p\ge 7$, we have \begin{equation} \label{eq:1.6} T(p)= 8 \zeta_F(-1) + \frac{1}{2}h(-p)+ \frac{2}{3} h(-3p). \end{equation} \end{thm} It follows from Theorems~\ref{1.1} and \ref{1.2} that $H(p)=T(p) h(F)$ except for the case where $p\equiv 5 \pmod 8$ and $\varpi=1$. When $p\equiv 3\pmod 4$, we actually prove this result first and use it to get formula (\ref{eq:1.5}). For the case where $p\equiv 1 \pmod 4$, we explain how this coincidence arises in part (1) of Remark~\ref{3.3}. Similar to Deuring's result for supersingular elliptic curves, the proof of Theorem~\ref{1.2} is reduced to the calculation of the type number of a maximal order $\bbO_1$ in ${D_{p,\infty}}\otimes \mathbb Q(\sqrt{p})$, as well as those of certain proper $\mathbb Z[\sqrt{p}]$-orders $\bbO_{8}$ and $\bbO_{16}$ (see (\ref{eq:bbOr1}) and (\ref{eq:bbOr2}) for the definition of these orders). We recall briefly the concept of \emph{proper $A$-orders}. Henceforth all orders are assumed to be of full rank in their respective ambient algebras. Let $F$ be a number field, and $A$ be a $\mathbb Z$-order in $F$ (not necessarily maximal). A $\mathbb Z$-order $\mathcal{O}$ in a finite dimensional semisimple $F$-algebra $\mathscr{D}$ is said to be a proper $A$-order if $\mathcal{O}\cap F=A$.
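Theorem~\ref{1.2} is likewise amenable to direct numerical evaluation. The sketch below (our own code, not from the paper) computes $\zeta_F(-1)$ for $F=\mathbb Q(\sqrt{p})$ of discriminant $D$ via Siegel's classical formula $\zeta_F(-1)=\frac{1}{60}\sum_{b^2<D,\ b\equiv D \bmod 2}\sigma_1\bigl(\frac{D-b^2}{4}\bigr)$ (a standard fact, not stated above), and the class numbers $h(-p)$, $h(-2p)$, $h(-3p)$ by counting reduced binary quadratic forms.

```python
from fractions import Fraction
from math import gcd, isqrt

def sigma1(n):
    """Sum of the positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def zeta_minus1(D):
    """Siegel's formula for zeta_F(-1), F real quadratic of discriminant D."""
    total = 0
    for b in range(-isqrt(D), isqrt(D) + 1):
        if b * b < D and (b - D) % 2 == 0:
            total += sigma1((D - b * b) // 4)
    return Fraction(total, 60)

def h_quad(d):
    """h(d): class number of Q(sqrt(d)), d < 0 square-free, via reduced forms."""
    D = d if d % 4 == 1 else 4 * d
    h, a = 0, 1
    while 3 * a * a <= -D:
        for b in range(-a + 1, a + 1):
            if (b - D) % 2 or (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c >= a and not (a == c and b < 0) and gcd(gcd(a, abs(b)), c) == 1:
                h += 1
        a += 1
    return h

def legendre(a, p):
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

def T(p):
    """T(p) of Theorem 1.2, for a prime p >= 7."""
    D = p if p % 4 == 1 else 4 * p          # discriminant of Q(sqrt(p))
    z = zeta_minus1(D)
    if p % 4 == 3:                          # formula (1.5)
        return z / 2 + Fraction(13 - 5 * legendre(2, p), 8) * h_quad(-p) \
            + Fraction(h_quad(-2 * p), 4) + Fraction(h_quad(-3 * p), 6)
    return 8 * z + Fraction(h_quad(-p), 2) + Fraction(2 * h_quad(-3 * p), 3)  # (1.6)

print(T(7), T(11), T(13))   # 3 4 5
```

As a sanity check, Siegel's formula reproduces the known values $\zeta_{\mathbb Q(\sqrt 5)}(-1)=1/30$ and $\zeta_{\mathbb Q(\sqrt 2)}(-1)=1/12$.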
Unlike \cite[Definition~23.1]{curtis-reiner:1}, we do not require $\mathcal{O}$ to be a projective $A$-module. The class number of $\mathcal{O}$ is denoted by $h(\mathcal{O})$. When $F$ is a totally real number field, a proper $A$-order in a CM-extension (i.e. a totally imaginary quadratic extension) of $F$ is called a \emph{CM proper $A$-order}. If $A$ coincides with $O_F$, then we drop the adjective ``proper'' and simply write ``$O_F$-order''. It is well known that the type number of a totally definite Eichler order of square-free level can be calculated by Eichler's trace formula \cite{eichler:crelle55}; see also \cite{korner:1987} for general $O_F$-orders. Some errors in Eichler's formula were found and corrected later independently by M.~Peters \cite{Peters1968} and by A.~Pizer~\cite{Pizer1973}. Eichler's trace formula contains a number of data which are generally not easy to compute. In \cite[p.~212]{vigneras:inv}, M.-F.\ Vign\'eras gave an explicit formula for the type number of any totally definite quaternion algebra $D$, over any real quadratic field $F=\mathbb Q(\sqrt{d})$, that is unramified at all finite places. Her formula was based on the explicit formula for the class number $h(D)$ in \cite[Theorem 3.1]{vigneras:ens} and the class-type number relationship $h(D)=t(D)h(F)$; see \cite[p.~212]{vigneras:inv}. Unfortunately, it was pointed out by Ponomarev in \cite[Concluding remarks, p.~103]{ponomarev:aa1981} that the formula in \cite[p.~212]{vigneras:inv} is not a formula for $t(D)$ in general. This conclusion is based on his explicit calculations for class numbers of positive definite quaternary quadratic forms \cite[Sect.~5]{ponomarev:aa1981}, and the correspondence between quaternary quadratic forms and types of quaternion algebras established in \cite[Sect.~4]{ponomarev:aa1976}.
The source of the difficulty is that the class-type number relationship $h(D)=t(D)h(F)$ may fail in general\footnote{In \cite[p.~103] {ponomarev:aa1981} it reads ``the class number of $\grA_K$ divided by the proper class number of $K$ is not, in general, the type number $t$''. However it should read ``the class number'' instead of ``the proper class number'' as the former one is what Vign\'eras' formula is based on.}, even if $D$ is unramified at all finite places. To remedy this, we examine more closely the Picard group action described below. Let $F$ be an arbitrary number field, $A\subseteq O_F$ a $\mathbb Z$-order in $F$, and $\mathcal{O}$ a proper $A$-order in a quaternion $F$-algebra $D$. The Picard group $\Pic(A)$ acts naturally on the finite set $\Cl(\mathcal{O})$ of locally principal right ideal classes of $\mathcal{O}$ by the map \begin{equation} \label{eq:8} \Pic(A)\times \Cl(\mathcal{O})\to \Cl(\mathcal{O}), \qquad ([\gra], [I])\mapsto [\gra I], \end{equation} where $\gra$ (resp.~$I$) denotes a locally principal fractional $A$-ideal in $F$ (resp.~right $\mathcal{O}$-ideal in $D$), and $[\gra]$ (resp.~$[I]$) denotes its ideal class. Let $\overline{\Cl}(\mathcal{O})$ be the set of orbits of this action, and $r(\mathcal{O})$ be its cardinality: \begin{equation} \label{eq:21} r(\mathcal{O}):=\abs{\overline{\Cl}(\mathcal{O})}=\abs{\Pic(A)\backslash \Cl(\mathcal{O})}. \end{equation} One of the main results of this paper is a formula for $r(\mathcal{O})$ when $D$ is a totally definite quaternion algebra over a totally real field. \begin{thm}\label{thm:main} Let $A$ and $\mathcal{O}$ be as above. Suppose that $F$ is a totally real number field, and $D$ is a totally definite quaternion $F$-algebra.
The number of orbits of the Picard group action (\ref{eq:8}) can be calculated by the following formula: \[r(\mathcal{O})=\frac{1}{h(A)}\left(h(\mathcal{O})+\sum_{1\neq [\gra]\in \grC_2(A, \widetilde A)}\ \sum_{\dbr{B}\in \mathscr{B}_{[\gra]}} \frac{1}{2}(2-\delta(B))h(B)\prod_{\ell}m_\ell(B) \right),\] where we refer to (\ref{eq:def-grC2}), (\ref{eq:13}), (\ref{eq:46}), (\ref{eq:35}) for the definitions of $\grC_2(A, \widetilde A)$, $\mathscr{B}_{[\gra]}$, $\delta(B)$, $m_\ell(B)$ respectively. \end{thm} If $A=O_F$, then the formula for $r(\mathcal{O})$ can be simplified further (see Theorem~\ref{thm:orbit-num-formula}). Assume that $F$ is a totally real field of even degree over $\mathbb Q$, and $D$ is the unique (up to isomorphism) quaternion $F$-algebra unramified at all finite places of $F$. Let $\mathcal{O}$ be a maximal $O_F$-order in $D$. Then $t(D)=r(\mathcal{O})$, and hence Theorem~\ref{thm:main} leads to a type number formula for such $D$ in Corollary~\ref{cor:type-numer-tot-def-unram}. In \cite[Remarque, p.~82]{vigneras:ens}, Vign\'eras asserted that $h(D)/h(F)$ is always an integer. However, the assertion was mixed with the misconception that $h(D)=t(D)h(F)$ holds unconditionally on $F$, and we could not locate a precise proof of this integrality elsewhere. As an application of our orbit number formula, we prove in Theorem~\ref{thm:integrality} that $h(D)/h(F)\in \bbN$ for all $F$. On the other hand, we give in Corollary~\ref{2.7} a necessary and sufficient condition on $F$ such that the relationship $h(D)=h(F)t(D)$ remains valid. In particular, for real quadratic fields satisfying this condition, Vign\'eras's formula \cite[p.~212]{vigneras:inv} does give a formula for the type number $t(D)$. This approach of calculating the type number via the class number also paves the way to our proof of Theorem~\ref{1.2}, where we treat certain orders (namely, $\bbO_8$ and $\bbO_{16}$) that do not contain $O_F$.
The type numbers of such orders are not covered by the previous methods of Eichler-Pizer and Ponomarev. This paper is organized as follows. In Section~\ref{sec:02} we provide some preliminary studies on the $\Pic(A)$-action on $\Cl(\mathcal{O})$. This is carried out in more depth for totally definite quaternion algebras in Section~\ref{sec:orbit-number-formula}, and we derive the orbit number formula and its corollaries there. The calculations for Theorem~\ref{1.2} are worked out in Section~\ref{sec:03}, and we prove the integrality of $h(D)/h(F)$ in Section~\ref{sec:integrality}. \section{Preliminaries on the Picard group action} \label{sec:02} Let $F$ be a number field, $O_F$ its ring of integers, and $A\subseteq O_F$ a $\mathbb Z$-order in $F$. Let $D$ be a quaternion $F$-algebra and $\mathcal{O}$ a {\it proper} $A$-order in $D$. This section provides a preliminary study of the $\Pic(A)$-action on $\Cl(\mathcal{O})$ in (\ref{eq:8}). We follow the notation of \cite[Sections 2.1--2.3]{xue-yang-yu:ECNF}. Recall that $D$ admits a canonical involution $x\mapsto \bar x$ such that $\Tr(x)=x+\bar x$ and $\Nr(x)=x\bar x$ are respectively the \emph{reduced trace} and \emph{reduced norm} of $x\in D$. The \emph{reduced discriminant} $\grd(D)$ of $D$ is the product of all finite primes of $F$ that are ramified in $D$. Let $\widetilde A:=\Nr_A(\mathcal{O})$ be the norm of $\mathcal{O}$ over $A$. More explicitly, $\widetilde A$ is the $A$-submodule of $O_F$ spanned by the reduced norms of elements of $\mathcal{O}$. Clearly, $\widetilde A$ is closed under multiplication, hence a suborder of $O_F$ containing $A$. By \cite[Lemma~3.1.1]{xue-yang-yu:ECNF}, $\mathcal{O}$ is closed under the canonical involution if and only if $\widetilde A=A$. In particular, any $O_F$-order in $D$ is closed under the canonical involution.
We have the natural surjective map between the Picard groups: \begin{equation} \label{eq:9} \pi:\Pic(A)\twoheadrightarrow \Pic(\widetilde A),\qquad \gra \mapsto \gra \widetilde A. \end{equation} Given an ideal class $[I]\in \Cl(\mathcal{O})$, we study the stabilizer $\Stab([I])\subseteq \Pic(A)$ of the $\Pic(A)$ action on $\Cl(\mathcal{O})$ as in (\ref{eq:8}). Let $I^{-1}$ be the inverse of $I$, and $\mathcal{O}_l(I)=\{x\in D\mid xI\subseteq I\}$ the associated left order of $I$. We have $\mathcal{O}_l(I)=II^{-1}$ \cite[Section~I.4]{vigneras}. Thus \begin{equation} \label{eq:17} [\gra I] =[I]\quad \text{if and only if}\quad [\gra \mathcal{O}_l(I)] =[\mathcal{O}_l(I)], \end{equation} where $[\mathcal{O}_l(I)]$ is the trivial ideal class of $\mathcal{O}_l(I)$. Therefore, the study of $\Stab([I])$ often reduces to that of $\Stab([\mathcal{O}])$. We have \begin{equation} \label{eq:31} \Stab([\mathcal{O}])=\{ [\gra]\in \Pic(A)\mid \exists \lambda\in D \text{ such that } \gra\mathcal{O}=\lambda\mathcal{O}\}. \end{equation} Since $\gra \mathcal{O}$ is a nonzero two-sided $\mathcal{O}$-ideal, $\lambda$ lies in the normalizer $\calN(\mathcal{O})\subseteq D^\times$ by \cite[Exercise~I.4.6]{vigneras}. To study more closely the relationship between $\gra$ and $\lambda$, we make use of the following lemma from commutative algebra. \begin{lem}\label{lem:eq-unit-ideal} Let $R\subseteq S$ be an extension of unital rings, with $R$ commutative and $S$ a finite $R$-module. \begin{enumerate}[(i)] \item If $\grc\subseteq R$ is an $R$-ideal with $\grc S=S$, then $\grc =R$. \item Let $L$ be the total quotient ring of $R$. Suppose that the natural map $S\to S\otimes_R L$ is injective, and $S\cap L=R$ in $S\otimes_R L$. If $\grc\subset L$ is an $R$-submodule with $\grc S=\lambda S$ for some $\lambda \in L^\times$, then $\grc = \lambda R$.
\end{enumerate} \end{lem} \begin{proof}(i) Since $S$ is a finite $R$-module, the equality $\grc S=S$ implies that there exists $a\in \grc$ such that $(1-a)S=0$ by \cite[Corollary~4.7]{Eisenbud-Com-alg}. Necessarily $a=1$ since $1\in S$, and hence $1\in \grc$ and $\grc=R$. \\ (ii) Let $\grc'=\frac{1}{\lambda} \grc$. Then $\grc'S=S$, which implies that $\grc' \subset S$. We have $\grc' \subseteq S\cap L=R$, and hence $\grc'$ is an integral ideal of $R$. Now it follows from part~(i) that $\grc'=R$, equivalently, $\grc= \lambda R$. \end{proof} We return to the study of the $\Pic(A)$-action on $\Cl(\mathcal{O})$. \begin{cor}\label{endo.1} Suppose that $\gra\subset F$ is a locally principal nonzero fractional $A$-ideal with $\gra \mathcal{O}=\lambda \mathcal{O}$ for some $\lambda \in D$. \begin{enumerate}[(i)] \item If $\lambda\in F$, then $\gra=\lambda A$. \item If $\lambda\not \in F$, then $\gra B=\lambda B$ with $B=F[\lambda]\cap \mathcal{O}$. In particular, $[\gra]$ belongs to the kernel of the canonical map $\Pic(A)\to \Pic(B)$. \end{enumerate} \end{cor} \begin{proof} Part~(i) follows directly from Lemma~\ref{lem:eq-unit-ideal}(ii) with $R=A$, $S=\mathcal{O}$ and $\grc=\gra$, and part~(ii) follows with $R=B$, $S=\mathcal{O}$, and $\grc=\gra B$. \end{proof} We say that a locally principal fractional ideal $\gra$ of $A$ \emph{capitulates} in $B$ if the extended ideal $\gra B$ is principal. The capitulation problem (for abelian extensions of number fields $K/F$ with $A=O_F$ and $B=O_K$) was studied by Hilbert (see Hilbert's Theorem~94), and it continues to be a field of active research up to this day \cite{Iwasawa-capitulation-1989, Bond-2017, suzuki-2001}. We will follow up on this line of investigation in Section~\ref{sec:orbit-number-formula}, particularly in the derivation of the orbit number formula (see Theorem~\ref{thm:orbit-num-formula}). However, our result does not explicitly depend on the works just cited.
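As a concrete classical illustration of capitulation (a standard example, not taken from the present paper), let $F=\mathbb Q(\sqrt{-5})$ with $A=O_F=\mathbb Z[\sqrt{-5}]$, whose class number is $2$, and let $H=F(i)$ be the Hilbert class field of $F$. Then \begin{align*} \gra &= (2,\, 1+\sqrt{-5}) \subset O_F \quad\text{is nonprincipal, with } \gra^2 = 2\,O_F,\\ \gra O_H &= (1+i)\, O_H, \quad\text{since } \bigl((1+i)\,O_H\bigr)^2 = 2\, O_H = \gra^2 O_H, \end{align*} and unique factorization of ideals in the Dedekind domain $O_H$ allows one to cancel the squares. Thus the nonprincipal ideal $\gra$ becomes principal in $O_H$, in accordance with Hilbert's Theorem~94.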
\begin{lemma}\label{endo.2} For any ideal class $[I]\in \Cl(\mathcal{O})$, the stabilizer $\Stab([I])$ is contained in the kernel of the homomorphism \begin{equation} \label{eq:12} {\mathrm Sq}:\Pic(A)\to \Pic(\widetilde A), \qquad [\gra]\mapsto [\,\gra\widetilde A\,]^2. \end{equation} \end{lemma} \begin{proof} Clearly, $\Nr_A$ commutes with any localization of $A$. Thus $\Nr_A(\gra \mathcal{O}')=\gra^2\Nr_A(\mathcal{O}')$ for every proper $A$-order $\mathcal{O}'$ and \textit{locally principal} fractional $A$-ideal $\gra$. Since $I$ is locally principal, $\mathcal{O}_l(I)$ is a proper $A$-order with $\Nr_A(\mathcal{O}_l(I))=\widetilde A$ by \cite[Section~2.3]{xue-yang-yu:ECNF}. Suppose that $ \gra \mathcal{O}_l(I) =\lambda \mathcal{O}_l(I)$ for some $\lambda \in D^\times$. Taking $\Nr_A$ on both sides, we get \begin{equation}\label{eq:square-norm-lambda} \gra^2 \widetilde A= \Nr(\lambda) \widetilde A, \end{equation} and hence $[\gra]\in \ker({\mathrm Sq})$. \end{proof} \begin{cor}\label{endo.3} Let $\Pic(\widetilde A)^2$ be the subgroup of $\Pic(\widetilde A)$ consisting of the $\widetilde A$-ideal classes that are perfect squares. Then the number $h(\mathcal{O})/|\Pic(\widetilde A)^2|$ is an integer. \end{cor} \begin{proof} The map ${\mathrm Sq}: \Pic(A)\to \Pic(\widetilde A)$ factors as $\Pic(A)\twoheadrightarrow \Pic(\widetilde A)^2\hookrightarrow \Pic(\widetilde A)$. By Lemma~\ref{endo.2}, each $\Pic(A)$-orbit of $\Cl(\mathcal{O})$ has cardinality divisible by $|\Pic(\widetilde A)^2|$. The corollary follows. \end{proof} \begin{cor}\label{endo.4} Suppose that $\mathcal{O}$ is closed under the canonical involution and $h(A)$ is odd. Then the action of $\Pic(A)$ on $\Cl(\mathcal{O})$ is free. \end{cor} \begin{proof} As remarked before, $\mathcal{O}$ is closed under the canonical involution if and only if $\widetilde A=A$. The two conditions imply that $\ker ({\mathrm Sq})$ is trivial, so the corollary follows from Lemma~\ref{endo.2}. 
\end{proof} \begin{rem} See \cite[Corollary (18.4), p.~134]{Conner-Hurrelbrink} for the complete list of all quadratic fields $F=\mathbb Q(\sqrt{d})$ with odd class number. When $A\neq \widetilde A$, the action of $\Pic(A)$ on $\Cl(\mathcal{O})$ need not be free, even if $h(A)$ is odd. For example, let $A=\mathbb Z[\sqrt{37}]$ and $\mathcal{O}=\bbO_8$ (see (\ref{eq:bbOr1}) and (\ref{eq:bbOr2}) for its definition). We have $\widetilde A=O_F$, $h(A)=3$ and $h(\mathcal{O})=7$ by \cite[(6.10)]{xue-yang-yu:ECNF}. Clearly, the $\Pic(A)$-action on $\Cl(\mathcal{O})$ cannot be free. \end{rem} In some cases, the Picard group action (\ref{eq:8}) can be utilized to calculate type numbers of certain orders. Concrete examples will be worked out in Section~\ref{sec:03}. To explain the ideas, we adopt the adelic point of view. Let $\widehat \mathbb Z:=\varprojlim \zmod{n}=\prod_\ell \mathbb Z_\ell$ be the profinite completion of $\mathbb Z$, where the product runs over all primes $\ell\in \bbN$. Given any finite dimensional $\mathbb Q$-vector space or finitely generated $\mathbb Z$-module $M$, we set $\widehat M:= M\otimes_\mathbb Z \widehat\mathbb Z$, and $M_\ell:=M\otimes_\mathbb Z\Z_\ell$. Two orders $\mathcal{O}_1$ and $\mathcal{O}_2$ in the quaternion $F$-algebra $D$ are said to be in the same \emph{genus} if $\widehat\mathcal{O}_1\simeq \widehat \mathcal{O}_2$, or equivalently, if there exists $x\in \widehat D^\times$ such that $\widehat\mathcal{O}_1=x\widehat\mathcal{O}_2 x^{-1}$. They are said to be of the same \emph{type} if $\mathcal{O}_2=x\mathcal{O}_1 x^{-1}$ for some $x\in D^\times$. Let $\calT(\mathcal{O})$ denote the set of $D^\times$-conjugacy classes of orders in the genus of $\mathcal{O}$, and let $t(\mathcal{O}):=|\calT(\mathcal{O})|$ denote the \emph{type number} of $\mathcal{O}$. For example, it is well known that all maximal $O_F$-orders of $D$ lie in the same genus.
If $\mathcal{O}$ is a maximal $O_F$-order, then the set $\calT(\mathcal{O})$ (resp.~its cardinality $t(\mathcal{O})$) depends only on $D$ and is denoted by $\calT(D)$ (resp.~$t(D)$) instead. Let $\calN(\widehat\mathcal{O})\subseteq \widehat D^\times$ be the normalizer of $\widehat\mathcal{O}$, which admits a filtration $\calN(\widehat \mathcal{O})\unrhd \widehat F^\times\widehat \mathcal{O}^\times\unrhd \widehat\mathcal{O}^\times$. It is easy to see that $\Pic(A)\simeq \widehat F^\times/F^\times \widehat A^\times$, and \begin{align} \Cl(\mathcal{O})&\simeq D^\times\backslash \widehat D^\times /\widehat \mathcal{O}^\times, \\ \overline \Cl(\mathcal{O})&\simeq D^\times\backslash \widehat D^\times /\widehat F^\times\widehat \mathcal{O}^\times, \label{eq:36}\\ \calT(\mathcal{O})&\simeq D^\times\backslash \widehat D^\times /\calN(\widehat \mathcal{O}). \label{eq:37} \end{align} Denote the normal subgroup $\calN(\mathcal{O})\cap (\widehat F^\times\widehat\mathcal{O}^\times)$ of $\calN(\mathcal{O})$ by $\calN_0(\mathcal{O})$. \begin{lem}\label{lem:normalizer-stablizer} Keep the notation of (\ref{eq:31}). There is a canonical isomorphism \[\Stab([I])\simeq \calN_0(\mathcal{O}_l(I))/F^\times\mathcal{O}_l(I)^\times, \qquad [\gra]\leftrightarrow (\lambda \bmod F^\times\mathcal{O}_l(I)^\times). \] \end{lem} \begin{proof} In light of (\ref{eq:17}), it is enough to show that $\Stab([\mathcal{O}])\simeq \calN_0(\mathcal{O})/F^\times\mathcal{O}^\times$, where $[\mathcal{O}]\in \Cl(\mathcal{O})$ is the principal right $\mathcal{O}$-ideal class. 
Applying the Zassenhaus lemma \cite[Lemma~5.49]{Rotman-alg} to $F^\times\unlhd D^\times$ and $\widehat \mathcal{O}^\times \unlhd \widehat F^\times\widehat\mathcal{O}^\times$, we get \[ \begin{split} \frac{\calN_0(\mathcal{O})}{F^\times\mathcal{O}^\times}&=\frac{(D^\times\cap \widehat F^\times\widehat\mathcal{O}^\times)F^\times}{(D^\times\cap \widehat \mathcal{O}^\times)F^\times}\simeq\frac{(D^\times\cap \widehat F^\times \widehat \mathcal{O}^\times)\widehat \mathcal{O}^\times}{(F^\times\cap \widehat F^\times\widehat \mathcal{O}^\times)\widehat \mathcal{O}^\times}=\frac{(D^\times\widehat\mathcal{O}^\times\cap \widehat F^\times)\widehat\mathcal{O}^\times}{F^\times\widehat\mathcal{O}^\times}\\ &\simeq\frac{D^\times\widehat\mathcal{O}^\times\cap \widehat F^\times}{(D^\times\widehat\mathcal{O}^\times\cap \widehat F^\times)\cap F^\times\widehat\mathcal{O}^\times}= \frac{D^\times\widehat\mathcal{O}^\times\cap \widehat F^\times}{F^\times(\widehat F^\times\cap \widehat \mathcal{O}^\times)}=\frac{D^\times\widehat\mathcal{O}^\times\cap \widehat F^\times}{F^\times\widehat A^\times}=\Stab([\mathcal{O}]). \end{split} \] We leave it as a routine exercise to check that the adelic isomorphism constructed above matches with the concrete one given in the lemma. \end{proof} There is a natural surjective map \begin{equation} \label{eq:32} \Upsilon: \Cl(\mathcal{O})\twoheadrightarrow \calT(\mathcal{O}), \qquad [I]\mapsto \dbr{\mathcal{O}_l(I)}, \end{equation} where $\dbr{\mathcal{O}_l(I)}$ denotes the $A$-isomorphism class of $\mathcal{O}_l(I)$ (equivalently, the $D^\times$-conjugacy class of $\mathcal{O}_l(I)$). Clearly, $\Upsilon$ factors through the projection $\Cl(\mathcal{O})\to \overline \Cl(\mathcal{O})$. \begin{prop}\label{prop:class-type-number} If $\calN(\widehat \mathcal{O})=\widehat F^\times\widehat\mathcal{O}^\times$, then $t(\mathcal{O})=r(\mathcal{O})$. 
If further $\mathcal{O}$ is stable under the canonical involution and $h(A)$ is odd, then \begin{equation} \label{eq:39} h(\mathcal{O})=h(A)t(\mathcal{O}). \end{equation} Moreover, for any order $\mathcal{O}'$ in the genus of $\mathcal{O}$, we have \begin{equation} \label{eq:40} \abs{\Upsilon^{-1}(\dbr{\mathcal{O}'})}=h(A), \quad \text{and} \quad \calN(\mathcal{O}')=F^\times\mathcal{O}'^\times. \end{equation} \end{prop} \begin{proof} The assumption $\calN(\widehat \mathcal{O})=\widehat F^\times\widehat\mathcal{O}^\times$ allows the canonical identification $\overline{\Cl}(\mathcal{O})=\calT(\mathcal{O})$, and hence identifications of the orbits of the $\Pic(A)$-action with the fibers of $\Upsilon$. In particular, we have $t(\mathcal{O})=r(\mathcal{O})$. By definition, $\widehat \mathcal{O}'\simeq \widehat \mathcal{O}$ for any order $\mathcal{O}'$ in the genus of $\mathcal{O}$. Hence $\calN(\mathcal{O}')=D^\times\cap \calN(\widehat\mathcal{O}')=\calN_0(\mathcal{O}')$. Suppose further that $\mathcal{O}$ is stable under the canonical involution and $h(A)$ is odd. By Corollary~\ref{endo.4}, $\Pic(A)$ acts freely on $\Cl(\mathcal{O})$, so we have $t(\mathcal{O})=r(\mathcal{O})=h(\mathcal{O})/h(A)$, and $\abs{\Upsilon^{-1}(\dbr{\mathcal{O}'})}=h(A)$ for each $\dbr{\mathcal{O}'}\in \calT(\mathcal{O})$. The equality $\calN(\mathcal{O}')=F^\times\mathcal{O}'^\times$ follows from Lemma~\ref{lem:normalizer-stablizer}. \end{proof} It is well known that the condition $\calN(\widehat \mathcal{O})=\widehat F^\times\widehat\mathcal{O}^\times$ holds if $D$ is unramified at all the finite places of $F$ and $\mathcal{O}$ is a maximal $O_F$-order (see Corollary~\ref{2.7}). We provide a class of non-maximal orders (namely, $\bbO_{16}$ defined by (\ref{eq:bbOr1}) and (\ref{eq:bbOr2})) that satisfies this condition in Section~\ref{sec:03}. 
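The counting behind Proposition~\ref{prop:class-type-number}, and the orbit-number formula of the next section, is elementary bookkeeping for finite group actions: for a free action every orbit (here, every fiber of $\Upsilon$) has exactly $\abs{G}$ elements, and in general the number of orbits is the average number of fixed points (Burnside's lemma, recalled in the next section). A toy Python sketch, with a stand-in action rather than the $\Pic(A)$-action itself:

```python
from itertools import product

def orbit_count(group, act, X):
    """Number of orbits of a finite group action, by direct enumeration.
    `group` must contain the identity element; `act(g, x)` is the action."""
    seen, r = set(), 0
    for x in X:
        if x not in seen:
            seen.update(act(g, x) for g in group)
            r += 1
    return r

def burnside_count(group, act, X):
    """Number of orbits as the average number of fixed points (Burnside)."""
    fixed = sum(1 for g in group for x in X if act(g, x) == x)
    return fixed // len(group)

# Toy action (not Pic(A) on Cl(O)): Z/4 rotates 2-colorings of the
# vertices of a square; both counts agree on the 6 rotation classes.
rot = lambda g, c: c[g:] + c[:g]
X = list(product((0, 1), repeat=4))
assert orbit_count(range(4), rot, X) == burnside_count(range(4), rot, X) == 6
```

The rotation action above is not free (the constant colorings are fixed points), which is exactly why the Burnside average, rather than $\abs{X}/\abs{G}$, is needed in the general orbit-number formula.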
\section{Picard group action for totally definite quaternion algebras} \label{sec:orbit-number-formula} Throughout this section, we assume that $F$ is a totally real field, and $D$ is a totally definite quaternion $F$-algebra. In particular, for any $\lambda \in D$ with $\lambda \not \in F$, the $F$-algebra $F[\lambda]$ is a CM-extension of $F$. Let $A$ be a $\mathbb Z$-order in $F$, and $\mathcal{O}$ be a proper $A$-order in $D$. The main goal of this section is to derive an orbit number formula for the $\Pic(A)$-action on $\Cl(\mathcal{O})$. This also leads to a type number formula for $t(D)$ when $F$ has even degree over $\mathbb Q$ and $D$ is unramified at all finite places of $F$. \begin{notn} Let $R_1\subseteq R_2$ be an extension of commutative Noetherian rings. The canonical homomorphism between the Picard groups is denoted by \[i_{R_2/R_1}:\Pic(R_1) \to \Pic(R_2), \qquad [\gra]\mapsto [\gra R_2].\] If $F_2/F_1$ is a finite extension of number fields, we write $i_{F_2/F_1}$ for $i_{O_{F_2}/O_{F_1}}$. Because of Corollary~\ref{endo.1}, we are interested in the following set of isomorphism classes of CM proper $A$-orders: \begin{equation} \label{eq:41} \mathscr{B}:=\{\dbr{B}\mid B \text{ is a CM proper $A$-order, and } \ker(i_{B/A})\neq \{1\}\}, \end{equation} where $\dbr{B}$ denotes the $A$-isomorphism class of $B$. \end{notn} \begin{prop}\label{prop:finiteness-CM-proper-A-order} The set $\mathscr{B}$ is finite. \end{prop} \begin{proof} Let $B$ be a CM proper $A$-order with $\dbr{B}\in \mathscr{B}$ and $K$ be its fraction field. Pick a nontrivial ideal class $[\gra]\in \ker(i_{B/A})$ so that \begin{equation} \label{eq:11} \gra B=\lambda B \quad \text{with}\quad \lambda \in K^\times. \end{equation} Necessarily $\lambda \not\in F^\times$, otherwise $\gra=\lambda A$ by part~(ii) of Lemma~\ref{lem:eq-unit-ideal}.
There is a commutative diagram \[ \begin{CD} \Pic(A) @>>> \Pic(O_F)\\ @VVV @VVV\\ \Pic(B)@>>> \Pic(O_K) \end{CD} \] If $[\gra]\not\in\ker(i_{O_F/A})$, then $[\gra O_F]$ is a nontrivial ideal class in $\ker(i_{K/F})$. By \cite[Section~14]{Conner-Hurrelbrink}, there are only finitely many CM-extensions $K'/F$ such that $\ker(i_{K'/F})\neq \{1\}$. On the other hand, suppose that $[\gra]\in \ker(i_{O_F/A})$, so that $\gra O_F=\gamma O_F$ for some $\gamma \in F^\times$. We have $\gamma O_K=\gra O_K=\lambda O_K$, and hence $\lambda=\gamma u$ for some $u\in O_K^\times$. Note that $u\not\in O_F^\times$ since $\lambda \not\in F^\times$. There are only finitely many CM-extensions $K''/F$ such that $[O_{K''}^\times:O_F^\times]>1$. Indeed, let $\boldsymbol \mu(K'')$ be the group of roots of unity in $K''$. There are only finitely many CM-extensions $K''/F$ such that $\boldsymbol \mu(K'')\neq \{\pm 1\}$. If $\boldsymbol \mu(K'')=\{\pm 1\}$, then $[O_{K''}^\times: O_F^\times]>1$ if and only if $K''/F$ is a CM-extension of type II (i.e. $[O_{K''}^\times: O_F^\times\boldsymbol \mu(K'')]=2$), and there are finitely many of these by \cite[Lemma~13.3]{Conner-Hurrelbrink}. We conclude that the following set of CM-extensions of $F$ is finite: \[\mathscr{K}:=\{B\otimes_A F\mid \dbr{B}\in \mathscr{B} \}.\] Now fix $K\in \mathscr{K}$ and a nontrivial $A$-ideal class $[\gra]\in \ker(i_{O_K/A})$. Pick $\lambda'\in K^\times$ such that $\gra O_K=\lambda' O_K$. The CM proper $A$-orders $B\subseteq O_K$ such that $[\gra]\in \ker(i_{B/A})$ are characterized in Lemma~\ref{lem:char-CM-order-nontrivial-i} below. For each $u\in O_K^\times$ with $u^{-1}\lambda'\not\in F^\times$, we have $A\cap \frac{u\gra}{\lambda'}=\{0\}$, so $A+\frac{u\gra}{\lambda'}$ forms an $A$-lattice in $O_K$. There are only finitely many $A$-orders in $O_K$ containing $A\oplus \frac{u\gra}{\lambda'}$. 
On the other hand, there is a one-to-one correspondence between $O_K^\times/A^\times$ and the set of $A$-submodules of the form $u\gra/\lambda'$ in $O_K$. Since $K/F$ is a CM-extension, $O_K^\times/A^\times$ is a finite group. Therefore, there are only finitely many CM proper $A$-orders $B$ in $K$ with $[\gra]\in \ker(i_{B/A})$. Since $\ker(i_{O_K/A})$ is finite as well, the proposition follows. \end{proof} \begin{lem}\label{lem:char-CM-order-nontrivial-i} Let $K/F$ be a CM-extension with $\ker(i_{O_K/A})\neq \{1\}$, and $[\gra]$ be a nontrivial $A$-ideal class in $\ker(i_{O_K/A})$ so that $\gra O_K=\lambda' O_K$ for some $\lambda'\in K^\times$. Given a CM proper $A$-order $B\subseteq O_K$, we have $[\gra]\in \ker(i_{B/A})$ if and only if there exists $u\in O_K^\times$ such that\footnote{Up to multiplication by a unit in $O_K^\times$, the $A$-submodule $\gra/\lambda'\subset O_K$ depends only on the $A$-ideal class $[\gra]\in \ker(i_{O_K/A})$ and not on the choice of $\gra$.} $u\gra/\lambda'\subseteq B$. In addition, we always have $u^{-1}\lambda'\not\in F^\times$ if such a unit $u\in O_K^\times$ exists. \end{lem} \begin{proof} First, suppose that $[\gra]\in \ker(i_{B/A})$ so that (\ref{eq:11}) holds. We have $\lambda' O_K=\gra O_K=\lambda O_K$, and hence $\lambda'=u\lambda$ for some $u\in O_K^\times$. It follows that \[u\gra/\lambda'=\gra/\lambda\subset B.\] As remarked right after (\ref{eq:11}), $u^{-1}\lambda'=\lambda\not\in F^\times$ since $[\gra]$ is assumed to be nontrivial. Conversely, suppose that there exists $u\in O_K^\times$ such that $u\gra/\lambda'\subset B$. Let $\grb=\frac{u\gra}{\lambda'}B$, the ideal of $B$ generated by $ u\gra/\lambda'$. Then $\grb O_K=\frac{u\gra}{\lambda'} O_K=O_K$. It follows from Lemma~\ref{lem:eq-unit-ideal} that $\grb=B$, i.e. $\gra B=(u^{-1}\lambda') B$.
\end{proof} When $A$ coincides with the maximal order $O_F$ in $F$, we give a more precise characterization of $O_F$-orders $B$ in $K$ with $\ker(i_{B/O_F})\neq \{1\}$ in Lemma~\ref{lem:suborder-nontrivial-kernel}. Let $B$ be an arbitrary CM proper $A$-order with fraction field $K$, and $x \mapsto \bar{x}$ be the unique nontrivial involution of $K/F$. We define a symbol \begin{equation}\label{eq:46} \delta(B):= \begin{cases} 1\qquad & \text{if } B=\bar B,\\ 0\qquad & \text{otherwise.} \end{cases} \end{equation} Note that $\delta(B)=1$ for all $O_F$-orders $B$. If $\mathcal{O}$ is stable under the canonical involution of $D$, then so is $\mathcal{O}_l(I)$ for every locally principal right $\mathcal{O}$-ideal $I$, in which case $\delta(F(\lambda)\cap \mathcal{O}_l(I))=1$ for every $\lambda\in D^\times$ with $\lambda \not \in F^\times$. \begin{lem}\label{lem:kernel-2-torsion} If $B$ is a CM proper $A$-order with $\delta(B)=1$, then $\abs{\ker (i_{B/A})}\leq 2$. \end{lem} \begin{proof} This result is well known when $A=O_F$ and $B=O_K$. In fact, the proof of \cite[Theorem~10.3]{Washington-cyclotomic} applies, \textit{mutatis mutandis}, to the current setting as well and shows that $\abs{\ker (i_{B/A})}\leq 2$. \end{proof} Lemma~\ref{lem:kernel-2-torsion} does not hold in general when $\delta(B)=0$. We exhibit in Example~\ref{ex:ker-cyclic-order-3} an infinite family of pairs $(A, B)$ such that $\ker(i_{B/A})\simeq \zmod{3}$. Denote by ${\rm Emb}(B,\mathcal{O})$ the set of optimal embeddings from $B$ into $\mathcal{O}$: \[{\rm Emb}(B, \mathcal{O})=\{\varphi\in \Hom_F(K, D)\mid \varphi(K)\cap \mathcal{O}=\varphi(B)\}. \] Fix a \emph{nontrivial} $A$-ideal class $[\gra]\in \Pic(A)$.
We define a few subsets of $\mathscr{B}$: \begin{align} \mathscr{B}(\mathcal{O})&=\{\dbr{B}\in \mathscr{B}\mid {\rm Emb}(B, \mathcal{O})\neq \emptyset\},\\ \mathscr{B}_{[\gra]}&=\{\dbr{B}\in \mathscr{B}\mid [\gra]\in \ker(i_{B/A})\}, \label{eq:13}\\ \mathscr{B}_{[\gra]}(\mathcal{O})&=\{\dbr{B}\in \mathscr{B}\mid [\gra]\in \ker(i_{B/A}),\text{ and }{\rm Emb}(B, \mathcal{O})\neq \emptyset\}.\label{eq:19} \end{align} Corollary~\ref{endo.1} implies that for any ideal class $[I]\in \Cl(\mathcal{O})$, \begin{equation} \label{eq:stabilizer} \Stab([I])=\bigcup_{\dbr{B}\in \mathscr{B}(\mathcal{O}_l(I))} \ker(i_{B/A}:\Pic(A) \to \Pic(B)). \end{equation} Suppose that $\mathscr{B}_{[\gra]}(\mathcal{O}_l(I))\neq \emptyset $ for some ideal class $[I]\in \Cl(\mathcal{O})$ so that $[\gra]\in \Stab([I])$. It then follows from \eqref{eq:square-norm-lambda} that $\gra^2\widetilde A=\Nr(\lambda)\widetilde A$ for some $\lambda \in D^\times$, where $\widetilde A=\Nr_A(\mathcal{O})$ is the suborder of $O_F$ spanned over $A$ by the reduced norms of elements of $\mathcal{O}$. Note that $\Nr(\lambda)$ is totally positive since $D$ is totally definite over $F$. For any $\mathbb Z$-order $A'$ in $F$, let $\Pic^+(A')$ be the quotient of the multiplicative group of locally principal fractional $A'$-ideals by the subgroup of principal fractional $A'$-ideals that are generated by \emph{totally positive} elements. If $A'=O_F$, then $\Pic^+(A')$ is simply the narrow class group of $F$. We denote the $2$-torsion subgroup of $\Pic^+(A')$ by $\Pic^+(A')[2]$, and the canonical map $\Pic^+(A')\to \Pic(A')$ by $\theta_{A'}$. For any order $A'$ intermediate to $A\subseteq O_F$, let us define \begin{equation}\label{eq:def-grC2} \grC_2(A, A'):=\{[\gra]\in \Pic(A)\mid i_{A'/A}([\gra])\in \theta_{A'}(\Pic^+(A')[2])\}. \end{equation} If $A=O_F$, then necessarily $A'=O_F$ as well, and we simply write $\grC_2(F)$ for $\grC_2(O_F, O_F)$, which coincides with the image of $\Pic^+(O_F)[2]$ in $\Pic(O_F)$.
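For a real quadratic $F$, the gap between $\Pic^+(O_F)$ and $\Pic(O_F)$ is governed by the fundamental unit: the narrow class number equals $h(F)$ exactly when $O_F$ has a unit of norm $-1$, and equals $2h(F)$ otherwise. The sign of the norm can be read off from the first continued-fraction convergent of $\sqrt d$ solving Pell's equation. A minimal Python sketch (our own helper, for square-free non-square $d$; it computes the fundamental unit of $\mathbb Z[\sqrt d]$, whose norm has the same sign as that of the fundamental unit of $O_F$, the unit index being odd):

```python
from math import isqrt

def pell_fundamental(d):
    """Fundamental unit x + y*sqrt(d) of Z[sqrt(d)] (d squarefree, not a
    perfect square), found as the first continued-fraction convergent of
    sqrt(d) satisfying x^2 - d*y^2 = ±1.  Returns (x, y, norm)."""
    a0 = isqrt(d)
    m, c, a = 0, 1, a0          # state of the continued-fraction expansion
    x, x_prev = a0, 1           # convergent numerators
    y, y_prev = 1, 0            # convergent denominators
    while True:
        n = x * x - d * y * y
        if n in (1, -1):
            return x, y, n
        m = a * c - m
        c = (d - m * m) // c
        a = (a0 + m) // c
        x, x_prev = a * x + x_prev, x
        y, y_prev = a * y + y_prev, y

# 1 + sqrt(2) has norm -1, so the narrow and ordinary class groups
# of Q(sqrt 2) coincide.
assert pell_fundamental(2) == (1, 1, -1)
```

For instance, the output for $d=34$ is the unit $35+6\sqrt{34}$ of norm $+1$, so the narrow class number of $\mathbb Q(\sqrt{34})$ is twice the ordinary one; this is consistent with $\grC_2(F)$ nevertheless being trivial for that field, as noted below.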
Clearly, $\grC_2(A, \widetilde A)$ is a subgroup of $\ker({\mathrm Sq})$, where ${\mathrm Sq}: \Pic(A)\to \Pic(\widetilde A)$ is the homomorphism defined in \eqref{eq:12}. By our construction, \begin{equation}\label{eq:tot-def-not-stab} [\gra]\not\in \grC_2(A, \widetilde A)\quad \text{implies that}\quad [\gra] \not\in \Stab([I]), \ \forall [I]\in \Cl(\mathcal{O}). \end{equation} Using the norm map $\Nm_{K/F}$ for CM-extensions $K/F$, one shows similarly that $[\gra]\in \grC_2(A, O_F)$ if $\mathscr{B}_{[\gra]}\neq\emptyset$. We claim that this is in fact an ``if and only if'' condition when $A=O_F$. If $\gra$ is a non-principal fractional $O_F$-ideal such that $\gra^2=\tau O_F$ for some totally positive $\tau\in F^\times$, then $K=F(\sqrt{-\tau})$ is a CM-extension of $F$ with $[\gra]\in \ker(i_{K/F})$ (cf.~\cite[Theorem~14.2]{Conner-Hurrelbrink}). Thus $\mathscr{B}_{[\gra]}\neq \emptyset $ if and only if $[\gra]\in \grC_2(F)$. Combining this with Lemma~\ref{lem:kernel-2-torsion}, we obtain that \begin{equation} \label{eq:30} \mathscr{B}=\coprod_{1\neq [\gra]\in \grC_2(F)}\mathscr{B}_{[\gra]}\qquad\text{if } A=O_F. \end{equation} Therefore, $\mathscr{B}=\emptyset$ if and only if $\grC_2(F)$ is trivial. \begin{cor}\label{2.7} Let $F$ be a totally real number field of even degree over $\mathbb Q$, and $\mathcal{O}$ be a maximal $O_F$-order in the (unique up to isomorphism) totally definite quaternion $F$-algebra $D$ unramified at all finite places of $F$. The following are equivalent: (1) the group $\grC_2(F)$ is trivial; (2) the action of $\Pic(O_F)$ on $\Cl(\mathcal{O})$ is free; (3) $t(D)=h(D)/h(F)$. \noindent Moreover, if $h(F)$ is odd, then the statements (1)--(3) hold. \end{cor} \begin{proof} By \cite[Section~II.2]{vigneras}, we have $\calN(\widehat \mathcal{O})=\widehat F^\times \widehat \mathcal{O}^\times$ in this case, and hence $t(\mathcal{O})=r(\mathcal{O})\geq h(D)/h(F)$.
The equality $r(\mathcal{O})= h(D)/h(F)$ holds if and only if the action of $\Pic(O_F)$ on $\Cl(\mathcal{O})$ is free, establishing the equivalence between (2) and (3). For the equivalence between (1) and (2), it is enough to show that the latter holds if and only if $\mathscr{B}=\emptyset$. The sufficiency is clear by (\ref{eq:stabilizer}). As $D$ is unramified at all finite places, any CM extension $K$ of $F$ can be embedded into $D$ by the local-global principle. If $\mathscr{B}\neq \emptyset$, then there exists a CM-extension $K$ of $F$ such that $\ker(i_{K/F})\neq \{1\}$. Fix an $F$-embedding $\varphi$ from such a $K$ into $D$. The image $\varphi(O_K)$ is contained in a maximal $O_F$-order $\mathcal{O}'$. Set $I:=\mathcal{O}'\mathcal{O}$, which is a fractional right $\mathcal{O}$-ideal with $\mathcal{O}_l(I)=\mathcal{O}'$. We have $\dbr{O_K}\in \mathscr{B}(\mathcal{O}_l(I))$, and hence $\ker(i_{K/F})\subseteq \Stab([I])$ by (\ref{eq:stabilizer}). Therefore, $\Pic(O_F)$ acts freely on $\Cl(\mathcal{O})$ if and only if $\mathscr{B}=\emptyset$. \end{proof} It is possible that $\grC_2(F)$ is trivial despite $h(F)$ being even. Indeed, one such example is given by $F=\mathbb Q(\sqrt{34})$ (see \cite[Section~12, p.~64]{Conner-Hurrelbrink}). We provide two applications of Corollary~\ref{2.7}. First, if $F$ is a real quadratic field satisfying condition (1) of Corollary~\ref{2.7} and $D$ is a totally definite quaternion $F$-algebra unramified at all finite places of $F$, then an explicit formula for $t(D)$ is given in \cite[p.~212]{vigneras:inv}. Second, recall that $\mathcal{O}^\times/O_F^\times$ is a finite group \cite[Theorem~V.1.2]{vigneras}. For any finite group $G$, we define \begin{align} h(D, G)&:=\abs{\{[I]\in \Cl(\mathcal{O})\mid \mathcal{O}_l(I)^\times/O_F^\times\simeq G\}},\\ t(D, G)&:=\abs{\{\dbr{\mathcal{O}'}\in \calT(D)\mid \mathcal{O}'^\times/O_F^\times\simeq G\}}.
\end{align} Here $h(D, G)$ does not depend on the choice of the maximal order $\mathcal{O}$, and is hence denoted as such. The following result is used in \cite{li-xue-yu:unit-gp}. \begin{cor} Keep the assumptions of Corollary~\ref{2.7} and suppose that $\grC_2(F)$ is trivial. Then $h(D, G)=h(F) t(D, G)$ for any finite group $G$. \end{cor} \begin{proof} The assumptions guarantee that (\ref{eq:40}) holds in the current setting as well. The corollary follows immediately. \end{proof} We return to the general setting of this section, that is, $\mathcal{O}$ being a proper $A$-order. Following \cite[Section~3]{xue-yang-yu:ECNF}, we set up the following notation: \begin{align*} w(\mathcal{O})&:=[\mathcal{O}^\times: A^\times], & w(B)&:=[B^\times: A^\times],\\ m(B, \mathcal{O})&:=\abs{{\rm Emb}(B, \mathcal{O})}, & m(B, \mathcal{O}, \mathcal{O}^\times)&:=\abs{{\rm Emb}(B, \mathcal{O})/\mathcal{O}^\times}, \end{align*} where $\mathcal{O}^\times$ acts on ${\rm Emb}(B, \mathcal{O})$ from the right by $\varphi\mapsto g^{-1}\varphi g$ for all $\varphi\in {\rm Emb}(B,\mathcal{O})$ and $g\in \mathcal{O}^\times$. Note that we have \begin{equation} \label{eq:25} m(B, \mathcal{O})=m(B, \mathcal{O}, \mathcal{O}^\times)w(\mathcal{O})/w(B). \end{equation} For each prime $\ell\in \bbN$, let $B_\ell$ (resp.~$\mathcal{O}_\ell$) be the $\ell$-adic completion of $B$ (resp.~$\mathcal{O}$). For simplicity, we put \begin{equation}\label{eq:35} m_\ell(B):=m(B_\ell, \mathcal{O}_\ell, \mathcal{O}_\ell^\times)=\abs{{\rm Emb}(B_\ell, \mathcal{O}_\ell)/\mathcal{O}_\ell^\times}. \end{equation} It is well known \cite[Theorem~II.3.2]{vigneras} that $m_\ell(B)=1$ for almost all primes $\ell\in \bbN$. \begin{thm}\label{thm:orbit-num-formula} Let $A$ be a $\mathbb Z$-order in a totally real number field $F$, and $\mathcal{O}$ be a proper $A$-order in a totally definite quaternion $F$-algebra $D$.
The number of orbits of the Picard group action (\ref{eq:8}) can be calculated by the following formula: \begin{equation*}\label{eq:48} r(\mathcal{O})=\frac{1}{h(A)}\left(h(\mathcal{O})+\sum_{1\neq [\gra]\in \grC_2(A, \widetilde A)}\ \sum_{\dbr{B}\in \mathscr{B}_{[\gra]}} \frac{1}{2}(2-\delta(B))h(B)\prod_{\ell}m_\ell(B) \right), \end{equation*} where $\grC_2(A, \widetilde A)$ is the subgroup of $\Pic(A)$ defined in (\ref{eq:def-grC2}), and $\mathscr{B}_{[\gra]}$ is the set of isomorphism classes of CM proper $A$-orders defined in (\ref{eq:13}). Moreover, if $A=O_F$, then \begin{equation}\label{eq:49} r(\mathcal{O})=\frac{1}{h(F)}\left(h(\mathcal{O})+\frac{1}{2}\sum_{\dbr{B}\in \mathscr{B}} h(B)\prod_{\ell}m_\ell(B) \right), \end{equation} where $\mathscr{B}$ is the set of isomorphism classes of CM $O_F$-orders in (\ref{eq:41}) with $A=O_F$. \end{thm} The proof of this theorem relies on Burnside's lemma \cite[Theorem~2.113]{Rotman-alg}, which we recall briefly for the convenience of the reader. \begin{lem}[Burnside's lemma] Let $G$ be a finite group acting on a finite set $X$, and $r$ be the number of orbits. For each $g\in G$, let $X^g\subseteq X$ be the subset of elements fixed by $g$. Then $r=\frac{1}{\abs{G}}\sum_{g\in G}\abs{X^g}$. \end{lem} Burnside's lemma thus reduces the proof of Theorem~\ref{thm:orbit-num-formula} to the calculation of $\abs{\Cl(\mathcal{O})^{[\gra]}}$ for each $[\gra]\in \Pic(A)$, where $\Cl(\mathcal{O})^{[\gra]}\subseteq \Cl(\mathcal{O})$ denotes the subset of locally principal right $\mathcal{O}$-ideal classes fixed by $[\gra]$. By \eqref{eq:tot-def-not-stab}, if $[\gra]\not\in \grC_2(A, \widetilde A)$, then $\Cl(\mathcal{O})^{[\gra]}=\emptyset$. \begin{prop} \label{prop:fixed-number} For each nontrivial $A$-ideal class $[\gra]\in \grC_2(A, \widetilde A)$, \begin{equation} \label{eq:14} \abs{\Cl(\mathcal{O})^{[\gra]}}= \sum_{\dbr{B}\in \mathscr{B}_{[\gra]}} \frac{1}{2}(2-\delta(B))h(B)\prod_{\ell}m_\ell(B).
\end{equation} \end{prop} \begin{proof} We fix a complete set of representatives $\{I_j\mid 1\leq j \leq h:=h(\mathcal{O})\}$ of the ideal classes in $\Cl(\mathcal{O})$, and define $\mathcal{O}_j:=\mathcal{O}_l(I_j)$. For each $1\leq j\leq h$, consider the following two sets: \begin{equation}\label{eq:20} X(\mathcal{O}_j):=\{\lambda \in D^\times\mid \gra \mathcal{O}_j=\lambda \mathcal{O}_j\},\qquad Y(\mathcal{O}_j):=X(\mathcal{O}_j)/A^\times, \end{equation} where $A^\times$ acts on $X(\mathcal{O}_j)$ by multiplication. Note that $X(\mathcal{O}_j)\cap F=\emptyset$ since $\gra$ is assumed to be non-principal. By (\ref{eq:17}), we have \begin{equation} \label{eq:16} X(\mathcal{O}_j)\neq \emptyset \quad \text{if and only if}\quad [I_j]\in \Cl(\mathcal{O})^{[\gra]}. \end{equation} If $X(\mathcal{O}_j)\neq \emptyset$, then $\mathcal{O}_j^\times$ acts simply transitively on $X(\mathcal{O}_j)$ from the right by multiplication. Thus \begin{equation} \label{eq:18} \frac{\abs{Y(\mathcal{O}_j)}}{w(\mathcal{O}_j)}= \begin{cases} 1 \qquad &\text{if } [I_j]\in \Cl(\mathcal{O})^{[\gra]},\\ 0 \qquad &\text{otherwise.} \end{cases} \end{equation} We count the cardinality $\abs{Y(\mathcal{O}_j)}$ in another way. By Corollary~\ref{endo.1}, each $\lambda \in X(\mathcal{O}_j)$ gives rise to a CM proper $A$-order $B_\lambda:=F(\lambda)\cap \mathcal{O}_j$ such that $[\gra]\in \ker(i_{B_\lambda/A})$. In other words, the $A$-isomorphism class $\dbr{B_\lambda}$ belongs to the set $\mathscr{B}_{[\gra]}(\mathcal{O}_j)$ defined in (\ref{eq:19}). Thus we have \begin{align} \label{eq:23} X(\mathcal{O}_j)&=\coprod_{\dbr{B}\in \mathscr{B}_{[\gra]}(\mathcal{O}_j)}X(\mathcal{O}_j,\dbr{B}),\qquad \text{where}\\ X(\mathcal{O}_j,\dbr{B})&:=\{\lambda \in X(\mathcal{O}_j)\mid B_\lambda \simeq B\}\subseteq X(\mathcal{O}_j)\subset D^\times. \end{align} Clearly, each $X(\mathcal{O}_j,\dbr{B})$ is $A^\times$-stable, so we set $Y(\mathcal{O}_j,\dbr{B}):=X(\mathcal{O}_j,\dbr{B})/A^\times$.
Let $K$ be the fraction field of $B$. Similar to (\ref{eq:20}), we define \begin{equation} \label{eq:22} X(B):=\{\lambda \in K^\times\mid \gra B=\lambda B\},\qquad Y(B):=X(B)/A^\times. \end{equation} Then $\abs{Y(B)}=w(B):=[B^\times:A^\times]$. For each $\dbr{B}\in \mathscr{B}_{[\gra]}(\mathcal{O}_j)$, we have an $A^\times$-equivariant surjective map \begin{equation} \label{eq:24} X(B)\times {\rm Emb}(B,\mathcal{O}_j)\to X(\mathcal{O}_j,\dbr{B}), \qquad (\lambda, \varphi)\mapsto \varphi(\lambda). \end{equation} Let $\varphi$ and $\varphi'$ be two distinct optimal embeddings of $B$ into $\mathcal{O}_j$. If $\varphi(B)\neq \varphi'(B)$, then $\varphi(X(B))\cap \varphi'(X(B))=\emptyset$. On the other hand, $\varphi(B)=\varphi'(B)$ if and only if \[\varphi'=\bar\varphi, \qquad B=\bar B.\] Therefore, if $\delta(B)=0$, then the map in (\ref{eq:24}) is bijective; otherwise it is two-to-one. It follows from (\ref{eq:25}) that \begin{equation} \label{eq:26} \frac{\abs{Y(\mathcal{O}_j,\dbr{B})}}{w(\mathcal{O}_j)}=\frac{1}{2}(2-\delta(B))\frac{w(B)m(B, \mathcal{O}_j)}{w(\mathcal{O}_j)}=\frac{1}{2}(2-\delta(B))m(B, \mathcal{O}_j, \mathcal{O}_j^\times). \end{equation} A priori, (\ref{eq:26}) holds for all $\dbr{B}\in \mathscr{B}_{[\gra]}(\mathcal{O}_j)$, but we may extend it to all members of $\mathscr{B}_{[\gra]}$. Indeed, if $\dbr{B'}\in \mathscr{B}_{[\gra]}-\mathscr{B}_{[\gra]}(\mathcal{O}_j)$, then both sides of (\ref{eq:26}) are zero since the sets ${\rm Emb}(B', \mathcal{O}_j)$ and $X(\mathcal{O}_j,\dbr{B'})$ are both empty. Similarly, the disjoint union in (\ref{eq:23}) can be taken to range over all $ \dbr{B}\in\mathscr{B}_{[\gra]}$. By \cite[Theorem 5.11, p.~92]{vigneras} (see also \cite[Lemma~3.2]{wei-yu:classno} and \cite[Lemma~3.2.1]{xue-yang-yu:ECNF}), \begin{equation} \label{eq:27} \sum_{j=1}^h m(B, \mathcal{O}_j, \mathcal{O}_j^\times) =h(B)\prod_{\ell}m_\ell(B).
\end{equation} Summing (\ref{eq:26}) over all $\dbr{B}\in \mathscr{B}_{[\gra]}$ and $1\leq j \leq h$, we obtain \begin{equation} \label{eq:28} \sum_{\dbr{B}\in \mathscr{B}_{[\gra]}} \sum_{j=1}^h \frac{\abs{Y(\mathcal{O}_j,\dbr{B})}}{w(\mathcal{O}_j)}=\sum_{\dbr{B}\in \mathscr{B}_{[\gra]}}\frac{1}{2}(2-\delta(B))h(B)\prod_{\ell}m_\ell(B). \end{equation} We exchange the order of summation on the left of (\ref{eq:28}) and apply (\ref{eq:18}) to get \begin{equation} \label{eq:29} \sum_{j=1}^h\sum_{\dbr{B}\in \mathscr{B}_{[\gra]}} \frac{\abs{Y(\mathcal{O}_j,\dbr{B})}}{w(\mathcal{O}_j)}= \sum_{j=1}^h\frac{\abs{Y(\mathcal{O}_j)}}{w(\mathcal{O}_j)}= \abs{ \Cl(\mathcal{O})^{[\gra]}}. \end{equation} This concludes the proof of the proposition. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:orbit-num-formula}] The general formula for $r(\mathcal{O})$ follows by combining Burnside's lemma with Proposition~\ref{prop:fixed-number}. Assume that $A=O_F$. Then $\delta(B)=1$ for all $B$. Recall that $\mathscr{B}_{[\gra]}\neq \emptyset$ if and only if $1\neq [\gra]\in \grC_2(F)$. Thus \eqref{eq:49} is a direct consequence of (\ref{eq:30}). \end{proof} As a consequence of Theorem~\ref{thm:orbit-num-formula}, we prove the following slight generalization of \cite[Corollaire~1.1, p.~81]{vigneras:ens}, which is stated there for Eichler orders. \begin{cor} Let $D$ be a totally definite quaternion algebra over a totally real field $F$, and $\mathcal{O}$ be an $O_F$-order in $D$. Then $h(F)$ divides $2h(\mathcal{O})$. \end{cor} \begin{proof} It is well known \cite[Corollary~5.4]{Conner-Hurrelbrink} that $h(B)/h(F)$ is integral for any CM $O_F$-order $B$. So the corollary follows directly from \eqref{eq:49}. \end{proof} \begin{cor}\label{cor:type-numer-tot-def-unram} Let $F$ be a totally real number field of even degree, and $D$ be the unique (up to isomorphism) totally definite quaternion $F$-algebra unramified at all finite places of $F$.
We have \begin{equation} \label{eq:33} t(D)=\frac{1}{h(F)}\left(h(D)+ \frac{1}{2}\sum_{\dbr{B}\in \mathscr{B}} h(B) \right). \end{equation} \end{cor} \begin{proof} As shown in the proof of Corollary~\ref{2.7}, we have $t(D)=r(\mathcal{O})$ for any maximal $O_F$-order $\mathcal{O}$ in $D$. By \cite[Theorem~II.3.2]{vigneras}, $m_\ell(B)=1$ for every prime $\ell\in \bbN$. The corollary then follows from \eqref{eq:49}. \end{proof} Of course, (\ref{eq:33}) is just a special case of the general type number formula for Eichler orders \cite[Theorem~A]{Pizer1973} (see also \cite[Corollary~V.2.6]{vigneras}). However, it describes more concretely the CM $O_F$-orders $B$ with nonzero contribution to the type number. Thanks to this, we prove in Theorem~\ref{thm:integrality} that $h(D)/h(F)$ is an integer, which is asserted by Vign\'eras \cite[Remarque, p.~82]{vigneras:ens}. In light of \eqref{eq:33}, it is enough to show that the cardinality of the following subset of $\mathscr{B}$ is even: \begin{equation}\label{eq:44} \mathscr{B}_{\mathrm{odd}}:=\{\dbr{B}\in \mathscr{B}\mid h(B)/h(F) \text{ is odd}\}. \end{equation} We will work out the details in Section~\ref{sec:integrality}. \section{Calculation of type numbers} \label{sec:03} Throughout this section, we fix a prime number $p$. Let $q$ be an odd power of $p$, and $\Sp(\sqrt{q})$ and $\calT(\Sp(\sqrt{q}))$ be as in Section~\ref{sec:01}. Let $F=\mathbb Q(\sqrt{p})$, $A=\mathbb Z[\sqrt{p}]$, and $D={D_{p,\infty}}\otimes_\mathbb Q F$ be the unique totally definite quaternion $F$-algebra unramified at all finite places. We may regard $D$ as the endomorphism algebra $\End_{\mathbb F_q}^0(X_0)$ of a member $X_0/\mathbb F_q$ in $\Sp(\sqrt{q})$. Let $\mathbb{O}_1$ be a maximal $O_F$-order in $D$. When $p\equiv 1 \pmod 4$, one has $A\neq O_F$, and $A/2O_F\cong \mathbb{F}_2$. In this case, we define $A$-orders $\bbO_r$ in $D$ for $r\in\{8,16\}$ as follows.
Let $\mathbb{O}_8, \mathbb{O}_{16}\subset \mathbb{O}_1$ be the proper $A$-orders such that \begin{gather} (\mathbb{O}_8)_2= \begin{pmatrix} A_2 & 2 O_{F_2} \\ O_{F_2} & O_{F_2}\\ \end{pmatrix},\qquad (\mathbb{O}_{16})_2=\Mat_2(A_2), \label{eq:bbOr1} \\ (\mathbb{O}_r)_\ell=(\mathbb{O}_1)_\ell\qquad \text{for } r\in \{8,16\} \text{ and every prime } \ell\neq 2. \label{eq:bbOr2} \end{gather} Here for any $\mathbb Z$-module $M$ and any prime $\ell\in \bbN$, we write $M_\ell$ for $M\otimes \mathbb Z_\ell$. The order $\mathbb{O}_r$ is of index $r$ in $\mathbb{O}_1$. By \cite[Theorem 6.1.2]{xue-yang-yu:ECNF} and \cite[Theorem 1.3]{xue-yang-yu:sp_as}, there is a bijection (depending on the choice of a base point in each genus) \begin{equation} \label{eq:3.3} \Sp(\sqrt{q})\simeq \coprod_{r} \Cl(\bbO_r), \end{equation} where $r\in \{1,8,16\}$ if $p\equiv 1 \pmod 4$ and $r=1$ otherwise. If $[I]$ is an ideal class corresponding to a class $[X]$ in $\Sp(\sqrt{q})$ in this bijection, then $\mathcal{O}_l(I)\simeq \End(X)$. Therefore, there is a bijection $\calT(\Sp(\sqrt{q}))\simeq \coprod_r \calT(\bbO_r)$, and \begin{equation} \label{eq:3.4} T(p):=|\calT(\Sp(\sqrt{q}))|= \begin{cases} t(\bbO_1) & \text{for $p\not \equiv 1 \pmod 4$}; \\ t(\bbO_1)+t(\bbO_8)+t(\bbO_{16}) & \text{for $p\equiv 1 \pmod 4$}. \end{cases} \end{equation} Put $\bbO_4:=O_F \bbO_8$, which is of index $4$ in $\bbO_1$. It is an Eichler order of level $2O_F$, and one has $\bbO_4=\bbO_1\cap w \bbO_1 w^{-1}$, where $w= \begin{pmatrix} 0 & 2 \\ 1 & 0 \\ \end{pmatrix}$. Since $D$ is unramified at all finite places, $\calN(\widehat \bbO_1) =\widehat F^\times \widehat \bbO_1^\times$. Clearly, $\calN(\widehat \bbO_8)\subseteq \calN(\widehat \bbO_4)$, and $\calN(\widehat\bbO_{16})\subseteq \calN(\widehat\bbO_1)$. Direct calculations show that $\calN(\widehat \bbO_8)=\widehat F^\times \widehat \bbO_4^\times$ and $\calN(\widehat \bbO_{16})=\widehat F^\times \widehat \bbO_{16}^\times$ for $p\equiv 1 \pmod 4$. 
Therefore, we have natural bijections \begin{equation} \label{eq:3.5} \calT(\bbO_1)\simeq \frac{\Cl(\bbO_1)}{\Pic(O_F)}, \quad \calT(\bbO_8) \simeq \frac{\Cl(\bbO_4)}{\Pic(O_F)}, \quad \calT(\bbO_{16})\simeq \frac{\Cl(\bbO_{16})}{\Pic(A)}. \end{equation} \begin{prop}\label{3.1} For every prime $p$, we have $t(\bbO_1)=h(\bbO_1)/h(F)$. Moreover, if $p\equiv 1 \pmod 4$, then $t(\bbO_8)=h(\bbO_4)/h(F)$ and $t(\bbO_{16})=h(\bbO_{16})/h(A)$. \end{prop} \begin{proof} It is known that $h(F)$ is odd for every prime $p$ (see \cite[Corollary (18.4), p.~134]{Conner-Hurrelbrink}). By Corollary~\ref{endo.4}, $\Pic(O_F)$ acts freely on $\Cl(\bbO_1)$ and $\Cl(\bbO_4)$. For $r=16$, we have $\widetilde A=A$. On the other hand, one easily computes that $h(A)=h(F)$ if $p\equiv 1 \pmod 8$, and $h(A)=3h(F)/\varpi$ if $p\equiv 5 \pmod 8$, where $\varpi:=[O_F^\times:A^\times]$. In particular, $h(A)$ is odd. Hence the action of $\Pic(A)$ on $\Cl(\bbO_{16})$ is free as well, again by Corollary~\ref{endo.4}. The proposition then follows from (\ref{eq:3.5}). \end{proof} \def{\rm Mass}{{\rm Mass}} \def{\rm Emb}{{\rm Emb}} For any square-free integer $d\in \bbZ$, we write $h(d)$ for the class number $h(\mathbb Q(\sqrt{d}))$. The discriminant of $F/\mathbb Q$ is denoted by $\grd_F$. We also set $K_j:=F(\sqrt{-j})=\mathbb Q(\sqrt{p},\sqrt{-j})$ for $j\in \{1,2,3\}$. \begin{lemma}\label{3.2} Assume that $p\equiv 1\pmod 4$. Then \begin{equation} \label{eq:3.6} h(\bbO_4)= \begin{cases} \frac{9}{2} \zeta_F(-1)h(F)+\frac{1}{4} h(K_1) & \text{for $p\equiv 1\pmod 8$;} \\ \frac{5}{2} \zeta_F(-1)h(F)+\frac{1}{4} h(K_1)+\frac{2}{3} h(K_3) & \text{for $p\equiv 5 \pmod 8$.} \end{cases} \end{equation} In particular, $h(\bbO_4)=1$ if $p=5$. \end{lemma} The special value $\zeta_F(-1)$ of the Dedekind zeta-function $\zeta_F(s)$ can be calculated by Siegel's formula \cite[Table~2, p.~70]{Zagier-1976-zeta}.
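Alternatively, $\zeta_F(-1)$ can be evaluated directly from the generalized Bernoulli number $B_{2,\chi}$ via the identity $\zeta_F(-1)=\frac{1}{24}B_{2,\chi}$ recalled in the next paragraph, using the standard formula $B_{2,\chi}=\grd_F\sum_{a=1}^{\grd_F}\chi(a)B_2(a/\grd_F)$ with $B_2(x)=x^2-x+\frac{1}{6}$. A small Python sketch (our own helpers, restricted to odd fundamental discriminants, where $\chi(a)$ is the Jacobi symbol $\Lsymb{a}{\grd_F}$):

```python
from fractions import Fraction

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via the standard
    quadratic-reciprocity loop."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def zeta_minus_one(disc):
    """zeta_F(-1) for F real quadratic with odd fundamental discriminant
    `disc`, computed exactly as B_{2,chi}/24."""
    B2 = lambda x: x * x - x + Fraction(1, 6)
    B2chi = disc * sum(jacobi(a, disc) * B2(Fraction(a, disc))
                       for a in range(1, disc + 1))
    return B2chi / 24

assert zeta_minus_one(5) == Fraction(1, 30)   # zeta_{Q(sqrt 5)}(-1) = 1/30
```

For $p=5$ this recovers $\zeta_F(-1)=\frac{1}{30}$, which via Lemma~\ref{3.2} is consistent with $h(\bbO_4)=1$.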
It is also known that $\zeta_F(-1)=\frac{1}{24}B_{2,\chi}$, where $B_{2,\chi}$ is the generalized Bernoulli number attached to the primitive quadratic Dirichlet character (of conductor $\grd_F$) associated to the quadratic field $F$. This can be obtained, for example, by combining \cite[formula (16), p.~65]{Zagier-1976-zeta} with \cite[Exercise~4.2(a)]{Washington-cyclotomic}. \begin{proof} In this case $\bbO_4$ is an Eichler order of square-free level $2O_F$. Let $m_2(B)$ be as in (\ref{eq:35}), with $\ell=2$ and $\mathcal{O}=\bbO_4$. Using the formula \cite[p.~94]{vigneras} (cf.~\cite[Section~3.4]{xue-yang-yu:ECNF}), we compute that $m_2(O_{K_1})=1$ and $m_2(O_{K_3})=1-\Lsymb{2}{p}$, where $\Lsymb{2}{p}$ is the Legendre symbol. Let $\zeta_n$ be a primitive $n$-th root of unity for each $n>1$. When $p=5$, we also need to compute $m_2(\mathbb Z[\zeta_{10}])$, which is zero. Using Eichler's class number formula (see \cite[Chapter V]{vigneras}) and the data \cite[Sect.~2.8 and Prop.~3.1]{xue-yang-yu:num_inv}, we obtain a formula for $h(\bbO_4)$. One can also apply Vign\'eras's formula \cite[Theorem 3.1]{vigneras:ens} to the Eichler order $\bbO_4$. \end{proof} \begin{proof}[Proof of Theorem~\ref{1.2}] By \cite[Theorem 3.1]{vigneras:ens} (cf.~\cite[Proposition~6.2.10]{xue-yang-yu:ECNF}), we have $h(\bbO_1)=1, 2, 1$ for $p=2,3,5$, respectively. Since $h(F)=h(p)=1$ for $p=2,3$ and $5$, by Proposition~\ref{3.1} we obtain \begin{equation} \label{eq:tO1} t(\bbO_1)= 1,2,1 \quad \text{ for $p=2,3,5$, respectively.} \end{equation} By Lemma~\ref{3.2} and \cite[Proposition~6.2.10]{xue-yang-yu:ECNF}, we have $h(\bbO_4)=h(\bbO_{16})=1$ for $p=5$. Therefore, \begin{equation} \label{eq:O8-16} t(\bbO_8)=t(\bbO_{16})=1 \quad \text{for $p=5$}. \end{equation} Assume that $p\equiv 3 \pmod 4$ and $p\ge 7$.
By \cite[(6.17)]{xue-yang-yu:ECNF} and Proposition~\ref{3.1}, we have \begin{equation} \label{eq:type-O1-p3mod4} t(\bbO_1)= \frac{\zeta_F(-1)}{2} + \left(13-5\left(\frac{2}{p}\right) \right)\frac{h(-p)}{8}+\frac{h(-2p)}{4}+\frac{h(-3p)}{6}. \end{equation} Assume $p\equiv 1 \pmod 4$ and $p\ge 7$. By \cite[(6.16)]{xue-yang-yu:ECNF} and Proposition~\ref{3.1}, we obtain \begin{equation} \label{eq:type-O1-p1mod4} t(\bbO_1)=\frac{\zeta_F(-1)}{2} + \frac{h(-p)}{8}+\frac{h(-3p)}{6}. \end{equation} By Proposition~\ref{3.1} and Lemma~\ref{3.2}, we obtain \begin{equation} \label{eq:tO8} t(\bbO_8)= \begin{cases} \frac{9}{2}\zeta_F(-1)+h(-p)/8 & \text{for $p\equiv 1 \pmod 8$;} \\ \frac{5}{2}\zeta_F(-1)+h(-p)/8+h(-3p)/3 & \text{for $p\equiv 5 \pmod 8$.} \end{cases} \end{equation} Here we use the result of Herglotz \cite{MR1544516} (see also \cite[Chapter~3]{vigneras:ens} and \cite[Subsection~2.10]{xue-yang-yu:num_inv}) to factor out $h(F)$ from $h(K_1)$ and $h(K_3)$. By the formulas for $h(\bbO_{16})$ \cite[Section~6.2.4]{xue-yang-yu:ECNF} and Proposition~\ref{3.1}, we get \begin{equation} \label{eq:tO16} t(\bbO_{16})= \begin{cases} 3 \zeta_F(-1)+h(-p)/4+h(-3p)/2 & \text{for $p\equiv 1 \pmod 8$;} \\ 5 \zeta_F(-1)+h(-p)/4+h(-3p)/6 & \text{for $p\equiv 5 \pmod 8$.} \end{cases} \end{equation} Theorem~\ref{1.2} follows from (\ref{eq:3.4}) and (\ref{eq:tO1}) -- (\ref{eq:tO16}). \end{proof} \begin{remark}\label{3.3} (1) Comparing our formulas for $h(\bbO_8)$ and $h(\bbO_4)$, we see that $h(\bbO_4)=h(\bbO_8)$ if $p\equiv 1\pmod 8$, or if $p\equiv 5\pmod 8$ and $\varpi=[O_F^\times:A^\times]=3$. In both cases, we have $h(A)=h(F)$, and hence \begin{equation} \label{eq:3.13} \begin{split} T(p)&=\frac{h(\bbO_1)}{h(F)}+\frac{h(\bbO_4)}{h(F)}+ \frac{h(\bbO_{16})} {h(A)} \\ &=\frac{h(\bbO_1)}{h(F)}+\frac{h(\bbO_8)}{h(F)}+ \frac{h(\bbO_{16})}{h(F)}=\frac{H(p)}{h(F)}. 
\end{split} \end{equation} (2) Note that $\bbO_4$ is also defined for $p\not \equiv 1\pmod 4$, as an Eichler order of (non-square-free) level $2O_F$. We calculate $h(\bbO_4)$ and obtain that $h(\bbO_4)=1,2$ for $p=2,3$, and for $p\ge 7$ and $p\equiv 3 \pmod 4$, \begin{equation} \label{eq:3.14} h(\bbO_4)=3 \zeta_F(-1) h(F)+\left (15- 3\left(\frac{2}{p}\right)\right)\frac{h(K_1)}{4}. \end{equation} More work is needed for computing the numbers of local optimal embeddings at $2$. We omit the details as (\ref{eq:3.14}) is not used in the present paper. \end{remark} Assume that $p\equiv 1\pmod{4}$. Let us calculate $r(\bbO_8)$, the number of orbits for the $\Pic(A)$-action on $\Cl(\bbO_8)$. As mentioned above, if either $p\equiv 5\pmod{8}$ and $\varpi=3$ or $p\equiv 1\pmod{8}$, then we have a natural isomorphism $\Pic(A)\simeq \Pic(O_F)$. Since the class number of $F$ is odd, $\Pic(A)$ acts freely on $\Cl(\bbO_8)$ in this case by Lemma~\ref{endo.2}. Combining with part (1) of Remark~\ref{3.3}, we obtain \begin{equation} \label{eq:38} r(\bbO_8)=\frac{h(\bbO_4)}{h(F)}=t(\bbO_8). \end{equation} Surprisingly, this holds for the case that $p\equiv 5\pmod{8}$ and $\varpi=1$ as well (to be proved in Subsection~\ref{sect:orbit-O8}). But before that, we relax the condition on $p$ a little bit and study a more general real quadratic field $F=\mathbb Q(\sqrt{d})$. \begin{ex}\label{ex:ker-cyclic-order-3} Let $d\in \bbN$ be a square-free positive integer congruent to $5$ modulo $8$. Whether the fundamental unit $\varepsilon$ of $F=\mathbb Q(\sqrt{d})$ lies in $A=\mathbb Z[\sqrt{d}]$ or not is a classical problem that dates back to Eisenstein \cite{Eisenstein1844}, and there seems to be no simple criterion on $d$ for it. However, it is known \cite{Alperin, Stevenhagen} that the number of $d$ for each case is infinite.
By \cite[Section~3]{Alperin}, the kernel of $i_{O_F/A}:\Pic(A)\to \Pic(O_F)$ is generated by the locally principal $A$-ideal class $[\gra]$ represented by $\gra=4\mathbb Z + (1+\sqrt{d})\mathbb Z$. Moreover, $\gra^3=8A$, and $\gra$ is non-principal if and only if $\varepsilon \in A$. Let $K=\mathbb Q(\sqrt{d}, \sqrt{-3})$, and $B_{3,2}=A[\sqrt{-3},\frac{1+\sqrt{d}}{2}\zeta_6]$ with $\zeta_6=(1+\sqrt{-3})/2$. One calculates that \begin{equation} \label{eq:42} B_{3,2}=\mathbb Z+\mathbb Z\sqrt{d}+\mathbb Z\frac{\sqrt{d}+\sqrt{-3}}{2}+\mathbb Z\frac{(1+\sqrt{d})(1+\sqrt{-3})}{4}\subset O_K. \end{equation} In particular, $B_{3,2}$ is a CM proper $A$-order with $[O_K:B_{3,2}]=2$ and $\delta(B_{3,2})=0$. We have $2O_K\subset B_{3,2}$, and $B_{3,2}/2O_K=\mathbb F_4\oplus \mathbb F_2\subset \mathbb F_4\oplus \mathbb F_4=O_K/2O_K$ (cf.~\cite[Section~4.8]{xue-yang-yu:num_inv}). Note that $\zeta_6\not\in B_{3,2}$, otherwise $B_{3,2}=O_K$ by \cite[Exercise~II.42(d), p.~51]{MR0457396}. Thus, we have \[3=[(O_K/2O_K)^\times: (B_{3,2}/2O_K)^\times]\geq [O_K^\times: B_{3,2}^\times]\geq 3,\] and hence $[O_K^\times: B_{3,2}^\times]=3$. It follows from \cite[Theorem~I.12.12]{Neukirch-ANT} that $h(B_{3,2})=h(O_K)$, and the canonical map $\Pic(B_{3,2})\to \Pic(O_K)$ is an isomorphism. Therefore, $\ker(i_{B_{3,2}/A})=\ker(i_{O_K/A})\supseteq \ker(i_{O_F/A})=\dangle{[\gra]}$. Now assume that $3\nmid d$, and $\varepsilon\in A$ so that $\gra$ is non-principal. Then $K/F$ is ramified at every place of $F$ above $3$. Hence $O_K^\times =O_F^\times \boldsymbol \mu(K)$ and $i_{K/F}: \Pic(O_F)\to \Pic(O_K)$ is an embedding by \cite[Lemma~13.5]{Conner-Hurrelbrink}. Therefore, we have \[\ker(i_{B_{3,2}/A})=\dangle{[\gra]}\simeq \zmod{3}. \] It is known \cite[Theorem~4.1]{Alperin} that there are infinitely many square-free $d$ of the form $4n^2+1$ with an odd $n>3$ such that $\varepsilon\in A$. In particular, there are infinitely many $d$ satisfying our assumptions.
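Two assertions quoted from Alperin above — that $\gra^3=8A$, and the dichotomy of whether $\varepsilon\in A$ — are easy to verify numerically for small $d\equiv 5\pmod 8$. The sketch below (illustrative only; names and the lattice encoding are ours) models ideals of $A=\mathbb Z[\sqrt{d}]$ as $\mathbb Z$-lattices in the basis $\{1,\sqrt{d}\}$, multiplies them, and compares Hermite-type normal forms; the unit check finds the minimal solution of $x^2-dy^2=\pm 4$, so that $\varepsilon=(x+y\sqrt{d})/2$ lies in $A$ exactly when $x$ and $y$ are both even.

```python
from math import gcd, isqrt

def hnf2(gens):
    """Normal form (a, b, c) of the Z-lattice spanned by pairs (x, y),
    each encoding x + y*sqrt(d): lattice = Z*(a, 0) + Z*(b, c)."""
    vx, vy = 0, 0
    xs = []
    for x, y in gens:
        while y != 0:           # Euclid on the sqrt(d)-coordinates
            if vy == 0:
                vx, vy, x, y = x, y, 0, 0
            else:
                q = y // vy
                x, y = x - q * vx, y - q * vy
                if y != 0:
                    vx, vy, x, y = x, y, vx, vy
        xs.append(x)
    a = 0
    for x in xs:                # the y = 0 sublattice
        a = gcd(a, x)
    if vy < 0:
        vx, vy = -vx, -vy
    if a:
        vx %= a
    return (a, vx, vy)

def mult(L1, L2, d):
    """Product of two Z-lattices in Z[sqrt(d)]: span of pairwise products."""
    return [(x1 * x2 + d * y1 * y2, x1 * y2 + x2 * y1)
            for x1, y1 in L1 for x2, y2 in L2]

def cube_is_8A(d):
    """Check that a^3 = 8A for a = 4Z + (1 + sqrt(d))Z."""
    a = [(4, 0), (1, 1)]
    a3 = mult(mult(a, a, d), a, d)
    return hnf2(a3) == hnf2([(8, 0), (0, 8)])

def eps_in_A(d):
    """True iff the fundamental unit of Q(sqrt(d)), d = 5 (mod 8), lies in
    A = Z[sqrt(d)]: take the minimal solution of x^2 - d*y^2 = +-4; the
    unit (x + y*sqrt(d))/2 is in A iff x and y are both even."""
    y = 1
    while True:
        for n in (d * y * y - 4, d * y * y + 4):
            x = isqrt(n)
            if x * x == n:
                return x % 2 == 0 and y % 2 == 0
        y += 1
```

For instance, $\varepsilon\notin A$ for $d=5,13,29$, while $\varepsilon=6+\sqrt{37}\in A$ for $d=37$; in all these cases $\gra^3=8A$, as asserted.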
\end{ex} \begin{sect}\label{sect:orbit-O8} We return to the case that $F=\mathbb Q(\sqrt{p})$ with $p\equiv 5\pmod{8}$. Assume that $\varepsilon\in A=\mathbb Z[\sqrt{p}]$ so that $\ker(i_{O_F/A})=\dangle{[\gra]}\simeq \zmod{3}$, with $\gra=4\mathbb Z+(1+\sqrt{p})\mathbb Z$ as in Example~\ref{ex:ker-cyclic-order-3}. In order to compute $r(\bbO_8)$, we list all CM proper $A$-orders $B$ with $\ker(i_{B/A})$ nontrivial. Let $K$ be the fraction field of $B$. Since $h(F)$ is odd, the morphism $i_{K/F}: \Pic(O_F)\to \Pic(O_K)$ is injective, and hence $\ker(i_{B/A})\subseteq \ker(i_{O_F/A})$. By the proof of Proposition~\ref{prop:finiteness-CM-proper-A-order}, necessarily $[O_K^\times:O_F^\times]>1$, and there exists a unit $u\in O_K^\times$ such that $u\not\in O_F^\times$ and $u\gra/2\subseteq B$. According to \cite[Subsection~2.8]{xue-yang-yu:num_inv}, $K$ coincides with either $F(\sqrt{-1})$ or $F(\sqrt{-3})$, and $O_K^\times=O_F^\times\boldsymbol \mu(K)$ in both cases. By the assumption, $O_F^\times=A^\times$. First, suppose that $K=F(\sqrt{-1})$. Then $O_K^\times/A^\times$ is a cyclic group of order $2$ generated by the coset of $\sqrt{-1}$. Note that $A[\gra\sqrt{-1}/2]\supseteq O_F$, because \[\left(\frac{(1+\sqrt{p})\sqrt{-1}}{2}\right)^2=-\left(\frac{p-1}{4}+\frac{1+\sqrt{p}}{2}\right).\] Hence there is no CM \emph{proper} $A$-order $B$ in $F(\sqrt{-1})$ with $\ker(i_{B/A})$ nontrivial. Second, suppose that $K=F(\sqrt{-3})$. Then $O_K^\times/A^\times$ is a cyclic group of order $3$ generated by the coset of $\zeta_6$. We have $A[\gra\zeta_6/2]=B_{3,2}$ in (\ref{eq:42}), and $A[\gra\zeta_6^{-1}/2]=\bar B_{3,2}$, the complex conjugate of $B_{3,2}$. Therefore, up to isomorphism, $B=B_{3,2}$ is the unique CM proper $A$-order with $\ker(i_{B/A})$ nontrivial. The same proof as that in \cite[Subsection~6.2.5]{xue-yang-yu:ECNF} shows that $m_2(B_{3,2})=1$ for $\mathcal{O}=\bbO_8$.
It follows from Theorem~\ref{thm:orbit-num-formula} that \begin{equation} \label{eq:43} r(\bbO_8)=\frac{1}{3h(F)}(h(\bbO_8)+2h(B_{3,2}))=\frac{5}{2}\zeta_F(-1)+\frac{1}{8}h(-p)+\frac{1}{3}h(-3p). \end{equation} Comparing with (\ref{eq:tO8}), we find that (\ref{eq:38}) holds in the current setting as well. \end{sect} \section{Integrality of $h(D)/h(F)$ when $\grd(D)=O_F$} \label{sec:integrality} Let $F$ be a totally real number field, and $D$ be a totally definite quaternion algebra over $F$. The goal of this section is two-fold: first, we characterize all the CM $O_F$-orders $B$ for which the kernel of $i_{B/O_F}:\Pic(O_F)\to \Pic(B)$ is nontrivial; second, we show that $h(D)/h(F)$ is integral when $[F:\mathbb Q]$ is even and $D$ is unramified at all finite places of $F$ (i.e. the reduced discriminant $\grd(D)=O_F$). For simplicity, we identify $F$ with a subfield of $\mathbb C$, and consider CM extensions of $F$ as subfields of $\mathbb C$ as well. In this way, two CM $O_F$-orders are isomorphic if and only if they are the same. The set $\mathscr{B}$ defined in (\ref{eq:41}) with $A=O_F$ becomes \[\mathscr{B}=\{B\mid B \text{ is a CM $O_F$-order, and } \ker(i_{B/O_F})\neq \{1\}\}. \] By Lemma~\ref{lem:kernel-2-torsion}, $\ker(i_{B/O_F})\simeq \zmod{2}$ for every $B\in \mathscr{B}$. In particular, if $h(F)$ is odd, then $\mathscr{B}=\emptyset$ and $h(D)/h(F)$ is an integer by Corollary~\ref{2.7}. So we shall focus on the case where $h(F)$ is even. The \emph{finitely many} CM extensions $K/F$ with $\ker(i_{K/F})$ nontrivial are classified in \cite[Section~14]{Conner-Hurrelbrink}. It remains to characterize all $O_F$-orders $B\subseteq O_K$ with $\ker(i_{B/O_F})=\ker(i_{K/F})$ for each such $K$. \begin{lem}\label{lem:suborder-nontrivial-kernel} Let $K/F$ be a CM-extension with $\ker(i_{K/F})\neq \{1\}$, and let $[\gra]\in \ker(i_{K/F})$ be the unique nontrivial ideal class so that $\gra O_K=\lambda O_K$ for some $\lambda\in K^\times$.
\begin{enumerate}[(i)] \item Suppose that $K\neq F(\sqrt{-1})$. Then $[\gra]\in \ker(i_{B/O_F})$ if and only if $B$ contains the $O_F$-order $B_0:=O_F\oplus\grI$, where $\grI$ denotes the purely imaginary $O_F$-submodule $\{z\in O_K\mid \bar{z}=-z\}$ of $O_K$. \item Suppose that $K=F(\sqrt{-1})$. Let $n=\max\{ m \in \bbN\mid \mathbb Q(\zeta_{2^m})\subseteq K\}$, where $\zeta_{2^m}$ denotes a primitive $2^m$-th root of unity. Put $\eta=\zeta_{2^n}$, and $\grI_\eta=\{z\in O_K\mid \bar{z}=\eta z\}$. Then $[\gra]\in \ker(i_{B/O_F})$ if and only if $B$ contains the $O_F$-order $B_0:=O_F\oplus\grI_\eta$. Moreover, $\dangle{\eta}\subseteq B_0^\times$, so $B_0$ does not depend on the choice of $\zeta_{2^n}$. \end{enumerate} \end{lem} \begin{proof} By Lemma~\ref{lem:char-CM-order-nontrivial-i}, $[\gra]\in \ker(i_{B/A})$ if and only if there exists $u\in O_K^\times$ such that $u\gra/\lambda\subset B$. Let $\boldsymbol \mu(K)$ be the group of roots of unity in $K$. Because $\ker(i_{K/F})$ is nontrivial, it follows from \cite[Section~13, p.~68]{Conner-Hurrelbrink} that the CM-extension $K/F$ is of type I, i.e. $O_K^\times=O_F^\times\boldsymbol \mu(K)$. Write $u=v\xi$ with $v\in O_F^\times$ and $\xi\in \boldsymbol \mu(K)$. Then $\xi\gra/\lambda=u\gra/\lambda\subset B$. Therefore, $[\gra]\in \ker(i_{B/O_F})$ if and only if there exists a root of unity $\xi\in \boldsymbol \mu(K)$ such that $\xi\gra/\lambda\subset B$. (i) Now suppose that $K\neq F(\sqrt{-1})$. By \cite[Section~14]{Conner-Hurrelbrink}, replacing $\lambda$ by $\lambda u'$ for some $u'\in O_K^\times$, we may assume that $\bar{\lambda}=-\lambda$ and $\gra^2=-\lambda^2 O_F$. Clearly, $\gra/\lambda \subseteq\grI$, and $B_0:=O_F\oplus\grI$ is an $O_F$-suborder of $O_K$. By Lemma~\ref{lem:char-CM-order-nontrivial-i}, $[\gra]\in \ker(i_{B/O_F})$ for any $O_F$-order $B\supseteq B_0$. Now we prove the other direction. We first claim that $\gra/\lambda =\grI$.
Since both sides are invertible $O_F$-modules, there exists a nonzero $O_F$-ideal $\grc\subseteq O_F$ such that $\gra/\lambda =\grc\grI$. Then $O_K=\frac{\gra}{\lambda}O_K=\grc\grI O_K\subseteq \grc O_K$. It follows that $\grc O_K=O_K$, and hence $\grc=O_F$ by Lemma~\ref{lem:eq-unit-ideal}. Let $B$ be an $O_F$-suborder of $O_K$ such that $\xi\gra/\lambda\subseteq B$ for some $\xi\in \boldsymbol \mu(K)$. Replacing $\xi$ by $-\xi$ if necessary, we may assume that $\xi$ has odd order. Then $\xi^2\in \xi^2O_F=(\xi\gra/\lambda)^2\subseteq B$, and hence $\dangle{\xi}\subset B$. It follows that $\grI=\gra/\lambda\subseteq B$, and hence $B\supseteq B_0$. (ii) Suppose that $K=F(\sqrt{-1})$. Note that both $\eta$ and $\bar\eta$ belong to $B_0$ as $1+\bar\eta\in \grI_\eta$ and $\eta+\bar\eta\in O_F$. If both $x$ and $y$ are elements of $\grI_\eta$, then $xy\in O_F\bar\eta$ as $\overline{xy}=(xy \eta)\cdot \eta$ and $\overline{xy\eta}= xy\eta$. Thus $B_0$ is an $O_F$-suborder of $O_K$, so $\dangle{\eta}\subset B_0^\times$. By \cite[Corollary~14.7]{Conner-Hurrelbrink} and the Remark on \cite[p.~88]{Conner-Hurrelbrink}, the unique nontrivial ideal class in $\ker(\Pic(O_F)\to \Pic(O_K))$ is represented by an $O_F$-ideal $\gra$ such that $\gra^2=(2+\eta+\bar\eta)O_F$ and $\gra O_K=(1+\eta)O_K$. We have $\gra/(1+\eta)\subseteq \grI_\eta\subset B_0$. By Lemma~\ref{lem:char-CM-order-nontrivial-i} again, $[\gra]\in \ker(i_{B/O_F})$ for any order $B\supseteq B_0$. Conversely, let $B\subseteq O_K$ be an $O_F$-suborder such that $\xi\gra/(1+\eta)\subseteq B$ for some $\xi\in \boldsymbol \mu(K)$. The same proof as that in (i) shows that $\gra/(1+\eta)=\grI_\eta$. Note that $2+\eta+\bar \eta\in \gra^2$. We have \begin{equation} \label{eq:2} \xi^2\bar\eta=\frac{\xi^2(2+\eta+\bar\eta)}{(1+\eta)^2}\in \left(\frac{\xi\gra}{1+\eta}\right)^2\subseteq B. \end{equation} Write $\xi=\xi'\eta^r$ such that $s:=\mathrm{ord}(\xi')$ is odd. 
Then \[\dangle{\eta}=\dangle{\eta^{s(2r-1)}}=\dangle{(\xi^2\bar{\eta})^s}\subseteq \dangle{\xi^2\bar\eta}\subseteq B^\times.\] It follows from (\ref{eq:2}) that $\xi'^2\in B$, and hence $\xi'\in B$ as well. We conclude that $\xi\in B$, and $\grI_\eta=\gra/(1+\eta)\subseteq B$. This completes the proof of the lemma. \end{proof} \begin{cor}\label{cor:conductor} Keep the notation and assumptions of Lemma~\ref{lem:suborder-nontrivial-kernel}. The index $[O_K^\times: B^\times]$ is odd for any $O_F$-order $B\subseteq O_K$ with $[\gra]\in \ker(i_{B/O_F})$. If $K\neq F(\sqrt{-1})$, then $2O_K\subseteq B$; if $K=F(\sqrt{-1})$, then $(2-\eta-\bar\eta)O_K\subseteq B$. \end{cor} \begin{proof} It is enough to prove the corollary for the case $B=B_0$. As pointed out in the proof of Lemma~\ref{lem:suborder-nontrivial-kernel}, we have $O_K^\times=O_F^\times\boldsymbol \mu(K)$. The 2-primary subgroup of $\boldsymbol \mu(K)$ is contained in $B_0^\times$, so $[O_K^\times: B_0^\times]$ is odd. Clearly, $2O_K\subseteq B_0$ if $K\neq F(\sqrt{-1})$. Suppose that $K=F(\sqrt{-1})$. For any $\alpha\in O_K$, write $\alpha=a+b$ with $a\in F$ and $b\in \grI_\eta\otimes_{O_F} F\subseteq K$. We then have \begin{equation} \label{eq:1} a=\frac{-\eta \alpha+\bar \alpha}{1-\eta},\qquad b=\frac{\alpha-\bar\alpha}{1-\eta}. \end{equation} Both $(2-\eta-\bar\eta)a$ and $(2-\eta-\bar\eta)b$ are integral, and the corollary follows. \end{proof} Now assume that $h(F)$ is even and consider the following set in (\ref{eq:44}): \begin{equation}\label{eq:45} \mathscr{B}_{\mathrm{odd}}=\{B\in \mathscr{B}\mid h(B)/h(F) \text{ is odd}\}\subseteq \mathscr{B}. \end{equation} Our goal is to show that $\abs{\mathscr{B}_{\mathrm{odd}}}$ is even. Clearly, if $B\in \mathscr{B}_{\mathrm{odd}}$, then $O_K\in \mathscr{B}_{\mathrm{odd}}$ as well with $K$ being the fraction field of $B$.
By a theorem of Kummer (cf.~\cite[Theorem 13.14]{Conner-Hurrelbrink}), if $h(F)$ is even and $h(K)/h(F)$ is odd, then $\ker (i_{K/F})$ is nontrivial. The CM extensions $K/F$ with odd relative class number $h(K)/h(F)$ are classified in \cite[Section~16]{Conner-Hurrelbrink}. We fix such a $K$ and characterize all the $O_F$-suborders $B\subseteq O_K$ that lie in $\mathscr{B}_{\mathrm{odd}}$. \begin{prop}\label{prop:minimal-odd-rlCN-order} Let $K/F$ be a CM-extension with $h(F)$ even and the relative class number $h(K)/h(F)$ \emph{odd}. Denote by $\mathfrak{f}_\diamond$ the product of all dyadic prime $O_F$-ideals that are unramified in $K$. An $O_F$-order $B\subseteq O_K$ has odd relative class number $h(B)/h(F)$ if and only if $B$ contains the $O_F$-order $B_\diamond:=O_F+\mathfrak{f}_\diamond O_K$. Moreover, $B_\diamond=O_K$ if and only if both of the following conditions hold: \begin{itemize} \item $F$ has a single dyadic prime, and \item it is ramified in $K/F$. \end{itemize} \end{prop} \begin{proof} Since $h(F)$ is assumed to be even and $h(K)/h(F)$ is odd, we know by \cite[Section~16]{Conner-Hurrelbrink} that the $2$-primary subgroup $\Pic(O_F)[2^\infty]$ of $\Pic(O_F)$ is cyclic, and \begin{equation} \label{eq:3} \ker(i_{K/F}:\Pic(O_F)\to \Pic(O_K))=\Pic(O_F)[2]\simeq \zmod{2}. \end{equation} Let $[\gra]\in \ker (i_{K/F})$ be the unique nontrivial ideal class as in Lemma~\ref{lem:suborder-nontrivial-kernel}. Then $i_{B/O_F}([\gra])\in \ker(\Pic(B)\twoheadrightarrow \Pic(O_K))$ for any $O_F$-order $B\subseteq O_K$. If $h(B)/h(F)$ is odd, then $h(B)/h(K)$ is odd and $[\gra]\in \ker (i_{B/O_F})$, and hence $B\supseteq B_0$ by Lemma~\ref{lem:suborder-nontrivial-kernel}. By Corollary~\ref{cor:conductor}, for such an order $B$, its conductor $\mathfrak{f}(B)\subseteq O_F$ is a product of dyadic primes of $O_F$, and $[O_K^\times:B^\times]$ is odd.
The class number of an $O_F$-order $B\subseteq O_K$ is given by \cite[p.~75]{vigneras:ens} \begin{equation} \label{eq:4} h(B)=\frac{h(K)\Nm_{F/\mathbb Q}(\mathfrak{f}(B))}{[O_K^\times: B^\times]}\prod_{\mathfrak{p}\mid \mathfrak{f}(B)}\left(1- \frac{\Lsymb{K}{\mathfrak{p}}}{\Nm_{F/\mathbb Q}(\mathfrak{p})}\right), \end{equation} where $\Lsymb{K}{\mathfrak{p}}$ is the Artin symbol \cite[p.~94]{vigneras}. More explicitly, \[\Lsymb{K}{\mathfrak{p}}=\begin{cases} 1 \quad &\text{if } \mathfrak{p} \text{ splits in } K;\\ 0 \quad &\text{if } \mathfrak{p} \text{ ramifies in } K;\\ -1 \quad &\text{if } \mathfrak{p} \text{ is inert in } K.\\ \end{cases}\] It follows that $h(B)/h(F)$ is odd if and only if \begin{enumerate} \item $\mathfrak{f}(B)$ divides $\mathfrak{f}_0:=\mathfrak{f}(B_0)$ and is square-free; and \item every prime divisor $\mathfrak{p}$ of $\mathfrak{f}(B)$ is unramified in $K$. \end{enumerate} We claim that $\mathfrak{f}_0$ is divisible by every dyadic prime $\mathfrak{p}$ of $F$ that is unramified in $K$. It is enough to prove it locally, so we use a subscript $\mathfrak{p}$ to indicate completion at $\mathfrak{p}$. For example, $F_\mathfrak{p}$ denotes the $\mathfrak{p}$-adic completion of $F$. First, suppose that $K=F(\sqrt{-1})$. Let $\mathfrak{p}$ be a dyadic prime of $F$, and $\nu_\mathfrak{p}$ be its associated valuation. By \cite[63:3]{o-meara-quad-forms}, $K/F$ is unramified at $\mathfrak{p}$ if and only if there exists a unit $u\in O_{F_\mathfrak{p}}^\times$ such that $-1\equiv u^2 \pmod{4O_{F_\mathfrak{p}}}$. Assume that this is the case. Then $O_{K_\mathfrak{p}}=O_{F_\mathfrak{p}}+O_{F_\mathfrak{p}}(u+\sqrt{-1})/2$. Given $m\in \bbN$, we have $\mathfrak{p}^mO_{K_\mathfrak{p}}\subset (B_0)_\mathfrak{p}$ if and only if $\mathfrak{p}^m(u+\sqrt{-1})/2\subset (B_0)_\mathfrak{p}$. Write $(u+\sqrt{-1})/2=a+b$ with $a\in F_\mathfrak{p}$ and $b\in \grI_\eta\otimes_{O_F}F_\mathfrak{p}$.
Then by (\ref{eq:1}), \[a=\frac{u}{2}-\frac{(1+\eta)\sqrt{-1}}{2(1-\eta)}, \qquad b=\frac{\sqrt{-1}}{1-\eta}.\] We obtain that \begin{equation} \label{eq:5} \nu_\mathfrak{p}(\mathfrak{f}_0)=\frac{1}{2}\nu_\mathfrak{p}(\Nm_{K/F}(1-\eta))=\frac{1}{2}\nu_\mathfrak{p}(2-\eta-\bar\eta)>0. \end{equation} Next, suppose that $K\neq F(\sqrt{-1})$. By \cite[p.~82]{Conner-Hurrelbrink}, $K=F(\sqrt{-\varsigma})$ for some totally positive element $\varsigma\in F$ with $\nu_\grq(\varsigma)\equiv 0\pmod{2}$ for every finite prime $\grq$ of $F$. Thus for every $\grq$, there exists a unit $u\in O_{F_\grq}^\times$ such that $K_\grq=F_\grq(\sqrt{-u})$. Let $\mathfrak{p}$ be a dyadic prime of $F$ that is unramified in $K$. Another application of \cite[63:3]{o-meara-quad-forms} as above shows that \begin{equation} \label{eq:6} \nu_\mathfrak{p}(\mathfrak{f}_0)=\nu_\mathfrak{p}(2)>0. \end{equation} This concludes the verification of the claim and proves the first part of the proposition. For the last part of the proposition, recall that by \cite[Theorem~16.1]{Conner-Hurrelbrink}, one of the following holds for $K/F$: \begin{itemize} \item exactly one finite prime of $F$ ramifies in $K/F$ and it is dyadic; or \item no finite prime of $F$ ramifies in $K/F$. \end{itemize} If $\mathfrak{f}_\diamond=O_F$, or equivalently, every dyadic prime of $F$ ramifies in $K$, then $F$ has a single dyadic prime and it ramifies in $K/F$. The converse is obvious. \end{proof} As mentioned before, the following theorem is asserted by Vign\'eras \cite[Remarque, p.~82]{vigneras:ens}. \begin{thm}\label{thm:integrality} Let $F$ be a totally real number field of even degree over $\mathbb Q$, and $D$ be the totally definite quaternion $F$-algebra unramified at all the finite places of $F$. Then $h(D)/h(F)$ is integral. \end{thm} \begin{proof} The theorem follows from Corollary~\ref{endo.4} if $h(F)$ is odd. Suppose that $h(F)$ is even.
We show that the cardinality of the set $\mathscr{B}_{\mathrm{odd}}$ in (\ref{eq:45}) is even. If $\Pic(O_F)[2]\not\simeq \zmod{2}$, then $\mathscr{B}_{\mathrm{odd}}=\emptyset$ by the proof of Proposition~\ref{prop:minimal-odd-rlCN-order}. Suppose further that $\Pic(O_F)[2]\simeq \zmod{2}$, and $K/F$ is a CM-extension with odd relative class number $h(K)/h(F)$. Let $\omega(\mathfrak{f}_\diamond)$ be the number of prime factors of $\mathfrak{f}_\diamond$. Then \begin{equation} \label{eq:7} \abs{\{B\mid B_\diamond\subseteq B\subseteq O_K\}}=2^{\omega(\mathfrak{f}_\diamond)}. \end{equation} If $B_\diamond\neq O_K$, then $\omega(\mathfrak{f}_\diamond)\geq 1$ and the right hand side of (\ref{eq:7}) is even. If $B_\diamond=O_K$, then $F$ has a single dyadic prime $\mathfrak{p}$, and it ramifies in $K$. By \cite[Theorem~16.1]{Conner-Hurrelbrink} and \cite[Corollary~16.2]{Conner-Hurrelbrink}, $F$ admits exactly two CM-extensions with odd relative class numbers, and $\mathfrak{p}$ ramifies in both of them. It follows that $\abs{\mathscr{B}_{\mathrm{odd}}}=2$ in this case. Therefore, $\abs{\mathscr{B}_{\mathrm{odd}}}$ is even in all cases, as claimed. Now the theorem follows from Corollary~\ref{cor:type-numer-tot-def-unram}. \end{proof} \section*{Acknowledgments} The second named author is grateful to Paul Ponomarev for answering his questions on type numbers. The manuscript was prepared during the authors' visit at M\"unster University. They thank Urs Hartl and the institution for the warm hospitality and excellent research environment. J.~Xue is partially supported by the 1000-plan program for young talents and Natural Science Foundation grant \#11601395 of PRC. Yu is partially supported by the MoST grants 104-2115-M-001-001MY3 and 107-2115-M-001-001-MY2. \bibliographystyle{hplain}
\section{Introduction} \label{sec:introduction} Topological phases of matter have been a subject of active research during the last decades, as they constitute a whole new paradigm in condensed matter physics. In contrast to the well-studied standard quantum phases of matter, described by local order parameters (see for example Anderson's classification~\cite{and:07}), the ground states of topological systems are globally characterized by topological invariants~\cite{tknn:82,ber:84,zen:che:zho:wen:15,ando:13}. Hamiltonians of gapped systems with different topological orders cannot be smoothly transformed from one into the other without passing through a gap-vanishing region of criticality. In particular, insulators and superconductors with an energy gap exhibit topological orders and are classified according to the symmetries that their Hamiltonians possess~\cite{kit:09, sch:ryu:fur:lud:08}, namely Time Reversal, Particle-Hole and Chiral symmetry. As opposed to the standard Landau symmetry-breaking theory of quantum phase transitions (PTs), in topological PTs the symmetries of the Hamiltonian are not violated. For a topological PT to occur, that is, for a gapped state of the system to be deformed into another gapped state in a different topological class, the energy gap has to close. In other words, the quantum state of the system undergoing a topological PT is gapless. A manifestation of the topological order of a system is the presence of robust symmetry-protected edge states on the boundary between two distinct topological phases, as predicted by the bulk-to-boundary principle~\cite{ryu:hat:02}. A question that naturally arises is whether there is any kind of topological order at finite temperatures, and different approaches have been used to tackle this problem~\cite{riv:viy:del:13,viy:riv:del:12}. One of the most promising approaches is based on the work of Uhlmann~\cite{uhl:89}, who extended the notion of geometrical phases from pure states to density matrices.
The concept of the Uhlmann holonomy, and the quantities that can be derived from it, were used to infer PTs at finite temperatures~\cite{hub:93,viy:riv:del:14, viy:riv:del:2d:14,zho:aro:14,pau:vie:08,viy:riv:del:15}. Nevertheless, the physical meaning of these quantities and their relevance to the observable properties of the corresponding systems remain an interesting open question~\cite{sjo:15, kem:que:mor:16, bud:die:15}. There exist several proposals for the observation of the Uhlmann geometric phase~\cite{tid:sjo:03, abe:kul:sjo:oi:07, viy:riv:gas:wal:fil:del:16} and an experimental realization has been reported in~\cite{uhl:pha:exp:11}. Information-theoretic quantities such as entanglement measures~\cite{ham:ion:zan:05, ham:zha:haa:lid:08, hal:ham:12} and the fidelity, a measure of distinguishability between two quantum states~\cite{aba:ham:zan:08, zan:pau:06, zha:zho:09, oli:sac:14, pau:sac:nog:vie:dug}, were extensively used in the study of PTs. Whenever there is a PT, the density matrix of a system changes significantly and, therefore, a sudden drop of the fidelity $F(\rho,\sigma)\equiv \text{Tr}\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}$ signals this change. The fidelity is closely related to the Uhlmann connection through the Bures metric~\cite{zan:ven:gio:07}. Therefore, they can both be used to infer the possibility of PTs, as in~\cite{pau:vie:08} (for the pure-state case of the Berry phase, see~\cite{car:pac:05,reu:har:ple:07}).\\ We analyze the behavior of the fidelity and the Uhlmann connection associated to thermal states in fermionic systems. We consider the space consisting of the parameters of the Hamiltonian and the temperature, as it provides a physically sensible base space for the principal bundle describing the amplitudes of the density operator.
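For concreteness, the fidelity $F(\rho,\sigma)=\text{Tr}\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}$ is straightforward to evaluate numerically for full-rank (thermal) states. The following sketch (an illustration we add here, not taken from the cited works; the function names are ours) uses eigendecompositions for the matrix square roots and for the Gibbs states:

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a Hermitian positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = Tr sqrt( sqrt(rho) sigma sqrt(rho) )."""
    s = psd_sqrt(rho)
    return float(np.trace(psd_sqrt(s @ sigma @ s)).real)

def thermal_state(H, T):
    """Gibbs state exp(-H/T)/Z (k_B = 1), built in the eigenbasis of H."""
    vals, vecs = np.linalg.eigh(H)
    w = np.exp(-(vals - vals.min()) / T)   # shift avoids overflow
    return (vecs * (w / w.sum())) @ vecs.conj().T

# two non-commuting qubit thermal states
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho1 = thermal_state(sx, T=0.5)
rho2 = thermal_state(sz, T=0.5)
```

For the two states above, $F(\rho_1,\rho_1)=1$ and $F(\rho_1,\rho_2)=F(\rho_2,\rho_1)$ lies strictly between $0$ and $1$, as expected for distinct full-rank states.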
We study paradigmatic models of 1D topological insulators (TIs) (Creutz Ladder~\cite{cre:99,ber:pat:ami:del:09} and Su-Schrieffer-Heeger~\cite{su:sch:hee:79} models) and superconductors (TSCs) (Kitaev chain~\cite{kit:cha:01}) with chiral symmetry. We conclude that the effective temperature only smears out the topological features exhibited at zero temperature, without causing a thermal PT. We also analyze the BCS model of superconductivity~\cite{bar:coo:sch:57}, previously studied in~\cite{pau:vie:08}, by further identifying the significance of thermal and purely quantum contributions to PTs, using the fidelity and the Uhlmann connection. In contrast to the aforementioned nontrivial topological systems, both quantities indicate the existence of thermal PTs.\\ This Letter is organized as follows: first, we elaborate on the relationship between the fidelity and the Uhlmann connection and motivate their use in inferring PTs, both at zero and finite temperatures. In the following section, we present our results on the fidelity and the Uhlmann connection for the aforementioned systems and discuss the possibility of temperature driven PTs. In the last section we summarize our conclusions and point out possible directions of future work. \section{Fidelity and the Uhlmann Parallel Transport} Given a Hilbert space, one can consider the set of density matrices with full rank (e.g. thermal states) and the associated set of amplitudes (generalization to the case of the sets of singular density matrices with fixed rank is straightforward). For a state $\rho$, an associated amplitude $w$ satisfies $\rho = ww^\dagger$. Thus, there exists a unitary (gauge) freedom in the choice of the amplitude, since both $w$ and $w' = wU$, with $U$ being an arbitrary unitary, are associated to the same $\rho$. 
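The gauge freedom in the amplitudes is easy to verify numerically: any $w=\sqrt{\rho}\,U$ reproduces $\rho=ww^\dagger$ irrespective of the unitary $U$. A minimal sketch (illustrative; the random-unitary construction via QR is our choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    """Random unitary from the QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    ph = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * ph                     # fix column phases so the map is well defined

def psd_sqrt(M):
    """Square root of a Hermitian positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T

# a full-rank qubit density matrix
V = random_unitary(2)
rho = V @ np.diag([0.7, 0.3]).astype(complex) @ V.conj().T

w = psd_sqrt(rho)                 # one amplitude of rho
w_prime = w @ random_unitary(2)   # gauge-transformed amplitude of the same state
```

Both `w` and `w_prime` satisfy $ww^\dagger=\rho$ even though they differ as matrices, which is precisely the $\text{U}(n)$ fibre of the amplitude bundle.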
Two amplitudes $w_1$ and $w_2$, corresponding to states $\rho_1$ and $\rho_2$, respectively, are said to be \emph{parallel} in the Uhlmann sense if and only if they minimize the Hilbert-Schmidt distance $||w_1 - w_2|| = \sqrt{\text{Tr} [(w_1 - w_2)^\dagger (w_1 - w_2)]}$, induced by the inner product $\langle w_1,w_2\rangle = \text{Tr}(w_1^\dagger w_2)$. The condition of parallelism turns out to be equivalent to maximizing $\text{Re}\langle w_1,w_2\rangle$, since $||w_1 - w_2||^2 = 2(1 - \text{Re}\langle w_1,w_2\rangle)$. By writing $w_i=\sqrt{\rho_i} U_i$, $i=1,2$, where the $U_i$'s are unitary matrices, we get \begin{align} \text{Re}\langle w_1,w_2\rangle & \leq |\langle w_1,w_2\rangle| =|\text{Tr} \left(w_2^\dagger w_1\right)|\nonumber\\ &= |\text{Tr}\left( U_2^\dagger \sqrt{\rho_2}\sqrt{\rho_1}U_1\right)| \nonumber \\ & = |\text{Tr}\left(|\sqrt{\rho_2}\sqrt{\rho_1}|U U_1 U_2^\dagger\right)| \nonumber \\ & \leq \text{Tr}\left(|\sqrt{\rho_2}\sqrt{\rho_1}|\right)\nonumber\\ & = \text{Tr}\sqrt{\sqrt{\rho_1} \rho_2 \sqrt{\rho_1}} = F(\rho_1,\rho_2), \end{align} where $U$ is the unitary associated to the polar decomposition of $\sqrt{\rho_2}\sqrt{\rho_1}$, and the penultimate step is the Cauchy-Schwarz inequality. Hence, equality holds if and only if $U (U_1U_2^\dagger) =I$. Note that in this case the first inequality is also saturated, and we have $\text{Re}\langle w_1,w_2\rangle = \langle w_1,w_2\rangle \in \mathbb R^+$, which provides yet another interpretation of the Uhlmann parallel transport condition as a generalization of the Berry pure-state connection: the phase, given by $\Phi_U = \arg \langle w_1,w_2\rangle$, is trivial, i.e., zero. Given a curve of density matrices $\gamma:[0,1] \ni t\mapsto \rho(t)$ and the initial amplitude $w(0)$ of $\rho (0)$, the Uhlmann parallel transport gives a unique curve of amplitudes $w(t)$ with the property that $w(t)$ is parallel to $w(t+\delta t)$ for an infinitesimal $\delta t$ (the horizontal lift of $\gamma$).
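Numerically, the Uhlmann factor $U$ in the polar decomposition $\sqrt{\rho_2}\sqrt{\rho_1}=|\sqrt{\rho_2}\sqrt{\rho_1}|U$ can be extracted from a singular value decomposition, and one can check that the parallel choice $w_1=\sqrt{\rho_1}$, $w_2=\sqrt{\rho_2}U$ makes $\langle w_1,w_2\rangle$ real, positive, and equal to the fidelity. A sketch (illustrative only; names are ours):

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a Hermitian positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T

def fidelity(rho, sigma):
    s = psd_sqrt(rho)
    return float(np.trace(psd_sqrt(s @ sigma @ s)).real)

def uhlmann_factor(rho1, rho2):
    """Unitary U with sqrt(rho2) sqrt(rho1) = |sqrt(rho2) sqrt(rho1)| U,
    from the SVD M = W S Vh (so |M| = W S W^dagger and U = W Vh)."""
    M = psd_sqrt(rho2) @ psd_sqrt(rho1)
    W, S, Vh = np.linalg.svd(M)
    return W @ Vh

# two non-commuting full-rank qubit states
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho1 = np.eye(2) / 2 + 0.3 * sx
rho2 = np.eye(2) / 2 + 0.3 * sz

U = uhlmann_factor(rho1, rho2)
w1 = psd_sqrt(rho1)          # gauge choice U_1 = I
w2 = psd_sqrt(rho2) @ U      # parallel partner: U (U_1 U_2^dagger) = I
overlap = complex(np.trace(w1.conj().T @ w2))
```

Here `overlap` comes out real and positive, with value equal to $F(\rho_1,\rho_2)$, as the parallelism condition demands.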
The length of this curve of amplitudes, according to the metric induced by the Frobenius inner product, is equal to the length, according to the Bures metric (which is the infinitesimal version of the Bures distance $D_{B}(\rho_1,\rho_2)^2=2[1-F(\rho_1,\rho_2)]$), of the corresponding curve $\gamma$ of the density matrices. This shows the relation between the Uhlmann connection and the fidelity (for details, see for example~\cite{uhl:11, uhl:89}). We see that the ``Uhlmann factor'' $U$, given by the polar decomposition $\sqrt{\rho (t+\delta t)}\sqrt{\rho (t)} = |\sqrt{\rho (t+\delta t)}\sqrt{\rho (t)}| U$, characterizes the Uhlmann parallel transport. For two close points $t$ and $t + \delta t$, if the two states $\rho (t)$ and $\rho (t+\delta t)$ belong to the same phase, one expects them to almost commute, resulting in the Uhlmann factor being approximately equal to the identity, $\sqrt{\rho (t+\delta t)}\sqrt{\rho (t)} \approx |\sqrt{\rho (t+\delta t)}\sqrt{\rho (t)}|$. On the other hand, if the two states belong to two different phases, one expects them to be drastically different (confirmed by the fidelity approach), both in terms of their eigenvalues and/or eigenvectors, potentially leading to nontrivial $U \neq I$ (see the previous study on the Uhlmann factor and the finite-temperature PTs for the case of the BCS superconductivity~\cite{pau:vie:08}). To quantify the difference between the Uhlmann factor and the identity, and thus the nontriviality of the Uhlmann connection, we consider the following quantity: \begin{align} \Delta (\rho(t),\rho(t+\delta t)):=& F(\rho(t),\rho(t+\delta t)) \nonumber \\ &-\text{Tr}(\sqrt{\rho(t+\delta t)}\sqrt{\rho(t)}). \end{align} Note that $\Delta = \text{Tr} \big[|\sqrt{\rho(t+\delta t)}\sqrt{\rho(t)}|(I-U)\big]$. When the two states are from the same phase we have $\rho (t) \approx \rho (t+ \delta t)$, and thus $\Delta \approx 0$. 
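The quantity $\Delta$ is equally simple to evaluate: for full-rank states it vanishes exactly when the Uhlmann factor is trivial (e.g. for commuting states, where $\text{Tr}(\sqrt{\rho_2}\sqrt{\rho_1})$ coincides with the fidelity), and is positive otherwise. A numerical sketch (ours, for illustration) on qubit states:

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a Hermitian positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T

def fidelity(rho, sigma):
    s = psd_sqrt(rho)
    return float(np.trace(psd_sqrt(s @ sigma @ s)).real)

def delta(rho1, rho2):
    """Delta = F(rho1, rho2) - Tr(sqrt(rho2) sqrt(rho1)); for full-rank
    states it is zero precisely when the Uhlmann factor U is trivial."""
    return fidelity(rho1, rho2) - float(np.trace(psd_sqrt(rho2) @ psd_sqrt(rho1)).real)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
rho_a = np.diag([0.9, 0.1]).astype(complex)
rho_b = np.diag([0.6, 0.4]).astype(complex)   # commutes with rho_a
rho_c = np.eye(2) / 2 + 0.3 * sx              # does not commute with rho_a
```

One finds $\Delta(\rho_a,\rho_b)=0$ up to machine precision, while $\Delta(\rho_a,\rho_c)>0$, in line with the discussion above.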
Otherwise, if the two states belong to different phases and the Uhlmann factor is nontrivial, we have $\Delta \neq 0$. Thus, a sudden departure of $\Delta$ from zero (for $\delta t \ll 1$) signals the points of PTs. Since both the Uhlmann parallel transport and the fidelity give rise to the same metric (the Bures metric), the non-analyticity of $\Delta$ is accompanied by the same behaviour of the fidelity. Note that the converse is not necessarily true: if the states commute with each other and differ only in their eigenvalues, the Uhlmann connection is trivial, and thus $\Delta=0$. In order for the Uhlmann connection and the fidelity to be in tune, they must be taken over the same base space. In a previous study~\cite{viy:riv:del:14}, the Uhlmann connection in 1D translationally invariant systems was considered. There, the base space is the momentum space and the density matrices are of the form $\{\rho_k:=e^{-\beta H(k)}/Z: k\in \mathcal B \}$, where $H(k)=E(k)\vec{n}(k)\cdot \vec{\sigma}/2$ and $\mathcal B$ is the first Brillouin zone. In 1D there is no curvature, and hence the holonomy along the momentum-space cycle becomes a topological invariant (it depends only on the homotopy class of the path). It was found that the Uhlmann geometric phase $\Phi_U(\gamma_c)$ along the closed curve given by $\gamma_c(k)=\rho_k$ changes abruptly from $\pi$ to $0$ after some ``critical'' temperature $T_U$. Namely, the Uhlmann phase is given by \begin{align} \Phi_U(\gamma_c)&=\arg\text{Tr}\{w(-\pi)^{\dagger}w(\pi)\} \nonumber\\ &=\arg\text{Tr}\{\rho_{\pi}U(\gamma_c)\}, \end{align} where $w(k)$ is the horizontal lift of the loop of density matrices $\rho_{k}$, and $U(\gamma_c)$ is the Uhlmann holonomy along the first Brillouin zone. This temperature, though, is not necessarily related to a physical quantity that characterizes a system's phase.
It might be the case that the Uhlmann phase is trivial, $\Phi_U(\gamma_c) = 0$, while the corresponding holonomy is not, $U(\gamma_c)\neq I$. For the systems studied in~\cite{viy:riv:del:14}, the Uhlmann holonomy is a smooth function of the temperature and is given, in the basis in which the chiral symmetry operator is diagonal, by: \begin{align} U(\gamma_c) = \exp \Big\{-\frac{i}{2} \int_{-\pi}^{\pi} \left[1 - \text{sech} \left(\frac{E(k)}{2T}\right)\right]\frac{\partial \varphi}{\partial k}dk\ \sigma_z\Big\}, \label{eq:hol} \end{align} where $\varphi (k)$ is the polar angle coordinate of the vector $\vec{n}(k)$ lying on the equator of the Bloch sphere. Note that $\lim_{T\rightarrow 0} U(\gamma_c) = e^{-i\nu\pi\sigma_z}$, with the Berry phase being $\Phi_B = \lim_{T\rightarrow 0} \Phi_U = \nu\pi$, and $\nu$ the winding number. While in this case the Uhlmann phase suffers an abrupt change (step-like behaviour), the Uhlmann holonomy is smooth, and there is no PT-like behaviour. In the paradigmatic case of the quantum Hall effect, at $T=0$, the Hall conductivity can be shown, through several methods, to be quantized in multiples of the first Chern number of a vector bundle in momentum space. For example, one can use linear response theory or integrate out the fermions to obtain the effective action of an external $\text{U}(1)$ gauge field. The topology of the bands thus appears in the response of the system to an external field. It is unclear, though, whether the former mathematical object, the Uhlmann geometric phase along the cycle of the 1D momentum space, has an interpretation in terms of the response of the system. In order to measure this Uhlmann geometric phase, one would have to be able to change the quasi-momentum of a state in an adiabatic way.
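The smoothness of Eq.~\eqref{eq:hol} in $T$, and its $T\to 0$ limit $e^{-i\nu\pi\sigma_z}$, can be checked by direct numerical integration. The sketch below assumes, purely for illustration, a winding-one profile $\varphi(k)=\arg[(M+\cos k)+i\sin k]$ with $E(k)=|(M+\cos k,\sin k)|$ (toy units):

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def uhlmann_winding_angle(M, T, nk=40001):
    # total angle theta in U(gamma_c) = exp(-i (theta/2) sigma_z), cf. Eq. (hol)
    k = np.linspace(-np.pi, np.pi, nk)
    hx, hz = M + np.cos(k), np.sin(k)
    E = np.sqrt(hx**2 + hz**2)              # band energy (toy units)
    phi = np.unwrap(np.arctan2(hz, hx))     # polar angle of n(k); winding nu = 1 for M < 1
    dphi = np.gradient(phi, k)
    sech = 1.0 / np.cosh(np.clip(E / (2 * T), 0.0, 700.0))
    return trapezoid((1.0 - sech) * dphi, k)

theta_cold = uhlmann_winding_angle(0.5, 1e-3)   # -> 2*pi*nu, i.e. the Berry limit Phi_B = nu*pi
theta_hot  = uhlmann_winding_angle(0.5, 50.0)   # -> 0: the holonomy trivializes smoothly
```

The angle interpolates smoothly between the two limits as $T$ varies, with no step-like behaviour in the holonomy itself.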
In realistic setups, the states at finite temperatures are statistical mixtures over all momenta, such as the thermal states considered here, and realizing closed curves of states $\rho_k$ with precise momenta changing in an adiabatic way seems a tricky task. The fidelity computed in our Letter, though, refers to the change of the system's {\em overall} state with respect to its parameters (controlled in the laboratory much like an external gauge field), and is related to an \textit{a priori} physically relevant geometric quantity, the Uhlmann factor $U$. The quantity $\Delta=\text{Tr} \big[|\sqrt{\rho(t+\delta t)}\sqrt{\rho(t)}|(I-U)\big]$ contains information concerning the Uhlmann factor, since $U=U(t+\delta t) U^\dagger(t)$, where $U(t)=T\exp\left\{-\int_{0}^{t} \mathcal{A}(d\rho/ds)ds\right\}$ is the parallel transport operator and $\mathcal{A}$ is the Uhlmann connection differential $1$-form (for details, see~\cite{chr:jam:12}). \section{The results} In our analysis we probe the fidelity and $\Delta$ with respect to the parameters of the Hamiltonian describing the system and the temperature, independently. We perform this analysis for paradigmatic models of TIs (SSH and Creutz ladder) and TSCs (Kitaev chain) in 1D. We analytically calculate the expressions for the fidelity and $\Delta$ for thermal states $\rho=e^{-\beta H}/Z$, where $\beta$ is the inverse temperature (see SM1 for the details of the derivation). We use natural units: $\hbar=k_{\text{B}}=1$.\\ \indent Here we focus on the Creutz ladder model, while the results for the SSH and the Kitaev chain are presented in SM2, since they are qualitatively the same.
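A numerical sketch of this calculation is given below, using the closed-form product formula derived in SM1. The Bloch form $H(k)=(M+\cos k)\,\sigma_x+\sin k\,\sigma_z$ adopted here is one common convention for the Creutz ladder at $2K=1$, $\phi=\pi/2$ (sign conventions vary between references):

```python
import numpy as np

N_K = 400
# momentum grid offset from k = +/- pi, so the gap-closing point at M = 1 is never hit exactly
KS = (np.arange(N_K) + 0.5) * 2 * np.pi / N_K - np.pi

def bands(M):
    # H(k) = E(k) n(k).sigma / 2 with E(k) n(k) / 2 = (M + cos k, 0, sin k)
    hx, hz = M + np.cos(KS), np.sin(KS)
    E = 2.0 * np.sqrt(hx**2 + hz**2)
    n = np.stack([hx, np.zeros_like(hx), hz]) / (E / 2.0)
    return E, n

def fidelity(M, T, Mp, Tp):
    # closed-form product formula for the thermal states (cf. SM1)
    E, n = bands(M)
    Ep, n_p = bands(Mp)
    a, b = E / (2 * T), Ep / (2 * Tp)
    ndot = (n * n_p).sum(axis=0)
    num = 2 + np.sqrt(2 * (1 + np.cosh(a) * np.cosh(b) + np.sinh(a) * np.sinh(b) * ndot))
    den = np.sqrt((2 + 2 * np.cosh(a)) * (2 + 2 * np.cosh(b)))
    return float(np.prod(num / den))

F_same = fidelity(0.5, 0.05, 0.5, 0.05)    # identical states: F = 1
F_off  = fidelity(0.5, 0.02, 0.51, 0.02)   # away from the critical point
F_crit = fidelity(1.0, 0.02, 1.01, 0.02)   # across the gap-closing point M = 1
```

At low $T$ the fidelity dips much more strongly across $M=1$ than away from it, reproducing the qualitative drop in Fig.~\ref{fig:fid}.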
The Hamiltonian for the Creutz ladder model~\cite{cre:99,ber:pat:ami:del:09} is given by \begin{align} \mathcal{H}=& -\sum_{i\in\mathbb{Z}}K\left(e^{-i\phi}a_{i+1}^{\dagger}a_{i}+e^{i\phi}b_{i+1}^{\dagger}b_{i}\right)\nonumber\\ &+K(b_{i+1}^{\dagger}a_{i}+a_{i+1}^{\dagger}b_{i})+Ma_{i}^\dagger b_i +\text{H.c.}, \end{align} where $a_i,b_i$, with $i\in\mathbb{Z}$, are fermionic annihilation operators, $K$ and $M$ are hopping amplitudes (horizontal/diagonal and vertical, respectively), and $e^{i\phi}$ is a phase factor associated to a discrete gauge field. We take $2K=1$, $\phi=\pi/2$. Under these conditions, the system is topologically nontrivial when $M<1$ and trivial when $M>1$. Given two close points $(M,T)$ and $(M',T')=(M+\delta M, T+\delta T)$, we compute $F(\rho,\rho')$ and $\Delta(\rho,\rho')$ between the states $\rho=\rho(M,T)$ and $\rho'=\rho(M',T')$. To distinguish the contributions due to the change of the Hamiltonian's parameter and of the temperature, we consider the cases $\delta T = 0$ and $\delta M = 0$, respectively; see Fig.~\ref{fig:fid}. \begin{widetext} \begin{figure}[h!] \begin{minipage}{0.33\textwidth} \includegraphics[width=0.8\textwidth,height=0.6\textwidth]{CL_fidelity_theta.pdf} \end{minipage}% \begin{minipage}{0.33\textwidth} \includegraphics[width=0.8\textwidth,height=0.6\textwidth]{CL_fidelity_T.pdf} \end{minipage} \begin{minipage}{0.33\textwidth} \includegraphics[width=0.8\textwidth,height=0.6\textwidth]{CL_uhlmann_theta.pdf} \end{minipage}% \caption{The fidelity for thermal states $\rho$, when probing the parameter of the Hamiltonian that drives the topological PT $\delta M =M'-M=0.01$ (left), and the temperature $\delta T=T'-T=0.01$ (center), and the Uhlmann connection, when probing the parameter of the Hamiltonian $M$ (right), for the Creutz ladder model (representative of the symmetry class AIII).
The plot for $\Delta$ when deforming the thermal state along $T$ is omitted, since it is equal to zero everywhere.} \label{fig:fid} \end{figure} \end{widetext} We see that for $T=0$ both fidelities exhibit a sudden drop in the neighbourhood of the gap-closing point $M=1$, signalling the topological quantum PT. As temperature increases, the drops of both fidelities at the quantum critical point are rapidly smoothed out towards the $F=1$ value. This shows the absence of both finite-temperature parameter-driven and temperature-driven (i.e., thermal) PTs. The plot for $\Delta$, for the case $\delta T=0$, shows a behavior similar to that of the fidelity, while for $\delta M = 0$ we obtain no information, as $\Delta$ is identically equal to zero, due to the triviality of the Uhlmann connection associated to the mutually commuting states (a consequence of the Hamiltonian's independence of temperature). $\Delta$ is sensitive to PTs for which the state change is accompanied by a change of the eigenbasis (in contrast to the fidelity, which is sensitive to changes of both eigenvalues and eigenvectors). For TIs and TSCs, this corresponds to parameter-driven transitions only. We further study a topologically trivial superconducting system, given by the BCS theory, with the effective Hamiltonian \begin{align} \mathcal{H}=\sum_{k} (\varepsilon_{k}-\mu)c_{k}^{\dagger}c_{k}-\Delta_{k} c_{k}^{\dagger}c_{-k}^{\dagger} + \text{H.c.}, \end{align} where $\varepsilon_k$ is the energy spectrum, $\mu$ is the chemical potential, $\Delta_{k}$ is the superconducting gap, and $c_{k}\equiv c_{k\uparrow}$ and $c_{-k}\equiv c_{-k \downarrow}$ are operators annihilating, respectively, an electron with momentum $k$ and spin up and an electron with momentum $-k$ and spin down.
The gap parameter is determined in the above mean-field Hamiltonian through a self-consistent mass-gap equation, and it depends on the original Hamiltonian's coupling associated to the lattice-mediated pairing interaction $V$, absorbed in $\Delta_k$ (for more details, see~\cite{pau:vie:08}). The solution of the equation renders the gap temperature-dependent. In Fig.~\ref{fig:bcs}, we show the quantitative results for the fidelity and $\Delta$. We observe that both quantities show the existence of thermally driven PTs, as their abrupt changes at the $T=0$ point of criticality survive and drift as the temperature increases. Unlike in the TSCs, in this model the temperature does not only appear in the thermal state, but is also a parameter of the effective Hamiltonian, resulting in a change of the system's eigenbasis and, consequently, a nontrivial Uhlmann connection. For a detailed analysis and the explanation of the differences between the two, see SM4. \begin{widetext} \begin{figure}[h!] \begin{minipage}{0.24\textwidth} \includegraphics[width=0.8\textwidth,height=0.6\textwidth]{BCS_fidelity_theta.pdf} \end{minipage}% \begin{minipage}{0.24\textwidth} \includegraphics[width=0.8\textwidth,height=0.6\textwidth]{BCS_fidelity_T.pdf} \end{minipage} \begin{minipage}{0.24\textwidth} \includegraphics[width=0.8\textwidth,height=0.6\textwidth]{BCS_delta_theta.pdf} \end{minipage}% \begin{minipage}{0.24\textwidth} \includegraphics[width=0.8\textwidth,height=0.6\textwidth]{BCS_delta_T.pdf} \end{minipage} \caption{The fidelity for thermal states $\rho$ when probing the parameter of the Hamiltonian $\delta V =V'-V=10^{-3}$ (left) and the temperature $\delta T=T'-T=10^{-3}$ (center left), and the Uhlmann connection (center right and right, respectively), for BCS superconductivity.} \label{fig:bcs} \end{figure} \end{widetext} Finally, we also studied the behavior of the edge states for the TIs and the Majorana modes for the Kitaev chain, on open chains of 500 and 300 sites, respectively.
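The zero-temperature side of the Kitaev-chain edge-mode analysis can be sketched by diagonalizing an open-chain BdG matrix. The construction below assumes the Bloch form $H_{\text{BdG}}(k)=-(2t\cos k+\mu)\tau_z+2\Delta\sin k\,\tau_y$, one common convention (signs vary between references), and checks for near-zero Majorana end modes:

```python
import numpy as np

t, Delta = 0.5, 1.0                # hopping and pairing, as in the main text
tau_z = np.diag([1.0, -1.0])
hop = np.array([[-t, -Delta],      # nearest-neighbour BdG block: -t tau_z - i Delta tau_y
                [Delta, t]])

def bdg_open_chain(mu, N=60):
    H = np.zeros((2 * N, 2 * N))
    for i in range(N):
        H[2*i:2*i+2, 2*i:2*i+2] = -mu * tau_z
    for i in range(N - 1):
        H[2*(i+1):2*(i+1)+2, 2*i:2*i+2] = hop
        H[2*i:2*i+2, 2*(i+1):2*(i+1)+2] = hop.T
    return H

def min_excitation(mu):
    # smallest |E| of the BdG spectrum: ~0 iff Majorana end modes are present
    return np.abs(np.linalg.eigvalsh(bdg_open_chain(mu))).min()

e_topological = min_excitation(0.3)   # |mu| < 2t = 1: exponentially split zero modes
e_trivial     = min_excitation(2.0)   # |mu| > 2t: gapped spectrum, no end modes
```

Even on this short chain the splitting of the Majorana pair is exponentially small in the topological phase, while the trivial phase is fully gapped.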
In the case of TIs, we showed that the edge states localized at the boundary between two distinct topological phases, present at zero temperature, are gradually smeared out as the temperature increases, confirming the absence of temperature-driven PTs (see SM3.1 for detailed quantitative results and technical analysis). Our results on the edge states, obtained for systems in thermal equilibrium, agree with those concerning open systems treated within the Lindbladian approach~\cite{viy:riv:del:12} (which, due to considerable computational hardness, were obtained for an open chain of 8 sites). Similarly, we showed that the Majorana modes exhibit an abrupt change at the zero-temperature point of the quantum PT, while the finite-temperature behavior is smooth, confirming the results obtained through the fidelity and the Uhlmann connection analysis (see SM3.2). At zero temperature, the Majorana modes of the Kitaev model are known to be good candidates for encoding qubit states in stable quantum memories, see~\cite{ali:12,ipp:riz:gio:maz:16} and references therein. The presence of robust Majorana modes at low, but finite, temperatures is a significant property which could be used in designing stable quantum memories in realistic setups. \section*{Conclusions and Outlook} \label{Conclusions and Outlook} We studied the relationship between the fidelity and the Uhlmann connection over the system's \textit{parameter space} (including the parameters of the system's Hamiltonian and the temperature) and found that their behaviors are consistent whenever the variations of the parameters produce variations in the eigenbasis of the density matrix. By means of this analysis, we showed the absence of temperature-driven PTs in 1D TIs and TSCs. We clarified that the Uhlmann geometric phase considered in \textit{momentum space} is not adequate to infer such PTs, since it is only a part of the information contained in the Uhlmann holonomy.
Indeed, this holonomy, as a function of temperature, is smooth (Eq.~\eqref{eq:hol}), hence no PT-like phenomenon is expected. Furthermore, we performed the same analysis in the case of BCS superconductivity, where, in contrast to the former systems, thermally driven PTs occur and are captured by both the fidelity and the Uhlmann connection. This shows that, when changing the temperature, the density operator changes at the level of both its spectrum and its eigenvectors. We analyzed in detail the origin of the differences between the BCS model and the Kitaev chain, and suggested that in realistic scenarios the gap of topological superconductors could also, generically, be temperature-dependent. Finally, we would like to point out possible future lines of research. The study of Majorana modes at finite temperature suggests that they can be used in achieving realistic quantum memories, which provides a relevant path for future research. Another related subject is to perform the same analysis using an open-system approach, where the system interacts with a bath and eventually thermalizes. There, the parameter space would also include the parameters associated with the system-bath interaction. \acknowledgements{B. M. and C. V. acknowledge the support from DP-PMI and FCT (Portugal) through the grants SFRH/BD/52244/2013 and PD/BD/52652/2014, respectively. B. M. acknowledges the support of Physics of Information and Quantum Technologies Group. C. V. and N. P. acknowledge the support of SQIG -- Security and Quantum Information Group and UID/EEA/50008/2013. N. P. acknowledges the IT project QbigD funded by FCT PEst-OE/EEI/LA0008/2013. Support from FCT (Portugal) through Grant UID/CTM/04540/2013 is also acknowledged. } \section*{Supplemental Material} \subsection{1. Analytic derivation of the closed expression for the fidelity} \hspace{0.5cm} The fidelity between two states $\rho$ and $\rho'$ is given by \begin{align} F(\rho,\rho')=\text{Tr}\sqrt{\sqrt{\rho}\rho'\sqrt{\rho}}.
\end{align} We consider unnormalized thermal states $\rho=\exp(-\beta H)$ and $\rho'=\exp(-\beta' H')$. At the end of the calculation one must, of course, normalize the expression appropriately. We wish to find closed expressions for the thermal states considered in the main text. In order to do that, we will proceed by finding $e^C$ such that \begin{align} e^{A}e^{B}e^{A}=e^{C}, \end{align} for $A=-\beta H/2$, $B=-\beta'H'$ and, ultimately, take the square root of the result. The previous equation is equivalent to \begin{align} e^{A}e^{B}=e^{C}e^{-A}. \label{eq:1} \end{align} The Hamiltonians $H$ and $H'$ are taken to be of the form $\q{h}$, and thus we can write \begin{align} e^{A}=a_0+\q{a}, \nonumber \\ e^{B}=b_0+\q{b}, \nonumber \\ e^{C}=c_0+\q{c}, \end{align} where all the coefficients are real, with the following constraints: \begin{eqnarray} \label{eq:c1} \begin{cases} & 1=\det e^{A}=a_{0}^2-\vec{a}^2,\\ \label{eq:c2} & 1=\det e^{B}=b_{0}^2-\vec{b}^2,\\ \label{eq:c3} & 1=\det e^{C}=c_{0}^2-\vec{c}^2, \end{cases} \end{eqnarray} which are equivalent to $\text{Tr} A = \text{Tr} B = \text{Tr} C = 0$, since the Pauli matrices are traceless. Let us proceed by expanding the LHS and the RHS of Eq.~\eqref{eq:1}, \begin{align} & (a_0+\q{a})(b_0+\q{b})=(c_0+\q{c})(a_0-\q{a}) \nonumber \\ & \Leftrightarrow a_0 b_0+ a_0\q{b}+\q{a}b_0+ (\q{a})(\q{b}) = c_0 a_0-c_0\q{a}+\q{c}a_0 -(\q{c})(\q{a}) \nonumber \\ & \Leftrightarrow a_0 b_0 +a_0 \q{b}+\q{a}b_0+ \vec{a}\cdot\vec{b}+i(\vec{a}\times \vec{b})\cdot \vec{\sigma}=c_0 a_0-c_0\q{a}+\q{c}a_0 -\vec{c}\cdot\vec{a}-i(\vec{c}\times \vec{a})\cdot\vec{\sigma}. \end{align} Now, collecting terms in $1$, $\vec{\sigma}$ and $i\vec{\sigma}$, we get a system of linear equations in $c_0$ and $\vec{c}$, \begin{align} \label{Eq:LS1} \begin{cases} & a_0 b_0+\vec{a}\cdot\vec{b}- a_0 c_0 +\vec{a}\cdot\vec{c}=0,\\ & a_0\vec{b}+b_0\vec{a}+\vec{a}c_0-a_0\vec{c}=0,\\ & \vec{a}\times \vec{b}-\vec{a}\times \vec{c}=0.
\end{cases} \end{align} The third equation of \eqref{Eq:LS1} can be written as $\vec{a}\times(\vec{b}-\vec{c})=0$, whose solution is given by $\vec{c}=\vec{b}+\lambda\vec{a}$, where $\lambda$ is a real number. This means that the solution depends only on two real parameters: $c_0$ and $\lambda$. Hence, we are left with the simpler system \begin{align} \begin{cases} & a_0 b_0+\vec{a}\cdot\vec{b}- a_0 c_0 +\vec{a}\cdot(\vec{b}+\lambda\vec{a})=0\\ & a_0\vec{b}+b_0\vec{a}+\vec{a}c_0-a_0(\vec{b}+\lambda \vec{a})=0 \end{cases}, \end{align} or, equivalently, \begin{align} \begin{cases} & a_0 c_0 -\lambda\vec{a}^2=a_0 b_0+2\vec{a}\cdot\vec{b}\\ & (a_0\lambda-c_0)\vec{a}=b_0\vec{a} \end{cases}. \end{align} In matrix form, the above system of equations can be written as \begin{align} \left[\begin{array}{cc} a_0 & -\vec{a}^2\\ -1 & a_0 \end{array}\right]\left[\begin{array}{c} c_0\\ \lambda \end{array} \right]=\left[\begin{array}{c} a_0 b_0+2\vec{a}\cdot\vec{b}\\ b_0 \end{array} \right]. \end{align} Inverting the matrix, we get \begin{align} \left[\begin{array}{c} c_0\\ \lambda \end{array} \right]&=\frac{1}{a_0^2-\vec{a}^2}\left[\begin{array}{cc} a_0 & \vec{a}^2\\ 1 & a_0 \end{array}\right]\left[\begin{array}{c} a_0 b_0+2\vec{a}\cdot\vec{b}\\ b_0 \end{array} \right] \nonumber \\ &=\left[\begin{array}{c} (2 a_0 ^2-1) b_0+2 a_0\vec{a}\cdot\vec{b}\\ 2(a_0 b_0+\vec{a}\cdot\vec{b}) \end{array} \right], \end{align} where we used the constraints \eqref{eq:c1}. Because of the constraints, $c_0$ and $\lambda$ are not independent; namely, with $e^C=c_0+(\vec{b}+\lambda \vec{a})\cdot\vec{\sigma}$, we get \begin{align} c_0^2-(\vec{b}+\lambda \vec{a})^2=c_0^2-\vec{b}^2-2\lambda\, \vec{a}\cdot \vec{b}-\lambda^2\vec{a}^2=1.
\end{align} Now we want to take $A=-\beta H/2\equiv-\xi \vec{x}\cdot \vec\sigma/2$ and $B=-\beta' H'\equiv-\zeta \vec{y}\cdot \vec\sigma$, with $\vec{x}^2=\vec{y}^2=1$ and $\xi$ and $\zeta$ real parameters, meaning \begin{align} a_0=\cosh(\xi/2) \text{ and } \vec{a}=-\sinh(\xi/2) \vec{x},\\ b_0=\cosh(\zeta) \text{ and } \vec{b}=-\sinh(\zeta) \vec{y}. \end{align} If we write $C=\rho \vec{z}\cdot \vec \sigma$ (since each factor has unit determinant, so does the product; hence $C$ is traceless and must be of this form), then \begin{align} c_0 &=\cosh(\rho) \nonumber \\ &=(2 a_0 ^2-1) b_0+2 a_0\vec{a}\cdot\vec{b} \nonumber \\ &=(2\cosh ^2(\xi/2)-1)\cosh(\zeta)+2\cosh(\xi/2)\sinh(\xi/2)\sinh(\zeta)\vec{x}\cdot\vec{y} \nonumber \\ &=\cosh(\xi)\cosh(\zeta)+\sinh(\xi)\sinh(\zeta)\vec{x}\cdot\vec{y}. \end{align} For all the expressions concerning the fidelity, we wish to compute $\text{Tr}(e^{C/2})=2\cosh(\rho/2)$. Using the formula $\cosh(\rho/2)=\sqrt{(1+\cosh(\rho))/2}$, we obtain \begin{align} \text{Tr}(e^{C/2})=2\sqrt{\frac{(1+\cosh(\xi)\cosh(\zeta)+\sinh(\xi)\sinh(\zeta)\vec{x}\cdot\vec{y})}{2}}. \end{align} Hence, if we let $\xi= \beta E/2$, $\vec{x}=\vec{n}$, $\zeta=\beta' E'/2$ and $\vec{y}=\vec{n}'$, then \begin{align} \text{Tr}(\sqrt{e^{-\beta H/2}e^{-\beta' H'}e^{-\beta H/2}})=2\sqrt{\frac{(1+\cosh(\beta E/2 )\cosh(\beta'E'/2)+\sinh(\beta E/2)\sinh(\beta'E'/2)\vec{n}\cdot\vec{n}')}{2}}. \end{align} To compute the fidelities, we will just need the following expression relating the traces of quadratic many-body fermion Hamiltonians (preserving the number operator) and the single-particle-sector Hamiltonian obtained by projection: \begin{align} \text{Tr}(e^{-\beta \mathcal{H}})=\text{Tr}(e^{-\beta \Psi^{\dagger}H\Psi})=\det(I+e^{-\beta H}).
\end{align} From the previous results, it is straightforward to derive the following formulae for the fidelities concerning the BG states considered:\\ \begin{align} F(\rho,\rho')&=\prod_{k\in\mathcal{B}}\frac{\text{Tr}(e^{-\mathcal{C}_k/2})}{\sqrt{\text{Tr}(e^{-\beta \mathcal{H}_k})\,\text{Tr}(e^{-\beta' \mathcal{H}'_k})}}\nonumber\\ &=\prod_{k\in\mathcal{B}}\frac{\det(I+e^{-C_k/2})}{\det^{1/2}(I+e^{-\beta H_k})\det^{1/2}(I+e^{-\beta' H'_k})}\nonumber\\ &=\prod_{k\in\mathcal{B}}\frac{2+\sqrt{2\left(1+\cosh(E_k/2T)\cosh(E'_k/2T')+\sinh(E_k/2T)\sinh(E'_k/2T')\vec{n}_k\cdot\vec{n}'_k\right)}}{\sqrt{(2+ 2\cosh (E_k/2T))(2+2\cosh (E'_k/2T'))}}, \end{align} where the matrix $C_k$ is such that $e^{-C_k}=e^{-\beta H_k/2}e^{-\beta' H'_k}e^{-\beta H_k/2}$ and $\mathcal{C}_k=\Psi_k^{\dagger}C_k\Psi_k$ is the corresponding many-body quadratic operator. To compute $\Delta(\rho,\rho')$ one needs, in addition, $\text{Tr} \sqrt{\rho}\sqrt{\rho'}$. This can be done along the lines of what was done above; hence we omit the proof and just state the result: \begin{align} \text{Tr} \sqrt{\rho}\sqrt{\rho'}=\prod_{k\in \mathcal{B}}\frac{2+2\left(\cosh(E_k/4T)\cosh(E'_k/4T')+\sinh(E_k/4T)\sinh(E'_k/4T')\vec{n}_k\cdot\vec{n}'_k\right)}{\sqrt{(2+ 2\cosh (E_k/2T))(2+2\cosh (E'_k/2T'))}}. \end{align} \subsection{2. Results for the other models considered} \subsubsection{i. Su-Schrieffer-Heeger (SSH)} The Hamiltonian for the SSH model~\cite{su:sch:hee:79} is given by \begin{align} \mathcal{H}&=\sum_{i\in\mathbb{Z}}v c^{\dagger}_{i,A}c_{i,B}+w c^{\dagger}_{i,B}c_{i+1,A}+\text{H.c.}, \end{align} where the $c_i$ are fermionic annihilation operators, $A,B$ label the two sublattices of the dimerized chain, and $v,w$ are coupling constants. The change of the difference $|v-w|$ between the two parameters of the Hamiltonian drives the topological phase transition. In particular, the phase transition occurs at $|v-w|=0$. \begin{figure}[h!]
\begin{minipage}{0.33\textwidth} \includegraphics[width=0.7\textwidth,height=0.5\textwidth]{SSH_fidelity_theta.pdf} \end{minipage}% \begin{minipage}{0.33\textwidth} \includegraphics[width=0.7\textwidth,height=0.5\textwidth]{SSH_fidelity_T.pdf} \end{minipage} \begin{minipage}{0.33\textwidth} \includegraphics[width=0.7\textwidth,height=0.5\textwidth]{SSH_uhlmann_theta.pdf} \end{minipage}% \caption{The fidelity for thermal states $\rho$, when probing the parameter of the Hamiltonian that drives the topological phase transition $\delta |v-w| =|v-w|'-|v-w|=0.01$ (left), and the temperature $\delta T=T'-T=0.01$ (center), and the Uhlmann connection, when probing the parameter of the Hamiltonian $|v-w|$ (right), for the SSH model (representative of the symmetry class BDI).} \label{fig:fid_ssh} \end{figure} \subsubsection{ii. Kitaev Chain} The Hamiltonian for the Kitaev chain model~\cite{kit:cha:01} is given by \begin{align} \mathcal{H}&=-\mu\sum_{i=1}^N c^{\dagger}_i c_i+\sum_{i=1}^{N-1}\left[-t(c^{\dagger}_{i+1}c_i+c^{\dagger}_ic_{i+1})-|\Delta|(c_ic_{i+1}+c^{\dagger}_{i+1}c^{\dagger}_i)\right], \end{align} where $\mu$ is the chemical potential, $t$ is the hopping amplitude and $\Delta$ is the superconducting gap. We fix $t=0.5,\Delta=1$, while the change of $\mu$ drives the topological phase transition. In particular, the phase transition occurs at $\mu=1$ (the gap-closing point). \begin{figure}[h!]
\begin{minipage}{0.33\textwidth} \includegraphics[width=0.7\textwidth,height=0.5\textwidth]{kitaev_fidelity_theta.pdf} \end{minipage}% \begin{minipage}{0.33\textwidth} \includegraphics[width=0.7\textwidth,height=0.5\textwidth]{kitaev_fidelity_T.pdf} \end{minipage} \begin{minipage}{0.33\textwidth} \includegraphics[width=0.7\textwidth,height=0.5\textwidth]{kitaev_uhlmann_theta.pdf} \end{minipage}% \caption{The fidelity for thermal states $\rho$, when probing the parameter of the Hamiltonian that drives the topological phase transition $\delta \mu =\mu'-\mu=0.01$ (left), and the temperature $\delta T=T'-T=0.01$ (center), and the Uhlmann connection, when probing the parameter of the Hamiltonian $\mu$ (right), for the Kitaev chain model (topological superconductor).} \label{fig:fid_kitaev} \end{figure} \subsection{3. The edge states} For the Creutz ladder, the Su-Schrieffer-Heeger (SSH) model and the Kitaev chain, when considering the system on a finite-size chain with open boundary conditions, the bulk-boundary correspondence predicts the existence of zero modes localized at the ends of the chain whenever the bulk is in a topologically non-trivial phase. It is then possible to consider the associated thermal states, $\rho=\exp(-\beta \mathcal{H})/Z$, and probe the effects of temperature. The study of the Uhlmann connection and the fidelity conducted for the above models suggests that at zero temperature the edge states should exhibit an abrupt change as the system passes the point of the quantum phase transition, while at finite temperatures they should change smoothly, being slowly washed away as the temperature increases, as a consequence of the absence of finite-temperature transitions. Below, we first study topological insulators in Subsection~3.1, while in Subsection~3.2 we analyze a topological superconductor given by the Kitaev model, showing the agreement with the above-inferred behavior.
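A minimal numerical sketch of the occupation-number diagnostic used below (on a much shorter chain; the real-space blocks are derived from the Bloch form $H(k)=(M+\cos k)\,\sigma_x+\sin k\,\sigma_z$, one common Creutz-ladder convention at $2K=1$, $\phi=\pi/2$, with sign conventions varying between references, and a small negative chemical potential $\mu_0$ emptying the zero modes as described in Subsection~3.1):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
HOP = 0.5 * SX - 0.5j * SZ          # real-space hopping block reproducing H(k) above

def creutz_open(M, N=60):
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for i in range(N):
        H[2*i:2*i+2, 2*i:2*i+2] = M * SX
    for i in range(N - 1):
        H[2*(i+1):2*(i+1)+2, 2*i:2*i+2] = HOP
        H[2*i:2*i+2, 2*(i+1):2*(i+1)+2] = HOP.conj().T
    return H

def rung_occupation(M, temp, mu0=-0.01, N=60):
    # n_i = <a_i^dag a_i + b_i^dag b_i> in the thermal state, from Fermi weights
    eps, psi = np.linalg.eigh(creutz_open(M, N))
    w = 1.0 / (1.0 + np.exp(np.clip((eps - mu0) / temp, -700.0, 700.0)))
    site_density = (np.abs(psi)**2 * w).sum(axis=1)
    return site_density.reshape(N, 2).sum(axis=1)

n_cold = rung_occupation(0.1, 1e-3)   # topological phase, T -> 0: dip at the edges
n_hot  = rung_occupation(0.1, 5.0)    # high T: the edge signature is washed out
```

At low temperature the occupation drops at the first rung while staying near $1$ in the bulk; at high temperature the profile flattens, mirroring the behavior reported below for the 500-site chain.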
\subsubsection{3.1 Topological Insulators} Let us consider the Creutz ladder model as a representative of topological insulators. In the trivial phase, the spectrum decomposes into two bands of states separated by a gap. At zero chemical potential, the zero-temperature limit of $\rho$ is the projector onto the Fermi sea state $\ket{\text{FS}}$, obtained by occupying the lower band. In a topologically non-trivial phase, however, the spectrum is composed of the two bands {\em and} the zero modes. At zero chemical potential, the zero-temperature limit of $\rho$ is now the projector onto the ground-state manifold of $\mathcal{H}$, which is spanned by $\ket{\text{FS}}$ and additional linearly independent states obtained by creating excitations associated with the zero modes. Since the Fermi sea does not include these edge-state excitations (exponentially localized at the boundary), the occupation number as a function of position, $n_i=a_i^\dagger a_i+b_i^{\dagger}b_i$, registers this effect at the boundary of the chain. Indeed, this is what we see in Fig.~\ref{fig:FS_occupation_number}: the occupation number, as a function of position, drops significantly at the edges in the topologically non-trivial phase. On the other hand, on the topologically trivial side the occupation number stays constant throughout the whole chain (both the bulk and the edges). \begin{figure}[h!] \begin{minipage}{0.45\textwidth} \centering \includegraphics[scale=0.5]{fermisea_topological.pdf} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \includegraphics[scale=0.5]{fermisea_trivial.pdf} \end{minipage} \caption{Fermi sea expectation value of the occupation number $n_i=a_i^\dagger a_i+b_i^{\dagger}b_i$ as a function of position $i$ on a chain of 500 sites with open boundary conditions. On the left panel the system is in a topologically non-trivial phase with $2K=1,M=0.1,\phi=\pi/2$.
On the right panel the system is in a topologically trivial phase with $2K=1,M=1.0001,\phi=\pi/2$.} \label{fig:FS_occupation_number} \end{figure} If we want the thermal state's $T=0$ limit to be the Fermi sea, we have to add a very small (negative) chemical potential. It has to be small enough in magnitude so that the lower band still gets completely filled. In Fig.~\ref{fig:BG_occupation_number}, we see that the expectation value $\text{Tr}(\rho n_i)$ coincides with $\bra{\text{FS}}n_i\ket{\text{FS}}$ in the $T=0$ limit, and that the deviation of the occupation number at the edge from that in the bulk gets washed out smoothly as the temperature increases. In fact, in the large-temperature limit, the state is totally mixed, implying that the expectation value of the occupation number is constant and equal to $1$ as a function of position. A similar situation occurs in the SSH model. We see that the above results in Figs.~\ref{fig:FS_occupation_number} and~\ref{fig:BG_occupation_number} coincide with the results obtained by the fidelity analysis. Moreover, they, too, confirm the results based on the study of the Uhlmann connection in terms of the quantity $\Delta$. \begin{figure}[h!] \begin{minipage}{0.45\textwidth} \centering \includegraphics[scale=0.55]{occupation_T_100.pdf} \includegraphics[scale=0.55]{occupation_zero_T.pdf} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \includegraphics[scale=0.55]{occupation_finite_T_trivial.pdf} \includegraphics[scale=0.55]{occupation_zero_T_trivial.pdf} \end{minipage} \caption{Expectation value of the occupation number $n_i=a_i^\dagger a_i+b_i^{\dagger}b_i$ as a function of position $i$ on a chain of 500 sites with open boundary conditions, in a topologically non-trivial phase with $2K=1,M=0.1,\phi=\pi/2$ (left panel), for temperatures $T=10^{-5}$ (bottom) and $T=0.2$ (top).
On the right panel we have a topologically trivial phase near the critical value of the parameter, $2K=1,M=1.0001,\phi=\pi/2$, for temperatures $T=10^{-5}$ (bottom) and $T=0.2$ (top). As $M$ increases, the edge behavior is smoothly washed out at finite $T$, and it becomes trivial, as in the $T=0$ case.} \label{fig:BG_occupation_number} \end{figure} \subsubsection{3.2 Topological Superconductors} As far as the superconducting Kitaev model is concerned, the chemical potential is a parameter of the Hamiltonian, and we cannot lift the zero modes from the zero-temperature limit of $\rho$ with the above method. Moreover, the Kitaev Hamiltonian does not conserve the particle number, and adding a chemical potential associated to the total particle number would not lift the zero modes even if $\mu$ were not a parameter of the Hamiltonian. Note, though, that the total number of Bogoliubov {\em quasi-particles} which diagonalize the Hamiltonian is conserved. Hence, we add a very small (negative) chemical potential associated with the total quasi-particle number, thereby lifting the Majorana zero-mode energies. Notice that in the case of topological insulators this procedure coincides with the one applied above: since the Hamiltonian conserves the total particle number, and the quasi-particle creation operators are linear combinations of {\em just} the particle creation operators (and not of the holes as well), the total quasi-particle and particle numbers coincide. We found that the appropriate quantity to study is not the occupation number as a function of the position in the chain, but the ratio between the average particle occupation number at the edge and that in the bulk, $f(\mu;T) = \langle n_{\text{edge}}\rangle/\langle n_{\text{bulk}}\rangle$ (without loss of generality, we have chosen for $n_{\text{bulk}}$ the site in the middle of the chain, since it is approximately constant throughout the bulk). In Fig.
5, we present the results obtained for a chain with open boundary conditions, consisting of 300 sites. \begin{figure}[h!] \centering \includegraphics[scale=0.6]{majorana_paper_plot.pdf} \caption{$\langle n_{\text{edge}}\rangle/\langle n_{\text{bulk}}\rangle$ as a function of the chemical potential $\mu$ for a chain of 300 sites with open boundary conditions, for several values of the temperature $T$.} \label{fig:majorana} \end{figure} The results are consistent with the behavior inferred from the Uhlmann connection (and the fidelity): the Majorana modes exhibit an abrupt change at zero temperature (a signature of the quantum phase transition), while for fixed finite temperatures they change smoothly with the parameter, and are slowly washed away as the temperature increases. Indeed, the behaviour of the finite-temperature curves is smooth, while the zero-temperature quench-like curve is expected to develop a discontinuity at $\mu = 1$ in the thermodynamic limit (see, for example, Fig.~4(b) and the respective discussion in~\cite{qua:zur:10}). To show this more accurately, one needs considerably higher computational power to probe chain lengths of much higher orders of magnitude, a relevant future direction of work. We note that this new method to study Majorana modes is more general and also applicable in the case of insulators. The results obtained for topological insulators using this new method lead to the same qualitative conclusion regarding the behavior of the edge states (consistent with our previous results). In order to avoid repetition, we omit presenting the respective results. In general, the behavior of the edge states and the associated Majorana modes reveals an interesting property of these systems: at finite temperatures, despite the absence of phase transitions, they keep exhibiting their topological features even on the ``trivial side'' of the phase diagram (for parameter values for which, at zero temperature, the system is topologically trivial).
At zero temperature, the Majorana modes are known to be good candidates for qubit encoding, and thus they could be used to achieve fault-tolerant quantum computation, see~\cite{ali:12,ipp:riz:gio:maz:16} and references therein. Therefore, the aforementioned property of Majorana modes in the low but finite-temperature regime is potentially significant for constructing stable quantum memories in more realistic scenarios. Furthermore, the existence of stable quantum memories has considerable impact on cryptography~\cite{rod:mat:pau:sou:17,pir:ott:spe:wee:bra:llo:geh:jac:and:15,ber:chr:col:ren:ren:10,lou:alm:and:pin:mat:pau:14,lou:ars:pau:pop:prv:16}. \subsection{4. The Uhlmann connection and the temperature-driven transitions: BCS vs. Kitaev chain} The study of the Uhlmann connection at finite temperatures showed, both for topological insulators (Creutz ladder and SSH models) and for topological superconductors (Kitaev model), that there exist no finite-temperature transitions driven by the Hamiltonians' parameter(s). Regarding the finite-temperature transitions driven by the temperature itself, as stated in the manuscript, the Uhlmann connection quantifies the rate of change of a system's eigenvectors. Since this is one of the main points of our work, we here briefly recall the argument presented in the second section (third paragraph) of the main text. Indeed, if the two states $\rho$ and $\rho'$ commute, $[\rho , \rho']=0$, we have $F(\rho , \rho') = \mbox{Tr}|\sqrt{\rho}\sqrt{\rho'}| = \mbox{Tr}\sqrt{\rho}\sqrt{\rho'}$. Thus, the Uhlmann factor $U$ associated to the two states is trivial, $U=I$, and consequently $\Delta(\rho , \rho') = \mbox{Tr}|\sqrt{\rho}\sqrt{\rho'}| -\mbox{Tr}\sqrt{\rho}\sqrt{\rho'} = 0$.
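As a concrete numerical check of this argument (an illustration of ours, not a computation from the paper), one can evaluate $\Delta(\rho,\rho') = \mbox{Tr}|\sqrt{\rho}\sqrt{\rho'}| - \mbox{Tr}\sqrt{\rho}\sqrt{\rho'}$ for single-qubit thermal states: it vanishes for commuting states and becomes strictly positive once the eigenbases rotate.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def thermal_state(hamiltonian, beta):
    """Gibbs state exp(-beta H) / Z."""
    rho = expm(-beta * hamiltonian)
    return rho / np.trace(rho)

def uhlmann_delta(rho_a, rho_b):
    """Delta(rho, rho') = Tr|sqrt(rho) sqrt(rho')| - Tr sqrt(rho) sqrt(rho')."""
    prod = sqrtm(rho_a) @ sqrtm(rho_b)
    # Tr|A| is the sum of the singular values of A
    trace_abs = np.sum(np.linalg.svd(prod, compute_uv=False))
    return float(trace_abs - np.trace(prod).real)

SZ = np.array([[1.0, 0.0], [0.0, -1.0]])
SX = np.array([[0.0, 1.0], [1.0, 0.0]])
```

Thermal states of `SZ` at two different temperatures commute, so `uhlmann_delta` returns (numerically) zero; thermal states of `SZ` and `SX` do not commute, and the quantity is strictly positive, signalling the genuinely quantum change of eigenbasis discussed above.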
We see that the Uhlmann connection, and in particular the quantity $\Delta$, quantifying the rate of change of the system's eigenbasis, reflects the quantum contribution to the state distinguishability (see also~\cite{zan:ven:gio:07}, equation (3), in which the Bures metric is split into classical and non-classical terms, the first quantifying the change of the system's eigenvalues and the second the change of the corresponding eigenvectors). Thus, the Uhlmann connection is trivial in the cases of the topological systems considered, as their Hamiltonians do not explicitly depend on temperature and thus commute with each other. On the other hand, the mean-field BCS Hamiltonian considered in our paper does explicitly depend on the temperature. Indeed, as the results presented in the Results section of the main paper clearly show, the change of the eigenbasis of the BCS thermal states carries the signature of a thermally driven phase transition (see the bottom right plot of FIG.~2). Note that, having a purely non-classical contribution, such a temperature-driven transition has quantum features as well, which is on its own an interesting consequence of the study of the Uhlmann connection. It is an interesting question to compare the two cases of superconductors studied, and to try to isolate the reasons for such a difference. To see why the two cases are different, note that in the case of the BCS superconductivity we consider the mean-field Hamiltonian \begin{equation} \label{eq:BCS_MF} \mathcal{H}=\sum_{k} (\varepsilon_{k}-\mu)c_{k}^{\dagger}c_{k}-\Delta_{k} c_{k}^{\dagger}c_{-k}^{\dagger} + \text{H.c.}, \end{equation} in which the gap $\Delta(V,T)$ is a function of temperature.
Had we considered a more fundamental ``pairing Hamiltonian'', which takes into account the quartic electron interaction mediated by the phonons of the lattice, \begin{equation} \label{eq:BCS_P} \mathcal{H}^{P}=\sum_{k} (\varepsilon_{k}-\mu)c_{k}^{\dagger}c_{k}-\sum_{k,k'} V_{k,k'}c_{k'}^{\dagger}c_{-k'}^{\dagger}c_{-k}c_{k} + \text{H.c.}, \end{equation} the Uhlmann connection would be trivial. The mean-field Hamiltonian is obtained by the BCS decoupling scheme with an averaging procedure setting the effective gap for the mean-field state $\rho = e^{-\beta\mathcal{H}}/Z$ (for simplicity, we assume $V_{kk'} = -V$ for $k,k'$ close to the Fermi momentum $k_F$, and zero otherwise) to be \begin{equation} \label{eq:gap} \Delta(V,T) = -\sum_{k'} V_{kk'} \langle c_{-k'}c_{k'}\rangle = V \sum_{k'} \mbox{Tr}(c_{-k'}c_{k'}\rho ). \end{equation} In other words, the effective mean-field Hamiltonian~\eqref{eq:BCS_MF} is obtained from~\eqref{eq:BCS_P} by expanding $c_{-k}c_{k} = \langle c_{-k}c_{k}\rangle + \delta(c_{-k}c_{k})$ around the suitably chosen superconducting ground state. Thus, $\mathcal{H}$ breaks the $\mbox{U}(1)$ particle-number conservation symmetry of $\mathcal{H}^P$ to a residual $\mathbb Z_2$ symmetry, to accommodate the superconducting properties of the system. As a result, the Uhlmann connection becomes sensitive to temperature-driven transitions as well. On the other hand, the Hamiltonian of the Kitaev model is phenomenological, modelled upon the success of the related BCS mean-field Hamiltonian. In this model, the gap is, for simplicity, considered to be temperature-independent. It would be interesting to probe this assumption in experiments with realistic topological superconducting materials. Our method based on the Uhlmann connection could then be useful in the analysis of such experiments. The above discussion shows an interesting consequence of our analysis in terms of the Uhlmann connection.
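The temperature dependence of $\Delta(V,T)$ can be made concrete by solving a self-consistency condition of the type~\eqref{eq:gap} numerically. The sketch below is our illustration, with a constant density of states over a shell of width $\omega_D$ and a hypothetical dimensionless coupling $g$; it is not the computation performed in the paper. It solves $1 = g\int_0^{\omega_D} \tanh\!\big(\sqrt{\varepsilon^2+\Delta^2}/2T\big)\,/\sqrt{\varepsilon^2+\Delta^2}\; d\varepsilon$ for $\Delta(T)$ by root bracketing.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def gap_residual(delta, temp, g=0.3, omega_d=1.0):
    """g * integral_0^omega_D tanh(E/2T)/E de - 1, with E = sqrt(e^2 + delta^2)."""
    integrand = lambda e: (np.tanh(np.sqrt(e**2 + delta**2) / (2.0 * temp))
                           / np.sqrt(e**2 + delta**2))
    val, _ = quad(integrand, 0.0, omega_d)
    return g * val - 1.0

def bcs_gap(temp, g=0.3, omega_d=1.0):
    """Solve the self-consistency condition for Delta(T); zero above Tc."""
    # above Tc even an infinitesimal gap cannot satisfy the equation
    if gap_residual(1e-12, temp, g, omega_d) <= 0.0:
        return 0.0
    return brentq(lambda d: gap_residual(d, temp, g, omega_d), 1e-12, 10.0 * omega_d)
```

With these (assumed) parameters the gap decreases monotonically with temperature and vanishes at a critical temperature of order $\omega_D e^{-1/g}$, the familiar BCS behaviour that makes the mean-field thermal states non-commuting at different temperatures.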
The temperature dependence of the BCS mean-field gap originally came as a consequence of a particular formal decoupling scheme. One might thus question whether a gap of a general superconducting material should also a priori depend on the temperature. Our analysis shows that a generic superconducting Hamiltonian of the form of~\eqref{eq:BCS_MF} would exhibit temperature-driven transitions, accompanied by the enhanced state distinguishability in terms of the system's eigenbasis, only in the case that the gap does depend on the temperature. \bibliographystyle{unsrt}
\section{Measure derivatives of Lions, associated It\^o formula and examples} \label{sec measure derivatives and ito} For the construction of the measure derivative in the sense of Lions we follow the approach from~\cite[Section 6]{cardaliaguet2012}. There are three main differences. The first difference is that we define the measure derivative in a domain. More precisely, we will define the measure derivative for any measure as long as it has support on $D_k \subset D$ for some $k\in \mathbb N$ (recall that $D_k \subset D_{k+1}$, $\bigcup_k D_k = D$ and every $D_k$ is compact). This is precisely what is needed for the analysis in this paper. The second difference is that we make explicit why the measure derivative is independent of the probability space used to realise the measure as well as of the random variable used. The third difference is in proving the ``Structure of the gradient'', see~\cite[Theorem 6.5]{cardaliaguet2012}. Thanks to an observation by Sandy Davie (University of Edinburgh), we can show as part iii) of Proposition~\ref{propn L deriv} that the measure derivative has the right structure even if it only exists at the point $\mu$, instead of for every square integrable measure, as is required in~\cite{cardaliaguet2012}. The method of Sandy Davie also conveniently results in a much shorter proof. \subsection{Construction of first-order Lions' measure derivative on $D_k\subset D \subseteq \mathbb{R}^d$} Consider $u:\mathcal{P}_2(D) \to \mathbb R$. Here $\mathcal{P}_2(D)$ is the space of probability measures on $D$ that have second moments, i.e. $\int_D |x|^2 \, \mu (dx) < \infty$ for $\mu \in \mathcal{P}_2(D)$. We want to define the derivative at points $\mu \in \mathcal P_2(D)$ such that $\text{supp}(\mu)\subseteq D_k$.
We shall write $\mu \in \mathcal P(D_k)$ if $\mu$ is a probability measure on $D$ with support in $D_k$. \begin{definition}[L-differentiability at $\mu \in \mathcal{P}(D_k)$] \label{def L differentiability} We say that $u$ is {\em L-differentiable at} $\mu \in \mathcal{P}(D_k)$ if there is an atomless\footnote{ Given $(\Omega, \mathcal F, \mathbb P)$, an {\em atom} is $E\in \mathcal F$ s.t. $\mathbb P(E)>0$ and for any $G\in \mathcal F$ with $G\subset E$ and $\mathbb P(G) < \mathbb P(E)$ we have $\mathbb P(G) = 0$.}, Polish probability space $(\Omega, \mathcal F, \mathbb P)$ and an $X \in L^2(\Omega)$ such that $\mu = \mathscr{L}(X)$ and the function $U:L^2(\Omega) \to \mathbb R$ given by $U(Y) := u(\mathscr{L}(Y))$ is Fr\'echet differentiable at $X$. We will call $U$ the {\em lift} of $u$. \end{definition} Clearly $X$ s.t. $\mu = \mathscr L(X)$ can always be chosen so that $\text{supp}(X) \subseteq D_k$ for $\mu \in \mathcal P(D_k)$. We recall that saying $U:L^2(\Omega; D) \to \mathbb R$ is Fr\'echet differentiable at $X$ with $\text{supp}(X)\subseteq D_k$ means that there exists a bounded linear operator $A:L^2(\Omega) \to \mathbb R$ such that \[ \lim_{\substack{|Y|_2 \to 0 \\ \text{supp}(X+Y)\subseteq D} } \bigg| \frac{U(X+Y)-U(X)}{|Y|_2} - \frac{AY}{|Y|_2} \bigg| = 0\, . \] Note that since $L^2(\Omega)$ is a Hilbert space with the inner product $(X,Y) := \mathbb E [XY]$ we can identify $L^2(\Omega)$ with its dual $L^2(\Omega)^*$ via this inner product. Then the bounded linear operator $A$ defines an element $DU(X) \in L^2(\Omega)$ through \[(DU(X),Y) := AY \quad \forall Y \in L^2(\Omega).\] \begin{proposition} \label{propn L deriv} Let $u$ be L-differentiable at $\mu \in \mathcal P(D_k)$, with some atomless $(\Omega, \mathcal F, \mathbb P)$, lift $U$ and $X\in L^2(\Omega)$.
Let $(\bar \Omega, \bar{\mathcal F}, \bar {\mathbb P})$ be an arbitrary atomless, Polish probability space which supports $\bar{X} \in L^2(\bar \Omega)$ with $\mathscr L(\bar X) = \mu$, and on which we have the lift $\bar{U}(Y) := u(\mathscr{L}(Y))$. Then \begin{enumerate}[i)] \item The lift $\bar U$ is Fr\'echet differentiable at $\bar X$ with derivative $D\bar U(\bar X)\in L^2(\bar \Omega)$. \item The joint law of $(X,DU(X))$ equals that of $( \bar X ,D\bar U(\bar X))$. \item There is $\xi:D_k \to D_k$ measurable such that $\int_{D_k} |\xi(x)|^2 \mu (dx) < \infty$ and \[ \xi(X) = DU(X), \quad \xi(\bar X) = D\bar U(\bar X)\,.\] \end{enumerate} \end{proposition} Once this is proved we will know that the notion of L-differentiability depends neither on the probability space used nor on the random variable used. Moreover the function $\xi$ given by this proposition is again independent of the probability space and random variable used. \begin{definition}[L-derivative of $u$ at $\mu$] \label{def L derivative} If $u$ is L-differentiable at $\mu$ then we write $\partial_\mu u(\mu):= \xi$, where $\xi$ is given by Proposition~\ref{propn L deriv}. Moreover we have $\partial_\mu u: \mathcal P_2(D_k) \times D_k \to D_k$ given by \[ \partial_\mu u(\mu, y) := [\partial_\mu u(\mu)](y)\,. \] \end{definition} To prove Proposition~\ref{propn L deriv} we will need the following result: \begin{lemma} \label{lemma tau} Let $(\Omega, \mathcal F, \mathbb P)$ and $(\bar \Omega, \bar{\mathcal F}, \bar {\mathbb P})$ be two atomless, Polish probability spaces supporting $D_k$-valued random variables $X$ and $\bar X$ such that $\mathscr{L}(X) =\mathscr{L}(\bar X)$. Then for any $\epsilon > 0$ there exists $\tau:\Omega \to \bar \Omega$ which is bijective, such that both $\tau$ and $\tau^{-1}$ are measurable and measure preserving and moreover \[ |X - \bar X \circ \tau |_\infty < \epsilon \,\,\, \text{ and }\,\,\, |X \circ \tau^{-1} - \bar X|_\infty < \epsilon\,.
\] \end{lemma} \begin{proof} Let $(A_n)_n$ be a measurable partition of $D_k$ such that $\text{diam}(A_n) < \epsilon$. Let \[ B_n := \{X \in A_n\},\quad \bar B_n := \{\bar X \in A_n\}\,. \] These form measurable partitions of $\Omega$ and $\bar \Omega$ respectively and moreover $\mathbb P(B_n) = \bar{\mathbb P}(\bar B_n)$. As the probability spaces are atomless, there exist $\tau_n :B_n \to \bar B_n$ bijective, such that $\tau_n$ and $\tau_n^{-1}$ are measurable and measure preserving. See~\cite[Sec. 41, Theorem C]{hamlos1950} for details\footnote{The theorem in fact provides the required isomorphism between a separable atomless probability space and the unit interval.}. Let \[ \tau(\omega) := \tau_n(\omega)\,\,\, \text{if $\omega \in B_n$},\quad \tau^{-1}(\bar \omega) := \tau_n^{-1}(\bar \omega)\,\,\, \text{if $\bar \omega \in \bar B_n$} \,. \] We can see that these are measurable, measure preserving bijections. Now consider $\omega \in B_n$. Then $\tau(\omega) = \tau_n(\omega) \in \bar B_n$. But then $X(\omega) \in A_n$ and $\bar{X}(\tau(\omega)) \in A_n$ too. Hence \[ |X(\omega) - \bar{X}(\tau(\omega))| < \epsilon \quad \forall \omega \in \Omega\,. \] The estimate for the inverse is proved analogously. \end{proof} We use the notation $L^2:=L^2(\Omega)$ and $\bar L^2:= L^2(\bar \Omega)$. \begin{proof}[Proof of Proposition~\ref{propn L deriv}, part i)] For any $h>0$ we have $\tau_h, \tau_h^{-1}$ given by Lemma~\ref{lemma tau} measure preserving and such that $|X - \bar{X}\circ\tau_h |_\infty < h$. This means that $|X - \bar{X}\circ\tau_h |_2 < h$ and we have the analogous estimate with $\tau_h^{-1}$. Our first aim is to show that $(DU(X)\circ \tau_h^{-1})_{h>0}$ is a Cauchy sequence in $\bar L^2$. Fix $\epsilon > 0$.
Then $\exists \,\delta > 0$ such that we have \[ |U(X+Y) - U(X) - (DU(X),Y)| < \frac{\epsilon}{2}|Y|_2 \quad \text{for all}\,\,\,|Y|_2 < \delta\,\,\, \text{and}\,\,\,\text{supp}(X+Y)\subseteq D \,, \] since $U$ is Fr\'echet differentiable at $X$. Fix $h, h' < \delta/2$ and consider $|\bar Y|_2 < \delta / 2$ with $\text{supp}(\bar X+\bar Y)\subseteq D$\,. Then, since the maps $\tau_h^{-1}$ are measure preserving, we have \[ (DU(X)\circ \tau_h^{-1}, \bar Y) = (DU(X), \bar Y \circ \tau_h)\,. \] Note that the inner product on the left is in $\bar L^2$ but the one on the right is in $L^2$. This will not be distinguished in our notation. Let $Z_h := \bar Y \circ \tau_h - X + \bar X\circ \tau_h$. Then $|Z_h|_2 \leq |\bar Y|_2 + |\bar X\circ \tau_h - X|_2 <\delta$ and since $\text{supp}(\bar X + \bar Y) \subseteq D$, we have $\text{supp}(X+ Z_h )\subseteq D$. Moreover \begin{equation*} \begin{split} (DU(X)\circ \tau_h^{-1} & - DU(X)\circ \tau_{h'}^{-1},\bar Y) = (DU(X), Z_h) - (DU(X),Z_{h'})\\ & + (DU(X), X - \bar X \circ \tau_h ) + (DU(X), \bar X \circ \tau_{h'} - X)\\ = & - \left[U(X+Z_h) - U(X) - (DU(X),Z_h)\right] + \left[U(X+Z_h) - U(X)\right]\\ & + \left[U(X+Z_{h'}) - U(X) - (DU(X),Z_{h'})\right] - \left[U(X+Z_{h'}) - U(X)\right]\\ & + (DU(X), X - \bar X \circ \tau_h ) + (DU(X), \bar X \circ \tau_{h'} - X)\,. \end{split} \end{equation*} But as $\tau_h$ is measure preserving and $U$ and $\bar U$ only depend on the law, we have \[ U(X+Z_h) = U(\bar Y \circ \tau_h + \bar X\circ \tau_h) = \bar U(\bar Y + \bar X) = U(X+Z_{h'}). \] Hence \[ \begin{split} |(DU(X)\circ \tau_h^{-1} & - DU(X)\circ \tau_{h'}^{-1},\bar Y)| \leq \frac{\epsilon}{2}|Z_{h'}|_2 + \frac{\epsilon}{2}|Z_{h}|_2 + 2|DU(X)|_2\max(h,h') \, \\ & \leq \epsilon |\bar Y|_2 + \epsilon \max(h,h') + 2|DU(X)|_2\max(h,h') \,.
\end{split} \] This means that \begin{equation*} \begin{split} & |DU(X)\circ \tau_h^{-1} - DU(X)\circ \tau_{h'}^{-1}|_2\\ & = \sup_{|\bar Y|_2 = \delta/2} \frac{|(DU(X)\circ \tau_h^{-1} - DU(X)\circ \tau_{h'}^{-1},\bar Y)|}{|\bar Y|_2}\leq \epsilon + (2 \epsilon + 4|DU(X)|_2) \frac{\max(h,h')}{\delta}\,. \end{split} \end{equation*} Since we can choose $h,h' < \tfrac{\delta}{2}$ and also $h,h' < \tfrac{\epsilon \delta}{4|DU(X)|_2}$ we have the required estimate and see that $(DU(X)\circ \tau_h^{-1})_{h>0}$ is a Cauchy sequence in $\bar L^2$. Thus, there is $\psi \in \bar L^2$ such that \[DU(X)\circ \tau_h^{-1}\to \psi \,\,\, \text{as} \,\,\, h \searrow 0.\] The next step is to show that $\bar U$ is Fr\'echet differentiable at $\bar X$ with $\psi = D\bar U(\bar X)$. To that end we note that $\bar U(\bar X + \bar Y) = U(X+Z_h)$ and \[ (DU(X),\bar Y\circ \tau_h) = (DU(X),Z_h) + (DU(X),X- \bar X \circ \tau_h ). \] Hence \begin{equation*} \begin{split} & |\bar U(\bar X + \bar Y) - \bar U(\bar X) - (\psi,\bar Y)|\\ & = |U(X+Z_h) - U(X) - (DU(X),\bar Y\circ \tau_h) + (DU(X),\bar Y \circ \tau_h) - (\psi,\bar Y)|\\ & \leq \epsilon|Z_h|_2 + |DU(X)|_2 h + |DU(X)\circ \tau_h^{-1} - \psi|_2|\bar Y|_2 \leq 4 \epsilon |\bar Y|_2, \end{split} \end{equation*} for $h$ sufficiently small. Thus $\bar U$ is Fr\'echet differentiable at $\bar X$ and $\psi = D\bar U(\bar X)\in \bar L^2$. \end{proof} \begin{proof}[Proof of Proposition~\ref{propn L deriv}, part ii)] We first note that \[ \mathscr{L}(X\circ \tau_h^{-1}, DU(X) \circ \tau_h^{-1}) = \mathscr{L}(X, DU(X)) \] since the mapping $\tau_h^{-1}$ is measure preserving. Moreover \[ \text{$(X\circ \tau_h^{-1}, DU(X) \circ \tau_h^{-1}) \to (\bar X, D \bar U(\bar X))$ in $L^2(\bar \Omega; \mathbb R^{2d})$ as $h\searrow 0$.} \] Hence we get that $\mathscr{L}(X,DU(X))= \mathscr{L}(\bar X,D\bar U(\bar X))$. \end{proof} \begin{proof}[Proof of Proposition~\ref{propn L deriv}, part iii)] Note that $\mu$ is not necessarily atomless.
We take $\lambda$ to be the normalised translation-invariant measure on $\mathscr{B}(S^1)$, with $S^1$ denoting the unit circle. The probability space $( D_k \times S^1, \mathscr{B}(D_k)\otimes \mathscr{B}(S^1), \mu \otimes \lambda)$ is atomless. Let $\tilde L^2$ denote the space of square integrable random variables on this probability space. The random variable $\tilde X(x,s):=x$ is in $\tilde L^2$ and has law $\mu$. With the usual lift $\tilde U$ we know, from part i), that $D\tilde U(\tilde X)$ exists in $\tilde L^2$. Let $\xi(x,s) := D\tilde U(\tilde X)(x,s)$. We see that \[ \mathscr{L}(D\tilde U(\tilde X))(B) = \mu \otimes \lambda(D\tilde U(\tilde X) \in B) \] depends only on the law of $\tilde X$, which is $\mu$. Hence $D\tilde U(\tilde X)(x,s)$ cannot change with $s$, and thus $\xi(x,s) = \xi(x)$. Then \[ 1 = \mu\left(x\in D_k:\xi(x) = D\tilde U(\tilde X)(x)\right) = \mathbb P\left(\xi(X) = D U (X)\right) \] since $\mathscr{L}(X,DU(X))= \mathscr{L}(\tilde X ,D\tilde U(\tilde X))$ due to part ii). Hence $\xi(X) = DU(X)$ $\mathbb P$-a.s. \end{proof} \subsection{Higher-order derivatives} We observe that if $\mu$ is fixed then $\partial_\mu u(\mu)$ is a function from $D_k\to D_k$. If, for $y \in D_k$, $\partial_y \left[ \partial_\mu u(\mu)(y)_j \right]$ exists for each $j=1,\ldots,d$ then $\partial_y \partial_\mu u:\mathcal P(D_k) \times D_k \to D_k \times D_k$ is the matrix \[ \partial_y \partial_\mu u(\mu,y) := \big(\partial_y \left[ \partial_\mu u(\mu)(y)_j \right]\big)_{j=1,\ldots,d} \,\,. \] If we fix $y\in D_k$ then $\partial_\mu u(\cdot)(y)$ is a function from $\mathcal P(D_k) \to D_k$. Fixing $j=1,\ldots,d$, if $\partial_\mu u(\cdot)(y)_j : \mathcal P(D_k) \to \mathbb R$ is L-differentiable at some $\mu$ then its L-derivative is the function given by part iii) of Proposition~\ref{propn L deriv}, namely $\partial_\mu \big(\partial_\mu u(\mu)(y)_j \big):D_k \to D_k$.
The second-order derivative in measure thus constructed is $\partial_\mu^2 u : \mathcal P(D_k) \times D_k \times D_k \to D_k \times D_k$ given by \[ \partial_\mu^2 u(\mu, y, \bar y) := \Big( \partial_\mu \big(\partial_\mu u(\mu,y)_j \big)(\bar y) \Big)_{j=1,\ldots,d}\,\,. \] \subsection{It\^o formula for functions of measures} Assume we have a filtered probability space $(\Omega, \mathcal F, \mathbb P)$ with filtration $(\mathcal F_t)_{t\geq 0}$ satisfying the usual conditions, supporting an $(\mathcal F_t)_{t\geq 0}$-Brownian motion $w$ and adapted processes $b$ and $\sigma$ satisfying appropriate integrability conditions. We consider the It\^o process \[ dx_t = b_t \, dt + \sigma_t dw_t, \,\,\,x_0 \in L^2(\mathcal F_0) \] which satisfies $x_t \in D_k$ for all $t$ a.s. \begin{definition} We say that $u:\mathcal P_2(D) \to \mathbb R$ is in $\mathcal C^{(1,1)}(\mathcal P_2(D))$ if there is a continuous version of $y \mapsto \partial_\mu u(\mu)(y)$ such that the mapping $\partial_\mu u : \mathcal P_2(D) \times D \to D$ is jointly continuous at any $(\mu,y)$ s.t. $y\in \text{supp}(\mu)$, and such that $y\mapsto \partial_\mu u(\mu,y)$ is continuously differentiable and its derivative $\partial_y \partial_\mu u:\mathcal P_2(D) \times D \to D \times D$ is jointly continuous at any $(\mu,y)$ s.t. $y\in \text{supp}(\mu)$. \end{definition} The notation $\mathcal C^{(1,1)}$ is chosen to emphasise that we can take one measure derivative which is again differentiable (in the usual sense) with respect to the new free variable that arises. Note that in~\cite{chassagneux2014classical} such functions are called partially $\mathcal C^2$.
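To illustrate the objects just defined (a standard cylindrical example, included only for orientation and not taken from the references): let $\varphi \in C_b^2(\mathbb R^d)$ and consider $u(\mu) := \left(\int \varphi \, d\mu\right)^2$ with lift $U(X) = (\mathbb E[\varphi(X)])^2$. Computing the Fr\'echet derivative of the lift directly gives, for $y, \bar y \in D_k$,
\[
\partial_\mu u(\mu)(y) = 2\langle \mu, \varphi\rangle \, \partial \varphi(y), \qquad \partial_y \partial_\mu u(\mu, y) = 2 \langle \mu, \varphi \rangle \, \partial^2 \varphi(y), \qquad \partial^2_\mu u(\mu, y, \bar y) = 2\, \partial\varphi(y) \, \partial \varphi(\bar y)^*\,,
\]
so that both the mixed derivative $\partial_y\partial_\mu u$ and the second-order measure derivative $\partial^2_\mu u$ are non-trivial. For the linear functional $u(\mu) = \int \varphi\, d\mu$ one gets instead $\partial_\mu u(\mu)(y) = \partial\varphi(y)$, $\partial_y\partial_\mu u(\mu,y) = \partial^2\varphi(y)$ and $\partial^2_\mu u = 0$.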
\begin{proposition} \label{propn ito for meas only} Assume that \[ \mathbb E \int_0^\infty |b_t|^2 + |\sigma_t|^4 \, dt < \infty\,. \] Let $u$ be in $\mathcal C^{(1,1)}(\mathcal P_2(D))$ such that for any compact subset $\mathcal K \subset \mathcal P_2(D)$ \begin{equation} \label{eq ito integrability condition} \sup_{\mu \in \mathcal K} \int_{D} \left[|\partial_\mu u(\mu)(y)|^2 + |\partial_y \partial_\mu u(\mu)(y)|^2 \right]\mu(dy) < \infty \,. \end{equation} Then, for $\mu_t := \mathscr L(x_t)$, \[ \begin{split} u( \mu_t) = & u(\mu_0) + \int_0^t \mathbb E\left[ b_s \partial_\mu u(\mu_s)(x_s) + \frac{1}{2}\text{tr}\left[ \sigma_s \sigma_s^* \partial_y \partial_\mu u(\mu_s)(x_s)\right]\right]\,ds\,. \end{split} \] \end{proposition} Note that since we are assuming that the process $x$ never leaves some $D_k$, we have $\text{supp}(\mu_t) \subset D_k$ for all times $t$. The proof relies on replacing $\mu_t$ by an approximation arising as the empirical measure of $N$ independent copies of the process $x$. For marginal empirical measures there is a direct link between measure derivatives and partial derivatives, see~\cite[Proposition 3.1]{chassagneux2014classical}. One can then apply the classical It\^o formula to the approximating system of independent copies of $x$ and take the limit. This is done in~\cite[Theorem 3.5]{chassagneux2014classical}. Proposition~\ref{propn ito for meas only} can be used to derive an It\^o formula for a function that depends on $(t,x,\mu)$. \begin{definition} \label{def c122} By $\mathcal C^{1,2,(1,1)}([0,\infty)\times D \times \mathcal P_2(D))$ we denote the functions $v=v(t,x,\mu)$ such that $v(\cdot,\cdot,\mu) \in C^{1,2}([0,\infty)\times D)$ for each $\mu$, and such that $v(t,x,\cdot)$ is in $\mathcal C^{(1,1)}(\mathcal P_2(D))$ for each $(t,x)$. Moreover all the resulting (partial) derivatives must be jointly continuous in $(t,x,\mu)$ or $(t,x,\mu,y)$ as appropriate.
Finally, by $\mathcal C^{2,(1,1)}(D \times \mathcal P_2(D))$ we denote the subspace of $\mathcal C^{1,2,(1,1)}([0,\infty)\times D \times \mathcal P_2(D))$ of functions $v$ that are constant in $t$. \end{definition} To conveniently express integrals with respect to the laws of the process taken \textit{only} over the ``new'' variables arising in the measure derivative, we introduce another probability space $(\tilde \Omega, \tilde{\mathcal F}, \tilde{\mathbb P})$, a filtration $(\tilde{\mathcal F}_t)_{t\geq 0}$, and processes $\tilde w$, $\tilde b$, $\tilde \sigma$ and a random variable $\tilde x_0$ on this probability space such that they have the same laws as $w$, $b$, $\sigma$ and $x_0$. We assume $\tilde w$ is a Wiener process. Then \[ d\tilde x_t = \tilde b_t \, dt + \tilde \sigma_t d\tilde w_t, \,\,\,\tilde x_0 \in L^2(\tilde{\mathcal F}_0) \] is another It\^o process which satisfies $\tilde x_t \in D_k$ for all $t$ a.s. Moreover if we now consider the probability space $(\Omega\times \tilde \Omega, \mathcal F \otimes \tilde{\mathcal F}, \mathbb P \otimes \tilde{\mathbb P})$ then we see that the processes with and without tilde are independent on this new space. \begin{proposition}[It\^o formula] \label{propn ito formula full} Assume that \[ \mathbb E \int_0^\infty |b_t|^2 + |\sigma_t|^4 \, dt < \infty\,. \] Let $v \in \mathcal C^{1,2,(1,1)}([0,\infty)\times D \times \mathcal P_2(D))$ be such that for any compact subset $\mathcal K \subset \mathcal P_2(D)$ \begin{equation} \label{eq ito integrability condition tx} \sup_{t,x,\mu \in \mathcal K} \int_{D} \left[|\partial_\mu v(t,x,\mu)(y)|^2 + |\partial_y \partial_\mu v(t,x,\mu)(y)|^2 \right]\mu(dy) < \infty \,.
\end{equation} Then, for $\mu_t := \mathscr L(\tilde x_t)$, \[ \begin{split} v(t,x_t, \mu_t) - v(0,x_0,\mu_0) = & \int_0^t \left[\partial_t v(s,x_s,\mu_s) + b_s \partial_x v(s,x_s,\mu_s) + \frac{1}{2}\text{tr}\left[\sigma_s\sigma_s^* \partial_x^2 v(s,x_s,\mu_s)\right] \right]\,ds \\ & + \int_0^t \partial_x v(s,x_s,\mu_s)\, \sigma_s\, dw_s \\ & + \int_0^t \tilde{\mathbb E}\left[ \tilde b_s \partial_\mu v(s,x_s,\mu_s)(\tilde x_s) + \frac{1}{2}\text{tr}\left[\tilde{\sigma}_s\tilde{\sigma}_s^* \partial_y \partial_\mu v(s,x_s,\mu_s)(\tilde x_s)\right]\right]\,ds\,. \end{split} \] \end{proposition} Here we follow the argument from~\cite{buckdahn2017mean} explaining how to go from an It\^o formula for functions of measures only, i.e. from Proposition~\ref{propn ito for meas only}, to the general case. Note that it is possible to assume that $\tilde w$, $\tilde b$, $\tilde \sigma$ and $\tilde x_0$ have the same laws as $w$, $b$, $\sigma$ and $x_0$ above, but in fact this is not necessary. In this paper this generality is needed in the proof of Lemma~\ref{lemma:uniform}. \begin{proof}[Outline of proof for Proposition~\ref{propn ito formula full}] Fix $(\bar t,\bar x)$ and apply Proposition~\ref{propn ito for meas only} to the function $u(\mu):=v(\bar t,\bar x,\mu)$ and the law $\mu_t:=\mathscr L(\tilde x_t)$. Then \[ \begin{split} v(\bar t,\bar x,\mu_t) - v(\bar t,\bar x,\mu_0) & = \int_0^t \tilde{\mathbb E}\left[ \tilde b_s \partial_\mu v(\bar t,\bar x,\mu_s)(\tilde x_s) + \frac{1}{2}\text{tr}\left[\tilde{\sigma}_s\tilde{\sigma}_s^* \partial_y \partial_\mu v(\bar t,\bar x,\mu_s)(\tilde x_s)\right]\right]\,ds \\ & =: \int_0^t M(\bar t,\bar x, \mu_s)\,ds\,. \end{split} \] We thus see that the map $t\mapsto v(\bar t, \bar x, \mu_t)$ is absolutely continuous for all $(\bar t, \bar x)$ and so for almost all $t$ we have $\partial_t v(\bar t, \bar x, \mu_t) = M(\bar t, \bar x, \mu_t)$.
Note that for completeness we would need to use the definition of $\mathcal C^{1,2,(1,1)}$ functions and a limiting argument to get the partial derivative for all $t$. See the proof of the corresponding It\^o formula in~\cite{chassagneux2014classical}. We now consider $\bar v$ given by $\bar v(t, x) := v(t,x,\mu_t)$. Then $\partial_t \bar v(t,x) = (\partial_t v)(t,x,\mu_t) + M(t,x,\mu_t)$. Using the usual It\^o formula we then have \[ \begin{split} \bar v(t,x_t) - \bar v(0,x_0) = & \int_0^t \left[\partial_t v(s,x_s,\mu_s) + M(s,x_s,\mu_s) + b_s \partial_x v(s,x_s,\mu_s) + \frac{1}{2}\text{tr}\left[\sigma_s\sigma_s^* \partial_x^2 v(s,x_s,\mu_s)\right] \right]\,ds \\ & + \int_0^t \partial_x v(s,x_s,\mu_s)\, \sigma_s\, dw_s\,. \end{split} \] \end{proof} \section{Introduction} We will consider either the time interval $I=[0,T]$ for some fixed $T>0$ or $I=[0,\infty)$. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $(\mathcal{F}_t)_{t\in I}$ a filtration such that $\mathcal{F}_0$ contains all sets of $\mathcal{F}$ that have probability zero and such that the filtration is right-continuous. Let $w=(w_t)_{t\in I}$ be an $\mathbb{R}^{d'}$-valued Wiener process which is an $(\mathcal{F}_t)_{t\in I}$-martingale. We consider the McKean--Vlasov stochastic differential equation (SDE) \begin{equation} \label{eq mkvsde} x_t = x_0 + \int_0^t b(s,x_s,\mathscr{L}(x_s))\,ds + \int_0^t \sigma(s,x_s,\mathscr{L}(x_s))\,dw_s\,,\,\,\, t\in I\,. \end{equation} Here we use the notation $\mathscr{L}(x)$ to denote the law of the random variable $x$.
The law of such an SDE satisfies a nonlinear Fokker--Planck--Kolmogorov equation (see also~\cite{bogachev2016distances} and more generally~\cite{BogachevKrylovRocknerShaposhnikov}): writing $\mu_t := \mathscr L(x_t)$ and $a:= \frac12\sigma \sigma^*$ we have, for $t\in I$, \begin{equation} \label{eq fwd kolmogorov} \langle \mu_t, \varphi \rangle = \langle \mu_0, \varphi \rangle + \int_0^t \left \langle \mu_s, b(s,\cdot, \mu_s)\partial_x \varphi + \text{tr}\left(a(s,\cdot,\mu_s) \partial_x^2 \varphi\right) \right \rangle \, ds\,\,\,\, \forall \varphi \in C^2_0( D )\,. \end{equation} The aim of this article is to study the existence and uniqueness of solutions to the equation~\eqref{eq mkvsde}. We will show that a weak solution to~\eqref{eq mkvsde} exists for unbounded and continuous coefficients, provided that we can find an appropriate measure-dependent Lyapunov function which ensures integrability of the equation. This generalises the results of~\cite{funaki1984certain} and~\cite{gyongy1996existence}. The work on SDEs with coefficients that depend on the law was initiated by~\cite{McKean66}, who was inspired by Kac's programme in kinetic theory~\cite{Kac56}. An excellent and thorough account of the general theory of McKean--Vlasov SDEs and their particle approximations can be found in~\cite{sznitman1991topics}. Sznitman showed that if the coefficients of \eqref{eq mkvsde} are globally Lipschitz continuous, a fixed point argument on the Wasserstein space can be carried out, and consequently a solution to \eqref{eq mkvsde} is obtained as the limit of classical SDEs. This means that existence and uniqueness results for classical SDEs allow one to establish existence and uniqueness for~\eqref{eq mkvsde}. If Lipschitz continuity does not hold, the fixed point argument typically fails.
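Sznitman's point of view can be sketched numerically: $N$ interacting particles evolve under an Euler--Maruyama scheme with the empirical measure standing in for $\mathscr L(x_t)$. The drift $b(x,\mu) = -(x - \int y\,\mu(dy))$, the parameters and all names below are our illustrative choices, not taken from the references.

```python
import numpy as np

def simulate_mckean_vlasov(n_particles=2000, horizon=2.0, dt=0.01, sigma=0.5, seed=0):
    """Euler--Maruyama for the particle approximation of the McKean--Vlasov SDE
    dx_t = -(x_t - E[x_t]) dt + sigma dW_t  (one-dimensional, illustrative drift)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)       # x_0 ~ N(0, 1), i.i.d. particles
    for _ in range(int(horizon / dt)):
        mean = x.mean()                        # empirical proxy for L(x_t)
        x = x - (x - mean) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    return x
```

For this mean-reverting drift the mean is conserved while the variance relaxes towards $\sigma^2/2$; propagation of chaos says the empirical measure of the particle system converges to $\mathscr L(x_t)$ as the number of particles grows, which is exactly the mechanism behind the fixed point/limit argument described above.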
However, in the setting of SDEs with non-degenerate diffusion coefficient, the regularisation effect of the noise allowed Zvonkin and Krylov~\cite{Zvonkin75} and later Veretennikov~\cite{Veretennikov80}, in a general multidimensional case, to show that the fixed point argument works, assuming only that the drift coefficient is H\"{o}lder continuous. This result has been recently generalised to McKean--Vlasov SDEs in~\cite{de2015strong}. The key step of the proof is to establish a smoothness property of the corresponding PDE on $D \times \mathcal{P}(D)$. To go beyond H\"{o}lder continuity one typically uses a compactness argument to establish the existence of a solution to stochastic differential equations. In the context of McKean--Vlasov SDEs, this has been done by Funaki, who was interested in probabilistic representations of Boltzmann equations~\cite{funaki1984certain}. Funaki formulated a non-linear martingale problem for McKean--Vlasov SDEs that allowed him to establish existence of a solution to~\eqref{eq mkvsde} by studying the limiting law of an Euler discretisation. His proof of existence holds for continuous coefficients satisfying a Lyapunov type condition in the state variable $x\in \mathbb{R}^d$ with polynomial Lyapunov functions. Whilst we also assume continuity of the coefficients, we allow for a much more general Lyapunov condition that depends on a measure. Furthermore, Funaki uses Lyapunov functions to establish integrability of the Euler scheme, which is problematic if one wants to depart from polynomial functions, see~\cite{SzpruchZhang18}. Recently,~\cite{mishura2016existence}, assuming only a linear growth condition in the space variable, boundedness in the measure argument and non-degeneracy of the diffusion, obtained existence results for~\eqref{eq mkvsde}. This novel result was achieved through the use of Krylov's estimates (see~\cite[Ch. 2, Sec. 2]{MR601776}) in the context of McKean--Vlasov SDEs.
An alternative approach to establishing existence of solutions to McKean--Vlasov equations is to approximate the equation with a particle system (a system of classical SDEs that interact with each other through the empirical measure) and show that the limiting law solves the martingale problem. In this approach, one works with laws of empirical measures, i.e. on the space $\mathcal{P}(\mathcal{P}(D))$, and proves convergence to a (weak) solution of~\eqref{eq mkvsde} by studying the corresponding non-linear martingale problem. We refer to~\cite{meleard1996asymptotic} for a general overview of that approach and to~\cite{bossy2011conditional,FournierJourdain17} and references therein for recent results exploring this approach. A general approach to establish the existence of martingale solutions has also been presented in \cite{MR3609379}. Here, inspired by~\cite{mishura2016existence}, we tackle the problem using the Skorokhod representation theorem and convergence lemma~\cite{skorokhod1965}. For classical SDEs (equations with no dependence on the law), the lack of sufficient regularity of the coefficients, say Lipschitz continuity, proves to be the main challenge in establishing existence and uniqueness of solutions. Lack of boundedness of the coefficients typically does not lead to significant difficulty, provided these are at least locally bounded. In that case one can work with local solutions and the only concern is the possible explosion. The conditions that ensure that the solution does not explode can be formulated by using Lyapunov function techniques, as pioneered in \cite{Khasminskii80}. The key observation is that if one considers two SDEs with coefficients that agree on some bounded open domain then the solutions, if unique, also agree until the first time the solution leaves the domain, see, for example,~\cite[Ch. 10]{StroockVaradhan2006}.
This classical localisation procedure does not carry over, at least directly, from the setting of classical SDEs to McKean--Vlasov SDEs. Indeed, if we stop a classical SDE then, until the stopping time, the stopped process satisfies the same equation. If we take~\eqref{eq mkvsde} and consider the stopped process $y_t := x_{t\wedge \tau}$, with some stopping time $\tau$, then the equation it satisfies is \[ y_t = y_0 + \int_0^{t\wedge \tau} b(s,y_s,\mathscr{L}(x_s))\,ds + \int_0^{t\wedge \tau} \sigma(s,y_s,\mathscr{L}(x_s))\,dw_s\,,\,\,\, t\in I\,. \] Clearly, even for $t\leq \tau$ this is not the same equation since $\mathscr L(x_s) \neq \mathscr L(y_s)$. Furthermore, this is not a McKean--Vlasov SDE. This could be problematic if one would like to obtain a solution to McKean--Vlasov SDEs through a limiting procedure of stopped processes. Furthermore, let $D_k\subseteq D_{k+1}$ be a sequence of nested domains, and consider functions $\bar b$ and $\bar \sigma$ such that $\bar b = b $ and $\bar \sigma = \sigma$ on $D_k$. The equation \begin{equation*} \bar x_t =\bar x_0 + \int_0^t \bar b(s,\bar x_s,\mathscr{L}(\bar x_s))\,ds + \int_0^t \bar \sigma(s,\bar x_s,\mathscr{L}(\bar x_s))\,dw_s\,,\,\,\, t\in I\,, \end{equation*} is a McKean--Vlasov SDE, but $x_t \neq \bar x_t$ even for $t\leq \bar \tau^k$, where $\bar \tau^k = \inf\{t \geq 0: \bar x_t \notin D_k \}$. This implies that if one considers a sequence of SDEs with coefficients that agree on these subdomains, one no longer has monotonicity for the corresponding stopping times. We show that, despite these difficulties, it is still possible to establish the existence of weak solutions to the McKean--Vlasov SDEs \eqref{eq mkvsde} using the idea of localisation, but extra care is needed. \subsection{Main Contributions} Our first main contribution is the generalisation of Lyapunov function techniques to the setting of McKean--Vlasov SDEs.
The coefficients of the equation~\eqref{eq mkvsde} depend on $(x,\mu)\in D \times \mathcal{P}(D)$ for $D\subseteq \mathbb{R}^d$. Hence the class of Lyapunov functions considered in this paper also depends on $(x,\mu)\in D \times \mathcal{P}(D)$. See Assumption~\ref{a-nonint}. Furthermore, it is natural to formulate the {\em integrated Lyapunov} condition, in which the key stability assumption is required to hold only on $\mathcal P(D) $, see Assumption~\ref{a-int} and Section~\ref{sec:motivating examples} for motivating examples. Note that it is not immediately clear how one can obtain tightness estimates for the particle approximation under the integrated conditions we propose. To work with Lyapunov functions on $\mathcal P(D)$, we take advantage of the recently developed analysis on Wasserstein spaces, and in particular derivatives with respect to a measure as introduced by Lions in his lectures at College de France, see~\cite{cardaliaguet2012} and~\cite[Ch. 5]{carmona2017probabilistic}. This analysis is presented in the appendix, where the measure derivative is given in a domain. Our second main contribution is the probabilistic proof of the existence of a stationary solution to the nonlinear Fokker--Planck--Kolmogorov equation~\eqref{eq fwd kolmogorov}. Furthermore, the calculus on Wasserstein spaces allows one to study a Fokker--Planck--Kolmogorov type equation on $\mathcal P_2(D)$. Indeed, for $\phi\in \mathcal C^{(1,1)}(\mathcal P_2(D))$ and $t\in I$, \begin{equation} \label{eq fwd measure} \begin{split} \phi(\mathscr L(x^\mu_t))=\phi(\mathscr L(x^\mu_0)) \, + & \int_0^t\langle \mathscr L(x^\mu_s), b(s,\cdot,\mathscr L(x^\mu_s)) \partial_\mu \phi(\mathscr L(x^\mu_s)) + \text{tr}\left[a(s,\cdot,\mathscr L( x^\mu_s)) \partial_y \partial_\mu \phi(\mathscr L(x^\mu_s))\right]\rangle\,ds. \end{split} \end{equation} Following the remark by Lions from his lectures at College de France, the equation~\eqref{eq fwd measure} can be interpreted as a non-local transport equation on the space of measures.
The reader may consult~\cite[Ch. 5 Sec. 7.4]{carmona2017probabilistic} for further details. Another angle is to notice that, while~\eqref{eq fwd kolmogorov} gives an equation for linear functionals of the measure, equation~\eqref{eq fwd measure} is an equation for nonlinear functionals of the measure. The existence results obtained in this paper imply existence of a stationary solution to \eqref{eq fwd measure} in the case where $b$ and $\sigma$ do not depend on time. Finally, we formulate uniqueness results under the Lyapunov type condition and the {\em integrated Lyapunov type condition} that is required to hold only on $\mathcal P(D) $. This extends the standard monotone type conditions studied in the literature, e.g.~\cite{MR2860672,MR3226169,DosReis2017LDP}. Interestingly, in some special cases we are able to obtain uniqueness under local monotonicity conditions only. Again, we do not require a non-degeneracy condition on the diffusion coefficient. We support our results with an example inspired by Scheutzow~\cite{Scheutzow87}, who showed that, in general, uniqueness of solutions to McKean--Vlasov SDEs does not hold if the coefficients are only locally Lipschitz. Again, we would like to highlight that, since the classical localisation techniques used for SDEs seem not to work in our setting, we cannot simply obtain global uniqueness results from local uniqueness and suitable estimates on the stopping times. \subsection{Motivating Examples} \label{sec:motivating examples} Let us now present some example equations to motivate the choice of the Lyapunov condition. Consider first the McKean--Vlasov stochastic differential equation \begin{equation} \label{eq example1} d x_t = -x_t \bigg[\int_\mathbb{R} y^4 \mathscr L(x_t)(dy)\bigg]\, dt + \frac{1}{\sqrt{2}}x_t\, d w_t\,,\,\,\, x_0 \in L^4(\mathcal F_0, \mathbb R^+)\,.
\end{equation} The diffusion generator for~\eqref{eq example1} is \begin{equation} \label{eq ex diff gen} L(x,\mu) v(x) := \frac{1}{4}x^2 v''(x) - x \bigg[ \int_{\mathbb R} y^4 \, \mu(dy) \bigg] v'(x)\,. \end{equation} It is not clear whether one can find a Lyapunov function such that the classical Lyapunov condition holds, i.e. $L(x,\mu)v(x)\leq m_1 v(x) + m_2$, for $m_1<0$ and $m_2\in \mathbb{R}$. However, with the Lyapunov function given by $v(x) = x^4$ we can establish that \begin{equation} \label{eq lyapunov example} \int_{\mathbb{R}}L(x,\mu) v(x)\mu(dx) \leq -\int_\mathbb{R} v(x) \mu(dx) + 4 \end{equation} holds. See Example~\ref{example integrated lyapunov} for details. We will see that this is sufficient to establish integrability of~\eqref{eq example1} on $I=[0,\infty)$. See Theorem~\ref{thm:weakexistence} and the condition~\eqref{eq b1}. Another way to proceed is to work directly with $v(\mu):=\int_\mathbb{R} x^4 \, \mu(dx)$ as a Lyapunov function on the measure space $\mathcal{P}_4(\mathbb{R})$. This requires the use of derivatives with respect to a measure as introduced by Lions in his lectures at College de France, see~\cite{cardaliaguet2012} or Appendix~\ref{sec measure derivatives and ito}\footnote{Derivatives with respect to a measure are defined in $\mathcal{P}_2(\mathbb{R})$, and therefore one cannot apply the It\^{o} formula to $v(\mu):=\int_\mathbb{R} x^4 \, \mu(dx)$. However, in this paper we will only apply the It\^{o} formula for measures supported on compact subsets of $\mathbb R^d$.}. Then \[ \partial_\mu v(\mu)(y) = 4 y^3, \quad \partial_y\partial_\mu v(\mu)(y) = 12 y^2, \,\, y \in \mathbb{R}\,. \] The generator corresponding to the appropriate It\^o formula, see e.g.
Proposition~\ref{propn ito for meas only}, is \begin{align*} & L^{\mu} v(\mu) \\ & := \int_\mathbb{R} \left( -x \int_\mathbb{R} y^4\, \mu(dy) \partial_\mu v(\mu)(x) +\frac{1}{4}x^2 \partial_y \partial_\mu v(\mu)(x) \right) \mu(d x) = \int_\mathbb{R} \left( - 4 x^4 \int_\mathbb{R} y^4 \mu(dy) + 3x^{4} \right) \mu(d x)\,. \end{align*} We note that this is the same expression as found when $v(x) = x^4$ in~\eqref{eq ex diff gen} and we integrate over $\mu$ (and so~\eqref{eq lyapunov example} again holds). In this case using the It\^o formula for measure derivatives brings no advantages. However, the advantage of working with a Lyapunov function on the measure space appears when the dependence of the Lyapunov function on the measure is not linear. Consider the following McKean--Vlasov stochastic differential equation \begin{equation} \label{eq example2} dx_t = - \left( \int_\mathbb{R} (x_t - \a y)\mathscr{L}{(x_t)}(dy) \right)^3\, dt + \left( \int_\mathbb{R} (x_t - \a y)\mathscr{L}{(x_t)}(dy) \right)^2 \sigma \, dw_t\,, \end{equation} for $t\in I$, $\alpha$ and $\sigma$ constants and with $x_0 \in L^4(\mathcal{F}_0,\mathbb R)$. Assume that $m:= -(6\sigma^2 - 4 + 4\alpha) >0$. Since the drift and diffusion are non-linear functions of the law and state of the process, it is natural to seek a Lyapunov function $v \in \mathcal C^{2,(1,1)}(\mathbb{R} \times \mathcal{P}(\mathbb{R}))$. See Definition~\ref{def c122}. The generator corresponding to the appropriate It\^o formula, see e.g. Proposition~\ref{propn ito formula full}, is then given by~\eqref{eq Lmu} and we will show that for the Lyapunov function \[ v(x,\mu) = \left( \int_{\mathbb{R}} (x - \a y)\mu(dy) \right)^4\,, \] we have \[ \int_{\mathbb{R}}(L^\mu v)(x,\mu)\,\mu(dx) \leq m - m \int_{\mathbb{R}}v(x,\mu)\,\mu(dx)\,. \] See Example~\ref{example 2} for details. Thus the condition~\eqref{eq b1} holds.
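For the first motivating example, the integrated bound~\eqref{eq lyapunov example} can be verified by a short computation; the following sketch is our own, intended only to preview the argument behind Example~\ref{example integrated lyapunov}, and uses nothing beyond the generator~\eqref{eq ex diff gen}.

```latex
Writing $m_4(\mu) := \int_{\mathbb{R}} x^4 \, \mu(dx)$ and taking $v(x) = x^4$ in~\eqref{eq ex diff gen},
\[
\int_{\mathbb{R}} L(x,\mu) v(x)\,\mu(dx)
= \int_{\mathbb{R}} \Big( 3x^4 - 4 x^4\, m_4(\mu) \Big)\,\mu(dx)
= 3\, m_4(\mu) - 4\, m_4(\mu)^2\,.
\]
The claimed bound $3 m_4 - 4 m_4^2 \leq -m_4 + 4$ is equivalent to
$4\big(m_4^2 - m_4 + 1\big) \geq 0$, which holds for every $m_4$ since the
quadratic $m_4^2 - m_4 + 1$ has negative discriminant; this is
precisely~\eqref{eq lyapunov example}.
```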
This is sufficient to establish existence of solutions to~\eqref{eq example2} on $I=[0,\infty)$, as Theorem~\ref{thm:weakexistence} will tell us. Regarding our continuity assumptions for existence of solutions to~\eqref{eq mkvsde}, we note that we only require a type of joint continuity of the coefficients in $(x,\mu) \in \mathbb R^d \times \mathcal P(\mathbb R^d)$ and that this allows us to consider coefficients where the dependence on the measure does not arise via an integral with respect to the said measure. An example is \[ S_\alpha(\mu):=\frac{1}{\alpha}\int_0^\alpha \inf\{x\in \mathbb{R}\,:\, \mu((-\infty, x]) \geq s\}\,ds\,, \] for $\alpha > 0$ fixed. This quantity is known as the ``expected shortfall'' and is a type of risk measure. See Example~\ref{example 3} for details. \section{Existence results} For a domain $D\subseteq \mathbb R^d$, we will use the notation $\mathcal{P}(D)$ for the space of probability measures over $( D,\mathcal{B}(D))$. We will consider this as a topological space with the topology induced by the weak convergence of probability measures. We will write $\mu_n \Rightarrow \mu$ if $(\mu_n)_n$ converges to $\mu$ in the sense of weak convergence of probability measures. For $p\geq 1$ we use $\mathcal{P}_p(D)$ to denote the set of probability measures that are $p$-integrable (i.e. $\int_D |x|^p \mu(dx) < \infty$ for $\mu \in \mathcal P_p(D)$). We will consider this as a metric space with the metric given by the Wasserstein distance with exponent $p$, see~\eqref{eq p-wasserstein}. Denote by $C_b(D)$ and $C_0(D)$ the subspaces of continuous functions that are bounded and compactly supported, respectively. We use $\sigma^*$ to denote the transpose of a matrix $\sigma$ and for a square matrix $a$ we use $\text{tr}(a)$ to denote its trace. We use $\partial_x v$ to denote the (column) vector of first order partial derivatives of $v$ with respect to the components of $x$ (i.e.
the gradient of $v$ with respect to $x$) and $\partial_x^2 v$ to denote the square matrix of all the mixed second order partial derivatives with respect to the components of $x$ (i.e. the Hessian matrix of $v$ with respect to $x$). If $a,b \in \mathbb R^d$ then $ab$ denotes their dot product. Recall that we are using the concept of derivatives with respect to a measure as introduced by Lions in his lectures at Coll\`{e}ge de France, see~\cite{cardaliaguet2012}. For convenience, the construction and main definitions are in Appendix~\ref{sec measure derivatives and ito}. In particular, see Definition~\ref{def c122} to clarify what is meant by the space $\mathcal C^{1,2,(1,1)}(I \times D \times \mathcal{P}(D))$. In short, saying that a function $v$ is in such a space means that all the derivatives appearing in~\eqref{eq Lmu} exist and are appropriately jointly continuous so that we may apply the It\^o formula for a function of a process and a flow of measures, see Proposition~\ref{propn ito formula full}. The use of such an It\^o formula naturally leads to the following form of a diffusion generator. First we note that throughout this paper we assume that for a domain $D\subseteq{\mathbb R^d}$ there is a nested sequence of bounded sub-domains, i.e. bounded, open connected subsets of $\mathbb R^d$, $(D_k)_k$ such that $D_k \subset D_{k+1}$, $\overline{D_k} \subset D$ and $\bigcup_k D_k = D$.
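Returning to the expected-shortfall functional $S_\alpha$ above, its quantile-averaging formula can be made concrete for an empirical measure $\mu = \frac{1}{n}\sum_{i}\delta_{x_i}$. The sketch below is our own illustration (the function name and weighting scheme are ours, not part of the paper's framework): the empirical quantile function $s \mapsto \inf\{x : \mu((-\infty,x]) \geq s\}$ is piecewise constant on the order statistics, so the integral over $(0,\alpha]$ is a weighted average of the smallest order statistics.

```python
import numpy as np

def expected_shortfall(samples, alpha):
    """Empirical S_alpha(mu): (1/alpha) * integral over (0, alpha] of the
    quantile function s -> inf{x : mu((-inf, x]) >= s}."""
    xs = np.sort(np.asarray(samples, dtype=float))
    n = len(xs)
    k = int(np.ceil(n * alpha))        # quantile equals xs[i-1] on ((i-1)/n, i/n]
    weights = np.full(k, 1.0 / n)      # full sub-intervals of length 1/n ...
    weights[-1] = alpha - (k - 1) / n  # ... except the last, truncated at alpha
    return float(np.dot(xs[:k], weights)) / alpha
```

For instance, with atoms $\{1,2,3,4\}$ and $\alpha = 0.5$ this averages the two smallest atoms, giving $1.5$.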
For $(t,x)\in I \times D$, $\mu \in \mathcal P( D_k)$ for some $k\in \mathbb N$ and for $v \in \mathcal C^{1,2,(1,1)}(I \times D \times \mathcal{P}_2(D))$ we define $L^{\mu} = L^\mu(t,x,\mu)$ as \begin{equation} \label{eq Lmu} \begin{split} (L^\mu v)(t,x,\mu) & := \bigg(\partial_t v + \frac{1}{2}\text{tr}\big(\sigma \sigma^*\partial_x^2 v\big) + b \partial_x v\bigg)(t, x,\mu) \\ & + \int_{\mathbb{R}^d}\left( b (t, y, \mu) (\partial_\mu v) (t,x,\mu)(y) + \frac{1}{2}\text{tr}\big((\sigma\sigma^*)(t,y,\mu)(\partial_y \partial_\mu v)(t,x,\mu)(y) \big) \right) \mu(dy). \end{split} \end{equation} We note that in the case $v \in C^{1,2}(I \times D )$, i.e. when $v$ does not depend on the measure, the above generator reduces to \begin{equation*} (L^\mu v)(t,x) = (Lv)(t,x) := \bigg(\partial_t v + \frac{1}{2}\text{tr}\big(\sigma \sigma^*\partial_x^2 v\big) + b \partial_x v\bigg)(t,x) \,. \end{equation*} \subsection{Assumptions and Main Result} We assume that $b:I\times D \times \mathcal{P}(D) \to \mathbb{R}^d$ and $\sigma:I\times D \times \mathcal{P}(D)\to \mathbb{R}^d\times \mathbb{R}^{d'}$ are measurable (later we will add joint continuity and local boundedness assumptions). We require the existence of a Lyapunov function satisfying either of the following conditions. \begin{assumption}[Lyapunov condition] \label{a-nonint} There is $v \in \mathcal C^{1,2,(1,1)}(I \times D \times \mathcal{P}_2(D))$, $v\geq 0$, such that \begin{enumerate}[i)] \item There are locally integrable, non-random, functions $m_1= m_1(t)$ and $m_2=m_2(t)$ on $I$ such that: for all $t \in I$, all $x\in D$ and all $\mu \in \mathcal P(D_k)$, $k\in \mathbb N$, we have, \begin{equation} \label{eq b2} L^\mu(t,x,\mu) v(t,x,\mu) \leq m_1(t) v(t,x,\mu) + m_2(t)\,.
\end{equation} \item There is $V = V(t,x)$ such that for all $\mu \in \mathcal P(D_k)$, $k\in \mathbb N$, we have \begin{equation} \label{eq V ineq} \int_D V(t,x)\,\mu(dx) \leq \int_D v(t,x,\mu)\,\mu(dx)\,\,\,\, \forall t\in I, \, \end{equation} and \begin{equation} \label{eq Vk limit} V_k := \inf_{s \in I, x\in \partial D_k } V(s,x) \,\,\, \text{$\to \infty$ as $k\to \infty$.} \end{equation} \item The initial value $x_0$ is $\mathcal{F}_0$-measurable, $\mathbb P(x_0 \in D) = 1$ and $\mathbb{E} v(0,x_0,\mathscr L(x_0)) < \infty$. \end{enumerate} \end{assumption} \begin{assumption}[Integrated Lyapunov condition] \label{a-int} There is $v \in \mathcal C^{1,2,(1,1)}(I \times D \times \mathcal{P}_2(D))$, $v\geq 0$, such that: \begin{enumerate}[i)] \item There are locally integrable, non-random, functions $m_1= m_1(t)$ and $m_2=m_2(t)$ on $I$ such that for all $t \in I$ and for all $\mu \in \mathcal P(D_k)$, $k\in \mathbb N$, we have, \begin{equation} \label{eq b1} \int_{ D }L^\mu(t,x,\mu) v(t,x,\mu) \mu(dx)\leq m_1(t) \int_{D } v(t,x,\mu) \mu(dx) + m_2(t)\,. \end{equation} \item There is $V = V(t,x)$ satisfying~\eqref{eq V ineq} and \begin{equation} \label{eq Vk D limit} V_k := \inf_{s \in I, x\in D_k^c } V(s,x) \,\,\, \text{$\to \infty$ as $k\to \infty$.} \end{equation} \item The initial value $x_0$ is $\mathcal{F}_0$-measurable, $\mathbb P(x_0 \in D) = 1$ and $\mathbb{E} v(0,x_0,\mathscr L(x_0)) < \infty$. \end{enumerate} \end{assumption} We make the following observations. \begin{remark} \label{rmk assumption} \hfill{ } \begin{enumerate}[i)] \item We have deliberately not specified the signs of the functions $m_1$ and $m_2$. \item If~\eqref{eq b2} holds then for $L^{\mu,k} := \mathbbm{1}_{x\in D_k} L^\mu$ we have that~\eqref{eq b2} also holds with $L^\mu$ replaced by $L^{\mu,k}$. Indeed, since $v$ is non-negative, then $ L^\mu v (t,x,\mu) \mathbbm{1}_{x \in D_k} \leq [m_1(t) v(t,x,\mu) + m_2(t)]\mathbbm{1}_{x \in D_k} $.
On the other hand if only~\eqref{eq b1} holds then, in general, this does not imply that~\eqref{eq b1} holds with $L^\mu$ replaced by $L^{\mu,k}$, unless $ \mu \in \mathcal P(D_k)$. \end{enumerate} \end{remark} Regarding the continuity of coefficients in~\eqref{eq mkvsde} and their local boundedness we require the following. \begin{assumption}[Continuity] \label{ass continuity} Functions $b:I\times D \times \mathcal{P}(D) \to \mathbb{R}^d$ and $\sigma:I\times D \times \mathcal{P}(D)\to \mathbb{R}^d\times \mathbb{R}^{d'}$ are jointly continuous in the last two arguments in the following sense: if $(\mu_n)_n \subset \mathcal P(D)$ are such that for all $n$ \[ \sup_{t \in I} \int_{ D} v(t,x,\mu_n)\,\mu_n(dx) < \infty \] and if $x_n \rightarrow x$ and $\mu_n \Rightarrow \mu$ as $n\to \infty$, then $b (t,x_n,\mu_n) \to b(t,x,\mu)$ and $\sigma (t,x_n,\mu_n) \to \sigma(t,x,\mu)$ as $n\to \infty$. \end{assumption} \begin{assumption}[Local boundedness] \label{ass local boundedness} There exist constants $c_k \geq 0$ such that for any $\mu \in \mathcal P(D)$ \[ \sup_{x\in D_k} |b(t,x,\mu)| \leq c_k \left(1+\int_{D} v(t,y,\mu)\mu(dy)\right)\,, \] \[ \sup_{x\in D_k} |\sigma(t,x,\mu)| \leq c_k \left(1+\int_{D} v(t,y,\mu)\mu(dy)\right)\,. \] \end{assumption} \begin{assumption}[Integrated growth condition] \label{ass integrated growth} There exists a constant $c \geq 0$ such that for all $\mu \in \mathcal P(D_k)$, $k\in \mathbb{N}$, we have, \[ \int_{D_k} |b(t,x,\mu)| + |\sigma(t,x,\mu)|^2 \mu(dx) \leq c \left(1+\int_{D_k} v(t,x,\mu)\mu(dx)\right),\,\,\,\, \forall t\in I. \] \end{assumption} The continuity required by Assumption~\ref{ass continuity} in the measure argument is very weak, but it might be hard to verify. In the case of unbounded domains, the property \eqref{eq V ineq} will often hold for $V(x)=|x|^p$, $p \geq 1$. In that case we have $\mu_n \in \mathcal P_p(D)$ for all the measures $\mu_n$ under consideration for convergence of the coefficients.
But from~\cite[Theorem 6.9]{villani2009}, we know that for measures in $\mathcal P_p(D)$ with uniformly bounded $p$-th moments, weak convergence is equivalent to convergence in the $p'$-th Wasserstein metric, for $p'<p$. Hence, in such a case, it is enough to check that if $x_n \rightarrow x$ and $W_{p'}(\mu_n,\mu) \to 0$ as $n\to \infty$ then $b (x_n,\mu_n) \to b(x,\mu)$ and $\sigma (x_n,\mu_n) \to \sigma(x,\mu)$ as $n\to \infty$. This will be satisfied in particular if \begin{equation} \label{eq wasserstein cont crit} | b(x_n,\mu_n) - b(x,\mu) | + | \sigma(x_n,\mu_n) - \sigma(x,\mu) | \leq \rho(|x-x_n|) + W_{p'}(\mu_n,\mu), \end{equation} for some function $\rho$ such that $\rho(r) \to 0$ as $r \to 0$. We note that this is a common assumption, see e.g.~\cite{funaki1984certain}. At this point it may be worth noting that the $p$-Wasserstein distance on $\mathcal P_p(D)$ is \begin{equation} \label{eq p-wasserstein} W_p(\mu,\nu) := \left( \inf_{\pi \in \Pi(\mu,\nu)} \int_{D\times D} |x-y|^p \, \pi(dx,dy) \right)^{\frac{1}{p}}\,, \end{equation} where $\Pi(\mu,\nu)$ denotes the set of {\em couplings} between $\mu$ and $\nu$, i.e. all probability measures $\pi$ on $\mathscr{B}(D\times D)$ such that $\pi(B\times D) = \mu(B)$ and $\pi(D\times B) = \nu(B)$ for every $B \in \mathscr B(D)$. Note that in the case of McKean--Vlasov SDEs it is often useful to think of the solution as a pair consisting of the process $x$ and its law, i.e. $(x_t,\mathscr{L}(x_t))_{t \in I}$. The coefficients of the McKean--Vlasov SDE depend on the law of the solution and the main focus of this paper is on equations with unbounded coefficients, therefore a condition on integrability of the law is natural.
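As a quick illustration of~\eqref{eq p-wasserstein} (our own sketch, not part of the paper's development): in one dimension, the infimum over couplings of two empirical measures with the same number of equally weighted atoms is attained by the monotone coupling, i.e. by matching order statistics.

```python
import numpy as np

def wasserstein_p_1d(xs, ys, p):
    # W_p between (1/n) sum_i delta_{x_i} and (1/n) sum_i delta_{y_i}:
    # in one dimension the optimal coupling matches sorted atoms.
    xs = np.sort(np.asarray(xs, dtype=float))
    ys = np.sort(np.asarray(ys, dtype=float))
    return float(np.mean(np.abs(xs - ys) ** p) ** (1.0 / p))
```

In particular $W_p(\mu,\mu) = 0$ regardless of the order in which the atoms are listed.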
\begin{definition}[$v$-integrable weak solution] \label{def soln} A $v$-integrable weak solution to~\eqref{eq mkvsde}, on $I$ in $D$ is \[ \big(\Omega, \mathcal F, \mathbb P, (\mathcal F_t)_{t \in I}, (w_t)_{t \in I}, (x_t)_{t \in I}\big), \] where $(\Omega, \mathcal F, \mathbb P)$ is a probability space, $(\mathcal F_t)_{t \in I}$ is a filtration, $(w_t)_{t \in I}$ is a Wiener process that is a martingale w.r.t. the above filtration, $(x_t)_{t \in I}$ is an adapted process satisfying~\eqref{eq mkvsde} such that $x\in C(I; D)$ a.s. and finally, for all $t\in I$ we have $\mathbb E v(t,x_t,\mathscr L(x_t)) < \infty$. \end{definition} Before we state the main theorem of this paper, we state the conditions on $m_1,m_2$ that allow one to establish the integrability and tightness estimate, which in the case $I=[0,\infty)$ needs to be uniform in time. \begin{remark}[On finiteness of $M(t)$] \label{remark on Mt} Define $\gamma(t) := \exp\left(-\int_0^t m_1(s)\,ds\right)$ and \begin{equation} \begin{split} \label{eq: uniform tauk est} M(t):= & \frac{\mathbb E v(0,\tilde x_0,\mathscr L(\tilde x_0))}{\gamma(t)} + \int_{0}^t \frac{\gamma(s)}{\gamma(t)} m_2(s) ds,\\ M^+(t):= & e^{\int_0^t (m_1(s))^+\,ds} \left(\mathbb{E} v(0,\tilde x_0,\mathscr{L}(\tilde x_0)) + \int_0^{t } \gamma(s) m_2^+(s) \, ds\right)\, . \end{split} \end{equation} Note that $M(t)\leq M^+(t)$. \begin{enumerate}[i)] \item If $I=[0,T]$, $m_1$ and $m_2$ are set to $0$ outside $I$, leading to \[ \sup_{t<\infty} \int_{0}^t \frac{\gamma(s)}{\gamma(t)} m_2(s) ds \leq \int_0^{T} e^{\int_s^{T} m_1(r)\,dr} |m_2(s)|ds < \infty\,. \] \item If $I=[0,\infty)$ and we have \begin{equation} \label{eq extra condition on m1m2} m_1(t) \leq 0 \,\,\,\forall t\geq0 \,\,\, \text{and}\,\,\, \int_0^\infty \gamma(s)|m_2(s)|\,ds<\infty\,, \end{equation} then \[ \sup_{t<\infty} \int_0^t e^{\int_s^{t} m_1(r)\,dr} m_2(s)\,ds \leq \int_0^\infty\gamma(s)|m_2(s)| \, ds < \infty. 
\] \end{enumerate} In both of these cases we have $\sup_{t\in I}M(t) < \infty$ and $\sup_{t\in I}M^+(t) <\infty$. \end{remark} \begin{theorem}\label{thm:weakexistence} Let $D\subseteq \mathbb R^d$ and let Assumptions~\ref{ass continuity} and~\ref{ass local boundedness} hold. Then we have the following. \begin{enumerate}[i)] \item If Assumption~\ref{a-nonint} holds and $\sup_{t\in I } M^+(t) < \infty$, then there exists a $v$-integrable weak solution to~\eqref{eq mkvsde} on $I$. \item If Assumptions~\ref{a-int} and~\ref{ass integrated growth} hold and $\sup_{t\in I } M(t) < \infty$, then there exists a $v$-integrable weak solution to~\eqref{eq mkvsde} on $I$. \end{enumerate} Additionally, \[ \sup_{t\in I} \mathbb E v(t, x_t, \mathscr L( x_t)) < \infty\,. \] \end{theorem} We make the following comment. By virtue of Assumption~\ref{ass local boundedness} we have that, under the conditions of Theorem~\ref{thm:weakexistence}, the $v$-integrable weak solution to~\eqref{eq mkvsde} obtained by the theorem satisfies the forward nonlinear Fokker--Planck--Kolmogorov equation~\eqref{eq fwd kolmogorov}, where $\mu_t = \mathscr L(x_t)$. \newline \subsection{Proof of the existence results} We will use the convention that the infimum of an empty set is positive infinity. We extend $b$ and $\sigma$ in a measurable but discontinuous way to functions on $\mathbb{R}^+\times \mathbb R^d \times \mathcal{P}(\mathbb R^d)$ by taking \[ b(t,x,\mu) = \sigma(t,x,\mu) = 0 \,\,\, \text{if $x\in \mathbb R^d \setminus D$ or if $t\notin I$}. \] For $t\notin I$ we set $m_1(t) = m_2(t)=0$. We define \[ b^k(t,x,\mu) := \mathds 1_{x\in D_k} b(t,x,\mu)\,\,\,\text{and}\,\,\, \sigma^k(t,x,\mu) := \mathds 1_{x\in D_k} \sigma(t,x,\mu)\,. \] \begin{lemma} \label{lemma:uniform} Let Assumption~\ref{ass local boundedness} hold.
Let $(\tilde \Omega, \tilde{\mathcal F}, \tilde{\mathbb P})$ be a probability space, $(\tilde{\mathcal F}_t)_{t\in I}$ a filtration, $\tilde w$ a Wiener process and $\tilde{x}^k$ a process that satisfies, for all $t \in I$, \begin{equation} \label{eq mkvsde k} d\tilde x^k_t = b^k(t,\tilde x^k_t,\mathscr L(\tilde x^k_t))\,dt + \sigma^k(t,\tilde x^k_t,\mathscr L(\tilde x^k_t))\,d\tilde w_t\,,\,\,\,\tilde x^k_0 = \tilde x_0\,. \end{equation} For $m\leq k$, let $\tilde{\tau}^k_m := \inf\{t\in I:\tilde{x}^k_t \notin D_m\}$. \begin{enumerate}[i)] \item If either Assumption~\ref{a-nonint} or~\ref{a-int} holds, then for any $t\in I$, \begin{equation*} \sup_k \mathbb E v(t,\tilde x^k_t,\mathscr L(\tilde x^k_t)) \leq M(t) \,. \end{equation*} \item If either Assumption~\ref{a-nonint} or~\ref{a-int} holds, then for any $t\in I$, \[ \mathbb P(\tilde{\tau}_k^k < t) \leq \mathbb P (\tilde{x}_0 \notin D_k) + M(t) V_k^{-1}\,. \] \item If Assumption~\ref{a-nonint} holds then for any $t\in I$, \[ \sup_k \mathbb P(\tilde{\tau}^k_m < t) \leq \mathbb P (\tilde{x}_0 \notin D_m) + M^+(t)V_m^{-1}\,. \] \item If Assumption~\ref{a-int} holds then for any $t\in I$ \[ \sup_{k}\mathbb P (\tilde x^k_t \notin D_m) \leq M(t)V_m^{-1}\,. \] \end{enumerate} \end{lemma} \begin{proof} For each $k$, $\mathscr{L}(\tilde x^k_t) \in \mathcal{P}_2(D)$ and therefore we can apply It\^o's formula from Proposition~\ref{propn ito formula full} to $\tilde{x}^k$, its law and $\gamma v$. Thus \[ \begin{split} & \gamma(t) v(t,\tilde x^k_t,\mathscr{L}(\tilde x^k_t)) = \gamma(0) v(0,\tilde x_0,\mathscr{L}(\tilde x_0)) \\ & \qquad + \int_0^{t} \gamma(s) [L^\mu v - m_1 v](s,\tilde x^k_s,\mathscr{L}(\tilde x^k_s)) \, ds + \int_0^{t} \gamma(s) [(\partial_x v) \sigma](s,\tilde x^k_s,\mathscr{L}(\tilde x^k_s)) \, d\tilde w_s\,.
\end{split} \] Due to the local boundedness of the coefficients and either Lyapunov condition~\eqref{eq b2} or~\eqref{eq b1} we get \begin{equation} \label{eq: bound} \mathbb E \gamma(t) v(t ,\tilde x^k_t,\mathscr{L}(\tilde x^k_t)) \leq \mathbb E \gamma(0) v(0,\tilde x_0,\mathscr{L}(\tilde x_0)) + \int_0^t \gamma(s) m_2(s)ds \,. \end{equation} This proves the first part of the lemma. For the second part we proceed as follows. Since $\tilde{x}^k_t = \tilde{x}^k_{t\wedge \tilde{\tau}_k^k}$ for all $t\in I$, which implies $\mathscr{L}(\tilde x^k_t) = \mathscr{L}(\tilde x^k_{t\wedge \tilde{\tau}^k_k})$ for all $t\in I$, we further observe \[ \begin{split} \mathbb E v\left(t,\tilde x^k_{t},\mathscr{L}(\tilde x^k_{t})\right) = \, \mathbb E v\left(t,\tilde x^k_{t\wedge \tilde{\tau}_k^k},\mathscr{L}(\tilde x^k_{t\wedge\tilde{\tau}^k_k})\right) \geq \mathbb E \left[ V\left(t,\tilde x^k_{t\wedge \tilde{\tau}_k^k}\right) \mathds 1_{0< \tilde{\tau}_k^k < t} \right] = & \, \mathbb E \left[ V\left(t ,\tilde x^k_{\tilde{\tau}_k^k}\right) \mathds 1_{0 < \tilde{\tau}_k^k < t} \right]\\ \geq & V_k\mathbb P (0< \tilde{\tau}_k^k < t)\,. \end{split} \] Hence, \[ \begin{split} & \mathbb P( \tilde{\tau}_k^k < t) = \mathbb P( \tilde{\tau}_k^k < t, \tilde{\tau}_k^k >0) + \mathbb P( \tilde{\tau}_k^k < t, \tilde{\tau}_k^k= 0) \leq \mathbb P(0 < \tilde{\tau}_k^k < t) + \mathbb P (\tilde{x}_0 \notin D_k) \\ & \leq \frac{\mathbb E v\left(t,\tilde x^k_{t\wedge \tilde{\tau}_k^k},\mathscr{L}(\tilde x^k_t)\right)}{V_k} + \mathbb P (\tilde{x}_0 \notin D_k)\,. \end{split} \] This completes the proof of the second statement. To prove the third statement we first note that for $m>k$ we have $\mathbb P(\tilde{\tau}^k_m < t) = \mathbb P(\tilde{x}_0 \notin D_m )$. Thus we may assume that $m\leq k$. We proceed similarly as above, but with the crucial difference that $\tilde{x}^k_t$ is not equal to $\tilde{x}^k_{t\wedge \tilde{\tau}^k_m}$ anymore.
Our aim is to apply the It\^{o} formula to the function $v$, the process $(\tilde x^k_{t\wedge \tilde{\tau}^k_m})_{t\in I}$ and the flow of marginal measures $(\mathscr{L}(\tilde x^k_t))_{t\in I}$. Note that $\mathscr{L}(\tilde x^k_{t\wedge \tilde{\tau}^k_m}) \neq \mathscr{L}(\tilde x^k_t)$. Nevertheless the It\^o formula of Proposition~\ref{propn ito formula full} may be applied. After taking expectations this yields \[ \mathbb{E} \left[ \gamma(t\wedge \tilde{\tau}^k_m) v(t\wedge \tilde{\tau}^k_m,\tilde x^k_{t\wedge \tilde{\tau}^k_m},\mathscr{L}(\tilde x^k_t)) \right] = \mathbb{E} v(0,\tilde x_0,\mathscr{L}(\tilde x_0)) + \mathbb{E} \int_0^{t\wedge \tilde{\tau}^k_m} \gamma(s) \left[ L^{\mu} v - m_1 v\right](s,\tilde x^k_s,\mathscr{L}(\tilde x^k_s)) \, ds. \] We now use~\eqref{eq b2} to see that \[ \begin{split} & \mathbb{E} \left[ \gamma(t\wedge \tilde{\tau}^k_m) v(t\wedge \tilde{\tau}^k_m,\tilde x^k_{t\wedge \tilde{\tau}^k_m},\mathscr{L}(\tilde x^k_t) )\right] \leq \mathbb{E} v(0,\tilde x_0,\mathscr{L}(\tilde x_0)) + \mathbb{E} \int_0^{t\wedge \tilde{\tau}^k_m} \gamma(s) m_2(s) \, ds \\ & \leq \mathbb{E} v(0,\tilde x_0,\mathscr{L}(\tilde x_0)) + \int_0^{t } \gamma(s) m_2^+(s) \, ds =: \bar M(t)\,. \end{split} \] Then \[ \inf_{s\leq t}\gamma(s) \mathbb{E} v(t\wedge \tilde{\tau}^k_m,\tilde x^k_{t\wedge \tilde{\tau}^k_m},\mathscr{L}(\tilde x^k_t) ) \leq \mathbb{E} \gamma(t\wedge \tilde{\tau}^k_m) v(t\wedge \tilde{\tau}^k_m,\tilde x^k_{t\wedge \tilde{\tau}^k_m},\mathscr{L}(\tilde x^k_t) ) \leq \bar M(t) \] and so, following the same argument as in the proof of the second part of this lemma, \[ \begin{split} & \mathbb P( \tilde{\tau}^k_m < t) \leq \frac{1}{\inf_{s\leq t} \gamma(s)} \frac{\bar M(t)}{V_m} + \mathbb P (\tilde{x}_0 \notin D_m)\,. \end{split} \] We conclude by observing that \[ \inf_{s\leq t} \gamma(s) \geq e^{-\int_0^t (m_1(s))^+\,ds}. \] To prove the fourth statement, first note that for $m>k$, $\mathbb{P} (\tilde x_t^k \notin D_m)=0$ and hence we take $m\leq k$.
Conditions \eqref{eq V ineq} and \eqref{eq Vk D limit} imply that \begin{equation*} \begin{split} \mathbb E v\left(t,\tilde x^k_{t},\mathscr{L}(\tilde x^k_t)\right) \geq & \int_{D} V(t,x) \mathscr{L}(\tilde x^k_t)(dx) \geq \int_{D\cap D_m^c} V(t,x) \mathscr{L}(\tilde x^k_t)(dx) \geq V_m \mathbb{P}( \tilde x^k_t \notin D_m)\,. \end{split} \end{equation*} Combined with the first statement, this proves the fourth claim. \end{proof} \begin{remark} \label{remark on x_0} Since we are assuming that $\mathbb P(x_0 \in D) = 1$ we have \[ \lim_{k \to \infty} \mathbb P(x_0 \notin D_k) = 1 - \lim_{k\to \infty} \mathbb P(x_0 \in D_k) = 1 - \mathbb P \left(\bigcup_k \{x_0 \in D_k\}\right) = 0\,. \] \end{remark} \begin{corollary} \label{corollary bound for limit process} Let Assumption~\ref{ass local boundedness} hold. Let $(\tilde \Omega, \tilde{\mathcal F}, \tilde{\mathbb P})$ be a probability space, $(\tilde{\mathcal F}_t)_{t\in I}$ a filtration, $\tilde w$ a Wiener process and $\tilde{x}^k$ a process such that~\eqref{eq mkvsde k} holds for all $t \in I$. Assume that $\tilde x^k \to \tilde x$ in $C(I;\bar D)$. If either Assumption~\ref{a-nonint} or~\ref{a-int} holds, then \[ \sup_{t\in I} \mathbb E v(t,\tilde x_t, \mathscr L(\tilde x_t)) \leq \sup_{t\in I} M(t) \,, \] where $M$ is given in~\eqref{eq: uniform tauk est}. \end{corollary} \begin{proof} By Fatou's lemma, the continuity of $v$ and the first statement of Lemma~\ref{lemma:uniform}, we get \[ \mathbb E v(t,\tilde x_t, \mathscr L(\tilde x_t)) \leq \liminf_{k\to \infty} \mathbb E v(t,\tilde x^k_t, \mathscr L(\tilde x^k_t)) \leq \sup_{t\in I} M(t)\,. \] The result follows if we take the supremum over $t$ and consider Remark~\ref{remark on Mt}. \end{proof} Our aim is to use Skorokhod's arguments to prove the existence of a weak (also known as martingale) solution to the equation~\eqref{eq mkvsde}. Before we proceed to the proof of the main Theorem~\ref{thm:weakexistence} we need to establish tightness of the law of the process \eqref{eq mkvsde k}.
\begin{lemma}[Tightness] \label{lemma tightness} Let $\tilde{x}^k$ be the process defined in \eqref{eq mkvsde k}. \begin{enumerate}[i)] \item If Assumptions~\ref{a-nonint} and~\ref{ass local boundedness} hold and $\sup_{t\in I } M^+(t) < \infty$, then the law of $(\tilde x^k)_k$ is tight on $C(I; \bar D)$. \item If Assumptions~\ref{a-int},~\ref{ass local boundedness} and~\ref{ass integrated growth} hold and $\sup_{t\in I } M (t) < \infty$, then the law of $(\tilde x^k)_k$ is tight on $C(I; \bar D)$. Additionally, for any $\varepsilon >0$ there is $m_\varepsilon$ such that for $m\geq m_{\varepsilon}$ \[ \sup_k \mathbb{P}(\tilde{\tau}^k_m \in I)\leq \varepsilon. \] \end{enumerate} \end{lemma} \begin{proof} $i)$ Under Assumption~\ref{a-nonint}, tightness of the law of $(\tilde x^k)_k$ on $C(I; \bar D)$ follows from the third statement in Lemma~\ref{lemma:uniform}, together with Remarks~\ref{remark on Mt} and~\ref{remark on x_0}. Indeed, given $\varepsilon > 0$ we can find $m_0$ such that for any $m>m_0$ \[ \mathbb P(\tilde{\tau}^k_m < \infty) \leq \mathbb P(\tilde x_0 \notin D_m) + \sup_{t\in I}M^+(t) V_m^{-1} \leq \varepsilon/2 + \varepsilon/2\,, \] due to, in particular, our assumption that $V_m \to \infty$ as $m \to \infty$. $ii)$ First we observe that for every $\ell$ and $(t_1,\ldots,t_{\ell})$ in $I$, the joint distribution of $(\tilde{x}_{t_1}^k, \ldots, \tilde{x}_{t_\ell}^k )$ is tight. Indeed, statement iv) in Lemma \ref{lemma:uniform} guarantees tightness of the law of $\tilde{x}_{t}^k$ for any $t\in I$. Given $\varepsilon > 0$, for any $\ell \in \mathbb{N}$ we can find $m_0$ such that for any $m>m_0$ \[ \mathbb P\bigg(\bigcup_{i=1}^{\ell}\big\{\tilde{x}_{t_i}^k\notin D_m\big\} \bigg) \leq \ell \sup_{t\in I}M(t) V_m^{-1} \leq \varepsilon\,, \] due to the assumption that $V_m \to \infty$ as $m \to \infty$. We will use Skorokhod's Theorem (see~\cite[Ch. 1 Sec. 6]{skorokhod1965}).
This will allow us to conclude tightness of the laws of $(\tilde x^k)_k$ on $C(I; \bar D)$ as long as we can show that for any $\varepsilon>0$ \[ \lim_{h\rightarrow 0} \sup_k \sup_{|s_1 - s_2| \leq h } \mathbb{P}(| \tilde{x}_{s_1}^k - \tilde{x}_{s_2}^k | > \varepsilon ) =0\,. \] From~\eqref{eq mkvsde k}, using Assumption~\ref{ass integrated growth}, we get, for $0< s_1 - s_2 < 1$, \begin{equation*} \begin{split} \mathbb{E} |\tilde x^k_{s_1} - \tilde x^k_{s_2} | \leq & \int_{s_2}^{s_1} \mathbb{E} | b^k(r,\tilde x^k_r,\mathscr L(\tilde x^k_r))|\,dr + \left( \mathbb{E} \int_{s_2}^{s_1} | \sigma^k(r,\tilde x^k_r,\mathscr L(\tilde x^k_r))|^2 \,dr \right)^{\frac12} \\ \leq & c \int_{s_2}^{s_1} (1 + \sup_k \mathbb{E} v(r,\tilde x^k_r,\mathscr L(\tilde x^k_r)) ) \,dr + \left( c \int_{s_2}^{s_1} (1 + \sup_k \mathbb{E} v(r,\tilde x^k_r,\mathscr L(\tilde x^k_r)) ) \,dr \right)^{\tfrac12} \\ \leq & c \left(1+\sup_{t\in I}M(t)\right) (s_1-s_2)^{\tfrac12} \,. \end{split} \end{equation*} Markov's inequality leads to \[ \sup_k \sup_{|s_1 - s_2|\leq h} \mathbb P\left(|\tilde x^k_{s_1} - \tilde x^k_{s_2}| > \varepsilon\right) \leq c \varepsilon^{-1} h^{\frac12}\,, \] which concludes the proof of tightness. We will now prove the second statement in $ii)$. Note that $C(I,D)$ is an open subset of $C(I,\bar D)$. Note also that $C(I,D_k) \subset C(I,D_{k+1})$ and $\bigcup_k C(I,D_k) = C(I,D)$. We know that for any $\varepsilon>0$ there is a compact set $\mathcal{K}_\varepsilon \subset C(I,D)$ such that \[ \sup_k \mathbb{P}(\tilde x^k \notin \mathcal{K}_{\varepsilon}) \leq \varepsilon. \] Since $\mathcal{K}_\varepsilon \subset C(I;D) = \bigcup_k C(I;D_k)$ is compact and $C(I;D_k)$ are nested and open there must be some $k^\ast$ such that $\mathcal{K}_\varepsilon \subset C(I;D_{k^\ast})$.
But this means that \[ \P(\tilde x^k \notin C(I;D_{k^\ast})) \leq \P(\tilde x^k \notin \mathcal{K}_\varepsilon) \] and so $ \mathbb{P}(\tau^k_m \in I )= \mathbb{P}(\tilde x^k \notin C(I;D_m)) \leq \mathbb{P}( \tilde x^k \notin C(I;D_{k^{\ast}})) \leq \P(\tilde x^k \notin \mathcal{K}_\varepsilon)\leq \varepsilon $ for all $m\geq k^{\ast}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:weakexistence}] Recall that we have extended $b$ and $\sigma$ so that they are now defined on $[0, \infty) \times \mathbb R^d \times \mathcal{P}(\mathbb R^d)$. Hence we will from now on work with $I=[0,\infty)$. Let us define $t^n_i := \tfrac{i}{n}$, $i=0,1,\ldots$ and $\kappa_n(t) = t^n_i$ for $t\in [t^n_i, t^n_{i+1})$. Fix $k$. We introduce Euler approximations $x^{k,n}$, $n\in \mathbb{N}$, \[ x^{k,n}_t = x_0 + \int_0^t b^k\left(s,x^{k,n}_{\kappa_n(s)},\mathscr L(x^{k,n}_{\kappa_n(s)})\right)\,ds + \int_0^t \sigma^k\left(s,x^{k,n}_{\kappa_n(s)},\mathscr L(x^{k,n}_{\kappa_n(s)})\right)\,dw_s\,. \] Let us outline the proof: as a first step we fix $k$, show tightness with respect to $n$ and use Skorokhod's theorem to take $n\to \infty$. The second step is then to use Lemma~\ref{lemma:uniform} and Remark~\ref{remark on x_0} to show tightness with respect to $k$. Finally we can use Skorokhod's theorem again to show that (for a subsequence) the limit as $k\to \infty$ satisfies~\eqref{eq mkvsde} (on a new probability space). {\em First Step.} Using standard arguments, we can verify that, for a fixed $k$, the sequence $(x^{k,n})_n$ is tight (in the sense that the laws induced on $C([0,\infty), \bar D)$ are tight). By Prohorov's theorem (see e.g.~\cite[Ch. 1, Sec. 5]{billingsley}), there is a subsequence (which we do not distinguish in notation) such that $\mathscr{L}(x^{k,n}) \Rightarrow \mathscr{L}(x^k)$ as $n \to \infty$ (convergence in law). Hence we may apply Skorokhod's Representation Theorem (see e.g. \cite[Ch. 1, Sec.
6]{billingsley}) and obtain a new probability space $(\tilde{\Omega}^k, \tilde{\mathcal{F}}^k,\tilde{\mathbb{P}}^k)$ on which there are random variables $(\tilde x_0^n, \tilde x^{k,n}, \tilde w^n)$ and $(\tilde x_0, \tilde x^k, \tilde w)$ such that \[ \mathscr{L}(\tilde{x}_0^n, \tilde{x}^{k,n}, \tilde{w}^n) = \mathscr{L}(x_0,x^{k,n},w) \quad \forall n \in \mathbb{N}\quad \text{and}\quad \mathscr{L}(\tilde{x}_0, \tilde{x}^{k}, \tilde{w}) = \mathscr{L}(x_0,x^{k},w). \] After taking another subsequence to obtain almost sure convergence from convergence in probability, \[ (\tilde{x}_0^n, \tilde{x}^{k,n}, \tilde{w}^n) \to (\tilde x_0,\tilde{x}^k, \tilde w) \,\,\, \text{as $n\to \infty$ in $C([0,\infty),D\times \bar D \times \mathbb R^{d'})$ a.s.} \] We let \[ \tilde{\mathcal{F}}^k_t := \sigma\{\tilde{x}_0\}\vee \sigma\{\tilde{x}^k_s, \tilde{w}_s : s\leq t\} \] and define $\tilde{\mathcal{F}}^{k,n}_t$ analogously. Then $\tilde{w}^n$ and $\tilde{w}$ are respectively $(\tilde{\mathcal{F}}^{k,n}_t)_{t\geq 0}$- and $(\tilde{\mathcal{F}}^k_t)_{t\geq 0}$-Wiener processes. Define \[ \tilde{\tau}^{k,n}_k := \inf\{ t \geq 0: \tilde{x}^{k,n}_t \notin D_k \} \,\,\,\text{and}\,\,\, \tilde{\tau}_k^k := \inf\{ t \geq 0: \tilde{x}^k_t \notin D_k \}\,. \] These are respectively $\tilde{\mathcal F}^{k,n}$ and $\tilde{\mathcal F}^k$ stopping times. Moreover, due to the a.s. convergence of the trajectories $\tilde{x}^{k,n}$ to $\tilde{x}^k$ we can see that \[ \liminf_{n\to \infty} \tilde{\tau}^{k,n}_k \geq \tilde{\tau}^k_k\,. \] From the fact that the laws of the sequences are identical we see that we still have the Euler approximation on the new probability space: for $t\geq 0$ \[ d\tilde x^{k,n}_t = b^k\left(t,\tilde x^{k,n}_{\kappa_n(t)},\mathscr L(\tilde x^{k,n}_{\kappa_n(t)})\right)\,dt + \sigma^k\left(t,\tilde x^{k,n}_{\kappa_n(t)},\mathscr L(\tilde x^{k,n}_{\kappa_n(t)})\right)\,d\tilde w^n_t\,.
\] Moreover for all $t\leq \tilde{\tau}^{k,n}_k$ the process $\tilde x^{k,n}$ satisfies the same equation as above but without the cutting applied to the coefficients: \[ d\tilde x^{k,n}_t = b\left(t,\tilde x^{k,n}_{\kappa_n(t)},\mathscr L(\tilde x^{k,n}_{\kappa_n(t)})\right)\,dt + \sigma\left(t,\tilde x^{k,n}_{\kappa_n(t)},\mathscr L(\tilde x^{k,n}_{\kappa_n(t)})\right)\,d\tilde w^n_t\,. \] Using Skorokhod's lemma, see~\cite[Ch. 2, Sec. 3]{skorokhod1965}, together with the continuity conditions in Assumption~\ref{ass continuity}, we can take $n\to \infty$ and conclude that for all $t\leq \tilde\tau_k^k$ we have \begin{equation} \label{eq cut mkvsde} d\tilde x^k_t = b(t,\tilde x^k_t,\mathscr L(\tilde x^k_t))\,dt + \sigma(t,\tilde x^k_t,\mathscr L(\tilde x^k_t))\,d\tilde w_t\,. \end{equation} At this point we remark that the process $\tilde x^k$ is well defined and continuous on $[0,\infty)$ but we only know that it satisfies~\eqref{eq cut mkvsde} up to $\tilde\tau^k_k$. {\em Second Step.} Tightness of the laws of $(\tilde x^k)_k$ in $C(I; \bar D)$ follows from Lemma \ref{lemma tightness} and Remark~\ref{remark on Mt}. From Prohorov's theorem we thus get that for a subsequence $\mathscr L(\tilde x^k) \Rightarrow \mathscr L(\tilde x)$ as $k\to \infty$ (convergence in law).
From Skorokhod's Representation Theorem we then obtain a new probability space $(\bar \Omega, \bar{\mathcal{F}},\bar{\mathbb{P}})$ carrying new random variables $(\bar x_0^k, \bar x^k, \bar w^k)$ and $(\bar x_0, \bar x, \bar w)$ such that \[ \mathscr{L}(\bar x_0, \bar x, \bar w ) = \mathscr{L}(\tilde{x}_0, \tilde{x}, \tilde{w})\,, \] \[ \mathscr{L}(\bar x_0^k, \bar x^k, \bar w^k) = \mathscr{L}(\tilde{x}_0^k, \tilde{x}^k, \tilde{w}^k ) \quad \forall k \in \mathbb{N}, \] and (after taking a further subsequence to go from convergence in probability to almost sure convergence) \[ (\bar{x}_0^k, \bar{x}^k, \bar{w}^k) \to (\bar x_0,\bar{x}, \bar w) \,\,\, \text{as $k\to \infty$ in $C( I ; D \times \bar D \times \mathbb{R}^{d'})$ a.s.}\,. \] Let $\bar \tau_k^k := \inf\{ t: \bar x^k_t \notin D_k \}$, $\bar \tau^k_m := \inf\{ t: \bar x^k_t \notin D_m \}$ and $\bar \tau^{\infty}_m := \inf\{ t: \bar x_t \notin D_m \}$. Since $\sup_{t < \infty} |\bar x^k_t - \bar x_t| \to 0$ we get $\bar \tau^k_m \to \bar \tau^{\infty}_m$ as $k\to \infty$. Then from Fatou's Lemma, Remark~\ref{remark on x_0} and either part iii) of Lemma \ref{lemma:uniform} or part ii) of Lemma \ref{lemma tightness} we have that \begin{equation}\label{eq tightness limit} \mathbb P(\bar \tau^{\infty}_m \in I) \leq \liminf_{k\to \infty} \mathbb P(\bar \tau^k_m \in I) \leq \sup_k \mathbb P(\bar \tau^k_m \in I) \to 0 \,\,\, \text{as $m \to \infty$.} \end{equation} Hence $\bar \tau^{\infty}_m$ converges in distribution, as $m\to \infty$, to a random variable $\bar \tau$ with $\mathbb P(\bar \tau \leq T) = 0$ for every $T>0$, that is $\mathbb P(\bar \tau = \infty) = 1$. In general convergence in distribution does not imply convergence in probability. But in the special case when the limiting random variable takes a single value a.s. we do obtain convergence in probability (see e.g.~\cite[Ch. 11, Sec. 1]{dudleybook}).
Hence $\bar \tau^{\infty}_m \to \infty$ in probability as $m\to \infty$. From this we can conclude that there is a subsequence that converges almost surely. Since~\eqref{eq cut mkvsde} holds for $\tilde x^k$ we have the corresponding equation for $\bar x^k$, i.e. for $t\leq \bar \tau^k_k$, \begin{equation} \label{eq cut mkvsde bar} d\bar x^k_t = b(t,\bar x^k_t,\mathscr L(\bar x^k_t))\,dt + \sigma(t,\bar x^k_t,\mathscr L(\bar x^k_t))\,d\bar w^k_t\,. \end{equation} Fix $m < k'$. We will consider $k > k'$. Then~\eqref{eq cut mkvsde bar} holds for all $t\leq \inf_{k\geq k'} \bar \tau^k_m$. We can now consider $\bar x^k_{t\wedge \bar\tau^k_m}$ (these all stay inside $D_m$ for all $k > k' > m$) and use the dominated convergence theorem for the bounded variation integral and Skorokhod's lemma on convergence of stochastic integrals, see~\cite[Ch. 2, Sec. 3]{skorokhod1965}, together with our assumptions on continuity of $b$ and $\sigma$, to let $k\to \infty$. We thus obtain, for $t\leq \inf_{k\geq k'} \bar \tau^k_m$, \begin{equation} \label{eq mkvsde bar} d\bar x_t = b(t,\bar x_t,\mathscr L(\bar x_t))\,dt + \sigma(t,\bar x_t,\mathscr L(\bar x_t))\,d\bar w_t\,. \end{equation} Now, for each fixed $m$, \[ \lim_{k'\to \infty} \inf_{k\geq k'} \bar \tau^k_m = \lim_{k\to\infty} \bar \tau^k_m = \bar \tau^{\infty}_m. \] Finally we take $m \to \infty$ and since $\bar \tau^{\infty}_m \to \infty$ we can conclude that~\eqref{eq mkvsde bar} holds for all $t\in I$. The last statement of the theorem follows from Corollary~\ref{corollary bound for limit process}. \end{proof} \subsection{Examples} \begin{example}[Integrated Lyapunov condition] \label{example integrated lyapunov} Consider the McKean--Vlasov stochastic differential equation~\eqref{eq example1}, i.e. \[ d x_t = -x_t \bigg[\int_\mathbb{R} y^4 \mathscr L(x_t)(dy)\bigg]\, dt + \frac{1}{\sqrt{2}} x_t\, d w_t\,,\,\,\, x_0 = \xi >0\,. \] Then for $v(x) = x^4$ we have, \[ L(x,\mu) v(x) = 3 x^4 - 4 x^4 \int_{\mathbb R} y^4 \, \mu(dy)\,.
\] We see that the stronger Lyapunov condition~\eqref{eq b2} will not hold with $m_1 < 0$ (at least for the chosen $v$, which seems to be a natural choice). However, integrating leads to \[ \int_{\mathbb{R}}L(x,\mu) v(x)\mu(dx) = 3 \int_\mathbb{R} x^4 \mu(dx) - 4 \bigg(\int_\mathbb{R} x^4 \mu(dx)\bigg)^2\,. \] Using this we will show that the integrated Lyapunov condition~\eqref{eq b1} holds, i.e. that \[ \int_{\mathbb{R}}L(x,\mu) v(x)\mu(dx) \leq -\int_\mathbb{R} v(x) \mu(dx) + 4 \] is satisfied. To see this it is enough to apply the inequality $-a^2 \leq - a +1$ with $a = \int_\mathbb{R} x^4 \mu(dx)$. Moreover, Assumption \ref{ass integrated growth} is satisfied. Condition~\eqref{eq lyapunov example} allows us to obtain uniform in time integrability properties for $(x_t)$ needed to study e.g. ergodic properties. \end{example} \begin{example}[Non-linear dependence on the measure and integrated Lyapunov condition] \label{example 2} Consider the McKean--Vlasov stochastic differential equation~\eqref{eq example2}, i.e. \[ dx_t = - \left( \int_\mathbb{R} (x_t - \a y)\mathscr{L}{(x_t)}(dy) \right)^3dt + \left( \int_\mathbb{R} (x_t - \a y)\mathscr{L}{(x_t)}(dy) \right)^2 \sigma \, dw_t\,, \] for $t\in I$ and with $x_0 \in L^4(\mathcal{F}_0,\mathbb R)$. Assume that $m:= -(6\sigma^2 - 4 + 4\alpha) >0$. The diffusion generator given by~\eqref{eq Lmu} is \[ \begin{split} & (L^\mu v)(x,\mu) = \bigg( \frac{\sigma^2}{2}\left( \int_{\mathbb{R}} (x - \a y)\mu(dy) \right)^4 \partial_x^2 v - \left( \int_{\mathbb{R}} (x - \a y)\mu(dy) \right)^3 \partial_x v\bigg)(x,\mu) \\ & + \int_{\mathbb{R}}\left( \frac{\sigma^2}{2} \left( \int_{\mathbb{R}} (z - \a y)\mu(dy) \right)^4 (\partial_z \partial_\mu v)(x,\mu)(z) - \left( \int_{\mathbb{R}} (z - \a y)\mu(dy) \right)^3 (\partial_\mu v)(x,\mu)(z) \right) \mu(dz)\,. \end{split} \] We will show that for the Lyapunov function \[ v(x,\mu) = \left( \int_{\mathbb{R}} (x - \a y)\mu(dy) \right)^4\,, \] we have \[ \int_{\mathbb{R}}(L^\mu v)(x,\mu)\,\mu(dx) \leq m - m \int_{\mathbb{R}}v(x,\mu)\,\mu(dx)\,.
\] Indeed, \[ \partial_x v(x,\mu) = 4 \left( \int_{\mathbb{R}} (x - \a y)\mu(dy) \right)^3\,, \,\,\,\, \partial^2_x v(x,\mu) = 12\left( \int_{\mathbb{R}} (x - \a y)\mu(dy) \right)^2\, \,, \] \[ \partial_\mu v(x,\mu)(z) = -4\a \left( \int_{\mathbb{R}} (x - \alpha y)\mu(dy) \right)^3 \,,\,\,\,\, \partial_z \partial_\mu v(x,\mu)(z) = 0 \,. \] Hence \[ (L^{\mu}v)(x,\mu) = (6\sigma^2-4) \left(\int_\mathbb{R} (x-\alpha y) \mu(dy)\right)^6 + 4\alpha \int_\mathbb{R} \left[ \left(\int_\mathbb{R}(z-\alpha y)\mu(dy)\right)^3\left(\int_\mathbb{R} (x-\alpha y)\mu(dy)\right)^3 \right]\,\mu(dz)\,. \] Since we want an estimate on the integral of the diffusion generator we observe that \[ \begin{split} I & := \int_\mathbb{R} \int_\mathbb{R} \left[ \left(\int_\mathbb{R}(z-\alpha y)\,\mu(dy)\right)^3 \left(\int_\mathbb{R} (x-\alpha y)\mu(dy)\right)^3 \right]\,\mu(dz)\,\mu(dx)\\ & = \int_\mathbb{R}\left[ \int_\mathbb{R} \left(\int_\mathbb{R}(z-\alpha y)\,\mu(dy)\right)^3 \,\mu(dz) \left(\int_\mathbb{R} (x-\alpha y)\mu(dy)\right)^3\right]\,\mu(dx)\\ & = \int_\mathbb{R} \left(\int_\mathbb{R}(z-\alpha y)\,\mu(dy)\right)^3 \,\mu(dz)\int_\mathbb{R} \left( \int_\mathbb{R} (x-\alpha y)\mu(dy)\right)^3\,\mu(dx)\,\\ & \leq \left(\int_\mathbb{R} \left|\int_\mathbb{R}(z-\alpha y)\,\mu(dy)\right|^3 \,\mu(dz)\right)^2\,. \end{split} \] By the Cauchy--Schwarz inequality we obtain \[ I \leq \int_\mathbb{R} \left(\int_\mathbb{R}(x-\alpha y)\,\mu(dy)\right)^6 \,\mu(dx) \,. \] Hence, recalling $m:= -(6\sigma^2 - 4 + 4\alpha) > 0$ and using the inequality $-x^6 \leq 1-x^4$, we obtain that \[ \int_\mathbb{R} (L^\mu v)(x,\mu)\,\mu(dx) \leq \int_\mathbb{R} (6\sigma^2-4+4\alpha) \left(\int_\mathbb{R} (x-\alpha y)\, \mu(dy)\right)^6 \mu(dx) \leq m - m \int_{\mathbb{R}}v(x,\mu)\,\mu(dx)\,. \] Moreover, Assumption \ref{ass integrated growth} is readily satisfied.
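The integrated bound just derived can also be checked numerically on an empirical measure. The following sketch uses illustrative parameter values only ($\sigma=0.1$, $\alpha=0.5$, for which $m>0$) and a Gaussian sample; it simply evaluates both sides of the inequality above and is not part of the argument.

```python
import numpy as np

# Check  ∫ (L^μ v)(x,μ) μ(dx) <= m - m ∫ v(x,μ) μ(dx)  on an empirical measure,
# for v(x,μ) = (∫ (x - α y) μ(dy))^4.  Illustrative parameters with m > 0.
rng = np.random.default_rng(0)
sigma, alpha = 0.1, 0.5
m = -(6 * sigma**2 - 4 + 4 * alpha)

x = rng.normal(size=10_000)      # atoms of the empirical measure μ
s = x - alpha * x.mean()         # s_i = ∫ (x_i - α y) μ(dy)

# From the computation above: ∫ L^μ v dμ = (6σ² - 4) ∫ s⁶ dμ + 4α (∫ s³ dμ)².
lhs = (6 * sigma**2 - 4) * np.mean(s**6) + 4 * alpha * np.mean(s**3) ** 2
rhs = m - m * np.mean(s**4)
assert m > 0 and lhs <= rhs
```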
\end{example} \begin{example}[Dependence on measure without an integral] \label{example 3} Let $\mu$ be a law on $(\mathbb R, \mathscr B(\mathbb R))$ and let $F^{-1}_\mu:[0,1]\rightarrow \mathbb{R}$ be the generalized inverse cumulative distribution function for this law. Recall that the $\alpha$-quantile is given by \[ F_\mu^{-1}(\alpha):=\inf\{x\in \mathbb{R}\,:\, \mu((-\infty, x]) \geq \alpha\}\,. \] Define the Expected Shortfall of $\mu$ at level $\alpha$, $ES_\mu(\alpha)$, as \[ ES_\mu(\alpha):=\frac{1}{\alpha}\int_0^\alpha F_\mu^{-1}(s)\,ds\,. \] It is easy to see that for fixed $\alpha$ the Expected Shortfall is a Lipschitz continuous function of the measure w.r.t.\ the $p$-th Wasserstein distance for $p\geq 1$. Indeed, fix $\mu$, $\nu \in \mathcal P_p(\mathbb R)$ and observe that \[ \begin{split} \left|ES_\mu(\alpha)-ES_\nu(\alpha)\right|\leq\frac{1}{\alpha}\int_0^\alpha |F_\mu^{-1}(s)-F^{-1}_\nu(s)|\, ds \leq \frac{1}{\alpha}\int_0^1 |F_\mu^{-1}(s)-F^{-1}_\nu(s)|\,ds=\frac{1}{\alpha}W_1(\mu,\nu)\leq \frac{1}{\alpha}W_p(\mu,\nu). \end{split} \] We consider the following one-dimensional example, based loosely on a transformed CIR process: \[ d x_t = \frac{\kappa}{2}\big[((ES_{\mathscr{L}(x_t)}(\alpha)\vee \theta)-\frac{\sigma^2}{4\kappa}) x_t^{-1}-x_t\big]\, dt +\frac{1}{2}\sigma \,dw_t. \] Here $x_0$ satisfies $\P[x_0>0]=1$ and $\kappa \theta \geq \sigma^2$. Note that by defining $D:=(0,\infty)$ and $D_k:=[\frac{1}{k},k]$, we have boundedness of the coefficients on $D_k$ and from the above observations and assumptions one can easily verify that the conditions of Theorem \ref{thm:weakexistence} are satisfied. In particular consider $v(x)=x^2+x^{-2}$.
Then, \begin{equation}\notag \begin{alignedat}{1} L(x,\mu)v(x) &= \frac{\kappa}{2}\big[((ES_{\mu}(\alpha)\vee \theta)-\frac{\sigma^2}{4\kappa}) x^{-1}-x\big](2(x-x^{-3}))+\frac{1}{8}\sigma^2(2+6x^{-4}) \\ &={\kappa}\big[((ES_{\mu}(\alpha)\vee \theta)-\frac{\sigma^2}{4\kappa})\big] -\kappa x^2 - \big[\kappa (ES_{\mu}(\alpha)\vee \theta)- {\sigma^2} \big]x^{-4}+\kappa x^{-2}+\frac{\sigma^2}{4}\\ &\leq {|\kappa|} |ES_{\mu}(\alpha)|+ \kappa \theta +\kappa x^{-2}\\ &\leq {|\kappa|} \Big|\int_\mathbb{R} x\,\mu(dx)\Big|+ \kappa \theta +\kappa x^{-2} \\ &\leq \frac{1}{2}{\kappa^2} + \frac{1}{2}\int_\mathbb{R} x^2\,\mu(dx) + \kappa \theta +\kappa x^{-2}\,,\\ \end{alignedat} \end{equation} where in the second inequality we used that $\mu$ is supported on $D=(0,\infty)$, so that $0\leq ES_\mu(\alpha)\leq \int_\mathbb{R}x\,\mu(dx)$, and in the third Young's and Jensen's inequalities. Integrating with respect to $\mu$ we see that condition \eqref{eq b1} holds. Therefore, due to Theorem~\ref{thm:weakexistence}, we have existence of a weak solution to the above McKean--Vlasov equation. \end{example} \section{Uniqueness}\label{sec uniq} In this section we prove continuous dependence on initial conditions and uniqueness under two types of Lyapunov conditions. For the novel {\em integrated} global Lyapunov condition we provide an example that has been inspired by the work of~\cite{Scheutzow87} on non-uniqueness of solutions to McKean--Vlasov SDEs. \subsection{Assumptions and Results}\label{subsec uniq results} Recall that by $\pi \in \Pi(\mu, \nu)$ we denote a coupling between measures $\mu$ and $\nu$. In this section we work with a subclass of Lyapunov functions $\bar v\in C^{1,2}(I \times \mathbb R^d)$ with the properties: $\bar v\geq 0$, $\text{Ker}\,\, \bar v = \{0\}$ and $\bar v(x) = \bar v(-x)$ for $x\in \mathbb R^d$.
For this class of Lyapunov functions we define the semi-Wasserstein distance on $\mathcal{P}(D)$ as \begin{equation} \label{eq v-wasserstein} W_{ \bar v}(\mu,\nu) := \inf_{\pi \in \Pi(\mu,\nu)} \int_{D\times D} \bar v(x-y) \, \pi(dx,dy) \,. \end{equation} Indeed $W_{\bar v}$ is only a semi-metric: the triangle inequality, in general, does not hold. Note that $\bar v$ does not depend on a measure. For $(t,x,y)\in I \times D \times D$, $(\mu,\nu) \in \mathcal P(D) \times \mathcal P(D)$, we define the generator as follows \begin{align*} L(t,x,y,\mu,\nu)\bar v(t,x-y) := & \partial_t \bar v (t,x-y) + \frac{1}{2}\text{tr}\big((\sigma(t,x,\mu) - \sigma(t,y,\nu))(\sigma(t,x,\mu) - \sigma(t,y,\nu))^* \partial_x^2 \bar v(t,x-y)\big) \\ & + ( b(t,x,\mu) - b(t,y,\nu)) \partial_x \bar v(t,x-y) \,. \end{align*} \begin{assumption}[Global Lyapunov condition] \label{ass gmc} There exist locally integrable, non-random, functions $g=g(t)$ and $h=h(t)$ on $I$, such that for all $(t,x,\mu)$ and $(t,y,\nu)$ in $I \times D \times \mathcal{P}(D)$ \begin{equation}\label{eq gmc} L(t,x,y,\mu,\nu)\bar v(t,x-y) \leq g(t) \bar v(x-y) + h(t) W_{\bar v}(\mu,\nu)\,. \end{equation} \end{assumption} \begin{assumption}[Integrated Global Lyapunov condition] \label{ass int gmc} There exists a locally integrable, non-random, function $h=h(t)$ on $I$, such that for all $(t,\mu)$, $(t,\nu)$ in $I \times \mathcal{P}(D)$ and for all couplings $\pi \in \Pi(\mu,\nu)$ \begin{equation} \label{eq igmc} \int_{D \times D }L(t,x,y,\mu,\nu)\bar v(t,x-y) \pi(dx, dy) \leq h(t) \int_{D\times D} \bar v(x-y) \, \pi(dx,dy)\,. \end{equation} \end{assumption} Theorem~\ref{thm contdep} gives a stability estimate for the solution to \eqref{eq mkvsde} with respect to the initial condition (continuous dependence on the initial conditions). \begin{theorem}[Continuous Dependence on Initial Condition]\label{thm:pathuniq} \label{thm contdep} Let Assumption~\ref{ass local boundedness} hold. Let $x^i$, $i=1,2$ be two solutions to~\eqref{eq mkvsde} on the same probability space such that $\mathbb{E} \bar v(x^1_0 - x^2_0)<\infty$. \begin{enumerate}[i)] \item If Assumption~\ref{ass gmc} holds then for all $t\in I$ \begin{equation} \label{eq cont dep} \mathbb{E} \bar v( x^1_t - x^2_t) \leq \exp\left( \int_0^t \left[g(s) + h(s) + 2|h(s)|\right] \,ds \right) \mathbb{E} \bar v(x^1_0 - x^2_0)\,. \end{equation} \item If Assumption~\ref{ass int gmc} and either of Assumptions~\ref{a-int} or~\ref{a-nonint} hold and if there are $p,q$ with $1/p+1/q = 1$ and a constant $\kappa$ such that for all $(t,x,\mu)$ and $(t,y,\nu)$ in $I \times D \times \mathcal{P}(D)$ \begin{equation} \label{eq integrability for uniqueness} |\partial_x \bar v(x -y)|^{2p} + |\sigma(t,x,\mu)|^{2q} + |\sigma(t,y,\nu)|^{2q} \leq \kappa (1 + v(t,x,\mu)+v(t,y,\nu)) \end{equation} then for all $t\in I$ \begin{equation} \label{eq int cont dep} \mathbb E \bar v(x^1_t - x^2_t) \leq \exp\left( \int_0^t h(s) \,ds \right) \mathbb E \bar v(x^1_0 - x^2_0)\,.
\end{equation} \end{enumerate} \end{theorem} First we note that when $I$ is a finite time interval the sign of the functions $g$ and $h$ plays no significant role. In relation to the study of ergodic SDEs, e.g.~(18) in~\cite{Bolley2007}, we make the following observations. If $I=[0,\infty)$ and Assumption~\ref{ass gmc} holds with $g + h + 2|h| < 0$ then $\lim_{t\to \infty} \mathbb E \bar v(x^1_t - x^2_t) = 0$. However we see that while the spatial dependence of the coefficients can play a positive role for the stability of the equation (if $g$ is negative) it seems that the measure dependence never has such a positive role, regardless of the sign of $h$. If $I=[0,\infty)$ and we are in the second case of Theorem~\ref{thm contdep} then negative $h$ can play a positive role for stability (but unlike the first case we also need the condition~\eqref{eq integrability for uniqueness}). \begin{proof} Note that if we are in case ii) then in what follows we set $g = 0$ for all $t\in I$. Let \[ \varphi(t) = \exp\left(-\int_0^t [g(s) + h(s)]\, ds\right) \,. \] Applying the classical It\^o formula to $\varphi\, \bar v(x^1-x^2)$ we have that for $t\in I$ \begin{equation} \label{eq after ito for uniq} \begin{split} \varphi(t) \bar v(x^1_t-x^2_t) = & \bar v(x^1_0 - x^2_0) \\ & + \int_0^{t} \varphi(s) \big[ L(s,x^1_s,x^2_s,\mathscr L(x^1_s),\mathscr L(x^2_s))\bar v(s,x^1_s-x^2_s) - (g(s) + h(s)) \bar v(x^1_{s}-x^2_{s}) \big]\,ds\\ & + \int_0^{t} \varphi(s) \partial_x \bar v( x^1_s-x^2_s)(\sigma(s,x^1_s,\mathscr L(x^1_s))-\sigma(s,x^2_s,\mathscr L(x^2_s))) dw_s.
\end{split} \end{equation} {\em Case i)} Assumption~\ref{ass gmc} implies \begin{equation*} \begin{alignedat}{1} \varphi(t) \bar v(x^1_{t}-x^2_{t}) \leq \, & \bar v(x^1_0 - x^2_0) + \int_0^t \varphi(s)\big[ h(s) W_{\bar v}(\mathscr L(x^1_s),\mathscr L(x^2_s)) - h(s) \bar v(x^1_{s}-x^2_{s}) \big] \,ds\\ & + \int_0^{t} \varphi(s) \partial_x \bar v( x^1_s-x^2_s)(\sigma(s,x^1_s,\mathscr L(x^1_s))-\sigma(s,x^2_s,\mathscr L(x^2_s))) dw_s. \end{alignedat} \end{equation*} Define the stopping times $\{\tau^i_m\}_{m\geq 1}$, $i=1,2$ and $\{\tau_m\}_{m\geq 1}$ by \[ \tau^i_m := \inf \{t\in I \,:\, x^i_t\notin D_m \}\,,\,\, i = 1,2\,\,\,\text{and}\,\,\, \tau_m := \tau^1_m \wedge \tau^2_m\,. \] By Definition~\ref{def soln} we know that $x^i \in C(I;D)$ a.s. and so $\tau^i_m \nearrow \infty$ a.s. and hence $\tau_m \nearrow \infty$ a.s. as $m\to \infty$. The local boundedness of $\sigma$ ensures that the stochastic integral in the above is a martingale up to time $t\wedge\tau_m$, hence \begin{equation*} \begin{alignedat}{1} & \mathbb{E}[ \varphi(t\wedge\tau_m) \bar v( x^1_{t\wedge\tau_m}-x^2_{t\wedge\tau_m})]\\ & \leq \mathbb{E}[\bar v(x^1_0 - x^2_0)] + \mathbb{E}\left[\int_0^{t\wedge\tau_m} \varphi(s)\big[ h(s)W_{\bar v}(\mathscr L(x^1_s),\mathscr L(x^2_s)) - h(s) \bar v( x^1_{s}-x^2_{s}) \big] \,ds \right]\\ & \leq \mathbb{E}[\bar v( x^1_0 - x^2_0 )] + \mathbb{E}\left[\int_0^{t} \varphi(s)\big[ |h(s)| \bar v(x^1_{s}-x^2_{s}) \big]\,ds \right] \,, \end{alignedat} \end{equation*} where the last inequality follows from the definition of the semi-Wasserstein distance. Since $\tau_m\nearrow\infty$ as $m\rightarrow\infty$, an application of Fatou's lemma gives \[ \mathbb{E}[\varphi(t)\bar v( x^1_t - x^2_t )] \leq \mathbb{E} \bar v(x^1_0 - x^2_0) + \int_0^t |h(s)|\mathbb{E}[ \varphi(s) \bar v( x^1_s-x^2_s )]\,ds. \] From Gronwall's lemma we get~\eqref{eq cont dep}.
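In detail, writing $u(t) := \mathbb{E}[\varphi(t)\,\bar v( x^1_t - x^2_t )]$, the last display reads $u(t) \leq u(0) + \int_0^t |h(s)| u(s)\,ds$ with $u(0) = \mathbb E \bar v(x^1_0-x^2_0)$, so Gronwall's lemma gives \[ u(t) \leq \exp\left(\int_0^t |h(s)|\,ds\right) \mathbb E \bar v(x^1_0 - x^2_0)\,, \] and hence, recalling the definition of $\varphi$, \[ \mathbb{E} \bar v(x^1_t - x^2_t) \leq \exp\left( \int_0^t \left[g(s) + h(s) + |h(s)|\right]\,ds\right) \mathbb E \bar v(x^1_0 - x^2_0) \leq \exp\left( \int_0^t \left[g(s) + h(s) + 2|h(s)|\right]\,ds\right) \mathbb E \bar v(x^1_0 - x^2_0)\,, \] which is~\eqref{eq cont dep}.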
{\em Case ii)} Taking expectation in~\eqref{eq after ito for uniq}, recalling that in this case $g=0$, and then using Assumption~\ref{ass int gmc} we have \begin{equation*} \mathbb E \left[\varphi(t) \bar v (x^1_{t}-x^2_{t}) \right] \leq \mathbb E\bar v(x^1_0 - x^2_0) + \mathbb E\int_0^{t} \varphi(s) \partial_x \bar v( x^1_s-x^2_s)(\sigma(s,x^1_s,\mathscr L(x^1_s))-\sigma(s,x^2_s,\mathscr L(x^2_s)))\, dw_s. \end{equation*} Corollary~\ref{corollary bound for limit process} together with~\eqref{eq integrability for uniqueness} and the local integrability of $g$ and $h$ ensures that the stochastic integral in the above expression is a martingale. Indeed \[ \begin{split} & \int_0^t \varphi(s)^2\mathbb E\left[ |\partial_x \bar v(x^1_s - x^2_s)|^2 |\sigma(s,x^1_s,\mathscr L(x^1_s))-\sigma(s,x^2_s,\mathscr L(x^2_s))|^2 \right]\,ds \\ & \leq \int_0^t \varphi(s)^2\mathbb E\left[ \frac{1}{p}|\partial_x \bar v(x^1_s - x^2_s)|^{2p} + \frac{1}{q}|\sigma(s,x^1_s,\mathscr L(x^1_s))-\sigma(s,x^2_s,\mathscr L(x^2_s))|^{2q} \right]\,ds \\ & \leq c_{p,q} \int_0^t \varphi(s)^2\mathbb E\left[|\partial_x \bar v(x^1_s - x^2_s)|^{2p} + |\sigma(s,x^1_s,\mathscr L(x^1_s))|^{2q}+|\sigma(s,x^2_s,\mathscr L(x^2_s))|^{2q} \right]\,ds \\ & \leq c_{p,q} \int_0^t \varphi(s)^2 \kappa \left( 1 + \mathbb E v(s,x^1_s,\mathscr L(x^1_s)) + \mathbb Ev(s,x^2_s,\mathscr L(x^2_s)) \right)\,ds < \infty\,. \end{split} \] Hence, as the stochastic integral has zero expectation, \begin{equation*} \varphi(t)\mathbb{E}[ \bar v(x^1_{t}-x^2_{t})] \leq \mathbb{E}[ \bar v(x^1_0 - x^2_0)]\,, \end{equation*} which is~\eqref{eq int cont dep}. \end{proof} \begin{corollary} \label{corollary uniqueness} If the conditions for either case i) or ii) of Theorem~\ref{thm contdep} hold and if $x^1_0 = x^2_0$ a.s. then the solutions to~\eqref{eq mkvsde} are pathwise unique. \end{corollary} \begin{proof} If $I=[0,T]$ then uniqueness follows immediately from Theorem~\ref{thm contdep}, the fact that $\text{Ker}\,\, \bar v = \{0\}$ and the local integrability of $g$ and $h$.
If $I=[0,\infty)$ then it is enough to observe that when $x_0^1=x_0^2$ then, due to Theorem~\ref{thm contdep}, uniqueness holds on the interval $[0,s]$ for some $s>0$ and in particular $x^1_s = x^2_s$ a.s. Thus we can continue in a recursive manner to obtain uniqueness on the intervals $[ks,(k+1)s]$ for $k\in \mathbb{N}$. \end{proof} \subsection{Example due to Scheutzow} Consider the McKean--Vlasov SDE of the form \begin{equation} \label{eq:examplescheutzow} x_t=x_0+\int_0^t B(x_s,\mathbb{E}[\bar b(x_s)])\,ds + \int_0^t \Sigma(x_s,\mathbb{E}[\bar \sigma(x_s)])\,dw_s\,. \end{equation} Our study of this more specific form of McKean--Vlasov SDE is inspired by~\cite{Scheutzow87}, where it has been shown that even in the case when $\Sigma=0$ and either $B$ or $\bar b$ is locally Lipschitz, uniqueness, in general, does not hold. We will show that if we impose some structure on the local behaviour of the functions then this, together with the integrability conditions established in Theorem~\ref{thm:weakexistence}, is enough to obtain a unique solution to~\eqref{eq:examplescheutzow}. To be more specific: we impose a local (in the second variable) monotonicity condition on the functions $B$ and $\Sigma$, which is weaker than a local (in the second variable) Lipschitz condition, and a local Lipschitz condition on the functions $\bar b$ and $\bar \sigma$. \begin{assumption} \label{ass scheutzow} \hfill{} \begin{enumerate}[i)] \item Local monotonicity condition: there exists a locally bounded function $M=M(x',y',x'',y'')$ such that $\forall x,x',x'',y,y',y'' \in D$ \[ 2(x-y)(B(x,x')-B(y,y')) + |\Sigma(x,x'') - \Sigma(y,y'')|^2 \leq M(x',y',x'',y'')(| x - y |^2 + | x' - y' |^2 + |x'' - y''|^2) \] \item there exists $\kappa$ such that $\forall (t,x,\mu) \in I \times D \times \mathcal{P}(D)$ \[ |\bar b(x)| + |\bar \sigma (x)| \leq \kappa(1 + v(t,x,\mu) )\,.
\] \item there exists $\kappa$ such that $\forall (t,x,\mu) \in I \times D \times \mathcal{P}(D)$ and $y \in D$ \[ |\bar b(x)-\bar b(y)|+ |\bar \sigma(x)-\bar \sigma(y)|\leq \kappa(1+\sqrt{v(t,x,\mu) } + \sqrt{v(t,y,\mu)})|x-y|\,. \] \end{enumerate} \end{assumption} \begin{theorem} If Assumption~\ref{a-int} holds, $\sup_{t\in I} M(t) < \infty$ and Assumptions~\ref{ass local boundedness} and~\ref{ass scheutzow} hold, then the solution to~\eqref{eq:examplescheutzow} is unique. \end{theorem} We will need the following observation: if $\pi \in \Pi(\mu, \nu)$ then, due to the theorem on disintegration (see for example~\cite[Theorem 5.3.1]{ambrosio2008}), there exists a family $(P_{x})_{x\in D} \subset \mathcal P(D)$ such that \[ \int_{D\times D} f(x,y)\,\pi(dx,dy) = \int_D \left(\int_D f(x,y)\,P_x(dy)\right)\,\mu(dx) \] for any $f=f(x,y)$ which is a $\pi$-integrable function on $D\times D$. In particular if $f=f(x)$ then \[ \int_{D\times D} f(x)\,\pi(dx,dy) = \int_D f(x)\left(\int_D \,P_x(dy)\right)\,\mu(dx) = \int_D f(x)\,\mu(dx)\,. \] \begin{proof} Our aim is to show that Assumption~\ref{ass gmc} holds since then uniqueness follows from Corollary~\ref{corollary uniqueness}. We know from Lemma~\ref{lemma:uniform} that for any $t\in I$ we have $\int_D v(t,x,\mathscr L(x_t)) \, \mathscr L(x_t)(dx) \leq \sup_{t\in I}M(t)$ and so it is in fact enough to verify~\eqref{eq gmc} for measures $\mu$ such that $\int_D v(t,x,\mu)\, \mu(dx) \leq \sup_{t\in I}M(t)$. From Assumption~\ref{ass scheutzow} i), we have \[ 2(x-y)(b(x,\mu)-b(y,\nu))+|\sigma(x,\mu) - \sigma(y,\nu)|^2 \leq M(x',y',x'',y'')[|x-y|^2 + |x'-y'|^2 + |x''-y''|^2]\,, \] where $x' = \int_D \bar b(z)\mu(dz)$, $y' = \int_D \bar b(z)\nu(dz)$, $x'' = \int_D \bar \sigma(z)\mu(dz)$ and $y'' = \int_D \bar \sigma(z)\nu(dz)$.
We note that each of $|x'|$, $|y'|$, $|x''|$ and $|y''|$ lies in a bounded subset of $\mathbb R$ since, due to Assumption~\ref{ass scheutzow} ii), we have \[ |x'| + |x''| \leq 2\kappa\left(1+\int_D v(t,z,\mu)\,\mu(dz)\right) \leq 2\kappa\left(1+\sup_{t\in I}M(t)\right)\,, \] and similarly for $|y'| + |y''|$ with $\nu$ in place of $\mu$. As $M$ maps bounded sets to bounded sets we can choose a constant $g$ sufficiently large so that $M(x',y',x'',y'')\leq g$ for all $\mu, \nu$. We apply the remark on disintegration to see that \[ |x'-y'|^2 = \left|\int_D \bar b(\bar x)\mu(d\bar x) - \int_D \bar b(\bar y)\nu(d\bar y)\right|^2 = \left|\int_{D\times D} (\bar b(\bar x) - \bar b(\bar y))\,\pi(d\bar x,d\bar y)\right|^2\,. \] From Assumption~\ref{ass scheutzow} iii) and the Cauchy--Schwarz inequality we get \[ \begin{split} |x'-y'|^2 & \leq \kappa^2 \int_{D \times D} (1+\sqrt{v(t,\bar x,\mu) } + \sqrt{v(t,\bar y,\mu)})^2\, \pi(d\bar x,d\bar y) \int_{D \times D} |\bar x-\bar y|^2 \pi(d\bar x,d\bar y) \\ & \leq 3\kappa^2\left(1+2\sup_{t\in I}M(t)\right)\int_{D \times D} |\bar x-\bar y|^2 \pi(d\bar x,d\bar y)\,. \end{split} \] Since the calculation for $|x''-y''|^2$ is identical we finally obtain \[ 2(x-y)(b(x,\mu)-b(y,\nu))+|\sigma(x,\mu) - \sigma(y,\nu)|^2 \leq g|x-y|^2 + 6 g \kappa^2 \left(1+2\sup_{t\in I}M(t)\right)\int_{D \times D} |\bar x-\bar y|^2 \pi(d\bar x,d\bar y) \] as required to have Assumption~\ref{ass gmc} satisfied. \end{proof} \section{Invariant Measures} \subsection{Semigroups on $C_b(D)$} We will establish the existence of a stationary measure for semigroups associated with solutions to~\eqref{eq mkvsde} via the Krylov--Bogolyubov Theorem (see \cite[Chapter 7]{DaPrato}). Let the conditions of Theorem \ref{thm:weakexistence} hold with suitable assumptions on $m_1$ and $m_2$ so that we are within the regime where $I=[0,\infty )$. For every point $y\in D$ fix a process $(x^y_t)_{t\geq 0}$ that is a $v$-integrable solution to the McKean--Vlasov SDE~\eqref{eq mkvsde} started from $y$.
We then define a semigroup $(P_t)_{t\geq 0}$ by \[ P_t\varphi(y):=\mathbb E[\varphi(x^y_t)]\,\,\,\text{for $t\geq 0$, $\varphi\in C_b(D)$.} \] Clearly $P_t\varphi(y) = \langle \varphi, \mathscr L(x^y_t)\rangle$ and if $\varphi\in C_0^2(D)$ then $\langle \varphi, \mu_t \rangle := \langle \varphi, \mathscr L(x^y_t)\rangle$ is given by~\eqref{eq fwd kolmogorov}. This means that establishing existence of an invariant measure for~\eqref{eq mkvsde} shows that if $b$ and $\sigma$ are independent of $t$ then there is a stationary solution to~\eqref{eq fwd kolmogorov}. The two main conditions for the Krylov--Bogolyubov Theorem to hold are that the semigroup is Feller and that a tightness condition is satisfied. As we are not assuming any non-degeneracy of the diffusion coefficient we cannot always guarantee that the semigroup is Feller. See, however, Lemma~\ref{cor station} for a partial result. \begin{theorem}\label{cor station feller} If the conditions of Theorem \ref{thm:weakexistence} hold with $I=[0,\infty)$ (i.e. we have either $\sup_{t\in [0,\infty) } M^+(t) < \infty$ or $\sup_{t\in [0,\infty) } M(t) < \infty$) and the semigroup $(P_t)_{t\geq 0}$ has the Feller property then there exists an invariant measure for $(P_t)_{t\geq 0}$ acting on $C_b(D)$. \end{theorem} \begin{proof} Fix $y\in D$ and let $(\mu_t)_{t\geq 0}$ be defined as \[ \mu_t := \frac1t\int_0^t \P(x^y_s\in \cdot )\, ds = \frac1t\int_0^t\mathscr L(x^y_s) \,ds\,. \] By Fatou's Lemma and Lemma~\ref{lemma tightness} we know that for any $\varepsilon > 0$ there exists sufficiently large $m_0$ such that for all $m>m_0$ we have $\sup_{t\in I}\mathbb P[x^{y}_t\notin D_m]<\varepsilon$. Therefore $\mu_t(D\setminus D_m)=\frac{1}{t}\int_0^t\mathbb P(x^y_s\notin D_m)\,ds<\varepsilon$ and hence $(\mu_t)_{t\geq 0}$ is tight. Since we are assuming that the Feller property holds, the conclusion now follows from the Krylov--Bogolyubov Theorem (see \cite[Chapter 7]{DaPrato}). \end{proof} \begin{lemma}\label{cor station} If the assumptions of Theorem \ref{thm:weakexistence} hold with $I=[0,\infty)$ along with either Assumption \ref{ass gmc} or \ref{ass int gmc}, and $\bar v$ is non-decreasing, then the semigroup $(P_t)_{t\geq 0}$ acting on $C_b(D)$ is Feller. \end{lemma} \begin{proof} For $\varepsilon>0$, by continuity of $\varphi$ there exists $\delta_\varphi >0$ such that $|x^{y_1}_t-x^{y_2}_t|<\delta_\varphi\implies |\varphi(x^{y_1}_t)-\varphi(x^{y_2}_t)|<\varepsilon/2$. Then \begin{equation*} \begin{split} |P_t\varphi(y_1)-P_t\varphi(y_2) |& =|\mathbb E[\varphi(x^{y_1}_t)-\varphi(x^{y_2}_t)]| \leq \mathbb E[|\varphi(x^{y_1}_t)-\varphi(x^{y_2}_t)| ]\\ & = \mathbb E[|\varphi(x^{y_1}_t)-\varphi(x^{y_2}_t)|(\mathbbm{1}_{|x^{y_1}_t-x^{y_2}_t|< \delta_\varphi}+\mathbbm{1}_{|x^{y_1}_t-x^{y_2}_t|\geq \delta_\varphi}) ] \\ & <\frac{\varepsilon}{2}\mathbb P[|x^{y_1}_t-x^{y_2}_t|<\delta_\varphi ]+2|\varphi|_\infty\mathbb P[|x^{y_1}_t-x^{y_2}_t|\geq\delta_\varphi ]\\ & \leq \frac{\varepsilon}{2}+2|\varphi|_\infty\mathbb P[|x^{y_1}_t-x^{y_2}_t|\geq\delta_\varphi ].\\ \end{split} \end{equation*} We have, via the non-decreasing property of $\bar v$ (first inequality) and the continuous dependence on initial condition~\eqref{eq cont dep} and~\eqref{eq int cont dep} (second inequality), \begin{equation*} \bar{v}(\delta_\varphi)\mathbb P[|x^{y_1}_t-x^{y_2}_t|\geq\delta_\varphi ]\leq \mathbb E[\bar{v}(x^{y_1}_t-x^{y_2}_t)]\leq c_t\mathbb E[\bar v(y_1-y_2)].
\end{equation*} By continuity of $\bar{v}$, for any $\varepsilon_{\bar v}>0$ there exists $\delta_{\bar v}$ such that, if $|y_1-y_2|<\delta_{\bar{v}}$, $\bar v(y_1-y_2)<\varepsilon_{\bar v}$. Therefore, by choosing $\varepsilon_{\bar v}$ small enough such that $\frac{2c_t|\varphi|_\infty}{\bar{v}(\delta_\varphi)} \bar v(y_1-y_2) < \frac{2c_t|\varphi|_\infty}{\bar{v}(\delta_\varphi)} \varepsilon_{\bar{v}} <\varepsilon/2$, we have, for $|y_1-y_2|<\delta_{\bar v}$, $|P_t\varphi(y_1)-P_t\varphi(y_2)|<\varepsilon$. Boundedness of $P_t\varphi $ is immediate by definition. \end{proof} \subsection{Semigroups on $C_b(\mathcal P_2(D))$} Now we consider semigroups acting on functions of measures. Define the semigroup $(\mathscr P_t)_{t\geq 0}$ by \begin{equation} \label{eq semigroup for fns of meas} \mathscr P_t\phi(\mu)=\phi(\mathscr L(x^\mu_t)) \,\,\, \text{for $\phi\in C_b(\mathcal{P}_2(D))$ and $t\geq 0$.} \end{equation} Here $x^\mu_t$ denotes a solution to~\eqref{eq mkvsde} started from $\mu$. To ensure that $\mathscr L(x^\mu_t) \in \mathcal P_2(D)$ we assume that the conditions of Theorem~\ref{thm:weakexistence} hold with $V$ satisfying $V(t,x)\geq |x|^2$. If $D=\mathbb R^d$ then we can apply the chain rule for functions of measures from e.g.~\cite{buckdahn2017mean} or~\cite{chassagneux2014classical} to obtain that for $\phi\in \mathcal C^{(1,1)}(\mathcal P_2(D))$ \begin{equation} \label{eq fpkeq meas} \begin{split} & \phi(\mathscr L(x_t)) - \phi(\mathscr L(x_0)) \\ & = \int_0^t\langle \mathscr L(x^{\mu}_s), b(s,\cdot,\mathscr L(x^{\mu}_s)) \partial_\mu \phi(\mathscr L(x^{\mu}_s)) + \text{tr}\left[a(s,\cdot,\mathscr L(x^{\mu}_s)) \partial_y \partial_\mu \phi(\mathscr L(x^{\mu}_s))\right]\rangle\,ds. \end{split} \end{equation} In the case that $D\subseteq \mathbb R^d$ we have to assume that there is $\varepsilon>0$ and $k\in \mathbb N$ such that $V(t,x) \geq |x|^{2+\varepsilon}$ for $x\in D\setminus D_k$. 
We consider first $x^{k,\mu}$ given by~\eqref{eq mkvsde k} started from $\mu$. By Proposition~\ref{propn ito for meas only} we have for $\phi\in \mathcal C^{(1,1)}(\mathcal P_2(D))$ that \begin{equation} \label{eq fpkeq meas on k} \begin{split} & \phi(\mathscr L(x^k_t)) - \phi(\mathscr L(x^k_0))\\ & = \int_0^t\left\langle \mathscr L(x^{k,\mu}_s), b(s,\cdot,\mathscr L(x^{k,\mu}_s)) \partial_\mu \phi(\mathscr L(x^{k,\mu}_s)) + \text{tr}\left[a(s,\cdot,\mathscr L(x^{k,\mu}_s)) \partial_y \partial_\mu \phi(\mathscr L(x^{k,\mu}_s))\right]\right\rangle\,ds. \end{split} \end{equation} From Lemma~\ref{lemma:uniform} we get that $\sup_k \sup_t \mathbb E |x^k_t|^{2+\varepsilon} < \infty$. Moreover Lemma~\ref{lemma tightness}, together with Prohorov's theorem, implies convergence of a subsequence of the laws, and from the proof of Theorem~\ref{thm:weakexistence} we know that the limit is given by~\eqref{eq mkvsde}. We thus have convergence $W_2(\mathscr L(x^k_t), \mathscr L(x_t)) \to 0$ as $k\to \infty$. Due to continuity of the coefficients $b$ and $\sigma$ and since $\phi\in \mathcal C^{(1,1)}(\mathcal P_2(D))$ we can take the limit $k \to \infty$ in~\eqref{eq fpkeq meas on k} to obtain~\eqref{eq fpkeq meas}. \begin{theorem}\label{cor meas station feller} Let the conditions of Theorem~\ref{thm:weakexistence} hold with $I=[0,\infty)$, and $V(t,x)\geq|x|^2$ for $x\in D\setminus D_k$ for some $k\in \mathbb N$. If the semigroup $(\mathscr P_t)_{t\geq 0}$ given by~\eqref{eq semigroup for fns of meas} is Feller then there exists an invariant measure.
\end{theorem} We will need the following fact from~\cite{meleard1996asymptotic} to prove this theorem: Let $S$ be a Polish space and $(m_t)_{t\geq 0}$ be a family of probability measures on $\mathcal{P}(S)$, i.e.\ $m_t \in \mathcal P(\mathcal P(S))$. Define the intensity measure $I(m_t)$ by \[ \langle I(m_t),f \rangle=\int_{\mathcal{P}(S)} \langle \nu,f \rangle \, m_t(d\nu)\,,\,\,\,\, f \in B(S)\,. \] Here $B(S)$ denotes all the bounded measurable functions from $S$ to $\mathbb R$. Then $(m_t)_{t\geq 0}$ is tight if and only if the family of intensity measures $(I(m_t))_{t\geq 0}\subset \mathcal{P}(S)$ is tight. \begin{proof}[Proof of Theorem~\ref{cor meas station feller}] We recall that $\mathcal{P}_2(D)$ with the Wasserstein distance $W_2$ is Polish~\cite[Theorem 6.18]{villani2009}. Fix $\mu \in \mathcal P_2(D)$ and let $x^\mu$ be a solution to~\eqref{eq mkvsde bar}. We note that with $\pi_t(\mu, B) := \delta_{\mathscr L(x^\mu_t)}(B)$ we have, from~\eqref{eq semigroup for fns of meas}, that \[ \mathscr P_t \phi(\mu) = \phi(\mathscr L(x^\mu_t)) = \int_{\mathcal P_2(D)} \phi(\nu)\,\delta_{\mathscr L(x^\mu_t)}(d\nu) = \int_{\mathcal P_2(D)} \phi(\nu)\,\pi_t(\mu,d\nu)\,. \] Define the family of measures $(m^\mu_t)_{t\geq 0} \subset \mathcal P(\mathcal P_2(D))$ by \[ m^\mu_t(B) := \frac1t \int_0^t \pi_s(\mu, B)\,ds = \frac1t \int_0^t \delta_{\mathscr L(x_s^\mu)}(B)\,ds\,,\,\,\,\, B \in \mathscr B(\mathcal P_2(D))\,. \] To apply the Krylov--Bogolyubov Theorem we need to show that the family $(m^\mu_t)_{t\geq 0}$ is tight. We observe that for all $f\in B(D)$ we have \[ \begin{split} \int_{\mathcal P(D)} \langle \nu,f \rangle \, m^\mu_t(d\nu) = & \int_{\mathcal{P}(D)} \langle \nu,f \rangle \, \frac{1}{t}\int_0^t\delta_{\mathscr L (x^\mu_s)}\,(d \nu)\,ds = \frac{1}{t}\int_0^t\langle \mathscr L (x^\mu_s),f \rangle ds = \left\langle \frac{1}{t}\int_0^t \mathscr L (x^\mu_s)\,ds, f \right\rangle.
\\ \end{split} \] Therefore $I(m^\mu_t) = \frac{1}{t}\int_0^t \mathscr L (x^\mu_s)\,ds$. It remains to show that the family of intensity measures $(I(m^\mu_t))_{t\geq 0}\subset \mathcal{P}(D)$ is tight. For $B \in \mathscr B(D)$ we have \[ I(m^\mu_t)(B) = \frac{1}{t}\int_0^t\langle \mathscr L (x^\mu_s), \mathds 1_B \rangle \,ds = \frac{1}{t}\int_0^t \mathscr L (x^\mu_s)(B) \, ds = \frac{1}{t}\int_0^t \mathbb P(x^\mu_s \in B)\, ds\,. \] By Fatou's Lemma and Lemma~\ref{lemma tightness} we know that for any $\varepsilon > 0$ there exists sufficiently large $m_0$ such that for all $m>m_0$ we have $\sup_{t\in I}\mathbb P[x^\mu_t\notin D_m]<\varepsilon$. Therefore $I(m^\mu_t)(D\setminus D_m)=\frac{1}{t}\int_0^t\mathbb P(x^\mu_s\notin D_m)ds<\varepsilon$ and hence $(I(m^\mu_t))_{t\geq 0}$ is tight. \end{proof} We do not assume non-degeneracy of the diffusion and thus, in general, the semigroup $\mathscr P_t$ is not expected to be Feller. However Lemma~\ref{cor station meas} gives a partial result. \begin{lemma}\label{cor station meas} Let the assumptions of Theorem \ref{thm:weakexistence} hold for $I=[0,\infty)$ along with either Assumption~\ref{ass gmc} or~\ref{ass int gmc}. Assume further that \[ W_{\bar v}(\mu,\nu)<\infty \,\,\,\text{ for $\mu,\nu$ in }\,\,\, \mathcal P_{v}(D):=\bigg\{\mu\in \mathcal P(D) : \int_D v(0,x,\mu)\,\mu(dx)<\infty\bigg\}\,. \] Then the semigroup $(\mathscr P_t)_{t\geq 0}$ acting on $C_b(\mathcal{P}_v(D))$ and defined as in~\eqref{eq semigroup for fns of meas} is Feller. \end{lemma} Note that here we are considering a semigroup acting on a space of measures possibly different from the one previously considered.
In the case where $v$ and $\bar v$ are polynomials it is often straightforward to use the assumptions of Lemma \ref{cor station meas} in place of the Feller property assumed in Theorem \ref{cor meas station feller}, and then the requirement that $W_{\bar v}(\mu,\nu)< \infty$ for any $\mu,\nu\in \mathcal{P}_v(D)$ is no longer needed. \begin{proof} Fix $t\in I$ and $\mu_1, \mu_2 \in \mathcal P_{v}(D)$. From the continuous dependence on initial condition, Theorem~\ref{thm contdep}, we have \begin{equation}\notag \begin{alignedat}{1} W_{\bar v}(\mathscr L(x^{\mu_1}_t),\mathscr L(x^{\mu_2}_t))\leq \mathbb E [ {\bar v}(x^{\mu_1}_t - x^{\mu_2}_t) ] \leq c_t \mathbb E [ {\bar v}(x^{\mu_1}_0 - x^{\mu_2}_0)] = c_t \int_{D\times D} {\bar v}(x-y) \pi(dx,dy) \,. \end{alignedat} \end{equation} Taking the infimum over all the possible couplings yields \begin{equation} \label{eq feller for meas 1} \begin{alignedat}{1} W_{\bar v}(\mathscr L(x^{\mu_1}_t),\mathscr L(x^{\mu_2}_t))\leq c_t W_{\bar v}(\mu_1,\mu_2). \end{alignedat} \end{equation} Let $\varepsilon > 0$ be given. For any $\phi \in C_b(\mathcal{P}_v(D))$ there is $\delta_\phi$ such that $W_{\bar v}(\mathscr L(x^{\mu_1}_t),\mathscr L(x^{\mu_2}_t)) < \delta_\phi$ implies that $|\phi(\mathscr L(x^{\mu_1}_t)) - \phi(\mathscr L(x^{\mu_2}_t))|< \varepsilon$. Now take $\delta := \delta_\phi / c_t$. Then, due to~\eqref{eq feller for meas 1}, if $W_{\bar v}(\mu_1,\mu_2) \leq \delta$ then $W_{\bar v}(\mathscr L(x^{\mu_1}_t),\mathscr L(x^{\mu_2}_t))<\delta_\phi$ and we get $|\mathscr P_t \phi(\mu_1) - \mathscr P_t \phi(\mu_2)| < \varepsilon$ as required. \end{proof} \section*{Acknowledgements} We are grateful to Sandy Davie and X\={i}l\'{i}ng Zh\={a}ng, both from the University of Edinburgh, for numerous discussions on the topic of this work and many helpful suggestions.
William Hammersley was supported by The Maxwell Institute Graduate School in Analysis and its Applications, a Centre for Doctoral Training funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016508/01), the Scottish Funding Council, Heriot-Watt University and the University of Edinburgh.
\section{Introduction} \label{sec:Intro} A major part of the theoretical study of classical Hamiltonian dynamics \cite{Arnold2013} concerns the ability of purely conservative systems to reach thermal equilibrium. This line of inquiry took its origin from Boltzmann's demonstration of the celebrated H-theorem, which provided for the first time a microscopic explanation of the second law of thermodynamics. This pioneering work quickly gave rise to many paradoxes due to the fact that the entropy of a Hamiltonian system $\cal S$ is conserved. Even 150 years after Boltzmann's work, problems like Loschmidt's paradox \cite{loschmidt1876uber}, Poincar\'e's recurrence theorem \cite{poincare1890probleme} or the Fermi-Pasta-Ulam-Tsingou \cite{fermi1955studies,dauxois2008fermi} problem remain largely unresolved \cite{savitt1995time}. A possible approach towards their resolution lies in the notion that any small sub-ensemble ${\cal S}_1$ can be described as an open system interacting with the rest of $\cal S$. The latter therefore plays the role of a bath allowing the thermalization of ${\cal S}_1$. In the quantum world, this picture is known as the Eigenstate Thermalization Hypothesis \cite{deutsch91quantum,sredniki94chaos,rigol2008thermalization}, which also applies to classical systems \cite{jin2013equilibration}. In this context, an intriguing question concerns the minimal system size required for thermalization. Despite the fact that the thermodynamic limit is usually associated with large-size systems, small objects such as nanoparticles \cite{voisin2000size}, nuclei \cite{sarkar2017thermalization} or atoms trapped in optical lattices \cite{kaufman2016quantum} are nevertheless known to relax towards thermal equilibrium. With decreasing system size, a natural question to consider is to which degree a single particle is able to reach thermal equilibrium. This extreme limit can be studied experimentally using cold fermions by taking advantage of the Pauli exclusion principle. 
The associated suppression of interactions at low temperature gives rise to a unique experimental platform facilitating the study of purely Hamiltonian systems. In this work, we consider an ensemble of non-interacting particles confined in non-separable power law potentials. The question of thermalization in this class of potentials was already raised in the context of collisionless atoms in quadrupole traps \cite{Surkov1994,Davis1995}. This problem was recently revived in the context of quantum simulation of high-energy physics, where the behavior of (harmonically confined) massless Weyl fermions was studied experimentally using cold atoms in a quadrupole trap \cite{Suchet2015}. In this latter work, it was shown that after a rapid quench of the trap position, the center of mass motion is damped after a few oscillations and the system reaches a steady state characterized by partial thermalization of its momentum degrees of freedom. The corresponding distribution of the atomic ensemble closely resembles a thermal distribution, $np_{i=x,y,z}\propto \exp(-p_i^2/2mk_B T_i)$, but with anisotropic temperatures. In this paper, we present a detailed theoretical analysis of these relaxation dynamics. Furthermore, we provide analytical calculations of the steady state properties in an isotropic as well as in a pancake geometry. These results are compared to numerical solutions of the corresponding dynamical equations. Our work clarifies the memory effect leading to the anisotropy of the momentum distribution and predicts a singular behavior for spherical potentials. \section{Relaxation dynamics in quadrupole traps}\label{sec:simulations} Motivated by recent experiments with non-degenerate spin-polarized fermions \cite{Suchet2015}, we consider an ensemble of classical noninteracting particles confined by a quadrupole trapping potential \begin{equation} V(\mathbf{r}) = \mu_B b \sqrt{x^2 + y^2 + 4 z^2}. 
\label{eq:potential} \end{equation} where $\mu_B$ is the Bohr magneton and $b$ is the magnetic field gradient, a positive quantity. This potential is {\it non-integrable} since it has three degrees of freedom but only two constants of the motion (total energy $E$ and angular momentum $L_z$). As a consequence its dynamics exhibits chaotic behaviour in some regimes. In contrast, the more usual potential of standard atomic traps is a sum of harmonic terms of the form $V_1(x)+V_2(y)+V_3(z)$ allowing us to define three conserved energies, leading to an integrable problem. Note that, since the quadrupole potential cannot be written as the sum of potentials as in the harmonic case, the motion along one direction depends on the other two so that momentum and energy are constantly being exchanged between the three directions as the atom moves along the orbit. We will study the {\em relaxation dynamics} in this potential i.e., what happens to the gas after it is slightly perturbed from equilibrium. At $t=0$ with the gas in thermal equilibrium, the atoms receive a ``momentum kick'' $\bq$ that shifts every atom's momentum ${\bf p} \rightarrow {\bf p}+ \bq$ and increases its energy by ${\bf p} \cdot \bq/m +q^2/2m$. Since the original (thermal) distribution before the kick is an even function of each component of ${\bf p}$, the first term drops out when averaged over that distribution, so that the average energy change $\Delta E$ per atom is: \begin{equation} \Delta E =q^2/2m. \label{eq:deltaE} \end{equation} We are interested in the subsequent evolution: how the gas relaxes to steady state and how the energy $\Delta E$ of the kick is redistributed along the different directions of motion. Normally, as is usually assumed, collisions would be responsible for this redistribution leading to a return to thermal equilibrium. However, in our case, there are no collisions nor mean-field interactions, so any relaxation process is due purely to the nonintegrability of the potential. 
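The averaging step behind \eqref{eq:deltaE} is easy to check numerically: for momenta drawn from a symmetric distribution, the cross term ${\bf p} \cdot \bq/m$ vanishes on average. A minimal sketch (in units $m=k_BT_0=1$, with an illustrative kick $q=0.5$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Thermal momenta (m = k_B T_0 = 1) and a kick q along x: the cross term
# p.q/m averages to zero by symmetry, leaving Delta E = q^2/(2m) per atom.
N = 1_000_000
p = rng.normal(size=(N, 3))          # p_i ~ exp(-p_i^2/2)
q = np.array([0.5, 0.0, 0.0])        # illustrative kick

dE = 0.5 * ((p + q) ** 2).sum(axis=1) - 0.5 * (p ** 2).sum(axis=1)
print(dE.mean())                     # close to q^2/2 = 0.125
```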
The state of the gas can be described by the Boltzmann distribution $f(\mathbf{r},\mathbf{p},t)$ which we normalize to unity: \begin{equation} \int d^3 \mathbf{r} \int d^3 \mathbf{p} \ f(\mathbf{r},\mathbf{p},t)=1. \end{equation} All extensive quantities are to be taken as ensemble averages over this distribution. For example, the final measured momentum distribution $np_z$ is given by \begin{equation} np_z= \int d^3 \mathbf{r} \int dp_x \ dp_y \ f(\mathbf{r},\mathbf{p},t \rightarrow \infty). \label{eq:doublyintegrated} \end{equation} \begin{figure} \begin{overpic}[width=0.9\linewidth]{fig1.pdf} \put(1,73){\rm\bfseries a)} \end{overpic} \begin{overpic}[width=0.9\linewidth]{fig0.pdf} \put(1,73){\rm\bfseries b)} \end{overpic} \caption{Numerical simulation of the relaxation dynamics in a quadrupole trap. a) Kinetic energy per atom along different directions as a function of time after a kick $q=1$ along $x$ at $t=0$. The energies seem to reach a stationary state for $t\geq 80$. Along $z$ the average kinetic energy is almost unchanged from its initial value $\sim 1$, but along $x$ and $y$ the corresponding values increase by the same amount to a final value of $\sim 1.2$. b) Momentum distribution $np_z$ of the steady state. The solid line $\mathcal{G}_{\rm eq}$ is a Gaussian distribution with the same variance.\label{fig:simul1}} \end{figure} To simulate this distribution we perform molecular dynamics simulations of the gas \cite{boltzmannletter, goulko2012boltzmann} where the trajectory of each atom is calculated following the classical equations of motion, without suffering any collision. This method gives us full access to all observables, including the Boltzmann distribution itself. 
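A minimal sketch of such a collisionless trajectory calculation, using the velocity Verlet scheme and the reduced units $m=k_BT_0=\mu_Bb=1$ introduced below (the initial condition is an arbitrary illustration):

```python
import numpy as np

def force(r):
    """Force -grad V for the quadrupole potential V = sqrt(x^2 + y^2 + 4 z^2)."""
    x, y, z = r
    w = np.sqrt(x * x + y * y + 4.0 * z * z)
    return -np.array([x, y, 4.0 * z]) / w

def verlet_step(r, p, dt):
    """One velocity Verlet step with m = 1."""
    p_half = p + 0.5 * dt * force(r)
    r_new = r + dt * p_half
    return r_new, p_half + 0.5 * dt * force(r_new)

def energy(r, p):
    x, y, z = r
    return 0.5 * p @ p + np.sqrt(x * x + y * y + 4.0 * z * z)

# One collisionless trajectory from an arbitrary initial condition,
# with the time step dt = 0.001 used in the simulations.
r = np.array([1.0, 0.5, 0.3])
p = np.array([0.2, -0.1, 0.4])
e0 = energy(r, p)
for _ in range(10_000):
    r, p = verlet_step(r, p, 0.001)
print(abs(energy(r, p) - e0))  # energy drift remains small
```

Averaging $p_i^2$ over many such trajectories, with initial conditions sampled from the thermal distribution, reproduces the ensemble averages discussed below.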
For example, we can measure the phase-space average $\left< p_i^{2} \right>$ for $i=x,y,z$ over the entire system as a function of time by averaging over the trajectories of individual atoms: \begin{equation} \left< p_i^2 \right>_t \equiv \int d^3 \mathbf{r} \int d^3 \mathbf{p} \ p_i^2 f(\mathbf{r},\mathbf{p},t) \simeq \frac{1}{N}\sum_{\mbox{all $N$ atoms}} p_i^2(t). \label{eq:phase_space_average} \end{equation} We start with a gas of $N=10^5$ atoms sampled from the initial Boltzmann distribution at temperature $k_B T_0=1$ with a momentum kick of $q$ along $x$: \begin{equation} f \propto \exp \left(-\frac{(p_x-q_x)^2+p_y^2+p_z^2}{2} -V(x,y,z)\right) \label{eq:xkickeddistribution} \end{equation} (analogously for a kick $q_z$ along $z$ etc.) and let each individual atom evolve according to the classical trajectory. From now on we set $m=k_BT_0=\mu_Bb=1$, which is equivalent to choosing $m$ as the mass unit, $l_0=k_BT_0/\mu_Bb$ as the unit of length and $t_0=\sqrt{m k_BT_0}/\mu_Bb$ as the unit of time. The time evolution is calculated using the velocity Verlet algorithm \cite{Verlet, urban}. We use a time step $\Delta t=0.001t_0$, which provides sufficient accuracy, as the error of the algorithm is of the order $\mathcal{O}(\Delta t^2)$. This very simple setup gives rise to some surprises which have also been confirmed experimentally \cite{Suchet2015}: \begin{enumerate} \item {\em Stationary ``thermal'' distribution}: In Fig.~\ref{fig:simul1} a) we plot $\left< p_i^2 \right>_t$. We see that, at long times, it has reached an apparently stationary value.
In Fig.~\ref{fig:simul1} b) we plot the long time doubly integrated momentum distribution $np_z$ (\ref{eq:doublyintegrated}) and we see that it closely fits a Gaussian (thermal) distribution $np_z\propto \exp (-p_z^2/2mk_BT_z)$ where we define an effective temperature analogously to the experiment \cite{Suchet2015}: \begin{equation} T_i \equiv \left< p_i^2 \right>_{t \rightarrow \infty}, \mbox{ } i=x,y,z \end{equation} so that \begin{equation} \Delta T_i \equiv \left< p_i^2 \right>_{t \rightarrow \infty} -\left< p_i^2 \right>_{t =0}. \end{equation} \item {\em Anisotropic temperatures}: From Fig.~\ref{fig:simul1} a) we see that, even though the doubly integrated distributions $np_i$ along the different directions $i$ are Gaussian, their widths are different: generally we find $T_x \sim T_y \neq T_z$. We also see that $T_z \sim T_0 < T_{x,y}$, which we find to be true whenever the kick is in the $xy$ plane, the opposite being true if the kick is along the $z$ direction. This is unexpected because the quadrupole potential is non-separable, continuously transferring energy and momentum between all directions for each atom, so we might expect na\"ively that on average $T_x \sim T_y \sim T_z$, i.e.\ there would be a certain degree of ergodicity. \item {\em Apparent separability of the $z$ and $x-y$ distributions}: for a kick along $z$, the width of the momentum distribution along $x$ and $y$ seems to be unchanged (i.e. $T_{x,y} \simeq 1$) whereas $T_z$ increases. The energy increase $\Delta E$ due to the kick is mainly concentrated into the $z$ direction so that $\Delta E\simeq \frac{3}{2} \Delta T_z$. Likewise, if the kick is along $x$, the increase in kinetic energy along the $z$ direction is negligible ($T_z\simeq 1$) but {\it both} $T_x$ and $T_y$ increase by the same amount ($T_x=T_y$) so that $\Delta E \simeq 3 \Delta T_x$. In fact we will see below that this separation is not exact; there is a slight change of the energy along directions transverse to the kick.
Nevertheless this behaviour is consistent with a strong separation of the dynamics into $z$ and $xy$ plane components even though the potential is non-separable. \end{enumerate} The na\"ive, straightforward conclusion from these observations is that {\it the gas seems to have thermalized in the absence of collisions (since the doubly-integrated momentum distributions \eqref{eq:doublyintegrated} become Gaussian-like, a hallmark of thermalization) but with some effective ``decoupling'' of the motion along $z$ and $xy$ directions leading to different temperatures $T_z$ and $T_{xy}$.} \subsection{Apparent Thermalization} \label{sec:thermalisation} In point 1. above we noted that the gas becomes stationary after some time. This stationary state of the gas is not due to collisions but to the fact that, in the quadrupole trap, the orbits of different atoms will have different, incommensurate periods leading to the relative dephasing of individual trajectories. This dephasing, when averaged over the whole gas, leads to a stationary distribution. Note that the appearance of a stationary distribution would not happen in the standard harmonic trap since a momentum kick would lead to undamped oscillations of the center of mass. Note also that irreversibility has not set in by this stage since there are no collisions. We also mentioned above that the gas seemed to have thermalized in the absence of interactions since the doubly-integrated momentum distributions \eqref{eq:doublyintegrated} become Gaussian after the kick. Of course, since the effective temperatures deduced from the width of the Gaussians are different ($T_z \neq T_{x,y}$) the state cannot be a true thermal state. Indeed, collisions are necessary to redistribute the kick energy $\Delta E$ among all accessible phase space regions of energy $E+\Delta E$ so that the entropy increases $S(E) \rightarrow S(E+\Delta E)$, whereas in this experiment $E\rightarrow E+\Delta E$ but entropy is unchanged.
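The dephasing mechanism can be illustrated by a toy ensemble of uncoupled one-dimensional oscillators whose frequency depends on amplitude (a hypothetical frequency law, not that of the trap), all kicked in phase at $t=0$:

```python
import numpy as np

rng = np.random.default_rng(2)

# N uncoupled oscillators p_i(t) = A_i w_i sin(w_i t), all in phase at t = 0,
# with a hypothetical amplitude-dependent frequency w = 1/sqrt(A): the
# ensemble average <p^2>(t) relaxes to a stationary value by pure dephasing.
N = 2000
A = rng.uniform(0.5, 1.5, N)
w = 1.0 / np.sqrt(A)

t = np.linspace(0.0, 200.0, 1001)
p2 = ((A * w) ** 2 * np.sin(np.outer(t, w)) ** 2).mean(axis=1)

s = ((A * w) ** 2).mean() / 2.0   # fully dephased value, <sin^2> = 1/2
print(p2[0], p2[-1], s)           # starts at 0, relaxes towards s
```

Each individual orbit remains strictly periodic; only the ensemble average becomes stationary, exactly as in the trap.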
Nevertheless, as can be seen from the simulations, several ``thermal'' properties {\it can} be achieved, e.g. stationarity and equilibration of temperatures along the $x$ and $y$ directions. \begin{figure} \includegraphics[width=0.9\linewidth]{fig2.pdf} \caption{The long time Boltzmann distribution $f$ after a kick along $z$ plotted as a function of $p_z$ keeping all other variables $x,y,z,p_x,p_y$ fixed and for different values of $x$. To obtain a nonzero number of atoms in the six-dimensional volume we considered a narrow region in phase space given by the coordinates in the figure and divided it into bins. We plot the number of atoms in each bin averaged over time. It can be seen that $f$ does not resemble a Gaussian thermal function and that the three-peak feature becomes more prominent for $x$ closer to the center. Similar results are found plotting along all other coordinates.} \label{fig:simul2} \end{figure} We can ask to what extent the final state of the gas is close to a thermal state. For example, could it be e.g. a product of three different Gaussians (with different temperatures) of the type \begin{equation} f \sim e^{-p_x^2/2T_x} e^{-p_y^2/2T_y} e^{-p_z^2/2T_z} \times e^{-V/T} \quad ? \end{equation} It is easy to see that this is not possible since it does not satisfy the time-independent collisionless Boltzmann equation. In fact we can plot a ``slice'' of $f$ as a function of one of its six coordinates keeping all others fixed as in Fig.~\ref{fig:simul2}, which shows a markedly non-bell-shaped curve. Indeed, the Gaussian character is only restored upon integration of the other five coordinates of the Boltzmann distribution e.g.: \begin{equation} np_x= \int d^3 \mathbf{r} \int dp_y dp_z f(\mathbf{r},\mathbf{p},t \rightarrow \infty) \propto {\rm e}^{-p_x^2/2mk_BT_x} \end{equation} which raises the question of why averages over complex distributions such as those of Fig.~\ref{fig:simul2} lead to a Gaussian profile.
We will not consider this question further here, leaving it for future study. \subsection{Symmetries and sum rule of the distribution} We can be more quantitative regarding the $\Delta T_i$. We first notice that the quadrupole potential \eqref{eq:potential} is homogeneous of order one (a potential homogeneous of order $\alpha$ has the property that $V(\lambda {\bf r})=\lambda^\alpha V({\bf r})$). So we can apply the virial theorem, which leads to the following relation \cite{Landau1982}: \begin{equation} \Delta E = \frac{3}{2} (\Delta T_x + \Delta T_y + \Delta T_z). \label{eq:VT} \end{equation} Furthermore, for small kicks we can derive some symmetry considerations and a sum rule. Defining the matrix $\Theta_{ij}$ as \begin{equation} \Delta T_i \equiv \sum_j \Theta_{ij} \frac{q_j^2}{2} \label{eq:Theta} \end{equation} where $i,j=x,y,z$, and $q_i$ is the momentum kick along the $i$th direction, it is possible to show that this matrix is symmetric so that, for small kick momentum, we have $\Theta_{ij}=\Theta_{ji}$. More generally it is straightforward to show using \eqref{eq:deltaE} and \eqref{eq:VT} that, if the potential is homogeneous of order $\alpha$, the $\Theta_{ij}$ satisfy the constraint \begin{equation} \sum_j \Theta_{ij} = \frac{2 \alpha}{2 + \alpha}. \label{eq:sum_rule} \end{equation} For potentials with axial symmetry around the $z$ axis, which is the case of the quadrupole trap, the fact that the matrix is symmetric and that for any kick $\Delta T_x=\Delta T_y$ imply that the matrix can be written using only three distinct elements $\theta_{1,2,3}$ as \begin{equation} \Theta= \begin{pmatrix} \theta_1 & \theta_1 & \theta_2 \\ \theta_1 & \theta_1 & \theta_2 \\ \theta_2 & \theta_2 & \theta_3 \end{pmatrix} \label{eq:theta_matrix_axial} . \end{equation} Using the sum rule we find $\theta_2+2\theta_1=\theta_3+2 \theta_2=2/3$, which leaves us with a single unknown parameter. The experimentally measured value $\Delta T_z/(q^2_z/2)=2/3$ (see point 3.
above) implies $\theta_3=2/3$, $\theta_2=0$ and $\theta_1=1/3$, the latter also being in agreement with the measured value. The quadrupole simulations confirm the experimental observations (1-3) (see Fig.~\ref{fig:simul1}) even though there is a small correction to the experimental values: a slight cooling of the directions transverse to the kick so that $\Theta_{xx}=0.36$ (instead of 1/3) and $\Theta_{xz}=-0.05$. So the observation of point 3., the apparent separability of the $z$ and $x-y$ distributions, is not exact but rather an excellent approximation. \footnote{We also investigated anisotropic potentials, finding very similar behavior.} We can study the gas dynamics by analyzing individual atomic trajectories and then averaging over initial conditions. However the trajectories can be quite difficult to find due to the nonintegrability of the potential. To show this we constructed a Poincar\'e map: in Fig.~\ref{Poincare}, we see that there are both chaotic and quasi-integrable regions. A study of the gas starting from its individual trajectories would be quite complex analytically. \begin{figure} \includegraphics[width=0.9\linewidth]{fig3_lr.pdf} \caption{\label{Poincare}Poincar\'e map of the quadrupole potential. We study trajectories with the same energy but different initial phase-space coordinates. The values of $x$ and $p_x$ are recorded whenever $z = 0$ and $p_z > 0$. We see the appearance of small islands denoting invariant tori close to which quasi-integrable trajectories evolve, separated by contiguous regions of chaotic dynamics.} \end{figure} For this reason, it is easier to study not the potential \eqref{eq:potential} itself but related cases that may contain the same physics while having all or nearly all trajectories integrable or quasi-integrable. For example let us consider the family of potentials \begin{equation} \label{eq:general_potential_with_epsilon} V_\epsilon(x,y,z) = \sqrt{x^2 + y^2 + (1+\epsilon) z^2}.
\end{equation} When $\epsilon=3$ we get the quadrupole potential (\ref{eq:potential}). But if we take $\epsilon=0$ the potential becomes spherically symmetric and therefore integrable. Alternatively, if $\epsilon \gg 1$ then we are left with a highly confined potential along the $z$ direction (a ``pancake'') so that the motion simplifies again and an effective motion in the $x-y$ plane can be studied. We will begin with the study of the spherical potential in Sec.~\ref{sec:spherical} which, surprisingly, exhibits many of the phenomena of the quadrupole potential, including the anisotropy of the momentum distribution. After this we will analyze the pancake case in Sec.~\ref{sec:pancake}, comparing both of these limits with the quadrupole potential. \section{Spherical limit} \label{sec:spherical} The simulations in the quadrupole potential suggest that after perturbing an equilibrium gas along a particular direction, the ensemble average of the momentum widths $\langle p_i^2 \rangle_{t}$ converges to a stationary distribution in the long time limit $t \rightarrow \infty$. In particular, we observed that $\langle p_x^2 \rangle_{\infty} = \langle p_y^2 \rangle_{\infty}$ and in general $\langle p_x^2 \rangle_{\infty} \neq \langle p_z^2 \rangle_{\infty}$. Calculating the final momentum widths $\langle p_i^2 \rangle_{\infty}$ for a gas of atoms in the quadrupole potential from first principles is difficult without understanding the individual trajectories. Therefore, as mentioned above, it is a natural simplification to consider instead the case where we remove the anisotropy in the quadrupole potential: \begin{equation} \label{eq:spherical_potential} V_{\epsilon=0}(x,y,z) = \sqrt{x^2 + y^2 + z^2}=r, \end{equation} where $r$ is the radial coordinate (for the rest of this section we will drop the subscript $\epsilon=0$).
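Single-atom dynamics in this family of potentials is straightforward to integrate numerically. The following minimal Python sketch (our own illustration, not the code used for the figures; units $m=k_BT_0=1$) propagates one atom in $V_\epsilon$ with a symplectic leapfrog scheme and monitors energy conservation:

```python
import numpy as np

def grad_V(r, eps):
    # gradient of V_eps(x, y, z) = sqrt(x^2 + y^2 + (1 + eps) z^2)
    x, y, z = r
    s = np.sqrt(x * x + y * y + (1.0 + eps) * z * z)
    return np.array([x, y, (1.0 + eps) * z]) / s

def energy(r, p, eps):
    x, y, z = r
    return 0.5 * np.dot(p, p) + np.sqrt(x * x + y * y + (1.0 + eps) * z * z)

def leapfrog(r, p, eps, dt, nsteps):
    # kick-drift-kick (velocity Verlet): symplectic, so the energy error
    # stays bounded instead of drifting secularly
    for _ in range(nsteps):
        p = p - 0.5 * dt * grad_V(r, eps)
        r = r + dt * p
        p = p - 0.5 * dt * grad_V(r, eps)
    return r, p

r0 = np.array([1.0, 0.2, 0.4])
p0 = np.array([0.1, 0.6, 0.3])
r1, p1 = leapfrog(r0, p0, eps=3.0, dt=1e-3, nsteps=20000)
drift = abs(energy(r1, p1, 3.0) - energy(r0, p0, 3.0))  # should be tiny
```

Thermal averages such as $\langle p_i^2\rangle_t$ are then obtained by repeating this over Maxwell-Boltzmann-distributed initial conditions.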
Na\"ively, one would expect that perturbing a gas along any direction in such an spherical potential will lead to an isotropic distribution at long times: $\langle p_x^2 \rangle_{\infty} = \langle p_y^2 \rangle_{\infty} = \langle p_z^2 \rangle_{\infty}$. However, as we shall see, the final momentum width along the direction of the perturbation will be different to that along perpendicular directions. To anticipate some of the conclusions of this section: this is intuitively plausible: in a spherical potential all three components of angular momentum are conserved, so the motion of each atom is confined to a plane passing through $r=0$ and perpendicular to its angular momentum. The population of each plane is therefore constant during the motion. In thermal equilibrium, this population is the same for all planes but a momentum kick will cause a transfer of atoms between planes, so that the population of each plane will depend on its angle relative to the kick direction. This anisotropy in populations in the distribution is preserved at long times again due to conservation of angular momentum and translates into different final temperatures along the different directions. \subsection{Averages over the motion in planes} \label{sec:Planes} With a particle in a central field \cite{Landau1982,Goldstein2002}, the trajectory stays on the plane perpendicular to its angular momentum $\mathbf{L}$ which includes the origin $r=0$. Using polar coordinates $(r,\theta)$ for the plane, the energy $E$ is given by the usual expression: \begin{equation} E = \frac{1}{2} \left( \dot{r}^2 + r^2 \dot{\theta}^2 \right) + V(r) = \frac{1}{2} \left( \dot{r}^2 + \frac{L^2}{ r^2} \right) + V(r) \end{equation} where $L =|\mathbf{L}|= r^2 \dot{\theta} = \text{constant}$. In a potential such as (\ref{eq:spherical_potential}), the motion is confined between two values of the radial coordinate $r_{\rm min} \le r \le r_{\rm max}$ which are solutions of $\dot{r} = 0$. 
During the time in which $r$ varies from $r_{\rm max}$ to $r_{\rm min}$ and back, the radius vector turns through an angle $\Delta \theta$. The condition for the path to be closed is that this angle should be a rational fraction of $2 \pi$, i.e. that $\Delta \theta = 2 \pi m / n$, where $m$ and $n$ are integers. But according to Bertrand's theorem \cite{Goldstein2002} the only central potentials for which all paths are closed are Kepler's ($\propto -\frac{1}{r}$) and the harmonic potential ($ \propto r^2$). For all other potentials (and excluding the particular case of trajectories with zero angular momentum), the trajectory will behave as in Fig.~\ref{Bertrand}: it will become dense in the allowed annulus, filling it isotropically, so that the orbital density is only a function of the radius $r$ as the propagation time tends to infinity. Using Bertrand's theorem, we would like to analyze the long time behavior of trajectories, in particular the time averages of different quantities. For a quantity $A(t)$, the time average $\overline{A}$ is defined as, cf.~(\ref{eq:phase_space_average}): \begin{equation} \overline{A} \equiv \lim_{t \rightarrow \infty} \frac{1}{t} \int_0^{t} A(t^\prime) d t^\prime. \label{eq:time_average} \end{equation} We can convert the time average to one over the orbital density discussed above by a change of variables. Since Bertrand's theorem implies that the orbital density is isotropic, we immediately conclude that the time averages are isotropic as well: \begin{eqnarray} \overline{x^2} & = & \overline{y^2} \label{eq:x2_eq_y2} \\ \overline{p_x^2} & = & \overline{p_y^2}. \label{eq:px2_eq_py2} \end{eqnarray} We will use this fact to calculate $\langle p_i^2 \rangle_{\infty}$ for a gas of atoms. \begin{figure} \includegraphics[width=0.9\linewidth]{trajectories_isotropic_2D.pdf} \caption{\label{Bertrand}Orbit of an atom in a plane with a central potential $V(r)=r$ after increasingly long times from left to right.
Since the trajectory never closes, according to Bertrand's theorem, it fills the annular region between $r_{\rm max}$ and $r_{\rm min}$ in an isotropic, dense fashion as $t \rightarrow \infty$.} \end{figure} \subsection{Calculation of momentum averages in terms of integrals of planes} \label{sec:IsotropicDerivation} Although our purpose is to study the potential $V(r)=r$ as a limiting case of the family \eqref{eq:general_potential_with_epsilon}, it is straightforward to consider in this section a more general potential than (\ref{eq:spherical_potential}), namely \begin{equation} \label{eq:general_isotropic_potential} V(r) = r^\alpha \end{equation} with $0<\alpha \neq 2$. This will allow us to examine qualitatively different behavior as a function of $\alpha$. The case $\alpha = 2$ corresponds to the isotropic harmonic potential for which in general (\ref{eq:x2_eq_y2}) and (\ref{eq:px2_eq_py2}) are not true. For $\alpha=1$ we recover (\ref{eq:spherical_potential}). For a gas in a spherical potential, the atoms belonging to the same plane in coordinate space are also confined to the same plane in momentum space, making each plane an independent system. So our strategy will be to treat the motion in each plane separately and then sum over all of them at the end. For this we choose a coordinate system (see Appendix~\ref{sec:coordinates}) where two of the coordinates (the angles $\theta$ and $\phi$) define the plane, and the remaining four correspond to the in-plane coordinates ($u$ and $v$) and momenta ($p_u$ and $p_v$). Then we can write the total energy as \begin{equation} \langle E \rangle = \int_0^{\pi} d \phi \int_0^{\pi} d \theta \langle E \rangle _{\rm plane}, \label{eq:energy_energy_plane_relationship} \end{equation} where $\langle E \rangle _{\rm plane}$ is the average energy of the planes lying between $\theta$ and $\theta+d\theta$ and between $\phi$ and $\phi+d\phi$.
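The isotropy of the single-orbit time averages, Eqs.~\eqref{eq:x2_eq_y2} and \eqref{eq:px2_eq_py2}, which underpins the plane-by-plane treatment below, can be verified by integrating one planar orbit in $V(r)=r$ for a long time. A minimal sketch of our own (the few-percent tolerance reflects the finite integration time):

```python
import math

# one planar orbit in V(r) = r, integrated by leapfrog (m = 1)
x, y, px, py = 1.0, 0.0, 0.0, 0.7   # E = 1.245, L = 0.7: generic bound orbit
dt, nsteps = 0.004, 750_000          # total time t = 3000
sx2 = sy2 = spx2 = spy2 = 0.0
for _ in range(nsteps):
    r = math.hypot(x, y)
    px -= 0.5 * dt * x / r
    py -= 0.5 * dt * y / r
    x += dt * px
    y += dt * py
    r = math.hypot(x, y)
    px -= 0.5 * dt * x / r
    py -= 0.5 * dt * y / r
    sx2 += x * x; sy2 += y * y; spx2 += px * px; spy2 += py * py
# time averages: isotropic because the orbit fills its annulus densely
x2, y2, px2, py2 = (s / nsteps for s in (sx2, sy2, spx2, spy2))
```

The time-averaged kinetic energy also satisfies the virial relation $\overline{K} = \overline{E}/3$ for $\alpha = 1$, which can be used as a convergence check.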
Even though the probability density $f(\mathbf{r},\mathbf{p},t)$ is a function of time, the energy of each atom is constant in time, as the potential is time-independent and there is no exchange of energy between the atoms, so the average energy is also a constant. Therefore, if we know the probability density $f(\mathbf{r},\mathbf{p},t)$ at any one time, we will know the average energy for all time. This allows us to calculate the final momentum widths $\langle p_i^2\rangle_\infty$ from the distribution of energies at $t=0$ after the initial momentum kick. Since the class of potentials (\ref{eq:general_isotropic_potential}) is homogeneous of order $\alpha$ we use the virial theorem, \begin{equation} \overline{K} = \frac{\alpha}{2} \overline{V}, \end{equation} where $K$ is the kinetic energy and the averages are over time as in \eqref{eq:time_average}. Note that the virial theorem is valid both for each atom individually as well as for the entire gas. If we assume that at long times, when the gas has reached a steady state, the ergodic hypothesis applies for such systems, we can replace the time average with the ensemble average \begin{equation} \label{eq:virial_theorem_power_potentials} \langle K \rangle = \frac{\alpha}{2} \langle V \rangle. \end{equation} As each plane is a closed individual system, (\ref{eq:virial_theorem_power_potentials}) also applies plane by plane, \begin{equation} \label{eq:plane_virial_theorem_power_potentials} \langle K \rangle_{\rm plane} = \frac{\alpha}{2} \langle V \rangle_{\rm plane}, \end{equation} and using (\ref{eq:plane_virial_theorem_power_potentials}), $\langle E \rangle _{\rm plane}$ can be written as \begin{eqnarray} \langle E \rangle _{\rm plane} & = & \langle K \rangle _{\rm plane} + \langle V \rangle _{\rm plane} \nonumber \\ & = & \frac{2+\alpha}{\alpha} \langle K \rangle _{\rm plane} \nonumber \\ & = & \frac{2+\alpha}{2 \alpha} \left( \langle p_u^2 \rangle + \langle p_v^2 \rangle \right).
\end{eqnarray} According to Bertrand's theorem, Kepler's potential $V(r) = -\frac{k}{r}$ and the radial harmonic oscillator $V(r) = \frac{1}{2} k r^2$ are the only two types of central force potentials for which all bound orbits are also closed orbits. Therefore, if we restrict ourselves to the cases $0< \alpha \ne 2$, for which almost all orbits are open (except for the circular orbit), we see that $\langle p_u^2 \rangle = \langle p_v^2 \rangle$ as $t \rightarrow \infty$ so that, following the argument of Sec.~\ref{sec:Planes}, \begin{equation} \label{eq:energy_plane_pu2_relation} \langle p_u^2 \rangle = \langle p_v^2 \rangle = \frac{ \alpha}{2 + \alpha} \langle E \rangle _{\rm plane}. \end{equation} We can now express the averages of $p_x^2$, $p_y^2$ and $p_z^2$ through $\langle E \rangle _{\rm plane}$ as shown in Appendix~\ref{sec:momenta} (assuming that the final distribution does not depend on $\phi$) \begin{eqnarray} T_{x,y}= \langle p_{x,y}^2 \rangle & = & \frac{ \alpha \pi}{2(2+\alpha)} \int_0^{\pi} d \theta \langle E \rangle _{\rm plane} (1 + \sin^2 \theta) \label{eq:average_pxy2_final} \\ T_z= \langle p_z^2 \rangle & = & \frac{\alpha \pi}{2+\alpha} \int_0^{\pi} d \theta \langle E \rangle _{\rm plane} \cos^2 \theta. \label{eq:average_pz2_final} \end{eqnarray} It remains now to calculate $\langle E\rangle_{\rm plane}$ as a function of $\theta$ and $\phi$ after the momentum kick. \subsection{Momentum Kick} We perturb the Maxwell-Boltzmann distribution in a potential given by (\ref{eq:general_isotropic_potential}) at $t=0$ by applying a momentum kick $q_z$ along the $z$-direction. The resulting initial distribution at temperature $k_BT_0=1$ is: \begin{eqnarray} \label{eq:momentum_kick_spherical_distribution} \lefteqn{f(\mathbf{r},\mathbf{p},t=0) =}\nonumber \\ && A \exp \left( - \frac{p_x^2 + p_y^2 + (p_z - q_z)^2}{2} - r^\alpha \right) \end{eqnarray} where \begin{equation} A = \frac{3}{8 \sqrt{2} \pi^{5/2}\Gamma \left( \frac{3+\alpha}{\alpha} \right)}.
\end{equation} If we transform (\ref{eq:momentum_kick_spherical_distribution}) using (\ref{eq:polar_plane_transformation}), we get: \begin{eqnarray} \label{eq:momentum_kick_spherical_distribution_polar_plane} \lefteqn{f = A \exp \left( - \frac{q_z^2}{2 } \right) } \nonumber\\ &&\exp \left( - r^{\alpha} \right) \exp \left( - \frac{p_r^2 - 2 p_r q_z \cos \theta \sin \alpha_p}{2} \right). \end{eqnarray} Using (\ref{eq:momentum_kick_spherical_distribution_polar_plane}), we can calculate $\langle V \rangle _{\rm plane}=\langle r^\alpha\rangle$, $\langle K \rangle _{\rm plane}=\langle p_r^2\rangle/2$, and finally $\langle E \rangle _{\rm plane}= \langle V \rangle _{\rm plane} + \langle K \rangle _{\rm plane}$ as follows: \begin{eqnarray} \!\!\!\!\!\langle V \rangle _{\rm plane} (t=0) & &= \frac{3 |\cos \theta|}{ \alpha \pi} {\rm e}^{ - \frac{q_z^2}{2}} {\rm e}^{ \frac{q_z^2 \cos^2 \theta}{4} } \nonumber \\ & & \times \left[ I_1 \left( \frac{q_z^2 \cos^2 \theta}{4} \right) \frac{q_z^2 \cos^2 \theta}{4}\right.\nonumber \\ & &\left.+ I_0 \left( \frac{q_z^2 \cos^2 \theta}{4} \right) \left(\frac{1}{2} + \frac{q_z^2 \cos^2 \theta}{4}\right) \right], \label{eq:plane_pe_momentum_kick} \end{eqnarray} \begin{eqnarray} && \langle K \rangle _{\rm plane} (t=0) = \frac{|\cos \theta|}{8 \pi} \exp \left( - \frac{q_z^2}{2} \right) \exp \left( \frac{q_z^2 \cos^2 \theta}{4} \right) \nonumber \\ & & \times \left[ q_z^2 I_1 \left( \frac{q_z^2 \cos^2 \theta}{4} \right) \cos^2 \theta (4+ q_z^2 \cos^2 \theta)\right. \nonumber \\ & & \left.+ I_0 \left( \frac{q_z^2 \cos^2 \theta}{4} \right) (6+ 6 q_z^2 \cos^2 \theta + q_z^4 \cos^4 \theta) \right], \label{eq:plane_ke_momentum_kick} \end{eqnarray} where $I_0$ and $I_1$ are modified Bessel functions of the first kind. Since $\langle E \rangle _{\rm plane}$ does not change with time, we can use this to obtain the $\langle p_i^2\rangle$ at $t\rightarrow\infty$ via Eqs.~(\ref{eq:average_pxy2_final},\ref{eq:average_pz2_final}).
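As a consistency check of \eqref{eq:plane_pe_momentum_kick} and \eqref{eq:plane_ke_momentum_kick}, the plane energies must integrate back to the conserved totals of the kicked gas at $t=0$: for $\alpha=1$, $\pi\int_0^\pi \langle V\rangle_{\rm plane}\, d\theta = 3$ and $\pi\int_0^\pi \langle K\rangle_{\rm plane}\, d\theta = 3/2 + q_z^2/2$. A short numerical verification of our own, using SciPy's modified Bessel functions:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import i0, i1

def plane_energies(theta, q):
    # Eqs. (plane_pe/ke_momentum_kick) at alpha = 1, with u = q^2 cos^2(theta)/4
    c = np.abs(np.cos(theta))
    u = (q * c) ** 2 / 4.0
    pref = np.exp(-q * q / 2.0) * np.exp(u)
    V = 3.0 * c / np.pi * pref * (i1(u) * u + i0(u) * (0.5 + u))
    K = c / (8.0 * np.pi) * pref * (
        q * q * i1(u) * c * c * (4.0 + q * q * c * c)
        + i0(u) * (6.0 + 6.0 * q * q * c * c + (q * c) ** 4)
    )
    return V, K

theta = np.linspace(0.0, np.pi, 200001)

def totals(q):
    V, K = plane_energies(theta, q)
    return np.pi * trapezoid(V, theta), np.pi * trapezoid(K, theta)

V0, K0 = totals(0.0)   # expected: 3 and 3/2 (the unkicked equilibrium values)
```

The potential-energy total stays at 3 for any kick, since the kick changes only the momenta.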
For $\alpha=1$ the resulting expressions read \begin{eqnarray} \langle p_x^2 \rangle &=& \frac{q_z^2}{12} + \frac{5}{6} + \frac{1}{2q_z^2} - \frac{\sqrt{2}}{2q_z^3} F \left( \frac{q_z}{\sqrt{2}} \right),\label{eq:average_pxy2_analytical}\\ \langle p_z^2 \rangle &=& \frac{q_z^2}{6} + \frac{4}{3} - \frac{1}{q_z^2} + \frac{ \sqrt{2}}{q_z^3} F \left( \frac{q_z}{\sqrt{2}} \right),\label{eq:average_pz2_analytical} \end{eqnarray} where $F$ is the Dawson function. In Fig.~\ref{Figure:spherical_theory_vs_simulation} we show the excellent agreement of the simulations with these analytical predictions. \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{fig5.pdf} \caption{Comparison of the simulation results for $T_i=\langle p_i^2 \rangle_{t\rightarrow\infty}$ ($i=x,y,z$) with the analytical predictions (\ref{eq:average_pxy2_analytical}) and (\ref{eq:average_pz2_analytical}) for an isotropic potential (\ref{eq:general_isotropic_potential}) with $\alpha=1$ for different kick strengths $q_z$ along the $z$-direction. Note that the predicted $\langle p_x^2 \rangle_\infty$ and $\langle p_y^2 \rangle_\infty$ are identical.} \label{Figure:spherical_theory_vs_simulation} \end{figure} For a small momentum kick, we can find some illuminating expressions. Expanding $\langle E \rangle _{\rm plane}$ about $q_z = 0$ up to $\mathcal{O}(q_z^2)$ we obtain from (\ref{eq:average_pxy2_final}) and (\ref{eq:average_pz2_final}) \begin{eqnarray} T_{x,y} & \approx & 1 + \frac{5 \alpha - 2}{20 (2 + \alpha)} q_z^2 \label{eq:px2_approx} \\ T_z & \approx & 1 + \frac{2 + 5 \alpha}{10 (2 + \alpha)} q_z^2. \label{eq:pz2_approx} \end{eqnarray} For the case $\alpha=1$ we find \begin{eqnarray} T_{x,y} & \approx & 1 + \frac{1}{20} q_z^2 \Rightarrow \Delta T_{x,y}=\frac{1}{10} \Delta E, \label{eq:sphericaltemperatures1} \\ T_z & \approx & 1 + \frac{7}{30} q_z^2 \Rightarrow \Delta T_{z}=\frac{7}{15} \Delta E.
\label{eq:sphericaltemperatures2} \end{eqnarray} which satisfy the virial theorem \eqref{eq:VT}. Comparing with the quadrupole experiment (point 3. above), where $\Delta T_{x,y}=0$ and $\Delta T_z = 2/3 \Delta E$, we see that the spherical case leads to a small but nonzero heating in the $xy$ plane. In terms of the matrix $\Theta_{ij}$ from \eqref{eq:Theta}, for a spherically symmetric case we can show that \begin{equation} \Theta_{ij} = \begin{pmatrix} \theta_1 & \theta_2 & \theta_2 \\ \theta_2 & \theta_1 & \theta_2 \\ \theta_2 & \theta_2 & \theta_1 \end{pmatrix} \label{eq:theta_matrix_spherical} \end{equation} so that, e.g., a kick along $x$ gives $\Delta T_x=\theta_1 q^2_x/2$ while a kick along $y$ gives $\Delta T_x=\theta_2 q^2_y/2$. As before, using the sum rule, we find that $\theta_1+2\theta_2=2\alpha/(2+\alpha)$, so that the matrix depends only on a single unknown parameter. Then (\ref{eq:px2_approx}) and (\ref{eq:pz2_approx}) imply that \begin{equation} \theta_1=\frac{2 + 5 \alpha}{5 (2 + \alpha)} \mbox{ and } \theta_2=\frac{5 \alpha - 2}{10 (2 + \alpha)} \end{equation} which satisfy the sum rule \eqref{eq:sum_rule}. For the case $\alpha=1$ (\ref{eq:spherical_potential}) we get $\theta_1=7/15$ and $\theta_2=1/10$. \subsection{Heating and cooling of transverse directions} These results allow us to answer an interesting question: if we kick the gas along a given direction, do the transverse directions heat or cool? For an interacting gas, we know collisions will distribute the energy along all directions, hence the transverse directions will be heated by the same amount as the kicked direction. For an ideal gas in e.g.\ a harmonic potential, the transverse directions will not be affected.
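The relations above can be checked numerically. The sketch below (our own) evaluates the closed forms \eqref{eq:average_pxy2_analytical} and \eqref{eq:average_pz2_analytical} for $\alpha=1$ with SciPy's Dawson function, verifies the virial-theorem sum $2\langle p_x^2\rangle + \langle p_z^2\rangle = 3 + q_z^2/3$ (which follows from \eqref{eq:virial_theorem_power_potentials} with total energy $9/2+q_z^2/2$), and checks the small-kick coefficients $\theta_{1,2}$, including the sign change of $\theta_2$ at $\alpha=2/5$:

```python
import math
from fractions import Fraction
from scipy.special import dawsn

def px2(q):
    # Eq. (average_pxy2_analytical), alpha = 1
    return (q * q / 12 + 5.0 / 6 + 1.0 / (2 * q * q)
            - math.sqrt(2) / (2 * q**3) * dawsn(q / math.sqrt(2)))

def pz2(q):
    # Eq. (average_pz2_analytical), alpha = 1
    return (q * q / 6 + 4.0 / 3 - 1.0 / (q * q)
            + math.sqrt(2) / q**3 * dawsn(q / math.sqrt(2)))

def theta12(alpha):
    # small-kick matrix elements of the spherically symmetric Theta matrix
    a = Fraction(alpha)
    return (2 + 5 * a) / (5 * (2 + a)), (5 * a - 2) / (10 * (2 + a))
```

For small $q_z$ the closed forms reduce to $1+q_z^2/20$ and $1+7q_z^2/30$, i.e. $\theta_1=7/15$ and $\theta_2=1/10$ at $\alpha=1$.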
Using (\ref{eq:px2_approx}) and (\ref{eq:pz2_approx}) we see that, for a noninteracting gas in a spherical potential of the form (\ref{eq:general_isotropic_potential}), we can have different types of behavior (up to $\mathcal{O}(q_z^2)$) for the transverse directions: \begin{itemize} \item for $\alpha < \frac{2}{5}$: cooling; \item for $\alpha = \frac{2}{5}$: no change; \item for $\alpha > \frac{2}{5}$: heating. \end{itemize} This surprising result tells us that it is possible in some cases to {\it cool} the gas along some directions while heating it up along others. In fact, as we will see later, the quadrupole potential is of this type: it cools along the $x$ and $y$ directions if kicked along $z$. Nevertheless, the spherical potential with $\alpha=1$, which most closely resembles it, behaves more conventionally: the transverse directions heat up. \subsection{Population redistribution due to kick} We would like to gain some insight into why the final momentum widths are different, $\langle p_x^2 \rangle = \langle p_y^2 \rangle \ne \langle p_z^2 \rangle$, for $q_z \ne 0$. We can rewrite \eqref{eq:average_pxy2_final} using the fact that the total energy of the gas, $E_{\rm total}(q=0)+\Delta E$ with $\Delta E$ given by \eqref{eq:deltaE}, can be expressed as the sum of the plane energies: \begin{equation} E_{\rm total}(q=0)+\frac{q_z^2}{2}=\int_0^\pi d \phi \int_0^{\pi} d \theta \langle E \rangle _{\rm plane}= \pi \int_0^{\pi} d \theta \langle E \rangle _{\rm plane}. \end{equation} The term $E_{\rm total}(q=0)$ can be easily found from the $q_z=0$ limit of \eqref{eq:plane_pe_momentum_kick} and \eqref{eq:plane_ke_momentum_kick}.
It follows that: \begin{eqnarray} \langle p_{x,y}^2 \rangle &= & \frac{ \alpha \pi}{2(2+\alpha)} \int_0^{\pi} d \theta \langle E \rangle _{\rm plane} (1 + \sin^2 \theta) \nonumber \\ &=&\frac{ \alpha}{2(2+\alpha)} \left( \frac{q_z^2}{2 } +\frac{6 + 3 \alpha}{2 \alpha } +\pi \int_0^{\pi} d \theta \langle E \rangle _{\rm plane} \sin^2 \theta \right) \nonumber \\ & \stackrel{\alpha=1}{=}& \frac{1}{6} \left( \frac{q_z^2}{2 } + \frac{9}{2} \right) + \frac{ \pi}{6} \int_0^{\pi} d \theta \sin^2 \theta \langle E \rangle _{\rm plane}. \label{eq:average_px2_separate_a_1} \end{eqnarray} We can study how each of the terms in $\langle p_x^2 \rangle$ varies with $q_z$. In Fig.~\ref{Figure:momentum_width_terms_vs_qz}, we can see that the contribution of the integral term of \eqref{eq:average_px2_separate_a_1} is small compared to the $q_z^2$ term and becomes less important as $q_z$ increases. To understand why the integral term becomes small, we can investigate how $\langle E \rangle _{\rm plane}$ changes as a function of $\theta$ for different values of $q_z$. From Fig.~\ref{Figure:energy_plane_vs_theta_momentum_kick}, we can see that the value of $\langle E \rangle _{\rm plane}$ near $\theta=0$ and $\theta=\pi$ increases with increasing $q_z$ and the opposite happens near $\theta = \pi / 2$. As the integrand multiplies this factor by $\sin^2 \theta$, which is $0$ at $\theta = 0, \pi$ and peaks at $\theta = \pi/2$, the integral will decrease as $q_z$ increases.
\begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{fig6.pdf} \caption{Comparing the different terms of $\langle p_x^2 \rangle$ in \eqref{eq:average_px2_separate_a_1} with $\langle p_z^2 \rangle$ \eqref{eq:average_pz2_analytical} for different values of momentum kick $q_z$ with $m=k_B T=1$.} \label{Figure:momentum_width_terms_vs_qz} \vspace{0.4cm} \includegraphics[width=0.9\linewidth]{fig7.pdf} \caption{Using Eqs.~\eqref{eq:plane_pe_momentum_kick} and \eqref{eq:plane_ke_momentum_kick} to plot $\langle E \rangle _{\rm plane} (\theta)$ for different values of momentum kick $q_z$.} \label{Figure:energy_plane_vs_theta_momentum_kick} \vspace{0.4cm} \includegraphics[width=0.9\linewidth]{fig8.pdf} \caption{Using Eqs.~\eqref{eq:plane_pe_momentum_kick} and \eqref{eq:plane_ke_momentum_kick} to plot $\langle E \rangle _{\rm plane} (\theta)/|\cos \theta|$ for different values of momentum kick $q_z$.} \label{Figure:energy_plane_over_cos_vs_theta_momentum_kick} \end{figure} To make this even clearer, it is useful to plot not $\langle E \rangle_{\rm plane}$ but $\langle E \rangle_{\rm plane}/|\cos \theta|$, which removes the effect of the Jacobian \eqref{eq:jacobian_cartesian_plane}; the Jacobian simply accounts for the variation of the density of planes as a function of $\theta$, leaving us with the change in plane energy as a result of the kick. From Fig.~\ref{Figure:energy_plane_over_cos_vs_theta_momentum_kick}, we can see that when there is no momentum kick, the energy of all the planes is the same. When we apply a momentum kick along the $z$-axis, planes lying along that direction ($\theta = 0$ or $\pi$) gain energy whereas planes close to $\theta = \pi /2$ lose it. This means that, when we project the energy of each plane to obtain the momentum widths, $\langle p_z^2 \rangle > \langle p_x^2 \rangle$.
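This redistribution can be made quantitative directly from Eqs.~\eqref{eq:plane_pe_momentum_kick} and \eqref{eq:plane_ke_momentum_kick}: writing $\langle E\rangle_{\rm plane}/|\cos\theta| = g(\cos^2\theta, q_z)$ for $\alpha=1$, one finds that $g$ grows with $q_z$ at $\theta=0,\pi$ while at $\theta=\pi/2$ it decays exactly as $e^{-q_z^2/2}$. A small check of our own:

```python
import math
from scipy.special import i0, i1

def g(c2, q):
    # <E>_plane / |cos(theta)| as a function of c2 = cos^2(theta), alpha = 1
    u = q * q * c2 / 4.0
    pref = math.exp(-q * q / 2.0) * math.exp(u)
    V = 3.0 / math.pi * pref * (i1(u) * u + i0(u) * (0.5 + u))
    K = pref / (8.0 * math.pi) * (
        q * q * i1(u) * c2 * (4.0 + q * q * c2)
        + i0(u) * (6.0 + 6.0 * q * q * c2 + q**4 * c2 * c2)
    )
    return V + K

kicked = g(1.0, 2.0)        # plane containing the kick direction: gains energy
transverse = g(0.0, 2.0)    # theta = pi/2 plane: loses energy as exp(-q^2/2)
```

At $q_z=0$ all planes share the same value $g = 9/(4\pi)$, recovering the flat curve of Fig.~\ref{Figure:energy_plane_over_cos_vs_theta_momentum_kick}.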
We can also see that as $q_z \rightarrow \infty$, $\langle E \rangle _{\rm plane} / |\cos \theta|$ is only non-zero at $\theta = 0$ and $\pi$, which explains the momentum widths ratio constraint derived in \eqref{eq:pz2_px2_ratio_constraint} (note that Fig.~\ref{Figure:momentum_widths_ratio} agrees with the ratio constraint). \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{fig9.pdf} \caption{Using Eqs.~\eqref{eq:plane_pe_momentum_kick} and \eqref{eq:plane_ke_momentum_kick} to plot the ratio between $\langle p_z^2 \rangle$ and $\langle p_x^2 \rangle$ for different values of momentum kick $q_z$.} \label{Figure:momentum_widths_ratio} \end{figure} \subsection{Memory loss in isotropic potentials} A natural question arising from the study of this section is whether a gas can remember the direction in which it was kicked after a long time has passed. For example, we could start with a gas in thermal equilibrium in an isotropic potential, i.e.\ a spherically (3D) or circularly (2D) symmetric potential, apply a momentum kick along an arbitrary direction and wait for a very long time. Is the final gas distribution anisotropic? That is, does it preserve a memory of the direction of the kick? In a collisional gas, the extra energy from the momentum kick is redistributed along all directions equally, leading to isotropic heating and therefore a loss of memory. A non-interacting gas in a harmonic oscillator preserves this memory because its center of mass oscillates along the kick direction indefinitely. However, quite surprisingly, a non-interacting gas in a non-separable potential can also preserve it due to the existence of integrals of motion which encode the direction. For example, in a 3D spherical potential the memory is associated with the three components of angular momentum $L_{x,y,z}$ being integrals of the motion, as we have seen. An interesting question is: can there be memory loss with no interactions and a non-separable potential?
Unexpectedly, the answer is yes: for example, a gas in a 2D circularly symmetric potential has $\langle p_x^2 \rangle = \langle p_y^2 \rangle$ due to Bertrand's theorem, so memory is lost (excluding the harmonic and Kepler potentials). There is only a single component of angular momentum, so the direction cannot be encoded in the integrals of the motion. After the kick, the extra energy is redistributed to all directions: the orbital density becomes isotropic as $t \rightarrow \infty$, which leads to the loss of memory. This macroscopic loss of information is due to ergodicity of the individual trajectories rather than to collisions. Of course, microscopically the memory is preserved since, if we reversed the momenta of all atoms at the same time, we could recover the initial kicked distribution. \subsection{First order transition due to breaking of the potential's spherical symmetry} As we have seen, if we start with an isotropic equilibrium thermal distribution in a spherical trap ($\epsilon=0$) and we kick the gas along the $z$ direction then, when $t \rightarrow \infty$, we find that $T_x=T_y \neq T_z$. Likewise, by spherical symmetry, kicking along the $x$ direction will lead to the temperatures along the perpendicular directions being equal ($T_y=T_z \neq T_x$, see Fig.~\ref{fig:aniso}). However, this is in seeming contradiction with the experimental results for the quadrupole case ($\epsilon=3$), see point 2. above and Fig.~\ref{fig:simul1}, where a kick along the $x$ direction leads to $T_x=T_y$. It seems that breaking the spherical symmetry by setting $\epsilon>0$, which makes the $z$ direction inequivalent, enforces a cylindrical symmetry of the steady state gas distribution along the perpendicular directions after the kick. This discrepancy in behavior indicates a discontinuous (first order) transition in gas behavior as a function of $\epsilon$ when going from spherical to non-spherical potentials.
To study this better we plot the three final temperatures after a kick along $x$ as a function of $\epsilon$ near $\epsilon=0$ (Fig.~\ref{fig:aniso}). We see that at $\epsilon=0$, $T_y=T_z < T_x$, as expected. However, for values of $\epsilon$ immediately above that, we find that $T_x=T_y>T_z$, the behavior of the quadrupole trap. In other words: \begin{equation} \lim_{\epsilon \rightarrow 0} \lim_{t \rightarrow \infty} \langle p_x^2-p_y^2 \rangle_t \neq \lim_{t \rightarrow \infty} \lim_{\epsilon \rightarrow 0} \langle p_x^2-p_y^2 \rangle_t, \end{equation} the lhs being zero and the rhs not. We will see that the reason for this is that $\langle p_x^2-p_y^2 \rangle_t $ relaxes to zero with a relaxation or dephasing time scale $\tau$ which diverges as $\epsilon \rightarrow 0$. \begin{figure} \includegraphics[width=0.9\linewidth]{fig10.pdf} \caption{Behavior of the final temperatures $\Delta T_i/\Delta E$ as a function of the anisotropy $\epsilon$ near the spherical limit after a kick along $x$. At $\epsilon=0$, $T_z=T_y$, after which there is a discontinuous change in the temperatures due to the breaking of spherical symmetry along $z$. Data are obtained by numerical simulation over 100 000 atoms. Dashed, dotted and solid lines correspond to the theoretical expectations of the fully isotropic (\ref{eq:sphericaltemperatures1}, \ref{eq:sphericaltemperatures2}), almost isotropic (\ref{eq:pz2_approx} and \ref{eq:finalsphericaltemperatures2}) and quadrupole geometries, respectively. } \label{fig:aniso} \end{figure} There is a characteristic relaxation time $\widetilde{\tau}$ before the momentum widths reach their final steady-state values, during which the orbits of atoms with different angular momenta and energies gradually dephase within each plane. This timescale is related to the width of the thermal distribution and does not depend on $\epsilon$ as $\epsilon \rightarrow 0$. From dimensional analysis we see that $\widetilde{\tau} \sim \sqrt{T_0} \sim 1$.
However, there is a second, much longer characteristic relaxation time $\tau$ during which $T_x$ and $T_y$ converge to each other and which was not present in the perfectly spherical case. This timescale appears because of the rotation (precession) of the orbital plane of each atom around the $z$ axis and is due to the potential's anisotropy. This phenomenon is known in astronomy when studying the orbit of satellites around slightly non-spherical planets, where it is called nodal precession \cite{Goldstein2002}. For sufficiently small $\epsilon$ and at long times $t \gg \tau$, we expect that $\langle p_x^2-p_y^2 \rangle_t $ will decay as some function of $ t/\tau $, where the decay time scale is given by \begin{equation} \tau \sim \epsilon^\nu \label{eq:tau}. \end{equation} The value of $\nu$ is independent of the kick strength if it is weak enough, and a prefactor $\sqrt{T_0}$ sets the dimensions of $\tau$. We show in Appendix \ref{sec:dephasing} that $\nu=-1$ so that $\tau \sim 1/\epsilon$; this is confirmed in Fig.~\ref{fig:time}. While Bertrand's equilibrium leads to a higher temperature along the kicked direction, the orbital precession redistributes the energy equally between the $x$- and $y$-axes, leading eventually to the equilibration of $T_x$ and $T_y$. The first process takes place in about 40 time units, while the latter process is much slower when the anisotropy is small, as shown in Fig.~\ref{fig:time}. \begin{figure} \includegraphics[width=0.9\linewidth]{fig11.pdf} \caption{Equilibration time as a function of the trap anisotropy $\epsilon$. The equilibration time is defined as the time required for $T_y$ to reach 99\% of the steady $T_x$ value. The solid line is a fit $A\times \epsilon^{-1}$ following the expression (\ref{eq:tau}).
The best fit leads to an R-squared value of 0.99.} \label{fig:time} \end{figure} This analysis leads to quantitative predictions for the final temperatures at small anisotropy, in particular for the temperature discontinuities. For $\epsilon \gtrsim 0$, the imparted energy is first redistributed within each plane, before the orbital precession slowly equilibrates the temperatures, so that we can express the final temperatures in terms of the spherical temperatures \eqref{eq:sphericaltemperatures1}, \eqref{eq:sphericaltemperatures2}: \begin{align} \Delta T _x ^{\epsilon \rightarrow 0} = \Delta T _y ^{\epsilon \rightarrow 0} &= \frac{1}{2}\left( \Delta T _x ^{\epsilon = 0} + \Delta T _y ^{\epsilon = 0} \right) \\ &= \frac{17}{60} \Delta E \quad \quad {\rm and} \\ \Delta T _z ^{\epsilon \rightarrow 0} & = \Delta T _z ^{\epsilon =0}\\ & = \frac{1}{10} \Delta E. \label{eq:finalsphericaltemperatures2} \end{align} From here we can extract the matrix elements of \eqref{eq:theta_matrix_axial}, since in the above equations $\Delta E=q^2_x/2$: $\theta_1=17/60$ and $\theta_2=1/10$, which means that $\theta_3=7/15$. This prediction is in very good agreement with the results presented in Fig.~\ref{fig:aniso} and is valid near $\epsilon=0$ as long as $\tau \gg \widetilde{\tau}$. \section{Pancake limit} \label{sec:pancake} In the previous section we analyzed the spherically symmetric case, which could be solved analytically. There is another case where the motion can be solved analytically, namely the limit in which the confinement along the $z$-direction is strong $(\epsilon \gg 1)$. As we will see, this case exhibits behavior which is much closer to that of the quadrupole trap.
We consider the case of strong confinement of the gas along the $z$ direction of the potential \eqref{eq:general_potential_with_epsilon} with $\epsilon \gg 1$ so that \begin{equation} V({\bf r}) = \sqrt{x^2 + y^2 + (1+\epsilon) z^2} \simeq \sqrt{\rho^2+\epsilon z^2}, \label{eq:pancakepot} \end{equation} where we have used the cylindrical coordinate $\rho=\sqrt{x^2+y^2}$. Since the confinement is tight, motion along the $z$ direction is fast compared to that in the plane. To check this, we compare the characteristic timescales of the two motions: $t_z$, the period of oscillation along $z$, and $t_\rho$, the period of oscillation along $\rho$. An atom whose motion is along the $x$-axis experiences a potential $V=|x|$, whereas if the atom moves along the $z$-axis, it sees a potential $V=\sqrt{\epsilon}\, |z|$. Assuming that both of these atoms have the same thermal energy $E \sim T$, in the first case the period of oscillation is $ \propto \sqrt{E}$ whereas in the second case it is $ \propto \sqrt{E/\epsilon}$, so that the ratio of the two periods is $t_\rho/t_z \sim \sqrt{\epsilon} \gg 1$ and the approximation of considering the motion along $z$ to be fast compared with that in the plane is justified. We start by analyzing the motion of a single atom. We split up the energy as \begin{equation} E=\frac{p^2_\rho}{2}+\frac{p_\phi^2}{2\rho^2}+E_z, \label{eq:totalenergy} \end{equation} where $p_\rho$ and $p_\phi$ are the canonically conjugate momenta and $\phi=\arctan(y/x)$ is the angle with the $x$-axis in the $xy$ plane. $E$ and $p_\phi$ are constants of the motion (the latter being the angular momentum $L_z$, which is always conserved because the potential is independent of $\phi$). Also, \begin{equation} E_z=\frac{p^2_z}{2}+\sqrt{\rho^2+\epsilon z^2}. \label{eq:energyz} \end{equation} We now replace $p_z,z$ with the action-angle variables $I$, $\eta$ in the usual way.
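The $\sqrt{\epsilon}$ scaling of the period ratio can be verified with a one-dimensional simulation of our own: an atom of energy $E$ in $V=k|x|$ oscillates with period $4\sqrt{2E}/k$, so taking $k=1$ for the in-plane motion and $k=\sqrt{\epsilon}$ for the axial one gives $t_\rho/t_z = \sqrt{\epsilon}$.

```python
import math

def period(E, k, dt=1e-4):
    # oscillation period in V = k|x| (m = 1), measured by timing successive
    # upward crossings of x = 0; leapfrog is exact between the kinks
    x, p, t = 0.0, math.sqrt(2.0 * E), 0.0
    prev = x
    for _ in range(2_000_000):
        sgn = (x > 0) - (x < 0)
        p -= 0.5 * dt * k * sgn
        x += dt * p
        sgn = (x > 0) - (x < 0)
        p -= 0.5 * dt * k * sgn
        t += dt
        if prev < 0.0 <= x and p > 0.0:
            return t - dt * x / (x - prev)   # linear interpolation to x = 0
        prev = x
    raise RuntimeError("no full oscillation found")

eps = 4.0
ratio = period(1.0, 1.0) / period(1.0, math.sqrt(eps))   # expected: sqrt(eps)
```

The separation of timescales, and hence the adiabatic treatment below, therefore improves as $\epsilon$ grows.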
In particular \begin{eqnarray} I&=&\oint p_z \frac{dz}{2 \pi}= 4\int_0^{z_{\rm{max}}} p_z \frac{dz}{2 \pi} \nonumber \\ &=& \frac{2 \sqrt{2}}{\pi} \int^{\sqrt{\frac{E_z^2-\rho^2}{\epsilon}}}_0 \sqrt{E_z-\sqrt{\rho^2+\epsilon z^2}} dz \nonumber \\ & = & \frac{2 \sqrt{2}}{\pi \sqrt{\epsilon}} I_0 \label{eq:I} \end{eqnarray} with the definition \begin{equation} I_0 \equiv \int^{\sqrt{E_z^2-\rho^2}}_0 \sqrt{E_z-\sqrt{\rho^2+z^{\prime 2}}} dz^\prime \label{eq:I_0} \end{equation} where we made the substitution $z^\prime=\sqrt{\epsilon} z$ to show that $I \propto 1/\sqrt{\epsilon}$, since $I_0$ does not depend on $\epsilon$. Note that, for the same reason, in \eqref{eq:I_0} $E_z$ depends implicitly only on $I_0$ and $\rho$ but not on $\epsilon$. The trajectory $p_z(z)$ is determined by \eqref{eq:energyz} and therefore depends on $E_z$ and $\rho$. Also, since $\rho$ is slowly varying, by the standard arguments, $I$ (or $I_0$) can be considered an adiabatic invariant (i.e.\ another constant) for the motion in the plane. A simple approximate solution to \eqref{eq:I_0} which allows us to express $E_z$ explicitly in terms of $I_0$ and $\rho$ is \beq E_z(I_0,\rho)=\left(\frac{3}{2} I_0 +\rho^{3/2} \right)^{2/3}, \label{eq:I2} \eeq which allows us to rewrite \eqref{eq:totalenergy} as \beq E=\frac{p_\rho^2}{2}+\frac{p_\phi^2}{2\rho^2}+\left(\frac{3}{2}I_0 +\rho^{3/2} \right)^{2/3} \label{eq:totalenergy2} \eeq and we see that the effective potential for the radial motion is a sum of the centrifugal term plus a confining term increasing linearly at large distances. Since we had originally three degrees of freedom, a particular trajectory is completely determined by the three integrals of the motion $E$, $p_\phi$ and $I_0$. Therefore, the time averaged in-plane kinetic energy \begin{equation} \overline{ \frac{p^2_\rho}{2}+\frac{p_\phi^2}{2\rho^2}} \label{eq:kinetic_energy_in_plane} \end{equation} is also determined by these constants. 
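As a quick numerical sanity check (ours, not part of the original analysis), one can evaluate the integral \eqref{eq:I_0} directly and compare it with the approximate inversion \eqref{eq:I2}; a minimal Python sketch with illustrative parameter values:

```python
import math

def I0_numeric(E_z, rho, n=200_000):
    """I_0 = integral_0^{z_max} sqrt(E_z - sqrt(rho^2 + z^2)) dz, Eq. (eq:I_0),
    by the midpoint rule (the integrable endpoint singularity is harmless here)."""
    z_max = math.sqrt(E_z**2 - rho**2)
    h = z_max / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        total += math.sqrt(E_z - math.sqrt(rho**2 + z * z))
    return total * h

def E_z_ansatz(I0, rho):
    """Approximate inversion, Eq. (eq:I2): E_z = ((3/2) I_0 + rho^{3/2})^{2/3}."""
    return (1.5 * I0 + rho**1.5) ** (2.0 / 3.0)

# At rho = 0 the inversion is exact, since there I_0 = (2/3) E_z^{3/2}.
I0 = I0_numeric(2.0, 0.0)
assert abs(E_z_ansatz(I0, 0.0) - 2.0) < 1e-3

# At finite rho the ansatz is approximate, but accurate to a few percent.
I0 = I0_numeric(2.0, 1.0)
assert abs(E_z_ansatz(I0, 1.0) - 2.0) / 2.0 < 0.05
```

The ansatz also reproduces the harmonic limit at large $\rho$, where $E_z \approx \rho + I_0/\sqrt{\rho}$, which is consistent with its use as an effective potential for the in-plane motion.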
It is now clear that the averaged kinetic energy is only a function of the constants of the motion $E$, $p_\phi$, and $I_0$ for that orbit. The adiabatic principle tells us that the atom executes a motion in the plane under the effective potential $E_z$ given by \eqref{eq:I2}. Since $I_0$ is not the same for all atoms, each atom moves in slightly different potentials, labelled by their value of $I_0$. When we apply a kick to an atom along $z$ at a time $t_0$, its in-plane momenta $p_\phi$, $p_\rho$ and its position $\rho$, $z$ are unchanged. What changes instead is its momentum $p_z$ and therefore its corresponding kinetic energy $p_z^2/2 \rightarrow (p_z+q)^2/2=p_z^2/2 + p_z q + q^2/2$. After averaging over the whole gas, the term $p_z q$ drops out so that only the third term remains, an average increase of kinetic energy of $q^2/2$ per atom (and so of $E_z(\rho_0)$ as we see from \eqref{eq:energyz}). This has two effects: it changes the effective potential \eqref{eq:I2} and it increases the total energy $E$. Since $I_0$ is an increasing function of $E_z$, an increase of the kinetic energy along $z$ at $t_0$ implies an instantaneous change $I_0 \rightarrow I_0 + \Delta I_0$, $\Delta I_0>0$. In the subsequent motion, the effective potential is changed $E_z(I_0,\rho) \rightarrow E_z(I_0+\Delta I_0,\rho)$, transforming it into a shallower effective 2D potential as we can see from \eqref{eq:I2}. On the other hand, the increased kinetic energy also means an increased total energy $E \rightarrow E+\Delta E$: \beq \Delta E=E_z(I_0+ \Delta I_0,\rho_0)-E_z(I_0,\rho_0). \eeq The first effect leads naturally to a reduction in speed in the plane, i.e.\ a reduction in the average in-plane kinetic energy \eqref{eq:kinetic_energy_in_plane}. However, the increase $\Delta E$ has the opposite effect, that of increasing the average kinetic energy. 
This latter effect is dominant for atoms which were near the bottom of the potential at the moment of the kick, whereas the reduction in the average kinetic energy is most felt by those which were away from the center. To determine what happens to the gas as a whole, we use numerical simulations. We compare the results for the pancake case with a very large $\epsilon$ after a kick along $z$ to those for a 2D gas in the effective potential \eqref{eq:I2}, with the same number of atoms, temperature, and kick momentum $q$. \begin{figure} \centering \begin{overpic}[width=0.9\linewidth]{fig12_1.pdf} \put(1,73){\rm\bfseries a)} \end{overpic} \begin{overpic}[width=0.9\linewidth]{fig12_2.pdf} \put(1,73){\rm\bfseries b)} \end{overpic} \caption{Comparison of the 2D potential given by \eqref{eq:I2} and the pancake potential. Here $\epsilon=100$, but larger $\epsilon$ values produced consistent results. The plots are for $\langle p_x^2\rangle$ as a function of $q_z^2$ (top panel), and as a function of $q_x^2$ (bottom panel). Solid and dashed lines represent guides to the eye. Note that the two potentials give almost identical results for the heating along the kick direction and a small discrepancy for the orthogonal cooling.} \label{fig:pancake_2D} \end{figure} For the simulation of the 2D gas, we use the same initialization of the system as for the regular pancake case, namely the kicked Boltzmann distribution with potential \eqref{eq:pancakepot}. Then for each individual atom we evaluate $I_0$ via Eq.~\eqref{eq:I2}, where $E_z$ and $\rho$ are determined by the initial position and momentum coordinates of the atom. The subsequent time-evolution of each atom is governed by Eq.~\eqref{eq:totalenergy2}, where the last term corresponds to the new effective potential \eqref{eq:I2} which replaces the regular pancake potential \eqref{eq:pancakepot} ($I_0$ is assumed constant for each atom during the time-evolution).
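The per-atom propagation step just described can be sketched as follows. This is a minimal single-atom illustration (with made-up initial conditions, and leapfrog chosen as one convenient symplectic integrator), not the actual simulation code:

```python
import math

def accel(rho, p_phi, I0):
    """Radial acceleration from Eq. (eq:totalenergy2): centrifugal term minus
    the gradient of E_z(I_0, rho) = ((3/2) I_0 + rho^{3/2})^{2/3}."""
    dEz_drho = math.sqrt(rho) * (1.5 * I0 + rho**1.5) ** (-1.0 / 3.0)
    return p_phi**2 / rho**3 - dEz_drho

def energy(rho, p_rho, p_phi, I0):
    return 0.5 * p_rho**2 + 0.5 * p_phi**2 / rho**2 + (1.5 * I0 + rho**1.5) ** (2.0 / 3.0)

def evolve(rho, p_rho, p_phi, I0, dt=1e-4, steps=200_000):
    """Leapfrog integration of the in-plane radial motion; I_0 is frozen for
    each atom, playing the role of the adiabatic invariant."""
    a = accel(rho, p_phi, I0)
    for _ in range(steps):
        p_rho += 0.5 * dt * a
        rho += dt * p_rho
        a = accel(rho, p_phi, I0)
        p_rho += 0.5 * dt * a
    return rho, p_rho

# Illustrative initial conditions (not taken from the text):
rho0, p_rho0, p_phi, I0 = 1.0, 0.3, 0.5, 0.4
E0 = energy(rho0, p_rho0, p_phi, I0)
rho1, p_rho1 = evolve(rho0, p_rho0, p_phi, I0)
# The symplectic integrator conserves the effective 2D energy to high accuracy.
assert abs(energy(rho1, p_rho1, p_phi, I0) - E0) < 1e-6 * E0
```

In the full simulation this single-atom step is simply repeated over the ensemble drawn from the kicked Boltzmann distribution.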
Note that we only evaluate the movement of the gas in the $xy$ plane in this approximation---the position and momentum coordinates in $z$-direction do not appear in the equations. We are also able to use the same method to study the change in average kinetic energy along $z$ due to a kick in the plane along $x$. Our findings are in Fig.~\ref{fig:pancake_2D}. We see that there is excellent agreement between the 3D pancake gas and the 2D case, especially for the heating along the direction of the kick. Although both show cooling of the transverse directions, the agreement is less good there, a fact which we attribute to the inexact ansatz \eqref{eq:I2}. So the physical interpretation of the pancake case is clear: there is a slight overall cooling of the transverse directions when the gas is kicked along the $z$ direction due to the effective potential becoming shallower for the atoms closer to the center of the trap. This effect dominates over the heating of the atoms near the edges of the gas, although not by much so that the overall cooling is very small. \section{Comparison of the potentials} In Fig.~\ref{fig:comparing_potentials} we compare the quadrupole potential with the two others we have discussed, the spherical and the pancake. It is clear that the quadrupole behavior is much closer to that of the pancake so that, in this respect, it seems that the $\epsilon=3$ is already very close to the limit of $\epsilon=\infty$. There is a remarkably good quantitative agreement between the two cases. For example we obtain approximately the same heating and cooling in both the kicked and transverse directions. We find for the pancake $\Theta_{xx}=0.38$ and $\Theta_{zx}=-0.09$ which can be compared with the very similar values for the quadrupole $\Theta_{xx}=0.36$ and $\Theta_{zx}=-0.05$ mentioned in Sec.~\ref{sec:simulations}. 
In Sec.~\ref{sec:simulations} we mentioned two puzzles: one was the anisotropy of the temperatures along the kicked and orthogonal directions in the quadrupole potential. Both the spherical and pancake potentials exhibit this. For the spherical case this is due to a geometric reason, the fact that the kick redistributes the atoms into planes which are more aligned with the direction of the kick. They will subsequently remain there due to the conservation of angular momentum. In the pancake case this is due to strong potential anisotropy coming from the large value of $\epsilon$. This latter reason is responsible also for the anisotropy in the quadrupole potential. Also, in the spherical case we saw that the temperatures of the kicked direction and of the plane orthogonal to it were different. In the quadrupole case we find generally in all simulations that $T_x=T_y \neq T_z$. This was interpreted in terms of the 2D effective description where Bertrand's theorem applies; it leads naturally to the isotropy of the distribution in the $xy$ plane. Clearly the quadrupole gas has this behavior for the same reason. The second puzzle was the apparent (near) separability of the kicked and orthogonal directions, i.e.\ that the kick energy is {\em not} redistributed into the momentum distributions of all directions but rather it is concentrated only in the kick direction. As we see from Fig.~\ref{fig:comparing_potentials}, the spherical and pancake potentials behave very differently: the pancake reproduces closely the quadrupole's separability (in fact a slight cooling of the orthogonal directions) while the spherical potential shows a clear heating of those directions. The reason for the separability in the quadrupole case can be traced to the 2D model where it is due to a near cancellation of the contributions of the atoms which, at the moment of the kick, are close to the center of the trap and are cooled and that of the atoms at the periphery which are heated. 
\begin{figure} \begin{overpic}[width=0.9\linewidth]{fig13_1.pdf} \put(1,73){\rm\bfseries a)} \end{overpic} \begin{overpic}[width=0.9\linewidth]{fig13_2.pdf} \put(1,73){\rm\bfseries b)} \end{overpic} \caption{\label{fig:comparing_potentials} Comparison of the three types of potentials: spherical ($\epsilon=0$), quadrupole ($\epsilon=3$) and pancake ($\epsilon=100$) showing the much better agreement between the quadrupole and the pancake compared with the spherical case. The top panel shows $\langle p_z^2 \rangle$ and the bottom panel $\langle p_x^2 \rangle$ as a function of the momentum kick $q_z^2$ in $z$-direction. The blue solid lines are the analytical predictions (\ref{eq:average_pxy2_analytical}) and (\ref{eq:average_pz2_analytical}) for the spherical potential. Dashed and dotted lines for the other potentials are guides to the eye.} \end{figure} \section{Conclusion} We began this analysis with some puzzling experimental results for a non-interacting classical gas in a quadrupole trap whereby momentum kicks along one spatial direction were found to mostly affect only that direction, despite the fact that the potential is non-separable. By analyzing the extreme case of the spherical potential ($\epsilon=0$) we understood that, in 3D, the constants of the motion (e.g.\ the angular momentum components) can allow the system to retain a memory of the direction of the kick. Consequently, the long time momentum distribution can remain anisotropic in this {\em isotropic} system. However, this effect strongly depends on the dimensionality of the problem, and the situation is completely different in 2D, where Bertrand's theorem leads to an isotropic distribution. Furthermore, as soon as the potential becomes slightly anisotropic ($0<\epsilon \ll 1$), the competition between the in-plane isotropic behavior and the symmetry-induced precession of orbital planes results in a qualitatively different steady-state, which we were able to characterize analytically. 
Finally, we investigated the pancake limit (large $\epsilon$), which was shown numerically to be much closer to the experimental situation (i.e.\ the quadrupole potential). We were able to explain analytically, based on an effective potential, most of its characteristic features, including the peculiar cooling effect experienced by the transverse degrees of freedom with respect to the kick direction. For future analysis we would like to investigate the apparent ``thermalization'' of the gas after the kick, as discussed in Sec.~\ref{sec:thermalisation}, as well as to study in more detail the region between the spherical and pancake limits.
\section{Introduction} The decay $\etp$ is particularly interesting because it is forbidden by isospin symmetry and thus can only happen via isospin breaking due to either strong interactions, \begin{equation} \mathcal{H}_\tn{QCD}(x)=\frac{m_d-m_u}{2}(\bar{d}d-\bar{u}u)(x)\;, \end{equation} which are proportional to the light-quark mass difference $m_d-m_u$, or electromagnetic interactions, \begin{equation} \mathcal{H}_\tn{QED}(x)=-\frac{e^2}{2}\int dy\; D^{\mu\nu}(x-y)T(j_\mu(x)j_\nu(y))\;, \end{equation} which are proportional to the electric charge squared. A well-known low-energy theorem by Sutherland~\cite{sutherland} states that the electromagnetic effects in $\etp$ are small. This decay is therefore very sensitive to $m_d-m_u$ and hence it potentially provides particularly clean access to the determination of quark mass ratios. The systematic machinery that can cope with both effects accurately and leads to many experimentally testable predictions like Dalitz plot parameters, the decay width and quark mass ratios is chiral perturbation theory~(ChPT)~\cite{weinbergchpt,glannphys,glchpt} with the inclusion of electromagnetism~\cite{urech}. In view of new and upcoming high-statistics experiments~\cite{nKLOEnew,CELSIUSWASA,cKLOEnew,WASAatCOSY,unverzagt,prakhov} we reconsider the electromagnetic corrections to these observables. While early calculations of the \textit{strong} amplitude at tree level yielded a decay width which deviates from the experimental value by a factor of a few, Gasser and Leutwyler~(GL)~\cite{gleta} calculated the strong contributions at one-loop level and observed large unitarity corrections due to strong final-state interactions (i.e.\ $\pi\pi$ rescattering), but their result still differs from experiment by a factor of roughly 2. Thus strong corrections beyond one loop were studied using dispersive methods~\cite{kambor,anisovich}, unitarized ChPT~\cite{beisert,borasoy}, and finally with a complete two-loop calculation~\cite{bijnens}.
All of these found considerable enhancement compared to the one-loop calculation. According to Sutherland's theorem, the \textit{electromagnetic} contributions at tree level vanish. Baur, Kambor, and Wyler~(BKW)~\cite{bkw} studied corrections to Sutherland's theorem by evaluating the electromagnetic effects at one-loop level, but they found them to be very small. The motivation for reconsidering electromagnetic corrections at chiral order $p^4$ in the present work hinges on the fact that BKW neglected terms proportional to $e^2\delta$, where $\delta=m_d-m_u$, by arguing that these are of second order in isospin breaking and therefore expected to be suppressed even further. However, by restricting oneself to terms of the form $e^2m_s$ and $e^2\hat{m}$, where $\hat{m}=(m_u+m_d)/2$, one excludes some of the most obvious electromagnetic effects like real- and virtual-photon contributions as well as effects due to the charged-to-neutral pion mass difference $\Delta\mpi=\mpc-\mpn=\ord(e^2)$, both of which scale as $e^2\delta$. These mechanisms fundamentally affect the analytic structure of the amplitudes in question: in the charged decay channel $\etpc$ there is a Coulomb pole at the $\pi^+\pi^-$ threshold $s=4\mpc$, while in the neutral decay channel $\etpn$ the pion mass difference induces a cusp behavior at this threshold due to $\pi^+\pi^-\to\pi^0\pi^0$ rescattering, compare e.g.\ Ref.~\cite{MMS}. In contrast, the corrections identified by BKW are all polynomials (due to counterterms) or quasi-polynomials (due to kaon loop effects) inside the physical region. Furthermore, Sutherland's theorem guarantees that at the soft-pion point these corrections scale as $e^2\hat{m}$ only (and not $e^2m_s$); hence the relative suppression of the neglected terms is of the order of $\delta/\hat{m}\approx2/3$ and therefore not a priori small. The details of the presented analysis can be found in Ref.~\cite{dkm}. 
\section{$\boldsymbol{\eta\to}3\boldsymbol{\pi}$ decay amplitudes at $\boldsymbol{\ord(e^2\delta)}$} At leading chiral order the $\etp$ decay amplitudes for the charged and the neutral channel, \begin{align} \begin{split} \langle\pi^0\pi^+\pi^-|\eta\rangle &= i(2\pi)^4\delta^4(p_{\pi^0}+p_{\pi^+}+p_{\pi^-}-p_\eta)\amp_c(s,t,u)\;, \\ \langle3\pi^0|\eta\rangle &= i(2\pi)^4\delta^4(p_{\pi^0_1}+p_{\pi^0_2}+p_{\pi^0_3}-p_\eta)\amp_n(s,t,u)\;, \end{split} \end{align} follow from the tree graphs where the fields in the LO Lagrangian are diagonalized by use of the $\eta\pi^0$ mixing angle $\epsilon=(\sqrt{3}/4)\times\delta/(m_s-\hat{m})+\ord(\delta^3)$. For the \textit{charged} channel, the Mandelstam variables $s=(p_\eta-p_{\pi^0})^2$, $t=(p_\eta-p_{\pi^+})^2$ and $u=(p_\eta-p_{\pi^-})^2$ are related by $s+t+u=\me+\mpn+2\mpc=3s_0^c$. Expanding in isospin breaking parameters up to $\ord(\delta,e^2,e^2\delta)$ and rewriting everything in terms of physical observables like $\me$, $\mpn$, $\Delta\mpi$ and the pion decay constant $F_\pi$ yields for the charged amplitude at leading chiral order \begin{equation}\label{eqn:cloamp} \amp_c^\tn{LO}=-\frac{B_0\delta}{3\sqrt{3}F_\pi^2}\left\{1+\frac{3(s-s_0^c)+2\Delta\mpi}{\me-\mpn}\right\}=-\frac{(3s-4\mpn)(3\me+\mpn)}{16\sqrt{3}Q^2F_\pi^2\mpn}\;. \end{equation} An additional electromagnetic term of order $e^2\delta$ cancels the pion mass difference that is implicitly included in $s_0^c$. The LO charged amplitude is completely proportional to $\delta$ and can thus be expressed in terms of the quark mass double ratio $Q^2=(m_s^2-\hat{m}^2)/(m_d^2-m_u^2)$, which is particularly stable with respect to strong higher-order corrections~\cite{glchpt,leuellipse}. The amplitude depends linearly on $s$, and by inserting $s_0^c$ it explicitly displays the Adler zero at $s=4\mpn/3$.
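The equality of the two forms in \eqref{eqn:cloamp}, and the location of the Adler zero, can be verified with exact rational arithmetic; the squared masses below are illustrative placeholders, not physical values:

```python
from fractions import Fraction as F

# Illustrative (unphysical) squared masses in arbitrary units:
me, mpn, dmpi = F(30), F(2), F(1)   # M_eta^2, M_pi0^2, Delta M_pi^2
mpc = mpn + dmpi                     # M_pi+-^2
s0c = (me + mpn + 2 * mpc) / 3       # 3 s_0^c = M_eta^2 + M_pi0^2 + 2 M_pi+-^2

def bracket(s):
    """The curly bracket of Eq. (eqn:cloamp)."""
    return 1 + (3 * (s - s0c) + 2 * dmpi) / (me - mpn)

for s in [F(0), F(1), F(7, 3), F(11)]:
    # The bracket is identically (3s - 4 M_pi0^2)/(M_eta^2 - M_pi0^2), so the
    # two forms of the amplitude agree ...
    assert bracket(s) == (3 * s - 4 * mpn) / (me - mpn)

# ... and the amplitude vanishes at the Adler zero s = 4 M_pi0^2 / 3.
assert bracket(4 * mpn / 3) == 0
```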
For the \textit{neutral} channel, the Mandelstam variables $s=(p_\eta-p_{\pi^0_1})^2$, $t=(p_\eta-p_{\pi^0_2})^2$ and \mbox{$u=(p_\eta-p_{\pi^0_3})^2$} are related by \mbox{$s+t+u=\me+3\mpn=3s_0^n$}. The neutral amplitude at leading chiral order, \begin{equation}\label{eqn:nloamp} \amp_n^\tn{LO}=-\frac{B_0\delta}{\sqrt{3}F_\pi^2}=-\frac{3(\me-\mpn)(3\me+\mpn)}{16\sqrt{3}Q^2F_\pi^2\mpn}\;, \end{equation} also carries an overall factor of $\delta$, but it contains neither derivatives nor electromagnetic terms and hence it is just a constant. While for the charged decay invariance under charge conjugation implies that the amplitude $\amp_c(s,t,u)$ is symmetric under the exchange of $t$ and $u$, the amplitude $\amp_n(s,t,u)$ for the neutral decay has to be symmetric under exchange of all pions and thus all Mandelstam variables due to Bose symmetry. All calculations of both $\etp$ decay channels performed in ChPT so far have used the following isospin-symmetry relation between the charged and the neutral amplitude, \begin{equation}\label{eqn:isoamprel} \amp_n(s,t,u)=\amp_c(s,t,u)+\amp_c(t,u,s)+\amp_c(u,s,t)\;. \end{equation} However, this relation is only valid at leading order in isospin breaking and cannot be used at $\ord(e^2\delta)$. This is most easily seen by the fact that e.g.\ photon loop contributions do not respect it. Furthermore, an explicit check using the LO amplitudes~\eqref{eqn:cloamp} and~\eqref{eqn:nloamp} shows that this relation holds only up to terms proportional to $\delta\times(3s_0^c-3s_0^n)=\delta\times2\Delta\mpi$. Thus both decay channels have to be calculated separately. At next-to-leading chiral order the $\etp$ decay amplitudes receive various strong and electromagnetic contributions. Besides renormalization effects and $\eta\pi^0$ mixing at NLO, both strong and electromagnetic NLO low-energy constants enter the calculation via the tree diagram with a vertex from the NLO Lagrangian.
The $I=0$ rescattering of intermediate pions is particularly important since it gives rise to about half of the total NLO corrections, cf.\ Ref.~\cite{gleta}. For the charged decay, the infrared~(IR) divergences due to virtual-photon loops are canceled by including real-photon radiation (bremsstrahlung) in the soft-photon approximation at the amplitude-squared level. The exchange of a virtual photon in the final state between the charged pions leads to a triangle loop function which has some interesting features: both the real and the imaginary part are IR-divergent; while the IR divergence in the real part is canceled against bremsstrahlung contributions, the imaginary part can be resummed in the Coulomb phase; furthermore it contains a kinematical singularity at threshold $s=4\mpc$, the Coulomb pole. The kinematical singularities in the radiative corrections, i.e.\ the Coulomb pole and the kinematical bremsstrahlung singularity at $s=(M_\eta-M_{\pi^0})^2$, are part of the universal soft-photon corrections, see e.g.\ Ref.~\cite{isidorisoftphoton}, that are usually already applied in the analysis of the experimental data in order to perform a meaningful fit of the Dalitz plot distribution. Hence we omit the unobservable Coulomb phase and subtract the Coulomb pole and the kinematical bremsstrahlung singularity from the squared amplitude for the derivation of the numerical results. We have checked both the charged and the neutral amplitude in several ways: they are finite and renormalization-scale independent and, by use of relation~\eqref{eqn:isoamprel}, both amplitudes reduce to the results of GL~\cite{gleta} at $\ord(\delta)$ and of BKW~\cite{bkw} at $\ord(e^2)$. In addition, we have checked explicitly that at the soft-pion point~\cite{sutherland} the new electromagnetic corrections are relatively suppressed in comparison with the BKW corrections only by $\delta/\hat{m}\approx2/3$, as expected.
\section{Results} It is well known that one has to go beyond one-loop chiral order to obtain a phenomenologically successful representation of the $\etp$ decay amplitudes. Since we focus on the electromagnetic contributions anyway, all results are given as relative corrections to the results obtained via the purely strong amplitude at $\ord(\delta)$, which corresponds precisely to the GL amplitude. Furthermore we only consider uncertainties in the electromagnetic corrections and disregard higher-order hadronic corrections. Hence the purely electromagnetic corrections at $\ord(e^2)$ corresponding to the BKW amplitude as well as the new mixed corrections at $\ord(e^2\delta)$ are successively added to the strong amplitude and their uncertainties are obtained by variation of the poorly known electromagnetic low-energy constants. A comparison of the different electromagnetic contributions at the amplitude level is given in Figs.~\ref{fig:campzo} and~\ref{fig:namp}, where the real and the imaginary parts of the amplitudes at different orders in isospin breaking (GL, BKW and DKM) are shown separately for each decay channel. The amplitudes are plotted along the lines with $t=u$; the vertical dotted lines show the limits of the corresponding physical regions. The BKW corrections are purely real for both decay channels, as no pion rescattering diagrams contribute at that particular order, and thus the imaginary parts of GL and BKW coincide with each other. Furthermore, all imaginary parts are independent of any low-energy constants and therefore plotted without an error range. The \textit{charged} decay amplitude exhibits the kinematical singularities at threshold $s=4\mpc$ (Coulomb pole \& phase) that are retained here for illustrative reasons, while the IR divergences in the amplitude are cured by hand.
By our choice of the neutral pion mass for the isospin limit, the threshold cusp in the GL amplitude is artificially shifted from the physical threshold to $s=4\mpn$. Since the LO \textit{neutral} decay amplitude is constant and also at NLO the dependence on $s$ is weak, the overall variation and thereby the scale of Fig.~\ref{fig:namp} is very small. Thus the electromagnetic uncertainties as well as the expected cusp at the energy of the $\pi^+\pi^-$ threshold inside the physical region at $\ord(e^2\delta)$ are clearly visible. Furthermore the figure shows that the size of the DKM contributions is comparable to or even larger than that of the BKW contributions. \begin{figure} \centering \includegraphics[width=0.48\linewidth]{campre.eps} \hfill \includegraphics[width=0.48\linewidth]{campim.eps} \caption{Real and imaginary parts of the charged amplitudes GL (dashed/black), BKW (dot-dashed/blue), and DKM (full/red) for $t=u$. The inserts show the region close to the two-pion thresholds. The line widths in the real part indicate the error bands.} \label{fig:campzo} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\linewidth]{nampre.eps} \hfill \includegraphics[width=0.48\linewidth]{nampim.eps} \caption{Real and imaginary parts of the neutral amplitudes GL (dashed/black), BKW (dot-dashed/blue), and DKM (full/red) for $t=u$. The hatched regions denote the error bands.} \label{fig:namp} \end{figure} The Dalitz plot for the charged decay is described in terms of the symmetrized coordinates \begin{equation} x=\sqrt{3}\frac{T_+-T_-}{Q_c}=\frac{\sqrt{3}(u-t)}{2M_\eta Q_c}\;, \qquad y=\frac{3T_0}{Q_c}-1=\frac{3\bigl[(M_\eta-M_{\pi^0})^2-s\bigr]}{2M_\eta Q_c}-1\;, \end{equation} whereas for the neutral decay it is convenient to use the fully symmetrized coordinate \begin{equation} z=\frac{2}{3}\sum_{i=1}^{3}\biggl(\frac{3T_i}{Q_n}-1\biggr)^2=x^2+y^2\;.
\end{equation} Here $Q_c=T_0+T_++T_-=M_\eta-2M_{\pi^\pm}-M_{\pi^0}$ and $Q_n=T_1+T_2+T_3=M_\eta-3M_{\pi^0}$, where the $T_i$ are the kinetic energies of the respective pions in the $\eta$ rest frame. In order to obtain the Dalitz plot parameters the squared absolute values of the decay amplitudes are expanded around the center of the corresponding Dalitz distribution according to the standard parameterizations \begin{align} \begin{split} |\amp_c(x,y)|^2 &= |\mathcal{N}_c|^2\bigl\{1+ay+by^2+dx^2+...\bigr\}\;, \\ |\amp_n(x,y)|^2 &= |\mathcal{N}_n|^2\bigl\{1+2\alpha z+...\bigr\}\;. \end{split} \end{align} For the \textit{charged} channel, the purely strong results and the successive electromagnetic relative corrections are given in Tab.~\ref{tab:cdalitzpar}. The normalization gets reduced and the slopes tend to increase. All corrections are at the percent level, but the BKW corrections do not represent a valid estimate of the dominant electromagnetic corrections. \begin{table} \centering \newcommand{\hspace{-4mm}}{\hspace{-4mm}} \newcommand{\tbk$\pm$}{\hspace{-4mm}$\pm$} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|crclrclrclrcl|} \hline &\multicolumn{3}{c}{$|\mathcal{N}_{c}|^2$}&\multicolumn{3}{c}{$a$}&\multicolumn{3}{c}{$b$}&\multicolumn{3}{c|}{$d$}\\ \hline GL & \multicolumn{3}{c}{$0.0325$} & \multicolumn{3}{c}{$-1.279$} & \multicolumn{3}{c}{$0.396$} & \multicolumn{3}{c|}{$0.0744$} \\ $\Delta$BKW & $(-1.1 $&\tbk$\pm$&\hspace{-4mm}$0.9)\%$ & $(+0.6$&\tbk$\pm$&\hspace{-4mm}$0.1)\%$ & $(+1.4$&\tbk$\pm$&\hspace{-4mm}$0.2)\%$ & $(+1.5$ &\tbk$\pm$&\hspace{-4mm}$0.5)\%$ \\ $\Delta$DKM & $(-2.4 $&\tbk$\pm$&\hspace{-4mm}$0.7)\%$ & $(+0.7$&\tbk$\pm$&\hspace{-4mm}$0.4)\%$ & $(+1.5$&\tbk$\pm$&\hspace{-4mm}$0.7)\%$ & $(+4.4$ &\tbk$\pm$&\hspace{-4mm}$0.4)\%$ \\ \hline \end{tabular} \renewcommand{\arraystretch}{1.0} \caption{Dalitz normalization and slopes for $\etpc$ resulting from the GL amplitude and relative electromagnetic corrections due to BKW and DKM amplitudes.} 
\label{tab:cdalitzpar} \end{table} The Dalitz plot parameters for the \textit{neutral} channel are shown in Tab.~\ref{tab:ndalitzpar}. By DKM(w/o cusp) we denote a fit to the inner part of the Dalitz plot with $z\leq z_0$ such that the border region from the cusp outward is excluded, since the cusp is incompatible with a simple polynomial fit. As for the charged decay channel, the normalization gets reduced by a few percent. Here, the corrections of $\ord(e^2\delta)$ are even bigger than those of $\ord(e^2)$. Note that the cusp effect leads to the single biggest modification of any Dalitz plot parameter: trying to fit the cusp with the polynomial function reduces $\alpha$ by 4\% (compare $\Delta$DKM to $\Delta$BKW), while excluding the cusp region increases it again by more than 5\%. The significance of this non-analytic structure is also reflected in the fit quality as quantified by the quoted $\chi^2/\ndf$ values. \begin{table} \centering \newcommand{\hspace{-4mm}}{\hspace{-4mm}} \newcommand{\tbk$\pm$}{\hspace{-4mm}$\pm$} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|crclrclc|} \hline &\multicolumn{3}{c}{$|\mathcal{N}_{n}|^2$}&\multicolumn{3}{c}{$10^2\times\alpha$}&$\chi^2/\ndf$\\ \hline GL & \multicolumn{3}{c}{$ 0.269$} & \multicolumn{3}{c}{$ 1.27$} & $\equiv1$ \\ $\Delta$BKW & $(-1.1$&\tbk$\pm$&\hspace{-4mm}$0.9)\%$ & $(+3.7$&\tbk$\pm$&\hspace{-4mm}$0.5)\%$ & $0.99$ \\ $\Delta$DKM & $(-3.3$&\tbk$\pm$&\hspace{-4mm}$1.8)\%$ & $(-0.2$&\tbk$\pm$&\hspace{-4mm}$1.0)\%$ & $6.20$ \\ $\Delta$DKM(w/o cusp) & $(-3.3$&\tbk$\pm$&\hspace{-4mm}$1.8)\%$ & $(+5.0$&\tbk$\pm$&\hspace{-4mm}$1.1)\%$ & $0.35$ \\ \hline \end{tabular} \renewcommand{\arraystretch}{1.0} \caption{Dalitz normalization and slopes for $\etpn$ resulting from the GL amplitude and relative electromagnetic corrections due to BKW and DKM amplitudes. 
For details, see main text.} \label{tab:ndalitzpar} \end{table} This behavior can easily be understood qualitatively by looking at the Dalitz plot distribution at $\ord(e^2\delta)$ presented in Fig.~\ref{fig:3Dcusp}. The distribution can be thought of as a bowl with three symmetric handles, where the surface bends down along the lines of constant $s$, $t$, and $u$ at the respective $\pi^+\pi^-$ thresholds due to the cusp. \begin{figure} \centering \includegraphics[scale=0.9]{nadalitzgray.eps} \caption{$\etpn$ Dalitz plot distribution corresponding to the DKM amplitude.} \label{fig:3Dcusp} \end{figure} The decay widths, which can easily be calculated from the Dalitz plot distributions, are given in Tab.~\ref{tab:dwbrqmr}. For comparison we also show the result for the charged amplitude without subtraction of the universal soft-photon corrections, named DKM(uc). For the neutral decay, the shifts of the width are completely dominated by the corrections of the Dalitz normalizations and the DKM corrections are twice as large as the BKW corrections. When looking at the neutral-to-charged branching ratio, the difference between the BKW and the DKM result becomes even larger, since at $\ord(e^2)$ the relative shifts nearly cancel in the ratio. Finally, by use of the (approximate) relation $\Gamma_{c/n}\propto Q^{-4}$ one can read off the corrections to the quark mass ratio $Q$, which are also quoted in Tab.~\ref{tab:dwbrqmr}. This relation does not hold for the BKW contributions, but the additional error in factorizing $Q^{-2}$ in the complete amplitude can be safely neglected. In order to correct a real measurement (which includes electromagnetic effects) so that the purely strong value of $Q$ can be extracted, the opposite shift has to be applied. Note that there are also ongoing theoretical efforts to obtain $Q$ from a dispersive treatment of $\etp$ decays~\cite{lanz}.
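As a cross-check (ours) of the quoted shifts, $\Gamma_{c/n}\propto Q^{-4}$ implies $\Delta Q/Q = (1+\Delta\Gamma/\Gamma)^{-1/4}-1 \approx -\tfrac{1}{4}\,\Delta\Gamma/\Gamma$; a short script confirms the entries of Tab.~\ref{tab:dwbrqmr} up to the rounding of the quoted one-decimal percentages:

```python
def dQ_from_dGamma(dG):
    """Relative shift of Q induced by a relative width shift, using Gamma ∝ Q^-4."""
    return (1.0 + dG) ** (-0.25) - 1.0

# (Delta Gamma, Delta Q) central values in percent, as quoted in the table:
table = [(-1.0, 0.24), (-1.9, 0.48), (-1.1, 0.28), (-3.3, 0.84), (-1.0, 0.24)]
for dG, dQ in table:
    # Agreement within the rounding of the quoted percentages.
    assert abs(100 * dQ_from_dGamma(dG / 100) - dQ) < 0.03
```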
\begin{table} \centering \newcommand{\hspace{-4mm}}{\hspace{-4mm}} \newcommand{\tbk$\pm$}{\hspace{-4mm}$\pm$} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|crclrcl|crcl|} \hline & \multicolumn{3}{c}{$\etpc$} & \multicolumn{3}{c|}{$\etpn$} & \multicolumn{4}{c|}{$r=\Gamma_n/\Gamma_c$} \\ \hline $\Gamma^\tn{GL}$ & \multicolumn{3}{c}{$154.5\ev$} & \multicolumn{3}{c|}{$222.8\ev$} & $r^\tn{GL}$ & \multicolumn{3}{c|}{$1.442$} \\ $\Delta\Gamma^\tn{BKW}$ & $(-1.0$&\tbk$\pm$&\hspace{-4mm}$0.9)\%$ & $(-1.1$&\tbk$\pm$&\hspace{-4mm}$0.9)\%$ & $\Delta r^\tn{BKW}$ & $(-0.1$&\tbk$\pm$&\hspace{-4mm}$1.2)\%$ \\ $\Delta\Gamma^\tn{DKM}$ & $(-1.9$&\tbk$\pm$&\hspace{-4mm}$0.5)\%$ & $(-3.3$&\tbk$\pm$&\hspace{-4mm}$1.8)\%$ & $\Delta r^\tn{DKM}$ & $(-1.4$&\tbk$\pm$&\hspace{-4mm}$1.8)\%$ \\ $\Delta\Gamma^\tn{DKM(uc)}$ & $(-1.0$&\tbk$\pm$&\hspace{-4mm}$0.5)\%$ & & & & $\Delta r^\tn{DKM(uc)}$ & $(-2.3$&\tbk$\pm$&\hspace{-4mm}$1.8)\%$ \\ \hline $\Delta Q^\tn{BKW}$ & $(+0.24$&\tbk$\pm$&\hspace{-4mm}$0.22)\%$ & $(+0.28$&\tbk$\pm$&\hspace{-4mm}$0.22)\%$ & & & & \\ $\Delta Q^\tn{DKM}$ & $(+0.48$&\tbk$\pm$&\hspace{-4mm}$0.12)\%$ & $(+0.84$&\tbk$\pm$&\hspace{-4mm}$0.46)\%$ & \multicolumn{4}{c|}{$\Gamma_{c/n}\propto Q^{-4}$} \\ $\Delta Q^\tn{DKM(uc)}$ & $(+0.24$&\tbk$\pm$&\hspace{-4mm}$0.12)\%$ & & & & & & & \\ \hline \end{tabular} \renewcommand{\arraystretch}{1.0} \caption{Decay widths, branching ratio and quark mass ratio $Q$.} \label{tab:dwbrqmr} \end{table} \section{Summary \& Outlook} Although electromagnetic contributions to $\etp$ decays are small in general, they ought to be accounted for in high-precision studies. The effects at isospin breaking $\ord(e^2\delta)$ are as large as the effects at $\ord(e^2)$. 
Since the present calculation is performed in ChPT at NLO and hence includes the rescattering effect leading to the cusp only at LO in the quark mass expansion, it is not suited to serve for a precision extraction of $\pi\pi$ scattering lengths from the cusp effect, but rather illustrates the phenomenon. The theoretical framework perfectly suited for such an extraction is non-relativistic effective field theory~\cite{colangelocusp,bisseggercusp,bisseggerradcorr,gullstrom,schneider}. The present calculation is in a sense dual to the non-relativistic calculation, as it aims at predicting electromagnetic effects in those parts of the amplitude that are merely parameterized in the latter. \acknowledgments I am very grateful to my collaborators and thank the organizers for making this exciting conference possible. Partial financial support by the Helmholtz Association through funds provided to the virtual institute ``Spin and strong QCD'' (VH-VI-231), by the EU Integrated Infrastructure Initiative Hadron Physics Project (contract number RII3--CT--2004--506078), and by DFG (SFB/TR 16, ``Subnuclear Structure of Matter'') is gratefully acknowledged.
\section{Introduction} All graphs considered in this paper are finite, simple, and undirected. Call a graph $G$ {\em planar} if it can be embedded into the plane so that its edges meet only at their ends. As proved by Garey et al.~\cite{Gar}, the problem of deciding whether a planar graph is properly 3-colorable is NP-complete. In 1959, Gr\"{o}tzsch~\cite{G59} showed that every triangle-free planar graph is 3-colorable. Much research has been devoted to finding sufficient conditions for a planar graph to be $3$-colorable that allow triangles subject to some other restrictions. The well-known Steinberg's conjecture~\cite{S76} stated below is one such effort. \begin{conj}[Steinberg, \cite{S76}] \label{con1} All planar graphs without $4$-cycles and $5$-cycles are $3$-colorable. \end{conj} Towards this conjecture, Erd\H{o}s suggested finding a constant $c$ such that every planar graph without cycles of length from $4$ to $c$ is $3$-colorable. The best known constant so far is $c=7$, found by Borodin, Glebov, Raspaud, and Salavatipour~\cite{BGRS05}. For more results, see the recent nice survey by Borodin~\cite{B12}. Another relaxation of the conjecture is to allow some defects in the color classes. A graph is {\em $(c_1, c_2, \cdots, c_k)$-colorable} if the vertex set can be partitioned into $k$ sets $V_1,V_2, \ldots, V_k$, such that for every $i\in [k]:=\{1, 2, \ldots, k\}$ the subgraph $G[V_i]$ has maximum degree at most $c_i$. With this notion, a properly 3-colorable graph is $(0, 0, 0)$-colorable. Chang, Havet, Montassier, and Raspaud~\cite{CHMR11} proved that all planar graphs without $4$-cycles or $5$-cycles are $(2,1,0)$-colorable and $(4,0,0)$-colorable. It is shown in~\cite{HSWXY13, HY13, XMW12} that planar graphs without $4$-cycles or $5$-cycles are $(3,0,0)$- and $(1,1,0)$-colorable. Wang and his coauthors (private communication) further proved that such graphs are $(2,0,0)$-colorable. As usual, a 3-cycle is also called a {\em triangle}.
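The defect condition in this definition is easy to check mechanically; the following sketch (a hypothetical helper, not taken from any cited work) verifies a candidate $(c_1, \ldots, c_k)$-coloring:

```python
def is_defective_coloring(adj, color, caps):
    """color[v] is the index i of the class V_i containing v; the coloring
    is valid if every v has at most caps[color[v]] neighbors in its own
    class, i.e. the subgraph G[V_i] has maximum degree at most c_i."""
    return all(
        sum(1 for u in adj[v] if color[u] == color[v]) <= caps[color[v]]
        for v in adj
    )

# A triangle is (1, 1, 0)-colorable with two vertices sharing class 0
# (each then has exactly one same-class neighbor), but not with all
# three vertices in class 0.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(is_defective_coloring(triangle, {0: 0, 1: 0, 2: 1}, (1, 1, 0)))  # True
print(is_defective_coloring(triangle, {0: 0, 1: 0, 2: 0}, (1, 1, 0)))  # False
```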
Havel (1969, \cite{H69}) asked if each planar graph with large enough distances between triangles is $(0,0,0)$-colorable. This was resolved by Dvo\v{r}\'{a}k, Kr\'{a}l', and Thomas~\cite{DKT09}. We say that two cycles are {\em adjacent} if they have at least one edge in common and {\em intersecting} if they have at least one common vertex. Borodin and Raspaud in 2003 made the following two conjectures, which share features with Havel's and Steinberg's 3-color problems. \begin{conj}[Bordeaux Conjecture, \cite{BR03}] \label{con2} Every planar graph without intersecting $3$-cycles and without $5$-cycles is $3$-colorable. \end{conj} \begin{conj}[Strong Bordeaux Conjecture, \cite{BR03}] \label{con3} Every planar graph without adjacent $3$-cycles and without $5$-cycles is $3$-colorable. \end{conj} Let $d^{\bigtriangledown}$ denote the minimum distance between triangles. Towards the conjectures, Borodin and Raspaud~\cite{BR03} showed that planar graphs with $d^{\bigtriangledown}\ge 4$ and without $5$-cycles are $(0,0,0)$-colorable. This result was later improved to $d^{\bigtriangledown}\ge 3$ by Borodin and Glebov~\cite{BG04}, and independently by Xu~\cite{X07}. Borodin and Glebov~\cite{BG11} further improved this result to $d^\bigtriangledown\ge2$. With the relaxed coloring notion, Xu~\cite{X08} showed that all planar graphs without adjacent 3-cycles and without $5$-cycles are $(1, 1, 1)$-colorable. Recently, Liu, Li and Yu~\cite{LLY15a, LLY15b} proved that every planar graph without intersecting 3-cycles and without $5$-cycles is $(2, 0, 0)$-colorable and $(1, 1, 0)$-colorable. In this paper, we improve the results by Xu~\cite{X08} and by Liu-Li-Yu~\cite{LLY15a}, and prove the following result. \begin{thm}\label{main1} Every planar graph without $5$-cycles and adjacent 3-cycles is $(1, 1, 0)$-colorable. \end{thm} We actually prove a stronger result. To state it, we introduce the following notion. Let $H$ be a subgraph of $G$.
Then $(G,H)$ is {\em superextendable} if every $(1,1,0)$-coloring of $H$ can be extended to a $(1,1,0)$-coloring of $G$ such that each vertex $u\in G-H$ is colored differently from its neighbors in $H$. If $(G,H)$ is superextendable, then we call $H$ a {\em superextendable subgraph} of $G$. Let $\mathcal{F}$ be the family of planar graphs without $5$-cycles and adjacent $3$-cycles. \begin{thm}\label{main} Every triangle or $7$-cycle of a graph in $\mathcal{F}$ is superextendable. \end{thm} \noindent{\it Proof of Theorem~\ref{main1} from Theorem~\ref{main}}:\quad Let $G$ be a graph in $\mathcal{F}$. If $G$ is triangle-free, then $G$ is 3-colorable by the Gr\"{o}tzsch Theorem, and is naturally $(1, 1, 0)$-colorable; if $G$ has a triangle, then every $(1, 1, 0)$-coloring of this triangle can be superextended to the whole graph $G$ by Theorem~\ref{main}. So Theorem~\ref{main1} follows.{\hfill \rule{2.5mm}{2.5mm}} \medskip Like many results of this kind, we use a discharging argument to prove Theorem~\ref{main}. The main difficulty lies in the cases when a $4$-vertex or a $5$-vertex is incident with many triangles or many $4$-faces. Fortunately, we can utilize many of the lemmas from Xu~\cite{X08} and Liu-Li-Yu~\cite{LLY15a} to handle those difficult situations. We use $G= (V, E, F)$ to denote a plane graph with vertex set $V(G)$, edge set $E(G)$, and face set $F(G)$. For a face $f\in F(G)$, let $b(f)$ denote the boundary of $f$. A $k$-vertex ($k^+$-vertex, $k^-$-vertex) is a vertex of degree $k$ (at least $k$, at most $k$). The same notation will apply to faces and cycles. An $(l_1, l_2, \ldots, l_k)$-face is a $k$-face $v_1v_2\ldots v_k$ with $d(v_i)=l_i$, respectively. If a 3-vertex is incident with a triangle, then its neighbor not on the triangle is called its {\em outer neighbor}, and the $3$-face is a {\em pendant $3$-face} of its outer neighbor. Let $C$ be a cycle of a plane graph $G$.
We use $int(C)$ and $ext(C)$ to denote the sets of vertices located inside and outside $C$, respectively. A cycle $C$ is called a {\em separating cycle} if $int(C)\ne\emptyset\ne ext(C)$, and is called a {\em nonseparating cycle} otherwise. We also use $C$ to denote the set of vertices of $C$. Let $S_1, S_2, \ldots, S_l$ be pairwise disjoint subsets of $V(G)$. We use $G[S_1, S_2, \ldots, S_l]$ to denote the graph obtained from $G$ by contracting all the vertices in $S_i$ to a single vertex for each $i\in\{1, 2, \ldots, l\}$. Let $x(y)$ be the resulting vertex by identifying $x$ and $y$ in $G$. The paper is organized as follows. In Section $2$, we present the reducible structures used in our proof. Section $3$ is devoted to the proof of Theorem~\ref{main} by a discharging procedure. \section{Reducible configurations} Suppose that $(G, C_0)$ is a counterexample to Theorem~\ref{main} with minimum $\sigma(G)=|V(G)|+|E(G)|$, where $C_0$ is a triangle or a 7-cycle in $G$. If $C_0$ is a separating cycle, then $C_0$ is superextendable in both $G\setminus ext(C_0)$ and $G\setminus int(C_0)$. Hence, $C_0$ is superextendable in $G$, contrary to the choice of $(G, C_0)$. Thus we assume that $C_0$ is the boundary of the outer face of $G$. Let $F_k=\{f: \text{ $f$ is a $k$-face and } b(f)\cap C_0=\emptyset\}$, $F_k'=\{f: \text{ $f$ is a $k$-face and } |b(f)\cap C_0|=1\}$, and $F_k''=\{f: \text{ $f$ is a $k$-face and } |b(f)\cap C_0|=2\}$. Since $G\in {\mathcal F}$, the following is immediate. \begin{prop}\label{pr21} Every vertex not on $C_0$ has degree at least $3$, and no 3-face shares an edge with a 4-face in $G$. \end{prop} The following is a summary of some basic properties of $G$ when we consider superextendability of a $3$-cycle or a $7$-cycle. The proofs of those results can be found, for example, in~\cite{X08} or~\cite{LLY15b}.
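The identification operation $G[S_1, \ldots, S_l]$ used throughout the reductions can be sketched in code (a hypothetical helper on edge lists; loops and parallel edges created by the contraction are discarded, so the result stays simple):

```python
def identify(edges, groups):
    """Contract each vertex set S_i in `groups` to one new vertex,
    as in the operation G[S_1, ..., S_l]; keep the result simple by
    dropping loops and merging parallel edges."""
    rep = {v: ('S', i) for i, g in enumerate(groups) for v in g}
    f = lambda v: rep.get(v, v)
    return {frozenset((f(u), f(v))) for u, v in edges if f(u) != f(v)}

# Identifying the diagonal pair x, z of the 4-cycle w-x-y-z leaves a
# path on three vertices: two edges from the new vertex to w and to y.
c4 = [('w', 'x'), ('x', 'y'), ('y', 'z'), ('z', 'w')]
print(len(identify(c4, [{'x', 'z'}])))  # 2
```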
\begin{Lemma}[Xu, \cite{X08}; Liu-Li-Yu, \cite{LLY15b}]\label{le22} The following are true about $G$: \begin{enumerate}[(1)] \item The graph $G$ contains neither separating triangles nor separating $7$-cycles. \item If $G$ has a separating $4$-cycle $C_1=v_1v_2v_3v_4v_1$, then $ext(C_1)=\{b,c\}$ such that $v_1bc$ is a $3$-cycle. Furthermore, the $4$-cycle is the unique separating $4$-cycle. \item Let $x,y$ be two nonadjacent vertices on $C_0$. Then $xy\not\in E(G)$ and $N(x)\cap N(y)\subseteq C_0$. \item Let $f$ be a $4$-face with $b(f)=v_1v_2v_3v_4v_1$ and let $v_1\in C_0$. Then, $v_3\not\in C_0$. Moreover, $|N(v_3)\cap C_0|=1$ if $f\in F_4''$, and $|N(v_3)\cap C_0|=0$ if $f\in F_4'$. \item Let $u, w$ be a pair of diagonal vertices on a $4$-face. Then $G[\{u, w\}]\in \mathcal{F}$. \end{enumerate} \end{Lemma} The following holds for minimum graphs that are not $(1,1,0)$-colorable. \begin{Lemma}\label{le27} The following are true in $G$. \begin{enumerate} \item (Lemma 2.5 from~\cite{HY13}) No $3$-vertex $v\not\in C_0$ is adjacent to two $3$-vertices in $int(C_0)$. \item (Lemma 2.3 from~\cite{HY13}) $G$ has no $(3, 3, 4^-)$-face $f\in F_3$. \item (Lemma 2.8 from~\cite{HY13}) If $v\in int(C_0)$ is a $4$-vertex incident with exactly one $3$-face that is a $(3, 4, 4)$-face in $F_3$, then a neighbor of $v$ not on the face is either in $C_0$ or a $4^+$-vertex. \item (Lemma 2.6 from~\cite{HY13}) Let $v\in int(C_0)$ be the $3$-vertex on a $(3, 4, 4)$-face $f\in F_3$. Then the neighbor of $v$ not on $f$ is either on $C_0$ or a $4^+$-vertex. \item (Lemma 3(3) from~\cite{XMW12}) Suppose that $v\in int(C_0)$ is a $4$-vertex incident with two faces from $F_3$. If one of the faces is a $(3, 4, 4)$-face, then $v$ has a $5^+$-neighbor on the other face. \end{enumerate} \end{Lemma} A $4$-vertex $v\in int(C_0)$ is {\em bad} if it is incident with a $(3, 4, 4)$-face from $F_3$. A $(3, 4, 5^+)$-face from $F_3$ is {\em bad} if the $4$-vertex on it is bad.
A $5$-vertex is {\em bad} if it is incident with a bad $(3, 4, 5)$-face or a $(3, 3, 5)$-face. \begin{Lemma}\label{le29} Suppose that $v\in int(C_0)$ is a $5$-vertex incident with two $3$-faces $f_1$ and $f_3$ from $F_3$. Let $v_5$ be the remaining neighbor of $v$. Then each of the following holds. \begin{enumerate}[(1)] \item (Lemma 5 from~\cite{XMW12}) If both $f_1$ and $f_3$ are $(3, 4^-, 5)$-faces, then $v_5$ is either on $C_0$ or a $4^+$-vertex. \item (Lemma 4(1) from~\cite{XMW12}) At most one of $f_1$ and $f_3$ is bad. \item (Lemma 4(2) from~\cite{XMW12}) If $f_1$ is a bad $(3, 4, 5)$-face and $f_3$ is a $(3, 4, 5)$-face, then the outer neighbor of the $3$-vertex on $b(f_3)$ is either on $C_0$ or a $4^+$-vertex. \item If $f_1$ is a bad $(3, 4, 5)$-face and $f_3$ is a $(4, 4, 5)$-face, then at most one $4$-vertex on $b(f_3)$ is bad. \item \label{l212}(Lemma 8 from~\cite{XMW12}) No $6$-vertex in $int(C_0)$ is incident with three $(3, 4^-, 6)$-faces from $F_3$. \end{enumerate} \end{Lemma} \begin{proof} We only prove (4). Let $f_1=v v_1 v_2$ with $d(v_1)= 4$ and $d(v_2)= 3$, and $f_3=v v_3 v_4$ with $d(v_3)=d(v_4)= 4$. Let $v'_i$ and $v''_i$ (if any) be the other neighbors of $v_i$ for $i=1, 2, 3, 4$. Suppose that both $4$-vertices on $b(f_3)$ are bad. Let $S=\{v, v_1,v_2, v_3, v_4, v'_1, v''_1, v'_3, v''_3, v_4', v_4''\}$, where $d(v''_1)=d(v''_3)=d(v_4'')= 4$, and let $H= G\setminus S$. Since $\sigma(H)<\sigma(G)$, $C_0$ has a superextension $c$ on $H$. Based on $c$, we properly color $\{v''_1, v'_1, v''_3, v'_3, v_3, v_4'', v_4', v, v_2\}$ in order. Now $v_4$ can be colored, as it has four properly colored neighbors. If $v, v_4$ are colored differently, then $v_1$ can also be colored, as it has four properly colored neighbors as well. Thus, we may assume that $c(v)=c(v_4)=1$, and $v_1$ cannot be colored. It follows that $\{c(v_1'), c(v_1'')\}=\{c(v_4'), c(v_4'')\}=\{2,3\}$. If $c(v_3)=3$, then we can recolor $v_4$ with $2$ and then color $v_1$, so let $c(v_3)=2$.
If $c(v_5)=2$, then we recolor $v$ with $3$, color $v_1$ with $1$, and recolor $v_2$ accordingly, so let $c(v_5)=3$. Recolor $v$ with $2$ and color $v_1$ with $1$. Now we can recolor $v_2$ with $1$ (if $c(v_2')=3$) or $3$ (if $c(v_2')\not=3$). \end{proof} A $3$-vertex on a $3$-face $f\in F_3$ is {\em weak} if it is adjacent to a $3$-vertex not on $f$ or $C_0$, and {\em strong} if it is adjacent to a vertex on $C_0$ or a $4^+$-vertex not on $f$. A vertex $v\in int(C_0)$ with $d(v)\in \{5,6\}$ is {\em weak} if $v$ is incident with two $(5, 5^-, 3)$-faces from $F_3$, one of which is bad and adjacent to a pendant 3-face in $F_3$, when $d(v)=5$, or $v$ is incident with two bad $(6, 4, 3)$-faces and one $(3,5^+,6)$-face from $F_3$ when $d(v)=6$. \begin{Lemma}\label{l210} \begin{enumerate}[(1)] \item There is no $(3,5^+,5^+)$-face with three weak vertices. \item (Lemma 11 in \cite{HY13}) There is no $(3, 5^+, 5)$-face $f=uvw$ such that $u,v$ are weak and $w$ is incident with a $(5, 3, 3)$-face. \end{enumerate} \end{Lemma} \begin{proof} We only give the proof of (1) here. Suppose that a $(3, 5^+, 5^+)$-face $f=uvw$ contains three weak vertices. When $d(v)=5$, we label $N(v)-\{u,w\}$ as $v_1, v_2, v_3$ such that $d(v_2)=d(v_3)=3$ and $v_1$ is a bad $4$-vertex whose neighbors are $v_2, v_1', v_1''$ with $d(v_1')=4$; when $d(v)=6$, we label $N(v)-\{u,w\}$ as $v_1, v_2, v_4, v_5$ such that $v_1, v_4$ are bad $4$-vertices with $N(v_1)=\{v_2, v_1', v_1''\}$, $N(v_4)=\{v_5, v_4', v_4''\}$ and $d(v_1')=d(v_4')=4$. Similarly, label $N(w)-\{u, v\}$ as $w_1,w_2, w_3$. Let $S_1=N(v)\cup \{v_1', v_1''\}$ if $d(v)=5$, and $S_1=N(v)\cup \{v_1', v_1'', v_4', v_4''\}$ if $d(v)=6$. We first have the following claim: \begin{equation*}\label{C1} \text{In a $(1,1,0)$-coloring of $G-S_1$, $w$ can be properly colored.} \end{equation*} {\em Proof of the claim:} Consider a $(1,1,0)$-coloring $c$ of $G-S_1$. First let $d(w)=5$.
We may assume that $w_1, w_2, w_3$ are colored differently. Note that we may recolor $w_3, w_1', w_1'', w_1, w_2$ in the order so that they are all properly colored. If $c(w_3)=3$, then $\{c(w_1), c(w_2)\}=\{1,2\}$, thus we can recolor $w_2$ so that it has the same color as $w_1$. Then $w$ can be properly colored. If $c(w_3)=1$ (or $2$ by symmetry), then $\{c(w_1), c(w_2)\}=\{2,3\}$; when $c(w_1)=2$, we can recolor $w_2$ with $2$, and color $w$ properly; when $c(w_1)=3$, $w_1', w_1''$ are colored $1$ and $2$, respectively, and we can recolor $w_1$ with $1$, then color $w$ properly. Now assume that $d(w)=6$. Again, we may recolor $w_1',w_1'', w_1, w_2, w_4',w_4'', w_4, w_5$ properly in the order. If there are only two colors on $w_1, w_2, w_4, w_5$, then $w$ can be properly colored. If $w_2$ (or $w_5$) is colored with $3$, then we can recolor it with $1$ or $2$; if $c(w_1)=3$, then we can recolor $w_1$ with $1$ or $2$ so that it is different from the color of $w_2$. By doing this, we may assume that $3$ does not appear on the four neighbors of $w$, so $w$ can be properly colored with $3$. Thus we have the claim. The following claim now gives a contradiction: \begin{quote}\label{C2} A $(1,1,0)$-coloring of $G-S_1$ with $w$ being properly colored can be extended to a $(1,1,0)$-coloring of $G$. \end{quote} {\em Proof of the claim:} Let $c$ be a $(1, 1, 0)$-coloring of $G-S_1$ in which $w$ is properly colored. First assume that $d(v)=5$. We color $u, v_3$ properly in the order. If $v$ can be properly colored, then we color $v_2, v_1', v_1''$ properly in the order, and finally color $v_1$, which can be colored as it has only four properly colored neighbors. If $v$ cannot be properly colored, then $w, u,v_3$ have different colors, thus $v$ can be colored $1$ or $2$. Color $v$ with $1$ for a moment. Color $v_2, v_1', v_1''$ properly in the order. Now try to color $v_1$.
If $v_1$ is not colorable, then it must be that $(c(v_1'), c(v_1''), c(v_2))=(3,2,2)$, in which case we can color $v_1$ with $1$ and color $v$ with $2$. Now assume that $d(v)=6$. We color $u, v, v_2, v_5, v_1', v_1'', v_4',v_4''$ properly in the order. Now $v_1, v_4$ can be colored, unless both of them have the same color, say $1$, as $v$. In the bad case, we can recolor $v_2, v_5$ with $1$ or $3$, then color $v$ with $2$. Thus the claim is true, and we have a contradiction. \end{proof} Now we discuss the configurations about $4$-faces from $F_4$. Some of Lemmas~\ref{l213}--\ref{l220} have their initial forms in~\cite{X08, LLY15b}. \begin{Lemma}\label{l213}(Adapted from Lemma 3.6 in \cite{LLY15a}) \begin{enumerate}[(1)] \item No $4$-face is from $F'_4$ in $G$. \item Let $f\in F_4$ and let $v, x$ be a pair of diagonal vertices on $b(f)$. Then $d(v)\ge 4$ or $d(x)\ge 4$. \end{enumerate} \end{Lemma} \begin{Lemma}\label{l214} Let $v\in int(C_0)$ be a bad $4$-vertex, or a $5$-vertex incident with a bad $(5, 4, 3)$-face, or a $5$-vertex incident with a $(5,3,3)$-face from $F_3$. If $v$ is incident with a $4$-face $f$, then its diagonal vertex on $b(f)$ is a $4^+$-vertex. \end{Lemma} \begin{proof} We consider the case when $d(v)=5$. The other cases are very similar and simpler. Let $f_1=v v_1 v_2$ be a bad $(5, 4, 3)$-face with $d(v_1)= 4$, and let $f_3=v v_3 u_3 v_4$ be a $4$-face with $d(u_3)= 3$ in $G$. Let $v'_1, v''_1$ be the two other neighbors of $v_1$ with $d(v'_1)=4$ and $d(v''_1)= 3$. Let $G'= G \setminus S$ and $H=G'[\{v_3, v_4\}]$, where $S= \{v, v_1, v_2, v'_1, v''_1\}$. Let $v^*_3$ be the resulting vertex by identifying $v_3$ with $v_4$. By Lemma~\ref{le22} (5), $H \in {\mathcal F}$. Since $\sigma(H)< \sigma(G)$, $C_0$ has a superextension $\phi_H$ on $H$.
Based on $\phi_H$, we color $v_3, v_4$ with the color $\phi_H(v^*_3)$ and properly recolor $u_3$ with a color in $\{1, 2, 3\}\setminus \{\phi_H(v^*_3), \phi_H(u'_3)\}$, where $u'_3$ is the other neighbor of $u_3$ in $G$. Next, properly color $v$ with a color in $\{1, 2, 3\}\setminus \{\phi_H(v^*_3), \phi_H(v_5)\}$, and properly color $v_2$, $v'_1, v''_1$ in order, and finally color $v_1$, as it has four properly colored neighbors. Thus, $C_0$ has a superextension $\phi_G$ on $G$, a contradiction. \end{proof} \medskip For a positive integer $n$, let $[n] = \{1, 2, \ldots, n\}$. For a vertex $v\in int(C_0)$ with $d(v)=k$, let $v_1, v_2, \ldots, v_k$ denote the neighbors of $v$ in a cyclic order. Let $f_i$ be the face with $vv_i$ and $vv_{i+1}$ as two boundary edges for $i=1, 2, \ldots, k$, where the subscripts are taken modulo $k$. A $k$-vertex $v\in int(C_0)$ is {\em poor} if it is incident with $k$ $4$-faces from $F_4$, and {\em rich} otherwise. The following is a very useful lemma in the remaining proofs. \begin{Lemma}\label{l215}(Lemma 3.10 from~\cite{LLY15b}) Let $v\in int(C_0)$ be a $4$-vertex with $N(v) = \{v_i \, :\,i\in [4]\}$. If $v$ is incident with two $4$-faces that share exactly an edge, then no $t$-path joins $v_i$ and $v_{i+2}$ in $G$ for $t\in \{1, 2, 3, 5\}$, where the subscripts of $v$ are taken modulo $4$. \end{Lemma} \begin{Lemma}\label{l217} Let $v\in int(C_0)$ be a $4$-vertex incident with a $4$-face $f_i=vv_iu_iv_{i+1}$. Then each of the following holds, where the subscripts are taken modulo $4$. \begin{enumerate} \item If $d(v_i)=d(u_i)=3$, then $f_{i-1}$ and $f_{i+1}$ are $6^+$-faces. Consequently, if $v$ is poor, then $v$ is not incident with $(3,3,4,4^+)$-faces. \item (Lemma 3.11 (1) in \cite{LLY15b}) If $f_{i+1}=vv_{i+1}u_{i+1}v_{i+2}$, then $d(u_i)\geq 4$ or $d(u_{i+1})\geq 4$. \item (Lemma 3.11 (2) in \cite{LLY15b}) If $f_{i+2}=vv_{i+2}u_{i+2}v_{i+3}$, then $d(u_i)\geq 4$ or $d(u_{i+2})\geq 4$.
\item (Lemma 3.12 from~\cite{LLY15b}) If $v$ is a poor $4$-vertex, then either $d(v_i)\geq 5$ or $d(v_{i+2})\ge 5$. \end{enumerate} \end{Lemma} \begin{proof} (1) Suppose that $f_{i-1}$ and $f_i$ are 4-faces with $d(u_i)=d(v_i)=3$, where the $v_i$ are the neighbors of the 4-vertex $v$. Identifying $v_{i-1}, v_i$, and $v_{i+1}$ into one vertex, we obtain a new graph in $\mathcal{F}$, so the new graph is $(1,1,0)$-colorable. Now the original graph has a $(1,1,0)$-coloring, unless $u_{i-1}$ has the same color as $v_{i-1}, v_i$, and $v_{i+1}$, which is $1$ or $2$ and which by symmetry we assume to be $1$. We uncolor $v_i$ and $v$, and then color $u_i$ and $v$ properly. Clearly, $u_i$ and $v$ are colored 2 or 3. If $u_i$ and $v$ are colored differently, color $v_i$ with 2; if $u_i$ and $v$ are colored the same, color $v_i$ with an available color. \end{proof} \iffalse \begin{Lemma}\label{l218} (1) Suppose that $v\in int(C_0)$ is a poor $3$-vertex. If $d(v_i)=d(v_{i+1})=4$, then $d(v_{i+2})\geq 5$, where the subscripts are taken modulo $3$. (2) Suppose that a $4$-vertex $v\in int(C_0)$ is incident with exactly three $4$-faces $f_i=vv_iu_iv_{i+1}$ for $i\in \{1, 2, 3\}$. If $d(v_2)=d(v_3)=4$, then either $d(v_1)\geq 4$ or $d(v_4)\geq 4$. \end{Lemma} \begin{proof} (1) Let $d(v_1)=d(v_2)=4$. Suppose otherwise that $d(v_{3})\leq 4$. Let $H= G[\{v, u_1, u_2, u_3\}]$. By Lemmas~\ref{le22} (5) and~\ref{l215}, $H\in \mathcal{F}$. Let $c$ be a coloring of $(H,C_0)$. Let $v'$ be the resulting vertex of the identification. In $G$, color $u_1, u_2, u_3$ with $c(v')$, and properly color $v_1, v_2, v_3$ in the order, and finally color $v$. Thus, $(G, C_0)$ is superextendable, a contradiction. (2) Assume that $d(v_1)=d(v_4)=3$. Let $H=G'[\{v, u_1, u_2, u_3\}]$. By Lemma~\ref{l215}, $H\in \mathcal{F}$. Then $(H,C_0)$ is superextendable, and let $c$ be a coloring of $(H,C_0)$.
In $G$, color $u_1, u_2, u_3$ with the identified vertex, and properly (re)color $v_2, v_3, v_1, v_4$ in the order, then color $v$, we obtain a coloring of $G$, a contradiction. \end{proof} \begin{Lemma}\label{l219} Let $v\in int(C_0)$ be a poor $4$-vertex in $G$. Then \begin{enumerate}[(1)] \item (Lemma 3.12 from~\cite{LLY15b}) Either $d(v_i)\geq 5$ or $d(v_{i+2})\ge 5$. \item If $d(u_i)= 3$, then either $d(v_{i+2})\geq 5$ or $d(v_{i+3})\geq 5$. \item $v$ is incident with at most one $(4, 4, 4, 3)$-face. \end{enumerate} \end{Lemma} \begin{proof} (1) Suppose otherwise that $d(v_1\leq 4$ and $d(v_3)\leq 4$. Let $G'=G[\{u_1, u_2\}, \{v_2, v_4\}, \{u_3, u_4\}]$. It is easy to verify that $G'\in {\mathcal F}$. By the choice of $G$, $G'$ has a $(1,1,0)$-coloring. Color $v_1, v_3$ properly. Note that $v$ has at most three different colors on its neighborhood. Thus, 1 or 2 can be used to color $v$. (2). Assume by symmetry that $d(u_1)= 3$. Suppose to the contrary that $d(v_3)\leq 4$ and $d(v_4)\leq 4$. Let $H=G[\{v_1, v_2\}, \{v, u_2, u_3, u_4\}]$. By Lemmas~\ref{le22} (5) and~\ref{l215}, $H\in {\mathcal F}$. Then $(H,C_0)$ is superextendable, and let $c$ be a coloring of $(H, C_0)$. In $G$, color $v_1, v_2$ and $u_2, u_3, u_4$ with the colors of the identified vertices, respectively. Recolor $v_3, v_4, u_1$ properly in the order. Now $v$ can be colored, as it is adjacent to four properly colored vertices, a contradiction. (3) By symmetry, assume first that $d(u_1)= 3$. By Lemmas~\ref{l217} and~\ref{l219}(2), $d(u_i)\geq 4$ for $i\in [4]$ while $i\not= 1$, and either $d(v_3)\geq 5$ or $d(v_4)\geq 5$. In this case, if $d(v_3)\geq 5$, by Lemma~\ref{l217}(4), either $d(v_2)\geq 5$ or $d(v_4)\geq 5$; if $d(v_4)\geq 5$, by Lemma~\ref{l217}(4), either $d(v_1)\geq 5$ or $d(v_3)\geq 5$. In either case, It implies that $H(v)$ contains at least three $4$-faces, each of which holds at least one $5$-vertex. Assume next that $d(v_1)= 3$. 
By Lemmas~\ref{l219}(1), $d(v_3)\geq 5$ and either $d(v_2)\geq 5$ or $d(v_4)\geq 5$. In both cases, $H(v)$ contains at most one $(4, 4, 4, 3)$-face, and so the result follows. \end{proof} \fi Let $v$ be a $5^+$-vertex in $int(C_0)$. For convenience, we use $Q_4(v)$ to denote the set of poor $4$-vertices in $N(v)\setminus C_0$ that are incident with $(3, 4, 4, 4)$-faces from $F_4$. \begin{Lemma}\label{l220} Let $v$ be a poor $5$-vertex in $G$. Then \begin{enumerate} \item (Lemma 3.13(2) from~\cite{LLY15b}) At most two vertices in $\{u_i\, : \, i\in [5]\}$ are $3$-vertices. \item (Lemma 3.13(3) from~\cite{LLY15b}) If $d(u_i)= 3$, then either $d(v_{i-1})\geq 5$ or $d(v_{i+2})\geq 5$. \item (Lemma 3.13(1) from~\cite{LLY15b}) If $d(u_i)=d(v_i)= 3$, then $d(u_{j})\geq 4$ for $j\in [5]\setminus \{i\}$. \item If $f_i$ is a $(5, 4, 3, 4)$-face, then at most one of $v_i, v_{i+1}$ is in $Q_4(v)$. \item If $d(v_i)=d(u_i)=d(v_{i+2})=3$, then $d(v_{j})\geq 5$ for $j\in [5]\setminus \{i, i+2\}$. \item If $f_i$ is a $(5, 3, 4, 4)$-face such that $u_i$ and $v_{i+1}$ are poor $4$-vertices, then $v_{i+1}\not\in Q_4(v)$. \end{enumerate} \end{Lemma} \begin{proof} (4) Without loss of generality, assume that $f_1=vv_1u_1v_2$ is a $(5, 4, 3, 4)$-face. Assume further that $v_1, v_2$ are in $Q_4(v)$. By Lemma~\ref{l217}(4), $d(u_2)\geq 5$ and $d(u_5)\geq 5$ as $d(u_1)=3$. Let $N(u_1)=\{v_1, v_2, w\}$ and $N(w)\cap N(v_1)=\{v_1'\}$ and $N(w)\cap N(v_2)=\{v_2'\}$. It follows that $u_1v_1v_1'w$ and $u_1v_2v_2'w$ are $(3,4,4,4)$-faces. So $d(v_1')=d(v_2')=d(w)=d(v_1)=d(v_2)=4$. Let $H= G[\{u_1, v_1', v_2', v\}]$. By Lemmas~\ref{le22} (5) and~\ref{l215}, $H\in \mathcal{F}$. Let $c$ be a coloring of $(H,C_0)$, and let $v'$ be the resulting vertex of the identification. In $G$, color $u_1, v_1', v_2', v$ with $c(v')$, and properly color $v_1, v_2, w$ in the order. Thus, $(G, C_0)$ is superextendable, a contradiction. (5) By symmetry, assume that $d(v_1)=d(u_1)=d(v_3)=3$.
By Lemma~\ref{l213} (2), $d(v_2)\ge 4$, and furthermore, by Lemma~\ref{l217} (2), $d(v_2)\geq 5$. We show that $d(v_5)\geq 5$. Suppose otherwise that $d(v_5)\le 4$. Then by Lemma~\ref{l213}(2), $d(v_5)=4$. Let $G'=G-\{v,v_5\}$ and $H=G'[\{u_4, u_5\}, \{v_2, v_4\}]$. By Lemmas~\ref{le22} (5) and~\ref{l215}, $H\in \mathcal{F}$. Then $(H,C_0)$ is superextendable, and let $c$ be a coloring of $(H,C_0)$. In $G$, color $v_2, v_4$ and $u_4, u_5$ with the colors of the identified vertices, respectively, then properly recolor $v_5, u_1, v_1, v_3$ in the order. Let $c'$ be the resulting coloring of $G-v$. Now we color $v$. If $c'(v_2)=c'(v_4)=3$, then $v$ can be colored, as the other three colored neighbors are all properly colored, so we may assume that $c'(v_2)=c'(v_4)=1$. If $c'(v_1)=1$, then clearly $v$ can be colored; if $c'(v_1)=3$, then we uncolor $v_1$ and color $v$ with $3$ (if $c'(v_3), c'(v_5)\not=3$) or with $2$ (if $c'(v_3)=3$ or $c'(v_5)=3$), and now $v_1$ can be colored, as $c'(u_1)\in \{1,2\}$ and $u_1$ is properly colored, a contradiction. Similarly, we have $d(v_4)\ge 5$. (6) As $v_{i+1}$ is a poor $4$-vertex and $d(u_i)=4$, by Lemma~\ref{l217}(4), $d(u')\ge 5$, where $u'$ is the diagonal vertex to $v$ on the $4$-face incident with $v_{i+1}$; similarly, as $u_i$ is a poor $4$-vertex and $d(v_i)=3$, by Lemma~\ref{l217}(4), $d(u_i')\ge 5$, where $u_i'$ is the diagonal vertex to $v_{i+1}$ on the $4$-face incident with $u_i$. It follows that no $4$-face incident with $v_{i+1}$ is a $(3,4,4,4)$-face, so $v_{i+1}\not\in Q_4(v)$. \end{proof} \section{Discharging procedure} In this section, we prove the main theorem by a discharging argument. Let the initial charge of a vertex $v$ be $\mu(v)=2d(v)-6$, the initial charge of a face $f\not=C_0$ be $\mu (f)=d(f)-6$, and $\mu(C_0)= d(C_0)+6$. By Euler's formula, $\sum_{x\in V\cup F}\mu (x)=0$.
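The identity $\sum_{x\in V\cup F}\mu(x)=0$ follows from Euler's formula: $\sum_v(2d(v)-6)+\sum_f(d(f)-6)=6(|E|-|V|-|F|)=-12$, and $C_0$ carries an extra $+12$ since its charge is $d(C_0)+6$ rather than $d(C_0)-6$. A minimal numerical check on a plane drawing of $K_4$ (an illustrative toy graph only; $K_4$ itself is not in $\mathcal F$, but the charge identity holds for any connected plane graph):

```python
# Plane drawing of K4: four vertices of degree 3 and four triangular
# faces; the first face plays the role of the outer face C0.
deg = [3, 3, 3, 3]
face_len = [3, 3, 3, 3]

mu_vertices = sum(2 * d - 6 for d in deg)                       # 2d(v) - 6
mu_faces = (face_len[0] + 6) + sum(l - 6 for l in face_len[1:])  # C0: d + 6
print(mu_vertices + mu_faces)   # 0
```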
A $(3, 4, 4, 5^+)$- or $(3, 4, 5^+, 4)$-face $f\in F_4$ is {\em superlight} if both $4$-vertices on $b(f)$ are poor and {\em light} otherwise. \medskip The following are the discharging rules: \begin{enumerate}[(R1)] \item[(R1)] Let $v\in int(C_0)$ with $d(v)=4$ and $f\in F_3\cup F_4$ be a face incident with $v$. \begin{enumerate} \item[(R1.1)] When $f\in F_3$, $f$ gets $1$ from $v$, unless $v$ is incident with $f$ and a $(3,4,4)$-face $f'\in F_3$, in which case, $v$ gives $\frac{5}{4}$ to $f'$ and $\frac{3}{4}$ to $f$. \item[(R1.2)] When $f\in F_4$, $f$ gets $1$ from $v$ if it is a $(4, 3, 3, 4^+)$-face, $\frac{2}{3}$ if it is a $(4, 4, 4, 3)$-face or $v$ is rich, and $\frac{1}{2}$ otherwise. \end{enumerate} \item[(R2)] Let $v\in int(C_0)$ with $d(v)\ge 5$. \begin{enumerate} \item[(R2.1)] Let $f=vuw$ be a $3$-face in $F_3$ incident with $v$. \begin{enumerate}[(R2a)] \item Let $f$ be a $(5^+, 3,3)$-face. Then $v$ gives $2$ to $f$. \item Let $f$ be a $(5^+, 4, 3)$-face. Then $v$ gives $\frac{9}{4}$ to $f$ if $u$ is bad and $w$ is weak; $2$ to $f$ if $u$ is not bad and $w$ is weak; $\frac{7}{4}$ to $f$ if $u$ is bad and $w$ is strong; $\frac{3}{2}$ to $f$ if $u$ is not bad and $w$ is strong. \item Let $f$ be a $(5^+, 5^+, 3)$-face. Then $v$ gives $\frac{5}{4}$ to $f$ if $v$ is weak, and $\frac{7}{4}$ otherwise. \item Let $f$ be a $(5^+, 4^+, 4^+)$-face. Then $v$ gives $\frac{3}{2}$ to $f$ if both $u$, $w$ are bad $4$-vertices; $\frac{5}{4}$ to $f$ if exactly one of $u$ and $w$ is a bad $4$-vertex; $1$ to other $(5^+, 4^+, 4^+)$-faces. \end{enumerate} \item[(R2.2)] For each $4$-face $f\in F_4$ incident with $v$, $v$ gives $1$ to $f$ if $f$ is a $(5^+, 4^+, 3, 3)$-face or a superlight $(3,4,4,5^+)$- or $(3,4,5^+,4)$-face; $\frac{5}{6}$ to $f$ if $f$ is a light $(3,4,4,5^+)$- or $(3,4,5^+,4)$-face; $\frac{3}{4}$ to $f$ if $f$ is a $(3, 4, 5^+, 5^+)$-face or a $(3, 5^+, 4, 5^+)$-face; $\frac{1}{2}$ to $f$ if $f$ is a $(4^+, 4^+, 4^+, 5^+)$-face. 
\item[(R2.3)] If $Q_4(v)\neq \emptyset$, then $v$ gives $\frac{1}{6}$ to each $4$-vertex in $Q_4(v)$. \end{enumerate} \item[(R3)] Each $4^+$-vertex sends $\frac{1}{2}$ to each of its pendant 3-faces in $F_3$. \item[(R4)] Let $v\in C_0$. Then $v$ gives $3$ to each incident $3$-face from $F'_3$; $\frac{3}{2}$ to each incident face from $F''_3$; $1$ to each incident $4$-face from $F''_4$. \item[(R5)] $C_0$ gives $2$ to each $2$-vertex on $C_0$; $\frac{3}{2}$ to each $3$-vertex on $C_0$; $1$ to each $4$-vertex on $C_0$; and $\frac{1}{2}$ to each $5$-vertex on $C_0$. In addition, if $C_0$ is a $7$-face with six $2$-vertices, then it gets $1$ from the incident face. \end{enumerate} We will show that each $x\in F(G)\cup V(G)$ has final charge $\mu^*(x)\ge 0$ and at least one face has positive charge, to reach a contradiction. As $G$ contains no $5$-faces, and $6^+$-faces other than $C_0$ are not involved in the discharging procedure, we first check the final charge of the $3$- and $4$-faces other than $C_0$. \begin{Lemma}\label{le31} Let $f$ be an $i$-face in $F(G)\setminus C_0$ for $i=3, 4$. Then $\mu^*(f)\geq 0$. \end{Lemma} \begin{proof} Suppose that $d(f)=3$ and $f=v u w$ with $d(v)\leq d(u)\leq d(w)$. By Lemma~\ref{le22} (3), $|b(f)\cap C_0|\leq 2$. If $|b(f)\cap C_0|= 2$, then $f\in F''_3$, and by (R4), $\mu^*(f)\geq -3+2\times \frac{3}{2}=0$; if $|b(f)\cap C_0|= 1$, then $f\in F'_3$, and by (R4), $\mu^*(f)\geq -3+3=0$. Hence, let $|b(f)\cap C_0|= 0$. By Proposition~\ref{pr21}, $d(v)\geq 3$. Assume first that $d(v)=3$. If $f$ is a $(3, 3, a)$-face, then by Lemma~\ref{le27} (2), $a\geq 5$ and the outer neighbors of $u, v$ are of degree at least $4$ or on $C_0$, so by (R2a) and (R3), $\mu^*(f)\geq -3+2\times \frac{1}{2}+2=0$. If $f$ is a $(3, 4, 4)$-face, then by Lemma~\ref{le27} (4), the third neighbor of $v$ is a $4^+$-vertex or on $C_0$, so by (R1.1) and (R3), $\mu^*(f)\geq -3+2\times \frac{5}{4}+\frac{1}{2}=0$. Now let $f$ be a $(3, 4, 5^+)$-face.
Then by (R1.1) and (R2b), $\mu^*(f)\ge -3+\frac{9}{4}+\frac{3}{4}=0$ if $v$ is weak and $u$ is bad; $\mu^*(f)\ge -3+\frac{7}{4}+\frac{3}{4}+\frac{1}{2}=0$ if $v$ is strong and $u$ is bad; $\mu^*(f)\ge -3+2+1=0$ if $v$ is weak and $u$ is not bad; $\mu^*(f)\ge -3+\frac{3}{2}+1+\frac{1}{2}=0$ if $v$ is strong and $u$ is not bad. Assume that $d(v)=4$. Then $d(w)\geq d(u)\geq 4$. If $f$ is a $(4, 4, 4)$-face, then by Lemma~\ref{le27} (5), none of the $4$-vertices on $f$ can be bad, thus by (R1.1), $\mu^*(f)\geq -3+3\times 1=0$. Now assume that $f$ is a $(4, 4^+, 5^+)$-face. In this case, if $v, u$ are two bad $4$-vertices, then by (R1.1) and (R2c), $f$ receives at least $\frac{3}{2}$ from $w$ and at least $\frac{3}{4}$ from each of $v$ and $u$, thus $\mu^*(f)\geq -3+2\times \frac{3}{4}+\frac{3}{2}=0$; if one of $v, u$ is not bad, then by (R1.1) and (R2c), $w$ gives at least $\frac{5}{4}$ to $f$ and $u, v$ give at least $(\frac{3}{4}+1)$ to $f$, so $\mu^*(f)\geq -3+(\frac{3}{4}+1)+\frac{5}{4}=0$; in the other cases, by (R1.1) and (R2c), $f$ receives at least $1$ from each vertex on $b(f)$, thus $\mu^*(f)\geq -3+3\times 1=0$. Finally, let $d(v)=3$ and $d(u)\ge 5$, so that $f$ is a $(5^+, 5^+, 3)$-face; if instead $d(v)\ge 5$, then by (R2c), $\mu^*(f)\ge -3+3\times 1\ge 0$. For a $(5^+, 5^+, 3)$-face, by Lemma~\ref{l210} (1), at most two of the three vertices are weak, so by (R2c) and (R3), $\mu^*(f)\geq -3+\min\{\frac{5}{4}+ \frac{7}{4}, 2\times \frac{7}{4}, 2\times\frac{5}{4}+\frac{1}{2}\}\ge 0$. \medskip Suppose that $d(f)= 4$ and $f=vuwx$. By Lemma~\ref{le22} (4), $|b(f)\cap C_0|\leq 2$. If $|b(f)\cap C_0|= 2$, then $f\in F''_4$, and by (R4), $\mu^*(f)\geq -2+2\times 1=0$. By Lemma~\ref{l213} (1), $F'_4=\emptyset$. Hence, assume that $|b(f)\cap C_0|= 0$. By Proposition~\ref{pr21}, $d(z)\geq 3$ for each $z\in b(f)$. By Lemma~\ref{l213} (2), if $d(z)=3$ for some $z\in b(f)$, then its diagonal vertex on $b(f)$ is a $4^+$-vertex. If $f$ is a $(3, 3, 4^+, 4^+)$-face, then by (R1.2) and (R2.2), $\mu^*(f)\geq -2+2\times 1=0$.
If $f$ is a $(3, 4, 4, 4)$-face, then by (R1.2), $\mu^*(f)\geq -2+3\times \frac{2}{3}=0$. If $f$ is a $(3, 4^+, 5^+, 5^+)$-face or a $(3, 5^+, 4^+, 5^+)$-face, then by (R2.2), $\mu^*(f)\ge -2+2\cdot \frac{3}{4}+\frac{1}{2}=0$. If $f$ is a $(4^+, 4^+, 4^+, 4^+)$-face, then by (R1.2) and (R2.2), $\mu^*(f)\geq -2+4\times \frac{1}{2}=0$. Finally, let $f$ be a $(3, 4, 4, 5^+)$-face or a $(3, 4, 5^+, 4)$-face. If $f$ is superlight, then $f$ gets $1$ from the $5^+$-vertex on $b(f)$ and at least $\frac{1}{2}$ from each $4$-vertex on $b(f)$, thus $\mu^*(f)\geq -2+1+2\times \frac{1}{2}=0$. Otherwise $f$ is light, and by (R2.2), $f$ receives $\frac{5}{6}$ from the $5^+$-vertex, $\frac{2}{3}$ from a rich $4$-vertex and at least $\frac{1}{2}$ from the other $4$-vertex on $b(f)$, so $\mu^*(f)\geq -2+\frac{5}{6}+\frac{2}{3}+\frac{1}{2}=0$. \end{proof} \medskip Let $v$ be a $k$-vertex in $int(C_0)$. Let $t_i$ be the number of $i$-faces incident with $v$ in $F_i$ for $i\in \{3,4\}$. Let $t_p$ be the number of pendant $3$-faces adjacent to $v$. By Proposition~\ref{pr21}, \begin{equation}\label{eq1} t_3\leq \lfloor \frac{k}{2}\rfloor,\; \text{and}\; t_4\leq \max\{0, k-2t_3-t_p-1\} \text{ if $t_4>0$.} \end{equation} \begin{Lemma}\label{le32} Let $v\in int(C_0)$ be a $4$-vertex. Then $\mu^*(v)\geq 0$. \end{Lemma} \begin{proof} If $N(v)\cap C_0\not=\emptyset$, then $t_3\le 1$, thus $\mu^*(v)\ge 2-\max\{\frac{5}{4}+\frac{1}{2}, 2\cdot 1, 3\cdot \frac{1}{2}\}\ge 0$. So, let $N(v)\cap C_0=\emptyset$. Clearly, $t_3\le 2$. If $t_3= 2$, then by Lemma~\ref{le27} (5), at most one of the triangles is a $(3,4,4)$-face, thus by (R1.1), $\mu^*(v)\geq 2-\max\{\frac{5}{4}+\frac{3}{4}, 2\cdot 1\}=0$.
If $(t_3, t_4)=(1,1)$, then when $v$ is not bad, by (R1.1) and (R1.2), $v$ gives at most $1$ to each of the incident faces, thus $\mu^*(v)\ge 2-2\cdot 1=0$, and when $v$ is bad, $v$ cannot be incident with a $(3, 3, 4, 4^+)$-face by Lemma~\ref{l214}, so by (R1.1) and (R1.2), $\mu^*(v)\geq 2-\frac{5}{4}-\frac{2}{3}=\frac{1}{12}>0$. Let $(t_3, t_4)=(1,0)$. Then $0\le t_p\leq 2$. By Lemma~\ref{le27} (3), at least one of the other neighbors of $v$ is a $4^+$-vertex or in $C_0$ when $v$ is bad, thus by (R1.1) and (R3), $\mu^*(v)\geq 2-\max\{\frac{5}{4}+\frac{1}{2}, 1+2\times\frac{1}{2}\}=0$. Now, we assume that $t_3= 0$. If $t_p\ge 2$, then $t_4\leq 1$, so by (R1.2) and (R3), $\mu^*(v)\geq 2- \max\{4\cdot \frac{1}{2}, 1+2\times \frac{1}{2}\}=0$. Assume that $t_p=1$ and $t_4=2$. Let $v$ be incident with $4$-faces $f_3=v v_2 u_2 v_3$ and $f_4=v v_3 u_3 v_4$ in $F_4$. By Lemmas~\ref{l213} (2) and~\ref{l217}, at most two of the vertices in $\{v_2, u_2, v_3, u_3, v_4\}$ are $3$-vertices, and when $d(v_3)=3$, none of the other vertices is a $3$-vertex, so by (R1.2) and (R3), $v$ gives at most $\max\{2\cdot\frac{2}{3}, 1+\frac{1}{2}\}=\frac{3}{2}$ to $f_3$ and $f_4$, thus $\mu^*(v)\geq 2-\frac{3}{2}-\frac{1}{2}=0$. Lastly, let $t_3=t_p=0$. If $t_4\le 2$, by (R1.2), $\mu^*(v)\geq 2-2\times 1=0$. Let $t_4= 3$. If $v$ is not incident with a $(4, 3, 3, 4^+)$-face, then by (R1.2), $\mu^*(v)\geq 2-3\times \frac{2}{3}=0$; if $v$ is incident with a $(4, 3, 3, 4^+)$-face, then by Lemma~\ref{l217}, the other incident $4$-faces are $(4, 4^+, 4^+, 4^+)$-faces, so by (R1.2), $\mu^*(v)\geq 2-1-2\times \frac{1}{2}=0$. Hence assume that $t_4= 4$, that is, $v$ is poor. By Lemma~\ref{l217} (4), $v$ is adjacent to at least two $5^+$-vertices, and without loss of generality, let $d(v_3), d(v_4)\ge 5$. By Lemma~\ref{l217} (1), $v$ is not incident with $(3,3,4,4^+)$-faces.
Thus, if $v$ is not incident with $(3,4,4,4)$-faces, then by (R1), $\mu^*(v)\ge 2-4\times \frac{1}{2}=0$, and if $v$ is incident with a $(3,4,4,4)$-face, then by (R2.3), $v$ gets $\frac{1}{6}$ from each of its $5^+$-neighbors, so by (R1) and (R2.3), $\mu^*(v)\geq 2-3\times \frac{1}{2}-\frac{2}{3}+2\cdot \frac{1}{6}>0$. \end{proof} \begin{Lemma}\label{le33} Let $v\in int(C_0)$ be a $k$-vertex with $k\geq 5$. If $u\in Q_4(v)$, then one of the $4$-faces that contain $uv$ as an edge contains no $3$-vertices or is a $(3, 5^+,4,5^+)$-face. \end{Lemma} \begin{proof} As $u\in Q_4(v)$, $u$ is a poor $4$-vertex and incident with one $(3, 4, 4, 4)$-face. Suppose that $f_i=u v_i u_i v_{i+1}$ for $i\in [4]$, where $v= v_4$ and the subscripts are taken modulo $4$. We show that $f_3$ or $f_4$ contains no $3$-vertices or is a $(3, 5^+,4,5^+)$-face. If $d(u_3)\geq 4$ and $d(u_4)\geq 4$, then by Lemma~\ref{l217} (4), either $d(v_1)\ge 5$ or $d(v_3)\geq 5$, so $f_3$ or $f_4$ cannot contain $3$-vertices. Thus, by symmetry, let $d(u_3)= 3$. By Lemma~\ref{l217} (1)-(3), $d(v_3)\geq 4$ and $d(u_j)\ge 4$ for $j\in [4]\setminus \{3\}$. So $f_4$ contains no $3$-vertices if $d(v_1)\not=3$. Let $d(v_1)=3$. Then $v_1uv_2u_1$ is the $(3,4,4,4)$-face. By Lemma~\ref{l217} (4), $d(v_3)\geq 5$, so $f_3$ is a $(3,5^+,4,5^+)$-face, as desired. \end{proof} Let $v\in int(C_0)$ with $d(v)=k\geq 5$. By Lemma~\ref{le33}, a vertex in $Q_4(v)$ must either share a $4$-face without $3$-vertices with $v$, or lie on a $(3,5^+,4,5^+)$-face. In the former case, the $4$-face contains at most two vertices from $Q_4(v)$, so the total charge from $v$ to these vertices and the $4$-face is at most $\frac{1}{2}+2\cdot \frac{1}{6}<1$. In the latter case, the face contains exactly one vertex from $Q_4(v)$, so by (R2), the total charge from $v$ to this vertex and the $4$-face is at most $\frac{3}{4}+\frac{1}{6}<1$.
Thus, by (R2), \begin{align} \mu^*(v) &\geq 2k-6-\frac{9}{4}t_3-t_4-\frac{1}{2}t_p\label{eq2}\\ &\geq (k-2t_3-t_4-t_p)+(\frac{7}{8}k-6)\ge \frac{7}{8}k-6\; \text{ (as $t_3\leq \lfloor\frac{k}{2}\rfloor$)}.\label{eq3} \end{align} \begin{Lemma}\label{le34} Suppose that $v\in int(C_0)$ is a $5$-vertex with $t_3> 0$. Then $\mu^*(v)\geq 0$. \end{Lemma} \begin{proof} If $|N(v)\cap C_0|\geq 2$, then $t_3+t_p\leq 2$ and $t_4=0$, as $t_3>0$, so by (R1)-(R5), $\mu^*(v)\ge 4-\frac{9}{4}-\frac{1}{2}>0$. If $|N(v)\cap C_0|=1$, then $v$ cannot be incident with two bad 3-faces in $F_3$ by Lemma~\ref{le29} (2), and when $v$ is incident with a bad $3$-face and a $(3,4,5)$-face, the $3$-vertex on the $(3,4,5)$-face is strong by Lemma~\ref{le29} (3), so $\mu^*(v)\geq 4-\max\{\frac{9}{4}+\frac{1}{2}, \frac{9}{4}+\frac{7}{4}, 2\times 2\}=0$ by (R2.1)-(R2.3) and (R3). Therefore, we assume that $|N(v)\cap C_0|=0$. Assume first that $t_3=2$. Let $f_1=vv_1v_2$ and $f_3=vv_3v_4$ be the incident $3$-faces and $v_5$ be the fifth neighbor of $v$. By Lemma~\ref{le29} (2), at most one of $f_1, f_3$ is bad. If both $f_1$ and $f_3$ are $(3, 4^-, 5)$-faces, then by Lemma~\ref{le29} (1), $d(v_5)\ge 4$ or $v_5\in C_0$, and by Lemma~\ref{le29} (3), if one is bad, then the $3$-vertex on the other one is strong, thus by (R2b), $\mu^*(v)\geq 4-\max\{\frac{9}{4}+\frac{7}{4}, 2\times 2\}= 0$. If $f_1$ is a $(3,4,5)$-face and $f_3$ is a $(3, 5, 5^+)$-face, then by (R2) and (R3), $\mu^*(v)\geq 4-(\frac{9}{4}+\frac{5}{4}+\frac{1}{2})= 0$ if $v$ is weak, and $\mu^*(v)\ge 4-\max\{\frac{9}{4}+\frac{7}{4}, \frac{7}{4}+\frac{7}{4}+\frac{1}{2}\}\ge 0$ if $v$ is not weak. If none of $f_1,f_3$ is a $(3,4,5)$-face, then by (R2), $\mu^*(v)\ge 4-2\cdot \frac{7}{4}-\frac{1}{2}=0$. Finally, let $t_3= 1$. Then $t_4\leq 2$. If $t_4\leq 1$, then $|Q_4(v)|=0$, thus, by (R2a), (R2.2) and (R3), $\mu^*(v)\geq 4-\frac{9}{4}-1-\frac{1}{2}=\frac{1}{4}>0$.
Thus assume that $t_4= 2$ and let $f_1=vv_1v_2$, $f_3=vv_3u_3v_4$ and $f_4=vv_4u_4v_5$ be the incident faces. Note that $v_3, v_5$ are rich and $|Q_4(v)|\leq 1$. If $f_1$ is not bad, then by Lemma~\ref{le33} and by (R2.1), (R2.2) and~(R2.3), $\mu^*(v)\geq 4-2-\max\{1+\frac{3}{4}+\frac{1}{6}, 2\cdot 1\}=0$. Therefore, let $f_1$ be a bad $(5, 4, 3)$-face. By Lemma~\ref{l214}, $d(u_3)\geq 4$ and $d(u_4)\geq 4$. Consider $d(v_4)=3$ first. Then $|Q_4(v)|= 0$, and by Lemma~\ref{l213}, $d(v_3)\geq 4$ and $d(v_5)\geq 4$. Therefore, if $d(v_3)=d(v_5)=4$, then as $v_3, v_5$ are rich, by (R2a) and (R2.2), $v$ gives at most $\frac{5}{6}$ to each of $f_3, f_4$, thus, $\mu^*(v)\geq 4-\frac{9}{4} -2\times \frac{5}{6}=\frac{1}{12}>0$; if $d(v_3)\geq 5$ or $d(v_5)\geq 5$, then there are at least two $5^+$-vertices in $b(f_3)$ or $b(f_4)$, thus, by (R2a) and (R2.2), $\mu^*(v)\geq 4-\frac{9}{4}-1-\frac{3}{4}=0$. Assume next that $d(v_4)= 4$. Then $|Q_4(v)|\leq 1$. By Lemma~\ref{l217} (2), either $d(v_3)\geq 4$ or $d(v_5)\geq 4$. It means that $f_3$ or $f_4$ is a $(5, 4, 4^+, 4^+)$-face, so by (R2a), (R2.2) and (R2.3), $\mu^*(v)\geq 4-\frac{9}{4}-1-\frac{1}{2}-\frac{1}{6}=\frac{1}{12}>0$. Assume that $d(v_4)\geq 5$. Then $|Q_4(v)|=0$, and $f_3, f_4$ are $(5, 5^+, 4^+, 3^+)$-faces, so by (R2a) and (R2.2), $v$ gives at most $\frac{3}{4}$ to each of $f_3, f_4$, and then $\mu^*(v)\geq 4-\frac{9}{4}-2\times \frac{3}{4}=\frac{1}{4}>0$. \end{proof} \medskip For a poor $5$-vertex $v\in int(C_0)$, let $f(v)=(f_1, f_2, f_3, f_4, f_5)$, where $f_i=vv_iu_iv_{i+1}$ with $i\in {\mathcal Z}_5$, the cyclic group of order $5$. We say that $v$ gives a {\em charge sequence $(a_1, a_2, a_3, a_4, a_5)$ to $f(v)$} if $v$ gives at most $a_i$ to $f_i$ by (R2.2). \begin{Lemma}\label{le35} For each $5$-vertex $v\in int(C_0)$, $\mu^*(v)\geq 0$. \end{Lemma} \begin{proof} By Lemma~\ref{le34}, we may assume that $t_3=0$. By~(\ref{eq1}), $\mu^*(v)\ge 4-t_4-t_p/2\ge 0$ if $t_4\le 4$.
Thus, we let $t_4= 5$, that is, $v$ is poor. Let $M(v)=\{u_1, u_2, u_3, u_4, u_5\}$. By Lemma~\ref{l220}(1), $M(v)$ has at most two $3$-vertices, and by Lemma~\ref{l213} (2), there are at most two $3$-vertices in $N(v)$. \medskip \noindent{\bf Case 1.} $N(v)$ has exactly two $3$-vertices. \medskip By symmetry and Lemma~\ref{l213} (2), we may assume that $d(v_1)= d(v_3)= 3$. By Lemma~\ref{l213} (2), $d(v_2), d(v_4), d(v_5)\ge 4$. Furthermore, by Lemma~\ref{l217} (2), $d(v_2)\not=4$, thus $d(v_2)\ge 5$. Assume that some vertex, say $u_1$, in $M(v)$ has degree $3$. Then by Lemma~\ref{l220} (3), $d(u_j)\geq 4$ for $j\in [5]\setminus \{1\}$, and by Lemma~\ref{l220}(5), $d(v_j)\ge 5$ for $j\in [5]\setminus \{1, 3\}$, thus $|Q_4(v)|=0$, and $f_2, f_3, f_4, f_5$ are $(5, 5^+, 4^+, 3)$-, $(5, 3, 4^+, 5^+)$-, $(5, 4^+, 4^+, 5^+)$-, and $(5, 5^+, 4^+, 3)$-faces, respectively. By (R2.2) and (R2.3), $v$ gives a charge sequence $(1, \frac{3}{4}, \frac{3}{4}, \frac{1}{2}, \frac{3}{4})$ to $f(v)$, thus $\mu^*(v)\geq 4-1-3\times\frac{3}{4}-\frac{1}{2}=\frac{1}{4}>0$. Hence we assume that $M(v)$ has no $3$-vertex. As $f_1, f_2$ are $(3,4^+,5^+,5)$-faces and $f_4$ is a $(4^+, 4^+, 4^+,5)$-face, by (R2.2), $v$ gives $2\cdot\frac{3}{4}+\frac{1}{2}=2$ to them. Consider $f_3$ (and similarly $f_5$), which is a $(3,4^+,4^+,5)$-face. We claim that $$\text{$v$ gives at most $1$ to the face and the vertex in $Q_4(v)\cap b(f_3)$,}$$ which shows that $\mu^*(v)\ge 4-2-2\cdot1=0$. In fact, if it contains another $5^+$-vertex, then by (R2.2) and (R2.3), $v$ gives at most $\frac{3}{4}+\frac{1}{6}<1$, as desired. So let it be a $(3,4,4,5)$-face. If it contains two poor $4$-vertices, then it is superlight and by Lemma~\ref{l220}(4), it contains no vertex in $Q_4(v)$, thus by (R2.2), $v$ gives $1$ to it; otherwise, it is light, thus by (R2.2) and (R2.3), $v$ gives $\frac{5}{6}+\frac{1}{6}=1$ to it. \medskip \noindent{\bf Case 2.} $N(v)$ has exactly one $3$-vertex.
By symmetry, we assume that $d(v_1)= 3$. \medskip Assume first that $M(v)$ contains no $3$-vertex. Then each of $f_2, f_3,f_4$ is a $(5, 4^+, 4^+, 4^+)$-face, and $f_1$ is a $(5, 3, 4^+, 4^+)$-face and $f_5$ is a $(5, 4^+, 4^+, 3)$-face. By (R2.2) and (R2.3), $v$ gives a charge sequence $(1, \frac{1}{2}, \frac{1}{2},\frac{1}{2}, 1)$ to $f(v)$, thus $\mu^*(v)\geq 4-2\times 1-3\times \frac{1}{2}-3\times \frac{1}{6}=0$ if $|Q_4(v)|\le 3$. Thus, assume that $v_j\in Q_4(v)$ for $j\in [5]\setminus \{1\}$. If $u_1$ is a poor $4$-vertex, then $f_1=vv_1u_1v_2$ is a $(5,3,4,4)$-face such that $u_1,v_2$ are poor and $v_2\in Q_4(v)$, a contradiction to Lemma~\ref{l220}(6). Thus, assume that $u_1$ is not a poor 4-vertex. In this case, $f_1$ is light. By (R2.2), $v$ gives $\frac{5}{6}$ to $f_1$. Thus, $\mu^*(v)\geq 4-1-\frac{5}{6}-3\times \frac{1}{2}-4\times \frac{1}{6}=0$. \medskip Next, assume that $M(v)$ contains exactly one $3$-vertex. Let $d(u_1)= 3$ (or by symmetry $d(u_5)=3$). By our assumption, $d(u_j)\geq 4$ for $j\not= 1$ and $j\in [5]$. Thus, each of $f_2, f_3,f_4$ is a $(5, 4^+, 4^+, 4^+)$-face, $f_5$ is a $(5, 4^+, 4^+, 3)$-face. Note that if $d(v_2)\geq 5$, then $|Q_4(v)|\leq 3$; if $d(v_2)= 4$, then by Lemma~\ref{l217}(1), $v_2$ is rich, which implies that $v_2\not\in Q_4(v)$, then $|Q_4(v)|\leq 3$. By (R2.2) and (R2.3), $v$ gives a charge sequence $(1, \frac{1}{2}, \frac{1}{2},\frac{1}{2}, 1)$ to $f(v)$, and $\mu^*(v)\geq 4-2\times 1-3\times \frac{1}{2}-3\times \frac{1}{6}=0$. Hence, we may assume, by symmetry, that either $d(u_3)=3$ or $d(u_2)=3$. Let $d(u_3)=3$. By Lemma~\ref{l220} (2), either $d(v_2)\geq 5$ or $d(v_5)\geq 5$, and by symmetry we may assume that $d(v_5)\ge 5$. This implies that $f_5$ is a $(3, 5, 4^+, 5^+)$-face and $v_5\notin Q_4(v)$. In this case, both $f_2, f_4$ are two $(5, 4^+, 4^+, 4^+)$-faces, and $f_1$ is a $(5, 3, 4^+, 4^+)$-face. 
If $d(v_3)\geq 5$ or $d(v_4)\geq 5$, then $|Q_4(v)|\leq 2$, and by (R2.2) and (R2.3), $v$ gives a charge sequence $(1, \frac{1}{2}, \frac{3}{4}, \frac{1}{2}, \frac{3}{4})$ to $f(v)$, thus, $\mu^*(v)\geq 4-1-2\times \frac{3}{4}-2\times \frac{1}{2}-2\times \frac{1}{6}=\frac{1}{6}>0$. Then assume that $d(v_3)=d(v_4)= 4$. By Lemma~\ref{l220}(4), either $v_3\not\in Q_4(v)$ or $v_4\not\in Q_4(v)$. If $f_1$ is a $(5,3,4,4)$-face with two poor $4$-vertices, then by Lemma~\ref{l220}(6), $v_2\not\in Q_4(v)$, so $|Q_4(v)|\le 1$, and by (R2.2) and (R2.3), $v$ gives a charge sequence $(1, \frac{1}{2}, 1,\frac{1}{2},\frac{3}{4})$ to $f(v)$, thus, $\mu^*(v)\geq 4-1-\frac{1}{2}-1-\frac{1}{2}-\frac{3}{4}-\frac{1}{6}=\frac{1}{12}>0$; otherwise, $f_1$ is a light $(5,3,4^+,4^+)$-face, so by (R2.2) and (R2.3), $\mu^*(v)\ge 4-\frac{5}{6}-\frac{1}{2}-1-\frac{1}{2}-\frac{3}{4}-2\cdot \frac{1}{6}>0$. Let $d(u_2)=3$ now. By Lemma~\ref{l220} (2), $d(v_4)\geq 5$. Then both $f_3, f_4$ are $(5, 4^+, 4^+, 4^+)$-faces. If $v_2$ is a rich $4$-vertex or a $5^+$-vertex, then $v_2\notin Q_4(v)$ and $|Q_4(v)|\leq 2$, so by (R2.2) and (R2.3), $v$ gives at most $\frac{5}{6}$ to each of $f_1, f_2$, and $v$ gives a charge sequence $(\frac{5}{6}, \frac{5}{6}, \frac{1}{2}, \frac{1}{2}, 1)$ to $f(v)$; it follows that $\mu^*(v)\geq 4-2\times \frac{5}{6}-2\times \frac{1}{2}-1- 2\times\frac{1}{6}=0$. Therefore, we may assume that $v_2$ is a poor $4$-vertex. Then by Lemma~\ref{l217} (4), $d(u_1)\geq 5$ as $d(u_2)=3$, and by Lemma~\ref{l220}(4), $v_2\not\in Q_4(v)$ or $v_3\not\in Q_4(v)$. Consider $f_5$. By Lemma~\ref{l220}(6), it is either a light $(5,3,4^+, 4^+)$-face, or a $(5,3,4,4)$-face with two poor $4$-vertices but $v_5\not\in Q_4(v)$. By (R2.2) and (R2.3), $v$ gives $1$ to $f_2$, $\frac{3}{4}$ to $f_1$, and $\frac{5}{6}$ or $1$ to $f_5$ (depending on whether it is light or superlight).
Thus, $\mu^*(v)\geq 4-\frac{3}{4}-1-2\times \frac{1}{2}-\max\{1+\frac{1}{6}, \frac{5}{6}+2\cdot \frac{1}{6}\}=\frac{1}{12}>0$. \medskip Assume finally that $M(v)$ contains exactly two $3$-vertices. If $d(u_1)=3$ or $d(u_5)=3$, then by Lemma~\ref{l220} (3), $M(v)$ contains exactly one $3$-vertex, contrary to our assumption, so by symmetry, $d(u_2)=d(u_3)=3$ or $d(u_2)=d(u_4)=3$. Let $d(u_2)=d(u_3)=3$. By Lemma~\ref{l220} (2), $d(v_4)\geq 5$ as $d(u_2)=3$, and either $d(v_2)\geq 5$ or $d(v_5)\geq 5$ as $d(u_3)=3$. By Lemma~\ref{l217}(4), $v_3$ is not a poor $4$-vertex as $d(u_2), d(u_3)<4$. It follows that $|Q_4(v)|\le 1$. If $d(v_2)=4$, then $d(v_5)\geq 5$ and $f_2$ is a light $(5, 4, 3, 4^+)$-face, so by (R2.2) and (R2.3), $v$ gives a charge sequence $(1, \frac{5}{6}, \frac{3}{4}, \frac{1}{2}, \frac{3}{4})$ to $f(v)$, and $\mu^*(v)\geq 4-1-\frac{5}{6}-2\cdot\frac{3}{4}-\frac{1}{2}-\frac{1}{6}=0$; if $d(v_2)\geq 5$, then $f_2, f_3$ are $(5, 5^+, 3, 4)$-faces, so by (R2.2) and (R2.3), $v$ gives a charge sequence $(\frac{3}{4}, \frac{3}{4}, \frac{3}{4}, \frac{1}{2}, 1)$ to $f(v)$, and $\mu^*(v)\geq 4-1-3\cdot \frac{3}{4}-\frac{1}{2}-\frac{1}{6}>0$. Let $d(u_2)=d(u_4)=3$. By Lemma~\ref{l220} (2), $d(v_3)\geq 5$ and $d(v_4)\geq 5$. If both $v_2, v_5$ are poor $4$-vertices, then by applying Lemma~\ref{l217}(4) to $v_2$ and $v_5$, respectively, $d(u_1)\geq 5$ and $d(u_5)\geq 5$ as $d(u_2)=d(u_4)= 3$, so by (R2.2) and (R2.3), $v$ gives a charge sequence $(\frac{3}{4}, \frac{3}{4}, \frac{1}{2}, \frac{3}{4}, \frac{3}{4})$ to $f(v)$, and $\mu^*(v)\geq 4-4\times\frac{3}{4}-\frac{1}{2}-2\times\frac{1}{6}=\frac{1}{6}>0$. Then let $v_2$ be a rich $4$-vertex or a $5^+$-vertex. By (R2.2) and (R2.3), $v$ gives a charge sequence $(\frac{5}{6}, \frac{3}{4}, \frac{1}{2}, \frac{3}{4}, 1)$ to $f(v)$, and $\mu^*(v)\geq 4-1-\frac{5}{6}-2\cdot \frac{3}{4}-\frac{1}{2}-\frac{1}{6}=0$. \medskip \noindent{\bf Case 3.} $N(v)$ has no $3$-vertex. 
\medskip If $M(v)$ has at most one $3$-vertex, then $f(v)$ has at least four $(5, 4^+, 4^+, 4^+)$-faces, so by (R2.2) and (R2.3), $v$ gives the charge sequence $(1, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2})$ to $f(v)$, thus $\mu^*(v)\geq 4-1-4\times\frac{1}{2}-5\times\frac{1}{6}>0$. Hence by Lemma~\ref{l220} (1), we assume that $M(v)$ has exactly two $3$-vertices. By symmetry, we assume that $d(u_2)=d(u_3)=3$ or $d(u_2)=d(u_4)=3$. In the former case, by Lemma~\ref{l217}(4), $v_3$ is not a poor $4$-vertex, which implies that $v_3\not\in Q_4(v)$, thus $f_2, f_3$ are light $(5, 4, 3, 4^+)$-faces or $(5, 5^+, 3, 4^+)$-faces. By (R2.2) and (R2.3), $v$ gives a charge sequence $(\frac{1}{2}, \frac{5}{6}, \frac{5}{6}, \frac{1}{2}, \frac{1}{2})$ to $f(v)$, thus, $\mu^*(v)\geq 4-2\times \frac{5}{6}-3\times\frac{1}{2}-4\times\frac{1}{6}=\frac{1}{6}>0$. In the latter case, by Lemma~\ref{l220} (2), $d(v_1)\ge 5$ or $d(v_3)\ge 5$ as $d(u_4)=3$, and $d(v_1)\ge 5$ or $d(v_4)\ge 5$ as $d(u_2)=3$, so $|Q_4(v)|\le 4$. Note that $|Q_4(v)|\not=4$, for otherwise, $d(v_1)\geq 5$, $d(v_j)=4$ for $j\in [5]\setminus \{1\}$, and $f_2$ is a $(5, 4, 3, 4)$-face with $v_2, v_3\in Q_4(v)$, a contradiction to Lemma~\ref{l220}(4). Therefore, by (R2.2) and (R2.3), $v$ gives a charge sequence $(\frac{1}{2}, 1, \frac{1}{2}, 1, \frac{1}{2})$ to $f(v)$, and we have $\mu^*(v)\geq 4-2\times 1-3\times\frac{1}{2}-3\times\frac{1}{6}=0$. \end{proof} \begin{Lemma}\label{le36} For each $v\in int(C_0)$, $\mu^*(v)\geq 0$. \end{Lemma} \begin{proof} By Lemmas~\ref{le32} and~\ref{le35}, we may assume that $d(v)\geq 6$. We may further assume that $d(v)=6$, as when $d(v)\ge 7$, $\mu^*(v)\geq \frac{7}{8}\times 7-6=\frac{1}{8}>0$ by~(\ref{eq3}). If $t_3=0$, then $t_4+t_p\le 6$, so by~(\ref{eq2}), $\mu^*(v)\geq 6-(t_4+t_p)+\frac{1}{2}t_p\ge 0$. If $t_3=1$, then $t_4\leq 3$, so $\mu^*(v)\geq 6-\frac{9}{4}-t_4-\frac{1}{2}t_p>0$.
If $t_3=2$, then by Proposition~\ref{pr21}, $t_4\leq 1$, so $\mu^*(v)\geq 6-2\times \frac{9}{4}-1>0$. Thus we assume that $t_3=3$. By (R2.1), $v$ gives at most $\frac{9}{4}$ to a $(6, 4^-, 3)$-face, $\frac{7}{4}$ to a $(6, 5^+,3)$-face, and $\frac{3}{2}$ to other incident $3$-faces, thus $\mu^*(v)\geq 6-\frac{9}{4}k_1-2k_2-\frac{7}{4}k_3-\frac{3}{2}k_4$, where $k_1, k_2, k_3, k_4$ are the numbers of $3$-faces that receive $\frac{9}{4}$, $2$, $\frac{7}{4}$, and at most $\frac{3}{2}$ from $v$, respectively. Note that $k_1+k_2+k_3+k_4=3$, and by Lemma~\ref{l212} (5), $v$ is incident with at most two $(6, 4^-, 3)$-faces, thus $k_1+k_2\leq 2$. Clearly, $\mu^*(v)\ge 6-\frac{9}{4}\cdot 2-\frac{7}{4}=-\frac{1}{4}$, and $\mu^*(v)<0$ only if $k_1=2$ and $k_3=1$, in which case, $v$ is weak, so by (R2c), $v$ should give $\frac{5}{4}$ instead of $\frac{7}{4}$ to the $(6, 5^+,3)$-face, a contradiction. \end{proof} \begin{Lemma}\label{le37} For each $v\in C_0$, $\mu^*(v)\geq 0$. \end{Lemma} \begin{proof} Let $d(v)= k$. By Proposition~\ref{pr21}, $k\geq 2$. If $k= 2$, then by (R5), $\mu^*(v)= 2\times 2-6+2=0$. If $k=3$, then $v$ cannot be incident with faces in $F_3'\cup F_4'$. In this case, $v$ may be incident with a face in $F_3''\cup F_4''$. By (R4) and (R5), $\mu^*(v)\geq \frac{3}{2}-\frac{3}{2}=0$. Let $k=4$. If $v$ is incident with a $3$-face in $F_3'$, then it is not incident with other $3$- or $4$-faces, thus by (R4) and (R5), $\mu^*(v)\geq 2-3+1=0$; if $v$ is incident with faces from $F_3''\cup F_4''$, then by (R4) and (R5), $\mu^*(v)\geq 2-\frac{3}{2}\cdot 2+1=0$. Let $k\ge 5$. The vertex $v$ is incident with at most $\lfloor \frac{k-2}{2}\rfloor$ faces in $F'$. By (R3), (R4) and (R5), $$\mu^*(v)\ge (2k-6)-3\cdot \lfloor \frac{k-2}{2}\rfloor-\frac{3}{2}\cdot (k-2-2\cdot \lfloor \frac{k-2}{2}\rfloor)=\frac{k}{2}-3.$$ Thus, $\mu^*(v)\ge 0$ if $k\ge 6$. When $k=5$, $v$ gains $\frac{1}{2}$ from $C_0$, so $\mu^*(v)\ge 0$ as well.
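To verify the displayed identity, it suffices to check both parities of $k$ directly: \begin{align*} k \text{ even:}\quad &(2k-6)-3\cdot\tfrac{k-2}{2}-\tfrac{3}{2}\cdot 0=\tfrac{k}{2}-3,\\ k \text{ odd:}\quad &(2k-6)-3\cdot\tfrac{k-3}{2}-\tfrac{3}{2}\cdot 1=\tfrac{k}{2}-3. \end{align*}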
\end{proof} \medskip Finally, we consider $\mu^*(C_0)$. For $i\in \{2, 3, 4,5\}$, let $s_i$ be the number of $i$-vertices on $C_0$. Then $|C_0|\geq s_2+s_3+s_4+s_5$. By (R5), \begin{align*} \mu^*(C_0)&\geq |C_0|+6-2s_2-\frac{3}{2}s_3-s_4-\frac{1}{2}s_5\geq |C_0|+6-\frac{3}{2}(s_2+s_3+s_4+s_5)-\frac{1}{2}s_2\\ &\geq |C_0|+6-\frac{3}{2}|C_0|-\frac{1}{2}s_2=6-\frac{1}{2}(|C_0|+s_2). \end{align*} Note that $|C_0|=3$ or $7$. If $|C_0|=3$ or $s_2\le 5$, then $\mu^*(C_0)\geq 0$. Hence we may assume that $|C_0|= 7$ and $(s_2, s_3, s_4, s_5)\in \{(6, 1, 0, 0), (7, 0, 0, 0)\}$. If $s_2= 7$, then $G= C_0$ and it is trivially superextendable. If $s_2 = 6$ and $s_3 = 1$, then by (R5), $C_0$ gains $1$ from the adjacent face, which has degree more than $7$. Thus, $\mu^*(C_0)\geq \frac{1}{2}>0$. We have shown that all vertices and faces have non-negative final charges. Furthermore, the outer face has positive charge, except when $|C_0| = 7$, $s_2 = 5$ and $s_3 = 2$ (the two $3$-vertices must be adjacent and have a common neighbor not on $C_0$), in which case there must be a face other than $C_0$ having degree more than $7$, and that face has positive final charge. Therefore, $\sum_{x\in V(G)\cup F(G)}\mu^*(x)>0$, a contradiction.
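\medskip \noindent{\bf Remark.} For completeness, we recall why a strictly positive total charge is impossible. Assuming the initial charge assignment implicitly used in the computations above, namely $\mu(v)=2d(v)-6$ for each vertex $v$, $\mu(f)=d(f)-6$ for each face $f\neq C_0$, and $\mu(C_0)=|C_0|+6$, Euler's formula yields \begin{align*} \sum_{x\in V(G)\cup F(G)}\mu(x) &=\sum_{v\in V(G)}\bigl(2d(v)-6\bigr)+\sum_{f\in F(G)}\bigl(d(f)-6\bigr)+12\\ &=\bigl(4|E(G)|-6|V(G)|\bigr)+\bigl(2|E(G)|-6|F(G)|\bigr)+12\\ &=-6\bigl(|V(G)|-|E(G)|+|F(G)|-2\bigr)=0. \end{align*} Since the rules (R1)-(R5) only redistribute charge, $\sum_{x}\mu^*(x)=\sum_{x}\mu(x)=0$, which is incompatible with a strictly positive total.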
\section{Introduction} \label{Section:Introduction} Refactoring is an essential practice to preserve code quality from degradation as software evolves. Refactoring has grown from the act of cleaning the code to playing a critical role in modern software development. Therefore, refactoring has been attracting various researchers, with over three thousand research papers according to recent surveys \cite{abid202030}. Another key practice in maintaining software quality is code review \cite{bacchelli2013expectations}. It has become another important practice to reduce technical debt and to detect potential coding errors \cite{bacchelli2013expectations,sadowski2018modern,kashiwa2022empirical}. Code review represents the manual inspection of any newly performed changes to the code, for the purpose of verifying integrity, compliance with standards, and error-freedom \cite{mcintosh2016empirical}. Modern code review is a lightweight tool-based process that heavily relies on discussions between commit authors and reviewers to merge/abandon a given code change \cite{yang2016mining}. Similarly to any other code change, refactoring changes also have to be reviewed before being merged. If not applied well, refactoring changes can have side effects, such as hindering software quality \cite{hamdi2021empirical,alomar2019impact,hamdi2021longitudinal,peruma2020exploratory} and inducing bugs \cite{bavota2012does,di2020relationship}, making refactoring changes more challenging to review. However, little is known about how reviewers \textit{examine} refactoring-related code changes, especially when refactoring is intended to serve the same \textit{purpose} of improving software quality. In a recent industrial case study, AlOmar \textit{et al. \xspace} \cite{alomar2021icse} found that reviewing refactoring-related code changes takes a significantly longer time, in comparison with other code changes, demonstrating the need for a refactoring review \textit{culture}.
Yet, little is known about what criteria reviewers consider when they review refactoring. Most studies of refactoring focus on its automation by recommending refactoring opportunities in the source code \cite{tsantalis2008jdeodorant,mkaouer2015many,ouni2016multi}, or on mining performed refactorings in the change histories of software repositories \cite{tsantalis2018accurate}. Moreover, research on code review has focused on automating it by recommending the most appropriate reviewer for a given code change \cite{bacchelli2013expectations}. However, despite the wide adoption of refactoring in practice, its review process is largely unexplored. The goal of this paper is to understand what developers care about when they review code, \textit{i.e.,} the main criteria developers rely on to reach a decision about accepting or rejecting a submitted refactored code, and what makes this process challenging. This paper seeks to develop a taxonomy of all refactoring \textit{contexts} in which reviewers raise concerns about refactoring. We drive our study using the following research questions: \noindent\textbf{RQ1.} \textit{How do refactoring reviews compare to non-refactoring reviews in terms of code review efforts?\xspace} \noindent\textbf{RQ2.} \textit{What are the factors mostly associated with refactoring review decision?\xspace} To answer these research questions, we first extracted a set of 5,505 refactoring-related code reviews from the OpenStack ecosystem. Then, we compared this set of refactoring-related code reviews with another set of non-refactoring code reviews, in terms of the number of reviewers, number of review comments, number of inline comments, number of revisions, number of changed files, review duration, discussion and description length, and code churn.
Our empirical investigation indicates that refactoring-related code reviews take significantly longer to be resolved and typically trigger more discussions between developers and reviewers to reach a consensus. To understand the key characteristics of reviewing refactored code, we perform a thematic analysis on a significant sample of these reviews. This process resulted in a hierarchical taxonomy composed of 6 categories and 28 sub-categories. We also externally validated our taxonomy using a survey of 11 questions related to our categories' correctness and representativeness. We also conducted a follow-up interview to further discuss the survey outcomes. We provide our comprehensive experiments package \cite{ReplicationPackage} to further replicate and extend our study. The package contains the raw data, analyzed data, statistical test results, survey questions, interview transcription, and custom-built scripts used in our research. The remainder of this paper is organized as follows: Section \ref{Section:RelatedWork} reviews the existing studies related to refactoring awareness and code review. Section \ref{Section:Methodology} outlines our empirical setup in terms of data collection, analysis, and research questions. Section \ref{Section:Result} discusses our findings, while the research implications are discussed in Section \ref{Section:Implications}. Section \ref{Section:Threats} captures any threats to the validity of our work, before concluding with Section \ref{Section:Conclusion}.
\section{Related Work} \label{Section:RelatedWork} \input{Tables/RelatedWork_CodeReview.tex} Research on code review has been of importance to practitioners and researchers. A considerable effort has been spent by the research community in studying traditional and modern code review practices and challenges. This literature includes case studies (\textit{e.g.}, \cite{ge2014towards,mcintosh2014impact,ge2017refactoring,morales2015code,sadowski2018modern,rigby2013convergent,alomar2021icse}), user studies (\textit{e.g.}, \cite{barnett2015helping,tao2015partitioning,zhang2015interactive,alves2017refactoring,peruma2022refactor}), surveys (\textit{e.g.}, \cite{tao2012software,bacchelli2013expectations,macleod2017code,alomar2021icse}), and empirical experiments (\textit{e.g.}, \cite{mcintosh2014impact,morales2015code,tao2015partitioning,guo2017interactively,peruma2019contextualizing}). However, most of the above studies focus on studying and improving the effectiveness of modern code review in general, as opposed to our work that focuses on understanding developers' perception of code review involving refactoring. In this section, we are only interested in research related to refactoring-aware code review. We summarize these approaches in Table~\ref{Table:Related_Work_in_Refatoring_Code_Review}. In a study performed at Microsoft, Bacchelli and Bird \cite{bacchelli2013expectations} observed and surveyed developers to understand the challenges faced during code review. They pointed out purposes for code review (\textit{e.g.}, improving team awareness and transferring knowledge among teams) along with the actual outcomes (\textit{e.g.}, creating awareness and gaining code understanding). In a similar context, MacLeod \textit{et al. \xspace} \cite{macleod2017code} interviewed several teams at Microsoft and conducted a survey to investigate the human and social factors that influence developers' experiences with code review.
Both studies found the following general code reviewing challenges: (1) finding defects, (2) improving the code, and (3) increasing knowledge transfer. Ge \textit{et al. \xspace} \cite{ge2014towards,ge2017refactoring} developed a refactoring-aware code review tool, called ReviewFactor, that automatically detects refactoring edits and separates refactoring from non-refactoring changes with the focus on five refactoring types. The tool was intended to support developers' review process by distinguishing between refactoring and non-refactoring changes, but it does not provide any insights on the quality of the performed refactoring. Inspired by the work of \cite{ge2014towards,ge2017refactoring}, Alves \textit{et al. \xspace} \cite{alves2014refdistiller,alves2017refactoring} proposed a static analysis tool, called RefDistiller, that helps developers inspect manual refactoring edits. The tool compares two program versions to detect refactoring anomalies' type and location. It supports six refactoring operations, detects incomplete refactorings, and provides inspection for manual refactorings. Coelho \textit{et al. \xspace} \cite{coelho2019refactoring} performed a systematic literature mapping study on refactoring tools to support modern code review. They raised the need for more tools to explain composite refactorings. They also reported the need for more surveys to assess the existing refactoring tools for modern code review in both open source and industrial projects. Pascarella \textit{et al. \xspace} \cite{pascarella2018information} investigated the effect of code review on bad programming practices (\textit{i.e.}, code smells). Their approach mainly focused on comparing code smells at the file level before and after the code review process. Additionally, they manually investigated whether the severity of code smells was reduced in a code review or not. Their results show that, in 95\% of the cases, the severity of code smells does not decrease with a review.
The reduction of code smells in the remaining few cases was driven by code insertion and refactoring-related changes. Paix{\~a}o \textit{et al. \xspace} \cite{paixao2020behind} explored whether developers' intents influence the evolution of refactorings during the review of a code change by mining 1,780 reviewed code changes from 6 open-source systems. Their main findings show that refactorings are most often used in code reviews that implement new features, accounting for 63\% of the code changes they studied. Only in 31\% of the code reviews that employed refactorings did the developers have the explicit intent of refactoring. Uch{\^o}a \textit{et al. \xspace} \cite{uchoa2020does} reported a multi-project retrospective study that characterizes how the process of design degradation evolves within each review and across multiple reviews. The authors utilized software metrics to observe the influence of certain code review practices on combating design degradation. The authors found that the majority of code reviews had little to no design degradation impact in the analyzed projects. Additionally, the practices of long discussions and a high proportion of review disagreement in code reviews were found to increase design degradation. In their study on predicting design impactful changes in modern code review with technical and/or social aspects, Uch{\^o}a \textit{et al. \xspace} \cite{uchoa2021predicting} analyzed reviewed code changes from seven open source projects. By evaluating six machine learning algorithms, the authors found that technical features result in more precise predictions, and that the use of social features alone also leads to accurate predictions. A couple of studies considered pull requests as the main source for studying the code review process. Pantiuchina \textit{et al. \xspace} \cite{pantiuchina2020developers} presented a mining-based study to investigate why developers perform refactoring in the history of 150 open source systems.
Particularly, they analyzed 551 pull requests that implemented refactoring operations and reported a refactoring taxonomy that generalizes the ones existing in the literature. Coelho \textit{et al. \xspace} \cite{coelho2021empirical} performed a quantitative and qualitative study exploring code reviewing-related aspects with the intent of characterizing refactoring-inducing pull requests. Their main findings show that refactoring-inducing pull requests take significantly more time to merge than non-refactoring-inducing pull requests. AlOmar \textit{et al. \xspace} \cite{alomar2021icse} conducted a case study in an industrial setting to explore refactoring practices in the context of modern code review from the following five dimensions: (1) developers' motivations to refactor their code, (2) how developers document their refactoring for code review, (3) the challenges faced by reviewers when reviewing refactoring changes, (4) the mechanisms used by reviewers to ensure correctness after refactoring, and (5) developers' and reviewers' assessment of the refactoring impact on the source code's quality. Their findings show that refactoring code reviews take longer to be completed than non-refactoring code reviews. Brito \& Valente \cite{brito2020refactoring} introduced RAID, a refactoring-aware and intelligent diff tool that alleviates the cognitive effort associated with code reviews. The tool relies on RefDiff \cite{silva2020refdiff} and is fully integrated with the state-of-the-art practice of continuous integration pipelines (GitHub Actions) and browsers (Google Chrome). The authors evaluated the tool with eight professional developers and found that RAID indeed reduced the cognitive effort required for detecting and reviewing refactorings. In another study, Kurbatova \textit{et al.
\xspace} \cite{kurbatova2021refactorinsight} presented RefactorInsight, a plugin for IntelliJ IDEA that integrates information about refactorings into diffs in the IDE, auto-folds refactorings in code diffs in Java and Kotlin, and shows hints with their short descriptions. To summarize, open source projects that use either Gerrit or GitHub pull requests have been extensively studied (\textit{e.g.}, \cite{rigby2013convergent,thongtanunam2015should,zhang2018multiple, pantiuchina2020developers}). Since notable open source organizations such as Eclipse and OpenStack adopted Gerrit as their code review management tool, we chose to analyze refactoring practice in modern code review from projects that adopted Gerrit as their code review tool. Although there are recent studies that explored the motivation behind refactoring in pull requests \cite{pantiuchina2020developers,coelho2021empirical}, to the best of our knowledge, no prior studies have manually extracted all the criteria developers face when submitting their refactored code for review. To gain a more in-depth understanding of the factors mostly associated with refactoring review discussions and to advance the understanding of refactoring-aware code review, in this paper, we performed an empirical study on a rapidly evolving open source project, with a large number of files with 100\% review coverage. This study complements the existing efforts that were done in an industrial environment \cite{alomar2021icse} and in open source systems \cite{pantiuchina2020developers,coelho2021empirical} using GitHub pull-based development. \section{Study Design} \label{Section:Methodology} The main goal of our study is to understand refactoring practice in the context of modern code review and to characterize the criteria that influence the decision making when reviewing refactoring changes.
Thus, we aim at answering the following research questions: \begin{itemize} \item \textbf{RQ1.} \textit{How do refactoring reviews compare to non-refactoring reviews in terms of code review efforts?\xspace} \item \textbf{RQ2.} \textit{What are the factors mostly associated with refactoring review decision?\xspace} \end{itemize} \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{Images/Approach_v2.pdf} \caption{Overview of our experiment design.} \label{fig:approach} \end{figure*} According to the guidelines reported by Runeson and H{\"o}st \cite{runeson2009guidelines}, we designed an empirical study that consists of three steps, as depicted in Figure \ref{fig:approach}. Since our research questions are both quantitative and qualitative, we used tools/scripts along with manual activities to investigate our data. Furthermore, the dataset utilized in this study is available on our project website \cite{ReplicationPackage} for extension and replication purposes. \noindent \textbf{Gerrit-based code review process.} The code review process of the studied systems is based on Gerrit\footnote{\url{https://www.gerritcodereview.com/}}, a collaborative code review framework allowing developers to directly tag submitted code changes and request their assignment to a reviewer. Generally, a code change author opens a code review request containing a title, a detailed description of the code change being submitted, written in natural language, along with the annotated code changes. Once the review request is submitted, it appears in the requests backlog, open for reviewers to choose. Once reviewers are assigned to the review request, they inspect the proposed changes and comment on the review request's thread to start a discussion with the author. This way, the authors and reviewers can discuss the submitted changes, and reviewers can request revisions to the code being reviewed.
Following up on discussions and revisions, a review decision is made to either accept or decline, and so the proposed code changes are either \say{\textit{Merged}} to production or \say{\textit{Abandoned}}. A diagram, modeling a simplified bird's-eye view of the Gerrit-based code review process, is shown in Figure \ref{fig:review}. \subsection{Data Collection} \subsubsection{Studied Systems} To select the subject systems, we identified three important criteria: \noindent \textbf{Criterion \#1: Active MCR practices.} Our goal is to study a system that actively examines code changes through a code review tool. Therefore, we focus on systems where a large number of reviews are performed using a code review tool \textcolor{black}{(\textit{i.e.}, systems which have review procedures in place)}, similar to \cite{thongtanunam2015investigating,thongtanunam2016revisiting,morales2015code}. \noindent \textbf{Criterion \#2: Full review coverage.} Since we investigate the practice of refactoring-related code reviews, we focus on systems that have many files with 100\% review coverage \textcolor{black}{(\textit{i.e.}, files where every change made to them is reviewed before it is merged into the repositories)}, similar to the studies that explored code review practices in defective files \cite{thongtanunam2015investigating,thongtanunam2016revisiting,mcintosh2014impact}. \noindent \textbf{Criterion \#3: High number of refactoring-related reviews.} Since we want to study refactoring practices in MCR, we need to ensure that the subject systems have sufficient refactoring-related instances to help us perform our statistical analysis.
\textcolor{black}{So, we selected the project with the highest number of refactoring reviews.} To satisfy criterion 1, we started by considering five systems (\textit{i.e.}, OpenStack,\footnote{https://review.opendev.org/} Qt,\footnote{https://codereview.qt-project.org/} LibreOffice,\footnote{https://gerrit.libreoffice.org/} VTK,\footnote{http://vtk.org/} ITK\footnote{http://itk.org/}) that use the Gerrit code review tool and have been widely studied in previous MCR research, \textit{e.g.}, \cite{thongtanunam2020review,ouni2016search,fan2018early,chouchen2021whoreview}. We then discarded VTK and ITK, since Thongtanunam \textit{et al. \xspace} \cite{thongtanunam2016revisiting} reported that the linkage rate of code changes to reviews for VTK is too low, and ITK does not satisfy criterion 2. As for criterion 3, after mining the code review data, we found that OpenStack has a higher number of refactoring-related code review instances than Qt and LibreOffice. Due to the human-intensive nature of carefully studying and analyzing refactoring practice in MCR, we opt for performing an in-depth study on a single system. With the above-mentioned criteria in mind, we select OpenStack, an open-source software for cloud infrastructure services that is developed by many well-known companies, \textit{e.g.}, IBM, VMware, and NEC. \subsubsection{Mining code review data} We mined code review data using the \texttt{RESTful API}\footnote{https://gerrit-review.googlesource.com/Documentation/rest-apichanges.html} provided by Gerrit, which returns the results in \texttt{JSON} format. We used a script to automatically mine the review data and store them in an \texttt{SQLite} database. All collected reviews are closed (\textit{i.e.}, having a status of either \say{\textit{Merged}} or \say{\textit{Abandoned}}). In total, we mined 775,657 code changes between December 2012 and April 2021 from OpenStack projects.
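As an illustration of this mining step, the following Python sketch parses a Gerrit REST reply and stores the result in an SQLite table. It is a minimal, hypothetical reconstruction rather than our actual mining script: the change-query endpoint named in the comment is part of Gerrit's public REST API, but the table schema and the sample records are assumptions made for illustration. Note that Gerrit prepends a magic prefix line to its JSON replies (to prevent cross-site script inclusion), which must be stripped before parsing.

```python
import json
import sqlite3

GERRIT_XSSI_PREFIX = ")]}'"  # Gerrit prepends this line to every JSON reply

def parse_gerrit_response(body: str):
    """Strip Gerrit's XSSI-protection prefix and decode the JSON payload."""
    if body.startswith(GERRIT_XSSI_PREFIX):
        body = body[len(GERRIT_XSSI_PREFIX):]
    return json.loads(body)

def store_reviews(db: sqlite3.Connection, changes):
    """Store mined review metadata in a simplified (assumed) schema."""
    db.execute("CREATE TABLE IF NOT EXISTS reviews "
               "(number INTEGER PRIMARY KEY, subject TEXT, status TEXT)")
    db.executemany("INSERT OR REPLACE INTO reviews VALUES (?, ?, ?)",
                   [(c["_number"], c["subject"], c["status"]) for c in changes])
    db.commit()

# A canned reply in the shape Gerrit returns for GET /changes/?q=status:merged
# (real endpoint; the two records below are made up for this sketch).
raw = ")]}'\n" + json.dumps([
    {"_number": 42, "subject": "Refactor create_pool", "status": "MERGED"},
    {"_number": 43, "subject": "Fix typo", "status": "ABANDONED"},
])

db = sqlite3.connect(":memory:")
changes = parse_gerrit_response(raw)
store_reviews(db, changes)
print(db.execute("SELECT COUNT(*) FROM reviews WHERE status='MERGED'").fetchone()[0])
```

In the real pipeline, the HTTP request loop and pagination replace the canned reply, and the stored fields are richer (timestamps, comment counts, revision counts).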
An overview of the project's statistics is provided in Table~\ref{Table:DATA_Overview}. \input{Tables/Project_Overview} \subsection{Data Preparation} To extract the set of refactoring-related code reviews, we follow a two-step procedure: (1) automatic filtering, and (2) manual filtering. \textbf{(1) Automatic Filtering.} In the first step, we utilize a keyword-based mechanism to filter out all entries that do not contain the keyword \textit{refactor*} in the title or description of the submitted code change. Specifically, we start by searching for the term `\textit{refactor*}' in the title or description (we use * to capture extensions like refactors, refactoring, etc.). The keyword-based approach has been widely used in prior studies related to identifying refactoring changes or defect-fixing or defect-inducing changes \cite{kim2008classifying,kamei2012large,mockus2000identifying,hassan2008automated,thongtanunam2016revisiting,mcintosh2016empirical,pantiuchina2020developers,Ratzinger:2008:RRS:1370750.1370759,coelho2021empirical,alomar2019can,alomar2021ESWA,stroggylos2007refactoring, alomar2019impact,tang2021empirical,alomar2021documentation,alomar2021toward,alomar2020developers,alomar2021refactoringreuse}, as it allows the pruning of the search space to only consider code changes whose documentation matches a specific intention. The choice of `\textit{refactor}', besides being used by various related studies, is intuitively the first term to identify ideal refactoring-related code changes \cite{kim2008classifying,alomar2021ESWA,alomar2021toward,alomar2021behind,alomar2021documentation}. Also, the choice of enforcing the existence of the keyword in both the code change's title and description was piloted by a first trial of only applying the filter to the description. This initial filtering gives us a total of 10,440 review instances. After performing a manual inspection of a subset of these reviews, we realized that there exist several false positive cases.
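The two filtering stages can be sketched as a simple regular-expression predicate. The Python sketch below uses made-up review records for illustration; the actual filter ran over the mined review database.

```python
import re

# Matches "refactor" plus any extension (refactors, refactoring, refactored, ...)
REFACTOR_RE = re.compile(r"\brefactor\w*", re.IGNORECASE)

def mentions_refactoring(text: str) -> bool:
    return REFACTOR_RE.search(text) is not None

def keep_review(review: dict, require_both: bool) -> bool:
    """Stage 1 uses require_both=False (keyword in title OR description);
    stage 2 tightens this to require_both=True (title AND description)."""
    in_title = mentions_refactoring(review["title"])
    in_desc = mentions_refactoring(review["description"])
    return (in_title and in_desc) if require_both else (in_title or in_desc)

reviews = [  # hypothetical examples
    {"title": "Refactoring the pool driver", "description": "Refactor create_pool"},
    {"title": "Fix typo", "description": "Minor refactor of the docs"},
]
print([keep_review(r, require_both=False) for r in reviews])  # [True, True]
print([keep_review(r, require_both=True) for r in reviews])   # [True, False]
```

The second record illustrates why the stricter stage-2 filter shrinks the set from 10,440 to 5,547: the keyword appears only in its description.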
To reduce them, we only kept code changes having the term `\textit{refactor*}' in both the title and description. The keyword-based filtering resulted in only selecting 5,547 code changes and their corresponding reviews. We notice that the ratio of these reviews is very small in comparison with the total number of the mined reviews, \textit{i.e.}, 775,657. However, this observation aligns with previous studies \cite{murphy2012we,szoke2014bulk}, as developers typically do not provide details when documenting their refactorings. Yet, despite this strict filtering, the approach is still prone to false positives, and therefore, a second step of manual analysis is needed. \textbf{(2) Manual Filtering.} To ensure the correctness of the data, we manually inspected and read through all these refactoring reviews to remove false positives. An example of a discarded review is: \say{\textit{Revert "Refactor create\_pool." and "Add request\_access\_to\_group method"}} \cite{Example-1}. We discarded this code review because the refactoring action was undone by the developers. This step resulted in only considering 5,505 reviews. Our goal is to have a \textit{gold set} of reviews in which the developers explicitly reported the refactoring activity. This \textit{gold set} will later serve to check the criteria that are mostly associated with refactoring review discussions. \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{Images/GerritReviewActivityDiagram.pdf} \caption{Gerrit-based code review process overview.} \label{fig:review} \vspace{-.4cm} \end{figure} \subsection{Data Analysis} To address our research questions, a structured mixed-method study was designed to combine elements of both quantitative and qualitative research. \subsubsection{Quantitative data analysis.} We leverage the data collected to compare refactoring and non-refactoring reviews using review efforts, \textit{i.e.}, code review metrics.
As we calculate the metrics of refactoring and non-refactoring code reviews, we want to distinguish, for each metric, whether the variation is statistically significant. We first test for normality using the Shapiro-Wilk normality test \cite{taeger2014statistical} and observe that the distribution of code review activity metrics does not follow a normal distribution. Therefore, we use the Mann-Whitney U test \cite{conover1998practical}, a non-parametric test, to compare the two groups, since these groups are independent of one another. The null hypothesis states that there is no variation in the metric values of refactoring and non-refactoring code reviews. Thus, the alternative hypothesis indicates that there is a variation in the metric values. Additionally, the variation between the values of both sets is considered significant if its associated \textit{p}-value is less than 0.05. Furthermore, we use Cliff's Delta ($\delta$) \cite{cliff1993dominance}, a non-parametric effect size measure, to estimate the magnitude of the differences between refactoring and non-refactoring reviews. As for its interpretation, we follow the guidelines reported by Romano \textit{et al. \xspace} \cite{romano2006appropriate}: \begin{itemize} \item Negligible for $\mid \delta \mid< 0.147$ \item Small for $0.147 \leq \mid \delta \mid < 0.33$ \item Medium for $0.33 \leq \mid \delta \mid < 0.474$ \item Large for $\mid \delta \mid \geq 0.474$ \end{itemize} To measure the extent of the relationship between these metrics, we conducted a Spearman rank correlation test (a non-parametric measure) \cite{wissler1905spearman}. We chose a rank correlation because this type of correlation is resilient to data that is not normally distributed. \subsubsection{Qualitative data analysis.} To answer RQ2, two of the authors perform the analysis of the data.
One of the authors manually inspects refactoring review discussions \textcolor{black}{by considering both the general comments and the inline comments}, and the other author reviews the taxonomy. As the complete list of refactoring review data is too large to be manually examined, we select a statistically significant sample for our analysis by considering the reviews with higher review duration, and we annotated 384 reviews. This quantity roughly equates to a sample size with a confidence level of 95\% and a confidence interval of 5\%. The manual analysis process took approximately 40 days in total. Next, we describe the methodology for building and refining the taxonomy, followed by the validation method. \vspace{.1cm} {\textbf{Taxonomy Building and Refinement.}} When analyzing the review discussions, we adopted a thematic analysis approach based on the guidelines provided by Cruzes \textit{et al. \xspace} \cite{cruzes2011recommended}. Thematic analysis is one of the most used methods in the software engineering literature (\textit{e.g.}, \cite{Silva:2016:WWR:2950290.2950305}); it is a technique for identifying and recording patterns (or \say{themes}) within a collection of descriptive labels, which we call \say{codes}.
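The sample size of 384 reviews mentioned above corresponds to the standard sample-size formula for a proportion (Cochran's formula) at a 95\% confidence level and a 5\% margin of error, as the following quick check shows:

```python
# Cochran's sample-size formula for a large population:
#   n = z^2 * p * (1 - p) / e^2
# with the most conservative proportion p = 0.5,
# z = 1.96 for a 95% confidence level, and margin of error e = 0.05.
z, p, e = 1.96, 0.5, 0.05
n = z ** 2 * p * (1 - p) / e ** 2
print(round(n))  # 384
```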
For each refactoring review, we proceeded with the analysis using the following steps: \begin{itemize} \item Initial reading of the review discussions \item Generating initial codes (\textit{i.e.}, labels) for each review \item Translating codes into themes, sub-themes, and higher-order themes \item Reviewing the themes to find opportunities for merging \item Defining and naming the final themes, and creating a model of higher-order themes and their underlying evidence \end{itemize} The above-mentioned steps were performed independently by two authors. \textcolor{black}{One author performed the labeling of review discussions independently from the other author, who was responsible for reviewing the currently drafted taxonomy. By the end of each iteration, the authors met and refined the taxonomy}. At the time of the study, \textcolor{black}{one of the authors} had 4 years of research experience on refactoring, while \textcolor{black}{the other author} had 9 years of research experience on refactoring. It is important to note that the approach is not a single-step process. As the codes were analyzed, some of the first-cycle codes were subsumed by other codes, relabeled, or dropped altogether. As the two authors progressed in the translation to themes, there was some rearrangement, refinement, and reclassification of data into different or new codes.
For example, we aggregated, into \say{\textit{Refactoring}}, the preliminary categories \say{\textit{incorrect refactoring}}, \say{\textit{behavior preservation violation}}, \say{\textit{separation of other changes from refactoring}}, \say{\textit{interleaving other changes with refactoring}}, and \say{\textit{domain constraint}} that were discussed by different reviewers. We used the thematic analysis technique to address RQ2. \vspace{.1cm} {\textbf{\textcolor{black}{Taxonomy Validation.}}} \textcolor{black}{In addition to the iterative process of building the taxonomy, we also need to externally validate it from a practitioner's point of view \cite{pascarella2018self,dougan2022towards}. The aim of this validation is to investigate whether it reflects actual MCR practices. To do so, we validated the taxonomy with a senior developer with 8 years of industrial and refactoring experience and 6 years of experience in code review. The survey contained 11 questions related to the correctness and representativeness of our taxonomy. We also conducted a follow-up interview to further discuss the survey outcomes. The interview took an hour and was recorded for further analysis. The interview summary is available in the replication package.} \section{Results and Discussion} \label{Section:Result} \input{Tables/RQ1_results} \subsection{How do refactoring reviews compare to non-refactoring reviews in terms of code review efforts?\xspace} \noindent\textbf{Approach.} To address RQ1, we compare \textit{refactoring reviews} with \textit{non-refactoring reviews} to see whether there are any differences in terms of the code review efforts, or metrics, listed in Table \ref{Table:RQ1_results}. Since our refactoring set contains 5,505 reviews, we need to sample 5,505 non-refactoring reviews from the remaining ones in the review framework.
To ensure the representativeness of the sample \cite{clarkson1989applications}, we use stratified random sampling to choose reviews from the remaining, non-refactoring reviews. \noindent\textbf{Results.} By looking at the statistical summary in Table \ref{Table:RQ1_results}, we found that reviewing refactoring changes differs significantly from reviewing non-refactoring changes (\textit{i.e.}, more reviewers ($\mu$ = 5.56), more review comments ($\mu$ = 20.87), more inline comments ($\mu$ = 6.61), more revisions ($\mu$ = 4.79), more file changes ($\mu$ = 5.98), lengthier review time ($\mu$ = 928.21), lengthier discussions and descriptions ($\mu$ = 3450.29 and $\mu$ = 327.61, respectively), and more added and deleted lines between revisions ($\mu$ = 367.63)). As shown in Table \ref{Table:RQ1_results}, we performed a non-parametric Mann-Whitney U test and obtained a statistically significant \textit{p}-value when the values of these two groups were compared (\textit{p}-value $< 0.05$ for all review efforts), accompanied by a small, medium, or large effect size depending on the review effort/metric. We speculate that reviewing refactoring triggers longer discussions between the code change authors and the reviewers, as we notice that several refactoring-related actions are extensively discussed before reaching an agreement. While previous studies have found a similar pattern in GitHub's pull requests in open-source systems \cite{coelho2021empirical} and using code review tools in industry \cite{alomar2021icse}, no study has examined the main reasons why refactoring-related discussions take significantly longer effort to review. Therefore, the findings of RQ1 have motivated us to manually analyze these reviews and extract the main criteria related to reviewing refactored code (RQ2). Further, we observe that refactoring-related code reviews involve larger code churn and more changes across files than non-refactoring code changes.
These results are expected and are in agreement with prior works \cite{coelho2021empirical,hegedHus2018empirical,paixao2019impact}, which found that refactored code has higher size-related metrics and that larger changes promote refactorings. We also notice that the number of developers who participated in the refactoring code review process is higher due to the high number of added, modified, or deleted lines between revisions. In contrast to a previous finding \cite{coelho2021empirical}, however, no evidence of a correlation between the number of reviewers and refactoring was detected. Moreover, our correlation analysis, measured using the Spearman rank correlation test, reveals that the number of review comments, the discussion length, and the number of revisions are highly correlated with the review duration. The Spearman rank correlation test yielded statistically significant (\textit{i.e.}, \textit{p}-value $< 0.05$) correlation coefficients of 0.57, 0.53, and 0.49, respectively. Further, the number of reviewers is highly correlated with the discussion length and the number of review comments, with \textit{p}-value $< 0.05$ and correlation coefficients of 0.73 and 0.77, equating to strong correlations. Additionally, we observe that a very high number of deleted or added lines is correlated with a very high number of file changes (\textit{i.e.}, \textit{p}-value $< 0.05$ and a correlation coefficient of 0.58). In contrast, the Spearman correlation detects no significant relationship between the number of files and the review duration.
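To make the effect-size computation used in RQ1 concrete, Cliff's Delta and the interpretation bands listed in the study design can be implemented directly. This is a self-contained sketch with hypothetical metric values, independent of the statistical tooling used in our analysis.

```python
def cliffs_delta(xs, ys):
    """Cliff's Delta: (#(x > y) - #(x < y)) / (|xs| * |ys|)."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))

def magnitude(delta):
    """Interpretation bands from Romano et al."""
    d = abs(delta)
    if d < 0.147:
        return "negligible"
    if d < 0.33:
        return "small"
    if d < 0.474:
        return "medium"
    return "large"

# Hypothetical metric values for refactoring vs. non-refactoring reviews.
refactoring = [5, 6, 7, 8]
non_refactoring = [1, 2, 3, 4]
d = cliffs_delta(refactoring, non_refactoring)
print(d, magnitude(d))  # prints: 1.0 large
```

The delta ranges over $[-1, 1]$: it is $0$ when the two groups overlap completely and $\pm 1$ when every value in one group dominates the other, as in the example above.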
\subsection{What challenges do developers face when reviewing refactoring tasks?} \noindent\textbf{Approach.} To get a more qualitative sense, we manually inspect the OpenStack ecosystem using a thematic analysis technique \cite{cruzes2011recommended} to study the challenges that reviewers face when reviewing refactoring changes, so we can
understand the main reasons for which refactoring reviews take longer compared to non-refactoring reviews. \noindent\textbf{Results.} Upon analyzing the review discussions, we create a comprehensive set of high-level categories of review criteria. Figure \ref{fig:challenges} shows the proposed taxonomy of the criteria related to reviewing refactored code. The taxonomy is composed of two layers: the top layer contains 6 categories that group activities with similar purposes, whereas the lower layer contains 28 subcategories that provide a fine-grained categorization. Due to space constraints, we made the iterative taxonomy available in our replication package \cite{ReplicationPackage}. These refactoring review criteria are centered around six main categories, as shown in the figure: (1) quality, (2) refactoring, (3) objective, (4) testing, (5) integration, and (6) management. It is worth noting that our categorization is not mutually exclusive, meaning that a review can be associated with more than one category. An example of each category is provided in Table \ref{Table:example}. \textcolor{black}{Further, during the interview, the senior developer confirmed that our taxonomy is representative of the review cases they have experienced.} In the rest of this subsection, we provide a more in-depth analysis of these categories. \noindent\textbf{Category \#1: Quality.} Design quality is found to be a vital part of the refactoring review process. According to the review discussions, reviewers enforce adherence to \textit{coding conventions}, the optimization of \textit{internal} and \textit{external quality attributes}, the avoidance of \textit{code smells}, the resolution of \textit{technical debt}, the correctness of \textit{design pattern} implementations, and the remediation of \textit{lack of documentation}.
For instance, developers recommend appropriate ways to write code and optimize \textit{internal and external quality attributes}, as developers may not \textit{draw the full picture} of the software design, which makes their decisions adequate locally, but not necessarily at the global level. Moreover, developers only reason about the actual \textit{snapshot} of the current design, and there is no systematic way for them to recognize its evolution by, for instance, accessing previously performed refactorings. This may also narrow their decision making, and they may end up \textit{reverting} some previous refactorings. Moreover, providing a clear and meaningful explanation of the reasons behind the proposed changes is equally important in review decisions. The requested documentation changes include (but are not limited to) expanding code comments, removing code smells, and reusing existing functionality. Further, developers typically refactor classes and methods that they frequently change. We observe this as various reviews contained similar recurrent files. So, the more developers change the same code elements, the more familiar with the system they become, which improves their design decisions. \textcolor{black}{While the follow-up interview indicates the importance of each sub-category in real life, it was pointed out by the senior developer that \textit{technical debt} was not allowed to be submitted, since the company has a strict policy preventing developers from submitting code that is not near optimal.
Additionally, according to the senior developer's experience, reviewers did allow the relaxation of strict \textit{design pattern} implementations, as long as it was properly justified.} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{Images/ChallengesTaxonomy-v7.jpg} \caption{Refactoring review criteria in modern code review.} \label{fig:challenges} \end{figure*} \begin{comment} \begin{quote} \say{\textit{Perhaps we should explore using callbacks to provide a similar decoupling of the various backend drivers from the main service plugin that simply manages the state of the DB. That way the backend implementation is free to develop its own interfaces and maintain a loose coupling using the callbacks framework.}}; \say{\textit{I think there are some design considerations to explore. The callbacks framework seems like it could come in handy here. We've used it heavily for other features such as trunks to implement a clean separation between driver-specific functionality and common functionality such as managing the state of the model in the DB.}}; \say{\textit{In any case this work will be huge improvement of code quality and it remove utils/wrappers tons of ugly code and take development to the right direction.}}; \say{\textit{I do understand the desire to refactor some code to eliminate duplicate code. The purpose of the common class was to contain all of the duplicate code between the 2 drivers. This seems like a half baked approach to refactoring to accomplish that goal, when the common class should have been used as a new base. Now with this patch there are 2 classes (base and common) that contain common code.}}; \par \say{\textit{We've seen how difficult it is to arrive at a single naming convention that works for all OpenStack projects, but as Rally expands beyond benchmarking OpenStack alone, it is quite simply impossible to create a single name format that will work for literally everything.
That's why I propose to make this configurable in multiple places; that way I could use the same Rally installation to benchmark OpenStack (where the default formats will hopefully work fine) and also another system that perhaps disallows dashes. Additionally, I could configure Rally to use basic random names for some scenarios, while using all printable unicode for Nova scenarios, since Nova is generally fairly good about supporting that. (We'd need some clever ways to set resource\_name\_allowed\_characters to permit things like "all printable Unicode" without requiring the end user to actually write out all printable Unicode characters.)}}; \say{\textit{This is much more readable than before, but I would like to see a satisfactory answer to the question of why this is necessary, before this is merged. As far as I am aware, there is no consensus from OpenStack on following this guideline; do you have a spec for this work? I am concerned that it increases the risk of introducing bugs, for little benefit, and that it's an approach we shouldn't accept uncritically, especially so close to summit. I don't want to set a precedent for accepting infinite further patches that reorder parameters, unless they can be justified, because it quickly takes up a lot of time for all of us as reviewers.}}; \end{quote} \end{comment} \noindent\textbf{Category \#2: Refactoring.} This category gathers reviews with a focus on evaluating the correctness of the code transformation and checking whether or not the submitted changes lead to a safe and trustworthy refactoring. These reviews discuss \textit{refactoring correctness}, \textit{behavior preservation}, \textit{refactoring co-occurrence with changes}, and \textit{domain constraint}. Because developers interleave refactorings with other tasks, reviewers highlighted that mixing refactoring with other changes may lead to overshadowing errors, and thus to the introduction of bugs.
\textcolor{black}{Similar concerns were also raised by the senior developer during our interview. In their regular reviews, they would typically recommend the separation of refactoring from other code changes, whenever possible. } \begin{comment} \begin{quote} \par \say{\textit{I think, this piece is incorrectly refactored. You should create neutron.conf.agent.linux with IP\_LIB for linux and corresponding change for neutron.conf.agent.windows with IP\_LIB for windows. Then you could import it, similar to this, how it was done previously.}}; \par \say{\textit{I would have separate this change from your refactor. The value returned by this function is transferred to the compute manager and to the virt drivers. Some of them could have considered True/False/None to handle different behavior and even if that is not the case, to have this isolated can help reviewer to ensure all is OK.}}; \par \say{\textit{Also, refactor and feature implementation are weird to lump together here, generally a refactor should be low impact, unless you're confusing refactor with redesign. When I think of refactor, I think of cleaning up code that should be covered by existing tests, i.e. method decomposition.}}; \par \say{\textit{Yeah, you right that it is antipattern refactoring, but I think, if we need, we could change documentation in favor of "default" using of handle\_delete (i.e. using in resource entity argument and describing it's using in documentation).}}; \par \say{\textit{I know it's a pure refactoring change, but I want to make sure we don't need to modify unittests also.}}; \par \say{\textit{This is how the code worked before, so I just left it the same. 
The goal of this change is just to refactor things to put them together into a class and not to change behavior.}}; \end{quote} \end{comment} \noindent\textbf{Category \#3: Objective.} In this category, we have gathered cases where reviewers eventually ask to clearly document the \textit{goal}, \textit{benefit}, \textit{side effects}, \textit{scope}, \textit{feature-related}, and \textit{bug fix-related} activities to better understand the rationale of the submitted code changes. This reveals how reviewers keep proposing areas of improvement in developers' documentation practices, pertaining to the perception and the rationale of the change. It appears that the clarity of the documented changes is of paramount importance, as reviewers also struggle with identifying the \textit{benefit} and/or \textit{side effects} and spend time understanding them. We realized that the clarity of the explanation of what is being changed and why affects the review time and decision. Missing rationale and documentation are also frequently reported as confusing during code review \cite{ebert2021exploratory}. Developers are also requested to \textit{clarify the scope of the change}. Changing the scope is one of the influential factors for reviewers to make their decisions. Reviewers check the actual appropriateness of the change before it is merged into the code base. Further, we realized that source code is not the only artifact that developers refactor. Other artifacts, such as database elements, are also subject to refactoring. \textcolor{black}{During the follow-up interview, the senior developer indicated that \textit{unclear goal} and \textit{unknown benefit} were less likely to be encountered due to the company review guidelines.
Also \textit{bug fixing} is not frequent in the expert's current company because of the enforcement of performing minimal refactoring changes when fixing bugs.} \begin{comment} \begin{quote} \par \say{\textit{I would suggest to clarify merits and demerits the proposed appraoch first. We need a design discussion. It is more than a code review.}}; \par \say{\textit{I don't think some of this patch is needed since we merged Gage's fix. On the other hand, this change seems to be doing quite a bit of refactoring. Is there a reason to continue pursuing that refactor, or was it mainly to fix the bug?}}; \par \say{\textit{I do agree with that - there's no spec outlining the end goal, thus this changeset isn't obviously useful. I tried to put some context in the commit message, but this is really only a step in larger goal. That larger goal hasn't really been articulated outside of one-on-one conversations in side channels.}}; \par \say{\textit{It seems promising, but the change in pattern, if not thoroughly thought out, may lead to other unforeseen side effects.}}; \par \say{\textit{I'm curious what the motivation is for moving some of these methods out of the controller and into a new module? I don't think isn't necessarily a bad idea, I'm just missing the point of doing it.}}; \par \say{\textit{I would like understand what is the added value or improvement with these changes to l3-agent scheduler.}}; \par \say{\textit{That will reduce the chances of introducing a bug in one of these changes, and also it is a great help on some joined loads.}}; \par \say{\textit{Yeah, this is one of my problem with the goal of this refactoring. If we want to be explicit with action and phase and at the same time we want to keep one single function both for legacy and versioned notifications then we have to change the legacy code as that is not explicit about action and phase. 
I personally would keep the legacy and versioned code path separate as much as possible so that the versioned one can be made better (e.g. explicit about parameters) while we don't have to refactor the legacy one that will be removed in the future.}}; \par \say{\textit{I think worth doing the refactor of this pxe method in this patch if we're aiming to make the interfaces interchangeable. We can move this method safely to deploy\_utils I think}}; \par \say{\textit{If you want to mention a specific feature that needs this refactor you should define it in brief to better understand why we really need such refactor. What's blocking such features and how this new API would help.}}; \par \say{\textit{This is much more readable than before, but I would like to see a satisfactory answer to the question of why this is necessary, before this is merged. As far as I am aware, there is no consensus from OpenStack on following this guideline; do you have a spec for this work? I am concerned that it increases the risk of introducing bugs, for little benefit, and that it's an approach we shouldn't accept uncritically, especially so close to summit. I don't want to set a precedent for accepting infinite further patches that reorder paramaters, unless they can be justified, because it quickly takes up a lot of time for all of us as reviewers.}} \par \say{\textit{To be perfectly honest, I am not in position to vouch for this proposal. It seems promising, but the change in pattern, if not thoroughly thought out, may lead to other unforeseen side effects. If we can have a peep to the code and see how it behaves under load it would give us more confidence that we are indeed on the right track. I am fearful that if we're not careful we'd be trading certain (well understood) issues with others to be discovered at a later date.}} \par \say{\textit{We may also raise PBM faults as VIM exceptions. So we have to introduce ServiceException with two children: VimException and PBMException. 
The refactoring to separate PBM faults may not be trivial.}} \par \say{\textit{Now, with regards to the refactor. The plan was to make a progressive refactor that would keep the old API around while the new one is adopted. The motivations behind this refactor are related to the lack of any kind of architecture and design in the current code. The current code, as it is, is not easily consumable by other services, which is the whole point of this library.}} \end{quote} \end{comment} \noindent\textbf{Category \#4: Testing.} Refactoring is supposed to preserve the behavior of the software. Ideally, using the existing unit tests to verify that the behavior is maintained should be sufficient. However, since refactoring can also be interleaved with other tasks, there might be a change in the software's behavior, and unit tests may not capture such changes if they are not revalidated to reflect the newly introduced functionality. We have seen various review discussions raising this concern, especially when developers are unaware of such non-behavior-preserving changes; in such cases, outdated unit tests will not guarantee the correctness of the refactoring. Based on our analysis of the discussions, reviewers suggested adding unit tests before the refactoring, so that they can be more confident that the change has not broken anything. They also recommend adding test cases when the refactored code lowers the test coverage (\textit{e.g.}, extracting new methods). Moreover, when developers submit a review, they can include the results of running the tests, as reviewers expect code changes to be accompanied by a corresponding test change. So, to capture these various cases, under the \textit{Testing} category, we have the following sub-categories: \textit{lack of coverage}, \textit{absence of result}, and \textit{poor test quality}.
\textcolor{black}{The outcome of the interview confirms the existence of this type of review criteria.} \begin{comment} \begin{quote} \par \say{\textit{right now it works for only 1 scenario. If it looks good then i will do the same changes for the rest of the code and fix failed test cases.}}; \say{\textit{Does this change have an impact on overall test run time? Thanks. Stuart, looks like this slows the tests a bit (about ten seconds) the way it stands.}}; \say{\textit{besides making functional test work, some unit test cases are also needed to be refactored.}}; \say{\textit{The value of this spec would be around the testing strategy to catch regressions.}}; \say{\textit{Is there enough coverage on this to ensure that all changes are tested?}}; \say{\textit{this patch has no unit test changes. With this approach of refactoring, I would expect a change in the approach of unit testing as well. Where are the new base class unit tests?}}; \end{quote} \end{comment} \noindent\textbf{Category \#5: Integration.} This category gathers reviews highlighting how refactoring has complicated the merging process, or triggered configuration issues. Several sub-categories arise, namely, \textit{configuration issue}, \textit{merge failure}, \textit{API management}, and \textit{build failure}. These categories show that refactoring may complicate the review process. As per our analysis, it appears that code changes in refactoring reviews tend to be more prone to failures or issues, compared to non-refactoring reviews. This observation is partially in line with previous studies \cite{mahmoudi2019refactorings,dig2007refactoring}, which found that merge conflicts that involve refactoring are more complex than conflicts with no refactoring changes. Moreover, we noticed that API upgrades and migrations typically trigger discussions, mainly when there are API breaking changes.
Developers tend to ask about the appropriate migration plans and the necessary changes to preserve the software's behavior during the migration process. Despite the existence of various tools to support the detection of breaking changes, discussions revealed that this process is still manual. \textcolor{black}{Besides agreeing with the existence of these subcategories, the participant also reported that his current company supports developers with review \textit{bots} to detect any merge-related issues early.} \input{Tables/RQ2_taxonomy} \begin{comment} \begin{quote} \par \say{\textit{This change was unable to be automatically merged with the current state of the repository.}}; \par \say{\textit{this is an example of a breaking change to the plugin interface (third-party plugins could reasonably expect that not providing a handle\_delete() method would mean that nothing happens at delete time, since there was explicit code to make sure nothing would break if they didn't provide one), which we should never do.}}; \par \say{\textit{This seems to be a breaking change in the interface, meaning this new cli wouldn't work with the old api and vice versa. I think it would be appropriate if this new code (in the API) produced dag\_execution\_date on the action dictionary returned, so as to be backwards compatible with old clients, and this new client should probably sniff this for the new way, and revert to the old way if needed.}} \end{quote} \end{comment} \noindent\textbf{Category \#6: Management.} Another category that emerged from the manual coding analysis is review management. The subcategories we found were \textit{no ongoing review activity}, \textit{forgotten review}, \textit{change dependency}, and \textit{review guideline violation}. In other words, we observe that some of the reviews take longer due to a lack of reviewer attention, with little prompt discussion about the proposed changes.
For instance, some reviews received a review score of +1 from a reviewer, but there was no other activity after waiting a couple of days or even months. \textcolor{black}{The follow-up interview confirms that this category is common in industry. Reviewers have their own workload, and some discussions can be subject to delays depending on the developer and reviewers' participation \cite{thongtanunam2015investigating}.} \begin{comment} \begin{quote} \par \say{\textit{This is a significant enough change that I think it warrants discussion with the drivers team and a spec. The current agent-based approach does allow you to mix and match different types of BGP implementations. I think there are patterns we can follow for supporting different backends we should look at, and a spec is the right place to hammer that out.}}; \par \say{\textit{Why is this necessary? Is there a clear benefit to making this change, other than adhering to a guideline which hasn't been adopted by OpenStack? Is there a plan to add linting of some sort to this module?}}; \end{quote} \end{comment} \section{Implications} \label{Section:Implications} \subsection{Implications for Practitioners} \textbf{Establishing guidelines for refactoring-related reviews.} Our taxonomy shows that reviewing refactoring goes beyond improving the code structure. To improve the practice of reviewing refactored code, and contribute to the quality of reviewing code in general, managers can collaboratively work with developers to establish customized guidelines for reviewing refactoring changes, which could instill beneficial and long-lasting habits and accelerate the process of reviewing refactoring. Additionally, since our RQ2 findings show that integration and testing are among the challenges encountered by developers when reviewing refactoring changes, it is recommended to utilize continuous integration to keep the testing suite in sync with the code base during and after refactoring.
Further, adherence to coding conventions is considered one of the difficulties when reviewing refactoring tasks. To cope with this challenge, we recommend that project leaders educate their developers about the coding conventions adopted in their systems. Considering the above-mentioned characteristics not only saves developers time and effort, but can also assist in making informed decisions and bring discipline to reviewing code involving refactorings within a software development team. \subsection{Implications for Researchers} \textbf{Exploring the potential of combining multiple behavior preservation strategies when reviewing refactorings.} Our study shows that preserving the behavior of software refactoring during the code review process is a critical concern for developers, and developers determine whether a behavior is preserved based on the context. Recently, AlOmar \textit{et al. \xspace} \cite{alomar2021preserving} aggregated the behavior preservation strategies that have been evaluated using single or multiple refactoring operations, and some of these refactorings are applied using multiple strategies. In order to accelerate the process of reviewing code involving refactoring, future researchers are advised to explore the potential of combining several behavior preservation approaches and use those which would be useful in the context of modern code review according to a defined set of criteria. \textcolor{black}{For instance, Soares \textit{et al. \xspace} \cite{soares2009saferefactor} have implemented the tool `SafeRefactor' to identify behavioral changes in transformations. It would be an interesting idea to verify the correctness after the application of refactoring by embedding test results and generating a test suite for capturing unexpected behavioral changes in the code review board.} \textbf{Supporting the refactoring of non-source code artifacts.} From RQ2, we discover that refactoring operations are not limited to source code files.
Artifacts such as databases and log files are also susceptible to refactoring. Similarly, we also observed discussions about refactoring test files. While it can be argued that test suites are source code files, recent studies by \cite{pantiuchina2020developers,alomar2021ESWA} show that the types of refactoring operations applied to test files are frequently different from those applied to production files. Hence, future research on refactoring is encouraged to introduce refactoring mechanisms and techniques exclusively geared toward refactoring non-source-code artifact types and test suites. \subsection{Implications for Tool Builders} \textbf{Developing next generation refactoring-related code review tools.} Finding that reviewing refactoring changes takes longer than non-refactoring changes reaffirms the necessity of developing accurate and efficient tools and techniques that can assist developers in the review process in the presence of refactorings. The refactoring toolset should be treated in the same way as the CI/CD toolset and integrated into the tool chain. Researchers could combine our findings with other empirical investigations of refactoring to define, validate, and develop a scheme for automated assistance in reviewing refactoring that takes the refactoring review criteria into account; reviewing code becomes an easier process if the code review dashboard is augmented with these factors and offers suggestions to better document the review. Moreover, we noticed that poor naming of code elements is one of the major bad refactoring practices typically caught by reviewers when reviewing refactoring changes. Constructing tools that enforce coding conventions would aid in speeding up the review of refactoring changes and allow reviewers to focus on deeper design defects.
\textcolor{black}{Furthermore, to accelerate the code review process and limit back-and-forth discussions seeking clarity on the problem faced by the developer, tool builders can develop \textit{bots} for the integration, testing, and management categories. Additionally, it would be interesting to use a popular and widely adopted quality framework, \textit{e.g.}, the Quality Gate of SonarQube \cite{olivier2013sonar}, as part of the quality verification process by embedding its results in the code review. This might facilitate convincing the reviewer about the impact and the correctness of the performed refactoring.} \section{Threats To Validity} \label{Section:Threats} In this section, we describe potential threats to the validity of our research method, and the actions we took to mitigate them. \textbf{Internal Validity.} Concerning the identification of refactoring-related code reviews, we select reviews with the keyword `\textit{refactor*}' in their title and description. Such selection criteria may have resulted in missing refactoring-related reviews, and there is the possibility that we may have excluded synonymous terms/phrases. However, even though this approach reduces the number of reviews in our dataset, it also decreases the false positives in our selection. While our data collection may result in missing some reviews, our approach ensures that we analyze reviews that are explicitly geared towards refactoring. In other words, these are reviews where developers were explicitly documenting a refactoring action and they wanted it to be reviewed. Additionally, upon performing the manual inspection of review discussions, we realized that refactoring is heavily emphasized in discussions that start with a title or a description containing the keyword `\textit{refactor*}'. Yet, this does not prevent other discussions from bringing refactoring into the picture, and these will be missed by our selection (\textit{i.e.}, false negatives).
We opted for such a strict selection to only consider discussions where code authors explicitly wanted their refactored code to be reviewed, so that reviewers would eventually propose refactoring-aware feedback, which is what we are aiming for in this study. Therefore, it would be interesting to consider scenarios where reviewers have raised concerns about refactoring a code change that was not intended to be associated with refactoring. Since refactoring can easily be interleaved with other functional changes, it would be interesting to extract scenarios where reviewers thought it was misused. Such a study can also help developers better understand not only how to refactor their code, but also how to document it properly for easier review. Further, we focus on the code review activity that is reported by the tool-based code review process, \textit{i.e.}, Gerrit, of the studied systems, due to the fact that other communication media (\textit{e.g.}, in-person discussion \cite{beller2014modern}, a group IRC \cite{shihab2009studying}, or a mailing list \cite{rigby2011understanding}) do not have explicit links to code changes, and recovering these links is a daunting task \cite{thongtanunam2016revisiting,bacchelli2010linking}. \textbf{Construct Validity.} Concerning the representativeness and the correctness of our refactoring review criteria, we derive these criteria from a manual analysis of a subset of refactoring-related reviews that have longer review durations. This approach may not cover the whole spectrum of all the review criteria applied with refactoring in mind. To mitigate this threat, we reviewed the top 7.3\% longest refactoring reviews, assuming that these reviews capture the most critical challenges. Additionally, to avoid personal bias during the manual analysis, each step in the manual analysis was conducted by two authors, and the results were always cross-validated. Another potential threat to validity relates to refactoring reviews.
Since refactorings could interleave with other changes \cite{murphy2012we} (\textit{i.e.}, developers perform changes together with refactorings), we cannot claim that the selected refactoring reviews are exclusively about refactoring. Nevertheless, during our qualitative analysis, we identified this activity as one of the challenges that contribute to slowing down the review process. \textbf{External Validity.} We focus our study on one open-source system, due to the low number of systems that satisfied our eligibility criteria (see Section \ref{Section:Methodology}). Therefore, our results may not generalize to all other open-source systems or to commercially developed projects. However, the goal of this paper is not to build a theory that applies to all systems, but rather to show that refactoring can have an impact on the code review process. Another potential threat relates to the proposed taxonomy. Our taxonomy may not generalize to other open source or commercial projects, since the refactoring review criteria may be different for another set of projects (\textit{e.g.}, outside the OpenStack community). Consequently, we cannot claim that the results of refactoring review criteria (see Figure \ref{fig:challenges}) can be generalized to other software systems where the need for improving the design might be less important. \textcolor{black}{To mitigate this threat, we validate the taxonomy with an experienced software developer, by conducting a follow-up interview to gather further insight and possible clarification. \textcolor{black}{Yet, performing the validation with only one developer brings its own bias. The choice of one developer was driven by their experience with code review. We mitigated the bias by selecting a developer who does not belong to the software systems we analyzed.
This brings an external opinion that has no conflict of interest with the current projects.}} \textbf{Conclusion Validity.} To compare two groups of code review requests, we used appropriate statistical procedures with \textit{p}-value and effect size measures to test the significance of the differences and their magnitude. A statistical test was deployed to measure the significance of the observed differences between group values. This test makes no assumption that the data is normally distributed, but it assumes the independence of the groups under comparison. We cannot verify whether code review requests are completely independent, as some can be re-opened, or one large code change can be treated using several requests. To mitigate this, we verified all the reviews we sampled for the test. \section{Conclusion} \label{Section:Conclusion} Understanding the practice of refactoring code review is of paramount importance to the research community and industry. Although modern code review is widely adopted in open-source and industrial projects, the relationship between code review and refactoring practice remains largely unexplored. In this study, we performed a quantitative and qualitative study to investigate the review criteria discussed by developers when reviewing refactorings. Our results reveal that reviewing refactoring changes takes longer to complete compared to non-refactoring changes, and developers rely on a set of criteria to reach a decision about accepting or rejecting a submitted refactoring change, which makes this process challenging. For future work, we plan on conducting a structured survey with software developers from both open-source and industrial settings. The survey will explore their general and specific review criteria when performing refactoring activities in code review.
This survey will complement and validate our current study to provide the software engineering community with a more comprehensive view of refactoring practices in the context of modern code review. Another interesting research direction is to link refactoring-related reviews to refactoring detection tools such as RefactoringMiner \cite{tsantalis2018accurate} or RefDiff \cite{silva2017refdiff} to better understand the impact of these reviews on specific refactoring types. \section{Data Availability}
\section{Introduction}\label{sec:INT} During their evolution, massive stars are known to go through a very short, high-luminosity transitional phase, called the luminous blue variable (LBV) phase. During this phase, LBVs undergo significant variations in photometric magnitudes and spectral features, characterized in particular by the appearance of broad components in the hydrogen and helium emission lines and in some heavy element ion lines in the UV and optical ranges. The broad emission in LBVs may be due to sharp eruptions of these massive stars, reaching a total mass loss of up to $\sim$30 -- 100 M$_\odot$. It can also be caused by stellar winds and expanding dense circumstellar envelopes. In these cases, H$\alpha$ luminosities are in the range of 10$^{36}$ -- 10$^{39}$ ergs s$^{-1}$ \citep{IzTG07}. The mass loss rate of hydrogen-rich layers through stellar winds is $\sim$10$^{-6}$ -- $\sim$10$^{-3}$ M$_\odot$ yr$^{-1}$ \citep{HumphreysDavidson1994,Smith1994,Drissen1997,Drissen2001}. However, very strong broad emission (FWHM $>$ 1000 km s$^{-1}$) can also be present in the spectra of objects other than massive stars, such as Type IIn Supernovae (SNe) and Active Galactic Nuclei (AGNs). In these objects, the luminosities of the broad H$\alpha$ component are larger and can reach values up to 10$^{40}$ -- 10$^{42}$ ergs s$^{-1}$ \citep{IzTG07,Sobral2020,Kokubo2021,Burke2021}. LBVs frequently show recurring eruptive events through various evolution phases, during their transition from young massive main-sequence stars to WR stars, SN explosions or massive black holes (BHs). It is believed that stars with masses greater than 20 -- 30 M$_\odot$ and luminosities $L$ $\sim$ 10$^{3}$ -- 10$^{6}$ $L_\odot$ go through the LBV phase \citep{Crowther2007,Solovyeva2020}.
Among all types of variable stars, only LBV stars show significant variability both in photometric brightness and spectroscopic features: rapidly amplified broad emission and blueward absorption lines, and a strongly enhanced continuum that becomes bluer in the UV and optical spectra. To date, a few hundred LBVs and candidate LBVs (cLBVs) are known to show irregular cyclic quasi-periodic brightness variations of $\sim$0.5 -- 2 mag on timescales from several years to decades. They are called S Dor LBVs \citep[see e.g. ][]{Massey2000,Humphreys2013,Humphreys2017,Humphreys2019,Grassitelli2020,Weis2020}. On the other hand, there exists a small number of LBV stars which show giant eruptions, with amplitudes of more than 2.5 -- 3 mag, on timescales of up to thousands of years \citep{DavidsonHumphreys1997,Smith2011,Vink2012,Weis2020}. Well-known prototypes of this category are $\eta$ Carinae and P Cygni with luminosities $\sim$10$^{40}$ ergs s$^{-1}$ \citep{Lamers1983,Davidson1999}. In some cases, the peak luminosity during the outbursts can reach $\sim$10$^{42}$ ergs s$^{-1}$ \citep{Kokubo2021}. Nearly all these known luminous LBVs are either in our Galaxy or in nearby galaxies. High intensity broad and very broad components of emission lines, with P Cygni profiles, have also been observed in the integral spectra of star-forming galaxies (SFGs), underlying the strong narrow emission lines produced in H~{\sc ii} regions \citep[see e.g. ][]{SCP99,GIT00,Lei01,IzT08,Guseva2012}. The most prominent spectral features in SFGs with LBVs are broad components of hydrogen and often helium lines with blueward absorption, and Fe {\sc ii} emission. To understand the physical mechanism responsible for the broad emission, time-monitoring of the broad spectral features is necessary. The reason is that, as said previously, broad emission occurs not only in LBV spectra but also in those of SNe and AGNs.
It is difficult to distinguish between these different possibilities without a long-term time monitoring of the broad features. It has now been established that a significant number of the objects detected in supernova surveys are not true supernovae, but belong to a category of objects called ``supernova impostors''. Ordinary LBVs of the S Dor type or LBVs with giant eruptions, like $\eta$ Car at maximum luminosity, appear among these ``impostors''. Despite the fact that many stars with LBV features have been discovered \citep{Weis2020}, only a few dozen of them are confirmed as genuine Galactic and extragalactic LBVs \citep{Wofford2020}. The remaining ones are cLBVs which require time monitoring to confirm their true nature. A genuine LBV would show a significant enhancement of the spectral and photometric features on a time scale of tens of years, followed by the disappearance of these features. This disappearance is necessary to rule out SNe, AGNs or other physical mechanisms \citep{Kokubo2021}. In addition, it is of particular interest to understand how LBV evolution in SFGs depends on the properties of the host galaxy, such as gas metallicity, interstellar medium density, star formation rate (SFR) and specific SFR (sSFR). Only very few LBVs are known to date in metal-poor SFGs with strong star-forming activity \citep[see e.g. ][and references therein]{Weis2020}. We discuss in this paper the time monitoring over two decades of the photometric and spectroscopic properties of cLBV stars in two extremely metal-deficient dwarf star-forming galaxies (SFGs): DDO 68 (with the cLBV located in its H~{\sc ii} region \#3), with 12 + log (O/H) = 7.15, and PHL 293B, with 12 + log (O/H) = 7.72. These two SFGs are the lowest-metallicity galaxies where LBV stars have been detected, allowing the study of the LBV phenomenon in the extremely low metallicity regime, and shedding light on the evolution of the first generation of massive stars possibly born from primordial gas.
The paper is structured as follows: the LBT/MODS and 3.5m APO observations and data reduction are described in Sect.~\ref{sec:OBS}. In Sect.~\ref{sec:Results} we present the study of multi-epoch optical spectra of DDO\,68~\#3 and PHL\,293B. Finally, in Sect.~\ref{sec:conclusion} we summarize our main results. \input{tab6.tex} \input{tab7-DDO-G1.tex} \input{tab7-PHL-G.tex} \begin{figure*} \hbox{ \includegraphics[angle=-90,width=0.98\linewidth]{spectraDDO68.ps} } \caption{The rest-frame H$\alpha$ emission line profile in the DDO\,68~\#3 spectrum at different epochs, observed with the 3.5m APO telescope. Wavelengths are in \AA\ and fluxes are in units of 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$.} \label{fig6} \end{figure*} \begin{figure*} \hbox{ \includegraphics[angle=-90,width=0.48\linewidth]{fPHL293BbAPO_1.ps} \includegraphics[angle=-90,width=0.48\linewidth]{fPHL293BrAPO_1.ps} } \hbox{ \includegraphics[angle=-90,width=0.48\linewidth]{fPHL293Bb_1.ps} \includegraphics[angle=-90,width=0.48\linewidth]{fPHL293Br_1.ps} } \caption{The most time-separated spectra of PHL\,293B. Panels a) and b) show blue and red parts of the SDSS spectrum obtained with the 2.5m APO telescope on August 22, 2001. Panels c) and d) show the spectrum obtained with LBT/MODS on November 18, 2018. Absorption features of P Cygni profiles for hydrogen and helium emission lines are indicated in the rest-frame spectra by red and blue arrows, respectively. Wavelengths are in \AA\ and fluxes are in units of 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$.} \label{fig6a} \end{figure*} \begin{figure*} \hbox{ \includegraphics[angle=-90,width=0.98\linewidth]{spectraPHL293B.ps} } \caption{The rest-frame H$\alpha$ emission line profiles in the PHL\,293B spectra at the different epochs listed in Table~\ref{tab7-PHL-G}. 
Wavelengths are in \AA\ and fluxes are in units of 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$.} \label{fig7} \end{figure*} \begin{figure} \hbox{ \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2008-10-27-Ha-2.ps} \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2008-11-06-Ha-2.ps} } \hbox{ \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2009-02-22-Ha-2.ps} \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2009-11-19-Ha-2.ps} } \hbox{ \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2010-02-06-Ha-2.ps} \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2010-03-20-Ha-2.ps} } \hbox{ \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2010-10-31-Ha-2.ps} \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2012-02-15-Ha-2.ps} } \hbox{ \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2012-05-16-Ha-1.ps} \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2013-06-01-Ha-1.ps} } \hbox{ \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2016-04-09-Ha-1.ps} \includegraphics[angle=-90,width=0.49\linewidth]{DDO-2018-04-07-Ha-0.ps} } \caption{Decomposition by Gaussians of the H$\alpha$ narrow (blue dotted lines) and broad emission (blue solid lines) profiles in the spectra of DDO\,68~\#3, listed in Table~\ref{tab7-DDO-G} and shown in Fig.~\ref{fig6}. The observed profile is shown by the black solid line whereas the summed flux of narrow+broad components is represented by the red solid line. Wavelengths are in \AA\ and fluxes are in units of 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$.} \label{fig2} \end{figure} \begin{figure} \hbox{ \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2012-02-15-Ha-3.ps} \includegraphics[angle=-90,width=0.49\linewidth]{DDO_2012-05-16-Ha-3.ps} } \caption{The observed profile of H$\alpha$ in DDO\,68~\#3 with an excess flux at high velocities in redward wings. This excess is seen only for the observations on 15 February and 16 May 2012 (see Fig.~\ref{fig2}h and Fig.~\ref{fig2}i). 
Three Gaussians were used to fit the narrow, broad and very broad components (blue dots, turquoise solid and blue solid lines, respectively). The total fitted profile is represented by a red solid line. Wavelengths are in \AA\ and fluxes are in units of 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$.} \label{fig3} \end{figure} \begin{figure} \hbox{ \includegraphics[angle=-90,width=0.49\linewidth]{PHL_2001-08-22-Ha-3-linear.ps} \includegraphics[angle=-90,width=0.49\linewidth]{PHL_2010-10-06-Ha-3.ps} } \hbox{ \includegraphics[angle=-90,width=0.49\linewidth]{PHL_2014-11-17-Ha-2.ps} \includegraphics[angle=-90,width=0.49\linewidth]{PHL_2015-11-06-Ha-2.ps} } \hbox{ \includegraphics[angle=-90,width=0.49\linewidth]{PHL_2017-12-15-Ha-2.ps} \includegraphics[angle=-90,width=0.49\linewidth]{PHL_2020-11-18-Ha-2.ps} } \caption{The H$\alpha$ profiles in the PHL\,293B spectra obtained with different telescopes during different epochs. The black lines represent the observed spectra. The fits of H$\alpha$ by three or four Gaussians are shown by blue dotted lines (narrow component), by turquoise solid lines (broad component), by blue solid lines (very broad component) and by magenta dashed lines (for absorption features of P Cygni profiles whenever possible). Note that, in contrast to Fig.~\ref{fig2} and Fig.~\ref{fig3}, the y-axes have different scales, to better see the shape of the broad components. Fluxes are in units of 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$ and wavelengths are in \AA. } \label{fig4} \end{figure} \begin{figure} \centering{ \includegraphics[angle=-90,width=0.7\linewidth]{timevar_G.ps} \includegraphics[angle=-90,width=0.7\linewidth]{fluxvar_G.ps} } \caption{a) Temporal variations of the flux ratio of broad-to-narrow H$\alpha$ components in DDO\,68~\#3. 
The black solid line shows the APO observations discussed here while the blue parts of the curve indicate Special Astrophysical Observatory (SAO) and APO observations by \citet{Pustilnik2005,Pustilnik2008} (dotted lines) and by \citet{IzT09} (solid line). The temporal variations of the same flux ratio in PHL\,293B are shown by the red dashed line. Note that the total flux of the whole broad bump (i.e. the sum of the broad and very broad emission components, and of the absorption component when appropriate) was taken to be the broad component when calculating this broad-to-narrow ratio. b) Temporal variations of H$\alpha$ fluxes in different components for DDO\,68~\#3. The thick black line shows the APO observations discussed here while the thin black line represents data by \citet{Pustilnik2005,Pustilnik2008} and \citet{IzT09}. The colored lines show the data for PHL\,293B. } \label{fig8} \end{figure} \section{OBSERVATIONS AND DATA REDUCTION}\label{sec:OBS} Over the course of more than a decade, starting in October 2008, we have obtained a series of spectroscopic observations for the two SFGs DDO\,68 (its H~{\sc ii} region \#3) and PHL\,293B, using two different telescopes and instrumental setups. \subsection{APO Observations} The first instrumental setup consisted of the Dual Imaging Spectrograph (DIS) mounted on the ARC 3.5m Apache Point Observatory (APO) telescope. The blue and red channels of the DIS permitted simultaneous observation of the objects over a wide range of wavelengths. A long slit was used during all APO observations. The B1200 grating, giving medium resolution ($R$ = 5000) with a central wavelength of $\sim$4800 \AA\ and a linear dispersion of 0.62 \AA\ pixel$^{-1}$, was used in the blue range. In the red range, we employed the R1200 grating with a central wavelength of $\sim$6600 \AA\ and a linear dispersion of 0.56 \AA\ pixel$^{-1}$.
In this way, APO medium resolution spectra of the two SFGs DDO\,68~\#3 and PHL\,293B were obtained, that span two wavelength ranges, from $\sim$4150 to 5400 \AA\ in the blue range, and from $\sim$6000 to 7200\AA\ in the red range. The journal of the APO observations, giving the observation dates, the exposure times, the slit widths and the airmasses, is shown in Table~\ref{tab6}. \subsection{LBT Observations} For PHL\,293B, one high signal-to-noise ratio spectroscopic observation was also carried out with the 2$\times$8.4m Large Binocular Telescope (LBT). We employed the LBT in binocular mode, utilizing both the MODS1 and MODS2 spectrographs, equipped with 8022$\times$3088 pixel CCDs. Two gratings, one in the blue range, G400L, with a dispersion of 0.5\AA\ pixel$^{-1}$, and another in the red range, G670L, with a dispersion of 0.8\AA\ pixel$^{-1}$, were used. The PHL\,293B spectrum was obtained in the wavelength range $\sim$3150 -- 9500 \AA\ with a 60$\times$1.2 arcsec slit, giving a resolving power $R$ $\sim$ 2000. The seeing during the observation was in the range 0.5 -- 0.6 arcsec. Three subexposures were obtained, resulting in an effective exposure time of 2$\times$2700 s when adding the fluxes obtained with both spectrographs, MODS1 and MODS2. The spectrum of the spectrophotometric standard star GD~71, obtained during the LBT observation with a 5 arcsec wide slit, was used for flux calibration. It was also used to correct the red part of the PHL\,293B spectrum for telluric absorption lines. The journal of the LBT observation is also shown in Table~\ref{tab6}. \subsection{Data reduction} The two-dimensional APO spectra were bias- and flat-field corrected, corrected for frame distortion and tilt, and background subtracted using {\sc iraf} routines. For the LBT observation, the MODS Basic CCD Reduction package {\sc mods}CCDR{\sc ed} \citep{Pogge2019} was used for flat field correction and bias subtraction.
Wavelength calibrations of both LBT and APO observations were performed using spectra of comparison lamps obtained every night before and after the observations. Each two-dimensional spectrum was aligned along the brightest part of the galaxy. After wavelength and flux calibration and removal of cosmic particle trails, all subexposures were summed. One-dimensional spectra were then extracted along the spatial axis so that the entire bright part of the H~{\sc ii} region falls into the selected aperture. In summary, we have obtained twelve new APO observations of DDO\,68~\#3 and four new APO observations of PHL\,293B, together with one new LBT spectrum of PHL\,293B. To increase the time baseline for PHL\,293B, we have also included in our analysis the Sloan Digital Sky Survey (SDSS) spectrum obtained with the 2.5m APO telescope on 22 August 2001 and available in the SDSS archive. This brings the total number of PHL\,293B spectra to be analyzed to six. More details are given in Tables~\ref{tab7-DDO-G} and \ref{tab7-PHL-G}. The spectra are shown in Fig.~\ref{fig6} and Fig.~\ref{fig7}, respectively. All spectra are displayed in the wavelength range around H$\alpha$ to better emphasize the temporal changes of this line. In Fig.~\ref{fig6a} we also show the two spectra of PHL\,293B that are most separated in time, over the whole wavelength range of the observations. \section{RESULTS}\label{sec:Results} \subsection{Profile decomposition} The most remarkable features in the DDO\,68~\#3 and PHL\,293B spectra are the strong broad components with P Cygni profiles underlying the narrow nebular emission of the hydrogen and helium lines. To derive quantitative measurements of these features, we have decomposed the profiles of the hydrogen emission lines into the sum of several Gaussian profiles: a high-intensity narrow component, and low-intensity broad and very broad (the latter when needed) components, using the {\sc iraf/splot} deblending routine.
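The {\sc iraf/splot} decomposition described above is essentially a nonlinear least-squares fit of a sum of Gaussians. A minimal Python sketch of such a narrow+broad decomposition on synthetic data (all parameter values are hypothetical and chosen only for illustration; {\sc scipy} is used here in place of {\sc iraf}):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    """A single Gaussian profile."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def two_gauss(x, a1, m1, s1, a2, m2, s2):
    """Narrow + broad component model for an emission line."""
    return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

# Synthetic H-alpha profile: a strong narrow nebular line plus a
# weak broad pedestal (hypothetical amplitudes and widths)
lam = np.linspace(6500.0, 6630.0, 600)            # wavelength grid [A]
truth = (3.0, 6563.0, 1.5, 0.2, 6563.0, 12.0)     # a1, m1, s1, a2, m2, s2
rng = np.random.default_rng(0)
flux = two_gauss(lam, *truth) + rng.normal(0.0, 0.01, lam.size)

# Fit; initial guesses bracket the narrow and broad widths
p0 = (2.0, 6563.0, 2.0, 0.1, 6563.0, 10.0)
popt, _ = curve_fit(two_gauss, lam, flux, p0=p0)

fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0))           # FWHM = 2.3548 sigma
print("narrow FWHM [A]:", fwhm * abs(popt[2]))
print("broad  FWHM [A]:", fwhm * abs(popt[5]))
```

A third Gaussian (and a negative one for the P Cygni absorption) can be added to the model function in the same way when the residuals warrant it, which mirrors the three- and four-component fits used for PHL\,293B.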
The fluxes and full widths at half maximum (FWHM) of the H$\alpha$ narrow and broad components, along with the terminal velocities and the total luminosities of the broad components in the two SFGs, are given in Table~\ref{tab7-DDO-G} and Table~\ref{tab7-PHL-G} for all spectra shown in Figs.~\ref{fig6} and \ref{fig7}. The data for the very broad components in PHL\,293B are also given in Table~\ref{tab7-PHL-G}. Details of the profile decomposition of the H$\alpha$ line can be seen in Figs.~\ref{fig2},~\ref{fig3},~\ref{fig4}. For luminosity determination, the observed fluxes of narrow and broad emission lines were corrected for the extinction and underlying stellar absorption taken from \citet{IzT09}, derived in accordance with the prescription of \citet{IzThLip1994}. Note that the extinction coefficient $C$(H$\beta$) and equivalent width of the underlying absorption (EW$_{\rm abs}$) derived from the high signal-to-noise ratio MMT spectrum of DDO\,68~\#3 and the VLT spectrum of PHL\,293B \citep[see table 3 in ][]{IzT09} are very small in both galaxies. $C$(H$\beta$) and EW$_{\rm abs}$ have zero values in DDO\,68~\#3 and are respectively equal to 0.08 and 0.05 in PHL\,293B. \subsection{DDO\,68~\#3}\label{subsec:DDO68} DDO\,68 (UGC 05340) is one of the most metal-deficient SFGs known \citep[12 + logO/H = 7.15$\pm$0.04, 7.14$\pm$0.07, ][]{IzT09,Annibali2019}. It is likely in the process of forming by hierarchical merging \citep{Pustilnik2017,Annibali2019a}. Broad spectral features with blueshifted absorption in the lines of the hydrogen series as well as in some He~{\sc i} lines were first noticed in DDO\,68 by \citet{Pustilnik2008}. Those authors attributed the broad features to the outburst of a LBV located in one of the H~{\sc ii} regions of DDO\,68, named Knot 3, which we will designate by \#3 in the remainder of this paper.
Based on photometric and spectroscopic observations, \citet{IzT09} dated the start of the strong LBV outburst in DDO\,68~\#3 to be between 2007 February and 2008 January. They confirmed the presence of P Cygni profiles in both the H~{\sc i} and He~{\sc i} emission lines and found that the Fe {\sc ii} emission lines are not present, in contrast to ``typical'' LBVs \citep{Pustilnik2008,IzT09}. The absence of Fe {\sc ii} could be due to the extremely low metallicity and thus low optical depth precluding considerable radiative pumping. \citet{Pustilnik2017} described the state of the LBV in 2015--2016 as being in a fading phase. Observations carried out by \citet{Annibali2019} with LBT/MODS1 and LBT/MODS2 in February 2017 do not show the characteristic signs of a LBV star. This led them to conclude that, by 2017, the LBV was back in a quiescent phase. We provide here a more complete picture of the time evolution of the LBV in DDO\,68~\#3 by monitoring its spectrum. Our observations started in 2008, were carried out as often and as regularly as possible afterwards, and ended in 2018. We show in Fig.~\ref{fig6} the time evolution of the H$\alpha$ line. We emphasize that all spectra were obtained with the same telescope and instrumental setup (3.5m APO/DIS) and reduced in a uniform manner, so they are directly comparable. We remark that, as seen in Fig.~\ref{fig6} and more clearly in Fig.~\ref{fig2}, the [N~{\sc ii}] emission lines near H$\alpha$ are barely detected, due to the low metallicity of DDO\,68~\#3. Our new APO observations in the monitoring series of DDO\,68~\#3 reveal that the fluxes of the H$\alpha$ broad components are nearly an order of magnitude higher at the end of October 2008 as compared to January 2008, when the LBV was discovered by \citet{Pustilnik2008} (Table~\ref{tab7-DDO-G} and Fig.~\ref{fig8}b, thick solid black line). The MMT observation of DDO\,68~\#3 in March 2008 by \citet{IzT09} is consistent with that trend.
It showed a broad H$\beta$ flux $\sim$2 times lower than the one in the first observation in our APO monitoring series starting 6 months later, in October 2008. Our APO monitoring, spanning more than a decade, shows that the features characteristic of the eruption of the LBV in DDO\,68~\#3 persisted until sometime between the end of 2010 and the beginning of 2012, with small variations in the component fluxes. The width of the broad component remains unchanged until October 2010 with a FWHM $\sim$1000--1200 km s$^{-1}$ (Table~\ref{tab7-DDO-G}). After 2011 the flux and the velocity of the broad component markedly decreased. Furthermore, the absorption feature in the P Cygni profile disappeared toward the beginning of 2012. That disappearance, together with the sharp decrease of the fluxes of the broad features, is possibly due to the dense stellar envelope being destroyed and scattered. In addition to the presence of broad components with absorption components, observed during the period from 2008 up to 2012 (Fig.~\ref{fig2} a -- g), a low-intensity high-velocity redward tail made its appearance in 2012. It can be seen as a flux excess above the Gaussian fit to the broad component (Fig.~\ref{fig2}h and \ref{fig2}i). It should be noted, however, that the best fit to the 2012 profiles could only be obtained by including two Gaussians for the broad component (Fig.~\ref{fig3}, a broad one and a very broad one). Thus, a supplementary broad component with a small FWHM = 300 -- 500 km s$^{-1}$ is present, in addition to the very broad component with a larger FWHM $\sim$ 1000 km s$^{-1}$ (see Fig.~\ref{fig3}, solid turquoise curves). Note the blueward shift of the broad components in Fig.~\ref{fig3} (solid turquoise curves). The appearance of the high-velocity tail and the subsequent disappearance of the P Cygni profile can likely be interpreted as the break-up of a dense circumstellar envelope by a rapid outflow.
In 2013, the spectrum, despite being somewhat noisy, still clearly shows the broad blueward component (Fig.~\ref{fig2}j). By 2018, the broad components are completely gone from the spectra, suggesting that the LBV shell has disappeared. The outburst event has thus lasted from 2008 to 2013, for a duration of about 5 years. Assuming that the broad components of the emission lines belong exclusively to the LBV, and that the narrow emission components are predominantly nebular, we can trace the decay of the LBV outburst by using the flux ratio of the broad-to-narrow components in each of the hydrogen and helium lines. This can be done most reliably for the brightest H$\alpha$ line. The temporal evolution of the H$\alpha$ broad-to-narrow flux ratio is shown in Fig.~\ref{fig8}a. A maximum for this flux ratio is seen between 2009 and 2010. It is more pronounced than the maximum for the flux of the broad component (Fig.~\ref{fig8}b). However, in general, the variation of the broad-to-narrow flux ratio follows tightly the flux temporal evolution of the broad component. To derive line luminosities for DDO\,68, we need to know the distance of the dwarf galaxy. Based on the same imaging {\sl HST} observations of the dwarf galaxy (GO 11578, PI: Aloisi), \citet{Cannon2014,Sacchi2016,Makarov2017} have applied the TRGB (Tip of the Red Giant Branch) method to derive distances $D$ = 12.74, 12.65 and 12.75 Mpc for DDO\,68, respectively. New {\sl HST} observations of a stream-like system associated with DDO\,68 by \citet{Annibali2019a} give $D$ = 12.8 $\pm$ 0.7 Mpc, using the same TRGB method. The mean of these values is $D$ = 12.7 Mpc, which we adopt. This distance is more than twice as large as the previous indirect distance determination of $D$$\sim$5 -- 7 Mpc (NED) \citep[see e.g. ][]{Pustilnik2005,IzT09}. With this distance, the luminosity of the H$\alpha$ broad component at the maximum is $\sim$ 2$\times$10$^{38}$ ergs s$^{-1}$.
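The conversion from an extinction-corrected line flux to a luminosity is the usual $L = 4\pi D^2 F$. A minimal sketch with the adopted distance (the input flux of 10$^{-14}$ erg s$^{-1}$ cm$^{-2}$ is a hypothetical value, chosen only because it reproduces the order of the quoted maximum luminosity):

```python
import math

MPC_CM = 3.0857e24          # 1 Mpc in cm

def line_luminosity(flux_cgs, d_mpc):
    """L = 4 * pi * D^2 * F for an extinction-corrected line flux
    given in erg s^-1 cm^-2 and a distance in Mpc."""
    d_cm = d_mpc * MPC_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# Adopted TRGB distance to DDO 68 and a broad H-alpha flux of
# ~1e-14 erg s^-1 cm^-2 (hypothetical illustrative value)
L = line_luminosity(1.0e-14, 12.7)
print(f"L(Halpha, broad) ~ {L:.1e} erg/s")   # ~2e38 erg/s
```

Note that the factor-of-two revision of the distance (from $\sim$5 -- 7 Mpc to 12.7 Mpc) raises all inferred luminosities by roughly a factor of four, since $L \propto D^2$.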
This maximum luminosity and the FWHM ($\sim$1000--1200 km s$^{-1}$) of the broad component of the lines during the transient event are in the ranges of those observed for LBV stars. The data thus suggest that a LBV star in the H~{\sc ii} region DDO\,68~\#3 has undergone an outburst during the period 2008--2013. The terminal velocity $v$$_{\rm term}$ of the LBV stellar wind is an important parameter for confronting theory with observation. It is obtained from the wavelength difference between the maximum of the broad emission line profile and the minimum of the blue absorption line profile. The terminal velocity $v$$_{\rm term}$ in DDO\,68~\#3 is $\sim$800 km s$^{-1}$ and does not change significantly over time (Table~\ref{tab7-DDO-G}). It remains also unchanged during the ``maximum'' of the eruption, as well as during the decay period starting in 2013. Nearly the same value of the expanding wind terminal velocity was reported by \citet{IzT09} and \citet{Pustilnik2017}. To summarize, the new data suggest that the broad component fluxes of the hydrogen lines of the LBV reached a maximum during the time interval $\sim$ 2008--2012. The narrow component also reached a maximum during the same period, though a less pronounced one. The broad-to-narrow component flux ratio reached a maximum at about the same period. This also supports the hypothesis that the LBV star underwent an eruptive event. It can be seen in Fig.~\ref{fig8} that the H$\alpha$ luminosity of the LBV star is now at a minimum or close to it. \subsection{PHL\,293B}\label{subsec:PHL 293B} The situation with understanding the physical processes taking place in PHL\,293B is more complex than in DDO\,68~\#3. PHL\,293B (J2230--0006) is a well known SFG with a moderately low metallicity 12+logO/H $\sim$ 7.6--7.7 \citep[e.g. ][]{Papaderos2008,IzGusFrHenkel2011,Fernandez2018}.
\citet{IzT09} first detected a broad component with P Cygni profiles in the strong hydrogen emission lines of PHL\,293B in a VLT UVES spectrum obtained on 8 November 2002, and in a SDSS spectrum taken on 22 August 2001. Those authors suggested that these broad features are due to a transient LBV phenomenon. Earlier observations of PHL\,293B by \citet{Kinman1965} and \citet{French1980} did not mention any broad emission. The broad component was again detected by \citet{IzGusFrHenkel2011} in a VLT X-Shooter observation on 16 Aug 2009. Later observations have been carried out, and various other hypotheses have been proposed to interpret them. Thus, \citet{Terlevich2014} have suggested that the observations of PHL\,293B can be explained by a young dense expanding supershell driven by a stellar cluster wind and/or two supernova remnants or by a stationary wind. Hydrodynamic models have been built by \citet{Tenorio2015}. On the basis of 10.4m GTC (Gran Telescopio Canarias)/MEGARA observations performed in July 2017, \citet{Kehrig2020} found a flux ratio $I$$_{br}$/$I$$_{nar}$ $\sim$0.10 in the H$\alpha$ emission line of all integrated regions. They interpret this low value as due to a diminution of the broad H$\alpha$ emission. \citet{Allan2020}, from spectroscopic observations including the 2019 VLT X-Shooter data and radiation transfer modeling, report the absence of the broad emission component since 2011 and conclude that the LBV was in an eruptive state during 2001 -- 2011 that has ended. \citet{Burke2021} also report the decrease of the $I$$_{br}$/$I$$_{nar}$ ratio from 0.41 (SDSS, 2001) to $\sim$0.10, using Gemini observations taken in December 2019. Despite the fact that an AGN-like damped random walk model fits the observed light curve well, those authors concluded that a long-lived stellar transient of type SN IIn can better explain all the data for PHL\,293B.
On the other hand, \citet{Prestwich2013} have emphasized the lack of X-ray emission in the galaxy, establishing an upper limit of $L_X$ $\sim$ 3 $\times$ 10$^{38}$ erg s$^{-1}$ (their table 6). This casts some doubt on the supernova model, thought to be the most probable one for explaining the PHL\,293B phenomenon. \citet{Kehrig2020} have detected P Cygni profiles in the H$\alpha$ and H$\beta$ emission lines in July 2017, while those features were not found by \citet{Burke2020} in H$\alpha$ in December 2019. These variations suggest that long-time monitoring, such as the observations described here, is important for distinguishing between various models. Contrary to the conclusions of \citet{IzT09}, based on the archival VLT/UVES observations on 2002-11-08, that P Cygni profiles are seen only in the hydrogen lines, and those of \citet{Terlevich2014}, our new PHL\,293B observations, as well as a reconsideration of the SDSS archival data (2001-08-22) and the VLT/X-Shooter observation by \citet{IzGusFrHenkel2011} on 2009-08-16, show that blue-shifted absorption lines are detected not only in hydrogen emission lines, but also in He~{\sc i} lines (Fig.~\ref{fig6a}). In other words, the situation is the same as in the case of DDO\,68~\#3, which has a much lower metallicity. However, the permitted Fe {\sc ii} emission lines, which generally originate in dense circumstellar envelopes and are usually seen in the spectra of LBV stars experiencing a giant eruption and creating an envelope, have not been detected in our new observations. Note that \citet{Terlevich2014} found blue-shifted Fe {\sc ii} absorption with a terminal velocity of $\sim$ 800 km s$^{-1}$. We did not detect permitted Fe~{\sc ii} lines in the LBT spectrum obtained in 2020. Only forbidden [Fe~{\sc iii}]$\lambda$$\lambda$4658, 4702, 4755:, 4986, 5270\AA\ emission lines were found in this spectrum, with no absorption features.
In the decomposition of the strong emission lines, we have always attempted to fit the broad component in the simplest way possible, i.e., with the smallest number (preferably one) of Gaussian profiles. We also tried to perform the fitting with Lorentzian profiles, but the results were always worse than in the case of Gaussian profiles. In the case of PHL\,293B, not one but two broad components are required to match the observed profiles well. The decomposition of H$\alpha$ for all spectra from Table~\ref{tab7-PHL-G} into narrow, broad, very broad (and absorption lines, whenever warranted) components is presented in Fig.~\ref{fig4}. The P Cygni profile has persisted over some two decades, during the period 2001--2020, as is clearly seen in Fig.~\ref{fig6a}. The blueward absorption feature at wavelength $\sim$6545\AA\ is close to the nitrogen emission line [N~{\sc ii}]$\lambda$6548\AA, so that in many cases decomposition of the absorption profile in the H$\alpha$ emission line is difficult. The absorption is likely present, but masked by the strong broad and very broad components and the nitrogen emission line (Fig.~\ref{fig7} and Fig.~\ref{fig4}). We obtain a high terminal velocity $v_{term}$$\sim$800 km s$^{-1}$ for the absorption component in the Balmer lines. This value is the same as the ones derived by \citet{IzT09}, \citet{IzGusFrHenkel2011} and \citet{Terlevich2014} from X-Shooter observations made in Aug. 2009 and in Aug.-Sep. 2009, respectively. The luminosity of the broad bump (i.e. the sum of broad and very broad components) varies from a few 10$^{38}$ erg s$^{-1}$ up to $\sim$10$^{39}$ erg s$^{-1}$. The very broad component derived by us from the SDSS spectrum taken in August 2001 (Fig.~\ref{fig4}, Table~\ref{tab7-PHL-G}) has nearly the same redshift as \citet{Terlevich2014}'s faint ultra-broad component measured from their 2009 observations and shown in their figure 2, but a flux that is $\sim$100 times lower.
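The terminal velocity quoted here follows from the Doppler blueshift of the absorption minimum relative to the emission peak, $v_{\rm term} = c\,\Delta\lambda/\lambda$. A minimal sketch using the rest-frame H$\alpha$ wavelength and the $\sim$6545 \AA\ absorption feature mentioned above:

```python
C_KMS = 299792.458   # speed of light [km/s]

def terminal_velocity(lam_emission, lam_absorption):
    """v_term from the blueshift of the P Cygni absorption minimum
    relative to the emission peak: v = c * (lam_em - lam_abs) / lam_em."""
    return C_KMS * (lam_emission - lam_absorption) / lam_emission

# Rest-frame H-alpha peak vs the ~6545 A absorption minimum
v = terminal_velocity(6562.8, 6545.0)
print(f"v_term ~ {v:.0f} km/s")   # ~800 km/s
```

The same relation converts the FWHM of the broad components from \AA\ to km s$^{-1}$ when comparing the widths quoted in the tables.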
Note that \citet{Terlevich2014} also fitted H$\alpha$ in their X-Shooter observation by two broad components. At about the same redshift, \citet{IzGusFrHenkel2011} also saw an excess in their fit of the H$\beta$ broad component (their figure 3). In our subsequent series of observations of PHL\,293B starting in 2010, the very broad component does not show any redward shift. On the contrary, the broad and very broad components are centered practically at the systemic velocity of the galaxy. This means that, at least since 2010, there have been no large-velocity outflows. We remark also that the broad components in the SDSS spectrum dated 2001 \citep{IzT09}, and in the VLT X-Shooter spectra dated 2009 \citep{IzGusFrHenkel2011,Terlevich2014}, have quite large FWHMs, $\sim$ 1500, 1000 and 1000 km s$^{-1}$, respectively. After 2010, these components become much narrower, with FWHM $\sim$ 160 -- 180 km s$^{-1}$. This indicates that the velocity dispersions of the moving and radiating matter have decreased, at least in the broad components. If the ionization parameter is close to the limiting value of log$U$ $\la$ --2, the radiation pressure can prevail over the ionized gas pressure \citep{Dopita2002}, and some contribution of a radiation-driven (i.e. by LyC and/or Ly$\alpha$) superwind from a young stellar cluster, producing a high-velocity very broad component, will be possible \citep{Komarova2021}. To check this possibility, we use accurate oxygen abundances and an observational indicator of the ionization parameter, O32 = [O~{\sc iii}] 4959,5007/[O~{\sc ii}]3727, for PHL\,293B (12 + logO/H = 7.72 and O32 = 15.52 from \citet{IzT09} and 12 + logO/H = 7.71 and O32 = 14.21 from \citet{IzGusFrHenkel2011}) to estimate log$U$. We get \citep[following ][]{Kobulnicky2004} log$U$ = --2.2 and log$U$ = --2.3, respectively, which are close to the maximum value, but still below it.
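The log$U$ estimates above can be reproduced from O32 and the oxygen abundance with the ionization-parameter calibration of \citet{Kobulnicky2004} (their eq. 13) and $U = q/c$. A minimal sketch, to be treated as illustrative since the full calibration scheme iterates with the abundance determination:

```python
import math

def log_q_kk04(o32, oh12):
    """Ionization parameter q from O32 and 12+log(O/H), following
    eq. (13) of Kobulnicky & Kewley (2004)."""
    y = math.log10(o32)
    num = (32.81 - 1.153 * y**2
           + oh12 * (-3.396 - 0.025 * y + 0.1444 * y**2))
    den = (4.603 - 0.3119 * y - 0.163 * y**2
           + oh12 * (-0.48 + 0.0271 * y + 0.02037 * y**2))
    return num / den

C_CMS = 2.9979e10   # speed of light [cm/s]; U = q / c

# The two (O32, 12+log O/H) pairs quoted above for PHL 293B
for o32, oh12 in [(15.52, 7.72), (14.21, 7.71)]:
    logU = log_q_kk04(o32, oh12) - math.log10(C_CMS)
    print(f"O32={o32:5.2f}, 12+log(O/H)={oh12}: logU = {logU:.1f}")
```

Evaluating this for the two quoted data sets recovers log$U$ $\simeq$ $-2.2$ and $-2.3$, matching the values given in the text.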
Note that it is higher than log$U$ = --2.9 derived from the MMT observation by \citet{IzT09} for DDO\,68~\#3. In general, the temporal variations of both the broad-to-narrow component flux ratios and of the H$\alpha$ fluxes of the narrow and broad components for PHL\,293B (Fig.~\ref{fig8}, colored lines) are very different from those of DDO\,68~\#3 (Fig.~\ref{fig8}, black lines). A strong variability of the H$\alpha$ fluxes, simultaneously in both the narrow and broad and very broad components, from 2011 to 2018, is seen in PHL\,293B (Fig.~\ref{fig8}b). Flux jumps by a factor of $\sim$6 are observed in both the narrow and the sum of the broad and very broad components (Table~\ref{tab7-PHL-G}). We have checked the logs of the observations and have determined that all observations were carried out in photometric conditions, so that the flux variations cannot be attributed to sky variability. The flux variations, both in the narrow and broad and even very broad components (a sharp flux decrease in 2014 followed by a rise at the end of 2015, followed by another decrease in 2017), could in principle be explained by inaccurate pointing when observing with a narrow slit of 0.9 arcsec. However, the variations are likely to be real, because the light curves derived by \citet{Burke2021} from SDSS and DES (Dark Energy Survey) imaging between 1998 and 2018 (their figure 2) also show small-amplitude variability in the $g$ and $r$ bands during the same years. The synchronicity in the variability of fluxes in the narrow and broad components does not hold anymore starting in 2018 \citep[see also the fading of the broad component in ][]{Allan2020,Burke2021}. In contrast to the outburst nature of the variability over time of the broad-to-narrow flux ratio in DDO\,68~\#3, which manifests itself in the form of a peak (Fig.~\ref{fig8}a, black line), PHL\,293B does not undergo such a type of variation. Its broad-to-narrow flux ratios depend weakly on time, at least over the past two decades. 
However, a decrease in the broad-to-narrow flux ratio can be seen starting from the very end of 2017 and continuing to 2020 (Fig.~\ref{fig8}a). This decrease can be partly explained by an increase of the narrow component relative to the broad one (Fig.~\ref{fig8}b, blue and red dashed and green solid lines). In the case of PHL\,293B, the behavior of the broad component may be explained by several effects. Enduring broad hydrogen Balmer P Cygni profiles, with an absorption feature blue-shifted by $\sim$800 km s$^{-1}$, can indicate the presence of fast-moving ejecta. The persistent luminosity of the broad Balmer emission and the possibility that the Balmer emission could be due to a long-lived stellar transient, like LBV/SN, motivate additional follow-up spectroscopy to distinguish between these effects. \subsection{LBVs in other SFGs} We now compare the behavior of the candidate LBVs in DDO\,68~\#3 and PHL\,293B with those in other SFGs. The LBV in the relatively low-metallicity galaxy Mrk 177 = UGCA 239 at the post-merger stage, with 12 + logO/H = 8.58 and $D_L$ = 28.9 Mpc, has undergone multiple outbursts during the last 20 years \citep{Kokubo2021}. The luminosity of the broad H$\alpha$ component of Mrk 177 varies from a maximum of 10$^{40}$ erg s$^{-1}$ during the strongest explosion to 10$^{39}$ erg s$^{-1}$ during the next two strong explosions. This is an order of magnitude higher than the corresponding values of $L$(H$\alpha$) (broad) = (2 -- 9)$\times$10$^{38}$ erg s$^{-1}$ in PHL\,293B. As for the H$\alpha$ broad-to-narrow flux ratio for the LBV in Mrk 177, it varies in the interval 3.4 -- 4.5 during the last two outbursts. This is in the same range as the LBV outburst maximum in DDO\,68~\#3, but one order of magnitude higher than in PHL\,293B. The strongest outburst in Mrk 177 during 2001 is characterised by an H$\alpha$ broad-to-narrow ratio that is approximately 10 times higher than those during subsequent explosions, i.e. 
$\sim$ two orders of magnitude higher than in PHL\,293B. On the other hand, the luminosity $L$(H$\alpha$) $\sim$10$^{38}$ erg s$^{-1}$ \citep{Drissen2001} of the LBV-V1 in the low-metallicity \citep[12 + logO/H = 7.89, ][]{Iz97} galaxy Mrk 71 (NGC 2363A) is lower than that of the cLBV in PHL\,293B, with a similar metallicity. \section{CONCLUSIONS}\label{sec:conclusion} Over nearly two decades, we have monitored the time variation of the broad component fluxes and of the broad-to-narrow flux ratios of the strong hydrogen and helium emission lines in two low-metallicity star-forming galaxies (SFGs), DDO\,68 (12 + logO/H = 7.15) and PHL\,293B (12 + logO/H = 7.72). These two SFGs have the particularity of harboring candidate luminous blue variable (cLBV) stars. We have carried out this monitoring by obtaining spectra of these two objects over time with the 3.5m Apache Point Observatory (APO) telescope and the 2$\times$8.4m Large Binocular Telescope (LBT). Our main results are the following: (1) The broad emission with a P Cygni profile of the H$\alpha$ line emitted by the DDO\,68 H~{\sc ii} region ~\#3 shows a marked increase of its luminosity during the period 2005 to 2017, reaching a maximum $L$(H$\alpha$) of $\sim$ 10$^{38}$ erg s$^{-1}$ in 2008 -- 2011, adopting the distance derived from the brightness of the Tip of the Red Giant Branch (TRGB). The absorption feature of the P Cygni-like profile and the broad component rapidly decayed and disappeared in 2018. These properties are characteristic of an eruptive event in a LBV star. The derived H$\alpha$ luminosity and the FWHM $\sim$1000--1200 km s$^{-1}$ of the broad component during the eruption event are in the range of those observed for LBV stars. A terminal velocity $v_{\rm term}\sim$800 km s$^{-1}$ is measured from the absorption profile. It does not change significantly over time. 
These observations are also consistent with the earlier findings of \citet{Pustilnik2017} who described the LBV in DDO\,68~\#3 as going through a fading phase during 2015--2016, and with those of \citet{Annibali2019} who did not find any characteristic sign of a LBV star at the beginning of 2017. Thus, our spectroscopic monitoring indicates that the LBV has passed through the maximum of its eruption activity and is now in a fairly quiet phase. (2) The situation is quite different in the BCD PHL\,293B. Since the discovery of the cLBV in it on the basis of SDSS 2001 observations, the fluxes of the broad component and the broad-to-narrow flux ratios of the hydrogen emission lines in PHL\,293B have remained nearly constant, with small variations, over 16 years, at least until 2015. A decrease of the broad-to-narrow flux ratio in recent years (2017--2020) can partly be attributed to an increase of the narrow component flux relative to that of the broad component. The luminosity of the broad H$\alpha$ component varies from $\sim$2$\times$10$^{38}$ erg s$^{-1}$ to $\sim$10$^{39}$ erg s$^{-1}$, and the FWHM varies in the range $\sim$500--1500 km s$^{-1}$ over the whole period of monitoring of PHL\,293B, from 2001 to 2020. Unusually persistent P Cygni features with broad and very broad components of hydrogen emission lines and blueward absorption in the H~{\sc i} and He~{\sc i} emission lines are clearly visible, even at the very end of 2020, despite the several decreases of the broad-to-narrow flux ratio in the most recent years (2017--2020). A terminal velocity $v_{\rm term}\sim$800 km s$^{-1}$ is measured in PHL\,293B, similar to the one obtained for DDO\,68~\#3, although the latter is 3.7 times more metal deficient than the former. The terminal velocity does not change significantly with time. The near-constancy of the H$\alpha$ flux suggests that the cLBV in PHL\,293B is a long-lived stellar transient of type LBV/SN IIn. 
However, other mechanisms, such as a stationary wind from a young stellar cluster, cannot be ruled out. Further spectroscopic time monitoring of the BCD is needed to narrow down the nature of the phenomenon in PHL\,293B. \section*{Acknowledgements} N.G.G. and Y.I.I. acknowledge support from the National Academy of Sciences of Ukraine (Project No. 0121U109612 ``Dynamics of particles and collective excitations in high energy physics, astrophysics and quantum microsystem''). T.X.T. thanks the hospitality of the Institut d'Astrophysique in Paris where part of this work was carried out. The APO 3.5 m telescope is owned and operated by the Astrophysical Research Consortium (ARC). The LBT is an international collaboration among institutions in the United States, Italy and Germany. LBT Corporation partners are: The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The Ohio State University, and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota and University of Virginia. This paper used data obtained with the MODS spectrographs built with funding from NSF grant AST-9987045 and the NSF Telescope System Instrumentation Program (TSIP), with additional funds from the Ohio Board of Regents and the Ohio State University Office of Research. {\sc iraf} is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. 
SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. \section{DATA AVAILABILITY} The data underlying this article will be shared on reasonable request to the corresponding author. \input{ref1.tex} \bsp \label{lastpage} \end{document}
\section{Introduction} It is thought that stars with an initial mass of more than $\sim 25\,{\rm M}_\odot$ ultimately end their lives as black holes. Though the mass thresholds are dependent on the metallicity of the stars and are currently uncertain, stars with an initial mass in the range of $\sim 25$--$40\,{\rm M}_\odot$ are expected to form black holes following a supernova explosion. Even more massive stars collapse to black holes without a spectacular explosion \citep{Heger_et_al_2003}. These events are thought to correspond to the observed phenomena known as ``failed supernovae'', in which sudden brightening occurs as in the early stage of a supernova, but then does not develop to full supernova luminosity \citep{Adams_et_al_2017}. Hence stellar-mass black holes should be ubiquitous. Population statistics of these stellar-mass black holes in our Galaxy may be estimated with a reasonable initial mass function, a star formation rate, and assumptions about the density structure of the Galaxy. Though the uncertainty is large, it is estimated that more than 100 million black holes reside in our Galaxy \citep{Brown_Bethe_1994, Mashian_Abraham_2017, Breivik_et_al_2017, Lamberts_et_al_2018, Yalinewich_et_al_2018, Yamaguchi_et_al_2018}. Since black holes emit no light, there is no way to see them directly. However, if a black hole is in a close binary system with an ordinary star, its presence can manifest itself in X-ray emission as follows. Sometimes the black hole swallows gas from the companion star. As this gas swirls around the black hole, a huge amount of potential energy is liberated, ultimately emitting X-rays that are detectable from space. However, X-ray binaries are not limited to systems containing a black hole. In general, they are close binaries composed of a compact object, either a neutron star or a black hole, which is accreting mass from a companion donor star. 
Spectroscopic radial velocity measurements of the optically visible donor star allow us to deduce the lower limit on the mass of the invisible compact object in the binary system, even if the latter is not visible itself. According to current theory, a neutron star more massive than $3\,{\rm M}_\odot$ is unstable and will collapse into a black hole. Therefore, a mass function that corresponds to a companion exceeding this critical mass is considered to be reliable evidence for a black hole. This is how stellar-mass black holes have so far been detected and confirmed (see reviews, e.g. \citealt{Cowley_1992, Remillard_McClintock_2006, Casares_Jonker_2014}, and references therein). Importantly, despite the aforementioned estimates of $10^8$ black holes in our Galaxy, fewer than 20 stellar-mass black holes have so far been confirmed \citep{Corral-Santana_et_al_2016, Torres_et_al_2019}. The X-ray emission resulting from accretion may occur only in a close binary system. Hence, it is natural to expect quiescent stellar-mass black holes, without X-ray emission, in binary systems with large separations. \section{Black-hole X-ray binaries} About one third of X-ray binaries are not persistently visible but are detected as transient sources. The Galactic stellar-mass black holes have so far been found, except for Cyg X-1, as transient X-ray sources in binaries with low-mass K or M dwarf counterparts \citep{Tanaka_Shibazaki_1996}. Figure\,\ref{shibahashi:fig01} shows the $\gamma$-ray light curve, obtained by the Burst Alert Telescope (BAT) on the Neil Gehrels Swift Observatory ({\it Swift} satellite) \citep{Swift_2004, BAT_2005}, of such a low-mass X-ray binary, a {\it Ginga} source GS\,2023+338. Though the light curves are different from source to source, these systems are often called X-ray novae due to their X-ray brightening. The outbursts are caused by mass transfer instabilities in an accretion disc, which is fed by a low-mass dwarf star \citep{Mineshige_Wheeler_1989}. 
During the $\gamma$-ray (and X-ray) burst in 2015, the optical counterpart known as V404 Cyg brightened by $\sim 6$\,mag in $V$ band. Such optical outbursts, first reported as Nova Cygni 1938 by A. A. Wachmann, seem to be caused by reprocessing of X-ray photons in the accretion disc. In some other X-ray transients, the sudden optical brightening was also classified as a nova, like Nova Vel 1993 in the case of GRS\,1009$-$45. The naming may be misleading, since the physical mechanism in these systems is different from that of classical novae, which are caused by a thermonuclear flash triggered by the accumulation of accreted gas on the surface of a white dwarf. \begin{figure}[ht!] \centering \includegraphics[width=0.6\textwidth,clip]{shibahashi_5k07_fig1.pdf} \caption{A $\gamma$-ray light curve of GS\,2023+338 obtained by the Burst Alert Telescope (BAT) onboard the {\it Swift} satellite. The time series data are taken from the Swift data archives.} \label{shibahashi:fig01} \end{figure} In the X-ray quiescent phase between outbursts of GS\,2023+338, V404 Cyg is as faint as $V\sim 18\,{\rm mag}$, and the optical source is the low-mass donor star itself \citep{Casares_et_al_2019}; hence radial velocity measurements become possible and the orbital elements are deduced from them. The mass function giving the lower limit on the mass of the invisible compact object is determined by the orbital period $P_{\rm orb}$, the amplitude of radial velocity variation $K_{\rm opt}$ and the eccentricity $e$: \begin{eqnarray} f(M_{\rm opt},M_{\rm X},\sin i) &:=& {{M_{\rm X}^3\sin^3 i}\over{(M_{\rm opt}+M_{\rm X})^2}} \nonumber\\ &=& {{1}\over{2\uppi G}} P_{\rm orb} K_{\rm opt}^3 \left(1-e^2\right)^{3/2}, \label{eq:2.1} \end{eqnarray} where $M_{\rm X}$ and $M_{\rm opt}$ denote the masses of the X-ray emitting, but optically invisible, compact object and of the optical counterpart, respectively, $i$ is the inclination angle, and $G$ is the gravitational constant. 
For V404 Cyg, the orbital period is 6.5\,d, and the amplitude of (almost sinusoidal) radial velocity variation is $\sim 200\,{\rm km}\,{\rm s}^{-1}$, so the mass function is $\sim 6\,{\rm M}_\odot$, giving a companion mass $M_{\rm X}$ much larger than the critical mass for a neutron star. Hence, this binary system contains a stellar-mass black hole \citep{Casares_et_al_1992}. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth,clip]{shibahashi_5k07_fig2.pdf} \caption{Orbital period vs projected semi-major axis of X-ray binary systems. Mass functions are shown in units of ${\rm M}_\odot$.} \label{shibahashi:fig02} \end{figure} Besides GS\,2023+338, sixteen X-ray binaries have been confirmed, from spectroscopic radial velocities of the optical counterparts, to be composed of a late-type low-mass star and a stellar-mass black hole. In addition, the presence of a stellar-mass black hole has been confirmed in Cyg X-1, whose optical counterpart is an early-type star \citep{Webster_Murdin_1972, Bolton_1972}. Its strong stellar winds are accreted by the black hole, producing X-ray emission. The binary properties of all of these Galactic black holes in the literature are listed in Table\,\ref{table:01} (\citealt{Corral-Santana_et_al_2016} and references therein). For each X-ray binary system, the projected semi-major axis of the optical counterpart, in units of light-seconds, is plotted versus the orbital period in Fig.\,\ref{shibahashi:fig02}, where both axes use a logarithmic scale. Here, the semi-major axis is \begin{equation} {{a_{\rm opt}\sin i}\over{c}} = {{1}\over{2\uppi c}}P_{\rm orb} K_{\rm opt}\left(1-e^2\right)^{1/2}, \label{eq:2.2} \end{equation} where $c$ is the speed of light. Systems with the same mass function would form a line with a slope of 2/3. In the same diagram, some representative data for neutron-star X-ray binaries are plotted. 
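As a concrete check, Eq.~(\ref{eq:2.1}) and the V404 Cyg numbers quoted above can be evaluated numerically. The following is a minimal sketch (the function names and the assumed $0.7\,{\rm M}_\odot$ donor mass are our own illustrative choices, not from any published code):

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]

def mass_function(P_orb_days, K_kms, e=0.0):
    """f(M) = P_orb K^3 (1 - e^2)^(3/2) / (2 pi G), returned in solar masses."""
    P = P_orb_days * 86400.0   # orbital period [s]
    K = K_kms * 1.0e3          # RV semi-amplitude [m/s]
    return P * K ** 3 * (1.0 - e ** 2) ** 1.5 / (2.0 * math.pi * G) / M_SUN

def min_companion_mass(f_M, M_opt, sin_i=1.0, hi=1000.0, tol=1e-9):
    """Solve f = M_X^3 sin^3 i / (M_opt + M_X)^2 for M_X by bisection;
    sin_i = 1 gives the minimum companion mass (all masses in M_sun).
    The left-hand side is monotonically increasing in M_X, so bisection works."""
    g = lambda m: m ** 3 * sin_i ** 3 / (M_opt + m) ** 2 - f_M
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# V404 Cyg numbers quoted in the text: P_orb ~ 6.5 d, K_opt ~ 200 km/s, e ~ 0
f = mass_function(6.5, 200.0)       # ~ 5-6 M_sun, as stated in the text
m_min = min_companion_mass(f, 0.7)  # assumed ~0.7 M_sun K-dwarf donor;
                                    # result is well above the 3 M_sun limit
```

Note that the mass function alone is already a strict lower limit on $M_{\rm X}$; folding in a donor-mass estimate, as in `min_companion_mass`, only raises that limit.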
As expected, most of the black-hole binaries (BHB) have a mass function larger than 1\,${\rm M}_\odot$, while the neutron-star binaries (NSB) have substantially smaller values. \begin{table}[t] \caption{Binary properties of the dynamically confirmed black-hole X-ray binaries in our Galaxy.} \begin{center} \begin{tabular}{cccccc} \toprule X-ray source & Opt. & Sp. Type & $P_{\rm orb}$ (h) & $f(M)$ (M$_\odot$) & $M_{\rm BH}$ (M$_\odot$) \\ \midrule Cyg X-1 & HDE226868 & O9Iab & 134.4 & $0.244 \pm 0.006$ & $14.8 \pm 1.0$ \\ GROJ0422$+$32 & N. Per & M4-5V & 5.09 & $1.19 \pm 0.02$ & 2 -- 15 \\ 3A0620$-$003 & N. Mon & K2-7V & 7.75 & $2.79 \pm0.04$ & $6.6 \pm 0.3$ \\ GRS1009$-$45 & N. Vel 93 & K7-M0V & 6.84 & $3.2 \pm 0.1$ & $ > 4.4$ \\ XTEJ1118$+$480 & & K7-M1V & 4.08 & $6.27 \pm 0.04$ & 6.9 -- 8.2 \\ GRS1124$-$684 & N. Mus 91 & K3-5V & 10.38 & $3.02 \pm 0.06$ & 3.8 -- 7.5 \\ GS1354$-$64 & BW Cir & G5III & 61.07 & $5.7 \pm 0.3$ & $> 7.6$ \\ 4U1543$-$475 & IL Lup & A2V & 26.79 & $0.25 \pm 0.01$ & 8.4 -- 10.4 \\ XTEJ1550$-$564 & & K2-4IV & 37.01 & $7.7 \pm 0.4$ & 7.8 -- 15.6 \\ XTEJ1650$-$500 & & K4V & 7.69 & $2.7 \pm 0.6$ & $< 7.3$ \\ GROJ1655$-$40 & N. Sco 94 & F6IV & 62.92 & $2.73 \pm 0.09$ & $6.0 \pm 0.4$ \\ 1HJ1659$-$487 & GX 339-4 & $>$ GIV & 42.14 & $5.8 \pm 0.5$ & $> 6$ \\ H1705$-$250 & N. Oph 77 & K3-M0V & 12.51 & $4.9 \pm 0.1$ & 4.9 -- 7.9 \\ SAXJ1819.3$-$2525 & V4641 Sgr & B9III & 67.62 & $2.7 \pm 0.1$ & $6.4 \pm 0.6$ \\ XTEJ1859$+$226 & V406 Vul & K5V & 6.58 & $4.5 \pm 0.6$ & $> 5.42$ \\ GRS1915$+$105 & & K1-5III & 812 & $7.0 \pm 0.2$ & $12 \pm 2$ \\ GS2000$+$251 & QZ Vul & K3-7V & 8.26 & $5.0 \pm 0.1$ & 5.5 -- 8.8 \\ GS2023$+$338 & V404 Cyg & K3III & 155.31 & $6.08 \pm 0.06$ & $9.0^{+0.2}_{-0.6}$ \\ \bottomrule \end{tabular} \end{center} \label{table:01} \end{table} Since the low-mass star is tidally locked, the spin period is the same as the orbital period. 
Hence the rotation velocity and the orbital velocity differ only by the ratio of the size of the star to the size of the orbit. The star fills its Roche lobe, so the size of the star is the Roche lobe size. The Roche lobe size relative to the binary separation is determined by the mass ratio \citep{Paczynski_1971}. Hence, the ratio of the rotation velocity to the orbital velocity gives the mass ratio. With a reasonable estimate for the mass of the low-mass star from its spectrum, the mass of the invisible object is thus observationally determined. The results taken from the literature are illustrated in Fig.\,\ref{shibahashi:fig03} (\citealt{Corral-Santana_et_al_2016} and references therein). The X-ray objects with an estimated mass in the range of $1.5$--$3\,{\rm M}_\odot$ are neutron stars. The X-ray transient systems with an invisible object with a mass higher than $3\,{\rm M}_\odot$ are considered the best signature of stellar-mass black holes. \begin{figure}[b!] \centering \includegraphics[width=0.6\textwidth,clip]{shibahashi_5k07_fig3.pdf} \caption{Mass of the invisible compact object vs orbital period of X-ray binary systems.} \label{shibahashi:fig03} \end{figure} X-ray binaries containing black holes are mostly distinguishable from those with neutron stars by their X-ray spectrum. The former are characterised by an ultrasoft thermal component ($\lesssim 1.2\,{\rm keV}$), accompanied by a hard tail, seen after the flux reaches its maximum, while the latter show a somewhat harder ($\sim$ a few keV) blackbody component, thought to arise from the neutron-star envelope, together with a softer component that is nevertheless harder than in the black-hole X-ray binaries, most probably from the accretion disc. Ultrasoft X-ray transient sources are regarded as a signature of binaries containing a black hole. 
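The mass-ratio argument above can be sketched numerically. For a tidally locked, Roche-lobe-filling donor, $v_{\rm rot}\sin i/K_{\rm opt}=(1+q)\,R_{\rm L}(q)/a$ with $q=M_{\rm opt}/M_{\rm X}$; the sketch below inverts this relation, using the Eggleton (1983) fit for $R_{\rm L}/a$ in place of the tabulated values of \citet{Paczynski_1971}, and the illustrative input value is our own assumption:

```python
import math

def roche_lobe_fraction(q):
    """Eggleton (1983) fit: R_L/a for a star whose mass ratio to its companion is q."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

def mass_ratio_from_vsini(vrot_over_K, q_lo=1e-4, q_hi=10.0, tol=1e-10):
    """Invert v_rot sin i / K_opt = (1 + q) R_L(q)/a for q = M_opt / M_X.
    The left-hand side grows monotonically with q, so bisection suffices."""
    g = lambda q: (1.0 + q) * roche_lobe_fraction(q) - vrot_over_K
    while q_hi - q_lo > tol:
        q_mid = 0.5 * (q_lo + q_hi)
        if g(q_mid) > 0.0:
            q_hi = q_mid
        else:
            q_lo = q_mid
    return 0.5 * (q_lo + q_hi)

# Illustrative: a measured v_rot sin i / K_opt of ~0.19 (roughly the regime of
# black-hole transients with K/M dwarf donors)
q = mass_ratio_from_vsini(0.19)  # q ~ 0.06: the unseen object is ~16x the donor mass
```

Combined with a spectroscopic estimate of the donor mass, such a $q$ immediately yields the mass of the invisible companion.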
There are $\sim 60$ such X-ray sources suspected to be black hole binaries from their spectra, classified as ``black-hole candidates'', but not yet dynamically confirmed and therefore less secure (\citealt{Corral-Santana_et_al_2016} and references therein). \section{Search for quiet black holes in single-lined spectroscopic binaries} Quiescent black holes in wide binary systems could be found in radial velocity surveys by searching for single-lined systems with high mass functions. Such an attempt was first proposed by \citet{Guseinov_Zeldovich_1966}, who selected seven plausible candidates in an attempt to detect ``collapsed stars'', as they were then called, before \citet{Wheeler_1968} popularized the terminology ``black hole''. Their attempt was later followed by \citet{Trimble_Thorne_1969} (see also \citealt{Trimble_Thorne_2018}), who used the sixth catalogue of the orbital elements of spectroscopic binary systems \citep{SB6_1967}. For each system, the mass of the primary star was estimated from its spectral type, and an approximate lower limit to the mass of the unseen companion was then calculated from the observed mass function. \citet{Trimble_Thorne_1969} listed 50 systems having an unseen secondary star more massive than the Chandrasekhar mass limit for a white dwarf. \begin{figure}[ht!] \centering \includegraphics[width=0.6\textwidth,clip]{shibahashi_5k07_fig4.pdf} \caption{Orbital period vs projected semi-major axis of binary systems listed in the ninth catalogue of spectroscopic binary systems (SB9). Mass functions are shown in units of ${\rm M}_\odot$.} \label{shibahashi:fig04} \end{figure} Figure\,\ref{shibahashi:fig04} shows the distribution of spectroscopic binaries listed in the latest ninth catalogue of spectroscopic binary systems \citep{SB9_2004} in the orbital period versus the semi-major axis plane. More than fifty systems are found to have a mass function larger than 1\,${\rm M}_\odot$. 
However, in most cases, careful follow-up observations unveiled the presence of a fainter secondary star of larger mass (e.g. \citealt{Stickland_1997}). A plausible scenario is that the primary was originally more massive than the secondary and it has evolved faster than the secondary. A substantial amount of mass loss in the red giant phase made the primary significantly less massive than the secondary, which is still in the main-sequence phase and relatively much fainter than the evolved primary. Generally, a larger and more homogeneous sample of main-sequence stars is favourable; however, such a sample has been difficult to procure in practice. Spectroscopic data have to be collected as time series covering the orbital phase of each binary system, often obtained one by one from ground-based observing sites with a large amount of observing time, and on large telescopes for the fainter stars. This is the bottleneck to binary studies. Recently, two months after this conference, \citet{Thompson_2019} reported their finding of a black hole-giant star binary system, 2MASS J05215658+4359220, in multi-epoch radial velocity measurements acquired by the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey \citep{APOGEE_2017}. The system was found to have a nearly circular orbit with the period $P_{\rm orb}=83.2\pm 0.06\,{\rm d}$ and the semi-amplitude $K_{\rm opt}\simeq 44.6\pm 0.1\,{\rm km\,s}^{-1}$. The mass function is then $0.766\pm 0.006\,{\rm M}_\odot$. The photometric data show periodic variation with the same period as the radial velocity variation, implying spin-orbit synchronisation of the star. A combination of the measured projected spin velocity and the period leads to $R_{\rm opt}\sin i \simeq 23\pm 1\,{\rm R}_\odot$, and the mass of the giant star is estimated to be $M_{\rm opt}\sin^2 i \simeq 4.4^{+2.2}_{-1.5}\,{\rm M}_\odot$ from this radius and the spectroscopically estimated $\log g$. 
An independent combination of the apparent luminosity, the spectroscopically determined effective temperature and {\it Gaia} distance measurements gives $R_{\rm opt} = 30^{+9}_{-6}\,{\rm R}_\odot$, consistent with the value based on the spin velocity. As a consequence, from the aforementioned mass function, the minimum mass of the unseen companion is estimated to be $\sim 2.9\,{\rm M}_\odot$, marginally exceeding the critical mass for a neutron star. This is likely the first success in finding an X-ray-quiet black hole lurking in a binary system, though within the uncertainties the companion could be a neutron star instead. \section{Search for quiet black holes based on phase modulation of pulsating stars in binaries} Starting with the Canadian {\it MOST} mission \citep{MOST_2003}, through the European Space Agency (ESA) mission {\it CoRoT} \citep{CoRoT_2009}, the National Aeronautics and Space Administration (NASA) space missions {\it Kepler} \citep{Kepler_2010} and {\it TESS} \citep{TESS_2015} and the international {\it BRITE}-Constellation \citep{BRITE_2014}, space-based photometry with extremely high precision over long time spans has led to a drastic change of the situation mentioned in the previous section, and has revolutionized our view of variability of stars. Some variability has been detected in almost all stars, and tens of thousands of pulsating stars were newly discovered, along with hundreds of eclipsing binaries \citep{Kirk_et_al_2016}. {\it Kepler}'s four-year simultaneous monitoring of nearly 200\,000 stars also opened a new window to a statistical study of binary stars and their orbits \citep{Murphy_et_al_2018, Murphy_2018, Liege_2019}. This pedigree will be further taken over by the ESA mission {\it PLATO}\footnote{http://sci.esa.int/plato/59252-plato-definition-study-report-red-book/}. Binary orbital motion causes a periodic variation in the path length of light travelling to us from a star. 
Hence, if the star is pulsating, the time delay manifests itself as a periodically varying phase shift, given by the product of the delay and the intrinsic angular frequency \citep{FM_2012, FM2_2015}. The light-arrival time delay is hence measurable by dividing the observed phase variation by the angular frequency \citep{PM_2014}. In the past, the light-time effect on the observed times of maxima in luminosity, which vary over the orbit, was utilized to find unseen binary companions (the so-called $O-C$ method; see, e.g. \citealt{Sterken_2005} and other papers in those proceedings). Such a pulse timing method works well in the case of stars pulsating with a single mode, since the intensity maxima are easy to track and any deviations from precise periodicity are fairly easy to detect. However, when the pulsating star is multiperiodic, as in the case of most objects observed from space, the situation is much more complex and measurement of the time delay by careful analysis of phase modulation is more suitable. Figure\,\ref{shibahashi:fig05} shows a time delay curve calculated with {\it Kepler} data. The time delay curve immediately provides us with qualitative information about the orbit \citep{Murphy_Shibahashi_2015, PM4_2016}. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{shibahashi_5k07_fig5.pdf} \caption{An example of a time delay curve (KIC\,9651065) derived using nine different pulsation modes. The weighted average is shown as filled black squares. Adopted from \citet{Murphy_Shibahashi_2015}. \label{shibahashi:fig05}} \end{figure} \citet{Murphy_et_al_2018} applied this technique to all targets in the original {\it Kepler} field with effective temperatures ranging from 6600 to 10000\,K and discovered 341 new binary systems containing $\delta$ Scuti stars (main-sequence A stars pulsating in pressure modes). 
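The conversion from phase modulation to time delay described above amounts to one division per mode, followed by an average over modes as in Fig.~\ref{shibahashi:fig05}. A minimal sketch (our own illustration, not the published pipeline):

```python
import numpy as np

def time_delays(dphi, nu):
    """Light-arrival time delays tau = dphi / (2 pi nu), one estimate per mode.
    dphi: observed phase shifts [rad]; nu: pulsation mode frequencies [Hz]."""
    return np.asarray(dphi) / (2.0 * np.pi * np.asarray(nu))

def weighted_mean_delay(taus, sigmas):
    """Inverse-variance weighted average of the per-mode delay estimates
    (cf. the filled black squares in Fig. 5)."""
    w = 1.0 / np.asarray(sigmas) ** 2
    return float(np.sum(w * np.asarray(taus)) / np.sum(w))

# Self-consistency check: inject a 100 s delay into two modes and recover it
nu = np.array([15.0, 23.0]) / 86400.0  # typical delta Sct frequencies (c/d -> Hz)
dphi = 2.0 * np.pi * nu * 100.0        # phase shifts produced by a 100 s delay
taus = time_delays(dphi, nu)           # each mode returns 100 s
```

Each pulsation mode yields an independent delay estimate, which is why multiperiodicity, a complication for the classical $O-C$ method, becomes an asset here.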
Importantly, many of these binaries would not have been detectable by other techniques, because A stars are often rapid rotators \citep{royeretal2007}, making spectroscopic radial velocities difficult to obtain. Using space-based photometry to measure the phase modulation of pulsating stars is a very efficient way to create a homogeneous sample of binary systems. Indeed, these asteroseismically detected binaries tripled the number of intermediate-mass binaries with full orbital solutions, and importantly, provided a homogeneous dataset for statistical analysis. The newly detected binaries are plotted in the $(P_{\rm orb}$-$a_1\sin i/c)$-diagram shown as Fig.\,\ref{shibahashi:fig06}, together with, for comparison, the black-hole X-ray binaries listed in Table\,\ref{table:01} and 162 spectroscopic binaries listed in the ninth catalogue of spectroscopic binaries with primary stars of similar spectral type (A0--F5). Only systems with full orbital solutions with uncertainties were selected. Some remarks should be given here. Binaries with orbital periods shorter than 20\,d were not found by the asteroseismic method. This is because the light curve is divided into short segments, such as 10\,d, in order to measure the phases of pulsation modes of close frequencies. It is then unfavourable to deal with binary stars with orbital periods shorter than the segment size dividing the observational time span, which is typically $\sim$10\,d. With short-period binaries also having smaller orbits (hence smaller light travel times), the binaries with periods in the range of 20--100\,d are difficult to detect by the asteroseismic method and the sample in this period range must be considerably incomplete. It is also in this period range that binaries are most likely to eclipse, and eclipsing binaries were removed from the asteroseismic sample so as to avoid biasing the detection \citep{Murphy_et_al_2018}. 
\begin{figure}[t] \centering \includegraphics[width=0.6\textwidth,clip]{shibahashi_5k07_fig6.pdf} \caption{Orbital period vs projected semi-major axis of newly discovered $\delta$ Sct binary systems, together with binaries with an A-type star as the primary catalogued in SB9. Mass functions are shown in units of ${\rm M}_\odot$.} \label{shibahashi:fig06} \end{figure} The mass range of $\delta$ Scuti type stars used by \citet{Murphy_et_al_2018} is 1.8\,$\pm$\,0.3\,M$_\odot$. Hence the systems having a mass function larger than $\sim 1\,{\rm M}_\odot$ are thought to have a binary counterpart more massive than the neutron-star mass threshold, $\sim 3\,{\rm M}_\odot$, and they may be regarded as systems containing a stellar-mass black hole. As seen in Fig.\,\ref{shibahashi:fig06}, several systems with large mass functions have been found. Those with a mass function larger than $1\,{\rm M}_\odot$ are listed in Table\,\ref{table:02}. However, the seemingly massive secondary could be double or multiple itself, rendering each component less massive and fainter than expected. Indeed, some systems with large mass functions were eventually found to be triples via follow-up with ground-based radial velocity observations made by Holger Lehmann, Simon Murphy and their many collaborators (in preparation). In some other cases with a highly eccentric orbit, the radial velocity observations clarified that the amplitude of phase modulation was simply overestimated (see Fig.\,\ref{shibahashi:fig07}). Nevertheless, there still remain a few systems that are found to be single-lined spectroscopic binaries with large mass functions. They could be systems with a massive white dwarf or a neutron star. Further detailed observations are required before a definite conclusion can be drawn. \renewcommand{\arraystretch}{1.2} \begin{table}[b] \caption{Binary properties of $\delta$\,Sct stars in the {\it Kepler} field having a mass function larger than 1\,${\rm M}_\odot$. 
} \begin{center} \begin{tabular}{cccccc} \toprule KIC & $P_{\rm orb}$ (d) & $a_1\sin i/c$ (s) & $e$ & $\varpi_1$ (rad) & $f(M)$ (M$_\odot$) \\ \midrule 5219533 & $1571.0^{+15.0}_{- 13.0}$ & $1367.0^{+11.0}_{-10.0}$ & $0.5767^{+0.0081}_{-0.0081}$ & $0.478^{+0.016}_{-0.013}$ & $1.111^{+0.032}_{-0.032}$ \\ 8459354 & $53.559^{+0.020}_{-0.022}$ & $276.0^{+57.0}_{-76.0}$ & $0.9809^{+0.0062}_{-0.0180}$ & $6.085^{+0.034}_{-0.078}$ & $7.9^{+4.9}_{-6.5}$ \\ \bottomrule \end{tabular} \end{center} \label{table:02} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.75\textwidth,clip]{shibahashi_5k07_fig7.pdf} \caption{An example of systems with a highly eccentric orbit, about which the radial velocity observations clarified that the amplitude of phase modulation was simply overestimated.} \label{shibahashi:fig07} \end{figure} \section{Self-lensing black-hole binaries} In the case of an edge-on black-hole binary system, whose orbital axis is almost perpendicular to the line-of-sight, evidence for a lurking black hole would be available in addition to its large mass function. At superior conjunction when the black hole transits in front of the optical companion, contrary to the case of an ordinary eclipsing binary, a luminosity brightening is induced by the gravitational microlensing \citep{Einstein_1936, Leibovitz_Hube_1971, Maeder_1973}. In a general case, the characteristic size of this lens, called the Einstein radius $R_{\rm E}$, is given by \begin{equation} R_{\rm E} := \sqrt{ {{4GM_{\rm BH}}\over{c^2}}{{d_{\rm S}}\over{d_{\rm L}}} \left(d_{\rm S}-d_{\rm L}\right) } \label{eq:5.1} \end{equation} with the notation described in \citet{Gould_2000}, where $G$ is the gravitational constant, $M_{\rm BH}$ is the mass of the black hole, and $d_{\rm S}$ and $d_{\rm L}$ denote the distance to the source and that to the lens, respectively. 
For the present case, $d_{\rm S} = d_{\rm L}+a(1-e^2)/(1-e\sin\varpi_{\rm opt})$, where $a:=a_{\rm opt}+a_{\rm BH}$ is the sum of the semi-major axes of the two components and $\varpi_{\rm opt}$ is the argument of the periapsis of the optical companion. Hence the Einstein radius in units of light-seconds is \begin{equation} {{R_{\rm E}}\over{c}}=4.44\times 10^{-2} \left( {{a/c}\over{100\,{\rm s}}} \right)^{1/2} \left({{M_{\rm BH}}\over{{\rm M}_\odot}}\right)^{1/2} \left( {{1-e^2}\over{1-e\sin\varpi_{\rm opt}}}\right)^{1/2} \,{\rm s} . \label{eq:5.2} \end{equation} This is as small as a few percent of one solar radius, so lensing occurs only in nearly edge-on geometries: the inclination angle, $i$, should be in the range of $\uppi/2 - \left(R_{\rm opt}/a\right) \leq i \leq \uppi/2 + \left(R_{\rm opt}/a\right)$. The tangential velocity of the black hole relative to the optical component at superior conjunction is \begin{equation} {{v_{\rm t}}\over{c}} = {{2\uppi (a/c)}\over{P_{\rm orb}}} {{(1-e\sin\varpi_{\rm opt})}\over{\sqrt{1-e^2}}} . \label{eq:5.3} \end{equation} The duration of the transit is then given by \begin{eqnarray} t_{\rm trans} &:=& {{2\sqrt{R_{\rm opt}^2-a^2\cos^2 i}}\over{v_{\rm t}}} \nonumber\\ &=& {{P_{\rm orb}}\over{\uppi}} \left\{ \left( {{R_{\rm opt}}\over{a}} \right)^2 - \cos^2 i \right\}^{1/2} { { \sqrt{1-e^2} }\over{ 1-e\sin\varpi_{\rm opt}} } . \label{eq:5.4} \end{eqnarray} Luminosity brightening repeats every superior conjunction, i.e. once per orbital period. Five similar self-lensing systems with white dwarfs have been found so far \citep{Kruse_Agol_2014, Kawahara_et_al_2018, Masuda_et_al_2019}, so the prospects of detecting such a rare but physically important event are not hopeless \citep{Masuda_Hotokezawa_2019}.
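Equations (\ref{eq:5.2}) and (\ref{eq:5.4}) are straightforward to evaluate numerically. The following sketch (Python; the function names are ours) returns the Einstein radius in light-seconds and the transit duration in the same unit as $P_{\rm orb}$:

```python
import math

def einstein_radius_lt(a_over_c_sec, m_bh_msun, e=0.0, omega_opt=0.0):
    """R_E/c in light-seconds, following Eq. (5.2)."""
    return (4.44e-2 * math.sqrt(a_over_c_sec / 100.0) * math.sqrt(m_bh_msun)
            * math.sqrt((1.0 - e**2) / (1.0 - e * math.sin(omega_opt))))

def transit_duration(p_orb, r_opt_over_a, incl, e=0.0, omega_opt=0.0):
    """Transit duration in the units of p_orb, following Eq. (5.4); incl in rad."""
    chord = r_opt_over_a**2 - math.cos(incl)**2
    if chord <= 0.0:
        return 0.0                      # no transit at this inclination
    return (p_orb / math.pi * math.sqrt(chord)
            * math.sqrt(1.0 - e**2) / (1.0 - e * math.sin(omega_opt)))

# Circular orbit with a/c = 500 s and a 5 M_sun black hole:
print(round(einstein_radius_lt(500.0, 5.0), 3))   # 0.222 light-seconds
```

For an exactly edge-on circular orbit the duration reduces to $(P_{\rm orb}/\uppi)(R_{\rm opt}/a)$, as expected from Eq.~(\ref{eq:5.4}).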
The brightness enhancement of the microlens during the transit is $A (R_{\rm E}/R_{\rm opt})^2$, where we estimate $A$ to be 1.27 by integrating light emitted from points behind the Einstein radius using equation 5 of \citet{Paczynski_1986}. Therefore, the luminosity enhancement during the black-hole transit is of the order of a few percent in the case of $R_{\rm E}/R_{\rm opt}=1/10$. The transit light curves (magnification versus time) of microlensing events resemble inverted planetary transits. \section{Search for quiet black holes by astrometry} Another promising method of finding quiet black-hole binaries is with Gaia astrometry, which is extremely sensitive to non-linear proper motions of astrometric binaries with periods in the range 0.03 to 30 years. Target stars are not restricted to pulsating stars. Gaia is expected to detect ``approximately 60 per cent of the estimated 10 million binaries down to 20 mag closer than 250\,pc to the Sun''\footnote{https://sci.esa.int/web/gaia/-/31441-binary-stars}. As illustrated by the simulation in Fig.\,\ref{shibahashi:fig09}, the orbital motion in each astrometric binary system can be seen directly, hence the orbital elements, and then the mass functions, can be determined. Gaia simultaneously carries out spectroscopic observations, though not for all stars. If the target has spectroscopy, then the mass of the star is independently estimated once its temperature is spectroscopically determined. If one component is unseen, its mass is deduced from the mass function and the spectroscopic mass of the optical counterpart. The unseen single companion should be either a white dwarf, a neutron star, or a black hole, depending on its mass. \begin{figure}[h!] \centering \includegraphics[angle=0,width=0.65\textwidth,clip]{shibahashi_5k07_fig9.pdf} \caption{Illustration of the projected motion of a single star and of a star in a binary system with an unseen companion.
} \label{shibahashi:fig09} \end{figure} \section{Summary} In summary, \begin{itemize} \item X-ray quiet BHBs are expected to be present. \item Searching for X-ray quiet BHBs is challenging, but worth pursuing. \item Space-based asteroseismology and the measurements of light-arrival time delays opened a new window to binary statistics. \item Binary systems with unseen companions and high mass functions could result from the secondary being a black hole. \item Self-lensing BHBs are expected to show periodic brightening. \item Space-based astrometry also opens another window to binary statistics. \end{itemize} \section*{Acknowledgements} \begin{acknowledgements} The authors thank the organizing committee of the conference, led by Werner Weiss, for their excellent organization and good atmosphere of this fruitful meeting. This work was partly supported by the JSPS Grant-in-Aid for Scientific Research (16K05288). \end{acknowledgements} \bibliographystyle{aa}
\section{INTRODUCTION} The subject of Egyptian fractions (fractions with numerator equal to one and a positive integer as denominator) has intrigued many people for more than three millennia and continues to interest mathematicians to this day. For instance, the table of decompositions of fractions $\frac{2}{2k+1}$ as a sum of two, three, or four unit fractions found in the Rhind papyrus has been a matter of wonder and has stirred controversy among historians for some time. Recently, in \cite{aaa}, the author proposes a definite answer and a full explanation of the way the decompositions were produced. Our interest in this subject started with finding decompositions with only a few unit fractions. It is known that every positive rational number can be written as a finite sum of different unit fractions. One can verify this using the so-called Fibonacci method and the formula $\frac{1}{n}=\frac{1}{n+1}+\frac{1}{n(n+1)}$, $n\in \mathbb N$. For more than three fourths of the natural numbers $n$, $\frac{4}{n}$ can be written as a sum of only two unit fractions: the even numbers, and the odd numbers $n$ of the form $n=4k-1$, via the identities $\frac{2}{2k-1}=\frac{1}{k}+\frac{1}{k(2k-1)}$, and $\frac{4}{4k-1}=\frac{1}{k}+\frac{1}{k(4k-1)}$, $k\in \mathbb N$. It is clear that if we want to write $$\frac{4}{n}=\frac{1}{a}+\frac{1}{b}+\frac{1}{c},\ \ a,b,c\in\mathbb N,$$ \noindent we can just look at primes $n$. However, just as a curiosity, for $n=2009=7^2(41)$ (a multiple of four plus one) one still needs only two unit fractions, and there are only three such representations: $$\frac{4}{2009}=\frac{1}{504}+ \frac{1}{144648}= \frac{1}{574}+\frac{1}{4018}=\frac{1}{588} +\frac{1}{3444}.$$ This follows from the following well-known characterization theorem (see \cite{ec} and \cite{gm}). \begin{theorem}\label{twofractions} Let $m$ and $n$ be two coprime positive integers.
Then $$\frac{m}{n}=\frac{1}{a}+\frac{1}{b}$$ for some positive integers $a$ and $b$, if and only if there exist positive integers $x$ and $y$ such that (i) $xy$ divides $n$, and (ii) $x+y\equiv 0$ (mod $m$). \end{theorem} \noindent In what follows we will refer to the equality \begin{equation}\label{esconj} \frac{4}{n}=\frac{1}{a}+\frac{1}{b}+\frac{1}{c} \end{equation} \noindent and say that {\it $n$ has a representation} as in (\ref{esconj}), or that (\ref{esconj}) {\it has a solution}, if there exist natural numbers $a$, $b$, $c$ ($a\le b\le c$), not necessarily distinct, satisfying (\ref{esconj}). Since $\frac{4}{n}> \frac{1}{a}$, the smallest possible value of $a$ is $\lceil n/4\rceil$. The biggest possible value of $a$ is $\lfloor \frac{3n}{4}\rfloor$; for instance, $\frac{4}{9}=\frac{1}{6}+\frac{1}{6}+\frac{1}{9}$. If $n=4k+1$ ($k\in\mathbb N$) then we can first try the smallest value for $a$, i.e. $a=k+1$: \begin{equation}\label{firstequation} \frac{4}{4k+1}=\frac{1}{k+1}+\frac{3}{(k+1)(4k+1)}. \end{equation} Now, if the second term on the right-hand side of (\ref{firstequation}) could be written as a sum of two unit fractions we would be done. This is not quite how things are in general, but if we analyze the cases $k=3l+r$ with $r\in \{0,1,2\}$, $l\ge 0$, $l\in \mathbb Z$, we see that there is only one excepted case in which we get stuck: $k=3l$. This is because Theorem~\ref{twofractions} can be used in one situation: $k=3l+1$ implies $1+(3l+1+1)\equiv 0$ (mod 3). On the other hand, if $k=3l+2$ we get $(k+1)=(3l+2)+1=3(l+1)\ge 3$ and the second term is already a unit fraction. In order to simplify the statements of some of the facts in what follows we will introduce a notation.
For every $i\in \mathbb N$ let $\mathcal C_i$ be defined by $${\mathcal C}_i:=\{n | \ (\ref{esconj})\ has\ a\ solution\ with\ a\le \frac{n+4i-1}{4}\}.$$ It is clear that ${\mathcal C}_i\subset {\mathcal C}_{i+1}$, and then the {\it Erd\H{o}s}--Straus conjecture is equivalent to $\underset{i\in \mathbb N}{\bigcup} {\mathcal C}_i=\mathbb N$. Thus we obtain a pretty simple fact about the Diophantine equation (\ref{esconj}): \begin{proposition} \label{fprop} The equation (\ref{esconj}) has at least one solution for every prime number $n$, except possibly for those primes of the form $n\equiv 1$ (mod 12). Moreover, $$\mathbb N\setminus {\mathcal C}_1\subset \{n| n\equiv 1\ (mod\ 12)\}.$$ \end{proposition} We observe that $12=2^2(3)$, a product of a combination of the first two primes. The first prime that is excluded in this proposition is $13$. The equality (\ref{firstequation}) becomes \begin{equation}\label{firstequationprim} \frac{4}{12l+1}=\frac{1}{3l+1}+\frac{3}{(3l+1)(12l+1)}. \end{equation} At this point we can do another analysis modulo any other number, as long as we can reduce the number of possible situations for which we cannot say anything about the decomposition as in (\ref{esconj}). It is easy to see that $3l+1$ is even if $l$ is odd, and then Theorem~\ref{twofractions} can be used easily with $x=1$ and $y=2$. This means that we have in fact an improvement of Proposition~\ref{fprop}: \begin{proposition} \label{sprop} The equation (\ref{esconj}) has at least one solution for every prime number $n$, except possibly for those primes of the form $n\equiv 1$ (mod 24). In fact, $$\mathbb N\setminus {\mathcal C}_1\subset \{n| n\equiv 1\ (mod\ 24)\}.$$ \end{proposition} Let us observe that $24+1=5^2$ and $48+1=7^2$, which pushes the first prime excluded by this last result to $73$. This is quite a bit of progress if we think in terms of the primes in between that have been taken care of, almost by miracle.
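The three representations of $\frac{4}{2009}$ quoted in the introduction can be recovered by a direct search over the smaller denominator; the search bounds follow from $\frac{1}{a}<\frac{m}{n}\le\frac{2}{a}$ when $a\le b$. (A Python sketch; the function name is ours.)

```python
from fractions import Fraction

def two_unit_fractions(m, n):
    """All pairs (a, b) with a <= b and m/n = 1/a + 1/b, by direct search.
    Since 1/a < m/n <= 2/a, the value a runs over n/m < a <= 2n/m."""
    target = Fraction(m, n)
    sols = []
    for a in range(n // m + 1, 2 * n // m + 1):
        r = target - Fraction(1, a)   # Fraction reduces automatically
        if r > 0 and r.numerator == 1:
            sols.append((a, r.denominator))
    return sols

print(two_unit_fractions(4, 2009))
# [(504, 144648), (574, 4018), (588, 3444)]
```

The same search applied to $\frac{2}{2k-1}$ recovers the identity $\frac{2}{2k-1}=\frac{1}{k}+\frac{1}{k(2k-1)}$ used above.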
If $n=24k+1$, then the smallest possible value for $a$ is $6k+1$, and at this point let us try the possibility that $a=6k+2=\frac{n+7}{4}$, \begin{equation}\label{equation2} \frac{4}{24k+1}=\frac{1}{6k+2}+\frac{7}{2(3k+1)(24k+1)},\ \ k\in \mathbb N. \end{equation} On the right-hand side of (\ref{equation2}), the second term has a bigger numerator, but the denominator now has at least three factors. This increases the chances that Theorem~\ref{twofractions} can be applied to turn that term into a sum of only two unit fractions. Indeed, for $k=7l+r$, we get that $n=24k+1\equiv 0$ (mod 7) if $r=2$, $2(3k+1)+1\equiv 0$ (mod 7) if $r=3$, $n+1=2(12k+1)\equiv 0$ (mod 7) if $r=4$, and $n+2=24k+3\equiv 0$ (mod 7) if $r=6$. Calculating the residues modulo 168 in the cases $r\in \{0,1,5\}$ we obtain: \begin{proposition}\label{aftermod7} The equation (\ref{esconj}) has at least one solution for every prime number $n$, except possibly for those primes of the form $n\equiv r$ (mod 168), with $r\in \{ 1, 5^2, 11^2\}$. More precisely, $$\mathbb N\setminus {\mathcal C}_2\subset \{n| n\equiv 1, 5^2, 11^2\ (mod\ 168)\}.$$ \end{proposition} Let us observe that $168=2^3(3)(7)$, $168+1=13^2$, and the excepted residues modulo 168 are all perfect squares. Because of this, somehow, the first prime that is excluded by this result is $193=168+25$. Again, we have an even higher jump in the number of primes that have been taken care of. As before, there is an advantage in continuing to use (\ref{equation2}) and doing an analysis on $k$ modulo $5$. For $k=5l+r$, we have $n\equiv 0$ (mod 5) if $r=1$, $3k+1\equiv 0$ (mod 5) if $r=3$, and $6k+1\equiv 0$ (mod 5) if $r=4$, which puts $n\in {\mathcal C}_2$ again. Therefore, we have for $r\in \{0,2\}$ the following excepted residues modulo 120.
\begin{proposition}\label{aftermod5} The equation (\ref{esconj}) has at least one solution for every prime number $n$, except possibly for those primes of the form $n\equiv r$ (mod 120), with $r\in \{ 1, 7^2\}$. More precisely, $$\mathbb N\setminus {\mathcal C}_2\subset \{n| n\equiv 1, 7^2\ (mod\ 120)\}.$$ \end{proposition} One can put these two propositions together and get Mordell's Theorem. \begin{theorem}\label{mordel} {\bf (Mordell \cite{gm})} The equation (\ref{esconj}) has at least one solution for every prime number $n$, except possibly for those primes of the form $n=840k+r$, where $r\in \{1, 11^2, 13^2, 17^2, 19^2, \ 23^2\}$, $k\in \mathbb Z$, $k\ge 0$. Moreover, we have $$\mathbb N\setminus {\mathcal C}_2\subset \{n| n\equiv 1, 11^2, 13^2, 17^2, 19^2, 23^2\ (mod\ 840)\}.$$ \end{theorem} \hspace{\parindent}{P{\scriptsize ROOF}}.\ By Proposition~\ref{aftermod7}, $n=168k+1$ may be an exception, but if $k=5l+r$, with $r\in \{0,1,2,3,4\}$, we have $n\equiv 1$ or $7^2$ (mod 120) only for $r\in\{0,1\}$. These two cases are the exceptions for both propositions and they correspond to $n\equiv 1$ or $13^2$ (mod 840). All other excepted cases are obtained the same way.$\hfill\bsq$\par Let us observe that $840=2^3(3)(5)(7)$ and the excepted residues modulo 840 are all perfect squares. Not only that, but $840+1=29^2$, $840+11^2=31^2$, and $1009=840+13^2$ is the first prime that is excluded by this important theorem. While 193 is the 44th prime, 1009 is the 169th prime. It is natural to ask if a result of this type can be obtained for an even bigger modulus. We will introduce here the next natural step in this analysis, which is to allow $a$ to take the next possible value, i.e.
$\frac{n+11}{4}$, and we will be using the identities \begin{equation}\label{equations120} \frac{4}{120k+1}=\frac{1}{30k+3}+\frac{11}{3(10k+1)(120k+1)},\ \ k\in \mathbb N, \end{equation} \begin{equation}\label{equations120k49} \frac{4}{120k+49}=\frac{1}{30k+15}+\frac{11}{3(5)(2k+1)(120k+49)},\ \ k\in \mathbb N. \end{equation} \section{The analysis modulo 11} According to Proposition~\ref{aftermod5} we may continue to look only at the two cases modulo 120 and use only the two formulae above. If we continue the analysis modulo 11 in these two cases we obtain the following theorem. \begin{theorem}\label{1320} The equation (\ref{esconj}) has at least one solution for every prime number $n$, except possibly for those primes of the form $n=1320k+r$, where $$r\in \{1, 7^2, 13^2, 17^2, 19^2, \ 23^2, 29^2, 31^2, 7(103), 1201, 7(127), 23(47)\}:=E, \ k\in \mathbb Z, \ k\ge 0.$$ Moreover, we have $$\mathbb N\setminus {\mathcal C}_3\subset \{n| n\equiv r\ (mod\ 1320),\ r\in E\}.$$ \end{theorem} \hspace{\parindent}{P{\scriptsize ROOF}}.\ If $n=120k+1$ and $k=11l+1$, we see that $n\equiv 0$ (mod 11) and so (\ref{equations120}) gives the desired decomposition as in (\ref{esconj}) right away. If $k=11l+r$ and $r\in \{2,4,5\}$, Theorem~\ref{twofractions} can be employed to split the second term in (\ref{equations120}) as a sum of two unit fractions. For instance, for $r=5$ we have $1+3(10(11l+5)+1)\equiv 0$ (mod 11), so one can take $m=11$, $x=1$ and $y=30k+3$ in Theorem~\ref{twofractions}. Hence we have seven exceptions in this situation: \begin{itemize} \item $r=0$ corresponds to $n\equiv 1$ (mod 1320), \item $r=3$ gives $n\equiv 19^2$ (mod 1320), \item $r=6$ corresponds to $n \equiv 7(103)$ (mod 1320), \item $r=7$ gives $n\equiv 29^2$ (mod 1320), \item $r=8$ corresponds to $n \equiv 31^2$ (mod 1320), \item $r=9$ gives $n\equiv 23(47)$ (mod 1320), and finally \item $r=10$ corresponds to $n \equiv 1201$ (mod 1320).
\end{itemize} If $n=120k+49$ and $k=11l+r$, then for $r=5$ we have $n\equiv 0$ (mod 11). If $r\in \{3,6,8,9,10\}$ we can use Theorem~\ref{twofractions}. The exceptions then are: \begin{itemize} \item $r=0$ corresponds to $n\equiv 7^2$ (mod 1320), \item $r=1$ gives $n\equiv 13^2$ (mod 1320), \item $r=2$ corresponds to $n \equiv 17^2$ (mod 1320), \item $r=4$ gives $n\equiv 23^2$ (mod 1320), \item $r=7$ corresponds to $n \equiv 7(127)$ (mod 1320). $\hfill\bsq$\par \end{itemize} Putting Proposition~\ref{aftermod7} and Theorem~\ref{1320} together we get the following 36 exceptions: $$\begin{array}{cccccc} 1^2 & 13^2& 17^2 & 19^2 & 23^2 & 29^2 \\ 31^2 & \underline{1201} & 37^2 & 41^2 & 43^2 & 13(157)\\ 47^2 &\underline{2521} & 19(139) & \underline{2689} & 53^2 & \underline{3361}\\ 59^2& \underline{3529}& 61^2 & 29(149) & 67^2& 71^2\\ 13(397) & 73^2 & \underline{5569} & 17(353) & 31(199) & 79^2 \\ 83^2& \underline{7561} & \underline{7681} & 89^2 & \underline{8089} & \underline{8761} \end{array}$$ The residue 1201, the first prime in this list, is not really an exception because of the following identity: \begin{equation}\label{1201} \frac{4}{9240k+1201}=\frac{1}{2310k+308}+\frac{1}{5(9240k+1201)(15k+2)}+\frac{1}{770(9240k+1201)(15k+2)}, \end{equation} \noindent which shows that $9240k+1201\in {\mathcal C}_8$ for all $k\in \mathbb Z$, $k\ge 0$. We checked for similar identities and found just one more, for the exception $17(353)=6001$: \begin{equation}\label{6001} \begin{array}{l} \displaystyle \frac{4}{9240k+6001}= \frac{1}{2310k+1540}+ \\ \\ \displaystyle \frac{1}{385(9240k+6001)(2034k+1321)}+\frac{1}{22(3k+2)(2034k+1321)}, \end{array} \end{equation} \noindent which shows that $9240k+6001\in {\mathcal C}_{40}$ for all $k\in \mathbb Z$, $k\ge 0$.
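The two identities above can be verified with exact rational arithmetic for any particular $k$; the sketch below (Python) checks the first hundred values. Note that in the first identity the factor multiplying $5$ and $770$ is $n=9240k+1201$ itself.

```python
from fractions import Fraction as F

def check_1201(k):
    """The 1201 identity: 4/(9240k+1201) as a sum of three unit fractions."""
    n = 9240 * k + 1201
    return (F(1, 2310 * k + 308)
            + F(1, 5 * n * (15 * k + 2))
            + F(1, 770 * n * (15 * k + 2))) == F(4, n)

def check_6001(k):
    """The 6001 identity: 4/(9240k+6001) as a sum of three unit fractions."""
    n = 9240 * k + 6001
    return (F(1, 2310 * k + 1540)
            + F(1, 385 * n * (2034 * k + 1321))
            + F(1, 22 * (3 * k + 2) * (2034 * k + 1321))) == F(4, n)

print(all(check_1201(k) and check_6001(k) for k in range(100)))   # True
```

For $k=0$ the first identity reads $\frac{4}{1201}=\frac{1}{308}+\frac{1}{12010}+\frac{1}{1849540}$.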
\begin{theorem}\label{mordellextension} The equation (\ref{esconj}) has at least one solution for every prime number $n$, except possibly for those primes of the form $n\equiv r$ (mod 9240) where $r$ is one of the 34 entries in the table: \vspace{0.1in} \centerline{ \begin{tabular}{|c|c|c|c|c|c|} \hline $1^2$ & $13^2$& $17^2$ & $19^2$ & $23^2$ & $29^2$ \\ \hline $31^2$ & $37^2$ & $41^2$ & $43^2$ & $13(157)$& $47^2$ \\ \hline $\underline{2521}$ & $19(139)$ & $\underline{2689}$ & $53^2$ & $\underline{3361}$ & $59^2$\\ \hline $\underline{3529}$& $61^2$ & $29(149)$ & $67^2$& $71^2$ & $13(397)$\\ \hline $73^2$ & $\underline{5569}$& $31(199)$ & $79^2$ & $83^2$ & $\underline{7561}$\\ \hline $\underline{7681}$ & $89^2$ & $\underline{8089}$ & $\underline{8761}$ & & \\ \hline \end{tabular}} \vspace{0.1in} Moreover, if $n$ is not of the above form, it is in the class ${\mathcal C}_3$, or in ${\mathcal C}_8$ if $n\equiv 1201$ (mod 9240), or in ${\mathcal C}_{40}$ if $n\equiv 6001$ (mod 9240). \end{theorem} \hspace{\parindent}{P{\scriptsize ROOF}}. \ We look to see for which values of $r\in \{0,1,2,3,4,5,6\}$ we have $1320(7l+r)+s\in \{1,5^2,11^2\}$ (mod 168) with $s\in E$ ($E$ as in Theorem~\ref{1320}). For instance, if $s=19^2$ we get that $r$ must be in the set $\{0,1,3\}$ in order to have $1320(7l+r)+19^2\in \{1,5^2,11^2\}$ (mod 168). These three cases correspond to residues $19^2$, $41^2$ and $29(149)$ modulo 9240. Each residue in $E$ gives rise to three exceptions. We leave the rest of this analysis to the reader.$\hfill\bsq$\par \section{Numerical Computations and Comments } We observe that the first ten of these residues in Theorem~\ref{mordellextension} are all perfect squares. In fact, all 19 squares of primes less than 9240 and greater than $11^2$ are all excepted residues. There is something curious about the fact that all the perfect squares possible are excepted. 
This may be related to the result obtained by Schinzel in \cite{s}, who shows that identities such as (\ref{1201}), (\ref{6001}) and others in this note cannot exist if the residue is a perfect square. The same phenomenon is actually captured in Theorem 2 in \cite{y}. The good news about Theorem~\ref{mordel}, Theorem~\ref{1320}, and Theorem~\ref{mordellextension} is that the first excepted residues are all perfect squares or composite, and moreover their number is essentially increasing with the moduli. Unfortunately, with our analysis there are a few other composite and 9 prime residues that have to be excluded. The prime 2521 is only the 369th prime and it is the first prime that is excluded by this theorem. However, a decomposition with the smallest $a$ possible is exhibited in the equality $$\frac{4}{2521} = \frac{1}{636} + \frac{1}{70588} + \frac{1}{5611746},$$ \noindent which puts $2521\in {\mathcal C}_6$. The other primes belong to the smallest classes ${\mathcal C}_i$ as follows: \vspace{0.1in} \centerline{ \begin{tabular}{|c|c|c|c|c|c|} \hline ${\mathcal C}_1$ & ${\mathcal C}_2$ & ${\mathcal C}_3$ & ${\mathcal C}_4$ & ${\mathcal C}_5$& ${\mathcal C}_6$ \\ \hline \hline \\ 3361, 7681, 8089 & 3529, 5569 & 8761 & 2689& 7561 & 2101, 2521 \\ \hline \end{tabular}} \vspace{0.1in} Clearly, one can continue this type of analysis by adding more primes to the modulus, which is at this point 9240. It is natural to just add the primes in order, regardless of whether they are of the form $4k+1$ or $4k+3$. We see that the {\it Erd\H{o}s}--Straus conjecture is proved to be true if one can show that the smallest excluded residue, for a set of moduli that converges to infinity, is not a prime. One way to accomplish this is to actually show that the pattern mentioned above continues, i.e. the number of excluded residues which are perfect squares or composite is essentially growing as the modulus increases. This is actually the conjecture that we talked about in the abstract.
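The class memberships above can be recomputed directly: for a given $n$, the sketch below (Python; function names are ours) finds the least admissible $a$, testing with the criterion of Theorem~\ref{twofractions} whether the remainder $\frac{4}{n}-\frac{1}{a}$ splits into two unit fractions, and then converts $a$ into the class index $i$ via $a\le\frac{n+4i-1}{4}$.

```python
from fractions import Fraction
from math import gcd, isqrt

def splits_in_two(p, q):
    """Theorem 1.1: p/q (in lowest terms) is a sum of two unit fractions iff
    some divisors x, y of q satisfy x*y | q and x + y == 0 (mod p)."""
    g = gcd(p, q)
    p, q = p // g, q // g
    divs = [d for d in range(1, isqrt(q) + 1) if q % d == 0]
    divs += [q // d for d in divs]
    return any(q % (x * y) == 0 and (x + y) % p == 0
               for x in divs for y in divs if x <= y)

def class_index(n):
    """Smallest i with n in C_i: find the least a > n/4 such that
    4/n - 1/a is a sum of two unit fractions, then solve a <= (n+4i-1)/4."""
    a = n // 4 + 1
    while True:
        r = Fraction(4, n) - Fraction(1, a)
        if splits_in_two(r.numerator, r.denominator):
            return (4 * a - n + 4) // 4      # ceil((4a - n + 1)/4)
        a += 1

print(class_index(2521))   # 6, matching 4/2521 = 1/636 + 1/70588 + 1/5611746
```

When the remainder is itself a unit fraction, the criterion holds trivially (take $x=y=1$), which corresponds to splitting that unit fraction in two equal parts.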
Numerical evidence points out that for residues $r$ which are primes, we have $9240s+r\in {\mathcal C}_{k(s,r)}$ with $k(s,r)$ bounded as a function of $s$. For example, $9240s+2521 \in {\mathcal C}_{12}$ for every $s=1,\dots,100000$, and the distribution through the smaller classes is \vspace{0.1in} \centerline{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline ${\mathcal C}_1$ & ${\mathcal C}_2$ & ${\mathcal C}_3$ & ${\mathcal C}_4$ & ${\mathcal C}_5$& ${\mathcal C}_6$ & ${\mathcal C}_7$& ${\mathcal C}_8$& ${\mathcal C}_9$ & ${\mathcal C}_{10}$ & ${\mathcal C}_{11}$ & ${\mathcal C}_{12}$ \\ \hline \hline \\ 10852 & 6444 & 5332 & 811 & 612 & 277 & 63 & 82 & 6 & 7 & 0 & 5 \\ \hline $44.3\%$& $26.31\%$ & $21.78\%$ & $3.3\%$ & $2.5\%$ & $1.13\%$ & $0.26\%$ & $0.34\%$ & $0.025\%$ & $0.029\%$ & $0\%$ & $0.021\%$ \\ \hline \end{tabular}} \vspace{0.1in} Now, if we add 13 to the factors we would have an analysis modulo 120120. It turns out that 2521 is not an exception modulo 120120 because of the identity $$\frac{4}{120120k+2521}=\frac{1}{30030k+4004}+\frac{1}{1001(120120k+2521)(810k+17)}+ \frac{1}{22(15k+2)(810k+17)},$$ \noindent which shows that $120120k+2521\in {\mathcal C}_{3374}$ for all $k\in \mathbb Z$, $k\ge 0$. \noindent We found similar identities for the residues 2689, 3529, 29(149), 5569, 31(199), 7561, and 7681 modulo 120120. This suggests that one may actually be able to obtain Mordell type results for bigger moduli, in the sense that the perfect-square residues appear in essentially bigger numbers, by implementing a finer analysis that involves higher classes than $\mathcal C_3$. It is natural to believe that this might be true taking into account that Vaughan \cite{v} showed that $$\frac{1}{m}\# \{n\in \mathbb N|\ n\le m,\ and\ (\ref{esconj})\ does \ not\ have \ a\ solution \}\le e^{-c(\ln\ m)^{2/3}}, m\in \mathbb N,$$ \noindent for some constant $c>0$.
This says, roughly speaking, that the proportion of those $n\le m$ for which $4/n$ cannot be written as a sum of three unit fractions goes to zero as $m\to \infty$, though a little slower than $\frac{1}{m}$. The first few primes that require a bigger class than the ones before are 2, 73, 1129, 1201, 21169, 118801, 8803369,..., corresponding to classes ${\mathcal C}_1$, ${\mathcal C}_2$, ${\mathcal C}_3$, ${\mathcal C}_4$, ${\mathcal C}_8$, ${\mathcal C}_{15}$, ${\mathcal C}_{27}$, ..., which shows a steep increase in the size of classes relative to the number of jumps. In \cite{y}, Yamamoto has a different approach from ours and obtains fewer exceptions, at least for the primes involved in Theorem~\ref{mordellextension}. For each prime $p$ of the form $4k+3$ between 11 and 97, there is a table in \cite{y} of exceptions for congruency classes $r$ ($n\equiv r$ (mod $p$)) that is used to check the conjecture by computer for all $n\le 10^7$. However, in \cite{rg}, Richard Guy mentions that the conjecture is checked to be true for all $n\le 1003162753$. We extended the search for a counterexample further, to all $n\le 4,146,894,049$. For our computations we wrote a program that pushes the analysis to a modulus of $M=2,762,760=2^3(3)(5)(7)(11)(13)(23)$. The primes chosen here are optimal, in the sense that the excepted residues are fewer in number than those obtained with other choices. The first 12 exceptions in this case are $1$, $17^2$, $19^2$, $29^2$, $31^2$, $37^2$, $41^2$, $43^2$, $47^2$, $53^2$, $3361$, and $59^2$. The number of these exceptions was 2299, but it is possible that our program was not optimal from this point of view. Nevertheless, this meant that we had to check the conjecture, on average, only for roughly one in every 1201 integers.
The primes generated, 889456 of them, are classified according to the smallest class they belong to in the next tables: \vspace{0.1in} \centerline{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline ${\mathcal C}_1$ & ${\mathcal C}_2$ & ${\mathcal C}_3$ & ${\mathcal C}_4$ & ${\mathcal C}_5$& ${\mathcal C}_6$ & ${\mathcal C}_7$& ${\mathcal C}_8$& ${\mathcal C}_9$ & ${\mathcal C}_{10}$ & ${\mathcal C}_{11 }$ & ${\mathcal C}_{12 }$ \\ \hline \hline \\ 380547 & 228230 & 128494& 61129& 50853& 17116 & 8459& 9580& 1836& 1386 & 547& 855\\ \hline $42.8\%$& $25.7\%$ & $14.4\%$ & $6.9\%$ & $5.7\%$ & $2\%$ & $0.9\%$ & $1\%$ & $0.2\%$ & $0.15\%$ & $0.06\%$ & $0.096\%$ \\ \hline \end{tabular}} \vspace{0.1in} \centerline{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline ${\mathcal C}_{13}$ & ${\mathcal C}_{14}$ & ${\mathcal C}_{15}$ & ${\mathcal C}_{16}$ & ${\mathcal C}_{17}$& ${\mathcal C}_{18}$ & ${\mathcal C}_{19}$& ${\mathcal C}_{20}$& ${\mathcal C}_{21}$ & ${\mathcal C}_{22}$ & ${\mathcal C}_{23 }$ & ${\mathcal C}_{24 }$ \\ \hline \hline \\ 115 & 124 & 111 & 26 & 10 & 27 & 2 & 4 & 4 & 0 & 0 & 0 \\ \hline $0.013\%$& $0.014\%$ & $0.012\%$ & $0.003\%$ & $0.001\%$ & $0.003\%$ & $0.0002\%$ & $0.00045\%$ & $0.00045\%$ & $0\%$ & $0\%$ & $0\%$ \\ \hline \end{tabular}} \vspace{0.1in} \centerline{ \begin{tabular}{|c|c|c|} \hline ${\mathcal C}_{25}$ & ${\mathcal C}_{26}$ & ${\mathcal C}_{27}$ \\ \hline \hline \\ 0& 0 & 1 \\ \hline $0\%$& $0\%$ & $0.0001\%$ \\ \hline \end{tabular}} \vspace{0.1in} \vspace{0.1in} So far, we have not seen a prime in a class ${\mathcal C}_{k}$ with $k> 27$. The result obtained in \cite{s} seems to imply that the minimum class index for each prime, assuming the conjecture is true, should have a limit superior of infinity. \vspace{0.1in}
\section{Introduction} Jamming attacks pose a serious threat to the continuous operability of wireless communication systems \cite{economist2021satellite}. Effective methods to mitigate such attacks are necessary as wireless systems become increasingly critical to modern infrastructure~\cite{popovski2014ultra}. In the uplink of massive multi-user multiple-input multiple-output (MU-MIMO) systems, effective jammer mitigation is rendered possible by the asymmetry in the number of antennas between the basestation (BS), which has many antennas, and a mobile jamming device, which has one or few antennas. One possibility, for instance, is to project the receive signals on the subspace orthogonal to the jammer's channel~\cite{marti2021snips,yan2016jamming}. But such methods require accurate knowledge of the jammer's channel. If a jammer transmits permanently and with a static signature (often called barrage jamming), the~BS~can estimate the required quantities, for instance during a dedicated period in which the user equipments (UEs) do not transmit~\cite{marti2021snips} or in which they transmit predefined symbols~\cite{yan2016jamming}. Instead of barrage jamming, however, a smart jammer might jam the system only at specific time instants. Such attacks may prevent the BS from estimating a jammer's channel with simple~methods. \nopagebreak \subsection{State of the Art} Multi-antenna wireless systems have the unique potential to effectively mitigate jamming attacks, and a variety of multi-antenna methods have been proposed for the mitigation of jamming attacks in MIMO systems \cite{marti2021snips, shen14a, yan2016jamming,zeng2017enabling, vinogradova16a, do18a, akhlaghpasand20a, akhlaghpasand20b, marti2021hybrid}. 
Common to all of them~is the assumption---in one way or another---that information about the jammer's transmit characteristics (e.g., the jammer's channel, or the covariance matrix between the UE transmit signals and the jammed receive signals) can be estimated based on some specific subset of the receive samples.\footnote{The method of \cite{vinogradova16a} is to some extent an exception as it estimates the~UEs' subspace and projects the receive signals thereon. However, this method distinguishes the UEs' subspace from the jammer's based on the receive power, thereby presuming that the UEs and the jammer transmit with different power.} \fref{fig:traditional}~illustrates the approach taken by these methods: The data phase is preceded by an augmented training phase in which the jammer's transmit characteristics are estimated in addition to the channel matrix. This augmented training phase can either complement a traditional pilot phase with a period during which the UEs do not transmit in order to enable jammer estimation (e.g., \cite{marti2021snips, shen14a}), or it can consist of an extended pilot phase so that there exist pilot sequences which are unused by the UEs and on whose span the receive signals can be projected to estimate the jammer's subspace (e.g., \cite{do18a, akhlaghpasand20a, akhlaghpasand20b}). The estimated jammer characteristics are then used to perform jammer-mitigating data detection. Such an approach succeeds in the case of barrage jammers, but is unreliable for estimating the transmit characteristics of smart jammers: A smart jammer can evade estimation and thus circumvent mitigation by not~transmitting in those samples, for instance because it is aware of the defense mechanism, or simply because it jams in brief bursts only. For this reason, our proposed method MAED unifies jammer estimation, channel estimation, and data detection; see \fref{fig:maed}.
Many studies have already shown how smart jammers can disrupt wireless communication systems by targeting only specific parts of the wireless transmission \cite{miller2010subverting, miller2011vulnerabilities, clancy2011efficient, sodagari2012efficient, lichtman2013vulnerability, lichtman20185g, lichtman2016lte, girke2019towards,lapan2012jamming} instead of using barrage jamming. Jammers that jam only the pilot phase have received considerable attention \cite{miller2010subverting,miller2011vulnerabilities,clancy2011efficient,sodagari2012efficient,lichtman2013vulnerability}, as such attacks can be more energy-efficient than barrage jamming in disrupting communication systems that do not defend themselves against jammers~\cite{clancy2011efficient,sodagari2012efficient,lichtman2013vulnerability}. However, if a jammer is active during the pilot phase, then a BS that \emph{does} defend itself against jamming attacks can estimate the jammer's channel by exploiting knowledge of the UE transmit symbols, for instance with the aid of unused pilot sequences \cite{do18a, akhlaghpasand20a, akhlaghpasand20b}. To disable such jammer-mitigating communication systems, a smart jammer might therefore refrain from jamming the pilot phase and only target the data phase, even though such jamming attacks have received little attention so far \cite{lichtman2016lte, girke2019towards}. Other threat models that have been analyzed include jammers that attack specific control \mbox{channels \cite{lichtman2013vulnerability, lichtman2016lte, lichtman20185g} or the time synchronization phase \cite{lapan2012jamming}.} \begin{figure}[tp] \vspace{-1mm} \centering ~ \subfigure[Existing jammer-mitigation methods separate jammer estimation (JEST) and channel estimation (CHEST) from the jammer-resilient data detection (DET).
Such methods are~ineffective~against~jammers that attack only the data~phase and do not transmit in the training phase.]{ \includegraphics[width=0.956\columnwidth]{figures/sota.pdf} \label{fig:traditional} } \newline \subfigure[MAED unifies jammer estimation and mitigation, channel estimation, and data detection to mitigate jamming attacks regardless of their activity~pattern.]{ \includegraphics[width=0.956\columnwidth]{figures/maed.pdf} \label{fig:maed} } \caption{ The approach to jammer mitigation taken by existing methods (a) compared to the approach taken by MAED (b). In the figure, $\{\bmy_1,\dots,\bmy_K\}$ are the receive signals, and $\hat\Hj, \hat\bH$ and $\hat\bS_D$ are the estimates of the jammer channel, the UE channel matrix, and the UE transmit symbols, respectively. } \label{fig:maed_vs_trad} \end{figure} \subsection{Contributions} We propose MAED (short for MitigAtion, Estimation, and Detection), a novel method for jammer mitigation that does not depend on the jammer being active during \textit{any} specific period. Instead, our approach leverages the fact that a jammer cannot change its subspace instantaneously. To this end, MAED utilizes a problem formulation which unifies jammer estimation and cancellation, channel estimation, and data detection, instead of dealing with these tasks independently, see \fref{fig:maed}. In doing this, the method builds upon techniques for joint channel estimation and data detection (JED) \cite{vikalo2006efficient, xu2008exact, kofidis2017joint, castaneda2018vlsi, yilmaz2019channel, he2020model, song2021soft}. The proposed problem formulation is approximately solved using an efficient iterative algorithm. Extensive simulation results demonstrate that MAED is able to effectively mitigate a wide variety of na\"ive and smart jamming attacks, without requiring any \textit{a~priori} knowledge about the attack type. 
\subsection{Notation} Matrices and column vectors are represented by boldface uppercase and lowercase letters, respectively. For a matrix $\bA$, the transpose is $\tp{\bA}$, the conjugate transpose is $\herm{\bA}$, the entry in the $\ell$th row and $k$th column is $[\bA]_{\ell,k}$, the submatrix consisting of the columns from $n$ through $m$ is $\bA_{[n:m]}$, and the Frobenius norm is $\| \bA \|_F$. The $N\!\times\!N$ identity matrix is $\bI_N$. For a vector~$\bma$, the $\ell_2$-norm is $\|\bma\|_2$, the real part is $\Re\{\bma\}$, and the imaginary part is $\Im\{\bma\}$. The expectation with respect to the random vector~$\bmx$ is denoted by $\Ex{\bmx}{\cdot}$. We define $i^2=-1$. The complex $n$-hypersphere of radius $r$ is denoted by $\mathbb{S}_r^n$, and the span of a vector $\bma$ is denoted by $\textit{span}(\bma)$. \section{System Setup}\label{sec:setup} We consider the uplink of a massive MU-MIMO system in which $U$ single-antenna UEs transmit data to a $B$-antenna BS in the presence of a single-antenna jammer. The channels are assumed to be frequency flat and block-fading with a coherence interval of $K=T+D$ time slots. The first $T$ time slots are used to transmit pilot symbols; the remaining $D$ time slots are used to transmit payload data. The UE transmit matrix is $\bS = [\bS_T,\bS_D]$, where $\bS_T\in \opC^{U\times T}$ and $\bS_D\in\setS^{U\times D}$ contain the pilots and the transmit symbols, respectively, and $\setS$ is the transmit constellation, which is taken to be QPSK with symbol energy $\Es$ throughout this paper.\footnote{Our method can be extended to higher-order modulation schemes.} We assume that the jammer does not prevent the UEs and the BS from establishing time synchronization, which allows us to use the discrete-time input-output relation \begin{align} \bY = \bH\bS + \Hj\tp{\bsj} + \bN.
\label{eq:io} \end{align} Here, $\bY\in\opC^{B\times K}$ is the BS-side receive matrix that contains the \mbox{$B$-dimensional} receive vectors over all $K$ time slots, \mbox{$\bH\in\opC^{B\times U}$} models the MIMO uplink channel, $\Hj\in\opC^B$ models the channel between the jammer and the BS, $\bsj\in\opC^K$ contains the jammer's transmit symbols over all $K$ time slots, and $\bN\in\opC^{B\times K}$ models thermal noise and contains independently and identically distributed (i.i.d.) circularly-symmetric complex Gaussian entries with variance $N_0$. In what follows, we usually assume that the jammer's transmit symbols $\bsj$ are independent of the UE transmit symbols $\bS$. However, we will also consider a scenario in which we drop this assumption (see \fref{sec:smart}). We make no other assumptions about the distribution of $\bsj$. In particular, we do not assume that its entries are distributed~i.i.d. \section{Joint Jammer Estimation and Mitigation, Channel Estimation, and Data Detection} Our goal is threefold: (i) mitigating the jammer's interference by locating its one-dimensional subspace $\textit{span}(\Hj)$ and projecting the receive matrix $\bY$ onto the orthogonal complement of~that subspace, (ii) estimating the channel matrix $\bH$, and (iii) recovering the UE data in $\bS_D$. To this end, we develop a novel problem formulation that combines all three of these aspects. We then propose an iterative algorithm to approximately solve this optimization problem. \subsection{The MAED Optimization Problem} In the absence of a jammer, the maximum-likelihood problem for joint channel estimation and data detection is \cite{vikalo2006efficient} \begin{align} \big\{\hat\bH, \hat\bS_D\big\} &= \argmin_{\substack{\hspace{1.3mm}\tilde\bH\in\opC^{B\times U}\\ \tilde\bS_D\in\setS^{U\times D}}}\! 
\big\|\bY - \tilde\bH \tilde\bS \big\|^2_F, \label{eq:ml_jed} \end{align} where, for brevity, we define $\tilde\bS \triangleq [\bS_T,\tilde\bS_D]$ and leave the dependence on $\tilde\bS_D$ implicit. This objective already integrates the goals of estimating the channel matrix and detecting the transmit symbols. The jammer, however, will lead to a residual \begin{align} \|\bY - \bH\bS\|^2_F &= \|\Hj\tp{\bsj} + \bN\|^2_F \approx \|\Hj\tp{\bsj}\|^2_F \end{align} when plugging the true channel and data matrices into \fref{eq:ml_jed}, and assuming that the contribution of the noise $\bN$ is negligible. Consider now the projection onto the orthogonal subspace (POS) for jammer mitigation \cite{marti2021snips}: POS nulls a jammer by orthogonally projecting the receive signals onto the orthogonal complement of $\textit{span}(\Hj)$ using the matrix $\bP(\Hj) \triangleq \bI_B - \Hj\herm{\Hj}/\|\Hj\|^2$: \begin{align} \bP(\Hj)\bY &= \bP(\Hj)\,\bH\bS + \bP(\Hj)\,\Hj\tp{\bsj} + \bP(\Hj)\,\bN \label{eq:pos} \\ &= \bP(\Hj)\,\bH\bS + \bP(\Hj)\,\bN. \end{align} One can then define \mbox{$\bY_{\bP(\Hj)}\triangleq\bP(\Hj)\bY,~\bH_{\bP(\Hj)}\triangleq\bP(\Hj)\bH$,} and perform channel estimation and data detection using the resulting jammer-free system. The difficulty is, of course, that the projection matrix $\bP(\Hj)$ depends on the (unknown) direction~$\Hj/\|\Hj\|$ of the jammer's channel. Now, consider what happens when we take the matrix\footnote{The dependence of $\tilde\bP$ on $\tilde\bmp$ is left implicit here and throughout the paper.} $\tilde\bP\triangleq\bI-\tilde\bmp\herm{\tilde\bmp},~\tilde\bmp\in \mathbb{S}_1^B$ which orthogonally projects a signal onto the orthogonal complement of some arbitrary one-dimensional subspace $\textit{span}(\tilde\bmp)$, and then apply that projection to the objective of \eqref{eq:ml_jed} as follows: \begin{align} \|\tilde\bP(\bY - \tilde\bH\tilde\bS)\|^2_F.
\label{eq:ml_p_jed} \end{align} If we now plug the true channel and data matrices into \fref{eq:ml_p_jed} and assume that the noise $\bN$ is negligible, then we obtain \begin{align} \|\tilde\bP(\bY - \bH\bS)\|^2_F &= \|\tilde\bP\Hj\tp{\bsj} + \tilde\bP\bN\|^2_F \\ &\approx \|\tilde\bP\Hj\tp{\bsj}\|^2_F \\ &\geq 0, \end{align} with equality if and only if $\tilde\bmp$ is collinear with $\Hj$. In other words, the unit vector $\tilde\bmp$ which---in combination with the true channel and data matrices---minimizes \eqref{eq:ml_p_jed} is collinear with the jammer's channel, and hence $\tilde\bP$ is the POS matrix. So, if the noise $\bN$ is negligible, then $\{\tilde\bmp,\tilde\bH,\tilde\bS\}$ minimizes \eqref{eq:ml_p_jed} if (i) $\tilde\bP$ is the orthogonal projection onto~the orthogonal complement of $\textit{span}(\Hj)$, (ii) $\tilde\bH$ is the true channel matrix, and (iii) $\tilde\bS$ contains the true data matrix. These~are, of course, exactly the goals that we wanted to attain. Following this insight, we now formulate the MAED joint jammer estimation and mitigation, channel estimation, \mbox{and data detection~problem:} \begin{align} \big\{\hat\bmp, \hat\bH_\bP, \hat\bS_D\big\} &= \argmin_{\substack{\tilde\bmp\in \mathbb{S}_1^B\hspace{1.4mm}\\ \hspace{1.3mm}\tilde\bH_\bP\in\opC^{B\times U}\\ \tilde\bS_D\in\setS^{U\times D}}}\! \big\|\tilde\bP\bY - \tilde\bH_\bP \tilde\bS \big\|^2_F.\! \label{eq:obj1} \end{align} Note that, compared to \eqref{eq:ml_p_jed}, we have absorbed the projection matrix $\tilde\bP$ directly into the unknown channel matrix $\tilde\bH_\bP$. Otherwise the columns of $\tilde\bH_\bP$ would be ill-defined with respect to the length of their components in the direction of $\tilde\bmp\approx\Hj$, meaning that one could not distinguish between channel estimates $\tilde\bH + \alpha\Hj\tp{\tilde{\bsj}}$ with different $\alpha,\tilde{\bsj}$.
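The effect underlying \eqref{eq:ml_p_jed} is easy to verify numerically. The following sketch (with hypothetical toy dimensions, not the system sizes used later in the paper) checks that the projected residual nearly vanishes when $\tilde\bmp$ is collinear with the jammer channel, but remains large for an arbitrary direction:

```python
import numpy as np

rng = np.random.default_rng(0)
B, U, K = 64, 8, 32  # toy dimensions (hypothetical, for illustration only)

# Receive matrix Y = H S + hj sj^T + N with weak noise
H = (rng.standard_normal((B, U)) + 1j * rng.standard_normal((B, U))) / np.sqrt(2)
S = rng.choice([-1.0, 1.0], (U, K)) + 1j * rng.choice([-1.0, 1.0], (U, K))
hj = (rng.standard_normal(B) + 1j * rng.standard_normal(B)) / np.sqrt(2)
sj = 3.0 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
N = 1e-3 * (rng.standard_normal((B, K)) + 1j * rng.standard_normal((B, K)))
Y = H @ S + np.outer(hj, sj) + N

def proj_residual(p):
    """Frobenius norm of P (Y - H S) with P = I - p p^H, ||p||_2 = 1."""
    P = np.eye(B) - np.outer(p, p.conj())
    return np.linalg.norm(P @ (Y - H @ S))

r_jammer = proj_residual(hj / np.linalg.norm(hj))  # p collinear with hj
q = rng.standard_normal(B) + 1j * rng.standard_normal(B)
r_random = proj_residual(q / np.linalg.norm(q))    # arbitrary unit vector
```

In this setup, the jammer-aligned direction leaves a residual on the order of the noise, while a random direction leaves most of the jamming energy, which is exactly the property that the objective \eqref{eq:obj1} exploits.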
\subsection{Solving the MAED Optimization Problem} The objective \eqref{eq:obj1} is quadratic in $\tilde\bH_\bP$, so we can derive the optimal value of $\tilde\bH_\bP$ as a function of $\tilde\bP$ and $\tilde\bS$, as \begin{align} \hat\bH_\bP = \tilde\bP\bY\pinv{\tilde\bS}, \end{align} where $\pinv{\tilde\bS}=\herm{\tilde\bS}\inv{(\tilde\bS\herm{\tilde\bS})}$ is the Moore-Penrose pseudo-inverse of $\tilde\bS$. Substituting $\hat\bH_\bP$ back into \eqref{eq:obj1} yields \begin{align} \big\{\hat\bmp, \hat\bS_D\big\} = \argmin_{\substack{\tilde\bmp\in \mathbb{S}_1^B\hspace{1.4mm}\\ \tilde\bS_D\in\setS^{U\times D}}} \big\|\tilde\bP\bY(\bI_K - \pinv{\tilde\bS}\tilde\bS)\big\|^2_F. \label{eq:obj3} \end{align} Solving \eqref{eq:obj3} is difficult due to its combinatorial nature, so we resort to solving it approximately. First, we relax the constraint set $\setS$ to its convex hull $\setC\triangleq\textit{conv}(\setS)$ as in \cite{castaneda2018vlsi}. We then solve this~relaxed problem formulation approximately by alternately performing a forward-backward splitting step in $\tilde\bS$ and a minimization step in $\tilde\bP$. \subsection{Forward-Backward Splitting Step in $\tilde\bS$} \label{sec:fbs} Forward-backward splitting (FBS) \cite{goldstein16a}, also called proximal gradient descent, is an iterative method that solves convex optimization problems of the form \begin{align} \argmin_{\tilde\bms}\, f(\tilde\bms) + g(\tilde\bms), \label{eq:fbs1} \end{align} where $f$ is convex and differentiable, and $g$ is convex but not necessarily differentiable, smooth, or bounded. Starting from an initialization vector $\tilde\bms^{(0)}$, FBS solves the problem in~\eqref{eq:fbs1} iteratively by computing \begin{align} \tilde\bms^{(t+1)} = \proxg\big(\tilde\bms^{(t)} - \tau^{(t)}\nabla f(\tilde\bms^{(t)}); \tau^{(t)}\big).
\end{align} Here, $\nabla f(\tilde\bms)$ is the gradient of $f(\tilde\bms)$, $\tau^{(t)}$ is the stepsize at iteration $t$, and $\proxg$ is the proximal operator of $g$ \cite{parikh13a}. For a suitably chosen sequence of stepsizes $\{\tau^{(t)}\}$, FBS solves convex optimization problems exactly (provided that the number of iterations is sufficiently large). FBS can also be utilized to approximately and efficiently solve non-convex problems, even though there are typically no guarantees for optimality or even convergence~\cite{goldstein16a}. For the optimization problem in \fref{eq:obj3}, we define $f$ and $g$ as \begin{align} f(\tilde\bS) &= \big\|\tilde\bP\bY(\bI_K - \pinv{\tilde\bS}\tilde\bS)\big\|^2_F, \end{align} and \begin{align} g(\tilde\bS) &= \begin{cases} 0 &\text{if }\,\tilde\bS_{[1:T]}=\bS_T \text{ and } \tilde\bS_{[T+1:K]}\in\setC^{U\times D} \!\!\!\\ \infty &\text{else}. \end{cases} \end{align} The gradient of $f$ in $\tilde\bS$ is given by \begin{align} \nabla f(\tilde\bS) = -\herm{(\pinv{\tilde\bS})}\herm\bY\tilde\bP\bY(\bI_K - \pinv{\tilde\bS}\tilde\bS) \end{align} and the proximal operator for $g$ is simply the orthogonal projection onto the constraint set, which is \begin{align} [\proxg(\tilde\bS)]_{u,k} = \begin{cases} [\bS_T]_{u,k} &\text{ if } k\in[1:T] \\ \text{proj}_\setC([\tilde\bS]_{u,k}) &\text{ else,} \end{cases} \label{eq:proxg} \end{align} where (for QPSK) $\text{proj}_\setC$ acts entry-wise on $[\tilde\bS]_{u,k}$ as \begin{align} \text{proj}_\setC(x) =\, & \min\{\max\{\Re(x),-\sqrt{\sfrac{\Es}{2}}\},\sqrt{\sfrac{\Es}{2}}\} \nonumber\\ &+ i\min\{\max\{\Im(x),-\sqrt{\sfrac{\Es}{2}}\},\sqrt{\sfrac{\Es}{2}}\}. \end{align} For the selection of the stepsizes $\{\tau^{(t)}\}$, we use the Barzilai-Borwein method \cite{barzilai1988two} as detailed in \cite{goldstein16a,zhou2006gradient}. \subsection{Minimization Step in $\tilde\bP$} After the FBS step in $\tilde\bS$, we minimize \eqref{eq:obj3} with respect to the vector~$\tilde\bmp$.
Defining $\tilde\bE\triangleq \bY(\bI_K - \pinv{\tilde{\bS}}\tilde{\bS})$ and performing standard algebraic manipulations yields \begin{align} \hat\bmp &= \argmin_{\tilde\bmp\in \mathbb{S}_1^B} \big\|\tilde\bP \tilde\bE \big\|^2_F \\ &= \argmax_{\tilde\bmp\in \mathbb{S}_1^B} \, \herm{\tilde\bmp} \tilde\bE \herm{\tilde\bE} \tilde\bmp. \label{eq:rayleigh} \end{align} This implies that the minimizer $\hat\bmp$ is the unit vector which maximizes the Rayleigh quotient of $\tilde\bE \herm{\tilde\bE}$. The~solution to~this problem is the eigenvector $\bmv_1(\tilde\bE \herm{\tilde\bE})$ belonging~to~the largest eigenvalue of $\tilde\bE \herm{\tilde\bE}$, normalized to unit length~\cite[Thm.\,4.2.2]{horn2013matrix}, \begin{align} \hat\bmp=\frac{\bmv_1(\tilde\bE \herm{\tilde\bE})}{\big\|\bmv_1(\tilde\bE \herm{\tilde\bE})\big\|_2}. \end{align} Calculating this eigenvector for every iteration is computationally expensive, so we only do it for the very first iteration. In~all subsequent iterations, we then approximate its value with a single power method step \cite[Sec.\,8.2.1]{GV96}, i.e., we estimate \begin{align} \hat\bmp^{(t+1)} = \frac{\tilde\bE^{(t+1)} \herm{(\tilde\bE^{(t+1)})}\hat\bmp^{(t)}}{\|\tilde\bE^{(t+1)} \herm{(\tilde\bE^{(t+1)})}\hat\bmp^{(t)}\|_2}, \end{align} where we initialize the power method with the subspace estimate $\hat\bmp^{(t)}$ from the previous iteration. \subsection{The MAED Algorithm} We now have all the building blocks for MAED, which is summarized in \fref{alg:maed}. Its only input is the receive matrix $\bY$. MAED is initialized with $\tilde\bS_D^{(0)}=\mathbf{0}_{U\times D}$, $\tilde\bP^{(0)} = \bI_B,$ and $\tau^{(0)}=\tau_0=0.1$, and runs for a fixed number of $t_{\max}$~iterations. \begin{figure*}[tp] \centering \!\! \subfigure[strong barrage jammer (J1)]{ \includegraphics[height=4cm]{figures/tikzplots/128x32_QPSK_D64_static_gaussian_rho25_NJE1_T30_1000Trials.pdf} \label{fig:strong:static} }\!\!\!
\subfigure[strong pilot jammer (J2)]{ \includegraphics[height=4cm]{figures/tikzplots/128x32_QPSK_D64_pilot_gaussian_rho25_NJE1_T30_1000Trials.pdf} \label{fig:strong:pilot} }\!\!\! \subfigure[strong data jammer (J3)]{ \includegraphics[height=4cm]{figures/tikzplots/128x32_QPSK_D64_data_gaussian_rho25_NJE1_T30_1000Trials.pdf} \label{fig:strong:data} }\!\!\! \subfigure[strong sparse jammer (J4)]{ \includegraphics[height=4cm]{figures/tikzplots/128x32_QPSK_D64_burst_gaussian_rho25_NJE1_T30_1000Trials.pdf} \label{fig:strong:burst} }\!\! \caption{Uncoded bit error-rate (BER) for different detectors in the presence of a \emph{strong} jammer. The jammer transmits Gaussian symbols either (a) during the entire coherence interval, (b) during the pilot phase only, (c) during the data phase only, or (d) in random unit-symbol bursts with a duty cycle of $\alpha=20\%$. The jammer receive energy over the whole coherence interval is $\rE=25$\,dB higher than that of the average UE. } \label{fig:strong_jammers} \end{figure*} \begin{figure*}[tp] \centering \!\! \subfigure[weak barrage jammer (J1)]{ \includegraphics[height=4cm]{figures/tikzplots/128x32_QPSK_D64_static_constellation_rho0_NJE0_T30_1000Trials.pdf} \label{fig:weak:static} }\!\!\! \subfigure[weak pilot jammer (J2)]{ \includegraphics[height=4cm]{figures/tikzplots/128x32_QPSK_D64_pilot_constellation_rho0_NJE0_T30_1000Trials.pdf} \label{fig:weak:pilot} }\!\!\! \subfigure[weak data jammer (J3)]{ \includegraphics[height=4cm]{figures/tikzplots/128x32_QPSK_D64_data_constellation_rho0_NJE0_T30_1000Trials.pdf} \label{fig:weak:data} }\!\!\! \subfigure[weak sparse jammer (J4)]{ \includegraphics[height=4cm]{figures/tikzplots/128x32_QPSK_D64_burst_constellation_rho0_NJE0_T30_1000Trials.pdf} \label{fig:weak:burst} }\!\! \caption{Uncoded bit error-rate (BER) for different detectors in the presence of a \emph{weak} jammer. 
The jammer transmits QPSK symbols either (a) during the entire coherence interval, (b) during the pilot phase only, (c) during the data phase only, or (d) in random unit-symbol bursts with a duty cycle of $\alpha=20\%$. The jammer receive power during the jamming phase is as high as that of the average UE ($\rP=0$\,dB). } \label{fig:weak_jammers} \end{figure*} \section{Simulation Results} \subsection{Simulation Setup} \label{sec:simsetup} We simulate a massive MU-MIMO system as described in Section~\ref{sec:setup} with $B=128$ BS antennas, $U=32$ single-antenna UEs, and with one single-antenna jammer. The UEs transmit for $K=96$ time slots. The first $T=32$ time slots are used to transmit orthogonal pilots $\bS_T$ in the form of a $32\times32$ Hadamard matrix (scaled to symbol energy $\Es$). The remaining $D=64$ time slots are used to transmit QPSK payload data (also with symbol energy $\Es$). The channels of the UEs and the jammer are modeled as i.i.d. Rayleigh fading. We define the average receive signal-to-noise ratio (SNR) as follows: \begin{align} \textit{SNR} \define \frac{\Ex{\bS}{\|\bH\bS\|_F^2}}{\Ex{\bN}{\|\bN\|_F^2}}. \end{align} In our evaluation, we consider four different types of jammers: (J1) barrage jammers that transmit i.i.d.\ jamming symbols during the entire coherence interval, (J2) pilot jammers that transmit i.i.d.\ jamming symbols during the pilot phase but do not jam the data phase, (J3) data jammers that transmit i.i.d.\ jamming symbols during the data phase but do not jam the pilot phase, and (J4) sparse jammers that transmit i.i.d.\ jamming symbols during a fraction $\alpha$ of the time slots, selected at random as bursts of unit length (i.e., one time slot), but do not jam the remaining time slots. The jamming symbols are either circularly symmetric complex Gaussian or selected uniformly at random from the QPSK constellation. Unless stated otherwise, the jamming symbols are also independent of the UE transmit symbols $\bS$.
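The four activity patterns (J1)--(J4) can be made concrete with a small helper that generates the corresponding on/off masks over the coherence interval (a sketch with a hypothetical function name, not code from the paper):

```python
import numpy as np

def jammer_activity(jtype, T, D, alpha=0.2, rng=None):
    """Binary activity mask over the K = T + D time slots for the four jammer
    types: 'barrage' (J1), 'pilot' (J2), 'data' (J3), and 'sparse' (J4) with
    duty cycle alpha. Illustration only."""
    if rng is None:
        rng = np.random.default_rng()
    K = T + D
    mask = np.zeros(K, dtype=bool)
    if jtype == "barrage":
        mask[:] = True            # jams the entire coherence interval
    elif jtype == "pilot":
        mask[:T] = True           # jams only the pilot phase
    elif jtype == "data":
        mask[T:] = True           # jams only the data phase
    elif jtype == "sparse":
        on = rng.choice(K, size=int(round(alpha * K)), replace=False)
        mask[on] = True           # random unit-length bursts
    return mask
```

The duty cycles $\lambda = 1, T/K, D/K, \alpha$ of the four types are exactly the mask means, which is the quantity that relates the energy and power ratios via $\rho_P = \rho_E / \lambda$ below.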
We quantify the strength of the jammer's interference relative to the strength of the average UE, either as the ratio between total receive \textit{energy} \begin{align} \rE \define \frac{\Ex{\bsj}{\|\Hj\bsj\|_2^2}}{\frac1U\Ex{\bS}{\|\bH\bS\|_2^2}}, \end{align} or as the ratio between receive \textit{power} during the phases in which the jammer is jamming, \begin{align} \rP \define \frac{\rE}{\lambda}, \end{align} where $\lambda$ is the jammer's duty cycle and equals $1,\frac{T}{K},\frac{D}{K}$, or~$\alpha$ for barrage, pilot, data, or sparse jammers, respectively. \begin{algorithm}[tp] \caption{MAED} \label{alg:maed} \begin{algorithmic}[1] \setstretch{1.05} \vspace{1mm} \State {\bfseries input:} $\bY$ \State \textbf{initialize:} $\tilde\bS^{(0)} = \left[\bS_T, \mathbf{0}_{U\times D} \right],\, \tilde\bP^{(0)} = \bI_B,\, \tau^{(0)} = \tau_0$ \For{$t=0$ {\bfseries to} $t_{\max}-1$} \State $\nabla f(\tilde\bS^{(t)}) = -\herm{\big(\tilde\bS^{(t)}{}^\dagger\big)} \herm\bY\tilde\bP^{(t)}\bY(\bI_K - \tilde\bS^{(t)}{}^\dagger \tilde\bS^{(t)})$ \State $\tilde\bS^{(t+1)} = \proxg\big(\tilde\bS^{(t)} - \tau^{(t)}\nabla f(\tilde\bS^{(t)})\big)$ \hfill(cf.\,\eqref{eq:proxg}) \State $\tilde\bE^{(t+1)} = \bY(\bI_K - \tilde\bS^{(t+1)}{}^\dagger \tilde\bS^{(t+1)})$ \If{$t=0$} \State $\hat\bmp^{(t+1)} = \bmv_1(\tilde\bE^{(t+1)} \tilde{\bE}^{(t+1)}{}^\text{H})/\|\bmv_1(\tilde\bE^{(t+1)} \tilde{\bE}^{(t+1)}{}^\text{H})\|_2$ \Else \State $\hat\bmp^{(t+1)} = \tilde\bE^{(t+1)} \tilde{\bE}^{(t+1)}{}^\text{H}\, \hat\bmp^{(t)}/\|\tilde\bE^{(t+1)} \tilde{\bE}^{(t+1)}{}^\text{H}\, \hat\bmp^{(t)}\|_2$ \EndIf \State $\tilde\bP^{(t+1)} = \bI_B - \hat\bmp^{(t+1)}\hat\bmp^{(t+1)}{}^\text{H}$ \State $\tau^{(t+1)} = \text{Barzilai-Borwein}(\tau^{(t)},\tilde\bS^{(t)},\tilde\bS^{(t+1)}, \dots \newline ~\hspace{4.25cm}\!\nabla f(\tilde\bS^{(t)}), \!\nabla f(\tilde\bS^{(t+1)}))$ \hfill\cite{goldstein16a} \EndFor \State \textbf{output:} $\tilde\bS^{(t_{\max})}_{[T+1:K]}$ \end{algorithmic} \end{algorithm}
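For concreteness, the MAED iterations of \fref{alg:maed} can be condensed into a few lines of NumPy. This is a simplified illustration under stated assumptions, not the reference implementation: it uses a constant stepsize in place of the Barzilai-Borwein rule, and it appends hard QPSK slicing at the output.

```python
import numpy as np

def maed_sketch(Y, S_T, Es=2.0, t_max=20, tau=0.05):
    """Simplified MAED sketch: an FBS step in S (constant stepsize tau instead
    of Barzilai-Borwein) followed by a power-method step for the jammer
    direction p. QPSK only; for illustration, not the authors' code."""
    B, K = Y.shape
    U, T = S_T.shape
    c = np.sqrt(Es / 2.0)                       # QPSK clipping level sqrt(Es/2)
    S = np.hstack([S_T, np.zeros((U, K - T), dtype=complex)])
    P = np.eye(B, dtype=complex)                # projection estimate, I_B at start
    I_K = np.eye(K, dtype=complex)
    p = None
    for _ in range(t_max):
        Sp = np.linalg.pinv(S)
        G = -Sp.conj().T @ Y.conj().T @ P @ Y @ (I_K - Sp @ S)  # gradient of f
        S = S - tau * G                                         # forward step
        S[:, :T] = S_T                                          # prox: re-impose pilots,
        S[:, T:] = (np.clip(S[:, T:].real, -c, c)               # clip data to conv(QPSK)
                    + 1j * np.clip(S[:, T:].imag, -c, c))
        E = Y @ (I_K - np.linalg.pinv(S) @ S)                   # residual matrix E
        M = E @ E.conj().T
        if p is None:                                           # leading eigenvector once,
            p = np.linalg.eigh(M)[1][:, -1]
        else:                                                   # then power-method updates
            p = M @ p
            p = p / np.linalg.norm(p)
        P = np.eye(B) - np.outer(p, p.conj())                   # P = I - p p^H
    S_D = (np.where(S[:, T:].real >= 0, c, -c)                  # hard QPSK decisions
           + 1j * np.where(S[:, T:].imag >= 0, c, -c))
    return S_D, p
```

On synthetic data generated as in the system model with a strong barrage jammer, the recovered direction $p$ should align closely with the jammer's channel direction $\Hj/\|\Hj\|$.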
\subsection{Performance Baselines} \label{sec:baseline} Unless stated otherwise, we run MAED with $t_{\max}=30$ iterations and compare it with the following baseline methods: The first baseline, called ``LMMSE,'' does not mitigate the jammer in any way and separately performs least-squares channel estimation and LMMSE data detection. The second baseline, called ``geniePOS,'' represents a jammer-robust variant of LMMSE that is furnished with ground-truth knowledge of the jammer channel $\Hj$ and projects the receive signals~$\bY$ onto the orthogonal complement of $\textit{span}(\Hj)$ as in \eqref{eq:pos}. The method then separately performs least-squares channel estimation and LMMSE data detection in this projected subspace. The third baseline, called ``JL-JED,'' serves as a performance upper bound and operates in a jammerless but otherwise equivalent scenario. JL-JED performs joint channel estimation and data detection by approximately solving~\fref{eq:ml_jed} (with $\setS$ relaxed to its convex hull $\setC$) using the same FBS procedure as MAED (cf.~\fref{sec:fbs}), except that it omits the projection~$\tilde\bP$. \subsection{Mitigation of Strong Gaussian Jammers} We first investigate the ability of MAED to mitigate strong jamming attacks. For this, we simulated Gaussian jammers with \mbox{$\rE=25$\,dB} of all four jamming types introduced above and evaluated the performance of MAED compared to the baselines of Section~\ref{sec:baseline} (\fref{fig:strong_jammers}). We point out that the performance of geniePOS and JL-JED is independent of the considered jammer: geniePOS uses the genie-provided jammer channel to null the jammer perfectly, and JL-JED operates on a jammerless system from the beginning. Unsurprisingly, the jammer-oblivious LMMSE baseline performs significantly worse than the jammer-robust geniePOS baseline under all attack scenarios.
MAED succeeds in mitigating all four jamming attacks with very high effectiveness, even outperforming the genie-assisted geniePOS method by a considerable margin.\footnote{The potential for MAED to outperform geniePOS is a consequence of the superiority of JED over separating channel estimation from data detection.} The efficacy of MAED is further reflected in the fact that its BER approaches the BER of the jammerless reference baseline JL-JED to within $1$\,dB in all considered scenarios. \subsection{Mitigation of Weak QPSK Jammers} We now turn to the analysis of more restrained jamming attacks in which the jammer transmits QPSK symbols with relative power $\rP=0$\,dB during its on-phase (to pass itself off as just another UE, for instance \cite{vinogradova16a}). For now, we still make the assumption that the jamming symbols are independent of the UE transmit matrix $\bS$. (We will consider an alternative scenario in~\fref{sec:smart}.) Simulation results for all four jammer types are shown in~\fref{fig:weak_jammers}. The baseline performance of geniePOS and \mbox{JL-JED} are again independent of the jammer and mirror the curves of~\fref{fig:strong_jammers}. Because of the weaker jamming attacks, the jammer-oblivious LMMSE baseline performs much closer to the jammer-resistant geniePOS baseline. MAED again mitigates all attack types successfully, outperforming geniePOS and approaching the JL-JED baseline to within $1$\,dB. Comparing~\fref{fig:weak_jammers} with~\fref{fig:strong_jammers} reveals an interesting phenomenon. MAED achieves better absolute performance under strong jamming attacks than under weak ones, even though the difference is subtle. The reason for this behavior is the following: MAED searches for the jamming subspace by looking for a dominant dimension of the iterative residual error $\tilde\bE^{(t)}$, see~\fref{eq:rayleigh}.
If the received jamming energy is small compared to the received signal energy, then it becomes harder to distinguish the residual errors due to the jammer's impact from those that are caused by the errors in the channel and transmit matrix estimates $\tilde\bH_\bP^{(t)}$ and $\tilde\bS^{(t)}$. \subsection{What if No Jammer Is Present?} \label{sec:no_jammer} This observation leads to the question of how MAED performs if no jammer is present, or---equivalently---if a jammer does not transmit for a given coherence interval.~\fref{fig:nojam:no} shows simulation results for this scenario. MAED still outperforms the LMMSE baseline at low SNR, but shows an error floor at high SNR. This error floor is caused by the slower convergence of MAED with (infinitely) weak jammers. \fref{fig:nojam:speed}~shows the jammerless performance of MAED for different numbers~$t_{\max}$ of algorithm iterations. For $t_{\max}=100$ iterations, MAED essentially achieves the excellent performance that it has in combination with strong jammers. For $t_{\max}=10$ iterations, however, MAED exhibits an error floor as high as $0.2\%$. In contrast, in the presence of a $\rE=25$\,dB strong barrage Gaussian jammer, MAED requires no more than $t_{\max}=10$ iterations for optimal~performance. The slow convergence in the absence of a jammer can be explained by the fact that, in every iteration, the strongest dimension of the residual error matrix $\tilde\bE^{(t)}$ is mistakenly attributed to a hypothesized jammer instead of to the residual errors in the channel and transmit matrix estimates $\tilde\bH_\bP^{(t)}$~and~$\tilde\bS^{(t)}$. This recurring misattribution prevents fast convergence. Nonetheless, while MAED was conceived for jammer mitigation, it shows robust performance even in the absence of jamming.
\subsection{What Happens with a \textit{Truly} Smart Jammer?} \label{sec:smart} Finally, we turn to a scenario in which the jammer knows the UE pilot sequences and attacks a specific UE by transmitting that UE's pilot sequence during the pilot phase (at $\rP=25$\,dB higher power). The jammer does not transmit during the data phase. \fref{fig:smart_jammer} shows simulation results for this scenario. The geniePOS baseline nulls the jammer perfectly using its ground-truth knowledge. Thus, its performance remains unaffected regardless of the jammer. In contrast, MAED exhibits an error floor as high as $1\%$, only marginally outperforming the LMMSE baseline. Excluding the attacked UE and evaluating the BER among the remaining $31$ UEs (labeled $\overline{\text{UE}}_\text{j}$ in \fref{fig:smart:single}) reveals that the decoding errors are focused entirely on the attacked UE, and that the BER among the remaining UEs appears to be unaffected by the jammer. This experiment shows that MAED cannot identify the jammer's subspace if the jammer passes itself off as a UE by transmitting that UE's pilot sequence. It is not clear, however, whether such a jammer could be distinguished from a legitimate UE, even in principle. One way to prevent smart jammers from utilizing such impersonation attacks would be to use encrypted pilot sequences~\cite{basciftci2015securing}. Finally, \fref{fig:smart:multi} shows the performance of \mbox{MAED} (over all 32 UEs) when the jammer transmits the average of multiple pilot sequences during the pilot phase (and refrains from transmitting during the data phase). Evidently, a jammer that targets multiple UEs quickly enables MAED to locate the jammer's subspace and mitigate the jammer effectively. \begin{figure}[tp] \centering \!\! \subfigure[no jammer]{ \includegraphics[height=3.85cm]{figures/tikzplots/128x32_QPSK_D64_JL_rho-Inf_NJE0_T30_1000Trials.pdf} \label{fig:nojam:no} }\!\!\! 
\subfigure[speed of convergence]{ \includegraphics[height=3.85cm]{figures/tikzplots/128x32_QPSK_D64_JL_rho-Inf_NJE0_Tvar_1000Trials} \label{fig:nojam:speed} }\!\! \caption{Uncoded bit error-rate (a) for different detectors in the absence of a jammer and (b) for MAED with different numbers $t_{\max}$ of iterations in the absence of a jammer (JL), as well as in the presence of a barrage jammer that transmits Gaussian symbols at $\rE=25$\,dB higher energy than the average UE~(J$_{25\text{dB}}$). } \label{fig:no_jammer} \end{figure} \begin{figure}[tp] \centering \!\! \subfigure[single pilot sequence]{ \includegraphics[height=3.85cm]{figures/tikzplots/128x32_QPSK_D64_smart_zero_rho25_NJE0_T30_1000Trials.pdf} \label{fig:smart:single} }\!\!\! \subfigure[combination of pilot sequences]{ \includegraphics[height=3.85cm]{figures/tikzplots/128x32_QPSK_D64_smartvar_zero_rho25_NJE0_T30_1000Trials.pdf} \label{fig:smart:multi} }\!\! \caption{Uncoded bit error rate (BER) for a smart jammer that (a) transmits the pilot sequence of UE$_\text{j}$ or (b) transmits the average of multiple UE pilot sequences. The jammer transmits at $\rP=25$\,dB higher power than the average UE. The $\overline{\text{UE}}_\text{j}$ curve depicts the BER averaged over all UEs except $\text{UE}_\text{j}$. } \label{fig:smart_jammer} \end{figure} \section{Conclusions} We have proposed MAED in order to mitigate smart jamming attacks on the uplink of massive MU-MIMO systems. In contrast to existing mitigation methods, MAED does not rely on jamming activity during any particular epoch for successful jammer mitigation. Instead, our method exploits the fact that the jammer's subspace remains constant within a coherence interval. To this end, MAED uses a novel problem formulation that combines jammer estimation and mitigation, channel estimation, and data detection. The resulting optimization problem is approximately solved using an efficient iterative algorithm.
Without requiring any \textit{a priori} knowledge, MAED is able to effectively mitigate a wide range of jamming attacks. In particular, MAED succeeds in mitigating attack types like data jamming and sparse jamming, for which---to the best of our knowledge---no mitigation methods have existed so far. \balance
\section{Introduction} Gamma-ray Bursts (GRBs) are short, intense gamma-ray flashes that are by far the most violent explosions in the universe \citep{2006RPPh...69.2259M,2007ChJAA...7....1Z,2009ARA&A..47..567G,2015PhR...561....1K}. Many studies have shown that GRBs can be divided into two categories: long Gamma-ray bursts (LGRBs) with duration $\rm T_{90} > 2s$ and short Gamma-ray bursts (SGRBs) with $\rm T_{90} \leq 2s$ \citep{1993ApJ...413L.101K}. The origins of the two kinds of GRBs are also different: the progenitors of LGRBs are regarded as collapsing massive stars \citep{1993ApJ...405..273W}, while SGRBs are related to the coalescence of compact objects such as binary neutron stars or neutron star--black hole binaries \citep{2007PhR...442..166N,2017PhRvL.119p1101A,2017ApJ...848L..14G,2017ApJ...848..115H}. A variety of correlations between observable GRB properties have been proposed in the literature, and they can be classified into prompt correlations, afterglow correlations and prompt-afterglow correlations based on the episode in which the observables are measured \citep{2017NewAR..77...23D,2018PASP..130e1001D}. Among the prompt correlations, the so-called ``Amati relation'' \citep{2002AA...390...81A} has been widely discussed. It describes a relation between the isotropic-equivalent energy ($E_{\rm iso}$) and the rest-frame peak energy ($E_{\rm peak}$) of the $\gamma$-ray spectrum. Another commonly cited relation is the ``Yonetoku relation'' \citep{2003MNRAS.345..743W,2004ApJ...609..935Y}, which connects $E_{\rm peak}$ with the isotropic-equivalent luminosity ($L_{\rm iso}$). The interpretation of these relations is still under debate \citep{2006ApJ...645L.113C,2006MNRAS.372.1699G}, and it is noteworthy that, though the Amati and Yonetoku relations exist in both LGRBs and SGRBs, the best-fit parameters of the two relations are very likely different for the two GRB categories \citep{2012ApJ...750...88Z}.
Since LGRBs and SGRBs are thought to have different origins but may share similar jet-launching and radiation mechanisms, the correlations may help to discriminate among different models. The prompt correlations related to the intrinsic energy/luminosity can serve as distance indicators, and hence can help to study the luminosity function and redshift distribution of GRBs \citep{2002AA...390...81A,2003A&A...407L...1A,2004ApJ...612L.101D,2004ApJ...609..935Y,2005ApJ...633..611L,2014ApJ...789...65Y,2018ApJ...852....1Z,2018AdAst2018E...1D,2019gbcc.book.....D}. However, the large dispersion, which is about an order of magnitude for both the Amati and Yonetoku relations, brings extra uncertainty to such studies; hence finding new, tighter correlations to serve as redshift/distance indicators is as important as it is challenging. Inspired by the $L_{\rm iso}-E_{\rm peak}-T_{0.45}$ relation for long GRBs found by \citet{2006MNRAS.370..185F}, here we manage to find a universal correlation for both LGRBs and SGRBs using the prompt emission properties $E_{\rm iso}$, $L_{\rm iso}$, $E_{\rm peak}$ and $T_{0.45}$. This paper is arranged as follows. In Section \ref{data}, we describe the selection of GRB samples; in Section \ref{correlations} we analyze the reliability of some potential correlations and report the discovery of a universal correlation; in Section \ref{application}, the application of the universal correlation to the study of SGRBs' luminosity function and formation rate is presented; finally, we give conclusions and discussion in Section \ref{SD}. \section{Sample Selections}\label{data} In order to reduce the uncertainties in the correlation study, a careful selection of the GRB samples is required. For the purpose of this work, we only include samples with the following information: \begin{enumerate} \item the redshift $z$. \item the peak flux $P$ and the fluence $F$. \item the peak energy $E_{\rm peak}^{\rm obs}$ in the observer's frame.
And we take $E_{\rm peak} \equiv (1+z)E_{\rm peak}^{\rm obs}$ as the cosmological rest-frame $\nu f_\nu$ spectrum peak energy (in brief, the rest-frame peak energy). \item the low-energy power-law index $\alpha$ and the high-energy power-law index $\beta$ of the Band function \citep{1993ApJ...413..281B}. \item $T_{0.45}^{\rm obs}$, and we take $T_{0.45} \equiv T_{0.45}^{\rm obs}/(1+z)$. \end{enumerate} One uncertainty in calculating a burst's intrinsic energy/luminosity comes from the spectral form assumed in the integration, so the 4th criterion above ensures that we use a unified spectral form to evaluate $E_{\rm iso}$ and $L_{\rm iso}$, and that the most energetic part of the emission is within the observed frequency band. As a consequence, all of our samples are reported to be best fitted with the Band function: \begin{equation} \mathop N\nolimits_{{\rm{band}}} = \left\{ {\begin{array}{*{20}{c}} {A{{\left( {E/100} \right)}^\alpha }\exp \left( { - E\left( {2 + \alpha } \right)/{E_{{\rm{peak}}}}} \right)}&{{\rm{if\ }}E < {E_{\rm{b}}}}\\ {A{{\left\{ {\left( {\alpha - \beta } \right){E_{{\rm{peak}}}}/\left[ {100\left( {2 + \alpha } \right)} \right]} \right\}}^{\left( {\alpha - \beta } \right)}}\exp \left( {\beta - \alpha } \right){{\left( {E/100} \right)}^\beta }}&{{\rm{if\ }}E \ge {E_{\rm{b}}}} \end{array}} \right. \end{equation} where ${E_{\rm{b}}} = ({\alpha - \beta }){E_{{\rm{peak}}}}/( {2 + \alpha })$. Finally, after applying all the selection criteria, our sample includes 49 LGRBs and 19 SGRBs. Table \ref{tab:1} lists the information for the 49 LGRBs, and the information for the selected SGRBs is listed in Table \ref{tab:2}. In our following analysis we also include the 20 LGRB samples of \citet{2006MNRAS.370..185F}.
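The Band spectrum above is straightforward to evaluate numerically. The sketch below is our own transcription of the equation (not code from the paper), with photon energies in keV and the pivot fixed at 100\,keV as written; the two branches join continuously at $E_{\rm b}$:

```python
import numpy as np

def band_function(E, A, alpha, beta, E_peak):
    """Band et al. (1993) photon spectrum N(E).

    E, E_peak in keV; A is the normalization; alpha and beta are the
    low- and high-energy power-law indices.  The pivot energy is fixed
    at 100 keV, matching the form quoted in the text.
    """
    E = np.asarray(E, dtype=float)
    E_b = (alpha - beta) * E_peak / (2.0 + alpha)  # break energy
    low = A * (E / 100.0) ** alpha * np.exp(-E * (2.0 + alpha) / E_peak)
    high = (A * ((alpha - beta) * E_peak / (100.0 * (2.0 + alpha))) ** (alpha - beta)
            * np.exp(beta - alpha) * (E / 100.0) ** beta)
    return np.where(E < E_b, low, high)
```

Evaluating both branches at $E = E_{\rm b}$ gives the same value, which is the continuity condition that fixes the high-energy normalization.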
In order to eliminate the influence of different observational energy bands on the calculated results, we apply a K-correction \citep{2001AJ....121.2879B} in the calculation of the bursts' isotropic-equivalent luminosity/energy: \begin{equation}\label{liso} L_{\rm iso} = 4\pi D_{\rm L}^2(z)PK \end{equation} \begin{equation} D_{\rm L}(z)=(1+z)\frac{c}{H_0}\int_{0}^{z}\frac{dz}{\sqrt{1-\Omega_{\rm m}+\Omega_{\rm m}(1+z)^3 }} \end{equation} \begin{equation} K=\frac{\int_{1\,{\rm keV}}^{10^4\,{\rm keV}}Ef(E)dE}{\int_{E_{\rm min}(1+z)}^{E_{\rm max}(1+z)}Ef(E)dE} \end{equation} where $P$ is the peak flux observed in a certain energy range $(E_{\rm min},E_{\rm max})$ in units of $\rm erg\,cm^{-2}\,s^{-1}$ (for $E_{\rm iso}$, $P$ is replaced by the fluence $F$ in Eq.(\ref{liso})). We assume a flat $\Lambda$ cold dark matter universe with $\Omega_{\rm m}=0.27$ and $H_0=70\ \rm{km\ s^{-1}Mpc^{-1}}$ in the calculation. \section{Study on The Correlations}\label{correlations} \subsection{The $E_{\rm iso}-E_{\rm peak}$ and $L_{\rm iso}-E_{\rm peak}$ Correlations} In previous studies, plenty of works on LGRB prompt correlations were carried out. For SGRBs, however, such studies are much rarer because the number of SGRBs with known redshifts is very small. Nevertheless, several studies have shown that SGRBs may follow a different $E_{\rm iso}-E_{\rm peak}$ relation from LGRBs, while the $L_{\rm iso}-E_{\rm peak}$ relation may be similar to that of LGRBs \citep{2012ApJ...755...55Z,2013MNRAS.430..163Q,2014ApJ...789...65Y,2014PASJ...66...42T}. In the following we will test these correlations with our samples (49 LGRBs and 19 SGRBs, together with the 20 LGRBs in Firmani et al. (2006)). Here we construct a likelihood function \citep{ 2006ApJ...653.1049W,2007ApJ...665.1489K} to fit the data.
It can be defined as: \begin{equation}\label{the_L} L(a_{j},b|x_{j},y)=\prod_{i}\frac{1}{\sqrt{2\pi(\sigma_{yi}^2+\sum_{j}(a_{j}\sigma_{xji})^2)}}\exp\left({-\frac{1}{2}\frac{(y_{i}-b-\sum_{j}a_{j}x_{ji})^2}{\sigma_{yi}^2+\sum_{j}(a_{j}\sigma_{xji})^2}}\right) \end{equation} where $y_i$ and $x_{ji}$ are the $E_{\rm peak}$ and $E_{\rm iso}$ (or $L_{\rm iso}$) of the GRB samples, and $\sigma_{yi}$ and $\sigma_{xji}$ are their measurement errors. As shown by \citet{2002ApJ...574..740T} and \citet{2006ApJ...653.1049W}, the above likelihood function can be expressed as: \begin{equation}\label{the_log(L)} \log[L(a_{j},b|x_{j},y)]=-\frac{1}{2}\sum_{i}\frac{(y_i-b-\sum_{j}a_{j}x_{ji})^2}{\sigma_{yi}^2+\sum_{j}(a_{j}\sigma_{xji})^2}+{\rm constant}=-\frac{1}{2}\chi^2+{\rm constant} \end{equation} where $\chi^2$ is the merit function \citep{1992nrca.book.....P,2003MNRAS.345.1057M,2006A&A...456..439K}. We determine the values of the parameters and their confidence intervals by performing Bayesian estimation based on the likelihood constructed above, setting uniform priors for all parameters. The results are presented in Fig.\ref{fig:fit1}. Obviously, the SGRB samples show a systematic bias with respect to the best-fit curve (see Section \ref{corrlation} for more details); thus it is inadequate to fit both the LGRB and SGRB samples with a single relation on the $E_{\rm iso}-E_{\rm peak}$ or $L_{\rm iso}-E_{\rm peak}$ plane. \begin{figure}[ht!] \figurenum{1}\label{fig:fit1} \centering \includegraphics[angle=0,scale=0.4]{fig1a.png} \includegraphics[angle=0,scale=0.4]{fig1b.png} \caption{The $E_{\rm iso}-E_{\rm peak}$ and $L_{\rm iso}-E_{\rm peak}$ relations for LGRBs and SGRBs. The blue hollow points are the data for the LGRBs in Table \ref{tab:1}, the red solid points are the data for the LGRBs in \citet{2006MNRAS.370..185F}, and the green solid points are the data for the SGRBs in Table \ref{tab:2}. The solid line is the best-fit line for LGRBs and SGRBs.
The Spearman's rank correlation coefficients are P= 0.71 and P= 0.46 for the left and right panels, respectively.} \hfill \end{figure} \subsection{Discovery of a Universal Three-Parameter Correlation}\label{corrlation} \citet{2006MNRAS.370..185F} found a correlation between $L_{\rm iso}$, $E_{\rm peak}$ and $T_{0.45}$ of LGRBs. Inspired by their work, and benefiting from the growing number of SGRBs with known redshifts in recent years, we may be able to clarify whether the SGRBs and LGRBs follow the same three-parameter correlation. In the following we discuss both the $L_{\rm iso}-E_{\rm peak}-T_{0.45}$ and $E_{\rm iso}-E_{\rm peak}-T_{0.45}$ relations. Assuming the relations have the form $L_{\rm iso} \propto E_{\rm peak}^{p_1} T_{0.45}^{p_2}$ ($E_{\rm iso} \propto E_{\rm peak}^{p_1} T_{0.45}^{p_2}$), and using the statistics defined by Eq.(\ref{the_log(L)}), we derive the best-fit correlations: \begin{equation}\label{the_relation} L_{\rm iso} \propto E_{\rm peak}^{1.94 \pm 0.06} T_{0.45}^{0.37 \pm 0.11} \end{equation} \begin{equation} E_{\rm iso} \propto E_{\rm peak}^{1.68 \pm 0.09} T_{0.45}^{1.09 \pm 0.13} \end{equation} where the errors are reported at the $1\sigma$ confidence level. The results are shown in Fig.\ref{fig:fit2} together with their $3\sigma$ intervals. \begin{figure}[ht!] \figurenum{2}\label{fig:fit2} \centering \includegraphics[angle=0,scale=0.4]{fig2a.png} \includegraphics[angle=0,scale=0.4]{fig2b.png} \caption{The three-parameter relations for LGRBs and SGRBs. The blue hollow points are the data for the LGRBs in Table \ref{tab:1}, the red solid points are the data for the LGRBs in \citet{2006MNRAS.370..185F}, and the green solid points are the data for the SGRBs in Table \ref{tab:2}. The solid line is the best-fit line. The dotted lines represent the 3$\sigma$ confidence bands.
The Spearman's rank correlation coefficients are P= 0.81 and P= 0.78 for the left and right panels, respectively.} \hfill \end{figure} We quantitatively discuss the reliability of the relations above (i.e., $E_{\rm iso}-E_{\rm peak}$, $L_{\rm iso}-E_{\rm peak}$, $E_{\rm iso}-E_{\rm peak}-T_{0.45}$ and $L_{\rm iso}-E_{\rm peak}-T_{0.45}$) in two aspects. First, by using the Anderson-Darling test, we find that the residuals of the data points with respect to the best-fit line can be fitted with a Gaussian function. The residuals of each fit are plotted in Fig.\ref{fig:res}, and we fit the LGRB samples (blue) and SGRB samples (red) separately with Gaussian functions. We find that in the $L_{\rm iso}-E_{\rm peak}-T_{0.45}$ and $E_{\rm iso}-E_{\rm peak}-T_{0.45}$ fits, the mean values of the residuals of the two GRB categories are consistent with zero within the error range, and the standard deviations ($\sigma$) of the LGRB and SGRB residuals are also consistent with each other, indicating that in these two correlations the dispersions of LGRBs and SGRBs are nearly the same. We further perform Student's t test on the residuals, with the null hypothesis that the mean values of the residuals of LGRBs and SGRBs are equal. We find that, given the significance level $a = 0.01$, only the residuals in the $L_{\rm iso}-E_{\rm peak}-T_{0.45}$ fit pass the test, with $P=0.07$, while for the other correlations $P=0.009, 2\times 10^{-6}$ and $1.5\times10^{-11}$ for $E_{\rm iso}-E_{\rm peak}-T_{0.45}$, $E_{\rm iso}-E_{\rm peak}$ and $L_{\rm iso}-E_{\rm peak}$, respectively. Second, the robustness of a relation can be reflected by its correlation coefficient; here we use Spearman's rank correlation coefficient.
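As an illustration of how such a coefficient can be evaluated for a three-parameter relation, the sketch below (our own; the variable names and synthetic data are not from the paper) correlates $\log L_{\rm iso}$ with the best-fit combination of the other two quantities using scipy:

```python
import numpy as np
from scipy.stats import spearmanr

def three_param_rank_corr(log_L, log_Ep, log_T045, p1=1.94, p2=0.37):
    """Spearman rank correlation between log(L_iso) and the fitted
    combination p1*log(E_peak) + p2*log(T_0.45)."""
    predictor = p1 * np.asarray(log_Ep) + p2 * np.asarray(log_T045)
    rho, pvalue = spearmanr(log_L, predictor)
    return rho, pvalue

# Synthetic demonstration only: 88 bursts (49 + 19 + 20, as in the text)
# drawn to follow the best-fit relation with 0.2 dex of Gaussian scatter.
rng = np.random.default_rng(0)
log_Ep = rng.uniform(1.5, 3.5, 88)
log_T = rng.uniform(-1.0, 1.5, 88)
log_L = 1.94 * log_Ep + 0.37 * log_T + rng.normal(0.0, 0.2, 88)
rho, _ = three_param_rank_corr(log_L, log_Ep, log_T)
```

For data this tight, the rank coefficient comes out well above 0.9, mirroring the values quoted below for the observed samples.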
For the correlations $L_{\rm iso}-E_{\rm peak}-T_{0.45}$, $E_{\rm iso}-E_{\rm peak}-T_{0.45}$, $L_{\rm iso}-E_{\rm peak}$ and $E_{\rm iso}-E_{\rm peak}$, the correlation coefficients are 0.81, 0.78, 0.71 and 0.47, respectively, which indicates that the $L_{\rm iso}-E_{\rm peak}-T_{0.45}$ relation is tighter than any of the other relations. \begin{figure}[ht!] \figurenum{3}\label{fig:res} \centering \includegraphics[angle=0,scale=0.8]{fig3a.pdf} \includegraphics[angle=0,scale=0.8]{fig3b.pdf} \includegraphics[angle=0,scale=0.8]{fig3c.pdf} \includegraphics[angle=0,scale=0.8]{fig3d.pdf} \caption{Left: the top panel shows the Gaussian-fitted residual distributions of LGRBs and SGRBs for the $L_{\rm iso}-E_{\rm peak}$ relation; the red part is the SGRBs' scatter with $\mu=-0.42$, $\sigma=0.21$, and the blue part is the LGRBs' scatter with $\mu=0.01$, $\sigma=0.22$. The bottom panel shows the scatter of LGRBs and SGRBs for the $L_{\rm iso}-E_{\rm peak}-T_{0.45}$ relation; the red part is the SGRBs' scatter with $\mu=0.06$, $\sigma=0.37$, and the blue part is the LGRBs' scatter with $\mu=-0.12$, $\sigma=0.38$. Right: the top panel shows the Gaussian-fitted residual distributions of LGRBs and SGRBs for the $E_{\rm iso}-E_{\rm peak}$ relation; the red part is the SGRBs' scatter with $\mu=-0.34$, $\sigma=0.30$, and the blue part is the LGRBs' scatter with $\mu=0.08$, $\sigma=0.31$. The bottom panel shows the scatter of LGRBs and SGRBs for the $E_{\rm iso}-E_{\rm peak}-T_{0.45}$ relation; the red part is the SGRBs' scatter with $\mu=0.23$, $\sigma=0.46$, and the blue part is the LGRBs' scatter with $\mu=-0.10$, $\sigma=0.47$.} \hfill \end{figure} Based on the analysis above, we conclude that LGRBs and SGRBs could follow a universal $L_{\rm iso}-E_{\rm peak}-T_{0.45}$ correlation as shown in Eq.(\ref{the_relation}).
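A correlation of this form can be inverted numerically to estimate a redshift from observables alone. The sketch below is our own illustration: it adopts the flat $\Lambda$CDM cosmology of Section 2, uses the exponents of Eq.(\ref{the_relation}), and takes the correlation normalization `amplitude` as a free input, since the text quotes only the exponents:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Flat Lambda-CDM parameters adopted in Section 2
H0, OM = 70.0, 0.27              # km/s/Mpc, matter density
C_KMS = 2.99792458e5             # speed of light, km/s
MPC_CM = 3.0857e24               # cm per Mpc

def lum_distance_cm(z):
    """Luminosity distance D_L(z) = (1+z) (c/H0) \int_0^z dz'/E(z'), in cm."""
    integral, _ = quad(lambda x: 1.0 / np.sqrt(1.0 - OM + OM * (1.0 + x) ** 3),
                       0.0, z)
    return (1.0 + z) * (C_KMS / H0) * integral * MPC_CM

def pseudo_redshift(P, Ep_obs, T45_obs, amplitude, z_lo=1e-3, z_hi=20.0):
    """Solve L_iso(z) = amplitude * E_peak^1.94 * T_0.45^0.37 for z.

    P is the K-corrected peak flux in erg/cm^2/s; `amplitude` must come
    from the fit (it is not quoted in the text).  The residual grows
    monotonically with z, so a sign change over [z_lo, z_hi] brackets
    a unique root.
    """
    def residual(z):
        L = 4.0 * np.pi * lum_distance_cm(z) ** 2 * P
        rhs = amplitude * ((1.0 + z) * Ep_obs) ** 1.94 \
                        * (T45_obs / (1.0 + z)) ** 0.37
        return np.log10(L) - np.log10(rhs)
    return brentq(residual, z_lo, z_hi)
```

In a self-consistency test (flux generated from an assumed redshift and normalization), the solver recovers the input redshift to high precision.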
We also compare this relation with the Yonetoku relation fitted using only the SGRB samples and find that our relation is slightly tighter; therefore the $L_{\rm iso}-E_{\rm peak}-T_{0.45}$ correlation can serve as a potential redshift indicator for SGRBs. \section{Application on the Study of Luminosity Function and Formation Rate of SGRBs}\label{application} \subsection{A New Redshift Indicator} The $E_{\rm iso}-E_{\rm peak}$ and $L_{\rm iso}-E_{\rm peak}$ correlations have been widely used to estimate the redshifts of GRBs \citep{2002AA...390...81A,2004ApJ...612L.101D,2004ApJ...609..935Y,2005ApJ...633..611L,2011MNRAS.418.2202D,2012int..workE.116A,2014ApJ...789...65Y,2018ApJ...852....1Z}. In order to apply the $L_{\rm iso}-E_{\rm peak}-T_{0.45}$ correlation to the estimation of GRB redshifts, we first need to discuss the accuracy of the redshifts calculated from this correlation. In Fig.\ref{fig:redshift}, we present the pseudo-redshifts $z^*$ calculated from the $L_{\rm iso}-E_{\rm peak}-T_{0.45}$ correlation versus the true observed redshifts $z$ of our SGRB samples. By calculating the relative errors between the pseudo-redshifts of SGRBs and the observed redshifts, we find that $68\%$ of our pseudo-redshifts (i.e., 13 out of 19 sources) are within $30\%$ of the real values. We then perform the Kolmogorov–Smirnov test between them and find a chance probability of 0.74, which indicates that our estimated pseudo-redshifts are reasonable. This encourages us to take a further step and use the correlation in the study of the luminosity function and formation rate of SGRBs. \begin{figure}[ht!] \figurenum{4}\label{fig:redshift} \centering \includegraphics[angle=0,scale=0.9]{fig4.pdf} \caption{Comparison of real and pseudo-redshifts of 19 short GRBs.
The y-axis shows the pseudo-redshifts, and the x-axis the real redshifts.} \hfill \end{figure} \subsection{Constraining the Luminosity Function and Formation Rate of SGRBs} We use the $L_{\rm iso}-E_{\rm peak}-T_{0.45}$ correlation to determine the redshifts of SGRBs. Our samples are collected from the \textit{Swift} GRB catalog with known $T_{0.45}$, peak flux and $E_{\rm peak}$ (or Bayes $E_{\rm peak}$ \footnote{\url{http://butler.lab.asu.edu/Swift/bat_spec_table.html}}) but without redshift detection. There are 75 bursts satisfying these criteria. Together with the 19 short burst samples with known redshifts, we use a total of 94 bursts to study the luminosity function of SGRBs. The observed GRB distributions suffer from selection effects, which are dominated by the Malmquist bias caused by the limited sensitivity of instruments. We use Lynden-Bell's $c^-$ method, which has been widely used in previous studies, to eliminate this effect \citep{1971MNRAS.155...95L,1992ApJ...399..345E,1993ApJ...402L..33P,1999ApJ...518...32M,2002ApJ...574..554L,2004ApJ...609..935Y,2014ApJ...789...65Y,2012MNRAS.423.2627W,2013ApJ...774..157D,2015ApJS..218...13Y,2015ApJ...800...31D,2016A&A...587A..40P, 2017ApJ...850..161T,2018ApJ...852....1Z}. The distribution of SGRBs can be written as $\Psi(L,z)=\psi(L)\rho(z)$ \citep{1992ApJ...399..345E}, where $\psi(L)$ is the luminosity function and $\rho(z)$ is the formation rate of SGRBs. However, in general the luminosity and redshift are not independent \citep{2002ApJ...574..554L}: the luminosity function $\psi(L)$ could still evolve with redshift $z$, and this degeneracy can be eliminated by adjusting the luminosity $L$ with a factor $g(z)$, so that $\Psi(L,z)=\rho(z)\psi(L/g(z))/g(z)$, where $g(z)$ describes the luminosity evolution and $L_0=L/g(z)$ corresponds to the luminosity after removing the luminosity-evolution effect. With this substitution, $\psi(L/g(z))$ is independent of redshift and represents the local luminosity distribution.
$g(z)$ is often taken as $g(z)=(1+z)^k$ in the literature \citep{2002ApJ...574..554L,2014ApJ...789...65Y,2015ApJS..218...13Y}. Following \citet{1992ApJ...399..345E}, we use the non-parametric $\tau$ statistical test to derive the value of $k$. We get $k=4.78^{+0.17}_{-0.18}$ (the error is reported at the $1\sigma$ confidence level). For comparison, \citet{2014ApJ...789...65Y} gave $k=3.3^{+1.7}_{-3.7}$ using Swift samples, \citet{2018MNRAS.477.4275P} gave $k=4.269^{+0.134}_{-0.134}$ and \citet{2018ApJ...852....1Z} gave $k=4.47^{+0.47}_{-0.47}$ for Fermi samples. After removing the effect of luminosity evolution through $L_0=L/(1+z)^{4.78}$, the cumulative luminosity function can be obtained by the following method \citep{1971MNRAS.155...95L,1992ApJ...399..345E}. For each point $(L_i, z_i)$, we define the set $J_i$ as \begin{equation} J_i=\lbrace{j}\mid{L_j}\geq {L_i},{z_j}\leq {z^{max}_{i}}\rbrace \end{equation} where $L_i$ is the luminosity of the $i$th SGRB and $z^{max}_{i}$ is the maximum redshift at which an SGRB with luminosity $L_i$ can be detected. The number of SGRBs contained in this region is $n_i$. Then we use the following equation to calculate the cumulative luminosity function \citep{1971MNRAS.155...95L} \begin{equation} \psi(L_{0i})=\prod_{j<i}(1+\frac{1}{n_j}) \end{equation} where $j < i$ means that the $j$th SGRB has a larger luminosity than the $i$th SGRB. The results are shown in the left panel of Fig.\ref{fig:distribution}, and can be fitted with a broken power law as \begin{eqnarray}\label{lcmf} \psi(L_0) \propto \left\{ \begin{array}{cc} L_0^{-0.63 \pm 0.07} \qquad L_0 \le L_0^b \\ L_0^{-1.96 \pm 0.28} \qquad L_0 > L_0^b \end{array} \right. \end{eqnarray} where $L_0^b = 6.95_{-0.76}^{+0.84}\times10^{50} \rm{erg/s}$ is the break luminosity. This result is roughly in agreement with previous works \citep{2014ApJ...789...65Y, 2018ApJ...852....1Z}.
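The $c^-$ construction above can be sketched in a few lines. The following is our own illustration (names and toy inputs are ours): it computes the associated-set sizes $n_j$ and the cumulative product for bursts ordered from the most to the least luminous:

```python
import numpy as np

def cminus_luminosity_function(L0, z, z_max):
    """Lynden-Bell c^- estimator of the cumulative luminosity function.

    L0[i]   : de-evolved luminosity L/(1+z)^k of the i-th burst
    z[i]    : its redshift
    z_max[i]: maximum redshift at which a burst of luminosity L0[i]
              would still be detected
    Returns (L0 sorted from brightest to dimmest, psi at each L0).
    """
    L0, z, z_max = (np.asarray(a, float) for a in (L0, z, z_max))
    order = np.argsort(L0)[::-1]                    # brightest first
    L0s, zs, zmaxs = L0[order], z[order], z_max[order]
    # associated-set sizes n_j = |{L >= L0_j, z <= z_max_j}|, minus burst j
    n_assoc = np.array([((L0s >= L0s[j]) & (zs <= zmaxs[j])).sum() - 1
                        for j in range(len(L0s))])
    psi = np.ones(len(L0s))
    for i in range(1, len(L0s)):
        factor = 1.0 + 1.0 / n_assoc[i - 1] if n_assoc[i - 1] > 0 else 1.0
        psi[i] = psi[i - 1] * factor
    return L0s, psi
```

As a sanity check, when the detection threshold never bites (all $z^{max}_i$ larger than every redshift in the sample), the estimator reduces to a raw cumulative count of bursts brighter than each $L_0$.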
For the formation rate of SGRBs, we define $J^{\prime}_i$ as \begin{equation} J_i^{\prime}=\lbrace{j}\mid{L_j}> {L_i^{lim}},{z_j}< {z_{i}}\rbrace \end{equation} where $z_i$ is the redshift of the $i$th SGRB and $L^{lim}_i$ is the minimum luminosity that can be detected at redshift $z_i$. The number of SGRBs contained in this region is $m_i$. Then we can obtain the cumulative redshift distribution as \citep{1971MNRAS.155...95L}: \begin{equation} \phi(z_i)=\prod_{j<i}(1+\frac{1}{m_j}) \end{equation} where $j<i$ means that the $j$th SGRB has a smaller redshift than the $i$th SGRB. The results are shown in the middle panel of Fig.\ref{fig:distribution}. The probability density function (PDF) of the redshift distribution can then be calculated by: \begin{equation} \rho(z)=\frac{d\phi(z)}{dz}(1+z)(\frac{dV(z)}{dz})^{-1} \end{equation} \begin{equation} \frac{dV(z)}{dz}=4\pi \left ( \frac{c}{H_0} \right )^3 \left [ \int_{0}^{z}\frac{dz}{\sqrt{1-\Omega_{\rm m}+\Omega_{\rm m}(1+z)^3}}\right ]^2 \times \frac{1}{\sqrt{1-\Omega_{\rm m}+\Omega_{\rm m}(1+z)^3}} \end{equation} The results are shown in the right panel of Fig.\ref{fig:distribution}. The formation rate of SGRBs can be fitted by: \begin{eqnarray}\label{zpdf} \rho(z) \propto \left\{ \begin{array}{cc} (1+z)^{-4.39 \pm 0.55} \qquad z \le 1.5 \\ (1+z)^{-5.51 \pm 0.32} \qquad z > 1.5 \end{array} \right. \end{eqnarray} Meanwhile, we derive the formation rate of SGRBs in the local universe, $\rho(0)=15.5 \pm 5.8\ \rm{Gpc^{-3}yr^{-1}}$, which is roughly consistent with the local event rate of $10\ \rm{Gpc^{-3}yr^{-1}}$ in \citet{2015ApJ...815..102F} and $\rho(0)=7.53\ \rm{Gpc^{-3}yr^{-1}}$ in \citet{2018ApJ...852....1Z}. Finally, we perform a Monte Carlo simulation to test whether our results can recover the SGRB distributions.
We simulate a set of points $(L_0, z)$ that follow Eq.(\ref{lcmf}) and Eq.(\ref{zpdf}), and then calculate the luminosity $L$ using the relation $L=L_0(1+z)^{4.78}$, thus obtaining a set of points $(L, z)$. We simulate $10^{5}$ points and divide them into 100 groups. From each group we select one SGRB, giving 100 pseudo SGRBs to compare with the observed data. Finally, we perform the same analysis as above to obtain the luminosity function and formation rate of SGRBs. In Fig.\ref{fig:mctest} we present our results, where the blue lines are the simulated data for the cumulative luminosity function and cumulative redshift distribution, and the red lines and green lines are the observed data and the mean of the simulated data, respectively. We also perform the Kolmogorov–Smirnov test between the observed data and the mean distribution of the simulated data; the chance probabilities are 0.62 and 0.83, respectively, which indicates that the cumulative luminosity function and formation rate are reliable. \begin{figure}[ht!] \figurenum{5}\label{fig:distribution} \centering \includegraphics[angle=0,scale=0.26]{fig5a.png} \includegraphics[angle=0,scale=0.26]{fig5b.png} \includegraphics[angle=0,scale=0.26]{fig5c.png} \caption{Left: cumulative luminosity function of SGRBs. The dotted lines are the $95\%$ confidence bands (Moreira et al. 2010); middle: cumulative redshift distribution of SGRBs; right: the PDF of the redshift distribution derived from the cumulative distribution, with its first bin normalized to unity.} \hfill \end{figure} \begin{figure}[ht!] \figurenum{6}\label{fig:mctest} \centering \includegraphics[angle=0,scale=0.4]{fig6a.png} \includegraphics[angle=0,scale=0.4]{fig6b.png} \caption{Comparison of simulated data (blue) and observed data (red). Left: the cumulative luminosity function. Right: the cumulative redshift distribution.
The green lines are the mean of the simulated data.} \hfill \end{figure} \section{Summary and Discussion}\label{SD} Gamma-ray bursts are the most powerful explosions in the universe; although great progress has been achieved in recent years, their nature is still unclear. Meanwhile, owing to their huge luminosities/energies, GRBs can be detected out to large redshifts and thus can serve as a valuable cosmological tool. A variety of correlations among intrinsic properties have been proposed in recent years; on one hand these correlations can help to reveal the nature of GRBs, and on the other hand some correlations may be used as redshift indicators, such as the so-called ``Amati relation'' \citep{2002AA...390...81A} and ``Yonetoku relation'' \citep{2014ApJ...789...65Y}. However, none of these correlations is valid for both long GRBs and short GRBs. In this paper, we study the correlations among gamma-ray burst prompt emission properties and find a universal correlation that is suitable for both long and short GRBs, i.e., $L_{\rm iso}\propto {E^{1.94}_{\rm peak}}T^{0.37}_{0.45}$. This universal correlation may have important implications for GRB physics, implying that long and short GRBs may share similar radiation processes. The parameters of the relation obtained in our study are different from those in \citet{2006MNRAS.370..185F}. The reason may not only be the different sample (more long GRBs and the inclusion of short GRBs) used in our study, but could also be the degeneracy between parameters in the three-parameter fitting. The degeneracy could be studied by examining the joint posterior distributions obtained via MCMC/nested-sampling approaches, and we leave it for future study. Some other three-parameter correlations have also been found for GRBs; for example, \citet{2005ApJ...633..611L} found that there is a tight correlation between the isotropic gamma-ray energy, the peak energy of the gamma-ray spectra, and the break time of the optical afterglow light curves.
For GRBs with an X-ray plateau phase, three-parameter correlations have also been reported between the end time of the plateau phase, the corresponding X-ray luminosity and the peak luminosity or isotropic energy in the prompt emission phase \citep{2014PASJ...66...42T,2016ApJ...825L..20D,2017ApJ...848...88D, 2019ApJS..245....1T}. These relations require information about the afterglow emission, such as the break time of the afterglow light curves and the duration and luminosity of the X-ray plateau phase. Nonetheless, the universal correlation found in this paper only involves properties of the GRB prompt emission and does not require any information from the afterglow phase. This universal correlation can be used as a redshift indicator. Here we use this relation to calculate the pseudo-redshifts of short GRBs, and then use Lynden-Bell's $c^-$ method to study the luminosity function and formation rate of SGRBs. We find that the luminosity function can be expressed as $\psi(L_0)\propto{L_0^{-0.63\pm{0.07}}}$ for dim SGRBs and $\psi(L_0)\propto{L_0^{-1.96\pm{0.28}}}$ for bright SGRBs, with the break luminosity $L^b_0=6.95_{-0.76}^{+0.84}\times{10^{50}} \rm{erg/s}$. For the formation rate of SGRBs, we find $\rho(z)\propto(1+z)^{-4.39\pm{0.55}}$ for $z<1.5$ and $\rho(z)\propto(1+z)^{-5.51\pm{0.32}}$ for $z>1.5$, and we also obtain the local SGRB rate $\rho(0)=15.5 \pm 5.8\ \rm{Gpc^{-3}yr^{-1}}$; these results are roughly consistent with \citet{2018ApJ...852....1Z}. If we take the typical beaming factor as $f_b^{-1}=30$ \citep{2015ApJ...815..102F}, then the actual total local event rate of SGRBs is about $450\ \rm{Gpc^{-3}yr^{-1}}$, which is also consistent with the results inferred from the gravitational wave detections \citep{2019PhRvX...9c1040A}. \acknowledgments We thank the anonymous referee for valuable comments. We thank Jinyi Shangguan for helpful discussions. This work was supported by NSFC (No.
11933010), by the Chinese Academy of Sciences via the Strategic Priority Research Program (No. XDB23040000), Key Research Program of Frontier Sciences (No. QYZDJ-SSW-SYS024). \clearpage \begin{deluxetable*}{lrrrrrrrrl} \tablecaption{The 49 Long GRB samples\label{tab:1}} \tablewidth{700pt} \tabletypesize{\scriptsize} \tablehead{ \colhead{Name} & \colhead{$z$} & \colhead{$E_{\rm peak}$} & \colhead{$T_{\rm 0.45}$} & \colhead{$\alpha$} & \colhead{$\beta$} & \colhead{$L_{\rm iso}$} & \colhead{$E_{\rm iso}$} & \colhead{Detection Band} & \colhead{Ref.} \\ \colhead{} & \colhead{} & \colhead{(keV)} & \colhead{(s)} & \colhead{} & \colhead{} & \colhead{($10^{52}$erg/s)} & \colhead{($10^{52}$erg)} & \colhead{(keV)} & \colhead{} } \startdata 050401 & 2.9 & $ 464.10 \pm 26 $ & $ 5.18 \pm 0.212 $ & -0.83 & -2.37 & $ 20.90 \pm 0.1 $ & $ 35.00 \pm 7 $ & $ 20-2000 $ & 2 \\ 050416A & 0.6535 & $ 26.01 \pm 2 $ & $ 0.63 \pm 0.043 $ & -1 & -3.4 & $ 0.10 \pm 0.99 $ & $ 25.10 \pm 0.01 $ & $ 15-150 $ & 1 \\ 050603 & 2.821 & $ 1333.53 \pm 28 $ & $ 1.6 \pm 0.08 $ & -0.79 & -2.15 & $ 225.00 \pm 0.14 $ & $ 60.00 \pm 4 $ & $ 20-3000 $ & 3 \\ 060124 & 2.296 & $ 816.62 \pm 88 $ & $ 2.38 \pm 0.129 $ & -1.29 & -2.25 & $ 13.70 \pm 0.74 $ & $ 41.00 \pm 6 $ & $ 20-2000 $ & 4 \\ 061007 & 1.261 & $ 1125.98 \pm 48 $ & $ 16.8 \pm 0.122 $ & -0.53 & -2.61 & $ 17.80 \pm 0.31 $ & $ 86.00 \pm 9 $ & $ 20-10000 $ & 5 \\ 061222A & 2.088 & $ 1090.06 \pm 54 $ & $ 8.97 \pm 0.159 $ & -1 & -2.32 & $ 14.80 \pm 0.4 $ & $ 20.80 \pm 0.602 $ & $ 20-10000 $ & 1 \\ 071010B & 0.947 & $ 101.24 \pm 10 $ & $ 4.68 \pm 0.075 $ & -1.25 & -2.65 & $ 0.65 \pm 2.17 $ & $ 1.70 \pm 0.9 $ & $ 20-1000 $ & 6 \\ 080319B & 0.937 & $ 1307.48 \pm 22 $ & $ 20.3 \pm 0.058 $ & -0.86 & -3.59 & $ 10.50 \pm 0.1 $ & $ 114.00 \pm 9 $ & $ 20-7000 $ & 1 \\ 080319C & 1.95 & $ 905.65 \pm 0.06 $ & $ 5.04 \pm 0.152 $ & -1.01 & -1.87 & $ 9.46 \pm 2.28 $ & $ 14.10 \pm 2.8 $ & $ 20-4000 $ & 7 \\ 080605 & 1.6398 & $ 784.02 \pm 40 $ & $ 4.75 \pm 0.079 $ & -0.87 &
-2.58 & $ 33.30 \pm 0.69 $ & $ 24.00 \pm 2 $ & $ 20-2000 $ & 8 \\ 080607 & 3.036 & $ 1404.53 \pm 27 $ & $ 7.52 \pm 0.094 $ & -0.76 & -2.57 & $ 221.00 \pm 0.44 $ & $ 188.00 \pm 10 $ & $ 20-4000 $ & 9 \\ 080721 & 2.602 & $ 1790.19 \pm 62 $ & $ 4.51 \pm 0.195 $ & -0.96 & -2.42 & $ 111.00 \pm 0.18 $ & $ 126.00 \pm 22 $ & $ 20-7000 $ & 1 \\ 080810 & 3.35 & $ 2523.00 \pm 263 $ & $ 31.35 \pm 2.914 $ & -1.2 & -2.5 & $ 239.00 \pm 0.14 $ & $ 45.00 \pm 5 $ & $ 15-1000 $ & 11 \\ 080913 & 6.7 & $ 931.70 \pm 39 $ & $ 2.16 \pm 0.139 $ & -0.82 & -2.5 & $ 12.40 \pm 0.18 $ & $ 8.60 \pm 2.5 $ & $ 15-150 $ & 12,13 \\ 081028 & 3.038 & $ 240.91 \pm 6 $ & $ 78.12 \pm 2.09 $ & 0.36 & -2.25 & $ 4.91 \pm 0.45 $ & $ 17.00 \pm 2 $ & $ 8-35000 $ & 10 \\ 081121 & 2.512 & $ 726.63 \pm 43.8 $ & $ 5.51 \pm 0.26 $ & -0.21 & -1.86 & $ 13.80 \pm 0.22 $ & $ 26.00 \pm 5 $ & $ 8-35000 $ & 10 \\ 081222 & 2.77 & $ 629.59 \pm 8 $ & $ 4.2 \pm 0.053 $ & -0.9 & -2.33 & $ 10.10 \pm 0.03 $ & $ 30.00 \pm 3 $ & $ 8-35000 $ & 1 \\ 090424 & 0.544 & $ 250.13 \pm 2 $ & $ 1.98 \pm 0.03 $ & -1.02 & -3.26 & $ 1.14 \pm 0.02 $ & $ 4.60 \pm 0.9 $ & $ 8-35000 $ & 1 \\ 090516 & 4.109 & $ 262.60 \pm 11.4 $ & $ 36.96 \pm 2.259 $ & -1.03 & -2.1 & $ 8.70 \pm 0.33 $ & $ 88.50 \pm 19.2 $ & $ 8-1000 $ & 14 \\ 090618 & 0.54 & $ 482.33 \pm 14 $ & $ 23.04 \pm 0.706 $ & -0.91 & -2.42 & $ 1.87 \pm 0.08 $ & $ 25.40 \pm 0.6 $ & $ 8-35000 $ & 10 \\ 090809 & 2.737 & $ 722.74 \pm 11 $ & $ 3.43 \pm 0.456 $ & -0.47 & -2.16 & $ 34.00 \pm 0.28 $ & $ 4.20 \pm 1.2 $ & $ 8-35000 $ & 10 \\ 090927 & 1.37 & $ 141.42 \pm 1.81 $ & $ 0.84 \pm 0.09 $ & -0.68 & -2.12 & $ 0.37 \pm 0.23 $ & $ 0.70 \pm 0.312 $ & $ 8-35000 $ & 10 \\ 091020 & 1.71 & $ 506.77 \pm 25 $ & $ 6 \pm 0.166 $ & -1.2 & -2.29 & $ 3.44 \pm 0.04 $ & $ 12.20 \pm 2.4 $ & $ 8-35000 $ & 1 \\ 091127 & 0.49 & $ 88.91 \pm 1.81 $ & $ 1.54 \pm 0.063 $ & -0.68 & -2.12 & $ 0.77 \pm 0.19 $ & $ 1.63 \pm 0.02 $ & $ 8-35000 $ & 10 \\ 100615A & 1.398 & $ 205.58 \pm 8 $ & $ 9.43 \pm 0.222 $ & -1.24 & -2.27 & $ 1.06 \pm
0.03 $ & $ 4.22 \pm 1.21 $ & $ 8-1000 $ & 15 \\ 100621A & 0.542 & $ 146.49 \pm 15 $ & $ 19.43 \pm 0.327 $ & -1.7 & -2.45 & $ 0.32 \pm 0.25 $ & $ 4.37 \pm 0.5 $ & $ 20-2000 $ & 1 \\ 100728A & 1.567 & $ 1001.13 \pm 25 $ & $ 57.35 \pm 0.185 $ & -0.47 & -2.5 & $ 6.45 \pm 1.08 $ & $ 63.74 \pm 12.2 $ & $ 20-10000 $ & 16 \\ 100816A & 0.8049 & $ 246.73 \pm 4.7 $ & $ 0.84 \pm 0.016 $ & -0.31 & -2.77 & $ 0.74 \pm 0.12 $ & $ 0.73 \pm 0.02 $ & $ 10-1000 $ & 17,18 \\ 100906A & 1.727 & $ 490.86 \pm 40 $ & $ 11.07 \pm 0.123 $ & -1.1 & -2.2 & $ 6.90 \pm 0.77 $ & $ 28.90 \pm 0.3 $ & $ 20-2000 $ & 19 \\ 101213A & 0.414 & $ 437.91 \pm 40 $ & $ 20.16 \pm 0.69 $ & -1.1 & -2.35 & $ 0.06 \pm 0.43 $ & $ 3.01 \pm 0.64 $ & $ 10-1000 $ & 20 \\ 101219B & 0.55 & $ 108.50 \pm 8 $ & $ 8.36 \pm 0.736 $ & -0.33 & -2.12 & $ 0.04 \pm 0.38 $ & $ 0.59 \pm 0.04 $ & $ 10-1000 $ & 21 \\ 110422A & 1.77 & $ 681.42 \pm 34 $ & $ 9.24 \pm 0.09 $ & -0.53 & -2.65 & $ 0.29 \pm 0.36 $ & $ 43.10 \pm 0.13 $ & $ 20-2000 $ & 22 \\ 110503A & 1.613 & $ 572.25 \pm 19 $ & $ 2.1 \pm 0.056 $ & -0.98 & -2.7 & $ 18.90 \pm 0.19 $ & $ 10.49 \pm 11.4 $ & $ 20-5000 $ & 1 \\ 110715A & 0.82 & $ 218.40 \pm 11 $ & $ 1.45 \pm 0.025 $ & -1.23 & -2.7 & $ 1.19 \pm 0.39 $ & $ 2.93 \pm 0.12 $ & $ 20-10000 $ & 23 \\ 110731A & 2.83 & $ 1164.32 \pm 13 $ & $ 3.36 \pm 0.071 $ & -0.8 & -2.98 & $ 30.60 \pm 0.07 $ & $ 118.05 \pm 9.12 $ & $ 10-1000 $ & 24 \\ 110801A & 1.858 & $ 400.12 \pm 50 $ & $ 54.56 \pm 1.731 $ & -1.7 & -2.5 & $ 0.44 \pm 0.84 $ & $ 15.84 \pm 1.32 $ & $ 15-350 $ & 25 \\ 111228A & 0.714 & $ 58.28 \pm 3 $ & $ 9 \pm 0.307 $ & -1.9 & -2.7 & $ 0.67 \pm 0.25 $ & $ 4.41 \pm 0.202 $ & $ 10-1000 $ & 26 \\ 120119A & 1.728 & $ 516.14 \pm 8 $ & $ 13.6 \pm 0.198 $ & -0.98 & -2.36 & $ 5.98 \pm 0.14 $ & $ 20.79 \pm 1.98 $ & $ 10-1000 $ & 27 \\ 120326A & 1.798 & $ 129.97 \pm 3.67 $ & $ 4.68 \pm 0.114 $ & -0.98 & -2.53 & $ 0.59 \pm 0.1 $ & $ 3.11 \pm 0.617 $ & $ 10-1000 $ & 28 \\ 120712A & 4.1745 & $ 641.64 \pm 26 $ & $ 4.68 \pm 0.146 $ & -0.6 
& -1.8 & $ 13.50 \pm 0.08 $ & $ 8.35 \pm 1.49 $ & $ 10-1000 $ & 29 \\ 120922A & 3.1 & $ 154.57 \pm 3.5 $ & $ 43.93 \pm 1.488 $ & -1.6 & -2.3 & $ 3.02 \pm 0.27 $ & $ 19.79 \pm 5.89 $ & $ 10-1000 $ & 30 \\ 121128A & 2.2 & $ 199.04 \pm 4.6 $ & $ 4.75 \pm 0.198 $ & -0.8 & -2.41 & $ 1.53 \pm 0.19 $ & $ 9.24 \pm 1.11 $ & $ 10-1000 $ & 31 \\ 130215A & 0.597 & $ 247.54 \pm 63 $ & $ 19.5 \pm 1.148 $ & -1 & -1.6 & $ 0.22 \pm 0.18 $ & $ 3.14 \pm 0.88 $ & $ 10-1000 $ & 32 \\ 130408A & 3.758 & $ 1294.18 \pm 40 $ & $ 1.26 \pm 0.092 $ & -0.7 & -2.3 & $ 61.20 \pm 0.59 $ & $ 7.41 \pm 1.41 $ & $ 20-10000 $ & 33 \\ 130427A & 0.3399 & $ 1112.12 \pm 5 $ & $ 7.64 \pm 0.382 $ & -0.789 & -3.06 & $ 19.00 \pm 0.001 $ & $ 46.18 \pm 8.26 $ & $ 8-1000 $ & 34 \\ 130505A & 2.27 & $ 1975.08 \pm 49 $ & $ 6.23 \pm 0.623 $ & -0.31 & -2.26 & $ 398.00 \pm 0.17 $ & $ 1012.60 \pm 253.9 $ & $ 20-1200 $ & 35 \\ 130831A & 0.4791 & $ 99.10 \pm 4 $ & $ 4.07 \pm 0.094 $ & -1.51 & -2.8 & $ 0.34 \pm 0.39 $ & $ 1.49 \pm 0.0578 $ & $ 20-10000 $ & 36 \\ 130907A & 1.238 & $ 872.82 \pm 16 $ & $ 40.31 \pm 0.139 $ & -0.65 & -2.22 & $ 18.20 \pm 0.08 $ & $ 122.31 \pm 10.92 $ & $ 20-10000 $ & 37 \\ 131030A & 1.295 & $ 406.22 \pm 10 $ & $ 7.35 \pm 0.118 $ & -0.71 & -2.95 & $ 10.80 \pm 0.11 $ & $ 24.72 \pm 3.19 $ & $ 20-10000 $ & 38 \\ \enddata \tablecomments{ References. (1) \citet{2012MNRAS.421.1256N}; (2) \citet{2005GCN..3179....1G}; (3) \citet{2005GCN..3518....1G}; (4) \citet{2006GCN..4599....1G}; (5) \citet{2006GCN..5722....1G}; (6) \citet{2007GCN..6879....1G}; (7) \citet{2008GCN..7487....1G}; (8) \citet{2008GCN..7854....1G}; (9) \citet{2008GCN..7862....1G}; (10) Nava et al. 
(2011); (11) \citet{2008GCN..8101....1S}; (12) \citet{2008GCN..8256....1P}; (13) \citet{2009ApJ...693.1610G}; (14) \citet{2009GCN..9415....1M}; (15) \citet{2010GCN.10851....1F}; (16) \citet{2010GCN.11021....1G}; (17) \citet{2010GCN.11124....1F}; (18) \citet{2010GCN.11124....1F}; (19) \citet{2010GCN.11251....1G}; (20) \citet{2010GCN.11454....1G}; (21) \citet{2010GCN.11477....1V}; (22) \citet{2011GCN.11971....1G}; (23) \citet{2011GCN.12166....1G}; (24) \citet{2011GCN.12223....1G}; (25) \citet{2011GCN.12276....1S}; (26) \citet{2011GCN.12744....1B}; (27) \citet{2012GCN.12874....1G}; (28) \citet{2012GCN.13145....1C}; (29) \citet{2012GCN.13469....1G}; (30) \citet{2012GCN.13809....1Y}; (31) \citet{2012GCN.14012....1M}; (32) \citet{2013GCN.14219....1Y}; (33) \citet{2013GCN.14368....1G}; (34) \citet{2013GCN.14473....1V}; (35) \citet{2013GCN.14575....1G}; (36) \citet{2013GCN.15145....1G}; (37) \citet{2013GCN.15203....1G}; (38) \citet{2013GCN.15413....1G}. } \end{deluxetable*} \begin{deluxetable*}{lrrrrrl} \tablecaption{The 19 Short GRB samples\label{tab:2}} \tablewidth{700pt} \tabletypesize{\scriptsize} \tablehead{ \colhead{Name} & \colhead{$z$} & \colhead{$E_{\rm peak}$} & \colhead{$T_{\rm 0.45}$} & \colhead{$L_{\rm iso}$} & \colhead{$E_{\rm iso}$} & \colhead{Ref.} \\ \colhead{} & \colhead{} & \colhead{(keV)} & \colhead{(s)} & \colhead{($10^{52}$erg/s)} & \colhead{($10^{52}$erg)} & \colhead{} } \startdata 050509B & 0.225 & $ 102 \pm 10 $ & $ 0.02 \pm 0.04 $ & $ 0.01 \pm 0.09 $ & $ 0.00024 \pm 0.00044 $ & 14 \\ 051221A & 0.546 & $ 621 \pm 114 $ & $ 0.16 \pm 0.008 $ & $ 2.77 \pm 0.29 $ & $ 0.3 \pm 0.04 $ & 1,2 \\ 060502B & 0.287 & $ 193 \pm 19 $ & $ 0.04 \pm 0.007 $ & $ 0.089 \pm 0.05 $ & $ 0.003 \pm 0.005 $ & 1 \\ 061201 & 0.111 & $ 969 \pm 508 $ & $ 0.22 \pm 0.014 $ & $ 0.3445 \pm 0.4 $ & $ 3 \pm 4 $ & 1 \\ 061217 & 0.827 & $ 216 \pm 22 $ & $ 0.1 \pm 0.008 $ & $ 1.498 \pm 2.2 $ & $ 0.03 \pm 0.04 $ & 1 \\ 070429B & 0.902 & $ 813 \pm 81 $ & $ 0.08 \pm 0.011 $ & $ 1.873 \pm
1.6 $ & $ 0.07 \pm 0.11 $ & 1 \\ 070714B & 0.92 & $ 2150 \pm 1113 $ & $ 1.4 \pm 0.132 $ & $ 6.56 \pm 1.36 $ & $ 0.83 \pm 0.1 $ & 4 \\ 070724A & 0.457 & $ 119 \pm 12 $ & $ 0.11 \pm 0.011 $ & $ 0.087 \pm 0.005 $ & $ 0.00245 \pm 0.00175 $ & 13 \\ 070809 & 0.473 & $ 91 \pm 9 $ & $ 0.26 \pm 0.018 $ & $ 0.042 \pm 0.001 $ & $ 0.00131 \pm 0.00103 $ & 1 \\ 090510 & 0.903 & $ 8370 \pm 760 $ & $ 0.12 \pm 0.013 $ & $ 104 \pm 24 $ & $ 3.75 \pm 0.25 $ & 5 \\ 090426A & 2.609 & $ 320 \pm 54 $ & $ 0.32 \pm 0.025 $ & $ 1.46 \pm 0.38 $ & $ 1.1 \pm 0.38 $ & 1 \\ 100117A & 0.915 & $ 551 \pm 135 $ & $ 0.14 \pm 1.693 $ & $ 1.89 \pm 0.21 $ & $ 0.09 \pm 0.01 $ & 5 \\ 100206A & 0.407 & $ 638.98 \pm 131.21 $ & $ 0.06 \pm 0.009 $ & $ 1 \pm 1.15 $ & $ 0.0763 \pm 0.03 $ & 5 \\ 100625A & 0.452 & $ 1018 \pm 166 $ & $ 0.14 \pm 0.007 $ & $ 1.4 \pm 0.06 $ & $ 0.399 \pm 0.06 $ & 11 \\ 101219A & 0.718 & $ 842 \pm 170 $ & $ 0.24 \pm 0.011 $ & $ 1.56 \pm 0.24 $ & $ 0.49 \pm 0.23 $ & 6 \\ 130603B & 0.356 & $ 891.66 \pm 135.6 $ & $ 0.04 \pm 0.007 $ & $ 3.04 \pm 0.44 $ & $ 1.476 \pm 0.44 $ & 9 \\ 061006 & 0.4377 & $ 966 \pm 322 $ & $ 0.24 \pm 0.01 $ & $ 2.06 \pm 0.15 $ & $ 0.983 \pm 0.15 $ & 3 \\ 071227 & 0.381 & $ 1000 \pm 31 $ & $ 0.5 \pm 0.034 $ & $ 0.443 \pm 0.139 $ & $ 0.1 \pm 0.01 $ & 12 \\ 101224A & 0.72 & $ 393.88 \pm 161 $ & $ 0.12 \pm 0.013 $ & $ 0.824 \pm 0.125 $ & $ -- $ & 10 \\ \enddata \tablecomments{ References: (1) \citet{2007ApJ...671..656B} and references therein; (2) \citet{2005GCN..4394....1G}; (3) \citet{2006GCN..5710....1G}; (4) \citet{2007GCN..6638....1O}; (5) \citet{2013MNRAS.431.1398T}; (6) \citet{2010GCN.11470....1G}; (7) \citet{2016GCN.19843....1S}; (8) \citet{2016GCN.19570....1H}; (9) \citet{2013GCN.14771....1G}; (10) \citet{2010GCN.11489....1M}; (11) \citet{2010GCN.10912....1B}; (12) \citet{2007GCN..7155....1G}; (13) \citet{2007GCNR...74....2Z}; (14) \citet{2005GCN..3385....1B}; (15) \citet{2019arXiv190205489W}; (16) \citet{2018ApJ...852....1Z}; (17) \citet{2013MNRAS.430..163Q}.
} \end{deluxetable*} \clearpage
\section{INTRODUCTION} \label{sec-introd} The progenitors of the narrow-lined Type II supernovae (SN IIn) remain a mystery, and this mystery is even deeper for the rare class of very luminous Type IIn, with $M \la -20$. While significant evidence points toward an origin from luminous blue variable stars (LBV) \citep[e.g.,][]{Smith2008,Smith2008b,Smith2010}, suggestions for the energy source of the events span a wide range, from pair-instabilities \citep{Smith2007,GalYam2009}, magnetar power \citep{Kasen2010,Nicholl2013}, or extremely energetic core-collapse SNe \citep{Umeda2008}, to ongoing shock interactions or shock breakout from an extended progenitor \citep{Falk1977,Grasberg1987,Smith2007a,Chevalier2011,Ginzburg2012}. Before SN 2010jl, the most convincing direct detection of a Type IIn progenitor was SN 2005gl at a distance of 66 Mpc, a moderately luminous SN IIn that transitioned into a more normal SN II. Pre-explosion images of SN 2005gl showed a star at the position of the SN with luminosity consistent with an LBV \citep{Gal-Yam2009}. Recently it has been suggested that SN~1961V was in fact a core-collapse SN IIn, rather than an Eta Carinae-like eruption, in part because the peak absolute magnitude was $-18$ and because there is no surviving star detected in Spitzer images \citep{Kochanek2011,Smith2011b}. The progenitor of SN~1961V was clearly detected for decades before 1961 as a very luminous blue star, consistent with an extremely massive LBV. Even more recently, SN 2009ip \citep{Mauerhan2013,Pastorello2012,Fraser2013} and SN 2010mc \citep{Ofek2013b} have become interesting cases of a ``supernova impostor'' becoming a Type IIn SN. As we will show, these objects are similar in some ways, but differ from SN 2010jl in the essential feature of total energy release. From a Spitzer/IRAC survey of Type IIn SNe, \cite{Fox2011} find mid-IR emission from $\sim 15 \%$ of the SNe.
The most likely origin is emission from pre-existing dust heated by the optical emission from circumstellar interaction between the SN and the CSM of the progenitor. Estimates of the dust mass are in the range $10^{-5} - 10^{-1} ~M_\odot$. With a dust-to-gas ratio of 1:100 this is consistent with the total mass lost in giant LBV eruptions \citep{SmithOwocki2006}. While medium-luminosity Type IIn SNe, like SN 1988Z and SN 1995N, are often strong radio and X-ray emitters, the high-luminosity IIn SNe are generally very weak in these bands. This may be a result of the extended massive envelopes thought to be responsible for the high luminosity of these SNe, as discussed by \cite{Chevalier2012}. In this paper we discuss HST and ground-based observations of the Type IIn SN 2010jl. SN 2010jl was discovered on 2010 Nov. 3.52 by Newton \& Puckett (2010). The first detection of SN 2010jl was, however, from pre-discovery images on 2010 October 9.6 by \cite{Stoll2010}. Based on the presence of narrow Balmer lines it was classified as a Type IIn SN by \cite{Benetti2010}. From archival HST images \cite{Smith2011} identify a possible progenitor. Although the exact nature of this star is uncertain, several facts indicate a progenitor with mass $\ga 30 ~M_\odot$, though it is difficult to distinguish between a single massive star and a young cluster. \cite{Stoll2010} find that the host is a low-metallicity galaxy, as found for other luminous Type IIn host galaxies, which typically have metallicities $\la 0.4 \ Z_\odot$ \citep{Neill2011,Chatzopoulos2011}. The peak V and I magnitudes were 13.7 and 13.0, respectively, corresponding to absolute magnitudes M$_V \approx -19.9$ and M$_I \approx -20.5$ \citep{Stoll2010}. This is considerably more luminous than the average Type IIn SN, M$_V = -18.4$ \citep{Kiewe2012}, and places it among the very luminous Type IIn SNe, close to the super-luminous SNe, defined as $M \la -21$ \citep{Gal-Yam2012}.
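As a quick back-of-the-envelope check of these peak absolute magnitudes, the sketch below applies the distance modulus, $M = m - 5\log_{10}(D/10\,{\rm pc})$, with the distance of 49 Mpc adopted later in this paper; extinction and the K-correction are neglected, which accounts for the small offsets from the quoted values.

```python
import math

def absolute_magnitude(m_app, d_mpc):
    """Apparent to absolute magnitude via the distance modulus
    mu = 5 log10(d / 10 pc); extinction is neglected here."""
    mu = 5.0 * math.log10(d_mpc * 1.0e6 / 10.0)
    return m_app - mu

# Peak magnitudes and adopted distance quoted in the text.
M_V = absolute_magnitude(13.7, 49.0)  # quoted value: about -19.9
M_I = absolute_magnitude(13.0, 49.0)  # quoted value: about -20.5
print(f"M_V = {M_V:.2f}, M_I = {M_I:.2f}")
```

The results, $M_V \approx -19.8$ and $M_I \approx -20.5$, agree with the quoted values to within the neglected extinction and rounding.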
The light curve showed a slow evolution with a decline by $\sim 1$ mag during the first 200 days \citep{Andrews2011}. \cite{Smith2012} report optical spectroscopy of the SN from 2 -- 236 days after discovery. The spectrum shows a number of strong emission lines, with a narrow and a broad component. The H$\alpha$ line is well fitted with a `Lorentzian' profile with full width at half maximum (FWHM) $\sim 1800 \rm ~km~s^{-1}$ that extends to $\sim 6000 \rm ~km~s^{-1}$. The narrow H$\alpha$ line was found to be double-peaked, and can be approximated by a Gaussian emission component with FWHM = 120 $\rm ~km~s^{-1}$. The shift of the line peak of the broad H$\alpha$ line was seen as evidence for dust formation in the post-shock gas. However, Smith et al. also discussed other scenarios based on a combination of electron scattering and continuum absorption. In this paper we show that electron scattering can explain all aspects of the line shapes. \cite{Zhang} presented an extensive data set, including UBVRI photometry and low-resolution spectroscopy during the first $\sim 525$ days after first detection. Although the light curves and the optical low-resolution spectra agree well up to $\sim 200$ days, there are some important differences in the later observations and interpretation between Zhang et al. and this paper (see Section \ref{sec_phot_results}). From Spitzer 3.6 and 4.5 $\mu$m and JHK photometry at an age of 90 -- 108 days \cite{Andrews2011} find evidence for pre-existing dust with a mass of $0.03-0.35 ~M_\odot$ at a distance of $\sim 6\EE{17}$ cm from the SN. Here, we complement these observations with more extensive coverage of the very late phases. An important clue to the geometry is that \cite{Patat2011} find strong, wavelength-independent polarization at a level of $\sim 2 \%$, which indicates an asphericity with axial ratio $\sim 0.7$. SN 2010jl was discovered as an X-ray source by \cite{Immler2010}. Subsequent Chandra observations by Chandra et al.
(2012) on 2010 Dec. $7-8$, and 2011 Oct. 17, i.e., 59 days and 373 days after first detection, indicate a hard spectrum with $kT \ga 10$ keV and a very large column density at the first epoch, $N_H \sim 10^{24}$ cm$^{-2}$, which decreased to $N_H \sim 3 \times 10^{23}$ cm$^{-2}$ at 373 days. This absorption is most likely intrinsic to the SN or its CSM. NuSTAR and XMM observations by \cite{Ofek2013} in 2012 Oct. -- Nov. (days $728-754$) were able to provide a tighter constraint on the X-ray temperature of $\sim 12$ keV. They also present UV and X-ray observations with the SWIFT satellite. No radio detections have been reported. \cite{Ofek2013} also discussed the optical light curve and found a mass-loss rate of $\sim 0.8 ~\Msun ~\rm yr^{-1}$, based on their assumed wind velocity of $300 \rm ~km~s^{-1}$, and a total mass lost of $\ga 10 \ M_\odot$. Because of their limited optical photometry they assumed a constant bolometric correction to the photometry. They also obtained spectroscopy at only a few epochs, and these data are most important at very late phases. While most of the qualitative conclusions in their paper are similar to ours, there are some important differences which we discuss in the paper. They do not analyze the emission line spectrum in any detail, partly due to the lower spectral resolution of their data, which prevents separation of the narrow component from the broad. Recently, \cite{Borish2014} discussed near-IR spectroscopic observations in which they find evidence for high-velocity gas from the He I $\lambda 10,830$ line with velocities up to 5500 $\rm ~km~s^{-1}$. This gas may be related to the X-ray emitting gas, which originates from shocks with similar velocities. Based on a single spectrum at 513 days \cite{Maeda2013} proposed that the line shifts of the Balmer lines, discussed by \cite{Smith2012} and in this paper, result from dust formation in the ejecta.
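The quoted mass-loss rate and wind velocity imply a very dense wind. As an order-of-magnitude illustration (not the analysis of this paper), the standard wind-interaction luminosity, $L \approx \frac{1}{2}(\dot M/v_w)\,v_s^3$ for a steady wind, evaluated with an assumed shock velocity of $3000 \rm ~km~s^{-1}$ (an illustrative value, not a measurement), gives a luminosity of the right order:

```python
# Order-of-magnitude CSM-interaction luminosity, L = 0.5 * (Mdot / v_w) * v_s**3,
# for a steady wind with density rho = Mdot / (4 pi r^2 v_w).
M_SUN = 1.989e33            # g
YEAR = 3.156e7              # s

mdot = 0.8 * M_SUN / YEAR   # mass-loss rate quoted above (g/s)
v_wind = 300.0e5            # wind velocity quoted above (cm/s)
v_shock = 3000.0e5          # assumed shock velocity (cm/s), illustrative only

L = 0.5 * (mdot / v_wind) * v_shock**3
print(f"L ~ {L:.1e} erg/s")
```

The result, a few times $10^{43}$ erg s$^{-1}$, shows that a mass-loss rate of this magnitude can plausibly power a luminosity comparable to that of the SN through CSM interaction.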
This is also the subject of a paper by \cite{Gall2014}, which presents a detailed analysis of the dust properties based on this interpretation. In this paper, based on extensive observations, we show, however, that this interpretation has severe problems. Here we discuss an extensive set of optical, UV, and IR photometric and spectroscopic observations of SN 2010jl, together with interpretation of these observations. Compared to other papers, our data set includes broader wavelength coverage, more extensive time coverage, and higher spectral resolution. We present a unique set of UV observations obtained with HST. X-ray observations and information about the progenitor from the literature enable us to provide a more complete picture of the event. In particular, we provide strong constraints on the structure of the progenitor and its environment. Our paper is organized as follows. In Sect. \ref{sec-obs} we report the observations and give our results in Sect. \ref{sec-results}. In Sect. \ref{sec-discussion} we discuss these in relation to the properties of the CSM, dynamics, energetics and different progenitor scenarios. We also put SN 2010jl in the context of other IIn SNe. Our main conclusions are summarized in Sect. \ref{sec-conculsions}. Based on the first detection we adopt 2010 Oct 9 (Julian Date 2,455,479) as the explosion date of the SN. With a recession velocity of the host galaxy of $3207 \pm 30 \rm ~km~s^{-1}$ \citep{Stoll2010}, we adopt a distance to UGC 5189A of $49 \pm 4$ Mpc in agreement with \cite{Smith2011}. \section{OBSERVATIONS} \label{sec-obs} \subsection{Photometry} Most of the optical imaging was obtained with the 1.2-m telescope at the F. L. Whipple Observatory (FLWO) using the KeplerCam instrument. KeplerCam data were reduced using IRAF and IDL procedures described in \citet{Hicken07}.
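For orientation, the pure Hubble-flow distance implied by the host recession velocity, $D = v/H_0$, is consistent with the adopted distance; the sketch below assumes $H_0 = 70 \rm ~km~s^{-1}~Mpc^{-1}$ (an assumed value, not one used in this paper).

```python
# Hubble-flow distance from the host recession velocity, D = v / H0.
v_rec = 3207.0   # km/s (host recession velocity quoted above)
H0 = 70.0        # km/s/Mpc (assumed Hubble constant)

D = v_rec / H0
print(f"D ~ {D:.1f} Mpc")
```

This gives $\sim 46$ Mpc, in agreement with the adopted $49 \pm 4$ Mpc within the uncertainties.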
A few additional late-time epochs were obtained with the 6.5-m Baade telescope at the Magellan Observatory using the IMACS instrument and the 2.5-m Nordic Optical Telescope (NOT) using the ALFOSC instrument. These were reduced with IRAF using standard methods. High-quality NOT imaging was used to construct $B$-, $V$-, $r$- and $i$-band templates by PSF subtraction of the SN using the IRAF DAOPHOT package. These templates were then subtracted from the FLWO, Magellan Observatory and NOT imaging using the HOTPANTS package. Template subtraction is necessary to obtain accurate photometry after $\sim$100 days as the SN is located close to the center of the galaxy. The optical photometry was calibrated to the Johnson-Cousins (JC) and Sloan Digital Sky Survey (SDSS) standard systems using local reference stars in the SN field, in turn calibrated using standard field observations on five photometric nights. As the SN Spectral Energy Distribution (SED), in particular at late phases, is line-dominated, we have applied the technique of S-corrections \citep{Stritzinger02} to transform from the natural system of each instrument to the standard systems. This was done for all bands except the $u$-band. Color-terms and filter response functions were adopted from \citet{Hicken12} for the FLWO and from Ergon et al. (2013) for the NOT. Filter response functions for the JC and SDSS standard systems were adopted from \citet{Bessel12} and \citet{Doi10}. Most of the near-infrared (NIR) imaging was obtained with the 1.3-m Peters Automated Infrared Imaging Telescope (PAIRITEL) at the FLWO. The data were processed into mosaics using the PAIRITEL Mosaic Pipeline version 3.6 implemented in Python. Details of PAIRITEL observations and reduction of SN data can be found in \citet{Friedman12}. Two additional late-time epochs were obtained with NOT using the NOTCAM instrument.
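Schematically, an S-correction is the difference between synthetic magnitudes of the observed spectrum computed through the standard and natural passbands. The sketch below illustrates the idea with made-up Gaussian passbands and a smooth power-law spectrum; the actual filter curves and spectra used here are those cited above.

```python
import numpy as np

def synth_mag(wave, flux, filt_wave, filt_trans):
    """Synthetic magnitude (arbitrary zero point) of a spectrum through a
    passband, using photon-counting (lambda-weighted) integration."""
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    # Uniform wavelength grid, so the grid spacing cancels in the ratio.
    num = np.sum(flux * trans * wave)
    den = np.sum(trans * wave)
    return -2.5 * np.log10(num / den)

wave = np.linspace(4000.0, 7000.0, 3000)   # wavelength grid (Angstrom)
flux = 1e-15 * (wave / 5500.0) ** -2.0     # schematic smooth spectrum
# Made-up V-like passbands: the "natural" band sits 50 A redder than
# the "standard" one.
filt_wave = np.linspace(4500.0, 6500.0, 200)
std_trans = np.exp(-0.5 * ((filt_wave - 5500.0) / 300.0) ** 2)
nat_trans = np.exp(-0.5 * ((filt_wave - 5550.0) / 300.0) ** 2)

# S-correction: standard-system minus natural-system synthetic magnitude.
s_corr = (synth_mag(wave, flux, filt_wave, std_trans)
          - synth_mag(wave, flux, filt_wave, nat_trans))
print(f"S-correction = {s_corr:+.3f} mag")
```

For a smooth spectrum like this one the correction is at the 0.01-mag level; for a line-dominated spectrum, as here at late phases, it can be substantially larger, which is why the correction matters.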
This high-quality imaging was used to construct $J$-, $H$-, $K$-band templates by PSF subtraction of the SN using the IRAF DAOPHOT package. These templates were then subtracted from the PAIRITEL imaging using the HOTPANTS package. As in the optical, template subtraction is necessary to obtain accurate photometry after $\sim$100 days and is particularly important for the late $J$-band photometry. The NIR photometry was calibrated to the 2MASS standard system using reference stars from the 2MASS Point Source catalogue \citep{Skrutskie06} within the SN field. As the PAIRITEL was one of the two telescopes used for the 2MASS survey, the natural-system photometry is already on the 2MASS standard system and no transformation is needed. The NOT photometry was transformed to the 2MASS standard system using linear color-terms from Ergon et al. (2013). The absence of strong lines in the X-shooter NIR spectrum indicates that this is sufficient and that S-corrections are not needed. As a check we can compare with the J, H and K$_s$ magnitudes from \cite{Andrews2011} at 104 days. Our J and H magnitudes agree within $\sim 0.1 \ $ mag with those of Andrews et al. However, our K$_s$ magnitude close to this epoch is $\sim 12.7$, while that of Andrews et al. is $\sim 13.75$. Judging from the SED plot in Andrews et al., however, it seems that there is a typographical error and that the magnitude should be $12.75\pm 0.1$. In this case there is excellent agreement. In Tables \ref{table_phot_opt} and \ref{table_phot_ir} we give the photometry for all epochs, including errors.
\begin{deluxetable*}{lccccccl} \tabletypesize{\footnotesize} \tablecaption{Optical photometry.\label{table_phot_opt} } \tablewidth{0pt} \tablehead{ \colhead{JD} & \colhead{epoch$^a$} &\colhead{u'}& \colhead{B}&\colhead{V}& \colhead{r'} &\colhead{i'} &\colhead{Telescope(Instrument)}\\ } \startdata 55508.98 & 29.98 & 13.85 (0.12) & 14.08 (0.03) & 13.79 (0.03) & 13.49 (0.02) & 13.54 (0.03) & FLWO (KeplerCam) \\ 55510.98 & 31.98 & 13.91 (0.12) & 14.10 (0.03) & 13.80 (0.03) & 13.49 (0.02) & 13.55 (0.03) & FLWO (KeplerCam) \\ 55513.96 & 34.96 & 13.94 (0.12) & 14.13 (0.03) & 13.84 (0.03) & 13.52 (0.02) & 13.58 (0.03) & FLWO (KeplerCam) \\ 55516.00 & 37.00 & 14.00 (0.12) & 14.16 (0.03) & 13.85 (0.03) & 13.53 (0.02) & 13.60 (0.03) & FLWO (KeplerCam) \\ 55522.91 & 43.91 & ... & ... & 13.88 (0.03) & 13.55 (0.02) & 13.63 (0.03) & FLWO (KeplerCam) \\ 55527.02 & 48.02 & 14.12 (0.12) & 14.30 (0.03) & 13.95 (0.03) & 13.64 (0.02) & 13.68 (0.03) & FLWO (KeplerCam) \\ 55529.95 & 50.95 & 14.19 (0.12) & 14.35 (0.03) & 14.02 (0.03) & 13.66 (0.02) & 13.74 (0.03) & FLWO (KeplerCam) \\ 55530.93 & 51.93 & 14.24 (0.12) & 14.31 (0.03) & 13.99 (0.03) & 13.61 (0.02) & 13.74 (0.03) & FLWO (KeplerCam) \\ 55533.98 & 54.98 & 14.22 (0.12) & 14.38 (0.03) & 14.07 (0.03) & 13.71 (0.02) & 13.79 (0.03) & FLWO (KeplerCam) \\ 55537.01 & 58.01 & 14.25 (0.12) & 14.40 (0.03) & 14.11 (0.03) & 13.71 (0.02) & 13.83 (0.03) & FLWO (KeplerCam) \\ 55540.95 & 61.95 & 14.30 (0.12) & 14.39 (0.03) & 14.09 (0.03) & 13.69 (0.02) & 13.82 (0.03) & FLWO (KeplerCam) \\ 55558.99 & 79.99 & 14.57 (0.12) & 14.62 (0.03) & 14.32 (0.03) & 13.87 (0.02) & 14.03 (0.03) & FLWO (KeplerCam) \\ 55564.02 & 85.02 & 14.54 (0.12) & 14.77 (0.03) & 14.46 (0.03) & 13.89 (0.02) & 14.13 (0.03) & FLWO (KeplerCam) \\ 55565.02 & 86.02 & 14.52 (0.12) & 14.84 (0.03) & 14.47 (0.03) & 13.89 (0.02) & 14.14 (0.03) & FLWO (KeplerCam) \\ 55566.03 & 87.03 & 14.61 (0.12) & 14.82 (0.03) & 14.51 (0.03) & 13.91 (0.02) & 14.18 (0.03) & FLWO (KeplerCam) \\ 55566.91 
& 87.91 & 14.61 (0.12) & 14.80 (0.03) & 14.48 (0.03) & 13.88 (0.02) & 14.15 (0.03) & FLWO (KeplerCam) \\ 55568.98 & 89.98 & 14.64 (0.12) & ... & 14.51 (0.03) & 13.90 (0.02) & 14.20 (0.03) & FLWO (KeplerCam) \\ 55570.00 & 91.00 & 14.62 (0.12) & 14.81 (0.03) & 14.50 (0.03) & 13.91 (0.02) & 14.20 (0.03) & FLWO (KeplerCam) \\ 55571.98 & 92.98 & 14.62 (0.12) & 14.86 (0.03) & 14.51 (0.03) & 13.91 (0.02) & 14.21 (0.03) & FLWO (KeplerCam) \\ 55572.86 & 93.86 & 14.63 (0.12) & 14.84 (0.03) & 14.54 (0.03) & 13.92 (0.02) & 14.21 (0.03) & FLWO (KeplerCam) \\ 55575.89 & 96.89 & 14.62 (0.12) & 14.91 (0.03) & 14.55 (0.03) & 13.94 (0.02) & 14.24 (0.03) & FLWO (KeplerCam) \\ 55576.76 & 97.76 & ... & ... & 14.65 (0.03) & 13.94 (0.02) & 14.28 (0.03) & FLWO (KeplerCam) \\ 55578.00 & 99.00 & 14.76 (0.12) & 14.92 (0.03) & 14.57 (0.03) & 13.93 (0.02) & 14.26 (0.03) & FLWO (KeplerCam) \\ 55580.88 & 101.88 & 14.66 (0.12) & ... & 14.62 (0.03) & 13.94 (0.02) & 14.27 (0.03) & FLWO (KeplerCam) \\ 55587.89 & 108.89 & 14.80 (0.12) & ... & 14.62 (0.03) & 13.95 (0.02) & 14.34 (0.03) & FLWO (KeplerCam) \\ 55588.88 & 109.88 & ... & 14.94 (0.03) & 14.63 (0.03) & 13.96 (0.02) & 14.33 (0.03) & FLWO (KeplerCam) \\ 55589.87 & 110.87 & 14.72 (0.12) & 14.95 (0.03) & 14.63 (0.03) & 13.98 (0.02) & 14.34 (0.03) & FLWO (KeplerCam) \\ 55594.91 & 115.91 & 14.71 (0.12) & 15.00 (0.03) & 14.70 (0.03) & 13.97 (0.02) & 14.37 (0.03) & FLWO (KeplerCam) \\ 55597.77 & 118.77 & ... & 14.98 (0.03) & 14.69 (0.03) & 13.99 (0.02) & 14.39 (0.03) & FLWO (KeplerCam) \\ 55600.86 & 121.86 & 14.94 (0.12) & 15.00 (0.03) & 14.71 (0.03) & 13.98 (0.02) & 14.40 (0.03) & FLWO (KeplerCam) \\ 55602.70 & 123.70 & 14.76 (0.12) & ... & 14.71 (0.03) & 13.99 (0.02) & 14.41 (0.03) & FLWO (KeplerCam) \\ 55605.94 & 126.94 & 14.79 (0.12) & 15.09 (0.03) & 14.75 (0.03) & 14.01 (0.02) & 14.44 (0.03) & FLWO (KeplerCam) \\ 55606.82 & 127.82 & 14.85 (0.12) & ... & ... & ... & ... 
& FLWO (KeplerCam) \\ 55607.93 & 128.93 & 14.88 (0.12) & 15.08 (0.03) & 14.77 (0.03) & 14.02 (0.02) & 14.44 (0.03) & FLWO (KeplerCam) \\ 55608.86 & 129.86 & 14.83 (0.12) & ... & 14.75 (0.03) & 14.00 (0.02) & 14.45 (0.03) & FLWO (KeplerCam) \\ 55615.95 & 136.95 & 14.81 (0.12) & ... & 14.79 (0.03) & 14.00 (0.02) & 14.51 (0.03) & FLWO (KeplerCam) \\ 55623.65 & 144.65 & 14.82 (0.12) & 15.13 (0.03) & 14.81 (0.03) & 14.02 (0.02) & 14.53 (0.03) & FLWO (KeplerCam) \\ 55628.78 & 149.78 & 14.87 (0.12) & 15.12 (0.03) & 14.83 (0.03) & ... & 14.56 (0.03) & FLWO (KeplerCam) \\ 55631.66 & 152.66 & 14.98 (0.12) & 15.14 (0.03) & 14.84 (0.03) & 14.02 (0.02) & 14.55 (0.03) & FLWO (KeplerCam) \\ 55647.73 & 168.73 & 15.02 (0.12) & 15.17 (0.03) & 14.90 (0.03) & 14.02 (0.02) & 14.63 (0.03) & FLWO (KeplerCam) \\ 55667.74 & 188.74 & ... & ... & 14.96 (0.03) & 14.03 (0.02) & 14.69 (0.03) & FLWO (KeplerCam) \\ 55668.81 & 189.81 & ... & ... & 14.98 (0.03) & 14.03 (0.02) & 14.69 (0.03) & FLWO (KeplerCam) \\ 55669.74 & 190.74 & ... & ... & 14.97 (0.03) & 14.03 (0.02) & 14.69 (0.03) & FLWO (KeplerCam) \\ 55673.70 & 194.70 & ... & 15.21 (0.03) & 14.97 (0.03) & 14.04 (0.02) & 14.69 (0.03) & FLWO (KeplerCam) \\ 55682.68 & 203.68 & ... & 15.23 (0.03) & 15.00 (0.03) & 14.04 (0.02) & 14.72 (0.03) & FLWO (KeplerCam) \\ 55686.68 & 207.68 & ... & 15.22 (0.03) & ... & 14.03 (0.02) & 14.73 (0.03) & FLWO (KeplerCam) \\ 55688.67 & 209.67 & ... & 15.24 (0.03) & 14.98 (0.03) & 14.02 (0.02) & 14.72 (0.03) & FLWO (KeplerCam) \\ 55691.70 & 212.70 & ... & 15.21 (0.03) & 15.02 (0.03) & 14.04 (0.02) & 14.76 (0.03) & FLWO (KeplerCam) \\ 55696.63 & 217.63 & ... & ... & 15.01 (0.03) & 14.04 (0.02) & 14.74 (0.03) & FLWO (KeplerCam) \\ 55698.67 & 219.67 & ... & ... & 15.04 (0.03) & 14.04 (0.02) & 14.76 (0.03) & FLWO (KeplerCam) \\ 55704.68 & 225.68 & ... & ... & 15.01 (0.03) & 14.04 (0.02) & 14.78 (0.03) & FLWO (KeplerCam) \\ 55717.68 & 238.68 & ... 
& 15.27 (0.03) & 15.05 (0.03) & 14.05 (0.02) & 14.83 (0.03) & FLWO (KeplerCam) \\ 55855.99 & 376.99 & ... & 16.38 (0.03) & ... & 14.72 (0.02) & 15.72 (0.03) & FLWO (KeplerCam) \\ 55866.97 & 387.97 & ... & 16.39 (0.03) & 16.15 (0.03) & 14.79 (0.02) & 15.79 (0.03) & FLWO (KeplerCam) \\ 55888.01 & 409.01 & ... & 16.71 (0.03) & 16.40 (0.03) & 15.01 (0.02) & 15.98 (0.03) & FLWO (KeplerCam) \\ 55888.98 & 409.98 & ... & 16.70 (0.03) & 16.43 (0.03) & 15.02 (0.02) & 15.98 (0.03) & FLWO (KeplerCam) \\ 55916.98 & 437.98 & ... & 17.08 (0.03) & 16.77 (0.03) & 15.25 (0.02) & 16.27 (0.03) & FLWO (KeplerCam) \\ 55918.82 & 439.82 & ... & 17.20 (0.03) & 16.82 (0.03) & 15.29 (0.02) & 16.27 (0.03) & FLWO (KeplerCam) \\ 55948.84 & 469.84 & ... & 17.47 (0.03) & 17.06 (0.03) & 15.52 (0.02) & 16.52 (0.03) & FLWO (KeplerCam) \\ 55952.80 & 473.80 & ... & 17.41 (0.03) & 17.03 (0.03) & 15.50 (0.02) & 16.57 (0.03) & FLWO (KeplerCam) \\ 55972.87 & 493.87 & ... & 17.54 (0.03) & 17.33 (0.03) & 15.62 (0.02) & 16.73 (0.03) & FLWO (KeplerCam) \\ 55978.77 & 499.77 & ... & 17.62 (0.03) & 17.26 (0.03) & 15.65 (0.02) & 16.71 (0.03) & FLWO (KeplerCam) \\ 56009.73 & 530.73 & ... & 17.80 (0.03) & 17.62 (0.03) & 15.81 (0.02) & 16.97 (0.03) & FLWO (KeplerCam) \\ 56037.66 & 558.66 & ... & 18.16 (0.03) & 17.83 (0.03) & 15.95 (0.02) & 17.23 (0.03) & FLWO (KeplerCam) \\ 56040.65 & 561.65 & ... & 18.01 (0.03) & 17.84 (0.03) & 15.99 (0.02) & ... & FLWO (KeplerCam) \\ 56060.63 & 581.63 & ... & 18.36 (0.03) & 17.89 (0.03) & 16.06 (0.02) & 17.33 (0.03) & FLWO (KeplerCam) \\ 56065.63 & 586.63 & ... & 18.29 (0.03) & 18.02 (0.03) & 16.12 (0.02) & 17.48 (0.03) & FLWO (KeplerCam) \\ 56241.73 & 762.73 & ... & 19.43 (0.03) & 19.19 (0.03) & 17.04 (0.02) & 18.76 (0.03) & NOT (ALFOSC) \\ 56248.83 & 769.83 & ... & 19.61 (0.03) & 19.31 (0.03) & 17.10 (0.02) & 18.52 (0.03) & MAG (IMACS) \\ 56327.54 & 848.54 & ... & 19.91 (0.03) & 19.73 (0.03) & 17.59 (0.02) & ... 
& NOT (ALFOSC) \\ 56396.48 & 917.48 & 19.16 (0.12) & 20.43 (0.03) & 20.40 (0.03) & 18.19 (0.02) & 20.14 (0.03) & NOT (ALFOSC) \\ \enddata \tablenotetext{a}{Relative to first detection date, JD 2,455,479.0.} \end{deluxetable*} \begin{deluxetable*}{lccccl} \tabletypesize{\footnotesize} \tablecaption{NIR Photometry. \label{table_phot_ir} } \tablewidth{0pt} \tablehead{ \colhead{JD} &\colhead{Epoch}&\colhead{J}& \colhead{H}&\colhead{K}&\colhead{Telescope(Instrument) }\\ } \startdata 55505.94 & 26.94 & 12.84 (0.06) & 12.56 (0.07) & ... & PAIRITEL (2MASS) \\ 55507.99 & 28.99 & ... & ... & 12.32 (0.08) & PAIRITEL (2MASS) \\ 55508.98 & 29.98 & ... & 12.57 (0.07) & ... & PAIRITEL (2MASS) \\ 55511.96 & 32.96 & 12.88 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 55514.90 & 35.90 & 12.89 (0.06) & 12.56 (0.07) & ... & PAIRITEL (2MASS) \\ 55517.97 & 38.97 & ... & 12.68 (0.07) & ... & PAIRITEL (2MASS) \\ 55518.95 & 39.95 & 12.88 (0.06) & ... & 12.36 (0.08) & PAIRITEL (2MASS) \\ 55524.00 & 45.00 & 12.92 (0.06) & 12.61 (0.07) & ... & PAIRITEL (2MASS) \\ 55525.03 & 46.03 & 12.87 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 55533.85 & 54.85 & 13.03 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 55536.95 & 57.95 & 13.04 (0.06) & ... & 12.34 (0.08) & PAIRITEL (2MASS) \\ 55537.95 & 58.95 & 13.04 (0.06) & 12.69 (0.07) & 12.41 (0.08) & PAIRITEL (2MASS) \\ 55540.87 & 61.87 & 13.05 (0.06) & 12.82 (0.07) & 12.43 (0.08) & PAIRITEL (2MASS) \\ 55569.92 & 90.92 & 13.27 (0.06) & 13.04 (0.07) & 12.54 (0.08) & PAIRITEL (2MASS) \\ 55572.84 & 93.84 & 13.26 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 55575.83 & 96.83 & 13.28 (0.06) & 13.07 (0.07) & 12.69 (0.08) & PAIRITEL (2MASS) \\ 55578.79 & 99.79 & 13.30 (0.06) & 13.03 (0.07) & 12.66 (0.08) & PAIRITEL (2MASS) \\ 55599.77 & 120.77 & 13.42 (0.06) & 13.19 (0.07) & 12.76 (0.08) & PAIRITEL (2MASS) \\ 55602.70 & 123.70 & 13.44 (0.06) & 13.13 (0.07) & 12.81 (0.08) & PAIRITEL (2MASS) \\ 55607.80 & 128.80 & 13.47 (0.06) & ... & ... 
& PAIRITEL (2MASS) \\ 55617.74 & 138.74 & ... & 13.25 (0.07) & 12.86 (0.08) & PAIRITEL (2MASS) \\ 55620.76 & 141.76 & 13.53 (0.06) & ... & 12.86 (0.08) & PAIRITEL (2MASS) \\ 55623.77 & 144.77 & ... & 13.29 (0.07) & ... & PAIRITEL (2MASS) \\ 55629.73 & 150.73 & 13.55 (0.06) & ... & 12.84 (0.08) & PAIRITEL (2MASS) \\ 55640.73 & 161.73 & 13.63 (0.06) & 13.31 (0.07) & ... & PAIRITEL (2MASS) \\ 55643.73 & 164.73 & 13.59 (0.06) & ... & 12.91 (0.08) & PAIRITEL (2MASS) \\ 55644.71 & 165.71 & 13.57 (0.06) & 13.32 (0.07) & ... & PAIRITEL (2MASS) \\ 55647.79 & 168.79 & 13.61 (0.06) & ... & 12.99 (0.08) & PAIRITEL (2MASS) \\ 55651.72 & 172.72 & 13.63 (0.06) & 13.36 (0.07) & 12.96 (0.08) & PAIRITEL (2MASS) \\ 55670.66 & 191.66 & 13.61 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 55676.68 & 197.68 & 13.65 (0.06) & 13.49 (0.07) & 13.06 (0.08) & PAIRITEL (2MASS) \\ 55688.63 & 209.63 & 13.63 (0.06) & 13.51 (0.07) & 12.95 (0.08) & PAIRITEL (2MASS) \\ 55705.68 & 226.68 & 13.72 (0.06) & 13.51 (0.07) & ... & PAIRITEL (2MASS) \\ 55708.67 & 229.67 & 13.66 (0.06) & 13.52 (0.07) & ... & PAIRITEL (2MASS) \\ 55947.89 & 468.89 & 13.76 (0.06) & 12.76 (0.07) & 11.88 (0.08) & PAIRITEL (2MASS) \\ 55949.87 & 470.87 & ... & ... & 11.79 (0.08) & PAIRITEL (2MASS) \\ 55999.65 & 520.65 & 14.01 (0.06) & ... & 11.86 (0.08) & PAIRITEL (2MASS) \\ 56002.59 & 523.59 & ... & 12.81 (0.07) & ... & PAIRITEL (2MASS) \\ 56010.63 & 531.63 & ... & 12.80 (0.07) & 11.96 (0.08) & PAIRITEL (2MASS) \\ 56011.63 & 532.63 & 14.06 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 56013.71 & 534.71 & ... & 12.84 (0.07) & 11.94 (0.08) & PAIRITEL (2MASS) \\ 56014.63 & 535.63 & 14.06 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 56015.69 & 536.69 & ... & 12.82 (0.07) & 11.86 (0.08) & PAIRITEL (2MASS) \\ 56017.69 & 538.69 & 14.06 (0.06) & 12.84 (0.07) & ... & PAIRITEL (2MASS) \\ 56019.70 & 540.70 & ... & ... & 11.88 (0.08) & PAIRITEL (2MASS) \\ 56025.70 & 546.70 & 14.14 (0.06) & ... & ... 
& PAIRITEL (2MASS) \\ 56026.68 & 547.68 & 14.15 (0.06) & 12.88 (0.07) & ... & PAIRITEL (2MASS) \\ 56027.69 & 548.69 & 14.10 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 56028.68 & 549.68 & 14.16 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 56029.68 & 550.68 & 14.16 (0.06) & ... & 11.95 (0.08) & PAIRITEL (2MASS) \\ 56031.68 & 552.68 & 14.16 (0.06) & ... & 12.03 (0.08) & PAIRITEL (2MASS) \\ 56035.61 & 556.61 & ... & 12.88 (0.07) & 12.02 (0.08) & PAIRITEL (2MASS) \\ 56036.68 & 557.68 & 14.17 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 56055.71 & 576.71 & 14.22 (0.06) & ... & 11.98 (0.08) & PAIRITEL (2MASS) \\ 56058.71 & 579.71 & ... & 12.97 (0.07) & ... & PAIRITEL (2MASS) \\ 56060.68 & 581.68 & 14.26 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 56061.67 & 582.67 & 14.30 (0.06) & 12.99 (0.07) & ... & PAIRITEL (2MASS) \\ 56066.65 & 587.65 & 14.23 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 56217.02 & 738.02 & 15.03 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 56226.02 & 747.02 & 14.99 (0.06) & 13.50 (0.07) & 12.37 (0.08) & PAIRITEL (2MASS) \\ 56228.02 & 749.02 & 15.09 (0.06) & ... & ... & PAIRITEL (2MASS) \\ 56229.00 & 750.00 & ... & 13.58 (0.07) & ... & PAIRITEL (2MASS) \\ 56231.97 & 752.97 & 15.12 (0.06) & ... & 12.29 (0.08) & PAIRITEL (2MASS) \\ 56344.39 & 865.39 & 15.80 (0.06) & 13.77 (0.07) & 12.60 (0.08) & NOT (NOTCAM) \\ 56374.44 & 895.44 & 15.90 (0.06) & 14.02 (0.07) & 12.73 (0.08) & NOT (NOTCAM) \\ 56621.69 & 1142.7&17.28 (0.25)&14.40 (0.11)&12.99 (0.12)& NOT (NOTCAM)\\ \enddata \end{deluxetable*} \subsection{Spectroscopy} \label{sec-spec} \subsubsection{$HST$-COS and STIS Observations and Data Reduction} HST observations of SN 2010jl with the Space Telescope Imaging Spectrograph (STIS) were obtained at four epochs, 34, 44, 107 and 573 days after first detection. The observations were carried out as part of program GO-12242 `UV Studies of a Core Collapse Supernova'. The G230LB and G430L gratings were used at each epoch with the $52\times 0.2\arcsec$ slit.
The data were reduced using the standard HST Space Telescope Science Data Analysis System (STSDAS) routines to bias subtract, flat-field, extract, wavelength calibrate, and flux-calibrate each SN spectrum. The spectral resolution of these gratings corresponds to $500-600 \rm ~km~s^{-1}$ at the short wavelength limit of the gratings and $250-300 \rm ~km~s^{-1}$ at the long wavelength limit. Table \ref{hsttable} summarizes the HST observations, including exposure times and spectral resolutions. \ifx \apjloutput \undefined \begin{deluxetable*}{lccclccr} \footnotesize \fi \tabletypesize{\scriptsize} \tablewidth{0in} \tablecaption{Log of HST observations for SN 2010jl\label{hsttable}} \tablehead{ \colhead{Date} & \colhead{J.D.}&\colhead{Epoch}&\colhead{Instrument}&\colhead{Grism/Grating}&\colhead{Wavelength}&\colhead{Spectral}&\colhead{Exposure} \\ \colhead{} &\colhead{2,450,000+} &\colhead{(days)$^a$}&\colhead{}&\colhead{} &\colhead{ range (\AA)}&\colhead{resolution} &\colhead{s} \\ } \startdata 2010-11-11&5512.37&33.9&STIS&G230LB&1685-3065&615-1135&3550\\ &5512.45&&STIS&G430L&2900-5700&530-1040&700\\ 2010-11-22&5522.69&44.2&COS&G130M&1150-1450&16,000-21,000&1000\\ &5522.70&&COS&G160M&1405-1775&16,000-21,000&1000\\ &5522.56&&STIS&G230LB&1685-3065&615-1135&3575\\ &5522.64&&STIS&G430L&2900-5700&530-1040&700\\ 2011-01-23&5585.33&106.8&COS&G130M&1150-1450&16,000-21,000&2380\\ &5585.39&&COS&G160M&1405-1775&16,000-21,000&2950\\ &5585.13&&STIS&G230LB&1685-3065&615-1135&2300\\ &5585.27&&STIS&G430L&2900-5700&530-1040&1200\\ 2012-05-03&6051.45&573.0&STIS&G230L&1568-3184&615-1135&4400\\ &6051.39&&STIS&G430L&2900-5700&530-1040&600\\ 2012-06-20&6098.75&620.6&COS&G130M&1150-1450&16,000-21,000&7400\\ &6099.41&&COS&G160M&1405-1775&16,000-21,000&7400 \enddata \tablenotetext{a}{Relative to first detection date, 2010 Oct.
9, JD 2,455,479} \ifx \apjloutput \undefined \end{deluxetable*} \fi The SN was also observed with the medium resolution far-UV modes of $HST$-COS (G130M and G160M) at 44, 107 and 621 days. A description of the COS instrument and on-orbit performance characteristics can be found in \cite{Osterman2011} and \cite{Green2012}. All observations were centered on SN 2010jl (R.A. = 09$^{\mathrm h}$ 42$^{\mathrm m}$ 53.33$^{\mathrm s}$, Dec. = +09\arcdeg 29\arcmin 41.8\arcsec ; J2000) and COS performed NUV imaging target acquisitions with the PSA/MIRRORB mode. The MIRRORB configuration introduces optical distortions into the target acquisition image, but a first-order analysis indicates that the observations were centered on the point-like SN. Note that COS has only limited spatial resolution, so that objects as far as $2\arcsec$ from the center contribute to the spectrum. This matters for objects with nearby H II regions: our last COS spectrum of SN 2010jl, at 621 days, is strongly contaminated by the host galaxy. The G130M data were processed with the COS calibration pipeline, CALCOS\footnote{We refer the reader to the Cycle 18 COS Instrument Handbook for more details: {\tt http://www.stsci.edu/hst/cos/documents/handbooks/current/cos\_cover.html}} v2.12, and combined with the custom IDL coaddition procedure described by~\citet{Danforth2010} and~\citet{Shull2010}. The coaddition routine interpolates all detector segments and grating settings onto a common wavelength grid, and makes a correction for the detector QE-enhancement grid. No correction for the detector hex pattern is performed. The data cover the 1155~--~1773~\AA\ bandpass, with two breaks at the COS detector segment gaps, [$\lambda_{gap}$,$\Delta\lambda_{gap}$] = [1302~\AA, 16~\AA] and [1592~\AA, 20~\AA] for the G130M and G160M modes, respectively.
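The coaddition step described above, interpolating the detector segments and grating settings onto a common wavelength grid before combining, can be illustrated with a minimal sketch. This is not the CALCOS/IDL procedure of \citet{Danforth2010}; the inverse-variance weighting and the grid spacing are our assumptions:

```python
import numpy as np

def coadd_segments(segments, dl=0.06):
    """Coadd spectra from several detector segments / grating settings.

    segments: list of (wavelength, flux, error) arrays (Angstrom, F_lambda units).
    dl: spacing of the common wavelength grid in Angstrom (illustrative value).
    """
    lo = min(w.min() for w, f, e in segments)
    hi = max(w.max() for w, f, e in segments)
    grid = np.arange(lo, hi + dl, dl)
    num = np.zeros_like(grid)
    den = np.zeros_like(grid)
    for w, f, e in segments:
        # interpolate each segment onto the common grid; NaN outside coverage
        fi = np.interp(grid, w, f, left=np.nan, right=np.nan)
        ei = np.interp(grid, w, e, left=np.nan, right=np.nan)
        ok = np.isfinite(fi) & np.isfinite(ei) & (ei > 0)
        num[ok] += fi[ok] / ei[ok] ** 2   # inverse-variance weighted sum
        den[ok] += 1.0 / ei[ok] ** 2
    flux = np.where(den > 0, num / np.maximum(den, 1e-300), np.nan)
    return grid, flux
```

Pixels covered by two overlapping settings are averaged with inverse-variance weights; uncovered pixels (e.g. the detector segment gaps) remain NaN.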
The resolving power of the medium resolution COS modes is $R$~$\equiv$~$\lambda / \Delta\lambda$~$\approx$~18,000 ($\Delta$$v$~=~17 km s$^{-1}$). \subsubsection{Optical spectroscopy} Complementing the HST observations, an extensive spectroscopic campaign involving several ground-based telescopes was launched, and a log of all spectroscopic observations is available in Table~\ref{alfosctable}. \begin{deluxetable*}{lccccccl} \tabletypesize{\footnotesize} \tablecaption{Journal of spectroscopic observations \label{alfosctable}} \tablehead{ \colhead{UT date } & \colhead{J.D.} &\colhead{Epoch$^a$}&\colhead{Range}& \colhead{FWHM res.}&\colhead{Exposure}&\colhead{airmass}&\colhead{Instrument} \\ &\colhead{2,450,000+}&\colhead{days}&\colhead{\AA} &\colhead{\AA}&\colhead{s}&&\\ } \startdata 2010-11-07 & 5507.53& 28.5 &$3475-7415$&6.2 & 240 & 1.13&FAST/300GPM\\ 2010-11-09& 5509.71 & 30.7 & $6350-6850$ & 1.2 & 600x4 & 1.30 & ALFOSC/gr17 \\ 2010-11-10 & 5510.51&31.5&$3475-7415$&6.2 &600& 1.16&FAST/300GPM\\ 2010-11-10& 5510.72 & 31.7 & $3500-5060$ & 1.9 & 400x3 & 1.25 & ALFOSC/gr16 \\ 2010-11-14& 5515.4\phantom{0}& 36.4&$3177-8527$&6.5&120&1.72&MMT/300GPM\\ 2010-11-15& 5516.4\phantom{0}& 37.4&$3188-8534$&6.5&120&1.14&MMT/300GPM\\ 2010-11-16& 5517.4\phantom{0}& 38.4&$3186-4516$&1.45&1200&1.40&MMT/1200GPM\\ 2010-11-16& 5517.5\phantom{0}& 38.5&$4395-5731$&1.45&750&1.35&MMT/1200GPM\\ 2010-11-16& 5517.5\phantom{0}& 38.5&$5587-6931$&1.45&600&1.30&MMT/1200GPM\\ 2010-11-16& 5517.5\phantom{0}& 38.5&$3177-8399$&6.5&120&1.15&MMT/300GPM\\ 2010-11-18& 5518.68 & 39.7 & $6350-6850$ & 1.2 & 600x2 & 1.37 & ALFOSC/gr17 \\ 2010-11-18& 5518.71 & 39.7 & $3500-5060$ & 1.9 & 500x3 & 1.21 & ALFOSC/gr16 \\ 2010-11-24& 5524.76 & 45.8 & $6350-6850$ & 1.2 & 600x2 & 1.07 & ALFOSC/gr17 \\ 2010-11-24& 5524.72 & 45.7 & $3500-5060$ & 2.9 & 500x3 & 1.13 & ALFOSC/gr16 \\ 2010-11-28& 5529.4\phantom{0}& 50.4 &$5653-7538$&2.0&900&1.4&MMT/832GPM\\ 2010-12-06 & 5536.54& 57.5&$3475-7415$&6.2&900&1.11&FAST/300GPM\\ 2010-12-07 & 5537.49&
58.5&$3475-7415$&6.2&1500&1.08&FAST/300GPM\\ 2010-12-09& 5540.40& 61.4&$5575-6908$&1.45&450&1.1&MMT/1200GPM\\ 2010-12-11 & 5541.50& 62.5&$3475-7415$&6.2&900&1.08&FAST/300GPM\\ 2010-12-12 & 5542.47& 63.5&$3475-7415$&6.2&1020&1.09&FAST/300GPM\\ 2010-12-16 & 5546.41 & 67.4 & $3850-9000$ & 0.56 & 3600 & 1.14 & TRES\\ 2010-12-19 & 5549.47 & 70.5 & $3850-9000$ & 0.56 & 3000 & 1.08 & TRES\\ 2010-12-28 & 5558.37 & 79.4 & $3850-9000$ & 0.56 & 3600 & 1.17 & TRES\\ 2010-12-28& 5558.63 & 79.6 & $6350-6850$ & 1.2 & 600x3 & 1.13 & ALFOSC/gr17 \\ 2010-12-28& 5558.65 & 79.7 & $3500-5060$ & 2.9 & 600x3 & 1.08 & ALFOSC/gr16 \\ 2011-01-02 & 5563.36&84.4&$3475-7415$&6.2&900&1.19&FAST/300GPM\\ 2011-01-05 & 5566.43& 87.4&$3475-7415$&6.2&1020&1.08&FAST/300GPM\\ 2011-01-08 & 5569.39& 90.4&$3475-7415$&6.2&900&1.09&FAST/300GPM\\ 2011-01-13 & 5574.45 & 95.5 & $3850-9000$ & 0.56 & 3600 & 1.15 & TRES\\ 2011-01-28 & 5589.29& 110.3&$3475-7415$&6.2&300&1.20&FAST/300GPM\\ 2011-01-31& 5592.55 & 113.6 & $6350-6850$ & 1.2 & 600x4 & 1.10 & ALFOSC/gr17 \\ 2011-01-31& 5592.59 & 113.6 & $3500-5060$ & 1.9 & 600x2 & 1.06 & ALFOSC/gr16 \\ 2011-02-05 & 5597.41& 118.4&$3475-7415$&6.2&1200&1.18&FAST/300GPM\\ 2011-02-26 & 5618.14 & 139.1&$3475-7415$&6.2&1200&1.56&FAST/300GPM\\ 2011-03-01 & 5621.27& 142.3&$3475-7415$&6.2&1500 &1.08&FAST/300GPM\\ 2011-03-01& 5621.5\phantom{0} &142.5 & $6350-6850$ & 1.2 & 1200x3 & 1.18 &ALFOSC/gr17 \\ 2011-03-03 & 5623.23 & 144.2&$3475-7415$&6.2& 1544 & 1.10&FAST/300GPM\\ 2011-03-09 & 5629.21 & 150.2&$3475-7415$&6.2&900 & 1.11&FAST/300GPM\\ 2011-03-10&5630.56 &151.6 & $3500-5060$ & 1.9 & 900x1 & 1.17 &ALFOSC/gr16 \\ 2011-03-16 & 5636.23 & 157.2&$3475-7415$&6.2&900 & 1.08&FAST/300GPM\\ 2011-03-29&5650.41 & 171.4 & $6350-6850$ & 1.2 & 900x3 & 1.07 &ALFOSC/gr17\\ 2011-04-03 & 5654.18 & 175.2&$3475-7415$&6.2& 819 & 1.08&FAST/300GPM\\ 2011-04-08 & 5660.41& 181.4 & $3500-5060$ & 1.9 & 900x3 & 1.06 &ALFOSC/gr16 \\ 2011-05-19&5701.40 & 222.4 & $6350-6850$ & 1.2 & 900x3 & 1.31
&ALFOSC/gr17\\ 2011-05-21& 5703.39&224.4 & $3500-5060$ & 1.9 & 900x2 & 1.29 &ALFOSC/gr16 \\ 2011-10-28 & 5862.52 & 383.5&$3475-7415$&6.2& 1010 & 1.28&FAST/300GPM\\ 2011-10-30 & 5864.48 & 385.5&$3475-7415$&6.2& 1200 & 1.44&FAST/300GPM\\ 2011-11-03 & 5868.49 &389.5&$3475-7415$&6.2&1800 & 1.33&FAST/300GPM\\ 2011-11-28 & 5894.5\phantom{0} &415.5&$5653-7538$&0.71/px&900&1.2& MMT/832GPM \\ 2011-12-24 & 5919.51 &440.5&$3475-7415$&6.2& 1800& 1.15&FAST/300GPM\\ 2011-12-27 & 5922.36 &443.4&$3475-7415$&6.2& 1800 & 1.26&FAST/300GPM\\ 2011-12-31 & 5926.31 &447.3&$3475-7415$&6.2& 1800 & 1.47&FAST/300GPM\\ 2012-01-14 & 5940.23 & 461.2&$3000-10200$&0.1&5376&1.22-1.41&X-SHOOTER\\ 2012-01-14 & 5940.23 & 461.2&$10200-24800$&0.3&5928&1.22-1.41&X-SHOOTER\\ 2012-01-18 & 5944.32 & 465.3&$3475-7415$&6.2&1800&1.16&FAST/300GPM\\ 2012-03-24 & 6010.15 &531.2&$3475-7415$&6.2& 1800 & 1.14&FAST/300GPM\\ 2012-05-12 & 6059.15 &580.2&$3475-7415$&6.2& 1800 & 1.22&FAST/300GPM\\ 2012-05-15 & 6062.18 &583.2 &$3475-7415$&6.2&1800 & 1.43&FAST/300GPM\\ 2012-05-29& 6076.18 & 597.2 &$4000-6900$& 3.5 & 900x3 & 1.9 & MDM/OSMOS \\ 2012-06-15 & 6093.16 &614.2&$3475-7415$&6.2& 1200 & 2.40&FAST/300GPM\\ 2012-10-20 & 6221.00 & 742.0 &$3475-7415$& 6.2 & 1800 & 1.38 & FAST/300GPM \\ 2012-11-12 & 6244.00 & 765.0 &$3475-7415$& 6.2 & 1200 & 1.10 & FAST/300GPM \\ 2012-11-14 & 6246.00 & 767.0 &$3475-7415$& 6.2 & 1800 & 1.20 & FAST/300GPM \\ 2012-12-21&6282.8\phantom{0}&803.8&$5587-6931$&1.45&1200&1.1&MMT/1200GPM\\ 2013-02-02&6325.8\phantom{0}&846.8&$3177-8527$&6.5&1800&1.2&MMT/300GPM\\ 2013-02-03&6326.8\phantom{0}&847.8&$5587-6931$&1.45&1800&1.3&MMT/1200GPM\\ 2013-11-10&6606.7\phantom{0}&1127.7&$6400-6900$&1.45&1800&1.3&MMT/1200GPM\\ \enddata \tablenotetext{a}{Relative to first detection date, JD 2,455,479.0.} \end{deluxetable*} Optical spectroscopy of SN~2010jl was obtained on 16 epochs with the 2.5 m Nordic Optical Telescope (NOT) on La Palma, Spain with the ALFOSC spectrograph. 
To optimize the resolution for this narrow-line SN we used two set-ups. Grism 17 covers essentially only the H$\alpha$ region, while grism 16 was used for the bluer part, covering the rest of the Balmer series and connecting with the HST NUV data. All observations were performed at the parallactic angle, and the airmass was always less than 1.4 at the start of the observations. To further enhance the resolution we used narrow slits, of width 0.9 arcsec for grism 17 and 0.5-0.75 arcsec for grism 16. The spectra were reduced in a standard manner using IRAF scripts as implemented in the QUBA pipeline. Wavelength calibrations were determined from exposures of HeNe arc lamps, and were checked against bright night sky emission lines. Flux calibration was performed by means of spectrophotometric standard stars. A large number of optical spectra (3400--7300~\AA) were also obtained at the FLWO 1.5 m Tillinghast telescope using the FAST spectrograph \citep{Fabricant98} from day 28 to day 767. FLWO/FAST data were reduced using a combination of standard IRAF and custom IDL procedures \citep{Matheson05}. High resolution spectra were also obtained with TRES (Tillinghast Reflector Echelle Spectrograph), which is a fiber-fed echelle spectrograph on the 1.5-meter Tillinghast telescope at FLWO. Moderate-resolution optical spectra were obtained with the 2.4 m Hiltner telescope at MDM Observatory, on Kitt Peak, AZ on day 597. The Ohio State Multi-Object Spectrograph (OSMOS; \citealt{Martini2011}) was used with the VPH grism and $1.4\arcsec$ inner slit in combination with the MDM4K CCD detector. These spectra were reduced and calibrated employing standard techniques in IRAF. Cosmic rays and obvious cosmetic defects have been removed. Wavelengths were checked against night sky emission lines and flux calibrations were applied using observations of \cite{Stone1977} and \cite{Massey1990} standard stars. The 6.5 m MMT spectra were all taken with the Blue Channel spectrograph.
All observations used the $1.0 \arcsec$ slit oriented at the parallactic angle. The reduction procedure was the same as for the FLWO/FAST spectra. One medium resolution spectrum, covering both the optical and NIR ranges, was obtained with X-shooter at the ESO Very Large Telescope (VLT) on day 461 (2012 Jan. 14). The slit used was 1.0\arcsec \ in the UV and 0.9\arcsec \ in the optical and NIR. The resolutions were 4350, 7450 and 5300 in the UV, optical and NIR, respectively, corresponding to 69, 40 and 56 $\rm ~km~s^{-1}$, respectively. The X-shooter spectrum was pre-reduced using version 1.1.0 of the dedicated ESO pipeline \citep{Goldoni}, with calibration frames (biases, darks, flat fields and arc lamps) taken during daytime. Spectrum extraction and flux calibration were done using standard IRAF tasks. For the latter we used a spectrophotometric standard taken from the ESO list (http://www.eso.org/sci/facilities/paranal/instruments/xshooter/tools/specphot\_list.html) and observed during the same night. Telluric bands were removed using a telluric standard spectrum taken at the same airmass as the SN. Absolute fluxes of all our spectra were determined from r' band photometry. With the strong H$\alpha$ line from the supernova, this is the band least affected by the galaxy background. For the last epoch MMT spectrum at 1128 days we do not have any simultaneous optical photometry, and we estimate that the absolute flux is only accurate to $\sim 25\%$. \section{RESULTS} \label{sec-results} \subsection{Reddening} \label{sec-redd} From \cite{Schlegel1998}, \citet{Smith2011} estimate a Milky Way reddening corresponding to E$_{\rm (B-V)} = 0.027$ mag. Based on the weak Na I lines, \cite{Smith2011} assume negligible reddening from the host galaxy. From the damping wings of the Ly$\alpha$ absorption in our COS spectra we can make an independent estimate of the reddening for both the host galaxy and the Milky Way. We discuss the COS spectrum in detail later.
Here we use the damping wings of the Ly$\alpha$ absorption to derive a value of the column density, $N_{\rm H}$ \citep[e.g.,][Chap. 9.4]{Draine2011}. The Milky Way absorption is affected by the Si III $\lambda 1206.5$ absorption on the red side, so we use the blue side for the fit. For the host galaxy we instead use the red side to avoid the slight overlap with the Milky Way absorption. From fits of these, shown in Figure \ref{fig_nh}, we derive a value of $N_{\rm H I} = (1.05\pm 0.3)\EE{20}$ cm$^{-2}$ for $b = 10 \rm ~km~s^{-1}$ for the host galaxy, while the corresponding value for the Milky Way is $N_{\rm H I} = (1.75\pm 0.25)\EE{20}$ cm$^{-2}$. There is some uncertainty in the Milky Way column density, because the Si III $\lambda 1206.5$ absorption line from the host galaxy falls on the red edge of the Milky Way Ly$\alpha$ profile. \begin{figure}[t!] \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{HIabs_SN2010jl_030513kf.eps}} \caption{Fit to the Ly$\alpha$ absorptions on 23 January 2011 from the Milky Way and from the host galaxy. The blue line shows the fit, while the black line shows the observations and the red error bars the $1\sigma$ errors. } \label{fig_nh} \end{center} \end{figure} Using the $N_{\rm H I}$ vs. E$_{\rm (B-V)}$ relation in \cite{Bohlin1978}, and assuming the same $N(H_2)/N(H I)$ ratio for the host galaxy of SN 2010jl as in the Milky Way, we find E$_{\rm (B-V)} = 0.036$ mag for the Milky Way and E$_{\rm (B-V)} = 0.022$ mag for the host galaxy. The value for the Milky Way agrees well with that derived from the FIR emission, but we also note that there is a non-negligible absorption from the host galaxy. For the remainder of the paper, where needed, we will adopt a value of E$_{\rm (B-V)} = 0.058$ mag for the total reddening.
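The column-density-to-reddening conversion can be sketched as follows, using the \cite{Bohlin1978} ratio $[N({\rm H I})+2N({\rm H_2})]/E_{\rm (B-V)} \simeq 5.8\times 10^{21}$ cm$^{-2}$ mag$^{-1}$. The molecular ratio $2N({\rm H_2})/N({\rm H I}) = 0.2$ is an illustrative Milky-Way-like assumption on our part, not a number quoted in the text:

```python
# Bohlin et al. (1978): N(HI) + 2N(H2) = 5.8e21 cm^-2 per magnitude of E(B-V)
BOHLIN_RATIO = 5.8e21  # cm^-2 mag^-1

def ebv_from_nhi(n_hi, h2_to_hi=0.2):
    """Reddening E(B-V) from an HI column density.

    n_hi     : N(HI) in cm^-2, e.g. from a Ly-alpha damping-wing fit
    h2_to_hi : assumed 2*N(H2)/N(HI) ratio (0.2 is an illustrative
               Milky-Way-like value, not taken from the paper)
    """
    n_h_total = n_hi * (1.0 + h2_to_hi)  # total H nuclei column
    return n_h_total / BOHLIN_RATIO

# columns from the damping-wing fits
print(ebv_from_nhi(1.75e20))  # ~0.036 mag (Milky Way)
print(ebv_from_nhi(1.05e20))  # ~0.022 mag (host galaxy)
```

With this molecular correction the sketch reproduces both quoted reddening values to within rounding.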
\subsection{Light curves and total energy output} \label{sec_phot_results} \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[scale=.80,angle=0]{2010jl_CFA_photometry_texp_cleaned_v8.eps}} \caption{Light curves of SN 2010jl in different bands. Triangles are early, pre-discovery I- and V-band observations from \cite{Stoll2010}.} \label{fig_photometry} \end{center} \end{figure} Figure \ref{fig_photometry} shows the light curve from our photometry, complemented with early measurements from \cite{Stoll2010}. The B, V and i' bands all show nearly the same slow decline of $\sim 0.8$ mag/100 days during the first $\sim 175$ days. After this the light curves become almost constant up to $\sim 240$ days. However, the r' band already shows signs of flattening at age 100 days. We discuss this band, dominated by H$\alpha$, in more detail in Sect. \ref{sec-profiles}. Being most sensitive to the decreasing temperature, the u' band shows the steepest decline. After the first gap produced by conjunction with the sun, all optical bands show a considerably faster decline at $\ga 300$ days. From 400 -- 850 days the decline in the optical bands is roughly linear. The decline rate is $8.0 \times 10^{-3}$ mag/day in i', $4.9 \times 10^{-3}$ mag/day in r', $8.0 \times 10^{-3}$ mag/day in V and $7.1\times 10^{-3}$ mag/day in B. We attribute the slow r' band decline to emission in H$\alpha$: as seen in the spectra, H$\alpha$ becomes stronger with respect to the continuum as time goes by (Sect. \ref{sec_spec_results}). Our photometry agrees with the R-band photometry of \cite{Ofek2013} to within 0.1 mag over the whole period. Up to day 200 our optical photometry also agrees, to within $\sim 0.1-0.2$ mag in the r', B and V bands, with the light curves of \cite{Zhang}, in spite of the difference in filter profiles between our r' band and their R band. This is probably because the r' and R bands are both dominated by H$\alpha$.
Our i' magnitudes are $\sim 0.5$ mag fainter than the I magnitudes of Zhang et al. and $\sim 0.3$ mag fainter in u' compared to their U band. Most likely this difference can be explained by the different filter responses of the SDSS i' and u' bands and the standard I and U bands. At epochs later than 350 days our magnitudes are 0.5 -- 1.5 mag fainter in all bands, a difference which increases with time. The origin of this is not clear, but we note that Zhang et al. do not give any errors in their table of the photometry for these epochs. Also, our photometry showed a similar flattening of the light curve until we subtracted the background from the host galaxy. The NIR light curves show a similar decline to the optical up to $\sim 200$ days. After this the light curves flatten considerably in J and H, while the K-band shows an increase compared to the flux at earlier times. The difference between the NIR and the optical behavior is real and has important consequences for our physical picture of SN~2010jl. As we discuss in Sect. \ref{sec_dust}, we interpret this emission as coming from dust in the CSM of the progenitor. The pseudo-bolometric light curve for the flux in the wavelength range 3500-25,000 \AA \ was calculated using the method described in Ergon et al. (2013). In the optical region, where we have a well sampled spectral sequence, we made use of both photometry and spectroscopy to accurately calculate the bolometric luminosity, whereas in the NIR region we used photometry to interpolate the shape of the SED. The SED was then de-reddened using E$_{\rm (B-V)} = 0.036$ mag for the Milky Way and E$_{\rm (B-V)} = 0.022$ mag for the host galaxy, as derived in Sect. \ref{sec-redd}. For the Milky Way reddening we use the extinction law of \cite{Cardelli1989}, and, since the host is a star-forming galaxy with WR features \citep{Shirazi2012}, we use the starburst extinction law of \cite{Calzetti1994} for it.
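The de-reddening step amounts to scaling each flux point by $10^{0.4 A_\lambda}$. A minimal sketch with approximate $A_\lambda/E_{\rm (B-V)}$ coefficients for an $R_V = 3.1$ law follows; the coefficients are illustrative round numbers, and a single law stands in for the separate \cite{Cardelli1989} and \cite{Calzetti1994} laws used in the actual analysis:

```python
# Approximate A_lambda / E(B-V) for an R_V = 3.1 extinction law.
# Illustrative values only; the analysis uses Cardelli et al. (1989)
# for the Milky Way and Calzetti et al. (1994) for the host.
K_LAMBDA = {"u": 4.8, "B": 4.1, "V": 3.1, "r": 2.7, "i": 2.1}

def deredden(flux, band, ebv):
    """Correct an observed flux for a reddening E(B-V) in a given band."""
    a_lambda = K_LAMBDA[band] * ebv          # extinction in magnitudes
    return flux * 10.0 ** (0.4 * a_lambda)   # brighten by that amount

# e.g. the total reddening E(B-V) = 0.058 mag boosts a V-band flux by ~18%
print(deredden(1.0, "V", 0.058))
```

The same correction, applied band by band, enters the pseudo-bolometric integration described above.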
As is clear from Figure \ref{fig_photometry}, the contribution from the NIR bands becomes increasingly important. The luminosity in the NIR region (9000-24,000 \AA ) is shown in Figure \ref{fig_bollc} as red points, whereas the total (3500-25,000 \AA ) luminosity is shown as black points. While the optical bolometric contribution starts to drop at $\sim 350$ days, the NIR part has a long plateau from 200 to $\sim 450$ days. At age 100 days the IR represents 33\% of the total flux; it equals the optical output at day 400, reaches 73\% by age 500 days, and 85\% by day 770. This era is dominated by a dust contribution (Sect. \ref{sec_dust}), and reflects both the instantaneous energy output and a contribution from a light echo. In Sect. \ref{sec_dust} we show that the SN itself dominates the NIR at early epochs, while later the NIR comes from an echo. The pseudo-bolometric light curve in Figure \ref{fig_bollc} only includes the optical and NIR bands and ignores both the UV and mid-IR. Especially at the early phases, the comparatively hot spectrum has a large UV contribution. We will discuss the spectra in more detail later; here we only estimate this contribution from an integration of the flux from COS and STIS for days 44, 107 and 573. We do not include the COS spectrum for day 621 because it is severely contaminated by the host galaxy. On day 44 we find a luminosity in the $1100-3600$ \AA \ range equal to $7.2 \times 10^{42} \rm ~erg~s^{-1}$, and on day 107 $3.4 \times 10^{42} \rm ~erg~s^{-1}$. For day 573 we find a luminosity of $4.5 \times 10^{41} \rm ~erg~s^{-1}$ in the STIS $1700-3600$ \AA \ range. We estimate the uncertainty in these fluxes as $\sim 25 \%$. There is therefore a substantial UV contribution to the bolometric luminosity, especially at late epochs, up to $\sim 50 \%$ of the optical luminosity at the last epoch. There may also be a substantial EUV and X-ray luminosity that is not included in our observations.
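The UV luminosities quoted above follow from integrating the flux-calibrated COS and STIS spectra over wavelength and scaling by $4\pi d^2$. A schematic version is given below; the distance of $\sim 49$ Mpc to the host is our assumption for illustration, not a value taken from this section:

```python
import numpy as np

MPC = 3.086e24            # cm per Mpc
D = 49.0 * MPC            # assumed distance to the host galaxy (~49 Mpc)

def band_luminosity(wave, flux_lambda, wmin, wmax):
    """Integrate F_lambda (erg s^-1 cm^-2 A^-1) over [wmin, wmax] Angstrom
    and convert to a luminosity via L = 4 pi d^2 * integral(F_lambda dlambda)."""
    sel = (wave >= wmin) & (wave <= wmax)
    w, f = wave[sel], flux_lambda[sel]
    # trapezoidal integration of the observed flux density
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))  # erg s^-1 cm^-2
    return 4.0 * np.pi * D**2 * integral                    # erg s^-1
```

For scale, a flat $F_\lambda$ of $10^{-14}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$ over 1100--3600 \AA \ gives $\sim 7\times 10^{42}$ erg s$^{-1}$ at the assumed distance, the order of the day 44 value.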
Up to day $\sim 200$ our total bolometric luminosity agrees with that of \cite{Zhang}. Zhang et al. have, however, no NIR (or UV) observations. These account for an increasing fraction of the luminosity as the event ages, so their luminosities are increasingly incomplete. In addition, the background contribution from the host is very important after day $\sim 350$, as was discussed above. At early epochs there is some discrepancy between our bolometric luminosities and those of Zhang et al. It is not clear what produces this difference: it may be due to Zhang et al. using photometry alone, while we use a combination of photometry and spectroscopy to combine the bands. \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[scale=0.1,angle=0]{2010jl_bol_light_curve_v6.eps}} \caption{Pseudo-bolometric light curves of SN 2010jl. The contributions from the $3600-9000$ \AA \ and $9000-24,000$ \AA \ ranges are shown separately. Also shown as magenta squares are the UV luminosities at the time of our HST observations. The day 44 and day 107 luminosities include the $1100 - 3600$ \AA \ range, while day 573 only includes the STIS $1700 - 3600$ \AA \ range. } \label{fig_bollc} \end{center} \end{figure} Assuming that the luminosity is the same as at our first determinations at day 27 (see below), we find a total integrated energy from day 0 to day 920 in the $3600 - 9000$ \AA \ range of $3.3\times10^{50} \ \rm ergs$ and in the $9000 - 24,000$ \AA \ range $2.6\times10^{50} \ \rm ergs$. The total in the $3600 - 24,000$ \AA \ range is therefore $5.8\times10^{50} \ \rm ergs$. The correct way to use the IR measurements depends on the interpretation (Sect. \ref{sec_dust}) and on the relative contributions between the photosphere emission and dust: NIR light from an echo reflects the optical - UV output, while NIR light from shock heated dust should be added to the energy budget. 
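The integrated energies quoted above are obtained by assuming a constant luminosity before the first epoch and integrating the measured light curve thereafter. A sketch with hypothetical, roughly SN 2010jl-like sample points (the specific numbers below are illustrative, not the measured light curve):

```python
import numpy as np

DAY = 86400.0  # seconds per day

def total_energy(t_days, lum):
    """Integrate a light curve L(t) [erg/s] sampled at t_days [days],
    assuming L was constant at lum[0] from day 0 to the first epoch,
    as done in the text for the epochs before the first determination."""
    dt = np.diff(t_days) * DAY
    e = np.sum(0.5 * (lum[1:] + lum[:-1]) * dt)  # trapezoidal integral
    e += lum[0] * t_days[0] * DAY                # constant early plateau
    return e

# hypothetical sample points spanning day 27 to day 920
t = np.array([27.0, 100.0, 320.0, 920.0])
L = np.array([2.8e43, 1.75e43, 9.4e42, 3.0e41])
print(f"{total_energy(t, L):.2e} erg")  # of order 10^50 - 10^51 erg
```

With a realistically sampled light curve this is the integration behind the quoted $\sim 6\times 10^{50}$ erg totals; the echo versus shock-heated-dust distinction in the text decides whether the NIR part is added to this budget.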
To disentangle the dust and direct, photospheric luminosity from the SN we have made fits to the Spectral Energy Distribution (SED) from the photometry for most epochs. We have here included the Spitzer 3.6 and 4.5 $\mu$m fluxes from \cite{Andrews2011} at day 87 (JD 2,455,565) and from \cite{Fox2013} for day 254 (JD 2,455,733), as well as unpublished Spitzer observations from the Spitzer archive from days 465, 621 and 844. For these epochs we find fluxes of 8.55, 8.63, 7.73 mJy at 3.6 $\mu$m and 8.30, 8.66, 8.24 mJy at 4.5 $\mu$m. In Figure \ref{fig_sed} we show the SEDs for these dates, as well as for one epoch (750 days) where we only have optical and NIR data. These SEDs are modeled with two blackbodies with different temperatures and we minimize the chi square with the photospheric effective temperature and radius $T_{\rm eff}$ and $R_{\rm phot}$, and the corresponding parameters for the dust shell, $T_{\rm dust}$, $R_{\rm dust}$, as free parameters. We discuss the motivation for this choice of spectrum further in Sect. \ref{sec_dust}. In these fits we exclude the r-band, because it is dominated by the H$\alpha$ line. For the Spitzer photometry at day 254, 621, and 844, where no other photometry exists, we interpolate the Spitzer fluxes and the NIR and optical photometry to days 238, 586 and 865, respectively. Given the slow evolution of the Spitzer fluxes at these epochs, this should only introduce a minor error. \begin{figure}[t!] \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{sed_fits_v5.eps}} \caption{Spectral energy distributions at different epochs, together with blackbody fits from a dust component and a photospheric component. Because of the dominance of the H$\alpha$ line in the r-band this band is not included in the fits. For clarity each spectrum has been shifted by one decade in luminosity relative to the previous. 
} \label{fig_sed} \end{center} \end{figure} From these models we find that up to day $\sim 400$ the photospheric contribution dominates also in the NIR, while at later stages the dust component takes over. The best-fit parameters for these models are given in Table \ref{table_dust_param} for the different dates, and in Figure \ref{fig_dust_tbb_rbb} we show the blackbody temperature, radius and total luminosity of the dust and photospheric components from these fits, together with the standard deviations. The large errors in $T_{\rm dust}$ and $R_{\rm dust}$ at the first two epochs are a result of the dominance of the photospheric component also in the NIR, while the errors in $T_{\rm eff}$ are mainly a result of the relatively few bands (mainly B, V, I) with a large contribution from this component. \begin{deluxetable*}{lcccc} \tabletypesize{\footnotesize} \tablecaption{Parameters for the SED fits assuming two blackbody components. \label{table_dust_param}} \tablewidth{0pt} \tablehead{ \colhead{Epoch}& \colhead{$T_{\rm eff}$}&\colhead{$R_{\rm phot}$}& \colhead{$T_{\rm dust}$}&\colhead{$R_{\rm dust}$}\\ days&K&$10^{15}$ cm&K&$10^{16}$ cm\\ } \startdata \phantom{0}92& \phantom{0}6900 &3.20&1685& 1.60\\ 238& \phantom{0}7450 &2.18&1830& 1.44\\ 465& \phantom{0}9200 &0.56&2040& 2.21\\ 586& \phantom{0}9900 &0.29&1790& 2.61\\ 750& \phantom{0}9300 &0.22&1520& 3.24\\ 880& \phantom{0}7750 &0.23&1410& 3.43\\ \tableline \enddata \end{deluxetable*} At epochs earlier than $\sim 400$ days the dust temperature is constant within the errors at $\sim 1850 \pm 200$ K, and then slowly decays to $\sim 1400$ K at 850 days. The blackbody radius is $\sim (1-2)\times10^{16}$ cm for the first $\sim 300$ days, and then slowly increases to $\sim 3\times 10^{16}$ cm at the last observation. The dust luminosities we obtain for the first epochs are lower than the NIR luminosities in Figure \ref{fig_bollc}.
The reason for this, as can be seen in Figure \ref{fig_sed}, is that the photospheric contribution dominates the J, H and K bands at these epochs. At epochs later than the 465 day observation the opposite is true, which is a result of including the total dust emission from the blackbody fit and not only the NIR bands. \begin{figure}[t!] \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{sed_tbb_rbb_rsn_lum.eps}} \caption{Blackbody temperature, radius and luminosity for the dust component and radius and effective temperature for the SN component for the epochs in Figure \ref{fig_sed}.} \label{fig_dust_tbb_rbb} \end{center} \end{figure} Already at $\sim 90$ days \cite{Andrews2011} found, from NIR and Spitzer observations, an IR excess due to warm dust, but with a temperature of $\sim 750$ K, lower than we find. Andrews et al., however, only attribute the Spitzer fluxes to the dust component, while we also include the J, H and K bands in this component, explaining our higher dust temperatures. We note that \cite{Andrews2011} underestimate the K-band flux in their SED fit. Using the SED fitting we can improve on the bolometric light curve by separating the SN and dust contributions to the IR flux and adding the SN contribution to the BVri luminosity in Figure \ref{fig_bollc}. Based on the UV flux at the epochs with HST observations we multiply this by a factor of 1.25 (Sect. \ref{sec_phot_results}). In this way we arrive at the bolometric light curve from the SN ejecta alone in Figure \ref{bol_log_log}, now shown in a log -- log plot. From this we see that the bolometric light curve from the ejecta can be accurately characterized by a power law decay from $\sim 20 - 320$ days, given by $L(t) \sim 1.75\times 10^{43} (t/100 \ {\rm days})^{-0.536} \rm ~erg~s^{-1}$, and a final steep decay $L(t) = 8.71 \times 10^{42} (t/320 \ {\rm days})^{-3.39} \rm ~erg~s^{-1}$ after day 320. \begin{figure}[t!]
\begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{bol_acc_fit.eps}} \caption{Bolometric light curve for the SN ejecta, excluding the dust echo. Note the pronounced break in the light curve at $\sim 320$ days. The dashed lines show power law fits to the luminosity before and after the break, used to construct the density distribution of the explosion (see Sect. \ref{sec_energy} for a discussion). } \label{bol_log_log} \end{center} \end{figure} \cite{Ofek2013} estimate the bolometric light curve by assuming a constant bolometric correction of $-0.27$ mag to the R-band photometry. With this assumption they find a flatter light curve with $L(t) \propto t^{-0.36}$ for the same explosion date as we use here. The reason for this difference is that the R band decays more slowly than most of the other bands, as can be seen from Fig. \ref{fig_photometry}. The bolometric light curve will therefore be steeper than the R-band light curve. The slope depends on the assumed shock breakout date. \cite{Ofek2013} discuss this based on the light curve and find a likely range of 15-25 days before I-band maximum, corresponding to JD 2,455,469 - 2,455,479. Using 2,455,469 instead of our 2,455,479 would change the best fit luminosity decline to $L(t) \sim 1.9\times 10^{43} (t/100 \ {\rm days})^{-0.61} \rm ~erg~s^{-1}$. To estimate the total energy output from the SN we assume that the bolometric luminosity before our first epoch at 26 days was constant at the level at 26 days, which is supported by the early observations of \cite{Stoll2010}, shown in Figure \ref{fig_photometry}. The total energy from the SN (excluding the echo) is then $6.5 \times 10^{50}$ ergs. In addition, there is a contribution from the EUV as well as X-rays and mid-IR (Sect. \ref{sec-big}). Even ignoring these, we note that the total radiated energy is a large fraction of the energy of a `normal' core collapse SN (see Sect.
\ref{sec-big}). \subsection{Spectroscopic evolution} \label{sec_spec_results} Figures \ref{fig_fast_spec} and \ref{fig1} show the SN~2010jl spectral sequence for days 29 -- 847 obtained with FAST at FLWO and with grism 16 at NOT, respectively. The former have the advantage of showing the full spectral interval between 3500 -- 7200 \AA, while the NOT spectra below 5100 \AA \ have a higher dispersion, showing the narrow line profiles better. From our first optical spectra at 29 days to the last at 848 days we see surprisingly little change in the lines present (Figure \ref{fig_fast_spec}). The main difference is that the continuum is getting substantially redder with time. This is also apparent from the steep light curve of the u' band in Figure \ref{fig_photometry}. At the same time the Balmer discontinuity at 3640 \AA \ becomes weaker. The most conspicuous features of the spectra are the strong, symmetric Balmer emission lines, from H$\alpha$ up to H$\delta$. To the blue of H$\delta$, the broad emission lines blend into a continuum. In addition to these blended broad emissions in Figure \ref{fig1}, each Balmer line up to at least H16 shows a narrow blueshifted P-Cygni absorption. In addition to the Balmer lines and several He I lines, some of which show prominent P-Cygni profiles (Sect. \ref{sec-narrow}), we also see narrow emission lines from [Ne III] $\lambda3868$, [O III] $\lambda\lambda4363, 4959, 5007$, several [Fe III] lines, as well as He II $\lambda4686$ (for more details see Sect. \ref{sec-narrow}). Our high resolution X-shooter spectrum reveals an additional large number of weaker narrow lines (Sect. \ref{xshooter}). \begin{figure*} \begin{center} \resizebox{150mm}{!}{\includegraphics[scale=.80,angle=0]{./full_spec_2010jl_fast_time_seq_29_850d_templ_v3.eps}} \caption{Spectral sequence in the optical from observations with FAST and MMT. 
Each spectrum has been shifted upwards by $10^{-14} \ \mathrm{ erg ~s^{-1} ~cm^{-2}~ } \mbox{\normalfont\AA}^{-1}$ relative to the one below. The wavelengths of the Balmer lines are shown, as well as the broad He I $\lambda 5876$ line.} \label{fig_fast_spec} \end{center} \end{figure*} \begin{figure*} \begin{center} \resizebox{150mm}{!}{\includegraphics[scale=.80,angle=0]{./SN2010jl_NOT_Gr16_abs_flux_calib_texp_v2.eps}} \caption{The grism 16 spectral series from NOT showing the Balmer series bluewards of H$\alpha$ as well as the Balmer decrement. Note the narrow lines due to [Ne III] $\lambda 3868.8$, [O III] $\lambda \lambda 4959, 5007$, and He II $\lambda 4686$, and the unusually strong [O III] $\lambda 4363$ line. The day 32 and day 80 spectra have been shifted upward and downward, respectively, by $2.5 \times 10^{-15} \rm ~erg~s^{-1} cm^{-2} \mbox{\normalfont\AA}^{-1}$.} \label{fig1} \end{center} \end{figure*} Figure \ref{fig_full_spec} shows the far-UV spectrum with COS from days 44, 107, and 621, while Figure \ref{fig_full_stis_spec} shows the STIS spectra from days 34, 44, 107 and 573. \begin{figure*} \begin{center} \resizebox{150mm}{!}{\includegraphics[angle=0]{full_spectrum_2010jl_COS_1100_1760_log_v2.eps}} \caption{COS spectra from day 44 (blue), day 107 (red) and day 620 (green) with the most important lines marked. We also show the positions of the strongest absorptions from the Milky Way and host galaxy in the lower part of the figure. The continuum in the day 620 spectrum is dominated by the continuum emission from the host galaxy background. } \label{fig_full_spec} \end{center} \end{figure*} Again, we see a number of prominent, broad lines with strong, narrow emission components, but also in many cases P-Cygni absorptions. The main difference from the optical is that in the UV we find mainly lines of highly ionized elements, like C III-IV, N III-V, O III-IV and Si III-IV, although there are also neutral and low ionization lines of C II, O I and Mg II. 
Comparison of the STIS spectra reveals little evolution of the spectrum between the first two epochs. The COS spectra show a similar slow evolution. At the time of the third epoch the continuum has faded by a factor of $\sim 2$, while the narrow lines have decreased by a modest amount. \begin{figure*} \begin{center} \resizebox{125mm}{!}{\includegraphics[angle=0]{full_spectrum_2010jl_STIS_1700_3000_v4.eps}} \resizebox{125mm}{!}{\includegraphics[angle=0]{full_spectrum_2010jl_STIS_3000_5600_jl.eps}} \caption{UV and optical STIS spectra from days 34 (red), 44 (blue), 107 (green) and 573 (magenta). The last spectrum has been multiplied by a factor of 5.0 for clarity. Note the strong [N II] and N III] lines in the UV spectrum, as well as the broad, symmetric line profiles for the strongest lines. } \label{fig_full_stis_spec} \end{center} \end{figure*} In addition to the lines from the SN, the spectrum also shows a number of interstellar absorption lines from the Milky Way and the host galaxy, shown in the lower part of Figure \ref{fig_full_spec}. In wavelength order we identify these as Si II $\lambda \lambda$ 1190.4, 1193.3, 1194.5, 1197.4, N I $\lambda \lambda$ 1199.6, 1200.2, Si III $\lambda$ 1206.5, N V $\lambda \lambda$ 1238.8, 1242.8, Si II $\lambda \lambda$ 1260.4, 1264.7, O I $\lambda \lambda$ 1302.2, 1304.9, 1306.0, C II $\lambda \lambda$ 1334.5, 1335.7, Si IV $\lambda \lambda$ 1393.8, 1402.8, Si II $\lambda \lambda$ 1526.7, 1533.4, C IV $\lambda \lambda$ 1548.2, 1550.8, Fe II $\lambda$ 1608.5, and Al II $\lambda$ 1670.8. The lower S/N and spectral resolution of the STIS spectra make ISM line identifications more difficult. We do, however, detect a number of strong lines: Fe II $\lambda \lambda$ 2344.2, 2374.5, 2382.8, 2586.6, 2600.2, 2607.9, and Mg II $\lambda \lambda$ 2796.4, 2803.5. All these lines are commonly seen in different directions of the Milky Way as well as in different galaxies.
To compare the relative contributions from the UV and optical ranges we show in Figure \ref{fig_full_opt_uv} the complete UV/optical spectrum at two epochs, one early and one very late. For the first epoch this is a combination of the COS and STIS spectra from day 44 and an interpolation of the FAST spectra from day 32 and day 58 to this date. For the late spectrum we exclude the COS spectrum because of the strong contamination of the continuum by the host galaxy through the large COS aperture. The emission lines in this spectrum are well-defined and, with a few exceptions, free from contamination. From Figure \ref{fig_full_opt_uv} we note the increasing importance of the UV at late phases, as was also concluded from Figure \ref{fig_bollc}. This is similar to other Type IIn SNe, like SN 1995N \citep{Fransson2002} and SN 1998S \citep{Fransson2005}, but in contrast to Type IIP and Ibc SNe, which rapidly become faint in the UV. We also see a clear change in character of the spectrum, from one dominated by a continuum at early epochs to one dominated by line emission at the last epochs. \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{full_uv_opt_spectrum_2010jl_june12_stis_mdm_dl.eps}} \caption{Combined UV and optical spectra from day 44 and day 573 -- 599 from COS, STIS and ground based observations. For $\lambda > 5700$ \AA \ on day 44, we use an interpolated spectrum from the FAST day 32 and day 58 observations. Both spectra are dereddened. We plot $\log \lambda dL/d\lambda$ against $\log \lambda$, which gives the luminosity distribution per wavelength decade.} \label{fig_full_opt_uv} \end{center} \end{figure} \subsection{Line profiles and flux evolution} \label{sec-profiles} The medium resolution optical spectra in Figure \ref{fig1}, as well as the high S/N COS spectrum in Figure \ref{fig_full_spec}, clearly show that there are two distinct velocity components in the strongest emission lines.
This is apparent for the H I and Mg II lines, but also for the high ionization UV lines like C IV, N III-V, O III-IV and Si IV. The broad components extend to $\sim 2400-10,000 \rm ~km~s^{-1}$, while the narrow components have an order of magnitude lower velocities. The maximum velocity to which the broad component can be traced depends mainly on the flux of the line and the S/N of the spectra. For both of these reasons the H$\alpha$ line displays the most extended wings. We now discuss the properties of these components in more detail. \subsubsection{The broad component} \label{sec-broad} In Figure \ref{fig3b} we show the SN~2010jl spectral sequence of the H$\alpha$ line from day 31 to day 1128. The broad lines display a smooth, peaked profile, characteristic of electron scattering \citep{Munch1948,Auer1972,Hillier1991,Chugai2001}. At early times the lines are symmetric between the red and blue wings, while after $\sim 50$ days they display a pronounced bump to the blue. As we discuss in detail in Sect. \ref{sec_broad}, we interpret this as a result of a macroscopic velocity. We discuss (and reject) the alternative, in which dust absorption determines the line shape, in Sect. \ref{sec_broad}. \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[scale=.60,angle=0]{Ha_profile_2010jl_NOT_MDM_XS_MMT_abs_1128d.eps}} \caption{Evolution of the broad component of H$\alpha$ from day 31 to day 1128 from NOT grism 17 (upper five panels), FAST (day 384), X-shooter (day 461), MDM (day 597) and MMT (days 804 and 1128). Note the evolution from a symmetric profile centered at $v= 0$ to the late blueshifted line profile. The narrow P-Cygni component is smoothed out in the lower resolution day 384 FAST spectrum and partly also in the day 597 MDM spectrum. Each spectrum is shifted by $2 \times 10^{-14} \rm ~erg~s^{-1} cm^{-2} \mbox{\normalfont\AA}^{-1}$. The dashed lines show the zero level for each date.
For clarity, the day 804 spectrum has been multiplied by a factor of 5 and the day 1128 spectrum by a factor of 10.} \label{fig3b} \end{center} \end{figure} \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[scale=.60,angle=0]{SN2010jl_FAST_flux_HI_flux_templ_tight_panels_v4_test.eps}} \caption{Upper panel: Evolution of the continuum subtracted flux of the broad H$\alpha$ line within $\pm 10,000 \rm ~km~s^{-1}$ (blue dots) and the continuum flux in the same region (black asterisks). Lower panel: The H$\alpha$/H$\beta$ ratio (red squares) and the H$\beta$/H$\gamma$ ratio (green triangles) for the same period. The dashed lines are polynomial least-squares fits to the respective data points. We have also added the Ly$\alpha$/H$\beta$ ratios for the three epochs where we have COS spectra (black dots). Luminosities and line ratios are corrected for reddening. } \label{fig_haflux} \end{center} \end{figure} To calculate the flux of H$\alpha$, H$\beta$, and H$\gamma$ we determine the continuum level on either side of the line between $-15,000$ and $-12,000 \rm ~km~s^{-1}$ and between $12,000$ and $15,000 \rm ~km~s^{-1}$. A continuum level is then determined as a linear interpolation between these velocities, and subtracted from the total flux. Figure \ref{fig3b} shows that this should be an accurate estimate. In Figure \ref{fig_haflux} we show the H$\alpha$ luminosity together with the H$\alpha$/H$\beta$ and H$\beta$/H$\gamma$ ratios. The dashed lines give polynomial least-squares fits to the data. In addition, we show the continuum flux in the same velocity interval, $\pm 10,000 \rm ~km~s^{-1}$, as the lines. From the figure we see that H$\alpha$ increases by a factor of $\sim 2.5$ from day 29 to day 175. After the first observational gap between $\sim 200$ days and $\sim 400$ days the flux has dropped somewhat, and then drops by another factor of $\sim 10$ from 400 to 850 days.
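The flux measurement described above amounts to a linear continuum anchored in line-free side bands, followed by a straight integration of the residual. A minimal sketch in Python (our own illustration with synthetic arrays, not the actual reduction code; the band limits follow the values quoted in the text):

```python
import numpy as np

def broad_line_flux(vel, flux, side=(12_000.0, 15_000.0), core=10_000.0):
    """Net broad-line flux: a linear continuum anchored in the side bands
    (+/-12,000 to +/-15,000 km/s here) is subtracted before integrating
    the residual within +/- `core` km/s."""
    blue = (vel >= -side[1]) & (vel <= -side[0])
    red = (vel >= side[0]) & (vel <= side[1])
    v1, f1 = vel[blue].mean(), flux[blue].mean()
    v2, f2 = vel[red].mean(), flux[red].mean()
    continuum = f1 + (f2 - f1) * (vel - v1) / (v2 - v1)  # linear interpolation
    in_core = np.abs(vel) <= core
    dv = vel[1] - vel[0]                                  # uniform grid assumed
    return np.sum((flux - continuum)[in_core]) * dv
```

On a uniform velocity grid a simple rectangle sum is an adequate integrator for these smooth, electron-scattering dominated profiles.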
From the polynomial fit it is likely that the maximum H$\alpha$ luminosity occurred at $\sim 260$ days, when the SN was close to the Sun in the sky. The bell-shaped light curve of the H$\alpha$ luminosity is in contrast to the continuum light curve in the same velocity region, shown as black asterisks, which decreases monotonically, and by a factor of $\sim 100$ from the first observations up to 800 days. The total luminosity (continuum plus H$\alpha$) remains nearly constant up to $\sim 400$ days, after which it too drops. The increasing flux in H$\alpha$ during the first $\sim 300$ days explains the flatter light curve of the r' band shown in Figure \ref{fig_photometry}. For the first $\sim 500$ days our H$\alpha$ luminosity agrees well with the total luminosity from the `narrow' and `intermediate' components of \cite{Zhang}. Their terminology is different from that used in this paper. Their `narrow' and `intermediate' components are both part of our `broad', while their data do not resolve what we refer to as the `narrow' component. Our light curve, however, covers approximately one more year, showing a continued decay of H$\alpha$. The H$\alpha$/H$\beta$ flux ratio increases steadily from $\sim 3.6$ initially to $10-15$ at $\sim 800$ days. The H$\beta$/H$\gamma$ ratio similarly increases from $\sim 3.4$ initially to $\sim 5.0$ at $\sim 400$ days, after which it decreases to $\sim 2.8$ at 800 days (Figure \ref{fig_haflux}). For Balmer lines higher than H$\delta$ the broad components blend together into a quasi-continuum, and the Balmer decrement for these is therefore difficult to determine. The Ly$\alpha$ line, shown in Figure \ref{fig_lprof_lya}, is severely distorted by interstellar absorption lines both from the host galaxy and from the Milky Way. The broad Ly$\alpha$ absorption from the host galaxy makes it impossible to determine if any narrow emission is present.
The red side of the broad component is relatively unaffected by absorptions and extends to $\sim 2460 \rm ~km~s^{-1}$. The blue side reaches into the damping profile of the host galaxy absorption. However, a lower limit of $\sim 2300 \rm ~km~s^{-1}$ to the blue extent of the line can be determined. As is shown by the comparison with the H$\beta$ line from the same epoch STIS spectrum, the two lines have similar profiles in the velocity ranges where Ly$\alpha$ is unaffected by absorptions. The lower flux of the Ly$\alpha$ line between $-2000 \rm ~km~s^{-1}$ and $- 1000 \rm ~km~s^{-1}$ compared to the H$\alpha$ and H$\beta$ lines is caused by the overlapping ISM absorptions, as well as the Si III $\lambda 1206.5$ line (Sect. \ref{sec-redd}). A similar reasoning applies to the other epochs, although we also observe a change in the ratio of the blue and red segments of the line from the first two epochs to the last. This is caused by the blueshift of the line, so that more of the red side is absorbed by the host galaxy absorption, while the emission in the blue region increases for the same reason. \begin{figure} \begin{center} \resizebox{82mm}{!}{\includegraphics[angle=0]{Ha_Hb_Ly_a_101123.eps}} \resizebox{82mm}{!}{\includegraphics[angle=0]{Ha_Hb_Ly_a_110126.eps}} \resizebox{82mm}{!}{\includegraphics[angle=0]{Ha_Hb_Ly_a_120620.eps}} \caption{The Ly$\alpha$ line profile (blue) from the COS spectra of days 44, 107 and 621 compared to the H$\alpha$ (red) and H$\beta$ (green) lines. The scale and continuum level of the H$\alpha$ and H$\beta$ lines have been adjusted to agree with those of Ly$\alpha$. The positions of the N V emission lines are marked, as well as the strongest interstellar lines from the host galaxy and our Galaxy in the Ly$\alpha$ spectrum.} \label{fig_lprof_lya} \end{center} \end{figure} In Sect. \ref{sec-redd} we found that the damping wings of the Ly$\alpha$ absorption imply a value of N$_H \sim (1.05\pm 0.3)\EE{20}$ cm$^{-2}$ for $b = 10 \rm ~km~s^{-1}$ from the host galaxy.
This means that the large column density derived from the Chandra observations, $\sim 10^{24} \ \rm cm^{-2}$ \citep{Chandra2012}, must be intrinsic to the SN itself or its CSM. The ratios between the Ly$\alpha$, H$\beta$, and H$\alpha$ lines are useful as indicators of the conditions in the broad line region of AGNs \citep[e.g., ][]{Krolik1978,Fraser2013}. Because of the broad absorptions the total flux of the Ly$\alpha$ line cannot be directly determined. Instead we estimate the line ratio of this line to H$\alpha$ and H$\beta$ by first subtracting the continuum and then scaling the line profiles so that these fit each other as well as possible for the parts of the lines unaffected by interstellar absorptions (see Figure \ref{fig_lprof_lya}). The scaling factor then gives the line ratio. Including reddening with $E_{\rm B-V}=0.058$ (Sect. \ref{sec-redd}), we find for the first COS observation at day 44, $F({\rm Ly}\alpha)/F({\rm H}\beta) = 1.1$ and $F({\rm H}\alpha)/F({\rm H}\beta) = 2.8$, for day 107, $F({\rm Ly\alpha})/F({\rm H}\beta) = 0.53$ and $F({\rm H}\alpha)/F({\rm H}\beta)= 4.1$, and for day 620, $F({\rm Ly}\alpha)/F({\rm H}\beta) = 2.5$ and $F({\rm H}\alpha)/F({\rm H}\beta) = 9.0$. Because of the difficulty in calibrating the COS spectra relative to the optical spectra, the uncertainty in the Ly$\alpha/$H$\beta$ ratio is large. We have plotted the Ly$\alpha/$H$\beta$ ratio in Figure \ref{fig_haflux} as black dots. Although the uncertainties in the Ly$\alpha$ flux are large, we see from these ratios that the broad line region has a Balmer decrement, as well as a Ly$\alpha$/H$\alpha$ ratio, very different from the recombination values. This is similar to what is found in the dense broad line regions in AGNs, and indicates a combination of high optical depths in the lines and high densities.
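The profile-scaling estimate of the line ratios can be cast as a least-squares scale factor evaluated only over velocity windows judged free of interstellar absorption. The sketch below is our schematic reading of the procedure; the function name, window values and synthetic data are illustrative:

```python
import numpy as np

def scale_factor(vel, f_line, f_ref, clean_windows):
    """Least-squares factor s minimising sum (f_line - s * f_ref)^2,
    evaluated only in velocity windows free of ISM absorption.
    The factor s plays the role of the line ratio once both profiles
    are continuum subtracted."""
    use = np.zeros(vel.shape, dtype=bool)
    for v_lo, v_hi in clean_windows:
        use |= (vel >= v_lo) & (vel <= v_hi)
    return np.sum(f_line[use] * f_ref[use]) / np.sum(f_ref[use] ** 2)
```

The closed-form solution follows from setting the derivative of the quadratic misfit to zero, so no iterative fitting is needed.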
Models show that column densities of $\ga 10^{24} \ \rm cm^{-2}$ in combination with high densities, $\ga 5 \times 10^8 \ \rm cm^{-3}$, are needed to get these ratios \citep{Krolik1978,Kwan1984,Fraser2013}. A flat X-ray spectrum also lowers the Ly$\alpha$/H$\alpha$ ratio and increases the H$\alpha$/H$\beta$ ratio \citep{Kwan1986}. There is, however, a substantial degeneracy between the density, column density and X-ray flux. In Sect. \ref{sec-big} we discuss this further based on detailed models. Besides the H I lines, N V $\lambda \lambda 1238.8, 1242.8$, O I $\lambda \lambda 1302.2-1306.0$, O IV] $\lambda \lambda 1397.2-1407.3$ + Si IV $\lambda \lambda 1393.8, 1402.8$, N IV] $\lambda 1486.5$, and C IV $\lambda \lambda 1548.2-1550.8$ also have broad components. This is probably also the case for He II $\lambda 1640.5$, O III] $\lambda \lambda 1660.8-1666.2$, N III] $\lambda \lambda 1746.8-1754.0$, and Si III] $\lambda 1892.0$ / C III] $\lambda 1908.7$, although the flux of the broad component is comparable to the noise level for these lines. In Figure \ref{fig_lprof_niv_civ} we compare the broad components from the strong C IV and N IV] lines with the profile of H$\beta$. Although the S/N in the COS observation is lower, we see that the profiles are basically identical, as expected for electron scattering from lines originating at similar depths. \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{NIV_vel_2010jl_comp.eps}} \resizebox{85mm}{!}{\includegraphics[angle=0]{CIV_vel_2010jl_comp.eps}} \caption{Comparison of the day 44 N IV] (left) and C IV (right) lines with the H$\beta$ line profile (blue). The positions of some of the interstellar absorption lines from the host and our Galaxy are marked. } \label{fig_lprof_niv_civ} \end{center} \end{figure} Figure \ref{fig_lprof_lya_mgii} shows the evolution of the most important broad UV lines from the COS and STIS observations.
The scattering wings of both the N IV] and C IV lines get fainter between the first two observations at 44 and 107 days, opposite to the evolution of the H$\alpha$ line. At 621 days the wings of the high ionization lines have completely disappeared, in contrast to the Ly$\alpha$, O I $\lambda \lambda 1302.2-1306.0$ and Mg II lines, which all still have strong wings in the last HST observations. The evolution of the latter lines agrees better with that of H$\alpha$. \begin{figure}[ht!] \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{COS_STISLya_NIV_CIV_MgII_evol_lin_vel.eps}} \caption{Evolution of the profiles of the Ly$\alpha$, N IV], C IV and Mg II lines. The Ly$\alpha$, N IV] and C IV spectra are from COS, while the Mg II profile is from the lower resolution STIS spectra. Note that for clarity the flux of the COS spectrum for day 621 has been multiplied by a factor of 4.0. }\label{fig_lprof_lya_mgii} \end{center} \end{figure} \subsubsection{The narrow component} \label{sec-narrow} As noted by \citet{Smith2011}, many of the optical lines display a narrow component, in most cases with a P-Cygni profile, as shown in Figure \ref{fig1}. In Figure \ref{fig3} we show the narrow H$\alpha$ line on a larger velocity scale from NOT/Grism17, TRES, X-shooter and MDM spectra. The peak of the line is close to zero velocity, while the red wing reaches $\sim 150 \rm ~km~s^{-1}$, as marked by the vertical lines. The blue wing is more complex, consisting of both an absorption, reaching $\sim -105 \rm ~km~s^{-1}$, and a high velocity emission reaching $\sim -250 \rm ~km~s^{-1}$. These velocities do not change appreciably with time, although the emission components get weaker. The emission wing on the blue side of the P-Cygni absorption is likely to be caused by electron scattering in the CSM of the SN.
Examples of this are known for Wolf-Rayet stars, where \citet[][see his Figure 8]{Hillier1991} calculated the P-Cygni profile including electron scattering in the line forming zone and obtained a line profile similar to that seen for H$\alpha$. The expansion velocity of the CSM should therefore be $\sim 105 \rm ~km~s^{-1}$, as given by the P-Cygni absorption, rather than the higher velocity characteristic of the blue wing. This also shows that the electron scattering optical depth is not negligible in the CSM of the SN. In Figure \ref{fig_narrowHa} we show the net flux of the narrow H$\alpha$ line from our highest dispersion spectra. For the narrow component we have interpolated the broad line flux on each side of the line ($-250 \rm ~km~s^{-1}$ and $200 \rm ~km~s^{-1}$, respectively), and then calculated the net emission above this level. In the day 461 spectrum the P-Cygni absorption is stronger than the emission and the luminosity of the line is consequently negative. \begin{figure}[t!] \begin{center} \resizebox{85mm}{!}{\includegraphics[scale=.60,angle=0]{Ha_profile_narrow_2010jl_NOT_TRES_XS_MDM.eps}} \caption{The narrow component of H$\alpha$ from day 31 to day 599. The `continuum' level has been set as the average flux between 300 -- 500 $\rm ~km~s^{-1}$ and each spectrum has then been shifted by $2\times 10^{-14} \rm ~erg~s^{-1}$ cm$^{-2}$ \AA$^{-1}$ relative to the previous. The vertical dashed lines are at $-250$, $-105$, $0$ and $+150 \rm ~km~s^{-1}$. The day 96 and day 142 spectra are from TRES, the day 461 from X-shooter and the day 599 spectrum from MDM, while the others come from NOT/grism 17. The flux of the day 599 spectrum has been multiplied by a factor of 2. } \label{fig3} \end{center} \end{figure} The main change with time of the narrow component is a decrease of the flux (and equivalent width) of the line, from a net emission line to a line with zero or even negative equivalent width.
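The net narrow-line measurement described above, with the baseline interpolated between the two anchor velocities quoted in the text, can be sketched as follows. This is our own illustration with synthetic data; a dominant P-Cygni absorption yields a negative value, as found for the day 461 spectrum:

```python
import numpy as np

def narrow_net_flux(vel, flux, v_blue=-250.0, v_red=200.0):
    """Net narrow-line flux above the broad-line level, which is linearly
    interpolated between the anchor velocities v_blue and v_red (km/s).
    A dominant P-Cygni absorption gives a negative result."""
    f_blue = np.interp(v_blue, vel, flux)
    f_red = np.interp(v_red, vel, flux)
    base = f_blue + (f_red - f_blue) * (vel - v_blue) / (v_red - v_blue)
    sel = (vel >= v_blue) & (vel <= v_red)
    dv = vel[1] - vel[0]                 # uniform grid assumed
    return np.sum((flux - base)[sel]) * dv
```

The anchor velocities are chosen outside the narrow emission and absorption, so the broad component itself serves as the local baseline.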
When we compare the broad and narrow emission we see that they evolve independently of each other. The narrow component is in fact more correlated with the continuum luminosity than with the broad H$\alpha$. \begin{figure}[t!] \begin{center} \resizebox{85mm}{!}{\includegraphics[scale=.60,angle=0]{SN2010jl_broad_narrow_v2.eps}} \caption{Upper panel: The luminosity of the broad and narrow components of H$\alpha$ from day 31 to day 461. Only spectra with resolution higher than FWHM 1.2 \AA \ have been included for the narrow component. The day 96 and day 142 spectra are from TRES, the day 461 from X-shooter, while the others come from NOT/grism 17. For the narrow line we give the net luminosity in the P-Cygni line. The last measurement at 461 days has a stronger absorption component than emission component and the luminosity is therefore negative. Note that the luminosity scales of the narrow and broad lines differ by a factor of 10. Lower panel: Ratio of these luminosities. } \label{fig_narrowHa} \end{center} \end{figure} Of the He I lines, the following are detected: $\lambda 3889$ (blended with H 8-2), $\lambda 3965$ (P-Cygni), $\lambda 4471$ (P-Cygni), $\lambda 4713$ (only emission), $\lambda 4922$ (probable), $\lambda 5016$ (P-Cygni), $\lambda 5876$ (P-Cygni), $\lambda 6678$ (P-Cygni), $\lambda 7065$ (pure emission). Although some of these are seen in H II regions, the fact that P-Cygni profiles are observed for most of them shows that they originate in the SN environment. Additional He I lines are present in the NIR (Sect. \ref{xshooter}). Figure \ref{fig_lprof1} shows a sample of the most important other narrow emission lines in the UV and optical ranges on a velocity scale for the day 44 and day 621 spectra. When comparing the line profiles of these lines one has to take into account the absorption of the UV resonance lines from the host galaxy at a recession velocity of 3207 $\rm ~km~s^{-1}$ and from the Milky Way. An important case is the C IV $\lambda \lambda 1548.2, 1550.8$ doublet.
The velocity separation of the $\lambda$1550.8 component relative to the $\lambda 1548.2$ component corresponds to 500 $\rm ~km~s^{-1}$. The absorption from the Milky Way does not interfere with the $\lambda 1548.2$ line below 2700 $\rm ~km~s^{-1}$. Absorption from the host may, however, be important. Typical rotational velocities are $100 - 200 \rm ~km~s^{-1}$, and this may well interfere with both the absorption and emission components, making a determination of the expansion velocity difficult for these lines. When we compare the different lines in Figure \ref{fig_lprof1} we see that the absorptions of the O I $\lambda$ 1302.2, Si IV $\lambda 1393.8$ and the C IV $\lambda 1548.2$ lines are all considerably wider than the intercombination lines. The strong N IV] $\lambda 1486.5$ line has a FWHM $\sim 50 \rm ~km~s^{-1}$ and extends to $\sim 100 \rm ~km~s^{-1}$ to the red side and $\sim 80 \rm ~km~s^{-1}$ to the blue. These velocities are similar to the red emission component of the C IV lines, but considerably less than the C IV absorption. Similar widths are found for the O III] $\lambda 1666.2$ and He II $\lambda 1640.4$ lines, although the S/N is lower for these lines. In contrast, the velocity of the blue edge of the C IV $\lambda 1548.2$ absorption corresponds to $\sim -280 \rm ~km~s^{-1}$. The extent of the strong emission component to the red is, however, only $\sim 100 \rm ~km~s^{-1}$, although it is difficult to determine this accurately. The higher velocity of the absorption of the C IV $\lambda \lambda 1548.2, 1550.8$ lines probably originates in the host galaxy ISM, or possibly in higher velocity CSM gas of low density whose emission is too faint to be seen. This is also consistent with the other resonance lines. For the O I $\lambda$ 1302.2 line one can even distinguish a separation between the P-Cygni absorption at $\sim 100 \rm ~km~s^{-1}$ and the higher velocity absorption component.
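The quoted doublet separation follows directly from the rest wavelengths via the non-relativistic Doppler formula; a quick numerical check:

```python
C_KMS = 299_792.458  # speed of light in km/s

def doppler_sep(lam1, lam2):
    """Non-relativistic velocity separation of two wavelengths (same units)."""
    return C_KMS * (lam2 - lam1) / lam1

# C IV doublet: 1548.19 vs 1550.77 Angstrom -> ~500 km/s
dv_civ = doppler_sep(1548.19, 1550.77)
```

This is why host-galaxy absorption, with rotational velocities of order $100-200 \rm ~km~s^{-1}$, can blend into both doublet components.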
When we compare the line profiles of the day 44 and day 621 spectra we do not see any clear evolution in velocity widths. The velocities of the high ionization narrow UV lines are similar to those of the optical lines, i.e., $\sim 100 \rm ~km~s^{-1}$. The fact that the Balmer and the He I lines have P-Cygni absorptions shows that the populations of these excited levels are high even at $\sim 600$ days. This in turn argues for a high density, $\ga 10^8 \rm ~cm^{-3}$, in the line forming region (Sect. \ref{sec-big}). In Sect. \ref{sec-fluxes} we discuss other independent density determinations which indicate high-density circumstellar gas. \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{velocities_2010jl_low_pan_opt_uv_nov10_500kms_ha.eps}} \resizebox{85mm}{!}{\includegraphics[angle=0]{velocities_2010jl_low_pan_opt_uv_june12_500_ha.eps}} \caption{The central, low velocity region of the line profiles of the H$\alpha$, He II, C IV, N IV], N V, O I, O III], O IV] and Si IV lines on a velocity scale from day 44 (upper three rows) and day 621 (lower three rows). The H$\alpha$ spectra are from days 46 and 597, respectively.} \label{fig_lprof1} \end{center} \end{figure} Tables \ref{table_stis_1} and \ref{table_cos} give fluxes of the identified narrow UV lines from the STIS and COS spectra. To measure the fluxes of the different lines we first determine the continuum level on both sides of the line and fit this to a polynomial, which is subtracted from the total flux in the line. This introduces uncertainties in both the continuum level and the extent of the line, which is most serious for the lower resolution STIS spectra, which also have a lower S/N. In addition, the low spectral resolution of STIS has the consequence that multiplets, like N III] $\lambda \lambda 1746.8 - 1754.0$, are not resolved. To complicate things further, several lines, like the N III], Si III] and C III] lines, sit on top of broad electron scattering profiles (see below).
The higher spectral resolution COS spectra are less sensitive to these effects. Unfortunately, this setting of COS does not cover the C III] lines. \begin{deluxetable*}{llccccl} \tabletypesize{\footnotesize} \tablecaption{STIS Line Catalog. \label{table_stis_1} } \tablewidth{0pt} \tablehead{ \colhead{Species} & \colhead{$\lambda_{vac}$} && \colhead{Line Flux} & && \colhead{Notes} \\ & \colhead{(\AA)} & & (10$^{-14}$ ergs cm$^{-2}$ s$^{-1}$) && \\ &&day 34&day 44&day 107&day 573& } \startdata O III]&1660.81-1666.15&2.8:&2.4:&--&&at wavelength limit \\ N III]&1746.82-1754.00&18.4\phantom{0}$\ \pm 2.0$\phantom{0}&13.5\phantom{0}$\ \pm 2.0$\phantom{0}&5.88$\ \pm 0.8$&0.60$\ \pm 0.10$&\\ Si III]&1882.71-1896.64&8.80$\ \pm 0.6$&9.12$\ \pm 1.5$&7.64$\ \pm 0.5$&0.51$\ \pm 0.15$&\\ C III]&1906.68-1909.60&8.50$\ \pm 1.2$&8.40$\ \pm 1.2$&5.20$\ \pm 1.0$&0.56$\ \pm 0.15$&\\ N II]&2139.68-2143.45&4.37 $\ \pm 0.1$&5.01$\ \pm 0.2$&3.36$\ \pm 0.2$&0.45$\ \pm 0.10$& \\ C II]&2324.21-2328.83&2.46$\ \pm 0.3$&2.75 $\ \pm 0.2$&2.12$\ \pm 0.2$&0.37$\ \pm 0.10$ &\\ Mg II&2796.34-2803.52&14.5\phantom{0}$\ \pm 0.5$\phantom{0}&25.6\phantom{0}$\ \pm 1.0$\phantom{0}&38.4\phantom{0}$\ \pm2.0$\phantom{0}&12.8$\ \pm 2.00$&broad component\\ $[$Ne III$]$&3869.85&1.44$\ \pm 0.1$&1.37$\ \pm 0.2$&0.77$\ \pm 0.3$&-& \\ H I&4102.89&7.73$\ \pm 0.3$&6.72$\ \pm 3.0$&5.43$\ \pm 1.8$&0.57$\ \pm 0.20$&broad component \\ H I&4341.68&18.6\phantom{0}$\ \pm 0.9$\phantom{0}&16.2\phantom{0}$\ \pm 2.5$\phantom{0}&19.1\phantom{0}$\ \pm1.5$\phantom{0}&2.39$\ \pm 0.30$&broad component \\ $[$O III$]$&4364.45&0.85$\ \pm 0.2$&0.56$\ \pm 0.3$&0.50$\ \pm 0.15$&0.12$\ \pm 0.03$& \\ He II&4687.02&1.45$\ \pm 0.2$&0.57$\ \pm 0.3$&$- $&-&marginal detection\\ H I&4862.68&46.9\phantom{0}$\ \pm 8.0$\phantom{0}&46.5\phantom{0}$\ \pm 7.0$\phantom{0}&58.7\phantom{0}$\ \pm 3.0$\phantom{0}&9.03$\ \pm 0.50$&broad component \\ $[$O III$]$&5008.24&0.79$\ \pm 0.2$&0.70$\ \pm 0.3$&\phantom{0}0.30$\ \pm 0.15$&0.23$\ \pm 0.10$& \\ \enddata 
\end{deluxetable*} \begin{deluxetable*}{lcccccccl} \tabletypesize{\footnotesize} \tablecaption{SN2010jl COS Line Catalog. \label{table_cos} } \tablewidth{0pt} \tablehead{ \colhead{Species} & \colhead{$\lambda_{vac}$} &&& \colhead{Line Flux}&&&Notes \\ & (\AA) &&& (10$^{-15}$ ergs cm$^{-2}$ s$^{-1}$)&&&&\\ &&day 44&&day 107&&day 620&&\\ } \startdata N V & 1238.82& -0.76\phantom{0} &$\pm 0.3$ & -0.44\phantom{0}&$\pm 0.2$&0.08&$\pm 0.05$&P-Cygni abs \\ N V & 1238.82& 7.38 & $\pm 0.5$& 6.64&$\pm 0.5$&4.24&$\pm 0.10$& \\ N V & 1242.80& -0.68\phantom{0}&$\pm 0.2$& -0.35\phantom{0}&$\pm 0.2$&-0.64&$\pm 0.20$&P-Cygni abs \\ N V & 1242.80& 3.25&$\pm 0.5$& 2.96 &$\pm 0.4$&1.68&$\pm 0.08$&\\ O I & 1302.17& -2.40\phantom{0} &$\pm1.0 $& -2.05\phantom{0} &$\pm 0.9$&-2.01&$\pm$&P-Cygni abs\\ O I & 1302.17& 1.71&$\pm 0.6$& 1.33 &$\pm 0.6$&-&&\\ O I & 1304.86& -&-& -&-&-1.42&$\pm$&complex P-Cygni abs \\ O I & 1304.86& 2.64 &$\pm 0.3$& 2.05 &$\pm 0.3$&-&&\\ O I & 1306.03& 1.55&$\pm0.7$& 1.84 &$\pm 0.3$&-&&\\ Si II & 1309.28& -0.61\phantom{0} &$\pm 0.3$& -0.57\phantom{0} &$\pm 0.2$&&&P-Cygni abs\\ Si II & 1309.28& 0.37 &$\pm 0.5$& 0.07 &$\pm 0.4$&&&\\ Si IV & 1393.76& -2.21\phantom{0} &$\pm 0.9$& -1.66\phantom{0} &$\pm 0.5$&-1.23&$\pm 0.15$&P-Cygni abs \\ Si IV & 1393.76& 3.36&$\pm 1.6$& 1.04 &$\pm 0.4$&-&&\\ O IV] & 1397.20& 0.53&$\pm 0.2$& 0.39 &$\pm 0.2$&-&& \\ O IV] & 1399.78& 0.92&$\pm 0.2$& 0.77 &$\pm 0.2$&0.22&$\pm 0.10$&\\ O IV & 1401.16& 4.04&$\pm 0.2$& 3.59 &$\pm 0.3$&0.59&$\pm 0.20$&\\ Si IV & 1402.77& -1.53\phantom{0} &$\pm 0.5$& -0.99\phantom{0} &$\pm 0.3$&-1.03&$\pm$&P-Cygni abs\\ Si IV & 1402.77& 2.16 &$\pm 0.9$& 0.48 &$\pm 0.3$&-&&\\ O IV] & 1404.81& 2.20 &$\pm 0.5$& 2.52 &$\pm 0.3$&0.56&$\pm 0.15$&\\ S IV] & 1406.00& 1.06 &$\pm 0.2$& 0.72 &$\pm 0.2$&-&&\\ O IV] & 1407.38& 0.40 &$\pm 0.2$& 0.57 &$\pm 0.2$&-&&\\ N IV] & 1483.32& -&&-&&0.25&$\pm0.05$&\\ N IV] & 1486.50& 54.7\phantom{00} &$\pm 1.8$& 51.7\phantom{00} &$\pm 1.5$&9.61&$\pm0.10$&\\ C IV & 1548.19& 
-4.25\phantom{0} &$\pm 1.7$& -2.85\phantom{0} &$\pm 0.5$&-&&P-Cygni abs\\ C IV & 1548.19& 15.9\phantom{00} &$\pm 1.0$& 8.98 &$\pm 0.8$&0.58&$\pm0.07$&\\ C IV & 1550.77& -4.36\phantom{0} &$\pm 0.4$& -2.54\phantom{0} &$\pm 0.5$&-&&P-Cygni abs\\ C IV & 1550.77& 8.90&$\pm 2.0$& 4.90 &$\pm 0.7$&0.31&$\pm0.05$&\\ Ne IV] & 1601.5?& -&& -&&0.65&$\pm0.07$&\\ He II & 1640.40& 8.55 &$\pm 0.3$& 8.64 &$\pm 0.3$&1.81&$\pm0.10$&\\ O III] & 1660.81& 4.96&$\pm 0.2$& 2.88&$\pm 0.2$&0.53&$\pm0.10$&\\ O III] & 1666.15& 12.6\phantom{00} &$\pm 0.3$& 9.64 &$\pm 0.3$&1.48&$\pm0.10$&\\ N III] & 1746.82& 1.53 &$\pm 0.5$& 0.73 &$\pm 0.3$&-&&\\ N III] & 1748.65& 5.51 &$\pm 0.7$& 3.89 &$\pm 0.3$&-&&\\ N III] & 1749.67& 42.9\phantom{00} &$\pm 0.8$& 27.1\phantom{00} &$\pm 0.7$&1.62&$\pm0.07$&\\ N III] & 1752.16& 20.8\phantom{00} &$\pm 0.7$& 15.1\phantom{00} &$\pm 0.4$&0.86&$\pm0.05$&\\ N III] & 1754.00& 7.44&$\pm 0.7$& 6.47 &$\pm 0.3$&0.23&$\pm0.05$&\\ \enddata \end{deluxetable*} A further complication, when using these lines for diagnostic purposes, is that all the resonance lines in the UV have P-Cygni components. The photons scattered out of the line of sight (LOS) to the observer, which give rise to the absorption component, are partly compensated for by those scattered into the LOS from the area of the wind outside of the projected photosphere. The net emission in a line from the CSM comes from the emission component plus the scattered photons. The photons from the back of the SN that are occulted by the optically thick ejecta do not contribute to the emission. An upper limit to this contribution is the absorbed flux relative to the continuum in the absorption component. The exact fraction depends on the emissivity as a function of the radius from the photosphere. For this reason we list in Table \ref{table_cos} both the flux in the emission component and the flux deficit in the absorption component for the COS spectra.
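This bookkeeping can be summarized as a simple bracket on the net thermal line emission from the CSM (a schematic bound; the exact value depends on the unknown radial emissivity distribution):
$$
F_{\rm em}-|F_{\rm abs}| \ \la \ F_{\rm CSM} \ \la \ F_{\rm em},
$$
where $F_{\rm em}$ is the flux in the emission component and $|F_{\rm abs}|$ is the flux deficit in the absorption component, both listed in Table \ref{table_cos}.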
An inspection of Table \ref{table_stis_1}, as well as Figure \ref{fig_full_stis_spec}, shows that between the first two epochs there is relatively little evolution of the flux of the narrow lines in the STIS range, although the continuum has decreased somewhat. At 107 days the fluxes of both continuum and lines have, however, decreased by a factor of $\sim 2$. The same trends can be seen in the COS spectra (Table \ref{table_cos} and Figure \ref{fig_full_spec}). The decrease in the narrow line fluxes correlates with the broad component of the high ionization UV lines (Figure \ref{fig_lprof_lya_mgii}). In the last STIS spectrum at 573 days both lines and continuum have dropped by a factor $\sim 10$, while the Mg II $\lambda \lambda 2796, 2804$ lines have dropped only by a factor $\sim 3$. \subsubsection{The X-shooter spectrum at 461 days} \label{xshooter} We also obtained one medium/high resolution X-shooter spectrum from the VLT for 2012 Jan. 14 (day 461), shown in Figure \ref{fig_xshooterspec}. Because of the higher spectral resolution and NIR coverage we discuss this spectrum separately. Starting with the continuum, we note the excess in the NIR with a maximum at $\sim 1.5 \ \mu$m. This NIR excess agrees well with the rise in the J, H and K$_s$ bands we see in the photometry in Figure \ref{fig_photometry} later than $\sim 400$ days and is most likely caused by hot dust emission; its origin will be discussed in Sect. \ref{sec_dust}. We fit this well with a blackbody spectrum at a temperature of $\sim 1870$ K. The corresponding continuum in the optical is fitted with a temperature of $\sim 9000$ K. The value for the dust temperature differs by $\sim 8 \%$ from that obtained from the photometry in Table \ref{table_dust_param}, 2045 K. This difference gives an estimate of the uncertainty in this parameter arising from the photometry, the spectral calibration, and the fitting.
Because of the many lines in the optical region the error in the photospheric temperature is at least as large as this, although the two estimates here happen to be very close. In the NIR the most interesting lines are the strong broad and narrow components of He I $\lambda 10,830$, as well as Pa$\alpha$, Pa$\beta$ and Pa$\gamma$. The He I $\lambda 10,830$ confirms the identification of the He I $\lambda 5876$ line in the optical. The $\lambda 2.0581 \mu$m line is, however, only seen as a narrow absorption. \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{./xshooter_jan_2012_lin_opt_ir_dust.eps}} \caption{X-shooter spectrum from 2012 January 14 (day 461). The continuum has been fit with a two-component blackbody with temperatures of $9000$ K and $1870$ K, respectively. Note the many narrow lines from the circumstellar gas, as well as the strong, broad He I and H I lines in the NIR. }\label{fig_xshooterspec} \end{center} \end{figure} We identify the broad, asymmetric feature with a peak at $\sim 8435$ \AA \ with O I $\lambda 8446$. The extended red wing is a blend of the Ca II triplet $\lambda \lambda 8498.0, 8542.1, 8662.1$ and higher members of the Paschen series ($n = 12 - 15$). The identification with the O I $\lambda 8446$ line is supported by the presence of the O I 1.129 $\mu$m line. These lines are likely to be a result of Ly$\beta$ fluorescence with O I, where the upper level of the 1.129 $\mu$m line is pumped by Ly$\beta$, and then decays into the 1.129 $\mu$m and $\lambda 8446$ lines \citep[][]{Grandi1980}. The relative flux of these lines agrees well with that expected, although blending of the O I $\lambda 8446$ line prevents a detailed comparison. O I $\lambda 8446$ was also seen in the Type IIn SN 1995G \citep{Pastorello2002}, although in this case no NIR spectrum exists. There is also a line consistent with O I $\lambda 8446$ in the SN 1995N spectrum in \cite{Fransson2002}, although this was identified as mainly Fe II $\lambda 8451$.
In the case of SN 2010jl there are, however, no lines corresponding to Fe II $\lambda \lambda 9071, 9128, 9177$, which would be expected to accompany the Fe II $\lambda 8451$ line. The high resolution in combination with high S/N allows us to identify a large number of weak, narrow emission lines not seen in the other lower S/N spectra, which cast additional light on the CSM. In Figure \ref{fig_xshooterspec_opt} we show a close-up of the optical range with some of the most important lines marked. \begin{figure*} \begin{center} \resizebox{150mm}{!}{\includegraphics[angle=0]{./xshooter_jan_2012_lin_opt_panels.eps}} \caption{The optical part of the X-shooter spectrum from 2012 January 14 (day 461). A number of the most important lines from the CSM have been marked. }\label{fig_xshooterspec_opt} \end{center} \end{figure*} The most interesting of these lines are the forbidden lines of [Fe V] -- [Fe XIV], covering a large range of ionization stages. The [Fe VII] lines serve as diagnostics of the density and the excitation conditions in the CSM and are discussed in Sect. \ref{sec-fluxes}. Of the coronal lines [Fe X] $\lambda 6374.5$ is strong, as well as [Fe XI] $\lambda 7891.8$, while [Fe XIV] $\lambda 5302.9$ is only barely seen as a weak line. [Fe XIII] $\lambda 10,746$ is not seen, but sits on the strong wing of He I $\lambda$10,830. Other high ionization lines include [Ne V] $\lambda \lambda 3345.8, 3425.9$, [Ca V] $\lambda 6086.8$, [Ar X] $\lambda 5533.2$ and a weak [Ar XIV] $\lambda 4412.6$. The presence of these lines confirms the high state of ionization in the CSM of this SN. In these high S/N spectra the Balmer series can be traced up to $n = 33$ ($\lambda 3659.4$) with strong absorption components, but only weak P-Cygni emission. The Paschen series is seen from Pa$\alpha$ to $n=17$ ($\lambda 8467.3$). In contrast to the Balmer lines, these lines are all in pure emission. Br$\gamma$ is also seen in emission.
\subsection{Density diagnostics of the narrow lines} \label{sec-fluxes} Using the fluxes from the previous section, we employ the forbidden and intercombination lines in the UV and optical as diagnostics of the density. The ratios of the N III] $\lambda \lambda 1746.8, 1748.7, 1749.7,1752.2, 1754.0$ lines are sensitive to the electron density \citep{Keenan1994}. In particular, the $\lambda \lambda 1752.2/1749.7$ ratio is $0.55-0.58$ in the range $10^5 - 10^8 \rm ~cm^{-3}$. Above this density the ratio decreases to become $0.20$ at $10^{10} \rm ~cm^{-3}$. The $\lambda \lambda 1754.0/1749.7$ ratio has a similar behavior. Unfortunately, the $\lambda 1754.0$ line is at the very boundary of the wavelength range of the G160M grating, and its flux is uncertain. The $\lambda$ 1748.7 and $\lambda$ 1754.0 lines have the same upper level with transition rates within a few percent of each other, so we use the $\lambda \lambda$ 1748.7/1749.7 ratio instead of the $\lambda \lambda 1754.0/1749.7$ ratio for the diagnostics. Figure \ref{fig_niii_fit_spec} shows a fit of these lines for different electron densities between $10^7 - 10^{10} \rm ~cm^{-3}$ for the day 44 COS spectrum. The fits show that (with the exception of the $\lambda 1754.0$ line) all lines agree with an electron density in the range $10^5 - 10^8 \rm ~cm^{-3}$. Higher values give too low a value for the $\lambda \lambda 1752.2/1749.7$ ratio. We have also examined the day 107 and day 621 COS spectra and find essentially the same density constraint, although the S/N is somewhat lower. Figure \ref{fig_oiv_fit_spec} shows a similar analysis of the O IV] $\lambda \lambda 1393.1, 1397.2, 1399.8, 1401.2, 1404.8, 1407.4$ and S IV] $\lambda \lambda 1398.0, 1404.8, 1406.0, 1416.9, 1423.8$ multiplets. As the red curve shows, the lines are fit with the same density interval, $10^5 - 10^8 \rm ~cm^{-3}$, as the N III] lines. This is not surprising since these ions are expected to arise in the same region.
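The substitution of the $\lambda 1748.7$ line for the $\lambda 1754.0$ line works because, for two optically thin lines sharing the same upper level $u$, the flux ratio is set by atomic constants alone and is independent of $n_{\rm e}$ and $T_{\rm e}$:
$$
\frac{F(\lambda 1748.7)}{F(\lambda 1754.0)}=\frac{A_{u,1748.7}}{A_{u,1754.0}}\,\frac{\lambda_{1754.0}}{\lambda_{1748.7}},
$$
since both lines are fed by the same upper-level population; with transition rates equal to within a few percent, the two ratios are interchangeable for the diagnostics.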
\begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{full_spectrum_2010jl_COS_NIII_fit.eps}} \caption{Fits to the N III] $\lambda \lambda 1746.8, 1748.7, 1749.7,1752.2, 1754.0$ multiplet (marked by the vertical lines) from the COS spectrum from day 44. The red line is for $10^7 \rm ~cm^{-3}$, the magenta for $10^8 \rm ~cm^{-3}$, the green for $10^9 \rm ~cm^{-3}$, and the cyan for $10^{10} \rm ~cm^{-3}$. The dashed red line marks the continuum. The dip at 1753.3 \AA \ is an interstellar absorption line. } \label{fig_niii_fit_spec} \end{center} \end{figure} \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{2010jl_COS_OIV_fit_epoch1.eps}} \caption{Same as Figure \ref{fig_niii_fit_spec} for the O IV] $\lambda \lambda 1393.1, 1397.2, 1399.8, 1401.2, 1404.8, 1407.4$ and S IV] $\lambda \lambda 1398.0, 1404.8, 1406.0, 1416.9, 1423.8$ multiplets. The red line is for $10^7 \rm ~cm^{-3}$, the magenta for $10^8 \rm ~cm^{-3}$, the green for $10^9 \rm ~cm^{-3}$, and the cyan for $10^{10} \rm ~cm^{-3}$. The solid red line marks the continuum. We have not attempted a fit of the Si IV $\lambda \lambda 1393.3, 1402.3$ P-Cygni lines.} \label{fig_oiv_fit_spec} \end{center} \end{figure} The ratio of the N IV] lines at 1482.9 \AA \ and 1486.1 \AA \ is sensitive to density in the range $10^4 - 10^{8} \rm ~cm^{-3}$ \citep{Keenan1995}, with a ratio of $\sim 1.6$ below $10^4 \rm ~cm^{-3}$ and decreasing above this density. From the 2012 June 20 spectrum we estimate a $\lambda \lambda 1482.9/1486.1$ flux ratio of $0.029 \pm 0.005$. From \citet[][their Figure 1]{Keenan1995} we find that this corresponds to an electron density $(3-6) \times 10^6 \rm ~cm^{-3}$ for temperatures in the range $10^4 - 2\times 10^4$ K. The O III] $\lambda \lambda 1660.8, 1666.2$, and [O III] $\lambda \lambda 4363.2, 4958.9, 5007.0$ lines are especially useful as density and temperature diagnostics \citep[][]{Crawford2000}.
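For reference, the joint density and temperature sensitivity of the [O III] lines can be written in the standard five-level-atom approximation \citep[e.g.,][]{Osterbrock2006} (quoted here from the literature, not refit to our data):
$$
\frac{j(\lambda 4959)+j(\lambda 5007)}{j(\lambda 4363)} \approx \frac{7.90\,\exp(3.29\times 10^{4}/T_{\rm e})}{1+4.5\times 10^{-4}\,n_{\rm e}/T_{\rm e}^{1/2}},
$$
which shows directly that a small $\lambda \lambda (4959+5007)/4363$ ratio requires either a very high temperature or a density approaching or exceeding the critical density of the $\lambda 5007$ line, $\sim 7\times 10^{5} \rm ~cm^{-3}$.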
Figure \ref{fig_full_stis_spec} shows the [O III] $\lambda \lambda 4363.2, 4958.9, 5007.0$ region for the first two STIS spectra, and all three [O III] lines are easily identified. This is even clearer in the ground based spectra in Figure \ref{fig_oiii_feiii}. The first thing to note is the very strong $\lambda 4363$ line, which is usually faint compared to the $\lambda \lambda 4959, 5007$ lines. Typical [O III] $\lambda \lambda 5007/4363$ ratios for H II regions are $\ga 50$ \citep{Osterbrock2006}. \cite{Pilyugin2007} determine [O III] $\lambda \lambda 4959, 5007/4363$ ratios of 166 and 140 for UGC 5189 at different radii, which are much larger than the ratio observed for SN 2010jl and strongly argue for a circumstellar origin of these lines. \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{SN2010jl_NOT_Gr16_abs_flux_calib_oiii_region_texp.eps}} \caption{The [O III] and [Fe III] region from observations with NOT from day 32 to day 176. Note the strong narrow [O III] $\lambda 4363 $ line compared to the $\lambda 5007$ line. For clarity the day 32 and day 80 spectra have been shifted in flux by the amount indicated in the figure. } \label{fig_oiii_feiii} \end{center} \end{figure} In Figure \ref{fig_oiii_ratio} we give the [O III] $\lambda \lambda 5007/4363$ ratio and the [O III] $\lambda 4363$ flux for the different dates for selected medium resolution spectra with NOT and MMT (2010 Nov. 15), as well as the STIS observations. The limited spectral resolution, as well as the S/N, make the fluxes of the STIS spectra uncertain. They do, however, have the advantage of minimizing the background contamination of especially the $\lambda$ 5007 line. We note that the flux of the $\lambda 4363$ line shows a smooth decay by a factor of $\sim 2$ during the period. However, the $\lambda \lambda 5007/4363$ ratio varies by a large factor from observation to observation.
The reason for this is that the $\lambda 5007$ line is severely affected by nearby H~II regions in some of the ground based observations. Observations with poor seeing ($\ga 1\arcsec$) therefore have a much larger $\lambda 5007$ flux than the ones with good seeing. This is also consistent with the STIS observations, which, in spite of a low S/N, show a low $\lambda 5007$ flux and a low $\lambda \lambda 5007/4363$ ratio. Because of the high $\lambda \lambda 5007/4363$ ratio in the nearby H II regions (see above) the $\lambda 4363$ line is only marginally affected by this. For the nebular analysis below we only use the lowest $\lambda \lambda 5007/4363$ ratios, which are in the range $0.5 - 0.8$. The region of O III] $\lambda \lambda 1660.8, 1666.2$ is noisy in the STIS spectra, but does show a clear line above the noise (Figure \ref{fig_full_stis_spec}). The flux of the line is consequently uncertain. However, the fluxes in the COS and STIS spectra are consistent with each other for the dates of the two COS observations. The observation from day 44 gives a total O III] $\lambda \lambda 1661 + 1666$ flux of $1.8 \times 10^{-14} \rm ~erg~cm^{-2}~s^{-1}$, close to that of the STIS observation for the same date. We conclude that the O III] $\lambda \lambda 1661 + 1666$ flux is accurate to within $\sim 50 \%$. The expected line ratio of O III] $\lambda 1666.2 / \lambda 1660.8$ is 2.5, consistent with the measured ratio of $2.3$ from COS. As a very conservative estimate we take O III] $\lambda \lambda (1661 + 1666)/5007 = 1.2 - 8$. \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{SN2010jl_NOT_OIII_ratio_texp.eps}} \caption{The flux of the narrow [O III] $\lambda 4363$ line (blue) and the [O III] $\lambda \lambda 5007/4363$ ratio (red). Observations with STIS are marked with squares, with MMT with triangles and NOT/grism 16 with circles.
The errors in the $\lambda 4363$ fluxes are $\sim 25 \%$, while the ground based [O III] $\lambda \lambda 5007/4363$ ratios are systematically overestimated because of background contamination from nearby H II regions. The observations with the best seeing give the lowest line ratios, as is confirmed with the STIS observations, although the S/N of these is lower. } \label{fig_oiii_ratio} \end{center} \end{figure} Figure \ref{OIII_ratios} shows the [O III] $\lambda \lambda 4363/5007$ and $\lambda \lambda 1666/5007$ ratios as a function of density and temperature. We have also plotted the observed ranges of these ratios as horizontal lines in the figure. The atomic data are taken from \cite{Crawford2000} and \cite{Palay2012}. From this figure we see that for reasonable temperatures, i.e., $\la 50,000$ K, a $\lambda \lambda 5007/4363$ ratio in the range $0.5 - 0.8$ requires an electron density $\ga 3 \times 10^6 \rm ~cm^{-3}$. Further, the $\lambda \lambda 1666/5007$ ratio results in a range $3 \times 10^6 - 10^9 \rm ~cm^{-3}$ for $T_{\rm e} = (1-3) \times 10^4$ K. A higher reddening than we have assumed would only increase this range marginally. \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{OIII_ratio_v3.eps}} \caption{The [O III] $\lambda \lambda 4363/5007$ (black, dash-dotted lines) and $\lambda \lambda 1666/5007$ (red, solid lines) ratios as a function of density and temperature (in $10^4$ K). The corresponding observed ranges are plotted as horizontal dotted lines for the $\lambda \lambda 4363/5007$ ratio and dashed lines for the $\lambda \lambda 1666/5007$ ratio.} \label{OIII_ratios} \end{center} \end{figure} We identify faint, but clear narrow lines of [Fe III] at $4657\pm1$ \AA \ and $4701\pm1$ \AA \ in the high S/N NOT spectra (Figure \ref{fig_oiii_feiii}). These lines can be seen at all epochs with approximately the same fluxes.
In addition, there may also be a weak line at $\sim 5010$ \AA, but this is disturbed by the [O III] $\lambda 5007$ line, as well as noise. These lines are interesting in that they can provide some independent diagnostics of the narrow line gas. \cite{Keenan1993} find that the [Fe III] $\lambda \lambda 4702/4658$ ratio varies from $\sim 0.30$ for $n_{\rm e} \la 10^3 \rm ~cm^{-3}$ to $ 0.46-0.52$ for $n_{\rm e} \ga 10^5 \rm ~cm^{-3}$. The observed ratio is $\sim 0.5$, in agreement with the high density limit. The $\lambda \lambda 4881/4658$ ratio increases from $\sim 0.2$ for $n_{\rm e} \approx 10^2 \rm ~cm^{-3}$ to $\sim 0.5$ for $n_{\rm e} \approx 10^5 \rm ~cm^{-3}$; at higher densities it rapidly decreases to $\la 0.1$. There is no sign of the $\lambda 4881$ line in the spectra, consistent with a density $n_{\rm e} \ga 10^5 \rm ~cm^{-3}$. The $\lambda 5011.3$ line is, on the other hand, probably present. Together, these lines are consistent with $n_{\rm e} \ga 10^6 \rm ~cm^{-3}$. The final diagnostics come from the [Fe VII] lines in the X-shooter spectrum from day 461 (Figure \ref{fig_xshooterspec_opt}). The [Fe VII] $\lambda \lambda 4988.6 / 5720.7$ ratio is 0.16. Using the diagnostic diagram from \cite{Keenan2001} we find that this corresponds to an electron density of $\sim 5 \times 10^5 \rm ~cm^{-3}$. We note, however, that the derived density is sensitive to the uncertain collision strengths, as is discussed in \cite{Keenan2001}. Summarizing these different density diagnostics, i.e., the N III], N IV], [O III], O IV], [Fe III] and [Fe VII] ratios, as well as the presence of P-Cygni absorptions in the Balmer lines, we find that these lines indicate a density $3 \times 10^6 - 10^8 \rm ~cm^{-3}$ for the region emitting the narrow lines in SN 2010jl. \subsection{CNO abundances} \label{sec-cno} The strong [N II], N III], N IV] and N V lines in the STIS and COS spectra (Figs. 
\ref{fig_full_spec} and \ref{fig_full_stis_spec}) indicate a nitrogen enrichment in the narrow line region of the CSM of the SN. To estimate the relative CNO abundances we note that for a photoionized plasma the ionization zones of N III and C III coincide closely with each other, and similarly for the C IV, N IV and O III zones \citep[][their Figure 6a]{Kallman1982}. The N V and O IV zones overlap, but not to the same extent as the lower ionization ions. In addition, the temperature in these zones is nearly constant. Using the line fluxes for these ions allows us to derive the relative ionic abundances and, with some assumptions, elemental abundances. Because the excitation energies are similar, the line ratios are insensitive to the temperature for reasonable ranges, $(1 - 3)\times 10^4$ K. Here we use $T_{\rm e} =2\times10^4$ K. They do depend on the electron density, but this is mainly important above $\sim 10^9 \rm ~cm^{-3}$, where the lines are suppressed, and densities this high were excluded through the analysis in the previous Section. The narrow range of wavelengths also makes the results insensitive to the reddening. The most interesting lines for the CNO analysis are N IV] $\lambda 1486.5$, C IV $\lambda \lambda 1548.2, 1550.8$, O III] $\lambda \lambda 1660.8, 1666.2$, N III] $\lambda \lambda 1746.8-1754.0$, and C III] $\lambda 1908.7$. With the exception of the C III] lines, all are within the range of our COS spectra and accurate relative fluxes can be derived (within a few percent). The C III] line, however, poses a special problem because of the lower S/N and low spectral resolution of the STIS spectra (Figure \ref{fig_full_stis_spec}). An additional complication is that the line sits on top of a broad electron scattering feature centered on the Si III] $\lambda 1892.0$ line.
This is also the case for the N III] $\lambda \lambda 1746.8-1754.0$ lines, but in this case it is easy to determine the background flux from the high resolution COS spectra. An estimate of the systematic error in the N III] flux from STIS can consequently be obtained if one compares the COS and STIS fluxes for the same epoch. For the day 44 observation we get a total N III] flux of $(11.5\pm 2.0) \EE{-14} \rm ~erg~cm^{-2}~s^{-1}$ from the STIS observation, depending on where the 'continuum' level is set, compared to $ 7.9 \EE{-14}\rm ~erg~cm^{-2}~s^{-1}$ for the COS spectrum. For the day 107 spectrum the corresponding numbers are $(5.57\pm 0.8) \EE{-14} \rm ~erg~cm^{-2}~s^{-1}$ and $(5.33\pm 0.1) \EE{-14} \rm ~erg~cm^{-2}~s^{-1}$, respectively. There is a tendency to overestimate the flux from the STIS observations, mainly from the difficulty in determining the continuum level. The same is likely to be the case for the C III] line. To minimize the systematic error, we therefore use the N III]/C III] ratio from the STIS observations, in spite of the lower S/N of the STIS N III] fluxes compared to the COS fluxes. To estimate the systematic error in the C III] flux we have calculated the line flux for a range of assumptions about the background 'continuum'. The error may still be up to $\sim 50 \%$ and dominates the error budget in the relative N III/C III ratio. For the other line ratios, N IV]/C IV and N IV]/O III], we use fluxes from the COS spectra. For the C IV resonance doublet we assume two limiting cases. In the first case we subtract the full absorption component from the emission component to correct for the scattering, as discussed in Sect. \ref{sec-narrow}. This may underestimate the net emission by up to $\sim 60 \%$. In the other case we neglect this correction and use the observed flux in the emission component. This is likely to overestimate the net emission.
These limits should bracket the thermal excitation emission and are the best we can do without knowledge about the radius of the emission region. The adopted line ratios with errors are given in Table \ref{table_cno}. There is a fairly large scatter in both the N III]/C III] ratio and the N IV]/C IV ratio, reflecting the difficulties discussed above. \begin{deluxetable*}{lccc} \tabletypesize{\footnotesize} \tablecaption{Observed and derived ionic abundances. \label{table_cno}} \tablewidth{0pt} \tablehead{ \colhead{Date} & \colhead{N III/C III} & \colhead{N IV/C IV} & \colhead{N IV/O III} \\ } \startdata Line fluxes used&&&\cr 34 days&$13.5\pm2.0~/~2.99\pm1.2$ &--&--\cr 44 days&$11.5\pm2.0~/~2.96\pm1.2$ &$5.47\pm0.18~/~1.62\pm0.28$\tablenotemark{b}&$5.47\pm0.18~/~1.76\pm0.05$\cr &&$5.47\pm0.18~/~2.48\pm0.22$\tablenotemark{c}&\cr 107 days &$5.57\pm0.8~/~2.24\pm1.0$&$5.17\pm0.15~/~0.85\pm0.13$\tablenotemark{b}&$5.17\pm0.15~/~1.25\pm0.05$ \cr &&$5.17\pm0.15~/~1.39\pm0.11$\tablenotemark{c}&\cr Observed line ratios&&&\cr 34 days&$4.51\pm1.93$ &--&--\cr 44 days&$3.89\pm1.71$ &$3.37\pm0.59$\tablenotemark{b}&$3.11\pm0.14$\cr &&$2.21\pm0.21$\tablenotemark{c}&\cr 107 days &$2.49\pm1.16$&$6.08\pm0.95$\tablenotemark{b}&$4.14\pm0.20$ \cr &&$3.71\pm0.31$\tablenotemark{c}&\cr 573 days&$2.49\pm1.16$&$6.08\pm0.95$\tablenotemark{b}&$4.14\pm0.20$ \cr &&$3.71\pm0.31$\tablenotemark{c}&\cr 621 days&-&$6.08\pm0.95$\tablenotemark{b}&$4.14\pm0.20$ \cr &&$3.71\pm0.31$\tablenotemark{c}&\cr &&&\cr Derived ionic ratios \tablenotemark{a}&&&\cr &&&\cr Nov 11 &$37.9\pm 16.2 - 30.3 \pm 13.0$&--&--\cr Nov 22 & $32.7\pm 14.4 - 26.1 \pm11.5$ & $22.2\pm3.9 - 22.9 \pm4.0$\tablenotemark{b}& $0.74\pm 0.04 - 0.70 \pm 0.04$ \cr &&$14.6\pm 1.4 - 15.0 \pm1.4$\tablenotemark{c} &\cr Jan 23 & $20.9\pm \phantom{0}9.8 - 16.7 \pm\phantom{0}7.8$ & $40.1\pm 6.3 - 41.3 \pm6.5$\tablenotemark{b} & $0.99\pm 0.05 - 0.93 \pm 0.05$ \cr &&$24.5\pm 1.0 - 25.9 \pm1.2$\tablenotemark{c} &\cr \enddata \tablenotetext{a}{~Assuming
$n_{\rm e}$ in the range $10^6 - 10^9 \rm ~cm^{-3}$, $T_{\rm e}=2\times10^4$ K, and E$_{B-V} = 0.027$ mag.} \tablenotetext{b}{Corrected for scattering.} \tablenotetext{c}{No correction for scattering.} \end{deluxetable*} Using these line ratios we can now derive relative ionic abundances. For the collision strengths and radiative transition rates we use data from the Chianti database \citep{Landi2012}. In Table \ref{table_cno} we show the derived ionic abundances for the different dates and $T_{\rm e} =2\times10^4$ K, typical of an X-ray heated plasma. As discussed above, the results are not very sensitive to the temperature. To show the dependence on the assumed density we give the abundance ratios for both $n_{\rm e} = 10^6 \rm ~cm^{-3}$ and for $n_{\rm e} = 10^9 \rm ~cm^{-3}$. There are several things to note here. The scatter and large errors in the STIS fluxes are reflected in the N III/C III ratio. Taking an average of the three epochs we get $n({\rm N III})/n({\rm C III})=30.5 \pm 13.7$ for $10^6 \rm ~cm^{-3}$ and $n({\rm N III})/n({\rm C III})= 24.3 \pm 11.0$ for $10^9 \rm ~cm^{-3}$. Also, the N IV/C IV ratio varies greatly with both epoch and the assumption about the scattering contribution. The scattering introduces an uncertainty of $\sim 60 \%$, while the variation between the different epochs amounts to $70-80 \%$. These variations are probably related to each other. A decrease in the continuum flux means that the scattering contribution will decrease. There may therefore be an evolution from a line dominated by scattering to a line where scattering is less important. The N IV/O III ratio does not have any of these complications since this ratio involves two intercombination lines. Consequently, it is relatively constant between the two epochs, and also has small individual errors. In addition to the observational errors and the uncertainty of the scattering correction, there is some uncertainty in the correspondence of the ionization zones.
Although both the C III and N III zones, as well as the C IV, N IV and O III zones, nearly coincide in the X-ray photoionized models by \cite{Kallman1982}, this is not obviously so for our case with a different ionizing spectrum and density, although it is likely that the narrow lines are indeed excited by X-rays (Sect. \ref{sec_csm}). In spite of these caveats, the need for a very high N/C ratio is clear. From the N III/C III and N IV/C IV ratios we find a 'best value' $n({\rm N})/n({\rm C})=25 \pm 15$. The N/O ratio is better constrained to be $n({\rm N})/n({\rm O})=0.85\pm 0.15$. For comparison, the corresponding solar ratios are $n({\rm N})/n({\rm C}) =0.25$ and $n({\rm N})/n({\rm O})=0.14$ \citep{Asplund2009}. \section{Discussion} \label{sec-discussion} \subsection{The narrow circumstellar lines} \label{sec_csm} Our analysis in Sect. \ref{sec-results} provides several pieces of important information about the SN and its CSM that give clues to the nature of the progenitor and its evolutionary status. In particular, we have evidence for a CSM of the progenitor with a velocity of $\sim 100 \rm ~km~s^{-1}$ and a density $3\times10^6 - 10^8 \rm ~cm^{-3}$. Further, the derived CNO abundances indicate a large nitrogen enrichment, $n({\rm N})/n({\rm C})=25 \pm 15$ and $n({\rm N})/n({\rm O})=0.85\pm 0.15$, typical of CNO processed gas. A large CNO enrichment has been observed for a number of SNe, either in the ejecta itself or in the CSM. This includes SN 1979C (IIL), SN 1995N (IIn), SN 1998S (IIn), SN 1993J (IIb) and SN 1987A (IIpec) \citep[see][for a summary of these with references]{Fransson2005}. As for potential progenitors, CNO enrichment is observed for several types of evolved massive stars. In particular, this is consistent with an LBV scenario for the progenitor. The best studied cases are Eta Carinae and AG Carinae, which we discuss below. \cite{Hillier2001} make a detailed analysis of STIS observations of the central source in Eta Carinae.
The mass loss rate is determined to be $\sim 10^{-3} ~\Msun ~\rm yr^{-1}$ for a wind velocity of $\sim 500 \rm ~km~s^{-1}$. From the weakness of the electron scattering wings they conclude that the wind has to be clumpy, with a filling factor of $\sim 10 \%$. Hillier et al. also find that the N abundance in the primary star is at least a factor of 10 higher than solar, while C and O are severely depleted. In the nebula of Eta Carinae the CNO abundances can be determined more reliably. In particular, \cite{Dufour1997} and \cite{Verner2005} find that N is enhanced by a factor 10 -- 20, while C and O are depleted by factors 50 -- 100. \cite{Smith2004} also find a strong radial gradient in the N/O ratio in Eta Carinae, with a high N/O ratio close to the star, decreasing outwards. Most of the mass in the CSM of Eta Carinae is in a molecular shell surrounding the central star. \cite{Smith2006} estimates a total mass of $\sim 11 ~M_\odot$. The distance from the star to this hour-glass shaped shell is between $3\EE{16}$ cm and $3\EE{17}$ cm. Especially interesting is that \cite{Smith2006} derives a density $\ga 3\EE6 \rm ~cm^{-3}$ of the molecular shell around Eta Carinae. This density bound is similar to that we find for the CSM of SN 2010jl. In addition, gas with similar densities is seen even closer, at distances $\sim 10^{15}$ cm from the star, as [Fe II] emission. Besides Eta Carinae, AG Carinae is one of the most extensively studied LBV stars, showing variations of the S Doradus type. Over the two periods studied the mass loss rate has varied between $(1.5 - 3.7)\times 10^{-5} ~\Msun ~\rm yr^{-1}$, while the wind velocity varied between $105 \rm ~km~s^{-1}$ and $300 \rm ~km~s^{-1}$, in anti-phase with the mass loss \citep{Groh2009}. It has clearly undergone a more intense mass loss period earlier, as is apparent from the fact that it has a circumstellar nebula with an ionized mass of $\sim 4.2 ~M_\odot$ and a dynamical age of $\sim 8.5\times10^3$ years \citep{Nota1995}.
An even higher mass of $\sim 25 ~M_\odot$ in the neutral medium has been estimated based on IR imaging, assuming a standard dust-to-gas ratio of 1:100 \citep{Voors2000}. From a spectral analysis of the central star \cite{Groh2009} find a N mass fraction of $11.5\pm3.4$ times solar, while C is only $0.11\pm0.03$ times solar, and O is $0.04\pm0.02$ times solar, clearly indicating CNO processed material. The abundances in the nebula are more uncertain, with an N/O ratio of $6\pm 2$, compared to $39^{+28}_{-18}$ for the star. It is likely that the surface of the star has undergone more CNO processing than the nebula. The wind velocity we find for SN 2010jl, $\sim 100 \rm ~km~s^{-1}$, is not that of a red supergiant, whose velocities are in the range $10-40 \rm ~km~s^{-1}$ \citep{Jura1990}, so we rule these out as progenitors. Here we differ from \cite{Zhang}, who find a wind velocity of $28 \rm ~km~s^{-1}$, but this is a result of their use of the velocity of maximum absorption in the P-Cygni profile and not the maximum blue velocity. The maximum absorption velocity we observe is also considerably higher than theirs (Figure \ref{fig3}). Their conclusion that the progenitor was a red supergiant is based on a wind velocity that is too low. We find a velocity that is more typical of LBV stars, which have velocities in the range $100 - 1000 \rm ~km~s^{-1}$ \citep[][]{Smith2011b}. The velocity we find, $\sim 100 \rm ~km~s^{-1}$, is on the low end of this range, which may be a potential problem with the LBV interpretation. However, the expansion velocity of the molecular shell in Eta Carinae is highly anisotropic, ranging from $\sim 60 \rm ~km~s^{-1}$ at the equator to $\sim 650 \rm ~km~s^{-1}$ at the poles \citep{Smith2006}, and is likely to vary substantially with time. As we noted above, the velocity in the high mass loss phase of AG Carinae is very similar to the one we derive for SN 2010jl.
Although it is clear that the narrow lines originate in fairly dense circumstellar gas, it is not obvious what excites them. The recombination time is $\sim 1/(\alpha n_{\rm e}) \sim 5.8 (n_{\rm e} / 10^7 \rm ~cm^{-3})^{-1}$ days for H at $T_{\rm e} \sim 10^4$ K. The gas is therefore in near equilibrium with the ionizing radiation. An early, undetected UV burst is therefore not likely to be important for the excitation. The widths of the lines, $\la 100 \rm ~km~s^{-1}$, also make it unlikely that the gas is shock ionized. Instead we believe that the lines are excited by the same X-rays as are observed with the Chandra satellite \citep{Chandra2012}. As we discuss in Sect. \ref{sec_broad}, these are likely to come from fast shocks connected to an anisotropic high velocity ejection. We can estimate the state of ionization from the models in \cite{Kallman1982}. With an X-ray luminosity $\sim 3\EE{41} \rm ~erg~s^{-1}$, corresponding to the absorbed flux of $\sim 10^{-12} \ \rm erg \ cm^{-2} \ s^{-1}$ \citep{Chandra2012}, the ionization parameter, $\zeta = L/(n_{\rm e} r^2)$, becomes $\zeta \sim 300 (n_{\rm e} / {10^7 \rm ~cm^{-3}})^{-1} (r / 10^{16} \rm \ cm)^{-2}$. From the optically thin Model 1 in \cite{Kallman1982} we find that the presence of N III -- N V requires $\log \zeta \sim 0 - 1$. Optically thick models with a density $10^{11} \rm ~cm^{-3}$ have the same ions at a somewhat higher ionization parameter, $\log \zeta \sim 1.5$. The presence of [Fe XIV] (Sect. \ref{xshooter}) is also consistent with this ionization parameter. The observed X-ray luminosity then indicates a distance of $ \sim (2-20) \EE{16} (n_{\rm e} / 10^7 \rm ~cm^{-3})^{-1/2}$ cm. The radius of the shock at 500 days after outburst is $\sim 1.3 \EE{16 }(V_{\rm s}/{3000 \rm ~km~s^{-1}}) (t/500 \ {\rm days})$ cm, where $V_{\rm s}$ is the shock velocity. There is therefore a rough consistency between the estimated distance of the narrow line emitting region and the location of the SN shock wave.
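The distance estimate follows from inverting the definition of the ionization parameter. As a sketch of the arithmetic, for $L \sim 3\EE{41} \rm ~erg~s^{-1}$,
$$
r = \left( \frac{L}{n_{\rm e} \zeta} \right)^{1/2} \approx 1.7\EE{17} \ \zeta^{-1/2} \left( \frac{n_{\rm e}}{10^7 \rm ~cm^{-3}} \right)^{-1/2} \rm \ cm ,
$$
so that $\log \zeta \sim 0 - 1.5$ corresponds to $r \sim (3 - 17)\EE{16} (n_{\rm e} / 10^7 \rm ~cm^{-3})^{-1/2}$ cm, consistent with the range quoted above.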
This estimate of the ionization parameter is approximate for several reasons. First, the density has a considerable uncertainty. Further, as we discuss below, the shock velocity, and therefore the shock radius, is also uncertain and may be in the range $500 - 3000 \rm ~km~s^{-1}$. Finally, the X-ray models in \cite{Kallman1982} are not completely appropriate, since they assume a 10 keV bremsstrahlung spectrum, while the X-ray spectrum of SN 2010jl is heavily absorbed, having a deficit of photons below $\sim 1$ keV. The \cite{Kallman1982} models were also calculated for a density of $10^{11} \rm ~cm^{-3}$, while the narrow lines in SN 2010jl indicate a considerably lower density. Nevertheless, this estimate shows that it is likely that the narrow lines originate in X-ray ionized gas at a distance and density comparable to those of the molecular shell of Eta Carinae discussed above. In Sect. \ref{sec-big} we discuss more detailed photoionization calculations that confirm these estimates. The fact that we observe fairly `normal' P-Cygni profiles with both a deep absorption and a red emission wing implies that the CSM at the distance where the narrow lines arise must be fairly symmetrically distributed, unlike the distribution for the broad component as described below. This is discussed further in Sect. \ref{sec-big}. \subsection{Origin of the dust emission} \label{sec_dust} The NIR excess seen in the photometry in Figs. \ref{fig_photometry} and \ref{fig_sed}, as well as in the X-shooter spectrum in Figure \ref{fig_xshooterspec}, is most likely a result of hot dust. From the photometric light curve in Figure \ref{fig_bollc} we see that by day 400 the flux in the NIR is higher than that in the optical. This NIR/optical ratio increases with time and at the last epochs the NIR luminosity dominates completely. In Sect. \ref{sec_phot_results} we used simple blackbody fits of the optical and NIR photometry. We have investigated this in more detail following \cite{Kotak2009}.
We have here used the grain emissivities from \cite{Draine1984} and \cite{Laor1993} and the escape probability formalism from \cite{Osterbrock2006}. For the grain sizes we use the MRN distribution, $dn/da \propto a^{-3.5}$ \citep{Mathis1977}, with a minimum grain size of $0.005 \ \mu$m and a maximum size of $0.05 \ \mu$m. We then fit the sum of the dust spectrum, for a given optical depth in the V-band, $\tau_{\rm V}$, and a photospheric blackbody spectrum, minimizing the chi-square with respect to the photospheric effective temperature and radius, $T_{\rm eff}$ and $R_{\rm phot}$, and the corresponding parameters for the dust shell, $T_{\rm dust}$ and $R_{\rm dust}$. A more detailed modeling, calculating the temperature of the dust as a function of radius, as in \cite{Andrews2011}, requires assumptions about the geometry, and is beyond the scope of this paper. In Figure \ref{fig_sed_dust} we show an example of this kind of fit for day 465, for which we have observations at 3.6 $\mu$m and 4.5 $\mu$m. \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{sed_fit_465d_graph_sil.eps}} \caption{SED fits to the extinction corrected photometry at 465 days for two different dust compositions, silicates and graphite. The blue curves show the photospheric contribution, while the red show the dust component and the black curves the total. The dashed lines show the optically thin models with $\tau_{\rm V}=0.04$, while the solid lines show the corresponding optically thick models. As discussed in the text, the R-band, which is dominated by the H$\alpha$ line, is not included in the fits. We note the $10 \mu$m silicate feature for the optically thin model, which is the main discriminator for the dust composition.
} \label{fig_sed_dust} \end{center} \end{figure} From this figure it is clear that without any observations longward of 4.5 $\mu$m it is impossible to discriminate between the different compositions, and only marginally possible to discriminate between different optical depths for the dust. This is in agreement with the modeling by \cite{Andrews2011}. The main difference between the optically thin and thick models is the larger radius of the former, needed in order to produce the required luminosity. The dust temperature is lower in the optically thin models. At 465 days the best fit silicate model in Figure \ref{fig_sed_dust} with an optical depth in the V-band of $\tau_{\rm V}=0.04$ had $T_{\rm dust}=1680$ K and $R_{\rm dust}=3.3\times 10^{17}$ cm. The photospheric values were less affected, with $T_{\rm eff}=8900$ K and $R_{\rm phot}=6.0\times 10^{14}$ cm. The corresponding graphite model had $T_{\rm dust}=1360$ K, $R_{\rm dust}=7.8\times 10^{17}$ cm, $T_{\rm eff}=8600$ K and $R_{\rm phot}=6.6\times 10^{14}$ cm. These values should be compared to $T_{\rm dust}=2040$ K, $R_{\rm dust}=2.2\times 10^{16}$ cm, $T_{\rm eff}=9200$ K and $R_{\rm phot}=5.6\times 10^{14}$ cm for the optically thick model (Table \ref{table_dust_param}). We find similar results for the other epochs. As Figure \ref{fig_sed_dust} shows, the separation between the dust and photospheric components is only marginally affected. The light curves of the photospheric and dust components based on the blackbody fits in Sect. \ref{sec_phot_results} should therefore be accurate. As an independent check, we find from the X-shooter spectrum at 461 days a dust temperature $\sim 1870$ K and radius $R_{\rm dust} \sim 2.5 \times 10^{16}$ cm for the optically thick model. To fit the NIR continuum of the X-shooter spectrum in Figure \ref{fig_xshooterspec} on day 461 we thus need a blackbody radius in good agreement with that derived from the broad band photometry.
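As an illustration of this kind of decomposition, the fit can be sketched in a stripped-down form in which the grain emissivities and the escape probability treatment are replaced by two pure blackbodies, with the temperatures found by a grid search; all function names, grids and numbers here are illustrative, not those of the actual analysis:

```python
import numpy as np

# Physical constants in cgs units.
H, C, K = 6.626e-27, 2.998e10, 1.381e-16

def planck_lam(lam_cm, temp):
    """Planck function B_lambda(T) in cgs units."""
    return 2.0 * H * C**2 / lam_cm**5 / np.expm1(H * C / (lam_cm * K * temp))

def fit_two_blackbodies(lam_cm, flux, t_phot_grid, t_dust_grid):
    """Grid search over (T_eff, T_dust); the two amplitudes (proportional
    to the emitting areas R^2) follow from a linear least-squares fit at
    each grid point, so only the temperatures need to be searched."""
    best = None
    for t1 in t_phot_grid:
        for t2 in t_dust_grid:
            basis = np.column_stack([planck_lam(lam_cm, t1),
                                     planck_lam(lam_cm, t2)])
            coef = np.linalg.lstsq(basis, flux, rcond=None)[0]
            chi2 = np.sum((flux - basis @ coef)**2)
            if best is None or chi2 < best[0]:
                best = (chi2, t1, t2, coef)
    return best[1:]  # (T_eff, T_dust, amplitudes)
```

In the real fit the dust component is additionally weighted by the grain emissivity and the optical depth, which is what produces, e.g., the $10 \ \mu$m silicate feature for the optically thin models.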
For the dates with only broad band photometry we give $R_{\rm dust}$ in Figure \ref{fig_dust_tbb_rbb}. It should be noted that the blackbody radius only represents a lower limit to the size of the NIR emitting region. A covering factor, $f$, less than unity will increase the radius by $R_{\rm dust} \propto f^{-1/2}$. As was discussed above, an optically thin model would increase this further. The blackbody radius can be compared to the shock radius, given by $R_{\rm s} \sim V_{\rm s} t \sim 1.0 \times 10^{16} (V_{\rm s}/3000 \rm ~km~s^{-1}) (t/400 \ {\rm days})$ cm, where we have scaled the velocity to what we believe is the maximum velocity of the shock (Sect. \ref{sec_broad}). With this velocity, or higher, the blackbody radius is comparable to the shock radius. In addition, a lower shock velocity is likely in the directions where the column density is larger than that inferred from the X-rays, which would give a smaller shock radius (Sect. \ref{sec-big}). The blackbody and shock radii can also be compared to the evaporation radius, given by $ R_{\rm evap} = [L_{\rm max} Q_{\rm abs} / (16\pi \sigma T^{4}_{\rm evap} Q_{\rm emiss}(T_{\rm evap})) ]^{1/2} $ \citep{Draine1979}. Here $L_{\rm max}$ is the maximum bolometric luminosity of the SN, $Q_{\rm abs}$ is the wavelength averaged dust absorption efficiency over the SN spectrum, $Q_{\rm emiss}$ is the Planck averaged dust emission efficiency, $\sigma$ is the Stefan-Boltzmann constant, and $T_{\rm evap}$ is the evaporation temperature. $Q_{\rm abs} \propto a$, where $a$ is the size of the dust grain. The ratio $Q_{\rm abs} / Q_{\rm emiss}$ depends upon the evaporation temperature of the dust, the grain size, and the effective temperature of the SN, $T_{\rm eff}$. For our calculations of $R_{\rm evap}$ we adopted $T_{\rm evap} = 1900$ K for graphite and $T_{\rm evap} = 1500$ K for silicates, and assumed $T_{\rm eff} = 6,000$ K and 10,000 K.
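Since $R_{\rm evap} \propto L_{\rm max}^{1/2}$ with all other factors fixed, a convenient way to use these numbers is through a fiducial radius $R_{\rm 0 \ evap}$, evaluated at $10^{43} \rm ~erg~s^{-1}$ (a sketch of the scaling applied below):
$$
R_{\rm evap} = R_{\rm 0 \ evap} \left( \frac{L_{\rm max}}{10^{43} \rm ~erg~s^{-1}} \right)^{1/2} ;
$$
for example, $R_{\rm 0 \ evap} = 0.34\EE{17}$ cm (graphite, $a = 1 \ \mu$m) and $L_{\rm max} \sim 3\times 10^{43} \rm ~erg~s^{-1}$ give $R_{\rm evap} \approx 6\EE{16}$ cm.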
For both graphite and for silicates we use grain emissivities from \cite{Draine1984} and \cite{Laor1993}. Because $R_{\rm evap} \propto L_{\rm max} ^{1/2}$ we give values for $L_{\rm max} = 10^{43} \rm ~erg~s^{-1}$ in Table \ref{table_evap}, which can then be scaled to different luminosities. \begin{deluxetable*}{lccc} \tabletypesize{\footnotesize} \tablecaption{Dust evaporation radii for $L_{\rm bol}=10^{43} \rm ~erg~s^{-1}$. \label{table_evap}} \tablewidth{0pt} \tablehead{ \colhead{a}& \colhead{$T_{\rm eff}$}&\colhead{$R_{\rm 0 \ evap}$}&\colhead{$R_{\rm 0 \ evap}$}\\ $\mu$m&K&$10^{17}$ cm&$10^{17}$ cm\\ \colhead{}& \colhead{}&\colhead{Silicates$^a$}&\colhead{Graphite$^a$} } \startdata 0.001 & 10,000 &2.59 &1.43 \\ 1.0 & &1.30 &0.34 \\ 0.001 & \phantom{0}6,000 &0.86 &0.86 \\ 1.0 & &0.72 &0.35 \\ \enddata \tablenotetext{a}{Assuming $T_{\rm evap}=1500$ K for silicates and $T_{\rm evap}=1900$ K for graphite.} \end{deluxetable*} We note that $R_{\rm evap}$ is not very sensitive to $T_{\rm eff}$, except for very small dust grains. As a typical value at shock break-out we use $T_{\rm eff} = 10^4$ K for our estimates below. The peak luminosity of the SN was $L_{\rm max} \sim 3 \times 10^{43} \rm ~erg~s^{-1}$ (Figure \ref{fig_bollc}). Using a grain size of $\sim 0.001 \ \mu$m we get $R_{\rm evap} \sim 2.5 \times 10^{17}$ cm for graphite and $R_{\rm evap} \sim 4.5 \times 10^{17}$ cm for silicates. For a grain size of $\sim 1 \ \mu$m the corresponding values are $R_{\rm evap} \sim 6.0 \times 10^{16}$ cm and $R_{\rm evap} \sim 2.3 \times 10^{17}$ cm, respectively. The dust temperatures we find at $\la 500$ days, $1600 - 2000$ K, are close to the evaporation (or formation) temperature for dust. It may therefore either be newly formed dust, close to the shock, or dust heated to close to the evaporation temperature by either the radiation from the SN or the shock wave. We now discuss different scenarios for the origin of the dust emission. 
The cool dense shell formed behind a radiative shock has often been mentioned as a favorable site for dust formation. The formation and survival of dust in this environment is, however, difficult to explain at epochs earlier than $\sim 320$ days. Dust formation in the dense post-shock gas requires temperatures $\la 1900$ K. At these epochs the shock is most likely in the optically thick region of the SN. The photospheric temperature at these epochs is $\ga 7000$ K (Table \ref{table_dust_param}). Even if the shock is radiative, the post-shock gas is therefore unlikely to cool to below this temperature, and the temperature may well be higher. A related argument comes from noticing that the shock radius is well within the evaporation radius even at $\sim 200$ days. As Table \ref{table_evap} shows, $R_{\rm evap} \ga 3\times10^{16}$ cm for a luminosity of $\sim 10^{43} \rm ~erg~s^{-1}$ at 200 days, independent of dust size or composition for $T_{\rm eff} \ga 6000$ K. It is therefore difficult to see how any dust could form close to the shock region for a realistic shock velocity. It is also difficult to see how the large IR luminosity at late epochs could be produced by such dust. The photospheric luminosity is then low, and the X-ray luminosity is likely to be less than that at the last observation by \cite{Chandra2012} on day 373, $\sim 7\EE{41} \rm ~erg~s^{-1}$, which is a factor of $\sim 5$ lower than the NIR luminosity at 500 days. This agrees with the conclusions by \cite{Andrews2011}, although with somewhat different arguments. Reprocessing of the radiation from the SN by pre-existing dust, possibly accompanied by heating and evaporation of this dust by the radiation or by the shock, is more likely. As Andrews et al. conclude, there is evidence for pre-existing dust around the progenitor star. The dust heating mechanism is, however, not clear.
Previous treatments of dust in Type IIn SNe have discussed both heating by the radiation from the SN in combination with an echo, and collisional heating by the hot gas behind the shock \citep[e.g.,][]{Gerardy2002,Fox2010}. We first discuss the case where the dust emission comes from dust which is collisionally evaporated by the shock of the SN. As discussed in the next section, there is also direct evidence for such a velocity component in other Type IIn SNe. Massive dust shells into which the shock is propagating are also natural if the progenitor is an LBV. However, if we compare the shock radius with the evaporation radius at early epochs we find that even the lowest values of the latter, $R_{\rm evap} \sim 7 \times 10^{16}$ cm, are considerably larger than the shock radius at the first epochs, $R_{\rm s} \sim 2.6 \times 10^{15} (V_{\rm s}/3,000 \rm ~km~s^{-1}) (t/100 \ {\rm days})$ cm, where we have scaled to the velocity derived from the temperature of the X-rays. This radius should be seen as an upper limit to the shock radius. We therefore exclude this possibility. A more attractive alternative is based on an echo from radiatively heated dust, as has been discussed for other Type IIns. The plateau in the IR light curve between $\sim 200$ and $\sim 450$ days and the slow decline thereafter (Figs. \ref{fig_photometry} and \ref{fig_bollc}) would then be expected. The energetics may also be consistent with this. The integrated optical output from the SN during the first $\sim 400$ days was $\sim 3.2 \times 10^{50}$ ergs. In addition, there is a considerable luminosity in the UV, as well as at shorter wavelengths. This can be compared to the integrated energy in the dust component, which we estimate from the blackbody fits in Figure \ref{fig_dust_tbb_rbb}; we find a total energy of $\sim 2.7 \times 10^{50}$ ergs. There is therefore reasonable consistency between the energy emitted from the central source and that emitted by the dust.
It does, however, require a large covering factor of the dust, as well as a high optical depth in these directions, which is consistent with the blackbody shape of the spectrum. As Andrews et al. discuss, this may indicate an anisotropic geometry; they discuss torus models with different inclinations. To explain the plateau as an echo, the inner radius would have to be $\ga 450/2$ light days, or $\sim 6 \times 10^{17}$ cm, depending on the inclination of a disk or torus. At the same time the high temperature indicates that the inner radius is close to the evaporation radius. A radius of $\sim 6 \times 10^{17}$ cm is, however, considerably larger than the estimates above for $R_{\rm evap}$, even if the luminosity used for this estimate may be an underestimate, considering that only the observed wavelengths were included, and that the peak luminosity may have occurred before the first observations. A solution to this discrepancy may be that most of the emission comes from very small grains. Some support for the presence of small grains in winds comes from observations and modeling of the emission from some O-rich AGB stars in the LMC, where a grain distribution with a minimum size of $0.01 \ \mu$m to $0.1 \ \mu$m gave the best fit \citep{Sargent2010}. Although the most appealing model, this scenario has some difficulties in simultaneously explaining both the light curve and the high dust temperature, unless one stretches the parameters. We have calculated light curves from an echo from a spherically symmetric dust shell with an inner radius of $6 \times 10^{17}$ cm. We find, however, that the rise of the light curve occurs too rapidly compared to the observed NIR light curve in Figure \ref{fig_bollc} and the total dust luminosity in Figure \ref{fig_dust_tbb_rbb}. \cite{Emmering1988} have studied echoes from asymmetric dust distributions. As shown in their Figs.
2 and 3, more disk-like distributions give a considerably slower rise for low values of the inclination of the disk or torus. A low value of the inclination also agrees with the low reddening in the spectrum of the SN, while a high optical depth in the disk is needed to explain the high ratio of dust to SN luminosity discussed above. Even more asymmetric distributions, with most of the dust behind the SN relative to our line of sight, may also be consistent with the light curve, although we have not studied this quantitatively. The increase of the dust emitting area and the simultaneous decrease of the dust temperature (Fig. \ref{fig_dust_tbb_rbb}) are, however, qualitatively consistent with an echo, as the light echo paraboloid expands to larger radii into dust with decreasing temperature. In the discussion of different dust emission mechanisms for Type IIn SNe, \citet[][see also \cite{Gerardy2002}]{Fox2011} proposed a scenario where pre-existing dust is radiatively heated by the radiation from ongoing circumstellar interaction. Although the heating of the dust is radiative, echo effects are then less important and the shell can be at a smaller evaporation radius, without invoking small grains. If the late luminosity, including all wavelengths, is of the same order as the maximum bolometric luminosity, this may account for the high temperature observed. The main problem is that the luminosity of the late IR emission is high, $\ga 5\times 10^{42} \rm ~erg~s^{-1}$ at 500 days, while the optical flux from the SN is decreasing rapidly. This scenario would therefore require a very large EUV-X-ray luminosity. There may be some indication of this from the H$\alpha$ luminosity, as discussed in Sect. \ref{sec-big}. \subsection{The broad emission lines} \label{sec_broad} The profiles of the broad lines give important information about the structure of the envelope and the dynamics of the shock wave.
As we discuss in the previous and next sections, dust formation has severe problems both in the ejecta and in connection with the shock at early epochs, due to the high luminosity and temperature of the radiation. Instead, the blueshift of the lines is more easily explained as a result of the line formation and dynamics in the outer parts of the extensive envelope/CSM of the progenitor star, as was also suggested as a possible alternative by \cite{Smith2012}. As discussed in detail by e.g., \citet{Chevalier2011}, for a radiation dominated shock in an extended envelope the radiation starts to escape when $\tau_{\rm e} \la c/V_{\rm s}$. The region outside the shock is therefore optically thick, and line radiation emitted here will be Compton scattered and can give rise to the broad wings observed, even if the emission comes from regions which have not yet been accelerated by the shock radiation. Only when the region with $\tau_{\rm e} \la 1$ becomes accelerated will the macroscopic velocity become dominant. An emission line undergoing electron scattering in a hot medium at rest will give rise to a symmetric line profile, since the velocity broadening is provided by the microscopic, thermal velocities of the electrons \citep{Munch1948}. However, if the medium is moving with a bulk velocity towards us, this can introduce a blueshift of the line profile. For a spherically symmetric expansion one instead expects a redshift of the line peak \citep[][]{Auer1972,Fransson1989}. This is clearly not the case here. If, however, the scattering medium is primarily moving coherently towards us, one may instead get a blueshift of the peak, with the profile still symmetric around the expansion velocity. We believe that this is the case here. We can test this interpretation quantitatively by first shifting each line in Figure \ref{fig3b} by a macroscopic velocity, $V_{\rm bulk}$, to the red.
We adjust this so that a reflection about zero velocity gives the best $\chi^2$ fit for Doppler shifts between $\sim 1000 - 5000 \rm ~km~s^{-1}$, $\chi^2 = \sum_i [F_\lambda(V_i-V_{\rm bulk}) - F_\lambda(-(V_i-V_{\rm bulk}))]^2/\sigma^2$, where $V$ is the velocity relative to the rest wavelength and $\sigma$ is the r.m.s. fluctuation of the flux. The lower velocity limit is chosen to avoid influence on the line profile by the narrow P-Cygni absorption and emission, and the scattering wings of this emission. We treat $V_{\rm bulk}$ as a free parameter, representing the macroscopic velocity of the emitting gas. The result of this procedure for a selection of days is shown in Figure \ref{Ha_shift}. \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{Ha_vel_shift_2010jl_lines_v2.eps}} \caption{The broad H$\alpha$ line for selected dates after shifting the line to the red by the velocity given in the upper panel of Figure \ref{Ha_HWHM} (black line). The magenta line gives the reflected line profile. Note the nearly perfect symmetry between the red and blue wings, characteristic of electron scattering. The `horns' close to the center of the line are due to the P-Cygni absorption from the narrow component coming from the CSM, which is close to the rest velocity of the host galaxy. The separation between the `horns' is therefore a measure of the velocity shift of the broad line component. } \label{Ha_shift} \end{center} \end{figure} From this figure we see that this simple, linear transformation results in nearly perfectly symmetric line profiles between the red and blue wings for essentially all epochs. This argues that the blueshift in Figure \ref{fig3b} is due to a macroscopic velocity. In addition, the symmetric line profiles are a natural result of electron scattering. Other processes, like dust or other absorption, would result in lines that are asymmetric about the peak in flux, with a time dependent line shape \citep[e.g.,][]{Lucy1989}.
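The reflection procedure can be sketched as follows; this is a minimal illustration, not the actual analysis code, and the grids, names and synthetic profile in the test are ours:

```python
import numpy as np

def v_bulk_estimate(v, flux, sigma, vmin=1000.0, vmax=5000.0, trials=None):
    """Find the redward shift V_bulk (km/s) that makes the profile most
    nearly reflection-symmetric about zero velocity, minimizing
    chi^2 = sum [F(V - Vb) - F(-(V - Vb))]^2 / sigma^2
    over the wing region vmin < |V - Vb| < vmax."""
    if trials is None:
        trials = np.arange(0.0, 1500.0, 10.0)
    u = np.linspace(vmin, vmax, 400)       # wing velocities after the shift
    chi2 = []
    for vb in trials:
        red = np.interp(u - vb, v, flux)   # red wing of the shifted profile
        blue = np.interp(-u - vb, v, flux) # mirrored blue wing
        chi2.append(np.sum((red - blue) ** 2) / sigma ** 2)
    return trials[int(np.argmin(chi2))]
```

For a synthetic profile symmetric about a blueshift of $-700 \rm ~km~s^{-1}$, this estimator recovers the input shift to within the spacing of the trial grid.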
Electron scattering in an expanding medium which is optically thick, however, naturally results in symmetric profiles, as demonstrated explicitly below. \begin{figure} \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{Ha_vel_shift_fit_2010jl_peak_v4.eps}} \caption{Upper panel: Velocity shift, representing the macroscopic velocity of the H$\alpha$ emitting region, as a function of time. Blue markers represent $V_{\rm bulk}$ from measurements of the symmetry of the lines, as discussed in the text, and red markers the peak velocity of the broad component. Circles represent measurements with medium resolution spectra, while the squares are measurements from low resolution spectra. Lower panel: Full width at half maximum (FWHM) of H$\alpha$ as a function of time. } \label{Ha_HWHM} \end{center} \end{figure} In the upper panel of Figure \ref{Ha_HWHM} we show with blue markers $V_{\rm bulk}$ as a function of time. In this panel we also show the velocity of the peak of the broad component as red markers. For the early epochs this agrees well with $V_{\rm bulk}$, but it is somewhat higher for the later epochs. The broad agreement of these two determinations is another way of showing the symmetry of the line around the velocity shifted peak of the broad component. Because the peak velocity is more difficult to determine in the late spectra and is somewhat influenced by the narrow component, we prefer to use $V_{\rm bulk}$ as the macroscopic velocity for the rest of the paper. To quantify the width of the line profiles we use the full width at half maximum (FWHM), as measured from the continuum subtracted line profiles. This is shown in the lower panel of Figure \ref{Ha_HWHM}. During the first 300-400 days there is an increase in $V_{\rm bulk}$ to $\sim 700 \rm ~km~s^{-1}$. After this epoch the velocity decreases slightly and then becomes nearly constant at $\sim 400-500 \rm ~km~s^{-1}$.
Considering the errors in the last observations, the significance of this decrease is marginal. The FWHM increases from $\sim 2000 \rm ~km~s^{-1}$ to $\sim 3000 \rm ~km~s^{-1}$ during the first $\sim 200$ days, and then decreases slowly to $\sim 1500 \rm ~km~s^{-1}$ on day 1128. Before discussing this result in more detail, we first test the electron scattering interpretation further by fitting the line profiles for a few different dates. This is done with a Monte Carlo code developed for this purpose, similar to those of \cite{Auer1972} and \cite{Chugai2001}. The details of this technique are discussed at length in e.g., \cite{Pozdniakov1977} and \cite{Gorecki1984}. Except for the Compton recoil, which is negligible at these wavelengths, the code calculates the line profile for an arbitrary temperature and spatial distribution of electrons. In this calculation relativistic effects are unimportant and are ignored. Because the macroscopic bulk velocity can be transformed away in the way described above, we only consider scattering in a static medium. For photons injected at a specific optical depth, $\tau_{\rm e}$, the only parameters determining the line shape are $\tau_{\rm e}$ and the temperature, $T_{\rm e}$. The optical depth determines the average number of scatterings, $ N_{\rm sc} \approx \tau_{\rm e}^2$. The thermal velocity of the electrons is $v_{\rm rms} = 674 (T_{\rm e}/10^4 \ {\rm K})^{1/2} \ \rm ~km~s^{-1}$. The FWHM of the line is therefore $\sim N_{\rm sc}^{1/2} v_{\rm rms} \approx 674 \ \tau_{\rm e} (T_{\rm e}/10^4 \ {\rm K})^{1/2} \ \rm ~km~s^{-1}$. For a medium with constant electron temperature and no internal emission, our Monte-Carlo calculations show that the FWHM is more accurately fitted by $\Delta v_{\rm FWHM} = 900 (T_{\rm e}/10^4 \ {\rm K})^{1/2} \tau_{\rm e} \ \rm ~km~s^{-1}$. A similar scaling also applies if the photons are internally generated.
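The random-walk origin of this scaling can be illustrated with a toy Monte Carlo; this is a deliberately simplified sketch, not the code used here: a uniform static sphere with central injection, isotropic re-direction, and a Gaussian line-of-sight Doppler kick per scattering as a crude stand-in for the full angle-dependent Doppler kernel, so only the qualitative behavior should be read off:

```python
import numpy as np

def scattered_shifts(tau, t_e=1.0e4, n_phot=3000, seed=1):
    """Toy Monte Carlo for thermal electron scattering: photons start at
    the centre of a static, uniform sphere of radial optical depth tau,
    take exponentially distributed free paths with isotropic re-direction,
    and accumulate a Gaussian Doppler kick per scattering.  Returns the
    emergent Doppler shifts in km/s."""
    rng = np.random.default_rng(seed)
    v1d = 674.0 * np.sqrt(t_e / 1.0e4) / np.sqrt(3.0)  # 1D thermal rms, km/s
    kick = np.sqrt(2.0) * v1d                          # per-scattering spread
    shifts = np.empty(n_phot)
    for i in range(n_phot):
        pos = np.zeros(3)
        dv = 0.0
        while True:
            mu = rng.uniform(-1.0, 1.0)                # isotropic direction
            phi = rng.uniform(0.0, 2.0 * np.pi)
            s = np.sqrt(1.0 - mu * mu)
            direction = np.array([s * np.cos(phi), s * np.sin(phi), mu])
            pos = pos + direction * rng.exponential(1.0) / tau
            if pos @ pos >= 1.0:                       # photon escapes
                break
            dv += rng.normal(0.0, kick)                # scattering event
        shifts[i] = dv
    return shifts
```

Increasing $\tau$ broadens the emergent, symmetric profile, in line with $N_{\rm sc} \approx \tau_{\rm e}^2$; the normalization, such as the factor 900 above, requires the full scattering kernel and geometry.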
This means that there is some degeneracy between temperature and optical depth for a given $\Delta v_{\rm FWHM}$. There is, however, a constraint on $\tau_{\rm e}$ from the fact that the luminosity of the unscattered photons relative to the scattered ones is approximately $\exp(-\tau_{\rm e})$. In Figure \ref{el_scatt_fits} we show fits of the H$\alpha$ profile for two different dates, day 31 and day 222, covering both the very early and late phases. In this model we have used an $n_{\rm e} \propto r^{-2}$ density profile and assumed that the photons are injected in the scattering region with an emissivity $\propto n_{\rm e}^2$, as applies if recombination dominates the H$\alpha$ emission. Note that the narrow component of the H$\alpha$ line is at the rest velocity of the galaxy and does not shift in velocity as the broad line becomes blueshifted, showing that these photons are not the dominant input for the scattering. The injected photons span a velocity range of $0-700 \rm ~km~s^{-1}$, as expected for radiative acceleration (see Section \ref{sec_radacc}), which decreases the unscattered narrow peak, in agreement with the observations. The temperature is taken to decrease from $2 \times 10^4$ K to $1 \times 10^4$ K, similar to what is found in detailed simulations of shocks \citep[e.g., Figure 6 in ][]{Moriya2011}, as well as in our photoionization calculations in Sect. \ref{sec-big}. The profile is, however, not sensitive to these assumptions. As can be seen from the figure, we get excellent fits for both dates, except at zero velocity, where the P-Cygni profile from the CSM affects the profile. This confirms that electron scattering dominates the line formation. There is thus no need for the two different velocity components in the broad lines advocated by \cite{Zhang}. The same conclusion is reached by \cite{Borish2014} for the day 36 Pa$\beta$ line. \begin{figure}[t!]
\begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{Ha_fit_2010jl_101108.eps}} \resizebox{85mm}{!}{\includegraphics[angle=0]{Ha_fit_2010jl_110519.eps}} \caption{Electron scattering line profiles for day 31 (left) and day 222 (right). The observed profiles (from NOT) are in blue and the calculated profiles from the Monte Carlo simulations in red.} \label{el_scatt_fits} \end{center} \end{figure} The fact that the line profile can be explained by electron scattering plus a linear velocity transformation is surprising. As was pointed out above, one expects a redshifted rather than a blueshifted line profile if the expansion of the ejecta/CSM is spherically symmetric \citep[see][Figure 6]{Fransson1989}. Instead we propose that the H$\alpha$ emission comes from a highly asymmetric, almost planar region expanding towards us (although possibly at some inclination relative to the line of sight), as discussed in Sect. \ref{sec-big}. \subsection{Origin of the line shifts} \subsubsection{Dust absorption from the ejecta} \label{sec_ejectadust} In Figure \ref{Ha_HWHM} we see a gradual shift of the H$\alpha$ peak. We have already argued, in Sects. \ref{sec_dust} and \ref{sec_broad}, against dust in the ejecta or at the reverse shock as the reason for this. Nevertheless, this has been discussed by several authors, so we consider additional arguments on this topic. \cite{Smith2012} analyze the line profiles during the first 236 days and discuss two different scenarios for the observed blueshift. From a comparison of the optical and near-IR hydrogen line profiles at $\sim 100$ days they claim to see a wavelength dependent asymmetry, which they interpret as a sign of wavelength dependent extinction, indicative of dust formation in the post-shock gas. However, Smith et al. also discuss electron scattering as an explanation for the line profile. \cite{Maeda2013} argue for a dust origin based on the line shift, the drop in luminosity and the IR excess.
However, their analysis has several limitations, which we discuss based on our observations. The line shifts discussed by Maeda et al. are based on a single spectrum on day 513 with a limited S/N for lines other than H$\alpha$ (see their Figs. 4 and 5). The line shifts derived from these observations are therefore highly uncertain, and do not include the red wings of the lines. As we show in Figs. \ref{Ha_shift} and \ref{el_scatt_fits}, we can get a satisfactory fit of both wings with an electron scattering profile. Maeda et al. also claim to observe a systematic shift of the velocity with wavelength, characteristic of dust extinction. In Figure \ref{ha_hb_hei_pb} we compare the line profiles of our high S/N X-shooter spectrum for H$\alpha$, H$\beta$, H$\gamma$, and Pa$\beta$ at 461 days, an epoch not very different from that of Maeda et al. For these lines we have applied a smoothing of $\sim 50\rm ~km~s^{-1}$ to the spectra and normalized the fluxes to the same peak flux for the broad component and the same `continuum' level between $\pm (6000-7000) \rm ~km~s^{-1}$. This is somewhat lower than for the determination of the continuum level of the H$\alpha$ line in Figure \ref{fig3b}, but is necessary to avoid line contamination for H$\beta$ and Pa$\beta$. As seen in Figure \ref{ha_hb_hei_pb}, this is even more serious for H$\gamma$, and we have therefore scaled this line to the level of the other lines at the peak and at the maximum velocity where the line dominates, $\sim 2000 \rm ~km~s^{-1}$. The high spectral resolution makes it possible to isolate the narrow P-Cygni component of the lines, which contaminates the low resolution line profiles of Maeda et al. Both the spectral resolution and the S/N are crucial for a reliable determination of the peak velocity, as well as of the general line profile.
Although the day 461 spectrum is the best one in terms of S/N and coverage to the NIR, we do not see any significant difference in the wavelength shifts for any other epochs. This includes the UV lines (Figs. 14 and 15), which should be most sensitive to extinction. \begin{figure}[t!] \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{xshooter_jan_2012_line_prof_halpha_hbeta_hgamma_palpha_SM1_6000kms_v2.eps}} \caption{ Comparison of the scaled line profiles of H$\alpha$, H$\beta$, H$\delta$ and Pa$\beta$ from the X-shooter spectrum from day 461. The dashed line gives zero velocity, while the solid line gives the -720 $\rm ~km~s^{-1}$ blueshift of the H$\alpha$ peak. Note the near coincidence of the peaks of all lines with that of H$\alpha$.} \label{ha_hb_hei_pb} \end{center} \end{figure} As can be seen from the figure, there is no notable difference in the velocity of the broad lines. The peak of H$\alpha$ is shifted by $\sim 720 \rm ~km~s^{-1}$ for this date. The H$\delta$, H$\beta$, Pa$\beta$ and He I $\lambda 10,830$ lines have their peaks at $640 \rm ~km~s^{-1}$, $670 \rm ~km~s^{-1}$, $700 \rm ~km~s^{-1}$ and $690 \rm ~km~s^{-1}$, respectively. From variations of the smoothing we estimate an error of $\pm 50 \rm ~km~s^{-1}$. For H$\alpha$, with a very high S/N, the error is $\la 30 \rm ~km~s^{-1}$. We therefore do not find any significant trend of the peak velocity or line shape with wavelength. Although distorted by the interstellar absorptions, one can also compare the Ly$\alpha$ line profile to the H$\alpha$ and H$\beta$ lines in Figure \ref{fig_lprof_lya} for days 44, 107 and 621. It is clear that there is no significant difference in these in spite of the large wavelength difference and large time span. This is in contrast to the changing line profiles of the Ly$\alpha$, Mg II and H$\alpha$ in SN 1998S \citep[e.g., Figure 8 in][]{Fransson2005}, where such a difference was indeed seen, indicative of dust formed behind the reverse shock.
The line profiles in SN 1998S were also very asymmetric. Thus we find that the main argument for ejecta dust in Maeda et al. is doubtful. Another indicator of ejecta dust discussed by Maeda et al., the NIR excess, has already been discussed as the probable result of an echo. The third, the drop in the light curve, has a natural explanation in terms of the shock either exiting the circumstellar shell or transitioning to a momentum driven phase, as proposed by \cite{Ofek2013}, and discussed in Sect. \ref{sec_energy}. After this paper was first submitted \citet[][in the following G14]{Gall2014} published an analysis of the line shift based on dust absorption. In contrast to \cite{Andrews2011} and this paper, G14 interpret the NIR dust emission, as well as the blueshifts, as being caused by dust formed behind the reverse shock at an initial radius of $\sim 2\EE{16}$ cm. There are, however, a number of problems with this interpretation. First, it requires the shock to have reached this radius at their first observation, 26 days after peak or 66 days after first detection. This requires a shock velocity of $\sim 35,000 \rm ~km~s^{-1}$. To support this G14 claim to see velocities up to $\sim 20,000 \rm ~km~s^{-1}$ from an assumed P-Cygni absorption in H$\beta$ extending to this velocity. If this were real, there would also be an emission component up to a similar velocity, because the photosphere, assumed to be at $\sim 7500 \rm ~km~s^{-1}$, will only occult a small fraction of the ejecta. Such a broad emission component is not seen. The only way around this is to assume that the ejecta is extremely asymmetric. The 'P-Cygni absorption' is instead likely to be the result of contributions by weaker emission lines bluewards of H$\beta$. Further, there are no indications of similar high velocities from any other lines. In particular, the Ly$\alpha$ line, being a resonance line, should show such absorption and emission if real. As seen in Fig.
\ref{fig_lprof_lya} this is not the case. Nor do any other of the UV resonance lines show such a component (Fig. \ref{fig_lprof_niv_civ}). There are also other observations disfavoring similar high velocities. The analysis of the X-rays implies a velocity $\la 6000 \rm ~km~s^{-1}$ \citep{Ofek2013}. The only direct evidence for high expansion velocities comes from NIR spectra, where the He I $\lambda 10,830$ line shows evidence for expansion velocities of $\sim 5500 \rm ~km~s^{-1}$ between 100-200 days towards us \citep{Borish2014}. There are additional problems with the scenario in G14. If the emission lines came from the cool dense shell and were absorbed by the dust formed in it, the lines would be expected to have a boxy profile, not a symmetric profile peaked at low velocities. Boxy profiles are indeed seen in objects like SN 1993J \citep{Matheson2000} and SN 1995N \citep{Fransson2002}, where the cool dense shell is believed to dominate the emission. Also in objects where there indeed are strong indications for dust formation, like SN 1998S and SN 2006jc, the line profiles are less centrally peaked and more irregular \citep{Leonard2000,Pozzo2004, Fransson2005,Smith2008a}. As we have shown, the line profiles are instead well characterized by that resulting from electron scattering. The mechanism behind the blueshift is not discussed in the G14 paper. It would, however, be surprising if this gave an intrinsically symmetric profile a simple shift in velocity, as we find, rather than a more irregular shape, given the expected clumping in a cold dense shell \citep{Chevalier1995}. Taking these results together, we therefore conclude that a cold dense shell at $\sim 35,000 \rm ~km~s^{-1}$ is unlikely and that the dust emission is instead coming from preexisting dust, as concluded by \cite{Andrews2011}.
If the shock velocity is much lower than $\sim 35,000 \rm ~km~s^{-1}$ the shock will be well inside the evaporation radius even for the large grains G14 propose, as Table \ref{table_evap} and also the estimates in G14 show. Formation of the dust is then very difficult, as discussed before. We are also surprised that G14 exclude the H$\alpha$ line in their analysis, given that this is by far the highest S/N line, and therefore best suited for this kind of analysis. In addition, it should be noted that the asymmetry of H$\alpha$ and other lines is sensitive to the assumed continuum level. A continuum subtraction has apparently not been done, as can be seen in Fig. 5 of G14. The claims of a strong asymmetry in H$\alpha$ are therefore doubtful. Instead, it can be seen from our Fig. \ref{fig_full_stis_spec} that H$\beta$ is considerably more complicated to analyze, with a number of interfering lines. G14 further argue against preexisting dust based on the high dust temperature observed. It is not clear what this is based on, and on the contrary this is a natural consequence of dust evaporation by the strong flux from the SN. The same conclusion is reached by \cite{Borish2014}. Taken together, we therefore think the results in G14 regarding the dust formation and its derived properties are highly questionable. Based on this and our earlier discussion we exclude the dust alternative, and instead we believe that the line shifts are due to a bulk velocity of the scattering material. There are at least two possibilities for this. \subsubsection{Gradual contribution from post-shock gas} \label{sec_viscshock} One alternative is that the velocity shift is caused by an increasing contribution of emission from the cooling gas behind the outgoing viscous shock. A possible scenario would be that at the first epochs the scattering optical depth in front of the shock is large enough that most of the line photons are emitted from un-shocked gas with low velocities.
As the optical depth decreases an increasing contribution of the emitted photons will come from shocked gas behind the radiative forward shock. That this shock is indeed cooling is shown in Sect. \ref{sec_energy}. These photons will be scattered by electrons both in front of and behind the shock, producing the symmetric wings of the lines. There will therefore be a gradual shift of the line profiles from zero velocity to the velocity of the shock. The main problem with this model is the high density in the post-shock gas. The forward shock is expected to be radiative. The temperature immediately behind the shock will be $\sim 10^8$ K, but will cool down to $(1-2)\times 10^4$ K. The compression behind the shock will therefore be a factor of $10^3-10^4$ in density, implying a density of $\ga 10^{13} \rm ~cm^{-3}$ where the gas is cool enough to emit H$\alpha$ and other low ionization lines. Calculations discussed in Sect. \ref{sec-big} show that at densities $\ga 10^{11} \rm ~cm^{-3}$ the emission is in LTE and thermalized, with a very low efficiency of conversion into line emission. Most radiation will emerge as a blackbody continuum. The large H$\alpha$ luminosity is very difficult to explain in this scenario. In principle, the H$\alpha$ emission could come from the unshocked ejecta, where the density is lower. From the swept up mass derived from the light curve (Eq. \ref{eq_mass} below) the column density of the cool shell is $\sim 10^{25} \ \rm cm^{-2}$, meaning that all X-rays below $\sim 15$ keV will be absorbed in the shell. Only the reverse shock will therefore be able to ionize this region. The luminosity of the reverse shock is only $\sim 10$ \% of the forward shock, so there will be a serious problem with the energy budget. Finally, it is not clear that a superposition of the emission from the pre-shock and post-shock gas will result in the observed symmetric profiles. Taken together, we believe that this explanation for the line shifts is unlikely.
\subsubsection{Radiative acceleration} \label{sec_radacc} A different alternative is that the line shift is the result of acceleration of the pre-shock gas by the extremely energetic radiation field from the shock and thermalized optical radiation from the ejecta and cool shell. This has been discussed earlier in different contexts \citep[e.g.,][]{Chevalier1976,Fransson1982}, but no conclusive observational evidence has been found. As a simple model we consider an optically thin shell at a radius, $r(t)$, illuminated from below by the radiation field of the SN, having a luminosity $L(t)$. Assuming that we can consider the radiation as radially free streaming, which should be a reasonable approximation at small optical depths close to the surface, the velocity is given by \begin{equation} V(t) = {\kappa \over 4 \pi c} \int _0^{t} {L(t') \over r(t')^2 } dt' \ . \label{eq_vel_acc} \end{equation} For $\kappa$ we use the electron scattering opacity, $\sim 0.35$, applicable for an ionized medium with a He/H ratio of 0.1. For the bolometric luminosity from the SN we use the result from Figure \ref{bol_log_log}, shown on a linear scale in the upper panel of Figure \ref{vel_acc}. The total radiated energy from the SN (excluding the echo) was $\sim 6.5 \times 10^{50}$ ergs. Using this energy and assuming a radius constant with time, we can estimate the final velocity as $V \approx 670 (r/3\times 10^{15} \rm cm)^{-2} \rm ~km~s^{-1}$. This estimate is, however, only approximate. The reason is that the hydrodynamic time scale of this gas is comparable to the time over which most of the energy is emitted, $t_{\rm hydro} = 580 ~ (r/ 3 \times 10^{15} \ {\rm cm}) / (V/700 \rm ~km~s^{-1}) $ days. The increase of the radius of the shell must therefore be taken into account when calculating the velocity. This is really a hydrodynamic problem, which includes the interaction of the shell with the surrounding gas.
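In the constant-radius limit, Eq. (\ref{eq_vel_acc}) reduces to $V = \kappa E_{\rm rad}/(4\pi c r^2)$. A minimal numerical check of the quoted velocity, assuming $\kappa = 0.35$ cm$^2$ g$^{-1}$ and $E_{\rm rad} = 6.5\times 10^{50}$ ergs as in the text:

```python
import math

# Constant-radius limit of the radiative acceleration integral:
# V = kappa * E_rad / (4 pi c r^2).
kappa = 0.35        # cm^2/g, electron scattering, He/H = 0.1
E_rad = 6.5e50      # erg, total radiated energy excluding the echo
r     = 3.0e15      # cm, fiducial shell radius
c     = 2.998e10    # cm/s

V = kappa * E_rad / (4.0 * math.pi * c * r**2)  # cm/s
print(f"V ~ {V / 1e5:.0f} km/s")                # close to the quoted 670 km/s
```

which reproduces the quoted $V \approx 670\,(r/3\times 10^{15}\,{\rm cm})^{-2} \rm ~km~s^{-1}$.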
We ignore the latter effects and treat the expansion as ballistic, calculating the radius as $r = \int_{t_0}^t v(t') dt' + r_0$. The only free parameter is the initial radius, $r_0 $, of the emitting shell, which gives the normalization of the velocity. For the best fit we find $r_0=2.6 \times 10^{15}$ cm, resulting in a velocity as a function of time shown by the blue dots in the lower panel of Figure \ref{vel_acc}. \begin{figure}[t!] \begin{center} \resizebox{85mm}{!}{\includegraphics[angle=0]{bol_acc_v2.eps}} \caption{Upper panel: Bolometric light curve from the SN, excluding the IR dust echo, but including the UV and IR contributions from the direct SN emission. Note the linear luminosity scale in this figure, most relevant for the radiative acceleration, compared to the logarithmic one in Figure \ref{fig_bollc}. Lower panel: The velocity shift found from H$\alpha$ together with the velocity predicted from the acceleration by the SN radiation from Eq. (\ref{eq_vel_acc}). } \label{vel_acc} \end{center} \end{figure} Given that there are uncertainties in the estimate of the bolometric luminosity, we find a good agreement with the observed velocity shift from Figure \ref{Ha_HWHM} and shown as black dots in Figure \ref{vel_acc}. At epochs later than $\sim 500$ days there is a considerable scatter in the observed line shift, and it is difficult to judge if there is a slight decrease of the velocity, as may happen if there is a braking effect by the swept up CSM. Apart from the general agreement with the evolution of the velocity, it is also interesting that the initial radius which gives the best fit, $r_0=2.6\times 10^{15}$ cm, is close to that which gives the best fit to the photospheric radius from the blackbody fitting of the SED at the early epochs, $R_{\rm phot}\approx 3.2 \times 10^{15}$ cm (Table \ref{table_dust_param}). The final radius after $\sim 900$ days is $7 \times 10^{15}$ cm and the velocity $650 \rm ~km~s^{-1}$. 
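The ballistic treatment just described amounts to integrating $\dot v = \kappa L(t)/(4\pi c\, r^2)$ together with $\dot r = v$. A sketch with a simple Euler scheme is shown below; note that the light curve here is a toy stand-in (flat at the break luminosity $9.33\times 10^{42} \rm ~erg~s^{-1}$ up to day 320, then declining as $t^{-0.537}$, with both numbers taken from Sect. \ref{sec_energy}), whereas the fit in the text uses the observed bolometric light curve:

```python
import math

kappa, c = 0.35, 2.998e10               # cm^2/g and cm/s
L0, t_break = 9.33e42, 320.0 * 86400.0  # erg/s at the break, break time in s

def L(t):
    # Toy bolometric light curve: flat up to the day-320 break,
    # then declining as t^-0.537 (the slope found in the text).
    return L0 if t < t_break else L0 * (t / t_break) ** -0.537

t, dt = 20.0 * 86400.0, 86400.0    # start at day 20, 1-day Euler steps
r, v = 2.6e15, 0.0                 # cm and cm/s; r_0 is the best-fit radius
while t < 900.0 * 86400.0:
    v += kappa * L(t) / (4.0 * math.pi * c * r**2) * dt  # radiative push
    r += v * dt                                          # ballistic expansion
    t += dt
print(f"v(day 900) ~ {v/1e5:.0f} km/s, r ~ {r:.1e} cm")
```

With this toy input the final velocity comes out at a few hundred $\rm ~km~s^{-1}$, the same order as the $\sim 650 \rm ~km~s^{-1}$ found with the observed light curve; the brighter early phases of the real light curve account for the difference.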
As we discussed in Section \ref{sec_broad}, an important constraint on the source of the H$\alpha$ emission is the relation of the narrow unscattered line and the scattered wings, with the fraction of the former approximately given by $\propto \exp(-\tau_{\rm e})$ for a planar geometry. Because the typical optical depth required for the wings is $2-5$, depending on the temperature, this would in general give a strong narrow line. This is indeed observed at early time, but disappears at later epochs at the same time as the blueshift starts (Figure \ref{fig3b}). In this scenario this may be explained by the fact that the source of the H$\alpha$ photons become increasingly spread out in velocity, reflecting the local, accelerated velocity, but now spanning an increasing velocity range. The peak flux will decrease with the increasing velocity of this material, as observed. One therefore obtains a consistency check between the flux of the unscattered emission and the velocity shift. Note that the narrow CSM line most likely comes from a different, more distant component than the broad lines, based on its constant velocity and different flux evolution (Fig. \ref{fig_narrowHa}). The velocity of this radiatively accelerated material is considerably lower than the $\sim 3000 \rm ~km~s^{-1}$ one infers from the X-rays. After a time $t \approx r_0/(V_s - V_{\rm acc}) \approx 1.0 (r_0/2.6\times10^{15} {\rm cm})/(V_s/650 \rm ~km~s^{-1} -1)$ years the shock will sweep up the accelerated shell. For this not to happen too early, the shock velocity interior to the H$\alpha$ emitting shell has to be below $\sim 1500 \rm ~km~s^{-1}$, depending on the velocity of the accelerated shell, corrected for the LOS angle. As discussed earlier, a reason for a lower shock velocity may be that the shell is anisotropic with the highest column density where the broad lines arise. 
The shock velocity will then be lower than that inferred from the X-rays, which are likely to come from directions of lower column density (see next section). The mass of the accelerated gas does not need to be large and is likely to be small. The scattering layer only has to have a $\tau_{\rm e} \ga 1$, corresponding to $\sim 0.14 (r/3 \times 10^{15} {\rm~ cm})^2 (\Omega/4 \pi) \ M_\odot$, where $r$ is the radius of the shell and $\Omega$ the solid angle of the shell. To accelerate it to $\sim 1000 \rm ~km~s^{-1}$ only takes $\sim 1.5 \times 10^{48} (\Omega/4 \pi)$ ergs, which is small compared to the total radiated energy. Most of the mass in the CSM is instead in the optically very thick region inside the accelerated layer. \subsection{Shock breakout, circumstellar interaction and energy source} \label{sec_energy} The high luminosity, $\sim 3\times 10^{43} \rm ~erg~s^{-1}$ at maximum, and large total radiated energy, $\ga 6.5 \times 10^{50} \ \rm ergs$ (Section \ref{sec_phot_results}), require an efficient conversion of kinetic to radiative energy. In Section \ref{sec_dust} we argued that the flux from the optical and NIR up to $\sim 350$ days is dominated by the direct flux from the SN, while most of the NIR flux is from the echo after this epoch. We now discuss the energetics in terms of the shock propagation through a dense envelope. The shock breakout for extended SNe has been discussed by several authors \citep{Falk1977,Grasberg1987,Moriya2011,Moriya2013,Chevalier2011,Ginzburg2012}. For large optical depths the shock will be radiation dominated with a width $\tau_{\rm e} \sim c/ v$ \citep{Weaver1976}. \cite{Chevalier2011} estimate the diffusion time scale of the CSM, assumed to have $\rho \propto r^{-2}$, as \begin{eqnarray} t_{\rm diff} &\approx& 6.6 \left({\dot M \over 0.1 \Msun ~\rm yr^{-1}}\right) \left({u_{\rm w} \over 100 \rm ~km~s^{-1}}\right)^{-1} \nonumber \\ &&\left({\kappa \over 0.34 \ \rm cm^2 \ g^{-1}}\right) \rm days. 
\label{eqdiff} \end{eqnarray} Here $\dot M$ is the mass loss rate, $u_{\rm w}$ the wind velocity, and $\kappa$ the opacity. For the mass loss rates we derive below this is likely to be short for the epochs of interest here. As the shock approaches the surface at $\tau_{\rm e} \la c/ v$ a viscous shock will form \citep[for a discussion see][]{Chevalier2011}. If the circumstellar density is high the viscous shock will be radiative, implying an efficient conversion of kinetic to radiative energy. The cooling time for the forward shock is given by $t_{\rm cool} = 3 kT/n_{\rm e} \Lambda$, where the cooling rate can be approximated by $\Lambda = 2.4\times 10^{-23} T_8^{0.5} \rm ~erg~s^{-1} \ cm^3$, for temperatures above $\sim 2\times 10^7$ K. For close to solar abundances, the shock temperature is given by $T_{\rm e} = 1.2 \times 10^8 (V_{\rm s}/3000 \rm ~km~s^{-1})^2$ K. For an ejecta profile given by $\rho_{\rm ejecta} \propto r^{-n}$ the shock radius is \begin{eqnarray} R_{\rm s} &=& 9.47\times 10^{15} {(n-2)\over (n-3)} \left({V_{\rm s,320} \over 3000 \rm ~km~s^{-1}}\right) \left( {t \over {\rm years}}\right)^{(n-3)/(n-2)} \nonumber \\ &=& 1.23\times 10^{16} \left({V_{\rm s,320}\over 3000 \rm ~km~s^{-1}}\right) \left( {t \over {\rm years}}\right)^{0.82} \ \rm cm, \end{eqnarray} for $n=7.6$ as we find below. Based on the shock velocity derived by \cite{Ofek2013} we scale this and the following estimates to a shock velocity at 320 days, $V_{\rm s,320}$, of $3000 \rm ~km~s^{-1}$. Note that this velocity may vary with direction if the mass loss rate is anisotropic, as we argue below.
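The scaling of Eq. (\ref{eqdiff}) follows from $t_{\rm diff} \approx \tau_{\rm e} r/c$ with $\tau_{\rm e} = \kappa \dot M/(4\pi u_{\rm w} r)$ for an $r^{-2}$ wind, so that the radius cancels. A quick numerical check (cgs values for the solar mass and year assumed):

```python
import math

Msun, yr = 1.989e33, 3.156e7   # cgs solar mass and year (assumed values)
kappa = 0.34                   # cm^2/g, as in the diffusion-time equation
Mdot  = 0.1 * Msun / yr        # g/s, fiducial mass loss rate
u_w   = 100.0e5                # cm/s, fiducial wind velocity
c     = 2.998e10               # cm/s

# t_diff ~ tau_e * r / c with tau_e = kappa Mdot / (4 pi u_w r): r cancels.
t_diff = kappa * Mdot / (4.0 * math.pi * u_w * c)
print(f"t_diff ~ {t_diff / 86400:.1f} days")   # ~6.6 days
```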
This gives a density immediately behind the forward shock of \begin{eqnarray} n_{\rm s, e} &=& 8.1\times 10^{8} \left({\dot M \over 0.1 \Msun ~\rm yr^{-1}}\right) \left({u_{\rm w} \over 100 \rm ~km~s^{-1}}\right)^{-1} \nonumber\\ &&\left({V_{\rm s,320} \over 3000 \rm ~km~s^{-1}}\right) \left( {t \over {\rm years}}\right)^{-1.64} \rm ~cm^{-3}, \end{eqnarray} which leads to \begin{eqnarray} t_{\rm cool}& =& 26.6 \left({\dot M \over 0.1 \Msun ~\rm yr^{-1}}\right)^{-1} \left({u_{\rm w} \over 100 \rm ~km~s^{-1}}\right) \nonumber\\ && \left({V_{\rm s,320} \over 3000 \rm ~km~s^{-1}}\right)^{3} \left( {t \over {\rm years}}\right)^{1.46} \rm days. \end{eqnarray} Therefore, the forward shock should be radiative for several years as long as the shock is in the high density shell. Further, assuming that the ingoing X-ray flux from the shock is thermalized in the ejecta and cool shell behind the shock, the blackbody luminosity is given by $(1/4) \dot M V_{\rm s}^3/u_{\rm w}$. An equal amount will partly be absorbed and ionize the pre-shock gas and partly escape as the observed X-ray flux. Using this luminosity for the radiation density the ratio of Compton to free-free cooling is then \begin{eqnarray} {P_{\rm Compton}\over P_{\rm free-free}} = 7.4\tau_e \left({V_{\rm s} \over 10^4 \rm ~km~s^{-1}}\right)^4 \end{eqnarray} \citep{Chevalier2012}. Therefore, for shocks with velocity below $\sim 5000 \rm ~km~s^{-1}$ free-free cooling dominates, unless the optical depth is very high. For a general circumstellar density given by $\rho = \dot M /(4 \pi u_{\rm w} r_0^2) (r_0/r)^s$ the ejecta velocity at the shock depends on the mass loss rate and time as \begin{equation} V_{\rm s} \propto \left( { \dot M \over v_{\rm w}} \right)^{-1/(n-s)} t^{-(3-s)/(n-s)} \label{eq_vshock} \end{equation} \citep[e.g.,][]{FLC1996}, assuming the interaction can be described by a similarity solution. If the mass loss rate is a function of polar angle, $V_{\rm s}$ will therefore depend on angle as well. 
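The cooling-time estimate can be verified by combining the post-shock temperature, density and cooling function above (a sketch at the fiducial parameters and $t = 1$ yr; the cgs value of $k_{\rm B}$ is assumed):

```python
k_B = 1.3807e-16                 # erg/K (assumed cgs value)
V_s = 3000.0e5                   # cm/s, fiducial shock velocity at 320 days
T_s = 1.2e8 * (V_s / 3000.0e5) ** 2   # K, post-shock temperature
n_e = 8.1e8                           # cm^-3, post-shock density at t = 1 yr
Lam = 2.4e-23 * (T_s / 1e8) ** 0.5    # erg cm^3/s, free-free cooling rate

t_cool = 3.0 * k_B * T_s / (n_e * Lam)        # seconds
print(f"t_cool ~ {t_cool / 86400:.0f} days")  # ~27 days, cf. 26.6 in the text

# Compton vs free-free cooling (the Chevalier 2012 scaling), for tau_e = 1:
ratio = 7.4 * 1.0 * (V_s / 1.0e9) ** 4
print(f"P_Compton / P_ff ~ {ratio:.2f}")      # << 1: free-free dominates
```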
For a radiative shock we find for the luminosity \begin{eqnarray} L &=& 2 \pi \rho V_{\rm s}^3 r^2 = {1 \over 2} {\dot M \over u_{\rm w}} \left(r_0 \over r\right)^{s-2} V_{\rm s}^3 \nonumber\\ &\propto& \left(\dot M \over u_{\rm w}\right)^{(n-5)/(n-s)} t^{-[15+s(n-6)-2n]/(n-s)} \ . \label{eq_lum} \end{eqnarray} For $s=2$ we obtain $L \propto t^{-3/(n-2)} $. The fit of Eq. (\ref{eq_lum}) to the data in Figure \ref{bol_log_log} implies a value of $n=7.6$ for $s=2$, as shown by the dashed line in this figure. The break at $\sim 320$ days in the light curve can be caused either by the break-out of the shock through the dense shell, or by a transition to a momentum conserving phase, occurring when the swept up mass is comparable to the ejecta mass \citep{Ofek2013}. The latter explanation has, however, been contested by \cite{Moriya2014}, who find that the transition to the momentum driven phase gives too smooth and slow a decrease of the luminosity. As we discuss in Section \ref{sec-big}, the breakout in a bipolar shell may be compatible with this. For $s=2$ (consistent with the X-ray light curve, see below) we get \begin{eqnarray} L &=& 8.51~\times~10^{42} \left({\dot M \over 0.1 \Msun ~\rm yr^{-1}}\right) \left({u_{\rm w} \over 100 \rm ~km~s^{-1}}\right)^{-1} \nonumber\\ &&\left({V_{\rm s,320} \over 3000 \rm ~km~s^{-1}}\right)^3 \left({t \over 320 \ \rm days}\right)^{-0.537}\rm ~erg~s^{-1} \label{eq_lum_mdot} \end{eqnarray} For a luminosity at the break at $\sim 320$ days of $9.33 \times 10^{42} \rm ~erg~s^{-1}$ we then find \begin{equation} \dot M = 0.11 \left({u_{\rm w} \over 100 \rm ~km~s^{-1}}\right) \left({V_{\rm s,320} \over 3000 \rm ~km~s^{-1}}\right)^{-3} \ \Msun ~\rm yr^{-1} \ . \label{eq_massloss} \end{equation} The total mass swept up is then \begin{equation} \Delta M = \dot M {V_{\rm s} \over u_{\rm w}} t = 2.89 \left({V_{\rm s,320} \over 3000 \rm ~km~s^{-1}}\right)^{-2} \ M_\odot \ .
\label{eq_mass} \end{equation} Both the mass loss and total mass are sensitive to the expansion velocity of the ejecta. In this estimate we assume that the ejecta with a velocity $\sim 3000 \rm ~km~s^{-1}$ dominates the energy input to the optical light curve. In reality there should be a range in velocities and there is therefore a considerable uncertainty in this estimate. The fact that we observe both hard X-rays with a temperature corresponding to a shock velocity $\sim 3000 \rm ~km~s^{-1}$ and with a column density corresponding to an electron scattering depth $\tau_{\rm e} \la 1$, as well as high column density gas with $\tau_{\rm e} \ga 3$ argues for a highly anisotropic CSM. This is further strengthened by the blue shoulder seen in the He I $\lambda 10,830$ line between $\sim 100 - 200$ days after explosion by \cite{Borish2014}. The velocity of this decreases during this interval from $\sim 6000 \rm ~km~s^{-1}$ to $\sim 5000 \rm ~km~s^{-1}$, indicating deceleration of this material. The shock velocity in the X-ray obscured regions may, however, be considerably lower (Eq. \ref{eq_vshock}). The mass loss rate above, as well as the total mass lost, should therefore be considered as lower limits. Note, however, that while the mass loss rate scales with the velocity of the CSM, the total mass lost is independent of this parameter. \cite{Ofek2013} find a considerably higher mass loss rate of $\sim 0.8 \Msun ~\rm yr^{-1}$. The main reason for this is that they are scaling to a higher wind velocity of $u_w = 300 \rm ~km~s^{-1}$, while we find $u_w \approx 100 \rm ~km~s^{-1}$. They also assume an efficiency of $\sim 0.25$ in converting the shock energy into the observed luminosity. The reason for this is not clear. Another important difference is that they assume an ejecta profile $n=10$, close to that found by \cite{Matzner1999} for the ejecta of a radiative envelope, while we derive $n\approx 7.6$ from our bolometric light curve. 
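The steps from Eq. (\ref{eq_lum_mdot}) to Eqs. (\ref{eq_massloss}) and (\ref{eq_mass}) are simple arithmetic and can be reproduced as follows (cgs values for the solar mass and year assumed):

```python
Msun, yr = 1.989e33, 3.156e7     # cgs solar mass and year (assumed values)
u_w, V_s = 100.0e5, 3000.0e5     # cm/s, fiducial wind and shock velocities

# Prefactor of the radiative-shock luminosity: L = (1/2)(Mdot/u_w) V_s^3.
L_fid = 0.5 * (0.1 * Msun / yr / u_w) * V_s**3    # erg/s, ~8.51e42

# Invert for the mass loss rate at the break luminosity ...
L_break = 9.33e42                                 # erg/s at day 320
Mdot = 0.1 * L_break / L_fid                      # Msun/yr, ~0.11

# ... and the swept-up mass by day 320.
dM = Mdot * (V_s / u_w) * (320.0 * 86400.0 / yr)  # Msun, ~2.89
print(f"Mdot ~ {Mdot:.2f} Msun/yr, swept-up mass ~ {dM:.2f} Msun")
```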
Their larger value of $n$ is a result of the flatter light curve they derive from the R-band only, $L\propto t^{-0.38}$ compared to our $L\propto t^{-0.54}$ and using $L\propto t^{-3/(n-2)}$ (Eq. \ref{eq_lum}). Even considering the above uncertainties, the mass loss rate and the total mass lost are very large, but as discussed in Sect. \ref{sec_csm}, these are of the same magnitude as those inferred from the CSM of local LBVs, like Eta and AG Carinae. We discuss the implications of this further in Section \ref{sec-big}, but we first place it in relation to other Type IIn SNe. \subsection{Comparison to other Type IIn supernovae} \label{sec-comp} Although interesting as a case by itself, it is at least as interesting to put SN 2010jl into the context of other Type IIn SNe. There have been several Type IIn SNe which show similarities to SN 2010jl to various degrees, although in most cases these events are less extreme in terms of luminosity or total radiated energy. As we show in this section, most of these, SNe 1995N, 1998S, 2005ip and 2006jd, although observationally quite different, are related to SN 2010jl, whereas two others recently discussed, SNe 2009ip and 2010mc, despite having some properties in common, show major differences from SN 2010jl. SN 1998S is one of the best observed Type IIn SNe, and is interesting as a less extreme case of a Type IIn SN. Initially it showed typical Type IIn signatures with narrow symmetric lines \citep{Leonard2000,Fassia2001} dominated by electron scattering of H and He I lines \citep{Chugai2001}. The symmetric lines disappeared after about a week and instead broad P-Cygni profiles with an expansion velocity of $\sim 7000 \rm ~km~s^{-1}$ appeared. Later optical spectra at $\ga 70$ days showed increasingly box-like profiles typical of circumstellar interaction, with H$\alpha$ by far the strongest and a steep Balmer decrement.
Spectra later than a year showed a strong suppression of the red wing of H$\alpha$, as well as Ly$\alpha$, indicating dust formation in the ejecta or reverse shock \citep{Leonard2000,Pozzo2004, Fransson2005}. As was remarked earlier, these line profiles were very different from those in SN 2010jl. High velocity ejecta profiles of H$\alpha$ were also seen for SN 1995N \citep{Fransson2002}, SN 2005ip and 2006jd \citep{Smith2009,Stritzinger2012}. For the latter two, maximum ejecta velocities of $16,000 - 18,000 \rm ~km~s^{-1}$ were seen at early times \citep{Smith2009,Stritzinger2012}. For SN 2006jd a high velocity wing at $\sim 7000 \rm ~km~s^{-1}$ could be seen even at 1540 days. In addition, an intermediate velocity component with a FWHM of $\sim 1800 \rm ~km~s^{-1}$ was seen in both SNe. There were no strong indications of electron scattering wings in these SNe, although this may have contributed to the intermediate component or faded away before their discovery, as for SN 1998S. In the last HST spectra of SN 1998S broad features from O I $\lambda \lambda 1302, 1356$ were seen, indicating processed material \citep{Fransson2005}. Recently, \cite{Mauerhan2012} found that at 14 years the spectrum of SN 1998S was dominated by strong lines of [O I-III]. This shows that the reverse shock has now propagated close to the core of the SN and that newly processed gas is dominating the spectrum. There is therefore no doubt that a core collapse has taken place. For SN 2010jl this phase has not yet occurred. Hard X-rays with $kT \sim 10$ keV and a luminosity of $\sim (5-8) \times 10^{39} \rm ~erg~s^{-1}$ were observed for SN 1998S 2-3 years after explosion \citep{Pooley2002}. The X-ray light curve of SN 2006jd stayed nearly flat with an unabsorbed luminosity of $(3-4)\times 10^{41} \rm ~erg~s^{-1}$ between 400 - 1600 days \citep{Chandra2012b}, lower than for SN 2010jl.
Also the column densities of these SNe, $\la 1.5 \times 10^{21}$ cm$^{-2}$, were considerably lower than for SN 2010jl and did not show the strong time evolution seen for that SN. High resolution spectra of SN 1998S by \cite{Fassia2001} exhibited a low velocity P-Cygni line with a velocity of $40-50 \rm ~km~s^{-1}$ and a higher velocity extension with velocity $350 \rm ~km~s^{-1}$. The low velocity component argues for a red supergiant progenitor. The origin of the higher velocity component is not clear. Variations of the wind velocity are also seen in LBVs and S Doradus stars and such a progenitor is probably not excluded. In the case of SN 2005ip the narrow component was only marginally resolved with FWHM $\sim 120 \rm ~km~s^{-1}$ \citep{Smith2009}. The very different optical spectra of SN 1998S and the other mentioned SNe compared to SN 2010jl can naturally be explained as a result of the different CSM densities and shock velocities. The mass loss rate of SN 1998S was estimated from late observations to be quite modest by Type IIn standards, $\sim 2\times 10^{-5} ~\Msun ~\rm yr^{-1}$ \citep{Fassia2001}. The time of optical depth unity to electron scattering is for a steady wind (for simplicity assumed to extend to infinity) given by \begin{eqnarray} t(\tau_{\rm e}=1)&=&{\kappa_{\rm T}\dot M \over 4 \pi u_{\rm w} V_{\rm s}} = \nonumber\\ &&680 \left({\dot M \over 0.1 ~\Msun ~\rm yr^{-1}}\right) \left({u_{\rm w} \over 100 \rm ~km~s^{-1}}\right)^{-1} \nonumber\\ && \left({V_{\rm s}\over 3000 \rm ~km~s^{-1}}\right)^{-1} \ \rm days, \label{eq_tau_el} \end{eqnarray} where we have scaled the parameters to SN 2010jl. Note, however, that we in Sect. \ref{sec_energy} argue that $V_{\rm s}$ is considerably lower in the directions we observe. For SN 1998S with $\dot M \approx 2\times 10^{-5} ~\Msun ~\rm yr^{-1}$, $u_{\rm w} \approx 50 \rm ~km~s^{-1}$ and $V_{\rm s} \approx 7000 \rm ~km~s^{-1}$ we find that $t(\tau_{\rm e}=1) \approx 0.1$ days.
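Eq. (\ref{eq_tau_el}) makes the contrast between the two SNe quantitative; evaluating it for both parameter sets (with $\kappa_{\rm T} = 0.35$ cm$^2$ g$^{-1}$ and cgs values for the solar mass and year assumed):

```python
import math

Msun, yr = 1.989e33, 3.156e7        # cgs solar mass and year (assumed values)
kappa_T = 0.35                      # cm^2/g, electron scattering opacity assumed

def t_tau1_days(Mdot_msun_yr, u_w_kms, V_s_kms):
    # Time until tau_e = 1 for a steady r^-2 wind: kappa*Mdot/(4 pi u_w V_s).
    Mdot = Mdot_msun_yr * Msun / yr                       # g/s
    t = kappa_T * Mdot / (4.0 * math.pi * (u_w_kms * 1e5) * (V_s_kms * 1e5))
    return t / 86400.0

t_2010jl = t_tau1_days(0.1, 100.0, 3000.0)   # ~680 days
t_1998S  = t_tau1_days(2e-5, 50.0, 7000.0)   # ~0.1 days
print(f"SN 2010jl: {t_2010jl:.0f} d, SN 1998S: {t_1998S:.2f} d")
```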
The fact that electron scattering wings were observed for several days argues for a denser shell at $\sim 10^{15}$ cm from the progenitor with $\dot M \approx 3\times 10^{-3} (u_{\rm w} /10 \rm ~km~s^{-1}) ~\Msun ~\rm yr^{-1}$ \citep{Chugai2001}. Although there are uncertainties in these quantities one can therefore conclude that {\it the fast disappearance of the electron scattering wings and the transparency to the processed core in SN 1998S (and the other discussed medium luminosity Type IIns) compared to SN 2010jl, is consistent with the much lower mass loss rate of the former. Because the X-ray column density is proportional to the electron scattering optical depth, this also explains the much lower X-ray column densities for these SNe.} This argument also applies to the even higher luminosity 'Superluminous' Type II SNe \citep[e.g.,][]{Gal-Yam2012}. The UVOIR luminosity of SN 1998S was $\sim 2\times 10^{43} \rm ~erg~s^{-1}$ at the peak, while a blackbody fit gave a considerably higher luminosity of $\sim 6\times 10^{43} \rm ~erg~s^{-1}$ \citep{Fassia2000}. This is similar to the peak luminosity of SN 2010jl. The initially much higher effective temperature, $\sim 18,000$ K, compared to $\sim 7300$ K for SN 2010jl \citep{Zhang}, however, had the consequence that the R-band magnitude at maximum was only $\sim -19.1$, compared to $\sim -20.0$ for SN 2010jl. A main difference compared to SN 2010jl was that the decay was nearly exponential up to $\sim 100$ days with a fast decay time scale of $\sim 25$ days. Including only the UVOIR luminosity we estimate an uncertain total energy of $\sim 1.1 \times 10^{50}$ ergs. Using instead the bolometric luminosity from the blackbody fits by Fassia et al., one gets a factor of $2 - 3$ higher radiated energy.
As \cite{Pozzo2004} argue, the early light curve is likely to be dominated by the released shock energy, and the main reason for the high initial luminosity was a large extent of the progenitor, as indicated by the dense CSM during the first days, discussed above. The fast decay as well as the lower radiated total energy, however, indicates a lower total mass of the CSM compared to SN 2010jl. \cite{Chugai2001} estimate a total mass of $\sim 0.1 ~M_\odot$ in the CSM of SN 1998S, $\sim 2$ orders of magnitude smaller than for SN 2010jl. SN 2005ip had a moderate peak luminosity for a Type IIn SN, $\sim 4\times 10^{42} \rm ~erg~s^{-1}$ \citep{Smith2009,Stritzinger2012}. It then decayed on roughly the ${}^{56}$Co time scale to a nearly constant level at $\sim 200$ days, dominated by the NIR. The optical luminosity during this plateau phase was $\sim 2.5\times 10^{41} \rm ~erg~s^{-1}$, decaying slowly as $t^{-0.3}$ from $\sim 200$ to $\ga 1600$ days. From blackbody fits Stritzinger et al. find a luminosity of $\sim 8\times 10^{41} \rm ~erg~s^{-1}$ for the NIR warm component. Stritzinger et al. also discuss optical and IR observations of another interesting Type IIn, SN 2006jd. This SN had a similar early luminosity, but considerably higher optical luminosity in the plateau phase, $\sim 8\times 10^{41} \rm ~erg~s^{-1}$, and nearly constant from $\sim 200$ to $\ga 1600$ days. From \cite{Stritzinger2012} we estimate the total radiated energy of SN 2005ip to be $\sim 4 \times 10^{49}$ ergs from the photospheric emission and $\sim 7 \times 10^{49}$ ergs in the dust component, and that of SN 2006jd to be $\sim 10^{50}$ ergs from the photospheric emission and $\sim 2 \times 10^{50}$ ergs in the dust component. Although the luminosity and energy are high, both these SNe are therefore considerably less extreme in terms of mass loss and degree of interaction compared to SN 2010jl.
An independent indicator of strong mass loss is the large observed N/C ratio, characteristic of CNO processed material in the CSM. This is consistent with the fact that high N/C ratios have been seen for all Type IIn, IIb and IIL SNe observed with HST: SN 1979C, 1993J, 1995N, 1998S and SN 2010jl (Sect. \ref{sec_csm}). A common feature of all these SNe is also the presence of high ionization lines from the CSM. UV spectra of SN 1998S from day 28 to day 485 showed increasingly strong lines from C III-IV and N III-V originating in the CSM. Also SN 2005ip and 2006jd revealed a number of narrow high ionization lines from the CSM. For SN 2006jd this included lines from [Fe X-XI] and [Ar X], while SN 2005ip showed even higher ionization lines, including [Fe XIV] $\lambda 5302.9$ and [Ar XIV] $\lambda 4412.3$ \citep{Smith2009}. High ionization lines were also seen in SN 1995N, which showed a very similar spectrum to SN 2006jd \citep{Fransson2002}. We note that \cite{Hoffman2008} observed similar high ionization coronal lines in the Type IIn SN 1997eg. Nearly all of these SNe had a strong X-ray flux. The connection between this and the presence of circumstellar lines therefore indicates that the latter are excited by the X-rays in most of these cases. Conversely, the presence of circumstellar lines may be used as a diagnostic of a strong X-ray flux. NIR spectra and photometry of SN 1998S by \cite{Fassia2000} revealed a dust excess already at 136 days. Later observations by \cite{Pozzo2004} showed a comparatively hot dust spectrum in the first observations, with a dust temperature $\sim 1000-1250$ K at $\sim 1$ year, decreasing to $\sim 750-850$ K at 1198 days. Pozzo et al. argue for a dust echo for the first epochs, while the later emission may have an origin in dust condensed in the cool dense shell.
In contrast to SN 2010jl, there is no problem with dust evaporation at the shock radius for epochs later than $\sim 1$ year because the bolometric luminosity from the SN was then $\la 10^{41} \rm ~erg~s^{-1}$ \citep{Pozzo2004}. The dust evaporation radius is then $(1-2)\times 10^{16}$ cm (Table \ref{table_evap}), while the shock radius is $\sim 3\times 10^{16}$ cm for a shock velocity of $10^4 \rm ~km~s^{-1}$. Also the Type Ibn SN 2006jc showed evidence for dust formation behind the reverse shock \citep{Smith2008a,Mattila2008}. Even at maximum the luminosity of this SN was $\la 10^{42} \rm ~erg~s^{-1}$, declining to $\la 2\times 10^{41} \rm ~erg~s^{-1}$ later than 100 days \citep{Mattila2008}. The expansion velocity was estimated to be $\ga 8000 \rm ~km~s^{-1}$, indicating a shock radius of $\ga 7 \times 10^{15}$ cm at 100 days. The high dust temperature, $\sim 1800$ K at maximum, argues for carbon dust. From Table \ref{table_evap} we then find a dust evaporation radius of $\sim 5.6 \times 10^{15}$ cm at 100 days for carbon dust with a size of 1 $\mu$m. Because of the much lower luminosity compared to SN 2010jl, dust formation behind the reverse shock is therefore possible. \cite{Gerardy2002} discuss NIR photometry and spectra for several Type IIn SNe. The best observed of these is SN 1995N with NIR observations from 730 to 2435 days after explosion. During the first observations the NIR luminosity was $\sim (7-10)\times 10^{41} \rm ~erg~s^{-1}$, depending on the assumed dust emissivity. It had then slowly decayed by a factor of $\sim 10$ by the time of the last observations. The dust temperature was $700 - 900$ K during most of the evolution. Also the other SNe observed showed similar high temperatures. From the fading of the red wing of the H$\alpha$ line in SN 1995N, late dust formation at an age of $\sim 1000$ days was indicated \citep{Fransson2002}. Evidence for dust formation in SN 2005ip was first presented based on NIR photometry by \cite{Fox2009}.
Further observations in the NIR and with Spitzer showed evidence for two dust components with different temperatures \citep{Fox2010}. Fox et al. argue that the hot component comes from newly formed dust in the ejecta or reverse shock, while the cool emission comes from pre-existing dust heated by the late circumstellar interaction. Independent evidence for dust came from line profile asymmetries \citep{Smith2010}. Both SN 2005ip and 2006jd showed strong IR excesses already shortly after the explosion. The fits to the warm components reveal for both SNe a maximum dust temperature of $\sim 1600$ K at 100-200 days, decreasing to $\sim 1000$ K at 1000 days. The luminosity of the dust component of SN 2006jd increased to $\sim 3\times 10^{42} \rm ~erg~s^{-1}$ at $\sim 500$ days and then decayed slowly to $\sim 5\times 10^{41} \rm ~erg~s^{-1}$ at $\sim 1700$ days. The fraction of the bolometric luminosity in this warm component increased already at $\sim 100$ days to $\sim 80 \%$, and to even higher values at later times. SN 2005ip showed a similar behavior, although the luminosity of the warm component was only $\sim 5\times 10^{41} \rm ~erg~s^{-1}$ and at a more constant level. While we believe that an echo from pre-existing dust may dominate the NIR emission in SN 2010jl, the other discussed mechanisms may well be important for other Type IIn SNe. All these are physically plausible and their relative importance depends on the specific case in terms of progenitor mass loss rate, dust shell geometries, SN luminosity, including the X-rays, and viewing direction. Besides the discussed SNe, two recent Type IIn SNe, SN 2009ip \citep{Smith2010a,Foley2011,Pastorello2012,Mauerhan2013} and SN 2010mc \citep{Ofek2013b}, have received considerable attention. Both SNe showed minor outbursts with $M_{\rm R} \sim -14$ to $-15$ before the last major eruption.
The main outburst had an absolute magnitude of $M_R \sim -18$, corresponding to a luminosity of $\sim (5-8) \times 10^{42} \rm ~erg~s^{-1}$, a factor $\sim 4-6$ lower than SN 2010jl. Another difference is that the flux decayed considerably faster during the first $\sim 60$ days, with an e-folding decay time scale of $\sim 20$ days. From the bolometric light curve we estimate a total radiated energy of $\sim 2 \times 10^{49}$ ergs for SN 2009ip and a similar energy for SN 2010mc. This is more than an order of magnitude lower than what we estimate for SN 2010jl. Both narrow lines with broad electron scattering wings, as well as broad ejecta lines, were seen. \cite{Pastorello2012} stress the important observation that for SN 2009ip expanding material with a velocity of $\sim 12,500 \rm ~km~s^{-1}$ was already seen in the minor outbursts a year before the most recent large outburst. It is therefore clear that high velocities are not a unique signature of a core collapse, but can also occur in stages before this. In connection to the September 2012 eruption both an electron scattering profile with FWHM $\sim 550-800 \rm ~km~s^{-1}$ and a broad absorption extending to $14,000-15,000 \rm ~km~s^{-1}$ were seen. Based on the large velocities and other arguments, \cite{Mauerhan2013} propose a core collapse scenario, which is, however, challenged by \cite{Pastorello2012}. From the fact, mentioned above, that the previous outbursts also showed similarly high velocities, high luminosity and long term variability, they propose that SN 2009ip is instead a pulsational pair-instability event, and that the star may have survived the September 2012 outburst. From these comparisons we conclude that there are similarities between SN 2010jl and these two objects. All three objects could be classified as Type IIn SNe, based on the narrow lines with profiles typical of electron scattering. It is clear that they all have very dense CSM into which the shock waves are propagating.
At least for SN 2009ip there is evidence for pre-existing dust which may have been formed a few years before the last eruption \citep{Foley2011,Smith2013}, as for SN 2010jl. The main differences between these objects and SN 2010jl are that the latter showed a considerably higher peak luminosity, slower decay and an order of magnitude higher total radiated energy compared to SNe 2009ip and 2010mc. We therefore believe that the basic mechanism for these objects is different from that in SN 2010jl. The fact that no very broad lines with velocities typical of core collapse SNe were seen for SN 2010jl is less important. As we have already argued, this is mainly an effect of different mass loss rates and total mass lost. \subsection{Putting it all together} \label{sec-big} In this section we summarize the main points of the previous discussion and discuss how this information can be put together in a coherent scenario for SN 2010jl. Most of the bolometric luminosity is produced by the radiation from the radiative shock with a velocity of up to $\sim 3000 \rm ~km~s^{-1}$, which propagates through the dense CSM resulting from a mass loss rate of $\ga 0.1 ~\Msun ~\rm yr^{-1}$ and a wind velocity of $\sim 100 \rm ~km~s^{-1}$, which may be the result of a previous LBV-like eruption. The total mass lost is $\ga 3 ~M_\odot$. The ingoing X-rays from the shock will be thermalized at early epochs in the dense shell behind the shock and later in the ejecta. There they will be converted into UV and optical continuum radiation with a spectrum close to a blackbody. Most of the outgoing X-rays will be absorbed by the pre-shock wind, where they will give rise to UV and optical emission lines. An important issue is the relative location of the source of the UV and optical line emission and the electron scattering region.
The fact that the strong high ionization UV lines, like the N IV], C IV, N III] lines, at early epochs had strong electron scattering wings besides the narrow component, shows that at least a large fraction of these arise in or interior to the electron scattering region. The same is true for the Balmer lines. As we have already discussed in Sect. \ref{sec_csm}, the narrow lines most likely arise in a more extended region outside the denser part of the CSM. Because the velocity to which the gas is accelerated is $\propto r^{-2}$ (Eq. \ref{eq_vel_acc}), the absence of line shifts of the narrow component is consistent with a distance of $\ga 2 \times 10^{16}$ cm for this component (Sect. \ref{sec_csm}). To explain the broad wings of the lines the electron scattering optical depth of this CSM has to be $\ga 3$. The mass loss rates we find correspond to a total optical depth to electron scattering of a wind with outer radius $R_{\rm shell} $ \begin{eqnarray} \tau_{\rm tot} &=& 1.56 \left({\dot M \over 0.1 \Msun ~\rm yr^{-1}}\right) \left({u_{\rm w} \over 100 \rm ~km~s^{-1}}\right)^{-1} \nonumber\\ &&\left({V_{\rm s,320} \over 3000 \rm ~km~s^{-1}}\right)^{-1} \left( {t \over {\rm years}}\right)^{-0.82} \left(1 - {R_s \over R_{\rm shell} } \right) \ , \end{eqnarray} if completely ionized. From this it is clear that to get a large enough $\tau_{\rm tot}$ either the shock velocity has to be lower than that corresponding to the X-ray temperature, $\sim 3000 \rm ~km~s^{-1}$, or the mass loss rate in these directions has to be higher than the average, or most likely both, as we discuss at the end of this section. One can estimate the extent of the ionized region, assuming that most of the X-ray and EUV emission from the cooling shock is absorbed by the CSM in front of the shock, where it gives rise to a Str\"omgren zone (see e.g., \cite{Fransson1984} for a similar situation).
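For orientation, the $\tau_{\rm tot}$ scaling above can be evaluated for a few parameter choices. This is an illustrative sketch only; the prefactor 1.56 is taken directly from the expression above, not derived independently:

```python
# Illustrative evaluation of the tau_tot scaling for a fully ionized wind.
def tau_tot(mdot=0.1, u_w=100.0, v_s=3000.0, t_yr=1.0, shell_ratio=0.0):
    """Electron scattering depth of the wind.
    mdot in Msun/yr, u_w and v_s in km/s, shell_ratio = R_s / R_shell."""
    return (1.56 * (mdot / 0.1) * (u_w / 100.0) ** -1
            * (v_s / 3000.0) ** -1 * t_yr ** -0.82 * (1.0 - shell_ratio))

print(tau_tot())                 # 1.56 for the fiducial parameters
print(tau_tot(v_s=1500.0))       # 3.12: halving V_s doubles tau
print(tau_tot(mdot=0.2))         # 3.12: doubling Mdot does the same
```

This makes the point in the text explicit: reaching $\tau \ga 3$ requires roughly halving the shock velocity or doubling the mass loss rate relative to the fiducial values, or some combination of both.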
We also assume that the X-rays emitted inwards are thermalized by the cool shell and ejecta, resulting in optical and UV emission, but only contributing marginally to the ionization. Balancing the rate of ionizing photons, $\epsilon_i L / h\nu_{\rm ion}$, with the rate of H II recombinations, $4 \pi \alpha_B \int n_e(r)^2 r^2 dr$, where $\epsilon_i$ is the fraction of the energy going into ionizations, and with $\tau_e = \sigma_T \int n_e dr$ one obtains \begin{equation} \tau_e = { \sigma_T \over \alpha_B } {m_p u_w \over \dot M} {\epsilon_i L\over h\nu_{\rm ion}}. \end{equation} With $\alpha_B = 2\times 10^{-13} T_4^{-0.7} {\rm cm^{3} \ s^{-1}}$ one finds \begin{eqnarray} \tau_e &=& 4.05 \epsilon_i \left({L\over 10^{43} \rm ~erg~s^{-1}}\right) T_4^{0.7} \left({\dot M \over 0.1 \Msun ~\rm yr^{-1}} \right)^{-1} \nonumber\\ &&\left({u_{\rm w}\over100 \rm ~km~s^{-1}} \right). \end{eqnarray} Assuming that the X-ray/EUV luminosity is produced by the shock and given by Eq. (\ref{eq_lum_mdot}) (for $n=7.6$), we get \begin{equation} \tau_e = 1.9 \epsilon_i T_4^{0.7} \left({V_{\rm s,320} \over 3000 \rm ~km~s^{-1}}\right)^3 \left( {t \over 320 \ {\rm days}}\right)^{-0.54} \ . \label{eq_taue_s} \end{equation} Therefore, one naturally obtains an electron scattering optical depth of the ionized zone in front of the shock of the order of unity, {\it independent of the mass loss rate, wind velocity or radius.} This assumes that the total optical depth of the wind exceeds this value. To estimate the conversion efficiency from X-rays to H$\alpha$, as well as to determine the general UV and optical spectrum expected, we have made some exploratory calculations using CLOUDY \citep{Ferland2013}. For this we have taken a simple free-free X-ray spectrum and a density profile given by a $\rho \propto r^{-2}$ wind. The X-ray temperature was 10 keV and we studied a range of shock luminosities and mass loss rates.
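The numerical coefficient 4.05 can be reproduced from the first expression above. A short check, assuming $h\nu_{\rm ion} = 13.6$ eV for the typical ionizing photon energy (an assumption; the text does not state this value explicitly) together with the quoted $\alpha_B$:

```python
# Check of the coefficient in the Stromgren-zone estimate of tau_e,
# tau_e = (sigma_T / alpha_B) (m_p u_w / Mdot) (eps_i L / h nu_ion),
# assuming h*nu_ion = 13.6 eV and alpha_B = 2e-13 T4^-0.7 cm^3/s.
SIGMA_T = 6.652e-25   # cm^2, Thomson cross section
M_P = 1.673e-24       # g, proton mass
EV = 1.602e-12        # erg per eV
MSUN = 1.989e33       # g
YR = 3.156e7          # s

def tau_e(L=1e43, T4=1.0, mdot_msun_yr=0.1, u_w_kms=100.0, eps_i=1.0):
    alpha_B = 2e-13 * T4 ** -0.7
    mdot = mdot_msun_yr * MSUN / YR
    return (SIGMA_T / alpha_B) * (M_P * u_w_kms * 1e5 / mdot) * eps_i * L / (13.6 * EV)

print(tau_e())   # ~4.05, matching the coefficient in the text
```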
A problem here is that for the most interesting parameters the electron scattering optical depth is larger than unity (Eq. \ref{eq_taue_s}), which cannot be handled properly by CLOUDY. As pointed out by \cite{Chevalier2012}, one effect of this is that the state of ionization is underestimated since the relevant quantity for this is the ratio of the radiation to matter density. This is given by $\zeta = \tau_e L/(n r^2)$, increasing the usual ionization parameter by a factor $\tau_e$. Nevertheless, the qualitative results are interesting. With regard to the efficiency of X-ray to H$\alpha$ conversion, we find that it is very difficult to obtain an efficiency larger than $\sim 5-7 \%$, and then only for large column densities and for densities less than $\sim 10^{10} \rm ~cm^{-3}$. Above this density the efficiency decreases rapidly due to thermalization of the H$\alpha$ line. The electron density we infer from the bolometric luminosity in Sect. \ref{sec_energy} is for $n({\rm He})/n({\rm H})=0.1$ \begin{eqnarray} n_{\rm e} &=& 2.86\times 10^{9} \left({\dot M \over 0.1 \Msun ~\rm yr^{-1}} \right) \left({u_{\rm w}\over100 \rm ~km~s^{-1}} \right)^{-1} \nonumber\\ &&\left({r \over 3\times10^{15} \ \rm cm} \right)^{-2} \ \rm cm^{-3}, \end{eqnarray} which is in the range where we expect a high H$\alpha$ efficiency. Further, the observed maximum H$\alpha$ luminosity was $\sim 1\times 10^{42}\rm ~erg~s^{-1}$ (Figure \ref{fig_haflux}). Although we find that there is some contribution from the photospheric emission from ionizations in the Balmer continuum, the observed H$\alpha$ luminosity implies a total X-ray luminosity of $\ga 10^{43} \rm ~erg~s^{-1}$. There is therefore a rough agreement between the bolometric luminosity, H$\alpha$ luminosity and a high H$\alpha$ efficiency. Compared to this the observed level of X-rays is $\sim 10^{42} \rm ~erg~s^{-1}$ \citep{Chandra2012}.
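The normalization $2.86\times 10^{9} \rm ~cm^{-3}$ follows from $n_{\rm e} = \dot M/(4\pi r^2 u_{\rm w} \mu_{\rm e} m_{\rm p})$, where $\mu_{\rm e} = 1.4/1.2$ is the mass per free electron (in units of $m_{\rm p}$) for fully ionized gas with $n({\rm He})/n({\rm H})=0.1$. A quick numerical check (illustrative only, assuming full ionization):

```python
# Check of the wind electron density normalization,
# n_e = Mdot / (4 pi r^2 u_w mu_e m_p), for n(He)/n(H) = 0.1.
import math

M_P = 1.673e-24   # g
MSUN = 1.989e33   # g
YR = 3.156e7      # s

def n_e(mdot_msun_yr=0.1, u_w_kms=100.0, r_cm=3e15):
    # mass per free electron in units of m_p: (1 + 4*0.1)/(1 + 2*0.1) = 1.4/1.2
    mu_e = (1.0 + 4.0 * 0.1) / (1.0 + 2.0 * 0.1)
    mdot = mdot_msun_yr * MSUN / YR
    return mdot / (4.0 * math.pi * r_cm ** 2 * u_w_kms * 1e5 * mu_e * M_P)

print(n_e())   # ~2.85e9 cm^-3, consistent with the quoted 2.86e9
```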
This is by itself not a problem, because most of the X-rays may be absorbed by the pre-shock gas, and this is in fact needed to explain the UV emission lines. A major problem is, however, that, as discussed in the Introduction, the X-rays at 59 days indicated a column density of $\sim 10^{24} \ {\rm cm}^{-2}$, decreasing to $3\times 10^{23} \ {\rm cm}^{-2}$ at 373 days\footnote{ Chandra et al. used a metallicity of 0.3 times solar in agreement with that derived for the host galaxy. The connection between the hydrogen column density and the X-ray absorption, which is dominated by metals, depends on the metallicity. The main indication of synthesized material in the outer layers is the CNO processing discussed in Sect. \ref{sec-cno}. This only results in a redistribution between the CNO elements, which have similar cross sections and ionization thresholds. As discussed later, we do not find any other indication of processing. It is therefore reasonable to assume that, except for a different host metallicity, the standard conversion between X-ray absorption column density and $N_{\rm H}$, and therefore electron scattering depth, applies. }. This column density corresponds to an electron scattering depth of $\tau_{\rm e} \la 1$, which is too low to explain the line profiles in the optical. There are several possible explanations for this discrepancy. \cite{Ofek2013} argue that the column density could be decreased by a higher ionization due to a large electron scattering optical depth. As we argue above, the optical depth of the {\it ionized} gas is limited and the effect will not be dramatic. Instead we believe that the different column densities are due to large scale asymmetries in the outflow, giving rise to different column densities in different directions. Alternatively, it could be due to small scale inhomogeneities resulting in a leakage of X-rays in the low density regions.
This is also supported by the low column density \cite{Ofek2013} found from their NuSTAR + XMM observations at $728-754$ days. We discuss further evidence for these possibilities below. For a wind, most of the H$\alpha$ emission originates close to the shock, as can be seen from $dL/dr = 4 \pi \alpha_B r^2 n_e^2 \propto 1/r^2$. This is also the region of largest optical depth to electron scattering. Emission and scattering therefore take place in the same region, although some of the H$\alpha$ emission also occurs at low optical depths, resulting in a narrow line core as long as the bulk velocity of the emitting material is low (Sect. \ref{sec_radacc}). From our CLOUDY calculations we find that high ionization lines, like N III] $\lambda 1750$, N IV] $\lambda 1486$ and C IV $\lambda \lambda 1548, 1551$, arise interior to the H$\alpha$ emitting region and therefore experience the same electron scattering as H$\alpha$. We also find that the luminosities of these lines are comparable to H$\alpha$, as is observed. For constant X-ray luminosity the H$\alpha$/H$\beta$ ratio increases as the density decreases, which may explain the trend seen in Figure \ref{fig_haflux}. The temperature in the H$\alpha$ emitting zone is $\sim (1.5-2)\times 10^4$ K. A related model is discussed by \cite{Dessart2009} who calculate the line formation in Type IIn SNe with application to SN 1994W. Although the authors point out that this is only a preliminary attempt and related to a specific scenario, this is the only self-consistent modeling of the line formation we are aware of. The density of the envelope in the model by \cite{Dessart2009} is $3\times 10^9 \rm ~cm^{-3}$, which is similar to what we find for the density in front of the shock. Dessart et al. also point out a number of useful signatures, depending on the relative location of the region where the Balmer emission originates and the electron scattering region.
In particular, they find that if the photons are internally generated, H$\alpha$, H$\beta$ and H$\gamma$ form at increasing optical depth. The electron scattering depth also increases for these lines, leading to increasingly strong wings for H$\beta$ and H$\gamma$ compared to H$\alpha$. If the lines, on the other hand, are created outside the scattering region they are expected to have similar line profiles. In principle, these possibilities can be tested with our observations. The limited S/N and relatively steep Balmer decrement, however, make it difficult to see any differences between these line profiles. The best set of line profiles is from our X-shooter spectrum in Figure \ref{ha_hb_hei_pb}. This does show some minor differences in the width of the lines, but these are sensitive to the exact way the continuum is subtracted as well as to blending by other lines. The latter is especially important for the He I $\lambda 10,830$ line, which is blended with Pa$\gamma$; the red wing of H$\beta$ also shows indications of blending. It is therefore premature to draw any firm conclusions. As argued in Sect. \ref{sec_radacc}, we believe that the bulk velocity of $\sim 600-700 \rm ~km~s^{-1}$ of the Balmer emitting region is a result of the pre-acceleration of the circumstellar material in front of the main shock by the intense thermalized radiation from the shock. This is supported by the agreement between the observed velocity evolution and that predicted from the evolution of the bolometric luminosity. A surprising result is that to produce the line shift of the H$\alpha$ line we required that most of the emitting material was moving with a similar velocity relative to us. This in turn requires the flow to subtend a fairly small solid angle. The expansion does, however, not necessarily have to be in our LOS, which would increase the true expansion velocity by $1/\cos \theta$, where $\theta$ is the angle between these directions.
Both the line shifts and the X-ray observations therefore give evidence for a highly asymmetric ejecta interaction in SN 2010jl. On a larger scale we find evidence for an asymmetric dust distribution (Sect. \ref{sec_dust}). In addition, the early polarization measurements by \cite{Patat2011} at $14.7$ days after discovery showed a large, constant continuum polarization of $1.2 - 2 \%$, indicating an asphericity with axial ratio $\sim 0.7$. Close to the center of H$\alpha$ the polarization decreased to a low level. The wings of the line have, however, a polarization close to the continuum level. As suggested by the authors, this is probably an indication of another component dominated by recombination. Our high resolution observations do show that especially at early phases the narrow component from the CSM is strong, and will therefore dominate in the line center. Most likely this is produced mainly by recombination and is probably also coming from a more symmetric CSM, as indicated by the strong absorption component of the P-Cygni lines (Sect. \ref{sec_csm}). Asymmetries have been discussed from time to time for Type IIn SNe. This includes the Type IIn SN 1988Z \citep{Chugai1994}, SN 1995N \citep{Fransson2002}, SN 1998S \citep{Leonard2000,Fransson2005}, and SN 2006jd \citep{Stritzinger2012}. In these cases it has mainly been disk-like or clumpy distributions that have been considered. A clumpy model does not, however, have the large scale asymmetry needed to match the observed bulk velocity of the hydrogen lines. A disk-like geometry also has problems because one would then expect the expansion into a disk to be cylindrically symmetric, which would not give the coherent velocity in the line of sight indicated by the broad line profiles above. Instead we believe that a bipolar outflow may be more compatible with the observations. As was noted in Sect.
\ref{sec_csm}, the expansion velocity of the molecular shell in Eta Carinae is highly anisotropic with a velocity of up to $\sim 650 \rm ~km~s^{-1}$ at the poles \citep{Smith2006}. Even more interesting is that most of the mass is ejected between $45\degr$ and the pole. Higher velocities up to $\sim 6000 \rm ~km~s^{-1}$ are also seen, although little mass is involved \citep{Smith2008}. The initially small shift of the lines in SN 2010jl, however, requires this high density shell to have a low velocity, $\la 100 \rm ~km~s^{-1}$, before the explosion, similar to what we see from the narrow lines. The increasing velocity shift in H$\alpha$ may then be a direct representation of the gradual acceleration of this shell. The column density of the shell in Eta Carinae is $\sim 10^{24} \ \rm cm^{-2}$, located at a radius between $3\EE{16}$ cm and $3\EE{17}$ cm \citep{Smith2006}. This scales as $N_{\rm H} \propto r^{-2}$, so if one were to observe it at an earlier stage with a radius of a few $\times 10^{15}$ cm, as in SN 2010jl, the column density would be $2-3$ orders of magnitude higher, more than sufficient for explaining the large electron scattering optical depths we have evidence for in the broad lines. The planar geometry of the H$\alpha$ emitting gas may resemble that seen in Eta Carinae at high latitudes. \cite{Smith2006} find above $\sim 60\degr$ latitude an almost perfectly planar shape of the molecular shell, containing most of the gas \citep[see e.g., Fig. 4 in][]{Smith2006}. If one were to observe an explosion in this direction (or higher latitude) the expansion would clearly be planar. The large column density of the shell would also prevent the observation of the rear parts of the CSM. The scenario we infer from the observations has many qualitative similarities to the one calculated by \cite{vanMarle2010}.
Using 2-D hydrodynamical simulations they have calculated the interaction of a core collapse SN with a dense circumstellar shell ejected 2 years before explosion. The total shell mass was in most cases $10 ~M_\odot$, while the ejecta mass for this shell mass was varied between $10$ and $60 ~M_\odot$. Models with both spherical geometry and a density and mass distribution similar to the bipolar structure observed for Eta Carinae were calculated. Optically thin cooling, but no radiative transfer, was included. At the time the shock collides with the shell the shock velocity is, for the chosen parameters, $\sim 5500 \rm ~km~s^{-1}$, but decreases to 1000-2000 $\rm ~km~s^{-1}$ in the shell, depending on the ejecta mass, shell mass and SN energy. Because of the large density and low velocity, implying a shock temperature of $\la 10^7$ K, the shock becomes radiative. After the shock has penetrated the shell the velocity again increases and the shock then in most cases becomes adiabatic. This evolution illustrates what happens when a SN with 'normal' energy interacts with a dense CSM. The low shock velocity resulting from this is in line with what is observed for SN 2010jl. In their bipolar simulations (models D01-D03) the ejecta first interact with the equatorial region and then later with the polar region. Because of the smaller column density in the equatorial direction the shock penetrates this region first, at which time the shock temperature increases rapidly. In the polar direction it takes a considerably longer time for the shock to reach the outer boundary of the shell and the shock velocity is also lower. This may illustrate the range of velocities which may be present in an anisotropic CSM, and which may explain the different column densities inferred from the X-rays and the electron scattering wings, discussed above.
These anisotropic models demonstrate nicely how one can have both high shock velocities (in the equatorial directions), giving rise to hard X-rays, and slow shocks still in the optically thick circumstellar shell (in the polar directions) at the same time. The drop in luminosity during the breakout phase is in the bipolar models $L \propto t^{-\alpha}$, where $\alpha = 3.6 - 4.4$ \citep[Figure 16 in][]{vanMarle2010}. This depends on the parameters of the models, but is at least in qualitative agreement with the observations, which show that $L(t) \propto t^{-3.4}$ after the break (Sect. \ref{sec_phot_results}). For the spherical models in \cite{vanMarle2010} the light curves in general consist of a rapid rise when the shock encounters the shell and a slowly decreasing plateau. As the shock has passed the outer shell, there is a rapid drop as the shock becomes adiabatic. Because of the gradual emergence of the shock with latitude, the bipolar case results in a smoother evolution of the light curve without the same sharp break as the spherical shell. This result depends on the angular dependence of the column density. A stronger concentration to the poles would give a sharper drop of the light curve, in line with what we see in SN 2010jl. The efficiency in converting kinetic energy into radiation was found to be in the range $10-40 \%$, decreasing with increasing ejecta mass and expansion velocity of the shell (which determines the distance of the shell from the SN). With an efficiency in this range the radiated energy of $\sim 6.5 \times 10^{50}$ ergs would correspond to an explosion energy of $\sim (4\pm 3)\times 10^{51}$ ergs. The fact that we do not see evidence for expansion in the opposite direction is a natural result of the obscuration of this region by the optically thick material in the core of the SN and from the ionized material exterior to this moving towards us.
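The explosion-energy range quoted above is simple arithmetic on the radiated energy and the $10-40\%$ efficiency range; an illustrative check:

```python
# Arithmetic behind the explosion-energy estimate: E_exp = E_rad / efficiency,
# with E_rad ~ 6.5e50 erg and an efficiency of 10-40% (van Marle et al. range).
E_RAD = 6.5e50  # erg, total radiated energy

e_min = E_RAD / 0.4   # highest efficiency -> lowest explosion energy
e_max = E_RAD / 0.1   # lowest efficiency -> highest explosion energy
print(e_min, e_max)   # ~1.6e51 to 6.5e51 erg, i.e. roughly (4 +/- 3)e51 erg
```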
The decline in the flux at late epochs is a result of the decrease in the photospheric radius (see Table \ref{table_dust_param}), which in principle might indicate that the extent of the photosphere is not large enough to obscure the receding side of the SN. However, the photospheric radius is the thermalization radius which must be deep inside the $\tau_{\rm e} = 1$ radius. In our last spectra at $\sim 1100$ days the optical depth to electron scattering is likely to be $\ga 1$. The obscuration of the back-side of the SN by the scattering region even at late epochs is therefore not a problem. As we have already discussed, at the late epochs the shock should have penetrated the dense circumstellar shell in the low column density regions close to the equator, while it is still inside the optically thick region in our line of sight, although outside the photosphere. With regard to the narrow lines we note that a steady wind with a mass loss rate of $10^{-3} ~\Msun ~\rm yr^{-1}$, as in the quiescent state in Eta Carinae, and wind velocity of $100 \rm ~km~s^{-1}$ corresponds to a density of $\sim 3 \times 10^{6} (\dot M/10^{-3} ~\Msun ~\rm yr^{-1}) (u_w/100 \rm ~km~s^{-1})^{-1} (r/10^{16}$ $\rm cm)^{-2} \rm ~cm^{-3}$, which is similar to what we find from the nebular diagnostics of the narrow lines for SN 2010jl, but below that of the Balmer lines. A larger radial extent of the 'narrow line region', as indicated from the ionization parameter (Sect. \ref{sec_csm}) is natural in this context. The deep absorption component in the P-Cygni profile of the narrow H$\alpha$ line (Figure \ref{fig3b}) means that this material must have an appreciable covering factor of the underlying emission from the SN. The emission component of this, as well as the depth of the absorption, tends to decrease with time. This may be a result of the expansion of the SN relative to the slow CSM. 
An example of this kind of evolution can be seen in the analytical models in Figure 5 of \cite{Fransson1984}, where the relative size of the scattering layer to the 'photospheric' emission is varied. The dust shell, giving rise to the echo, is at a considerably larger distance, $\ga 10^{17}$ cm. The high temperature and large distance argue for small dust grains. To reproduce the NIR light curve the dust should have an anisotropic distribution. The dust is likely to have been formed in previous mass loss phases of the progenitor, as seen in Eta Carinae. Finally, we remark that even in our last full spectrum at $\sim 850$ days there is no clear transition to a nebular stage, although the spectrum is dominated by emission lines, with H$\alpha$ by far the strongest line, rather than continuum emission (Figure \ref{fig_fast_spec}). Electron scattering is apparently still important in the line emitting region, and there are no signs of any processed material even in the last spectrum. This may in principle speak in favor of the pure LBV scenario without core collapse. Because of the extreme mass loss rate the hydrogen envelope is, however, likely to still be opaque to the emission from the core (Sect. \ref{sec-comp}). We will therefore have to wait until the optical depth is low enough for any firm conclusions. \subsection{Progenitor scenarios} \label{sec_prog} As discussed by \cite{Miller2010} \citep[see also][]{Smith2008b,Smith2010}, the extremely luminous Type IIn SNe, with absolute magnitude brighter than $M_V = -20$, may arise from ejections where the ejected shell has not expanded to more than a few $\times 10^{15}$ cm, resulting in a dense, optically thick shell. This would correspond to SN 2010jl. Ejected shells which have expanded to larger radii, and therefore have lower density and optical depth, would give rise to more `normal' Type IIns, like SNe 1988Z and 2008iy, with absolute magnitudes fainter than $ -20$.
These are also expected to be strong X-ray and radio sources, in contrast to the more luminous Type IIns. As we discuss in Sect. \ref{sec-comp}, the mass loss rate and its duration may be the most important parameters distinguishing these properties. Although we have in this paper mainly discussed our results in terms of conditions around LBVs, the standard LBV scenario has a number of problems in explaining the Type IIn SNe. In particular, the high frequency of Type IIns may be incompatible with very massive progenitors and also the evolutionary state of the LBVs may be a problem. Recently there have been different suggestions, discussed below, addressing these problems. One should note that the LBV interpretation is mainly phenomenological, based on observational characteristics like the very dense CSM, the CNO processing and the typical velocities for the CSM. It has also been suggested that a large fraction of Type IIn SNe may arise as a result of explosions in red supergiants with enhanced mass loss \citep{Fransson2002,Smith2009}, which would alleviate these problems. A single star version has recently been put forward by \cite{Groh2013} based on a moderately fast rotating star with a mass $20 - 25 ~M_\odot$. The average rotational velocity on the main sequence was $\sim 200 \rm ~km~s^{-1}$ for these models. The rotation in combination with mixing results in a more massive He-core than for a non-rotating star. This has the consequence that the star ends up as a hot supergiant, with effective temperature $\sim 2\times10^4$ K, and with a thin hydrogen envelope. The heavy mass loss in the red supergiant and final hot phase results in a dense CSM. The most interesting result in the paper by Groh et al. is that a full NLTE atmospheric calculation of the hydrostatic and wind regions in the final phase gives a spectrum similar to that of an LBV, with a large number of emission lines. 
Because of heavy mass loss and mixing, CNO products are transported to the surface, with essentially complete CNO burning. Their 20 $~M_\odot$ model has N/C = 128 and N/O = 16 at the end of C-burning. These are considerably higher than our determination for SN 2010jl, including the uncertainty in these. As discussed by Groh et al., this could perhaps be explained by He-burning products mixed to the surface. It may, however, also indicate less complete CNO burning, as is, e.g., seen in the $15 ~M_\odot$ models by \cite{Ekstrom2012}. These lower mass stars, however, end their life as red supergiants, not LBV-like objects. A dense CSM may nevertheless be present, but the wind velocity would probably be lower than we observe. The mass loss rate Groh et al. find for the $20 - 25 ~M_\odot$ models in the pre-SN phase is $(1-4)\times 10^{-4} ~\Msun ~\rm yr^{-1}$ and wind velocity $270 - 320 \rm ~km~s^{-1}$. The wind velocity is considerably higher than what we find for SN 2010jl. Scaling to a lower wind velocity, this mass loss rate corresponds to an electron density of $ \sim 2.3 \times 10^5 (\dot M / 10^{-4} ~\Msun ~\rm yr^{-1}) (u_{\rm w}/300 \rm ~km~s^{-1})^{-1} (r/10^{16} \rm cm)^{-2} \rm ~cm^{-3}$, where $r$ is the radius of the line emitting material. We have assumed a He/H ratio of 0.8 by mass \citep{Groh2013}. Depending on the mass loss rate and radius of the line emitting gas, this may be compatible with what we observe for the narrow lines from the CSM in SN 2010jl. However, there are large uncertainties in this estimate. First, as discussed in Sect. \ref{sec-fluxes} and \ref{sec_broad}, there is considerable uncertainty in the radius of the line emitting gas. Also, the mass loss rate of the models is highly uncertain since, as \cite{Groh2013} remark, these stars are close to the Eddington luminosity. An increased mass loss is therefore not unexpected, as discussed by \citet[][and references therein]{Grafner2012}.
The mass loss rate could also have been higher in the previous red supergiant phase, further boosting the density of the CSM at large radii. Rotational effects could also make the mass loss highly anisotropic. Overall, we find that given these uncertainties there is at least some rough agreement between the models and our observations. This scenario may therefore solve the evolutionary and rate problems, while at the same time at least qualitatively explain the observational LBV characteristics. A major issue is to explain the mechanism behind the outbursts needed to explain the dense inner shell. As a second alternative, based now on a binary model, \cite{Chevalier2012b} has proposed a merger scenario, where a compact companion in the form of either a neutron star or a black hole may merge with a normal companion star. The in-spiraling will deposit a large amount of energy into the envelope of the companion star, resulting in heavy mass loss. A dense CSM will therefore result, mainly concentrated to the orbital plane of the binary. This may have implications for the line profiles, but a more definite comparison is outside the scope of this paper. We also remark that an LBV-like eruption may also be the result of a merger or possibly only a tight binary encounter in a lower mass system. This would then be in better agreement with the relatively high frequency of Type IIn SNe. The high luminosity would, however, be difficult to understand. A third scenario is discussed by \cite{Quataert2012} based on convective motions connected to the carbon burning and later stages when the core luminosity is super Eddington. This turbulence in turn excites internal gravity waves which may convert a fraction of their energy into sound waves. These finally dissipate their energy and cause mass loss from the stellar envelope. The advantage with this model is that it directly connects the stellar explosion and strong mass loss of the progenitor.
\cite{Shiode2013} have developed this further and made predictions for the duration of the wave-driven mass loss, as well as the total mass lost. Most of the mass loss occurs in the Ne and O burning stages, which limits the duration to $\la 10$ years before explosion. Only for stars with He-core masses below $\sim 10 ~M_\odot$, or ZAMS masses below $\sim 20 ~M_\odot$, do they find substantial mass loss, with a total mass lost between $0.1 -1 ~M_\odot$. While the wind velocity we find, $\sim 100 \rm ~km~s^{-1}$, is close to the escape velocities of their models, the total mass lost in the wave-driven phase is substantially lower than what we find for SN 2010jl. With a duration of $\la 10$ years, the extent of the dense shell, $\la 3\times 10^{15}$ cm, is also on the low side. Although interesting in that this scenario directly connects the heavy mass loss phase with the explosion, many details connected to the conversion efficiency, effects of binary interaction etc., are uncertain and require numerical work along the lines of \cite{Meakin2006} to test this scenario. \cite{Smith2014} have recently discussed the consequences of instabilities from turbulent convection in the latest burning phase, which may trigger strong mass loss just before explosion. \section{Conclusions} \label{sec-conculsions} SN 2010jl represents the best observed Type IIn supernova to date, and in addition one of the brightest. In this paper we have combined our optical, NIR and UV observations with X-ray observations to get a full view of this SN in both time and wavelength. Our most important conclusions are: \begin{itemize} \item We have presented one of the most complete data sets of any Type IIn SN, covering the UV, optical and IR range. It also represents a nearly complete coverage in time from the explosion to $\sim 1100$ days.
\item We find a large number of narrow UV, optical and NIR lines from the CSM from both low and very high ionization stages, including coronal lines, most likely excited by the X-ray emission from the SN shock wave. The UV lines provide strong evidence for CNO processed gas in the CSM, with N/C$=25\pm15$ and N/O=$0.85\pm0.15$. The density of the CSM and distance to the narrow line emission are consistent with observations of LBVs in our Galaxy. \item The expansion velocity of the CSM makes red supergiants unlikely as progenitors, but is consistent with LBVs. \item The profiles of the broad lines are symmetrical in the co-moving frame up to $\sim 1100$ days, with a shape typical of electron scattering, but are shifted by an increasing velocity to $\sim 600-700 \rm ~km~s^{-1}$ along the line of sight. The profiles of the broad lines show no strong evidence of wavelength dependence, as would be expected if dust was responsible for the shift of the broad lines. Instead, the shift of the broad lines, which develops at late stages, is explained as a result of the bulk velocity of the gas, rather than as a result of dust in the ejecta. \item We find that the most likely explanation for the velocity shift is radiative acceleration by the flux from the SN. This is a natural consequence of the large radiated energy of this SN in combination with the gradual release of this energy. \item The optical depth needed to produce the electron scattering wings, the low and coherent bulk velocity inferred from the velocity shifts together with the comparatively low X-ray column densities and the high X-ray temperature provide strong evidence for an asymmetric shock wave, with both varying velocities and column density, consistent with what is inferred from polarization measurements at early epochs. A bipolar outflow from an LBV a few years before the explosion would be compatible with these observations, although we cannot exclude a one-sided asymmetry.
\item The NIR dust excess is likely to originate from an echo at a distance of $\sim 6\times 10^{17}$ cm in the CSM of the SN. The dust is heated by the radiation from the SN to nearly the evaporation temperature. The high dust temperature and large distance may require a small grain size. \item The expansion velocity of the CSM is $\sim 100 \rm ~km~s^{-1}$ and the mass loss rate has to be $\ga 0.1 \Msun ~\rm yr^{-1}$ to explain the bolometric light curve by circumstellar interaction. The total mass lost is $\ga 3 ~M_\odot$. These numbers are likely lower limits, depending strongly on the uncertain shock velocity and anisotropies in this. \item There are no indications of a nebular stage, or any processed material even at $\sim 850$ days. The last spectrum is dominated by a strong H$\alpha$ line. The core region is, however, likely to still be opaque as a result of the dense CSM. \item When compared to other Type IIn SNe we find strong similarities with other well observed objects, like SNe 1995N, 1998S, 2005ip and 2006jd in terms of early electron scattering line profiles, total radiated energy, CSM properties and X-ray emission. The main difference is the several orders of magnitude higher mass loss rate in SN 2010jl, which slows down the evolution to the nebular stage. \item The UV spectrum will add to the small set of nearby UV bright objects, which may dominate future high-redshift SN surveys. \end{itemize} \acknowledgments We are grateful to Eran Ofek and Avishay Gal-Yam for useful comments on the draft. This research was supported by the Swedish Research Council and National Space Board. The Oskar Klein Centre is funded by the Swedish Research Council. The CfA Supernova Program is supported by NSF grant AST-1211196 to the Harvard College Observatory and has also been supported by PHY-1125915 to the Kavli Institute of Theoretical Physics. F.B.
acknowledges support from FONDECYT through Postdoctoral grant 3120227 and from the Millennium Center for Supernova Science through grant P10-064-F (funded by ``Programa Bicentenario de Ciencia y Tecnolog\'ia de CONICYT'' and ``Programa Iniciativa Cient\'ifica Milenio de MIDEPLAN''). The research of RAC is supported by NASA grant NNX12AF90G. SB is partially supported by the PRIN-INAF 2011 with the project ``Transient Universe: from ESO Large to PESSTO''. Support for Program GO-12242 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. Based on observations made with the Nordic Optical Telescope, operated by the Nordic Optical Telescope Scientific Association at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias under Period P42, P43, P46 and P47 (P.I. Sollerman). The data presented here were obtained in part with ALFOSC, which is provided by the Instituto de Astrofisica de Andalucia (IAA) under a joint agreement with the University of Copenhagen and NOTSA. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile (Program 088.D-0195, P.I. Sollerman). {\it Facilities:} \facility{HST (COS,STIS)}, \facility{VLT (X-shooter)}, \facility{FLWO:1.5m (FAST)}, \facility{MMT}, \\ \facility{NOT(ALFOSC,NOTCAM)}, \facility{Magellan:Baade}
\section{Introduction} The Cd $5s^2\,\,^1\!S_0$ -\, $5s5p\,\,^3\!P_0^o$ transition has several desirable attributes for the development of a lattice clock. This clock has a more than an order of magnitude smaller blackbody radiation (BBR) shift (a Stark shift resulting from the thermal radiation of the atom's environment, which is generally at a temperature near 300 K) in comparison with Sr and Yb~\cite{YamSafGib19,OvsMarPal16,DzuDer19}. The size of the BBR shift is a property of the specific atomic transition used as a frequency standard, and the uncertainty in the BBR shift is known to be one of the limiting systematic uncertainties in the clock uncertainty budget~\cite{NicCamHut15,HunSanLip16}. Short of cryogenic cooling, it cannot be suppressed and needs to be quantified with high accuracy. Two isotopes, $^{111}$Cd and $^{113}$Cd, both with 12\% natural abundance, have a nuclear spin of 1/2, which precludes tensor light shifts from the lattice light, another advantageous feature. Cd has the narrow $5s^2\,\,^1\!S_0$ -\, $5s5p\,\,^3\!P_1^o$ intercombination transition, allowing Doppler cooling to 1.58 $\mu$K and simplifying the control of higher-order lattice light shifts~\cite{YamSafGib19}. The light for all of the transitions needed for the Cd clock, including the magic lattice, can be generated by direct, frequency-doubled, or frequency-quadrupled semiconductor lasers~\cite{YamSafGib19}. In 2019, the Cd clock magic wavelength was measured to be $419.88(14)$ nm \cite{YamSafGib19}, in excellent agreement with a theoretical calculation reported in the same work. At the magic wavelength, the upper and lower clock states experience the same light shift, up to the multipolar and higher-order effects considered in this work. The fractional BBR shift was calculated to be $2.83(8) \times 10^{-16}$ at 300 K in Ref.~\cite{YamSafGib19}, in agreement with Ref.~\cite{DzuDer19}.
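Two of the numbers above can be checked with one-line estimates. The Doppler limit uses $T_{\rm D} = \hbar\Gamma/(2k_B)$ with an assumed literature linewidth $\Gamma/2\pi \approx 66.6$ kHz for the intercombination line (this value is not given in the text), and the BBR scaling uses only the leading-order $T^4$ law, neglecting dynamic corrections:

```python
import math

HBAR = 1.054572e-34    # J s
KB = 1.380649e-23      # J/K

# Doppler limit T_D = hbar * Gamma / (2 kB); Gamma/2pi ~ 66.6 kHz is an
# assumed value for the 1S0-3P1 intercombination line, not from the text.
gamma = 2.0 * math.pi * 66.6e3          # rad/s
T_doppler = HBAR * gamma / (2.0 * KB)   # ~1.6e-6 K, cf. 1.58 uK above

# Leading-order T^4 scaling of the fractional BBR shift around 300 K.
def bbr_shift(T, s300=2.83e-16):
    return s300 * (T / 300.0) ** 4
```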
Recent progress opens a pathway to rapid progress in Cd clock development and calls for a detailed investigation of the clock systematic effects. In this work we calculated the properties needed to quantify higher-order light shifts: the magnetic dipole and electric quadrupole polarizabilities and the linear and circular hyperpolarizabilities of the $5s^2\,\,^1\!S_0$ and $5s5p\,\,^3\!P_0^o$ clock states at the magic wavelength, and estimated the uncertainties of these quantities. We also evaluated the second-order Zeeman clock transition frequency shift in the presence of a weak magnetic field. The paper is organized as follows. The general formalism and main formulas are presented in Section~\ref{Gen_for}. In Section~\ref{MetCalc}, we briefly describe the method of calculation. Section~\ref{results} is devoted to a discussion of the results obtained, and Section~\ref{concl} contains concluding remarks. \section{General formalism} \label{Gen_for} We consider the Cd atom in a state $|0\rangle$ (with the total angular momentum $J=0$) placed in a field of the lattice standing wave with the electric-field vector given by \begin{equation} {\boldsymbol{\mathcal{E}}} = 2 {\boldsymbol{\mathcal{E}}_0}\, \mathrm{cos}(kx)\, \mathrm{cos}(\omega t) . \label{field} \end{equation} Here $k=\omega/c$, $\omega$ is the lattice laser frequency, $c$ is the speed of light, and the factor 2 accounts for the superposition of the forward- and backward-traveling waves along the $x$-axis. The atom-lattice interaction leads to an optical lattice potential for the atom that, at $|kx| \ll 1$, can be approximated as~\cite{OvsMarPal16,PorSafSafKoz18} \begin{eqnarray} U(\omega) \approx &-&\alpha^{E1}(\omega)(1-k^2x^2)\,\mathcal{E}_0^2 \notag \\ &-&\{\alpha^{M1}(\omega) + \alpha^{E2}(\omega)\}k^2 x^2\,\mathcal{E}_0^2 \notag \\ &-&\beta(\omega)(1-2k^2x^2)\, \mathcal{E}_0^4 .
\label{DeltaE} \end{eqnarray} Here $\alpha^{E1}$, $\alpha^{M1}$, and $\alpha^{E2}$ are the electric dipole, magnetic dipole, and electric quadrupole polarizabilities, respectively, and $\beta$ is the hyperpolarizability defined below. The ac $2^K$-pole polarizability of the $|0\rangle$ state with the energy $E_0$ is expressed (we use atomic units $\hbar=m=|e|=1$) as~\cite{PorDerFor04} \begin{eqnarray} \alpha^{\lambda K}(\omega) &=& \frac{K+1}{K} \frac{2K+1}{[(2K+1)!!]^2} (\alpha \,\omega)^{2K-2} \notag \\ &\times& \sum_n \frac{(E_n-E_0) | \langle n||T_{\lambda K}||0 \rangle |^2}{(E_n-E_0)^2-\omega^2} , \label{Qlk} \end{eqnarray} where $\lambda$ stands for electric, $\lambda = E$, and magnetic, $\lambda =M$, multipoles and $\langle n||T_{\lambda K}||0 \rangle$ are the reduced matrix elements of the multipole operators, $T_{E1} \equiv D$, $T_{M1} \equiv \mu$, and $T_{E2} \equiv Q$. The expression for the hyperpolarizability of the $|0\rangle$ state depends on the polarization of the lattice wave. Below we consider the cases when the lattice wave is linearly or circularly polarized, and the 4th-order correction to the atomic energy is determined by the linear or circular hyperpolarizability, respectively. The expression for the linear hyperpolarizability $\beta_l(\omega)$ is given by~\cite{PorSafSafKoz18} \begin{equation} \beta_l (\omega) = \frac{1}{9}\,Y_{101}(\omega) + \frac{2}{45}\,Y_{121}(\omega) , \label{beta_l} \end{equation} with the quantities $Y_{101}\left( \omega \right)$ and $Y_{121}\left( \omega \right)$ determined as \begin{eqnarray*} \label{Y101} Y_{101}(\omega) &\equiv& \sum_q \mathcal{R}_{101} (q\omega,2q\omega,q\omega) \notag \\ &+& \sum_{q,q'} \left[ \mathcal{R}'_{101}(q\omega,0,q'\omega) - \mathcal{R}_1(q'\omega) \mathcal{R}_1(q\omega,q\omega) \right], \notag \\ Y_{121}(\omega) \! &\equiv& \! \sum_q \left[ \mathcal{R}_{121} (q\omega,2q\omega,q\omega) + \sum_{q'} \mathcal{R}_{121}(q\omega,0,q'\omega) \right], \end{eqnarray*} and $q,q'= \pm 1$.
The circular hyperpolarizability $\beta_c(\omega)$ can be written as \begin{equation} \beta_{c}=\frac{1}{9}\,X_{101}(\omega) + \frac{1}{18}\,X_{111}(\omega) + \frac{1}{15}\,X_{121}(\omega) , \label{beta_c} \end{equation} where \begin{eqnarray*} \label{X101} X_{101}(\omega) &\equiv& \sum_{q,q'} \left[ \mathcal{R}'_{101}(q\omega,0,q'\omega) - \mathcal{R}_1(q'\omega) \mathcal{R}_1(q\omega,q\omega) \right], \notag \\ X_{111}(\omega) &\equiv& \sum_{q,q'} (-1)^{(q+q')/2}\, \mathcal{R}_{111}(q\omega,0,q'\omega), \notag \\ X_{121}(\omega) \! &\equiv& \! \sum_q \left[ \mathcal{R}_{121} (q\omega,2q\omega,q\omega) + \frac{1}{6}\sum_{q'} \mathcal{R}_{121}(q\omega,0,q'\omega) \right] , \end{eqnarray*} and \begin{widetext} \begin{eqnarray} \label{R_J1} \mathcal{R}_{J_{m}J_{n}J_{k}}\left( \omega _{1},\omega _{2},\omega_{3}\right) &\equiv& \sum_{\gamma_m,\gamma_n,\gamma_k} \frac{\left\langle \gamma _{0}J_{0}\left\Vert d\right\Vert \gamma_{m}J_{m}\right\rangle \left\langle \gamma _{m}J_{m} \left\Vert d\right\Vert \gamma _{n}J_{n}\right\rangle \left\langle \gamma _{n}J_{n}\left\Vert d\right\Vert \gamma _{k}J_{k}\right\rangle \left\langle \gamma_{k}J_{k}\left\Vert d\right\Vert \gamma _{0}J_{0}\right\rangle } {\left(E_{m}-E_{0}-\omega_1\right) \left( E_{n}-E_{0}-\omega_2\right) \left(E_k-E_0-\omega_3\right) }, \\ \label{R_J3} \mathcal{R}_{J_m}(\omega) &\equiv& \sum_{\gamma_m} \frac{|\langle \gamma_0 J_0 ||d|| \gamma_m J_m\rangle|^2}{E_m-E_0-\omega}, \qquad \mathcal{R}_{J_k} (\omega,\omega) \equiv \sum_{\gamma_k} \frac{|\langle \gamma_0 J_0 ||d|| \gamma_k J_k \rangle|^2}{(E_k-E_0-\omega)^2} . \end{eqnarray} \end{widetext} The notation $\mathcal{R}^{\prime}_{101}$, i.e., the prime over $\mathcal{R}$, means that the term $|\gamma_n\, 0 \rangle =|\gamma_0\, 0 \rangle$ (where $\gamma_n$ includes all other quantum numbers except $J$) should be excluded from the summation over $\gamma_n$ in Eq.~(\ref{R_J1}).
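The structure of these sum-over-states expressions can be made concrete with a toy model. The sketch below evaluates Eq.~(\ref{Qlk}) for the $E1$ case ($K=1$, where the prefactor reduces to $2/3$ and the frequency factor to unity) with invented level spacings and matrix elements; it is illustrative only and uses no real Cd data:

```python
def alpha_E1(omega, levels):
    """E1 (K=1) polarizability of Eq. (Qlk):
    (2/3) * sum_n dE_n |<n||D||0>|^2 / (dE_n^2 - omega^2), all in a.u."""
    return (2.0 / 3.0) * sum(dE * d**2 / (dE**2 - omega**2)
                             for dE, d in levels)

# Hypothetical (E_n - E_0, <n||D||0>) pairs, for illustration only.
toy_levels = [(0.2, 1.5), (0.3, 0.5)]
alpha_static = alpha_E1(0.0, toy_levels)   # static limit, omega -> 0
```

Near a resonance ($\omega \to E_n - E_0$) the corresponding denominator becomes small and the polarizability grows sharply, which is the same mechanism that makes small energy denominators dominate the hyperpolarizability sums below.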
The properties of the lattice potential for the Cd atom in its ground and excited clock states are determined by \eref{DeltaE} and depend on the frequency. Below we analyze these properties at the experimentally determined magic wavelength $\lambda^* = 419.88(14)$ nm~\cite{YamSafGib19}. The magic frequency, $\omega^*$, corresponding to this wavelength is $\omega^* \approx 23816$ cm$^{-1} \approx 0.108515 \,\,\mathrm{a.u.}$. At the magic frequency the electric dipole polarizabilities of the clock $5s^2\, ^1\!S_0$ and $5s5p\, ^3\!P^o_0$ states are equal to each other, i.e., $\alpha^{E1}_{^1\!S_0}(\omega^*) = \alpha^{E1}_{^3\!P_0^o}(\omega^*)$. These polarizabilities were calculated in Ref.~\cite{YamSafGib19} to be $63.7(1.9)$ a.u. Using the formulas given above, we calculated the $M1$ and $E2$ polarizabilities and the linear and circular hyperpolarizabilities $\beta_{l,c}$ of the clock states at the magic frequency $\omega^*$, found the respective differential polarizabilities and hyperpolarizabilities, and determined the uncertainties of these values.
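The quoted conversion of the magic wavelength to a frequency is a unit exercise worth making explicit; the hartree-to-cm$^{-1}$ factor below is the standard CODATA value:

```python
# lambda* = 419.88 nm -> wavenumber in cm^-1 -> atomic units.
HARTREE_CM1 = 219474.6314   # 1 hartree in cm^-1 (CODATA)

lam_nm = 419.88
omega_cm1 = 1.0e7 / lam_nm          # ~23816 cm^-1
omega_au = omega_cm1 / HARTREE_CM1  # ~0.108515 a.u.
```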
\end{equation} Here, $H_{\mathrm{FC}}$ is the Hamiltonian in the frozen core approximation and $\Sigma$ is the energy-dependent correction, which takes into account virtual core excitations in the second order of perturbation theory (the CI+MBPT method) or in all orders of perturbation theory (the CI+all-order method). To accurately calculate the valence parts of the polarizabilities and hyperpolarizabilities, we solve the inhomogeneous equation using the Sternheimer~\cite{Ste50} or Dalgarno-Lewis~\cite{DalLew55} method, following the formalism developed in Ref.~\cite{KozPor99}. We use an effective (or ``dressed'') electric-dipole operator in our calculations that includes the random-phase approximation (RPA). To calculate such complicated quantities as $\mathcal{R}_{J_m J_n J_k}$ and accurately carry out the three summations over intermediate states, we solve the inhomogeneous equation twice. A detailed description of this approach is given in Ref.~\cite{PorSafSafKoz18}. \section{Results and discussion} \label{results} We carried out calculations of the $M1$ and $E2$ polarizabilities and the hyperpolarizabilities in the CI+MBPT and CI+all-order approximations. In both cases the theoretical energies were used. The CI+all-order calculations include higher-order terms in comparison with the CI+MBPT calculations and are more accurate. The difference between these two calculations gives us an estimate of the uncertainty of the results. \subsection{Linear and circular hyperpolarizabilities of the $^1\!S_0$ and $^3\!P_0^o$ clock states} In calculating the quantities given by Eqs.~(\ref{R_J1}) and (\ref{R_J3}), the main contribution comes from the valence electrons. The contribution of the core electrons is much smaller, and we included it only in the $\mathcal{R}_1(\omega)$ terms.
Indeed, as follows from Eq.~(\ref{R_J3}), the quantity $\mathcal{R}_1 (\omega,\omega)$ can be treated as the derivative of $\mathcal{R}_1(\omega)$ over $\omega$, i.e., \begin{equation*} \mathcal{R}_1 (\omega,\omega) = \frac{\partial \mathcal{R}_1 (\omega)}{\partial \omega} = \underset{\Delta \rightarrow 0}{\lim} \frac{\mathcal{R}_1(\omega +\Delta) - \mathcal{R}_1(\omega)}{\Delta}. \end{equation*} Since the core contribution to $\mathcal{R}_{1}(\omega )$ is rather insensitive to $\omega$ and $\Delta$ is small, the core contributions to $\mathcal{R}_{1}(\omega +\Delta )$ and $\mathcal{R}_{1}(\omega )$ are practically identical and cancel each other in the expression for $\mathcal{R}_{1}(\omega, \omega )$. Taking into account the uncertainty of our results for the $^{1}\!S_{0}$ and $^{3}\!P_{0}^{o}$ hyperpolarizabilities, we assume that the core contribution to the $\mathcal{R}_{1J_n1}(\omega _{1},\omega _{2},\omega_{3})$ terms is also negligible. This assumption is based on the calculation of the static hyperpolarizability for the Sr$^{2+}$ ground state, which was found to be 62.6 a.u.~\cite{YuSuoFen15}. This is negligibly small compared to the valence contribution to $\mathcal{R}_{1J_n1}(\omega _{1},\omega _{2},\omega_{3})$ in the case of the quite similar $5s^2\,^1\!S_0$ and $5s5p\,^3\!P_0^o$ clock states in Sr~\cite{PorSafSafKoz18}. The results of the calculation of the linear and circular hyperpolarizabilities of the $^1\!S_0$ and $^3\!P_0^o$ clock states are presented in Table~\ref{Tab:beta}. \begin{table}[tbp] \caption{Contributions to the linear and circular hyperpolarizabilities $\beta_{l,c} (5s^2\,^1\!S_0)$ and $\beta_{l,c}(5s5p\,^3\!P_0^o)$ (in a.u.) calculated in the CI+all-order (labeled as ``CI+All'') and CI+MBPT (labeled as ``CI+PT'') approximations at the magic frequency $\omega^* = 0.108515$ a.u. The ``Total'' values are obtained according to Eqs.~(\ref{beta_l}) and (\ref{beta_c}).
$\Delta \beta_{l,c} \equiv \beta_{l,c}(^3\!P^o_0) - \beta_{l,c}(^1\!S_0)$ is the difference of the ``Total'' $^3\!P^o_0$ and $^1\!S_0$ values. Numbers in brackets represent powers of 10. The uncertainties are given in parentheses.} \label{Tab:beta} \begin{ruledtabular} \begin{tabular}{llrrrr} && \multicolumn{2}{c}{$5s^2\,^1\!S_0$} & \multicolumn{2}{c}{$5s5p\,^3\!P_0^o$} \\ &Contrib. &\multicolumn{1}{c}{CI+All} &\multicolumn{1}{c}{CI+PT} &\multicolumn{1}{c}{CI+All} &\multicolumn{1}{c}{CI+PT} \\ \hline \\ [-0.3pc] $\beta_l$ & $\frac{1}{9}Y_{101}(\omega)$ & 3.61[4] & 2.71[4] & -5.30[5] & -5.37[5] \\[0.4pc] & $\frac{2}{45}Y_{121}(\omega)$ & 5.64[4] & 5.08[4] & 4.37[5] & 4.81[5] \\[0.4pc] & Total & 9.24[4] & 7.80[4] & -9.23[4] & -5.61[4] \\[0.3pc] $\Delta \beta_l$ && \multicolumn{1}{c}{-1.85[5]} & \multicolumn{1}{c}{-1.34[5]} & \\ & Recommended & \multicolumn{2}{c}{$-1.85(50) \times 10^5$} && \\[0.3pc] & Ref.~\cite{OvsMarPal16} &\multicolumn{2}{c}{$-10.2 \times 10^5$} & & \\[0.3pc] \hline \\ [-0.6pc] $\beta_c$ & $\frac{1}{9}X_{101}(\omega)$ &-1.98[4] &-1.88[4] & -6.03[5] & -5.95[5] \\[0.4pc] & $\frac{1}{18}X_{111}(\omega)$ & 41 & 34 & 7.21[6] & 6.61[6] \\[0.4pc] & $\frac{1}{15}X_{121}(\omega)$ & 6.21[4] & 5.53[4] & -1.45[6] & -1.11[6] \\[0.4pc] & Total & 4.23[4] & 3.66[4] & 5.15[6] & 4.90[6] \\[0.3pc] $\Delta \beta_c$ && 5.11[6] & 4.86[6] & & \\[0.3pc] & Recommended &\multicolumn{2}{c}{$5.11(25) \times 10^6$} & & \\[0.3pc] & Ref.~\cite{OvsMarPal16} &\multicolumn{2}{c}{$3.65 \times 10^6$} & & \end{tabular} \end{ruledtabular} \end{table} Our recommended value of the differential linear hyperpolarizability, $\Delta \beta_l (\omega^*)= -1.85(50) \times 10^{5} \,\, {\rm a.u.}$, is two orders of magnitude smaller (in absolute value) than the analogous differential hyperpolarizability for Sr, $\Delta \beta_l = -1.5(4)\times 10^{7}$ a.u.~\cite{PorSafSafKoz18}.
In the case of Cd, the absolute values of the contributing terms are generally smaller than in Sr, and there are significant cancellations between them. The circular hyperpolarizability of the $^3\!P_0^o$ state is two orders of magnitude larger in absolute value than the circular hyperpolarizability of the $^1\!S_0$ state and the linear hyperpolarizability of the $^3\!P_0^o$ state. This is explained as follows: the main contribution to $\beta_c(^3\!P_0^o)$ at the magic frequency comes from the term \begin{widetext} \begin{equation*} \mathcal{R}_{111}(\omega^*,0,\omega^*) \equiv \sum_{\gamma_m,\gamma_n,\gamma_k} \frac{\langle ^3\!P_0^o ||d|| \gamma_m J_m=1 \rangle \langle \gamma_m J_m=1 ||d|| \gamma_n J_n=1 \rangle \langle \gamma_n J_n=1 ||d|| \gamma_k J_k=1 \rangle \langle \gamma_k J_k=1 ||d|| ^3\!P_0^o \rangle} {(E_m - E_{^3\!P_0^o} -\omega^*) (E_n - E_{^3\!P_0^o}) (E_k - E_{^3\!P_0^o} - \omega^*)} . \end{equation*} \end{widetext} The sum over $\gamma_n$ contains the intermediate state $5s5p\,\,^3\!P_1^o$, separated from $^3\!P_0^o$ by the fine-structure interval. In this case the energy denominator $E_{^3\!P_1^o} - E_{^3\!P_0^o} \approx 542\,\,{\rm cm}^{-1}$ is small and, correspondingly, the contribution of this term is large, leading to a much larger hyperpolarizability for circular polarization. We compare our results with those obtained in Ref.~\cite{OvsMarPal16} in \tref{Tab:beta}. There is reasonable agreement for the differential circular hyperpolarizability, while our differential linear hyperpolarizability is 5 times smaller in absolute value than that found in Ref.~\cite{OvsMarPal16}. \begin{table}[t] \caption{The dynamic $M1$ and $E2$ polarizabilities (in a.u.) of the $5s^2\,^1\!S_0$ and $5s5p\,^3\!P_0^o$ states at the magic frequency, calculated in the CI+MBPT (labeled as ``CI+MBPT'') and CI+all-order (labeled as ``CI+All'') approximations. The recommended value of $\Delta \alpha^{QM}$ is given in the line ``Recom. $\Delta \alpha^{QM}$''.
The uncertainties are given in parentheses.} \label{Tab:alpha} \begin{ruledtabular} \begin{tabular}{ccc} Polariz. & CI+MBPT & CI+All \\ \hline \\ [-0.5pc] $\alpha^{M1}(^1\!S_0)$ & 1.5$\times 10^{-9}$ & 1.6$\times 10^{-9}$ \\[0.3pc] $\alpha^{M1}(^3\!P^o_0)$ & -4.0$\times 10^{-6}$ & -3.9$\times 10^{-6}$ \\[0.3pc] $\Delta \alpha^{M1}$ & -4.0$\times 10^{-6}$ & -3.9$\times 10^{-6}$ \\[0.5pc] $\alpha^{E2}(^1\!S_0)$ & 2.29$\times 10^{-5}$ & 2.43(14)$\times 10^{-5}$ \\[0.3pc] $\alpha^{E2}(^3\!P^o_0)$ & 8.97$\times 10^{-5}$ & 8.88(8)$\times 10^{-5}$ \\[0.3pc] $\Delta \alpha^{E2}$ & 6.68$\times 10^{-5}$ & 6.45(23)$\times 10^{-5}$ \\[0.5pc] $\Delta \alpha^{QM}$ & 6.28$\times 10^{-5}$ & 6.05(23)$\times 10^{-5}$ \\[0.7pc] Recom. $\Delta \alpha^{QM}$ & & 6.05(23)$\times 10^{-5}$ \\[0.5pc] Ref.~\cite{OvsMarPal16} & & 3.13 $\times 10^{-5}$ \end{tabular} \end{ruledtabular} \end{table} \subsection{$M1$ and $E2$ polarizabilities at the magic frequency} \label{M1andE2} To accurately calculate the valence part of the $E2$ polarizabilities of the clock states at the magic frequency, we solved the inhomogeneous equation with the electric quadrupole operator $Q$ on the right-hand side. As in the case of the hyperpolarizability, we calculated these quantities using both the CI+all-order and CI+MBPT methods, including the RPA corrections to the operator $Q$. The core contributions were calculated in the RPA. For the $M1$ polarizabilities, only a few low-lying intermediate states give dominant contributions, and it is sufficient to calculate their sum. We estimate the uncertainties of the results as the difference between the CI+all-order and CI+MBPT values. The final values of the polarizabilities and their uncertainties are listed in Table~\ref{Tab:alpha}.
We also determined the recommended value of $\Delta \alpha^{QM} \equiv \Delta \alpha^{E2} + \Delta \alpha^{M1}$, where \begin{eqnarray} \Delta \alpha^{M1} &\equiv& \alpha^{M1}(^3\!P^o_0) - \alpha^{M1}(^1\!S_0), \notag \\ \Delta \alpha^{E2} &\equiv& \alpha^{E2}(^3\!P^o_0) - \alpha^{E2}(^1\!S_0) . \label{delEM} \end{eqnarray} To determine the uncertainty of $\Delta \alpha^{QM}$ we note that the $\alpha^{M1}(^1\!S_0)$ polarizability is very small and we can neglect it. The $\alpha^{M1}(^3\!P^o_0)$ polarizability is more than three orders of magnitude larger in absolute value than $\alpha^{M1}(^1\!S_0)$, but still an order of magnitude smaller than $\Delta \alpha^{E2}$. Therefore, the uncertainty of $\Delta \alpha^{QM}$ is mostly determined by the uncertainty in $\Delta \alpha^{E2}$ and we estimate it to be 4\%. Comparing our recommended value for $\Delta \alpha^{QM}$ with the result obtained in Ref.~\cite{OvsMarPal16}, we see that there is fair agreement between them. \subsection{Second order Zeeman shift} In this section we consider a systematic effect due to the second-order Zeeman shift, which both clock states experience in the presence of a weak external magnetic field. If an atom is placed in such a magnetic field $\mathbf{B}$, the interaction of the atomic magnetic moment $\bm \mu$ with $\mathbf{B}$ is described by the Hamiltonian \begin{equation} H = - {\boldsymbol \mu} \cdot \mathbf{B}. \label{eq:magnetic_H} \end{equation} The atomic magnetic moment $\bm \mu$ is mostly determined by the electronic magnetic moment and can be written as \begin{equation} \bm \mu = -\mu_0 ({\bf J} + {\bf S}), \end{equation} where ${\bf J}$ and ${\bf S}$ are the total and spin angular momenta of the atomic state and $\mu_0$ is the Bohr magneton defined as $\mu_0 = |e| \hbar/(2mc)$.
Directing the external magnetic field $\mathbf{B}$ along the $z$-axis (${\bf B} = B_z \equiv B$), we calculate the second-order Zeeman shift, $\Delta E$, (in the absence of hyperfine interaction) as \begin{equation} \Delta E = -\frac{1}{2} \alpha^{\rm M1} B^2, \label{DelE} \end{equation} where $\alpha^{\rm M1}$ is the magnetic-dipole polarizability. For a state $|J=0\rangle$ it reduces to the scalar polarizability, given by \begin{equation} \alpha^{\rm M1} = \frac{2}{3} \sum_n \frac{|\langle n || \mu || J=0 \rangle|^2}{E_n-E_0}. \label{alphaM1} \end{equation} To estimate the second-order Zeeman shift for the clock transition $$\Delta \nu \equiv \frac{\Delta E(^3\!P^o_0) - \Delta E(^1\!S_0)}{h}$$ we note that the $\alpha^{M1}(^1\!S_0)$ polarizability is negligibly small compared to $\alpha^{M1}(^3\!P^o_0)$, so we can write $\Delta \nu \approx \Delta E(^3\!P^o_0)/h$. For an estimate of $\alpha^{M1}(^3\!P^o_0)$ we take into account that the main contribution to this polarizability comes from the intermediate state $5s5p\,\,^3\!P_1^o$. Then, from \eref{alphaM1} we obtain \begin{equation} \alpha^{\rm M1}(^3\!P^o_0) \approx \frac{2}{3} \frac{\langle ^3\!P^o_1 || \mu || ^3\!P^o_0 \rangle^2}{E_{^3\!P^o_1}-E_{^3\!P^o_0}} . \end{equation} Using for an estimate \begin{equation} |\langle ^3\!P^o_1 || \mu || ^3\!P^o_0 \rangle| \approx \sqrt{2} \mu_0 \end{equation} and substituting it into \eref{DelE} we find \begin{equation} \Delta E (^3\!P^o_0) \approx -\frac{2}{3} \frac{\mu_0^2}{E_{^3\!P^o_1}-E_{^3\!P^o_0}}\,B^2 \label{DE3P0} \end{equation} in agreement with the result obtained in Ref.~\cite{BoyZelLud07}. Using the experimental value of the energy difference $E_{^3\!P^o_1}-E_{^3\!P^o_0} \approx 542\,\, {\rm cm}^{-1}$, we arrive at $$\Delta \nu \approx -80\, B^2,$$ where $\Delta \nu$ is in mHz and the magnetic field $B$ is in G.
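As a quick numerical sanity check of the quoted coefficient, the estimate above can be evaluated directly (a sketch; the CODATA SI constants below are not taken from the text):

```python
# Numerical check of the quoted -80 mHz/G^2 coefficient, using
# Delta_nu ≈ Delta E(3P0)/h = -(2/3) mu_0^2 B^2 / (h * Delta E_fs).
MU_B = 9.2740100783e-24    # Bohr magneton, J/T (CODATA)
H = 6.62607015e-34         # Planck constant, J*s
HC = 1.986445857e-23       # h*c in J*cm, converts cm^-1 to J

delta_E_fs = 542.0 * HC    # 3P1 - 3P0 fine-structure interval, J

def zeeman_shift_mHz(B_gauss):
    """Second-order Zeeman shift of the clock transition, in mHz."""
    B_tesla = B_gauss * 1e-4
    dE = -(2.0 / 3.0) * MU_B**2 * B_tesla**2 / delta_E_fs   # joules
    return dE / H * 1e3

print(zeeman_shift_mHz(1.0))   # ≈ -80 mHz at B = 1 G
```

The shift scales quadratically with $B$, as \eref{DelE} requires.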
\section{Conclusion} \label{concl} We carried out calculations of the magnetic dipole and electric quadrupole polarizabilities as well as linear and circular hyperpolarizabilities of the clock $5s^2\,\,^1\!S_0$ and $5s5p\,\,^3\!P_0^o$ states at the magic wavelength and compared them with other available data. We also evaluated the second-order Zeeman shift for the clock transition frequency. These values are required for an assessment of the higher-order corrections to the light shift of the $5s^2\,\,^1\!S_0$ -\, $5s5p\,\,^3\!P_0^o$ clock transition. We have demonstrated that the linear differential hyperpolarizability for the clock transition for Cd is two orders of magnitude smaller than for Sr and Yb. We also found the circular hyperpolarizability to be much larger than the linear hyperpolarizability and explained the source of this difference. Knowledge of the multipolar polarizabilities and hyperpolarizabilities at different polarizations of the lattice wave is needed for further Cd clock development and for the selection of lattice configurations that minimize the higher-order light shifts. \section{Acknowledgements} We thank Kurt Gibble for helpful discussions. This work was supported by the Office of Naval Research under Grant No. N00014-17-1-2252. S.G.P. acknowledges support by the Russian Science Foundation under Grant No.~19-12-00157.
\section{Introduction} The instantaneous reproduction number $R_t$ is an effective empirical measurement of the velocity with which an epidemic is spreading through a population. A value of $R_t$ greater than 1 denotes an epidemic which is growing exponentially at that time, while values smaller than 1 are witnessed in declining phases of the epidemic. Because of its immediate interpretation and its independence from detailed modelling assumptions, estimates of the value of $R_t$ are of immense importance to public health experts and policy makers, and are normally an essential consideration in determining measures to fight the spread of an epidemic. $R_t$ is defined as the expected number of secondary infections at time $t$ from each infected case, or equivalently the ratio of the number of new infected cases at time $t$ to the average infectiousness of individuals at that time (see Section \ref{maths} for a more precise definition). As such, it returns a local (in time), model-free exponential approximation to the epidemic dynamics, providing precious information to population health scientists and policy makers. While alternative approximations which do not assume exponential dynamics exist \cite{chowell2016characterizing}, in practice most epidemiological studies utilise the exponential approximation approach, as witnessed in the manifold studies of the dynamics of the recent covid19 pandemic, e.g. \cite{flaxman2020report,kwok2020herd}. Because the ascertainment of dates of infection is in general difficult if not impossible, in practice $R_t$ is estimated on the basis of new cases whose symptom onset happens at time $t$, as suggested in several publications, see e.g. \cite{cori2013new}. Therefore, estimates of $R_t$ describe the dynamics of the epidemic in the subset of the infected population which develops sufficiently severe symptoms as to be detected as infected.
Such a restriction is often considered a positive feature, as it avoids the uncertainty linked to the detection of asymptomatic and paucisymptomatic cases, which is heavily confounded by testing strategies that may be highly time-varying. Mathematically, computing $R_t$ from symptomatic cases only is fully justified under the assumption that new symptomatic cases constitute a constant fraction $\alpha\le 1$ of the total (unknown) number of new cases in a day. Such an assumption guarantees that the overall dynamics of the epidemic among the symptomatic cases are identical to the true epidemic, just rescaled by a factor $\alpha$. However, empirical evidence from the covid19 pandemic in western Europe suggests that this assumption is not always justified. It is well known that infection by the new coronavirus results in a large fraction of asymptomatic cases; however, this fraction is heavily dependent on a number of additional covariates. In particular, it is well known that younger people are far less likely to develop symptoms than older people \cite{davies2020age}; therefore, if the progression of the disease differs between age groups, it is plausible that the ratio $\alpha$ of symptomatic cases to total cases will not remain constant, but will instead follow the spread of the epidemic across the age groups. Such a situation was widely observed during the autumnal second wave of the infection: the virus spread rapidly among younger people in holiday resorts in the summer and gradually started spreading to older people during the autumn, with a gradual increase in the average age of detected cases. In this brief paper, I show that estimating $R_t$ from symptomatic cases only can lead to mistaken conclusions when the probability of developing symptoms depends on a covariate whose distribution among the infected is not constant in time.
I also show how the problem can be fixed by introducing latent variables, and how an extremely simple correction can provide very accurate maximum likelihood estimates of the true $R_t$ when the number of cases is large and the dependence of the probability of developing symptoms on the covariate is known. Numerical examples on a simple yet informative model confirm the potential relevance of these considerations. \section{Mathematical formulation of the problem and proposed solutions}\label{maths} \subsection{Definitions and problem statement} The instantaneous reproduction number $R_t$ is defined mathematically by the equation \begin{equation} I_t=R_t\sum_{\tau=1}^tp(\tau)I_{t-\tau}\label{Rtdef} \end{equation} where $I_t$ is the number of newly infected individuals at time $t$, and $p(\tau)$ is the distribution over generation times, effectively accounting for incubation times. In practice, the distribution over generation times is assumed known from epidemiological studies; for example, the Italian Istituto Superiore di Sanit\`a (ISS) uses estimates of the generation time distribution obtained from the early Covid19 outbreak in Lombardy \cite{cereda2020early} to compute $R_t$ estimates in its weekly monitoring. Equation \eqref{Rtdef} defines an instantaneous rate of change of the infected population independently of the stochastic generative process underpinning the epidemic. In practice, one simply combines equation \eqref{Rtdef} with a Poisson or Gaussian noise distribution on the output $I_t$ to provide straightforward Bayesian or maximum likelihood estimates of the parameter of interest $R_t$. Equation \eqref{Rtdef} is linear in the number of new daily infections $I_t$, therefore it still holds when the vector of new infections is rescaled by a constant. Therefore, under the assumption that symptomatic cases are a constant fraction of new cases, estimates of $R_t$ obtained using only symptomatic cases will be identical to estimates obtained using all new infections.
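Under Poisson noise, the resulting estimator is simply the ratio of today's cases to the total infectiousness. A minimal sketch (the discretisation convention $p[0]=p(\tau=1)$ is an implementation choice, not taken from the text):

```python
import numpy as np

def estimate_Rt(I, p):
    """MLE of R_t from the renewal equation
    I_t = R_t * sum_{tau=1}^{t} p(tau) I_{t-tau},
    assuming Poisson noise on the daily counts.

    I : array of daily new cases I_0 ... I_{T-1}
    p : discretised generation-time distribution, p[0] = p(tau=1)"""
    I = np.asarray(I, dtype=float)
    p = np.asarray(p, dtype=float)
    Rt = np.full(len(I), np.nan)
    for t in range(1, len(I)):
        taus = np.arange(1, min(t, len(p)) + 1)
        infectiousness = np.sum(p[taus - 1] * I[t - taus])
        if infectiousness > 0:
            Rt[t] = I[t] / infectiousness
    return Rt

# sanity check: with a one-day generation time, cases growing by a
# constant factor R each day recover R_t = R exactly
p = np.array([1.0])
I = 100 * 1.4 ** np.arange(10)
print(estimate_Rt(I, p)[1:])   # ≈ [1.4, 1.4, ...]
```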
Such an assumption will however fail if the fraction of symptomatic cases depends on time. The Covid19 pandemic in Western Europe, during its second wave in the autumn of 2020, has shown a gradual increase in the median age of cases. As an example, the Italian ISS reported a median age of Covid19 positives of 29 in the period 17-23 August 2020, which increased to 47 in the period from mid October to mid November 2020\footnote{Weekly data on median age of cases appear to be unavailable after September. Data from ISS reports \url{https://www.epicentro.iss.it/coronavirus/sars-cov-2-dashboard}.}. The fraction of Covid19 cases presenting clinically relevant symptoms is estimated to vary from less than 20\% in young individuals to over 70\% in elderly people \cite{davies2020age}. Therefore, the symptomatic rate must have changed in the period August-November, invalidating the assumptions underlying the procedure to estimate $R_t$. In particular, part of the increase in symptomatic cases will be the result of an increased rate of symptomatic cases in older individuals, leading therefore to an inflation in the estimates of $R_t$. \subsection{Latent variable model} A natural solution to the problem is to retain the original definition of $R_t$ in terms of infected numbers, and to introduce observable variables $S_t(l)$ denoting the number of symptomatic cases at time $t$ with covariate value equal to $l$\footnote{This treatment is equally suited to handling discrete or continuous covariates, however for practical convenience it might be easier to group continuous covariates such as age into discrete classes.}. The observables are obtained from the latent number of infected cases with covariate $l$, $I_t(l)$, through a binomial observation model\begin{equation} S_t(l)\sim\mathrm{Binom}\left(I_t(l),\pi_l\right) \end{equation} where $\pi_l$ is the symptomatic rate for infected cases in group $l$.
Assuming this rate to be known (for example from epidemiological studies such as \cite{davies2020age}), then the variables $I_t(l)$ are all independent {\it a posteriori}\footnote{This is because the renewal equations \eqref{Rtdef} and \eqref{RtAges} are deterministic conditioned on $I_t$ and $I_t(l)$ respectively.} and, given a suitable prior, they can be estimated independently via e.g. the Metropolis-Hastings (M-H) algorithm or any other Bayesian sampler\footnote{I am not aware of a conjugate prior distribution over the number of trials of a binomial likelihood where the success rate is known.}. To obtain a posterior distribution over the parameter of interest $R_t$, one simply needs to rewrite equation \eqref{Rtdef} by summing over the covariate values \begin{equation} \sum_l I_t(l)=R_t\sum_{\tau,l}p(\tau)I_{t-\tau}(l)\label{RtAges} \end{equation} and plug samples of $I_t(l)$ into equation \eqref{RtAges}\footnote{We assume that generation times are independent of the covariate $l$.}. If the symptomatic rate $\pi_l$ is not known, then it can also be estimated within a Bayesian framework. An efficient solution could be to assign each $\pi_l$ independent Beta priors and run a block Metropolis-within-Gibbs scheme: conditioned on observables and $\pi_l$ values, the $I_t(l)$ can be sampled in parallel via independent M-H samplers. Given sampled values of $I_t(l)$, the posterior on each $\pi_l$ value can be obtained analytically and sample values of $\pi_l$ could be drawn in a straightforward way. It is however probable that informative priors on $\pi_l$ would be needed for the estimates to have an acceptable level of uncertainty.
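The M-H step for a single latent count can be sketched as follows (an illustration only: the flat prior on $I$ and the proposal width are assumptions, not choices made in the text):

```python
import numpy as np
from math import lgamma, log

def log_binom(s, n, p):
    """log P(S = s) for S ~ Binom(n, p)."""
    if n < s:
        return float("-inf")
    return (lgamma(n + 1) - lgamma(s + 1) - lgamma(n - s + 1)
            + s * log(p) + (n - s) * log(1 - p))

def sample_latent_I(s, pi, n_samples=5000, seed=0):
    """Random-walk Metropolis sampler for a single latent infection
    count I given an observed symptomatic count s ~ Binom(I, pi).
    A flat (improper, but integrable) prior on I is assumed."""
    rng = np.random.default_rng(seed)
    I = max(int(round(s / pi)), s)            # start near the MLE s / pi
    samples = []
    for _ in range(n_samples):
        prop = I + int(rng.integers(-5, 6))   # symmetric random walk
        if prop >= s:                         # support: I >= s
            log_a = log_binom(s, prop, pi) - log_binom(s, I, pi)
            if log(rng.random()) < log_a:
                I = prop
        samples.append(I)
    return np.array(samples)

samples = sample_latent_I(s=300, pi=0.3)
print(samples[1000:].mean())   # posterior mean close to s / pi = 1000
```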
\subsection{Gaussian approximation and analytical corrections} Assuming that the numbers of infected individuals for each group $I_t(l)$ are sufficiently large, and that the symptomatic rates $\pi_l$ are also not too close to zero or one, it is then possible to approximate the binomial distribution with a Gaussian conditional on $I_t(l)$, so that \begin{equation} p\left(S_t(l)\vert I_t(l),\pi_l\right)\simeq\mathcal{N}\left(\pi_l I_t(l),\pi_l(1-\pi_l) I_t(l)\right).\label{GaussApp} \end{equation} Introducing $\delta_t(l)=I_t(l)-\frac{S_t(l)}{\pi_l}$, we can rewrite the exponent in \eqref{GaussApp} as \[ -\frac{\left(S_t(l)-\pi_l I_t(l)\right)^2}{\pi_l(1-\pi_l) I_t(l)}=-\frac{\pi_l}{1-\pi_l}\frac{\delta_t(l)^2}{\frac{S_t(l)}{\pi_l}\left(1+\frac{\pi_l\delta_t(l)}{S_t(l)}\right)} \] from which it is clear that the maximum likelihood value for $I_t(l)$ is obtained for $\delta_t(l)=0$, i.e. when $I_t(l)=\frac{S_t(l)}{\pi_l}$. Plugging this estimator into the equation for $R_t$ \eqref{RtAges} yields the following estimator\begin{equation} R_t^{MLE}=\frac{\sum_lS_t(l)\pi_l^{-1}}{\sum_{\tau=1}^t\sum_lS_{t-\tau}(l)\pi_l^{-1}p(\tau)}.\label{Rt_MLE} \end{equation} \section{Numerical illustration} To illustrate the impact of violating the assumption of a constant probability of symptoms onset, we analyse a simple epidemic model which simulates the spread of a disease among two equal-sized populations with weak interactions. The two populations have an equal size of 200,000 individuals, and differ only in the probability that an infected individual will develop symptoms: infected individuals in one population, Y, will develop symptoms with probability 0.3, while individuals in the other, O, will develop symptoms with probability 0.8. These values were chosen to mimic the values reported for COVID19 in young and old individuals \cite{davies2020age}. Infection dynamics are almost identical in the two sub-populations, according to a stochastic SI model with $R_0=1.4$.
To simplify further, we assumed that infected individuals remain infectious for only 1 time step, i.e. there is no incubation period, and individuals can only infect others on the day after they were infected. Individuals in the Y population can also, with low probability, infect individuals in the O population. The resulting epidemic dynamics over 100 simulation steps are shown in Figure \ref{fig:sim_model} top left, resulting in two nearly identical waves of infections with a delay of approximately 20 time units. The observed dynamics of symptomatic cases are however very different, with a first low peak of symptomatic Y cases and a later, much larger peak of O cases (Fig.~\ref{fig:sim_model} top right). The probability of showing symptoms transitions sharply at around time 30 from the Y probability (0.3) to the O probability (0.8), reflecting the change in prevalence of the infection among the two populations (Fig.~\ref{fig:sim_model} bottom left). Correspondingly, around time 30 we see a significant deviation of the $R_t$ estimate computed on symptomatics only (Fig.~\ref{fig:sim_model} bottom right, blue line) from the ground-truth estimate computed on all cases (red line). The Gaussian MLE correction proposed in equation \eqref{Rt_MLE} is in almost perfect agreement with the true $R_t$ (cyan line). \begin{figure} \centering \includegraphics[width=0.45 \textwidth]{truCurves_simp_mod.png} \includegraphics[width=0.45 \textwidth]{sympCurves_simp_mod.png}\\ \includegraphics[width=0.45 \textwidth]{probSympt_simp_mod.png} \includegraphics[width=0.45 \textwidth]{R_tEst_simp_mod.png} \caption{Example results in a simple SI epidemic model with two populations with different probabilities of symptoms onset (young, $p(s_y\vert I_y)=0.3$, and old $p(s_o\vert I_o)=0.8$): top left, new daily infections for young (blue) and old (red) populations.
An early wave of infection in the young population slowly trickles to the old population, which develops a nearly identical wave approximately 20 days later. Top right: observed new daily symptomatic cases, showing an apparently much larger infection among the older. Bottom left, fraction of new symptomatic cases over total new cases, showing a clear rapid increment when the infection among the old population starts to dominate. Bottom right, estimated $R_t$ using symptomatic cases (blue), total cases (red) and corrected maximum likelihood estimates (cyan), in the time interval when the fraction of symptomatic cases changes. Standard estimates based on symptomatic cases only overestimate $R_t$ by approximately 10\%, while the corrected maximum likelihood estimate is nearly identical to the true $R_t$ value consistently throughout the period.} \label{fig:sim_model} \end{figure} While the analysis of this simple model reveals a marked deviation of the estimated $R_t$ from the true $R_t$, the effect appears relatively modest, at approximately 10\% relative error. One possible explanation for this is that the simple model does not involve incubation times, so that the rate of change of the fraction of symptomatics is slow relative to the fast contagion time. I therefore explored a different scenario where the generation time distribution is zero everywhere except for 0.5 at days -3 and -4 (i.e., there is an incubation time of 3 days and infectiousness lasts two days). I set the $R_0$ to 2.4 to yield similar epidemic dynamics to the simple case in Figure \ref{fig:sim_model}. The deviation of the estimates from symptomatic cases from the true values appears more marked, presumably because the nonzero incubation time leads to a sharper apparent increment in symptomatic cases due to the shift in population. Once again, the simple correction from the Gaussian approximation seems to rescue the problem to a large extent.
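A deterministic toy version of this experiment, together with a sketch of the corrected estimator of equation \eqref{Rt_MLE}, illustrates the inflation and its fix (all group shares, sizes and rates below are illustrative, not the simulation settings used above):

```python
import numpy as np

def Rt_corrected(S, pi, p):
    """Corrected MLE of R_t from symptomatic counts (the estimator
    labelled Rt_MLE in the text): rescale each group's counts by
    1 / pi_l, then apply the usual renewal-equation ratio.

    S  : (T, L) symptomatic cases per day and covariate group
    pi : (L,) symptomatic rates pi_l, assumed known
    p  : discretised generation-time distribution, p[0] = p(tau=1)"""
    n = (np.asarray(S, float) / np.asarray(pi, float)).sum(axis=1)
    Rt = np.full(len(n), np.nan)
    for t in range(1, len(n)):
        taus = np.arange(1, min(t, len(p)) + 1)
        lam = np.sum(np.asarray(p, float)[taus - 1] * n[t - taus])
        if lam > 0:
            Rt[t] = n[t] / lam
    return Rt

# true R_t = 1.2 throughout, but the epidemic shifts from a
# low-symptomatic-rate group to a high-rate one, which inflates the
# naive estimator computed on raw symptomatic totals
p = np.array([1.0])                       # one-day generation time
T = 8
total = 1000.0 * 1.2 ** np.arange(T)      # true daily infections
share_o = np.linspace(0.1, 0.9, T)        # fraction in the "old" group
I = np.stack([(1 - share_o) * total, share_o * total], axis=1)
pi = np.array([0.3, 0.8])                 # symptomatic rates (Y, O)
S = I * pi                                # expected symptomatic counts
naive = S.sum(1)[1:] / S.sum(1)[:-1]      # ignores the shifting mixture
print(naive.max())                        # clearly above 1.2
print(Rt_corrected(S, pi, p)[1:])         # ≈ 1.2 throughout
```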
Quantifying the errors in estimates across 100 independent simulations, we see that for both the simple model with no incubation and the more complex model the Gaussian approximation yields a very significant improvement in estimates of the true $R_t$, with error reduction ranging from a factor of 7 in the simple model to over an order of magnitude for the more complex model. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{diffRt_simple.png} \includegraphics[width=0.45\textwidth]{diffRt_incubation.png} \caption{Errors in estimates of $R_t$ over 100 independent simulations: {\it left}, boxplot of differences in estimates from symptomatics only (left box) and corrected by Gaussian approximation (right box) in the model without incubation time; {\it right}, same as the left panel but for the model with incubation time.} \label{fig:my_label} \end{figure} \section{Discussion} In this short paper, I have identified and discussed a possible shortcoming in the current method for estimating the reproduction number $R_t$ of an epidemic. While estimating $R_t$ from symptomatic cases only is certainly a practical solution to avoid the difficulties in estimating total infection numbers in epidemics with many asymptomatic individuals, I argue both theoretically and numerically that this strategy can yield erroneous results when the proportion of symptomatic cases is not constant in time. I also showed that the problem can be rescued to a large extent when the change in symptomatic cases is the result of a shift of the epidemic across populations with different (known) characteristics. A major limitation of this work is the assumption that the time dependence of the fraction of symptomatics results only from the population structure and how the epidemic spreads across the population.
In reality, myriad effects might lead to even larger errors, from local difficulties in recording and reporting symptomatic cases due to health services being overwhelmed, to changes in testing policies (for example leading to the identification of more or fewer individuals with mild symptoms). Nevertheless, I would argue that, despite its limitations, the proposed analysis has practical merit, since changes in the median age of infected individuals have been recorded widely in the current COVID19 pandemic, leading to errors that could easily be corrected. It is worth pointing out that even fractional errors in estimates of $R_t$ can have huge practical consequences, since the reproduction number is the main quantitative indicator adopted by governments in deciding containment measures which frequently have enormous economic impact. This issue is even more pressing because frequently estimates of $R_t$ at local level are adopted, where factors such as infections spreading in care homes (which have by definition a high median age and hence a large fraction of symptomatics) can completely dominate the recorded numbers of symptomatic cases. Methodologically, this work points to a more substantial usage of latent variable models in epidemiological studies. Latent variable models are a popular and well-established area of research in statistics and machine learning; to my knowledge, the proposal of using latent variables to estimate $R_t$ in the current covid19 pandemic has been explored in \cite{de2020impact}, where however the focus was reconstructing total numbers of infected individuals from total numbers of positive tests. It is widely believed that machine learning methods might greatly help monitoring and predicting the spread of pandemics, however their application is critically limited by data availability.
Regrettably, relevant real data to test models is scarce: in the case of this work, while median age of cases and total recorded cases are widely available, I have not been able to obtain real data on symptomatic cases per day and their age categories, therefore preventing a more meaningful evaluation of the proposed methodology. \footnotesize \bibliographystyle{plain}
\section{Introduction} Using a fully-connected \gls{crf} \cite{FullyConnectedCRF} in conjunction with a \gls{fcnn} \cite{long2015fully} is the state-of-the-art approach for semantic segmentation of two-dimensional natural images \cite{DeepLab}. The core idea behind this approach is that the \gls{fcnn} serves as a feature extractor that produces a coarse segmentation which is later refined by the \gls{crf}. The \gls{crf} takes as input the segmentation produced by the network as well as the original input image. Unlike a convolution layer, which employs local filters, the \gls{crf} looks at every possible pair of pixels in the image, also known as a clique. The \gls{crf} is a graphical model where every clique is defined not only by the spatial distance between pixels but also by their distance in colour space. This allows the \gls{crf} to produce a segmentation with much sharper edges when compared with only using a \gls{fcnn}. This means that the receptive field of a \gls{crf} is the entire image. Graphical models such as the fully-connected \gls{crf} have also been extensively used in 3D medical image segmentation with good results \cite{medseg_example_1, medseg_example_2, medseg_example_3, medseg_example_4, medseg_example_5}. One of the issues with applying a fully-connected \gls{crf} to 3D images is the fact that the third dimension introduces exponentially more hyper-voxels in the graph. For example, while a 2D square image with $width=N$ has $N^2$ hyper-voxels in the graph, a 3D image with the same width has $N^3$. This makes these models more expensive and likely explains why they are not as widespread for 3D medical images as for 2D images. Another issue with using a \gls{crf} to improve the quality of a segmentation is that the \gls{crf} has to be trained separately, after the base classifier has been trained.
Because of this, in \cite{CRFasRNN} the authors propose writing the \gls{crf} mean-field approximation described in \cite{FullyConnectedCRF} as an \gls{rnn} which can be placed on top of a \gls{cnn}, allowing the whole system to be trained end-to-end. As far as we know, this system has not been tested on three-dimensional medical images. As a result, we set out to test whether this algorithm could be successfully applied in this domain. \section{Methods} \subsection{Theoretical background.} In this section we present a summary of the theoretical background of the fully-connected \gls{crf} \cite{FullyConnectedCRF}. Consider an n-dimensional image with $N$ hyper-voxels (pixels, voxels, 4D-voxels, etc\dots) on which we wish to perform semantic segmentation, \textit{i.e.} to assign a label to every hyper-voxel. We define $X_j$ and $I_j$ to be the label and colour value of hyper-voxel $j$, respectively. Consider a random field $\textbf{X}$ defined over a set of variables $\{X_1, X_2, \dots, X_N\}$, each taking a value from a set of labels $\mathcal{L} = \{ l_1, l_2, \dots, l_k\}$. Consider another random field $\textbf{I}$ defined over the variables $\{I_1, I_2, \dots, I_N\}$, where the domain of each variable is the set of possible colour values of a hyper-voxel in the image. A \acrlong{crf} $(\textbf{I}, \textbf{X})$ is characterized by a Gibbs distribution: \begin{equation} P(\textbf{X} | \textbf{I}) = \frac{1}{Z(\textbf{I})}\exp{ \left ( -\sum_{c \in \mathcal{C_G} }{ \phi_c(\textbf{X}_c| \textbf{I})} \right )}, \end{equation} where $\mathcal{G}$ is a graph on $\textbf{X}$ and each clique $c$ in the set of cliques $\mathcal{C_G}$ induces a potential $\phi_c$. The Gibbs energy of labelling $\textbf{x} \in \mathcal{L}^N$ is $E(\textbf{x}|\textbf{I}) = \sum_{c \in \mathcal{C_G} }{ \phi_c(\textbf{X}_c| \textbf{I})} $ and the \gls{map} labelling of the random field is $\textbf{x}^\ast = \text{arg max}_{\textbf{x} \in \mathcal{L}^N} P(\textbf{X} | \textbf{I})$.
$Z(\textbf{I})$ is a normalization constant that ensures $P(\textbf{X} | \textbf{I})$ is a valid probability distribution. For notational convenience the conditioning will be omitted from now on; we define $\psi_c(\textbf{x}_c)$ to denote $\phi_c(\textbf{x}_c| \textbf{I})$. The Gibbs energy of the fully-connected pairwise \gls{crf} is the sum of all unary and pairwise potentials: \begin{equation} E(x) = \sum_{i}{\psi_u(x_i)} + \sum_{i<j}{\psi_p(x_i, x_j)}, \end{equation} where $i$ and $j$ range from 1 to $N$. The unary potential $\psi_u(x_i)$ is computed independently for each hyper-voxel by a classifier, \textit{i.e.} the choice of label for one hyper-voxel does not have a direct impact on the labels of the other hyper-voxels. The pairwise potentials are given by: \begin{equation} \psi_p(x_i, x_j) = \mu(x_i, x_j) \underbrace{\sum_{m=1}^{K}{w^{(m)}k^{(m)}(\textbf{f}_i, \textbf{f}_j)}}_{k(\textbf{f}_i, \textbf{f}_j)}, \end{equation} where $k^{(m)}$ is a Gaussian kernel applied to arbitrary feature vectors $\textbf{f}_i$ and $\textbf{f}_j$, the $w^{(m)}$ are trainable linear-combination weights and $\mu$ is a compatibility function between labels. The feature vectors $\textbf{f}_i$ and $\textbf{f}_j$ can be constructed from any feature space regarding the image. However, in this setting, they are chosen to take into account the positions $p_i$ and $p_j$, and the colour values $I_i$ and $I_j$ of the hyper-voxels in the image: \begin{equation} k(\textbf{f}_i, \textbf{f}_j) = w^{(1)}\underbrace{\exp{\left ( -\frac{|p_i-p_j|^2}{2\theta_\alpha^2} - \frac{|I_i-I_j|^2}{2\theta_\beta^2} \right )}}_{\text{appearance kernel}} + w^{(2)}\underbrace{\exp{\left ( - \frac{|p_i-p_j|^2}{2\theta_\gamma^2}\right )}}_{\text{smoothness kernel}}. \end{equation} The parameters $\theta_\alpha$, $\theta_\beta$ and $\theta_\gamma$ are hyper-parameters that control the importance of the hyper-voxel difference in a specific feature space.
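As a concrete sketch of this kernel for a single pair of voxels (the weights and bandwidths below are illustrative choices, not values from the text):

```python
import numpy as np

def pairwise_kernel(p_i, p_j, I_i, I_j, w1=1.0, w2=1.0,
                    theta_alpha=8.0, theta_beta=0.5, theta_gamma=3.0):
    """k(f_i, f_j) = w1 * appearance kernel + w2 * smoothness kernel,
    for positions of any dimensionality and intensities with any
    number of channels.  All hyper-parameter values are illustrative."""
    d_pos = np.sum((np.asarray(p_i, float) - np.asarray(p_j, float)) ** 2)
    d_int = np.sum((np.asarray(I_i, float) - np.asarray(I_j, float)) ** 2)
    appearance = np.exp(-d_pos / (2 * theta_alpha**2)
                        - d_int / (2 * theta_beta**2))
    smoothness = np.exp(-d_pos / (2 * theta_gamma**2))
    return w1 * appearance + w2 * smoothness

# two adjacent voxels of a 3D single-channel image: similar intensities
# give a larger kernel value (stronger penalty for disagreeing labels)
# than dissimilar ones, which is what sharpens segmentation edges
near_same = pairwise_kernel([10, 10, 5], [11, 10, 5], [0.70], [0.68])
near_diff = pairwise_kernel([10, 10, 5], [11, 10, 5], [0.70], [0.10])
print(near_same, near_diff)
```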
This choice of $k(\textbf{f}_i, \textbf{f}_j)$ includes both an appearance kernel (aka bilateral kernel), which penalizes different labels for hyper-voxels that are close in space and color value, and a smoothness kernel (aka Gaussian kernel) which penalizes different labels for hyper-voxels close only in space. In our case, the compatibility function, $\mu$, is a $k$ by $k$ matrix learnt from the data. It has zeros along its diagonal and trainable weights elsewhere in order for the model to be able to penalize different pairs of labels differently. For instance, in brain tumour segmentation we might want to penalize assigning the background class to the tumour's core more than assigning the oedema class to the tumour's core. Since the direct computation of $P(\textbf{X})$ is intractable we use the mean field approximation to compute the distribution $Q(\textbf{X})$ that minimizes the KL-divergence $\textbf{D}(Q||P)$, where $Q$ can be written as a product of independent marginals, $Q(\textbf{X})=\prod_i Q_i(X_i)$. Minimizing the KL-divergence yields the following iterative update equation: \begin{equation} Q_i(x_i=l) = \frac{1}{Z_i}\exp{\left ( -\psi_u(x_i) - \sum_{l' \in \mathcal{L}}\mu(l, l') \sum_{m=1}^{K}w^{(m)} \sum_{i \neq j} k^{(m)}(\textbf{f}_i, \textbf{f}_j) Q_j(l')\right )}, \end{equation} which leads to the inference algorithm detailed in Algorithm~\ref{alg:mean_field}. 
\begin{algorithm} \SetAlgoLined $Q_i(x_i) = \frac{1}{Z_i}\exp\{-\psi_u(x_i)\}$; \hfill Initialize $Q$ \While{not converged}{ $\tilde{Q}^{(m)}_i(l) \leftarrow \sum_{i \neq j}{k^{(m)}(\textbf{f}_i, \textbf{f}_j)Q_j(l)}$ for all $m$; \hfill Message passing $\hat{Q}_i(x_i) \leftarrow \sum_{l \in \mathcal{L}}{\mu(x_i, l)\sum_{m}{w^{(m)}}\tilde{Q}_i^{(m)}(l)}$; \hfill Compatibility transform $Q_i(x_i) \leftarrow \exp\{-\psi_u(x_i) - \hat{Q}_i(x_i)\}$; \hfill Local update normalize $Q_i(x_i)$ } \caption{\gls{crf} mean field approximation.} \label{alg:mean_field} \end{algorithm} The only step in Algorithm~\ref{alg:mean_field} that is not straightforward is the message passing step, which sends a message from every $X_j$ to every $X_i$. For our choice of kernels, this step involves applying a bilateral and a Gaussian filter to $Q$. A brute-force implementation has a time complexity of $\mathcal{O}(N^2)$; therefore, we use the permutohedral lattice to approximate high-dimensional filtering \cite{adams2010fast}, which has time complexity linear in $N$ (even though it is quadratic in the number of dimensions of the position vectors). The key insight of the \gls{crf} as \gls{rnn} paper \cite{CRFasRNN} is that this inference algorithm can be written as a sequence of steps which can propagate gradients backwards like a \gls{rnn}. This can be easily implemented in an existing deep learning framework. The authors called this new layer a \gls{crf} as \gls{rnn} layer, which can be placed on top of existing \gls{cnn} architectures to improve the quality of semantic segmentation. The main advantage of this layer is the ability to train a model which includes a \gls{crf} end-to-end with gradient descent methods. \subsection{Implementation} The previously proposed system was designed and implemented for 2D RGB images. In our work, we adapted the algorithm to work with n-dimensional images and with any number of channels.
From a conceptual standpoint, extending this algorithm to the general case is straightforward. However, the implementation details are more involved and hence are the core contribution of this work. Most of the steps in the inference algorithm can be easily written using existing operations in popular deep learning frameworks. Unfortunately, the message passing step, which involves high-dimensional filtering, cannot be easily implemented using existing operations. Both public implementations of the fully-connected \gls{crf} and \gls{crf} as \gls{rnn} algorithms are based on the permutohedral lattice algorithm \cite{adams2010fast} and the code provided in that paper. The permutohedral lattice is a fast approximation of high-dimensional filters which can be used for Gaussian and bilateral filtering. The available implementation of the permutohedral lattice was designed for 2D RGB images and only used CPU kernels\footnote{The authors of the permutohedral lattice paper also provided a GPU implementation which proved to be slower than the CPU version due to bugs in the code.}. The bulk of this work was re-implementing the permutohedral lattice so that: \begin{itemize} \item The implementation supports any number of spatial dimensions, input/output channels (class labels from the perspective of the \gls{crf}) and reference image channels. \item The implementation contains not only a CPU C++ kernel but also a C++/CUDA kernel for fast computation on GPU. \item The implementation includes a TensorFlow Op wrapper so that it can be easily used in Python and incorporated in any TensorFlow graph. \end{itemize} Our code for the permutohedral lattice (both CPU and GPU), implemented as a TensorFlow operation, is available at \url{https://github.com/MiguelMonteiro/permutohedral_lattice}.
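For checking a lattice-based implementation on small inputs, a brute-force NumPy reference of the mean field updates in Algorithm~\ref{alg:mean_field} is useful. This is our own $\mathcal{O}(N^2)$ sketch, not the production TensorFlow code; kernel matrices are precomputed and passed in:

```python
import numpy as np

def mean_field_inference(unary, kernels, weights, mu, n_iters=5):
    """Brute-force mean field inference for a fully-connected CRF.

    unary:   (N, L) unary potentials psi_u(x_i = l).
    kernels: list of (N, N) kernel matrices k^(m)(f_i, f_j).
    weights: list of scalar kernel weights w^(m).
    mu:      (L, L) label compatibility matrix (zero diagonal).
    Returns the approximate marginals Q of shape (N, L).
    """
    def normalize(logits):
        # Q_i(x_i) = exp(...) / Z_i, computed stably per hyper-voxel.
        logits = logits - logits.max(axis=1, keepdims=True)
        e = np.exp(logits)
        return e / e.sum(axis=1, keepdims=True)

    Q = normalize(-unary)                        # initialization
    for _ in range(n_iters):
        msg = np.zeros_like(Q)
        for k, w in zip(kernels, weights):
            k = k - np.diag(np.diag(k))          # exclude j == i from the sum
            msg += w * (k @ Q)                   # message passing, O(N^2)
        pairwise = msg @ mu.T                    # compatibility transform
        Q = normalize(-unary - pairwise)         # local update + normalization
    return Q
```

The matrix product `k @ Q` is exactly the filtering step that the permutohedral lattice approximates in linear time.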
The code for the \gls{crf} as \gls{rnn} algorithm, also implemented in TensorFlow and built on the aforementioned permutohedral lattice, is available at \url{https://github.com/MiguelMonteiro/CRFasRNNLayer}. \section{Experiments} To test whether using the \gls{crf} as \gls{rnn} layer on top of a \gls{cnn} improves the segmentation quality of 3D medical images, we conducted two experiments. The aim of these experiments was not to achieve the best possible performance on these tasks, but to compare the performance difference between using and not using the proposed algorithm. The underlying network architecture used for segmentation was the V-Net architecture \cite{VNet}. \subsection{PROMISE 2012} The PROMISE 2012 dataset \cite{PROMISE_2012} is a set of 50 three-dimensional \gls{mri} prostate images and the respective expert segmentations of the prostate. The images have different resolutions and different numbers of slices; however, they all have a single channel. In our experiment, we re-sampled the images so that they all had a resolution of $1\times1\times2$ millimetres. This resulted in images of size $[x, y, z, c] = [200, 200, 63, 1]$ voxels, where $c$ denotes the channel dimension. An example slice from the PROMISE 2012 dataset along with the respective expert segmentation is shown in Figure~\ref{fig:promise_2012}. \begin{figure}[ht] \centering \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=\linewidth]{promise_2012_image.png} \caption{Image.} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=\linewidth]{promise_2012_segmentation.png} \caption{Labels.} \end{subfigure} \caption{PROMISE 2012 example slice.} \label{fig:promise_2012} \end{figure} Since this is a binary segmentation problem, the performance metric used is simply the \gls{dc}. Given that there were only 50 labelled images, we used 5-fold cross-validation and ran each fold for 200 epochs (8000 steps).
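For completeness, the \gls{dc} between a predicted and a reference binary mask can be computed as in the following minimal sketch (our own helper, with a small $\epsilon$ to guard against empty masks):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks:
    DC = 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool).ravel()
    target = np.asarray(target, dtype=bool).ravel()
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

The coefficient is 1 for identical masks and 0 for disjoint ones.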
The results for this experiment with and without the \gls{crf} as \gls{rnn} layer are presented in Table~\ref{tab:promise_2012}. \begin{table}[!ht] \centering \caption{PROMISE 2012 results} \label{tab:promise_2012} \begin{tabular}{@{}cc@{}} \toprule & Dice Coefficient \\ \midrule Without CRF & $0.767 \pm 0.109$ \\ With CRF & $0.780 \pm 0.110$ \\ \bottomrule \end{tabular} \end{table} \subsection{BraTS 2015 High-Grade Gliomas} The \gls{brats_2015} \cite{BRATS_2015} training dataset for \gls{hgg} is composed of 220 multimodal \gls{mri} images of brain tumours. All of the images have the same resolution ($1\times 1 \times 1$ millimetres) and the same size ($[x, y, z] = [240, 240, 155]$ voxels). For each case there are four different images (T1, T1c, T2 and Flair). This results in images of size $[x, y, z, c] = [240, 240, 155, 4]$. The expert segmentation has 5 distinct labels: background, oedema, enhancing tumour core, non-enhancing tumour core and necrotic tumour core. However, the main performance metrics for this task are the whole tumour \gls{dc} (which includes everything except the background) and the core tumour \gls{dc} (which only includes the enhancing, non-enhancing and necrotic cores). An example slice from the BraTS 2015 dataset along with the respective expert segmentation is shown in Figure~\ref{fig:brats_2015}.
\begin{figure}[ht] \centering \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=\linewidth]{brats_2015_flair.png} \caption{Flair.} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=\linewidth]{brats_2015_t1.png} \caption{T1.} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=\linewidth]{brats_2015_t1ce.png} \caption{T1C.} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=\linewidth]{brats_2015_t2.png} \caption{T2.} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=\linewidth]{brats_2015_segmentation.png} \caption{Labels.} \end{subfigure} \caption{BraTS 2015 HGG example slice.} \label{fig:brats_2015} \end{figure} For our experiments, we split the dataset into a training and a holdout set (85\%/15\%), which meant that performance was measured on 33 cases. The network was trained for 100 epochs (18700 steps). The results for this experiment with and without the \gls{crf} as \gls{rnn} layer are presented in Table~\ref{tab:brats_2015}. \begin{table}[!h] \centering \caption{\gls{brats_2015} \gls{hgg} Results} \label{tab:brats_2015} \begin{tabular}{@{}ccc@{}} \toprule & Whole Tumour Dice Coefficient & Core Tumour Dice Coefficient \\ \midrule Without CRF & $0.735 \pm 0.105$ & $0.488 \pm 0.244$ \\ With CRF & $0.738 \pm 0.105$ & $0.482 \pm 0.257$ \\ \bottomrule \end{tabular} \end{table} \section{Discussion} Looking at the close means and large standard deviations presented in Table~\ref{tab:promise_2012}, we can see that it is unlikely that a statistical test will reveal that the small performance increase from using the \gls{crf} is statistically significant. In fact, a paired t-test reveals exactly this: the performance difference between using and not using the \gls{crf} as \gls{rnn} is not statistically significant. The same is true for the results of the BraTS 2015 experiment, presented in Table~\ref{tab:brats_2015}.
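The paired t statistic used in this comparison is simple enough to state explicitly. The sketch below is our own helper (per-case Dice scores are paired across the two models); the two-sided p-value then follows from the Student-t distribution with the returned degrees of freedom, e.g. via scipy.stats:

```python
import math

def paired_t_statistic(scores_a, scores_b):
    """Paired t statistic for per-case metric differences.

    Returns (t, dof); the two-sided p-value can be obtained from the
    Student-t distribution with `dof` degrees of freedom, e.g.
    2 * scipy.stats.t.sf(abs(t), dof).
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    return t, n - 1
```

A large |t| relative to the t distribution would indicate a significant difference; for close means and large per-case variability, as in the tables above, |t| stays small.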
Hence, we conclude that using the \gls{crf} as \gls{rnn} layer on top of a \gls{cnn} does not improve the segmentation quality. The fact that this algorithm seemingly works for 2D RGB images \cite{CRFasRNN} but not for 3D \gls{mri} medical images can be due to a number of factors, some of which we explore here. Natural images tend to have much higher contrast and much sharper edges than \gls{mri} images. The edges between objects in natural images tend to be much more well defined (e.g. a building against a blue sky) than in \gls{mri} images (e.g. the oedema in a brain \gls{mri} is a slightly different shade of grey than the healthy region surrounding it). Since \gls{mri} images have much less contrast and tend to have blurry edges, the object of interest often fuses with the background slowly and seamlessly. Trained radiologists can use their knowledge of human anatomy and pathology in conjunction with the observed image to infer where the object of interest starts and ends. In contrast, the \gls{crf} only has access to differences between hyper-voxels, and these differences are zero or close to zero in low contrast, blurry edge environments. This means that there is much more sensitivity to the parameters $\theta_\alpha$, $\theta_\beta$ and $\theta_\gamma$. Setting these parameters becomes very difficult and, when taking into account the inter-image variability observed during training, there simply is no set of $\theta$ parameters that works for all (or most of) the images. Despite having many voxels, 3D medical images have much lower resolution than natural images, which compounds the problem of blurry edges. The regions of interest to be segmented in medical images tend to be ``local''. In our experiments, the prostate and the brain tumours fit inside the receptive field of the neural network. This may not be the case in natural images, where we might, for example, want to segment multiple birds out of the sky.
For this reason, it is possible that the \gls{fcnn} was already able to capture all of the relevant spatial and colour relations in the image and hence the \gls{crf} has no room to improve. One thing the reader might be wondering at this point is whether there was a bug in our implementation. To test this proposition we tested our implementation of the algorithm on 2D RGB images. We took an image and its respective segmentation and distorted the segmentation to simulate the output of a ``bad'' \gls{cnn}. We then overfitted the \gls{crf} as \gls{rnn} layer by giving it the distorted segmentation as input and minimizing the cross-entropy between the output of the \gls{crf} as \gls{rnn} layer and the correct labels. The results of this experiment are shown in Figure~\ref{fig:rgb_experiment}. As we can see from Figure~\ref{fig:rgb_experiment}, the predicted segmentation, which is obtained just by training two scalar weights on filter outputs and running the recurrent inference algorithm, gives better results than even the expert-provided segmentation (as seen from the skier's baton). This indicates that both the filters and the inference algorithm are implemented correctly. \begin{figure} \centering \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=\linewidth]{image.jpg} \caption{\centering Image.
\protect\linebreak} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=\linewidth]{noisy_segmentation.png} \caption{\centering Distorted segmentation.} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=\linewidth]{segmentation.png} \caption{\centering Correct segmentation.} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=\linewidth]{output_image.png} \caption{\centering Predicted segmentation.} \end{subfigure}% \caption{2D RGB experiment.} \label{fig:rgb_experiment} \end{figure} We also performed the same experiment on 3D medical images from the PROMISE 2012 and BraTS 2015 datasets and observed that the \gls{crf} as \gls{rnn} layer was able to remove the greater part of the noise. However, the algorithm was not able to overfit the visual features nearly as well as in the 2D RGB case. \section{Conclusion} In this paper we applied the \gls{crf} as \gls{rnn} layer for semantic segmentation to 3D medical imaging segmentation. As far as we know, we provide the first publicly available version of this algorithm that works for any number of spatial dimensions, input/output channels and reference channels. We tested the \gls{crf} as \gls{rnn} layer on top of a \gls{fcnn} (with a V-Net architecture) on two distinct medical imaging datasets. We concluded that the performance differences observed were not statistically significant, and we provide a discussion as to why this technique does not transfer well from 2D RGB images to 3D multi-modal medical images. \bibliographystyle{plain}
\section{Introduction} The recent experimental realization of Bose--Einstein condensation (BEC) in ultra-cold atomic gases \cite{Science,Hulet} has triggered the theoretical exploration of the properties of Bose gases. Specifically, there has been great interest in the development of applications which make use of the properties of this new state of matter. Perhaps the recent development of the so--called atom laser \cite{atomlaser} is the best example of the interest in these applications. The current model used to describe a system with a fixed mean number $N$ of weakly interacting bosons, trapped in a parabolic potential $V(\vec{r})$, is the following Nonlinear Schr\"odinger Equation (NLSE), which in this context is called the Gross--Pitaevskii equation (GPE): \begin{equation} \label{pura} i \hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2 m} \nabla ^{2} \psi + V(\vec{r},t)\psi + U_0 |\psi|^2 \psi, \end{equation} which is valid when the particle density and temperature of the condensate are small enough. Here $U_0 = 4 \pi \hbar^2 a/m$ characterizes the interaction and is defined in terms of the ground state scattering length $a$. The normalization for $\psi$ is $N = \int |\psi|^2 \ d^3 \vec{r},$ and the trapping potential is given by \begin{equation} \label{parabolic} V(\vec{r},t) = \frac{1}{2} m\nu^2 \left( \lambda_x^2(t) x^2 + \lambda_y^2(t) y^2 + \lambda_z^2(t) z^2 \right). \end{equation} The $\lambda_\eta \ (\eta = x,y,z)$ are, as usual, functions that describe the anisotropies of the trap \cite{Dalfovo}. In real experiments with stationary systems they are constants, and the geometry of the trap usually imposes the condition $\lambda_x= \lambda_y=1$; $\lambda_z=\nu_z/\nu$ is the quotient between the frequency along the $z$-direction, $\nu_z$, and the radial one, $\nu_r \equiv \nu$. We want to emphasize at this point that our analysis is not restricted to the stationary case.
Instead, it focuses on situations where the $\lambda_\eta$ are periodic sinusoidal functions of time. To be more precise, we will adopt the notation \begin{equation} \label{lambdas} \lambda_i(t) = \lambda_{i,0}(1 + \epsilon_i \cos(\omega_i t)), \end{equation} with $i = x,y,z$. In its original derivation, Eq. (\ref{pura}) was supposed to be strictly valid in the $T=0$ and low density limits. Recent theoretical work extends the applicability of the GPE to the high density limit \cite{Gard,Ziegler}. On the other hand, linearized stability analysis based on perturbative expansions in $1/N$ seems to indicate that the validity of the equation is restricted to cases where no exponential separation of nearby orbits appears, as happens for example in chaotic pulsations of the atom cloud \cite{Castin2,Castin3}. A lot of work has concentrated on the analysis of the resonance structure when a periodic time dependent perturbation is applied to the magnetic field, mainly because of the availability of experimental data \cite{expfreq}. The theoretical advances include semi-classical analysis of Eq. (\ref{pura}) in the hydrodynamic limit \cite{Stringari}, variational methods \cite{theorfre,Victor2} and numerical simulations \cite{Ruprecht1,Ruprecht2}. Recent research based on Eq. (\ref{pura}) also includes the study of different problems using tools from nonlinear science \cite{Bolitons,Tsurumi1,otros} and the analysis of its multicomponent extensions \cite{multiple1,multiple2}. The variational approach provides us with very simple equations for the evolution of the widths of the Gaussian atom cloud, together with a simple picture of the movement of the condensate \cite{Victor2}. However, those equations were initially derived for the static field case and applied only to the analysis of the normal modes and frequencies of the condensate motion in the weak perturbation regime.
Similar equations are found using various scaling arguments \cite{Kagan,Castin1,Castin2} or moment theory \cite{multiple1}. The evolution of the condensate in a time dependent trap has been addressed in many different ways. There are papers where the GPE is solved numerically, either by following the time evolution of a suitable initial condition \cite{Ruprecht1,Ruprecht2} or by using a linear expansion in normal modes \cite{Ruprecht2}. And there are other works \cite{Kagan,Castin2} where the authors derive a set of ordinary differential equations using scaling arguments plus the Thomas--Fermi approximation. Among all the papers that treat time dependent potentials, the one closest to what we present here is \cite{Castin2}. In that work the authors study the excitations of condensed and non--condensed clouds under a periodic parametric perturbation and find that the condensate depletes and that chaotic evolution emerges. We will comment more on this in Sec. \ref{Conclu}. It is our intention in this paper to explore the behavior of the condensate under periodic perturbations of the trap strength using analytical and numerical tools. The analytical techniques include exact results and a variational analysis extended to situations with time dependent potentials. Comparison of the results of both methods will allow us to rigorously establish a simple model for the resonant behavior and to qualitatively predict the evolution of the system under other conditions (less symmetry, dissipative regimes, non--condensed corrections, etc.). Other conclusions, as well as experimental implications, will be found in the course of the analysis. Our plan is as follows: In Sec. \ref{Center} we obtain a set of exact equations for the evolution of the center of mass of the condensate. The result is found to be three uncoupled Mathieu equations. We show that a complete perturbative analysis is possible and obtain the resonance structure for the center of mass.
In Sec. \ref{Variational} we introduce a time dependent variational model for the time dependent GP equation. In Sec. \ref{Radial} we solve the GPE numerically in the radially symmetric case. We show the resonance structure of the condensate in all regimes, from small to large amplitude oscillations. Next we turn to the variational equations for the widths and show that numerical simulations agree qualitatively with the preceding picture. Finally, we use those ordinary differential equations to predict the locations of the resonances in a simple model that connects the linear and the nonlinear cases using tools from Sec. \ref{Center}. This is the main result of our paper. In Sec. \ref{Nonsym} we consider the possibility of extending the analysis to the multidimensional case and study the effect of the coupling between the different variational parameters. The resulting predictions are found to agree with a subsequent set of 3D simulations of the GPE. In Sec. \ref{losses} we comment on the effect of losses. We show that these resonances are of a persistent nature and show how chaotic evolution may arise from loss effects. Finally, in Sec. \ref{Conclu} we discuss the experimental implications of our results and summarize our conclusions. Note that, unless otherwise stated, all magnitudes are in adimensional units. To adimensionalize these magnitudes we have employed the change of variables introduced in Sec. \ref{Variational}. \section{Exact analytical results} \label{Center} \subsection{Variational form of the GP equation} It can be proved that every solution of Eq.
(\ref{pura}) is a stationary point of an action corresponding, up to a divergence, to the Lagrangian density \begin{eqnarray} \label{density} {\cal L} & = & \frac{i\hbar}{2} \left( \psi\frac{\partial \psi^{\ast}}{\partial t} - \psi^{\ast} \frac{\partial \psi}{\partial t} \right) \nonumber \\ & + & \frac{\hbar^2}{2m} |\nabla \psi|^2 + V(\vec{r},t) |\psi|^2 + \frac{U_0}{2} |\psi|^4, \end{eqnarray} where the asterisk denotes complex conjugation. That is, instead of working with the NLSE we can treat the action, \begin{equation} \label{action} S = \int {\cal L} d^3rdt = \int_{t_i}^{t_f} L(t) dt, \end{equation} and study its invariance properties and extrema, which are in turn solutions of Eq. (\ref{pura}). For instance, from the invariance of Eq. (\ref{density}) under global phase transformations, one can ensure the conservation of the norm of the wave function, \begin{equation} \label{norm} N = \int |\psi|^2 d^3r, \end{equation} which in this context is interpreted as the number of particles in the Bose condensed state. We can also define another quantity, \begin{equation} \label{energy} E = \int \left\{\frac{\hbar^2}{2m}|\nabla\psi|^2 + V(\vec{r},t)|\psi|^2 + \frac{U_0 |\psi|^4}{2} \right\} d^3r, \end{equation} which can be thought of as the energy, and whose time evolution is simply \begin{equation} \label{energy-dot} \frac{dE}{dt} = \int \frac{\partial V}{\partial t} |\psi|^2 d^3r. \end{equation} Thus, when the potential is not time dependent, the energy is another conserved quantity. When the potential has the form (\ref{parabolic}), the evolution of the energy can be easily connected to that of the mean square radii of the cloud, \begin{equation} \label{energy-dot-here} \frac{dE}{dt} = \frac{1}{2}m\nu^2 \sum_{\eta = x,y,z} \frac{d\lambda_\eta^2}{dt}<\eta^2>. \end{equation} Eqs. (\ref{norm}) and (\ref{energy-dot-here}) are also useful to test the stability of the numerical scheme we will use to simulate Eq. (\ref{pura}).
\subsection{Newton's equations for the center of mass in a general GPE} Let us consider the following function \begin{equation} \psi({\vec r}) = \phi({\vec r}-{\vec r}_0), \end{equation} where $\phi$ is a solution of Eq. (\ref{pura}). Substituting it in Eq. (\ref{density}) and calculating the averaged Lagrangian, we obtain \begin{equation} L = \int {\cal L} d^3r = L_{free}[\phi] + L_{cm}[\phi,{\vec r}_0]. \end{equation} The Lagrangian has been split into two parts: one, the ``free'' contribution $L_{free}$, which depends only on $\phi$, \begin{eqnarray} L_{free}[\phi] = \int \frac{i\hbar}{2}\left(\phi \partial_t \phi^\ast - \phi^\ast \partial_t \phi\right) d^3r \nonumber \\ + \int \frac{\hbar^2}{2m}|\nabla \phi|^2 d^3r + \int \frac{U_0}{2} |\phi|^4 d^3r, \end{eqnarray} and another one, $L_{cm}$, that includes both the potential and the displacement ${\vec r}_0$, \begin{equation} L_{cm}[\phi,{\vec r}_0] = \int \left\{ V({\vec r} + {\vec r}_0) |\phi({\vec r})|^2 - i\hbar \phi^\ast \nabla \phi \cdot \dot{\vec r}_0 \right\} d^3r. \end{equation} If we impose that the action be stationary for some ${\vec r}_0(t)$, i.e. if we use Lagrange's equations (\ref{lagrange-eq}), we get \begin{equation} \label{CM} \frac{d}{dt}{<-i \hbar \nabla>} = - <\nabla V({\vec r} + {\vec r}_0)>. \end{equation} Here the brackets denote, as usual, the mean value of an operator over the unperturbed wave function $\phi$, $ <A> = \int \phi^{\ast}({\vec r}) A \phi({\vec r}) d^3r.$ As a final step let us use the fact that $\phi$ is a solution of the GPE. Then Eq. (\ref{CM}) must be satisfied at least for ${\vec r}_0 = 0$, \begin{equation} \label{law2} \frac{d}{dt}{<-i \hbar \nabla>} = - <\nabla V({\vec r})>. \end{equation} It is now easy to prove the relation between the mean value of the momentum operator, $-i \hbar \nabla$, and the velocity of the center of mass.
We start from \begin{equation} \frac{d}{dt}< \vec{r} > = \int {\vec r}\left(\phi^\ast \frac{\partial}{\partial t} \phi + \phi \frac{\partial}{\partial t} \phi^\ast\right) d^3r. \end{equation} Next we replace the time derivatives with spatial ones using Eq. (\ref{pura}) and its complex conjugate, \begin{equation} \frac{d}{dt}<{\vec r}> = \frac{1}{i\hbar} \int {\vec r}\left(\phi^\ast \frac{-\hbar^2}{2m}\nabla^2 \phi - \phi \frac{-\hbar^2}{2m}\nabla^2 \phi^\ast\right) d^3r, \end{equation} and finally integrate this expression by parts to obtain \begin{equation} \label{law1} \frac{d}{dt}<{\vec r}> = <-i \hbar \nabla>. \end{equation} Eqs. (\ref{law2}) and (\ref{law1}) are the quantum equivalent of Newton's second law and are exact for functions $\phi$ that satisfy the GPE; in fact, these equations coincide with the ones appearing in the linear Schr\"odinger equation. \subsection{The condensate in a harmonic trap. Mathieu equations for the center of mass} In our setup $V(\vec{r})$ is a harmonic potential with trap strengths of the form of Eq. (\ref{lambdas}). Thus, Eqs. (\ref{law2}) become a set of three decoupled ODEs \begin{mathletters} \begin{eqnarray} \frac{d^2}{dt^2}<x> = -\nu^2 \lambda_x^2(t) <x>, \label{edo-cm-x} \\ \frac{d^2}{dt^2}<y> = -\nu^2 \lambda_y^2(t) <y>, \label{edo-cm-y} \\ \frac{d^2}{dt^2}<z> = -\nu^2 \lambda_z^2(t) <z>. \label{edo-cm-z} \end{eqnarray} \end{mathletters} After a change of scale, each of the preceding equations is equivalent to a model one that we will write as \begin{equation} \label{mathieu-eq} \ddot{x} + (1 + \epsilon \cos (\omega t)) x = 0. \end{equation} This equation is known as Mathieu's equation. It is a well known problem which appears frequently in the study of parametrically forced oscillators and for which one can obtain a great deal of information by analytical means \cite{Bogoliuvov,Jordan,Foale2,Thompson}. First, Floquet's theory for linear ODEs with periodic time dependent coefficients \cite{Jordan} shows that Eq.
(\ref{mathieu-eq}) has an infinite set of instability regions in parameter space. The limits of these zones can be found and have the shape of wedges that start at the points $(\omega_{min},\epsilon_{min}) = (2,0), (1,0), (2/3,0),\ldots,$ and widen as $\epsilon$ is increased from zero. Inside these regions at least one branch oscillates with an exponentially increasing amplitude. Either with an asymptotic method or by making use of singular perturbation theory, we can also locate those resonances and study the evolution of the system around them. For a perturbation frequency close enough to the first resonance, that is, for $|\omega-2|=o(1)$, an asymptotic method \cite{Bogoliuvov} yields up to first order \begin{mathletters} \begin{eqnarray} \sigma = \pm \sqrt{\frac{\epsilon^2}{4\omega^2} - \delta^2}, \label{exp-fac} \\ r \simeq c e^{\sigma t} \cos(\omega t / 2 + \theta_0). \end{eqnarray} \end{mathletters} Here we see that for some values of $\delta$ and $\epsilon$ the exponent $\sigma$ is a positive real number and the amplitude of the oscillations grows without bound. Also, the strength of the resonance is maximum for a value of \begin{equation} \label{max-eff} \delta_{max} = -1 + \sqrt{1 - \frac{\epsilon^2}{4}} \simeq - \frac{\epsilon^2}{8} + {\cal O}(\epsilon^4). \end{equation} A second order Taylor expansion in Eq. (\ref{exp-fac}) yields the following limits: \begin{equation} |\omega-2| \leq \frac{\epsilon}{2} + \frac{\epsilon^2}{32}. \end{equation} The treatment of the other resonances is more difficult, as they are caused by higher order terms --at least of second order in the $\omega=1$ case--. In practice this means that they have a smaller region of influence and that they are not as strong. One has to choose large values of $\epsilon$, initial conditions $(x,\dot{x})$ not too close to the equilibrium point, and an excellent numerical integration method in order to find real instabilities.
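The instability wedge around the principal resonance is easy to verify numerically. The following sketch (not part of the original analysis; the integrator and parameter values are illustrative) integrates Mathieu's equation with a classical fourth-order Runge--Kutta scheme and compares a driving frequency inside the first wedge ($\omega = 2$) with one far from any resonance ($\omega = 3$):

```python
import math

def integrate_mathieu(eps, omega, t_max=200.0, dt=0.01):
    """Integrate x'' + (1 + eps*cos(omega*t)) x = 0 with classical RK4,
    starting from x(0)=1, x'(0)=0; return the maximum |x| reached."""
    def f(t, x, v):
        # First-order system: x' = v, v' = -(1 + eps*cos(omega*t)) x.
        return v, -(1.0 + eps * math.cos(omega * t)) * x

    x, v, t = 1.0, 0.0, 0.0
    max_abs = abs(x)
    for _ in range(int(t_max / dt)):
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = f(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = f(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        max_abs = max(max_abs, abs(x))
    return max_abs
```

Inside the wedge the amplitude grows roughly as $e^{\sigma t}$ with $\sigma \simeq \epsilon/(2\omega)$, while outside it remains bounded.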
However, if one concentrates not just on looking for exponential divergences but on efficient pumping, one will easily observe that on top of these subharmonics there are peaks in the rate of energy gain (see Fig. \ref{fig-spectrum-ode}b). Finally, we wish to point out that these resonances are of a peculiarly persistent nature: as we will make precise in Sec. \ref{losses}, they resist even the presence of dissipation. This has a serious and immediate consequence, namely that feeding the condensate in a resonant regime can result in a large amplitude oscillation of the center of mass. On the other hand, as we will outline later in Sec. \ref{losses}, a measurement of this effect can give us some insight into the loss effects, as well as into any additional terms that could be added to Eq. (\ref{pura}) so as to better model the condensate. \section{Variational equations for the time--dependent GP equation with a periodic parametric perturbation} \label{Variational} Although the center of mass of the wave packet satisfies very simple equations, the same does not happen for other parameters such as the width. Only in the two dimensional case is it possible to apply moment techniques and find its time evolution analytically \cite{Porras,Perez95}; in the fully three dimensional problem no exact results have yet been derived using the moment technique. To simplify the problem, we restrict the shape of the function $\psi$ to a convenient family of trial functions and study the time evolution of the parameters that define that family. A natural choice, which corresponds to the exact solution in the linear limit ($U_0 = 0$) and provided quite good results in our previous works \cite{theorfre,Victor2}, is a three dimensional Gaussian-like function with fourteen free parameters, \begin{equation} \label{ansatz} \psi(x,y,z,t) = A \prod_{\eta=x,y,z} \exp \left\{ \frac{-[\eta-\eta_0]^2}{2w_\eta^2} + i \eta \alpha_\eta+ i \eta^2\beta_\eta \right\}.
\end{equation} As a matter of convenience and ease of interpretation we will make here a change of parameters, from $A$ and $A^\ast$ to $N$ (the norm of the wave function) and $\phi$ (its global phase): \begin{equation} A = \sqrt{\frac{N}{\pi^{3/2}w_xw_yw_z}}\, e^{i\phi}. \end{equation} The rest of the parameters are $w_\eta$ (width), $\alpha_\eta$ (slope), $\beta_{\eta}$ (curvature) and $\eta_0$ (center of the cloud). This trial function must now be inserted in Eq. (\ref{action}) to obtain an averaged Lagrangian per particle, \begin{eqnarray} \label{lagrange} \frac{L}{N} = \frac{1}{N} \int_{-\infty}^{+\infty}{\cal L}d^3r = \nonumber \\ \hbar \dot{\phi} + \sum_\eta \left\{ \frac{w_\eta^2}{2} + \eta_0^2 \right\} \left\{ \hbar\dot{\beta}_\eta + \frac{2 \hbar^2}{m}\beta_\eta^2 + \frac{1}{2}m\nu^2\lambda_\eta^2(t) \right\} \nonumber \\ + \sum_\eta \left\{ \hbar\dot{\alpha}_\eta + \frac{\hbar^2}{m}2\alpha_\eta\beta_\eta \right\} \nonumber \\ + \frac{\hbar^2}{m} \sum_\eta \left\{ \frac{1}{2w_\eta^2} + \alpha_\eta^2 \right\} + \frac{U_0}{4\sqrt{2}}\frac{N}{\pi^{3/2}w_x w_y w_z}.
\end{eqnarray} The evolution of the parameters is governed by the corresponding set of Lagrange equations, \begin{equation} \label{lagrange-eq} \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q_j}}\right) = \frac{\partial L}{\partial q_j}, \end{equation} which give us equations for the conservation of the norm, \begin{equation} \label{cons-norm} \frac{dN}{dt} = 0; \end{equation} the movement of the center of mass, \begin{equation} \label{mov-center} \ddot{\eta}_0 + \nu^2\lambda_\eta^2(t) \eta_0 = 0; \end{equation} the evolution of the slope and curvature, \begin{mathletters} \begin{eqnarray} \beta_\eta & = & \frac{m \dot{w}_\eta}{2 \hbar w_\eta}, \\ \alpha_\eta & = & \frac{m}{\hbar} \dot{\eta}_0 - 2\beta_\eta \eta_0; \end{eqnarray} \end{mathletters} and finally the evolution of the widths, \begin{mathletters} \begin{eqnarray} \label{widths} \ddot{w_x} + \nu^2\lambda_x^2(t)w_x & = & \frac{\hbar^2}{m^2}\frac{1}{w_x^3} + \frac{U_0}{2\sqrt{2}m}\frac{N}{\pi^{3/2}w_x^2w_yw_z}, \\ \ddot{w_y} + \nu^2\lambda_y^2(t)w_y & = & \frac{\hbar^2}{m^2}\frac{1}{w_y^3} + \frac{U_0}{2\sqrt{2}m}\frac{N}{\pi^{3/2}w_xw_y^2w_z}, \\ \ddot{w_z} + \nu^2\lambda_z^2(t)w_z & = & \frac{\hbar^2}{m^2}\frac{1}{w_z^3} + \frac{U_0}{2\sqrt{2}m}\frac{N}{\pi^{3/2}w_xw_yw_z^2}. \end{eqnarray} \end{mathletters} As one can see, the introduction of a time dependent potential does not affect the form of the equations, which remain the same as those of \cite{Victor2}. Let us introduce the constants $P = \sqrt{2/\pi}Na/a_0$ (strength of the atom-atom interaction) and $a_0 = \sqrt{\hbar/(m\nu)}$ (harmonic potential length scale), as well as a set of rescaled variables for the time, $\tau = \nu t$, and the widths, $w_\eta = a_0 v_\eta \ (\eta=x,y,z)$.
This leads us to \begin{mathletters} \begin{eqnarray} \label{widths2} \ddot{v}_x + \lambda_x^2(t)v_x & = & \frac{1}{v_x^3} + \frac{P}{v_x^2v_yv_z}, \label{vx} \\ \ddot{v}_y + \lambda_y^2(t)v_y & = & \frac{1}{v_y^3} + \frac{P}{v_xv_y^2v_z}, \label{vy} \\ \ddot{v}_z + \lambda_z^2(t)v_z & = & \frac{1}{v_z^3} + \frac{P}{v_xv_yv_z^2}. \label{vz} \end{eqnarray} \end{mathletters} This set of rescaled variables and units is the one that we will use throughout the paper, unless otherwise stated. \section{Analysis of the radially symmetric case} \label{Radial} \subsection{The equations} In this section we will analyse the case where the trap has the same strength in all directions, that is, \begin{equation} V(x,y,z) = \frac{1}{2} m \nu^2 \lambda^2(t) (x^2 + y^2 + z^2), \end{equation} and the solutions are assumed to be radially symmetric. This high degree of symmetry simplifies the equations considerably. First, the variational model for the widths of the condensate (\ref{widths}) reduces to a single ODE for the radial width $v(t)$, \begin{equation} \label{radial-ode} \ddot{v} = - \lambda^2(t) v + \frac{1}{v^3} + \frac{P}{v^4}. \end{equation} Secondly, the change of function \begin{equation} \label{radial-wave} \psi(r,t) = A \frac{u(r)}{r}, \end{equation} with the constraints \begin{mathletters} \begin{eqnarray} u(r) \rightarrow 0, \quad r \rightarrow 0, \\ \int_0^\infty |u(r)|^2 dr = 1, \\ |A|^2 = \frac{N}{4 \pi}, \end{eqnarray} \end{mathletters} transforms Eq. (\ref{pura}) into the one-dimensional PDE \begin{equation} \label{radial-pde} i \hbar \partial_t u = - \frac{\hbar^2}{2m} \partial_r^2 u + \left\{ \frac{1}{2}m \nu^2 \lambda^2(t) r^2 + \frac{U_0 N}{4 \pi} \frac{|u|^2}{r^2} \right\} u. \end{equation} This is the equation that we have actually solved numerically. \subsection{Numerical study of the equations} The numerical solution of Eqs. (\ref{radial-ode}) and (\ref{radial-pde}) is not a trivial task.
We can see without much effort that both equations are stiff \cite{Hairer} when the width of the cloud becomes very small, due to the presence of strong singular potentials. Thus, we need numerical methods that account for the nonlinearities and are stable enough to be trusted when close to a resonance. To solve Eq. (\ref{radial-ode}) we have used an adaptive step size Runge--Kutta--Fehlberg method, a Dormand--Prince pair \cite{Hairer}, the ODE Suite of MATLAB \cite{MATLAB}, and finally Vazquez's conservative scheme \cite{Vazquez} --a finite-difference scheme that conserves a discretized version of the energy and is unconditionally stable--. All of them gave the same accurate results about the frequencies and amplitude of the oscillations, the regions of divergence, etc. To solve Eq. (\ref{radial-pde}) we have utilized a modification of a second order accurate finite difference scheme developed in \cite{esquema}. This new scheme is time reversible, conserves the norm and has a discrete analogue for Eq. (\ref{energy-dot}) which provides enhanced stability \cite{Sanz-Serna}. On the other hand, even using the best methods, we face another important difficulty, namely the finite size of either the spatial grid --in finite-difference schemes-- or the momentum space --in spectral or pseudospectral methods--. This size effect becomes particularly important in the case of parametric perturbations and imposes a severe limit on the time for which simulations may be trusted. Using all this computational machinery we have achieved several important results. First we have checked our programs with low amplitude oscillations. In the PDE we imposed a Gaussian initial condition of width $v_0$ and used this same value as initial condition for Eq. (\ref{radial-ode}). By this procedure we obtained the linearized excitation frequencies of the condensate, concluding that both models agree closely, as was shown in \cite{Victor2}.
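The low amplitude check described above can be reproduced with a minimal fixed-step integrator for Eq. (\ref{radial-ode}). The following is a sketch only, not the adaptive or conservative schemes used in this work; the value $P=9.2$ is the JILA-like nonlinearity quoted later in the paper, and the constant trap $\lambda = 1$ is assumed:

```python
import math

def equilibrium_width(P, lo=0.5, hi=5.0):
    """Bisection for the static width with lambda = 1:  v = 1/v^3 + P/v^4."""
    g = lambda v: -v + 1.0 / v**3 + P / v**4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def evolve_radial(P, lam2, v0, u0=0.0, T=30.0, h=0.005):
    """Fixed-step RK4 for v'' = -lambda^2(t) v + 1/v^3 + P/v^4.
    Returns the sampled trajectory of the width v(t)."""
    def acc(t, v):
        return -lam2(t) * v + 1.0 / v**3 + P / v**4
    v, u, t, out = v0, u0, 0.0, []
    for _ in range(int(T / h)):
        k1v, k1u = u, acc(t, v)
        k2v, k2u = u + h/2*k1u, acc(t + h/2, v + h/2*k1v)
        k3v, k3u = u + h/2*k2u, acc(t + h/2, v + h/2*k2v)
        k4v, k4u = u + h*k3u, acc(t + h, v + h*k3v)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        t += h
        out.append(v)
    return out
```

Displacing the initial width a few percent from the static value produces the expected small, bounded oscillation around it.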
In the large amplitude simulations we found a surprising result, namely that the variational model still follows the evolution of the PDE with $90\%$ accuracy even in situations where the condensate is no longer Gaussian but gains an important contribution from the first and second modes. Another confirmed fact is that the width of the condensate develops fast, hard bounces against the ``origin'' (Fig. \ref{fig-large}a-b), with its excitation frequency deviating from the linearized predictions of \cite{Victor2} and approaching those of the harmonic trap. We have also simulated the system with a periodic time dependent perturbation of the form \begin{equation} \label{radial-lambda} \lambda^2(t) = 1 + \epsilon \cos (\omega t). \end{equation} Using the variational equation, Eq. (\ref{radial-ode}), we have scanned the parameter space $(\omega,\epsilon)$. We have found one region where the radial width diverges exponentially (Fig. \ref{fig-divergence}a) and causes most numerical methods to fail after a finite time, and two more where the width grows almost exponentially up to a point where the simulation cannot account for this growth. All of these zones have the form of wedges, with a peak at $(\omega_{min}, \epsilon_{min})$, and a growing width as $\epsilon$ is increased. The most important resonance is located at $\omega_{min} = 2.04$ (See Fig. \ref{fig-divergence}b). The other ones are weaker and are located at $\omega_{min} = 1.02, 0.68$. We have studied a wide range of setups and found that these frequencies change by no more than $0.5\%$ depending on the initial conditions and the nonlinearity. On the other hand, the lowest perturbation amplitude for which the resonance exists, $\epsilon_{min}$, does exhibit a strong dependence on the initial conditions. Indeed, around the weaker resonances, instability is never reached for points close to the minimum of the potential. However, as we already pointed out in Sec. \ref{Center} this is probably a numerical effect.
In order to study the location of the resonances we have also made some plots of the efficiency of the energy absorption process against the perturbation frequency for different values of the perturbation amplitude for the variational system and the partial differential equation (Figs. \ref{fig-spectrum-ode}a-b, \ref{fig-spectrum-pde}a-b). The way we have measured ``efficiency'' is by letting the system evolve for a fixed time and then computing the maximum value of the mean square radius of the cloud. For the sake of simplicity and to approach the experimental setups, we have used an equilibrium state of the static GPE as initial condition. In a rather complete inspection of the parameter space using the efficiency plots, we have found that, though the resonance regions cannot be precisely delimited because of the dependence on the initial data, they do exist and behave much like the variational model predicted. As we see in Figs. \ref{fig-spectrum-ode} and \ref{fig-spectrum-pde} there are two important features in the response of the condensate. The main one is the width of the resonances and the dependence of that width on the strength of the perturbation. The second important feature is the change of the frequency for which the perturbation is most efficient. This peak is centered on the frequency of the linearized model only for very small perturbation amplitudes, and switches to the trap natural frequencies very quickly as the amplitude is made stronger --about $10\%$ or so--. For even stronger amplitudes the optimal frequency decreases slowly. Another consequence of this work is that the response of the cloud is stronger in the PDE than in the variational simplification. For instance, it is possible to find (See Fig. \ref{fig-divergence}a) perturbation amplitudes that do not cause a significant growth in Eq. (\ref{radial-ode}) but make the cloud width increase quite linearly in the PDE.
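The ``efficiency'' diagnostic is easy to reproduce for the variational system. The following sketch (illustrative parameter values only; in the variational model the maximum width plays the role of the maximum mean square radius) drives Eq. (\ref{radial-ode}) with $\lambda^2(t) = 1 + \epsilon \cos(\omega t)$ starting from the static width:

```python
import math

def efficiency(omega, eps, P=9.2, T=50.0, h=0.005):
    """Drive v'' = -(1 + eps*cos(omega t)) v + 1/v^3 + P/v^4 from the
    static width and return max v(t) on [0, T] as a response measure."""
    lo, hi = 0.5, 5.0                      # bisect v = 1/v^3 + P/v^4
    g = lambda v: -v + 1.0 / v**3 + P / v**4
    for _ in range(100):
        m = 0.5 * (lo + hi)
        if g(lo) * g(m) <= 0.0:
            hi = m
        else:
            lo = m
    v, u, t, vmax = 0.5 * (lo + hi), 0.0, 0.0, 0.0
    def acc(t, v):
        return -(1.0 + eps * math.cos(omega * t)) * v + 1.0 / v**3 + P / v**4
    for _ in range(int(T / h)):            # classical RK4 step
        k1v, k1u = u, acc(t, v)
        k2v, k2u = u + h/2*k1u, acc(t + h/2, v + h/2*k1v)
        k3v, k3u = u + h/2*k2u, acc(t + h/2, v + h/2*k2v)
        k4v, k4u = u + h*k3u, acc(t + h, v + h*k3v)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        t += h
        vmax = max(vmax, v)
    return vmax
```

Scanning $\omega$ at fixed $\epsilon$ reproduces a peak near the main resonance ($\omega \approx 2.04$ for $P = 9.2$), with a much weaker response far from it.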
We have also studied the case where an exponential growth of the width is present in both equations --(\ref{radial-ode}) and (\ref{radial-pde})-- and we have seen that the growth is qualitatively similar, though there is a tendency in the exact model to exhibit slightly larger amplitude motion. These discrepancies originate from high modes which are not present in the variational treatment. In fact, even though an important part of the cloud remains close to the origin, it is observed that for long times a long tail appears which, though hard to appreciate, is responsible for the growth of the mean square width (and the presence of higher order modes). \subsection{Analysis of the Mathieu equation with a singular potential} In this subsection we develop a simplified model that explains why resonances appear in the perturbed GPE. This model makes use of the variational equations for the cloud width but {\em does not care for the actual shape of the variational ansatz}. The reason for this is that both the approximate model and the exact one agree for short times, the response of the latter being always stronger. Let us limit ourselves to Eq. (\ref{radial-ode}). In the previous subsection we said that this system is very stiff and that the origin acts as an elastic wall. In view of this, it is intuitively appealing to replace the singular but differentiable potential $1/v^3 + P/v^4$ with a discontinuous bounce condition at the origin, i.e. an impact oscillator. Numerical simulations confirm that this approximation is good for large amplitude oscillations. Also, the connection between soft singular potentials and impact oscillators has been widely studied, and these kinds of models have found ample application in real-life situations in the fields of Physics and Engineering (See \cite{Foale2,Thompson} for many references). Since the variational model is a lossless one, our boundary condition must be elastic. We replace Eq.
(\ref{radial-ode}) with the following one \begin{eqnarray} \label{bounce} \ddot{v} + \lambda^2(t) v & = & 0, \\ \lim_{t \rightarrow t_c^-}(v,\dot{v}) = (0^+, V_c) & \iff & \lim_{t \rightarrow t_c^+} (v,\dot{v}) = (0^+, -V_c), \nonumber \end{eqnarray} where $t_c$ denotes any isolated instant when the system bounces against the $v=0$ singularity. Let us show that this equation is in turn equivalent to an elastic oscillator {\em without} barrier conditions. We introduce the change of variables \begin{equation} \label{change-barrier} v = |u|, \end{equation} where $u$ is an unrestricted real number and satisfies the following one--dimensional harmonic oscillator equation \begin{equation} \label{harmonic-osc} \ddot{u} + \lambda^2(t) u = 0. \end{equation} It is now easy to prove that every solution of Eq. (\ref{harmonic-osc}) provides a solution of Eq. (\ref{bounce}). And vice versa, from every solution of Eq. (\ref{bounce}) it is possible to construct a solution of Eq. (\ref{harmonic-osc}), unique up to a sign. So, what do we have now? We have proved that for large amplitude motion Eq. (\ref{radial-ode}) behaves as Mathieu's equation. This implies that in the regime of medium to large amplitude oscillations, or in a situation of large amplitude perturbations, Eq. (\ref{radial-ode}) will have instability regions that are more or less centered on Mathieu's frequencies $\omega = 2, 1, 2/3,\ldots$. This prediction is indeed confirmed by the numerical simulations: the main resonance is capable of causing an exponential divergence, while the other ones are harder to track down but do appear in plots of pumping ``efficiency''. Another important, though minor, result of this equivalence is that these resonance regions must become wider and move to smaller frequencies as the perturbation amplitude is increased. This result is also obtained in the numerical simulations.
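The instability tongues of Eq. (\ref{harmonic-osc}) with $\lambda^2(t) = 1 + \epsilon \cos(\omega t)$ can be located directly with the standard Floquet criterion: integrate the monodromy matrix over one period and test whether the absolute value of its trace exceeds 2. A minimal sketch, assuming that particular drive:

```python
import math

def mathieu_unstable(omega, eps, steps=4000):
    """Floquet test for u'' + (1 + eps*cos(omega*t)) u = 0.
    Integrates two independent solutions over one period T = 2*pi/omega
    with RK4 and builds the monodromy matrix M; the solution is
    parametrically unstable iff |trace M| > 2."""
    T = 2 * math.pi / omega
    h = T / steps
    def acc(t, u):
        return -(1.0 + eps * math.cos(omega * t)) * u
    def evolve(u, du):
        t = 0.0
        for _ in range(steps):
            k1u, k1d = du, acc(t, u)
            k2u, k2d = du + h/2*k1d, acc(t + h/2, u + h/2*k1u)
            k3u, k3d = du + h/2*k2d, acc(t + h/2, u + h/2*k2u)
            k4u, k4d = du + h*k3d, acc(t + h, u + h*k3u)
            u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
            du += h/6 * (k1d + 2*k2d + 2*k3d + k4d)
            t += h
        return u, du
    m11, m21 = evolve(1.0, 0.0)   # first column of the monodromy matrix
    m12, m22 = evolve(0.0, 1.0)   # second column
    return abs(m11 + m22) > 2.0
```

Driving at the main Mathieu frequency $\omega = 2$ with a modest amplitude falls inside the principal tongue, while a detuned frequency such as $\omega = 1.5$ does not.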
Indeed, from the graphs we can estimate how fast the optimal frequency decreases as $\epsilon$ grows, and we will see that the order of magnitude corresponds to that of Eq. (\ref{max-eff}). Summing up, what we have seen here is that for low amplitude oscillations the condensate moves in an effective potential that is parabolic. Thus, the frequency that excites the condensate most efficiently in a parametric way is the one that results from the linearization of Eq. (\ref{radial-ode}). On the other hand, when we start to consider large amplitude oscillations, we find that the harmonic trap gains importance over the details of the well. It is in this situation that the instability arises, and it happens for perturbations that oscillate according to multiples of the frequency of the trap. Moreover, higher modes are little influenced by the nonlinearity, simply because they are more spread out and the value of $|\psi(r,t)|^2$ is smaller. As a consequence, the energies of these modes come even closer to those of the linear harmonic oscillator, resulting in the fact that the response of the GPE is stronger than what the variational model, limited to a Gaussian shape, predicts. \section{Analysis of the nonsymmetric case} \label{Nonsym} After finishing the study of the radially symmetric problem, it seems a natural step to proceed with the nonsymmetric one. However, this step is not simple for several reasons, the main one being that the simulation of the full three-dimensional GPE is computationally very expensive. Thus, it would be unwise to directly attack the full problem without gaining some insight on what is to be expected from the simulations by cheaper means. In our case the cheapest tool is the variational model. We have seen that it describes rather well the behaviour of a radially symmetric condensate.
And, as we pointed out before, there are exact analytical studies \cite{Porras,Perez95} where the moment method reduces the {\it two dimensional} GPE exactly to a set of ODEs similar to the ones we have. \subsection{Predictions of the variational model} When we remove the radial symmetry in the variational equations, we are left with two to three coordinates, and the perturbation can take many different forms. However, trying to follow the experimental setups \cite{JILA}, we should once more take a sinusoidal time dependence for every $\lambda_{\eta}(t)$ coefficient, as in Eq. (\ref{lambdas}). This choice accounts both for the $m = 0$ mode $(\epsilon_x = \epsilon_y, \epsilon_z = 0)$ and the $m = 2$ perturbations $(\epsilon_x = -\epsilon_y, \epsilon_z = 0)$ from the JILA experiment \cite{JILA}. In the latter case the potential is a parabolic one, with fixed frequencies in a rotating frame. However, for our trial function (\ref{ansatz}) it behaves just as a potential of the form of Eq. (\ref{parabolic}). Substituting our effective perturbation frequencies into Eq. (\ref{widths2}) we get a set of three coupled Mathieu equations with a potential that is singular on the $v_x = 0$, $v_y = 0$ and $v_z = 0$ planes. The singularities are at least as strong as $1/v^3$, and the numerical simulations again confirm that they act as elastic walls, so we now proceed with a change of variables formally equivalent to that of Eq. (\ref{change-barrier}): \begin{mathletters} \label{decoupled-3d} \begin{eqnarray} v_\eta & = & |u_\eta| \\ \ddot{u}_\eta & = & -\lambda_\eta^2(t) u_\eta \end{eqnarray} \end{mathletters} for $ \eta=x,y,z$. Now the situation is a bit more complex. The first new feature is the existence of several sets of instability regions. Due to having three {\em a priori} different constants $\lambda_{0\eta}$, the three oscillators in Eq.
(\ref{decoupled-3d}) are not equivalent and we may get three sets of resonances in the $(\epsilon_\eta, \omega)$ space, each one containing the instability regions that start at the frequencies \begin{equation} \frac{\omega_{\eta,min}}{\lambda_{0\eta}} = 2, 1, 2/3, ... \end{equation} Numerical simulations of the variational equations for the $m=0$ type excitation confirm this prediction with a relative accuracy around $0.5\%$ in the frequencies. The results show again that, in contrast to the pure Mathieu equation, these ``wedges'' rest on a nonzero value of the perturbation amplitude $\epsilon_{min,\eta}$. In Fig. \ref{fig-cylindric} we see the evolution of the condensate width with parameters close to the main resonance region. Another new feature is the possibility of coupling between the widths of the condensate. This coupling is seen both in the $m=0$ and $m=2$ excitation setups. In the first one, the perturbed width feeds the unperturbed one, though Figure \ref{fig-spectrum-pde} demonstrates that the efficiency of this process is not very high. In the $m=2$ case two widths are associated with the same trap frequencies and the perturbations are of equal magnitude and opposite sense. The explanation is that both widths block each other, eliminating the resonance and leaving just a bounded ``movement'' of small amplitude. As a side note, we must say that the variational model itself predicts the lack of resonances in the $m=2$ setup. We already pointed out that the JILA $m=2$ perturbation corresponds to a situation where the potential of the trap is rotated while maintaining its shape. It can be easily proved that if we choose {\em any} family of trial functions with enough degrees of freedom for a general rotation, the variational solution will always stick to the potential and rotate at the same speed. This also implies that the trial function (\ref{ansatz}) is not suitable for describing the condensate when this kind of perturbation is applied.
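The coupling of the unperturbed width in the $m=0$ setup can be illustrated by integrating the three coupled width equations of the form (\ref{widths2}) directly. This is a sketch with an illustrative nonlinearity $P=1$, assuming $\lambda_x^2 = \lambda_y^2 = 1 + \epsilon \cos(\omega t)$ and $\lambda_z^2 = 1$:

```python
import math

def vz_excursion(eps, omega=2.0, P=1.0, T=40.0, h=0.005):
    """Integrate v'' = -lam_eta^2(t) v_eta + 1/v_eta^3 + P/(v_x v_y v_z v_eta)
    with RK4, starting at the symmetric static width, and return the maximum
    excursion of the unperturbed width v_z from its initial value."""
    lo, hi = 0.5, 5.0                      # symmetric static width
    g = lambda v: -v + 1.0 / v**3 + P / v**4
    for _ in range(100):
        m = 0.5 * (lo + hi)
        if g(lo) * g(m) <= 0.0:
            hi = m
        else:
            lo = m
    v0 = 0.5 * (lo + hi)
    def rhs(t, y):
        vx, vy, vz, ux, uy, uz = y
        lam2 = 1.0 + eps * math.cos(omega * t)   # drive on x and y only
        c = P / (vx * vy * vz)
        return [ux, uy, uz,
                -lam2 * vx + 1.0 / vx**3 + c / vx,
                -lam2 * vy + 1.0 / vy**3 + c / vy,
                -1.0 * vz + 1.0 / vz**3 + c / vz]
    y, t, dev = [v0] * 3 + [0.0] * 3, 0.0, 0.0
    for _ in range(int(T / h)):
        k1 = rhs(t, y)
        k2 = rhs(t + h/2, [a + h/2*b for a, b in zip(y, k1)])
        k3 = rhs(t + h/2, [a + h/2*b for a, b in zip(y, k2)])
        k4 = rhs(t + h, [a + h*b for a, b in zip(y, k3)])
        y = [a + h/6*(p + 2*q + 2*r + s)
             for a, p, q, r, s in zip(y, k1, k2, k3, k4)]
        t += h
        dev = max(dev, abs(y[2] - v0))
    return dev
```

With the drive off the system stays at the fixed point; with a modest $m=0$ drive the $z$ width moves even though $\lambda_z$ is constant, because it is fed through the nonlinear term.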
\subsection{Numerical study of the three dimensional GPE} To perform the simulation of the full GPE we have used a Fourier pseudospectral method, using typically a grid of $108^3$ collocation points and integrating in time with a second order, symmetrized split step method \cite{bpm1,bpm2}. Our numerical study is based on an $O((\Delta t)^2)$ scheme which behaves extremely well for long time runs. However, as in the radial PDE, no matter how accurate the scheme is, there is a limit to the time during which simulations can be trusted, and this limit is imposed by the growth rate of the condensate width and the size of the grid. This limit is especially important for the $m=0$ perturbation, where the condensate develops a large tail in the unperturbed direction, and this tail breaks the simulation after a certain time. The more asymmetric the trap is, the sooner this effect comes out. In our simulations the grid was a box of $108^3$ equally spaced points whose sides measured from 20 to 40 length units, depending on the symmetry and intensity of the trap. This allowed us to track the condensate for about 12 periods in truly resonant setups, and for much longer in nonresonant ones. We have applied the algorithm to many different problems. First we checked our programs against stationary and radially symmetric problems. Secondly we introduced time dependent traps and reproduced the calculations of Section \ref{Radial}. In both cases we got the expected results. The third set of experiments consisted of a resonant time dependent radially symmetric trap applied to several slightly asymmetric Gaussian wave packets. The initial asymmetry was small enough to treat it as a weak perturbation, and we saw that the evolution departed little from the symmetric case --i.e., no modes with higher energy or angular momentum break the exponential growth.
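The time-stepping idea can be illustrated in one dimension. Below is a pure-Python sketch of a second order symmetrized (Strang) split-step scheme on a tiny grid, applied here to the linear limit in oscillator units; the naive $O(N^2)$ DFT stands in for the FFT of the production code:

```python
import math, cmath

def dft(a, sign):
    """Naive unitary DFT, O(N^2); a stand-in for an FFT on a small grid."""
    n = len(a)
    s = 1.0 / math.sqrt(n)
    return [s * sum(a[j] * cmath.exp(sign * 2j * math.pi * j * k / n)
                    for j in range(n)) for k in range(n)]

def split_step(psi, xs, ks, dt, nsteps, g=0.0):
    """Symmetrized split-step for i psi_t = -psi_xx/2 + (x^2/2 + g|psi|^2) psi
    (hbar = m = omega = 1): half potential, full kinetic, half potential."""
    for _ in range(nsteps):
        psi = [p * cmath.exp(-0.5j * dt * (0.5 * x * x + g * abs(p)**2))
               for p, x in zip(psi, xs)]
        phat = dft(psi, -1)
        phat = [ph * cmath.exp(-1j * dt * 0.5 * k * k)
                for ph, k in zip(phat, ks)]
        psi = dft(phat, +1)
        psi = [p * cmath.exp(-0.5j * dt * (0.5 * x * x + g * abs(p)**2))
               for p, x in zip(psi, xs)]
    return psi
```

Each substep is pointwise unitary, so the norm is conserved to round-off, and the harmonic oscillator ground state is reproduced as a stationary state up to the $O((\Delta t)^2)$ splitting error.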
The fourth and probably most important group of simulations consisted of the study of the $m = 0$ perturbation from the JILA \cite{JILA} experiment. Here we confirmed the predictions of the previous subsection, that is, we obtained at least one resonance region where the variational model predicts one. We also observed that the response of the condensate as modeled by the GPE is stronger and exhibits a slightly more intense growth rate than what the variational model predicts. This and other similar results from the radial case (Sect. \ref{Radial}) favor the theory of a cooperative staircase effect, where the higher modes contribute to the energy absorption process without interfering in the evolution of the width. Figs. \ref{fig-cylindric-pde1} and \ref{fig-cylindric-pde2} show just two examples of the kind of evolution we have seen for this perturbation model in the resonant regions. The reason for this cooperative effect seems to lie in the energy level structure of the GPE equation. We have studied the evolution of the correlation of a condensate wave function with its initial data. In all cases the initial data was a displaced and deformed Gaussian cloud, while the environment corresponded to a stationary trap. In the linear case it is easy to show that the spectrum of the correlation must reveal a subset of the eigenvalues of the Hamiltonian, and indeed that is what we got. In a nonlinear context it is not clear what the frequencies of the correlation mean --they may or may not be eigenvalues of the GPE--, but at least we know that they must rule the energy absorption process somehow. What numerical experiments show is that for an extensive family of initial conditions these generalized ``spectra'' can be approximated by the formula $E_n = \omega n + E_0$, where $E_0$ depends on the nonlinearity and the level spacing is a regular, harmonic one. This picture of equally spaced levels is intuitively appealing to explain the existence of such strong resonances.
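The connection between an equally spaced ``spectrum'' and a strong resonant response can be made concrete in the linear limit: if $C(t) = \sum_n |c_n|^2 e^{-i E_n t}$ with $E_n = \omega n + E_0$, then $|C(t)|$ is exactly periodic with period $2\pi/\omega$. A small sketch, where the Poissonian weights of a displaced Gaussian are an illustrative assumption:

```python
import math, cmath

def correlation(weights, E0, omega, t):
    """C(t) = sum_n |c_n|^2 exp(-i (E0 + omega n) t) for an equally
    spaced generalized spectrum E_n = omega n + E0."""
    return sum(w * cmath.exp(-1j * (E0 + omega * n) * t)
               for n, w in enumerate(weights))

def coherent_weights(alpha2, nmax=60):
    """Poissonian populations |c_n|^2 of a displaced ground state,
    with mean occupation alpha2 (truncated at nmax levels)."""
    return [math.exp(-alpha2) * alpha2**n / math.factorial(n)
            for n in range(nmax)]
```

A drive commensurate with $\omega$ therefore meets the whole ladder of levels at once, which is the intuition behind the cooperative absorption described above.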
On one side we have the fact that, in any other system, the response to a continuous parametric perturbation would become bounded as the higher states become populated. This is probably what one would first think when facing this system. On the other side we see that, due to being equally spaced, these higher energy states are themselves sensitive to the same resonant frequency and offer no resistance to the particle promotion process. In the end, this is more or less what happens in the variational equations, considered from the point of view of Classical Mechanics. However, it is not relevant for the existence of resonances whether they cause a sustained growth or not. What is confirmed without any doubt is that the perturbations that are most efficient in the variational model are also the most efficient in the full GPE. \section{Analysis of the effect of losses} \label{losses} We now want to show the effect of a dissipative term in the variational equations. This term will be introduced in a phenomenological way so as to model the damping of the oscillations of the condensate in regimes where the number of noncondensed particles is small. We will choose a viscous damping term that models well the behaviour of the condensate in the experiments \cite{JILA}. This choice introduces a significant loss of energy in the oscillations while preserving the number of condensed particles. Using this term we will see that the resonance regions of the pure Mathieu equation persist. Let us see what Eq. (\ref{mathieu-eq}) looks like once we add damping: \begin{equation} \label{mathieu-disip} \ddot{x} + (1 + \epsilon \cos (\omega t)) x + \gamma \dot{x} = 0. \end{equation} A simple change of variables $x(t) = \rho(t) e^{-\gamma t/2}$ makes this new term disappear, transforming it back to a pure Mathieu equation \begin{equation} \label{damped} \ddot{\rho} + (1 - \gamma^2/4 + \epsilon \cos (\omega t)) \rho = 0. \end{equation} With the introduction of the damping we are shifting the resonances to lower values, given by \begin{equation} \frac{\omega}{\nu(\gamma)} = 2, 1, 2/3, \ldots \end{equation} where $\nu^2(\gamma) = 1 - \gamma^2/4$ is the new effective frequency for the trap. We can approximately solve Eq. (\ref{damped}) around the first resonance, obtaining \begin{equation} x(t) \simeq c e^{(\sigma - \gamma/2)t} \cos(\frac{\omega t}{2} + \theta_0), \end{equation} where $\sigma$ is given by Eq. (\ref{exp-fac}). This shows that the resonance regions in the parameter space are constrained to values of $(\omega, \epsilon)$ for which the strength of the resonance, $\sigma$, is larger than the effective strength of the dissipative term, $\gamma/2$. The new regions have a larger, nonzero, value of $\epsilon_{min}$, and are typically thinner, but do not disappear unless $\gamma$ is very large. This effect is reproduced in the variational model when we introduce similar viscous damping terms. For instance, taking the data from the JILA experiment \cite{JILA}, we can estimate a condensate lifetime of about 110 ms and a value for $\gamma$ of $0.15$ in natural units of the condensate. Such damping makes the $\epsilon_{min}$ value rise from $0.09$ to $0.18$ for the $P=9.2$ case. Thus, the instability should not be appreciated unless the perturbation amplitude exceeds $20\%$. An interesting effect of damping is that the evolution of a continuously perturbed condensate outside the instability regions becomes more ordered than in the undamped case, since the motion is constrained to a limit cycle {\em synchronized to the frequency of the parametric perturbation}, and with a size that depends only on the perturbation parameters, $(\omega,\epsilon)$. In Fig. \ref{fig-damped} we show the different limit cycles that appear under periodic perturbations. The largest one is always the one with its frequency on top of the peak of the resonance as shown by Fig.
\ref{fig-spectrum-pde}, and the size of the limit cycle decreases as the frequency is detuned from this value. Finally we wish to point out that the appearance of a limit cycle opens the door to a wide family of phenomena, from chaotic motion to bifurcation theory \cite{Thompson2,Thompson3}. This limit cycle would exist under a great variety of dissipative terms, and is not exclusive to linear damping. Also, the dependence of the limit cycle on the damping constant can be useful from the experimental viewpoint to separate the condensed and noncondensed clouds, as will be pointed out in the last section. \section{Conclusion and discussion} \label{Conclu} In this work we have analysed the resonant dynamics of the parametrically forced time dependent GPE using exact analytical techniques, approximate time dependent variational techniques and numerical simulations. All the results point to the existence of various resonant behaviors associated with the same parameter regions concerning the motion of the center of mass and the width oscillations of the wave packet. The role played by the nonlinearity is to provide a strong repulsive term at the origin which acts as a barrier and which, depending on the dimensionality and the symmetry of the external forcing, could be stronger than the repulsive term related to the linear dispersion (kinetic energy term). We have developed a simplified version of the variational equations for the spherically symmetric condensate under a periodic sinusoidal change of the trapping potential. This model is based on an impact oscillator, i.e. a harmonic oscillator with an elastic barrier condition, which allows us to find explicit solutions. When applied to the parametrically perturbed condensate, our model predicts that medium to large oscillations approach the harmonic trap frequencies, not the ones resulting from the linearization of the variational equations.
The model also predicts that the response of the condensate to an external perturbation is ruled by the harmonic trap frequencies. In the case of a sinusoidal parametric perturbation it accurately predicts the existence of a family of resonances on multiples of the trap natural frequencies. In the end, what we get from this work is a precise picture of the resonances for a wide family of equations that includes Eq. (\ref{pura}) and the harmonic oscillator. In this description for the radially symmetric case we have one base frequency that is essentially the same for both equations (the linear and the nonlinear one) and which corresponds, to high precision, to the energy separation between the ground state and the first excited state of the harmonic oscillator. The invariance of this base frequency has been checked for a nonlinearity constant going from $P=9.2$ --the JILA \cite{JILA} experiment-- to 20 times this value. It differs from the predictions of \cite{Yukalov}, where formulas regarding the $P \rightarrow 0$ and $P \rightarrow \infty$ limits are derived. Also there is a whole set of subharmonics of this frequency, all of which are capable of exciting the cloud quite efficiently. At least three of them have been found, all with significant responses. These subharmonics are not predicted in \cite{Yukalov}. We have also found that for this kind of parametric drive the resonances are wide. The width grows with the strength of the interaction and decreases with the effect of dissipation. Both facts can be checked in the experiments by forcing the system for a longer time than is currently done. Both predictions are exact in the linear limit, $U_0=0$, and have been confirmed with simulations coming both from the exact variational model and the GPE. The discretization of the GPE has also allowed us to scan the parameter space, studying the efficiency of the perturbation process. In this study we have only found peaks centered on Mathieu's frequencies.
For the general case with more than one degree of freedom (axial and nonsymmetric cases) we obtain a set of two to three decoupled pure Mathieu equations. We have shown that, due to having more than one frequency, the predicted Mathieu resonances exist in larger numbers. On the other hand, we have also seen that some of these resonances may disappear due to the locking of ``equivalent'' variables, an effect that our decoupled equations do not account for. This resonance scheme for the nonsymmetric case has been confirmed with accurate simulations of the full GPE for the $m=0$ perturbation. An analysis of the correlation of a state with its initial data shows that both the linear and the nonlinear problems exhibit a spectral structure which is consistent with such behavior. Finally, damping has been shown to limit the effect of the parametric perturbation. Once more we have proved that only frequencies close to the Mathieu resonance regions excite the condensate as a whole in an efficient way, causing the appearance of a stable limit cycle. All other frequencies are inefficient in the sense that the system stays {\em extremely} close to the equilibrium configuration, which acts as a focus. A main conclusion of this work is that for this set of resonances to exist one only needs a singularity that prevents collapse. The variational method showed that the kinetic terms in the evolution equations guarantee a $1/x^3$ singularity as long as we impose a repulsive interaction between the atoms in the cloud. This is the reason why we say that we have a family of systems that behave much the same. An immediate result of this is that the response of the noncondensed atoms under the parametric perturbation will be qualitatively similar to that of the condensed ones, with the only difference that the former are subject to a more intense dissipation. But as we already saw in Sec.
\ref{losses} this dissipation can be enough to distinguish both kinds of fluids: while the condensed part might suffer an exponential growth, the uncondensed part might develop low amplitude bounded oscillations. We have also demonstrated that these resonances show up in the movement of the center of mass as well, causing any initial displacement of the center of mass to be exponentially amplified while the perturbation acts. In contrast to our models for the widths, this is an exact prediction based solely on the GPE, and it shows that the parametric perturbation may also have a disturbing effect in the experiments. On the other hand, a measure of this effect can give us information about the intensity of dissipation and collision effects. All of the preceding statements are based solely on the GPE. In a few words, they include the existence of resonance regions both for the widths and the center of mass, the shape and the location of those regions, and their intensity as a possible measure of damping. The failure of any prediction should be interpreted as a failure of the GPE to describe the condensate. Thus we provide {\em simple} experimental checks with which to perform a quantitative study of the regimes for which the GPE properly describes Bose--Einstein condensates in time dependent traps. Finally, we must mention that throughout this work we have concentrated on regular motion regions in the parameter space. These regions can be ``safely'' reached in the experiments. There are many other cases where chaos appears in the variational equations and complex behavior is seen in the numerical simulations of Eq. (\ref{pura}).
Although the study of those disordered regions could be interesting from the nonlinear science point of view, they seem not to be of interest for Bose--Einstein condensation, since the exponential separation of nearby orbits which is characteristic of chaotic behavior has been shown to induce instabilities and take the system out of the regime where it can be described using the mean field GP equation \cite{Castin2,Castin3}. \acknowledgements This work has been supported in part by the Spanish Ministry of Education and Culture under grants PB95-0389, PB96-0534 and AP97-08930807.
\section{Introduction} \label{sec:intro} This paper involves the study of various types of deletion operations applied to languages accepted by deterministic classes of machines. Deletion operations, such as left and right quotients, and word operations such as prefix, suffix, infix, and outfix, are more commonly studied applied to languages accepted by classes of nondeterministic machines. Indeed, many language families accepted by nondeterministic acceptors form {\em full trios} (closure under homomorphism, inverse homomorphism, and intersection with regular languages), and every full trio is closed under left and right quotient with regular languages, prefix, suffix, infix, and outfix \cite{G75}. For families of languages accepted by deterministic machines, however, the situation is trickier due to the nondeterministic behaviour of the deletion. Indeed, deterministic pushdown automata are not even closed under left quotient with a set of individual letters. Here, most deterministic machine models studied will involve restrictions of one-way deterministic reversal-bounded multicounter machines $(\DCM)$. These are machines that operate like deterministic finite automata with an additional fixed number of counters, where there is a bound on the number of times each counter switches between increasing and decreasing \cite{Baker1974,Ibarra1978}. The family $\DCM(k,l)$ consists of languages accepted by machines with $k$ counters that are $l$-reversal-bounded. $\DCM$ languages have many decidable properties, such as emptiness, infiniteness, equivalence, inclusion, universe, and disjointness \cite{Ibarra1978}. Furthermore, $\DCM(1,l)$ forms an important restriction of deterministic pushdown automata.
These machines have been studied in a variety of different applications, such as membrane computing \cite{counterMembrane}, verification of infinite-state systems \cite{verification,stringTransducers,modelChecking,verificationDiophantine}, and Diophantine equations \cite{verificationDiophantine}. Recently, in \cite{EIMInsertion2015}, a related study was conducted for insertion operations; specifically, operations defined by ideals obtained from the prefix, suffix, infix, and outfix relations, as well as left and right concatenation with languages from different language families. It was found that languages accepted by one-way deterministic machines with one reversal-bounded counter are closed under right concatenation with $\Sigma^*$, but having two 1-reversal-bounded counters and right concatenating $\Sigma^*$ yields languages outside of both $\DCM$ and $2\DCM(1)$ (the languages accepted by two-way deterministic machines with one counter that is reversal-bounded). It also follows from this analysis that the right input end-marker is necessary even for one-way deterministic reversal-bounded counter machines, when there are at least two counters. Furthermore, concatenating $\Sigma^*$ to the left of some one-way deterministic 1-reversal-bounded one-counter languages yields languages that are neither in $\DCM$ nor $2\DCM(1)$. Other recent results on reversal-bounded multicounter languages include a technique for showing that languages are outside of $\DCM$ \cite{Chiniforooshan2012}. Closure properties of nondeterministic counter machines under other types of deletion operations were studied in \cite{parInsDel}. In this paper, we investigate closure properties of these types of deterministic machines under deletion operations. In Section \ref{sec:prelims}, preliminary background and notation are introduced. In Section \ref{sec:closure}, erasing operations under which $\DCM$ is closed are studied.
It is shown that $\DCM$ is closed under right quotient with context-free languages, and that the left quotient of a $\DCM(1,1)$ language by a context-free language is in $\DCM$. Both results generalize to quotients with a variety of different families of languages containing only semilinear languages. In Section \ref{sec:nonclosure}, non-closure of $\DCM$ under erasing operations is studied. It is shown that the set of suffixes, infixes, or outfixes of a $\DCM(1,3)$ or $\DCM(2,1)$ language can be outside of both $\DCM$ and $2\DCM(1)$. In Section \ref{sec:DPDA}, $\DPCM$s (deterministic pushdown automata augmented by reversal-bounded counters) and $\NPCM$s (the nondeterministic variant) are studied. It is shown that $\DPCM$ is not closed under prefix or suffix, and that the right or left quotient of the language accepted by a $1$-reversal-bounded deterministic pushdown automaton by a $\DCM(1,1)$ language can be outside $\DPCM$. In Section \ref{sec:reg}, the effective closure of regular languages with other families is briefly discussed, and in Section \ref{sec:bounded}, bounded languages are discussed. \section{Preliminaries} \label{sec:prelims} The set of non-negative integers is denoted by $\natzero$, and the set of positive integers by $\natnum$. For $c \in \natzero$, let $\pi(c)$ be $0$ if $c=0$, and $1$ otherwise. We assume knowledge of standard formal-language-theoretic concepts such as finite automata, determinism, nondeterminism, semilinearity, and recursive and recursively enumerable languages \cite{Baker1974,HU}. Next, we will give some notation used in the paper. The empty word is denoted by $\lambda$. If $\Sigma$ is a finite alphabet, then $\Sigma^*$ is the set of all words over $\Sigma$ and $\Sigma^+ = \Sigma^* \setminus \set{\lambda}$.
For a word $w \in \Sigma^*$, if $w = a_1 \cdots a_n$ where $a_i \in \Sigma$, $1\leq i \leq n$, the length of $w$ is denoted by $\abs{w}=n$, and the reversal of $w$ is denoted by $w^R = a_n \cdots a_1$; reversal is extended to languages in the natural way. In addition, if $a \in \Sigma$, then $|w|_a$ is the number of $a$'s in $w$. A language over $\Sigma$ is any subset of $\Sigma^*$. Given a language $L\subseteq \Sigma^*$, the complement of $L$, $\Sigma^* \setminus L$, is denoted by $\overline{L}$. Given two languages $L_1,L_2$, the left quotient of $L_2$ by $L_1$ is $L_1^{-1}L_2 = \{ y \mid xy \in L_2, x \in L_1\}$, and the right quotient of $L_1$ by $L_2$ is $L_1 L_2^{-1} = \{x \mid xy \in L_1, y \in L_2\}$. A {\em full trio} is a language family closed under homomorphism, inverse homomorphism, and intersection with regular languages \cite{HU}. Let $n \in \natnum$. Then $Q \subseteq \natzero^n$ is a {\em linear} set if there is a vector $\vec{c} \in \natzero^n$ (the constant vector) and a set of vectors $V = \{\vec{v_1}, \ldots, \vec{v_r} \}, r \geq 0$, each $\vec{v_i} \in \natzero^n$, such that $Q = \{\vec{c} + t_1 \vec{v_1} + \cdots + t_r\vec{v_r} \mid t_1, \ldots, t_r \in \natzero\}$. A finite union of linear sets is called a {\em semilinear set}. A language $L$ is {\em word-bounded}, or simply {\em bounded}, if $L \subseteq w_1^* \cdots w_k^*$ for some $k \ge 1$ and (not necessarily distinct) words $w_1, \ldots, w_k$. Further, $L$ is {\em letter-bounded} if each $w_i$ is a letter. Also, $L$ is {\em bounded-semilinear} if $L \subseteq w_1^* \cdots w_k^*$ and $Q = \{(i_1, \ldots, i_k) ~|~ w_1^{i_1} \cdots w_k^{i_k} \in L \}$ is a semilinear set \cite{boundedSemilin}. We now present notation for common word and language operations used throughout the paper.
\begin{definition} \label{def:opGeneralize} For a language $L \subseteq \Sigma^*$, the prefix, suffix, infix, and outfix operations are defined by: \begin{itemize} \item $\pref(L) = \set{w \ensuremath{\mid} wx \in L, x \in \Sigma^* }$, \item $\suff(L) = \set{w \ensuremath{\mid} xw \in L, x \in \Sigma^* }$, \item $\infx(L) = \set{w \ensuremath{\mid} xwy \in L, x,y \in \Sigma^* }$, \item $\outf(L) = \set{xy \ensuremath{\mid} xwy \in L, w \in \Sigma^* }$. \end{itemize} \end{definition} Note that $\pref(L) = L ( \Sigma^*)^{-1}$ and $\suff(L) = (\Sigma^*)^{-1}L$. The outfix operation has been generalized to the notion of embedding \cite{JKT}: \begin{definition} The $m$-embedding of a language $L \subseteq \Sigma^*$ is the following set: $\emb(L, m) = \{w_0 \cdots w_m \ensuremath{\mid} w_0 x_1 \cdots w_{m-1} x_m w_m \in L$, $w_i \in \Sigma^*, 0 \leq i \leq m, x_j \in \Sigma^*, 1 \leq j \leq m\}$. \end{definition} Note that $\outf(L) = \emb(L, 1)$. A {\em nondeterministic multicounter machine} is a finite automaton augmented by a fixed number of counters. The counters can be increased, decreased, tested for zero, or tested to see if the value is positive. A multicounter machine is {\em reversal-bounded} if there exists a fixed $l$ such that, in every accepting computation, the count on each counter alternates between increasing and decreasing at most $l$ times. Formally, a {\em one-way $k$-counter machine} is a tuple $M = (k,Q,\Sigma, \lhd, \delta, q_0, F)$, where $Q, \Sigma, \lhd, q_0,F$ are respectively the finite set of states, the input alphabet, the right input end-marker, the initial state in $Q$, and the set of final states that is a subset of $Q$. 
The transition function $\delta$ (defined as in \cite{Ibarra1978}, except with only a right end-marker since we only use one-way inputs) is a relation from $Q \times (\Sigma \cup \{\lhd\}) \times \{0,1\}^k$ into $Q \times \{{\rm S},{\rm R}\} \times \{-1, 0, +1\}^k$, such that if $\delta(q,a,c_1, \ldots, c_k)$ contains $(p,d,d_1, \ldots, d_k)$ and $c_i =0$ for some $i$, then $d_i \geq 0$, to prevent negative values in any counter. The direction of the input tape head movement is given by the symbols ${\rm S}$ and ${\rm R}$, for {\em stay} and {\em right} respectively. The machine $M$ is {\em deterministic} if $\delta$ is a partial function. A {\em configuration} of $M$ is a $(k+2)$-tuple $(q, w\lhd , c_1, \ldots, c_k)$, describing the situation where $M$ is in state $q$, with $w\in \Sigma^* $ still to be read as input, and $c_1, \ldots, c_k\in \natzero$ the contents of the $k$ counters. The derivation relation $\vdash_M$ is defined between configurations, where $(q, aw, c_1, \ldots , c_k) \vdash_M (p, w', c_1 + d_1, \ldots, c_k+d_k)$ if $(p, d, d_1, \ldots, d_k) \in \delta(q, a, \pi(c_1), \ldots, \pi(c_k))$, where $d \in \{{\rm S}, {\rm R}\}$, and $w' =aw$ if $d={\rm S}$, and $w' = w$ if $d={\rm R}$. Extended derivations are given by $\vdash^*_M$, the reflexive, transitive closure of $\vdash_M$. A word $w\in \Sigma^*$ is accepted by $M$ if $(q_0, w\lhd, 0, \ldots, 0) \vdash_M^* (q, \lhd, c_1, \ldots, c_k)$ for some $q \in F$ and $c_1, \ldots, c_k \in \natzero$. The language accepted by $M$, denoted by $L(M)$, is the set of all words accepted by $M$. The machine $M$ is $l$-reversal-bounded if, in every accepting computation, the count on each counter alternates between increasing and decreasing at most $l$ times. We denote by $\NCM(k,l)$ the family of languages accepted by one-way nondeterministic $l$-reversal-bounded $k$-counter machines.
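The configuration/derivation relation just defined can be made concrete with a minimal Python sketch (the dictionary encoding of $\delta$ and all names are ours, not the paper's). Here $\delta$ is a partial function from (state, symbol, zero/positive pattern) to (state, move, counter increments), with `<` standing in for the end-marker $\lhd$; for simplicity, the sketch accepts after consuming the end-marker in a final state:

```python
# Simulate a one-way deterministic k-counter machine.
# delta: (state, symbol, tuple of 0/1 flags) -> (state, 'S' or 'R', deltas)

def run(delta, q0, finals, k, w):
    q, pos, cs = q0, 0, [0] * k
    tape = w + "<"                       # '<' plays the end-marker
    while True:
        a = tape[pos]
        key = (q, a, tuple(0 if c == 0 else 1 for c in cs))
        if key not in delta:             # delta is a partial function
            return False
        q, d, ds = delta[key]
        cs = [c + dc for c, dc in zip(cs, ds)]
        if d == "R":                     # 'S' stays on the same cell
            pos += 1
        if pos == len(tape):             # end-marker consumed
            return q in finals

# A DCM(1,1) machine for {a^n b^n | n >= 1}: count a's up, b's down,
# and accept on the end-marker with an empty counter.
delta = {
    ("p", "a", (0,)): ("p", "R", (1,)),
    ("p", "a", (1,)): ("p", "R", (1,)),
    ("p", "b", (1,)): ("q", "R", (-1,)),
    ("q", "b", (1,)): ("q", "R", (-1,)),
    ("q", "<", (0,)): ("f", "R", (0,)),
}
print(run(delta, "p", {"f"}, 1, "aaabbb"))  # True
print(run(delta, "p", {"f"}, 1, "aaabb"))   # False
```

The single counter increases while reading $a$'s and decreases while reading $b$'s, so it makes exactly one reversal, matching the first example given below.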
We denote by $\DCM(k,l)$ the family of languages accepted by one-way deterministic $l$-reversal-bounded $k$-counter machines. The unions of these families are denoted by $\NCM = \bigcup_{k,l \geq 0} \NCM(k,l)$ and $\DCM = \bigcup_{k,l \geq 0} \DCM(k,l)$. Further, $\DCA$ is the family of languages accepted by one-way deterministic one-counter machines (with no reversal bound). We will also sometimes refer to a multicounter machine as being in $\NCM(k,l)$ ($\DCM(k,l)$) if it has $k$ $l$-reversal-bounded counters (and is deterministic). Note that a counter that makes $l$ reversals can be simulated by $\lceil \frac{l+1}{2} \rceil$ 1-reversal-bounded counters. Hence, for example, $\DCM(1,3) \subseteq \DCM(2,1)$. We denote by $\REG$ the family of regular languages, by $\NPDA$ the family of context-free languages, by $\DPDA$ the family of deterministic pushdown languages, by $\DPDA(l)$ the family of languages accepted by $l$-reversal-bounded deterministic pushdown automata (with an upper bound of $l$ on the number of switches between increasing and decreasing the size of the pushdown), by $\NPCM$ the family of languages accepted by nondeterministic pushdown automata augmented by a fixed number of reversal-bounded counters \cite{Ibarra1978}, and by $\DPCM$ the deterministic variant. We also denote by $\TwoDCM$ the family of languages accepted by two-way input, deterministic finite automata (both a left and a right input tape end-marker are required) augmented by reversal-bounded counters, and by $2\DCM(1)$, $2\DCM$ with one reversal-bounded counter \cite{IbarraJiang}. A machine of this form is said to be {\em finite-crossing} if there is a fixed $c$ such that the number of times the boundary between any two adjacent input cells is crossed is at most $c$ \cite{Gurari1981220}. A machine is {\em finite-turn} if the input head makes at most $t$ turns on the input, for some fixed $t$.
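The reversal count, and the $\lceil \frac{l+1}{2} \rceil$ bound just mentioned, can be illustrated with a small sketch (ours, not the paper's): a reversal is a switch between an increasing phase and a decreasing phase of a counter's value trace.

```python
from math import ceil

def reversals(trace):
    # Count switches between increasing and decreasing phases of a
    # counter-value trace; unchanged steps do not affect the direction.
    dirs = [1 if b > a else -1 for a, b in zip(trace, trace[1:]) if a != b]
    return sum(1 for d1, d2 in zip(dirs, dirs[1:]) if d1 != d2)

trace = [0, 1, 2, 1, 0, 1, 2, 1, 0]   # up, down, up, down
l = reversals(trace)
print(l)                   # 3
print(ceil((l + 1) / 2))   # 2: two 1-reversal counters suffice, so DCM(1,3) ⊆ DCM(2,1)
```

Intuitively, each of the $\lceil \frac{l+1}{2} \rceil$ 1-reversal-bounded counters handles one up-down phase of the original counter, with the running value handed off from one counter to the next at each second reversal.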
Also, $2\NCM$ is the family of languages accepted by two-way nondeterministic machines with a fixed number of reversal-bounded counters, while $2\DPCM$ is the family of languages accepted by two-way deterministic pushdown machines augmented by a fixed number of reversal-bounded counters. We give four examples below to illustrate the workings of reversal-bounded counter machines. \begin{example} Let $L = \{a^nb^n \mid n \ge 1\}$. This language can be accepted by a $\DCM(1,1)$ machine that reads $a^n$ and stores $n$ in the counter, and then reads the $b$'s while decrementing the counter. It accepts if the counter becomes zero exactly at the end of the input. \end{example} \begin{example} Let $L_k = \{ x_1 \# \cdots \# x_k \mid x_i \in \{a,b\}^+, x_i \ne x_j$ for $i \ne j \}$. This can be accepted by an $\NCM(k(k+1)/2, 1)$, where we identify counter $i$ by the name $C_i$, for $ 1 \leq i \leq k(k+1)/2$. The machine $M_k$ operates by reading the input and verifying, for $1 \le i < j \le k$, that $x_i$ and $x_j$ differ in at least one position or have different lengths, which is easy to check using the counters. To verify that they differ in at least one position, while scanning $x_i$, $M_k$ stores in counter $C_i$ a nondeterministically guessed position $p_i$ of $x_i$ and records in the state the symbol $a_{p_i}$ in that location. Then, when scanning $x_j$, $M_k$ stores in counter $C_j$ a guessed location $p_j$ of $x_j$ and records in the state the symbol $a_{p_j}$ in that location. At the end of the input, on stay transitions, $M_k$ checks that $a_{p_i} \ne a_{p_j}$ and $p_i = p_j$ (by decrementing counters $C_i$ and $C_j$ simultaneously and verifying that they become zero at the same time).
\end{example} \begin{example} The language $L = \{x x^R \mid x \in \{a,b\}^+, |x|_a > |x|_b \}$, which is not context-free, can be accepted by an $\NPCM(2,1)$ machine $M$ (with counters labelled $C_1$ and $C_2$), which operates as follows. It scans the input and uses the pushdown to check that the input is of the form $x x^R$, by guessing the middle of the string, pushing $x$, and popping $x$ while reading $x^R$. In parallel, $M$ uses the two counters $C_1$ and $C_2$ to store the numbers of $a$'s and $b$'s it encounters. Then, at the end of the input, on stay transitions, $M$ decrements $C_1$ and $C_2$ simultaneously and verifies that the content of $C_1$ is larger than that of $C_2$. Clearly the counters are 1-reversal-bounded. Note that the language $L' = \{x \#x^R \mid x \in \{a,b\}^+, |x|_a > |x|_b \}$, where $\#$ is a new symbol, is in $\DPCM(2,1)$. \end{example} \begin{example} $L = \{a^mb^n \mid m, n \ge 1, m \mbox{~is divisible by~} n\}$ can be accepted by a $2\DCM(1)$ machine, which reads $a^m$ and stores $m$ in the counter. It then checks that $m$ is divisible by $n$ by making left-to-right and right-to-left sweeps over $b^n$ while decrementing the counter. \end{example} We note that each of $\NCM(k,l), \NPCM(k,l), \NCM, \NPCM, \NPDA, \REG$ (for each $k,l$) forms a full trio (discussed in Section \ref{sec:intro}) \cite{Ibarra1978,G75}, and is therefore immediately closed under left and right quotient with regular languages, prefix, suffix, infix, and outfix. The next result, proved in \cite{boundedSemilin}, gives examples of weak and strong machine models that are equivalent over word-bounded languages. \begin{theorem} \cite{boundedSemilin} \label{WWW} The following are equivalent for every word-bounded language $L$: \begin{enumerate} \item $L$ can be accepted by an $\NCM$. \item $L$ can be accepted by an $\NPCM$. \item $L$ can be accepted by a finite-crossing $2\NCM$. \item $L$ can be accepted by a $\DCM$. \item $L$ can be accepted by a finite-turn $2\DCM(1)$.
\item $L$ can be accepted by a finite-crossing $2\DPCM$. \item $L$ is bounded-semilinear. \end{enumerate} \end{theorem} \noindent We also need the following result from \cite{IbarraJiang}: \begin{theorem} \cite{IbarraJiang} \label{unary} Let $L \subseteq a^*$ be accepted by a $2\NCM$ (not necessarily finite-crossing). Then $L$ is regular and, hence, semilinear. \end{theorem} \section{Closure of $\DCM$ Under Erasing Operations} \label{sec:closure} First, we discuss the left quotient of $\DCM$ languages by finite sets. \begin{proposition} $\DCM$ is closed under left quotient with finite languages. \end{proposition} \begin{proof} It is clear that $\DCM$ is closed under left quotient with a single word. The result then follows from the closure of $\DCM$ under union \cite{Ibarra1978}. \qed \end{proof} This is in contrast to $\DPDA$, which is not even closed under left quotient with sets containing multiple letters. Indeed, the language $\{\#a^n b^n \mid n > 0\} \cup \{\$ a^n b^{2n} \mid n> 0\}$ is a $\DPDA$ language, but taking its left quotient by $\{\$,\#\}$ produces a language which is not a $\DPDA$ language \cite{GinsburgDPDAs}. Next, we show the closure of $\DCM$ under right quotient with languages accepted by any nondeterministic reversal-bounded multicounter machine, even when augmented with a pushdown store. \begin{proposition} \label{fullQuotientClosure} Let $L_1 \in \DCM$ and let $L_2 \in \NPCM$. Then $\rquot{L_1}{L_2} \in \DCM$. \end{proposition} \begin{proof} Consider a $\DCM$ machine $M_1=(k_1,Q_1, \Sigma, \lhd, \delta_1, s_0, F_1)$ and an $\NPCM$ machine $M_2$ over $\Sigma$ with $k_2$ counters, where $L(M_1) = L_1$ and $L(M_2) = L_2$. A $\DCM$ machine $M'$ will be constructed accepting $\rquot{L_1}{L_2}$. Let $\Gamma = \{a_1, \ldots, a_{k_1}\}$ be new symbols. First, intermediate $\NPCM$ machines will be built, one for each state $q$ of $M_1$; the intuition behind them is described first.
The machine built for state $q$ will accept all counter values (encoded over $\Gamma$) of the form $a_1^{p_1} \cdots a_{k_1}^{p_{k_1}}$ for which there exists some $x \in \Sigma^*$ such that: \begin{itemize} \item $M_1$ accepts $x$ starting from state $q$ and counter values $(p_1, \ldots, p_{k_1})$, and \item $M_2$ accepts $x$.\end{itemize} The key point is that these machines all accept bounded languages, and therefore Theorem \ref{WWW} applies: each can be effectively converted to a $\DCM$ machine. From these, our final deterministic machine $M'$ can be built by simulating $M_1$ until it hits the end-marker $\lhd$ in some state $q$. It then deterministically feeds the {\it values that remain in the counters} to the intermediate $\DCM$ machine constructed for state $q$, accepting if this intermediate machine does. Indeed, it will therefore accept if and only if, from these counter values and state, $M_1$ can continue to acceptance on some word $x$, and $x$ is also in $L_2$. Hence, the deterministic simulation of the intermediate machines is the key. Formally, for each $q \in Q_1$, let $M_c(q)$ be an intermediate $\NPCM$ machine over $\Gamma$ with $k_1+k_2$ counters (plus a pushdown), constructed as follows: on input $a_1^{p_1} \cdots a_{k_1}^{p_{k_1}}$, $M_c(q)$ increments the first $k_1$ counters to $(p_1, \ldots, p_{k_1})$. Then $M_c(q)$ nondeterministically guesses a word $x\in \Sigma^*$ (using only stay transitions of $M_c(q)$, simulating transitions on each letter of $x$ nondeterministically, one letter at a time) and simulates $M_1$ on $x\lhd$ starting from state $q$ and from the counter values $(p_1, \ldots, p_{k_1})$ using the first $k_1$ counters, while in parallel simulating $M_2$ on $x$ using the next $k_2$ counters and the pushdown. This is akin to the product automaton construction described in \cite{Ibarra1978} showing that $\NPCM$ is closed under intersection with $\NCM$. Then $M_c(q)$ accepts if both $M_1$ and $M_2$ accept.
\begin{claim} Let $ L_c(q) = \{ a_1^{p_1} \cdots a_{k_1}^{p_{k_1}} \ensuremath{\mid} \exists x \in L_2 \mbox{~such that ~} (q, x\lhd, p_1, \ldots, p_{k_1}) \vdash_{M_1}^* (q_f, \lhd, p'_1, \ldots , p'_{k_1}) , p'_i \geq 0, 1 \leq i \leq k_1, q_f \in F_1 \}$. Then $L(M_c(q)) = L_c(q)$. \end{claim} \begin{proof} Consider $w = a_1^{p_1} \cdots a_{k_1}^{p_{k_1}} \in L_c(q)$. Then there exists $x$ where $x \in L_2$ and $(q, x\lhd, p_1, \ldots, p_{k_1}) \vdash_{M_1}^* (q^1_f, \lhd, p'_1, \ldots , p'_{k_1})$, where $q^1_f \in F_1$. There must then be some final state $q^2_f \in F_2$ (the final state set of $M_2$) reached when reading $x\lhd$ in $M_2$. Then, $M_c(q)$, on input $w$, places $(p_1, \ldots, p_{k_1}, 0, \ldots, 0)$ on the counters and then can nondeterministically guess $x$ letter-by-letter, simulating $x$ in $M_1$ from state $q$ on the first $k_1$ counters and simulating $x$ in $M_2$ from its initial configuration on the remaining counters and pushdown. Then $M_c(q)$ ends up in state $(q^1_f, q_f^2)$, which is final. Hence, $w \in L(M_c(q))$. Conversely, consider $w = a_1^{p_1} \cdots a_{k_1}^{p_{k_1}} \in L(M_c(q))$. After adding each $p_i$ to counter $i$, $M_c(q)$ guesses $x$ and simulates $M_1$ on the first $k_1$ counters from $q$, and simulates $M_2$ on the remaining counters and the pushdown from an initial configuration. It follows that $x \in L_2$ and $(q, x\lhd, p_1, \ldots, p_{k_1}) \vdash_{M_1}^* (q_f^1, \lhd, p'_1, \ldots , p'_{k_1}), p_i' \geq 0, 1 \leq i \leq k_1, q_f^1 \in F_1$. Hence, $w \in L_c(q)$. \qed \end{proof} Since for each $q \in Q_1$, $M_c(q)$ is in $\NPCM$, it accepts a semilinear language \cite{Ibarra1978}, and since the accepted language is bounded, it is bounded-semilinear and can therefore be accepted by a $\DCM$ machine by Theorem \ref{WWW}. Let $M_c'(q)$ be this $\DCM$ machine, and let $k'$ be the maximum number of counters of any $\DCM$ machine $M_c'(q), q \in Q_1$.
Thus, a final $\DCM$ machine $M'$ with $k_1+k'$ counters is built as follows. $M'$ has $k_1$ counters used to simulate $M_1$, and also $k'$ additional counters, used to simulate some $M_c'(q)$, $q\in Q_1$. Then, $M'$ reads its input $x\lhd$, where $x\in \Sigma^*$, while simulating $M_1$ on the first $k_1$ counters, either failing, or reaching some configuration $(q, \lhd, p_1, \ldots, p_{k_1})$, for some $q\in Q_1$, upon first hitting the end-marker $\lhd$. If it does not fail, it then simulates the $\DCM$ machine $M_c'(q)$ on input $a_1^{p_1} \cdots a_{k_1}^{p_{k_1}}$; this simulation is done deterministically by subtracting $1$ from the first $k_1$ counters, in order, until each is zero, instead of reading input characters, and $M'$ accepts if $a_1^{p_1} \cdots a_{k_1}^{p_{k_1}} \in L(M_c'(q))= L_c(q)$.
Then $M'$ is deterministic, and accepts \begin{eqnarray*} && \{ x \mid \begin{array}[t]{l} \mbox{either~} (s_0, x\lhd, 0, \ldots, 0) \vdash_{M_1}^* (q', a\lhd, p_1', \ldots, p_{k_1}') \vdash_{M_1} (q, \lhd, p_1, \ldots, p_{k_1}), \\ a\in \Sigma, \mbox{~or~} (s_0, x\lhd, 0, \ldots, 0) = (q, \lhd, p_1, \ldots, p_{k_1}), \mbox{~s.t.~} a_1^{p_1} \cdots a_{k_1}^{p_{k_1}} \in L_c(q)\} \end{array} \\ & = & \{x \mid \begin{array}[t]{l} \mbox{either~} (s_0, x\lhd, 0, \ldots, 0) \vdash_{M_1}^* (q', a\lhd, p_1', \ldots, p_{k_1}') \vdash_{M_1} (q, \lhd, p_1, \ldots, p_{k_1}), \\ a \in \Sigma, \mbox{~or~} (s_0, x\lhd, 0, \ldots, 0) = (q, \lhd, p_1, \ldots, p_{k_1}), \mbox{~where~} \exists y \in L_2 \mbox{~s.t.~} \\ (q,y\lhd, p_1, \ldots, p_{k_1}) \vdash_{M_1}^* (q_f, \lhd, p_1'', \ldots, p_{k_1}''), q_f \in F_1\} \end{array} \\ & = & \{x \mid xy \in L_1, y \in L_2\} \\ & = & L_1 L_2^{-1}. \end{eqnarray*} \qed \end{proof} This immediately shows closure for the prefix operation. \begin{corollary} \label{closedprefix} If $L \in \DCM$, then $\pref(L) \in \DCM$. \end{corollary} We can modify this construction to show a strong closure result for one-counter languages that does not increase the number of counters. \begin{proposition} \label{rightquotientwithNPCM} Let $l \in \natnum$. If $L_1 \in \DCM(1,l)$ and $L_2 \in \NPCM$, then $\rquot{L_1}{L_2} \in \DCM(1,l)$. \end{proposition} \begin{proof} The construction is similar to the one in Proposition \ref{fullQuotientClosure}. However, we note that since the input machine for $L_1$ has only one counter, $L_c(q)$ is unary (regardless of the number of counters needed for $L_2$). Thus $L_c(q)$ is unary and semilinear, and Parikh's theorem states that all semilinear languages are letter-equivalent to regular languages \cite{harrison1978}, and all unary semilinear languages are regular. Thus $L_c(q)$ is regular, and can be accepted by a DFA. 
We can then construct $M'$ accepting $\rquot{L_1}{ L_2}$ as in Proposition \ref{fullQuotientClosure}, without requiring any additional counters or counter reversals, by transitioning to the DFA accepting $L_c(q)$ when we reach the end of the input in state $q$. \qed \end{proof} \begin{corollary} \label{closedprefixDCM1} Let $l \in \natnum$. If $L \in \DCM(1,l)$, then $\pref(L) \in \DCM(1,l)$. \end{corollary} In fact, the constructions of Propositions \ref{fullQuotientClosure} and \ref{rightquotientwithNPCM} can be generalized from $\NPCM$ to any class of automata satisfying Definition \ref{counteraugmentable}. A condition describing such classes of automata is discussed in more detail in \cite{Harju2002278}; we define the condition below only in the form needed in this paper. Only the first two conditions are required for Corollary \ref{generalizedSemilinear}, while the third is required for Corollary \ref{evenMoreGeneralSemilinear}. \begin{definition} \label{counteraugmentable} A family of languages $\mathscr{F}$ is said to be {\em reversal-bounded counter augmentable} if \begin{itemize} \item every language in $\mathscr{F}$ is effectively semilinear, \item given a $\DCM$ machine $M_1$ with $k$ counters, state set $Q$, and final state set $F$, and $L_2 \in \mathscr{F}$, we can effectively construct, for each $q\in Q$, the following language in $\mathscr{F}$: $$\{ a_1^{p_1} \cdots a_k^{p_k} \ensuremath{\mid} \begin{array}[t]{l} \exists x \in L_2 \mbox{~such that ~} (q, x\lhd, p_1, \ldots, p_k) \vdash_{M_1}^* (q_f, \lhd, p'_1, \ldots, p'_k), \\ p'_i \geq 0, 1 \leq i \leq k, q_f \in F \}, \end{array}$$ \item given a $\DCM$ machine $M_1$ with $k$ counters, state set $Q$, initial state $q_0$, and $L_2 \in \mathscr{F}$, we can effectively construct, for each $q\in Q$, the following language in $\mathscr{F}$: $$\{ a_1^{p_1} \cdots a_k^{p_k} \ensuremath{\mid} \exists x \in L_2 \mbox{~such that ~} (q_0, x, 0, \ldots,0) \vdash_{M_1}^* (q, \lambda, p_1, \ldots, p_k)\}.$$ \end{itemize} \end{definition} \begin{corollary} \label{generalizedSemilinear} Let $L_1 \in \DCM$ and $L_2 \in \mathscr{F}$, a family of languages that is reversal-bounded counter augmentable. Then $\rquot{L_1 }{ L_2} \in \DCM$. Furthermore, if $L_1 \in \DCM(1,l)$ for some $l \in \natnum$, then $\rquot{L_1 }{ L_2} \in \DCM(1,l)$. \end{corollary} There are many reversal-bounded counter augmentable families from which $L_2$ could be drawn in this corollary, such as: \begin{itemize} \item $\MPCA$'s: one-way machines with $k$ pushdowns where values may only be popped from the first non-empty stack, augmented by a fixed number of reversal-bounded counters \cite{Harju2002278}. \item $\TCA$'s: nondeterministic Turing machines with a one-way read-only input and a two-way read-write tape, where the number of times the read-write head crosses any tape cell is finitely bounded, again augmented by a fixed number of reversal-bounded counters \cite{Harju2002278}. \item $\QCA$'s: $\NFA$'s augmented with a queue, where the number of alternations between the non-deletion phase and the non-insertion phase is bounded by a constant, augmented by a fixed number of reversal-bounded counters \cite{Harju2002278}. \item $\EPDA$'s: embedded pushdown automata, modelled around a stack of stacks, introduced in \cite{Vijayashanker:1987:STA:913947}, augmented by a fixed number of reversal-bounded counters. These accept the languages of tree-adjoining grammars, a semilinear subset of the context-sensitive languages. As was stated in \cite{Harju2002278}, we can augment this model with a fixed number of reversal-bounded counters and still obtain an effectively semilinear family. \end{itemize} Finally, the construction of Proposition \ref{fullQuotientClosure} can be used to show that deterministic one-counter languages (with no reversal bound) are closed under right quotient with $\NCM$ languages. \begin{proposition} Let $L_1 \in \DCA$, and let $L_2 \in \NCM$. Then $L_1 L_2^{-1} \in \DCA$.
\end{proposition} \begin{proof} Again, the construction is similar to that of Proposition \ref{fullQuotientClosure}. However, since the input machine for $L_1$ has only one counter, $L_c(q)$ is unary (regardless of the number of counters needed for $L_2$). Then $L_c(q)$ is indeed a unary $\NPCM$ language, as $M_c(q)$ simulates $M_1$, this time using the unrestricted pushdown to simulate the potentially non-reversal-bounded counter of $M_1$, while simulating $M_2$ on the reversal-bounded counters. Thus, because $\NPCM$ machines accept only semilinear languages \cite{Ibarra1978}, $L_c(q)$ is in fact a regular language and can be accepted by a DFA. $M'$ can then be constructed to accept $L_1 L_2^{-1}$ without requiring any additional counters or counter reversals, by transitioning to the DFA accepting $L_c(q)$ when we reach the end of the input in state $q$. \qed \end{proof} Next, for the case of one-counter machines that make only one counter reversal, it will be shown that a $\DCM$ machine accepting their suffix and infix languages can always be constructed. However, the resulting machines can require more than one counter. Thus, unlike for prefix, $\DCM(1,1)$ is not closed under suffix, left quotient, or infix; but the result is always in $\DCM$. As the proof is quite lengthy, we will first give some intuition for the result. First, $\DCM$ is closed under union \cite{Ibarra1978} (following from closure under intersection and complement), and so the second statement of Proposition \ref{leftquotientwithNPCM} follows from the first. For the first statement, an intermediate $\NPCM$ machine is constructed from $L_1$ and $L$ that accepts a language $L^c$ (here, $c$ is a superscript label rather than an exponent). This language contains words of the form $qa^i$ for which there exists some word $w$ such that $w \in L_1$ and, from the initial configuration of $M$ (the machine accepting $L$), $M$ can read $w$ and reach state $q$ with $i$ on the counter.
Then, it is shown that this language is actually regular, using the fact that all unary semilinear languages are regular. Then, $\DCM(1,1)$ machines are created for every state $q$ of $M$. These accept all words $w$ such that $qa^i \in L^c$ and, in $M$, from state $q$ and counter value $i$ with $w$ left to read as input, $M$ can reach a final state while emptying the counter. The fact that $L^c$ is regular allows these machines to be created. \begin{proposition} \label{leftquotientwithNPCM} Let $L \in \DCM(1,1), L_1 \in \NPCM$. Then $L_1^{-1} L$ is a finite union of languages in $\DCM(1,1)$. Furthermore, it is in $\DCM$. \end{proposition} \begin{proof} For the first statement, let $M_1$ be an $\NPCM$ machine accepting $L_1$, and let $M = (1,Q,\Sigma, \lhd, \delta, q_0,F)$ be a 1-reversal-bounded, 1-counter machine accepting $L$. Next, we will argue that it is possible to assume, without loss of generality, that $M$ has the following form: \begin{enumerate} \item $Q = Q\down \cup Q\up$, \item for all $q \in Q\down$, all transitions defined on $q$ either decrease the counter or keep it the same, \item for all $q \in Q\up$, all transitions defined on $q$ either increase the counter or keep it the same, \item the sequence of states $p_0 p_1 \cdots p_n$, $n \geq 0$, $p_0 = q_0$, traversed in every computation from the initial configuration satisfies $p_0 \cdots p_i \in Q\up^*, p_{i+1} \cdots p_n \in Q\down^*, 0 \leq i \leq n$, with the transition from $p_{i+1}$ to $p_{i+2}$ being the (first, if it exists) decreasing transition, \item for all states $q \in Q\down$, all stay transitions defined on $q$ (except on $\delta(q,\lhd,0)$) change the counter, \item $\delta(q,d,1)$ is defined for all $q \in Q, d\in \Sigma$, \item the counter always empties before accepting.
\end{enumerate} Indeed, it is possible to transform a $\DCM(1,1)$ machine of the form of $M$ into another $\DCM(1,1)$ machine $\bar{M} = (1,\bar{Q}, \Sigma, \lhd, \bar{\delta}, q_0, \bar{F})$ satisfying the first four conditions, as follows. First, let $Q\up = Q$ and $Q\down = \{q' \mid q \in Q\}$ (primed copies of the state set), $\bar{Q} = Q\up \cup Q\down$, and $\bar{F} = F \cup \{q_f' \mid q_f \in F\}$. Then, for all transitions that either decrease the counter or keep it the same, $(p,T, j) \in \delta(q,c,i), i \in \{0,1\}, j \in \{0, -1\}$, instead create the transition $(p',T, j) \in \bar{\delta}(q',c,i)$. Further, for all transitions of $\delta$ that either increase the counter or keep it the same, keep this transition in $\bar{\delta}$. Then, the first three conditions are satisfied. Then, for all those decreasing transitions $(p,T, -1) \in \delta(q,c,1)$, add in $(q', {\rm S}, 0) \in \bar{\delta}(q,c,1)$. Therefore, condition four is satisfied. It is clear that $L(\bar{M}) = L(M)$, as any sequence of transitions with at most one counter reversal (all accepting computations have at most one reversal) can traverse the same transitions (using states in $Q\up$) until the first decreasing transition, at which point only the new stay transitions from $Q\up$ to $Q\down$ are defined, and then the computation continues with transitions on $Q\down$. This machine can be further transformed into one accepting the same language and additionally satisfying conditions 5, 6, and 7, since any sequence of stay transitions that does not change the counter can be ``skipped over'' to reach either a right transition or a decreasing transition, a ``dead state'' can be added to satisfy condition 6, and the states can enforce condition 7. Therefore, assume without loss of generality that $M$ satisfies these conditions. This will simplify the rest of the construction.
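The $Q\up$/$Q\down$ normalization just described can be sketched concretely. The following Python fragment is illustrative only (the dictionary encoding of $\delta$ and all names are ours, not the paper's): transitions are keyed by (state, symbol, zero-or-positive flag), and the down-copy uses primed state names.

```python
# Sketch of the up/down normal-form transformation described above.
# delta: (state, symbol, counter_positive) -> (state, move, j), j in {-1, 0, +1}.

def normal_form(delta, finals):
    bar = {}
    for (q, c, i), (p, move, j) in delta.items():
        if j >= 0:                       # non-decreasing: keep on the unprimed (up) copy
            bar[(q, c, i)] = (p, move, j)
        if j <= 0:                       # non-increasing: re-create on the primed (down) copy
            bar[(q + "'", c, i)] = (p + "'", move, j)
        if j == -1:                      # replace each decreasing up-transition by a
            bar[(q, c, i)] = (q + "'", "S", 0)   # stay-move bridging into the down copy
    return bar, finals | {f + "'" for f in finals}

# The DCM(1,1) machine for {a^n b^n | n >= 1} from the earlier example:
delta = {
    ("p", "a", 0): ("p", "R", +1),
    ("p", "a", 1): ("p", "R", +1),
    ("p", "b", 1): ("q", "R", -1),
    ("q", "b", 1): ("q", "R", -1),
    ("q", "<", 0): ("f", "R", 0),
}
bar, finals = normal_form(delta, {"f"})
# The first decrease now goes up-state -> primed down-state via a stay move,
# and the actual decrements run entirely on the primed states.
```

As in the proof, transitions that keep the counter unchanged appear in both copies, so determinism and the accepted language are preserved while every decreasing transition is confined to $Q\down$.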
Next, we create an $\NPCM$ machine $M'$ that accepts $$L^c = \{qa^i \mid \exists w \in L_1, (q_0, w, 0) \vdash_M^* (q,\lambda, i)\},$$ where $a$ is a new symbol not in $\Sigma$. Indeed, $M'$ operates by nondeterministically guessing a word $w\in \Sigma^*$, letter-by-letter, and simulating in parallel (using stay transitions) the $\NPCM$ machine $M_1$ using the pushdown and a set of counters, as well as simulating $M$ on $w$ on an additional counter. Then, after reading the last letter of the guessed $w$, $M'$ verifies that the simulated machine $M$ is in state $q$ (reading the state $q$ as part of the input), and verifies that the simulated counter of $M$ contains $i$, matching the input. Then, it verifies that $w$ is in $L_1$ by continuing the simulation of $M_1$ on the end-marker. Furthermore, for each $q \in Q$, the set $q^{-1}L^c$ is a unary $\NPCM$ language (as discussed in Section \ref{sec:prelims}, $\NPCM$ is closed under left quotient with regular languages). Indeed, every $\NPCM$ language is semilinear \cite{Ibarra1978}, and it is also known that every unary semilinear language is regular \cite{harrison1978} and effectively constructible. Thus, $L^c = \bigcup_{q\in Q}(q(q^{-1}L^c))$ is regular as well. Let $M^c = (Q^c, Q \cup\{a\}, \delta^c, s_0^c, F^c)$ be a DFA accepting $L^c$. Assume without loss of generality that $M^c$ is a complete DFA. For the remainder of the proof, the layout will proceed by creating three sets of $\DCM(1,1)$ machines and languages as follows: \begin{enumerate} \item $M_0^q$, for all $q\in Q$, and $L_0^q = L(M_0^q)$. We will construct it such that \begin{equation} L_0^q= \{w \mid (q,w\lhd,0) \vdash_M^* (q_f, \lhd, 0), q_f \in F, qa^0 = q \in L^c\}. \label{bigequation1} \end{equation} \item $M^q\up$, for all $q\in Q\up$, and $L^q\up = L(M^q\up)$. We will construct it such that \begin{equation} L^q\up= \{w \mid \exists i >0, (q,w\lhd,i) \vdash_M^* (q_f, \lhd, 0), q_f \in F, qa^i \in L^c\}.
\label{bigequation2} \end{equation} \item $M^q\down$, for all $q\in Q\down$, and $L^q\down = L(M^q\down)$. We will construct it such that \begin{equation} L^q\down= \{w \mid \exists i > 0, (q,w\lhd,i) \vdash_M^* (q_f, \lhd, 0), q_f \in F, qa^i \in L^c\}. \label{bigequation3} \end{equation} \end{enumerate} It is clear that $$L_1^{-1}L(M) = \bigcup_{q\in Q}L_0^q \cup \bigcup_{q\in Q\up}L^q\up \cup \bigcup_{q\in Q\down}L^q\down,$$ and thus it suffices to build the $\DCM(1,1)$ machines and show that Equations (\ref{bigequation1}), (\ref{bigequation2}), and (\ref{bigequation3}) hold. We will do this with each type next. First, for (\ref{bigequation1}), construct $M_0^q$ for $q\in Q$ as follows: $M_0^q$ operates just like $M$ starting at state $q$ if $q \in L^c$, and if $q \notin L^c$, then it accepts $\emptyset$. Hence, (\ref{bigequation1}) is true. Next, we will show (\ref{bigequation3}) is true. It will be shown that $L^q\down$ is a regular language. Then the construction and proof of correctness of (\ref{bigequation3}) will be used within the proof and construction of (\ref{bigequation2}). A slight generalization of (\ref{bigequation3}) will be used in order to accommodate its use for (\ref{bigequation2}). Despite the languages being regular, $\NCM(1,1)$ machines will be built to accept them, but without using the counter (for consistency of notation, and to use nondeterminism). It is immediate that such machines accept regular languages, which can be converted to NFAs, then to DFAs, and then to $\DCM(1,1)$ machines that do not use the counter. Intuitively, each $\NCM(1,1)$ machine (for each $q \in Q\down$) will simulate $M$ starting at state $q$, but then only non-increasing transitions can be used, as only transitions on $Q\down$ can be reached from $q$. However, instead of decreasing the counter, the $\NCM(1,1)$ machine simulates the DFA $M^c$ in parallel, and reads a single $a$ for every decrease of the simulated computation of $M$.
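The trick of trading counter decrements for DFA steps on $a$ can be illustrated with a small brute-force search. The sketch below is a toy illustration only (the encodings and function names are hypothetical, and every transition is assumed to read one input symbol): instead of starting with an unknown counter value $i$, the search steps a DFA for $\{a^i \mid q'a^i \in L^c\}$ once per simulated decrement, and nondeterministically commits to ``the counter is now zero'' whenever that DFA is in a final state.

```python
def exists_valid_counter(word, step_down, step_zero, finals, q0,
                         dfa_delta, dfa_finals, dfa_start):
    """Is there some i > 0 such that the unary DFA accepts a^i and the
    down-phase machine, started in q0 with counter i, consumes `word`
    and ends in a final state with an empty counter?
    Toy encoding: step_down[(q, d)] = (q', dec) on a positive counter,
    step_zero[(q, d)] = q' on an empty counter; all moves read a symbol."""
    def go(mode, q, s, pos):
        if pos == len(word):
            return mode == "zero" and q in finals
        d = word[pos]
        if mode == "zero":
            p = step_zero.get((q, d))
            return p is not None and go("zero", p, None, pos + 1)
        t = step_down.get((q, d))
        if t is None:
            return False
        p, dec = t
        if not dec:
            return go("pos", p, s, pos + 1)
        s2 = dfa_delta[s]  # one simulated decrement = one DFA step on 'a'
        # nondeterministic choice: commit to counter == 0 now (only
        # allowed if the DFA accepts, i.e. the guessed i matches),
        # or keep the counter positive and continue decrementing
        return (s2 in dfa_finals and go("zero", p, None, pos + 1)) \
            or go("pos", p, s2, pos + 1)
    return go("pos", q0, dfa_start, 0)

# toy down-phase machine: one state "q"; each letter b decrements,
# so b^n is accepted from counter value i exactly when n == i
STEP_DOWN = {("q", "b"): ("q", 1)}
STEP_ZERO = {}
# DFA over {a} accepting {a^i | i even, i > 0}: s0 -a-> s1 <-a-> s2
DFA = {"s0": "s1", "s1": "s2", "s2": "s1"}
```

With these toy tables, `exists_valid_counter` accepts exactly the words $b^i$ with $i$ even and positive.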
If the simulated computation of $M^c$ is in a final state, then the counter of the simulated computation of $M$ could be zero at that configuration. But the simulated computation of $M$ may only accept from configurations with larger counter values (depending on the remaining sequence of transitions). Thus, the new machine uses nondeterminism to try every possible configuration where zero could occur on the counter, trying each to see if the rest of the input accepts (by directly simulating $M$). We will give the construction here of the intermediate $\NCM(1,1)$ machines that do not use the counter, then the proof of correctness of the construction. All the machines $\overline{M^{q,q'}\down}\in \NCM(1,1)$, for each $q \in Q\down, q' \in Q$, will have the same input alphabet, states, transitions, and final states, with only the initial state differing. Formally, let $q \in Q\down, q' \in Q, q_0^c = \hat{\delta}^c(s_0^c,q')$. Then $\overline{M^{q,q'}\down} = (1,P\down, \Sigma,\lhd,\delta\down, s^{q,q'}\down,F\down)$, where $P\down = (Q\times Q^c) \cup Q\down, s^{q,q'}\down = (q,q_0^c), F\down = F$.
The transitions of $\delta\down$ are created (none using the counter) by the following algorithm: \begin{enumerate} \item\label{itm:subtypeA} For all transitions $(p,{\rm S},-1) \in \delta(r,d,1), p,r \in Q\down, d \in \Sigma \cup \{\lhd\}$, and all $r^c \in Q^c$, create $$((p,\delta^c(r^c,a)),{\rm S}, 0) \in \delta\down((r,r^c),d,0),$$ and if $\delta^c(r^c,a) \in F^c$, create $$(p,{\rm S},0) \in \delta\down((r,r^c),d,0).$$ \item\label{itm:subtypeB} For all transitions $(p,{\rm R},0) \in \delta(r,d,1), p,r \in Q\down, d \in \Sigma$, and all $r^c \in Q^c$, create $$((p,r^c),{\rm R},0) \in \delta\down((r,r^c),d,0).$$ \item\label{itm:subtypeC} For all transitions $(p,{\rm R}, -1) \in \delta(r,d,1), p,r \in Q\down, d \in \Sigma$, and all $r^c \in Q^c$, create $$((p,\delta^c(r^c,a)),{\rm R},0) \in \delta\down((r,r^c),d,0),$$ and if $\delta^c(r^c,a) \in F^c$, create $$(p,{\rm R},0) \in \delta\down((r,r^c),d,0).$$ \item For all transitions $(p,{\rm R},0) \in \delta(r,d,0), p,r \in Q\down, d \in \Sigma$, create $$(p,{\rm R},0) \in \delta\down(r,d,0).$$ \item For all transitions $(p,{\rm S},0) \in \delta(r,\lhd,0), p,r \in Q\down$, create $$(p,{\rm S},0) \in \delta\down(r,\lhd,0).$$ \end{enumerate} The states of the machine consist of ordered pairs in $Q \times Q^c$, in addition to states of $Q\down$, to allow the simulation of $M$ and $M^c$ in parallel until the simulated computation of $M^c$ reaches a final state, at which point it can (optionally) switch to only simulating $M$ on an empty counter. Transitions created in step 1 simulate all decreasing transitions that stay on the input, by reading an $a$ in the simulated computation of $M^c$ to simulate the decrease; and if the simulated computation of $M^c$ can reach a final state, the simulated computation of $M^c$ can optionally end. Transitions created in step 2 simulate all transitions of $M$ defined on a positive counter that move right on the input but do not change the counter (thus not changing the state of $Q^c$).
Transitions created in step 3 simulate all transitions of $M$ that move right on the input and decrease the counter, by $M^c$ reading $a$ and optionally ending. Transitions created in steps 4 and 5 simulate those of $M$ defined on an empty counter verbatim. Intuitively, the next claim demonstrates that it is possible to simulate $M$ starting at $q$ and counter value $i$, whereby $q' a^i \in L^c$, as follows: first it simulates the computation of $M$ starting at $q$ using the first component, and each decrease in counter reads $a$ from the simulated computation of $M^c$. This continues until the counter reaches zero after $i$ decreasing transitions, at which point the simulated computation of $M^c$ is in a final state. Then, the simulation of $M$ can continue verbatim. The formal proof is presented next. \begin{claim} \label{claim:subsetlang} For all $q\in Q\down, q' \in Q$, $$\{w \mid \exists i > 0, (q,w \lhd,i) \vdash_M^* (q_f, \lhd, 0), q_f \in F, q'a^i \in L^c\} \subseteq L(\overline{M^{q,q'}\down}).$$ \end{claim} \begin{proof} Let $q \in Q\down, q' \in Q$. Let $w$ be such that there exists $i> 0, q_f\in F, q'a^i \in L^c$, and $(q,w \lhd,i) \vdash_M^* (q_f, \lhd,0)$. Let $p_j,w_j,x_j, 0 \leq j \leq m$ be such that $p_0=q, w=w_0, x_0=i, q_f = p_m, w_m =\lambda,x_m=0$ and $(p_l,w_l \lhd,x_l)\vdash_M (p_{l+1},w_{l+1} \lhd ,x_{l+1}), 0 \leq l <m$, via transition $t_{l+1}$. Then $$(p_0,w_0 \lhd ,x_0)\vdash_M^* (p_{\gamma},w_{\gamma} \lhd,x_{\gamma}) \vdash_M^* (p_m, w_m \lhd,x_m),$$ where $\gamma$ is the smallest number such that $x_{\gamma} <i$ (it exists since $i>0$ and $x_m = 0$), and $\mu$ is the smallest number greater than or equal to $\gamma$ such that $x_{\mu} = 0$.
The transitions $t_1, \ldots, t_{\gamma-1}$ are of the form, for $0 \leq l <\gamma -1$, $(p_{l+1}, T_{l+1},y_{l+1}) \in \delta(p_l,d_l,1)$, where the counter value is $i$ at all of $x_0, \ldots, x_{\gamma-1}$ (since $x_0 = i$, and $x_{\gamma}$ is the first counter value less than $i$), and $y_1, \ldots, y_{\gamma-1}$ are all equal to $0$. These must all be right transitions since they do not change the counter (by condition 5 of the normal form), and so they create transitions in step \ref{itm:subtypeB} of the construction, of the form $$((p_{l+1},q_0^c), {\rm R},0) \in \delta\down((p_l, q_0^c), d_l, 0),$$ for $0 \leq l < \gamma -1$. Then, $$((p_0, q_0^c),w_0 \lhd, x_0 - i = 0) \vdash_{\overline{M^{q,q'}\down}}^* ((p_{\gamma-1}, q_0^c), w_{\gamma-1} \lhd, x_{\gamma-1}-i=0).$$ The transitions $t_{\gamma}, \ldots, t_{\mu}$ are of the form, for $\gamma -1 \leq l < \mu$, $(p_{l+1}, T_{l+1}, y_{l+1}) \in \delta(p_l, d_l, 1)$, and for $\gamma - 1 \leq l < \mu -1$ ($t_{\mu}$ is the last decreasing transition), these create transitions in steps \ref{itm:subtypeA}, \ref{itm:subtypeB}, and \ref{itm:subtypeC} of the form $$((p_{l+1}, q_{l+1}^c), T_{l+1},0) \in \delta\down((p_l, q_l^c), d_l, 0),$$ for some $q_l^c, q_{l+1}^c \in Q^c$. Then, $$((p_{\gamma-1},q_0^c),w_{\gamma -1} \lhd, 0) \vdash_{\overline{M^{q,q'}\down}} \cdots \vdash_{\overline{M^{q,q'}\down}} ((p_{\mu-1}, q_{\mu-1}^c),w_{\mu-1} \lhd,0),$$ where there are exactly $i-1$ decreasing transitions being simulated in this sequence. From $q_{\mu -1}^c$, reading one more $a$, $\delta^c(q_{\mu-1}^c,a) \in F^c$ since $q'a^i \in L^c$, and thus $(p_{\mu},T_{\mu},y_{\mu}) \in \delta(p_{\mu-1},d_{\mu-1},1)$ creates $(p_{\mu},T_{\mu},0) \in \delta\down((p_{\mu-1},q_{\mu-1}^c),d_{\mu-1},0)$ in step \ref{itm:subtypeA} or \ref{itm:subtypeC}. There then remain the transitions $t_{\mu+1}, \ldots, t_{m}$, of the form $(p_{l+1}, T_{l+1},0) \in \delta(p_l, d_l,0)$ for $\mu \leq l < m$.
These transitions are all in $\delta\down$ and thus $$(p_{\mu}, w_{\mu}\lhd ,0) \vdash^*_{\overline{M^{q,q'}\down}} (p_m = q_f, \lhd,0),$$ and hence $w\in L(\overline{M^{q,q'}\down})$. \qed \end{proof} The converse can be seen by examining an arbitrary computation of $\overline{M^{q,q'}\down}$, which must have two components in the states, corresponding to a simulation of $M$ in the first component and $M^c$ in the second component, until some configuration where it switches to one component, continuing the simulation of $M$. The number of transitions used that simulate the reading of an $a$ from the second component must be some $i$, where $q' a^i \in L^c$, and therefore a computation of $M$ can proceed as in the simulation starting with $i$ in the counter, and reach a final state. The formal proof is presented next. \begin{claim} \label{claim:supersetlang} For all $q\in Q\down, q' \in Q$, $$L(\overline{M^{q,q'}\down}) \subseteq \{w \mid \exists i > 0, (q,w \lhd,i) \vdash_M^* (q_f, \lhd, 0), q_f \in F, q'a^i \in L^c\}.$$ \end{claim} \begin{proof} Let $w \in L(\overline{M^{q,q'}\down}), q \in Q\down, q' \in Q$. Let $\mu$ be the last position of the computation whose state is an ordered pair, and let $p_l, w_l, 0 \leq l \leq m$, and $q_j^c, 0 \leq j \leq \mu < m$, be such that $p_0 = q, w_0=w, w_m = \lambda, p_m \in F,$ and $$((p_l,q_l^c), w_l \lhd, 0) \vdash_{\overline{M^{q,q'}\down}} ((p_{l+1}, q_{l+1}^c), w_{l+1} \lhd, 0),$$ for $0 \leq l < \mu$, via transition $t_{l+1}$ of the form $((p_{l+1},q_{l+1}^c),T_{l+1},0) \in \delta\down((p_l,q_l^c),d_l,0)$, and $$((p_{\mu},q_{\mu}^c),w_{\mu} \lhd ,0) \vdash_{\overline{M^{q,q'}\down}} (p_{\mu+1},w_{\mu+1} \lhd, 0),$$ via transition $t_{\mu+1}$ of the form $(p_{\mu+1},T_{\mu+1},0) \in \delta\down((p_{\mu},q_{\mu}^c),d_{\mu},0)$ and $$(p_l,w_l \lhd,0) \vdash_{\overline{M^{q,q'}\down}} (p_{l+1},w_{l+1} \lhd, 0),$$ for $\mu+1 \leq l < m$ via transitions $t_{l+1}$ of the form $(p_{l+1}, T_{l+1},0) \in \delta\down(p_l,d_l,0)$.
Let $i$ be the number of times transitions created in step \ref{itm:subtypeA} or \ref{itm:subtypeC} are applied. Then, by the transition $t_{\mu +1}$, this implies $q'a^i \in L^c$. This then implies that there are transitions $(p_{l+1}, T_{l+1},y_{l+1}) \in \delta(p_l,d_l,1)$, for all $l, 0 \leq l \leq \mu$, with $i$ decreasing transitions, and $(p_{l+1}, T_{l+1},0) \in \delta(p_l,d_l,0)$, for all $l, \mu+ 1 \leq l < m$, by the construction. Hence, the claim follows. \qed \end{proof} We let $M^{q,q'}\down = (1,Q^{q,q'}\down,\Sigma,\lhd, \delta^{q,q'}\down,s^{q,q'}\down, F^{q,q'}\down)$ be a $\DCM(1,1)$ machine (hence deterministic) accepting $L(\overline{M^{q,q'}\down})$ that never uses the counter, which can be created since the language is regular. Assume all the sets of states of the different machines $Q^{q,q'}\down$ are disjoint. Then, to prove Equation (\ref{bigequation3}), only the machines $M^{q}\down = M^{q,q}\down, q \in Q\down$, accepting the languages $L^{q,q}\down, q \in Q\down$, need to be considered, and they are all indeed regular. The construction for $M^q\up$ will be given next, and it will use the transitions from the machines $M^{r,q}\down$ within it. Intuitively, $M^q\up$ will simulate computations of $M$ that start from configuration $(q,u\lhd, i)$ up to a maximum counter value of $\alpha$ and back to counter value $i$ again. However, these computations are simulated by starting at a counter value of $0$ instead of $i$ (from $(q,u\lhd,0)$) to a maximum of $\alpha-i$ (instead of $\alpha$), back to $0$ again (instead of $i$), ending at a configuration of the form $(r,u'\lhd,0)$. Thus, the simulated computation takes place with $i$ subtracted from each counter value of each configuration. Then, $M^q\up$ uses the machine $M^{r,q}\down$ to test if the rest of the input can be accepted starting at $r$ with any counter value that can reach $q$ by using words in $L^c$ that start with $q$.
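The shift-by-$i$ simulation is sound because the transitions of a counter machine inspect only the sign of the counter: a computation segment whose counter stays strictly positive is unchanged when a constant is added to every counter value. A minimal sketch of this observation (toy encoding, hypothetical names):

```python
def run_segment(changes, start):
    """Apply a list of counter changes, recording the counter value
    after each step; transitions only ever test 'counter > 0'."""
    vals, c = [], start
    for ch in changes:
        c += ch
        vals.append(c)
    return vals

# a segment that goes up and comes back, never dropping below its start
changes = [1, 1, -1, 1, -1, -1]
from_3 = run_segment(changes, 3)   # [4, 5, 4, 5, 4, 3]
from_8 = run_segment(changes, 8)   # the same trace shifted by 5
assert all(v > 0 for v in from_3)  # every zero-test answers "positive"
assert from_8 == [v + 5 for v in from_3]
```

In the actual construction the shifted run may touch counter value $0$ while the true counter is still $i$; this is why step \ref{itm:fromoriginal} below also defines the up-phase transitions on an empty counter.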
Formally, for $q \in Q\up$, $M^q\up = (1,P\up, \Sigma,\lhd,\delta\up, s^q\up,F\up)$, where $P\up = Q\cup \bigcup_{r \in Q} Q^{r,q}\down, s^q\up = q, F\up = \bigcup_{r \in Q\down}F^{r,q}\down$, where $Q$ is disjoint from the other state sets. The transitions of $\delta\up$ are created by the following algorithm: \begin{enumerate} \item\label{itm:fromoriginal} For all transitions $(p,T,y) \in \delta(r,d,1), p,r \in Q, d\in \Sigma \cup \{\lhd\}, T \in \{{\rm S}, {\rm R}\}, y \in \{-1,0,1\}$, create $$(p,T,y) \in \delta\up(r,d,e),$$ for both $e=1$, and $e=0$ if $r \in Q\up$, \item\label{itm:switchtobigstate} Create $(s^{r,q}\down,{\rm S},0) \in \delta\up(r,d,0)$, for all $ d\in \Sigma \cup \{\lhd\}$, and for all $ r \in Q\down$, \item\label{itm:fromdown} Add all transitions from $M^{s,q}\down, s \in Q\down$. \end{enumerate} Indeed, $M^q\up$ is deterministic, as those transitions created in step \ref{itm:fromoriginal} are in $M$, and $M^{s,p}\down$ is deterministic, for all $s,p$. \begin{claim} For all $q\in Q\up$, $$\{w \mid \exists i >0, (q,w \lhd,i) \vdash_M^* (q_f, \lhd, 0), q_f \in F, qa^i \in L^c\} \subseteq L^q\up.$$ \end{claim} \begin{proof} Let $q \in Q\up$. Let $w$ be such that there exists $i>0, q_f\in F, qa^i \in L^c$, and $(q,w\lhd,i) \vdash_M^* (q_f, \lhd,0)$. Let $p_j,w_j,x_j, 0 \leq j \leq m$ be such that $p_0=q, w=w_0, x_0=i, q_f = p_m, \lambda = w_m,x_m=0$ and $(p_l,w_l\lhd,x_l)\vdash_M (p_{l+1},w_{l+1}\lhd,x_{l+1}), 0 \leq l <m$, via transition $t_{l+1}$. Assume that there exists $\alpha \geq 1$ such that $x_{\alpha} > i$, and let $\alpha$ be the smallest such number. Then, $$(p_0,w_0\lhd,x_0)\vdash_M^* (p_{\alpha},w_{\alpha}\lhd,x_{\alpha}) \vdash_M^* (p_{\beta},w_{\beta}\lhd,x_{\beta})\vdash_M^* (p_m, w_m\lhd,x_m),$$ where $\beta$ is the smallest number bigger than $\alpha$ such that $x_{\beta} = i$.
In this case, in step \ref{itm:fromoriginal} of the algorithm, transitions $t_1, \ldots, t_{\alpha}$ of the form $(p_l,T_l,y_l) \in \delta(p_{l-1}, d_{l-1},1), 0 < l \leq \alpha$, create transitions of the form $(p_l,T_l,y_l) \in \delta\up(p_{l-1}, d_{l-1},0)$, and thus $$ (p_0,w_0\lhd,x_0-i=0) \vdash_{M^q\up}^* (p_{\alpha-1},w_{\alpha-1}\lhd, x_{\alpha-1} - i = 0)\vdash_{M^q\up} (p_{\alpha},w_{\alpha}\lhd,x_{\alpha}-i),$$ where $x_{\alpha} - i > 0$. In step \ref{itm:fromoriginal} of the algorithm, transitions $t_{\alpha+1}, \ldots, t_{\beta}$ of the form $(p_l,T_l,y_l ) \in \delta(p_{l-1},d_{l-1},1), \alpha < l \leq \beta$, create transitions of the form $$(p_l,T_l, y_l) \in \delta\up(p_{l-1}, d_{l-1}, 1).$$ Thus, $(p_{\alpha},w_{\alpha}\lhd, x_{\alpha}-i) \vdash_{M^q\up}^* (p_{\beta}, w_{\beta}\lhd, x_{\beta}-i = 0)$, since $x_{\alpha}-i, \ldots, x_{\beta-1}-i$ are all greater than $0$. Then, using a transition of type \ref{itm:switchtobigstate}, $(p_{\beta}, w_{\beta}\lhd,0)\vdash_{M^q\up} (s^{p_{\beta},q}\down,w_{\beta}\lhd,0)$. Then, since $(p_{\beta},w_{\beta}\lhd, x_{\beta}) \vdash_M^* (p_m,\lhd,0), p_m \in F$, and $p_{\beta} \in Q\down, qa^i \in L^c$, it follows that $w_{\beta} \in L^{p_{\beta},q}\down$, by Claim \ref{claim:subsetlang}. Hence, $$(s^{p_{\beta},q}\down,w_{\beta}\lhd,0) \vdash^*_{M^{p_{\beta},q}\down} (q_f',\lhd,0),$$ $q_f' \in F$, and therefore this occurs in $M^q\up$ as well. Lastly, the case where there does not exist an $\alpha$ such that $x_{\alpha}>i$ (thus $i$ is the highest counter value) is similar, by applying transitions of type \ref{itm:fromoriginal} until the transitions before the first decrease (the first time a state from $Q\down$ is reached), then a transition of type \ref{itm:switchtobigstate}, followed by a sequence of type \ref{itm:fromdown} transitions as above.
\qed \end{proof} The reverse containment can be shown by examining any accepting sequence of configurations, which has some initial simulation of $M$, followed by a computation of a machine $M^{q',q}\down$. The initial simulation can occur in $M$ with $i > 0$ added to each counter value, and the correctness of the remaining portion of $M^{q',q}\down$ follows from Claim \ref{claim:supersetlang}. \begin{claim} For all $q\in Q\up$, $$L^q\up \subseteq \{w \mid \exists i >0, (q,w\lhd,i) \vdash_M^* (q_f, \lhd, 0), q_f \in F, qa^i \in L^c\}.$$ \end{claim} \begin{proof} Let $w \in L(M^q\up)$. Then $$(q,w\lhd,0)\vdash_{M^q\up}^* (q', w'\lhd,0) \vdash_{M^q\up} ((q',\delta^c(s_0^c,q)),w'\lhd,0) \vdash_{M^q\up}^* (q_f',\lhd,0),$$ where $q_f' \in F^{q',q}\down$. Let $\beta, p_l, w_l, x_l, 0 \leq l \leq \beta$ be such that $p_0=q, w_0=w, x_0=0, q' = p_{\beta}, w' = w_{\beta}, x_{\beta}=0$ such that $(p_l,w_l\lhd,x_l) \vdash_{M^q\up} (p_{l+1},w_{l+1}\lhd, x_{l+1}), 0 \leq l < \beta$. Then $w' \in L^{q',q}\down$, and therefore by Claim \ref{claim:supersetlang}, there exists $i > 0$ such that $(q',w'\lhd,i) \vdash_M^* (q_f, \lhd, 0), q_f \in F, qa^i \in L^c$. By the construction in step \ref{itm:fromoriginal}, $$(p_0,w_0\lhd, x_0+i) \vdash_M \cdots \vdash_M (p_{\beta},w_{\beta}\lhd,x_{\beta}+i),$$ and since $x_0 = x_{\beta}=0$ and $w' = w_{\beta}$ and $q' = p_{\beta}$, then $(q,w\lhd,i) \vdash_M^* (q_f,\lhd,0)$ and $qa^i \in L^c$, and the claim follows. \qed \end{proof} Hence, Equation (\ref{bigequation2}) holds. It is also known that $\DCM$ is closed under union (by increasing the number of counters) \cite{Ibarra1978}. Therefore, the finite union is in $\DCM$. \qed \end{proof} From this, we obtain the following general result. \begin{proposition} Let $L \in \DCM(1,1), L_1, L_2 \in \NPCM$. Then both $(L_1^{-1}L)L_2^{-1}$ and $L_1^{-1}(L L_2^{-1})$ are a finite union of languages in $\DCM(1,1)$. Furthermore, both languages are in $\DCM$.
\end{proposition} \begin{proof} It will first be shown that $(L_1^{-1}L)L_2^{-1}$ is the finite union of languages in $\DCM(1,1)$. Indeed, by Proposition \ref{leftquotientwithNPCM}, $L_1^{-1}L$ is the finite union of languages in $\DCM(1,1)$, and so $L_1^{-1}L = \bigcup_{i=1}^k X_i$ for some $X_i \in \DCM(1,1)$, $1 \leq i \leq k$. Further, for each $i$, $X_i L_2^{-1}$ is the finite union of $\DCM(1,1)$ languages by Proposition \ref{rightquotientwithNPCM}. It remains to show that $\bigcup_{i=1}^k X_i L_2^{-1} = (L_1^{-1}L)L_2^{-1}$. If $w\in \bigcup_{i=1}^k X_i L_2^{-1}$, then $w \in X_i L_2^{-1}$ for some $i$, $1 \leq i \leq k$, and then $wy \in X_i$ for some $y \in L_2$. Then $wy \in L_1^{-1}L$, and $w \in (L_1^{-1} L) L_2^{-1}$. Conversely, if $w \in (L_1^{-1}L)L_2^{-1}$, then $wy \in L_1^{-1}L$ for some $y \in L_2$, and so $wy \in X_i$ for some $i$, $1 \leq i \leq k$, and thus $w \in X_i L_2^{-1}$. For $L_1^{-1}(L L_2^{-1})$, it is true that $L L_2^{-1} \in \DCM(1,1)$ by Proposition \ref{rightquotientwithNPCM}. Then $L_1^{-1}(L L_2^{-1})$ is the finite union of $\DCM(1,1)$ languages by Proposition \ref{leftquotientwithNPCM}. It is also known that $\DCM$ is closed under union (by increasing the number of counters) \cite{Ibarra1978}. Therefore, both finite unions are in $\DCM$. \qed \end{proof} And, as with Corollary \ref{generalizedSemilinear}, this can be generalized to any language families that are reversal-bounded counter augmentable. \begin{corollary} \label{evenMoreGeneralSemilinear} Let $L \in \DCM(1,1), L_1 \in \mathscr{F}_1, L_2 \in \mathscr{F}_2$, where $\mathscr{F}_1$ and $\mathscr{F}_2$ are any families of languages that are reversal-bounded counter augmentable. Then $(L_1^{-1}L)L_2^{-1}$ and $L_1^{-1}(L L_2^{-1})$ are both a finite union of languages in $\DCM(1,1)$. Furthermore, both languages are in $\DCM$.
\end{corollary} As a special case, when using the fixed regular language $\Sigma^*$ for the right and left quotient, we obtain: \begin{corollary} \label{suffinfDCM} Let $L \in \DCM(1,1)$. Then $\suff(L)$ and $\infx(L)$ are both $\DCM$ languages. \end{corollary} It is, however, sometimes necessary to increase the number of counters to accept $\suff(L)$ and $\infx(L)$ when $L \in \DCM(1,1)$, as seen from the next proposition, which shows that the suffix, infix, and outfix of a $\DCM(1,1)$ language can be outside of $\DCM(1,1)$. \begin{proposition} \label{suff11} There exists $L \in \DCM(1,1)$ where all of $\suff(L), \infx(L), \outf(L)$ are not in $\DCM(1,1)$. \end{proposition} \begin{proof} Assume otherwise. Let $L = \{a^n b^n c^n \mid n \geq 0\}, L_1 = \{ a^n b^n c^k \mid n,k \geq 0\}, L_2=\{a^n b^m c^m \mid n,m \geq 0\}, L_3 = \{a^n b^m c^k \mid n,m,k \geq 0\}$. Let $\Sigma = \{a,b,c\}$ and $\Gamma = \{d,e,f\}$. It is well-known that $L$ is not a context-free language, and therefore is not a $\DCM(1,1)$ language. However, each of $L_1, L_2, L_3$ is a $\DCM(1,1)$ language, and therefore so are $\overline{L_1}, \overline{L_2}, \overline{L_3}$ \cite{Ibarra1978}, and so is $L' = d \#_1\overline{L_1} \#_2 \cup e \#_1\overline{L_2} \#_2 \cup f \#_1 \overline{L_3} \#_2$ (all complements with respect to $\Sigma^*$). The symbols $d,e,f$ are needed here, as each deterministically triggers the computation of a different $\DCM(1,1)$ machine so that the resulting machine can be a $\DCM(1,1)$ machine (although $\DCM$ is closed under union, this closure can increase the number of counters; but this type of marked union does not increase the number of counters). It can also be seen that $\overline{L} = \overline{L_1} \cup \overline{L_2} \cup \overline{L_3}$.
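The set identity $\overline{L} = \overline{L_1} \cup \overline{L_2} \cup \overline{L_3}$ used above follows from $L = L_1 \cap L_2 \cap L_3$ and De Morgan's law, and it can also be spot-checked by brute force over short words. The sketch below is an illustration only, not part of the proof:

```python
from itertools import product

def counts(w):
    """Return (#a, #b, #c) if w is of the form a*b*c*, else None."""
    i, out = 0, []
    for ch in "abc":
        j = i
        while j < len(w) and w[j] == ch:
            j += 1
        out.append(j - i)
        i = j
    return tuple(out) if i == len(w) else None

def in_L(w):   # a^n b^n c^n
    c = counts(w)
    return c is not None and c[0] == c[1] == c[2]

def in_L1(w):  # a^n b^n c^k
    c = counts(w)
    return c is not None and c[0] == c[1]

def in_L2(w):  # a^n b^m c^m
    c = counts(w)
    return c is not None and c[1] == c[2]

def in_L3(w):  # a^n b^m c^k
    return counts(w) is not None

def identity_holds(max_len):
    """Check complement(L) == complement(L1) u complement(L2) u complement(L3)
    on all words over {a,b,c} up to length max_len."""
    for n in range(max_len + 1):
        for w in map("".join, product("abc", repeat=n)):
            if (not in_L(w)) != (not in_L1(w) or not in_L2(w) or not in_L3(w)):
                return False
    return True
```

Since $L_1 \cap L_2 \subseteq L_3$ and $L_1 \cap L_2 = L$, the check succeeds for every bound.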
But $\suff(L') \cap \#_1\Sigma^* \#_2 = \infx(L') \cap \#_1\Sigma^* \#_2 = \outf(L') \cap \#_1\Sigma^* \#_2 = \#_1 \overline{L} \#_2$, and since $\DCM(1,1)$ is closed under intersection with regular languages, under left and right quotient by a symbol, and under complement, this implies $L$ is a $\DCM(1,1)$ language, a contradiction. \qed \end{proof} \section{Non-Closure Under Suffix, Infix, and Outfix for Multi-Counter and Multi-Reversal Machines} \label{sec:nonclosure} In \cite{EIMInsertion2015}, a technique was used to show that languages are not in $\DCM \cup 2\DCM(1)$. The technique uses undecidable properties to show non-closure. As $2\DCM(1)$ machines have a two-way input and a reversal-bounded counter, it is difficult to derive ``pumping'' lemmas for these languages. Furthermore, unlike $\DCM$ and $\NCM$ machines, $2\DCM(1)$ machines can accept non-semilinear languages. For example, $L_1= \{a^i b^k ~|~ i, k \ge 2, i$ divides $k \}$ can be accepted by a $2\DCM(1)$ machine whose counter makes only one reversal. However, $L_2 = \{a^i b^j c^k ~|~ i,j,k \ge 2, k = ij \}$ cannot be accepted by any $2\DCM(1)$ machine \cite{IbarraJiang}. This technique from \cite{EIMInsertion2015} works as follows. The proof uses the fact that there is a recursively enumerable but not recursive language $L_{\rm re} \subseteq \natzero$ that is accepted by a deterministic 2-counter machine \cite{Minsky}. Here, these machines do not have an input tape, and acceptance is defined whereby $n \in \natzero$ is accepted (i.e., $n \in L_{\rm re}$) if and only if, when started with $n$ in the first counter (encoded in unary) and zero in the second counter, the machine eventually halts (hence, acceptance is by halting). Examining the constructions in \cite{Minsky} of the 2-counter machine demonstrates that the counters behave in a regular pattern. Initially one counter has some value $d_1$ and the other counter is zero.
Then, the machine's operation can be divided into phases, where each phase starts with one of the counters equal to some positive integer $d_i$ and the other counter equals 0. During the phase, the positive counter decreases, while the other counter increases. The phase ends with the first counter containing 0 and the other counter containing $d_{i+1}$. In the next phase, the modes of the counters are interchanged. Thus, a sequence of configurations where the phases are changing will be of the form: $$(q_1, d_1, 0), (q_2, 0, d_2), (q_3, d_3, 0), (q_4, 0, d_4), (q_5, d_5, 0), (q_6, 0, d_6), \dots$$ where the $q_i$'s are states, with $q_1 = q_s$ (the initial state), and $d_1, d_2, d_3, \ldots$ are positive integers. The second component of the configuration refers to the value of the first counter, and the third component refers to the value of the second. Also, notice that in going from state $q_i$ in phase $i$ to state $q_{i+1}$ in phase $i+1$, the 2-counter machine goes through intermediate states. For each $i$, there are 5 cases for the value of $d_{i+1}$ in terms of $d_i$: $d_{i+1} = d_i, 2d_i, 3d_i, d_i/2, d_i/3$ (the division operation only occurs if the number is divisible by 2 or 3, respectively). The case applied is determined by $q_i$. Hence, a function $h$ can be defined such that if $q_i$ is the state at the start of phase $i$, $d_{i+1} = h(q_i)d_i$, where $h(q_i)$ is one of $1, 2, 3, 1/2, 1/3$. Let $T$ be a 2-counter machine accepting a recursively enumerable language that is not recursive. Assume that $q_1=q_s$ is the initial state, which is never re-entered, and if $T$ halts, it does so in a unique state $q_h$. Let $Q$ be the states of $T$, and $1$ be a new symbol. 
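The phase behaviour described above can be made concrete with a small simulation. The encoding below is hypothetical (the function `run_phase` and the explicit factor list stand in for the state-determined function $h$): each phase empties the positive counter by unit moves while the other counter grows, turning $(d_i, 0)$ into $(0, h(q_i)\,d_i)$.

```python
from fractions import Fraction

def run_phase(d, factor):
    """One phase of the 2-counter machine: the positive counter drops
    from d to 0 by unit moves while the other counter rises from 0 to
    factor * d.  factor is one of 1, 2, 3, 1/2, 1/3; the divisions
    require d to be divisible by 2 or 3, respectively."""
    src, dst = d, 0
    if factor >= 1:
        k = int(factor)
        while src > 0:       # each decrement adds k to the other counter
            src -= 1
            dst += k
    else:
        k = int(1 / Fraction(factor))
        assert d % k == 0    # divisibility required for factors 1/2, 1/3
        while src > 0:       # k decrements add 1 to the other counter
            src -= k
            dst += 1
    return dst

# a run of phases d_{i+1} = h(q_i) * d_i with factors chosen by the states
factors = [2, 3, Fraction(1, 2), 1, Fraction(1, 3)]
d, trace = 6, [6]
for f in factors:
    d = run_phase(d, f)
    trace.append(d)
assert trace == [6, 12, 36, 18, 18, 6]
```

Running the factors $2, 3, 1/2, 1, 1/3$ from $d_1 = 6$ yields the value sequence $6, 12, 36, 18, 18, 6$, alternating between the two counters as in the displayed configurations above.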
In what follows, $\alpha$ is any sequence of the form $\#I_1 \#I_2 \#\cdots\# I_{2m}\#$ (thus we assume that the length is even), where for each $i$, $1 \leq i \leq 2m$, $I_i = q1^k$, for some $q \in Q$ and $k \ge 1$, represents a possible configuration of $T$ at the beginning of phase $i$, where $q$ is the state and $k$ is the value of the first counter (resp., the second) if $i$ is odd (resp., even). Define $L_0$ to be the set of all strings $\alpha$ such that \begin{enumerate} \item $\alpha = \#I_1 \#I_2\# \cdots \#I_{2m}\#$; \item $m \ge 1$; \item for $1 \le j \le 2m-1$, $I_j \Rightarrow I_{j+1}$, i.e., if $T$ begins in configuration $I_j$, then after one phase, $T$ is in configuration $I_{j+1}$ (that is, $I_{j+1}$ is a valid successor of $I_j$). \end{enumerate} Then, the following was shown in \cite{EIMInsertion2015}. \begin{lemma} \label{lem1} $L_0$ is not in $\DCM \cup 2\DCM(1)$. \end{lemma} We will use exactly this language to show that taking either the suffix, infix, or outfix of a language in $\DCM(1,3), \DCM(2,1)$, or $2\DCM(1)$ can produce languages that are in neither $\DCM$ nor $2\DCM(1)$. \begin{proposition} \label{NonclosureSuffix} There exists a language $L \in \DCM(1,3)$ (respectively $L \in \DCM(2,1)$, and $L \in 2\DCM(1)$) such that $\suff(L) \not \in \DCM \cup 2\DCM(1)$, $\infx(L) \not \in \DCM \cup 2\DCM(1)$, and $\outf(L) \not \in \DCM \cup 2\DCM(1)$. \end{proposition} \begin{proof} Let $L_0$ be the language defined above, which is not in $\DCM \cup 2\DCM(1)$. Let $a, b$ be new symbols. Clearly, $bL_0b$ is also not in $\DCM \cup 2\DCM(1)$. A configuration of $T$ is any string of the form $q 1^k$, $q \in Q, k \geq 1$ (whether it appears in any computation or not).
Then let $$L = \{a^i b \# I_1 \# I_2 \# \cdots \# I_{2m} \# b \mid \begin{array}[t]{l} I_1, \ldots, I_{2m} \mbox{~are configurations of the 2-counter machine~} T, 1 \le i \le 2m-1, \\ I_{i+1} \mbox{~is not a valid successor of~} I_i \}.\end{array}$$ Clearly $L$ is in $\DCM(1,3)$, in $\DCM(2,1)$ (as $\DCM(1,3)$ is a subset of $\DCM(2,1)$, as mentioned in Section \ref{sec:prelims}), and in $2\DCM(1)$ (as $\DCM(1,3)$ is a subset of $2\DCM(1)$). Let $L_1$ be $\suff(L)$. Suppose $L_1$ is in $\DCM$ (resp., $2\DCM(1)$). Then $L_2 = \overline{L_1}$ is also in $\DCM$ (resp., $2\DCM(1)$) since both are closed under complement \cite{Ibarra1978,IbarraJiang}. Let $R = \{b \# I_1 \# I_2 \# \cdots \# I_{2m} \# b ~|~ I_1, \ldots, I_{2m}$ are configurations of $T \}$. Then, since $R$ is regular, $L_3 = L_2 \cap R$ is in $\DCM$ (resp., $2\DCM(1)$) as both are closed under intersection with regular languages \cite{Ibarra1978,IbarraJiang}. We get a contradiction, since $L_3 = bL_0b$. Non-closure under infix and outfix can be shown similarly (for outfix, the intersection with $R$ enforces that only erasing of all of the $a$'s is considered). \qed \end{proof} This implies non-closure under left-quotient with regular languages, and this result also extends to the embedding operation, a generalization of outfix. \begin{corollary}\label{leftquotR} There exists $L \in \DCM(1,3)$ (respectively $L \in \DCM(2,1)$, and $L \in 2\DCM(1)$), and $R \in \REG$ such that $ \lquot{R}{L} \not \in \DCM \cup 2\DCM(1)$. \end{corollary} \begin{corollary} Let $m>0$. Then there exists $L \in \DCM(1,3)$ (respectively $L \in \DCM(2,1)$, and $L \in 2\DCM(1)$) such that $\emb(L, m) \not \in \DCM \cup 2\DCM(1)$. \end{corollary} The results of Proposition \ref{NonclosureSuffix} and Corollary \ref{leftquotR} are optimal for suffix and infix, as these operations applied to $\DCM(1,1)$ are always in $\DCM$ by Corollary \ref{suffinfDCM} (and since $\DCM(1,2) = \DCM(1,1)$).
But whether the outfix and embedding operations applied to $\DCM(1,1)$ languages are always in $\DCM$ is an open question. \section{Closure and Non-Closure for $\NPCM$, $\DPCM$, and $\DPDA$} \label{sec:DPDA} To start, we consider quotients of nondeterministic classes, then use these results for contrast with deterministic classes. \begin{proposition} Let $\LL_1$ and $\LL_2$ be classes of languages where $\LL_1$ is a full trio closed under intersection with languages in $\LL_2$, and where $L \in \LL_2$ implies $\Sigma^* \# L, L\# \Sigma^* \in \LL_2$, for an alphabet $\Sigma$ and new symbol $\#$. Then $\LL_1$ is closed under left and right quotient with $\LL_2$. \end{proposition} \begin{proof} For right quotient, let $L_1 \in \LL_1, L_2 \in \LL_2$. Using an inverse homomorphism (where the homomorphism is from $(\Sigma \cup \{\#\})^*$ to $\Sigma^*$ and erases $\#$ while fixing all other letters) followed by an intersection with the regular language $\Sigma^* \# \Sigma^*$, it follows that $L_1' = \{x \# y \mid xy \in L_1\}$ is also in $\LL_1$. Let $L_2' = \Sigma^* \# L_2 \in \LL_2$. Then $L = L_1' \cap L_2' \in \LL_1$. Then, as every full trio is closed under gsm mappings, it follows that $L_1 L_2^{-1} \in \LL_1$ by erasing everything starting at the $\#$ symbol. The proof for left quotient is similar. \qed \end{proof} \begin{corollary}\label{nondeterministicclosure} $\NPCM$ ($\NCM$ respectively) is closed under left and right quotient with $\NCM$. \end{corollary} This follows since $\NPCM$ is a full trio closed under intersection with $\NCM$ \cite{Ibarra1978}, and $\NCM$ is closed under concatenation. The question remains as to whether this is also true for deterministic machines instead. For machines with a stack, we have: \begin{proposition} The right quotient of a $\DPDA(1)$ language (i.e., a deterministic linear context-free language) with a $\DCM(2,1)$ language is not necessarily an $\NPDA$ language.
\end{proposition} \begin{proof} Take the $\DPDA(1)$ language $L_1 = \{d^l c^k b^j a^i \# a^i b^j c^k d^l \mid i, j, k, l > 0\}$. Take the $\DCM(2,1)$ language $L_2 = \{ a^i b^j c^i d^j \mid i,j >0\}$. This is clearly a non-context-free language that is in $\DCM(2,1)$. However, $L_1 L_2^{-1} = L_2^R$, which is also not context-free. \qed \end{proof} Next we see that, in contrast to $\DCM$ and $\DPDA$, $\DPCM$ is closed under neither prefix nor suffix. Indeed, both $\DCM$ and $\DPDA$ are closed under prefix (and under right quotient with regular sets), but not under left quotient with regular sets. Yet combining their stores into a single machine model yields a family that is closed under neither operation. \begin{proposition} $\DPCM$ is not closed under prefix or suffix. \label{DPCMprefixsuffix} \end{proposition} \begin{proof} Assume otherwise. Let $L$ be a language in $\NCM(1,1)$ that is not in $\DPCM$, which was shown to exist \cite{OscarNCMA2014journal}. Let $M$ be an $\NCM(1,1)$ machine accepting $L$. Let $T$ be a set of labels associated bijectively with the transitions of $M$. Consider the language $L' = \{ t_m \cdots t_1 \$ w \mid M \mbox{~accepts~} w \mbox{~via transitions~} t_1, \ldots , t_m \in T\}$. This language is in $\DPCM$ since a $\DPCM$ machine $M'$ can be built that first pushes $t_m \cdots t_1$, and then simulates $M$ deterministically on transitions $t_1, \ldots, t_m$ while popping from the pushdown and reading $w$. Then $\suff(L') \cap \$ \Sigma^* = \$ L$, a contradiction, as $\DPCM$ is clearly closed under left quotient with a single symbol. Similarly for prefix, consider $L^R$, and create a machine $M^R$ accepting $L^R$, which is possible since $\NCM(1,1)$ is closed under reversal. Then $L'' = \{ w \$ t_1 \cdots t_m \mid M^R \mbox{~accepts~} w^R \mbox{~via~} t_1, \ldots, t_m \in T\}$.
This is also a $\DPCM$ language as one can construct a machine $M''$ that pushes $w$, then while popping $w^R$ letter-by-letter, simulates $M$ deterministically on transitions $t_1, \ldots, t_m$ on $w^R$. Then $\pref(L'') \cap \Sigma^*\$ = L\$$, a contradiction, as $\DPCM$ is clearly closed under right quotient with a single symbol. \qed \end{proof} \begin{corollary} $\DPCM$ is not closed under right or left quotient with regular sets. \end{corollary} Thus, the deterministic variant of Corollary \ref{nondeterministicclosure} gives non-closure. The following is also evident from the proof of Proposition \ref{DPCMprefixsuffix}. \begin{corollary} Every $\NCM$ language can be obtained by taking the right quotient (resp. left quotient) of a $\DPCM$ language by a regular language. \end{corollary} The statement of this corollary cannot be weakened to taking the quotients of a $\DPDA$ with a regular language, since $\DPDA$ is closed under right quotient with regular languages \cite{harrison1978}. Lastly, we will address the question of whether the left or right quotient of a $\DPDA$ language with a $\DCM$ language is always in $\DPCM$. \begin{proposition} The right quotient (resp. left quotient) of a $\DPDA(1)$ language with a $\DCM(1,1)$ language can be outside $\DPCM$. \end{proposition} \begin{proof} To start, it is known that there exists an $\NCM(1,1)$ language that is not in $\DPCM$ \cite{OscarNCMA2014journal}. Let $L$ be such a language, and let $M$ be a $\NCM(1,1)$ machine accepting $L$. Then $L^R$ is also an $\NCM(1,1)$ language, and let $M^R$ be an $\NCM(1,1)$ machine accepting it. Let $T$ be a set of labels associated bijectively with transitions of $M^R$. Then, we can create a $\DCM(1,1)$ machine $M'$ accepting words in $\#(\Sigma \cup T)^*$ such that after reading $\#$, $M'$ simulates $M^R$ deterministically by reading a label $t \in T$ before simulating $t$ deterministically. 
That is, if $M'$ reads a letter $a \in \Sigma$, $M'$ stores it in a buffer (that can hold exactly one letter), and if $M'$ reads a letter $t\in T$, $M'$ simulates $M^R$ on the letter $a$ in the buffer using transition $t$, completely deterministically. Then if $t$ is a stay transition, the next letter must be in $T$, and the buffer stays intact, whereas if $t$ is a right transition, then the buffer is cleared, and the next letter must be in $\Sigma$. If the input is not of this form (for example, if there are two letters from $\Sigma$ in a row, or a transition label representing a right move followed by another transition label), the machine crashes (cannot continue and does not accept). The first transition must also be from the initial state, and the simulation must end in a final state. It is clear then that if $h$ is a homomorphism that erases letters of $T$ and fixes letters of $\Sigma$, then $h(L(M')) = L(M^R)$. Then, consider the language $L_1 = \{ w \# x \mid w \in \Sigma^*, x \in (\Sigma \cup T)^*, h(x) = w^R\}$. Then $L_1 \in \DPDA(1)$. Consider $L_2 = L_1 L(M')^{-1}$. Then $L_2 = \{ w \mid w \in \Sigma^*, \mbox{~there exists~} x \in (\Sigma \cup T)^* \mbox{~such that~} h(x) = w^R, \mbox{~and~} h(x) \in L(M^R)\}$. Hence, $L_2 = \{ w \mid w \in \Sigma^*, w \in L(M)\} = L$, which is not in $\DPCM$. Similarly for left quotient by using the $\DPDA(1)$ language $L_1 = \{ x \# w \mid w \in \Sigma^*, x \in (\Sigma \cup T)^*\}$. \qed \end{proof} The following is also evident from the proof above. \begin{corollary} Every $\NCM$ language can be obtained by taking the right quotient (resp. left quotient) of a $\DPDA(1)$ language by a $\DCM$ language. \end{corollary} Again, this statement cannot be weakened to the right quotient of a $\DPDA$ with a regular language since $\DPDA$ languages are closed under right quotient with regular languages \cite{GinsburgDPDAs}. 
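The transition-labelling technique used in the proofs above (annotating the input with the labels of a nondeterministic run so that membership can be verified deterministically) can be illustrated with a small sketch. The automaton below is a hypothetical toy NFA rather than a counter machine, and the label scheme is ours, purely to show why a run certificate makes the check deterministic.

```python
# Each transition of a nondeterministic machine gets a unique label; a word
# annotated with the labels of an accepting run can then be checked by a
# purely deterministic scan, mirroring the labelled-run constructions above.
# Hypothetical toy NFA over {a, b} with a nondeterministic choice in state 0.
TRANSITIONS = {  # label -> (source state, input symbol, target state)
    "t1": (0, "a", 0),
    "t2": (0, "b", 0),
    "t3": (0, "a", 1),  # nondeterministic alternative on 'a' in state 0
    "t4": (1, "b", 2),
}
START, FINAL = 0, {2}

def check_run(word, labels):
    """Deterministically verify that `labels` spells an accepting run on `word`."""
    if len(word) != len(labels):
        return False
    state = START
    for sym, lab in zip(word, labels):
        src, need, dst = TRANSITIONS[lab]
        if src != state or need != sym:
            return False  # the certificate does not describe a legal step
        state = dst
    return state in FINAL

# A valid certificate is accepted, a shuffled one is rejected:
# check_run("aab", ["t1", "t3", "t4"]) -> True
# check_run("aab", ["t3", "t1", "t4"]) -> False
```

Note that `check_run` never branches on anything other than the certificate itself; this is exactly what lets the $\DPCM$ machines in the proofs simulate a nondeterministic $\NCM(1,1)$ deterministically.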
\section{Right and Left Quotients of Regular Sets} \label{sec:reg} Let $\mathscr{F}$ be any family of languages (which need not be recursively enumerable). It is known that $\REG$ is closed under right quotient by languages in $\mathscr{F}$ \cite{HU}. However, this closure need not be effective; whether it is depends on the properties of $\mathscr{F}$. The following is an interesting observation which connects decidability of the emptiness problem to effectiveness of closure under quotient: \begin{proposition} \label{reg1} Let $\mathscr{F}$ be any family of languages which is effectively closed under intersection with regular sets and whose emptiness problem is decidable. Then $\REG$ is effectively closed under both left and right quotient by languages in $\mathscr{F}$. \end{proposition} \begin{proof} We will start with right quotient. Let $L_1 \in \REG$ and $L_2$ be in $\mathscr{F}$. Let $M$ be a DFA accepting $L_1$. For a state $q$ of $M$, let $L_q = \{ y \mid M \mbox{~started in state~} q \mbox{~accepts~} y \}$. Let $Q' = \{ q \mid q \mbox{~is a state of~} M, L_q \cap L_2 \neq \emptyset \}$. Since $\mathscr{F}$ is effectively closed under intersection with regular sets and has a decidable emptiness problem, $Q'$ is computable. Then a DFA $M'$ accepting $L_1 L_2^{-1}$ can be obtained by just making $Q'$ the set of accepting states in $M$. Next, for left quotient, let $L_1$ be in $\mathscr{F}$, and $ L_2$ in $\REG$ be accepted by a DFA $M$ whose initial state is $q_0$. Let $ L_q = \{ x \mid M \mbox{~on input~} x \mbox{~ends in state~} q \}$. Let $Q' = \{q \mid q \mbox{~is a state of~} M, L_q \cap L_1 \neq \emptyset \}$. Then $Q'$ is computable, since $\mathscr{F}$ is effectively closed under intersection with regular sets and has a decidable emptiness problem.
We then construct an NFA (with $\lambda$-transitions) $M'$ to accept $L_1^{-1} L_2$ as follows: on input $y$, $M'$ starts in state $q_0$, nondeterministically moves to a state $q \in Q'$ without reading any input, and then simulates the DFA $M$. \qed \end{proof} \begin{corollary} \label{reg2} $\REG$ is effectively closed under left and right quotient by languages in: \begin{enumerate} \item the families of languages accepted by $\NPCM$ and $2\DCM(1)$ machines, \item the family of languages accepted by $\MPCA$s, $\TCA$s, $\QCA$s, and $\EPDA$s, \item the families of ET0L and Indexed languages. \end{enumerate} \end{corollary} \begin{proof} These families are closed under intersection with regular sets. They also have a decidable emptiness problem \cite{Harju2002278,Ah68,RS}. The families of ET0L and Indexed languages are discussed further in \cite{RS} and \cite{Ah68}, respectively. \qed \end{proof} \section{Closure for Bounded Languages} \label{sec:bounded} In this section, deletion operations applied to bounded and letter-bounded languages will be examined. We will need the following corollary to Theorem \ref{unary}. \begin{corollary} \label{unarycor} Let $L \subseteq \#a^*\#$ be accepted by a $2\NCM$. Then $L$ is regular. \end{corollary} \begin{proposition} \label{bounded} If $L$ is a bounded language accepted by either a finite-crossing $2\NCM$, an $\NPCM$ or a finite-crossing $2\DPCM$, then all of $\pref(L)$, $\suff(L)$, $\inf(L)$, $\outf(L)$ can be accepted by a $\DCM$. \end{proposition} \begin{proof} By Theorem \ref{WWW}, an $\NCM$ can be constructed that accepts $L$. Further, one can construct $\NCM$'s accepting $\pref(L), \suff(L), \inf(L), \outf(L)$, since one-way $\NCM$ is closed under prefix, suffix, infix and outfix. In addition, it is known that applying these operations to bounded languages produces only bounded languages. Thus, by another application of Theorem \ref{WWW}, the result can then be converted to a $\DCM$.
\qed \end{proof} The ``finite-crossing'' requirement in the proposition above is necessary: \begin{proposition} There exists a letter-bounded language $L$ accepted by a $2\DCM(1)$ machine which makes only one reversal on the counter such that $\suff(L)$ (resp., $\inf(L)$, $\outf(L), \pref(L)$) is not in $\DCM \cup 2\DCM(1)$. \end{proposition} \begin{proof} Let $L = \{a^i \#b^j\# ~|~ i, j \ge 2, j$ is divisible by $i \}$. Clearly, $L$ can be accepted by a $2\DCM(1)$ which makes only one reversal on the counter. If $\suff(L)$ is in $\DCM \cup 2\DCM(1)$, then $L' = \suff(L) \cap \#b^+\#$ would be in $\DCM \cup 2\DCM(1)$. From Corollary \ref{unarycor}, we get a contradiction, since $L'$ is not semilinear. The other cases are shown similarly. \qed \end{proof} \begin{comment} \section{Conclusion} The results presented in this paper can largely be presented in the following table. Assume $R \in \REG$ and $L_{\NPCM} \in \NPCM$. \noindent The question: For all $L \in \DCM(k,l)$: \begin{table}[H] \begin{tabular}{|l|ll|ll|} \hline \textbf{Operation} & is $Op(L) \in \DCM(k,l)$? & & is $Op(L) \in \DCM$? 
& \\\hline$\pref(L)$ &Yes if $k = 1, l\geq 1$ & Cor \ref{closedprefixDCM1} & Yes if $k,l \geq 1$ & Cor \ref{closedprefix} \\ &Open if $k \geq 2$ & & & \\\hline$\suff(L)$ &No if $k,l \geq 1$ & Prop \ref{suff11}, Thm \ref{NonclosureSuffix} & Yes if $k=1, l =1$ & Cor \ref{suffinfDCM} \\ & & & No if $k\geq 2$ or $l \geq 3$ & Thm \ref{NonclosureSuffix} \\\hline$\infx(L)$ &No if $k,l \geq 1$ & Prop \ref{suff11}, Thm \ref{NonclosureSuffix} & Yes if $k=1, l=1$ & Cor \ref{suffinfDCM} \\ &&& No if $k\geq 2$ or $l \geq 3$ & Thm \ref{NonclosureSuffix} \\\hline$\outf(L)$ &No if $k,l \geq 1$ & Prop \ref{suff11}, Thm \ref{NonclosureSuffix} & Open for $k=1, l=1$ & \\ && & No if $k\geq 2$ or $l \geq 3$ & Thm \ref{NonclosureSuffix} \\\hline$L R^{-1}$ &Yes if $k=1, l\geq 1$ & Prop \ref{rightquotientwithNPCM} & Yes if $k,l \geq 1$ & Prop \ref{fullQuotientClosure} \\ &Open if $k \geq 2$ & & & \\\hline$L L_{\NPCM}^{-1}$ &Yes if $k=1, l\geq 1$ & Prop \ref{rightquotientwithNPCM} & Yes if $k,l \geq 1$ & Prop \ref{fullQuotientClosure} \\ &Open if $k \geq 2$ & & & \\\hline$R^{-1} L$ &No if $k,l \geq 1$ & Prop \ref{suff11}, Cor \ref{leftquotR} & Yes if $k=1, l=1$ & Prop \ref{leftquotientwithNPCM} \\ & & & No if $k\geq 2$ or $l \geq 3$ & Cor \ref{leftquotR} \\\hline$L_\NPCM^{-1} L$ &No if $k,l \geq 1$ & Prop \ref{suff11}, Cor \ref{leftquotR} & Yes if $k=1, l=1$ & Prop \ref{leftquotientwithNPCM} \\ & & & No if $k\geq 2$ or $l \geq 3$ & Cor \ref{leftquotR} \\\hline \end{tabular} \caption{Summary of results for $\DCM$. When applying the operation in the first column to any $L \in \DCM(k,l)$, is the result necessarily in $\DCM(k,l)$ (column 2) and in $\DCM$ (column 3)? 
This is parameterized in terms of $k$ and $l$, and the theorems showing each result are provided.} \label{tab:summary} \end{table} In addition, with respect to two-way machines, when starting with a machine with one-way input, and one 3-reversal bounded counter, then applying either suffix, infix or outfix can produce languages outside of $2\DCM(1)$ (one reversal-bounded counter, with unbounded input tape turns, Theorem \ref{NonclosureSuffix}). Lastly, for the case of bounded languages accepted by finite-crossing $2\NCM$, applying any of prefix, suffix, infix or outfix yields only languages in $\DCM$. But the finite-crossing requirement is necessary, as there is a letter-bounded language accepted in $2\DCM(1)$ that makes only one reversal on the counter such that applying any of suffix, infix, outfix or prefix gives a language outside of both $\DCM$ and $2\DCM(1)$. \end{comment} \section{Conclusions} We investigated many different deletion operations applied to languages accepted by one-way and two-way deterministic reversal-bounded multicounter machines, deterministic pushdown automata, and finite automata. The operations include the prefix, suffix, infix, and outfix operations, as well as left and right quotient with languages from different families. Although it is frequently expected that language families defined from deterministic machines will not be closed under deletion operations, we showed that $\DCM$ is closed under right quotient with languages from many different language families, such as the context-free languages. When starting with one-way deterministic machines with one counter that makes only one reversal ($\DCM(1,1)$), taking the left quotient with languages from many different language families, such as the context-free languages, yields only languages in $\DCM$ (by increasing the number of counters). It follows from these results that the suffix and infix closures of $\DCM(1,1)$ languages are all in $\DCM$.
These results are surprising given the nondeterministic behaviour of the deletion operations. However, for both $\DCM(1,3)$ and $\DCM(2,1)$, taking the left quotient (or even just the suffix operation) yields languages that can neither be accepted by deterministic reversal-bounded multicounter machines, nor by two-way deterministic machines with one reversal-bounded counter ($2\DCM(1)$). Some interesting open questions remain. For example, do the outfix and embedding operations applied to $\DCM(1,1)$ languages always yield languages in $\DCM$? Also, other deletion operations, such as schemas for parallel deletion \cite{parInsDel}, have not yet been investigated for languages accepted by deterministic machines. \section*{Acknowledgements} We thank the referees for helpful suggestions that improved the presentation of the paper.
\section{Introduction} \label{intro} At a second-order phase transition the characteristic time scale of the order-parameter fluctuations diverges (critical slowing down), because the energy difference between the ordered and the disordered phases, i.e., the fluctuation energy $\omega_{fl}$, vanishes continuously at the transition. If the phase transition occurs at a finite critical temperature $T_c$, quantum fluctuations of the order parameter are always cut off by the temperature $T$, since $T\approx T_c > \omega_{fl}$, and the order-parameter fluctuations are thermally excited, i.e., incoherent (dark shaded regions in Fig.~\ref{fig1} a), b)). In this sense, such a phase transition is classical. If, however, the transition is tuned to absolute zero temperature by a non-thermal control parameter, the system is at the critical point in a quantum coherent superposition of the degenerate ordered and disordered states. The transition is then called a quantum phase transition (QPT); for reviews see \cite{loehneysen07,steglich08}. The excitation spectrum above this quantum critical state may be distinctly different from the excitations of either phase, the disordered and the ordered one. Therefore, the physical properties are not only dominated by the quantum fluctuations between these phases at $T=0$, but also show unusual temperature dependence, essentially due to thermal excitation of the anomalous spectrum, so that the quantum critical behavior extends up to elevated temperatures (regions marked ``QCF'' in Fig.~\ref{fig1} a), b)). \begin{figure}[t] \begin{center} \scalebox{0.25}[0.25]{\includegraphics[clip]{fig1.eps}}\end{center} \vspace*{-0.3cm} \caption{ Generic phase diagrams of a magnetic QPT in a HF system, driven by an antiferromagnetic RKKY coupling (parametrized by a non-thermal control parameter $x$) for a) the HM and b) the LQC scenario.
For both scenarios the predicted behavior of the spin screening scale on the lattice, $T_K^{\star}$, {\it including} the presence of quantum critical fluctuations (QCF), and as extracted from local Kondo-ion spectra {\it without} lattice coherence or QCF, $T_K$, is also shown (see text for details). The maximum antiferromagnetic RKKY coupling, $x_m$, where single-ion Kondo screening terminates, is marked by a black dot. } \label{fig1} \end{figure} In particular, in a number of heavy-fermion (HF) compounds, where heavy quasiparticles are formed due to the Kondo effect and subsequent lattice coherence, a magnetic phase transition may be suppressed to $T=0$ by chemical composition, pressure or magnetic field. Two types of scenarios are in principle conceivable in these metallic systems.\\ In the first scenario, the quasiparticle system undergoes a spin-density wave (SDW) instability at the quantum critical point (QCP), as described by the theory of Hertz \cite{hertz76} and Millis \cite{millis93} (HM scenario). The instability can be caused by various types of residual spin exchange interactions. In this scenario the Landau Fermi liquid, albeit undergoing magnetic ordering, prevails, and the Kondo temperature $T_K$ remains finite across the QPT. \\ In the second type of scenario, the Kondo effect and, hence, the very formation of heavy fermionic quasiparticles are suppressed. This may occur due to magnetic coupling to the surrounding moments \cite{si01,coleman01} or possibly due to fluctuations of the Fermi volume involved with the onset of Kondo screening in an Anderson lattice system \cite{senthil04}. Both the bosonic order-parameter fluctuations and the local fermionic excitations then become critical at the QPT \cite{si01,coleman01}. In this case the system is in a more exotic, genuine many-body state which is not described by the Landau Fermi-liquid paradigm.
For the critical breakdown of Kondo screening due to magnetic fluctuations the term ``local quantum critical (LQC)'' has been coined \cite{si01}. Unambiguously identifying the quantum critical scenario from the low-$T$ behavior, not to speak of predicting the scenario for a given system, has remained difficult. One reason for this is that the precise critical behavior is not known because of approximate assumptions implicit in the theoretical description of either one scenario, HM or LQC. While the HM theory \cite{hertz76,millis93} pre-assumes the existence of fermionic quasiparticles with only bosonic, critical order-parameter fluctuations, the extended dynamical mean field theory (EDMFT) description of the LQC scenario \cite{si01,coleman01} neglects possible changes of the critical behavior due to spatially extended critical fluctuations. Motivated by our recent ultraviolet (UPS) \cite{klein08} and X-ray (XPS) \cite{klein09} photoemission spectroscopy measurements of the Kondo resonance at elevated $T$ across the Au-concentration range of the QPT in CeCu$_{6-x}$Au$_x$, we here put forward a criterion to predict the quantum critical scenario of a HF system from its high-$T$ behavior around and above the single-ion Kondo temperature $T_K$. As seen below, this criterion derives from the fact that the complete Kondo screening breaks down when the dimensionless RKKY coupling $y$ between Kondo ions exceeds a certain strength $y_{m}$, even when critical fluctuations due to magnetic ordering do not play a role. $y_m$ can be expressed in a universal way in terms of the bare single-ion Kondo temperature $T_K(0)$ only, see Eq.(\ref{eq:ymax}) below. This breakdown is related to the unstable fixed point of the two-impurity Kondo model which separates the Kondo-screened and the inter-impurity (molecular) singlet ground states of this model \cite{jones87}. 
In the present paper we explore and utilize its signatures at temperatures well above the lattice coherence temperature $T_{coh}$ and the magnetic-ordering or N\'eel temperature $T_N$, i.e., in a region where neither critical fluctuations of the Fermi surface \cite{senthil04} or the magnetic order parameter play a role. In the following Section \ref{theory} we present our calculations of the high-temperature signatures of the RKKY-induced Kondo breakdown using perturbative renormalization group as well as selfconsistent diagrammatic methods. In Section \ref{experiment} we briefly recollect the UPS results for CeCu$_{6-x}$Au$_x$ \cite{klein08} and interpret them in terms of the high-$T$ signatures of Kondo breakdown. Some general conclusions are drawn in Section \ref{conclusion}. \section{Theory for single-ion Kondo screening in a Kondo lattice} \label{theory} We consider a HF system described by the Kondo lattice model of local 4f spins $S=1/2$ with the exchange coupling $J$ to the conduction electrons and the density of states at the Fermi level, $N(0)$, for temperatures well above $T_{coh}$, $T_N$. In this regime controlled calculations of renormalized perturbation theory in terms of the single-impurity Kondo model are possible and can be directly compared to experiments \cite{klein08}. In particular, the RKKY interaction of a given Kondo spin at site 0 with identical spins at the surrounding sites $i$ can be treated as a perturbative correction to the local coupling $J$. The leading-order direct and exchange corrections $\delta J^{(d)}$, $\delta J^{(ex)}$ are depicted diagrammatically in Fig.~\ref{fig2} a). As seen from the figure, these corrections involve the full dynamical impurity spin susceptibility (shaded bubbles) on the neighboring impurity sites $i$, $\chi_{4f}(T,0)\!=\!(g_L\mu_B)^2\,N(0) D_0/(4\sqrt{T_K^{2}+T^2})$, with the bare band width $D_0\approx E_F$ and the Land\'e factor $g_L$ and the Bohr magneton $\mu_B$ \cite{andrei83}. 
Note that the Kondo temperature of this effective single-impurity problem, $T_K$, is to be distinguished from the spin screening scale of the lattice problem \cite{pruschke00}, $T_K^{\star}$, which would also include QCF. Summing over all lattice sites $i\neq 0$ one obtains \cite{klein08}, \begin{eqnarray} \delta J^{(d)} &=& - y \frac{1}{4} J g_{i}^2 \ \frac{D_0}{\sqrt{T_K^{2}+T^2}}\ \frac{1}{1+(D/T_K)^2} \label{eq:deltaJ_d}\\ \delta J^{(ex)} &=& - y \frac{1}{4} J g_{i}^2 \left( \frac{3}{4} + \frac{T}{\sqrt{T_K^{2} + T^2}} \right)\ . \label{eq:deltaJ_ex} \end{eqnarray} Here $g_i\!=\!N(0)J_i$ is the dimensionless bare coupling on site $i\!\neq\!0$, and $y$ is a dimensionless factor that describes the relation between the RKKY coupling strength and the Au content $x$. In the vicinity of the QPT we assume a linear dependence, $y=\alpha (x+x_0)$, with adjustable parameters $\alpha$ and $x_0$. In $\delta J^{(d)}$ [first diagram in Fig.~\ref{fig2} a)] the local spin response $\chi_{4f}(T,\Omega)$ restricts the energy exchange between conduction electrons and local spin, i.e., the band cutoff $D$, to $T_K$. This is described by the last factor in Eq.~(\ref{eq:deltaJ_d}) (soft cutoff). In $\delta J^{(ex)}$ [second diagram in Fig.~\ref{fig2} a)] $\chi_{4f}(T,\Omega)$ restricts the conduction-electron response to a shell of width $T_K$ around the Fermi energy $E_F$, and suppresses $\delta J^{(ex)}$ compared to $\delta J^{(d)}$ by an overall factor of $\sqrt{T_K^{2}+T^2}/D_0$, as seen in Eq.~(\ref{eq:deltaJ_ex}). The spin screening scale of this effective single-impurity problem, including RKKY corrections, can now be obtained as the energy scale where the perturbative renormalization group (RG) for the RKKY-corrected spin coupling (taken at $T=0$) diverges. The one-loop RG equation reads \begin{eqnarray} \frac{d J}{d\ln D}\!=\!-2 N(0) \left[ J+\delta J^{(d)}(D)+\delta J^{(ex)}(D) \right]^2 \ .
\label{eq:RGequation} \end{eqnarray} Note that in this RG equation the bare bandwidth $D_0$ and the couplings $g_i$ on sites $i\!\neq\!0$ are not renormalized, since this is already included in the full susceptibility $\chi_{4f}$. The essential feature is that for $T=0$ the direct RKKY correction $\delta J^{(d)}$, Eq.~(\ref{eq:deltaJ_d}), is inversely proportional to the renormalized Kondo scale $T_K(y)$ itself via $\chi_{4f}(0,0)$. The solution of Eq.~(\ref{eq:RGequation}) leads to a highly non-linear self-consistency equation for $T_K(y)$, \begin{figure}[t] \begin{center} \scalebox{0.305}[0.305]{\includegraphics[clip]{fig2.eps}}\end{center} \vspace*{-0.3cm} \caption{ a) Leading order RKKY-induced corrections to the local spin-exchange coupling. Solid and dashed lines represent conduction electron and impurity spin (pseudofermion) propagators, respectively. b) The single-impurity Kondo scale $T_K(y)$ with RKKY corrections to the local exchange coupling (Fig.~\ref{fig2} a)) is shown as a function of $y$ for various values of the bare Kondo temperature $T_K(0)$. The effective perturbation parameter $f(y)$ is also shown. c) NCA result for the f spectral density on a single Kondo ion, including RKKY corrections (Fig.~\ref{fig2} a)) for $T=2\, T_K(0)$. The steep collapse of the Kondo resonance for increasing antiferromagnetic RKKY coupling $K_{RKKY}$ is clearly seen. } \label{fig2} \end{figure} \begin{eqnarray} \frac{T_K(y)}{T_K(0)} = \exp \left\{-\left(\frac{1}{2g} +\ln 2\right) \ \frac{f(u)}{1-f(u)} \right\} \ , \label{eq:TK_RG} \end{eqnarray} with $g\!=\!N(0)J$, $f(u)\!=\!u-u^2/2$, $u\!=\!yg^2D_0/[4T_K(y)]$. The single-ion Kondo scale without RKKY coupling is $T_K(0)=D_0\ {\rm exp}[-1/2g]$. Fig.~\ref{fig2} b) shows solutions of Eq.~(\ref{eq:TK_RG}) for various values of $T_K(0)$, together with the corresponding values of the effective perturbation parameter $f(y)$. 
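Since $u$ in Eq.~(\ref{eq:TK_RG}) depends on $T_K(y)$ itself, the equation determines $T_K(y)$ only implicitly; numerically it can be solved by fixed-point iteration. The following sketch is not from the paper: the coupling value $g=0.1$, the units $D_0=1$, and the runaway threshold on $f$ are illustrative assumptions. It reproduces the qualitative behavior of Fig.~\ref{fig2} b): $T_K(y)$ decreases monotonically with $y$, and the iteration finds no solution once $y$ exceeds the $y_m$ of Eq.~(\ref{eq:ymax}).

```python
import math

def y_max(tau):
    # Eq. (eq:ymax): maximal RKKY coupling y_m as a function of tau = T_K(0)/D_0
    a = 2.0 - math.log(tau / 2.0)
    return 3.128 * tau * math.log(tau) ** 2 * (a - math.sqrt(a * a - 4.0))

def solve_tk(g, y, tol=1e-12, max_iter=10000):
    """Fixed-point iteration for the self-consistency equation of T_K(y).

    g is the dimensionless bare coupling N(0)J; units are chosen so D_0 = 1.
    Returns T_K(y), or None when the iteration runs away, i.e., no solution
    exists (RKKY-induced breakdown of Kondo screening for y > y_m).
    """
    tk0 = math.exp(-1.0 / (2.0 * g))      # bare scale T_K(0) = D_0 exp(-1/2g)
    t = tk0
    for _ in range(max_iter):
        u = y * g * g / (4.0 * t)         # u = y g^2 D_0 / (4 T_K(y))
        f = u - 0.5 * u * u               # effective perturbation parameter f(u)
        if f >= 0.3:                      # far outside the perturbative regime
            return None                   # (cutoff value 0.3 is our choice)
        t_new = tk0 * math.exp(-(1.0 / (2.0 * g) + math.log(2.0)) * f / (1.0 - f))
        if abs(t_new - t) < tol * tk0:
            return t_new
        t = t_new
    return None
```

For $g=0.1$ (so $\tau_K \approx 6.7\times 10^{-3}$ and $y_m \approx 0.14$) the iteration converges to a monotonically decreasing $T_K(y)$ at small $y$ and runs away for $y$ well above $y_m$, mirroring the termination of the solution branch at $y_m$ in Fig.~\ref{fig2} b).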
It is seen that this RG treatment is perturbatively controlled in the sense that $f(y)\lesssim 0.1$, i.e., the exponent in Eq.~(\ref{eq:TK_RG}) remains small for all solutions. Remarkably, a solution of Eq.~(\ref{eq:TK_RG}) exists only up to a certain RKKY coupling strength $y_m$. For each value of the bare single-ion Kondo temperature, $\tau_K=T_K(0)/D_0$, $y_m$ can be calculated as the point where the derivative $dT_K(y)/dy$ diverges \cite{klein08}, \begin{eqnarray} y_{m}=3.128\,\tau_K\,(\ln \tau_K)^2\hspace{-0.2ex} \left[2\hspace{-0.2ex}-\hspace{-0.2ex}\ln\frac{\tau_K}{2}\hspace{-0.2ex}- \hspace{-0.2ex}\sqrt{\left(2\hspace{-0.2ex}-\hspace{-0.2ex} \ln\frac{\tau_K}{2}\right)^2\hspace{-0.2ex}-4} \right] \ . \label{eq:ymax} \end{eqnarray} By rescaling $y$ and $T_K(y)$ as $y/y_m$ and $T_K(y)/T_K(0)$, respectively, all $T_K(y)$ curves collapse onto a single, universal curve, shown in the inset of Fig.~\ref{fig5} below. For $y>y_m$ the RG Eq.~(\ref{eq:RGequation}) does not diverge, i.e., the Kondo screening breaks down at this maximum RKKY coupling strength $y_m$, even if magnetic ordering does not occur. Therefore, the physical origin of the high-$T$ criterion (\ref{eq:ymax}) is different from that of the well-known Doniach criterion \cite{doniach77} (which reads $T_K(0)\approx y_m N(0)J^2$), although it yields numerically similar values for $y_m$. According to Fig.~\ref{fig2} b), a sharp drop of $T_K$ is predicted at $y=y_m$. As seen in Fig.~\ref{fig2} c), this breakdown of Kondo screening is signalled by a collapse of the Kondo resonance in the local 4f spectrum $A_f(\omega)$ of a single Kondo ion, as the antiferromagnetic RKKY coupling to neighboring Kondo ions is increased. Fig.~\ref{fig2}~c) shows $A_f(\omega)$ as calculated for the two-impurity Anderson model within the non-crossing approximation (NCA) at $T=2\,T_K(0)$ \cite{keiter71}. For an efficient implementation of the NCA see \cite{costi96}.
Details of these calculations as well as numerical renormalization group (NRG) studies of this problem will be published elsewhere. The described signatures should be directly observable in spectroscopic experiments at temperatures well above $T_N$, see section \ref{experiment}. We emphasize again that the Kondo breakdown occurs in any case, whether or not magnetic ordering sets in at low $T$. \begin{figure}[t] \begin{center} \scalebox{0.35}[0.35]{\includegraphics[clip]{fig3.eps}}\end{center} \vspace*{-0.3cm} \caption{ Schematic phase diagram of a HF system with a magnetic QPT in the $T_K(0)$--$y$ plane. The line denoted by $y_m$ represents Eq.~(\ref{eq:ymax}). At this line $T_K(y)$ undergoes an abrupt step, see text. The curve denoted by $y_{SDW}$ marks, as an example, an SDW instability of the system. The magnetic phase transition, LQC- or HM-like, occurs at whichever of the two lines is lower for a given system. The arrows indicate estimates \cite{ehm07} for $T_K(0)/D_0$ for CeCu$_{6-x}$Au$_x$ and CeNi$_{2-x}$Cu$_x$Ge$_{2}$. } \label{fig3} \end{figure} Therefore, the model predicts two quantum critical scenarios with distinctly different high-$T$ signatures: (1) The heavy Fermi liquid has a magnetic, e.g., SDW instability at $T=0$ for an RKKY parameter $y=y_{SDW}<y_m$, i.e., without breakdown of Kondo screening. In this case, $T_K(y)$, as extracted from high-$T$ UPS spectra, is essentially constant across the QCP but does have a sharp drop at $y=y_m$ inside the region where magnetic ordering occurs at low $T$, see Fig.~\ref{fig1} a). This corresponds to the HM scenario. (2) Magnetic ordering does not occur for $y<y_m$. In this case the Kondo breakdown at $y=y_m$ implies that the residual local moments order at sufficiently low $T$, i.e., the magnetic QCP coincides with $y=y_m$. 
Quantum critical fluctuations (not considered in the present high-$T$ theory) will suppress the actual spin screening scale $T_K^{\star}$ below the high-$T$ estimate $T_K$, as shown in Fig.~\ref{fig1} b). This is the LQC scenario. These predictions are summarized in Fig.~\ref{fig3} as a phase diagram in terms of the bare Kondo scale $T_K(0)$ and the dimensionless RKKY coupling $y$ \cite{klein08}. \begin{figure}[t] \begin{center} \scalebox{0.3}[0.3]{\includegraphics[clip]{fig4.eps}}\end{center} \vspace*{-0.3cm} \caption{ {a)} Near-$E_F$ spectra of CeCu$_{6-x}$Au$_x$ for five different Au concentrations at $T\!=\!15$~K. The dashed lines indicate the resolution-broadened FDD at $T\!=\!15$\,K. The inset shows a larger energy range including the spin-orbit (SO) satellite at $E_B\!\approx\!260$~meV. See Ref.~\cite{klein08} for details of the experimental parameters. {b)} and {c)} show spectra for $x\!=\!0.1$ and $x\!=\!0.2$, respectively, divided by the FDD, at various $T$. The solid lines are single-impurity NCA fits. The insets in {b)} and {c)} show the corresponding raw data. } \label{fig4} \end{figure} \section{High-resolution photoemission spectroscopy at elevated temperature} \label{experiment} The theory described in the previous section should be applicable quite generally to HF systems with a magnetic QPT. Here we apply it to CeCu$_{6-x}$Au$_x$, which is one of the best characterized HF compounds \cite{loehneysen94,loehneysen96a,loehneysen98,stockert98,loehneysen98a,stroka93,stockert07,grube99}. The Au content $x$ is used to tune the RKKY interaction through the QPT at $x=x_c=0.1$. Our recent UPS measurements \cite{klein08} on this compound at elevated $T$ actually motivated the theoretical study. Details of the sample preparation and measurement procedures can be found in \cite{reinert01,ehm07}.
The UPS measurements were done at $T = 57$\,K, 31\,K and 15\,K, i.e., well above $T_K(0)\approx 5$\,K, $T_{coh}$, and above the temperature up to which quantum critical fluctuations extend in CeCu$_{6-x}$Au$_x$ \cite{schroeder00}. Thus they record predominantly the local Ce 4f spectral density, which is characterized by an effective {\it single-ion} Kondo scale $T_K$. This corresponds to the situation for which the calculations in Section \ref{theory} were done. In Fig.~\ref{fig4} a) raw UPS Ce 4f spectra are displayed, showing the onset of the Kondo resonance. A sudden decrease of the Kondo spectral weight at or near the quantum critical concentration $x_c$ can already be observed in these raw spectra. The states at energies of up to ${\sim}5k_BT$ above the Fermi level are accessible by a well-established procedure \cite{reinert01,ehm07} which involves dividing the raw UPS spectra by the Fermi-Dirac distribution function (FDD). The Kondo resonance, which in CeCu$_{6-x}$Au$_x$ is located slightly above the Fermi level, then becomes clearly visible, see Fig.~\ref{fig4}~b),~c). These figures clearly exhibit the collapse of the Kondo spectral weight above as compared to below $x_c$. This is in qualitative agreement with the Kondo resonance collapse in the theoretical spectra for $T>T_K(0)$, Fig.~\ref{fig2}~c). To pinpoint the position of the Kondo breakdown more precisely, the single-ion Kondo temperature $T_K$ was extracted from the experimental spectra for various $x$. To that end, we followed the procedure successfully applied to various Ce compounds in the past \cite{patthey90,garnier97,allen00,reinert01,ehm07}: Using the non-crossing approximation (NCA) \cite{costi96}, the Ce $4f$ spectral function of the single-impurity Anderson model was calculated, including all crystal-field and spin-orbit excitations.
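As an aside, the FDD-division step described above can be sketched numerically. The energy grid, temperature, Gaussian resolution width and cutoff in the sketch below are illustrative assumptions of ours, not the actual experimental parameters:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K


def fdd(E, T):
    """Fermi-Dirac distribution on an energy grid E (eV) at temperature T (K)."""
    return 1.0 / (np.exp(E / (K_B * T)) + 1.0)


def broadened_fdd(E, T, fwhm):
    """FDD convolved with a Gaussian of the given FWHM (eV), mimicking the
    finite experimental resolution. The kernel is built symmetric (odd length)
    so the convolution introduces no energy shift."""
    dE = E[1] - E[0]
    sigma = fwhm / 2.3548  # FWHM -> standard deviation
    half_w = max(1, int(np.ceil(5.0 * sigma / dE)))
    offsets = np.arange(-half_w, half_w + 1) * dE
    kernel = np.exp(-0.5 * (offsets / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(fdd(E, T), kernel, mode="same")


def divide_by_fdd(E, spectrum, T, fwhm, cutoff=1e-3):
    """Divide a measured spectrum by the resolution-broadened FDD. States up to
    roughly 5*K_B*T above E_F become accessible; beyond that the FDD is too
    small and the quotient is masked as unreliable (NaN)."""
    f = broadened_fdd(E, T, fwhm)
    out = np.full_like(spectrum, np.nan)
    reliable = f > cutoff
    out[reliable] = spectrum[reliable] / f[reliable]
    return out
```

On a synthetic spectrum (a constant density of states multiplied by the FDD), the division recovers the constant value both below $E_F$ and slightly above it, while deep above $E_F$ the result is masked.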
For each composition $x$ the NCA \begin{figure}[t] \begin{center} \scalebox{0.37}[0.37]{\includegraphics[clip]{fig5.eps}}\end{center} \vspace*{-0.3cm} \caption{ Dependence of the Kondo temperature $T_K$ on the Au content $x$, as determined by UPS (open circles), specific heat \cite{schlager92} (triangles) and neutron scattering \cite{stroka93,schroeder00} (diamonds). The error bars are approximately the width of the shaded area. The N\'eel temperature is labelled by $T_N$. The inset and the solid line in the main panel show the universal curve $T_K(y)/T_K(0)$ vs. $y/y_{m}$ as given by Eq.~(\ref{eq:TK_RG}). } \label{fig5} \end{figure} spectra were broadened by the experimental resolution and fitted to the experimental data, using a single parameter set for all experimental $T$. Using this parameter set, the NCA spectra were then calculated at low temperature, $T\!\approx\!0.1\,T_K$, where $T_K$ was extracted from the Kondo-peak half-width at half maximum (HWHM) of the NCA spectra. The results are shown in Fig.~\ref{fig5}. The finite Kondo scale extracted from the data for $x>x_c=0.1$ results from the high-$T$ onset of the Kondo resonance seen in Fig.~\ref{fig4}~c) which, according to our analysis, is expected not to persist to low $T$. Despite an uncertainty in $T_K$, estimated by the width of the shaded area in Fig.~\ref{fig5}, a sudden decrease of $T_K$ is clearly visible. The fact that the sharp $T_K$ drop of the experimental data occurs at (or very close to) the quantum critical concentration $x_c$ suggests identifying this drop with the theoretically expected signature of the Kondo breakdown at $y_m$, as illustrated in Fig.~\ref{fig5}. This supports the conclusion that the QPT in CeCu$_{6-x}$Au$_x$ follows the LQC scenario driven by intersite magnetic fluctuations, as explained in Section \ref{theory}, and as was previously inferred from inelastic neutron scattering experiments \cite{schroeder00}.
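The HWHM extraction used above can likewise be sketched; the energy grid and the Lorentzian test line shape below are our own illustrative choices (the actual Kondo line shape is non-Lorentzian), and converting the HWHM to a temperature scale is a matter of convention:

```python
import numpy as np


def kondo_peak_hwhm(E, A):
    """Half-width at half maximum of the dominant peak in a spectral function
    A sampled on the energy grid E (eV), measured on the high-energy side of
    the maximum, with linear interpolation between grid points."""
    i_max = int(np.argmax(A))
    half = A[i_max] / 2.0
    for j in range(i_max + 1, len(A)):
        if A[j] < half:
            # interpolate between the two grid points bracketing the crossing
            t = (half - A[j - 1]) / (A[j] - A[j - 1])
            e_half = E[j - 1] + t * (E[j] - E[j - 1])
            return e_half - E[i_max]
    raise ValueError("spectrum does not fall below half maximum on this grid")
```

For a Lorentzian of width $\gamma$ centered slightly above $E_F$, the function returns $\gamma$ up to grid-interpolation error.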
However, the nearby orthorhombic-monoclinic structural transition, occurring at 220~K for $x=0$ and at 70~K for $x=0.1$ \cite{grube99}, might also have an effect on the behavior of $T_K$, so future work has to substantiate the above LQC conjecture. We note, though, that across the structural transition the lattice unit cell volume and, hence, the density of states at the Fermi level tend to {\it increase} smoothly with increasing $x$, which would lead to an increase rather than a drop of $T_K$. \section{Conclusion} \label{conclusion} Our theoretical analysis generally predicts that an abrupt step of the Kondo screening scale extracted from high-$T$ spectral data should occur in any HF compound with competing Kondo and RKKY interactions, as long as the single-ion Kondo screening scale is larger than the magnetic ordering temperature. Whether this distinct feature is located at the quantum critical control parameter value $x_c$ or inside the magnetically ordered region constitutes a general high-$T$ criterion to distinguish the LQC and HM scenarios. Moreover, this criterion allows one to predict whether a given system should follow the HM or the LQC scenario, once estimates for the bare single-ion Kondo scale $T_K(0)$ and for the dimensionless critical coupling strength $y_{SDW}$ for an SDW instability in that system are known. This is indicated in Fig.~\ref{fig3} for the examples of CeCu$_{6-x}$Au$_x$ and CeNi$_{2-x}$Cu$_x$Ge$_2$. A systematic analysis of other HF compounds in this respect is in progress. We would like to thank F. Assaad, L. Borda, S. Kirchner, A. Rosch and M. Vojta for fruitful discussions. This work was supported by DFG through Re~1469/4-3/4 (M.K., A.N., F.R.), SFB~608 (J.K.) and FOR~960 (H.v.L.). \section*{References}
\section{Introduction} \label{sec:introduction} \vspace*{-3pt} Egocentric videos, also referred to as first-person videos, have frequently been advocated as providing a unique perspective into object interactions~\cite{kanade2012first,fathi2011understanding, mayol2004interaction}. These capture a viewpoint of the object close to that perceived by the user during interactions. Consider, for example, `turning a door handle'. Similar appearance and motion information will be captured from an egocentric perspective as multiple people turn a variety of door handles. Several datasets have been made available to the research community focusing on object interactions from head-mounted~\cite{de2008guide, fathi2011learning, Fathi2012,Damen2014a, lee2012discovering} and chest-mounted~\cite{pirsiavash2012detecting} cameras. These incorporate ground truth labels that mark the \textit{start} and the \textit{end} of each object interaction, such as `open fridge', `cut tomato' and `push door'. These temporal bounds are the basis for automating action detection, localization and recognition. They are thus highly influential in the ability of an algorithm to distinguish one interaction from another. As temporal bounds vary, the segments may contain different portions of the untrimmed video from which the action is extracted. Humans can still recognize an action even when the video snippet varies or contains only part of the action. Machines are not yet as robust, given that current algorithms strongly rely on the data and the labels we feed to them. Should these bounds be incorrectly or inconsistently annotated, the ability to learn, as well as to assess, models for action recognition would be adversely affected. In this paper, we uncover inconsistencies in defining temporal bounds for object interactions within and across three egocentric datasets. We show that temporal bounds are often ill-defined, with limited insight into how they have been annotated.
We systematically show that perturbations of temporal bounds influence the accuracy of action recognition, for both hand-crafted features and fine-tuned classifiers, even when the tested video segment significantly overlaps with the ground truth segment. While this paper focuses on unearthing inconsistencies in temporal bounds, and assessing \textcolor[rgb]{0,0,0}{their} effect on object interaction recognition, we take a step further by proposing an approach, inspired by studies in Psychology, for consistently labeling temporal bounds. \vspace*{2pt} \noindent\textbf{Main Contributions} \hspace{4pt} More specifically, we: \begin{itemize} \vspace*{-4pt} \item Inspect the consistency of temporal bounds for object interactions \textit{across and within} three datasets for egocentric object interactions. We demonstrate that current approaches are highly subjective, with visible variability in temporal bounds when annotating instances of the same action; \vspace*{-6pt} \item Evaluate the robustness of two state-of-the-art action recognition approaches, namely Improved Dense Trajectories~\cite{Wang2013} and Convolutional Two-Stream Network Fusion~\cite{feichtenhofer2016convolutional}, to changes in temporal bounds. We demonstrate that the recognition rate drops by 2-10\% when temporal bounds are modified, even when the Intersection-over-Union with the ground truth remains above 0.5; \vspace*{-6pt} \item Propose, inspired by studies in Psychology, the Rubicon Boundaries to assist in consistent temporal boundary annotations for object interactions; \vspace*{-6pt} \item Re-annotate one dataset using the Rubicon Boundaries, and show more than 4\% increase in recognition accuracy, with improved per-class accuracies for most classes in the dataset.
\end{itemize} \vspace*{-6pt} We next review related works in Section~\ref{sec:relatedWork}, before embarking on inspecting labeling consistencies in Section~\ref{sec:temporalBoundaries}, evaluating recognition robustness in Section~\ref{sec:experiments} and proposing and evaluating the Rubicon Boundaries in Section~\ref{sec:rubiconBoundaries}. The paper concludes with an insight into future directions. \section{Related Work} \label{sec:relatedWork} In this section, we review all papers that, to the best of our knowledge, ventured into the consistency and robustness of temporal bounds for action recognition. \vspace*{6pt} \noindent \textbf{Temporal Bounds in Non-Egocentric Datasets} \hspace{4pt} The leading work of Satkin and Hebert~\cite{satkin2010modeling} first pointed out that determining the temporal extent of an action is often subjective, and that action recognition results vary depending on the bounds used for training. They proposed to find the most discriminative portion of each segment for the task of action recognition. Given a loosely trimmed training segment, they exhaustively search for the cropping that leads to the highest classification accuracy, using hand-crafted features such as HOG, HOF \cite{laptev2008learning} and Trajectons \cite{matikainen2009trajectons}. Optimizing bounds to maximize discrimination between class labels has also been attempted by Duchenne~\textit{et al.}~\cite{duchenne2009automatic}, where they refined loosely labeled temporal bounds of actions, estimated from film scripts, to increase accuracy across action classes. Similarly, two works evaluated the optimal segment length for action recognition~\cite{schindler2008action,yang2014effective}. From the \textit{start} of the segment, 1-7 frames were deemed sufficient in~\cite{schindler2008action}, with rapidly diminishing returns as more frames were added.
More recently,~\cite{yang2014effective} showed that 15-20 frames were enough to recognize human actions from 3D skeleton joints. Interestingly, assessing the effect of temporal bounds is still an active research topic within novel deep architectures. Recently, Peng~\textit{et al.}~\cite{peng2016multi} assessed how frame-level classifications using multi-region two-stream CNN are pooled to achieve video-level recognition results. The authors reported that stacking more than 5 frames worsened the action detection and recognition results for the tested datasets, though only compared to a 10-frame stack. The problem of finding optimal temporal bounds is much akin to that of action localization in untrimmed videos~\cite{wang2016temporal, lea2016segmental, huang2016connectionist}. Typical approaches attempt to find similar temporal bounds to those used in training, making them equally dependent on manual labels and thus sensitive to inconsistencies in the ground truth labels. An interesting approach that addressed reliance on training temporal bounds for action recognition and localization is that of Gaidon \textit{et al.} \cite{gaidon2013temporal}. They noted that action recognition methods rely on temporal bounds in test videos to be strictly containing an action, and in \textit{exactly} the same fashion as the training segments. They thus redefined an action as a sequence of key atomic frames, referred to as actoms. The authors learned the optimal sequence of actoms per action class with promising results. More recently, Wang \textit{et~al.}~\cite{Wang_2016_CVPR} represented actions as a transformation from a precondition state to an effect state. The authors attempted to learn such transformations as well as locate the end of the precondition and the start of the effect. 
However, both approaches~\cite{gaidon2013temporal, Wang_2016_CVPR} rely on manual \textcolor[rgb]{0,0,0}{annotations} of actoms~\cite{gaidon2013temporal} or action segments~\cite{Wang_2016_CVPR}, which are potentially as subjective as the temporal bounds of the {actions themselves}. \vspace*{6pt} \noindent \textbf{Temporal Bounds in Egocentric Datasets} \hspace{4pt} Compared to {third} person action recognition (e.g. 101 action classes in~\cite{soomro2012ucf101} and 157 action classes in~\cite{sigurdsson2016hollywood}), egocentric datasets have a smaller number of classes (5-44 classes~\cite{de2008guide, fathi2011learning, Fathi2012, Damen2014a, lee2012discovering, pirsiavash2012detecting, zhou2015temporal}), with considerable ambiguities (e.g. `turn on' vs `turn off' tap). Comparative recognition results {have been} reported on these datasets in~\cite{spriggs2009temporal, taralova2011source, Singh16, li2015delving, Ryoo_2015_CVPR, ma2016going}. Previously, three works noted the challenge and difficulty in defining temporal bounds for egocentric videos~\cite{spriggs2009temporal,Damen2014a,zhou2015temporal}. In~\cite{spriggs2009temporal}, Spriggs \textit{et al.} discussed the level of granularity in action labels (e.g.~`break egg' vs `beat egg in a bowl') for the CMU dataset~\cite{de2008guide}. They also noted the presence of temporally overlapping object interactions (e.g. `pour' while `stirring'). In~\cite{wray2016sembed}, multiple annotators were asked to provide temporal bounds for the same object interaction. The authors showed variability in annotations, yet did not detail what instructions were given to annotators when labeling these temporal bounds. In~\cite{zhou2015temporal}, the human ability to order pairwise egocentric segments was evaluated as the snippet length varied. The work showed that human perception improves as the size of the segment increases to~60 frames, then levels off. 
\begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/pourExampleBig_reduced.png} \caption{Annotations for action `pour sugar/oil' from BEOID, GTEA Gaze+ and CMU. \textcolor[rgb]{0,0,0}{Aligned key frames} are shown along with ground truth annotations (red). The yellow rectangle encloses the motion strictly involved in `pour'.} \label{fig:differentBoundariesExample} \end{figure*} To summarize, temporal bounds for object interactions in egocentric video have been overlooked and no previous work attempted to analyze the influence of consistency of temporal bounds or the robustness of representations to variability in these bounds. This paper particularly attempts to answer both questions; \textit{how consistent are current temporal bound labels in egocentric datasets?} and \textit{how sensitive are action recognition results to changes in these temporal bounds?} We next delve into answering these questions. \vspace*{-6pt} \section{Temporal Bounds: Inspecting Inconsistency} \label{sec:temporalBoundaries} \vspace*{-2pt} Current egocentric datasets are annotated for a number of action classes, described using a verb-noun label. Each class instance is annotated with its label as well as the temporal bounds (i.e.~\textit{start} and \textit{end} times) that delimit the frames used to learn the action model. Little information is typically provided on how these manually labeled temporal bounds are acquired. In Section~\ref{subsec:labellingInCurrentDatasets}, we compare labels across and within egocentric datasets. We then discuss in Section~\ref{subsec:multiple} how variability is further increased when multiple annotators for the same action are employed. \subsection{Labeling in Current Egocentric Datasets} \label{subsec:labellingInCurrentDatasets} We study ground truth annotations for three public datasets, namely BEOID~\cite{Damen2014a}, GTEA Gaze+~\cite{Fathi2012} and CMU~\cite{de2008guide}. 
Observably, many annotations base the \textit{start} and \textit{end} of an action as respectively the first and last frames when the hands are visible in the field of view. Other annotations tend to segment an action more strictly, including only the most relevant physical object interaction within the bounds. Figure~\ref{fig:differentBoundariesExample} illustrates an example of three different temporal bounds for the `pour' action across the three datasets. Frames marked in red are those that have been labeled {in the different datasets} as containing the `pour' action. The annotated temporal bounds in this example vary remarkably{;} (i) BEOID's are the tightest; (ii) The start of GTEA Gaze+'s segment is belated: in fact, the first frame in the annotated segment shows {some oil already} in the pan; (iii) CMU's segment includes picking the oil container {and} putting it down before and after pouring. These conclusions extend to other actions in the three datasets. We observe that annotations are also inconsistent within the same dataset. Figure \ref{fig:boundariesInconsistency} shows three intra-dataset annotations. (i) For the action `open door' in BEOID, one segment includes the hand reaching the door, while the other starts with the hand already holding the door's handle; (ii) For the action `cut pepper' in GTEA Gaze+, in one segment the user already holds the knife and cuts a single slice of the vegetable. The second segment includes the action of picking up the knife, and shows the subject slicing the whole pepper through several cuts. Note that the length difference between the two segments is considerable - the segments are respectively 3 and 80 seconds long; (iii) For the action `crack egg' in CMU, only the first segment shows the user tapping the egg against the bowl. While the figure shows three examples, such inconsistencies have been discovered throughout the three datasets. 
However, we generally observe that GTEA Gaze+ shows more inconsistencies, which could be due to the dataset size, as it is the largest among the evaluated datasets. \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{figures/boundariesInconsistencyStreched_reduced.png} \caption{Inconsistency of temporal bounds within datasets. Two segments from each action are shown with considerable differences in start and end times.} \label{fig:boundariesInconsistency} \end{figure} \vspace*{-4pt} \subsection{Multi-Annotator Labeling} \label{subsec:multiple} \vspace*{-2pt} As noted above, defining when an object interaction begins and finishes is highly subjective. There is usually little agreement when different annotators segment the same object interaction. To assess this variability, we collected 5 annotations for several object interactions from an untrimmed video of the BEOID dataset. First, annotators were only informed of the class name and asked to label the start and the end of the action. We refer to these annotations as \textit{conventional}. We then asked a different set of annotators to annotate the same object interactions following our proposed Rubicon Boundaries (RB) approach which we will present in Section \ref{sec:rubiconBoundaries}. Figure~\ref{fig:beoidAnnotations_boxPlots} shows per-class box plots for the Intersection-over-Union (IoU) measure for all pairs of annotations. RB annotations demonstrate gained consistency for all classes. For conventional annotations, we report an average IoU~=~0.63 and a standard deviation of~0.22, whereas for RB annotations we report increased average IoU~=~0.83 with a lower standard deviation of~0.11. 
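The agreement measure used above is the standard interval Intersection-over-Union. A minimal sketch (the function names are ours):

```python
from itertools import combinations


def interval_iou(a, b):
    """IoU of two temporal segments, each given as (start, end) in seconds."""
    intersection = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - intersection
    return intersection / union if union > 0 else 0.0


def mean_pairwise_iou(annotations):
    """Mean IoU over all pairs of annotations of the same object interaction,
    as summarized per class in the box plots."""
    ious = [interval_iou(a, b) for a, b in combinations(annotations, 2)]
    return sum(ious) / len(ious)
```

For example, two annotators marking `(0, 2)` and `(1, 3)` seconds agree with IoU of 1/3; averaging over all annotator pairs of one interaction gives the per-class statistic reported above.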
\begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{figures/beoidIoUPlot2.pdf} \caption{IoU comparison between conventional (red) and RB (blue) annotations for several object interactions.} \label{fig:beoidAnnotations_boxPlots} \end{figure} To assess how consistency changes as more annotations are collected, we employ annotators via the Amazon Mechanical Turk (MTurk) for two object interactions from BEOID, namely the actions of `scan card' and `wash cup', for which we gathered 45 conventional and RB labels. Box plots for MTurk labels are included in Figure~\ref{fig:beoidAnnotations_boxPlots}, showing marginal improvements with RB annotations as well. We will revisit RB annotations in detail in Section~\ref{sec:rubiconBoundaries}. In the next Section, we assess the robustness of current action recognition approaches to variations in temporal boundaries. \vspace*{-6pt} \section{Temporal Bounds: Assessing Robustness} \label{sec:experiments} \vspace*{-4pt} To assess the effect of temporal bounds on action recognition, we systematically vary the \textit{start} and \textit{end} times of annotated segments for the {three} datasets, and report comprehensive results on the effect of such alterations. Results are evaluated using 5-fold cross validation. For training, only ground truth segments are considered. We then classify \textit{both} the original ground truth and the generated segments. We provide results using Improved Dense Trajectories \cite{Wang2013} encoded with Fisher Vector \cite{Sanchez2013} (IDT FV)\footnote{IDT features have been extracted using GNU Parallel \cite{Tange2011a}.} and Convolutional Two-Stream Network Fusion for Video Action Recognition (2SCNN) \cite{feichtenhofer2016convolutional}. The encoded IDT FV features are classified with a linear SVM. Experiments on 2SCNN are carried out using the provided code and the proposed VGG-16 architecture pre-trained on ImageNet and tuned on UCF101~\cite{soomro2012ucf101}. 
We fine-tune the spatial, temporal and fusion networks on each fold's training set. Theoretically, the two action recognition approaches are likely to respond differently to variations in start and end times. Specifically, 2SCNN averages the classification responses of the fusion network obtained on $n$ frames randomly extracted from a test video $v$ of length $|v|$. In our experiments, $n = \min(20, |v|)$. Such a strategy should ascribe some degree of resilience to 2SCNN. IDT \textcolor[rgb]{0,0,0}{densely} samples feature points in the first frame of the video, whereas in the following frames only new feature points are sampled to replace the missing ones. This entails that IDT FV should be more sensitive to start (specifically) and end time variations, at least for shorter videos. This fundamental difference makes both approaches interesting to assess for robustness. \begin{table}[t] \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{l|c|c|c} \hline Dataset & N. of $gt$ segments & N. of $gen$ segments & Classes \\ \hline BEOID~\cite{Damen2014a} & 742 & 16691 & 34 \\ GTEA Gaze+~\cite{Fathi2012} & 1141 & 22221 & 42 \\ CMU~\cite{de2008guide} & 450 & 26160 & 31 \\ \hline \end{tabular}} \caption{Number of ground truth/generated segments and number of classes for BEOID, GTEA Gaze+ and CMU.} \label{table:datasetsInfo} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{figures/datasetLengthDistributionGteaOriginal.pdf} \caption{Video length distribution {across datasets}.} \label{fig:datasetLengthDistribution} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/beoidResults.pdf} \caption{BEOID: classification accuracy vs IoU, start/end shifts and length difference between $gt$ and generated segments.} \label{fig:beoidResults} \end{figure*} \begin{figure*}[t!]
\centering \includegraphics[width=\textwidth]{figures/cmuResults.pdf} \caption{CMU: classification accuracy vs IoU, start/end shifts and length difference.} \label{fig:cmuResults} \end{figure*} \subsection{Generating Segments} \label{subsec:generatingSegments} Let $v_{gt}$ be a ground truth action segment obtained by cropping an untrimmed video from time $s_{gt}$ to time $e_{gt}$, which denote the annotated ground truth start and end times. We vary both $s_{gt}$ and $e_{gt}$ in order to generate new action segments with different temporal bounds. More precisely, let ${s_{gen}^{0} = s_{gt} - \Delta}$ and let $s_{gen}^{n} = s_{gt} + \Delta$. The set containing candidate start times is defined as: \begin{equation*} \mathcal{S} = \{s_{gen}^{0}, s_{gen}^{0} + \delta, s_{gen}^{0} + 2\delta, ..., s_{gen}^{0} + (n-1) \delta, s_{gen}^{n}\} \end{equation*} Analogously, let $e_{gen}^{0} = e_{gt} - \Delta$ and let $e_{gen}^{n} = e_{gt}~+~\Delta$; the set containing candidate end times is defined as: \begin{equation*} \mathcal{E} = \{e_{gen}^{0}, e_{gen}^{0} + \delta, e_{gen}^{0} + 2\delta, ..., e_{gen}^{0} + (n-1) \delta, e_{gen}^{n}\} \end{equation*} To accumulate the set of generated action segments, we take all possible combinations of $s_{gen} \in \mathcal{S}$ and $e_{gen} \in \mathcal{E}$ and keep only those for which the Intersection-over-Union between $[s_{gt}, e_{gt}]$ and $[s_{gen}, e_{gen}]$ is at least $0.5$. In all our experiments, we set $\Delta = 2$ and $\delta = 0.5$ seconds. \subsection{Comparative Evaluation} \label{subsec:results} Table \ref{table:datasetsInfo} reports the number of ground truth and generated segments for BEOID, GTEA Gaze+ and CMU. Figure \ref{fig:datasetLengthDistribution} illustrates the segments' length distribution for the three datasets, showing considerable differences: BEOID and GTEA Gaze+ contain mostly short segments (1-2.5 seconds), although the latter also includes videos up to 40 seconds long.
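The segment-generation procedure of Section~\ref{subsec:generatingSegments} transcribes directly into code; the sketch below is our own, not the authors' implementation:

```python
import numpy as np


def generate_segments(s_gt, e_gt, Delta=2.0, delta=0.5, min_iou=0.5):
    """Enumerate perturbed (start, end) pairs around a ground truth segment
    [s_gt, e_gt] (seconds), keeping those whose IoU with the ground truth
    is at least min_iou. Delta and delta follow the paper's settings."""
    starts = np.arange(s_gt - Delta, s_gt + Delta + 1e-9, delta)
    ends = np.arange(e_gt - Delta, e_gt + Delta + 1e-9, delta)
    kept = []
    for s in starts:
        for e in ends:
            if e <= s:  # discard empty or inverted segments
                continue
            inter = max(0.0, min(e, e_gt) - max(s, s_gt))
            union = (e - s) + (e_gt - s_gt) - inter
            if union > 0 and inter / union >= min_iou:
                kept.append((float(s), float(e)))
    return kept
```

For a 4-second ground truth segment, the widest surviving perturbation stretches 2 seconds on each side (IoU exactly 0.5), while heavily shifted short candidates are discarded.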
CMU has longer segments, with the majority ranging from {5 to 15 seconds.} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/gteaResultsOriginal.pdf} \caption{GTEA Gaze+: classification accuracy vs IoU, start/end shifts and length difference.} \label{fig:gteaResults} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=0.9\textwidth]{figures/classAccuracyComparisonGteaOriginal3.pdf} \caption{Accuracy per class differences. Most classes exhibit a drop in accuracy when testing generated segments.} \label{fig:classAccuracyComparison} \end{figure*} \noindent{\textbf{BEOID~\cite{Damen2014a}} \hspace{4pt}} is the evaluated dataset with the most consistent and the tightest temporal bounds. When testing the ground truth segments using both IDT FV and 2SCNN, we observe high accuracy for ground truth segments - respectively 85.3\% and 93.5\% - as shown in Table~\ref{table:accuracyResultsAll}. When classifying the generated segments, we observe a drop in accuracy of 9.9\% and 9.7\% respectively. Figure \ref{fig:beoidResults} shows detailed results where accuracy is reported vs IoU, start/end shifts and length difference between ground truth and generated segments. We particularly show the results for shifts in the start and the end times independently. A \textit{negative start shift} implies that a generated segment begins before the corresponding ground truth segment, and a \textit{negative end shift} implies that a generated segment finishes before the corresponding ground truth segment. These terms are used consistently throughout this section. Results show that: (i) as IoU decreases the accuracy drops consistently for IDT FV and 2SCNN - which questions both approaches' robustness to temporal bounds alterations; (ii) IDT FV exhibits lower accuracy with both negative and positive {start/end} shifts; (iii)~IDT FV similarly exhibits lower accuracy with negative and positive length differences. 
This is justified as BEOID segments are {tight; by} expanding a segment we include new potentially noisy or irrelevant frames that confuse the classifiers; (iv) 2SCNN is more robust to length difference which is understandable as it randomly samples a maximum of 20 frames regardless of the length. While these are somehow expected, we also note that (v)~2SCNN is robust to \textit{positive start shifts}. \begin{table}[t] \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{l|c|c||c|c} \hline Dataset & $\textcolor{idtColor}{\text{IDT FV}}_{gt}$ & $\textcolor{idtColor}{\text{IDT FV}}_{gen}$ & $\textcolor{cnnColor}{\text{2SCNN}}_{gt}$ & $\textcolor{cnnColor}{\text{2SCNN}}_{gen}$ \\ \hline BEOID & 85.3 & 75.4 & 93.5 & 83.8 \\ CMU & 54.9 & 52.8 & 76.0 & 71.7 \\ GTEA Gaze+ & 45.4 & 43.3 & 61.2 & 59.6 \\ \hline \end{tabular}} \caption{Classification accuracy for ground truth and generated segments for BEOID, CMU and GTEA~Gaze+.} \label{table:accuracyResultsAll} \end{table} \noindent{\textbf{CMU~\cite{de2008guide}} \hspace{4pt}} is the dataset with longer ground truth segments. Table \ref{table:accuracyResultsAll} compares results obtained for CMU's ground truth and generated segments. For this dataset, IDT FV accuracy drops by 2.1\% for the generated segments, whereas 2SCNN drops by 4.3\%. In Figure \ref{fig:cmuResults}, CMU consistently shows low robustness for both IDT FV and 2SCNN. As IoU changes from $> 0.9$ to $> 0.5$, we observe a drop of more than 20\% in accuracy for both. However, due to the long average length of segments in CMU, the effect of shifts in start end times as well as length differences is not visible for IDT FV. Interestingly for 2SCNN, the accuracy slightly improves with \textit{positive start shift}, \textit{negative end shift} and \textit{negative length difference}. This suggests that CMU's ground truth bounds are somewhat loose and that tighter segments are likely to contain more discriminative frames. 
\noindent{\textbf{GTEA Gaze+~\cite{Fathi2012}} \hspace{4pt}} is the dataset with the most inconsistent bounds, based on our observations. Table~\ref{table:accuracyResultsAll} shows that accuracy for IDT FV drops by 2.1\%, while overall accuracy for 2SCNN drops marginally (1.6\%). This should not be mistaken for robustness, as is evident when studying the results in Figure~\ref{fig:gteaResults}. For all variations (i.e. start/end shifts and length differences), the generated segments achieve higher accuracy for both IDT FV and 2SCNN. When labels are inconsistent, shifting temporal bounds does not systematically alter the visual representation of the tested segments. The generated segments tend to include (or exclude) frames that increase the similarity between the testing and training segments. Figure \ref{fig:classAccuracyComparison} reports per-class differences between generated and ground truth segments. Positive values entail that the accuracy for the given class is higher when testing the generated segments, and vice versa. Horizontal lines indicate the average accuracy difference. In total, 63\% of classes in all three datasets exhibit a drop in accuracy when using IDT FV, compared to 80\% when using 2SCNN. \begin{table}[h] \centering \resizebox{1.0\columnwidth}{!}{ \begin{tabular}{l|c|c||c|c} \hline Dataset & $\textcolor{cnnColor}{\text{2SCNN}}_{gt}$ & $\textcolor{cnnColor}{\text{2SCNN}}_{gen}$ & $\textcolor{cnnColor}{\text{2SCNN}}_{gt}^{\textcolor{augColor}{aug}}$ & $\textcolor{cnnColor}{\text{2SCNN}}_{gen}^{\textcolor{augColor}{aug}}$\\ \hline BEOID & \textbf{93.5} & 83.8 & 92.3 & 86.6 \\ GTEA Gaze+ & \textbf{61.2} & 59.6 & 57.9 & 58.1 \\ \hline \end{tabular}} \caption{\textcolor[rgb]{0,0,0}{2SCNN data augmentation results.}} \label{table:dataAugmentation} \end{table} \noindent{\textbf{Data augmentation:} \hspace{4pt}} {For completeness, we evaluate the performance when using temporal data augmentation methods on two datasets.
Generated segments in Section~\ref{subsec:generatingSegments} are used to augment training. We double the size of the training sets, taking random samples for augmentation. Test sets remain unchanged. Results are reported in Table \ref{table:dataAugmentation}. While we observe an increase in robustness, we also notice a drop in accuracy for ground truth segments, of 1\% and 4\% for BEOID and GTEA Gaze+, respectively.} In conclusion, we note that both IDT FV and 2SCNN are sensitive to changes in temporal bounds for both consistent and inconsistent annotations. Approaches that improve robustness using data augmentation could be attempted; however, a broader look at how the methods could be made inherently more robust is needed, particularly for CNN architectures. \vspace*{-6pt} \section{{\hspace*{-1pt}Labeling} Proposal: The Rubicon Boundaries} \label{sec:rubiconBoundaries} \vspace*{-4pt} The problem of defining consistent temporal bounds of an action is most akin to the problem of defining consistent bounding boxes of an object. Attempts to define guidelines for annotating objects' bounding boxes started nearly a decade ago. Among others, the VOC Challenge 2007~\cite{everingham2010pascal} proposed what has become the standard for defining the bounding box of an object in images. These consistent labels have been used to train state-of-the-art object detection and classification methods. In the same spirit, in this section we propose an approach to consistently segment the temporal scope of an object interaction. \noindent \textbf{Defining RB:} The Rubicon Model of Action Phases \cite{gollwitzer1990action}, developed in the field of Psychology, posits an action as a goal a subject desires to achieve and identifies the main sub-phases the person goes through in order to complete the action. First, a person decides what goal he wants to achieve.
After forming his intention, he enters the so-called \textit{pre-actional} phase, that is, the phase in which he plans how to perform the action. Following this stage, the subject acts towards goal achievement in the \textit{actional phase}. The two phases are delimited by three transition points: the initiation of prior motion, the start of the action and the goal achievement. The model is named after the historical fact of Caesar crossing the Rubicon river, which became a metaphor for deliberately proceeding past a point of no return; in our case, this is the transition point that signals the beginning of an action. We take inspiration from this model, specifically from the aforementioned transition points, and define two phases for an object interaction: \begin{figure}[!t] \centering \includegraphics[width=1.0\columnwidth]{figures/rubiconBoundaries3b23_reduced.png} \caption{Rubicon Boundaries labeling examples for three object interactions.} \label{fig:rubiconBoundariesExample} \end{figure} \vspace*{2pt} \noindent\textbf{\textit{Pre-actional phase}} This sub-segment contains the preliminary motion that directly precedes the goal, and is required for its completion. When multiple motions can be identified, the pre-actional phase should contain only the last one;\\ \textbf{\textit{Actional phase}} This is the main sub-segment containing the motion strictly related to the fulfillment of the goal. The actional phase starts immediately after the pre-actional phase. In the following section, we refer to a label as an RB annotation when the \textit{beginning} of an object interaction aligns with the \textit{start} of the pre-actional phase and the \textit{ending} of the interaction aligns with the \textit{end} of the actional phase. Figure \ref{fig:rubiconBoundariesExample} depicts three object interactions labeled according to the Rubicon Boundaries. The top sequence illustrates the action of cutting a pepper.
The sequence shows the subject fetching the knife before cutting the pepper and taking it off the plate. Based on the aforementioned definitions, the pre-actional phase is limited to the motion of moving the knife towards the pepper in order to slice it. This is directly followed by the actional phase, in which the user cuts the pepper. The actional phase ends as the goal of `cutting' is completed. The middle sequence illustrates the action of opening a fridge, showing a person approaching the fridge and reaching towards the handle before pulling the fridge door open. In this case, the pre-actional phase would be the reaching motion, while the actional phase would be the pulling motion. \begin{figure}[!t] \centering \includegraphics[width=1.0\columnwidth]{figures/preActIoU2.pdf} \vspace*{-6pt} \caption{IoU comparison among the pre-actional phase (green), the actional phase (yellow) and their concatenation (blue) for several object interactions of BEOID.} \label{fig:preActIoU} \end{figure} \noindent \textbf{Evaluating RB:} We evaluate our RB proposal for consistency, intuitiveness, as well as accuracy and robustness. \noindent \textbf{(i) Consistency:} We already reported consistency results in Section \ref{subsec:multiple}, where RB annotations exhibit higher average overlap and less variation for all the evaluated object interactions: the average IoU over all pairs of annotators increased from 0.63 for conventional boundaries to 0.83 for RB. Figure \ref{fig:preActIoU} illustrates per-class IoU box plots for the pre-actional and the actional phases separately, along with the concatenation of the two. For 7 out of the 13 actions, the actional phase was more consistent than the pre-actional phase, and for 12 out of the 13 actions, the concatenation of the phases showed the highest consistency. \noindent \textbf{(ii) Intuitiveness:} While RB showed higher consistency in labeling, any new approach for temporal boundaries would require a shift in practice.
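The consistency measure above (mean IoU over all annotator pairs) and the concatenation of the two RB phases can be sketched as follows; a minimal illustration with hypothetical helper names and example bounds:

```python
from itertools import combinations

def rb_segment(pre_actional, actional):
    """An RB segment runs from the start of the pre-actional phase to the
    end of the actional phase (each phase is a (start, end) tuple)."""
    return (pre_actional[0], actional[1])

def temporal_iou(a, b):
    """IoU of two temporal segments given as (start, end)."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def mean_pairwise_iou(annotations):
    """Consistency of one interaction: mean IoU over all annotator pairs."""
    pairs = list(combinations(annotations, 2))
    return sum(temporal_iou(a, b) for a, b in pairs) / len(pairs)
```

The same measure applies to the pre-actional phase, the actional phase, or their concatenation, yielding the three box plots compared per class.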
We collect RB annotations from university students as well as from MTurk annotators. Locally, students successfully used the RB definitions to annotate videos with no assistance. However, this was not the case for MTurk annotators for the two object interactions `wash cup' and `scan card'. The MTurk HIT provided the formal definition of the \textit{pre-actional} and \textit{actional} phases, then posed two multiple-choice control questions to assess the ability of annotators to distinguish these phases from a video. The annotators had to select from textual descriptions what the pre-actional and the actional phases entailed. For both object interactions, only a fourth of the annotators answered the control questions correctly. Three possible explanations could be given: annotators were accustomed to the conventional labeling method, did not spend sufficient time studying the definitions, or found the definitions difficult to understand. Further experimentation is needed to understand the cause. \begin{figure}[!t] \centering \includegraphics[width=1.0\columnwidth]{figures/gteaRBvsOriginalCombined.pdf} \caption{GTEA Gaze+: class accuracy difference between conventional and RB annotations. \textcolor[rgb]{0,0,0}{Some classes achieved higher accuracy only with RB\textsubscript{act}, while others did only with the full RB segment. Bold highlights such cases.} } \label{fig:gteaRBClassAccuracyComparison} \end{figure} \noindent \textbf{(iii) Accuracy:} We annotated GTEA Gaze+ using the Rubicon Boundaries, employing three people to label its 1141 segments\footnote{RB labels and video of results are available on the project webpage: \url{http://www.cs.bris.ac.uk/~damen/Trespass/}}. For these experiments, we asked annotators to label both the pre-actional and the actional phase.
In Table \ref{table:gteaRBResults2}, we report results for the actional phase alone (RB\textsubscript{act}) as well as the concatenation of the two phases (RB), using 2SCNN on the same 5 folds from Section~\ref{subsec:results}. The concatenated RB segments proved the most accurate, leading to an increase of more than 4\% in accuracy compared to conventional ground truth segments. Temporal augmentation on conventional labels ($\textcolor{originalColor}{\text{Conv}}^{\textcolor{augColor}{aug}}_{gt}$) results in a drop in accuracy of 7.7\% compared with the RB segments, highlighting that consistent labeling cannot be substituted with data augmentation. Figure \ref{fig:gteaRBClassAccuracyComparison} shows the per-class accuracy difference between the two sets of RB annotations and the conventional labels. When using RB\textsubscript{act}, 21/42 classes improved, whereas accuracy dropped for 11 classes compared to the conventional annotations. When using the full RB segment, 23/42 classes improved, while 10 classes were better recognized with the conventional annotations. The remaining 10 and 9 classes, respectively, were unchanged. Given that the experimental setup was identical to that used for the conventional annotations, the boost in accuracy can be ascribed solely to the new action boundaries. Indeed, the RB approach helped the annotators to more consistently segment the object interactions contained in GTEA Gaze+, which is one of the most challenging datasets for egocentric action recognition. \noindent \textbf{(iv) Robustness:} Table \ref{table:gteaRBResults2} also compares the newly annotated RB segments to generated segments with varied start and end times, as explained in Section~\ref{subsec:generatingSegments}. While RB$_{gen}$ shows higher accuracy than the Conventional$_{gen}$ segments (59.6\% as reported in Table~\ref{table:accuracyResultsAll}), we still observe a clear drop in accuracy between $gt$ and $gen$ segments.
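The per-class accuracy differences reported above can be computed as in the following sketch (hypothetical helper names; the labels used below are toy examples, not dataset classes):

```python
from collections import defaultdict

def per_class_accuracy(predictions, labels):
    """Accuracy per class from parallel lists of predicted and true labels."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, true in zip(predictions, labels):
        total[true] += 1
        correct[true] += pred == true
    return {c: correct[c] / total[c] for c in total}

def class_differences(preds_rb, preds_conv, labels):
    """Per-class accuracy change; positive means the class is recognized
    better with RB labels than with the conventional ones."""
    rb = per_class_accuracy(preds_rb, labels)
    conv = per_class_accuracy(preds_conv, labels)
    return {c: rb[c] - conv[c] for c in rb}
```

Counting the positive, negative, and zero entries of the returned dictionary gives the improved/dropped/unchanged class tallies.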
Interestingly, we observe improved robustness when using the actional phase alone. Given that the actional segment's start is closer in time to the beginning of the object interaction, when varying the start of the segment we are effectively including part of the pre-actional phase in the generated segment, which assists in making actions more discriminative. Importantly, we show that RB annotations improved both consistency and accuracy of annotations on the largest dataset of egocentric object interactions. \textcolor[rgb]{0,0,0}{We believe these} form a solid basis for further discussions and experimentation on consistent labeling of temporal boundaries. \vspace*{-6pt} \section{Conclusion and Future Directions} \label{sec:conclusion} \vspace*{-4pt} \begin{table}[t] \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}} \hline $\textcolor{originalColor}{\text{Conv}}_{gt}$ & $\textcolor{originalColor}{\text{Conv}}^{\textcolor{augColor}{aug}}_{gt}$ & $\textcolor{cnnColor}{\stackrel{\text{act}}{\text{RB}}}_{gt}$ & $\textcolor{cnnColor}{\stackrel{\text{act}}{\text{RB}}}_{gen}$ & $\textcolor{rbColor}{\text{RB}}_{gt}$ & $\textcolor{rbColor}{\text{RB}}_{gen}$ \\ \hline 61.2 & 57.9 & 64.9 & 63.2 & \textbf{65.6} & 61.7 \\ \hline \end{tabular}} \caption{{GTEA Gaze+: 2SCNN classification accuracy comparison for conventional annotations (ground truth and augmented) and RB labels (ground truth and generated).}} \label{table:gteaRBResults2} \end{table} Annotating temporal bounds for object interactions is the basis for supervised action recognition algorithms. In this work, we uncovered inconsistencies in temporal bound annotations within and across three egocentric datasets. We assessed the robustness of both hand-crafted features and fine-tuned end-to-end recognition methods, and demonstrated that both IDT FV and 2SCNN are susceptible to variations in start and end times.
We then proposed an approach to consistently label temporal bounds for object interactions. We foresee three potential future directions: \noindent{\textbf{Other NN architectures?} \hspace{2pt}} \textcolor[rgb]{0,0,0}{W}hile 2SCNN randomly samples frames from a video segment, its classification accuracy is still sensitive to variations in temporal bounds. Other architectures, particularly those that model temporal progression using recurrent networks (including LSTMs), rely on labeled training samples and would thus equally benefit from consistent labeling. Evaluating the robustness of such networks is an interesting future direction. \noindent{\textbf{How can robustness to temporal boundaries be achieved?} \hspace{2pt} Developing classification methods that are inherently robust to temporal boundaries, while still learning from supervised annotations, is a topic for future work. As deep architectures reportedly outperform hand-crafted features and other classifiers, architectures that are designed to handle variations in start and end times are desired.} \noindent{\textbf{Which temporal granularity?} \hspace{2pt}} \textcolor[rgb]{0,0,0}{T}he Rubicon Boundaries address consistent labeling of temporal bounds, \textcolor[rgb]{0,0,0}{but} they do not address the concern of granularity of the action. Is the action of cutting a whole tomato composed of several short cuts, or is it one long action? The Rubicon Boundaries model discusses actions relative to the goal a person wishes to accomplish. The granularity of an object interaction is another matter, and annotating the level of granularity consistently has not been addressed yet. Expanding the Rubicon Boundaries to enable annotating the granularity would require further investigation. \noindent \textbf{Data Statement \& Ack:} \hspace{4pt} {Public datasets were used in this work; no new data were created as part of this study. RB annotations are available on the project's webpage.
Supported by EPSRC DTP and EPSRC LOCATE (EP/N033779/1).} {\small \bibliographystyle{ieee}